Configuration for the MRC LMS Jex cluster

All nf-core pipelines have been configured for use on the MRC LMS Jex cluster and can be run with -profile jex.
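
For example, a pipeline can be launched as follows; the pipeline name, sample sheet, and output directory are placeholders for your own analysis:

nextflow run nf-core/rnaseq \
    -profile jex \
    --input samplesheet.csv \
    --outdir results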

The jex.config file has been customised to suit the cluster's SLURM scheduler and streamlines the use of containerised nf-core workflows. Using this profile, Docker images containing the required software are downloaded and converted to Singularity images prior to execution, and jobs are submitted to the appropriate SLURM partitions and QoS groups. Converted Singularity images are cached in a central area to reduce duplication.

To run a pipeline, the Nextflow and Singularity modules will need to be loaded into your environment:

#
# NOTE:
# We use `module reset` rather than `module purge`, as Jex makes use of various
# default modules that provide a consistent user environment.
#
 
module reset
 
module load nextflow
module load singularityce

Full documentation on how to run Nextflow on Jex can be found on the internal wiki. Remember:

Jex provides a special SLURM partition for running workflow managers, including Nextflow. Manager processes submitted with --partition ctrl --qos qos_ctrl have elevated priorities and wallclock limits of up to 30 days; a sketch of a suitable submission script is given after these notes.

Jex provides various shared genome resources to avoid duplication. These are located within /opt/resources and can be searched and referenced using the asset command.
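
As a sketch, a Nextflow manager job could be submitted to the ctrl partition with a wrapper script along the following lines; the job name, wallclock limit, and pipeline invocation are illustrative and should be adapted to your own run:

#!/bin/bash
#SBATCH --partition ctrl
#SBATCH --qos qos_ctrl
#SBATCH --job-name nf_manager
#SBATCH --time 7-00:00:00

# Load the workflow tooling as described above
module reset
module load nextflow
module load singularityce

# The manager runs here; individual tasks are submitted by Nextflow
# to the nice/cpu/hmem partitions according to jex.config
nextflow run nf-core/rnaseq -profile jex --input samplesheet.csv --outdir results

Saving this as, say, run_nextflow.sh and submitting it with sbatch run_nextflow.sh keeps the long-running manager process off the login nodes while it orchestrates the rest of the workflow.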

Config file

See config file on GitHub

jex.config
params {
    config_profile_name        = 'Jex'
    config_profile_description = 'Nextflow config file for the MRC LMS Jex cluster'
    config_profile_contact     = 'George Young (@A-N-Other)'
    config_profile_url         = 'https://lms.mrc.ac.uk/research-facility/bioinformatics-facility/'
}
 
process {
    resourceLimits = [
        memory: 4000.GB,
        cpus: 16,
        time: 3.d
    ]
    executor       = 'slurm'
    queue          = {
        if (task.time <= 6.h && task.cpus <= 8 && task.memory <= 64.GB) {
            'nice'
        }
        else if (task.memory > 256.GB) {
            'hmem'
        }
        else {
            'cpu'
        }
    }
    clusterOptions = '--qos qos_batch'
}
 
singularity {
    enabled    = true
    autoMounts = true
    cacheDir   = '/opt/resources/apps/singularity/cache'
}
 
params {
    max_memory = 4000.GB
    max_cpus   = 16
    max_time   = 3.d
}
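
If a particular step needs more resources than a pipeline's defaults, a small custom config can be supplied alongside the profile with -c. The file name, process selector, and values below are purely illustrative; requests that exceed the resourceLimits defined above will be capped by Nextflow:

custom.config
process {
    // Hypothetical example: raise the resources for one named process
    withName: 'STAR_ALIGN' {
        cpus   = 12
        memory = 128.GB
        time   = 12.h
    }
}

nextflow run nf-core/rnaseq -profile jex -c custom.config --input samplesheet.csv --outdir results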