WEHI Milton HPC Configuration

nf-core pipelines have been successfully configured for use on the Milton HPC cluster at WEHI.

To use the WEHI profile, run the pipeline with -profile wehi. This will download and apply wehi.config, which has been pre-configured for the WEHI HPC cluster "Milton". Under this profile, all Nextflow processes run inside Singularity containers, which are downloaded and converted from Docker images as required.
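For example, a launch command might look like the following (nf-core/rnaseq and the input/output paths are placeholders for your own pipeline and data):

nextflow run nf-core/rnaseq -profile wehi --input samplesheet.csv --outdir results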

Note: the WEHI profile targets the 'regular' SLURM partition. If you require resources beyond its limits (e.g. more memory, longer walltime, or GPUs), you will need to provide a custom config specifying an appropriate SLURM partition (e.g. 'bigmem', 'long' or 'gpuq').
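As a minimal sketch, a custom config that sends all processes to the 'bigmem' partition could look like this (the file name is illustrative; pass it to Nextflow with -c alongside the profile, e.g. nextflow run <pipeline> -profile wehi -c custom.config):

// custom.config (illustrative)
process {
    // the 'queue' directive maps to the SLURM partition (#SBATCH -p)
    queue = 'bigmem'
}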

A Nextflow module is available on the Milton HPC cluster; to use it, run module load nextflow or module load nextflow/<version> before launching your pipeline. Loading this module requires a "VAST" scratch directory, which is used as the default work directory for Nextflow pipelines. Please contact WEHI Research Computing for assistance with setting up a VAST scratch directory.
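Putting this together, a session on Milton might look like the following (the scratch path is illustrative; the work directory defaults to your VAST scratch space, so -work-dir is only needed to override it):

module load nextflow
nextflow run <pipeline> -profile wehi -work-dir /vast/scratch/users/$USER/work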

Config file

See the full config file on GitHub

wehi.config
params {
    config_profile_description = 'Walter and Eliza Hall Institute (WEHI) Milton HPC cluster profile'
    config_profile_contact     = 'Jacob Munro (munro.j@wehi.edu.au)'
    config_profile_url         = "https://www.wehi.edu.au/"
    max_memory = 1.3.TB
    max_cpus   = 128
    max_time   = 48.h
}
process {
    executor = 'slurm'
    cache    = 'lenient'
}
executor {
    queueSize         = 100
    queueStatInterval = '10 sec'
    pollInterval      = '10 sec'
    submitRateLimit   = '10 sec'
}
singularity {
    enabled    = true
    autoMounts = true
    runOptions = '-B /vast -B /stornext -B /wehisan'
}
cleanup = true
profiles {
    debug {
        cleanup = false
    }
}
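Note that the bundled debug profile disables the automatic cleanup of the work directory; combine it with the main profile (e.g. -profile wehi,debug) when you need to inspect intermediate files after a run.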