WUSTL High Throughput Computing Facility Configuration

Forked from the prince configuration.

nf-core pipelines that use this repo

All nf-core pipelines that use this config repo (which is most of them) can be run on the HTCF. Before running a pipeline for the first time, start an interactive slurm session on a compute node (srun --pty --time=02:00:00 -c 2 bash), as the docker images for the pipeline will need to be pulled and converted.
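For example, a first-run session could be started like this (the two-hour, two-core request matches the suggestion above; adjust to your needs):

# request a 2-hour, 2-core interactive session on a compute node
srun --pty --time=02:00:00 -c 2 bash
# you are now on a compute node; launch nextflow from here the first time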

Now, run the pipeline of your choice with -profile wustl_htcf. This will download and launch wustl_htcf.config, which has been pre-configured with a setup suitable for the HTCF cluster. Using this profile, a docker image containing all of the required software will be downloaded and converted to a singularity image before the pipeline executes. This step takes time! An example command line:

nextflow run nf-core/<pipeline name> -profile wustl_htcf <additional flags>
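As a concrete illustration, a run might look like the following (the choice of pipeline and the input and output paths are illustrative placeholders, not part of the HTCF setup):

nextflow run nf-core/rnaseq -profile wustl_htcf --input samplesheet.csv --outdir results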

nf-core pipelines that do not use this repo

If the pipeline has not yet been configured to use the repository, then you will have to do it manually. Add the following lines to the end of the pipeline's nextflow.config:

// Load nf-core custom profiles from different institutions
includeConfig "https://raw.githubusercontent.com/nf-core/configs/master/nfcore_custom.config"
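If you prefer to do this from the shell, something like the following appends the line for you (it assumes you run it from the root directory of a local clone of the pipeline):

echo 'includeConfig "https://raw.githubusercontent.com/nf-core/configs/master/nfcore_custom.config"' >> nextflow.config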

Config file

See config file on GitHub

wustl_htcf.config
// Forked from https://github.com/nf-core/configs/blob/master/conf/prince.config
 
def labEnvVar = System.getenv("LAB")
 
if (labEnvVar) {
    System.out.println("Lab: " + labEnvVar)
    singularityDir = "/ref/$LAB/data/singularity_images_nextflow" // If $LAB is set, use that
} else {
    def id = "id -nG".execute().text
    def labAutodetect = id.split(" ").last()
    System.out.println("Lab: " + labAutodetect)
    singularityDir = "/ref/" + labAutodetect + "/data/singularity_images_nextflow"
}
 
params {
    config_profile_description  = """
    WUSTL High Throughput Computing Facility cluster profile provided by nf-core/configs.
    Run from your scratch directory; the output files may be large!
    Please consider running the pipeline on a compute node the first time, as pulling the docker image and converting it to a singularity image is heavy work for the login node. Subsequent runs can be done on the login node, as the image only needs to be pulled and converted once. By default, the images will be stored in $singularityDir
    """.stripIndent()
    config_profile_contact      = "Gavin John <gavinjohn@wustl.edu>"
    config_profile_url          = "https://github.com/nf-core/configs/blob/master/docs/wustl_htcf.md"
 
    max_cpus   = 24
    max_memory = 750.GB
    max_time   = 168.h
}
 
spack {
    enabled = true
}
 
singularity {
    enabled  = true
    cacheDir = singularityDir
}
 
process {
    beforeScript = "exec \$( spack load --sh singularity )"
    executor     = "slurm"
}