Tufts HPC Configuration

nf-core pipelines have been configured for use on the Tufts HPC clusters operated by Research Technology at Tufts University.

To use the Tufts profile, run the pipeline with -profile tufts.

Example: nextflow run <pipeline> -profile tufts

Users can also put the nextflow ... command into a batch script and submit the job to compute nodes with sbatch, or launch interactive jobs on compute nodes with srun. In this case, both the Nextflow manager process and the pipeline tasks will run on the allocated compute nodes using the local executor. It is recommended to use -profile singularity.

Example: nextflow run <pipeline> -profile singularity
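A minimal sketch of such a batch script (the partition, resource requests, and pipeline name are placeholders; adjust them for your own run):

#!/bin/bash
#SBATCH -p batch            ## example partition; choose one available on the cluster
#SBATCH -N 1
#SBATCH -n 8                ## example CPU request
#SBATCH --mem=32G           ## example memory request
#SBATCH --time=24:00:00     ## example walltime

module purge
module load nextflow singularity

nextflow run <pipeline> -profile singularity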

By default, jobs are submitted to the batch partition. A different partition can be specified by adding --partition <PARTITION NAME> to the run command.
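Example (substitute a partition available on the cluster): nextflow run <pipeline> -profile tufts --partition <PARTITION NAME>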

Environment module

Before running the pipeline, load the Nextflow and Singularity modules:

module purge ## Optional but recommended
module load nextflow singularity

Config file

See config file on GitHub

tufts.config
// Profile config names for nf-core/configs
params {
        config_profile_description = 'The Tufts University HPC cluster profile provided by nf-core/configs.'
        config_profile_contact = 'Yucheng Zhang'
        config_profile_contact_github = '@zhan4429'
        config_profile_contact_email = 'Yucheng.Zhang@tufts.edu'
        config_profile_url = 'https://it.tufts.edu/high-performance-computing'
}
 
params {
        max_memory = 120.GB
        max_cpus = 72
        max_time = 168.h
        partition = 'batch'
        igenomes_base = '/cluster/tufts/biocontainers/datasets/igenomes/'
}
 
process {
        executor = 'slurm'
        clusterOptions = "-N 1 -n 1 -p $params.partition"
}
 
executor {
        queueSize = 16
        pollInterval = '1 min'
        queueStatInterval = '5 min'
        submitRateLimit = '10 sec'
}
 
// Set $NXF_SINGULARITY_CACHEDIR in your ~/.bashrc
// to stop downloading the same image for every run
singularity {
        enabled = true
        autoMounts = true
}
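
As the comment above the singularity scope suggests, setting $NXF_SINGULARITY_CACHEDIR lets container images be downloaded once and reused across runs. One way to do this (the cache path here is only an example; pick a directory you own with enough space) is to add a line like the following to your ~/.bashrc:

export NXF_SINGULARITY_CACHEDIR=$HOME/nxf_singularity_cache   ## example path; adjust as needed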