# nf-core/configs: Cannon Configuration
All nf-core pipelines have been successfully configured for use on the Cannon cluster at Harvard FAS.

To use, run the pipeline with `-profile cannon`. This will download and launch the `cannon.config` file, which has been pre-configured with a setup suitable for the Cannon cluster. Using this profile, a Docker image containing all of the required software will be downloaded and converted to a Singularity image before execution of the pipeline.
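For example, to launch the nf-core/rnaseq pipeline with this profile (any other nf-core pipeline can be substituted):

```bash
nextflow run nf-core/rnaseq -profile cannon
```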
Below is additional, non-mandatory information, e.g. on which environment modules to load.
Before running the pipeline, you will need to load Java and Python using the environment module system on Cannon. You can do this by issuing the commands below:

```bash
## Load the Java and Python environment modules
module purge
module load jdk
module load python
```
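To verify that the modules have been loaded, you can, for example, run:

```bash
module list     # should list the jdk and python modules
java -version   # Nextflow requires a working Java installation
```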
You will need an account on the Cannon cluster in order to run the pipeline. If in doubt contact FASRC.
Nextflow will need to submit the jobs via the Slurm job scheduler to the HPC cluster, and as such the commands above will have to be executed on one of the login nodes or on an interactive node. As best practice, submit the Nextflow head job as an sbatch script, for example as below. If in doubt contact FASRC.

```bash
#!/bin/bash
#SBATCH -c 1              # Number of cores (-c)
#SBATCH -t 0-02:00        # Runtime in D-HH:MM, minimum of 10 minutes
#SBATCH -p shared         # Partition to submit to
#SBATCH --mem=8G          # Memory pool for all cores (see also --mem-per-cpu)
#SBATCH -o nf_job_%j.out  # File to which STDOUT will be written, including job ID

# Load the required modules
module load jdk
module load python

# Run the Nextflow head job
nextflow run nf-core/rnaseq -profile cannon
```
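Assuming the script above is saved as, say, `nextflow_head.sh` (the file name is only an example), it can then be submitted from a login node with:

```bash
sbatch nextflow_head.sh
```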
## Config file
```groovy
params {
    config_profile_description = 'Harvard FAS Cannon Profile for running nextflow pipelines.'
    config_profile_contact = 'Lei Ma (@microlei)'
    config_profile_url = 'https://www.rc.fas.harvard.edu/'
    max_memory = 2000.GB
    max_cpus = 112
    max_time = 14.d
}

singularity {
    enabled = true
    autoMounts = true
}

// Settings for the Slurm executor
executor {
    queueSize = 2000
    submitRateLimit = '10/sec'
}

process {
    executor = 'slurm'
    resourceLimits = [
        memory: 2000.GB,
        cpus: 112,
        time: 14.d
    ]
    scratch = true
    // Select a Slurm partition based on each task's requested memory and time
    queue = {
        switch (true) {
            case { task.memory >= 1000.GB && task.time >= 3.d }:
                return 'bigmem_intermediate'
            case { task.memory >= 1000.GB }:
                return 'bigmem'
            case { task.memory >= 184.GB && task.time >= 3.d }:
                return 'intermediate'
            case { task.memory >= 184.GB }:
                return 'sapphire'
            case { task.time >= 3.d }:
                return 'intermediate'
            default:
                return 'shared'
        }
    }
}
```
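The `queue` closure above routes each task to a Cannon Slurm partition according to the memory and walltime it requests. If you want to inspect the limits of those partitions on the cluster itself (the partition names below are simply the ones referenced in the config), you can query Slurm directly, for example:

```bash
# Show time, CPU and memory limits for the partitions used by this profile
sinfo -p shared,sapphire,intermediate,bigmem,bigmem_intermediate -o "%P %l %c %m"
```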