McCleary Configuration

All nf-core pipelines have been successfully configured for use on the Yale University McCleary cluster. To use, run the pipeline with `-profile mccleary`.

NB: You will need an account on the McCleary HPC cluster in order to run the pipeline. If in doubt, contact Yale IT. To use nf-core pipelines on McCleary:

1. Install Nextflow for your user. Move the Nextflow executable to a folder in your `$PATH` variable (e.g. `~/bin`).

   ```bash
   module load Java/17.0.4
   curl -s https://get.nextflow.io | bash
   ```
2. Submit your pipeline run via `sbatch` with the following script. Update `--job-name`, `--time`, and `--partition` as needed for your head job. 2 CPUs and 5 GB of memory are usually sufficient for the Nextflow head job, but you can adjust these as needed.
   ```bash
   #!/bin/bash
   #SBATCH --job-name=nf-core
   #SBATCH --out="slurm-%j.out"
   #SBATCH --time=07-00:00:00
   #SBATCH --cpus-per-task=2
   #SBATCH --mem=5G
   #SBATCH --mail-type=ALL
   #SBATCH --partition=week

   module load Java/17.0.4

   nextflow pull nf-core/<pipeline> -r <release>
   nextflow run nf-core/<pipeline> -r <release> \
       -profile mccleary \
       --outdir "results"
   ```

Pipeline-specific profiles

No pipeline-specific profiles have been added yet.

Config file

See the config file on GitHub.

```groovy
//Profile config names for nf-core/configs
params {
    config_profile_description = 'McCleary Cluster at Yale'
    config_profile_contact = 'Gisela Gabernet'
    config_profile_email = ''
    config_profile_github = '@ggabernet'
    config_profile_url = ''
    max_memory = 983.GB
    max_cpus = 64
}

singularity {
    enabled = true
}

executor {
    name = 'slurm'
    queueSize = 50
}

process {
    queue = { task.time > 24.h ? 'week' : 'day' }
    scratch = true
    executor = 'slurm'
}
```
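Settings from this profile can be overridden in a personal config file passed with `-c` on the command line, which Nextflow loads after the profile. A minimal sketch; the file name and values below are illustrative, not part of the profile:

```groovy
// my_overrides.config -- hypothetical user overrides, used as:
//   nextflow run nf-core/<pipeline> -r <release> -profile mccleary -c my_overrides.config

executor {
    queueSize = 20      // submit at most 20 jobs at a time instead of the profile's 50
}

process {
    scratch = false     // keep task work directories on the shared filesystem
}
```

This is useful, for example, when sharing a Slurm fairshare allocation with colleagues and wanting to throttle how many jobs a single run keeps in the queue.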