Longleaf High-Performance Computing Cluster, University of North Carolina at Chapel Hill

NB: You will need an account to use the HPC cluster to run the pipeline.

Before running the pipeline you will need to load Nextflow and Apptainer. You can do this by including the commands below in your SLURM/sbatch script:

## Load Nextflow and Apptainer environment modules
module load nextflow/23.04.2;
module load apptainer;   # module name/version may differ; check `module avail apptainer` on Longleaf

All of the intermediate files required to run the pipeline will be stored in the work/ directory, which is created inside the directory from which you launch the nf-core pipeline. It is recommended to delete this directory after the pipeline has finished successfully, because it can become quite large and all of the main output files are saved in the results/ directory anyway. You can also specify the working directory explicitly using the Nextflow -w (-work-dir) option.
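For example, to redirect the working directory to scratch storage, you could pass -work-dir at launch; the path below is purely a placeholder, so substitute a location you have write access to:

nextflow run nf-core/rnaseq -profile unc_longleaf -work-dir /path/to/your/scratch/work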

This configuration will automatically submit jobs to the general SLURM partition, from which they may be moved to other partitions automatically depending on the time requested by each process.

nextflow run nf-core/rnaseq -profile unc_longleaf

NB: Nextflow needs to submit jobs to the HPC cluster via SLURM, so the commands above must be issued from one of the login nodes.
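A minimal sketch of an sbatch wrapper script for the Nextflow head job is shown below; the job name, walltime, and memory values are illustrative assumptions only, so adjust them to your project and to Longleaf's current limits:

#!/bin/bash
#SBATCH --job-name=nf-core_rnaseq   # illustrative job name (assumption)
#SBATCH --time=24:00:00             # walltime for the Nextflow head job only (assumption)
#SBATCH --mem=8G                    # memory for the head job only (assumption)
#SBATCH --cpus-per-task=1

## Load Nextflow and Apptainer environment modules
module load nextflow/23.04.2;
module load apptainer;   # module name/version may differ on your system

## Launch the pipeline; Nextflow submits the individual process jobs to SLURM itself
nextflow run nf-core/rnaseq -profile unc_longleaf

Submit the wrapper from a login node, e.g. with sbatch run_rnaseq.sh (script name is hypothetical).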

Config file

See config file on GitHub

unc_longleaf.config
params {
    config_profile_description = "BARC nf-core profile for UNC's Longleaf HPC."
    config_profile_contact = 'Austin Hepperla'
    config_profile_contact_github = '@ahepperla'
    config_profile_contact_email = 'hepperla@unc.edu'
    config_profile_url = "https://help.rc.unc.edu/longleaf-cluster/"
}
 
singularity {
    enabled = true
    autoMounts = true
    cacheDir = "/work/appscr/singularity/nf-core/singularity_images_cache"
    registry = 'quay.io'
}
 
process {
    executor = 'slurm'
    queue = 'general'
}
 
executor {
    queueGlobalStatus = true
}
 
params {
    max_memory = 3041.GB
    max_cpus = 256
    max_time = 240.h
}
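The max_memory, max_cpus, and max_time values above set an upper bound on what any single pipeline process may request. nf-core pipelines also allow these caps to be lowered on the command line, for example to stay within a smaller partition; the values below are illustrative only:

nextflow run nf-core/rnaseq -profile unc_longleaf --max_memory '128.GB' --max_cpus 32 --max_time '72.h'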