BigPurple Configuration

nf-core pipelines that use this repo

All nf-core pipelines that use this config repo (which is most of them) can be run on BigPurple. Before running a pipeline for the first time, start an interactive Slurm session on a compute node (srun --pty --time=02:00:00 -c 2), as the Docker image for the pipeline will need to be pulled and converted to a Singularity image. Once in the interactive session:

module load singularity/3.1
module load squashfs-tools/4.3

Now, run the pipeline of your choice with -profile bigpurple. This will download and launch bigpurple.config, which has been pre-configured for the BigPurple cluster. With this profile, a Docker image containing all of the required software will be downloaded and converted to a Singularity image before the pipeline is executed. An example command line:

nextflow run nf-core/<pipeline name> -profile bigpurple <additional flags>

nf-core pipelines that do not use this repo

If the pipeline has not yet been configured to use this config repo, you will have to apply it manually: git clone this repo, copy bigpurple.config from the conf folder, and invoke the pipeline like this:

nextflow run nf-core/<pipeline name> -c bigpurple.config <additional flags>

NB: You will need an account on the BigPurple HPC cluster in order to run the pipeline. If in doubt, contact MCIT.
NB: You will need to install Nextflow in your home directory - instructions are on the Nextflow website (or ask the writer of this profile). There is no module for Nextflow on the cluster because its development cycle is rapid and it is easy to update it yourself: nextflow self-update

Config file

See config file on GitHub

singularityDir = "/gpfs/scratch/${USER}/singularity_images_nextflow"

params {
    config_profile_description = """
    NYU School of Medicine BigPurple cluster profile provided by nf-core/configs.
    module load both singularity/3.1 and squashfs-tools/4.3 before running the pipeline with this profile!!
    Run from your scratch or lab directory - Nextflow makes a lot of files!!
    Also consider running the pipeline on a compute node (srun --pty /bin/bash -t=01:00:00) the first time, as it will be pulling the docker image, which will be converted into a singularity image, which is heavy on the login node and will take some time. Subsequent runs can be done on the login node, as the docker image will only be pulled and converted once. By default the images will be stored in $singularityDir
    """.stripIndent()
    config_profile_contact = 'Tobias Schraink (@tobsecret)'
    config_profile_url = ''
}

singularity {
    enabled = true
    autoMounts = true
    cacheDir = singularityDir
}

process {
    beforeScript = """
    module load singularity/3.1
    module load squashfs-tools/4.3
    """.stripIndent()
    executor = 'slurm'
}
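Settings in this profile can be overridden by passing an additional -c file on the command line, since later config files take precedence. A hedged sketch of such a custom config, raising resources for one process (MY_HEAVY_TASK is a hypothetical process name, not one from a real pipeline):

```groovy
// custom.config - pass it in addition to the profile:
// nextflow run nf-core/<pipeline name> -profile bigpurple -c custom.config
process {
    // 'MY_HEAVY_TASK' is a placeholder used for illustration
    withName: 'MY_HEAVY_TASK' {
        cpus   = 8
        memory = '32 GB'
        time   = '12h'
    }
}
```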