KU Leuven/UHasselt Tier-2 High Performance Computing Infrastructure (VSC)

NB: You will need an account to use the HPC cluster to run the pipeline.

First, log in to the cluster on which you want to run the pipeline. You can check which cluster currently has the most free resources with sinfo, e.g. sinfo --cluster wice or sinfo --cluster genius.

Before running the pipeline you will need to create a SLURM script that acts as a master script to submit all jobs.

$ more job.slurm
#!/bin/bash
#SBATCH --account=...
#SBATCH --chdir=....
#SBATCH --partition=batch_long
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
 
module load Nextflow
 
nextflow run <pipeline> -profile vsc_kul_uhasselt,<CLUSTER> --project <your-credential-acc> <Add your other parameters>

NB: You have to specify your credential account via --project, otherwise the jobs will fail!

Here the cluster options are:

  • genius
  • wice
  • superdome

NB: The vsc_kul_uhasselt profile only covers a selected set of SLURM partitions. Should you require resources outside of these limits (e.g. GPUs), you will need to provide a custom config specifying an appropriate SLURM partition (e.g. 'gpu*'), as sketched below.
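
For example, a minimal custom config along the following lines can route GPU-labelled processes to a GPU partition. This is only a sketch: the label name (gpu), partition name (gpu_a100), GPU count and file name are assumptions and need to be adapted to your pipeline and cluster (check sinfo --cluster wice for the partitions that are actually available).

// gpu.config -- hypothetical example, adjust to your pipeline and cluster
process {
    withLabel: 'gpu' {                       // label name depends on the pipeline (assumption)
        queue            = 'gpu_a100'        // assumed GPU partition name, check sinfo
        clusterOptions   = { "--cluster wice --account=${params.project} --gres=gpu:1" }
        containerOptions = '--nv'            // let Singularity/Apptainer access the GPU
    }
}

Pass the extra config on the command line, e.g. nextflow run <pipeline> -profile vsc_kul_uhasselt,wice -c gpu.config --project <your-credential-acc> <Add your other parameters>.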

Use the --cluster option to specify the cluster you intend to use when submitting the job:

sbatch --cluster=wice|genius job.slurm 

All of the intermediate files required to run the pipeline will be stored in the work/ directory. It is recommended to delete this directory after the pipeline has finished successfully because it can get quite large, and all of the main output files will be saved in the results/ directory anyway.

The config enables automatic cleanup, which removes the work/ directory once the pipeline has completed successfully. If the run does not complete successfully, the work/ directory should be removed manually to save storage space. In this configuration the default work directory is set to $VSC_SCRATCH/work.
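
If you prefer to keep the intermediate files, for example to be able to resume a run with -resume, these defaults can be overridden in a small custom config supplied with -c. A minimal sketch, assuming you want a run-specific work directory (cleanup and workDir are standard Nextflow settings; the file and directory names are placeholders):

// keep_work.config -- example override, file name is arbitrary
def scratch_dir = System.getenv("VSC_SCRATCH") ?: "scratch/"

cleanup = false                       // keep the work/ directory after a successful run
workDir = "$scratch_dir/my_run_work"  // optional: run-specific work directory (placeholder name)

Supply it with -c keep_work.config in addition to the -profile option when launching the pipeline.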

NB: By default, both the work/ directory and the singularity/ directory (the cache directory for images) are located in $VSC_SCRATCH.

Config file

See config file on GitHub

vsc_kul_uhasselt.config
// Define the Scratch directory
def scratch_dir = System.getenv("VSC_SCRATCH") ?: "scratch/"
 
// Specify the work directory
workDir = "$scratch_dir/work"
 
// Perform work directory cleanup when the run has successfully completed
cleanup = true
 
// Reduce the job submission rate to about 30 per minute so the scheduler is not flooded with jobs
// Limit queueSize to keep job rate under control and avoid timeouts
executor {
    submitRateLimit = '30/1min'
    queueSize = 10
}
 
// Retry with exponential backoff to catch cluster timeouts, and stage files properly between scratch and the work directory (symlink in, rsync out)
process {
    stageInMode = "symlink"
    stageOutMode = "rsync"
    errorStrategy = { sleep(Math.pow(2, task.attempt) * 200 as long); return 'retry' }
    maxRetries    = 5
}
 
// Specify that singularity should be used and where the cache dir will be for the images
singularity {
    enabled = true
    autoMounts = true
    cacheDir = "$scratch_dir/.singularity"
}
 
env {
    SINGULARITY_CACHEDIR="$scratch_dir/.singularity"
    APPTAINER_CACHEDIR="$scratch_dir/.apptainer"
}
 
// AWS maximum retries for errors (This way the pipeline doesn't fail if the download fails one time)
aws {
    maxErrorRetry = 3
}
 
// Define profiles for each cluster
profiles {
    genius {
        params {
            config_profile_description = 'HPC_GENIUS profile for use on the genius cluster of the VSC HPC.'
            config_profile_contact = 'joon.klaps@kuleuven.be'
            config_profile_url = 'https://docs.vscentrum.be/en/latest/index.html'
            max_memory = 703.GB  // 768 - 65 so 65GB for overhead, max is 720000MB
            max_time = 168.h
            max_cpus = 36
        }
 
        process {
            executor = 'slurm'
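            // Route each task to a partition based on its resource request: >= 175 GB of
            // memory goes to the bigmem partitions, walltimes of 72 h or more go to the
            // *_long variants, and everything else uses the default batch partitions.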
            queue = {
                switch (task.memory) {
                case { it >=  175.GB }: // max is 180000
                    switch (task.time) {
                    case { it >= 72.h }:
                        return 'dedicated_big_bigmem'
                    default:
                        return 'bigmem'
                    }
                default:
                    switch (task.time) {
                    case { it >= 72.h }:
                        return 'batch_long'
                    default:
                        return 'batch'
                    }
                }
            }
            clusterOptions = { "--cluster genius --account=${params.project}" }
            scratch = "$scratch_dir"
        }
    }
 
    wice {
        params {
            config_profile_description = 'HPC_WICE profile for use on the Wice cluster of the VSC HPC.'
            config_profile_contact = 'joon.klaps@kuleuven.be'
            config_profile_url = 'https://docs.vscentrum.be/en/latest/index.html'
            max_memory = 1968.GB // max is 2016000
            max_cpus = 72
            max_time = 168.h
        }
 
        process {
            executor = 'slurm'
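            // Same partition routing as on genius, with the bigmem threshold at 239 GB.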
            queue = {
                switch (task.memory) {
                case { it >=  239.GB }:  // max is 244800
                    switch (task.time) {
                    case { it >= 72.h }:
                        return 'dedicated_big_bigmem'
                    default:
                        return 'bigmem'
                    }
                default:
                    switch (task.time) {
                    case { it >= 72.h }:
                        return 'batch_long'
                    default:
                        return 'batch'
                    }
                }
            }
            clusterOptions = { "--cluster wice --account=${params.project}" }
            scratch = "$scratch_dir"
        }
    }
 
    superdome {
        params {
            config_profile_description = 'HPC_SUPERDOME profile for use on the genius cluster of the VSC HPC.'
            config_profile_contact = 'joon.klaps@kuleuven.be'
            config_profile_url = 'https://docs.vscentrum.be/en/latest/index.html'
            max_memory = 5772.GB // 6000 - 228 so 228GB for overhead, max is 5910888MB
            max_cpus = 14
            max_time = 168.h
        }
 
        process {
            executor = 'slurm'
            queue = { task.time <= 72.h ? 'superdome' : 'superdome_long' }
            clusterOptions = { "--cluster genius --account=${params.project}" }
            scratch = "$scratch_dir"
        }
    }
}