KU Leuven/UHasselt Tier-2 High Performance Computing Infrastructure (VSC)

NB: You will need an account to use the HPC cluster to run the pipeline.

  1. Install Nextflow on the cluster
conda create --name nf-core python=3.12 nf-core nextflow
Note

A Nextflow module is available on the cluster and can be loaded with module load Nextflow, but it does not support plugins, so it is not recommended.
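
To check that the installation works, activate the environment and query the tool versions (a quick sanity check, assuming the environment was created as above):

conda activate nf-core
nextflow -version
nf-core --version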

  2. Set up the environment variables in ~/.bashrc or ~/.bash_profile:
export SLURM_ACCOUNT="<your-credential-account>"
 
# Needed for running Nextflow jobs
export NXF_HOME="$VSC_SCRATCH/.nextflow"
export NXF_WORK="$VSC_SCRATCH/work"
 
# Needed for running Apptainer containers
export APPTAINER_CACHEDIR="$VSC_SCRATCH/.apptainer/cache"
export APPTAINER_TMPDIR="$VSC_SCRATCH/.apptainer/tmp"
export NXF_CONDA_CACHEDIR="$VSC_SCRATCH/miniconda3/envs"
 
# Optional tower key
# export TOWER_ACCESS_TOKEN="<your_tower_access_token>"
# export NXF_VER="<version>"      # make sure it's 24.04.0 or later
Warning

The current config uses array jobs, which require Nextflow version >= 24.04.0 (see the Nextflow documentation on array jobs). You can pin a suitable version with:

export NXF_VER=24.04.0
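
After editing ~/.bashrc, reload it and confirm the variables are picked up (a quick sanity check; the exact values depend on your account and scratch space):

source ~/.bashrc
echo "$SLURM_ACCOUNT $NXF_HOME $NXF_WORK"
nextflow -version   # should report version 24.04.0 or later
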
  3. Make the submission script.

NB: Log in to the cluster you want to run the pipeline on. You can check which cluster currently has the most free resources with sinfo --cluster genius or sinfo --cluster wice.
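
For a combined overview of both clusters, you can also use the summarized view (the same sinfo call the config below uses to detect the current cluster):

sinfo --clusters=genius,wice -s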

$ more job.slurm
#!/bin/bash -l
#SBATCH --account=...
#SBATCH --chdir=....
#SBATCH --partition=batch_long
#SBATCH --nodes="1"
#SBATCH --ntasks-per-node="1"
 
# module load Nextflow # does not support plugins
conda activate nf-core
 
nextflow run <pipeline> -profile vsc_kul_uhasselt,<CLUSTER> <Add your other parameters>

NB: You have to specify your credential account by setting export SLURM_ACCOUNT="<your-credential-account>", otherwise the jobs will fail!

The available cluster options for <CLUSTER> are:

  • genius
  • wice
  • superdome
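
For example, the nextflow run line in the script above could be filled in as follows (nf-core/rnaseq and its parameters are placeholders for illustration only):

nextflow run nf-core/rnaseq -profile vsc_kul_uhasselt,wice --input samplesheet.csv --outdir results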

NB: The vsc_kul_uhasselt profile only covers a selected set of SLURM partitions. Should you require resources outside of these limits (e.g. GPUs), you will need to provide a custom config specifying an appropriate SLURM partition (e.g. 'gpu*').
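
A minimal sketch of such a custom config, passed to the run with -c (the label process_gpu, the partition name gpu_a100 and the --gpus-per-node request are assumptions; check the VSC documentation for the actual GPU partitions):

cat > gpu.config <<'EOF'
// Hypothetical example: send GPU-labelled processes to a GPU partition.
// Partition name and GPU request below are placeholders; adjust to your cluster.
process {
    withLabel: 'process_gpu' {
        queue = 'gpu_a100'
        clusterOptions = { "--cluster wice --account=" + System.getenv('SLURM_ACCOUNT') + " --gpus-per-node=1" }
    }
}
EOF

nextflow run <pipeline> -profile vsc_kul_uhasselt,wice -c gpu.config <Add your other parameters>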

Use the --cluster option to specify the cluster you intend to use when submitting the job:

sbatch --cluster=wice job.slurm   # or --cluster=genius
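
Once submitted, you can follow the job (and the jobs Nextflow itself submits) with squeue, adjusting the cluster name to the one you submitted to:

squeue --cluster=wice -u $USER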

All of the intermediate files required to run the pipeline will be stored in the work/ directory. It is recommended to delete this directory after the pipeline has finished successfully because it can get quite large, and all of the main output files will be saved in the results/ directory anyway.
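
With the defaults above the work directory lives in your scratch space, so you can remove it directly or let Nextflow clean up the last run (shown as a sketch; double-check the path before deleting):

rm -rf "$VSC_SCRATCH/work"
nextflow clean -f   # alternatively, run this from the launch directory of the pipeline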

Config file

See config file on GitHub

vsc_kul_uhasselt.config
// Default to /tmp directory if $VSC_SCRATCH scratch env is not available,
// see: https://github.com/nf-core/configs?tab=readme-ov-file#adding-a-new-config
def scratch_dir = System.getenv("VSC_SCRATCH") ?: "/tmp"
 
// Specify the work directory
workDir = "$scratch_dir/work"
 
// Perform work directory cleanup when the run has successfully completed
// cleanup = true
 
// Detect the current cluster (genius or wice); default to genius if detection fails
def hostname = "genius"
try {
    hostname = ['/bin/bash', '-c', 'sinfo --clusters=genius,wice -s | head -n 1'].execute().text.replace('CLUSTER: ','')
} catch (java.io.IOException e) {
    System.err.println("WARNING: Could not run sinfo to determine current cluster, defaulting to genius")
}
 
def tier1_project = System.getenv("SLURM_ACCOUNT") ?: null
 
if (! tier1_project && (hostname.contains("genius") || hostname.contains("wice"))) {
    // The genius and wice clusters require a project account
    System.err.println("Please specify your VSC project account with environment variable SLURM_ACCOUNT.")
    System.exit(1)
}
 
 
// Reduce the job submission rate to 50 per minute so the scheduler is not bombarded with jobs
// Limit queueSize to 30 to keep the number of queued jobs under control and avoid timeouts
executor {
    submitRateLimit = '50/1min'
    queueSize = 30
    exitReadTimeout = "3day"
}
 
// Retry with exponential backoff to ride out cluster timeouts; stage inputs as symlinks and stage outputs back with rsync
process {
    stageInMode = "symlink"
    stageOutMode = "rsync"
    errorStrategy = { sleep(Math.pow(2, task.attempt) * 200 as long); return 'retry' }
    maxRetries    = 5
    array = 10
}
 
// Specify that singularity should be used and where the cache dir will be for the images
singularity {
    enabled = true
    autoMounts = true
    cacheDir = "$scratch_dir/.singularity"
}
 
env {
    APPTAINER_TMPDIR="$scratch_dir/.apptainer/tmp"
    APPTAINER_CACHEDIR="$scratch_dir/.apptainer/cache"
}
 
// AWS maximum retries for errors (This way the pipeline doesn't fail if the download fails one time)
aws {
        maxErrorRetry = 3
}
 
// Define profiles for each cluster
profiles {
    genius {
        params {
            config_profile_description = 'HPC_GENIUS profile for use on the genius cluster of the VSC HPC.'
            config_profile_contact = 'joon.klaps@kuleuven.be'
            config_profile_url = 'https://docs.vscentrum.be/en/latest/index.html'
            max_memory = 703.GB  // 768 - 65 so 65GB for overhead, max is 720000MB
            max_time = 168.h
            max_cpus = 36
        }
 
        process {
            executor = 'slurm'
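            // Select a partition from the requested resources: jobs needing 175 GB or more go to the bigmem partitions, walltimes of 72 h or more go to the *_long variants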
            queue = {
                switch (task.memory) {
                case { it >=  175.GB }: // max is 180000
                    switch (task.time) {
                    case { it >= 72.h }:
                        return 'dedicated_big_bigmem,dedicated_big_batch,bigmem_long'
                    default:
                        return 'bigmem'
                    }
                default:
                    switch (task.time) {
                    case { it >= 72.h }:
                        return 'batch_long'
                    default:
                        return 'batch'
                    }
                }
            }
            clusterOptions = { "--cluster genius --account=$tier1_project" }
            scratch = "$scratch_dir"
        }
    }
 
    wice {
        params {
            config_profile_description = 'HPC_WICE profile for use on the Wice cluster of the VSC HPC.'
            config_profile_contact = 'joon.klaps@kuleuven.be'
            config_profile_url = 'https://docs.vscentrum.be/en/latest/index.html'
            max_memory = 1968.GB // max is 2016000
            max_cpus = 72
            max_time = 168.h
        }
 
        process {
            executor = 'slurm'
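            // Same routing logic as genius, with wice's thresholds and partition names (bigmem/hugemem from 239 GB, *_long from 72 h)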
            queue = {
                switch (task.memory) {
                case { it >=  239.GB }:  // max is 244800
                    switch (task.time) {
                    case { it >= 72.h }:
                        return 'dedicated_big_bigmem'
                    default:
                        return 'bigmem,hugemem'
                    }
                default:
                    switch (task.time) {
                    case { it >= 72.h }:
                        return 'batch_long,batch_icelake_long,batch_sapphirerapids_long'
                    default:
                        return 'batch,batch_sapphirerapids,batch_icelake'
                    }
                }
            }
            clusterOptions = { "--cluster wice --account=$tier1_project" }
            scratch = "$scratch_dir"
        }
    }
 
    superdome {
        params {
            config_profile_description = 'HPC_SUPERDOME profile for use on the genius cluster of the VSC HPC.'
            config_profile_contact = 'joon.klaps@kuleuven.be'
            config_profile_url = 'https://docs.vscentrum.be/en/latest/index.html'
            max_memory = 5772.GB // 6000 - 228 so 228GB for overhead, max is 5910888MB
            max_cpus = 14
            max_time = 168.h
        }
 
        process {
            executor = 'slurm'
            queue = { task.time <= 72.h ? 'superdome' : 'superdome_long' }
            clusterOptions = { "--cluster genius --account=$tier1_project" }
            scratch = "$scratch_dir"
        }
    }
}