# KU Leuven/UHasselt Tier-2 High Performance Computing Infrastructure (VSC)

The `genius` profile is for use on the genius cluster of the VSC HPC.
NB: You will need an account to use the HPC cluster to run the pipeline.
Install Nextflow on the cluster
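For example, Nextflow can be installed into a conda environment on the cluster; the environment name and Python version below are illustrative choices, not requirements:

```bash
# Create a conda environment containing Nextflow and the nf-core tools
# (environment name and Python version are just examples)
conda create --name nf-core -c bioconda -c conda-forge python=3.12 nf-core nextflow
conda activate nf-core
```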
Note: A Nextflow module is available and can be loaded with `module load Nextflow`, but it does not support plugins, so it is not recommended.
Set up the environment variables in `~/.bashrc` or `~/.bash_profile`:
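A typical setup is sketched below; the exact values (in particular the paths under `$VSC_SCRATCH`) are assumptions you should adapt to your own storage layout:

```bash
# Slurm credential account used for job submission
export SLURM_ACCOUNT="<your-credential-account>"

# Keep Nextflow's home and work directories on scratch storage (illustrative paths)
export NXF_HOME="$VSC_SCRATCH/.nextflow"
export NXF_WORK="$VSC_SCRATCH/work"

# Cache and temporary directories for Apptainer containers (illustrative paths)
export APPTAINER_CACHEDIR="$VSC_SCRATCH/apptainer/cache"
export APPTAINER_TMPDIR="$VSC_SCRATCH/apptainer/tmp"
```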
Warning: The current config is set up with array jobs. Make sure your Nextflow version is >= 24.10.1 (see the Nextflow documentation on array jobs). You can pin the version as shown below.
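One way to do this, assuming Nextflow was installed via the self-contained launcher or the conda environment above, is to pin the version through the `NXF_VER` environment variable:

```bash
# Pin the Nextflow version used by the launcher (24.10.1 or newer is needed for array jobs)
export NXF_VER=24.10.1

# Verify which version is actually being used
nextflow -version
```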
Make the submission script (a sketch is given after the notes below).
NB: You should log in to the cluster you want to run the pipeline on. You can check which clusters have the most free space using the following command: `sinfo --cluster wice|genius`.
NB: You have to specify your credential account by setting `export SLURM_ACCOUNT="<your-credential-account>"`, otherwise the jobs will fail!
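A minimal sketch of such a script, saved for example as `submit_nextflow.slurm`; the pipeline name, resources, and input/output paths are placeholders to replace with your own:

```bash
#!/bin/bash -l
#SBATCH --account=<your-credential-account>
#SBATCH --time=24:00:00
#SBATCH --cpus-per-task=2
#SBATCH --mem=4G

# Credential account must be exported so the jobs submitted by Nextflow are charged correctly
export SLURM_ACCOUNT="<your-credential-account>"

# Run an nf-core pipeline with the vsc_kul_uhasselt profile plus one of the cluster options listed below
nextflow run nf-core/rnaseq \
    -profile vsc_kul_uhasselt,genius \
    --input samplesheet.csv \
    --outdir results
```

The head job only needs modest resources: the actual pipeline tasks are submitted by Nextflow as separate SLURM jobs.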
The available cluster options are:

- `genius`
- `genius_gpu`
- `wice`
- `wice_gpu`
- `superdome`
NB: The vsc_kul_uhasselt profile is built around a selected set of SLURM partitions and will do its best to pick the most appropriate partition for each job. Modules with a label containing `gpu` will be allocated to a GPU partition even when the 'normal' genius profile is selected. Select the genius_gpu or wice_gpu profile to force jobs onto a GPU partition.
NB: If a module does not have the accelerator directive set, the profile will determine the number of GPUs based on the requested resources.
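If you want to set it explicitly, a custom config passed with `-c` can do so; the process name below is a hypothetical example, not part of this profile:

```groovy
// gpu.config -- a sketch; 'MY_GPU_TOOL' stands in for the real process name in your pipeline
process {
    withName: 'MY_GPU_TOOL' {
        accelerator = 1   // request exactly one GPU instead of letting the profile infer the count
    }
}
```

Pass it on the command line with `nextflow run ... -c gpu.config`.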
Use the `--cluster` option to specify the cluster you intend to use when submitting the job:
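For example, assuming the wrapper script from the sketch above is saved as `submit_nextflow.slurm` (an illustrative name):

```bash
# --cluster (equivalent to sbatch's -M/--clusters) routes the job to the named SLURM cluster
sbatch --cluster wice submit_nextflow.slurm
```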
All of the intermediate files required to run the pipeline will be stored in the `work/` directory. It is recommended to delete this directory after the pipeline has finished successfully, because it can get quite large and all of the main output files will be saved in the `results/` directory anyway.
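For example, after a successful run:

```bash
# Remove intermediate files once the pipeline has completed successfully
rm -rf work/
```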