HKI Configuration

All nf-core pipelines have been successfully configured for use on the clusters at the Leibniz Institute for Natural Product Research and Infection Biology - Hans Knöll Institute (HKI).

To use, run the pipeline with -profile hki,<cluster>, where <cluster> is one of the profiles listed below. This will download and launch the hki.config file, which contains a specific profile for each cluster. The number of parallel jobs is currently limited to 8.
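For example, a typical launch on the apate cluster might look like the following sketch; the pipeline name and parameters are placeholders, not anything HKI-specific:

```bash
# Sketch: run an nf-core pipeline on the apate cluster from the login node.
# nf-core/<pipeline> and <version> are placeholders for your chosen pipeline.
nextflow run nf-core/<pipeline> -r <version> -profile hki,apate --outdir ./results
```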

The currently available profiles are:

  • apate (uses singularity, cleanup set to true by default)
  • arges (uses singularity, cleanup set to true by default)
  • aither (uses singularity, cleanup set to true by default)
  • debug (sets cleanup to false for debugging purposes; use e.g. -profile hki,<cluster>,debug)

Note that Nextflow is not necessarily installed by default on the HKI HPC cluster(s). You will need to install it into a directory you have write access to. Follow these instructions from the Nextflow documentation.

  • Install Nextflow: here
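A typical installation into a user-writable directory follows the standard Nextflow instructions; the ~/bin location below is only a suggestion and assumes that directory is on your PATH:

```bash
# Download the Nextflow launcher (requires Java to be available on the cluster).
curl -s https://get.nextflow.io | bash

# Move it somewhere you have write access to; ~/bin is an example location.
mkdir -p ~/bin
mv nextflow ~/bin/
chmod +x ~/bin/nextflow
```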

All of the intermediate files required to run the pipeline will be stored in the work/ directory. It is recommended to delete this directory after the pipeline has finished successfully because it can get quite large, and all of the main output files will be saved in the results/ directory anyway.
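Once a run has completed successfully and you have confirmed the results, the intermediates can be removed, for example:

```shell
# Delete all intermediate files; only do this after confirming that the
# results/ directory contains everything you need.
rm -rf work/

# Alternatively, Nextflow can remove the work files of past runs itself:
#   nextflow clean -f
```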

NB: You will need an account to use the HKI HPC clusters in order to run the pipeline. If in doubt, contact the ICT Service Desk.

NB: Nextflow will need to submit jobs via SLURM to the HKI HPC clusters, so the commands above must be executed from a login node. If in doubt, contact ICT.

Config file

See config file on GitHub

hki.config
params {
    config_profile_description = 'HKI clusters profile provided by nf-core/configs.'
    config_profile_contact = 'James Fellows Yates (@jfy133)'
    config_profile_url = 'https://leibniz-hki.de'
}
 
profiles {
    apate {
        params {
            config_profile_description = 'apate HKI cluster profile provided by nf-core/configs'
            config_profile_contact = 'James Fellows Yates (@jfy133)'
            config_profile_url = 'https://leibniz-hki.de'
            max_memory = 128.GB
            max_cpus = 32
            max_time = 1440.h
        }
        process {
            executor = 'local'
            maxRetries = 2
        }
 
        executor {
            queueSize = 8
        }
 
        singularity {
            enabled = true
            autoMounts = true
            cacheDir = '/Net/Groups/ccdata/apps/singularity'
        }
 
        conda {
            cacheDir = '/Net/Groups/ccdata/apps/conda_envs'
        }
 
        cleanup = true
    }
 
    aither {
        params {
            config_profile_description = 'aither HKI cluster profile provided by nf-core/configs'
            config_profile_contact = 'James Fellows Yates (@jfy133)'
            config_profile_url = 'https://leibniz-hki.de'
            max_memory = 128.GB
            max_cpus = 32
            max_time = 1440.h
        }
        process {
            executor = 'local'
            maxRetries = 2
        }
 
        executor {
            queueSize = 8
        }
 
        singularity {
            enabled = true
            autoMounts = true
            cacheDir = '/Net/Groups/ccdata/apps/singularity'
        }
 
        conda {
            cacheDir = '/Net/Groups/ccdata/apps/conda_envs'
        }
 
        cleanup = true
    }
 
    arges {
        params {
            config_profile_description = 'arges HKI cluster profile provided by nf-core/configs'
            config_profile_contact = 'James Fellows Yates (@jfy133)'
            config_profile_url = 'https://leibniz-hki.de'
            max_memory = 64.GB
            max_cpus = 12
            max_time = 1440.h
        }
        process {
            executor = 'local'
            maxRetries = 2
        }
 
        executor {
            queueSize = 8
        }
 
        singularity {
            enabled = true
            autoMounts = true
            cacheDir = '/Net/Groups/ccdata/apps/singularity'
        }
 
        conda {
            cacheDir = '/Net/Groups/ccdata/apps/conda_envs'
        }
 
        cleanup = true
    }
 
    debug {
        cleanup = false
    }
}