LUGH configuration

Author: Barry Digby

Contact Info:

System Administrator: Chris Duke

Quick Start

To use the Lugh configuration profile with your pipeline, add -profile nuig to your nextflow run command:

nextflow -bg run nf-core/rnaseq -profile test,nuig

Please take care to use the -bg flag, or run the job on a compute node.


The configuration profile loads the prerequisite modules for users (Java and Singularity); however, it is up to the user to have a working version of Nextflow installed on their PATH. Installation instructions are available in the Nextflow documentation.

Queue Resources

Queue | Hostnames | Max Memory | Max CPUs | Max Time

The configuration profile's design is simple: if a process requests 64 GB of memory or more, or more than 16 CPUs, it is sent to the highmem queue; otherwise it is sent to the normal queue. Please do not use the MSC queue; it is reserved for Masters students.
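The queue-routing rule, as written in the profile's ternary expression, can be sketched in Python (route_queue is a hypothetical helper for illustration, not part of the profile):

```python
def route_queue(memory_gb: float, cpus: int) -> str:
    """Mirror the profile's queue selector: jobs requesting 64 GB of
    memory or more, or more than 16 CPUs, go to 'highmem'; everything
    else goes to 'normal'."""
    return "highmem" if memory_gb >= 64 or cpus > 16 else "normal"

print(route_queue(32, 8))    # normal
print(route_queue(64, 8))    # highmem (64 GB hits the >= 64 GB bound)
print(route_queue(32, 17))   # highmem
```

Note that a request of exactly 64 GB already lands on highmem, matching the >= comparison in the config below.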

Take others into consideration when deploying your workflow (do not hog the cluster 🐷). If you need to hammer the cluster with a pipeline, please reach out to me and we can tweak the configuration profile to dispatch jobs to only a handful of compute nodes via hostnames.
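Pinning jobs to a handful of compute nodes can be done through SLURM's --nodelist option via Nextflow's clusterOptions process directive. A sketch only; the node names (compute01, compute02) are hypothetical placeholders, not real Lugh hostnames:

```groovy
process {
    executor = 'slurm'
    // Restrict all jobs to a subset of nodes (hypothetical hostnames)
    clusterOptions = '--nodelist=compute01,compute02'
}
```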

Container Cache

Your workflow containers are stored under /data/containers/ which is accessible to all users on lugh.

Config file

See config file on GitHub

//Profile config names for nf-core/configs
params {
    config_profile_description = 'National University of Ireland, Galway Lugh cluster profile provided by nf-core/configs'
    config_profile_contact = 'Barry Digby (@BarryDigby)'
    config_profile_url = ''
}

singularity {
    enabled = true
    autoMounts = true
    cacheDir = '/data/containers'
}

process {
    beforeScript = """
                   module load EasyBuild/3.4.1
                   module load Java/1.8.0_144
                   module load singularity/3.4.1
                   ulimit -s unlimited
                   """
    containerOptions = '-B /data/'
    executor = 'slurm'
    queue = { task.memory >= 64.GB || task.cpus > 16 ? 'highmem' : 'normal' }
}

params {
    max_time = 120.h
    max_memory = 128.GB
    max_cpus = 32
}