Pawsey Setonix HPC Configuration

nf-core pipelines have been successfully run on Setonix, the HPC system at the Pawsey Supercomputing Centre.

To run an nf-core pipeline on Pawsey’s Setonix HPC, run the pipeline with -profile singularity,pawsey_setonix. This will download and launch pawsey_setonix.config, which has been pre-configured with a setup suitable for the Setonix cluster. Using this profile, a Docker image containing all of the required software will be downloaded and converted to a Singularity image before the pipeline is executed.
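Because each Docker image is pulled and converted on first use, you may want converted images to be reused across runs by pointing Nextflow's NXF_SINGULARITY_CACHEDIR environment variable at a shared directory. A minimal sketch, assuming a /scratch layout; the exact path is illustrative, not mandated by the profile:

# Cache converted Singularity images so repeat runs skip the conversion step
export NXF_SINGULARITY_CACHEDIR=/scratch/$PAWSEY_PROJECT/$USER/singularity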

Access to Setonix HPC

Please be aware that you will need a user account, membership of a Setonix project, and a service unit allocation for your project in order to use this infrastructure. See the Pawsey documentation for details regarding access mechanisms for Setonix HPC.

Launch an nf-core pipeline on Setonix

Before running the pipeline, you will need to load Nextflow and Singularity, both of which are globally installed modules on Setonix. You can do this by running the commands below:

module purge
module load nextflow singularity
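To confirm both tools are available, you can print their versions (an optional sanity check, not a required step):

nextflow -version
singularity --version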

Execution command

module load nextflow
module load singularity
nextflow run <nf-core_pipeline>/ \
  -profile singularity,pawsey_setonix \
  <additional flags>
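For example, a launch of nf-core/rnaseq might look like this (the pipeline, revision, and parameter values are illustrative):

nextflow run nf-core/rnaseq \
  -r 3.14.0 \
  -profile singularity,pawsey_setonix \
  --input samplesheet.csv \
  --outdir results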

Cluster considerations

This config determines which Setonix queue each task is submitted to based on the amount of memory it requests. For the sake of resource and service unit efficiency, the following rules are applied:

  • Tasks requesting up to 230 GB of memory will be submitted to the work queue
  • Tasks requesting more than 230 GB and up to 980 GB will be submitted to the highmem queue

See the Setonix documentation for details regarding queue structure and resource limits.
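As a concrete illustration of this routing, a process that declares more than 230 GB of memory is sent to the highmem queue by the profile's queue selector. The process below is a hypothetical example, not part of the config:

process BIG_SORT {
    cpus 32
    memory 512.GB   // above the 230 GB cutoff, so the profile routes this task to 'highmem'

    script:
    """
    sort --parallel=${task.cpus} input.txt > sorted.txt
    """
}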

Config file

See config file on GitHub

// Pawsey Setonix nf-core configuration profile
params {
    config_profile_description = 'Pawsey Setonix HPC profile provided by nf-core/configs'
    config_profile_contact = 'Sarah Beecroft (@SarahBeecroft), Georgie Samaha (@georgiesamaha)'
    config_profile_url = ''
    max_cpus = 64
    max_memory = 230.GB
}

// Enable use of Singularity to run containers
singularity {
    enabled = true
    autoMounts = true
    autoCleanUp = true
}

// Submit up to 1024 concurrent jobs
executor {
    queueSize = 1024
}

// Define process executor, resource, and queue settings
process {
    executor = 'slurm'
    clusterOptions = "--account=${System.getenv('PAWSEY_PROJECT')}"
    module = 'singularity/3.11.4-slurm'
    cache = 'lenient'
    stageInMode = 'symlink'
    // Route each task by requested memory: up to 230 GB -> work, up to 980 GB -> highmem
    queue = { task.memory <= 230.GB ? 'work' : (task.memory <= 980.GB ? 'highmem' : '') }
}
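Note that clusterOptions reads the Slurm account from the PAWSEY_PROJECT environment variable, which Pawsey sets in your login environment. If it is not set in your session, export it before launching Nextflow; the project code below is a placeholder:

export PAWSEY_PROJECT=project0000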