Introduction

Nextflow handles job submissions on SLURM or other environments, and supervises the running jobs. Thus the Nextflow process must run until the pipeline is finished. We recommend running it in the background through screen / tmux or a similar tool. Alternatively, you can run Nextflow within a cluster job submitted to your job scheduler.
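
For example, one way to keep the process alive is to launch the run inside a detached screen session (the session name below is arbitrary):

screen -S smrnaseq          # start a named screen session (any name works)
nextflow run nf-core/smrnaseq --reads '*.fastq.gz' -profile docker
# detach with Ctrl-a d; reattach later with: screen -r smrnaseq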

It is recommended to limit the Nextflow Java virtual machine's memory. We recommend adding the following line to your environment (typically in ~/.bashrc or ~/.bash_profile):

NXF_OPTS='-Xms1g -Xmx4g'
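
To make this setting persistent, assuming a bash shell, you can append it to your profile:

echo "export NXF_OPTS='-Xms1g -Xmx4g'" >> ~/.bashrc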

Running the pipeline

The typical command for running the pipeline is as follows:

nextflow run nf-core/smrnaseq --reads '*.fastq.gz' -profile docker

This will launch the pipeline with the docker configuration profile. See below for more information about profiles.

Note that the pipeline will create the following files in your working directory:

work            # Directory containing the nextflow working files
results         # Finished results (configurable, see below)
.nextflow.log   # Log file from Nextflow
# Other nextflow hidden files, eg. history of pipeline runs and old logs.

Updating the pipeline

When you run the above command, Nextflow automatically pulls the pipeline code from GitHub and stores it as a cached version. When running the pipeline after this, it will always use the cached version if available - even if the pipeline has been updated since. To make sure that you’re running the latest version of the pipeline, make sure that you regularly update the cached version of the pipeline:

nextflow pull nf-core/smrnaseq

Reproducibility

It’s a good idea to specify a pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you’ll be running the same version of the pipeline, even if there have been changes to the code since.

First, go to the nf-core/smrnaseq releases page and find the latest version number - numeric only (eg. 1.3.1). Then specify this when running the pipeline with -r (one hyphen) - eg. -r 1.3.1.
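
For example, a release-pinned run might look like this (1.3.1 is just the example version from above; use the latest release number):

nextflow run nf-core/smrnaseq -r 1.3.1 --reads '*.fastq.gz' -profile docker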

This version number will be logged in reports when you run the pipeline, so that you’ll know what you used when you look back in the future.

Main Arguments

-profile

Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments. Note that multiple profiles can be loaded as a comma-separated list, for example: -profile test,docker - the order of arguments is important! An example is shown after the list below.

If -profile is not specified at all, the pipeline will run locally and expect all software to be installed and available on the PATH.

  • awsbatch
    • A generic configuration profile to be used with AWS Batch.
  • conda
    • A generic configuration profile to be used with conda
    • Pulls most software from Bioconda
  • docker
    • A generic configuration profile to be used with Docker
  • singularity
    • A generic configuration profile to be used with Singularity
  • test
    • A profile with a complete configuration for automated testing
    • Includes links to test data so needs no other parameters
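
For instance, the test profile can be combined with a software profile to run the bundled test data without any further parameters (a minimal sketch):

nextflow run nf-core/smrnaseq -profile test,docker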

--reads

Location of the input FastQ files:

 --reads 'path/to/data/*.fastq.gz'

Please note the following requirements:

  1. The path must be enclosed in quotes
  2. The path must have at least one * wildcard character

--protocol

Protocol used to construct the smRNA-seq libraries. Note that the trimming parameters and the 3’ adapter sequence are pre-defined for each protocol, as listed in the table below. Default: “illumina”

--protocol [one protocol listed in the table below]
Protocol | Library Prep Kit                       | Trimming Parameters                  | 3’ Adapter Sequence
illumina | Illumina TruSeq Small RNA              | clip_R1 = 0; three_prime_clip_R1 = 0 | TGGAATTCTCGGGTGCCAAGG
nextflex | BIOO SCIENTIFIC NEXTFLEX Small RNA-Seq | clip_R1 = 4; three_prime_clip_R1 = 4 | TGGAATTCTCGGGTGCCAAGG
qiaseq   | QIAGEN QIAseq miRNA                    | clip_R1 = 0; three_prime_clip_R1 = 0 | AACTGTAGGCACCATCAAT
cats     | Diagenode CATS Small RNA-seq           | clip_R1 = 3; three_prime_clip_R1 = 0 | GATCGGAAGAGCACACGTCTG
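
As an example, a run of QIAseq libraries would simply set the matching protocol (the read path is a placeholder):

nextflow run nf-core/smrnaseq --reads 'path/to/data/*.fastq.gz' --protocol qiaseq -profile docker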

Reference genomes

The pipeline config files come bundled with paths to the Illumina iGenomes reference index files. If running with Docker or AWS, the configuration is set up to use the AWS-iGenomes resource.

--genome (using iGenomes)

The reference genome to use for the analysis; it needs to be one of the genomes specified in the config file. The human GRCh37 genome is used by default.

--genome 'GRCh37'

Supported genomes

Parameter   | Latin Name               | Common Name
AGPv3       | Zea mays                 | Maize
BDGP6       | Drosophila melanogaster  | Fruit fly
CanFam3.1   | Canis familiaris         | Dog
CHIMP2.1.4  | Pan troglodytes          | Chimpanzee
EquCab2     | Equus caballus           | Horse
Galgal4     | Gallus gallus            | Chicken
Gm01        | Glycine max              | Soybean
GRCh37      | Homo sapiens             | Human
GRCm38      | Mus musculus             | Mouse
GRCz10      | Danio rerio              | Zebrafish
IRGSP-1.0   | Oryza sativa japonica    | Rice
Mmul_1      | Macaca mulatta           | Macaque
Rnor_6.0    | Rattus norvegicus        | Rat
Sbi1        | Sorghum bicolor          | Great millet
Sscrofa10.2 | Sus scrofa               | Pig
TAIR10      | Arabidopsis thaliana     | Thale cress
UMD3.1      | Bos taurus               | Cow
WBcel235    | Caenorhabditis elegans   | Nematode

There are 31 different species supported in the iGenomes references. To run the pipeline, you must specify which to use with the --genome flag.

You can find the keys to specify the genomes in the iGenomes config file. Common genomes that are supported are:

  • Human
    • --genome GRCh37
  • Mouse
    • --genome GRCm38
  • Drosophila
    • --genome BDGP6
  • S. cerevisiae
    • --genome 'R64-1-1'

There are numerous others - check the config file for more.

Note that you can use the same configuration setup to save sets of reference files for your own use, even if they are not part of the iGenomes resource. See the Nextflow documentation for instructions on where to save such a file.

The syntax for this reference configuration is as follows:

params {
  genomes {
    'GRCh37' {
      fasta   = '<path to the genome fasta file>'          // used if no bowtie index is given
      mature  = '<path to the mature miRNA fasta file>'    // e.g. mature.fa
      hairpin = '<path to the hairpin miRNA fasta file>'   // e.g. hairpin.fa
      bowtie  = '<path to the bowtie index>'               // genome index base name
      gtf     = '<path to the gtf file>'
      mirtrace_species = "sps"                             // species abbreviation according to miRBase
    }
    // Any number of additional genomes, key is used with --genome
  }
}
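
Such a block could, for instance, be saved to a file and supplied to a run with the -c option described below (the file name here is just a placeholder):

nextflow run nf-core/smrnaseq -c custom_genomes.config --genome 'GRCh37' --reads '*.fastq.gz' -profile docker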

--saveReference

Supply this parameter to save any generated reference genome files to your results folder. These can then be used for future pipeline runs, reducing processing times.

--fasta

If you prefer, you can specify the full path to your reference genome when you run the pipeline:

--fasta '[path to Fasta reference]'

--igenomesIgnore

Do not load igenomes.config when running the pipeline. You may choose this option if you observe clashes between custom parameters and those supplied in igenomes.config.

--mature

If you prefer, you can specify the full path to the FASTA file of mature miRNAs when you run the pipeline:

--mature [path to the FASTA file of mature miRNAs]

--hairpin

If you prefer, you can specify the full path to the FASTA file of miRNA precursors when you run the pipeline:

--hairpin [path to the FASTA file of miRNA precursors]

--bt_index

If you prefer, you can specify the full path to your Bowtie 1 genome index when you run the pipeline:

--bt_index [path to Bowtie 1 index]
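
Putting the reference options together, a run with fully custom reference files might look like this (all paths are placeholders):

nextflow run nf-core/smrnaseq --reads '*.fastq.gz' \
  --fasta '/path/to/genome.fa' \
  --mature '/path/to/mature.fa' \
  --hairpin '/path/to/hairpin.fa' \
  --bt_index '/path/to/bowtie/genome' \
  -profile docker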

Trimming options

--min_length [int]

Discard reads that became shorter than length [int] because of either quality or adapter trimming. Default: 18

--clip_R1 [int]

Instructs Trim Galore to remove [int] bp from the 5’ end of read 1

--three_prime_clip_R1 [int]

Instructs Trim Galore to remove [int] bp from the 3’ end of read 1 AFTER adapter/quality trimming has been performed

--three_prime_adapter [sequence]

Instructs Trim Galore to remove the given 3’ adapter sequence, which is typically determined by the smRNA-seq library preparation protocol
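
To illustrate, the protocol presets from the table above can also be expressed manually with explicit trimming flags; the values below mirror the nextflex row:

nextflow run nf-core/smrnaseq --reads '*.fastq.gz' \
  --clip_R1 4 --three_prime_clip_R1 4 \
  --three_prime_adapter TGGAATTCTCGGGTGCCAAGG \
  -profile docker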

Skipping QC steps

--skipQC

Skip all QC steps aside from MultiQC

--skipFastqc

Skip FastQC

--skipMultiqc

Skip MultiQC

Job resources

Automatic resubmission

Each step in the pipeline has a default set of requirements for number of CPUs, memory and time. For most of the steps in the pipeline, if the job exits with an error code of 143 (exceeded requested resources) it will automatically be resubmitted with higher requests (2 x original, then 3 x original). If it still fails after the third attempt, the pipeline is stopped.

Custom resource requests

Wherever process-specific requirements are set in the pipeline, the default value can be changed by creating a custom config file. See the files hosted at nf-core/configs for examples.
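
For illustration, a minimal custom config that raises the resources of a single process could look like the following; the process name is only an assumption here, so check the pipeline code or the nf-core/configs examples for the actual names:

process {
  withName: 'trim_galore' {   // hypothetical process name
    cpus = 8
    memory = 32.GB
    time = 12.h
  }
}

This file can then be passed to a run with the -c parameter (see below).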

If you are likely to be running nf-core pipelines regularly, it may be a good idea to request that your custom config file is uploaded to the nf-core/configs git repository. Before you do this, please test that the config file works with your pipeline of choice using the -c parameter (see definition below). You can then create a pull request to the nf-core/configs repository with the addition of your config file, the associated documentation file (see examples in nf-core/configs/docs), and an amendment to nfcore_custom.config to include your custom profile.

If you have any questions or issues please send us a message on Slack.

AWS Batch specific parameters

Running the pipeline on AWS Batch requires a couple of specific parameters to be set according to your AWS Batch configuration. Please use the awsbatch profile (-profile awsbatch) and then specify all of the following parameters.

--awsqueue

The JobQueue that you intend to use on AWS Batch.

--awsregion

The AWS region to run your job in. Default is set to eu-west-1 but can be adjusted to your needs.

Please make sure to also set the -w/--work-dir and --outdir parameters to an S3 storage bucket of your choice - you’ll get an error message notifying you if you didn’t.
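
Taken together, an AWS Batch run might look like this sketch; the queue name and S3 bucket are placeholders for your own setup:

nextflow run nf-core/smrnaseq -profile awsbatch \
  --awsqueue my-batch-queue --awsregion eu-west-1 \
  --reads 's3://my-bucket/reads/*.fastq.gz' \
  --outdir 's3://my-bucket/results' \
  -w 's3://my-bucket/work'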

Other command line parameters

--outdir

The output directory where the results will be saved.

--email

Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (~/.nextflow/config) then you don’t need to specify this on the command line for every run.

-name

Name for the pipeline run. If not specified, Nextflow will automatically generate a random mnemonic.

This is used in the MultiQC report (if not default) and in the summary HTML / e-mail (always).

NB: Single hyphen (core Nextflow option)

--seq_center

Text about the sequencing centre which will be added to the header of the output BAM files.

-resume

Specify this when restarting a pipeline. Nextflow will use cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously.

You can also supply a run name to resume a specific run: -resume [run-name]. Use the nextflow log command to show previous run names.

NB: Single hyphen (core Nextflow option)

-c

Specify the path to a specific config file (this is a core Nextflow option). Useful if using different UPPMAX projects or different sets of reference genomes.

NB: Single hyphen (core Nextflow option)

Note - you can use this to override defaults. For example, we run on UPPMAX but don’t want to use the MultiQC environment module as is the default. So we specify a config file using -c that contains the following:

process.$multiqc.module = []

Stand-alone scripts

The bin directory contains some scripts used by the pipeline which may also be run manually:

  • edgeR_miRBase.r
    • R script used for processing read counts of mature miRNAs and miRNA precursors (hairpins).

--custom_config_version

Provide a git commit id for the custom institutional configs hosted at nf-core/configs. This was implemented for reproducibility purposes. Default is set to master.

## Download and use config file with the following git commit id
--custom_config_version d52db660777c4bf36546ddb188ec530c3ada1b96

--custom_config_base

If you’re running offline, Nextflow will not be able to fetch the institutional config files from the internet. If you don’t need them, then this is not a problem. If you do need them, you should download the files from the repo and tell Nextflow where to find them with the --custom_config_base option. For example:

## Download and unzip the config files
cd /path/to/my/configs
wget https://github.com/nf-core/configs/archive/master.zip
unzip master.zip
 
## Run the pipeline
cd /path/to/my/data
nextflow run /path/to/pipeline/ --custom_config_base /path/to/my/configs/configs-master/

Note that the nf-core/tools helper package has a download command to download all required pipeline files + singularity containers + institutional configs in one go for you, to make this process easier.

--max_memory

Use to set a top-limit for the default memory requirement for each process. Should be a string in the format integer-unit. eg. --max_memory '8.GB'

--max_time

Use to set a top-limit for the default time requirement for each process. Should be a string in the format integer-unit. eg. --max_time '2.h'

--max_cpus

Use to set a top-limit for the default CPU requirement for each process. Should be an integer, eg. --max_cpus 1
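
For example, to cap all processes at the resources of a single mid-sized node (the values below are only illustrative):

nextflow run nf-core/smrnaseq --reads '*.fastq.gz' -profile docker \
  --max_cpus 16 --max_memory '64.GB' --max_time '48.h'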

--plaintext_email

Set to receive plain-text e-mails instead of HTML formatted.

--monochrome_logs

Set to disable colourful command line output and live life in monochrome.

--multiqc_config

Specify a path to a custom MultiQC configuration file.