Samplesheet input

You will need to create a samplesheet file with information about the samples in your experiment before running the pipeline. Use this parameter to specify its location. It has to be a comma-separated file with 5 columns and a header row, as shown in the examples below.

--input '[path to samplesheet file]'
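
As a quick sanity check before launching, you can confirm the file has the expected header and five comma-separated fields on every row (samplesheet.csv is a placeholder name here):

head -1 samplesheet.csv
# group,replicate,fastq_1,fastq_2,strandedness
awk -F',' 'NF != 5 { print "Row " NR " has " NF " fields" }' samplesheet.csv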

Multiple replicates

The group identifier is the same when you have multiple replicates from the same experimental group; just increment the replicate identifier appropriately. The first replicate value for any given experimental group must be 1. Below is an example for a single experimental group in triplicate:

group,replicate,fastq_1,fastq_2,strandedness
control,1,AEG588A1_S1_L002_R1_001.fastq.gz,AEG588A1_S1_L002_R2_001.fastq.gz,reverse
control,2,AEG588A2_S2_L002_R1_001.fastq.gz,AEG588A2_S2_L002_R2_001.fastq.gz,reverse
control,3,AEG588A3_S3_L002_R1_001.fastq.gz,AEG588A3_S3_L002_R2_001.fastq.gz,reverse

Multiple runs of the same library

The group and replicate identifiers are the same when you have re-sequenced the same sample more than once (e.g. to increase sequencing depth). The pipeline will concatenate the raw reads before alignment. Below is an example for two samples sequenced across multiple lanes:

group,replicate,fastq_1,fastq_2,strandedness
control,1,AEG588A1_S1_L002_R1_001.fastq.gz,AEG588A1_S1_L002_R2_001.fastq.gz,unstranded
control,1,AEG588A1_S1_L003_R1_001.fastq.gz,AEG588A1_S1_L003_R2_001.fastq.gz,unstranded
treatment,1,AEG588A4_S4_L003_R1_001.fastq.gz,AEG588A4_S4_L003_R2_001.fastq.gz,unstranded
treatment,1,AEG588A4_S4_L004_R1_001.fastq.gz,AEG588A4_S4_L004_R2_001.fastq.gz,unstranded

Full design

A final design file consisting of both single- and paired-end data may look something like the one below. This is for two experimental groups in triplicate, where the last replicate of the treatment group has been sequenced twice.

group,replicate,fastq_1,fastq_2,strandedness
control,1,AEG588A1_S1_L002_R1_001.fastq.gz,AEG588A1_S1_L002_R2_001.fastq.gz,forward
control,2,AEG588A2_S2_L002_R1_001.fastq.gz,AEG588A2_S2_L002_R2_001.fastq.gz,forward
control,3,AEG588A3_S3_L002_R1_001.fastq.gz,AEG588A3_S3_L002_R2_001.fastq.gz,forward
treatment,1,AEG588A4_S4_L003_R1_001.fastq.gz,,forward
treatment,2,AEG588A5_S5_L003_R1_001.fastq.gz,,forward
treatment,3,AEG588A6_S6_L003_R1_001.fastq.gz,,forward
treatment,3,AEG588A6_S6_L004_R1_001.fastq.gz,,forward
Column        Description
group         Group identifier for sample. This will be identical for replicate samples from the same experimental group.
replicate     Integer representing replicate number. Must start from 1..<number of replicates>.
fastq_1       Full path to FastQ file for read 1. File has to be zipped and have the extension ".fastq.gz" or ".fq.gz".
fastq_2       Full path to FastQ file for read 2. File has to be zipped and have the extension ".fastq.gz" or ".fq.gz".
strandedness  Sample strand-specificity. Must be one of unstranded, forward or reverse.

An example samplesheet has been provided with the pipeline.

Direct download of public repository data

NB: This is an experimental feature but should work beautifully when it does! :)

The pipeline has been set up to automatically download and process the raw FastQ files from public repositories. Identifiers can be provided in a file, one per line, via the --public_data_ids parameter. Currently, the following identifiers are supported:

SRA             ENA             GEO
SRR11605097     ERR4007730      GSM4432381
SRX8171613      ERX4009132      GSE147507
SRS6531847      ERS4399630
SAMN14689442    SAMEA6638373
SRP256957       ERP120836
SRA1068758      ERA2420837
PRJNA625551     PRJEB37513
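
For example, a sketch of a run driven by a plain-text list of identifiers (the file name ids.txt is illustrative; the identifiers are taken from the table above):

cat ids.txt
# SRR11605097
# GSE147507
# PRJNA625551

nextflow run nf-core/rnaseq --public_data_ids ids.txt -profile docker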

If SRR/ERR run IDs are provided then these will be resolved back to their appropriate SRX/ERX IDs so that multiple runs from the same experiment can be merged. This is conceptually the same as merging multiple libraries sequenced from the same sample.

The final sample information for all identifiers is obtained from the ENA, which provides direct download links for FastQ files as well as their associated md5 sums. If download links exist, the files will be downloaded in parallel via FTP; otherwise they will NOT be downloaded. This is intentional because tools such as parallel-fastq-dump, fasterq-dump and prefetch require pre-existing configuration files in the user's home directory, which makes automation tricky across different platforms and container environments.

As a bonus, the pipeline will also generate a valid samplesheet with paths to the downloaded data that can be used with the --input parameter. However, it is highly recommended that you double-check that all of the identifiers you defined using --public_data_ids are represented in the samplesheet. Public databases also don't reliably hold information such as experimental group, replicate identifiers or strandedness, so you may need to amend these entries too. All of the sample metadata obtained from the ENA has been appended as additional columns to help you manually curate the samplesheet before you run the pipeline.

Alignment options

By default, the pipeline uses STAR (i.e. --aligner star) to align the raw FastQ reads to the reference genome. STAR is fast but requires a lot of memory to run, typically around 38GB for the human GRCh37 reference genome. Since the RSEM (i.e. --aligner star_rsem) workflow in the pipeline also uses STAR, you should use the HISAT2 aligner (i.e. --aligner hisat2) if you have memory limitations.

You also have the option to pseudo-align and quantify your data with Salmon by providing the --pseudo_aligner salmon parameter. Salmon will then be run in addition to the standard alignment workflow defined by --aligner, mainly because it allows you to obtain QC metrics with respect to the genomic alignments. However, you can provide the --skip_alignment parameter if you would like to run Salmon in isolation. By default, the pipeline will use the genome fasta and gtf file to generate the transcripts fasta file, and then to build the Salmon index. You can override these parameters using the --transcript_fasta and --salmon_index parameters, respectively.
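
For example, possible invocations combining the flags described above (flag names are taken from the text; the samplesheet and genome values are illustrative):

# Use HISAT2 instead of STAR if memory is limited:
nextflow run nf-core/rnaseq --input samplesheet.csv --genome GRCh37 --aligner hisat2 -profile docker

# Run Salmon pseudo-alignment in addition to the standard alignment workflow:
nextflow run nf-core/rnaseq --input samplesheet.csv --genome GRCh37 --pseudo_aligner salmon -profile docker

# Run Salmon in isolation, skipping genomic alignment entirely:
nextflow run nf-core/rnaseq --input samplesheet.csv --genome GRCh37 --skip_alignment --pseudo_aligner salmon -profile docker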

Reference genome files

The minimum reference genome requirements are a FASTA and GTF file; all other files required to run the pipeline can be generated from these. However, it is more storage- and compute-friendly to re-use reference genome files as efficiently as possible. It is recommended to use the --save_reference parameter if you are using the pipeline to build new indices (e.g. those unavailable on AWS iGenomes) so that you can save them somewhere locally. The index-building step can be quite time-consuming, and saving the indices permits their reuse in future runs of the pipeline, saving compute time. You can then either provide the appropriate reference genome files on the command line via the appropriate parameters (e.g. --star_index '/path/to/STAR/index/') or via a custom config file.
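
As a sketch, building indices once with --save_reference and then re-using them on a later run (the --fasta/--gtf parameter names and all paths are assumptions for illustration):

# First run: build indices from a custom FASTA/GTF and keep them with the results
nextflow run nf-core/rnaseq --input samplesheet.csv --fasta genome.fa --gtf genes.gtf --save_reference -profile docker

# Later runs: point the pipeline at the saved STAR index to skip the index-building step
nextflow run nf-core/rnaseq --input samplesheet.csv --fasta genome.fa --gtf genes.gtf --star_index '/path/to/STAR/index/' -profile docker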

  • If --genome is provided then the FASTA and GTF files (and existing indices) will be automatically obtained from AWS-iGenomes unless these have already been downloaded locally in the path specified by --igenomes_base.
  • If --gff is provided as input then it will be converted to a GTF file; if both are provided, the GTF file will be used.
  • If --gene_bed is not provided then it will be generated from the GTF file.
  • If --additional_fasta is provided then the features in this file (e.g. ERCC spike-ins) will be automatically concatenated onto both the reference FASTA file as well as the GTF annotation before building the appropriate indices.

NB: Compressed reference files are also supported by the pipeline i.e. standard files with the .gz extension and index folders with the .tar.gz extension.

If you are using GENCODE reference genome files please specify the --gencode parameter because the format of these files is slightly different to ENSEMBL genome files:

  • The --fc_group_features_type parameter will automatically be set to gene_type as opposed to gene_biotype.
  • If you are running Salmon, the --gencode flag will also be passed to the index building step to overcome parsing issues resulting from the transcript IDs in GENCODE fasta files being separated by vertical pipes (|) instead of spaces (see this issue).
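
A GENCODE-based invocation might therefore look something like the following (the file names are illustrative, and the --fasta/--gtf parameter names are assumptions for illustration):

nextflow run nf-core/rnaseq --input samplesheet.csv --fasta GRCh38.primary_assembly.genome.fa.gz --gtf gencode.annotation.gtf.gz --gencode -profile docker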

Running the pipeline

The typical command for running the pipeline is as follows:

nextflow run nf-core/rnaseq --input samplesheet.csv --genome GRCh37 -profile docker

This will launch the pipeline with the docker configuration profile. See below for more information about profiles.

Note that the pipeline will create the following files in your working directory:

work            # Directory containing the nextflow working files
results         # Finished results (configurable, see below)
.nextflow.log   # Log file from Nextflow
# Other nextflow hidden files, eg. history of pipeline runs and old logs.

Updating the pipeline

When you run the above command, Nextflow automatically pulls the pipeline code from GitHub and stores it as a cached version. When running the pipeline after this, it will always use the cached version if available - even if the pipeline has been updated since. To make sure that you're running the latest version of the pipeline, regularly update the cached version:

nextflow pull nf-core/rnaseq

Reproducibility

It’s a good idea to specify a pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you’ll be running the same version of the pipeline, even if there have been changes to the code since.

First, go to the nf-core/rnaseq releases page and find the latest version number - numeric only (e.g. 1.3.1). Then specify this when running the pipeline with -r (one hyphen) - e.g. -r 1.3.1.
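
For example, pinning that version in the typical command shown earlier:

nextflow run nf-core/rnaseq -r 1.3.1 --input samplesheet.csv --genome GRCh37 -profile docker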

This version number will be logged in reports when you run the pipeline, so that you’ll know what you used when you look back in the future.

Core Nextflow arguments

NB: These options are part of Nextflow and use a single hyphen (pipeline parameters use a double-hyphen).

-profile

Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments.

Several generic profiles are bundled with the pipeline which instruct the pipeline to use software packaged using different methods (Docker, Singularity, Podman, Conda) - see below.

We highly recommend the use of Docker or Singularity containers for full pipeline reproducibility; however, when this is not possible, Conda is also supported.

The pipeline also dynamically loads configurations from https://github.com/nf-core/configs when it runs, making multiple config profiles for various institutional clusters available at run time. For more information and to see if your system is available in these configs please see the nf-core/configs documentation.

Note that multiple profiles can be loaded, for example: -profile test,docker - the order of arguments is important! They are loaded in sequence, so later profiles can overwrite earlier profiles.

If -profile is not specified, the pipeline will run locally and expect all software to be installed and available on the PATH. This is not recommended.

  • docker
    • A generic configuration profile to be used with Docker
    • Pulls software from Docker Hub: nfcore/rnaseq
  • singularity
    • A generic configuration profile to be used with Singularity
    • Pulls software from Docker Hub: nfcore/rnaseq
  • podman
    • A generic configuration profile to be used with Podman
    • Pulls software from Docker Hub: nfcore/rnaseq
  • conda
    • Please only use Conda as a last resort i.e. when it’s not possible to run the pipeline with Docker, Singularity or Podman.
    • A generic configuration profile to be used with Conda
    • Pulls most software from Bioconda
  • test
    • A profile with a complete configuration for automated testing
    • Includes links to test data so needs no other parameters

-resume

Specify this when restarting a pipeline. Nextflow will use cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously.

You can also supply a run name to resume a specific run: -resume [run-name]. Use the nextflow log command to show previous run names.
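
For example (the run name shown is purely illustrative; nextflow log prints the real names):

# Resume the most recent run:
nextflow run nf-core/rnaseq --input samplesheet.csv --genome GRCh37 -profile docker -resume

# List previous runs, then resume a specific one by name:
nextflow log
nextflow run nf-core/rnaseq --input samplesheet.csv --genome GRCh37 -profile docker -resume mad_volta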

-c

Specify the path to a specific config file (this is a core Nextflow command). See the nf-core website documentation for more information.
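
For example (the config file path is illustrative):

nextflow run nf-core/rnaseq --input samplesheet.csv --genome GRCh37 -profile docker -c /path/to/custom.config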

Custom resource requests

Each step in the pipeline has a default set of requirements for number of CPUs, memory and time. For most of the steps in the pipeline, if the job exits with an error code of 143 (exceeded requested resources) it will automatically be resubmitted with higher requests (2 x original, then 3 x original). If it still fails after the third attempt then the pipeline is stopped.

Whilst these default requirements will hopefully work for most people with most data, you may find that you want to customise the compute resources that the pipeline requests. You can do this by creating a custom config file. For example, to give the workflow process star 32GB of memory, you could use the following config:

process {
  withName: star {
    memory = 32.GB
  }
}
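
Save a snippet like the one above to a file (the name below is illustrative) and supply it on the command line with -c:

nextflow run nf-core/rnaseq --input samplesheet.csv --genome GRCh37 -profile docker -c star_resources.config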

See the main Nextflow documentation for more information.

If you are likely to be running nf-core pipelines regularly it may be a good idea to request that your custom config file is uploaded to the nf-core/configs git repository. Before you do this, please test that the config file works with your pipeline of choice using the -c parameter (see definition above). You can then create a pull request to the nf-core/configs repository with the addition of your config file, the associated documentation file (see examples in nf-core/configs/docs), and an amendment to nfcore_custom.config to include your custom profile.

If you have any questions or issues please send us a message on Slack on the #configs channel.

Running in the background

Nextflow handles job submissions and supervises the running jobs. The Nextflow process must run until the pipeline is finished.

The Nextflow -bg flag launches Nextflow in the background, detached from your terminal so that the workflow does not stop if you log out of your session. The logs are saved to a file.
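
For example, a sketch of a background launch using the typical command from above:

nextflow -bg run nf-core/rnaseq --input samplesheet.csv --genome GRCh37 -profile docker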

Alternatively, you can use screen / tmux or a similar tool to create a detached session which you can log back into at a later time. Some HPC setups also allow you to run Nextflow within a cluster job submitted to your job scheduler (from where it submits more jobs).

Nextflow memory requirements

In some cases, the Nextflow Java virtual machines can start to request a large amount of memory. We recommend adding the following line to your environment to limit this (typically in ~/.bashrc or ~/.bash_profile):

NXF_OPTS='-Xms1g -Xmx4g'