nf-core/ampliseq
Amplicon sequencing analysis workflow using DADA2 and QIIME2
Version 2.5.0. The latest stable release is 2.11.0.
Table of Contents
- Running the pipeline
- Core Nextflow arguments
- Custom configuration
- Running in the background
- Nextflow memory requirements
Running the pipeline
Quick start
The typical command for running the pipeline is as follows:
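(A representative sketch; the primer sequences shown are the common 515F/806R 16S primers and, like the paths, are placeholders for your own data.)

```bash
nextflow run nf-core/ampliseq \
    -r 2.3.2 \
    -profile singularity \
    --input "data" \
    --FW_primer GTGYCAGCMGCCGCGGTAA \
    --RV_primer GGACTACNVGGGTWTCTAAT \
    --metadata "data/Metadata.tsv" \
    --outdir "./results"
```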
In this example, `--input` uses the Direct FASTQ input; other options are Samplesheet input and ASV/OTU fasta input. For more details on metadata, see Metadata. For reproducibility, specify the version to run using `-r` (= release, here: 2.3.2). See the nf-core/ampliseq website documentation for more information about pipeline-specific parameters.

It is possible to not provide primer sequences (`--FW_primer` & `--RV_primer`) and skip primer trimming using `--skip_cutadapt`, but this is only for data that indeed does not contain any PCR primers in its sequences. Also, metadata (`--metadata`) isn't required, but it aids downstream analysis.
This will launch the pipeline with the `singularity` configuration profile. See `-profile` below for more information about profiles.
Note that the pipeline will create the following files in your working directory:
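(Names below follow standard nf-core conventions; `<OUTDIR>` stands for the value of `--outdir`.)

```console
work                # Directory containing the Nextflow working files
<OUTDIR>            # Finished results in the specified location (defined with --outdir)
.nextflow_log       # Log file from Nextflow
# Other Nextflow hidden files, e.g. history of pipeline runs and old logs.
```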
NB: If the data originates from multiple sequencing runs, the error profile of each of those sequencing runs needs to be considered separately. Using the `run` column in the samplesheet input, or adding `--multiple_sequencing_runs` for Direct FASTQ input, will separate certain processes by sequencing run. Please see the following example:
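(A sketch for Direct FASTQ input from multiple runs; primers and paths are placeholders.)

```bash
nextflow run nf-core/ampliseq \
    -profile singularity \
    --input "data" \
    --multiple_sequencing_runs \
    --FW_primer GTGYCAGCMGCCGCGGTAA \
    --RV_primer GGACTACNVGGGTWTCTAAT \
    --outdir "./results"
```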
Setting parameters in a file
Pipeline settings can be provided in a `yaml` or `json` file via `-params-file <file>` instead of using command line parameters. Do not use `-c <file>` to specify parameters, as this will result in errors. The above pipeline run specified with a params file in yaml format:
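(Sketch; profile as in the quick start above.)

```bash
nextflow run nf-core/ampliseq -profile singularity -params-file params.yaml
```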
with `params.yaml` containing:
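(Values are placeholders mirroring the quick start.)

```yaml
input: "data"
FW_primer: "GTGYCAGCMGCCGCGGTAA"
RV_primer: "GGACTACNVGGGTWTCTAAT"
metadata: "data/Metadata.tsv"
outdir: "./results"
```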
Input specifications
The input data can be passed to nf-core/ampliseq in three possible ways using the `--input` parameter: a folder containing zipped FastQ files, a tab-separated samplesheet, or a fasta file to be taxonomically classified. Optionally, a metadata sheet can be specified for downstream analysis.
Direct FASTQ input
The easiest way is to directly specify the path to the folder that contains your input FASTQ files.
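For example (the path is a placeholder for your own folder):

```bash
--input "path/to/data/"
```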
File names must follow a specific pattern; the default is `/*_R{1,2}_001.fastq.gz`, but this can be adjusted with `--extension`.
For example, the following files in folder `data` would be processed as `sample1` and `sample2`:
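(Hypothetical file names matching the default pattern.)

```console
data
  ├── sample1_L001_R1_001.fastq.gz
  ├── sample1_L001_R2_001.fastq.gz
  ├── sample2_L001_R1_001.fastq.gz
  └── sample2_L001_R2_001.fastq.gz
```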
All sequencing data should originate from one sequencing run, because processing relies on run-specific error models that are unreliable when data from several sequencing runs are mixed. Sequencing data originating from multiple sequencing runs additionally requires the parameter `--multiple_sequencing_runs` and a specific folder structure, for example:
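(File and folder names below are illustrative.)

```console
data
  ├── run1
  │   ├── sample1_L001_R1_001.fastq.gz
  │   ├── sample1_L001_R2_001.fastq.gz
  │   ├── sample2_L001_R1_001.fastq.gz
  │   └── sample2_L001_R2_001.fastq.gz
  └── run2
      ├── sample3_L001_R1_001.fastq.gz
      ├── sample3_L001_R2_001.fastq.gz
      ├── sample4_L001_R1_001.fastq.gz
      └── sample4_L001_R2_001.fastq.gz
```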
Here `sample1` and `sample2` were sequenced in one sequencing run and `sample3` and `sample4` in another sequencing run.
Please note the following additional requirements:

- File names must be unique
- Valid file extensions: `.fastq.gz`, `.fq.gz` (files must be compressed)
- The path must be enclosed in quotes
- `--extension` must have at least one `*` wildcard character
- When using the pipeline with paired-end data, the `--extension` must use `{1,2}` (or similar) notation to specify read pairs
- To run single-end data you must additionally specify `--single_end`, and `--extension` may not include curly brackets `{}`
- Sample identifiers are extracted from file names, i.e. the string before the first underscore `_`; these must be unique (also across sequencing runs)
- If your data is scattered, produce a sample sheet
Samplesheet input
The sample sheet file is an alternative way to provide input reads. It must be a tab-separated file ending with `.tsv` that has two to four columns with the following headers:
| Column | Necessity | Description |
| --- | --- | --- |
| sampleID | required | Unique sample identifiers |
| forwardReads | required | Paths to (forward) reads zipped FastQ files |
| reverseReads | optional | Paths to reverse reads zipped FastQ files, required if the data is paired-end |
| run | optional | If the data was produced by multiple sequencing runs, any string |
For example, the samplesheet may contain:
| sampleID | forwardReads | reverseReads | run |
| --- | --- | --- | --- |
| sample1 | ./data/S1_R1_001.fastq.gz | ./data/S1_R2_001.fastq.gz | A |
| sample2 | ./data/S2_fw.fastq.gz | ./data/S2_rv.fastq.gz | A |
| sample3 | ./S4x.fastq.gz | ./S4y.fastq.gz | B |
| sample4 | ./a.fastq.gz | ./b.fastq.gz | B |
Please note the following requirements:
- 2 to 4 tab-separated columns
- Valid file extension: `.tsv`
- Must contain the headers `sampleID` and `forwardReads`
- May contain the headers `reverseReads` and `run`
- Sample IDs must be unique
- Sample IDs must not contain a dot `.`
- Sample IDs may not start with a number
- FastQ files must be compressed (`.fastq.gz`, `.fq.gz`)
- Within one samplesheet, only one type of raw data should be specified (same amplicon & sequencing method)
An example samplesheet has been provided with the pipeline.
Please note: all characters other than letters, numbers and underscores in sample IDs will be converted to dots (`.`). Avoid such conversions, because they may prevent summary files from merging correctly and will fail to match the metadata (which can, however, be adjusted).
ASV/OTU fasta input
When pointing at a file ending with `.fasta`, `.fna` or `.fa`, the ASV/OTU sequences it contains will be taxonomically classified. Most steps of the pipeline will be skipped, but ITSx, Barrnap and length filtering can be applied before taxonomic classification. The sequence header line may contain a description, which will be kept as part of the sequence name; tabs, however, will be changed into spaces.
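For example (hypothetical file name):

```bash
--input "amplicon_sequences.fasta"
```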
Please note the following requirements:

- Valid file extensions: `.fasta`, `.fna` or `.fa`
Metadata
Metadata is optional, but for performing downstream analysis such as barplots, diversity indices or differential abundance testing, a metadata file is essential.
For example:
| ID | condition |
| --- | --- |
| sample1 | control |
| sample2 | treatment |
| sample3 | control |
| sample4 | treatment |
Please note the following requirements:
- The path must be enclosed in quotes
- The metadata file has to follow the QIIME2 specifications (https://docs.qiime2.org/2021.2/tutorials/metadata/)
The metadata file must be tab-separated with a header line. The first column in the tab-separated metadata file is the sample identifier column (required header: ID) and defines the sample or feature IDs associated with the dataset. In addition to the sample identifier column, metadata files are required to have additional metadata columns.
Sample identifiers should be 36 characters or less and contain only ASCII alphanumeric characters ([a-z], [A-Z], [0-9]) or the dash (-) character. For downstream analysis, by default all numeric columns, blanks and NA values are removed, and only columns that contain multiple different values, but not all unique ones, are selected.
The columns to be assessed can be specified with `--metadata_category`. If `--metadata_category` isn't specified, then all columns that fit the specification are automatically chosen.
Updating the pipeline
When you run the above command, Nextflow automatically pulls the pipeline code from GitHub and stores it as a cached version. On subsequent runs it will always use the cached version if available, even if the pipeline has been updated since. To make sure that you're running the latest version, regularly update the cached version of the pipeline:
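(Using Nextflow's built-in `pull` command.)

```bash
nextflow pull nf-core/ampliseq
```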
Reproducibility
It is a good idea to specify a pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you’ll be running the same version of the pipeline, even if there have been changes to the code since.
First, go to the nf-core/ampliseq releases page and find the latest pipeline version - numeric only (e.g. `2.4.1`). Then specify this when running the pipeline with `-r` (one hyphen) - e.g. `-r 2.4.1`. Of course, you can switch to another version by changing the number after the `-r` flag.
This version number will be logged in reports when you run the pipeline, so that you’ll know what you used when you look back in the future. For example, at the bottom of the MultiQC reports.
Core Nextflow arguments
NB: These options are part of Nextflow and use a single hyphen (pipeline parameters use a double-hyphen).
-profile
Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments.
Several generic profiles are bundled with the pipeline which instruct the pipeline to use software packaged using different methods (Docker, Singularity, Podman, Shifter, Charliecloud, Conda) - see below.
We highly recommend the use of Docker or Singularity containers for full pipeline reproducibility, however when this is not possible, Conda is also supported.
The pipeline also dynamically loads configurations from https://github.com/nf-core/configs when it runs, making multiple config profiles for various institutional clusters available at run time. For more information and to see if your system is available in these configs please see the nf-core/configs documentation.
Note that multiple profiles can be loaded, for example: `-profile test,docker` - the order of arguments is important! They are loaded in sequence, so later profiles can overwrite earlier profiles.

If `-profile` is not specified, the pipeline will run locally and expect all software to be installed and available on the `PATH`. This is not recommended, since it can lead to different results on different machines depending on the computer environment.
- `test`: A profile with a complete configuration for automated testing. Includes links to test data, so needs no other parameters.
- `docker`: A generic configuration profile to be used with Docker
- `singularity`: A generic configuration profile to be used with Singularity
- `podman`: A generic configuration profile to be used with Podman
- `shifter`: A generic configuration profile to be used with Shifter
- `charliecloud`: A generic configuration profile to be used with Charliecloud
- `conda`: A generic configuration profile to be used with Conda. Please only use Conda as a last resort, i.e. when it's not possible to run the pipeline with Docker, Singularity, Podman, Shifter or Charliecloud.
-resume
Specify this when restarting a pipeline. Nextflow will use cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously. For input to be considered the same, not only the names must be identical but the files’ contents as well. For more info about this parameter, see this blog post.
You can also supply a run name to resume a specific run: `-resume [run-name]`. Use the `nextflow log` command to show previous run names.
-c
Specify the path to a specific config file (this is a core Nextflow command). See the nf-core website documentation for more information.
Custom configuration
Resource requests
Whilst the default requirements set within the pipeline will hopefully work for most people and with most input data, you may find that you want to customise the compute resources that the pipeline requests. Each step in the pipeline has a default set of requirements for number of CPUs, memory and time. For most of the steps in the pipeline, if the job exits with any of the error codes specified here it will automatically be resubmitted with higher requests (2 x original, then 3 x original). If it still fails after the third attempt then the pipeline execution is stopped.
For example, if the nf-core/rnaseq pipeline is failing after multiple re-submissions of the `STAR_ALIGN` process due to an exit code of `137`, this would indicate an out-of-memory issue.
For beginners
As a first step to bypass this error, you could try to increase the amount of CPUs, memory, and time for the whole pipeline via the parameters `--max_cpus`, `--max_memory`, and `--max_time`. Based on the error above, you would have to increase the amount of memory. To find the default value of `--max_memory`, go to the parameter documentation of rnaseq and scroll down to the show hidden parameter button; in this case it is 128GB. You can then try to run your pipeline again with `--max_memory 200GB -resume` to skip all processes that were already calculated successfully. If you cannot increase the resources for the complete pipeline, you can try to adapt the resources for a single process as described below.
Advanced option on process level
To bypass this error you would need to find exactly which resources are set by the `STAR_ALIGN` process. The quickest way is to search for `process STAR_ALIGN` in the nf-core/rnaseq GitHub repo.

We have standardised the structure of Nextflow DSL2 pipelines such that all module files will be present in the `modules/` directory and so, based on the search results, the file we want is `modules/nf-core/star/align/main.nf`.

If you click on the link to that file you will notice that there is a `label` directive at the top of the module that is set to `label process_high`.

The Nextflow `label` directive allows us to organise workflow processes in separate groups which can be referenced in a configuration file to select and configure a subset of processes with similar computing requirements.

The default values for the `process_high` label are set in the pipeline's `base.config`; in this case, its memory is defined as 72GB.
Providing you haven't set any other standard nf-core parameters to cap the maximum resources used by the pipeline, we can try to bypass the `STAR_ALIGN` process failure by creating a custom config file that sets at least 72GB of memory, in this case increased to 100GB. The custom config below can then be provided to the pipeline via the `-c` parameter as highlighted in previous sections.
NB: We specify the full process name, i.e. `NFCORE_RNASEQ:RNASEQ:ALIGN_STAR:STAR_ALIGN`, in the config file because this takes priority over the short name (`STAR_ALIGN`) and allows existing configuration using the full process name to be correctly overridden.

If you get a warning suggesting that the process selector isn't recognised, check that the process name has been specified correctly.
Updating containers (advanced users)
The Nextflow DSL2 implementation of this pipeline uses one container per process, which makes it much easier to maintain and update software dependencies. If for some reason you need to use a different version of a particular tool with the pipeline, then you just need to identify the `process` name and override the Nextflow `container` definition for that process using the `withName` declaration. For example, in the nf-core/viralrecon pipeline a tool called Pangolin has been used during the COVID-19 pandemic to assign lineages to SARS-CoV-2 genome sequenced samples. Given that the lineage assignments change quite frequently, it doesn't make sense to re-release nf-core/viralrecon every time a new version of Pangolin has been released. However, you can override the default container used by the pipeline by creating a custom config file and passing it as a command-line argument via `-c custom.config`.
1. Check the default version used by the pipeline in the module file for Pangolin
2. Find the latest version of the Biocontainer available on Quay.io
3. Create the custom config accordingly, for Docker, Singularity or Conda (see the sketches below)
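Sketches of such configs, assuming the version found on Quay.io was the hypothetical `3.0.5--pyhdfd78af_0` (substitute the version you looked up):

For Docker:

```nextflow
process {
    withName: PANGOLIN {
        container = 'quay.io/biocontainers/pangolin:3.0.5--pyhdfd78af_0'
    }
}
```

For Singularity:

```nextflow
process {
    withName: PANGOLIN {
        container = 'https://depot.galaxyproject.org/singularity/pangolin:3.0.5--pyhdfd78af_0'
    }
}
```

For Conda:

```nextflow
process {
    withName: PANGOLIN {
        conda = 'bioconda::pangolin=3.0.5'
    }
}
```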
NB: If you wish to periodically update individual tool-specific results (e.g. Pangolin) generated by the pipeline, then you must ensure to keep the `work/` directory, otherwise the `-resume` ability of the pipeline will be compromised and it will restart from scratch.
nf-core/configs
In most cases, you will only need to create a custom config as a one-off, but if you and others within your organisation are likely to be running nf-core pipelines regularly with the same settings, it may be a good idea to request that your custom config file is uploaded to the `nf-core/configs` git repository. Before you do this, please test that the config file works with your pipeline of choice using the `-c` parameter. You can then create a pull request to the `nf-core/configs` repository with the addition of your config file, associated documentation file (see examples in `nf-core/configs/docs`), and amending `nfcore_custom.config` to include your custom profile.
See the main Nextflow documentation for more information about creating your own configuration files.
If you have any questions or issues please send us a message on Slack on the `#configs` channel.
Azure Resource Requests
To be used with the `azurebatch` profile by specifying `-profile azurebatch`. We recommend providing a compute `params.vm_type` of `Standard_D16_v3` VMs by default, but these options can be changed if required.

Note that the choice of VM size depends on your quota and the overall workload during the analysis. For a thorough list, please refer to the Azure documentation on sizes for virtual machines.
Running in the background
Nextflow handles job submissions and supervises the running jobs. The Nextflow process must run until the pipeline is finished.
The Nextflow `-bg` flag launches Nextflow in the background, detached from your terminal so that the workflow does not stop if you log out of your session. The logs are saved to a file.
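A minimal sketch (all parameters are placeholders; `-bg` is the relevant flag):

```bash
nextflow run nf-core/ampliseq -profile singularity --input "data" --outdir "./results" -bg
```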
Alternatively, you can use `screen` / `tmux` or a similar tool to create a detached session which you can log back into at a later time. Some HPC setups also allow you to run Nextflow within a cluster job submitted to your job scheduler (from where it submits more jobs).
Nextflow memory requirements
In some cases, the Nextflow Java virtual machines can start to request a large amount of memory.
We recommend adding the following line to your environment to limit this (typically in `~/.bashrc` or `~/.bash_profile`):