nf-core/mag
Assembly and binning of metagenomes
Define where the pipeline should find input data and save output data.
CSV samplesheet file containing information about the samples in the experiment.
string
^\S+\.csv$
Use this to specify the location of your input FastQ files and their associated metadata. You can also use the CSV file to assign different groups or to include long reads for hybrid assembly with metaSPAdes. The CSV file must contain at least two columns (sample, short_reads_1) and at most the headers sample,run,group,short_reads_1,short_reads_2,long_reads. See the usage docs.
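As an illustration only, a samplesheet using the full header could look like this (sample names, run/group values and file paths below are hypothetical; see the usage docs for the exact requirements):
sample,run,group,short_reads_1,short_reads_2,long_reads
sample1,0,0,data/sample1_R1.fastq.gz,data/sample1_R2.fastq.gz,
sample2,0,0,data/sample2_R1.fastq.gz,data/sample2_R2.fastq.gz,data/sample2_nanopore.fastq.gz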
Specifies that the input is single-end reads.
boolean
By default, the pipeline expects paired-end data. If you have single-end data, you need to specify --single_end on the command line when you launch the pipeline. A normal glob pattern, enclosed in quotation marks, can then be used for --input. For example:
--single_end --input '*.fastq'
It is not possible to run a mixture of single-end and paired-end files in one run.
Additional input CSV samplesheet containing information about pre-computed assemblies. When set, both read pre-processing and assembly are skipped and the pipeline begins at the binning stage.
string
^\S+\.csv$
If you have pre-computed assemblies from another source, it is possible to jump straight to the binning stage of the pipeline by supplying these assemblies in a CSV file. This CSV file must have at minimum three columns and the following header: id,group,assembler,fasta (group is only required when using --coassemble_group). Short reads must still be supplied to --input in CSV format. See the usage docs for further details.
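As a sketch (the IDs, assembler labels and paths below are purely illustrative), such a sheet might look like:
id,group,assembler,fasta
sample1,0,MEGAHIT,assemblies/MEGAHIT-sample1.contigs.fa.gz
sample1,0,SPAdes,assemblies/SPAdes-sample1.scaffolds.fa.gz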
The output directory where the results will be saved. You have to use absolute paths to storage on Cloud infrastructure.
string
Email address for completion summary.
string
^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$
Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (~/.nextflow/config), then you don't need to specify this on the command line for every run.
MultiQC report title. Printed as page header, used for filename if not otherwise specified.
string
Reference genome related files and options required for the workflow.
Do not load the iGenomes reference config.
boolean
Do not load igenomes.config when running the pipeline. You may choose this option if you observe clashes between custom parameters and those supplied in igenomes.config.
The base path to the igenomes reference files
string
s3://ngi-igenomes/igenomes/
Parameters used to describe centralised config profiles. These should not be edited.
Git commit id for Institutional configs.
string
master
Base directory for Institutional configs.
string
https://raw.githubusercontent.com/nf-core/configs/master
If you're running offline, Nextflow will not be able to fetch the institutional config files from the internet. If you don't need them, then this is not a problem. If you do need them, you should download the files from the repo and tell Nextflow where to find them with this parameter.
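For example, one possible offline setup (the --custom_config_base parameter name is assumed here, following the standard nf-core convention) is to clone the configs repository in advance and point the pipeline at the local copy:
git clone https://github.com/nf-core/configs.git /path/to/nf-core-configs
nextflow run nf-core/mag -profile <your_institution> --custom_config_base /path/to/nf-core-configs --input samplesheet.csv --outdir results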
Institutional config name.
string
Institutional config description.
string
Institutional config contact information.
string
Institutional config URL link.
string
Less common options for the pipeline, typically set in a config file.
Display version and exit.
boolean
Method used to save pipeline results to output directory.
string
The Nextflow publishDir option specifies which intermediate files should be saved to the output directory. This option tells the pipeline what method should be used to move these files. See the Nextflow docs for details.
Use monochrome log output (do not use coloured logs).
boolean
Email address for completion summary, only when pipeline fails.
string
^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$
An email address to send a summary email to when the pipeline is completed - ONLY sent if the pipeline does not exit successfully.
Send plain-text email instead of HTML.
boolean
File size limit when attaching MultiQC reports to summary emails.
string
25.MB
^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$
Incoming hook URL for messaging service
string
Incoming hook URL for messaging service. Currently, MS Teams and Slack are supported.
Custom config file to supply to MultiQC.
string
Custom logo file to supply to MultiQC. File name must also be set in the MultiQC config file
string
Custom MultiQC yaml file containing HTML including a methods description.
string
Whether to validate parameters against the schema at runtime.
boolean
true
Base URL or local path to location of pipeline test dataset files
string
https://raw.githubusercontent.com/nf-core/test-datasets/
Use these parameters to enable reproducible results from the individual assembly and binning tools.
Fix number of CPUs for MEGAHIT to 1. Not increased with retries.
boolean
MEGAHIT only generates reproducible results when run single-threaded.
When using this parameter, do not change the number of CPUs for the megahit process with a custom config file; this would result in an error.
Default: the number of CPUs is specified in the base.config file and increased with each retry.
Fix number of CPUs used by SPAdes. Not increased with retries.
integer
-1
SPAdes is designed to be deterministic for a given number of threads. To generate reproducible results, fix the number of CPUs using this parameter.
When using this parameter, do not change the number of CPUs for the spades process with a custom config file; this would result in an error.
Default: -1 (the number of CPUs is specified in the base.config or in a custom config file, and increased with each retry).
Fix number of CPUs used by SPAdes hybrid. Not increased with retries.
integer
-1
SPAdes is designed to be deterministic for a given number of threads. To generate reproducible results, fix the number of CPUs using this parameter.
When using this parameter, do not change the number of CPUs for the spadeshybrid process with a custom config file; this would result in an error.
Default: -1 (the number of CPUs is specified in the base.config or in a custom config file, and increased with each retry).
RNG seed for MetaBAT2.
integer
1
MetaBAT2 is run by default with a fixed seed within this pipeline, thus producing reproducible results. You can also set it to any other positive integer to ensure reproducibility. Set the parameter to 0 to use a random seed.
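Taken together, a run aiming for reproducible assembly and binning might combine these options, for example (the CPU count is arbitrary, and the parameter names --megahit_fix_cpu_1, --spades_fix_cpus and --metabat_rng_seed are assumed from the nf-core/mag parameter schema; check them against your pipeline version):
nextflow run nf-core/mag -profile docker --input samplesheet.csv --outdir results --megahit_fix_cpu_1 --spades_fix_cpus 8 --metabat_rng_seed 1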
Specify which adapter clipping tool to use.
string
Specify to save the resulting clipped FASTQ files to --outdir.
boolean
The minimum length reads must have to be retained for downstream analysis.
integer
15
Minimum phred quality value of a base to be qualified in fastp.
integer
15
The mean quality requirement used for per read sliding window cutting by fastp.
integer
15
Save reads that fail fastp filtering in a separate file. Not used downstream.
boolean
The minimum base quality for low-quality base trimming by AdapterRemoval.
integer
2
Turn on quality trimming by consecutive stretch of low quality bases, rather than by window.
boolean
Default base-quality trimming trims by 'windows', as in fastp. Specifying this flag will instead trim via contiguous stretches of low-quality bases (Ns).
Replaces --trimwindows 4 with --trimqualities in AdapterRemoval.
Forward read adapter to be trimmed by AdapterRemoval.
string
AGATCGGAAGAGCACACGTCTGAACTCCAGTCACNNNNNNATCTCGTATGCCGTCTTCTGCTTG
Reverse read adapter to be trimmed by AdapterRemoval for paired end data.
string
AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGTAGATCTCGGTGGTCGCCGTATCATT
Name of iGenomes reference for host contamination removal.
string
This parameter is mutually exclusive with --host_fasta. Host read removal is done with Bowtie2.
Both the iGenomes FASTA file and the corresponding, already pre-built Bowtie 2 index files will be used.
Fasta reference file for host contamination removal.
string
This parameter is mutually exclusive with --host_genome. The reference can be masked. Host read removal is done with Bowtie2.
Bowtie2 index directory corresponding to the --host_fasta reference file for host contamination removal.
string
This parameter must be used in combination with --host_fasta, and should be a directory containing files from the output of bowtie2-build, i.e. files ending in .bt2.
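For illustration, such an index directory could be generated beforehand with bowtie2-build (file names below are hypothetical) and then supplied to this parameter together with --host_fasta:
mkdir host_index
bowtie2-build host.fasta host_index/host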
Use the --very-sensitive setting instead of the --sensitive setting for Bowtie 2 to map reads against the host genome.
boolean
Save the read IDs of removed host reads.
boolean
Specify to save input FASTQ files with host reads removed to --outdir.
boolean
Keep reads similar to the Illumina internal standard PhiX genome.
boolean
Genome reference used to remove Illumina PhiX contaminant reads.
string
${baseDir}/assets/data/GCA_002596845.1_ASM259684v1_genomic.fna.gz
Skip read preprocessing using fastp or AdapterRemoval.
boolean
Specify to save input FASTQ files with phiX reads removed to --outdir.
boolean
Run BBnorm to normalize sequence depth.
boolean
Set BBnorm target maximum depth to this number.
integer
100
Set BBnorm minimum depth to this number.
integer
5
Save normalized read files to output directory.
boolean
Skip removing adapter sequences from long reads.
boolean
Discard any read which is shorter than this value.
integer
1000
Keep this percent of bases.
integer
90
The higher this value, the more important read length is when choosing the best reads.
integer
10
The default value focuses on length instead of quality to improve assembly size.
In order to assign equal weights to read lengths and read qualities set this parameter to 1.
This might be useful, for example, to benefit indirectly from the removal of short host reads (causing lower qualities for reads not overlapping filtered short reads).
Keep reads similar to the ONT internal standard Escherichia virus Lambda genome.
boolean
Genome reference used to remove ONT Lambda contaminant reads.
string
${baseDir}/assets/data/GCA_000840245.1_ViralProj14204_genomic.fna.gz
Specify to save input FASTQ files with lambda reads removed to --outdir.
boolean
Specify to save the resulting clipped FASTQ files to --outdir.
boolean
Specify to save the resulting length filtered FASTQ files to --outdir.
boolean
Specify which long read adapter trimming tool to use.
string
Taxonomic classification is disabled by default. You have to specify one of the options below to activate it.
Database for taxonomic binning with centrifuge.
string
Local directory containing *.cf files, or a URL or local path to a downloaded compressed tar archive of a Centrifuge database. E.g. ftp://ftp.ccb.jhu.edu/pub/infphilo/centrifuge/data/p_compressed+h+v.tar.gz.
Database for taxonomic binning with kraken2.
string
Path to a local directory, archive file, or a URL to a compressed tar archive that contains at least the three files hash.k2d, opts.k2d and taxo.k2d. E.g. ftp://ftp.ccb.jhu.edu/pub/data/kraken2_dbs/minikraken_8GB_202003.tgz.
Database for taxonomic binning with krona
string
Path to the taxonomy.tab file for Krona, instead of downloading the default file. Point at the .tab file.
Skip creating a krona plot for taxonomic binning.
boolean
Database for taxonomic classification of metagenome assembled genomes. Can be either a zipped file or a directory containing the extracted output of such.
string
E.g. https://tbb.bio.uu.nl/bastiaan/CAT_prepare/CAT_prepare_20210107.tar.gz. This parameter is mutually exclusive with --cat_db_generate. The file needs to contain the folders *taxonomy* and *database*, which hold the respective files.
Generate CAT database.
boolean
Download the taxonomy files from NCBI taxonomy and the nr database, and generate the CAT database. This parameter is mutually exclusive with --cat_db. Useful to build a CAT database with the same DIAMOND version as used for running CAT classification, avoiding compatibility problems.
Save the CAT database generated when specified by --cat_db_generate.
boolean
Useful to allow reproducibility, as old versions of prebuilt CAT databases do not always remain accessible and the underlying NCBI taxonomy and nr databases change.
Only return official taxonomic ranks (Kingdom, Phylum, etc.) when running CAT.
boolean
Skip running GTDB-Tk classification, as well as the automatic download of the database.
boolean
Specify the location of a GTDB-Tk database. Can be either an uncompressed directory or a .tar.gz archive. If not specified, it will be downloaded for you when GTDB-Tk or binning QC is not skipped.
string
https://data.gtdb.ecogenomic.org/releases/release220/220.0/auxillary_files/gtdbtk_package/full_package/gtdbtk_r220_data.tar.gz
Specify the location of a GTDB-Tk mash database. If missing, GTDB-Tk will skip the ani_screening step.
string
Min. bin completeness (in %) required to apply GTDB-tk classification.
number
50
Completeness assessed with BUSCO analysis (100% - %Missing). Must be greater than 0 (min. 0.01) to avoid GTDB-Tk errors. If too low, GTDB-Tk classification results can be impaired due to too few marker genes!
Max. bin contamination (in %) allowed to apply GTDB-tk classification.
number
10
Contamination approximated based on BUSCO analysis (%Complete and duplicated). If too high, GTDB-tk classification results can be impaired due to contamination!
Min. fraction of AA (in %) in the MSA for bins to be kept.
number
10
Min. alignment fraction to consider closest genome.
number
0.65
Number of CPUs used by pplacer, which is run by GTDB-Tk.
integer
1
A low number of CPUs helps to reduce the memory required/reported by GTDB-Tk. See also the GTDB-Tk documentation.
Speed up pplacer step of GTDB-Tk by loading to memory.
boolean
Will be faster than writing to disk (the default setting), however at the expense of much larger memory (RAM) requirements for GTDBTK/CLASSIFY.
Database for virus classification with geNomad
string
Must be a directory containing the uncompressed contents from https://zenodo.org/doi/10.5281/zenodo.6994741 (nf-core/mag tested with v1.1)
Co-assemble samples within one group, instead of assembling each sample separately.
boolean
Additional custom options for SPAdes and SPAdesHybrid. Do not specify --meta as this will be added for you!
string
An example is adjusting k-mers ("-k 21,33,55,77") or adding advanced options. But not --meta, -t, -m, -o or --out-prefix, because these are already in use. Must be used like this: --spades_options "-k 21,33,55,77"
Additional custom options for MEGAHIT.
string
An example is adjusting presets (e.g. "--presets meta-large"), k-mers (e.g. "-k 21,33,55,77") or adding other advanced options. For example, increase the minimum k-mer in the event of an error message such as "Too many vertices in the unitig graph, you may increase the kmer size to remove tons of erroneous kmers." in the MEGAHIT log file. But not --threads, --memory, -o or input read files, because these are already in use. Must be used like this: --megahit_options "--presets meta-large"
Skip Illumina-only SPAdes assembly.
boolean
Skip SPAdes hybrid assembly.
boolean
Skip MEGAHIT assembly.
boolean
Skip metaQUAST.
boolean
Skip Prodigal gene prediction
boolean
Skip Prokka genome annotation.
boolean
Skip MetaEuk gene prediction and annotation
boolean
A string containing the name of one of the databases listed in the mmseqs2 documentation. This database will be downloaded and formatted for eukaryotic genome annotation. Incompatible with --metaeuk_db.
string
mmseqs2 lists a large number of databases, not all of which are appropriate for use with MetaEuk. MetaEuk requires protein inputs, so you should select one of the Aminoacid or Profile options.
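For example, assuming UniProtKB/Swiss-Prot is listed as an Aminoacid database by your mmseqs2 version, it could be requested like this:
nextflow run nf-core/mag -profile docker --input samplesheet.csv --outdir results --metaeuk_mmseqs_db "UniProtKB/Swiss-Prot"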
Path to either a local fasta file of protein sequences, or to a directory containing an mmseqs2-formatted database, for annotation of eukaryotic genomes.
string
One option would be the databases from the MetaEuk publication (https://wwwuser.gwdg.de/~compbiol/metaeuk/), however it should be noted that these are focused on marine eukaryotes.
Save the downloaded mmseqs2 database specified in --metaeuk_mmseqs_db.
boolean
Run virus identification.
boolean
Minimum geNomad score for a sequence to be considered viral
number
0.7
Number of groups that geNomad's MMseqs2 database should be split into (reduces memory requirements).
integer
1
Defines mapping strategy to compute co-abundances for binning, i.e. which samples will be mapped against the assembly.
string
Available: all, group or own. Note that own cannot be specified in combination with --coassemble_group.
Note that specifying all without additionally specifying --coassemble_group results in n^2 mapping processes for each assembly method, where n is the number of samples.
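As a rough illustration of this scaling: with 10 samples assembled individually, all results in 10 × 10 = 100 mapping jobs per assembler, whereas restricting mapping could look like:
nextflow run nf-core/mag -profile docker --input samplesheet.csv --outdir results --binning_map_mode group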
Skip metagenome binning entirely
boolean
Skip MetaBAT2 Binning
boolean
Skip MaxBin2 Binning
boolean
Skip CONCOCT Binning
boolean
Minimum contig size to be considered for binning and for bin quality check.
integer
1500
For forwarding into downstream analysis, i.e. QUAST and BUSCO, and reporting.
Minimal length of contigs that are not part of any bin but are treated as an individual genome.
integer
1000000
Contigs that do not fulfill the thresholds of --min_length_unbinned_contigs and --max_unbinned_contigs are pooled for downstream analysis and reporting, except contigs that also do not fulfill --min_contig_size, which are not considered further.
Maximal number of contigs that are not part of any bin but are treated as an individual genome.
integer
100
Contigs that do not fulfill the thresholds of --min_length_unbinned_contigs and --max_unbinned_contigs are pooled for downstream analysis and reporting, except contigs that also do not fulfill --min_contig_size, which are not considered further.
Bowtie2 alignment mode
string
Bowtie2 alignment mode options, for example: --very-fast, --very-sensitive-local -N 1, ... Must be used like this: --bowtie2_mode "--very-sensitive"
Save the output of mapping raw reads back to assembled contigs
boolean
Specify to save the BAM and BAI files generated when mapping input reads back to the assembled contigs (performed in preparation for binning and contig depth estimations).
Enable domain-level (prokaryote or eukaryote) classification of bins using Tiara. Processes which are domain-specific will then only receive bins matching the domain requirement.
boolean
Enable this if it is likely that your metagenome samples contain a mixture of eukaryotic and prokaryotic genomes. This will ensure that prokaryote-only steps only receive putatively prokaryotic genomes, and vice versa. Additionally, this may improve the performance of DAS Tool by ensuring it only receives prokaryotic genomes.
Specify which tool to use for domain classification of bins. Currently only 'tiara' is implemented.
string
tiara
Minimum contig length for Tiara to use for domain classification. For accurate classification, should be longer than 3000 bp.
integer
3000
Disable bin QC with BUSCO or CheckM.
boolean
Specify which tool for bin quality-control validation to use.
string
Download URL for BUSCO lineage dataset, or path to a tar.gz archive, or local directory containing already downloaded and unpacked lineage datasets.
string
E.g. https://busco-data.ezlab.org/v5/data/lineages/bacteria_odb10.2024-01-08.tar.gz or '/path/to/buscodb' (files still need to be unpacked manually). Available databases are listed here: https://busco-data.ezlab.org/v5/data/lineages/.
Run BUSCO with automated lineage selection, but ignoring eukaryotes (saves runtime).
boolean
Save the used BUSCO lineage datasets provided via --busco_db.
boolean
Useful to allow reproducibility, as BUSCO datasets are frequently updated and old versions do not always remain accessible.
Enable clean-up of temporary files created during BUSCO runs.
boolean
By default, BUSCO creates a large number of intermediate files every run. This may cause problems on some clusters which have file number limits in place, particularly with large numbers of bins. Enabling this option cleans these files, reducing the total file count of the work directory.
URL pointing to the CheckM database for auto download, if a local path is not supplied.
string
https://zenodo.org/records/7401545/files/checkm_data_2015_01_16.tar.gz
You can use this parameter to point to an online copy of the CheckM database TAR archive that the pipeline will use for auto download if a local path is not supplied to --checkm_db.
Path to local folder containing already downloaded and uncompressed CheckM database.
string
The pipeline can also download this for you if it is not specified, and you can save the resulting directory into your output directory by specifying --save_checkm_data. You should then move this directory somewhere else on your machine (and supply it back to the pipeline in future runs with --checkm_db).
Save the CheckM reference files that are downloaded when the --checkm_db parameter is not used.
boolean
If specified, the directories and files decompressed from the tar.gz file downloaded from the CheckM FTP server will be stored in your output directory alongside your CheckM results.
Turn on bin refinement using DAS Tool.
boolean
Specify single-copy gene score threshold for bin refinement.
number
0.5
Score threshold for the single-copy gene selection algorithm to keep selecting bins, with a value ranging from 0 to 1.
For a description of the scoring algorithm, see: Sieber, Christian M. K., et al. 2018. Nature Microbiology 3 (7): 836–43. https://doi.org/10.1038/s41564-018-0171-1.
Modifies the DAS Tool parameter --score_threshold.
Specify which binning output is sent for downstream annotation, taxonomic classification, bin quality control etc.
string
raw_bins_only: only bins (and unbinned contigs) from the binners.
refined_bins_only: only bins (and unbinned contigs) from the bin refinement step.
~~both: bins and unbinned contigs from both the binning and bin refinement steps.~~ The both option is disabled in v2.4 due to a bug that will be fixed in a later release.
Turn on GUNC genome chimerism checks
boolean
Specify a path to a pre-downloaded GUNC dmnd database file
string
Specify which database to auto-download if not supplying your own.
string
Save the GUNC reference files that are downloaded when the --gunc_db parameter is not used.
boolean
If specified, the corresponding DIAMOND file downloaded from the GUNC server will be stored in your output directory alongside your GUNC results.
Performs ancient DNA assembly validation and contig consensus sequence recalling.
Turn on/off the ancient DNA subworkflow.
boolean
PyDamage accuracy threshold
number
0.5
Deactivate damage correction of ancient contigs using variant and consensus calling.
boolean
Ploidy for variant calling
integer
1
Minimum base quality required for variant calling.
integer
20
Minimum minor allele frequency for considering variants.
number
0.33
Minimum genotype quality for considering a variant high quality.
integer
30
Minimum genotype quality for considering a variant medium quality.
integer
20
Minimum number of bases supporting the alternative allele.
integer
3