nf-core/sarek
Analysis pipeline to detect germline or somatic variants (pre-processing, variant calling and annotation) from WGS / targeted sequencing
These parameter docs are for version 3.1.1. The latest stable release is 3.5.0.
Define where the pipeline should find input data and save output data.
Starting step
string
The pipeline starts from this step and then runs through the possible subsequent steps.
Path to comma-separated file containing information about the samples in the experiment.
string
^\S+\.csv$
A design file with information about the samples in your experiment. Use this parameter to specify the location of the input files. It has to be a comma-separated file with a header row. See usage docs.
If no input file is specified, sarek will attempt to locate one in the {outdir}
directory.
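As an illustration, a minimal sketch of a samplesheet and launch command, assuming paired-end FASTQ input and the column layout from the usage docs (all sample names and file paths are placeholders):
samplesheet.csv:
patient,sample,lane,fastq_1,fastq_2
patient1,sample1,lane_1,/path/to/sample1_R1.fastq.gz,/path/to/sample1_R2.fastq.gz

nextflow run nf-core/sarek -profile docker --input samplesheet.csv --outdir /absolute/path/to/results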
The output directory where the results will be saved. You have to use absolute paths to storage on Cloud infrastructure.
string
Most common options used for the pipeline
Specify how many reads each split of a FastQ file contains. Set to 0 to disable splitting entirely.
integer
50000000
Use the tool FastP to split the FASTQ file by number of reads. This parallelizes mapping across FASTQ file shards, speeding it up.
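For example, to split each FASTQ file into shards of 10 million reads (an illustrative value; file names are placeholders):
nextflow run nf-core/sarek -profile docker --input samplesheet.csv --outdir results --split_fastq 10000000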
Enable when exome or panel data is provided.
boolean
With this parameter, flags in various tools are set for targeted sequencing data. It is recommended to enable it for whole-exome and panel data analysis.
Path to a target BED file for whole-exome or targeted sequencing, or to an intervals file.
string
To speed up preprocessing and variant calling, execution is parallelized across the reference genome chopped into smaller pieces.
Parts of preprocessing and variant calling are run per interval, and the resulting files are then merged.
This parallelization can reduce wall-clock time significantly.
Reads are aligned to the whole genome; Base Quality Score Recalibration and variant calling are then run on the supplied regions.
Whole Genome Sequencing:
The (provided) intervals are the chromosomes cut at their centromeres (so each chromosome arm is processed separately), plus additional unassigned contigs.
The hs37d5 contig, which contains concatenated decoy sequences, is ignored.
The calling intervals can be defined using a .list or a BED file.
A .list file contains one interval per line in the format chromosome:start-end
(1-based coordinates).
A BED file must be a tab-separated text file with one interval per line.
There must be at least three columns: chromosome, start, and end (0-based coordinates).
Additionally, the score column of the BED file can be used to provide an estimate of how many seconds it will take to call variants on that interval.
The fourth column remains unused.
chr1    10000    207666    NA    47.3
This indicates that variant calling on the interval chr1:10001-207666 takes approximately 47.3 seconds.
The runtime estimate is used in two different ways.
First, when there are multiple consecutive intervals in the file that take little time to compute, they are processed as a single job, thus reducing the number of processes that need to be spawned.
Second, the jobs with the largest processing time are started first, which reduces wall-clock time.
If no runtime is given, a processing rate of 1000 nucleotides per second is assumed. See --nucleotides_per_second for how to customize this.
Actual figures vary from 2 nucleotides/second to 30000 nucleotides/second.
If you prefer, you can specify the full path to your intervals file when you run the pipeline:
NB If none provided, will be generated automatically from the FASTA reference
NB Use --no_intervals to disable automatic generation.
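As a sketch, a hypothetical calling-interval BED file in the five-column, tab-separated layout described above (fourth column unused, fifth column an optional runtime estimate in seconds), passed via --intervals; the coordinates and file name are placeholders:
chr1    10000     207666    NA    47.3
chr1    257666    297968    NA    12.1
chr2    10000     181430    NA    40.8

nextflow run nf-core/sarek -profile docker --input samplesheet.csv --outdir results --intervals calling_regions.bed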
Targeted Sequencing:
The recommended flow for targeted sequencing data is to use the workflow as it is, but also provide a BED
file containing targets for all steps using the --intervals
option. In addition, the parameter --wes
should be set.
It is advised to pad the variant calling regions (exons or targets) to some extent before submitting them to the workflow.
The procedure is similar to whole genome sequencing, except that only BED files are accepted. See above for the format description.
Adding every exon as an interval in case of WES can generate >200K processes or jobs, many more forks, and a similar number of directories in the Nextflow work directory. These are appropriately grouped together to reduce the number of processes run in parallel (see above and --nucleotides_per_second for details).
Furthermore, primers and/or baits are not 100% specific (certainly not for MHC, KIR, etc.), so there will quite likely be reads mapping to multiple locations.
If you are certain that the target is unique in your genome (all reads will certainly map to only one location) and aligning to the whole genome is overkill, it is actually better to change the reference itself.
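Putting this together, a targeted-sequencing run might look like the following sketch (the padded target BED file name is a placeholder):
nextflow run nf-core/sarek -profile docker --input samplesheet.csv --outdir results --wes --intervals padded_targets.bed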
Estimate interval size.
number
1000
Intervals are parts of the chopped up genome used to speed up preprocessing and variant calling. See --intervals
for more info.
Changing this parameter changes the number of intervals that are grouped and processed together. BED files from targeted sequencing can contain thousands of small intervals. Spinning up a new process for each can be quite resource intensive. Instead, it can be preferable to process small intervals together on larger nodes.
To make use of this parameter, the BED file must not contain a runtime estimate (column 5).
Disable usage of intervals.
boolean
Intervals are parts of the chopped up genome used to speed up preprocessing and variant calling. See --intervals
for more info.
If --no_intervals is set, no intervals will be taken into account to speed up data processing.
Tools to use for variant calling and/or for annotation.
string
^((ascat|cnvkit|controlfreec|deepvariant|freebayes|haplotypecaller|manta|merge|mpileup|msisensorpro|mutect2|snpeff|strelka|tiddit|vep)?,?)*[^,]+$
Multiple tools can be specified, separated by commas.
Variant Calling:
Germline variant calling can currently be performed with the following variant callers:
- SNPs/Indels: DeepVariant, FreeBayes, HaplotypeCaller, mpileup, Strelka
- Structural Variants: Manta, TIDDIT
- Copy-number: CNVKit
Tumor-only somatic variant calling can currently be performed with the following variant callers:
- SNPs/Indels: FreeBayes, mpileup, Mutect2, Strelka
- Structural Variants: Manta, TIDDIT
- Copy-number: CNVKit, ControlFREEC
Somatic variant calling can currently only be performed with the following variant callers:
- SNPs/Indels: FreeBayes, Mutect2, Strelka2
- Structural variants: Manta, TIDDIT
- Copy-Number: ASCAT, CNVKit, Control-FREEC
- Microsatellite Instability: MSIsensorpro
NB Mutect2 for somatic variant calling cannot be combined with
--no_intervals
Annotation:
- snpEff, VEP, or merge (running both consecutively).
NB As Sarek uses bgzip and tabix to compress and index the annotated VCF files, it expects input VCF files to be sorted when starting from --step annotate.
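As a hedged example, requesting two variant callers plus VEP annotation (availability depends on the analysis type listed above) could look like:
nextflow run nf-core/sarek -profile docker --input samplesheet.csv --outdir results --tools strelka,manta,vep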
Disable specified tools.
string
^((baserecalibrator|baserecalibrator_report|bcftools|documentation|fastqc|markduplicates|markduplicates_report|mosdepth|multiqc|samtools|vcftools|versions)?,?)*[^,]+$
Multiple tools can be specified, separated by commas.
NB --skip_tools baserecalibrator_report actually just skips saving the reports.
NB --skip_tools markduplicates_report does not skip MarkDuplicates, but prevents the collection of duplicate metrics, which slows down performance.
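For instance, to disable base recalibration and FastQC in the same run (tools comma-separated, as described above):
nextflow run nf-core/sarek -profile docker --input samplesheet.csv --outdir results --skip_tools baserecalibrator,fastqc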
Trim FASTQ files or handle UMIs
Run FastP for read trimming
boolean
Use this to perform adapter trimming. Adapters are detected automatically using the FastP flag --detect_adapter_for_pe. For more info, see FastP.
Remove bp from the 5' end of read 1
integer
This may be useful if the qualities were very poor, or if there is some sort of unwanted bias at the 5' end. Corresponds to the FastP flag --trim_front1
.
Remove bp from the 5' end of read 2
integer
This may be useful if the qualities were very poor, or if there is some sort of unwanted bias at the 5' end. Corresponds to the FastP flag --trim_front2
.
Remove bp from the 3' end of read 1
integer
This may remove some unwanted bias from the 3' end. Corresponds to the FastP flag --three_prime_clip_r1
.
Remove bp from the 3' end of read 2
integer
This may remove some unwanted bias from the 3' end. Corresponds to the FastP flag --three_prime_clip_r2
.
Removing poly-G tails.
integer
Detects poly-G in read tails and trims them. Corresponds to the FastP flag --trim_poly_g
.
Save trimmed FastQ file intermediates.
boolean
Specify UMI read structure
string
One structure if the UMI is present on one end (e.g. '+T 2M11S+T'), or two structures separated by a blank space if UMIs are present on both ends (e.g. '2M11S+T 2M11S+T'); please note, this does not handle duplex UMIs.
For more info on UMI usage in the pipeline, also check docs here.
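A sketch of a run with UMIs on both ends, assuming the parameter name --umi_read_structure used by current Sarek releases (quoting keeps the two space-separated structures as one value):
nextflow run nf-core/sarek -profile docker --input samplesheet.csv --outdir results --umi_read_structure '2M11S+T 2M11S+T'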
Default strategy with UMI
string
Adjacency
Available values: Identity, Edit, Adjacency, Paired
If set, publishes split FASTQ files. Intended for testing purposes.
boolean
Configure preprocessing tools
Specify aligner to be used to map reads to reference genome.
string
Sarek will build missing indices automatically if not provided. Set --bwa false if indices should be (re-)built.
If DragMap is selected as the aligner, it is recommended to skip base recalibration with --skip_tools baserecalibrator. See here for more info.
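For example, selecting DragMap and skipping base recalibration as recommended above (parameter names as in current Sarek releases):
nextflow run nf-core/sarek -profile docker --input samplesheet.csv --outdir results --aligner dragmap --skip_tools baserecalibrator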
Save mapped files.
boolean
If the parameter --split_fastq is used, the sharded BAM files are merged and converted to CRAM before saving.
Saves output from mapping (if --save_mapped), MarkDuplicates and base recalibration as BAM files instead of CRAM
boolean
Enable usage of GATK Spark implementation for duplicate marking and/or base quality score recalibration
string
^((baserecalibrator|markduplicates)?,?)*[^,]+$
Multiple tools can be specified, separated by commas.
The GATK4 Base Quality Score Recalibration tools BaseRecalibrator and ApplyBQSR are currently available as a beta release. Use with caution!
Configure variant calling tools
If true, skips germline variant calling for normal samples matched to a tumor sample. Normal samples without a matched tumor will still be processed through the germline variant calling tools.
boolean
This can speed up computation for somatic variant calling with matched normal samples. If false, all normal samples are also processed through the germline variant calling tools. If true, only somatic variant calling is done.
Turn on the joint germline variant calling for GATK haplotypecaller
boolean
Uses all normal germline samples (as designated by status
in the input csv) in the joint germline variant calling process.
Overwrite ASCAT min base quality required for a read to be counted.
number
20
For more details, see here.
Overwrite ASCAT minimum depth required in the normal for a SNP to be considered.
number
10
For more details, see here.
Overwrite ASCAT min mapping quality required for a read to be counted.
number
35
For more details, see here.
Overwrite ASCAT ploidy.
number
ASCAT: optional argument to override ASCAT optimization and supply psi parameter (expert parameter, don’t adapt unless you know what you’re doing). See here
Overwrite ASCAT purity.
number
Overwrites ASCAT's rho_manual
parameter. Expert use only, see here for details.
Requires that --ascat_ploidy
is set.
Specify a custom chromosome length file.
string
Control-FREEC requires a file containing all chromosome lengths. By default, the fasta.fai file is used. If the fasta.fai file contains chromosomes not present in the intervals, it fails (see: https://github.com/BoevaLab/FREEC/issues/106).
In this case, a custom chromosome length file can be specified. It must be in the same format as the fai, but only contain the relevant chromosomes.
Overwrite Control-FREEC coefficientOfVariation
number
0.05
For details, see the Control-FREEC manual.
Overwrite Control-FREEC contaminationAdjustment
boolean
For details, see the Control-FREEC manual.
Design known contamination value for Control-FREEC
number
For details, see the Control-FREEC manual.
Minimal sequencing quality for a position to be considered in BAF analysis.
number
For details, see the Control-FREEC manual.
Minimal read coverage for a position to be considered in BAF analysis.
number
For details, see the Control-FREEC manual.
Genome ploidy used by ControlFREEC
string
2
In case of doubt, you can set several values and Control-FREEC will select the one that explains the most observed CNAs. Example: ploidy=2, ploidy=2,3,4. For more details, see the manual.
Copy-number reference for CNVKit.
string
https://cnvkit.readthedocs.io/en/stable/pipeline.html?highlight=reference.cnn#batch
Panel-of-normals VCF (bgzipped) for GATK Mutect2
string
Without a PON, there will be no calls flagged PASS in the FILTER field; only an unfiltered VCF is written.
It is highly recommended to make your own PON, as it depends on sequencer and library preparation.
The pipeline is shipped with a panel-of-normals for --genome GATK.GRCh38
provided by GATK.
NB PON file should be bgzipped.
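A hedged example of supplying a custom panel-of-normals for Mutect2, assuming the --pon/--pon_tbi parameter names used by current Sarek releases (paths are placeholders):
nextflow run nf-core/sarek -profile docker --input samplesheet.csv --outdir results --tools mutect2 --pon /path/to/pon.vcf.gz --pon_tbi /path/to/pon.vcf.gz.tbi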
Index of the panel-of-normals VCF.
string
If none provided, will be generated automatically from the PON bgzipped VCF file.
Do not analyze soft clipped bases in the reads for GATK Mutect2.
boolean
Use the --dont-use-soft-clipped-bases parameter with GATK Mutect2.
Allow usage of fasta file for annotation with VEP
boolean
By pointing VEP to a FASTA file, it is possible to retrieve reference sequence locally. This enables VEP to retrieve HGVS notations (--hgvs), check the reference sequence given in input data, and construct transcript models from a GFF or GTF file without accessing a database.
For details, see here.
Path to dbNSFP processed file.
string
To be used with --vep_dbnsfp
.
dbNSFP files and more information are available at https://www.ensembl.org/info/docs/tools/vep/script/vep_plugins.html#dbnsfp and https://sites.google.com/site/jpopgen/dbNSFP/
Path to dbNSFP tabix indexed file.
string
To be used with --vep_dbnsfp
.
Consequence to annotate with
string
To be used with --vep_dbnsfp
.
This parameter is used to filter/limit outputs to a specific effect of the variant.
The set of consequence terms is defined by the Sequence Ontology and an overview of those used in VEP can be found here: https://www.ensembl.org/info/genome/variation/prediction/predicted_data.html
If one wants to filter using several consequences, separate them with '&' (e.g. 'consequence=3_prime_UTR_variant&intron_variant').
Fields to annotate with
string
rs_dbSNP,HGVSc_VEP,HGVSp_VEP,1000Gp3_EAS_AF,1000Gp3_AMR_AF,LRT_score,GERP++_RS,gnomAD_exomes_AF
To be used with --vep_dbnsfp
.
This parameter can be used to retrieve individual values from the dbNSFP file. The values correspond to the names of the columns in the dbNSFP file and are separated by commas.
The column names might differ between the different dbNSFP versions. Please check the Readme.txt file provided with the dbNSFP file to obtain the correct column names. The Readme file also contains a short description of the provided values and the versions of the tools used to generate them.
The default values are explained below:
rs_dbSNP - rs number from dbSNP
HGVSc_VEP - HGVS coding variant presentation from VEP. Multiple entries separated by ';', corresponds to Ensembl_transcriptid
HGVSp_VEP - HGVS protein variant presentation from VEP. Multiple entries separated by ';', corresponds to Ensembl_proteinid
1000Gp3_EAS_AF - Alternative allele frequency in the 1000Gp3 East Asian descendent samples
1000Gp3_AMR_AF - Alternative allele frequency in the 1000Gp3 American descendent samples
LRT_score - Original LRT two-sided p-value (LRTori), ranges from 0 to 1
GERP++_RS - Conservation score. The larger the score, the more conserved the site, ranges from -12.3 to 6.17
gnomAD_exomes_AF - Alternative allele frequency in the whole gnomAD exome samples.
Path to SpliceAI raw scores SNV file.
string
To be used with --vep_spliceai
.
Path to SpliceAI raw scores SNV tabix indexed file.
string
To be used with --vep_spliceai
.
Path to SpliceAI raw scores indel file.
string
To be used with --vep_spliceai
.
Path to SpliceAI raw scores indel tabix indexed file.
string
To be used with --vep_spliceai
.
Path to snpEff cache.
string
To be used with --annotation_cache
.
Path to VEP cache.
string
To be used with --annotation_cache
.
VEP output file format.
string
Sets the format of the output file from VEP. Available formats: json, tab and vcf.
Reference genome related files and options required for the workflow.
Name of iGenomes reference.
string
GATK.GRCh38
If using a reference genome configured in the pipeline using iGenomes, use this parameter to give the ID for the reference. This is then used to build the full paths for all required reference genome files e.g. --genome GRCh38
.
See the nf-core website docs for more details.
ASCAT genome.
string
Must be set to run ASCAT, either hg19 or hg38. If you use AWS iGenomes, this has already been set for you appropriately.
Path to ASCAT allele zip file.
string
If you use AWS iGenomes, this has already been set for you appropriately.
Path to ASCAT loci zip file.
string
If you use AWS iGenomes, this has already been set for you appropriately.
Path to ASCAT GC content correction file.
string
If you use AWS iGenomes, this has already been set for you appropriately.
Path to ASCAT RT (replication timing) correction file.
string
If you use AWS iGenomes, this has already been set for you appropriately.
Path to BWA-MEM indices.
string
If you use AWS iGenomes, this has already been set for you appropriately.
If you wish to recompute indices available on igenomes, set --bwa false
.
NB If none provided, will be generated automatically from the FASTA reference. Combine with
--save_reference
to save for future runs.
Path to bwa-mem2 indices.
string
If you use AWS iGenomes, this has already been set for you appropriately.
If you wish to recompute indices available on igenomes, set --bwamem2 false
.
NB If none provided, will be generated automatically from the FASTA reference, if
--aligner bwa-mem2
is specified. Combine with --save_reference
to save for future runs.
Path to chromosomes folder used with Control-FREEC.
string
If you use AWS iGenomes, this has already been set for you appropriately.
Path to dbsnp file.
string
If you use AWS iGenomes, this has already been set for you appropriately.
Path to dbsnp index.
string
If you use AWS iGenomes, this has already been set for you appropriately.
NB If none provided, will be generated automatically from the dbsnp file. Combine with
--save_reference
to save for future runs.
Label string for VariantRecalibration (HaplotypeCaller joint variant calling)
string
Path to FASTA dictionary file.
string
If you use AWS iGenomes, this has already been set for you appropriately.
NB If none provided, will be generated automatically from the FASTA reference. Combine with
--save_reference
to save for future runs.
Path to dragmap indices.
string
If you use AWS iGenomes, this has already been set for you appropriately.
If you wish to recompute indices available on igenomes, set --dragmap false
.
NB If none provided, will be generated automatically from the FASTA reference, if
--aligner dragmap
is specified. Combine with --save_reference
to save for future runs.
Path to FASTA genome file.
string
\.fn?a(sta)?(\.gz)?$
If you use AWS iGenomes, this has already been set for you appropriately.
This parameter is mandatory if --genome
is not specified.
Path to FASTA reference index.
string
If you use AWS iGenomes, this has already been set for you appropriately.
NB If none provided, will be generated automatically from the FASTA reference. Combine with
--save_reference
to save for future runs.
Path to GATK Mutect2 Germline Resource File.
string
If you use AWS iGenomes, this has already been set for you appropriately.
The germline resource VCF file (bgzipped and tabixed) needed by GATK4 Mutect2 is a collection of calls that are likely present in the sample, with allele frequencies.
The AF info field must be present.
You can find a smaller, stripped gnomAD VCF file (most of the annotation is removed and only calls flagged PASS are stored) in the AWS iGenomes Annotation/GermlineResource folder.
Path to GATK Mutect2 Germline Resource Index.
string
If you use AWS iGenomes, this has already been set for you appropriately.
NB If none provided, will be generated automatically from the Germline Resource file, if provided. Combine with
--save_reference
to save for future runs.
Path to known indels file.
string
If you use AWS iGenomes, this has already been set for you appropriately.
Path to known indels file index.
string
If you use AWS iGenomes, this has already been set for you appropriately.
NB If none provided, will be generated automatically from the known indels file, if provided. Combine with
--save_reference
to save for future runs.
1st label string for VariantRecalibration (HaplotypeCaller joint variant calling)
string
If you use AWS iGenomes, this has already been set for you appropriately.
Path to known snps file.
string
Path to known snps file index.
string
If you use AWS iGenomes, this has already been set for you appropriately.
NB If none provided, will be generated automatically from the known snps file, if provided. Combine with
--save_reference
to save for future runs.
Label string for VariantRecalibration (HaplotypeCaller joint variant calling)
string
If you use AWS iGenomes, this has already been set for you appropriately.
Path to Control-FREEC mappability file.
string
If you use AWS iGenomes, this has already been set for you appropriately.
snpEff DB version.
string
If you use AWS iGenomes, this has already been set for you appropriately.
This is used to specify the database to be used for annotation.
Alternatively, database names can be listed with the snpEff databases command.
snpEff genome.
string
If you use AWS iGenomes, this has already been set for you appropriately.
This is used to specify the genome when using the container with pre-downloaded cache.
snpEff version.
string
If you use AWS iGenomes, this has already been set for you appropriately.
This is used to specify the snpEff version when using the container with pre-downloaded cache.
VEP genome.
string
If you use AWS iGenomes, this has already been set for you appropriately.
This is used to specify the genome when using the container with pre-downloaded cache.
VEP species.
string
If you use AWS iGenomes, this has already been set for you appropriately.
Alternatively species listed in Ensembl Genomes caches can be used.
VEP cache version.
number
If you use AWS iGenomes, this has already been set for you appropriately.
Alternatively, the cache version can be used to specify the correct Ensembl Genomes version number, as these differ from the concurrent Ensembl/VEP version numbers.
VEP version.
string
If you use AWS iGenomes, this has already been set for you appropriately.
This is used to specify the VEP version when using the container with pre-downloaded cache.
Save built references.
boolean
Set this parameter if you wish to save all computed reference files. This is useful to avoid re-computation on future runs.
Directory / URL base for iGenomes references.
string
s3://ngi-igenomes/igenomes/
Do not load the iGenomes reference config.
boolean
Do not load igenomes.config
when running the pipeline.
You may choose this option if you observe clashes between custom parameters and those supplied in igenomes.config
.
NB You can then run
Sarek
by specifying at least a FASTA genome file.
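A minimal sketch of a run without the iGenomes config, supplying a local FASTA reference (paths are placeholders; remaining indices are then built automatically unless provided, and depending on your release you may also need to adjust --genome):
nextflow run nf-core/sarek -profile docker --input samplesheet.csv --outdir results --igenomes_ignore --fasta /path/to/genome.fasta --save_reference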
Parameters used to describe centralised config profiles. These should not be edited.
Git commit id for Institutional configs.
string
master
Base directory for Institutional configs.
string
https://raw.githubusercontent.com/nf-core/configs/master
If you're running offline, Nextflow will not be able to fetch the institutional config files from the internet. If you don't need them, then this is not a problem. If you do need them, you should download the files from the repo and tell Nextflow where to find them with this parameter.
Institutional config name.
string
Institutional config description.
string
Institutional config contact information.
string
Institutional config URL link.
string
Sequencing center information to be added to read group (CN field).
string
Sequencing platform information to be added to read group (PL field).
string
ILLUMINA
Default: ILLUMINA. Will be used to create a proper header for further GATK4 downstream analysis.
Set the top limit for requested resources for any single job.
Maximum number of CPUs that can be requested for any single job.
integer
16
Use to set an upper-limit for the CPU requirement for each process. Should be an integer e.g. --max_cpus 1
.
Maximum amount of memory that can be requested for any single job.
string
128.GB
^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$
Use to set an upper-limit for the memory requirement for each process. Should be a string in the format integer-unit e.g. --max_memory '8.GB'
.
Maximum amount of time that can be requested for any single job.
string
240.h
^(\d+\.?\s*(s|m|h|day)\s*)+$
Use to set an upper-limit for the time requirement for each process. Should be a string in the format integer-unit e.g. --max_time '2.h'
.
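For example, capping per-process resource requests from the command line using the formats described above:
nextflow run nf-core/sarek -profile docker --input samplesheet.csv --outdir results --max_cpus 8 --max_memory '64.GB' --max_time '48.h'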
Less common options for the pipeline, typically set in a config file.
Display help text.
boolean
Method used to save pipeline results to output directory.
string
The Nextflow publishDir
option specifies which intermediate files should be saved to the output directory. This option tells the pipeline what method should be used to move these files. See Nextflow docs for details.
Email address for completion summary.
string
^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$
Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (~/.nextflow/config
) then you don't need to specify this on the command line for every run.
Email address for completion summary, only when pipeline fails.
string
^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$
An email address to send a summary email to when the pipeline is completed - ONLY sent if the pipeline does not exit successfully.
Send plain-text email instead of HTML.
boolean
File size limit when attaching MultiQC reports to summary emails.
string
25.MB
^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$
Do not use coloured log outputs.
boolean
MultiQC report title. Printed as page header, used for filename if not otherwise specified.
string
Custom config file to supply to MultiQC.
string
Custom logo file to supply to MultiQC. File name must also be set in the MultiQC config file
string
Custom MultiQC yaml file containing HTML including a methods description.
string
Directory to keep pipeline Nextflow logs and reports.
string
${params.outdir}/pipeline_info
Boolean whether to validate parameters against the schema at runtime
boolean
true
Show all params when using --help
boolean
Run this workflow with Conda. You can also use '-profile conda' instead of providing this parameter.
boolean
Incoming hook URL for messaging service
string
Incoming hook URL for messaging service. Currently, only MS Teams is supported.