Define where the pipeline should find input data and save output data.

Path to the design file containing information about the samples in the experiment.

required
type: string
pattern: ^\S+\.csv$

You will need to create a design file with information about the samples in your experiment before running the pipeline. Use this parameter to specify its location. It has to be a tab-separated file with 6 columns and a header row (note that, despite the tab-separated content, the file name must end in .csv to match the pattern above). See the usage docs for details.

For example:

--input 'design_hybrid.csv'

An example of a properly formatted input file can be found in the nf-core/test-datasets repository.

For example, this is the input used for a hybrid assembly in testing:
ID R1 R2 LongFastQ Fast5 GenomeSize
ERR044595 https://github.com/nf-core/test-datasets/raw/bacass/ERR044595_1M_1.fastq.gz https://github.com/nf-core/test-datasets/raw/bacass/ERR044595_1M_2.fastq.gz https://github.com/nf-core/test-datasets/raw/bacass/nanopore/subset15000.fq.gz NA 2.8m

  • ID: The identifier to use for handling the dataset, e.g. the sample name
  • R1: The forward reads, if short-read data are available
  • R2: The reverse reads, if short-read data are available
  • LongFastQ: The long-read FastQ file
  • Fast5: The folder containing the basecalled Fast5 files
  • GenomeSize: The expected genome size of the assembly. Only used by the canu assembler.

Missing values (e.g. the Fast5 folder when only short-read data are available) are indicated with NA in the file; the pipeline will then handle such cases appropriately.
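
For example, a short-read-only dataset could look like this (the sample name and file paths here are hypothetical placeholders):

ID R1 R2 LongFastQ Fast5 GenomeSize
sample1 sample1_R1.fastq.gz sample1_R2.fastq.gz NA NA NA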

Path to the output directory where the results will be saved.

type: string
default: ./results

Email address for completion summary.

type: string
pattern: ^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$

Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (~/.nextflow/config) then you don't need to specify this on the command line for every run.
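
For example, you could add the following line to ~/.nextflow/config (assuming the parameter is named email; the address is a placeholder):

params.email = 'you@example.com'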

Path to Kraken2 database.

type: string

See the Kraken2 homepage for download links. MiniKraken2 8GB is a reasonable choice, since Kraken2 is run here mainly to check sample purity.
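
For example (assuming this parameter is named --kraken2db; the path is a placeholder):

--kraken2db '/path/to/minikraken2_v2_8GB'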

Parameters for the assembly

The assembler to use. Available options are unicycler, canu and miniasm. The latter two are only available for long-read data, whereas unicycler can be used for short-read or hybrid assembly projects.

type: string
default: unicycler
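
For example, to assemble long reads with Canu:

--assembler canu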

Which type of assembly to perform.

type: string
default: short

This adjusts the type of assembly done with the input data and can be any of long, short or hybrid. Short and hybrid assemblies will always run Unicycler, whereas long-read assembly can be configured separately using the --assembler parameter.
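
For example (assuming this parameter is named --assembly_type):

--assembly_type hybrid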

Extra arguments for Unicycler

type: string

This advanced option allows you to pass extra arguments to Unicycler (e.g. "--mode conservative" or "--no_correct"). For this to work you need to quote the arguments and add at least one space.
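
For example (assuming this parameter is named --unicycler_args; note the quoting and the leading space):

--unicycler_args ' --mode conservative'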

This can be used to supply extra options to the Canu assembler; it is ignored when another assembler is used.

type: string
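
For example (assuming this parameter is named --canu_args; correctedErrorRate is a standard Canu option):

--canu_args ' correctedErrorRate=0.16'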

Which assembly polishing method to use.

type: string
default: medaka

Defines which polishing method is used for long reads. The default is medaka; the available options are medaka and nanopolish.

The annotation tool used to annotate the final assembly. The default choice is prokka, but the dfast tool is also available. For the latter, make sure to create your own config if you're not happy with the default one provided. See #dfast_config to find out how.

type: string
default: prokka

Extra arguments for prokka annotation tool.

type: string

This advanced option allows you to pass extra arguments to Prokka (e.g. " --rfam" or " --genus name"). For this to work you need to quote the arguments and add at least one space between the arguments. Example:

--prokka_args ' --rfam --genus Escherichia --species coli'

Specifies a configuration file for the DFAST annotation method.

type: string
default: assets/test_config_dfast.py

This is used to supply a custom config file when annotating with DFAST instead of Prokka. To learn how to create your own config file, please refer to the DFAST README. The default config (assets/test_config_dfast.py) is included only for testing, so if you want to annotate using DFAST, you have to create your own config!
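
For example (assuming the annotation method is selected with --annotation_tool and this parameter is named --dfast_config; the path is a placeholder):

--annotation_tool dfast --dfast_config 'path/to/my_dfast_config.py'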

Skip running Kraken2 classifier on reads.

type: boolean

Skip annotating the assembly with Prokka/DFAST.

type: boolean

Skip running PycoQC on long-read input.

type: boolean

Skip polishing the long-read assembly with fast5 input. Will not affect short/hybrid assemblies.

type: boolean

Parameters used to describe centralised config profiles. These should not be edited.

Git commit id for Institutional configs.

hidden
type: string
default: master

Base directory for Institutional configs.

hidden
type: string
default: https://raw.githubusercontent.com/nf-core/configs/master

If you're running offline, Nextflow will not be able to fetch the institutional config files from the internet. If you don't need them, then this is not a problem. If you do need them, you should download the files from the repo and tell Nextflow where to find them with this parameter.
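
For example (assuming this parameter is named --custom_config_base; the path is a placeholder):

--custom_config_base '/path/to/local/nf-core/configs'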

Institutional configs hostname.

hidden
type: string

Institutional config name.

hidden
type: string

Institutional config description.

hidden
type: string

Institutional config contact information.

hidden
type: string

Institutional config URL link.

hidden
type: string

Set the top limit for requested resources for any single job.

Maximum number of CPUs that can be requested for any single job.

hidden
type: integer
default: 16

Use to set an upper limit for the CPU requirement for each process. Should be an integer, e.g. --max_cpus 1

Maximum amount of memory that can be requested for any single job.

hidden
type: string
default: 128.GB
pattern: ^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$

Use to set an upper limit for the memory requirement for each process. Should be a string in the format integer-unit, e.g. --max_memory '8.GB'

Maximum amount of time that can be requested for any single job.

hidden
type: string
default: 240.h
pattern: ^(\d+\.?\s*(s|m|h|day)\s*)+$

Use to set an upper limit for the time requirement for each process. Should be a string in the format integer-unit, e.g. --max_time '2.h'
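
These limits can also be set together in a config file rather than on the command line; the values below are placeholders:

params {
  max_cpus = 8
  max_memory = '64.GB'
  max_time = '48.h'
}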

Less common options for the pipeline, typically set in a config file.

Display help text.

hidden
type: boolean

Method used to save pipeline results to the output directory.

hidden
type: string

The Nextflow publishDir option specifies which intermediate files should be saved to the output directory. This option tells the pipeline what method should be used to move these files. See Nextflow docs for details.
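
For example (assuming this parameter is named --publish_dir_mode; copy is one of Nextflow's standard publishDir modes):

--publish_dir_mode copy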

MultiQC report title. Printed as page header, used for filename if not otherwise specified.

type: string

Email address for completion summary, only when pipeline fails.

hidden
type: string
pattern: ^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$

An email address to send a summary email to when the pipeline is completed - ONLY sent if the pipeline does not exit successfully.

Send plain-text email instead of HTML.

hidden
type: boolean

File size limit when attaching MultiQC reports to summary emails.

hidden
type: string
default: 25.MB
pattern: ^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$

Do not use coloured log outputs.

hidden
type: boolean

Custom config file to supply to MultiQC.

hidden
type: string

Directory to keep pipeline Nextflow logs and reports.

hidden
type: string
default: ${params.outdir}/pipeline_info

Whether to validate parameters against the schema at runtime.

hidden
type: boolean
default: true

Show all params when using --help.

hidden
type: boolean

By default, parameters set as hidden in the schema are not shown on the command line when a user runs with --help. Specifying this option will tell the pipeline to show all parameters.

Run this workflow with Conda. You can also use '-profile conda' instead of providing this parameter.

hidden
type: boolean

Force the workflow to pull and convert Docker containers, instead of directly downloading Singularity images for use with Singularity.

hidden
type: boolean

This may be useful for example if you are unable to directly pull Singularity containers to run the pipeline due to http/https proxy issues.