nf-core/bacass
Simple bacterial assembly and annotation pipeline
This page documents version 2.0.0 of the pipeline. The latest stable release is 2.3.1.
Define where the pipeline should find input data and save output data.
Path to comma-separated file containing information about the samples in the experiment.
string
^\S+\.csv$
You will need to create a design file with information about the samples in your experiment before running the pipeline, and use this parameter to specify its location. Despite the .csv extension required by the pipeline, the file contents must be tab-separated, with 6 columns and a header row. See the usage docs.
For example:
--input 'design_hybrid.csv'
An example of properly formatted input files can be found at the nf-core/test-datasets.
For example, this is the input used for a hybrid assembly in testing:
ID R1 R2 LongFastQ Fast5 GenomeSize
ERR044595 https://github.com/nf-core/test-datasets/raw/bacass/ERR044595_1M_1.fastq.gz https://github.com/nf-core/test-datasets/raw/bacass/ERR044595_1M_2.fastq.gz https://github.com/nf-core/test-datasets/raw/bacass/nanopore/subset15000.fq.gz NA 2.8m
ID: The identifier to use for handling the dataset, e.g. the sample name.
R1: The forward reads, if short-read data is available.
R2: The reverse reads, if short-read data is available.
LongFastQ: The long-read FastQ file, with reads in FASTQ format.
Fast5: The folder containing the basecalled Fast5 files.
GenomeSize: The expected genome size of the assembly. Only used by the Canu assembler.
Missing values (e.g. the Fast5 folder when only short reads are available) can be indicated with NA in the TSV file; the pipeline will then handle such cases appropriately.
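As a sketch, a minimal short-read-only design file can be written and checked like this (the sample name and FASTQ paths are hypothetical placeholders, not files shipped with the pipeline):

```shell
# Build a minimal short-read-only design file; NA fills the unused
# long-read columns (LongFastQ and Fast5).
printf 'ID\tR1\tR2\tLongFastQ\tFast5\tGenomeSize\n'  > design_short.tsv
printf 'sample1\ts1_R1.fastq.gz\ts1_R2.fastq.gz\tNA\tNA\t2.8m\n' >> design_short.tsv

# Sanity check: every row must have exactly 6 tab-separated columns.
awk -F'\t' 'NF != 6 { bad = 1 } END { exit bad }' design_short.tsv && echo "design file OK"
```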
Path to the output directory where the results will be saved.
string
./results
Email address for completion summary.
string
^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$
Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (~/.nextflow/config), you don't need to specify it on the command line for every run.
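For example, assuming your address is user@example.com (a placeholder), the following fragment in ~/.nextflow/config would set this for every run:

```
params {
    email = "user@example.com"
}
```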
Path to Kraken2 database.
string
See the Kraken2 homepage for download links. Minikraken2 8GB is a reasonable choice, since Kraken2 is run here mainly as a check for sample purity.
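A sketch of fetching and using the database; the archive URL and the unpacked directory name are assumptions based on past Minikraken2 releases, so check the Kraken2 homepage for the current link:

```shell
# Download and unpack the Minikraken2 8GB database (URL is an assumption;
# verify against the Kraken2 homepage before use).
wget https://genome-idx.s3.amazonaws.com/kraken/minikraken2_v2_8GB_201904.tgz
tar -xzf minikraken2_v2_8GB_201904.tgz

# Point the pipeline at the unpacked database directory:
nextflow run nf-core/bacass --input design.tsv --kraken2db ./minikraken2_v2_8GB_201904_UPDATE
```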
Parameters for the assembly
The assembler to use for assembly. Available options are Unicycler, Canu and Miniasm. The latter two are only available for long-read data, whereas Unicycler can be used for short-read or hybrid assembly projects.
string
unicycler
Which type of assembly to perform.
string
short
This adjusts the type of assembly done with the input data and can be any of long, short or hybrid. Short and hybrid assemblies always run Unicycler, whereas long-read assembly can be configured separately using the --assembler parameter.
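For instance, a long-read-only assembly with Canu versus a hybrid assembly would be launched as follows (the design file names are hypothetical):

```shell
# Long-read-only assembly, using Canu instead of the Unicycler default:
nextflow run nf-core/bacass --input design_long.tsv --assembly_type long --assembler canu

# Hybrid assembly; Unicycler is used regardless of --assembler:
nextflow run nf-core/bacass --input design_hybrid.tsv --assembly_type hybrid
```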
Extra arguments for Unicycler
string
This advanced option allows you to pass extra arguments to Unicycler (e.g. "--mode conservative" or "--no_correct"). For this to work you need to quote the arguments and add at least one space.
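For example, combining both of the arguments mentioned above in one quoted string (design file name is a placeholder):

```shell
nextflow run nf-core/bacass --input design.tsv --unicycler_args "--mode conservative --no_correct"
```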
This can be used to supply extra options to the Canu assembler. Will be ignored when other assemblers are used.
string
Which assembly polishing method to use.
string
medaka
Can be used to define which polishing method is used by default for long reads. The default is medaka; available options are nanopolish or medaka.
The annotation method used to annotate the final assembly. The default choice is prokka, but the dfast tool is also available. For the latter, make sure to create your own config if you're not happy with the default one provided. See #dfast_config to find out how.
string
prokka
Extra arguments for prokka annotation tool.
string
This advanced option allows you to pass extra arguments to Prokka (e.g. "--rfam" or "--genus name"). For this to work you need to quote the arguments and add at least one space between them. Example:
--prokka_args '--rfam --genus Escherichia'
Specifies a configuration file for the DFAST annotation method.
string
assets/test_config_dfast.py
This can be used instead of Prokka if you need to supply a specific config file for annotation. If you want to know how to create your config file, please refer to the DFAST readme. The default config (assets/test_config_dfast.py) is included for testing only, so if you want to annotate using DFAST, you have to create your own config!
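A sketch of a DFAST run with a custom config; the design file and config file names are hypothetical:

```shell
nextflow run nf-core/bacass --input design.tsv --annotation_tool dfast --dfast_config ./my_dfast_config.py
```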
Skip running Kraken2 classifier on reads.
boolean
Skip annotating the assembly with Prokka/DFAST.
boolean
Skip running PycoQC on long-read input.
boolean
Skip polishing the long-read assembly with fast5 input. Will not affect short/hybrid assemblies.
boolean
Parameters used to describe centralised config profiles. These should not be edited.
Git commit id for Institutional configs.
string
master
Base directory for Institutional configs.
string
https://raw.githubusercontent.com/nf-core/configs/master
If you're running offline, Nextflow will not be able to fetch the institutional config files from the internet. If you don't need them, then this is not a problem. If you do need them, you should download the files from the repo and tell Nextflow where to find them with this parameter.
Institutional configs hostname.
string
Institutional config name.
string
Institutional config description.
string
Institutional config contact information.
string
Institutional config URL link.
string
Set the top limit for requested resources for any single job.
Maximum number of CPUs that can be requested for any single job.
integer
16
Use to set an upper-limit for the CPU requirement for each process. Should be an integer e.g. --max_cpus 1
Maximum amount of memory that can be requested for any single job.
string
128.GB
^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$
Use to set an upper-limit for the memory requirement for each process. Should be a string in the format integer-unit e.g. --max_memory '8.GB'
Maximum amount of time that can be requested for any single job.
string
240.h
^(\d+\.?\s*(s|m|h|day)\s*)+$
Use to set an upper-limit for the time requirement for each process. Should be a string in the format integer-unit e.g. --max_time '2.h'
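For example, to cap every job at 8 CPUs, 32 GB of memory and 24 hours (design file name is a placeholder):

```shell
nextflow run nf-core/bacass --input design.tsv --max_cpus 8 --max_memory '32.GB' --max_time '24.h'
```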
Less common options for the pipeline, typically set in a config file.
Display help text.
boolean
Method used to save pipeline results to output directory.
string
The Nextflow publishDir option specifies which intermediate files should be saved to the output directory. This option tells the pipeline what method should be used to move these files. See the Nextflow docs for details.
MultiQC report title. Printed as page header, used for filename if not otherwise specified.
string
Email address for completion summary, only when pipeline fails.
string
^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$
An email address to send a summary email to when the pipeline is completed - ONLY sent if the pipeline does not exit successfully.
Send plain-text email instead of HTML.
boolean
File size limit when attaching MultiQC reports to summary emails.
string
25.MB
^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$
Do not use coloured log outputs.
boolean
Custom config file to supply to MultiQC.
string
Directory to keep pipeline Nextflow logs and reports.
string
${params.outdir}/pipeline_info
Whether to validate parameters against the schema at runtime.
boolean
true
Show all params when using --help
boolean
Run this workflow with Conda. You can also use '-profile conda' instead of providing this parameter.
boolean
Instead of directly downloading Singularity images for use with Singularity, force the workflow to pull and convert Docker containers instead.
boolean
This may be useful for example if you are unable to directly pull Singularity containers to run the pipeline due to http/https proxy issues.