Define where the pipeline should find input data and save output data.

Variants in VCF or TSV format.

required
type: string

Use this to specify the location of your input variant files in VCF or TSV format. For example:

--input 'path/to/data/*.vcf'  

Please note the following requirements:

  1. The path must be enclosed in quotes
  2. The path must have at least one * wildcard character

Alleles as TXT file.

required
type: string

The path to the file containing the MHC alleles. Alleles should be provided in the format A*01:01, one per line.
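
A minimal sketch of such an alleles file (the file name and the listed alleles are illustrative):

## alleles.txt - one MHC allele per line  
A*01:01  
A*02:01  
B*07:02  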

Peptide sequences in TSV format.

type: string

Instead of genomic variants, peptide sequences can be provided in a TSV file. In this case, MHC binding predictions will be made for the provided sequences. The TSV file must include the columns id and sequence; any additional columns will be added to the prediction output as annotation.
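
A minimal sketch of such a peptide file (the file name, the sequences and the extra sample column are illustrative; columns are separated by tabs):

## peptides.tsv  
id    sequence     sample  
1     GILGFVFTL    sample_01  
2     KLGGALQAK    sample_01  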

Protein sequences in FASTA format.

type: string

The output directory where the results will be saved.

type: string
default: ./results

Email address for completion summary.

type: string
pattern: ^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$

Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (~/.nextflow/config) then you don't need to specify this on the command line for every run.
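
For example, a minimal snippet in ~/.nextflow/config (the address is a placeholder):

## ~/.nextflow/config  
params.email = 'you@example.com'  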

Options for the genome and proteome reference.

Specifies the human reference genome version.

type: string

This defines the human reference genome against which the pipeline performs the analysis, e.g. for the incorporation of genetic variants.

Specifies the reference proteome.

type: string

Specifies the reference proteome files that are used for self-filtering. Should be either a folder of FASTA files or a single FASTA file containing the reference proteome(s).
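
For example, assuming the parameter is called --proteome (check the pipeline's parameter list for the exact name), it could point at a folder of FASTA files:

--proteome '/path/to/reference_proteomes/'  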

Options for the reference genome indices used to align reads.

Directory / URL base for iGenomes references.

hidden
type: string
default: s3://ngi-igenomes/igenomes/

Do not load the iGenomes reference config.

hidden
type: boolean

Do not load igenomes.config when running the pipeline. You may choose this option if you observe clashes between custom parameters and those supplied in igenomes.config.

Options for the peptide prediction step.

Filter against human proteome.

type: boolean

Specifies that peptides should be filtered against the specified human proteome references.

MHC class for prediction.

type: integer

Specifies whether the predictions should be done for MHC class I or class II.
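
For example, assuming the parameter is called --mhc_class, MHC class II predictions would be requested with:

--mhc_class 2  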

Specifies the maximum peptide length.

type: integer
default: 11

Specifies the maximum peptide length (not applied when --peptides is specified). Default: MHC class I: 11 aa, MHC class II: 16 aa

Specifies the minimum peptide length.

type: integer
default: 8

Specifies the minimum peptide length (not applied when --peptides is specified). Default: MHC class I: 8 aa, MHC class II: 15 aa
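
For example, to restrict MHC class I predictions to 9- and 10-mers, assuming the parameters are called --min_peptide_length and --max_peptide_length:

--min_peptide_length 9 --max_peptide_length 10  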

Specifies the prediction tool(s) to use.

type: string
default: syfpeithi

Specifies the tool(s) to use. Available tools are: syfpeithi, mhcflurry, mhcnuggets-class-1, mhcnuggets-class-2. Multiple tools can be combined in a comma-separated list.
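
For example, to run both SYFPEITHI and MHCflurry, assuming the parameter is called --tools:

--tools 'syfpeithi,mhcflurry'  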

Specifies whether wild-type sequences should be predicted.

type: boolean

Specifies whether wild-type sequences of mutated peptides should be predicted as well.

Specifies that sequences of proteins, affected by provided variants, will be written to a FASTA file.

type: boolean

Specifies that sequences of proteins that are affected by the provided genomic variants are written to a FASTA file. The resulting FASTA file will contain the wild-type and mutated protein sequences.

Writes out supported prediction models.

type: boolean

Writes out supported models. Does not run any predictions.

Options for optimising the pipeline run execution.

Specifies the maximum number of peptide chunks.

type: integer
default: 100

Used in combination with --peptides or --proteins. Maximum number of peptide chunks that will be created for parallelisation.

Specifies the minimum number of peptides that should be written into one chunk.

type: integer
default: 5000

Used in combination with --peptides or --proteins: minimum number of peptides that should be written into one chunk.
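
For example, assuming the parameters are called --peptides_split_maxchunks and --peptides_split_minchunksize, the chunking could be tuned with:

--peptides_split_maxchunks 50 --peptides_split_minchunksize 10000  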

Specifies which memory mode should be used.

type: string
default: low

Specifies which memory mode should be used for processes that require a bit more memory, e.g. when running on arbitrarily large protein or peptide input data. Available: low, intermediate, high (corresponding to a maximum of 7.GB, 40.GB, 500.GB). Default: low.
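
For example, assuming the parameter is called --mem_mode:

--mem_mode intermediate  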

Less common options for the pipeline, typically set in a config file.

Display help text.

hidden
type: boolean

Method used to save pipeline results to output directory.

hidden
type: string

The Nextflow publishDir option specifies which intermediate files should be saved to the output directory. This option tells the pipeline what method should be used to move these files. See Nextflow docs for details.

Workflow name.

hidden
type: string

A custom name for the pipeline run. Unlike the core Nextflow -name option (which takes a single hyphen), this parameter can be reused multiple times, for example when using -resume. It is passed through to steps such as MultiQC and used for report filenames and titles.

Email address for completion summary, only when pipeline fails.

hidden
type: string
pattern: ^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$

This works exactly as with --email, except emails are only sent if the workflow is not successful.

Send plain-text email instead of HTML.

hidden
type: boolean

Set to receive plain-text e-mails instead of HTML formatted.

File size limit when attaching MultiQC reports to summary emails.

hidden
type: string
default: 25.MB

If a file generated by the pipeline exceeds this threshold, it will not be attached.

Do not use coloured log outputs.

hidden
type: boolean

Set to disable colourful command line output and live life in monochrome.

Custom config file to supply to MultiQC.

hidden
type: string

Directory to keep pipeline Nextflow logs and reports.

hidden
type: string
default: ${params.outdir}/pipeline_info

Set the top limit for requested resources for any single job.

Maximum number of CPUs that can be requested for any single job.

hidden
type: integer
default: 16

Use to set an upper-limit for the CPU requirement for each process. Should be an integer e.g. --max_cpus 1

Maximum amount of memory that can be requested for any single job.

hidden
type: string
default: 128.GB

Use to set an upper-limit for the memory requirement for each process. Should be a string in the format integer-unit e.g. --max_memory '8.GB'

Maximum amount of time that can be requested for any single job.

hidden
type: string
default: 240.h

Use to set an upper-limit for the time requirement for each process. Should be a string in the format integer-unit e.g. --max_time '2.h'

Parameters used to describe centralised config profiles. These should not be edited.

Git commit id for Institutional configs.

hidden
type: string
default: master

Provide git commit id for custom Institutional configs hosted at nf-core/configs. This was implemented for reproducibility purposes. Default: master.

## Download and use config file with the following git commit id  
--custom_config_version d52db660777c4bf36546ddb188ec530c3ada1b96  

Base directory for Institutional configs.

hidden
type: string
default: https://raw.githubusercontent.com/nf-core/configs/master

If you're running offline, Nextflow will not be able to fetch the institutional config files from the internet. If you don't need them, then this is not a problem. If you do need them, you should download the files from the repo and tell Nextflow where to find them with the custom_config_base option. For example:

## Download and unzip the config files  
cd /path/to/my/configs  
wget https://github.com/nf-core/configs/archive/master.zip  
unzip master.zip  
  
## Run the pipeline  
cd /path/to/my/data  
nextflow run /path/to/pipeline/ --custom_config_base /path/to/my/configs/configs-master/  

Note that the nf-core/tools helper package has a download command that can fetch all required pipeline files, Singularity containers and institutional configs in one go, to make this process easier.

Institutional configs hostname.

hidden
type: string

Institutional config description.

hidden
type: string

Institutional config contact information.

hidden
type: string

Institutional config URL link.

hidden
type: string