Define where the pipeline should find input data and save output data.

URI/path to an SDRF file (.sdrf.tsv) OR OpenMS-style experimental design with paths to spectra files (.tsv)

required
type: string
pattern: ^\S+\.(?:tsv|sdrf)$

Input is specified as a path or URI to a Sample and Data Relationship Format (SDRF) file, e.g. as part of a submitted and
annotated PRIDE experiment (see here for examples). Input files will be downloaded and cached from the URIs specified in the SDRF file.
An OpenMS-style experimental design will be generated based on the factor columns of the SDRF. The settings for the
following parameters will currently be overwritten by the ones specified in the SDRF:

* `fixed_mods`
* `variable_mods`
* `precursor_mass_tolerance`
* `precursor_mass_tolerance_unit`
* `fragment_mass_tolerance`
* `fragment_mass_tolerance_unit`
* `fragment_method`
* `enzyme`

You can also specify an OpenMS-style experimental design directly (.tsv ending). In this case, the aforementioned parameters have to be specified or defaults will be used.
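
For illustration, a minimal sketch of a run starting from an SDRF could look like the following (the pipeline handle is a placeholder, and the parameter names `--input` and `--outdir` are assumed here following the usual nf-core convention; `--database` is documented below):

nextflow run <pipeline> -profile docker --input 'experiment.sdrf.tsv' --database 'proteome_with_contaminants.fasta' --outdir './results'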

The output directory where the results will be saved.

required
type: string
default: ./results

Email address for completion summary.

type: string
pattern: ^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$

Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (~/.nextflow/config) then you don't need to specify this on the command line for every run.

MultiQC report title. Printed as page header, used for filename if not otherwise specified.

type: string

Root folder in which the spectrum files specified in the SDRF/design are searched for

type: string

This optional parameter can be used to specify a root folder in which the spectrum files specified in the SDRF/design are searched for.
It is usually used if you already have a local copy of the experiment data. Note that this option does not support recursive
searching yet.

Overwrite the file type/extension of the filename as specified in the SDRF/design

type: string
default: mzML

If the above --root_folder was given to load local input files, this overwrites the file type/extension of
the filename as specified in the SDRF/design. Usually used in case you have an mzML-converted version of the files already. Needs to be
one of 'mzML' or 'raw' (the letter cases should match your files exactly).
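
As a sketch, if a local mzML-converted copy of the experiment exists, the two options could be combined as follows (the name of the file-type override parameter is assumed to be `--local_input_type` here; please check the pipeline schema):

--root_folder '/data/my_experiment/mzml' --local_input_type 'mzML'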

Proteomics data acquisition method

type: string

Settings that relate to the mandatory protein database and the optional generation of decoy entries. Note: Decoys for DIA will be created internally.

The fasta protein database used during database search. Note: For DIA data, it must not contain decoys.

required
type: string
pattern: ^\S+\.(?:fasta|fa)$

Since the database is not included in an SDRF, this parameter always needs to be given to specify the input protein database
when you run the pipeline. Remember to include contaminants (and decoys, if you are not in DIA mode and do not add them in the pipeline with --add_decoys).

--database '[path to fasta protein database]'  

Generate and append decoys to the given protein database

type: boolean

If decoys were not yet included in the input database, they have to be generated and appended by the OpenMS DecoyDatabase tool by adding this flag (TODO: allow specifying the generator type).
Default: pseudo-reverse peptides
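
For example, to search a target-only database (hypothetical file name) and let the pipeline generate and append decoys:

--database 'human_proteome_with_contaminants.fasta' --add_decoys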

Pre- or suffix of decoy proteins in their accession

type: string
default: DECOY_

If --add_decoys was set, this setting is used during generation and passed to all tools that need decoy information.
If decoys were appended to the database externally, this setting needs to match the affix that was used. (While OpenMS tools can infer the affix automatically, some third-party tools might not.)
Typical values are 'rev', 'decoy', 'dec'. Look for them in your database.

Location of the decoy marker string in the fasta accession. Before (prefix) or after (suffix)

type: string
default: prefix

Prefix is highly recommended. Only change this parameter to suffix if an external tool marked decoys with a suffix, e.g. sp|Q12345|ProteinA_DECOY.

Choose the method to produce decoys from input target database.

type: string

Maximum nr. of attempts to lower the amino acid sequence identity between target and decoy for the shuffle algorithm

type: integer
default: 30

Target-decoy amino acid sequence identity threshold for the shuffle algorithm. If the sequence identity is above this threshold, shuffling is repeated. In case of repeated failure, individual amino acids are 'mutated' to produce a different amino acid sequence.

type: number
default: 0.5

Debug level for DecoyDatabase step. Increase for verbose logging.

hidden
type: integer

In case you start from profile-mode mzMLs, or if the internal preprocessing during conversion with ThermoRawFileParser fails (e.g. due to new instrument types), preprocessing has to be performed with OpenMS. Use this section to configure it.

Activate OpenMS-internal peak picking

type: boolean

Activate OpenMS-internal peak picking with the tool PeakPickerHiRes. Skips already picked spectra.

Perform peakpicking in memory

type: boolean

Perform peakpicking in memory. Use only if problems occur.

Which MS levels to pick as comma separated list. Leave empty for auto-detection.

type: string

Which MS levels to pick as comma separated list, e.g. --peakpicking_ms_levels 1,2. Leave empty for auto-detection.

Convert Bruker .d files to mzML

type: boolean

Whether to convert raw Bruker .d files to .mzML

Force initial re-indexing of input mzML files. Also fixes some common mistakes in slightly incomplete/outdated mzMLs. (Default: true for safety)

type: boolean
default: true

Force re-indexing in the beginning of the pipeline to make sure that indices are up-to-date and to avoid redundant indexing on-demand in steps that require an index (e.g., Comet).

A comma separated list of search engines to use (and combine). Valid: comet, msgf, sage

type: string
default: comet

A comma-separated list of search engines to run in parallel on each mzML file. Currently supported: comet, msgf and sage (default: comet).
If more than one search engine is given, results are combined based on posterior error probabilities (see the different types
of estimation procedures under --posterior_probabilities). Combination is done with
ConsensusID.
See also its corresponding --consensusid_algorithm parameter for different combination strategies.
Combinations may profit from an increased --num_hits parameter.
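
A sketch of combining two engines (the flag name `--search_engines` is assumed from the parameter description above; `--num_hits` and `--consensusid_algorithm` are documented elsewhere in this section):

--search_engines 'comet,msgf' --num_hits 5 --consensusid_algorithm 'PEPMatrix'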

Number of sage processes to be spawned.

type: integer
default: 1

Since sage's runtime benefits from building an index only once per database and processing files in parallel, you can choose the number of sage processes to be spawned here. Input mzMLs will be distributed equally among them in arbitrary order.

The enzyme to be used for in-silico digestion, in 'OpenMS format'

type: string
default: Trypsin

Specify which enzymatic restriction should be applied, e.g. 'unspecific cleavage', 'Trypsin' (default); see the OpenMS
enzymes. Note: MSGF does not support extended
cutting rules, as used by default with Trypsin, i.e. if you specify Trypsin with MSGF, it will automatically be converted to
Trypsin/P ('Trypsin without proline rule').
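
For example, to switch off enzymatic restriction entirely (assuming the command-line flag matches the `enzyme` parameter name listed in the SDRF override section above):

--enzyme 'unspecific cleavage'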

Specify the amount of termini matching the enzyme cutting rules for a peptide to be considered. Valid values are fully (default), semi, or none

type: string

Warning: not supported by sage yet.

Specify the maximum number of allowed missed enzyme cleavages in a peptide. The parameter is not applied if unspecific cleavage is specified as enzyme.

type: integer
default: 2

Precursor mass tolerance used for database search. For High-Resolution instruments a precursor mass tolerance value of 5 ppm is recommended (i.e. 5). See also --precursor_mass_tolerance_unit.

type: integer
default: 5

Precursor mass tolerance unit used for database search. Possible values are 'ppm' (default) and 'Da'.

type: string

Fragment mass tolerance used for database search. The default of 0.03 Da is for high-resolution instruments.

type: number
default: 0.03

Caution: for Comet we are estimating the fragment_bin_tolerance parameter based on this automatically.

Fragment mass tolerance unit used for database search. Possible values are 'ppm' (default) and 'Da'.

type: string

Caution: for Comet we are estimating the fragment_bin_tolerance parameter based on this automatically.
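
Putting the four tolerance settings together (flag names taken from the SDRF override list above), a high-resolution setup could be sketched as:

--precursor_mass_tolerance 10 --precursor_mass_tolerance_unit 'ppm' --fragment_mass_tolerance 0.02 --fragment_mass_tolerance_unit 'Da'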

A comma-separated list of fixed modifications with their Unimod name to be searched during database search

type: string
default: Carbamidomethyl (C)

Specify which fixed modifications should be applied to the database search (e.g. '' or 'Carbamidomethyl (C)'; see Unimod modifications
in the style '{unimod name} ({optional term specificity} {optional origin})').
All possible modifications can be found in the restrictions mentioned in the command-line documentation of e.g. CometAdapter (scroll down a bit for the complete set).
Multiple fixed modifications can be specified comma-separated (e.g. 'Carbamidomethyl (C),Oxidation (M)').
Fixed modifications need to be found at every matching amino acid for a peptide to be reported.

A comma-separated list of variable modifications with their Unimod name to be searched during database search

type: string
default: Oxidation (M)

Specify which variable modifications should be applied to the database search (e.g. '' or 'Oxidation (M)'; see Unimod modifications
in the style '{unimod name} ({optional term specificity} {optional origin})').
All possible modifications can be found in the restrictions mentioned in the command-line documentation of e.g. CometAdapter (scroll down a bit for the complete set).
Multiple variable modifications can be specified comma-separated (e.g. 'Carbamidomethyl (C),Oxidation (M)').
Variable modifications may or may not be found at matching amino acids for a peptide to be reported.
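
For example, a search with carbamidomethylation as fixed and oxidation plus phosphorylation as variable modifications could be sketched as (flag names from the SDRF override list above; quoting is needed because of spaces and parentheses):

--fixed_mods 'Carbamidomethyl (C)' --variable_mods 'Oxidation (M),Phospho (S)'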

The fragmentation method used during tandem MS. (MS/MS or MS2).

hidden
type: string
default: HCD

Currently unsupported. Defaults to ALL for Comet and from_spectrum for MSGF. Should be a sensible default for 99% of the cases.

Comma-separated range of integers with allowed isotope peak errors for precursor tolerance (like MS-GF+ parameter '-ti'). E.g. -1,3

type: string
default: 0,1

Range of integers with allowed isotope peak errors (like MS-GF+ parameter '-ti'). Takes into account the error introduced by choosing a non-monoisotopic peak for fragmentation. Combined with 'precursor_mass_tolerance'/'precursor_error_units', this determines the actual precursor mass tolerance. E.g. for experimental mass 'exp' and calculated mass 'calc', '-precursor_mass_tolerance 20 -precursor_error_units ppm -isotope_error_range -1,2' tests '|exp - calc - n * 1.00335 Da| < 20 ppm' for n = -1, 0, 1, 2.

Type of instrument that generated the data. 'low_res' (refers to LCQ and LTQ instruments) or 'high_res' (default)

type: string
default: high_res

MSGF only: Labeling or enrichment protocol used, if any. Default: automatic

type: string
default: automatic

Minimum precursor ion charge. Omit the '+'

type: integer
default: 2

Maximum precursor ion charge. Omit the '+'

type: integer
default: 4

Minimum peptide length to consider (works with MSGF and in newer Comet versions)

type: integer
default: 6

Maximum peptide length to consider (works with MSGF and in newer Comet versions)

type: integer
default: 40

Specify the maximum number of top peptide candidates per spectrum to be reported by the search engine. Default: 1

type: integer
default: 1

Maximum number of modifications per peptide. If this value is large, the search may take very long.

type: integer
default: 3

The minimum precursor m/z for the in silico library generation or library-free search

type: number

The maximum precursor m/z for the in silico library generation or library-free search

type: number

The minimum fragment m/z for the in silico library generation or library-free search

type: number

The maximum fragment m/z for the in silico library generation or library-free search

type: number

Debug level when running the database search. Logs become more verbose and at '>5' temporary files are kept.

hidden
type: integer

Settings for calculating a localization probability with LucXor for modifications with multiple candidate amino acids in a peptide.

Turn modification localization with LucXor on.

type: boolean

Which variable modifications to use for scoring their localization.

type: string
default: Phospho (S),Phospho (T),Phospho (Y)

List of neutral losses to consider for mod. localization.

hidden
type: string

List the types of neutral losses that you want to consider. The residue field is case sensitive. For example: lower case 'sty' implies that the neutral loss can only occur if the specified modification is present.
Syntax: 'NL = <RESIDUES> -<NEUTRAL_LOSS_MOLECULAR_FORMULA> <MASS_LOST>'
(default: '[sty -H3PO4 -97.97690]')

How much to add to an amino acid to make it a decoy for mod. localization.

hidden
type: number

List of neutral losses to consider for mod. localization from an internally generated decoy sequence.

hidden
type: string

For handling the neutral loss from a decoy sequence. The syntax for this is identical to that of the normal neutral losses given above except that the residue is always 'X'. Syntax: DECOY_NL = X -<NEUTRAL_LOSS_MOLECULAR_FORMULA> <MASS_LOST> (default: '[X -H3PO4 -97.97690]')

Debug level for Luciphor step. Increase for verbose logging and keeping temp files.

hidden
type: integer

What to do when peptides are found that do not follow a unified set of rules (since search engines sometimes differ in their interpretation of them).

type: string

Should isoleucine and leucine be treated interchangeably when mapping search engine hits to the database? Default: true

type: boolean
default: true

Choose between different rescoring/posterior probability calculation methods and set them up.

How to calculate posterior probabilities for PSMs:

  • 'percolator' = Re-score based on PSM-feature-based SVM and transform distance
    to hyperplane for posteriors
  • 'fit_distributions' = Fit positive and negative distributions to scores
    (similar to PeptideProphet)
type: string
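
For example, to use distribution fitting instead of Percolator:

--posterior_probabilities 'fit_distributions'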

FDR cutoff on PSM level (or peptide level; see Percolator options) per run before going into feature finding, map alignment and inference. This can be seen as a pre-filter.

type: number
default: 0.01

Debug level when running the IDFilter step. Increase for verbose logging

hidden
type: integer

Debug level when running the re-scoring. Logs become more verbose and at '>5' temporary files are kept.

hidden
type: integer

Debug level when running the re-scoring. Logs become more verbose and at '>5' temporary files are kept.

hidden
type: integer

In the following you can find help for the Percolator-specific options that are only used if [`--posterior_probabilities`](#posterior_probabilities) was set to 'percolator'.

Note that there are currently some restrictions to the original options of Percolator:

* no Percolator protein FDR possible (currently OpenMS' FDR is used on protein level)
* no support for separate target and decoy databases (i.e. no min-max q-value calculation or target-decoy competition strategy)
* no support for combined or experiment-wide peptide re-scoring; currently, search results per input file are submitted to Percolator independently

Calculate FDR on PSM ('psm-level-fdrs') or peptide level ('peptide-level-fdrs')?

type: string

The FDR cutoff to be used during training of the SVM.

type: number
default: 0.05

The FDR cutoff to be used during testing of the SVM.

type: number
default: 0.05

Only train an SVM on a subset of PSMs, and use the resulting score vector to evaluate the other PSMs. Recommended when analyzing huge numbers (>1 million) of PSMs. When set to 0, all PSMs are used for training as normal. This is a runtime vs. quality tradeoff. Default: 300,000

type: integer
default: 300000

Retention time features are calculated as in Klammer et al. instead of with Elude. Default: false

hidden
type: boolean

Use additional features whose values are learnt from correct entries. See help text. Default: 0 = none

type: integer

Percolator provides the possibility to use so-called description of correct features, i.e. features for which desirable values are learnt from the previously identified target PSMs. The absolute value of the difference between the desired and the observed value is then used as a predictive feature.

1 -> iso-electric point

2 -> mass calibration

4 -> retention time

8 -> delta_retention_time * delta_mass_calibration
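
The values act as a bit mask and can be summed to combine several features; e.g. 6 = 2 + 4 enables mass calibration and retention time. A sketch (the flag name `--description_correct_features` is an assumption; check the pipeline schema before use):

--description_correct_features 6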

Debug level for Percolator step. Increase for verbose logging

hidden
type: integer

Use this instead of Percolator if there are problems with Percolator (e.g. due to bad separation) or for performance reasons

How to handle outliers during fitting:

  • ignore_iqr_outliers (default): ignore outliers outside of 3*IQR from Q1/Q3 for fitting
  • set_iqr_to_closest_valid: set IQR-based outliers to the last valid value for fitting
  • ignore_extreme_percentiles: ignore everything outside 99th and 1st percentile (also removes equal values like potential censored max values in XTandem)
  • none: do nothing
type: string

Debug level for IDPEP step. Increase for verbose logging

hidden
type: integer

How to combine the probabilities from the single search engines: best, combine using a sequence similarity-matrix (PEPMatrix), combine using shared ion count of peptides (PEPIons). See help for further info.

type: string

Specifies how search engine results are combined: ConsensusID offers several algorithms that can aggregate results from multiple peptide identification engines ('search engines') into consensus identifications - typically one per MS2 spectrum. This works especially well for search engines that provide more than one peptide hit per spectrum, i.e. that report not just the best hit, but also a list of runner-up candidates with corresponding scores.

The available algorithms are:

  • PEPMatrix: Scoring based on posterior error probabilities (PEPs) and peptide sequence similarities. This algorithm uses a substitution matrix to score the similarity of sequences not listed by all search engines. It requires PEPs as the scores for all peptide hits.
  • PEPIons: Scoring based on posterior error probabilities (PEPs) and fragment ion similarities ('shared peak count'). This algorithm, too, requires PEPs as scores.
  • best: For each peptide ID, this uses the best score of any search engine as the consensus score.
  • worst: For each peptide ID, this uses the worst score of any search engine as the consensus score.
  • average: For each peptide ID, this uses the average score of all search engines as the consensus score.
  • ranks: Calculates a consensus score based on the ranks of peptide IDs in the results of different search engines. The final score is in the range (0, 1], with 1 being the best score.

To make scores comparable, for best, worst and average, PEPs are used as well. Peptide IDs are only considered the same if they map to exactly the same sequence (including modifications and their localization). Also, isobaric amino acids are (for now) only considered equal with the PEPMatrix/PEPIons algorithms.
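
For example, to combine results via sequence similarity while considering more runner-up hits per engine (both flags are referenced in this section):

--consensusid_algorithm 'PEPMatrix' --consensusid_considered_top_hits 5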

Only use the top N hits per search engine and spectrum for combination. Default: 0 = all

type: integer

Limits the number of alternative peptide hits considered per spectrum/feature for each identification run. This helps to reduce runtime, especially for the PEPMatrix and PEPIons algorithms, which involve costly 'all vs. all' comparisons of peptide hits per spectrum across engines.

A threshold for the ratio of occurrence/similarity scores of a peptide in other runs, to be reported. See help.

type: number

This allows filtering of peptide hits based on agreement between search engines. Every peptide sequence in the analysis has been identified by at least one search run. This parameter defines which fraction (between 0 and 1) of the remaining search runs must 'support' a peptide identification that should be kept.

The meaning of 'support' differs slightly between algorithms: For best, worst, average and rank, each search run supports peptides that it has also identified among its top consensusid_considered_top_hits candidates. So min_consensus_support simply gives the fraction of additional search engines that must have identified a peptide. (For example, if there are three search runs, and only peptides identified by at least two of them should be kept, set min_consensus_support to 0.5.) For the similarity-based algorithms PEPMatrix and PEPIons, the 'support' for a peptide is the average similarity of the most-similar peptide from each (other) search run. (In the context of the JPR publication, this is the average of the similarity scores used in the consensus score calculation for a peptide.)

Note: For most of the subsequent algorithms, only the best identification per spectrum is used.

Debug level for ConsensusID. Increase for verbose logging

hidden
type: integer

Extracts and normalizes labeling information

Operate only on MSn scans where any of their precursors features a certain activation method. Set to empty to disable.

type: string

Allowed shift (left to right) in Th from the expected position

type: number
default: 0.002

Minimum intensity of the precursor to be extracted

type: number
default: 1

Minimum fraction of the total intensity. 0.0:1.0

type: number

Minimum fraction of the total intensity in the isolation window of the precursor spectrum

Minimum intensity of the individual reporter ions to be extracted.

type: number

Maximum allowed deviation (in ppm) between theoretical and observed isotopic peaks of the precursor peak

type: number
default: 10

Enable isotope correction (highly recommended)

type: boolean
default: true

Enable normalization of the channel intensities

type: boolean

The normalization is done using the median of ratios. Additionally, the ratio of medians is provided as a control measure.

The reference channel, e.g. for calculating ratios.

type: integer
default: 126

Set the debug level

hidden
type: integer

Assigns protein/peptide identifications to features or consensus features. Here, features generated from isobaric reporter intensities of fragment spectra.

Debug level for IDMapper step. Increase for verbose logging

hidden
type: integer

Settings to group proteins, calculate scores on the protein (group) level and potentially modify associations from peptides to proteins.

The inference method to use. 'aggregation' (default) or 'bayesian'.

type: string

Infer proteins through:

  • 'aggregation' = aggregates all peptide scores across a protein (by calculating the maximum) (default)
  • 'bayesian' = compute a posterior probability for every protein based on a Bayesian network (i.e. using Epifany)
  • ('percolator' not yet supported)

Note: Whether protein grouping is performed also depends on the protein_quant parameter (i.e. if peptides have to be unique or only unique to a group)
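
A sketch of switching to Bayesian inference (the flag name `--protein_inference_method` is an assumption, as the schema name is not spelled out above; verify it before use):

--protein_inference_method 'bayesian'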

[Ignored in Bayesian] How to aggregate scores of peptides matching to the same protein

type: string

[Ignored in Bayesian] Also use shared peptides during score aggregation to protein level

type: boolean
default: true

[Ignored in Bayesian] Minimum number of peptides needed for a protein identification

type: integer
default: 1

Consider only the top X PSMs per spectrum to find the best PSM per peptide. 0 considers all.

type: integer
default: 1

[Bayesian-only; Experimental] Update PSM probabilities with their posteriors under consideration of the protein probabilities.

hidden
type: boolean

The experiment-wide protein (group)-level FDR cutoff. Default: 0.01

type: number
default: 0.01

This can be protein level if 'strictly_unique_peptides' are used for protein quantification. See --protein_quant

Use picked protein FDRs

type: boolean
default: true

The experiment-wide PSM-level FDR cutoff. Default: 0.01

type: number
default: 0.01

After applying protein-level FDR cutoff, this additionally filters PSMs to be used for quantification and reporting.

Debug level for the protein inference step. Increase for verbose logging

hidden
type: integer

General protein quantification settings for both LFQ and isobaric labelling.

Specify the labelling method that was used. Will be ignored if SDRF was given but is mandatory otherwise

type: string

Quantification method used in the experiment.

Calculate protein abundance from this number of proteotypic peptides (most abundant first; '0' for all, Default 3)

type: integer
default: 3

Averaging method used to compute protein abundances from peptide abundances.

type: string

Distinguish between fraction and charge states of a peptide. (default: 'false')

type: boolean

Add the log2 ratios of the abundance values to the output.

type: boolean
default: false

Scale peptide abundances so that medians of all samples are equal. (Default: false)

type: boolean
default: false

Use the same peptides for protein quantification across all samples. (Default: false)

type: boolean
default: false

Include results for proteins with fewer proteotypic peptides than indicated by 'top'.

type: boolean
default: true

Quantify proteins based on:

  • 'unique_peptides' = use peptides mapping to single proteins or a group of indistinguishable proteins (according to the set of experimentally identified peptides)
  • 'strictly_unique_peptides' (only LFQ) = use peptides mapping to a unique single protein only
  • 'shared_peptides' = use shared peptides, too, but only greedily for its best group (by inference score and nr. of peptides)
type: string
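
For example, to quantify only peptides that map to a single protein (the `--protein_quant` flag is also referenced in the FDR section above):

--protein_quant 'strictly_unique_peptides'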

Export the results in mzTab format.

type: boolean
default: true

Choose between feature-based quantification based on integrated MS1 signals ('feature_intensity'; default) or spectral counting of PSMs ('spectral_counting'). WARNING: 'spectral_counting' is not compatible with our MSstats step yet. MSstats will therefore be disabled automatically with that choice.

type: string

Recalibrates masses based on precursor mass deviations to correct for instrument biases. (default: 'false')

type: boolean

Only looks for quantifiable features at locations with an identified spectrum. Set to false to include unidentified features so they can be linked and matched to identified ones (= match between runs). (default: 'true')

type: boolean
default: true

The minimum probability (e.g.: 0.25) an identified (=id targeted) feature must have to be kept for alignment and linking (0=no filter).

type: number

The minimum probability (e.g.: 0.25) an identified (=id targeted) feature must have to be kept for alignment and linking (0=no filter). (default: '0.0') (min: '0.0' max: '1.0')

The minimum probability (e.g.: 0.75) an unidentified feature must have to be kept for alignment and linking (0=no filter).

type: number

The minimum probability (e.g.: 0.75) an unidentified feature must have to be kept for alignment and linking (0=no filter). (default: '0.0') (min: '0.0' max: '1.0')

The minimum intensity for a feature to be considered for quantification. (default: '10000')

type: number
default: 10000

The minimum intensity for a feature to be considered for quantification. (default: '10000')

The order in which maps are aligned. Star = all vs. the reference with most IDs (default). TreeGuided = an alignment tree is calculated first based on similarity measures of the IDs in the maps.

type: string

Also quantify decoys? (Usually only needed for Triqler post-processing output with --add_triqler_output, where it is auto-enabled)

type: boolean

Debug level when running the re-scoring. Logs become more verbose and at '>666' potentially very large temporary files are kept.

hidden
type: integer

Settings for DIA-NN - a universal software for data-independent acquisition (DIA) proteomics data processing.

Choose the MS2 mass accuracy setting automatically

type: boolean
default: true

Choose the scan_window setting automatically

type: boolean
default: true

Set the scan window radius to a specific value

type: integer
default: 7

Ideally, should be approximately equal to the average number of data points per peak

Only peaks with correlation sum exceeding min_corr will be considered

type: number
default: 2

Peaks with correlation sum below corr_diff from maximum will not be considered

type: number
default: 1

A single score will be used until RT alignment to save memory

type: boolean
default: true

Controls the protein inference mode

type: number

Instructs DIA-NN to add the organism identifier to the gene names

type: boolean

The spectral library to use for DIA-NN

type: string

If passed, will use that spectral library to carry out the DIA-NN search, instead of predicting one from the fasta file.

Debug level

hidden
type: integer

Enable cross-run normalization between runs by DIA-NN.

type: boolean
default: true

Skip MSstats/MSstatsTMT for statistical post-processing?

type: boolean

Experimental: Instead of all pairwise contrasts (default), uses the given condition name/number (corresponding to your experimental design) as a reference and creates pairwise contrasts against it.

type: string

Experimental: Allows full control over contrasts by specifying a set of contrasts in a semicolon separated list of R-compatible contrasts with the condition names/numbers as variables (e.g. 1-2;1-3;2-3). Overwrites --ref_condition.

type: string
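
As a sketch, either compare all conditions against a reference or spell out the contrasts explicitly (the flag name `--contrasts` is an assumption derived from this description; `--ref_condition` is documented above):

--ref_condition '1'
--contrasts '1-2;1-3;2-3'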

The threshold value for differential expressed proteins in MSstats plots based on adjusted p-value

type: number
default: 0.05

Also create an output in Triqler's format for an alternative manual post-processing with that tool

type: boolean

Which features to use for quantification per protein: 'top3' or 'highQuality' which removes outliers only

type: string

Which summary method to use: 'TMP' (Tukey's median polish) or 'linear' (linear mixed model)

type: string

Omit proteins with only one quantified feature?

type: boolean
default: true

Keep features with only one or two measurements across runs?

type: boolean
default: true

Use unique peptides for each protein

type: boolean
default: true

Remove the features that have 1 or 2 measurements within each run

type: boolean
default: true

Select the feature with the largest summation or maximal value

type: string

Summarization method to use for aggregating to the protein level

type: string

Reference channel based normalization between MS runs on protein level data?

type: boolean
default: true

Remove 'Norm' channels from protein level data

type: boolean
default: true

Reference channel based normalization between MS runs on protein level data

type: boolean
default: true

Export MSstats profile QC plots including all proteins

type: boolean

Enable generation of pmultiqc report? default: 'false'

type: boolean

Skip idXML files (do not generate search engine scores) in pmultiqc report? default: 'true'

type: boolean

Parameters used to describe centralised config profiles. These should not be edited.

Git commit id for Institutional configs.

hidden
type: string
default: master

Base directory for Institutional configs.

hidden
type: string
default: https://raw.githubusercontent.com/nf-core/configs/master

If you're running offline, Nextflow will not be able to fetch the institutional config files from the internet. If you don't need them, then this is not a problem. If you do need them, you should download the files from the repo and tell Nextflow where to find them with this parameter.

Institutional config name.

hidden
type: string

Institutional config description.

hidden
type: string

Institutional config contact information.

hidden
type: string

Institutional config URL link.

hidden
type: string

Set the top limit for requested resources for any single job.

Maximum number of CPUs that can be requested for any single job.

hidden
type: integer
default: 16

Use to set an upper-limit for the CPU requirement for each process. Should be an integer e.g. --max_cpus 1

Maximum amount of memory that can be requested for any single job.

hidden
type: string
default: 128.GB
pattern: ^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$

Use to set an upper-limit for the memory requirement for each process. Should be a string in the format integer-unit e.g. --max_memory '8.GB'

Maximum amount of time that can be requested for any single job.

hidden
type: string
default: 240.h
pattern: ^(\d+\.?\s*(s|m|h|d|day)\s*)+$

Use to set an upper-limit for the time requirement for each process. Should be a string in the format integer-unit e.g. --max_time '2.h'
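
For example, to cap resource requests on a smaller machine (all three flags are described above):

--max_cpus 8 --max_memory '32.GB' --max_time '48.h'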

Less common options for the pipeline, typically set in a config file.

Display help text.

hidden
type: boolean

Display version and exit.

hidden
type: boolean

Method used to save pipeline results to output directory.

hidden
type: string

The Nextflow publishDir option specifies which intermediate files should be saved to the output directory. This option tells the pipeline what method should be used to move these files. See Nextflow docs for details.

Email address for completion summary, only when pipeline fails.

hidden
type: string
pattern: ^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$

An email address to send a summary email to when the pipeline is completed - ONLY sent if the pipeline does not exit successfully.

Send plain-text email instead of HTML.

hidden
type: boolean

File size limit when attaching MultiQC reports to summary emails.

hidden
type: string
default: 25.MB
pattern: ^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$

Do not use coloured log outputs.

hidden
type: boolean

Incoming hook URL for messaging service

hidden
type: string

Incoming hook URL for messaging service. Currently, MS Teams and Slack are supported.

Custom config file to supply to MultiQC.

hidden
type: string

Skip protein/peptide table plots with pmultiqc for large dataset.

type: boolean

Custom logo file to supply to MultiQC. File name must also be set in the MultiQC config file

hidden
type: string

Custom MultiQC yaml file containing HTML including a methods description.

type: string

Boolean whether to validate parameters against the schema at runtime

hidden
type: boolean
default: true

Show all params when using --help

hidden
type: boolean

By default, parameters set as hidden in the schema are not shown on the command line when a user runs with --help. Specifying this option will tell the pipeline to show all parameters.

Validation of parameters fails when an unrecognised parameter is found.

hidden
type: boolean

By default, when an unrecognised parameter is found, a warning is returned.

Validation of parameters in lenient mode.

hidden
type: boolean

Allows string values that are parseable as numbers or booleans. For further information see JSONSchema docs.