Pipeline: nf-core/proteomicslfq (dev)

Launch ID: 1711725927_55540fac3e42

Go through the pipeline inputs below, setting them to the values that you would like. When you're done, click Launch and your parameters will be saved.

The next page will show a command that you can use to launch the workflow directly. If you are running on a system with no internet connection, you can copy the parameters JSON to a file and use the supplied command to launch.

Nextflow command-line flags

General Nextflow flags to control how the pipeline runs.

These are not specific to the pipeline and will not be saved in any parameter file. They are just used when building the `nextflow run` launch command.
Must match pattern ^[a-zA-Z0-9-_]+$

Unique name for this nextflow run

Configuration profile

Work directory for intermediate files

Resume previous run, if found

Execute the script using the cached results; useful to continue executions that were stopped by an error.

Input/output options

Define where the pipeline should find input data and save output data.

This parameter is required

URI/path to an SDRF file (ending in .sdrf or .sdrf.tsv) OR a tab-separated experimental design file (.tsv) in OpenMS' own format. All input files need to be specified with full paths in the corresponding columns. These can be any URIs or local paths with schemata supported by Nextflow (e.g. http/ftp/s3).

The input to the pipeline can be specified in two mutually exclusive ways:

  • using a path or URI to a PRIDE Sample to Data Relation Format file (SDRF), e.g. as part of a submitted and annotated PRIDE experiment (see here for examples). An OpenMS-style experimental design will be generated based on the factor columns of the SDRF. The settings for the following parameters will currently be overwritten by the ones specified in the SDRF:

    • fixed_mods,
    • variable_mods,
    • precursor_mass_tolerance,
    • precursor_mass_tolerance_unit,
    • fragment_mass_tolerance,
    • fragment_mass_tolerance_unit,
    • fragment_method,
    • enzyme
  • by specifying a tab-separated experimental design file in OpenMS' own format. In this case, setting additional parameters is recommended. All input file paths/URIs will be used to download and cache inputs if necessary (i.e. if remote files were used).
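Taken together, a minimal launch command for the SDRF route could look like the following sketch (the `--input` and `--database` parameter names are assumptions based on this page's descriptions; all paths are placeholders):

```bash
# Minimal SDRF-based launch (sketch; parameter names assumed, paths are
# placeholders). The OpenMS experimental design is generated from the
# factor columns of the SDRF.
nextflow run nf-core/proteomicslfq -r dev -profile docker \
    --input 'experiment.sdrf.tsv' \
    --database 'uniprot_plus_contaminants.fasta'
```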

The output directory where the results will be saved.

Must match pattern ^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$

Email address for completion summary.

Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (~/.nextflow/config) then you don't need to specify this on the command line for every run.
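If you prefer the config-file route, a one-line sketch (assuming the pipeline reads nf-core's conventional `params.email` setting) would be:

```bash
# Append the e-mail setting to the user config file so every run sends a
# completion summary (assumes the conventional params.email setting).
echo "params.email = 'you@example.org'" >> ~/.nextflow/config
```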

Staging of files

Allows overwriting the origins and types of input files as specified in the input design/SDRF.

Root folder in which the spectrum files specified in the design/SDRF are searched

This optional parameter can be used to specify a root folder in which the spectrum files specified in the design/SDRF are searched. It is usually used if you have a local version of the experiment already. Note that this option does not support recursive searching yet.

Overwrite the file type/extension of the filename as specified in the SDRF

If the above --root_folder was given to load local input files, this overwrites the file type/extension of the filename as specified in the design/SDRF. Usually used in case you have an mzML-converted version of the files already. Needs to be one of 'mzML' or 'raw' (the letter cases should match your files exactly).
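For example, pointing the pipeline at local mzML copies of the SDRF's spectrum files might look like this sketch (`--root_folder` is documented above; `--local_input_type` is an assumed name for the file-type override; paths are placeholders):

```bash
# Use local mzML copies instead of downloading the files listed in the SDRF.
# --root_folder is described above; --local_input_type is an assumed
# parameter name for the extension override; paths are placeholders.
nextflow run nf-core/proteomicslfq -r dev -profile docker \
    --input 'experiment.sdrf.tsv' \
    --database 'database.fasta' \
    --root_folder '/data/local_mzml' \
    --local_input_type 'mzML'
```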

Protein database

Settings that relate to the mandatory protein database and the optional generation of decoy entries.

This parameter is required

The fasta protein database used during database search.

Since the database is not included in an SDRF, this parameter always needs to be given to specify the input protein database when you run the pipeline. Remember to include contaminants (and decoys, if they are not added in the pipeline with --add_decoys).

--database '[path to Fasta protein database]'

Generate and append decoys to the given protein database

If decoys were not yet included in the input database, they have to be appended by the OpenMS DecoyGenerator by adding this flag (TODO allow specifying type). Default: pseudo-reverse peptides.

Pre- or suffix of decoy proteins in their accession

If --add_decoys was set, this setting is used during generation and passed to all tools that need decoy information. If decoys were appended to the database externally, this setting needs to match the affix that was used. (While OpenMS tools can infer the affix automatically, some third-party tools might not.) Typical values are 'rev', 'decoy', 'dec'. Look for them in your database.

Location of the decoy marker string in the fasta accession. Before (prefix) or after (suffix)

Prefix is highly recommended. Change this parameter to 'suffix' only if an external tool marked decoys with a suffix, e.g. sp|Q12345|ProteinA_DECOY.
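Putting the decoy settings together, a sketch of a launch that appends prefixed pseudo-reverse decoys (the `--decoy_affix` and `--affix_type` names are assumptions for the two affix settings described above):

```bash
# Append pseudo-reverse decoys to a target-only database and mark them with
# a prefix. --add_decoys is documented above; --decoy_affix and --affix_type
# are assumed names for the affix string and its location.
nextflow run nf-core/proteomicslfq -r dev -profile docker \
    --input 'experiment.sdrf.tsv' \
    --database 'targets_only.fasta' \
    --add_decoys \
    --decoy_affix 'DECOY_' \
    --affix_type 'prefix'
```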

Choose the method to produce decoys from the input target database.

Must be an integer

Maximum number of attempts to lower the amino acid sequence identity between target and decoy for the shuffle algorithm.

Target-decoy amino acid sequence identity threshold for the shuffle algorithm. If the sequence identity is above this threshold, shuffling is repeated. In case of repeated failure, individual amino acids are 'mutated' to produce a different amino acid sequence.

Spectrum preprocessing

In case you start from profile mode mzMLs or the internal preprocessing during conversion with the ThermoRawFileParser fails (e.g. due to new instrument types), preprocessing has to be performed with OpenMS. Use this section to configure.

Activate OpenMS-internal peak picking

Activate OpenMS-internal peak picking with the tool PeakPickerHiRes. Skips already picked spectra.

Perform peakpicking in memory

Perform peakpicking in memory. Use only if problems occur.

Which MS levels to pick as comma separated list. Leave empty for auto-detection.

Which MS levels to pick as comma separated list, e.g. --peakpicking_ms_levels 1,2. Leave empty for auto-detection.

Modification localization

Settings for calculating a localization probability with LucXor for modifications with multiple candidate amino acids in a peptide.

Turn the mechanism on.

Which variable modifications to use for scoring their localization.

Peptide re-indexing

Should isoleucine and leucine be treated interchangeably when mapping search engine hits to the database? Default: true

PSM re-scoring (general)

Choose between different rescoring/posterior probability calculation methods and set them up.

How to calculate posterior probabilities for PSMs:

  • 'percolator' = Re-score based on PSM-feature-based SVM and transform distance to hyperplane for posteriors
  • 'fit_distributions' = Fit positive and negative distributions to scores (similar to PeptideProphet)

FDR cutoff on PSM level (or potential peptide level; see Percolator options) before going into feature finding, map alignment and inference.

Must be an integer

Debug level when running the re-scoring. Logs become more verbose and at '>5' temporary files are kept.

PSM re-scoring (Percolator)

In the following you can find help for the Percolator-specific options that are only used if [`--posterior_probabilities`](#--posterior_probabilities) was set to 'percolator'. Note that there are currently some restrictions compared to the original options of Percolator:

  • no Percolator protein FDR is possible (currently OpenMS' FDR is used on the protein level)
  • no support for separate target and decoy databases (i.e. no min-max q-value calculation or target-decoy competition strategy)
  • no support for combined or experiment-wide peptide re-scoring; currently search results per input file are submitted to Percolator independently

Calculate FDR on PSM ('psm-level-fdrs') or peptide level ('peptide-level-fdrs')?

The FDR cutoff to be used during training of the SVM.

The FDR cutoff to be used during testing of the SVM.

Must be an integer

Only train an SVM on a subset of PSMs, and use the resulting score vector to evaluate the other PSMs. Recommended when analyzing huge numbers (>1 million) of PSMs. When set to 0, all PSMs are used for training as normal. This is a runtime vs. discriminability tradeoff. Default: 300,000

Must be an integer

Use additional features whose values are learnt by correct entries. See help text. Default: 0 = none

Percolator provides the possibility to use so-called 'description of correct' features, i.e. features for which desirable values are learnt from the previously identified target PSMs. The absolute value of the difference between the desired and the observed value is then used as a predictive feature.

1 -> iso-electric point

2 -> mass calibration

4 -> retention time

8 -> delta_retention_time * delta_mass_calibration
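The values act as a bit mask and combine additively; e.g. mass calibration (2) plus retention time (4) gives 6. A sketch (the `--description_correct_features` name is an assumption, borrowed from Percolator's own option):

```bash
# Enable the mass-calibration and retention-time 'description of correct'
# features (2 + 4 = 6). The parameter name --description_correct_features is
# an assumption based on Percolator's option of the same name.
nextflow run nf-core/proteomicslfq -r dev -profile docker \
    --input 'experiment.sdrf.tsv' \
    --database 'database.fasta' \
    --posterior_probabilities 'percolator' \
    --description_correct_features 6
```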

PSM re-scoring (distribution fitting)

Use this instead of Percolator if there are problems with Percolator (e.g. due to bad separation) or for performance reasons.

How to handle outliers during fitting:

  • ignore_iqr_outliers (default): ignore outliers outside of 3*IQR from Q1/Q3 for fitting
  • set_iqr_to_closest_valid: set IQR-based outliers to the last valid value for fitting
  • ignore_extreme_percentiles: ignore everything outside 99th and 1st percentile (also removes equal values like potential censored max values in XTandem)
  • none: do nothing

Consensus ID

How to combine the probabilities from the single search engines: best, combine using a sequence similarity-matrix (PEPMatrix), combine using shared ion count of peptides (PEPIons). See help for further info.

Specifies how search engine results are combined: ConsensusID offers several algorithms that can aggregate results from multiple peptide identification engines ('search engines') into consensus identifications - typically one per MS2 spectrum. This works especially well for search engines that provide more than one peptide hit per spectrum, i.e. that report not just the best hit, but also a list of runner-up candidates with corresponding scores.

The available algorithms are:

  • PEPMatrix: Scoring based on posterior error probabilities (PEPs) and peptide sequence similarities. This algorithm uses a substitution matrix to score the similarity of sequences not listed by all search engines. It requires PEPs as the scores for all peptide hits.
  • PEPIons: Scoring based on posterior error probabilities (PEPs) and fragment ion similarities ('shared peak count'). This algorithm, too, requires PEPs as scores.
  • best: For each peptide ID, this uses the best score of any search engine as the consensus score.
  • worst: For each peptide ID, this uses the worst score of any search engine as the consensus score.
  • average: For each peptide ID, this uses the average score of all search engines as the consensus score.
  • ranks: Calculates a consensus score based on the ranks of peptide IDs in the results of different search engines. The final score is in the range (0, 1], with 1 being the best score.

To make scores comparable, PEPs are used for best, worst and average as well. Peptide IDs are only considered the same if they map to exactly the same sequence (including modifications and their localization). Also, isobaric amino acids are (for now) only considered equal with the PEPMatrix/PEPIons algorithms.

Must be an integer

Only use the top N hits per search engine and spectrum for combination. Default: 0 = all

Limits the number of alternative peptide hits considered per spectrum/feature for each identification run. This helps to reduce runtime, especially for the PEPMatrix and PEPIons algorithms, which involve costly 'all vs. all' comparisons of peptide hits per spectrum across engines.

Must be an integer

A threshold for the ratio of occurrence/similarity scores of a peptide in other runs, to be reported. See help.

This allows filtering of peptide hits based on agreement between search engines. Every peptide sequence in the analysis has been identified by at least one search run. This parameter defines which fraction (between 0 and 1) of the remaining search runs must 'support' a peptide identification for it to be kept.

The meaning of 'support' differs slightly between algorithms: for best, worst, average and ranks, each search run supports peptides that it has also identified among its top consensusid_considered_top_hits candidates. So min_consensus_support simply gives the fraction of additional search engines that must have identified a peptide. (For example, if there are three search runs, and only peptides identified by at least two of them should be kept, set it to 0.5.) For the similarity-based algorithms PEPMatrix and PEPIons, the 'support' for a peptide is the average similarity of the most-similar peptide from each (other) search run. (In the context of the JPR publication, this is the average of the similarity scores used in the consensus score calculation for a peptide.)

Note: For most of the subsequent algorithms, only the best identification per spectrum is used.
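As a sketch, combining these settings for a three-engine search (`consensusid_considered_top_hits` and `min_consensus_support` are named in the text above; `--consensusid_algorithm` is an assumed name for the algorithm choice):

```bash
# PEPMatrix consensus over the top 5 hits per engine and spectrum; keep only
# peptides supported by at least half of the other search runs.
# --consensusid_algorithm is an assumed parameter name.
nextflow run nf-core/proteomicslfq -r dev -profile docker \
    --input 'experiment.sdrf.tsv' \
    --database 'database.fasta' \
    --consensusid_algorithm 'PEPMatrix' \
    --consensusid_considered_top_hits 5 \
    --min_consensus_support 0.5
```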

Protein inference

To group proteins, calculate scores on the protein (group) level and to potentially modify associations from peptides to proteins.

The inference method to use. 'aggregation' (default) or 'bayesian'.

Infer proteins through:

  • 'aggregation' = aggregates all peptide scores across a protein (by calculating the maximum) (default)
  • 'bayesian' = compute a posterior probability for every protein based on a Bayesian network (i.e. using Epifany)
  • ('percolator' not yet supported)

Note: Whether protein grouping is performed also depends on the protein_quant parameter (i.e. whether peptides have to be unique or only unique to a group).

The experiment-wide protein (group)-level FDR cutoff. Default: 0.05

This can be protein level if 'strictly_unique_peptides' are used for protein quantification. See --protein_quant
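A sketch combining the inference choice with a stricter FDR (both parameter names, `--protein_inference` and `--protein_level_fdr_cutoff`, are assumptions for the settings described above):

```bash
# Bayesian protein inference (Epifany) with a stricter experiment-wide
# protein (group)-level FDR. Both parameter names are assumptions for the
# settings described above.
nextflow run nf-core/proteomicslfq -r dev -profile docker \
    --input 'experiment.sdrf.tsv' \
    --database 'database.fasta' \
    --protein_inference 'bayesian' \
    --protein_level_fdr_cutoff 0.01
```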

Protein Quantification

Quantify proteins based on:

  • 'unique_peptides' = use peptides mapping to single proteins or a group of indistinguishable proteins (according to the set of experimentally identified peptides)
  • 'strictly_unique_peptides' = use peptides mapping to a unique single protein only
  • 'shared_peptides' = use shared peptides, too, but assign each one only greedily to its best group (by inference score)

Choose between feature-based quantification based on integrated MS1 signals ('feature_intensity'; default) or spectral counting of PSMs ('spectral_counting'). WARNING: 'spectral_counting' is not compatible with our MSstats step yet. MSstats will therefore be disabled automatically with that choice.
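For example, switching to spectral counting could be sketched as follows (`--quantification_method` is an assumed parameter name for this choice; the values are taken from the description above):

```bash
# Quantify by spectral counting instead of MS1 feature intensities; MSstats
# is disabled automatically with this choice. --quantification_method is an
# assumed parameter name; values are taken from the description above.
nextflow run nf-core/proteomicslfq -r dev -profile docker \
    --input 'experiment.sdrf.tsv' \
    --database 'database.fasta' \
    --quantification_method 'spectral_counting'
```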

Recalibrates masses based on precursor mass deviations to correct for instrument biases. (default: 'false')

Tries a targeted requantification in files where an ID is missing, based on aggregate properties (i.e. RT) of the features in other aligned files (e.g. 'mean' of RT). (WARNING: increased memory consumption and runtime. Only useful with multiple fraction groups/samples). 'false' turns this feature off. (default: 'false')

Only looks for quantifiable features at locations with an identified spectrum. Set to false to include unidentified features so they can be linked and matched to identified ones (= match between runs). (default: 'true')

The order in which maps are aligned. Star = all vs. the reference with most IDs (default). TreeGuided = an alignment tree is calculated first based on similarity measures of the IDs in the maps.

Also quantify decoys? (Usually only needed for Triqler post-processing output with --add_triqler_output, where it is auto-enabled)

Must be an integer

Debug level when running the quantification. Logs become more verbose and at '>666' potentially very large temporary files are kept.

Statistical post-processing

Parameters for statistical post-processing and quantification visualization. Currently only possible with `quantification_method = 'feature_intensity'`.

Which features to use for quantification per protein: 'top3' or 'highQuality' (which removes outliers only).

Which summary method to use: 'TMP' (Tukey's median polish) or 'linear' (linear mixed model).

Omit proteins with only one quantified feature?

Keep features with only one or two measurements across runs?

Instead of all pairwise contrasts (default), uses the given condition name/number (corresponding to your experimental design) as a reference and creates pairwise contrasts against it.

Allows full control over contrasts by specifying a set of contrasts in a semicolon-separated list of R-compatible, limma-style contrasts with the condition names/numbers as variables (e.g. 1-2;1-3;2-3). Overwrites '--ref_condition'. Default is 'pairwise', a keyword to create all pairwise contrasts.
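A sketch of restricting the analysis to two specific contrasts (`--contrasts` is an assumed parameter name for the limma-style contrast list described above):

```bash
# Compare condition 1 vs 2 and 1 vs 3 only, instead of all pairwise
# contrasts. --contrasts is an assumed parameter name.
nextflow run nf-core/proteomicslfq -r dev -profile docker \
    --input 'experiment.sdrf.tsv' \
    --database 'database.fasta' \
    --contrasts '1-2;1-3'
```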

Also create an output in Triqler's format for an alternative manual post-processing with that tool

Quality control

Enable generation of a quality control report by PTXQC? Default: 'false', since it is still unstable.

Specify a yaml file for the report layout (see PTXQC documentation) (TODO not yet fully implemented)