Introduction

This document describes the output produced by the pipeline. Most of the plots are taken from the MultiQC report, which summarises results at the end of the pipeline.

The directories listed below will be created in the results directory after the pipeline has finished. All paths are relative to the top-level results directory.

Pipeline overview

The pipeline is built using Nextflow and processes data using the following steps:

  • HMMER - If the pipeline is run in “search and place” mode, an initial HMMER search is performed to identify query sequences for placement
  • Alignment - Align query sequences to the reference alignment
  • Placement - Place query sequences in the reference phylogeny
  • Summary - Summarise placement with a grafted tree, a classification and a heattree
  • MultiQC - Aggregate report describing results and QC from the whole pipeline
  • Pipeline information - Report metrics generated during the workflow execution

Alignment

Alignment of query sequences is done with either HMMER or MAFFT.

HMMER

In the “search and place” mode of the pipeline, hmmsearch output files are produced, together with a *.hmmrank.tsv.gz file summarising the search.
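
The ranked search results are a gzipped TSV file and can be inspected directly with pandas (not part of the pipeline). This is a minimal sketch; the file name is illustrative and the exact columns depend on the pipeline version, so check the header of your own file.

    import pandas as pd

    # pandas infers gzip compression from the .gz suffix
    ranks = pd.read_csv("queries.hmmrank.tsv.gz", sep="\t")
    print(ranks.columns.tolist())  # column names as written by the pipeline
    print(ranks.head())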

When using HMMER as the alignment program, a profile is first built from the reference alignment and then used to align both the query and the reference sequences, hence the presence of alignment files for the reference sequences in the output. The reference sequences are realigned because the profile will likely not reflect the structure of the original alignment in all parts: in particular, gappy positions in the original alignment are typically not covered by the profile. Such positions are often not phylogenetically informative or reliable. In contrast, the MAFFT alignment strategy keeps the structure of the original reference alignment.
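
One way to see the effect of masking is to compare the number of columns in the unmasked and masked reference alignments. A minimal sketch, assuming Biopython is installed; the file names are illustrative.

    import gzip
    from Bio import AlignIO

    # Stockholm alignments written by hmmalign, gzipped by the pipeline
    with gzip.open("ref.hmmalign.sthlm.gz", "rt") as handle:
        full = AlignIO.read(handle, "stockholm")
    with gzip.open("ref.hmmalign.masked.sthlm.gz", "rt") as handle:
        masked = AlignIO.read(handle, "stockholm")

    # Columns dropped by masking are the positions not covered by the profile
    print(f"unmasked: {full.get_alignment_length()} columns")
    print(f"masked:   {masked.get_alignment_length()} columns")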

Output files
  • hmmer/
    • *.query.hmmalign.sthlm.gz: Query sequences aligned to reference HMM, in Stockholm format.
    • *.query.hmmalign.masked.sthlm.gz: Masked query sequence alignment, in Stockholm format.
    • *.query.hmmalign.masked.afa.gz: Masked query sequence alignment, in Fasta format.
    • *.ref.hmmalign.sthlm.gz: Reference sequences aligned to reference HMM, in Stockholm format.
    • *.ref.hmmalign.masked.sthlm.gz: Masked reference sequence alignment, in Stockholm format.
    • *.ref.hmmalign.masked.afa.gz: Masked reference sequence alignment, in Fasta format.
    • *.ref.hmmbuild.txt: Log from HMM profile build.
    • *.ref.hmm.gz: HMM profile made from the reference alignment, if not provided using the hmmfile parameter.
    • *.ref.unaligned.afa.gz: “Unaligned”, i.e. without gap characters, reference sequences in Fasta format.
    • *.tbl.gz: Table format (--tblout) results for individual hmmsearch runs in “search and place” mode.
    • *.tbl.gz: Standard, human-readable results for individual hmmsearch runs in “search and place” mode.
    • *.hmmrank.tsv.gz: Summarised hmmsearch results.

MAFFT

When MAFFT is used for alignment, it is run with the --keeplength option to ensure that the structure of the query alignment is identical to the reference alignment. Since the resulting alignment contains both query and reference sequences, it needs to be split before placement; this is done with EPA-NG, which writes two files to the epang directory.
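
Since --keeplength preserves the column structure of the reference alignment, the split query and reference alignments should have exactly the same number of columns. A minimal sanity check, assuming Biopython is installed; the file names are illustrative.

    import gzip
    from Bio import AlignIO

    with gzip.open("query.query.fasta.gz", "rt") as handle:
        query = AlignIO.read(handle, "fasta")
    with gzip.open("query.reference.fasta.gz", "rt") as handle:
        reference = AlignIO.read(handle, "fasta")

    # --keeplength means query columns line up one-to-one with reference columns
    assert query.get_alignment_length() == reference.get_alignment_length()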

Output files
  • mafft/
    • *.fas: Full alignment, containing both reference and query sequences.
  • epang/
    • *.query.fasta.gz: Aligned query sequences in Fasta format.
    • *.reference.fasta.gz: Aligned reference sequences in Fasta format.

Placement

Phylogenetic placement of query sequences is performed with EPA-NG.

Output files
  • epang/
    • *.epa_info.log: Log file from phylogenetic placement with EPA-NG.
    • *.epa_result.jplace.gz: Main result file from EPA-NG in jplace format; a sketch of how to read it follows this list.
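
The jplace file is gzipped JSON, so it can be read without any special tooling. A minimal sketch listing the best placement per query; the file name is illustrative, and the meaning of each value is given by the file's own "fields" array.

    import gzip
    import json

    with gzip.open("query.epa_result.jplace.gz", "rt") as handle:
        jplace = json.load(handle)

    fields = jplace["fields"]  # e.g. edge_num, likelihood, like_weight_ratio, ...
    for placement in jplace["placements"]:
        # Query names are stored under "n", or under "nm" together with multiplicities
        names = placement.get("n") or [name for name, _ in placement.get("nm", [])]
        best = dict(zip(fields, placement["p"][0]))  # first entry is typically the best placement
        print(names, best)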

Summary

A number of summary operations are performed with Gappa after placement. First, the query sequences are grafted on to the reference tree to produce a comprehensive tree containing all sequences. Second, the “heattree” function is called which produces phylogenies in different formats with branches coloured to indicate the number of placed sequences in various parts of the tree. Third, if the user provides a classification of the reference sequences, a classification of query sequences is performed.
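
The grafted tree is plain Newick, so it can be loaded with standard phylogenetics libraries, for example to check that the expected number of query sequences was placed. A minimal sketch, assuming Biopython is installed; the file name is illustrative.

    from Bio import Phylo

    # The grafted tree contains both reference and query sequences as tips
    tree = Phylo.read("query.graft.epa_result.newick", "newick")
    print(f"{tree.count_terminals()} tips in the grafted tree")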

Output files
  • gappa/
    • *.graft.*.newick: Full phylogeny with query sequences grafted on to the reference phylogeny.
    • *.heattree.*: Files from calling gappa examine heattree, see Gappa documentation for details.
    • *.taxonomy.*: Classification files from calling gappa examine assign, see Gappa documentation for details.

MultiQC

Output files
  • multiqc/
    • multiqc_report.html: a standalone HTML file that can be viewed in your web browser.
    • multiqc_data/: directory containing parsed statistics from the different tools used in the pipeline.
    • multiqc_plots/: directory containing static images from the report in various formats.

MultiQC is a visualization tool that generates a single HTML report summarising all samples in your project. Most of the pipeline QC results are visualised in the report and further statistics are available in the report data directory.

Results generated by MultiQC collate pipeline QC from supported tools e.g. FastQC. The pipeline has special steps which also allow the software versions to be reported in the MultiQC output for future traceability. For more information about how to use MultiQC reports, see http://multiqc.info.

Pipeline information

Output files
  • pipeline_info/
    • Reports generated by Nextflow: execution_report.html, execution_timeline.html, execution_trace.txt and pipeline_dag.dot/pipeline_dag.svg.
    • Reports generated by the pipeline: pipeline_report.html, pipeline_report.txt and software_versions.yml. The pipeline_report* files will only be present if the --email / --email_on_fail parameters are used when running the pipeline.
    • Reformatted samplesheet files used as input to the pipeline: samplesheet.valid.csv.
    • Parameters used by the pipeline run: params.json.

Nextflow provides excellent functionality for generating various reports relevant to the running and execution of the pipeline. This will allow you to troubleshoot errors with the running of the pipeline, and also provide you with other information such as launch commands, run times and resource usage.