Introduction

This document describes the output produced by the pipeline.

The directories listed below will be created in the results directory after the pipeline has finished. All paths are relative to the top-level results directory.

Pipeline overview

The pipeline is built using Nextflow and the results are organized as follows:

Module output

Preprocessing

FastQC

FastQC gives general quality metrics about your sequenced reads. It provides information about the quality score distribution across your reads, per base sequence content (%A/T/G/C), adapter contamination and overrepresented sequences. For further reading and documentation see the FastQC help pages. FastQC is run as part of Trim Galore!, so its output can be found in the Trim Galore! directory.

Output files
  • trimgalore/fastqc/
    • *_fastqc.html: FastQC report containing quality metrics for your untrimmed raw fastq files.
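
Because the FastQC reports are nested under the Trim Galore! output, locating them per sample is a matter of globbing that directory. The minimal Python sketch below assumes nothing beyond the paths listed above.

  from pathlib import Path

  # List the FastQC HTML reports, which live under the Trim Galore! output directory.
  for report in sorted(Path("trimgalore/fastqc").glob("*_fastqc.html")):
      print(report)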

Trim Galore!

Trim Galore! trims primer sequences from sequencing reads. Primer sequences are non-biological sequences that often introduce point mutations that do not reflect sample sequences. This is especially true for degenerate PCR primers.

Output files
  • trimgalore/: directory containing log files with retained reads, trimming percentage, etc. for each sample.
    • *trimming_report.txt: report of the number of reads that pass Trim Galore! trimming.

MultiQC

MultiQC is a visualization tool that generates a single HTML report summarising all samples in your project. Most of the pipeline QC results are visualised in the report and further statistics are available in the report data directory.

Results generated by MultiQC collate pipeline QC from supported tools e.g. FastQC. The pipeline has special steps which also allow the software versions to be reported in the MultiQC output for future traceability. For more information about how to use MultiQC reports, see http://multiqc.info.

Output files
  • multiqc/
    • multiqc_report.html: a standalone HTML file that can be viewed in your web browser.
    • multiqc_data/: directory containing parsed statistics from the different tools used in the pipeline.
    • multiqc_plots/: directory containing static images from the report in various formats.
Note

The FastQC plots displayed in the MultiQC report show untrimmed reads. They may contain adapter sequences and potentially regions of low quality.
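
If you want to work with the parsed numbers programmatically rather than through the HTML report, the tables in multiqc_data/ can be read directly. The sketch below is a minimal illustration, assuming the default tab-separated general statistics table (multiqc_general_stats.txt) is present; the exact files in multiqc_data/ depend on which tools were run and on your MultiQC version.

  import pandas as pd

  # Load the MultiQC general statistics table (one row per sample).
  # Assumes the default file name multiqc_general_stats.txt; adjust if your
  # MultiQC version writes a different set of files.
  general_stats = pd.read_csv(
      "multiqc/multiqc_data/multiqc_general_stats.txt", sep="\t"
  )
  print(general_stats.columns.tolist())
  print(general_stats.head())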

BBDuk

BBDuk is a filtering tool that removes specific sequences from the samples using a reference FASTA file. BBDuk is a tool built into the BBMap suite.

Output files
  • bbmap/
    • *.bbduk.log: a text file with the results of the BBDuk analysis. The number of filtered reads can be found in this log.
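
As a rough illustration of how to pull the filtering numbers out of these logs, the sketch below scans each *.bbduk.log for the summary lines BBDuk typically prints; the exact wording of those lines ("Input:", "Contaminants:", "Result:") is an assumption and may differ between BBMap versions, so check one log by eye first.

  from pathlib import Path

  # Print the summary lines from each BBDuk log.
  # Assumes BBDuk's usual summary lines ("Input:", "Contaminants:", "Result:");
  # adjust the prefixes if your BBMap version prints something different.
  for log in sorted(Path("bbmap").glob("*.bbduk.log")):
      print(f"== {log.name} ==")
      for line in log.read_text().splitlines():
          if line.startswith(("Input:", "Contaminants:", "Result:")):
              print(line)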

ORF caller step

Prokka

You can use Prokka to identify ORFs in any genomes for which a GFF file is not provided. In addition to calling ORFs (done with Prodigal), Prokka filters the ORFs to retain only high-quality calls and functionally annotates them.

Output files
  • prokka/
    • *.ffn.gz: nucleotide FASTA file output
    • *.faa.gz: amino acid FASTA file output
    • *.gff.gz: genome feature file (GFF) output
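
As a quick sanity check of the ORF calls, the gzipped FASTA outputs can be inspected directly. The sketch below simply counts the number of amino acid sequences in each *.faa.gz file; it assumes only the file layout listed above.

  import gzip
  from pathlib import Path

  # Count called ORFs per genome by counting FASTA headers in the
  # gzipped amino acid output from Prokka.
  for faa in sorted(Path("prokka").glob("*.faa.gz")):
      with gzip.open(faa, "rt") as handle:
          n_orfs = sum(1 for line in handle if line.startswith(">"))
      print(f"{faa.name}: {n_orfs} ORFs")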

Magmap output

Summary tables

Consistently named and formatted output tables in TSV format, ready for further analysis. Filenames start with the assembly program and ORF caller, to allow reruns of the pipeline with different parameter settings without overwriting output files.

Output files
  • summary_tables/
    • magmap.overall_stats.tsv.gz: overall statistics from the pipeline, e.g. number of reads, number of called ORFs, number of reads mapping back to contigs/ORFs etc.
    • magmap.counts.tsv.gz: read counts per ORF and sample.
    • summary_table.taxonomy.tsv.gz: for each genome, this TSV file provides metrics and taxonomy.
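
For downstream analysis, the counts table can be turned into an ORF-by-sample matrix and combined with the taxonomy table. The sketch below is a minimal example only; the column names used here (orf, sample, count) are assumptions and should be checked against the actual table headers before use.

  import pandas as pd

  # Read the long-format counts table (one row per ORF and sample) and pivot it
  # into an ORF x sample matrix. Column names are assumptions; check the header
  # of your own tables first.
  counts = pd.read_csv("summary_tables/magmap.counts.tsv.gz", sep="\t")
  matrix = counts.pivot_table(
      index="orf", columns="sample", values="count", fill_value=0
  )

  # The taxonomy table can be loaded the same way and joined on a shared column.
  taxonomy = pd.read_csv("summary_tables/summary_table.taxonomy.tsv.gz", sep="\t")
  print(matrix.shape, taxonomy.shape)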

Pipeline information

Output files
  • pipeline_info/
    • Reports generated by Nextflow: execution_report.html, execution_timeline.html, execution_trace.txt and pipeline_dag.dot/pipeline_dag.svg.
    • Reports generated by the pipeline: pipeline_report.html, pipeline_report.txt and software_versions.yml. The pipeline_report* files will only be present if the --email / --email_on_fail parameters are used when running the pipeline.
    • Reformatted samplesheet files used as input to the pipeline: samplesheet.valid.csv.
    • Parameters used by the pipeline run: params.json.

Nextflow provides excellent functionality for generating various reports relevant to the running and execution of the pipeline. This will allow you to troubleshoot errors with the running of the pipeline, and also provide you with other information such as launch commands, run times and resource usage.
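
If you need to record or compare run configurations programmatically, params.json and software_versions.yml can be read directly, as sketched below. The sketch requires PyYAML for the versions file and assumes nothing about the file contents beyond what is listed above.

  import json
  import yaml  # PyYAML

  # Parameters the pipeline was launched with.
  with open("pipeline_info/params.json") as fh:
      params = json.load(fh)

  # Software versions collected by the pipeline.
  with open("pipeline_info/software_versions.yml") as fh:
      versions = yaml.safe_load(fh)

  print(json.dumps(params, indent=2))
  print(versions)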