This document describes the output produced by the pipeline. See the main README for a condensed overview of the steps in the pipeline and the bioinformatics tools used at each step.

The directories listed below will be created in the results directory after the pipeline has finished. All paths are relative to the top-level results directory.

Pipeline overview

The pipeline is built using Nextflow and processes data using the following steps:

  • FastQC - Raw read QC
  • MultiQC - Aggregate report describing results and QC from the whole pipeline
  • Pipeline information - Report metrics generated during the workflow execution

Summary statistics of input files

Output files
  • stats/
    • complete_summary_stats.csv: CSV file containing the summary of all statistics computed on the input files.
    • sequences/
      • seqstats/*_seqstats.csv: file containing the length of each sequence in the family defined by the file name. Only produced if --calc_seq_stats is specified.
      • perc_sim/*_txt: file containing the pairwise sequence similarity for all input sequences. Only produced if --calc_sim is specified.
    • structures/
      • plddt/*_full_plddt.csv: file containing the pLDDT of the structure for each sequence in the input file. Only produced if --extract_plddt is specified.

This subworkflow collects statistics about the input files and summarizes them in a final CSV file.
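Since complete_summary_stats.csv is a plain CSV, it can be inspected with standard tooling. A minimal Python sketch follows; the column names used here ("id", "n_sequences", "mean_seq_length") are illustrative only, since the real headers depend on which statistics were enabled:

```python
import csv
import tempfile

def load_summary_stats(path):
    """Read a summary CSV produced by the pipeline into a list of row dicts."""
    with open(path, newline="") as fh:
        return list(csv.DictReader(fh))

# Illustrative content only: the real column names depend on which
# statistics were enabled (e.g. --calc_seq_stats, --extract_plddt).
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as fh:
    fh.write("id,n_sequences,mean_seq_length\nseatoxin,5,62.4\n")
    demo_path = fh.name

rows = load_summary_stats(demo_path)
print(rows[0]["id"], rows[0]["n_sequences"])  # → seatoxin 5
```

The same reader works for any of the per-family CSVs listed above, since they share the same plain-CSV layout.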


Guide trees

Output files
  • trees/
    • *.dnd: guide tree files.

If you explicitly specified (via the toolsheet) that guide trees should be computed for use by the MSA tool, those trees are stored here.
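The .dnd files are Newick-format trees, so they can be parsed with any Newick reader. A minimal standard-library sketch that extracts the leaf (sequence) names, assuming plain Newick without quoted labels:

```python
import re

def newick_leaves(newick):
    """Return the taxon labels from a Newick guide-tree string.

    Leaf names are the name tokens that directly follow '(' or ',';
    internal nodes are preceded by ')' and are therefore skipped."""
    return re.findall(r"[(,]\s*([^(),:;]+)", newick)

# A toy guide tree in the .dnd (Newick) format, with branch lengths.
tree = "((seqA:0.1,seqB:0.2):0.05,seqC:0.3);"
print(newick_leaves(tree))  # → ['seqA', 'seqB', 'seqC']
```

This is enough to check that a guide tree covers exactly the sequences of its input family; for anything more (topology, branch lengths), a dedicated library such as Biopython's Phylo module is a better fit.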


Alignment

Output files
  • alignment/
    • */*.fa: each subdirectory is named after its input file and contains all alignments computed on that file. Each file name encodes the input file, the tools used, and their arguments. The file naming convention is: {Inputfile}{Tree}args-{Tree_args}{MSA}_args-{MSA_args}.aln

All computed MSAs are stored here.
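Assuming the .fa files are standard gapped FASTA (the usual output format of MSA tools), they can be read without external dependencies:

```python
def read_fasta_alignment(text):
    """Parse FASTA-formatted alignment text into {header: sequence}."""
    records, header = {}, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(">"):
            header = line[1:]
            records[header] = ""
        elif header is not None:
            records[header] += line
    return records

# Toy two-sequence alignment; '-' marks a gap column.
aln = read_fasta_alignment(">seqA\nMK-LV\n>seqB\nMKALV\n")
print(sorted(aln))       # → ['seqA', 'seqB']
print(len(aln["seqA"]))  # aligned length, gaps included → 5
```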


Evaluation

Output files
  • evaluation/
    • tcoffee_irmsd/: directory containing the complete iRMSD files. Only produced if --calc_irmsd is specified.
    • tcoffee_tcs/: directory containing the complete TCS files. Only produced if --calc_tcs is specified.
    • complete_summary_eval.csv: CSV file containing the summary of all evaluation metrics for each input file.
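Because complete_summary_eval.csv is plain CSV, per-input comparisons are easy to script. A sketch that picks the best-scoring tool for one input family; the column names ("id", "tool", "sp_score") are illustrative, not the pipeline's actual headers:

```python
import csv
import io

# Hypothetical rows: two tools evaluated on the same input family.
eval_csv = io.StringIO(
    "id,tool,sp_score\n"
    "seatoxin,FAMSA,92.1\n"
    "seatoxin,TCOFFEE,95.4\n"
)
rows = list(csv.DictReader(eval_csv))

# Highest-scoring alignment for this family (scores are strings in CSV,
# so convert before comparing).
best = max(rows, key=lambda r: float(r["sp_score"]))
print(best["tool"])  # → TCOFFEE
```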


Shiny app

Output files
  • shiny_app/
    • the executable that starts the Shiny app.
    • *.py: Shiny app source files.
    • *.csv: CSV files used by the Shiny app.
    • trace.txt: trace file used by the Shiny app.

If --skip_shiny=false is specified, a Shiny app is prepared to visualize the summary statistics and the evaluation of the produced alignments. To run it, change into the shiny_app directory and launch the executable found there.

Note that Shiny must be installed to use this feature.


MultiQC

Output files
  • multiqc/
    • multiqc_report.html: a standalone HTML file that can be viewed in your web browser.
    • multiqc_data/: directory containing parsed statistics from the different tools used in the pipeline.
    • multiqc_plots/: directory containing static images from the report in various formats.

MultiQC is a visualization tool that generates a single HTML report summarising all samples in your project. Most of the pipeline QC results are visualised in the report and further statistics are available in the report data directory.

Results generated by MultiQC collate pipeline QC from supported tools, e.g. FastQC. The pipeline has special steps which also allow the software versions to be reported in the MultiQC output for future traceability. For more information about how to use MultiQC reports, see the MultiQC documentation.

Pipeline information

Output files
  • pipeline_info/
    • Reports generated by Nextflow: execution_report.html, execution_timeline.html and execution_trace.txt.
    • Reports generated by the pipeline: pipeline_report.html, pipeline_report.txt and software_versions.yml. The pipeline_report* files will only be present if the --email / --email_on_fail parameters are used when running the pipeline.
    • Reformatted samplesheet files used as input to the pipeline: samplesheet.valid.csv.
    • Parameters used by the pipeline run: params.json.

Nextflow provides excellent functionality for generating various reports relevant to the running and execution of the pipeline. This will allow you to troubleshoot errors with the running of the pipeline, and also provide you with other information such as launch commands, run times and resource usage.
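params.json is convenient for checking, after the fact, exactly which options a run used. A small sketch; the parameter names shown are illustrative:

```python
import json
import tempfile
import os

# Hypothetical params.json content; real keys mirror the pipeline's
# command-line parameters (e.g. --calc_seq_stats, --skip_shiny).
params_text = '{"calc_seq_stats": true, "skip_shiny": false}'
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    fh.write(params_text)
    path = fh.name

# Reload the recorded parameters, as you would from pipeline_info/params.json.
with open(path) as fh:
    params = json.load(fh)
os.remove(path)

print(params["calc_seq_stats"])  # → True
```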