nf-core/drugresponseeval
Pipeline for testing drug response prediction models in a statistically and biologically sound way.
Introduction
DrEval is a bioinformatics framework that comprises a PyPI package (drevalpy) and a Nextflow pipeline (this repo). DrEval ensures that evaluations are statistically sound, biologically meaningful, and reproducible. By automating standardized evaluation protocols and preprocessing workflows, DrEval simplifies the implementation of drug response prediction models and lets researchers focus on advancing their modeling innovations. With DrEval, hyperparameter tuning is fair and consistent across models. With its flexible model interface, DrEval supports any model type, ranging from simple statistical models to complex neural networks. By contributing your model to the DrEval catalog, you can increase your work's exposure, reusability, and transferability.

The pipeline performs the following steps:
- The response data is loaded
- All models are trained and evaluated in a cross-validation setting
- For each CV split, the best hyperparameters are determined using a grid search per model
- The model is trained on the full training set (train & validation) with the best hyperparameters to predict the test set
- If randomization tests are enabled, the model is trained on the full training set with the best hyperparameters to predict the randomized test set
- If robustness tests are enabled, the model is trained N times on the full training set with the best hyperparameters
- Plots are created summarizing the results
For baseline models, no randomization or robustness tests are performed.
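The cross-validation and tuning protocol above can be sketched as follows. This is an illustrative example only, using scikit-learn with a ridge regression stand-in for a drug response model; the actual pipeline uses drevalpy models, and all names and data here are placeholders:

```python
# Sketch of the per-split protocol: grid search for hyperparameters,
# then refit on the full training set (train & validation) to predict the test set.
import numpy as np
from sklearn.model_selection import KFold, GridSearchCV
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                                  # placeholder features
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=100)   # placeholder responses

param_grid = {"alpha": [0.1, 1.0, 10.0]}
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in outer_cv.split(X):
    # Best hyperparameters per CV split via grid search on the training portion
    # (internally split into train/validation folds).
    search = GridSearchCV(Ridge(), param_grid, cv=3)
    search.fit(X[train_idx], y[train_idx])
    # Refit on the full training set with the best hyperparameters, score on the test set.
    best = Ridge(**search.best_params_).fit(X[train_idx], y[train_idx])
    scores.append(best.score(X[test_idx], y[test_idx]))

print(f"mean R^2 across CV splits: {np.mean(scores):.3f}")
```

A randomization test would repeat the final refit/predict step on a permuted version of the test inputs; a robustness test would repeat it N times with different random seeds.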
Usage
If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.
Now, you can run the pipeline using:
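A typical invocation might look like the following sketch; the placeholders are illustrative, and the exact parameter names and accepted values are listed in the parameter documentation:

```bash
nextflow run nf-core/drugresponseeval \
   -profile <docker/singularity/.../institute> \
   --models <model1,model2> \
   --dataset_name <dataset> \
   --outdir <OUTDIR>
```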
Please provide pipeline parameters via the CLI or the Nextflow -params-file option. Custom config files, including those provided by the -c Nextflow option, can be used to provide any configuration except for parameters; see docs.
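For example, parameters could be collected in a YAML file and passed with -params-file; the parameter names and values below are hypothetical and should be checked against the parameter documentation:

```yaml
# params.yaml -- illustrative values only
models: "model1,model2"
dataset_name: "dataset"
outdir: "./results"
```

The pipeline would then be launched with: nextflow run nf-core/drugresponseeval -profile docker -params-file params.yaml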
For more details and further functionality, please refer to the usage documentation and the parameter documentation.
Pipeline output
To see the results of an example test run with a full-size dataset, refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.
Credits
nf-core/drugresponseeval was originally written by Judith Bernett (TUM) and Pascal Iversen (FU Berlin).
We thank the following people for their extensive assistance in the development of this pipeline:
Contributions and Support
If you would like to contribute to this pipeline, please see the contributing guidelines.
For further information or help, don’t hesitate to get in touch on the Slack #drugresponseeval
channel (you can join with this invite).
Citations
An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md
file.
You can cite the nf-core
publication as follows:
The nf-core framework for community-curated bioinformatics pipelines.
Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.
Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.