Evaluating Simulations
ViMMS includes utilities for assessing how well a fragmentation strategy performed. These functions live in vimms.Evaluation.
Generating Evaluation Data
When running an Environment, you can enable the save_eval flag:
from vimms.Environment import Environment

# Enabling save_eval writes evaluation data alongside the mzML output
env = Environment(ms, controller, min_time, max_time, save_eval=True, out_file="run.mzML")
env.run()
This creates an additional pickle file alongside the mzML output containing a serialised EvaluationData object. The object stores the chemicals, the generated scans and the fragmentation events.
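The saved object can be loaded back for inspection. A minimal sketch, assuming the pickle is written next to run.mzML; the filename used here is hypothetical, so check your output directory for the actual name:

from vimms.Common import load_obj

# Load the serialised EvaluationData object; "run.p" is a hypothetical filename
eval_data = load_obj("run.p")
print(type(eval_data))  # expected: vimms.Evaluation.EvaluationData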
Using the Evaluator
vimms.Evaluation provides several evaluator classes that compute coverage and intensity statistics across multiple runs. A basic usage pattern is:
from vimms.Evaluation import evaluate_simulated_env

# Summarise fragmentation performance across two saved runs
report = evaluate_simulated_env(["run1.p", "run2.p"])
print(report["times_fragmented_summary"])
The report dictionary contains metrics such as how many times each chemical was fragmented, together with cumulative coverage and intensity information.
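To see everything the report contains, you can iterate over it; a sketch assuming the report behaves like a plain dictionary of metric names to values:

# List every metric computed for the runs
for name, value in report.items():
    print(name, value)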
The evaluation helpers rely on peak picking using MZmine parameters defined in PeakPicking.py. Intermediate files generated by evaluate_simulated_env are cleaned up automatically unless keep_files is set to True.
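When debugging the peak-picking step it can be useful to retain these intermediates. A sketch, assuming keep_files is accepted directly as a keyword argument to evaluate_simulated_env:

# Retain intermediate peak-picking files instead of deleting them
report = evaluate_simulated_env(["run1.p", "run2.p"], keep_files=True)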
Further Reading
Refer to the docstrings in Evaluation.py for a detailed list of available functions.