cpdbench
The Changepoint-Detection Workbench (CPD-Bench)
This library is a performance and test benchmark for changepoint detection algorithms, created especially for the changepoynt project.
Important links
- Main project page on GitHub: https://github.com/Lucew/CPD-Bench
- Changepoynt project: https://github.com/Lucew/changepoynt
- Documentation: https://lucew.github.io/CPD-Bench/cpdbench.html
Installation
Simply install cpd-bench via pip and include it in your project:
pip install cpdbench
Usage
Basic usage
- Import cpdbench.CPDBench and create a CPDBench object cpdb.
- Use the decorators "dataset", "algorithm", and "metric" of this cpdb object to annotate your changepoint dataset functions, changepoint algorithms, and validation metrics. The functions have to look like this:
  - dataset: def dataset_function() -> dataset: cpdbench.dataset.CPDDataset
  - algorithm: def algorithm_function(signal: ndarray) -> changepoints: list[int], confidences: list[float]
  - metric: def metric_function(changepoints: list[int], confidences: list[float], ground_truths: list[int]) -> result: float
- Use cpdb.start() to start the workbench.
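As a concrete illustration of the metric signature above, a hand-written metric could look like the following sketch. The function name, body, and window_size parameter are illustrative and not part of cpdbench; only the (changepoints, confidences, ground_truths) -> float shape comes from the signature above.

```python
def metric_hit_rate(changepoints, confidences, ground_truths, *, window_size=25):
    """Fraction of ground-truth changepoints that have at least one detected
    changepoint within +/- window_size samples (illustrative metric only)."""
    if not ground_truths:
        return 0.0
    hits = sum(
        any(abs(cp - gt) <= window_size for cp in changepoints)
        for gt in ground_truths
    )
    return hits / len(ground_truths)
```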
A very basic configuration, built from the included example functions, looks like this:
from cpdbench.CPDBench import CPDBench
import cpdbench.examples.ExampleDatasets as example_datasets
import cpdbench.examples.ExampleAlgorithms as example_algorithms
import cpdbench.examples.ExampleMetrics as example_metrics
cpdb = CPDBench()
@cpdb.dataset
def get_apple_dataset():
return example_datasets.dataset_get_apple_dataset()
@cpdb.dataset
def get_bitcoin_dataset():
return example_datasets.dataset_get_bitcoin_dataset()
@cpdb.algorithm
def execute_esst_test(signal):
return example_algorithms.algorithm_execute_single_esst(signal)
@cpdb.metric
def calc_accuracy(indexes, scores, ground_truth):
return example_metrics.metric_accuracy_in_allowed_windows(indexes, scores, ground_truth, window_size=25)
if __name__ == '__main__':
cpdb.start()
Configuration
You can configure multiple settings using a config.yml file. To do so, create a config.yml file with the syntax shown in cpdbench.examples.configs.parametersConfig.yml and pass its file path when starting the bench: cpdb.start(config_file)
Use of parameters
Parameters can be used in your own functions either as global placeholders (global parameters) or to run a function multiple times with different configurations (runtime parameters).
To use parameters, declare them in your function signatures as keyword-only parameters, for example:
def algorithm_function(signal, *, example_param)
Then enter the values in your config file:
- global param: under user, add "param_name: value"
- runtime param: under user -> dataset_executions/algorithm_executions/metric_executions, add a list of "param_name: value" entries, one per execution/run configuration. Example:
user:
  global_param1: 242
  global_param2: 353
  algorithm_executions:
    - runtime_param1: 2424
      runtime_param2: 3
    - runtime_param1: 345
      runtime_param2: 3
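To illustrate how the runtime parameters above translate into function calls, the following sketch replays the two run configurations by hand. In practice cpdbench performs this dispatch internally; the toy algorithm body and the manual loop here are assumptions for illustration only.

```python
def execute_algorithm(signal, *, runtime_param1, runtime_param2):
    # Toy body: derive a single "changepoint" from the parameters so the
    # effect of each run configuration is visible in the output.
    return [runtime_param1 + runtime_param2], [1.0]

# The algorithm_executions list from the config above, as Python data:
algorithm_executions = [
    {"runtime_param1": 2424, "runtime_param2": 3},
    {"runtime_param1": 345, "runtime_param2": 3},
]

# One call per run configuration, the keyword-only parameters filled
# from the corresponding config entry:
results = [execute_algorithm([0, 1, 2], **params) for params in algorithm_executions]
```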
For more examples please refer to the "examples" package.