Testing

Published

November 20, 2024

How did ValidMind develop the tests that are currently in the library?

All the existing tests were developed using open-source Python and R libraries.

The library's test interface is a light wrapper: it defines utility functions for interacting with different dataset and model backends in an agnostic way, plus functions that collect and post results to the ValidMind backend using a generic results schema.
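As an illustration only, the wrapper idea can be sketched as a pair of utilities: one that reads data uniformly across backends, and one that packages outcomes in a generic schema. None of the function or field names below come from the ValidMind API; they are hypothetical.

```python
import json

# Hypothetical sketch of a "light wrapper": utilities that treat different
# dataset backends uniformly, plus a generic results schema. Illustrative
# names only; not the ValidMind implementation.

def get_column(dataset, name):
    """Fetch a column from either a pandas DataFrame or a plain dict of lists."""
    try:
        import pandas as pd
        if isinstance(dataset, pd.DataFrame):
            return dataset[name].tolist()
    except ImportError:
        pass  # pandas not installed; fall through to the dict case
    return list(dataset[name])

def build_result(test_id, passed, metrics):
    """Package a test outcome in a generic, backend-agnostic schema."""
    return {"test_id": test_id, "passed": passed, "metrics": metrics}

# Usage with a plain-dict "dataset":
data = {"label": [0, 0, 1]}
labels = get_column(data, "label")
result = build_result("class_balance", passed=True, metrics={"n": len(labels)})
print(json.dumps(result))
```

The point of the pattern is that individual tests never need to know which backend holds the data, and every test reports through the same schema.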

Can tests be configured or customized, and can we add our own tests?

ValidMind allows tests to be configured at several levels:

  • Administrators can configure which tests are required to run programmatically, depending on the model use case.
  • You can change the thresholds and parameters for tests already available in the library (for instance, the threshold parameter for the class imbalance flag).
  • ValidMind is also implementing a feature that lets you add your own custom tests to the library and connect them with it. These custom tests will be configurable and able to run programmatically, just like the library's built-in tests (roadmap item, Q3 2023).
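To make the threshold-tuning idea concrete, here is a minimal sketch of a class-imbalance check with a configurable parameter, assuming a binary target column. The function name and default value are illustrative assumptions, not the ValidMind implementation.

```python
from collections import Counter

# Illustrative sketch of a configurable class-imbalance check. The
# min_minority_fraction parameter stands in for the kind of threshold
# you can tune on library tests; names and defaults are assumptions.

def class_imbalance_flag(labels, min_minority_fraction=0.2):
    """Flag the dataset if the minority class falls below the threshold."""
    counts = Counter(labels)
    minority_fraction = min(counts.values()) / len(labels)
    return {
        "minority_fraction": minority_fraction,
        "flagged": minority_fraction < min_minority_fraction,
    }

# The default threshold (0.2) flags a 10%-minority dataset...
print(class_imbalance_flag([0] * 90 + [1] * 10))        # flagged: True
# ...while relaxing the threshold to 0.05 does not.
print(class_imbalance_flag([0] * 90 + [1] * 10, 0.05))  # flagged: False
```

Changing a test's configuration, rather than its code, is what lets the same check run programmatically across many models with different risk tolerances.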

Is there a use case for synthetic data within ValidMind?

The ValidMind Library supports bringing your own datasets, including synthetic ones, for testing and benchmarking purposes such as fair lending and bias testing.
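For instance, a synthetic dataset with a fabricated protected attribute lets you compare outcomes across groups without touching real customer data. The column names, group labels, and rates below are illustrative assumptions, not part of any ValidMind API.

```python
import random

# Minimal sketch of generating a synthetic dataset for bias testing: a fake
# protected attribute plus an approval outcome, so per-group approval rates
# can be compared. All names and rates here are illustrative assumptions.

random.seed(42)  # reproducible synthetic data
rows = [
    {"group": random.choice(["A", "B"]), "approved": random.random() < 0.6}
    for _ in range(1000)
]

def approval_rate(rows, group):
    """Fraction of approved outcomes within one synthetic group."""
    subset = [r for r in rows if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

# Compare outcomes across the synthetic protected groups:
disparity = abs(approval_rate(rows, "A") - approval_rate(rows, "B"))
print(f"disparity = {disparity:.3f}")
```

Because the data is generated, you control the group sizes and base rates, which makes it easy to verify that a fairness test flags a disparity you deliberately introduced.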

We are happy to discuss specific use cases for synthetic data generation with you further.