ValidMind Developer Framework

ValidMind’s Python Developer Framework is a library of developer tools and methods designed to automate the documentation and validation of your models.

The Developer Framework is designed to be model agnostic. If your model is built in Python, ValidMind's Python library will provide all the standard functionality without requiring your developers to rewrite any functions.

The Developer Framework provides a rich suite of documentation tools and test suites, from documenting descriptions of your dataset to testing your models for weak spots and overfit areas. It helps you automate the generation of model documentation by feeding documentation artifacts and test results to the ValidMind platform.

To install the client library:

pip install validmind
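
To confirm that the installation succeeded, you can check the installed version from Python (the version shown is just an example):

import validmind
print(validmind.__version__)  # e.g. 2.5.13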

To initialize the client library, paste the code snippet with the client integration details directly into your development source code, replacing this example with your own:

import validmind as vm

vm.init(
  api_host = "https://api.dev.vm.validmind.ai/api/v1/tracking",
  api_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  api_secret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  project = "<project-identifier>"
)

After you have pasted the code snippet into your development source code and executed it, the Python client library registers with ValidMind. You can now use the Developer Framework to document and test your models, and to upload the results to the ValidMind Platform.

__version__ = '2.5.13'
def get_test_suite( test_suite_id: str = None, section: str = None, *args, **kwargs) -> validmind.vm_models.test_suite.test_suite.TestSuite:

Gets a TestSuite object for the current project or a specific test suite

This function provides an interface to retrieve the TestSuite instance for the current project, or a specific TestSuite instance identified by test_suite_id. The project's test suite contains one section for every section in the project's documentation template, and each of these Test Suite Sections contains all the tests associated with that template section.

Arguments:
  • test_suite_id (str, optional): The test suite name. If not passed, then the project's test suite will be returned. Defaults to None.
  • section (str, optional): The section of the documentation template from which to retrieve the test suite. This only applies if test_suite_id is None. Defaults to None.
  • args: Additional arguments to pass to the TestSuite
  • kwargs: Additional keyword arguments to pass to the TestSuite
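
For example, after initializing the client library you can retrieve the full project test suite, or only the tests for one documentation section (the section name below is illustrative):

import validmind as vm

# Test suite built from the entire documentation template of the current project
project_suite = vm.get_test_suite()

# Test suite built from a single template section (illustrative section name)
section_suite = vm.get_test_suite(section="data_preparation")
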
def init( project: Optional[str] = None, api_key: Optional[str] = None, api_secret: Optional[str] = None, api_host: Optional[str] = None, model: Optional[str] = None, monitoring=False):

Initializes the API client instances and calls the /ping endpoint to ensure the provided credentials are valid and we can connect to the ValidMind API.

If the API key and secret are not provided, the client will attempt to retrieve them from the environment variables VM_API_KEY and VM_API_SECRET.

Arguments:
  • project (str, optional): The project CUID. Alias for model. Defaults to None.
  • model (str, optional): The model CUID. Defaults to None.
  • api_key (str, optional): The API key. Defaults to None.
  • api_secret (str, optional): The API secret. Defaults to None.
  • api_host (str, optional): The API host. Defaults to None.
  • monitoring (bool, optional): The ongoing monitoring flag. Defaults to False.
Raises:
  • ValueError: If the API key and secret are not provided
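
As a minimal sketch, assuming your credentials are exported as environment variables, only the model (or project) identifier needs to be supplied:

import validmind as vm

# Assumes VM_API_KEY and VM_API_SECRET are already set in the environment,
# so api_key and api_secret can be omitted here.
vm.init(model="<model-identifier>")
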
def init_dataset( dataset, model=None, index=None, index_name: str = None, date_time_index: bool = False, columns: list = None, text_column: str = None, target_column: str = None, feature_columns: list = None, extra_columns: dict = None, class_labels: dict = None, type: str = None, input_id: str = None, __log=True) -> validmind.vm_models.dataset.dataset.VMDataset:

Initializes a VM Dataset, which can then be passed to other functions that can perform additional analysis and tests on the data. This function also ensures we are reading a valid dataset type.

The following dataset types are supported:

  • Pandas DataFrame
  • Polars DataFrame
  • Numpy ndarray
  • Torch TensorDataset
Arguments:
  • dataset: A dataset object from one of the supported libraries listed above
  • model (VMModel): ValidMind model object
  • targets (vm.vm.DatasetTargets): A list of target variables
  • target_column (str): The name of the target column in the dataset
  • feature_columns (list): A list of names of feature columns in the dataset
  • extra_columns (dict): A dictionary containing the names of the prediction_column and group_by_columns in the dataset
  • class_labels (dict): A dictionary of class labels for classification problems
  • type (str): The type of dataset (one of DATASET_TYPES)
  • input_id (str): The input ID for the dataset (e.g. "my_dataset"). Defaults to "dataset". If you pass this dataset as a test input under a key other than "dataset", set input_id to that same key.
Raises:
  • ValueError: If the dataset type is not supported
Returns:

VMDataset: A VM Dataset instance
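
A minimal sketch of registering a Pandas DataFrame for a binary classification problem; the column names and input ID are illustrative:

import pandas as pd
import validmind as vm

raw_df = pd.DataFrame({
    "age": [25, 40, 31, 58],
    "income": [32000, 81000, 54000, 67000],
    "default": [0, 1, 0, 1],
})

vm_dataset = vm.init_dataset(
    dataset=raw_df,
    input_id="raw_dataset",              # key used when passing this dataset as a test input
    target_column="default",
    feature_columns=["age", "income"],
)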

def init_model( model: object = None, input_id: str = 'model', attributes: dict = None, predict_fn: callable = None, __log=True, **kwargs) -> validmind.vm_models.model.VMModel:

Initializes a VM Model, which can then be passed to other functions that can perform additional analysis and tests on the data. This function also ensures the model comes from a supported library.

Arguments:
  • model: A trained model or VMModel instance
  • input_id (str): The input ID for the model (e.g. "my_model"). Defaults to "model". If you pass this model as a test input under a key other than "model", set input_id to that same key.
  • attributes (dict): A dictionary of model attributes
  • predict_fn (callable): A function that takes an input and returns a prediction
  • **kwargs: Additional arguments to pass to the model
Raises:
  • ValueError: If the model type is not supported
Returns:

vm.VMModel: A VM Model instance
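
A minimal sketch using a scikit-learn classifier; the model, training data, and input ID are illustrative and assume scikit-learn is among the supported libraries:

import numpy as np
import validmind as vm
from sklearn.linear_model import LogisticRegression

X = np.array([[25, 32000], [40, 81000], [31, 54000], [58, 67000]])
y = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

vm_model = vm.init_model(
    model,
    input_id="log_reg_model",   # key used when passing this model as a test input
)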

def init_r_model(model_path: str) -> validmind.vm_models.model.VMModel:

Initializes a VM Model for an R model

R models must be saved to disk, and the file type depends on the model type. Currently, we support the following model types:

  • LogisticRegression glm model in R: saved as an RDS file with saveRDS
  • LinearRegression lm model in R: saved as an RDS file with saveRDS
  • XGBClassifier: saved as a .json or .bin file with xgb.save
  • XGBRegressor: saved as a .json or .bin file with xgb.save

LogisticRegression and LinearRegression models are converted to sklearn models by extracting the coefficients and intercept from the R model. XGB models are loaded using the xgboost library, since models saved in .json or .bin format can be loaded directly by either Python or R.

Arguments:
  • model_path (str): The path to the R model saved as an RDS or XGB file
  • model_type (str): The type of the model (one of R_MODEL_TYPES)
Returns:

VMModel: A VM Model instance
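
For example, assuming an R glm model saved with saveRDS (the file path is illustrative):

import validmind as vm

# Load a LogisticRegression (glm) model fitted in R and saved with saveRDS
vm_model = vm.init_r_model(model_path="r_models/credit_model.rds")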

def metric(func_or_id):

DEPRECATED, use @vm.test instead

def preview_template():

Preview the documentation template for the current project

This function will display the documentation template for the current project. If the project has not been initialized, then an error will be raised.

Raises:
  • ValueError: If the project has not been initialized
def reload():

Reconnect to the ValidMind API and reload the project configuration

def run_documentation_tests( section=None, send=True, fail_fast=False, inputs=None, config=None, **kwargs):

Collect and run all the tests associated with a template

This function will analyze the current project's documentation template and collect all the tests associated with it into a test suite. It will then run the test suite, log the results to the ValidMind API, and display them to the user.

Arguments:
  • section (str or list, optional): The section(s) of the documentation template to run. Defaults to None.
  • send (bool, optional): Whether to send the results to the ValidMind API. Defaults to True.
  • fail_fast (bool, optional): Whether to stop running tests after the first failure. Defaults to False.
  • inputs (dict, optional): A dictionary of test inputs to pass to the TestSuite
  • config (dict, optional): A dictionary of test parameters to override the defaults
  • **kwargs: backwards compatibility for passing in test inputs using keyword arguments
Returns:

TestSuite or dict: The completed TestSuite instance or a dictionary of TestSuites if section is a list.

Raises:
  • ValueError: If the project has not been initialized
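
A minimal sketch, reusing the vm_dataset and vm_model objects from the init_dataset and init_model sketches above; the section name is illustrative:

import validmind as vm

results = vm.run_documentation_tests(
    section="model_development",     # illustrative template section
    inputs={
        "dataset": vm_dataset,       # from vm.init_dataset(...)
        "model": vm_model,           # from vm.init_model(...)
    },
)
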
def run_test_suite( test_suite_id, send=True, fail_fast=False, config=None, inputs=None, **kwargs):

High-level function for running a test suite

This function provides a high level interface for running a test suite. A test suite is a collection of tests. This function will automatically find the correct test suite class based on the test_suite_id, initialize each of the tests, and run them.

Arguments:
  • test_suite_id (str): The test suite name (e.g. 'classifier_full_suite')
  • config (dict, optional): A dictionary of parameters to pass to the tests in the test suite. Defaults to None.
  • send (bool, optional): Whether to post the test results to the API. send=False is useful for testing. Defaults to True.
  • fail_fast (bool, optional): Whether to stop running tests after the first failure. Defaults to False.
  • inputs (dict, optional): A dictionary of test inputs to pass to the TestSuite, e.g. model, dataset, models, etc. These inputs will be accessible to any test in the test suite. See the test documentation or vm.describe_test() for more details on the inputs required for each test.
  • **kwargs: backwards compatibility for passing in test inputs using keyword arguments
Raises:
  • ValueError: If the test suite name is not found or if there is an error initializing the test suite
Returns:

TestSuite: the TestSuite instance
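
For example, running the classifier_full_suite mentioned above with previously initialized inputs, keeping the results local while iterating:

import validmind as vm

suite = vm.run_test_suite(
    "classifier_full_suite",
    inputs={"dataset": vm_dataset, "model": vm_model},
    fail_fast=True,    # stop at the first failing test
    send=False,        # do not post results to the API while experimenting
)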

def tags(*tags):

Decorator for specifying tags for a metric.

Arguments:
  • *tags: The tags to apply to the metric.
def tasks(*tasks):

Decorator for specifying the task types that a metric is designed for.

Arguments:
  • *tasks: The task types that the metric is designed for.
def test(func_or_id):

Decorator for creating and registering metrics with the ValidMind framework.

Creates a metric object and registers it with ValidMind under the provided ID. If no ID is provided, the function name is used to build one. For example, if the function is named my_metric, the metric will be registered under the ID validmind.custom_metrics.my_metric.

This decorator works by creating a new Metric class whose run method calls the decorated function. The decorated function should take as arguments the inputs it requires (dataset, datasets, model, models) followed by any parameters. It can return any number of the following types:

  • Table: Either a list of dictionaries or a pandas DataFrame
  • Plot: Either a matplotlib figure or a plotly figure
  • Scalar: A single number or string

The function may also include a docstring. This docstring will be used and logged as the metric's description.

Arguments:
  • func: The function to decorate
  • test_id: The identifier for the metric. If not provided, the function name is used.
Returns:

The decorated function.
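
A minimal sketch of a custom test combining the @vm.test, @vm.tags, and @vm.tasks decorators; the test ID, the tags, and the assumption that a VMDataset exposes df and target_column attributes are illustrative:

import validmind as vm

@vm.test("my_org.ClassBalance")              # illustrative test ID
@vm.tags("data_quality", "tabular_data")     # illustrative tags
@vm.tasks("classification")
def class_balance(dataset):
    """Shows the proportion of each class in the dataset's target column."""
    # Assumes the VMDataset exposes the underlying DataFrame and target column name
    counts = dataset.df[dataset.target_column].value_counts(normalize=True)
    return counts.rename("proportion").reset_index()   # returned DataFrame is logged as a table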

def log_figure(figure):

Logs a figure

Arguments:
  • figure (Figure): The Figure object wrapper
Raises:
  • Exception: If the API call fails
Returns:

dict: The response from the API

def log_metrics(metrics, inputs=None):

Logs metrics to ValidMind API.

Arguments:
  • metrics (list): A list of Metric objects
  • inputs (list): A list of input names to associate with the metrics
Raises:
  • Exception: If the API call fails
Returns:

dict: The response from the API

def log_test_results( results: List[validmind.vm_models.test.threshold_test_result.ThresholdTestResults], inputs) -> List[Callable[..., Dict[str, Any]]]:

Logs test results information

This method will be called automatically by any function running tests, but it can also be called directly if the user wants to run tests on their own.

Arguments:
  • results (list): A list of ThresholdTestResults objects
  • inputs (list): A list of input IDs that were used to run the test
Raises:
  • Exception: If the API call fails
Returns:

list: list of responses from the API