ValidMind for model validation 2 — Start the model validation process

Learn how to use ValidMind for your end-to-end model validation process with our series of four introductory notebooks. In this second notebook, independently verify the data quality tests performed on the dataset used to train the champion model.

You'll learn how to run relevant validation tests with ValidMind, log the results of those tests to the ValidMind Platform, and insert your logged test results as evidence into your validation report. You'll become familiar with the tests available in ValidMind, as well as how to run them. Running tests during model validation is crucial to the effective challenge process, as we want to independently evaluate the evidence and assessments provided by the model development team.

While running our tests in this notebook, we'll focus on assessing the quality of the dataset used to train the champion model.

For a full list of out-of-the-box tests, refer to our Test descriptions or try the interactive Test sandbox.

Learn by doing

Our course tailor-made for validators new to ValidMind combines this series of notebooks with a more in-depth introduction to the ValidMind Platform — Validator Fundamentals

Prerequisites

In order to independently assess the quality of your datasets with this notebook, you'll need to first have:

Need help with the above steps?

Refer to the first notebook in this series: 1 — Set up the ValidMind Library for validation

Setting up

Initialize the ValidMind Library

First, let's connect the ValidMind Library to the model we previously registered in the ValidMind Platform:

  1. In the left sidebar that appears for your model, select Getting Started and select Validation from the DOCUMENT drop-down menu.
  2. Click Copy snippet to clipboard.
  3. Next, load your model identifier credentials from an .env file or replace the placeholder with your own code snippet:
# Make sure the ValidMind Library is installed

%pip install -q validmind

# Load your model identifier credentials from an `.env` file

%load_ext dotenv
%dotenv .env

# Or replace with your code snippet

import validmind as vm

vm.init(
    # api_host="...",
    # api_key="...",
    # api_secret="...",
    # model="...",
    document="validation-report",
)
Note: you may need to restart the kernel to use updated packages.
2026-04-03 03:05:07,874 - INFO(validmind.api_client): 🎉 Connected to ValidMind!
📊 Model: [ValidMind Academy] Model validation (ID: cmalguc9y02ok199q2db381ib)
📁 Document Type: validation_report

Load the sample dataset

Let's first import the public Bank Customer Churn Prediction dataset from Kaggle, which was used to develop the dummy champion model.

We'll use this dataset to review steps that should have been conducted during the initial development and documentation of the model to ensure that the model was built correctly. By independently performing steps taken by the model development team, we can confirm whether the model was built using appropriate and properly processed data.

In our example below, note that:

  • The target column, Exited, has a value of 1 when a customer has churned and 0 otherwise.
  • The ValidMind Library provides a wrapper to automatically load the dataset as a Pandas DataFrame object. A Pandas DataFrame is a two-dimensional tabular data structure that organizes data in rows and columns.
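If you're new to pandas, the bullet above can be made concrete with a tiny hand-built DataFrame (the values here are borrowed from the sample rows printed further down):

```python
import pandas as pd

# A DataFrame organizes data into labeled columns and indexed rows
df = pd.DataFrame({"CreditScore": [619, 608], "Exited": [1, 0]})

print(df.shape)          # (number of rows, number of columns) -> (2, 2)
print(list(df.columns))  # -> ['CreditScore', 'Exited']
```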
from validmind.datasets.classification import customer_churn as demo_dataset

print(
    f"Loaded demo dataset with: \n\n\t• Target column: '{demo_dataset.target_column}' \n\t• Class labels: {demo_dataset.class_labels}"
)

raw_df = demo_dataset.load_data()
raw_df.head()
Loaded demo dataset with: 

    • Target column: 'Exited' 
    • Class labels: {'0': 'Did not exit', '1': 'Exited'}
CreditScore Geography Gender Age Tenure Balance NumOfProducts HasCrCard IsActiveMember EstimatedSalary Exited
0 619 France Female 42 2 0.00 1 1 1 101348.88 1
1 608 Spain Female 41 1 83807.86 1 0 1 112542.58 0
2 502 France Female 42 8 159660.80 3 1 0 113931.57 1
3 699 France Female 39 1 0.00 2 0 0 93826.63 0
4 850 Spain Female 43 2 125510.82 1 1 1 79084.10 0

Verifying data quality adjustments

Let's say that thanks to the documentation submitted by the model development team (Learn more ...), we know that the sample dataset was first modified before being used to train the champion model. After performing some data quality assessments on the raw dataset, it was determined that the dataset required rebalancing, and highly correlated features were also removed.

Identify qualitative tests

During model validation, we use the same data processing logic and training procedure as the development team to confirm that the model's results can be reproduced independently. Let's start with some data quality assessments by running a few individual tests, just as the development team did.

Use the vm.tests.list_tests() function introduced in the first notebook in this series in combination with vm.tests.list_tags() and vm.tests.list_tasks() to find which prebuilt tests are relevant for data quality assessment:

  • tasks represent the kind of modeling task associated with a test. Here we'll focus on classification tasks.
  • tags are free-form descriptions providing more details about the test, for example, what category the test falls into. Here we'll focus on the data_quality tag.
# Get the list of available task types
sorted(vm.tests.list_tasks())
['classification',
 'clustering',
 'data_validation',
 'feature_extraction',
 'monitoring',
 'nlp',
 'regression',
 'residual_analysis',
 'text_classification',
 'text_generation',
 'text_qa',
 'text_summarization',
 'time_series_forecasting',
 'visualization']
# Get the list of available tags
sorted(vm.tests.list_tags())
['AUC',
 'analysis',
 'anomaly',
 'anomaly_detection',
 'bias_and_fairness',
 'binary_classification',
 'calibration',
 'categorical_data',
 'classification',
 'classification_metrics',
 'clustering',
 'correlation',
 'credit_risk',
 'data_analysis',
 'data_distribution',
 'data_quality',
 'data_validation',
 'descriptive_statistics',
 'dimensionality_reduction',
 'distribution',
 'embeddings',
 'feature_importance',
 'feature_selection',
 'few_shot',
 'forecasting',
 'frequency_analysis',
 'kmeans',
 'linear_regression',
 'llm',
 'logistic_regression',
 'metadata',
 'model_comparison',
 'model_diagnosis',
 'model_explainability',
 'model_interpretation',
 'model_performance',
 'model_predictions',
 'model_selection',
 'model_training',
 'model_validation',
 'multiclass_classification',
 'nlp',
 'normality',
 'numerical_data',
 'outlier',
 'outliers',
 'qualitative',
 'rag_performance',
 'ragas',
 'regression',
 'retrieval_performance',
 'scorecard',
 'seasonality',
 'senstivity_analysis',
 'sklearn',
 'stationarity',
 'statistical_test',
 'statistics',
 'statsmodels',
 'tabular_data',
 'text_data',
 'threshold_optimization',
 'time_series_data',
 'unit_root_test',
 'visualization',
 'zero_shot']

You can pass tags and tasks as parameters to the vm.tests.list_tests() function to filter the tests based on the tags and task types.

For example, to find tests related to tabular data quality for classification models, you can call list_tests() like this:

vm.tests.list_tests(task="classification", tags=["tabular_data", "data_quality"])
ID Name Description Has Figure Has Table Required Inputs Params Tags Tasks
validmind.data_validation.ClassImbalance Class Imbalance Evaluates and quantifies class distribution imbalance in a dataset used by a machine learning model.... True True ['dataset'] {'min_percent_threshold': {'type': 'int', 'default': 10}} ['tabular_data', 'binary_classification', 'multiclass_classification', 'data_quality'] ['classification']
validmind.data_validation.DescriptiveStatistics Descriptive Statistics Performs a detailed descriptive statistical analysis of both numerical and categorical data within a model's... False True ['dataset'] {} ['tabular_data', 'time_series_data', 'data_quality'] ['classification', 'regression']
validmind.data_validation.Duplicates Duplicates Tests dataset for duplicate entries, ensuring model reliability via data quality verification.... False True ['dataset'] {'min_threshold': {'type': '_empty', 'default': 1}} ['tabular_data', 'data_quality', 'text_data'] ['classification', 'regression']
validmind.data_validation.HighCardinality High Cardinality Assesses the number of unique values in categorical columns to detect high cardinality and potential overfitting.... False True ['dataset'] {'num_threshold': {'type': 'int', 'default': 100}, 'percent_threshold': {'type': 'float', 'default': 0.1}, 'threshold_type': {'type': 'str', 'default': 'percent'}} ['tabular_data', 'data_quality', 'categorical_data'] ['classification', 'regression']
validmind.data_validation.HighPearsonCorrelation High Pearson Correlation Identifies highly correlated feature pairs in a dataset suggesting feature redundancy or multicollinearity.... False True ['dataset'] {'max_threshold': {'type': 'float', 'default': 0.3}, 'top_n_correlations': {'type': 'int', 'default': 10}, 'feature_columns': {'type': 'list', 'default': None}} ['tabular_data', 'data_quality', 'correlation'] ['classification', 'regression']
validmind.data_validation.MissingValues Missing Values Evaluates dataset quality by ensuring missing value percentage across all features does not exceed a set threshold.... False True ['dataset'] {'min_percentage_threshold': {'type': 'float', 'default': 1.0}} ['tabular_data', 'data_quality'] ['classification', 'regression']
validmind.data_validation.MissingValuesBarPlot Missing Values Bar Plot Assesses the percentage and distribution of missing values in the dataset via a bar plot, with emphasis on... True False ['dataset'] {'threshold': {'type': 'int', 'default': 80}, 'fig_height': {'type': 'int', 'default': 600}} ['tabular_data', 'data_quality', 'visualization'] ['classification', 'regression']
validmind.data_validation.Skewness Skewness Evaluates the skewness of numerical data in a dataset to check against a defined threshold, aiming to ensure data... False True ['dataset'] {'max_threshold': {'type': '_empty', 'default': 1}} ['data_quality', 'tabular_data'] ['classification', 'regression']
validmind.plots.BoxPlot Box Plot Generates customizable box plots for numerical features in a dataset with optional grouping using Plotly.... True False ['dataset'] {'columns': {'type': 'Optional', 'default': None}, 'group_by': {'type': 'Optional', 'default': None}, 'width': {'type': 'int', 'default': 1800}, 'height': {'type': 'int', 'default': 1200}, 'colors': {'type': 'Optional', 'default': None}, 'show_outliers': {'type': 'bool', 'default': True}, 'title_prefix': {'type': 'str', 'default': 'Box Plot of'}} ['tabular_data', 'visualization', 'data_quality'] ['classification', 'regression', 'clustering']
validmind.plots.HistogramPlot Histogram Plot Generates customizable histogram plots for numerical features in a dataset using Plotly.... True False ['dataset'] {'columns': {'type': 'Optional', 'default': None}, 'bins': {'type': 'Union', 'default': 30}, 'color': {'type': 'str', 'default': 'steelblue'}, 'opacity': {'type': 'float', 'default': 0.7}, 'show_kde': {'type': 'bool', 'default': True}, 'normalize': {'type': 'bool', 'default': False}, 'log_scale': {'type': 'bool', 'default': False}, 'title_prefix': {'type': 'str', 'default': 'Histogram of'}, 'width': {'type': 'int', 'default': 1200}, 'height': {'type': 'int', 'default': 800}, 'n_cols': {'type': 'int', 'default': 2}, 'vertical_spacing': {'type': 'float', 'default': 0.15}, 'horizontal_spacing': {'type': 'float', 'default': 0.1}} ['tabular_data', 'visualization', 'data_quality'] ['classification', 'regression', 'clustering']
validmind.stats.DescriptiveStats Descriptive Stats Provides comprehensive descriptive statistics for numerical features in a dataset.... False True ['dataset'] {'columns': {'type': 'Optional', 'default': None}, 'include_advanced': {'type': 'bool', 'default': True}, 'confidence_level': {'type': 'float', 'default': 0.95}} ['tabular_data', 'statistics', 'data_quality'] ['classification', 'regression', 'clustering']
Want to learn more about navigating ValidMind tests?

Refer to our notebook outlining the utilities available for viewing and understanding available ValidMind tests: Explore tests

Initialize the ValidMind dataset

With the individual tests we want to run identified, the next step is to connect your data with a ValidMind Dataset object. This step is necessary whenever you want to connect a dataset to documentation and produce test results through ValidMind, but you only need to do it once per dataset.

Initialize a ValidMind dataset object using the init_dataset function from the ValidMind (vm) module. For this example, we'll pass in the following arguments:

  • dataset — The raw dataset that you want to provide as input to tests.
  • input_id — A unique identifier that allows tracking what inputs are used when running each individual test.
  • target_column — A required argument if tests require access to true values. This is the name of the target column in the dataset.
# vm_raw_dataset is now a VMDataset object that you can pass to any ValidMind test
vm_raw_dataset = vm.init_dataset(
    dataset=raw_df,
    input_id="raw_dataset",
    target_column="Exited",
)

Run data quality tests

Now that we know how to initialize a ValidMind dataset object, we're ready to run some tests!

You run individual tests by calling the run_test function provided by the validmind.tests module. For the examples below, we'll pass in the following arguments:

  • test_id — The ID of the test to run, as seen in the ID column when you run list_tests.
  • params — A dictionary of parameters for the test. These will override any default_params set in the test definition.

Run tabular data tests

The inputs expected by a test can also be found in the test definition — let's take validmind.data_validation.DescriptiveStatistics as an example.

Note that the output of the describe_test() function below shows that this test expects a dataset as input:

vm.tests.describe_test("validmind.data_validation.DescriptiveStatistics")
Test: Descriptive Statistics ('validmind.data_validation.DescriptiveStatistics')

Now, let's run a few tests to assess the quality of the dataset:

result2 = vm.tests.run_test(
    test_id="validmind.data_validation.ClassImbalance",
    inputs={"dataset": vm_raw_dataset},
    params={"min_percent_threshold": 30},
)

❌ Class Imbalance

The Class Imbalance test evaluates the distribution of target classes in the dataset to identify potential imbalances that could impact model performance. The results table presents the percentage of records for each class in the target variable "Exited," alongside a pass/fail assessment based on a minimum threshold of 30%. The accompanying bar plot visually depicts the proportion of each class, highlighting the relative frequencies.

Key insights:

  • Majority class exceeds threshold: The "Exited = 0" class constitutes 79.80% of the dataset and passes the 30% minimum threshold.
  • Minority class below threshold: The "Exited = 1" class represents 20.20% of the dataset and fails the 30% minimum threshold, indicating under-representation.
  • Visual confirmation of imbalance: The bar plot demonstrates a pronounced disparity between the two classes, with the majority class substantially outnumbering the minority class.

The results indicate a notable class imbalance in the dataset, with the minority class ("Exited = 1") falling below the specified 30% threshold. This distribution suggests that the dataset is skewed toward the majority class, which may influence model learning and prediction behavior. The observed imbalance warrants consideration in subsequent model development and evaluation processes.

Parameters:

{
  "min_percent_threshold": 30
}
            

Tables

Exited Class Imbalance

Exited Percentage of Rows (%) Pass/Fail
0 79.80% Pass
1 20.20% Fail

Figures

ValidMind Figure validmind.data_validation.ClassImbalance:b56c

The output above shows that the class imbalance test did not pass according to the value we set for min_percent_threshold — great, this matches what was reported by the model development team.

To address this issue, we'll re-run the test on some processed data. In this case let's apply a very simple rebalancing technique to the dataset:

import pandas as pd

raw_copy_df = raw_df.sample(frac=1)  # Create a shuffled copy of the raw dataset

# Create a balanced dataset with the same number of exited and not exited customers
exited_df = raw_copy_df.loc[raw_copy_df["Exited"] == 1]
not_exited_df = raw_copy_df.loc[raw_copy_df["Exited"] == 0].sample(n=exited_df.shape[0])

balanced_raw_df = pd.concat([exited_df, not_exited_df])
balanced_raw_df = balanced_raw_df.sample(frac=1, random_state=42)
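Before re-initializing a ValidMind dataset, you can sanity-check the undersampling logic locally with value_counts. A minimal sketch on synthetic data, where toy_df stands in for raw_df:

```python
import pandas as pd

# Toy imbalanced dataset standing in for `raw_df` (synthetic values)
toy_df = pd.DataFrame({"Exited": [0] * 80 + [1] * 20, "Balance": range(100)})

# Undersample the majority class to match the minority class size
exited = toy_df[toy_df["Exited"] == 1]
not_exited = toy_df[toy_df["Exited"] == 0].sample(n=len(exited), random_state=42)
balanced = pd.concat([exited, not_exited]).sample(frac=1, random_state=42)

# Both classes should now account for exactly half of the rows
print(balanced["Exited"].value_counts(normalize=True))
```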

With this new balanced dataset, you can re-run the individual test to see if it now passes the class imbalance test requirement.

As this is technically a different dataset, remember to first initialize a new ValidMind Dataset object to pass as input to run_test():

# Register new data and now 'balanced_raw_dataset' is the new dataset object of interest
vm_balanced_raw_dataset = vm.init_dataset(
    dataset=balanced_raw_df,
    input_id="balanced_raw_dataset",
    target_column="Exited",
)
# Pass the initialized `balanced_raw_dataset` as input into the test run
result = vm.tests.run_test(
    test_id="validmind.data_validation.ClassImbalance",
    inputs={"dataset": vm_balanced_raw_dataset},
    params={"min_percent_threshold": 30},
)

✅ Class Imbalance

The Class Imbalance test evaluates the distribution of target classes in the dataset to identify potential imbalances that could impact model performance. The results table presents the percentage of records for each class in the target variable "Exited," alongside a pass/fail assessment based on a minimum threshold of 30%. The accompanying bar plot visually displays the proportion of each class, facilitating interpretation of class distribution.

Key insights:

  • Equal class distribution observed: Both classes (Exited = 0 and Exited = 1) each represent 50% of the dataset, indicating a perfectly balanced class distribution.
  • All classes exceed threshold: Each class surpasses the 30% minimum percentage threshold, resulting in a "Pass" outcome for both classes.
  • No evidence of class imbalance: The visual and tabular results confirm the absence of under-represented classes in the target variable.

The results demonstrate that the dataset used for model development exhibits a balanced distribution across the target classes, with both classes equally represented and exceeding the specified minimum threshold. This balanced class structure reduces the risk of model bias related to class imbalance and supports reliable model training and evaluation.

Parameters:

{
  "min_percent_threshold": 30
}
            

Tables

Exited Class Imbalance

Exited Percentage of Rows (%) Pass/Fail
0 50.00% Pass
1 50.00% Pass

Figures

ValidMind Figure validmind.data_validation.ClassImbalance:0557

Remove highly correlated features

Next, let's also remove highly correlated features from our dataset as outlined by the development team. Removing highly correlated features helps make the model simpler, more stable, and easier to understand.

You can reuse the output of a ValidMind test in later steps. In the example below, we'll retrieve the list of features with the highest correlation coefficients and use it to reduce the final list of features for modeling.
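Conceptually, a screen like HighPearsonCorrelation can be approximated in a few lines of pandas. This is an illustrative sketch on toy data, not the ValidMind implementation:

```python
import pandas as pd

# Toy data: `b` is nearly a linear function of `a`, so that pair should flag
df = pd.DataFrame({
    "a": [1, 2, 3, 4, 5],
    "b": [2, 4, 6, 8, 11],
    "c": [5, 3, 9, 1, 7],
})

corr = df.corr(method="pearson")
max_threshold = 0.3

# Collect the feature pairs whose absolute correlation exceeds the threshold
flagged = [
    (corr.columns[i], corr.columns[j], round(corr.iloc[i, j], 4))
    for i in range(len(corr.columns))
    for j in range(i + 1, len(corr.columns))
    if abs(corr.iloc[i, j]) > max_threshold
]
print(flagged)  # only the ("a", "b") pair exceeds the 0.3 threshold
```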

First, we'll run validmind.data_validation.HighPearsonCorrelation with the previously initialized balanced_raw_dataset as input, unchanged, to establish a baseline for comparison with later runs:

corr_result = vm.tests.run_test(
    test_id="validmind.data_validation.HighPearsonCorrelation",
    params={"max_threshold": 0.3},
    inputs={"dataset": vm_balanced_raw_dataset},
)

❌ High Pearson Correlation

The High Pearson Correlation test identifies pairs of features in the dataset that exhibit strong linear relationships, with the aim of detecting potential feature redundancy or multicollinearity. The results table lists the top ten feature pairs ranked by the absolute value of their Pearson correlation coefficients, along with their corresponding Pass or Fail status based on a threshold of 0.3. Only one feature pair exceeds the threshold, while the remaining pairs display lower correlation values and pass the test criteria.

Key insights:

  • One feature pair exceeds correlation threshold: The pair (Age, Exited) shows a Pearson correlation coefficient of 0.3489, surpassing the 0.3 threshold and resulting in a Fail status.
  • All other feature pairs pass threshold: The remaining nine feature pairs have absolute correlation coefficients ranging from 0.1866 to 0.0396, all below the 0.3 threshold and marked as Pass.
  • Highest negative correlation observed at -0.1866: The pair (Balance, NumOfProducts) exhibits the strongest negative correlation among the top ten, but remains below the threshold.

The test results indicate that the majority of feature pairs in the dataset do not exhibit high linear correlations, with only the (Age, Exited) pair exceeding the specified threshold. This suggests limited evidence of feature redundancy or multicollinearity among the evaluated features, with the exception of the identified pair. The overall correlation structure supports the interpretability and stability of the feature set.

Parameters:

{
  "max_threshold": 0.3
}
            

Tables

Columns Coefficient Pass/Fail
(Age, Exited) 0.3489 Fail
(Balance, NumOfProducts) -0.1866 Pass
(IsActiveMember, Exited) -0.1752 Pass
(Balance, Exited) 0.1378 Pass
(CreditScore, Exited) -0.0659 Pass
(NumOfProducts, Exited) -0.0606 Pass
(NumOfProducts, IsActiveMember) 0.0457 Pass
(Tenure, IsActiveMember) -0.0418 Pass
(Tenure, EstimatedSalary) 0.0409 Pass
(HasCrCard, Exited) -0.0396 Pass

The output above shows that the test did not pass according to the value we set for max_threshold — as reported and expected.

corr_result is an object of type TestResult. We can inspect the result object to see what the test has produced:

print(type(corr_result))
print("Result ID: ", corr_result.result_id)
print("Params: ", corr_result.params)
print("Passed: ", corr_result.passed)
print("Tables: ", corr_result.tables)
<class 'validmind.vm_models.result.result.TestResult'>
Result ID:  validmind.data_validation.HighPearsonCorrelation
Params:  {'max_threshold': 0.3}
Passed:  False
Tables:  [ResultTable]

Let's remove the highly correlated features and create a new VM dataset object.

We'll begin by checking out the table in the result and extracting a list of features that failed the test:

# Extract table from `corr_result.tables`
features_df = corr_result.tables[0].data
features_df
Columns Coefficient Pass/Fail
0 (Age, Exited) 0.3489 Fail
1 (Balance, NumOfProducts) -0.1866 Pass
2 (IsActiveMember, Exited) -0.1752 Pass
3 (Balance, Exited) 0.1378 Pass
4 (CreditScore, Exited) -0.0659 Pass
5 (NumOfProducts, Exited) -0.0606 Pass
6 (NumOfProducts, IsActiveMember) 0.0457 Pass
7 (Tenure, IsActiveMember) -0.0418 Pass
8 (Tenure, EstimatedSalary) 0.0409 Pass
9 (HasCrCard, Exited) -0.0396 Pass
# Extract list of features that failed the test
high_correlation_features = features_df[features_df["Pass/Fail"] == "Fail"]["Columns"].tolist()
high_correlation_features
['(Age, Exited)']

Next, extract the feature names from the list of strings (for example, (Age, Exited) becomes Age):

high_correlation_features = [feature.split(",")[0].strip("()") for feature in high_correlation_features]
high_correlation_features
['Age']

Now, it's time to re-initialize the dataset with the highly correlated features removed.

Note the use of a different input_id. This allows tracking the inputs used when running each individual test.

# Remove the highly correlated features from the dataset
balanced_raw_no_age_df = balanced_raw_df.drop(columns=high_correlation_features)

# Re-initialize the dataset object
vm_raw_dataset_preprocessed = vm.init_dataset(
    dataset=balanced_raw_no_age_df,
    input_id="raw_dataset_preprocessed",
    target_column="Exited",
)

The test should now pass when re-run with the reduced feature set:

corr_result = vm.tests.run_test(
    test_id="validmind.data_validation.HighPearsonCorrelation",
    params={"max_threshold": 0.3},
    inputs={"dataset": vm_raw_dataset_preprocessed},
)

✅ High Pearson Correlation

The High Pearson Correlation test evaluates the linear relationships between feature pairs to identify potential redundancy or multicollinearity within the dataset. The results table presents the top ten absolute Pearson correlation coefficients, along with the corresponding feature pairs and Pass/Fail status based on a threshold of 0.3. All reported coefficients are below the threshold, and each feature pair is marked as Pass.

Key insights:

  • No feature pairs exceed correlation threshold: All absolute Pearson correlation coefficients are below the 0.3 threshold, with the highest magnitude observed at 0.1866 between Balance and NumOfProducts.
  • Weak linear relationships among top pairs: The strongest correlations, both positive and negative, remain modest in magnitude, indicating limited linear association between the evaluated features.
  • Consistent Pass status across all pairs: Every feature pair in the top ten list is marked as Pass, reflecting the absence of high linear dependencies within the tested feature set.

The results indicate that the dataset does not exhibit strong linear relationships or multicollinearity among the top feature pairs. All observed correlations are well below the specified threshold, supporting the independence of input features for model development and interpretation.

Parameters:

{
  "max_threshold": 0.3
}
            

Tables

Columns Coefficient Pass/Fail
(Balance, NumOfProducts) -0.1866 Pass
(IsActiveMember, Exited) -0.1752 Pass
(Balance, Exited) 0.1378 Pass
(CreditScore, Exited) -0.0659 Pass
(NumOfProducts, Exited) -0.0606 Pass
(NumOfProducts, IsActiveMember) 0.0457 Pass
(Tenure, IsActiveMember) -0.0418 Pass
(Tenure, EstimatedSalary) 0.0409 Pass
(HasCrCard, Exited) -0.0396 Pass
(Tenure, HasCrCard) 0.0272 Pass

You can also plot the correlation matrix to visualize the correlations among the remaining features:

corr_result = vm.tests.run_test(
    test_id="validmind.data_validation.PearsonCorrelationMatrix",
    inputs={"dataset": vm_raw_dataset_preprocessed},
)

Pearson Correlation Matrix

The Pearson Correlation Matrix test evaluates the linear relationships between all pairs of numerical variables in the dataset, visualizing the strength and direction of these relationships using a heat map. The resulting matrix displays correlation coefficients ranging from -1 to 1, with higher absolute values indicating stronger linear associations. In this result, the heat map shows the pairwise correlations among variables such as CreditScore, Tenure, Balance, NumOfProducts, HasCrCard, IsActiveMember, EstimatedSalary, and Exited, with color intensity reflecting the magnitude and direction of each correlation.

Key insights:

  • No high correlations detected: All off-diagonal correlation coefficients are below the 0.7 absolute value threshold, indicating the absence of strong linear dependencies between any pair of variables.
  • Weak to moderate relationships observed: The highest observed correlation is -0.19 between Balance and NumOfProducts, and 0.14 between Balance and Exited, both of which are considered weak.
  • Target variable shows low correlation with predictors: The Exited variable exhibits low correlation with all other features, with the highest being -0.18 with IsActiveMember and 0.14 with Balance.

The correlation structure indicates that the numerical variables in the dataset are largely independent, with no evidence of multicollinearity or redundancy among predictors. The absence of strong linear relationships suggests that each variable contributes distinct information, supporting model interpretability and reducing the risk of overfitting due to redundant features.

Figures

ValidMind Figure validmind.data_validation.PearsonCorrelationMatrix:ac14

Documenting test results

Now that we've done some analysis on two different datasets, we can use ValidMind to document why certain adjustments were made to our raw data, with test results as supporting evidence. Every test result returned by the run_test() function has a .log() method that can be used to send the test results to the ValidMind Platform.

When logging validation test results to the platform, you'll need to manually add those results to the desired section of the validation report. To demonstrate how to add test results to your validation report, we'll log our data quality tests and insert the results via the ValidMind Platform.

Configure and run comparison tests

Below, we'll perform comparison tests between the original raw dataset (raw_dataset) and the final preprocessed dataset (raw_dataset_preprocessed), again logging the results to the ValidMind Platform.

We can specify all the tests we'd like to run in a dictionary called test_config, and we'll pass in the following arguments for each test:

  • params: Individual test parameters.
  • input_grid: Individual test inputs to compare. In this case, we'll input our two datasets for comparison.

Note here that the input_grid expects the input_id of the dataset as the value rather than the variable name we specified:

# Individual test config with inputs specified
test_config = {
    "validmind.data_validation.ClassImbalance": {
        "input_grid": {"dataset": ["raw_dataset", "raw_dataset_preprocessed"]},
        "params": {"min_percent_threshold": 30}
    },
    "validmind.data_validation.HighPearsonCorrelation": {
        "input_grid": {"dataset": ["raw_dataset", "raw_dataset_preprocessed"]},
        "params": {"max_threshold": 0.3}
    },
}
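Conceptually, each input_grid expands into one test run per combination of the listed inputs. A quick sketch of that expansion with the standard library (illustrative only, not the ValidMind internals):

```python
from itertools import product

input_grid = {"dataset": ["raw_dataset", "raw_dataset_preprocessed"]}

# One {input_name: input_id} mapping per combination in the grid
keys = list(input_grid)
runs = [dict(zip(keys, combo)) for combo in product(*input_grid.values())]
print(runs)  # -> [{'dataset': 'raw_dataset'}, {'dataset': 'raw_dataset_preprocessed'}]
```

With a single grid key this is simply one run per listed dataset, which is why the comparison results below show both datasets side by side.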

Then batch run and log our tests in test_config:

for test_id, config in test_config.items():
    print(test_id)
    try:
        # Pass params only when the test config defines them
        kwargs = {"params": config["params"]} if "params" in config else {}
        if "input_grid" in config:
            # For comparison tests, pass the input_grid configuration
            result = vm.tests.run_test(test_id, input_grid=config["input_grid"], **kwargs)
        else:
            # Original logic for regular inputs
            result = vm.tests.run_test(test_id, inputs=config["inputs"], **kwargs)
        result.log()
    except Exception as e:
        print(f"Error running test {test_id}: {e}")
validmind.data_validation.ClassImbalance

❌ Class Imbalance

The Class Imbalance test evaluates the distribution of target classes within the dataset to identify potential imbalances that could impact model performance. The results present the proportion of each class in both the raw and preprocessed datasets, with a minimum percentage threshold set at 30% for each class. The tables and plots display the class proportions and indicate whether each class meets the threshold criterion.

Key insights:

  • Imbalance detected in raw dataset: In the raw dataset, class 0 constitutes 79.80% of records while class 1 accounts for 20.20%. Class 1 falls below the 30% threshold and is flagged as failing the test.
  • Balanced distribution in preprocessed dataset: The preprocessed dataset shows both classes at 50.00%, with each class passing the 30% threshold.
  • Visual confirmation of class proportions: The bar plots visually reinforce the numerical findings, highlighting the skew in the raw dataset and the balanced distribution post-preprocessing.

The results indicate that the raw dataset exhibits a significant class imbalance, with the minority class underrepresented relative to the defined threshold. Preprocessing steps have resulted in a balanced class distribution, with both classes equally represented and meeting the minimum percentage requirement. This transition from imbalance to balance is clearly reflected in both the tabular and visual outputs.

Parameters:

{
  "min_percent_threshold": 30
}
            

Tables

dataset Exited Percentage of Rows (%) Pass/Fail
raw_dataset 0 79.80% Pass
raw_dataset 1 20.20% Fail
raw_dataset_preprocessed 0 50.00% Pass
raw_dataset_preprocessed 1 50.00% Pass

Figures

ValidMind Figure validmind.data_validation.ClassImbalance:81be
ValidMind Figure validmind.data_validation.ClassImbalance:9537
2026-04-03 03:05:34,397 - INFO(validmind.vm_models.result.result): Test driven block with result_id validmind.data_validation.ClassImbalance does not exist in model's document
validmind.data_validation.HighPearsonCorrelation

❌ High Pearson Correlation

The High Pearson Correlation test evaluates the linear relationships between feature pairs to identify potential redundancy or multicollinearity. The results table presents the top pairwise Pearson correlation coefficients for both the raw and preprocessed datasets, indicating whether each pair exceeds the absolute correlation threshold of 0.3. Each feature pair is listed with its corresponding coefficient and a Pass or Fail status based on the threshold.

Key insights:

  • One feature pair exceeds correlation threshold: In the raw dataset, the pair (Balance, NumOfProducts) shows a correlation coefficient of -0.3045, resulting in a Fail status as it surpasses the 0.3 threshold.
  • All other correlations below threshold: All remaining feature pairs in both the raw and preprocessed datasets have absolute correlation coefficients below 0.3, resulting in Pass status.
  • Lower correlations after preprocessing: The highest correlation in the preprocessed dataset is -0.1866 for (Balance, NumOfProducts), which is below the threshold and marked as Pass.

The results indicate that, with the exception of one feature pair in the raw dataset, all examined feature pairs exhibit low linear correlations. Preprocessing further reduces the magnitude of correlations, and no pairs in the preprocessed dataset exceed the specified threshold. This suggests limited risk of feature redundancy or multicollinearity based on linear relationships among the evaluated features.

Parameters:

{
  "max_threshold": 0.3
}
            

Tables

dataset Columns Coefficient Pass/Fail
raw_dataset (Balance, NumOfProducts) -0.3045 Fail
raw_dataset (Age, Exited) 0.2810 Pass
raw_dataset (IsActiveMember, Exited) -0.1515 Pass
raw_dataset (Balance, Exited) 0.1174 Pass
raw_dataset (Age, IsActiveMember) 0.0873 Pass
raw_dataset (NumOfProducts, Exited) -0.0523 Pass
raw_dataset (Age, NumOfProducts) -0.0306 Pass
raw_dataset (CreditScore, IsActiveMember) 0.0306 Pass
raw_dataset (Tenure, IsActiveMember) -0.0293 Pass
raw_dataset (Age, Balance) 0.0290 Pass
raw_dataset_preprocessed (Balance, NumOfProducts) -0.1866 Pass
raw_dataset_preprocessed (IsActiveMember, Exited) -0.1752 Pass
raw_dataset_preprocessed (Balance, Exited) 0.1378 Pass
raw_dataset_preprocessed (CreditScore, Exited) -0.0659 Pass
raw_dataset_preprocessed (NumOfProducts, Exited) -0.0606 Pass
raw_dataset_preprocessed (NumOfProducts, IsActiveMember) 0.0457 Pass
raw_dataset_preprocessed (Tenure, IsActiveMember) -0.0418 Pass
raw_dataset_preprocessed (Tenure, EstimatedSalary) 0.0409 Pass
raw_dataset_preprocessed (HasCrCard, Exited) -0.0396 Pass
raw_dataset_preprocessed (Tenure, HasCrCard) 0.0272 Pass
2026-04-03 03:05:38,170 - INFO(validmind.vm_models.result.result): Test driven block with result_id validmind.data_validation.HighPearsonCorrelation does not exist in model's document
Note the output returned indicating that a test-driven block doesn't currently exist in your model's documentation for some test IDs.

That's expected: when we run validation tests, the logged results need to be manually added to your report as part of your compliance assessment process within the ValidMind Platform.

Log tests with unique identifiers

Next, we'll use the previously initialized vm_balanced_raw_dataset (that still has a highly correlated Age column) as input to run an individual test, then log the result to the ValidMind Platform.

When running individual tests, you can use a custom result_id to tag the individual result with a unique identifier:

  • This result_id can be appended to test_id with a : separator.
  • The balanced_raw_dataset result identifier will correspond to the vm_balanced_raw_dataset input, the dataset that still has the Age column.
result = vm.tests.run_test(
    test_id="validmind.data_validation.HighPearsonCorrelation:balanced_raw_dataset",
    params={"max_threshold": 0.3},
    inputs={"dataset": vm_balanced_raw_dataset},
)
result.log()

❌ High Pearson Correlation Balanced Raw Dataset

The High Pearson Correlation test evaluates the linear relationships between feature pairs in the dataset to identify potential feature redundancy or multicollinearity. The results table presents the top ten strongest absolute correlations, listing the feature pairs, their Pearson correlation coefficients, and a Pass/Fail status based on a threshold of 0.3. Only one feature pair exceeds the threshold, while the remaining pairs show lower correlation values and pass the test criteria.

Key insights:

  • One feature pair exceeds correlation threshold: The pair (Age, Exited) has a correlation coefficient of 0.3489, surpassing the 0.3 threshold and resulting in a Fail status.
  • All other feature pairs below threshold: The remaining nine feature pairs have absolute correlation coefficients ranging from 0.0396 to 0.1866, all below the threshold and marked as Pass.
  • Predominantly weak linear relationships: Most feature pairs exhibit weak linear associations, with coefficients clustered well below the threshold.

The test results indicate that the dataset contains predominantly low linear correlations among features, with only the (Age, Exited) pair exhibiting moderate correlation above the defined threshold. The overall correlation structure suggests minimal risk of feature redundancy or multicollinearity, aside from the identified pair.

Parameters:

{
  "max_threshold": 0.3
}
            

Tables

Columns Coefficient Pass/Fail
(Age, Exited) 0.3489 Fail
(Balance, NumOfProducts) -0.1866 Pass
(IsActiveMember, Exited) -0.1752 Pass
(Balance, Exited) 0.1378 Pass
(CreditScore, Exited) -0.0659 Pass
(NumOfProducts, Exited) -0.0606 Pass
(NumOfProducts, IsActiveMember) 0.0457 Pass
(Tenure, IsActiveMember) -0.0418 Pass
(Tenure, EstimatedSalary) 0.0409 Pass
(HasCrCard, Exited) -0.0396 Pass
2026-04-03 03:05:41,745 - INFO(validmind.vm_models.result.result): Test driven block with result_id validmind.data_validation.HighPearsonCorrelation:balanced_raw_dataset does not exist in model's document

Add test results to reporting

With some test results logged, let's head to the model we connected to at the beginning of this notebook and learn how to insert a test result into our validation report (Need more help?).

While the example below focuses on a specific test result, you can follow the same general procedure for your other results:

  1. From the Inventory in the ValidMind Platform, go to the model you connected to earlier.

  2. In the left sidebar that appears for your model, click Validation under Documents.

  3. Locate the Data Preparation section and click on 2.2.1. Data Quality to expand that section.

  4. Under the Class Imbalance Assessment section, locate Validator Evidence then click Link Evidence to Report:

    Screenshot showing the validation report with the link validator evidence to report option highlighted

  5. Select the Class Imbalance test results we logged: ValidMind Data Validation Class Imbalance

    Screenshot showing the ClassImbalance test selected

  6. Click Update Linked Evidence to add the test results to the validation report.

    Confirm that the results for the Class Imbalance test have been correctly inserted into section 2.2.1. Data Quality of the report:

    Screenshot showing the ClassImbalance test inserted into the validation report

  7. Note that these test results are flagged as Requires Attention, as they include comparative results from our initial raw dataset.

    Click See evidence details to review the LLM-generated description that summarizes the test results, which confirms that our final preprocessed dataset actually passes our test:

    Screenshot showing the ClassImbalance test generated description in the text editor

Here in this text editor, you can make qualitative edits to the draft that ValidMind generated to finalize the test results.

Learn more: Work with content blocks

Split the preprocessed dataset

With our raw dataset rebalanced and its highly correlated features removed, let's now split our dataset into training and test subsets in preparation for model evaluation testing.

To start, let's grab the first few rows from the balanced_raw_no_age_df dataset we initialized earlier:

balanced_raw_no_age_df.head()
CreditScore Geography Gender Tenure Balance NumOfProducts HasCrCard IsActiveMember EstimatedSalary Exited
4419 709 France Male 0 0.00 2 1 0 46811.77 0
992 562 Spain Male 6 161628.66 1 1 0 91482.50 0
3403 640 Spain Female 3 77826.80 1 1 1 168544.85 0
7054 562 Germany Female 6 130565.02 1 1 0 9854.72 1
3345 612 Spain Male 1 0.00 1 1 1 83256.26 1

Before training the model, we need to encode the categorical features in the dataset:

  • Use the pandas get_dummies function to one-hot encode the categorical features, dropping the first level of each to avoid redundant columns.
  • The categorical features in the dataset are Geography and Gender.
balanced_raw_no_age_df = pd.get_dummies(
    balanced_raw_no_age_df, columns=["Geography", "Gender"], drop_first=True
)
balanced_raw_no_age_df.head()
CreditScore Tenure Balance NumOfProducts HasCrCard IsActiveMember EstimatedSalary Exited Geography_Germany Geography_Spain Gender_Male
4419 709 0 0.00 2 1 0 46811.77 0 False False True
992 562 6 161628.66 1 1 0 91482.50 0 False True True
3403 640 3 77826.80 1 1 1 168544.85 0 False True False
7054 562 6 130565.02 1 1 0 9854.72 1 True False False
3345 612 1 0.00 1 1 1 83256.26 1 False True True

Splitting our dataset into training and testing is essential for proper validation testing, as this helps assess how well the model generalizes to unseen data:

  • We start by dividing our balanced_raw_no_age_df dataset into training and test subsets using train_test_split, with 80% of the data allocated to training (train_df) and 20% to testing (test_df).
  • From each subset, we separate the features (all columns except "Exited") into X_train and X_test, and the target column ("Exited") into y_train and y_test.
from sklearn.model_selection import train_test_split

train_df, test_df = train_test_split(balanced_raw_no_age_df, test_size=0.20)

X_train = train_df.drop("Exited", axis=1)
y_train = train_df["Exited"]
X_test = test_df.drop("Exited", axis=1)
y_test = test_df["Exited"]
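As a quick sanity check on the split logic above, here is a minimal sketch using a toy frame in place of balanced_raw_no_age_df to confirm the 80/20 row counts and feature/target shapes:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy stand-in: 100 rows with a balanced binary target
toy_df = pd.DataFrame({"CreditScore": range(100), "Exited": [0, 1] * 50})

# random_state is used here only to make the toy example reproducible
train_df, test_df = train_test_split(toy_df, test_size=0.20, random_state=42)

# 80/20 split of 100 rows
print(len(train_df), len(test_df))  # 80 20

X_train = train_df.drop("Exited", axis=1)
y_train = train_df["Exited"]
print(X_train.shape, y_train.shape)  # (80, 1) (80,)
```

Passing stratify=toy_df["Exited"] to train_test_split would additionally preserve the exact class ratio in each subset, which can matter when validating on imbalanced targets.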

Initialize the split datasets

Next, let's initialize the training and testing datasets so they are available for use:

vm_train_ds = vm.init_dataset(
    input_id="train_dataset_final",
    dataset=train_df,
    target_column="Exited",
)

vm_test_ds = vm.init_dataset(
    input_id="test_dataset_final",
    dataset=test_df,
    target_column="Exited",
)

In summary

In this second notebook, you learned how to:

  • Independently verify the data quality tests performed on the dataset used to train the champion model
  • Run individual and multiple validation tests and log the results to the ValidMind Platform
  • Tag test results with unique result identifiers
  • Link logged test results as evidence to your validation report
  • Split and initialize your preprocessed dataset in preparation for model evaluation testing

Next steps

Develop potential challenger models

Now that you're familiar with the basics of using the ValidMind Library, let's use it to develop a challenger model: 3 — Developing a potential challenger model


Copyright © 2023-2026 ValidMind Inc. All rights reserved.
Refer to LICENSE for details.
SPDX-License-Identifier: AGPL-3.0 AND ValidMind Commercial