February 14, 2024
We’ve improved the ValidMind user experience, from more supportive documentation templates, easier specification of inputs, and better filtering within the model inventory, to the ability to view which user ran actions within the platform.
Release highlights
ValidMind Library (v1.26.6)
Support for tracking each test result with a unique identifier
Documentation templates have been updated to support logging each test run as a unique result, making it possible to run the same test across different datasets or models.
To make use of this new feature, you simply add a unique `result_id` identifier as a suffix to a `content_id` identifier in the content block definition of a `metric` or `test` content type.

For example, the following content blocks with the suffixes `training_data` and `test_data` enable you to log two individual results for the same test `validmind.data_validation.Skewness`:
```yaml
- content_type: test
  content_id: validmind.data_validation.Skewness:training_data
- content_type: metric
  content_id: validmind.data_validation.Skewness:test_data
```
You can configure each of these unique `content_id` identifiers by passing the appropriate `config` and `inputs` in `run_documentation_tests()` or `run_test()`. For example, to configure two separate tests for `Skewness` using different datasets and parameters:
```python
test = vm.tests.run_test(
    test_id="validmind.data_validation.Skewness:training_data",
    params={
        "max_threshold": 1
    },
    dataset=vm_train_ds,
)
test.log()

test = vm.tests.run_test(
    test_id="validmind.data_validation.Skewness:test_data",
    params={
        "max_threshold": 1.5
    },
    dataset=vm_test_ds,
)
test.log()
```
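The `:result_id` suffix is simply appended to the base test identifier. As an illustrative sketch of how such an identifier decomposes (this is not ValidMind's actual parser, just a demonstration of the naming convention):

```python
def split_test_id(content_id: str):
    """Split an ID like 'validmind.data_validation.Skewness:training_data'
    into the base test identifier and the optional result_id suffix.
    Illustrative only; not the library's internal implementation."""
    base, _, result_id = content_id.partition(":")
    return base, (result_id or None)

print(split_test_id("validmind.data_validation.Skewness:training_data"))
# → ('validmind.data_validation.Skewness', 'training_data')
```

Because the base identifier is the same, both results are logged against the same test, distinguished only by their `result_id` suffixes.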
Easier specification of inputs for individual tests
The `run_documentation_tests()` function has been updated to allow passing both test `inputs` and `params` via the `config` parameter.

Previously, `config` could already pass `params` to each test that you declare. In this example, the test `SomeTest` receives a custom value for the param `min_threshold`:
```python
full_suite = vm.run_documentation_tests(
    inputs={
        ...
    },
    config={
        "validmind.data_validation.SomeTest": {
            "min_threshold": 1
        }
    }
)
```
With the updated function, `config` can now pass both `params` and `inputs` to each declared test. For example, to specify what model should be passed to each individual test instance:
```python
full_suite = vm.run_documentation_tests(
    inputs={
        "dataset": vm_dataset,
        "model": xgb_model
    },
    config={
        "validmind.model_validation.Accuracy:xgb_model": {
            "params": {"threshold": 0.5},
            "inputs": {"model": xgb_model}
        },
        "validmind.model_validation.Accuracy:lr_model": {
            "params": {"threshold": 0.3},
            "inputs": {"model": lr_model}
        },
    }
)
```
Here, the top-level `inputs` parameter acts as a global `inputs` parameter, and the individual tests can customize what they see as the input model via their own `config` parameters.
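Conceptually, each test's effective inputs are the global `inputs` overlaid with that test's own `inputs` entry from `config`. A minimal sketch of that merge, using string placeholders for the dataset and model objects (an illustration of the behavior, not the library's implementation):

```python
def resolve_inputs(global_inputs: dict, config: dict, test_id: str) -> dict:
    """Overlay a test's own 'inputs' overrides from config onto the
    global inputs. Illustrative only; not ValidMind's internal logic."""
    overrides = config.get(test_id, {}).get("inputs", {})
    return {**global_inputs, **overrides}

global_inputs = {"dataset": "vm_dataset", "model": "xgb_model"}
config = {
    "validmind.model_validation.Accuracy:lr_model": {
        "params": {"threshold": 0.3},
        "inputs": {"model": "lr_model"},
    }
}
print(resolve_inputs(global_inputs, config,
                     "validmind.model_validation.Accuracy:lr_model"))
# → {'dataset': 'vm_dataset', 'model': 'lr_model'}
```

The `dataset` comes from the global `inputs`, while the per-test `inputs` entry replaces the `model`.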
ValidMind Library documentation inputs tracking
- We have added a new feature that tracks which datasets and models are used when running tests. Now, when you initialize datasets or models with `vm.init_dataset()` and `vm.init_model()`, we link those inputs with the test results they generate.
- This makes it clear which inputs were used for each result, improving transparency and making it easier to understand test outcomes. This update does not require any changes to your code and works with existing `init` methods.
ValidMind Platform (v1.13.13)
Updated events to show users
We now display the name of the user who ran the action instead of a generic “ValidMind Library” name whenever you generate documentation.
Simplified instructions for developers
We simplified the instructions for getting started with the ValidMind Library in the ValidMind Platform.
These instructions tell you how to use the code snippet for your model documentation with your own model or with one of our code samples.
Enhancements
Ability to edit model fields
- You can now edit the values for default fields displayed on the model details page.
- Previously it was only possible to edit inventory fields defined by your organization.
Performance improvements for the ValidMind Platform
We made improvements to page load times on our platform for a smoother user experience.
Filter the model inventory
You can now narrow down models in your Inventory with our advanced filter, search, and sort options.
Custom model inventory fields
- The model inventory has been updated to allow organizations to add additional fields.
- This enhancement enables administrators to customize the model inventory data schema according to your specific organizational needs.
User mentions in comments
We implemented a toggle feature in the Model Activity and Recent Activity sections under Comments to filter and display only specific user mentions.
Expanded rich-text editor support
- Forms in the Model Findings and Validation Report sections now support the rich-text editor interface found in the rest of our content blocks.
- This support enables you to use the editor for your finding descriptions and remediation plans, for example.
Bug fixes
Invalid content blocks for `run_documentation_tests()`
- We’ve fixed an issue where previously using an invalid test identifier would prevent `run_documentation_tests()` from running all available tests.
- The full test suite now runs as expected, even when an invalid test identifier causes an error for an individual test.
Show all collapsed sections in documentation
- We’ve fixed an issue where previously the table of contents was not displaying every subsection that belongs to the parent section.
- The table of contents now accurately reflects the complete structure of the documentation, including all subsections.
Template swap diffs
- We’ve fixed an issue where previously the diff for validation reports was showing incorrectly when swapping templates.
- The correct diff between the current and the new template is now displayed.
Activity item links to the corresponding content block
- We’ve fixed an issue where previously clicking on an activity item would not redirect you to the corresponding content block.
- Clicking on a recent item now takes you to the correct content block as expected.
Documentation updates
New user management documentation
- Our user guide now includes end-to-end instructions for managing users on the ValidMind Platform.
- This new content covers common tasks such as inviting new users, adding them to user groups, and managing roles and permissions.
Updated sample notebooks with current `input_id` usage
We updated our sample notebooks to show the current, recommended usage for `input_id` when calling `vm.init_dataset()` or `vm.init_model()`.
How to upgrade
ValidMind Platform
To access the latest version of the ValidMind Platform, hard refresh your browser tab:
- Windows: `Ctrl` + `Shift` + `R` or `Ctrl` + `F5`
- MacOS: `⌘ Cmd` + `Shift` + `R`, or hold down `⌘ Cmd` and click the `Reload` button
ValidMind Library
To upgrade the ValidMind Library, run the following within a Jupyter Notebook code cell or your terminal:

```
%pip install --upgrade validmind
```

You may need to restart your kernel after upgrading the package for the changes to be applied.