Summarization of financial data using Hugging Face NLP models
Document a natural language processing (NLP) model using ValidMind to summarize financial news, based on a dataset of just over 300,000 unique news articles written by journalists at CNN and the Daily Mail.
This interactive notebook shows you how to install and initialize the ValidMind Library and load the dataset, followed by running the model validation tests provided by the library to quickly generate documentation about the data and model.
About ValidMind
ValidMind’s suite of tools enables organizations to identify, document, and manage model risks for all types of models, including AI/ML models, LLMs, and statistical models. As a model developer, you use the ValidMind Library to automate documentation and validation tests, and then use the ValidMind Platform to collaborate on model documentation. These products simplify model risk management, facilitate compliance with regulations and institutional standards, and enhance collaboration between you and model validators.
If this is your first time trying out ValidMind, we recommend going through the following resources first:
- Get started — The basics, including key concepts, and how our products work
- Get started with the ValidMind Library — The path for developers, more code samples, and our developer reference
Before you begin
Signing up is FREE — Register with ValidMind
If you encounter errors due to missing modules in your Python environment, install the modules with pip install, and then re-run the notebook. For more help, refer to Installing Python Modules.
Install the ValidMind Library
To install the library:

%pip install -q validmind
Initialize the ValidMind Library
ValidMind generates a unique code snippet for each registered model to connect with your developer environment. You initialize the ValidMind Library with this code snippet, which ensures that your documentation and tests are uploaded to the correct model when you run the notebook.
Get your code snippet
In a browser, log in to ValidMind.
In the left sidebar, navigate to Model Inventory and click + Register Model.
Enter the model details and click Continue. (Need more help?)
For example, to register a model for use with this notebook, select:
- Documentation template: NLP-based Text Classification
- Use case: Marketing/Sales - Analytics

You can fill in other options according to your preference.

Go to Getting Started and click Copy snippet to clipboard.
Next, load your model identifier credentials from an .env file or replace the placeholder with your own code snippet:
# Load your model identifier credentials from an `.env` file
%load_ext dotenv
%dotenv .env
# Or replace with your code snippet
import validmind as vm
vm.init(
    # api_host="...",
    # api_key="...",
    # api_secret="...",
    # model="...",
)
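If you go the .env route, the file holds the same values the snippet otherwise passes explicitly. A hypothetical example (the VM_API_* variable names are an assumption; confirm them against your copied snippet or the ValidMind docs):

# .env (hypothetical variable names; use the values from your copied snippet)
VM_API_HOST=...
VM_API_KEY=...
VM_API_SECRET=...
VM_API_MODEL=...

With those values loaded into the environment, vm.init() can pick them up without explicit arguments, which is why every parameter in the snippet above is commented out.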
Preview the documentation template
A template predefines sections for your model documentation and provides a general outline to follow, making the documentation process much easier.
You will upload documentation and test results into this template later on. For now, take a look at the structure that the template provides with the vm.preview_template() function from the ValidMind Library and note the empty sections:
vm.preview_template()
Helper functions
Let’s define the following functions to help visualize datasets with long text fields:
import textwrap

from IPython.display import display, HTML
from tabulate import tabulate


def _format_cell_text(text, width=50):
    """Private function to format a cell's text."""
    return "\n".join([textwrap.fill(line, width=width) for line in text.split("\n")])


def _format_dataframe_for_tabulate(df):
    """Private function to format the entire DataFrame for tabulation."""
    df_out = df.copy()

    # Format all string columns
    for column in df_out.columns:
        if df_out[column].dtype == object:  # Check if column is of type object (likely strings)
            df_out[column] = df_out[column].apply(_format_cell_text)
    return df_out


def _dataframe_to_html_table(df):
    """Private function to convert a DataFrame to an HTML table."""
    headers = df.columns.tolist()
    table_data = df.values.tolist()
    return tabulate(table_data, headers=headers, tablefmt="html")


def display_formatted_dataframe(df, num_rows=None):
    """Primary function to format and display a DataFrame."""
    if num_rows is not None:
        df = df.head(num_rows)
    formatted_df = _format_dataframe_for_tabulate(df)
    html_table = _dataframe_to_html_table(formatted_df)
    display(HTML(html_table))
Load the dataset
The CNN Dailymail Dataset is an English-language dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail (https://huggingface.co/datasets/cnn_dailymail). The current version supports both extractive and abstractive summarization, though the original version was created for machine reading and comprehension and abstractive question answering.
import pandas as pd
df = pd.read_csv("./datasets/cnn_dailymail_100_with_predictions.csv")
display_formatted_dataframe(df, num_rows=5)
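If you don’t have the local CSV handy, a comparable sample of articles and reference summaries can be pulled directly from the Hugging Face Hub. A minimal sketch, assuming the datasets package is installed (the t5_prediction column used later in this notebook would still need to be generated separately):

# Optional: load a 100-article sample straight from the Hugging Face Hub
from datasets import load_dataset

cnn = load_dataset("cnn_dailymail", "3.0.0", split="test[:100]")
df_hub = cnn.to_pandas()[["article", "highlights"]]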
vm_raw_ds = vm.init_dataset(
    dataset=df,
    input_id="raw_dataset",
    text_column="article",
    target_column="highlights",
)
NLP data quality tests
Before we proceed with the analysis, it’s crucial to ensure the quality of our NLP data. We can run the data_preparation section of the template to validate the data’s integrity and suitability:
text_data_test_plan = vm.run_documentation_tests(
    section="data_preparation", inputs={"dataset": vm_raw_ds}
)
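If you want to inspect a single check before running a whole section, the library also exposes tests individually. A minimal sketch, assuming the test ID below is available in your installed version (use vm.tests.list_tests() to confirm what exists):

# Discover NLP-related data validation tests, then run one against the raw dataset
vm.tests.list_tests(filter="nlp")

vm.tests.run_test(
    "validmind.data_validation.nlp.StopWords",
    inputs={"dataset": vm_raw_ds},
)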
Next, set up the summarization pipeline, using the pre-trained t5-small model and tokenizer from Hugging Face:

from transformers import pipeline, T5Tokenizer, T5ForConditionalGeneration

# Note: You can pass cache_dir to from_pretrained() to use predownloaded models.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

summarizer_model = pipeline(
    task="summarization",
    model=model,
    tokenizer=tokenizer,
    min_length=0,
    max_length=60,
    truncation=True,
)
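Before wiring the pipeline into ValidMind, a quick smoke test on a single article confirms the model produces sensible output (a short sketch using the article column loaded earlier):

# Sanity check: summarize the first article in the sample
sample_text = df["article"].iloc[0]
print(summarizer_model(sample_text)[0]["summary_text"])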
vm_test_ds = vm.init_dataset(
    dataset=df,
    input_id="test_dataset",
    text_column="article",
    target_column="highlights",
)
vm_model = vm.init_model(
    summarizer_model,
)
# Assign model predictions to the test dataset
vm_test_ds.assign_predictions(vm_model, prediction_column="t5_prediction")
Run model validation tests
It’s possible to run a subset of tests on the documentation template by passing a section parameter to run_documentation_tests(). Let’s run only the tests that evaluate the model’s overall performance, including summarization metrics, by selecting the model_development section of the template:
summarization_results = vm.run_documentation_tests(
    section="model_development",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)
Next steps
You can look at the results of this test suite right in the notebook where you ran the code, as you would expect. But there is a better way: view the test results as part of your model documentation in the ValidMind Platform:
In the ValidMind Platform, go to the Documentation page for the model you registered earlier. (Need more help?)
Expand 2. Data Preparation or 3. Model Development to review all test results.
What you can see now is a more easily consumable version of the testing you just performed, along with other parts of your model documentation that still need to be completed.
If you want to learn more about where you are in the model documentation process, take a look at Get started with the ValidMind Library.
Upgrade ValidMind
Retrieve the information for the currently installed version of ValidMind:
%pip show validmind
If the version returned is lower than the version indicated in our production open-source code, restart your notebook and run:
%pip install --upgrade validmind
You may need to restart your kernel after running the upgrade for the changes to be applied.