Sentiment analysis of financial data using a large language model (LLM)
Document a large language model (LLM) specialized in sentiment analysis for financial news using the ValidMind Library.
This interactive notebook shows you how to set up the ValidMind Library, initialize the library, and use a specific prompt template for analyzing the sentiment of sentences in a dataset. The notebook also includes example data to test the model's ability to correctly identify sentiment as positive, negative, or neutral.
About ValidMind
ValidMind’s suite of tools enables organizations to identify, document, and manage model risks for all types of models, including AI/ML models, LLMs, and statistical models. As a model developer, you use the ValidMind Library to automate documentation and validation tests, and then use the ValidMind Platform to collaborate on documentation initiatives. Together, these products simplify model risk management, facilitate compliance with regulations and institutional standards, and enhance collaboration between yourself and model validators.
If this is your first time trying out ValidMind, we recommend going through the following resources first:
- Get started — The basics, including key concepts, and how our products work
- Get started with the ValidMind Library — The path for developers, more code samples, and our developer reference
Before you begin
Signing up is FREE — Register with ValidMind
This notebook requires an OpenAI API secret key to run. If you don’t have one, visit API keys on OpenAI’s site to create a new key for yourself. Note that API usage charges may apply.
If you encounter errors due to missing modules in your Python environment, install the modules with pip install, and then re-run the notebook. For more help, refer to Installing Python Modules.
Install the ValidMind Library
To install the library, run:
%pip install -q validmind
Initialize the ValidMind Library
ValidMind generates a unique code snippet for each registered model to connect with your developer environment. You initialize the ValidMind Library with this code snippet, which ensures that your documentation and tests are uploaded to the correct model when you run the notebook.
Get your code snippet
In a browser, log in to ValidMind.
In the left sidebar, navigate to Model Inventory and click + Register Model.
Enter the model details and click Continue. (Need more help?)
For example, to register a model for use with this notebook, select:
- Documentation template:
LLM-based Text Classification
- Use case:
Marketing/Sales - Analytics
You can fill in other options according to your preference.
Go to Getting Started and click Copy snippet to clipboard.
Next, load your model identifier credentials from an .env file or replace the placeholder with your own code snippet:
# Load your model identifier credentials from an `.env` file
%load_ext dotenv
%dotenv .env
# Or replace with your code snippet
import validmind as vm
vm.init(
    # api_host="...",
    # api_key="...",
    # api_secret="...",
    # model="...",
)
Preview the documentation template
A template predefines sections for your model documentation and provides a general outline to follow, making the documentation process much easier.
You will upload documentation and test results into this template later on. For now, take a look at the structure that the template provides with the vm.preview_template() function from the ValidMind Library and note the empty sections:
vm.preview_template()
Get ready to run the analysis
Import the ValidMind FoundationModel and Prompt classes needed for the sentiment analysis later on:
from validmind.models import FoundationModel, Prompt
Check your access to the OpenAI API:
import os
import dotenv
dotenv.load_dotenv()
if os.getenv("OPENAI_API_KEY") is None:
    raise Exception("OPENAI_API_KEY not found")
from openai import OpenAI

model = OpenAI()


def call_model(prompt):
    return (
        model.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "user", "content": prompt},
            ],
        )
        .choices[0]
        .message.content
    )
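Since calling the OpenAI API requires a key and incurs usage charges, here is a hypothetical offline stand-in that mimics the call_model interface (a prompt string in, a sentiment string out) with a simple rule-based stub; the function name and keyword rules are illustrative only, not part of ValidMind or OpenAI:

```python
# Hypothetical offline stand-in for call_model: same interface
# (prompt string in, sentiment string out), no API key needed.
def call_model_stub(prompt: str) -> str:
    text = prompt.lower()
    if "rose" in text or "record" in text:
        return "positive"
    if "plunged" in text or "loss" in text:
        return "negative"
    return "neutral"

print(call_model_stub("Shares plunged after the profit warning."))  # negative
```

Any callable with this signature can be passed as predict_fn, which makes it easy to dry-run the rest of the notebook without network access.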
Set the prompt guidelines for the sentiment analysis:
prompt_template = """
You are an AI with expertise in sentiment analysis, particularly in the context of financial news.
Your task is to analyze the sentiment of a specific sentence provided below.
Before proceeding, take a moment to understand the context and nuances of the financial terminology used in the sentence.

Sentence to Analyze:
```
{Sentence}
```

Please respond with the sentiment of the sentence denoted by one of either 'positive', 'negative', or 'neutral'.
Please respond only with the sentiment enum value. Do not include any other text in your response.

Note: Ensure that your analysis is based on the content of the sentence and not on external information or assumptions.
""".strip()

prompt_variables = ["Sentence"]
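As a quick sketch of how the {Sentence} variable is substituted before each model call, you can fill the template with Python's str.format. The template below is abbreviated for brevity, and the sample sentence is made up:

```python
# Abbreviated template; the full prompt_template is defined above.
prompt_template = (
    "Analyze the sentiment of the sentence below.\n"
    "Sentence to Analyze:\n"
    "{Sentence}\n"
    "Respond with 'positive', 'negative', or 'neutral'."
)

# Substitute the single prompt variable, roughly as the Prompt class
# does for each row of the dataset.
filled_prompt = prompt_template.format(Sentence="Company profits rose 20% last quarter.")
print(filled_prompt)
```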
Get your sample dataset ready for analysis
To perform the sentiment analysis for financial news, we're going to load a local copy of this dataset: https://www.kaggle.com/datasets/ankurzing/sentiment-analysis-for-financial-news.
This dataset contains two columns, Sentiment and Sentence. The sentiment can be negative, neutral, or positive.
import pandas as pd
df = pd.read_csv("./datasets/sentiments_with_predictions.csv")
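If you don't have the local CSV handy, a minimal stand-in with the same two columns can be built directly; the sentences below are made-up examples for illustration, not rows from the Kaggle dataset:

```python
import pandas as pd

# Hypothetical stand-in with the same schema as the Kaggle dataset.
df = pd.DataFrame(
    {
        "Sentence": [
            "The company reported record quarterly earnings.",
            "Shares plunged after the profit warning.",
            "The firm will hold its annual meeting in May.",
        ],
        "Sentiment": ["positive", "negative", "neutral"],
    }
)
print(df.head())
```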
Run the model documentation tests
First, use the ValidMind Library to initialize the dataset and model objects necessary for documentation. The ValidMind predict_fn function allows the model to be tested and evaluated in a standardized manner:
vm_test_ds = vm.init_dataset(
    dataset=df,
    input_id="test_dataset",
    text_column="Sentence",
    target_column="Sentiment",
)
vm_model = vm.init_model(
    model=FoundationModel(
        predict_fn=call_model,
        prompt=Prompt(
            template=prompt_template,
            variables=prompt_variables,
        ),
    ),
    input_id="gpt_35_model",
)

# Assign model predictions to the test dataset
vm_test_ds.assign_predictions(vm_model, prediction_column="gpt_35_prediction")
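Before running the full test suite, an optional sanity check is to compare the prediction column against the labeled Sentiment column with plain pandas. The values below are toy examples; in the notebook the predictions come from the dataset after assign_predictions:

```python
import pandas as pd

# Toy values for illustration; column names match those used above.
df = pd.DataFrame(
    {
        "Sentiment": ["positive", "negative", "neutral", "positive"],
        "gpt_35_prediction": ["positive", "negative", "positive", "positive"],
    }
)

# Fraction of rows where the model's label matches the ground truth
accuracy = (df["Sentiment"] == df["gpt_35_prediction"]).mean()
print(f"Agreement with labels: {accuracy:.0%}")  # 75%
```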
Next, use the ValidMind Library to run validation tests on the model. The vm.run_documentation_tests function analyzes the current model's documentation template and collects all the tests associated with it into a test suite. The function then runs the test suite, logs the results to the ValidMind API, and displays them to you.
test_suite = vm.run_documentation_tests(
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    }
)
Next steps
You can look at the results of this test suite right in the notebook where you ran the code, as you would expect. But there is a better way: view the prompt validation test results as part of your model documentation in the ValidMind Platform:
In the ValidMind Platform, go to the Documentation page for the model you registered earlier. (Need more help?)
Expand 2. Data Preparation or 3. Model Development to review all test results.
What you can see now is a more easily consumable version of the prompt validation testing you just performed, along with other parts of your model documentation that still need to be completed.
If you want to learn more about where you are in the model documentation process, take a look at Get started with the ValidMind Library.
Upgrade ValidMind
Retrieve the information for the currently installed version of ValidMind:
%pip show validmind
If the version returned is lower than the version indicated in our production open-source code, restart your notebook and run:
%pip install --upgrade validmind
You may need to restart your kernel after upgrading for the changes to take effect.