Last month, we announced the availability of two high-performing Mistral AI models, Mistral 7B and Mixtral 8x7B, on Amazon Bedrock. Mistral 7B, Mistral AI's first foundation model, supports English text generation tasks with natural coding capabilities. Mixtral 8x7B is a popular, high-quality, sparse Mixture-of-Experts (MoE) model that is ideal for text summarization, question answering, text classification, text completion, and code generation.
Today, we’re announcing the availability of Mistral Large on Amazon Bedrock. Mistral Large is ideal for complex tasks that require substantial reasoning capabilities, or ones that are highly specialized, such as synthetic text generation or code generation.
What you need to know about Mistral Large:
- It’s natively fluent in English, French, Spanish, German, and Italian, with a nuanced understanding of grammar and cultural context.
- Its 32K token context window allows precise information recall from large documents.
- Its precise instruction-following enables you to design your own moderation policies – the folks at Mistral AI used it to set up the system-level moderation of their beta assistant demonstrator le Chat.

Your first interaction with Large Language Models (LLMs) revolves around prompts. The art of crafting effective prompts is essential for generating desirable responses from LLMs, and Mistral AI has a guide with example prompts showing different prompting capabilities.
Getting started with Mistral Large
To get started with Mistral Large on Bedrock, you first need to get access to the model. On the Amazon Bedrock console, select Model access, and then select Manage model access. Next, select Mistral Large, and then select Save changes.
Once you have access to Mistral Large, you can use the model on Bedrock. Refresh the base models table to view the updated status.
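If you prefer to verify access programmatically, here is a minimal sketch using the AWS SDK for Python (Boto3) that lists the Mistral AI foundation models visible to your account. Note that it uses the Amazon Bedrock control-plane client, not the runtime client, and assumes "mistral" as the provider filter value:

import boto3

# Control-plane client for Amazon Bedrock (model management);
# inference calls use the separate "bedrock-runtime" client.
bedrock = boto3.client(service_name="bedrock")

# List foundation models filtered by provider; Mistral Large shows up
# once model access has been granted in the console.
response = bedrock.list_foundation_models(byProvider="mistral")
for model in response["modelSummaries"]:
    print(model["modelId"], "-", model["modelName"])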
To see the complex reasoning capabilities of Mistral Large in action, I tried the following prompt in the Playgrounds section:
Prompt
Calculate the difference in payment dates between the two customers whose payment amounts are closest to each other in the given dataset:
'{
"transaction_id":{"0":"T1001","1":"T1002","2":"T1003","3":"T1004","4":"T1005"},
"customer_id":{"0":"C001","1":"C002","2":"C003","3":"C002","4":"C001"},
"payment_amount":{"0":125.5,"1":89.99,"2":120.0,"3":54.3,"4":210.2},
"payment_date":{"0":"2021-10-05","1":"2021-10-06","2":"2021-10-07","3":"2021-10-05","4":"2021-10-08"},
"payment_status":{"0":"Paid","1":"Unpaid","2":"Paid","3":"Paid","4":"Pending"}
}'
Output
To find the difference in payment dates between the two customers whose payment amounts are closest to each other, we first need to identify the two closest payment amounts and then find their respective payment dates.
The payment amounts are: 125.5, 89.99, 120.0, 54.3, 210.2
The two closest payment amounts are 125.5 and 120.0. The payment date for the payment amount 125.5 is '2021-10-05' and for the payment amount 120.0 is '2021-10-07'.
The difference in payment dates can be calculated as follows:
Date difference = |'2021-10-07' - '2021-10-05'| = 2 days
So, the difference in payment dates between the two customers whose payment amounts are closest to each other is 2 days.
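You can verify the model's reasoning with a few lines of standalone Python (independent of Amazon Bedrock) that reproduce the same calculation over the dataset from the prompt:

from datetime import date
from itertools import combinations

# The dataset from the prompt: (customer_id, payment_amount, payment_date)
payments = [
    ("C001", 125.5, date(2021, 10, 5)),
    ("C002", 89.99, date(2021, 10, 6)),
    ("C003", 120.0, date(2021, 10, 7)),
    ("C002", 54.3, date(2021, 10, 5)),
    ("C001", 210.2, date(2021, 10, 8)),
]

# Find the pair of payments with the smallest difference in amount,
# then compute the difference between their payment dates.
a, b = min(combinations(payments, 2), key=lambda p: abs(p[0][1] - p[1][1]))
print(abs((a[2] - b[2]).days))  # prints 2, matching the model's answer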
Earlier in this post I mentioned that Mistral Large is natively fluent in several languages, with a nuanced understanding of grammar and cultural context. I tested the model’s understanding of the French language by asking, in French, which is heavier: a pound of iron or a kilogram of feathers?
Prompt
Lequel est le plus lourd une livre de fer ou un kilogramme de plume
Output
Un kilogramme de plumes est plus lourd qu'une livre de fer.
Pour comprendre cela, il est important de se rappeler que :
1. Une livre (lb) est une unité de mesure de masse principalement utilisée aux États-Unis et dans d'autres systèmes de mesure impériaux, tandis qu'un kilogramme (kg) est l'unité de base de masse dans le système international d'unités (SI).
2. 1 kilogramme est approximativement égal à 2,2 livres.
Donc, un kilogramme de plumes est plus lourd qu'une livre de fer, car il correspond à environ 2,2 livres de plumes.

(In English: a kilogram of feathers is heavier than a pound of iron, because one kilogram is approximately 2.2 pounds.)
Programmatically interact with Mistral Large
You can also use the AWS Command Line Interface (AWS CLI) and AWS SDKs to make various calls using Amazon Bedrock APIs. The following is sample Python code that interacts with the Amazon Bedrock Runtime API using the AWS SDK. If you specify in the prompt that “You will only respond with a JSON object with the key X, Y, and Z.”, you can use the JSON-formatted output easily in downstream tasks:
import boto3
import json

# Amazon Bedrock Runtime client for model inference
bedrock = boto3.client(service_name="bedrock-runtime")

# Mistral models expect the [INST]...[/INST] instruction format
prompt = """<s>[INST]You are a summarization system that can provide summaries with associated confidence scores. In clear and concise language, provide three short summaries of the following essay, along with their confidence scores. You will only respond with a JSON object with the key Summary and Confidence. Do not provide explanations.[/INST]

# Essay:
{insert essay text here}"""

body = json.dumps({
    "prompt": prompt,
    "max_tokens": 512,
    "top_p": 0.8,
    "temperature": 0.5,
})

# Model ID for Mistral Large (24.02) on Amazon Bedrock
modelId = "mistral.mistral-large-2402-v1:0"
accept = "application/json"
contentType = "application/json"

response = bedrock.invoke_model(
    body=body,
    modelId=modelId,
    accept=accept,
    contentType=contentType
)

print(json.loads(response.get("body").read()))
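If you want to receive tokens as they are generated rather than waiting for the full completion, Amazon Bedrock also offers the InvokeModelWithResponseStream API. Here is a minimal sketch reusing the body and modelId from the code above; the chunk layout shown assumes the same "outputs" structure that Mistral models return for non-streaming calls:

# Stream the completion instead of waiting for the full response.
streaming_response = bedrock.invoke_model_with_response_stream(
    body=body,
    modelId=modelId,
    accept=accept,
    contentType=contentType
)

for event in streaming_response.get("body"):
    chunk = json.loads(event["chunk"]["bytes"])
    # Mistral models return generated text under the "outputs" key.
    print(chunk["outputs"][0]["text"], end="")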
You get JSON-formatted output like the following:
{
"Summaries": [
{
"Summary": "The author discusses their early experiences with programming and writing,
starting with writing short stories and programming on an IBM 1401 in 9th grade.
They then moved on to working with microcomputers, building their own from a Heathkit,
and eventually convincing their father to buy a TRS-80 in 1980. They wrote simple games,
a program to predict rocket flight trajectories, and a word processor.",
"Confidence": 0.9
},
{
"Summary": "The author began college as a philosophy major, but found it to be unfulfilling
and switched to AI. They were inspired by a novel and a PBS documentary, as well as the
potential for AI to create intelligent machines like those in the novel. Despite this
excitement, they eventually realized that the traditional approach to AI was flawed and
shifted their focus to Lisp.",
"Confidence": 0.85
},
{
"Summary": "The author briefly worked at Interleaf, where they found that their Lisp skills
were highly valued. They eventually left Interleaf to return to RISD, but continued to work
as a freelance Lisp hacker. While at RISD, they started painting still lives in their bedroom
at night, which led to them applying to art schools and eventually attending the Accademia
di Belli Arti in Florence.",
"Confidence": 0.9
}
]
}
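Because the model responds with only a JSON object, downstream processing is simple. For example, this short sketch (with the response above abbreviated into a string literal for illustration) picks the summary with the highest confidence score:

import json

# The model's JSON response, abbreviated here for illustration.
completion = """{
  "Summaries": [
    {"Summary": "The author's early programming experiences...", "Confidence": 0.9},
    {"Summary": "The author's switch from philosophy to AI...", "Confidence": 0.85},
    {"Summary": "The author's time at Interleaf and RISD...", "Confidence": 0.9}
  ]
}"""

# Parse the JSON and select the highest-confidence summary.
result = json.loads(completion)
best = max(result["Summaries"], key=lambda s: s["Confidence"])
print(f"({best['Confidence']}) {best['Summary']}")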
To learn more about the prompting capabilities of Mistral AI models, visit the Mistral AI documentation.
Now Available
Mistral Large, along with other Mistral AI models (Mistral 7B and Mixtral 8x7B), is available today on Amazon Bedrock in the US East (N. Virginia), US West (Oregon), and Europe (Paris) Regions; check the full Region list for future updates.
Share and learn with our generative AI community at community.aws. Give Mistral Large a try in the Amazon Bedrock console today and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.
Read about our collaboration with Mistral AI and what it means for our customers.
– Veliswa.