AWS is leaving no stone unturned to get generative AI tools embedded into every aspect of application development. At its annual re:Invent conference, AWS CEO Matt Garman showcased a large number of features and tools that the company has built for developers.
The first major announcement in Garman's keynote was that AWS is combining its analytics and AI services into a new version of SageMaker, its AI and machine learning service.
SageMaker Unified Studio
AWS announced a new service named SageMaker Unified Studio, currently in preview, that combines SQL analytics, data processing, AI development, data streaming, business intelligence, and search analytics.
“It consolidates the functionality that data analysts and data scientists use across a wide range of standalone studios in AWS today, standalone query editors, and a variety of visual tools,” Garman explained.
Other updates in SageMaker include the launch of SageMaker Lakehouse, an Apache Iceberg-compatible lakehouse. The offering has been made generally available.
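Iceberg compatibility is the notable design choice here: because Lakehouse tables use the open Apache Iceberg format, any Iceberg-aware engine can query them. As a rough sketch (the catalog name, warehouse path, and table are placeholders rather than AWS defaults, and the Iceberg runtime jars are assumed to be on the classpath), a Spark job might read such a table like this:

    # Illustrative only: reading an Iceberg table from Spark. Catalog name,
    # warehouse location, and table name below are placeholder assumptions.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("lakehouse-read")
        # Register an Iceberg catalog backed by AWS Glue (placeholder settings).
        .config("spark.sql.catalog.lakehouse", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.lakehouse.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
        .config("spark.sql.catalog.lakehouse.warehouse", "s3://example-bucket/warehouse/")
        .getOrCreate()
    )

    # Standard SQL against an Iceberg table; engine-agnostic by design.
    spark.sql("SELECT * FROM lakehouse.sales.orders LIMIT 10").show()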
Amazon Q Developer updates target code translation
In 2023, AWS' then-CEO, Adam Selipsky, premiered Amazon Q, the company's answer to Microsoft's GPT-driven Copilot generative AI assistant. This year, Garman showcased updated Amazon Q capabilities that automate coding tasks for developers.
The new expanded capabilities for Q Developer include automating code reviews, generating unit tests, and producing documentation, all of which, according to Garman, will ease developers' workloads and help them finish their development tasks faster.
AWS also unveiled several code translation capabilities for Q in preview, including the ability to modernize .NET apps from Windows to Linux, modernize mainframe code, and help migrate VMware workloads.
Garman pointed out that Q Developer can be used to investigate and fix operational issues. This capability, which is currently in preview, will guide an enterprise user through operational diagnostics and automate root cause analysis for problems in workloads.
Amazon Bedrock updates for model distillation, agent implementation
Another building block that Garman focused on during his keynote was AWS’ proprietary platform for building generative AI models and applications — Amazon Bedrock.
The first update to Bedrock came in the shape of Amazon Bedrock Model Distillation, a managed service, currently in preview, designed to help enterprises bring down the cost of running LLMs.
Model distillation is the process of transferring specialized knowledge from a larger LLM into a smaller one for a specific use case. Enterprises often choose to distill larger models because the smaller models are cheaper and faster to run.
Bedrock Model Distillation is being offered as a managed service because, according to Garman, distilling a larger model can be cumbersome: machine learning (ML) experts need to manage training data and workflows, tune model parameters, and worry about model weights.
The service works by generating responses from a teacher model and fine-tuning a student model on them, the company said, adding that the service can improve response generation from the teacher model by adding proprietary data synthesis.
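To make the mechanics concrete, the following is a minimal conceptual sketch of teacher-student distillation in PyTorch, not the Bedrock managed service itself; the models and data are toy stand-ins, and the temperature-scaled KL loss is the textbook formulation:

    # Conceptual sketch of model distillation: a small "student" model is
    # trained to match a larger "teacher" model's output distribution.
    # The linear layers and random data are toy placeholders for real LLMs.
    import torch
    import torch.nn.functional as F

    teacher = torch.nn.Linear(16, 4)   # stand-in for a large LLM
    student = torch.nn.Linear(16, 4)   # stand-in for a smaller, cheaper LLM
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    temperature = 2.0                  # softens the teacher's distribution

    for _ in range(100):
        x = torch.randn(32, 16)        # toy batch of inputs
        with torch.no_grad():
            teacher_logits = teacher(x)
        student_logits = student(x)
        # KL divergence between softened teacher and student distributions.
        loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

The student ends up cheaper to serve while approximating the teacher's behavior on the target distribution, which is the cost argument Garman made for the managed offering.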
In addition, the CEO unveiled Automated Reasoning Checks, currently in preview. Added to Amazon Bedrock Guardrails, the capability uses mathematical, logic-based algorithmic verification and reasoning to check the information generated by a model and help prevent factual errors caused by hallucinations.
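As a hedged illustration of where such a check sits in a workflow, the sketch below applies an existing guardrail to model output using the Bedrock ApplyGuardrail API via boto3; the guardrail ID and version are placeholders, and the Automated Reasoning policy is assumed to already be configured on that guardrail out of band:

    # Hedged sketch: run a model response through a Bedrock guardrail.
    # Guardrail identifier and version below are placeholders.
    import boto3

    runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = runtime.apply_guardrail(
        guardrailIdentifier="gr-example123",   # placeholder guardrail ID
        guardrailVersion="1",                  # placeholder version
        source="OUTPUT",                       # check model output, not user input
        content=[{"text": {"text": "The reimbursement limit is $500 per claim."}}],
    )

    # 'action' indicates whether the guardrail intervened on the content.
    print(response["action"])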
Following in the footsteps of its rivals, AWS has also added multi-agent collaboration support to Amazon Bedrock Agents, in preview.
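In this setup, delegation between a supervisor agent and its collaborator agents happens server-side, so client code only talks to the supervisor. A hedged sketch using the existing InvokeAgent API, with placeholder agent and alias IDs:

    # Hedged sketch: invoke a Bedrock agent assumed to be configured as a
    # supervisor with collaborators; the IDs below are placeholders.
    import uuid
    import boto3

    agents_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

    stream = agents_runtime.invoke_agent(
        agentId="AGENT123456",        # placeholder supervisor agent ID
        agentAliasId="ALIAS1234",     # placeholder alias ID
        sessionId=str(uuid.uuid4()),  # one session per conversation
        inputText="Summarize last quarter's incidents and draft a status update.",
    )

    # The response is a stream of events; generated text arrives in 'chunk' events.
    for event in stream["completion"]:
        if "chunk" in event:
            print(event["chunk"]["bytes"].decode("utf-8"), end="")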
Other Bedrock updates include features designed to help enterprises streamline the testing of applications before deployment.
New foundation large language models released
There has been speculation for some time, mostly since June this year, that AWS has been working on releasing a frontier model, dubbed Olympus, to take on models from the likes of OpenAI, xAI, and Google.
Garman on Tuesday revealed a new family of large language models, under the name Nova, that he claimed are on par with or better than rival models, especially in terms of cost.
The Nova family of models includes Micro, a text-to-text generation model, along with Lite, Pro, and Premier. All of the models are generally available except Premier, which is expected to become generally available by March.
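For developers, the Nova models are reachable the same way as other Bedrock models, for example through the Converse API. In the hedged sketch below, the Micro model ID follows Bedrock's naming pattern but is an assumption here and should be verified against the model catalog in your region:

    # Hedged sketch: call a Nova model via the Bedrock Converse API.
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    reply = bedrock.converse(
        modelId="amazon.nova-micro-v1:0",  # assumed ID for the text-only Micro model
        messages=[{"role": "user", "content": [{"text": "Explain model distillation in one sentence."}]}],
        inferenceConfig={"maxTokens": 200, "temperature": 0.2},
    )

    print(reply["output"]["message"]["content"][0]["text"])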
The company said it also has plans to release two new models in the coming year under the names Nova Speech to Speech and Nova Any to Any.
While AWS announced a slew of software updates for developers, the company also showcased its new chip — Trainium2 — to boost support for generative AI workloads.
AWS Trainium2-powered EC2 instances are now generally available. Trainium2, an accelerator chip for AI and generative AI workloads, was first showcased last year. According to the company, the EC2 instances powered by Trainium2 are four times faster, have four times the memory bandwidth, and offer three times the memory capacity of the previous generation powered by Trainium1.
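For teams that want to try the hardware, a Trainium2 instance is requested like any other EC2 instance type. A hedged sketch with boto3, where the AMI ID and key pair are placeholders to adapt (a Neuron-enabled deep learning AMI would be the realistic choice):

    # Hedged sketch: request a Trainium2-powered EC2 instance.
    # AMI ID and key pair below are placeholders, not real values.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    result = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder: use a Neuron DLAMI
        InstanceType="trn2.48xlarge",     # announced Trainium2 instance type
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",            # placeholder key pair
    )

    print(result["Instances"][0]["InstanceId"])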