Though generative AI is relatively new compared with other artificial intelligence technologies, it is already being used to support tasks ranging from vetting job applicants to diagnosing and recommending treatments for illnesses. IDC predicts that by 2028, 80% of chief information officers will rely on generative AI tools to accelerate analysis, boost decision-making, improve customer service, and more.

Companies are racing to tap the technology’s potential to boost customer satisfaction and employee productivity. To do so, they are looking to use the large language models (LLMs) best suited for powering generative AI applications, such as AI copilots and chatbots.

Understanding the diversity of LLMs

Having a wide array of LLMs to choose from means businesses are more likely to find the right one for their specific needs, instead of resorting to a one-size-fits-all option. This can speed innovation, though selecting from the hundreds of available models can be complicated.

When selecting an LLM, enterprises should consider its intended application, speed, security, cost, language, and ease of use.

Model types include:

  • Commercial models: Popular in the health care and financial services industries, these models are commonly used for projects involving specialized customization or strict security requirements.
  • Open-source models: Due to their accessibility and financial appeal, these models are often used in research, and by startups and small organizations.
  • General-purpose models: These models are trained on vast amounts of data and can be used as foundation models for building tailored AI applications.
  • Domain-specific models: These models are trained to suit a particular industry or use case, such as health care or financial services.
  • Task-specific models: These custom-built models are optimized for a single natural language processing (NLP) function, such as summarization, question answering, or translation.
  • Vision-language models: Dubbed VLMs, these models combine computer vision and NLP to generate images from text descriptions and recognize objects in images. Trained on paired image and text data, they can connect what they see with what they read, describing visual content in language and grounding language in visual content.
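The categories above can be read as a rough decision procedure. The following is a toy illustration only, not a real selection framework; the category names and the rules mapping requirements to categories simply mirror the bullets above.

```python
# Toy illustration: map requirement tags to the model categories described above.
# The tags and rules are illustrative assumptions, not an actual selection tool.

def suggest_model_type(requirements: set) -> str:
    """Return a candidate model category for a set of requirement tags."""
    if "images" in requirements:
        return "vision-language"      # combines computer vision and NLP
    if "single-nlp-task" in requirements:
        return "task-specific"        # e.g., summarization or translation
    if "industry-tuned" in requirements:
        return "domain-specific"      # e.g., health care or financial services
    if "security-restrictions" in requirements:
        return "commercial"           # vendor-supported, customizable
    if "low-budget" in requirements or "research" in requirements:
        return "open-source"          # accessible and financially appealing
    return "general-purpose"          # broad foundation model by default

print(suggest_model_type({"industry-tuned"}))  # domain-specific
print(suggest_model_type(set()))               # general-purpose
```

In practice, of course, these criteria overlap (an open-source model can also be domain-specific), so real selection weighs them together rather than in strict priority order.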

It’s also important to consider a model’s size, as this will affect its capabilities and limitations. Some factors include:

  • Inference speed: Smaller models generally provide quicker inference times, enabling real-time processing while improving energy efficiency and reducing costs.
  • Accuracy: Larger models, especially when enhanced with retrieval-augmented generation (RAG), often yield higher accuracy.
  • Deployability: Smaller models are well suited to edge devices and mobile applications, while larger models are best run in a cloud or data center.
  • Cost: Larger models require more compute infrastructure to run.

Developers should also consider which languages the AI model must support, based on who will use it and where it will be applied. This is particularly important in modern workplaces, where employees may speak many different languages. Ensuring the model can seamlessly translate between languages is vital for effective communication and collaboration across its users.

Additionally, with the growing importance of sovereign AI, many countries are building proprietary models trained on local languages and data sets. This allows nations to maintain control and autonomy over AI, ensuring the development and application of these technologies align with their unique cultural, ethical, and legal standards.

How companies are using LLMs

LLMs are powering AI applications, including chatbots and predictive analytics tools, that are delivering breakthroughs and efficiencies across industries.

  • Health care: Insilico Medicine, a generative AI-driven drug discovery company, has created a new LLM transformer, called nach0, to answer biomedical questions and synthesize new molecules. The multi-domain model allows researchers to process and analyze large data sets efficiently, with reduced memory requirements and faster processing, facilitating more effective data management and organization.
  • Telecommunications: Amdocs is using its amAIz platform to improve business efficiency, drive new revenue streams, and deliver enhanced customer experiences. This includes a customer billing agent that provides immediate access to LLM-powered data insights and automation to address customer billing questions.
  • Financial services: Bank Negara Indonesia (BNI) is integrating Cloudera’s AI Inference service to enhance customer experiences and increase operational efficiency using generative AI. This will allow BNI to efficiently deploy and manage large-scale AI models within a secure enterprise environment, offering high performance and data privacy.

Different models tailored to specific needs allow rapid implementation of AI solutions and tools to help automate repetitive work. This creates more time and space for people to focus on valuable projects that move the needle for companies and organizations.

Looking forward, developers will seek to build and deploy LLMs that can enhance industry-specific applications, as well as work on improving interoperability between systems, reducing operational costs, and boosting efficiency. Using tailored LLMs, companies can build AI applications that meet their distinct requirements for improving customer satisfaction and fostering operational excellence.

Amanda Saunders is director of enterprise generative AI product marketing at Nvidia.

Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.