How Cerebras + DataRobot Accelerates AI App Development


Faster, smarter, more responsive AI applications – that’s what your users expect. But when large language models (LLMs) are slow to respond, user experience suffers. Every millisecond counts.

With Cerebras’ high-speed inference endpoints, you can reduce latency, speed up model responses, and maintain quality at scale with models like Llama 3.1-70B. By following a few simple steps, you’ll be able to customize and deploy your own LLMs, giving you the control to optimize for both speed and quality.

In this blog, we’ll walk you through how to:

  • Set up Llama 3.1-70B in the DataRobot LLM Playground.
  • Generate and apply an API key to leverage Cerebras for inference.
  • Customize and deploy smarter, faster applications.

By the end, you’ll be ready to deploy LLMs that deliver speed, precision, and real-time responsiveness.

Prototype, customize, and test LLMs in one place

Prototyping and testing generative AI models often requires a patchwork of disconnected tools. But with a unified, integrated environment for LLMs, retrieval techniques, and evaluation metrics, you can move from idea to working prototype faster and with fewer roadblocks.

This streamlined process means you can focus on building effective, high-impact AI applications without the hassle of piecing together tools from different platforms.

Let’s walk through a use case to see how you can leverage these capabilities to develop smarter, faster AI applications.

Use case: Speeding up LLM inference without sacrificing quality

Low latency is essential for building fast, responsive AI applications. But accelerated responses don’t have to come at the cost of quality.

The speed of Cerebras Inference outperforms other platforms, enabling developers to build applications that feel smooth, responsive, and intelligent.

When combined with an intuitive development experience, you can:

  • Reduce LLM latency for faster user interactions.
  • Experiment more efficiently with new models and workflows.
  • Deploy applications that respond instantly to user actions.

The diagrams below show Cerebras’ performance on Llama 3.1-70B, illustrating faster response times and lower latency than other platforms. This enables rapid iteration during development and real-time performance in production.

[Image: response time of Llama 3.1-70B with Cerebras]

How model size impacts LLM speed and performance

As LLMs grow larger and more complex, their outputs become more relevant and comprehensive, but this comes at a cost: increased latency. Cerebras tackles this challenge with optimized computations, streamlined data transfer, and intelligent decoding designed for speed.

These speed improvements are already transforming AI applications in industries like pharmaceuticals and voice AI. For example:

  • GlaxoSmithKline (GSK) uses Cerebras Inference to accelerate drug discovery, driving higher productivity.
  • LiveKit has boosted the performance of ChatGPT’s voice mode pipeline, achieving faster response times than traditional inference solutions.

The results are measurable. On Llama 3.1-70B, Cerebras delivers 70x faster inference than vanilla GPUs, enabling smoother, real-time interactions and faster experimentation cycles.

This performance is powered by Cerebras’ third-generation Wafer-Scale Engine (WSE-3), a custom processor designed to optimize the tensor-based, sparse linear algebra operations that drive LLM inference.

By prioritizing performance, efficiency, and flexibility, the WSE-3 ensures faster, more consistent results during inference.

Cerebras Inference’s speed reduces the latency of AI applications powered by its models, enabling deeper reasoning and more responsive user experiences. Accessing these optimized models is straightforward: they’re hosted on Cerebras and available via a single endpoint, so you can start leveraging them with minimal setup.
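To give a concrete sense of that setup, here’s a minimal sketch of calling a Cerebras-hosted model from Python. It assumes Cerebras’ OpenAI-compatible API at api.cerebras.ai and an illustrative model ID of llama3.1-70b; check the Cerebras documentation for the current values:

```python
import os

from openai import OpenAI  # pip install openai

# Assumes Cerebras' OpenAI-compatible API; verify the base URL and
# model ID against the current Cerebras Inference documentation.
client = OpenAI(
    base_url="https://api.cerebras.ai/v1",
    api_key=os.environ["CEREBRAS_API_KEY"],
)

response = client.chat.completions.create(
    model="llama3.1-70b",  # illustrative model ID
    messages=[{"role": "user", "content": "Why does low latency matter for LLM apps?"}],
)
print(response.choices[0].message.content)
```

Because the endpoint follows the familiar chat-completions pattern, the same call works from a script during prototyping or from inside a deployed application.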

[Image: tokens per second on Cerebras Inference]

Step-by-step: How to customize and deploy Llama 3.1-70B for low-latency AI

Integrating LLMs like Llama 3.1-70B from Cerebras into DataRobot allows you to customize, test, and deploy AI models in just a few steps. This process supports faster development, interactive testing, and greater control over LLM customization.

1. Generate an API key for Llama 3.1-70B in the Cerebras platform.

[Image: generating an API key on Cerebras]

2. In DataRobot, create a custom model in the Model Workshop that calls out to the Cerebras endpoint where Llama 3.1-70B is hosted.

[Image: the Model Workshop in DataRobot]

3. Within the custom model, add the Cerebras API key to the custom.py file (a sketch of this file follows the image below).

[Image: adding the Cerebras API key to custom.py in DataRobot]
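For reference, here’s a hedged sketch of what that custom.py might contain, assuming DataRobot’s standard load_model/score hooks for text-generation custom models and Cerebras’ OpenAI-compatible endpoint. The promptText and resultText column names are illustrative and should match your model’s configured prompt and target columns:

```python
# custom.py -- hedged sketch of a DataRobot custom model that proxies
# requests to a Cerebras-hosted Llama 3.1-70B endpoint.
import os

import pandas as pd
from openai import OpenAI  # pip install openai


def load_model(code_dir: str):
    # Reading the key from an environment variable here; alternatively,
    # embed it per step 3 or use a DataRobot credential.
    return OpenAI(
        base_url="https://api.cerebras.ai/v1",  # OpenAI-compatible endpoint
        api_key=os.environ["CEREBRAS_API_KEY"],
    )


def score(data: pd.DataFrame, model: OpenAI, **kwargs) -> pd.DataFrame:
    completions = []
    for prompt in data["promptText"]:  # assumed prompt column name
        resp = model.chat.completions.create(
            model="llama3.1-70b",  # illustrative model ID; confirm in Cerebras docs
            messages=[{"role": "user", "content": prompt}],
        )
        completions.append(resp.choices[0].message.content)
    return pd.DataFrame({"resultText": completions})  # assumed target column
```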

4. Deploy the custom model to an endpoint in the DataRobot Console, enabling LLM blueprints to leverage it for inference.

[Image: deploying Llama 3.1-70B on Cerebras in DataRobot]

5. Add your deployed Cerebras LLM to an LLM blueprint in the DataRobot LLM Playground to start chatting with Llama 3.1-70B.

[Image: adding an LLM to the Playground in DataRobot]

6. Once the LLM is added to the blueprint, test responses by adjusting prompting and retrieval parameters, and compare outputs with other LLMs directly in the DataRobot GUI.

[Image: the DataRobot Playground]

Expand the boundaries of LLM inference for your AI applications

Deploying LLMs like Llama 3.1-70B with low latency and real-time responsiveness is no small task. But with the right tools and workflows, you can achieve both.

By integrating LLMs into DataRobot’s LLM Playground and leveraging Cerebras’ optimized inference, you can simplify customization, speed up testing, and reduce complexity – all while maintaining the performance your users expect.

As LLMs grow larger and more powerful, having a streamlined process for testing, customization, and integration will be essential for teams looking to stay ahead.

Try it yourself. Access Cerebras Inference, generate your API key, and start building AI applications in DataRobot.

About the authors

Kumar Venkateswar

VP of Product, Platform and Ecosystem

Kumar Venkateswar is VP of Product, Platform and Ecosystem at DataRobot. He leads product management for DataRobot’s foundational services and ecosystem partnerships, bridging the gaps between efficient infrastructure and integrations that maximize AI outcomes. Prior to DataRobot, Kumar worked at Amazon and Microsoft, including leading product management teams for Amazon SageMaker and Amazon Q Business.




Nathaniel Daly

Principal Product Manager

Nathaniel Daly is a Principal Product Manager at DataRobot specializing in AutoML and time series products. He is focused on bringing advances in data science to users so that they can leverage this value to solve real-world business problems. He holds a degree in Mathematics from the University of California, Berkeley.


