HeatWave GenAI

Oracle HeatWave GenAI provides integrated, automated, and secure generative AI with in-database large language models (LLMs); an automated, in-database vector store; scale-out vector processing; and the ability to have contextual conversations in natural language—letting you take advantage of generative AI without AI expertise, data movement, or additional cost. HeatWave GenAI is available in Oracle Cloud Infrastructure (OCI), Amazon Web Services (AWS), and Microsoft Azure.

Vendor: Oracle

Why use HeatWave GenAI?

Quickly use generative AI anywhere

Use in-database LLMs across clouds and regions to help retrieve data and generate or summarize content—without the hassle of external LLM selection and integration.

Easily get more accurate and relevant answers

Let LLMs search your proprietary documents to help get more accurate and contextually relevant answers—without AI expertise or moving data to a separate vector database. HeatWave GenAI automates embedding generation.

Get faster results at lower cost

For similarity search, HeatWave GenAI costs less and runs 15X faster than Databricks, 18X faster than Google BigQuery, and 30X faster than Snowflake.

Converse in natural language

Get rapid insights from your documents via natural language conversations. The HeatWave Chat interface preserves context to help enable human-like conversations with follow-up questions.

Key features of HeatWave GenAI

In-database LLMs

Use the built-in LLMs in all Oracle Cloud Infrastructure (OCI) regions, OCI Dedicated Region, and across clouds and get consistent results with predictable performance across deployments. Help reduce infrastructure costs by eliminating the need to provision GPUs.
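Because the LLMs run inside the database, you invoke them with ordinary SQL over a standard MySQL connection. As a rough sketch (the `sys.ML_GENERATE` routine follows HeatWave's documented MySQL interface, but the option keys and the model id used here are assumptions), a helper might build the statement like this:

```python
# Minimal sketch: asking an in-database HeatWave LLM to generate text.
# sys.ML_GENERATE is HeatWave's generation routine; the "task"/"model_id"
# option keys and the model id below are assumptions for illustration.

def ml_generate_sql(prompt: str, model_id: str = "mistral-7b-instruct-v1") -> str:
    """Build the SELECT statement that runs a built-in LLM on a prompt."""
    escaped = prompt.replace('"', '\\"')  # keep the SQL string literal valid
    return (
        f'SELECT sys.ML_GENERATE("{escaped}", '
        f'JSON_OBJECT("task", "generation", "model_id", "{model_id}"));'
    )

print(ml_generate_sql("Summarize our Q3 support tickets."))
```

In practice you would send the returned statement through any MySQL client (for example MySQL Connector/Python); no GPU provisioning or external endpoint is involved.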

Integrated with other generative AI services

Access pretrained foundational models from Cohere and Meta via the OCI Generative AI service when using HeatWave GenAI on OCI and via Amazon Bedrock when using HeatWave GenAI on AWS.

HeatWave Chat

Have contextual conversations in natural language informed by your unstructured data in HeatWave Vector Store. Use the integrated Lakehouse Navigator to help guide LLMs to search through specific documents, helping you reduce costs while getting more accurate results faster.
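A chat turn is likewise just a stored-procedure call, with session state carrying the conversation context between calls. The sketch below assumes `sys.HEATWAVE_CHAT` as the chat routine (per HeatWave's MySQL interface) and a `@chat_options` session variable whose `"tables"` key scopes retrieval to specific documents; the exact option shape is an assumption here:

```python
# Sketch of one HeatWave Chat turn issued as SQL statements.
# sys.HEATWAVE_CHAT is the chat routine; the @chat_options scoping
# shape ("tables" -> schema/table pairs) is an assumption.

def chat_statements(question: str, schema: str, table: str) -> list:
    """Build the statements for a scoped chat turn over one vector store table."""
    escaped = question.replace('"', '\\"')
    return [
        # Restrict retrieval to one table, as Lakehouse Navigator scoping would.
        f'SET @chat_options = JSON_OBJECT("tables", JSON_ARRAY('
        f'JSON_OBJECT("schema_name", "{schema}", "table_name", "{table}")));',
        # Each CALL reuses session state, so follow-up questions keep context.
        f'CALL sys.HEATWAVE_CHAT("{escaped}");',
    ]

for stmt in chat_statements("What does the warranty cover?", "demo_db", "manuals_vs"):
    print(stmt)
```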

In-database vector store

HeatWave Vector Store houses your proprietary documents in various formats, acting as the knowledge base for retrieval-augmented generation (RAG) to help you get more accurate and contextually relevant answers—without moving data to a separate vector database.
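A single RAG query against the vector store can also be expressed as SQL. As a sketch, assuming HeatWave's `sys.ML_RAG` routine with an output session variable and a `"vector_store"` option listing the table(s) to search (the option key and table name here are placeholders):

```python
# Sketch: one retrieval-augmented generation query against HeatWave
# Vector Store. sys.ML_RAG is HeatWave's RAG routine; the option key
# "vector_store" and the table name are assumptions for illustration.

def ml_rag_statements(question: str, vector_store_table: str) -> list:
    """Build the CALL that runs RAG, plus a SELECT to read the answer."""
    escaped = question.replace('"', '\\"')
    return [
        f'CALL sys.ML_RAG("{escaped}", @rag_output, '
        f'JSON_OBJECT("vector_store", JSON_ARRAY("{vector_store_table}")));',
        "SELECT JSON_PRETTY(@rag_output);",
    ]

for stmt in ml_rag_statements("What is our refund policy?", "demo_db.docs_vs"):
    print(stmt)
```

The retrieved document chunks ground the LLM's answer, which is why the data never needs to leave the database for a separate vector store.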

Automated generation of embeddings

Leverage the automated pipeline to help discover and ingest proprietary documents in HeatWave Vector Store, making it easier for developers and analysts without AI expertise to use the vector store.
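Ingestion is a single call that points the pipeline at a document location; HeatWave discovers the files, parses them, and generates embeddings automatically. The sketch below assumes the `sys.VECTOR_STORE_LOAD` routine taking a source URI and a JSON options string; the bucket URI and the `"table_name"` option key are placeholders:

```python
# Sketch: kicking off the automated ingest/embedding pipeline.
# sys.VECTOR_STORE_LOAD is HeatWave's load routine; the object storage
# URI and the "table_name" option key below are assumptions.

def vector_store_load_sql(source_uri: str, table_name: str) -> str:
    """Build the CALL that ingests documents and auto-generates embeddings."""
    return (
        f"CALL sys.VECTOR_STORE_LOAD('{source_uri}', "
        f"'{{\"table_name\": \"{table_name}\"}}');"
    )

print(vector_store_load_sql("oci://docs-bucket@tenancy/manuals/", "manuals_vs"))
```

After the pipeline finishes, the target table holds the chunked documents and their embeddings, ready for similarity search and RAG.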

Scale-out vector processing

Vector processing is parallelized across up to 512 HeatWave cluster nodes and executed at memory bandwidth, helping to deliver fast results with a reduced likelihood of accuracy loss.

Who benefits from HeatWave GenAI?

Developers can deliver apps with built-in AI

Built-in LLMs and HeatWave Chat help enable you to deliver apps that are preconfigured for contextual conversations in natural language. There's no need for external LLMs or GPUs.

Analysts can rapidly get new insights

HeatWave GenAI can help you easily converse with your data, perform similarity searches across documents, and retrieve information from your proprietary data.

IT can help accelerate AI innovation

Empower developers and business teams with integrated capabilities and automation to take advantage of generative AI. Easily enable natural language conversations and RAG.
