Palmyra X5

Enterprise LLM with a 1 million-token context window, adaptive reasoning, and multi-modal support for advanced, large-scale AI and agentic workflows.

Vendor

Writer

Company Website

Product details

Palmyra X5 is Writer’s most advanced enterprise large language model, designed to power the next generation of AI agents and applications. It features a 1 million-token context window, enabling the model to process and reason over vast amounts of data, such as entire codebases or large document collections, in a single prompt. Palmyra X5 is built with adaptive reasoning and hybrid attention mechanisms, and supports multi-step, agentic workflows. The model is trained entirely on synthetic data, achieving high efficiency and cost-effectiveness, and is available via Amazon Bedrock for seamless enterprise integration. Palmyra X5 supports multiple languages, tool-calling, built-in retrieval-augmented generation (RAG), code generation, structured outputs, and multi-modality, making it suitable for complex, mission-critical enterprise use cases.
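
For illustration, the sketch below calls the model through the Amazon Bedrock Converse API using the AWS SDK for Python (boto3). The region and model identifier are assumptions; confirm both in the Bedrock model catalog for your account.

```python
import boto3

# Assumed region and model identifier; confirm both in your Bedrock console.
REGION = "us-east-1"
MODEL_ID = "writer.palmyra-x5-v1:0"

client = boto3.client("bedrock-runtime", region_name=REGION)

response = client.converse(
    modelId=MODEL_ID,
    messages=[
        {"role": "user", "content": [{"text": "Summarize our Q3 incident reports in three bullet points."}]}
    ],
    inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
)

# The assistant reply is returned as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```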

Key Features

1 Million-Token Context Window: Processes and reasons over extremely large inputs (see the sketch below this list).

  • Ingests entire codebases or large document sets
  • Enables long-context, multi-step workflows
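
As a rough sketch of a long-context workflow, the example below concatenates a small (hypothetical) repository into a single Bedrock Converse request; the directory path and model identifier are placeholders.

```python
from pathlib import Path

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "writer.palmyra-x5-v1:0"  # placeholder; confirm in the Bedrock model catalog

# Concatenate every Python file in a hypothetical repository into one prompt.
# With a 1 million-token window, modest codebases fit in a single request,
# so no chunking or sliding-window logic is needed.
repo = Path("./my_service")
corpus = "\n\n".join(
    f"# FILE: {path}\n{path.read_text(errors='ignore')}"
    for path in sorted(repo.rglob("*.py"))
)

response = client.converse(
    modelId=MODEL_ID,
    messages=[{
        "role": "user",
        "content": [{"text": f"{corpus}\n\nDescribe the public API of this codebase and flag unused modules."}],
    }],
    inferenceConfig={"maxTokens": 2048},
)
print(response["output"]["message"]["content"][0]["text"])
```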

Adaptive Reasoning and Hybrid Attention: Delivers advanced, dynamic reasoning capabilities.

  • Supports agentic AI and multi-step task execution
  • Maintains accuracy across long contexts

Multi-Modality and Multi-Language Support: Handles diverse data types and languages.

  • Text, code, and rich text formatting
  • English, Spanish, French, German, Chinese, and more

Tool-Calling and LLM Delegation: Automates complex workflows and integrates with external tools (see the sketch below this list).

  • Enables agentic systems and workflow automation
  • Supports LLM-to-LLM delegation for distributed reasoning
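
A minimal tool-calling sketch against the Bedrock Converse API is shown below; the tool name, input schema, and model identifier are illustrative assumptions rather than part of the product documentation.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "writer.palmyra-x5-v1:0"  # placeholder; confirm in the Bedrock model catalog

# A hypothetical tool the model may choose to call.
tool_config = {
    "tools": [{
        "toolSpec": {
            "name": "get_invoice_status",
            "description": "Look up the current status of an invoice by its ID.",
            "inputSchema": {"json": {
                "type": "object",
                "properties": {"invoice_id": {"type": "string"}},
                "required": ["invoice_id"],
            }},
        }
    }]
}

response = client.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": "Has invoice INV-1042 been paid?"}]}],
    toolConfig=tool_config,
)

# When the model decides to call the tool, the response carries a toolUse block
# whose input the calling application executes before returning the result.
if response["stopReason"] == "tool_use":
    for block in response["output"]["message"]["content"]:
        if "toolUse" in block:
            print(block["toolUse"]["name"], block["toolUse"]["input"])
```

After executing the tool, the application appends a toolResult message and calls converse again so the model can compose its final answer.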

Built-in Retrieval-Augmented Generation (RAG): Enhances accuracy with real-time data integration (see the sketch below this list).

  • Connects to enterprise data sources
  • Improves factuality and relevance
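
The built-in RAG capability is configured on the platform side; purely as a conceptual sketch, the code below shows the general retrieve-then-generate pattern, with a hypothetical search_enterprise_index helper standing in for an enterprise data connector.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "writer.palmyra-x5-v1:0"  # placeholder; confirm in the Bedrock model catalog


def search_enterprise_index(query: str, k: int = 5) -> list[str]:
    """Hypothetical connector to an enterprise search index or vector store."""
    raise NotImplementedError("Wire this to your own data source.")


def answer_with_rag(question: str) -> str:
    # Retrieve supporting passages, then ground the generation on them.
    passages = search_enterprise_index(question)
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Use only the sources below to answer, and cite them by number.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    response = client.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 1024},
    )
    return response["output"]["message"]["content"][0]["text"]
```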

Enterprise-Grade Security and Deployment: Designed for secure, scalable enterprise use.

  • Available via Amazon Bedrock (serverless deployment)
  • Compliance with industry standards

Benefits

Scalable AI for Complex Workflows: Handles large-scale, multi-step enterprise tasks.

  • Powers agentic AI systems and automation
  • Reduces need for prompt engineering and context management

Cost and Efficiency Leadership: Optimized for enterprise budgets and performance.

  • Trained with synthetic data for $1M in GPU costs
  • Among the fastest and most cost-effective large-context LLMs

Enhanced Accuracy and Reliability: Maintains high performance on long-context and multi-modal tasks.

  • Reduces hallucinations and context loss
  • Consistent outputs for mission-critical applications

Seamless Enterprise Integration: Easy to deploy and manage in enterprise environments.

  • Available through Amazon Bedrock API
  • Supports serverless, scalable deployment