
The Einstein Trust Layer is a robust set of features and guardrails that protect the privacy and security of your data, improve the safety and accuracy of your AI results, and promote the responsible use of AI across the Salesforce ecosystem.

What is secure AI?
Secure AI is AI that protects your customer data without compromising the quality of its outputs. Customer and company data are key to enriching and personalizing the results of AI models, but you need to be able to trust how that data is used.
Trusted AI starts with securely grounded prompts.
A prompt is a set of instructions that steers a large language model (LLM) to return a useful result. The more context you give the prompt, the better the result will be. Einstein Trust Layer features like secure data retrieval and dynamic grounding enable you to safely provide AI prompts with context about your business, while data masking and zero data retention protect the privacy and security of that data when the prompt is sent to a third-party LLM.
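To make that flow concrete, here is a minimal Python sketch of a grounding pipeline under stated assumptions: every function, field, and value below is a hypothetical stand-in, not an Einstein Trust Layer API. It retrieves only the fields a user is permitted to see, merges them into a prompt template, and passes the result through a masking step (sketched in more detail further down) before any LLM call.

```python
# Illustrative sketch only: these helpers are hypothetical stand-ins,
# not Einstein Trust Layer APIs.

PROMPT_TEMPLATE = (
    "You are a helpful service assistant.\n"
    "Case subject: {subject}\n"
    "Customer message: {message}\n"
    "Draft a short, courteous reply."
)

def fetch_case(case_id: str, visible_fields: set) -> dict:
    """Secure data retrieval (stub): return only the fields this user may see."""
    record = {"subject": "Refund request",
              "message": "My last order arrived damaged."}
    return {k: v for k, v in record.items() if k in visible_fields}

def mask(text: str) -> str:
    """Placeholder for the data-masking pass (see the masking sketch below)."""
    return text

def build_grounded_prompt(case_id: str, visible_fields: set) -> str:
    case = fetch_case(case_id, visible_fields)   # business context, permission-checked
    prompt = PROMPT_TEMPLATE.format(             # dynamic grounding via a template
        subject=case.get("subject", "[NOT VISIBLE]"),
        message=case.get("message", "[NOT VISIBLE]"))
    return mask(prompt)                          # masked before a third-party LLM sees it

print(build_grounded_prompt("500XX", {"subject", "message"}))
```

Keeping the template separate from the retrieved data is what makes this pattern scale: the same template can be grounded with any record the requesting user is allowed to read.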
Seamless privacy and data controls.
Benefit from the scale and cost-effectiveness of third-party foundation LLMs while protecting the privacy and security of your data at each step of the generation process.
Secure Data Retrieval
Let users securely access the data needed to ground generative AI prompts in context about your business, while maintaining permissions and data access controls.
Dynamic Grounding
Securely infuse AI prompts with business context from structured or unstructured data sources, utilizing multiple grounding techniques that work with prompt templates you can scale across your business.
Data Masking*
Mask sensitive data types like personally identifiable information (PII) and payment card industry (PCI) information before sending AI prompts to third-party large language models (LLMs), and configure masking settings to your organization’s needs. *Availability varies by feature, language, and geographic region.
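As a rough illustration of what a masking pass can look like, here is a toy Python sketch that redacts two sensitive data types before a prompt leaves the trust boundary. The regexes are deliberately simplistic assumptions; production masking relies on far more robust detection (more data types, validation, locale awareness), and nothing here is an Einstein API.

```python
import re

# Toy masking pass: redact two sensitive data types before a prompt is sent
# to a third-party LLM. Real masking covers many more types and locales.
MASKING_RULES = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # PII: email address
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),  # PCI: card number
]

def mask_sensitive(prompt: str) -> str:
    for pattern, placeholder in MASKING_RULES:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(mask_sensitive("Customer jane@example.com paid with 4111 1111 1111 1111."))
# -> Customer [EMAIL] paid with [CARD_NUMBER].
```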
Your data is not our product.
Salesforce gives customers control over the use of their data for AI. Whether you use our own Salesforce-hosted models or external models within our Shared Trust Boundary, like OpenAI, no context is stored: the large language model forgets both the prompt and the output as soon as the output is processed.
Mitigate toxicity and harmful outputs.
Empower employees to prevent the sharing of inappropriate or harmful content by scanning and scoring every prompt and output for toxicity. Ensure that no output is shared before a human accepts or rejects it, and record every step as metadata in our audit trail, simplifying compliance at scale.*
*Coming soon.
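A hedged sketch of how such a gate might be structured in Python: score the draft, block it automatically if the score crosses a threshold, hold it for human review otherwise, and append each step to an audit log. The scoring function is a stub standing in for a real toxicity classifier, the threshold is illustrative, and none of these names come from the Einstein Trust Layer.

```python
import datetime

TOXICITY_THRESHOLD = 0.5   # illustrative cutoff, not a product default
audit_trail = []           # in practice: durable, append-only storage

def log_step(event: str, **details) -> None:
    """Record each step as metadata for later compliance review."""
    audit_trail.append({"event": event,
                        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                        **details})

def score_toxicity(text: str) -> float:
    """Stub: a real system calls a trained toxicity classifier here."""
    flagged = {"idiot", "stupid"}
    return min(1.0, sum(word in text.lower() for word in flagged) / 2)

def review_output(draft: str, human_accepts) -> str | None:
    score = score_toxicity(draft)
    log_step("toxicity_scored", score=score)
    if score >= TOXICITY_THRESHOLD:
        log_step("blocked_automatically", reason="toxicity")
        return None                    # never shown to anyone downstream
    if not human_accepts(draft):       # a human accepts or rejects every output
        log_step("rejected_by_human")
        return None
    log_step("accepted_by_human")
    return draft

print(review_output("Thanks for reaching out!", human_accepts=lambda d: True))
print(audit_trail)
```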
Deploy AI with Ethics by Design
Salesforce is committed to delivering software and solutions that are intentionally ethical and humane in use, particularly when it comes to data and AI. To empower customers and users to use AI responsibly, we have developed an AI Acceptable Use Policy that addresses the highest areas of risk. For example, generating individualized medical, legal, or financial advice is prohibited, in order to keep human decision-making in those areas. At Salesforce, we care about the real-world impact of our products, and that’s why we have specific protections in place to uphold our values while empowering customers with the latest tools on the market.