Granica Screen is a data privacy platform that protects natural language processing data & models – from training and fine-tuning to inference. It discovers sensitive information in cloud data lake files and input prompts with state-of-the-art accuracy to ensure safe use with both in-house and external AI services.
It's hard to make data safe for use with AI, and the stakes keep getting higher
If AI teams train, fine-tune, or prompt LLMs with PII and other sensitive information, they not only introduce bias that degrades AI results but also create a high risk of data leakage, i.e. breach, at inference time.
Preserving the privacy of training and prompt data has proven to be a persistent challenge. The two biggest issues? Low detection accuracy and poor cost-effectiveness at the scale AI requires.
Granica Screen lets you discover all your sensitive information and PII with state-of-the-art accuracy, so nothing leaks out. Screen is also highly compute-efficient, so you can unlock up to 10X more data at the same cloud infrastructure cost as traditional solutions.
Discover and mask sensitive information in training data to ensure it doesn’t accidentally leak at inference time.
Discover and mask sensitive information in end-user and application generated prompts, whether for use with in-house or 3rd-party LLMs.
Replace sensitive information with synthetic data (realistic but fake) to improve accuracy and privacy when training, fine-tuning, and prompting LLMs.
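Granica has not published its de-identification internals, so as a rough illustration of the synthetic-replacement technique itself, here is a minimal sketch in plain Python. All function names and regex patterns here are our own assumptions, not Granica's implementation: detected identifiers are swapped for realistic but fake values of the same format, so downstream models still see well-formed data.

```python
import random
import re

# Hypothetical sketch of synthetic-data replacement (NOT Granica's actual
# implementation). Real systems use NER models, not just regexes.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
PHONE_RE = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")

def fake_ssn() -> str:
    # Realistic-looking but invalid SSN (900-999 area numbers are unassigned).
    return f"{random.randint(900, 999)}-{random.randint(10, 99)}-{random.randint(1000, 9999)}"

def fake_phone() -> str:
    # 555 prefix is reserved for fictional phone numbers.
    return f"555-{random.randint(100, 999)}-{random.randint(1000, 9999)}"

def synthesize(text: str) -> str:
    # Replace each detected entity with a fresh fake value of the same type.
    text = SSN_RE.sub(lambda _: fake_ssn(), text)
    return PHONE_RE.sub(lambda _: fake_phone(), text)

prompt = "Customer 123-45-6789 can be reached at 415-555-0123."
print(synthesize(prompt))
```

Because the replacements preserve format, aggregate statistics and model behavior are less distorted than with blanket redaction, while the original identifiers never reach the model.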
With Granica Screen, you can confidently mitigate privacy risks while building better AI, faster.
Learn how Granica Screen can help you make data safe for AI.
Read Our Latest White Paper "Building Trust, Impact and Efficiency into Traditional and Generative AI"
The Granica Screen data privacy platform delivers 5-10X higher compute efficiency, which lowers infrastructure cost per byte scanned vs. traditional approaches and enables cost-effective scanning of broad data sets.
Data types and classifiers supported
Granica supports a wide range of AI/ML/analytics data types and classifiers (e.g., phone number, SSN, VIN). Bring us your unique requirements, and we can customize detection for your use case.
Clickstream
Logs
Tabular
g ~/ granica deploy
Success!
No. Granica Screen is typically granted read-only access to private files. It reads and transforms sensitive information such as PII in those files using various de-identification techniques and then stores safe-for-use copies in a separate target bucket.
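To make the read-only pattern concrete, here is a minimal, hypothetical sketch in plain Python. This is not Granica's API; the bucket representation, regex, and function names are our own assumptions. Source objects are never modified, and de-identified copies land in a separate target bucket.

```python
import re

# Illustrative sketch of the read-only scan-and-copy pattern described above
# (NOT Granica's actual API). Buckets are modeled as simple dicts.
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def deidentify(body: str) -> str:
    # Mask each detected entity; a real system would use NER, not one regex.
    return EMAIL_RE.sub("[EMAIL]", body)

def screen_bucket(source: dict, target: dict) -> None:
    for key, body in source.items():    # read-only access to source objects
        target[key] = deidentify(body)  # safe-for-use copy in the target

source = {"logs/a.txt": "Contact jane.doe@example.com for access."}
target = {}
screen_bucket(source, target)
print(target["logs/a.txt"])  # "Contact [EMAIL] for access."
```

The key property is separation of concerns: the source bucket stays byte-for-byte intact for audit and compliance, while AI workloads consume only the de-identified copies.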
Granica Screen provides state-of-the-art named entity recognition (NER) accuracy for 50+ entities and global support for 100+ languages. High accuracy is the critical foundation of real data privacy, enabling truly safe and compliant use in ML and generative AI.
Screen is also highly compute-efficient, lowering the cost to side-scan data by 5-10X and thus increasing the volume of data you can unlock for training by 5-10X at comparable cost. Finally, Granica Screen continuously monitors your data lake to detect and protect sensitive information and PII in new files immediately after they land.
Yes, both products are built on the Granica platform and are fully compatible with one another. You can maximize your benefits by using them together. For example, a common pattern is to first use Granica Screen to generate safe-for-use file copies in a target bucket. Then, using Granica Crunch on that target bucket minimizes the cost of storing and accessing those copies.
Unlock even more data for AI/ML teams to safely improve model performance, whether for private LLMs and generative AI or traditional AI and machine learning. The Granica Screen data privacy platform reduces breach and compliance risks while simultaneously improving the efficiency of downstream AI workflows.