The Data Compression Service for Traditional and Generative AI
Granica Crunch is the data compression service for traditional and generative AI. It dramatically shrinks the cost to store and access large-scale data sets, especially training data, in modern cloud data lakes. Crunch pre-processes this data with deep, fast compression before it is used in downstream stages and environments. Its advanced, patented machine learning-powered compression algorithms run transparently in the background, ensuring that both existing and incoming data are efficiently compressed. Granica Crunch is consumed as an S3-compatible API by applications working directly with data in Amazon and Google Cloud data lakes.
How Granica Crunch helps
Granica Crunch reduces the cost to store, access, and transfer petabyte-scale data, typically by 25-60% and by up to 80% depending on the file type. Crunch is the simplest and most secure way to significantly reduce data costs without archival or deletion. With Crunch you can cost-effectively keep, grow, and, most importantly, use your data sets to maximize your model performance and ultimately your ROI on AI.
- Reduce cost to store data by up to 80%, depending on data types and access patterns
- Reduce bandwidth costs as well as transfer times by up to 80%
- Reduce compute cost and time for downstream transformation and training stages
- Increase model performance by performing more training at the same cost
If you’re storing 10 petabytes of data in Amazon or Google Cloud data lakes, that translates into more than $1.3M per year of cash savings, growing as your data grows. That's a lot of money to reinvest into your strategic projects.
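As a rough sanity check on that figure, the savings can be estimated from list storage prices. A minimal sketch, where the per-GB-month price and the 50% reduction rate are illustrative assumptions rather than quoted figures:

```python
# Back-of-the-envelope estimate of annual storage savings.
# Assumptions (illustrative, not quoted figures):
#   - standard object storage list price of ~$0.021 per GB-month
#   - an average 50% size reduction from compression
data_pb = 10
price_per_gb_month = 0.021
reduction = 0.50

data_gb = data_pb * 1024 ** 2              # 10 PB expressed in GB
monthly_storage_cost = data_gb * price_per_gb_month
annual_savings = monthly_storage_cost * reduction * 12

print(f"${annual_savings:,.0f} per year")  # roughly $1.32M
```

Bandwidth and request-cost savings would come on top of this storage-only estimate.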
Crunch pricing is extremely simple. There are no upfront costs. Deploy it into your AWS/GCP environment, consume the S3-compatible Granica API in apps that work with the data, and watch the savings accumulate each month. Then simply pay us a small percentage of those savings and keep the rest. Our outcome-based pricing model doesn’t cost budget, it frees up budget.
Key characteristics and features
- Byte-granular—Advanced ML-based data compression algorithms run statistical analyses at the byte level for maximum efficiency
- Adaptive—Data-agnostic and adaptive, automatically routing data to the optimal compression model for maximum efficiency and savings
- Cloud-native—Supports public cloud object stores: Amazon S3 and Google Cloud Storage
- Lossless—Continuous scanning ensures full data integrity and fidelity at all times
- Secure—Data never leaves your environment: Crunch runs entirely within your VPC and respects your security policies
- Private—Multi-tenant data isolation and separation of crunched data for your end-user customers and internal tenants
- S3-compatible—Reduced data is immediately and transparently accessible via an S3-compatible API
- Scalable—Elastic clusters with each single-node instance capable of reducing up to tens of PBs
- Fast—Sustained throughput of up to 1 GB per second per node for transparent access to your data
- Resilient—Highly available clusters ensure always-on data access
- Powerfully simple—Start crunching your data in-place and cutting costs in 30 minutes, with 1 command
Crunch inherits many of these characteristics via internal shared services; see the Granica architecture for more details.
Granica Crunch is administered using the Granica control plane, and core data compression services are delivered via the Granica data plane. Check out additional details for how crunching works and how reads work.