The Data Reduction Service for Enterprise AI
Granica Crunch is the data reduction service for enterprise AI, pre-processing data in your training data aggregation environment before it is used in downstream stages and environments. It provides deep, fast, inline data reduction of large-scale image and textual data to reduce the cost of storing and accessing that data. Granica Crunch is consumed as an API by applications working directly with data, especially training data, in Amazon S3 and Google Cloud Storage. Its advanced, patented, machine learning-powered data reduction algorithms run transparently in the background.
How Granica Crunch helps
Granica Crunch typically reduces the cost to store, access, and transfer petabyte-scale data by 25-60%, and by up to 80%, depending on the file type. Crunch is the simplest and most secure way to significantly reduce the cost of the data in your training aggregation environment without resorting to archival or deletion. With Crunch you can cost-effectively keep, grow, and, most importantly, use your data sets to maximize model performance and ultimately your ROI on AI.
- Reduce cost to store data by up to 80%, depending on data types and access patterns
- Reduce bandwidth costs as well as transfer times by up to 80%
- Reduce compute cost and time for downstream transformation and training stages
- Increase model performance by performing more training at the same cost
If you’re storing 10 petabytes of data in Amazon S3 or Google Cloud Storage, that translates into more than $1.3M per year in cash savings (growing as your data grows). That's a lot of money to reinvest into your strategic projects.
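The headline figure can be sanity-checked with simple arithmetic. A minimal sketch, assuming an illustrative storage price of about $0.021 per GB-month (in the range of S3 Standard list pricing at high-volume tiers) and a 50% reduction rate; both numbers are assumptions for illustration, not Granica-published figures:

```python
GB_PER_PB = 1024 ** 2  # 1 PB = 1,048,576 GB

def annual_savings(petabytes, price_per_gb_month=0.021, reduction=0.50):
    """Back-of-the-envelope yearly storage savings from reducing data in place.

    price_per_gb_month and reduction are illustrative assumptions,
    not published Granica or cloud-provider figures.
    """
    monthly_cost = petabytes * GB_PER_PB * price_per_gb_month
    return monthly_cost * 12 * reduction

print(f"${annual_savings(10):,.0f}")  # → $1,321,206, roughly the $1.3M above
```

Actual savings depend on your storage class, volume discounts, and the reduction rate your data achieves.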
Crunch pricing is extremely simple. There are no upfront costs. Simply deploy it into your AWS/GCP environment, consume the Granica API in apps that work with S3/GCS, and watch the savings accumulate each month. Then simply pay us a small percentage of those savings and keep the rest. Our outcome-based pricing model doesn’t cost budget, it frees up budget.
Key characteristics and features
- Byte-granular—Advanced ML-based data reduction algorithms run statistical analyses at the byte level for maximum efficiency
- Adaptive—Data agnostic and adaptive, automatically routing data through the optimal reduction model for maximum reduction
- Cloud-native—Supports public cloud object stores: Amazon S3 and Google Cloud Storage
- Lossless—Continuous scanning ensures full data integrity and fidelity at all times
- Secure—Data never leaves your environment: Crunch runs entirely within your VPC and respects your security policies
- Private—Multi-tenant data isolation and separation of crunched data for your end-user customers and internal tenants
- S3-compatible—Reduced data is immediately and transparently accessible via an S3-compatible API
- Scalable—Elastic clusters with each single-node instance capable of reducing up to tens of PBs
- Fast—Sustained throughput of up to 1 GB per second per node for transparent access to your data
- Resilient—Highly available clusters ensure always-on data access
- Powerfully simple—Start crunching your data in-place and cutting costs in 30 minutes, with 1 command
Crunch inherits many of these characteristics via internal shared services; see the Granica architecture for more details.
Granica Crunch is administered using the Granica control plane, and core data reduction services are delivered via the Granica data plane. Check out additional details for how crunching works and how reads work.