This is shaping up to be one of the most exciting times to be at Google Cloud Next. Machine learning and generative AI have quickly become high priorities for many organizations, and there will be a lot to digest and learn at this content-packed conference.
At Granica, we believe that for AI to be truly useful it has to be trusted, and trusted AI begins with data privacy to ensure that data is safe for use in model training and fine-tuning, RAG, or directly in prompts. Side note: we'll be showing off our data privacy capabilities on the show floor; you can find us at Booth #1403 and learn more here.
As big fans of building trust, impact, and efficiency into AI, our team has curated the sessions most worth attending. We think these will be great for AI/ML teams (and their leaders), so if that's you, we highly recommend adding them to your agenda. Head to your Cloud Next agenda portal, where you can search for each session by code (e.g., “SEC207”).
SEC207 | A cybersecurity expert's guide to securing AI products with Google SAIF
April 9 | 09:15 AM - 10:00 AM PDT
Speakers: Anton Chuvakin, Senior Staff Security Consultant, Google Cloud; Shan Rao, Group Product Manager, Google Cloud
Learn a practical approach to addressing security and privacy concerns in AI using Google's Secure AI Framework (SAIF). Understand how to implement SAIF to secure AI products effectively.
AIML215 | Security, privacy, and governance considerations for using AI at scale
April 9 | 03:30 PM - 04:15 PM PDT
Speakers: Vincent Ciaravino, Product Manager, Cloud AI, Google Cloud; Ken McAfee, VP Enterprise Architecture, Equifax; Eesha Pathak, Product Manager, AI Ethics and Product Governance, Google Cloud; Mark Schadler, Principal Software Engineer, Google Cloud
Discover how enterprises use Vertex AI to safely enable AI at scale while addressing critical security, privacy, and responsible AI governance concerns. Learn from real-world examples, including Equifax, on managing these considerations.
SEC119 | Secure AI from source to service: Protect your data lifecycle
April 9 | 01:45 PM - 02:30 PM PDT
Speakers: Jordanna Chord, Senior Staff Software Engineer, Engineering, Google Cloud; Scott Ellis, Senior Product Manager, Engineering, Google Cloud; Anthony Tsoi, DataOps Lead Engineer, Charlotte Tilbury Beauty
Gain insights on protecting your most sensitive data assets across the entire AI/ML lifecycle, from data to training to serving. Learn how Sensitive Data Protection helps you manage security, privacy, and compliance risks.
SEC200 | Your model. Your data. Your business: A privacy-first approach to generative AI
April 9 | 03:00 PM - 03:45 PM PDT
Speakers: Todd Moore, Vice President of Encryption Solutions, Thales; Amit Patil, Director, Cloud Security Platform and Products, Google Cloud; Nelly Porter, Director of Product Management, Google Cloud
Explore a privacy-centric approach to training and fine-tuning production AI models using your own data. Understand options for training in the cloud or on-premises while maintaining data privacy.
IHLT112 | Checks - Building safer and privacy-compliant apps with AI
April 11 | 01:00 PM - 01:15 PM PDT
Speakers: Fergus Hurley, Checks Co-Founder & GM, Google; Pedro Rodriguez, Head of Engineering, Checks, Google
Learn how Google's AI-powered compliance platform, Checks, simplifies the complex challenge of building privacy-compliant apps. Understand how developers can leverage AI to create a safer ecosystem.
ARC207 | Delivering generative AI on premises for regulated industries
April 10 | 12:30 PM - 01:15 PM PDT
Speakers: DP Ayyadevara, Group Product Manager, Google Cloud; Gobind Johar, Product Manager, Google Cloud; Olivier Simon, VP, Smart Networks & Data, Orange
Discover how customers with stringent data residency requirements can access and integrate large language models (LLMs) in their applications. Learn how to enable generative AI-powered applications on-premises while managing infrastructure, data scale, security, and compliance.
Got other suggestions? Share your comments/questions below: