-
8:00
Registration & Light Breakfast
-
8:50
Welcome and Opening Remarks
Lisa M. Lum - Founder & CEO - Friends of the Metaverse
-
9:00
Opening Panel: Embedding Ethics in AI Development – Best Practices for Fair and Accountable Systems
- Discuss the ethical considerations surrounding AI development and deployment.
- Learn about best practices for building fair, transparent, and accountable AI systems.
- Explore the role of governance and regulation in promoting ethical AI.
Moderator: Lisa M. Lum, Director, Board of Directors, Cal Alumni Association | UC Berkeley
- Venkatesh Palani, Senior Director and Head of Engineering, eBay
- Sayan Maity, Principal ML Engineer, Disney Streaming
-
9:40
Beyond Human Reflexes: AI-Driven Real-Time Decisions
- Explore the transformative potential of AI for real-time decision making in various industries.
- Discuss the challenges and opportunities of implementing AI solutions for real-time applications.
- Learn how AI can enhance human capabilities for making rapid, informed decisions in dynamic environments.
- Examine the crucial role of human-AI interaction in ensuring responsible and effective real-time decision making.
- Discover real-world examples of AI-powered real-time decision-making systems across industries.
-
10:10
Case Study: AI-Driven Microbiome Insights for Personalized Care and Longevity
Elsa Jungman - Founder & CEO - HelloBiome
To follow ...
-
10:40
Coffee & Networking Break
-
11:00
Securing AI Collaboration and Unlocking the Benefits of Federated Learning 2.0
- Explore the latest advancements in federated learning.
- Learn how federated learning enables secure and collaborative AI development.
- Discover practical applications of federated learning across industries.
-
11:30
Memory Optimizations in Machine Learning
Tejas Chopra - Senior Software Engineer - Netflix
As Machine Learning continues to forge its way into diverse industries and applications, optimizing computational resources, particularly memory, has become a critical aspect of effective model deployment. This session, "Memory Optimizations in Machine Learning," offers an in-depth look at the specific memory requirements of Machine Learning tasks, including Large Language Models (LLMs), and cutting-edge strategies for reducing memory consumption.
We'll begin by demystifying the memory footprint of typical Machine Learning data structures and algorithms, elucidating the nuances of memory allocation and deallocation during model training phases. The talk will then focus on memory-saving techniques such as data quantization, model pruning, and efficient mini-batch selection. These techniques offer the advantage of conserving memory resources without significant degradation in model performance.

A special emphasis will be placed on the memory footprint of LLMs during inferencing. LLMs, known for their immense size and complexity, pose unique challenges in terms of memory consumption during deployment. We will explore the factors contributing to the memory footprint of LLMs, such as model architecture, input sequence length, and vocabulary size. Additionally, we will discuss practical strategies to optimize memory usage during LLM inferencing, including techniques like model distillation, dynamic memory allocation, and efficient caching mechanisms.
By the end of this session, attendees will have a comprehensive understanding of memory optimization techniques for Machine Learning, with a particular focus on the challenges and solutions related to LLM inferencing.
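As a rough illustration of why quantization matters for the techniques described above, here is a minimal back-of-the-envelope sketch (not part of the session materials) estimating the weight memory of a hypothetical 7B-parameter model at several numeric precisions; the parameter count and precisions are assumptions chosen purely for illustration.

```python
# Rough estimate of weight memory for a hypothetical 7B-parameter model
# at several numeric precisions. Illustrative only; real deployments also
# need memory for activations, the KV cache, and framework overhead.

PARAMS = 7_000_000_000  # assumed parameter count (hypothetical model)

BYTES_PER_PARAM = {
    "fp32": 4,    # full precision
    "fp16": 2,    # half precision
    "int8": 1,    # 8-bit quantization
    "int4": 0.5,  # 4-bit quantization (packed, two weights per byte)
}

for precision, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / (1024 ** 3)
    print(f"{precision}: ~{gib:.1f} GiB of weight memory")
```

Even this crude estimate shows the order-of-magnitude savings that motivate quantization and the other memory optimizations the session covers.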
-
12:00
AI as a Strategic Asset: How Enterprises Can Build Competitive Advantage
- Explore how businesses can leverage AI as a strategic asset to gain a competitive edge.
- Learn about key strategies for building and implementing successful AI initiatives.
- Discover real-world examples of companies leveraging AI for strategic advantage.
-
12:30
Lunch & Networking Break
-
1:30
Duo Presentation: Making Smaller LLMs Punch Above Their Weight - Lessons in Post-Training and Fine-Tuning
LLMs have started providing great value in enterprise use cases. Deploying smaller LLMs in production is attractive because they are less resource-hungry, but smaller models often struggle to match the performance of their larger counterparts. In this talk, we will discuss techniques such as knowledge distillation and post-training alignment methods like RLHF and DPO that can help bridge this gap. The speakers will share success stories from open-source and enterprise applications (a generic distillation sketch follows the speaker list below).
Speakers:
- Sandeep Jha, Principal Staff Technical Program Manager, LinkedIn
- Aman Gupta, Sr. Staff Engineer - Applied AI Research, LinkedIn
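To make the knowledge-distillation idea mentioned in the abstract concrete, here is a minimal sketch of a standard distillation loss in PyTorch. It is a generic illustration, not the speakers' implementation; the temperature and tensor shapes are arbitrary assumptions.

```python
# Minimal sketch of a knowledge-distillation loss in PyTorch: a smaller
# "student" model is trained to match the softened output distribution of a
# larger "teacher" model. Illustrative only; the session's actual recipes
# (RLHF, DPO, etc.) involve considerably more machinery.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then minimize the KL
    # divergence from teacher to student (scaled by T^2, as is conventional).
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy usage with random logits (batch of 4, vocabulary of 10):
student = torch.randn(4, 10)
teacher = torch.randn(4, 10)
print(distillation_loss(student, teacher))
```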
-
2:00
Launching Gen AI Agents in Production: Best Practices for Enterprise Use Cases
Sai Kumar Arava - Machine Learning Manager, Gen AI / ML Applications - Adobe
- Understand the best practices for scaling, deploying, and managing Gen AI agents in enterprise environments.
- Learn how to address challenges related to security, compliance, and performance optimization in production.
- Gain actionable strategies for leveraging Gen AI to drive innovation, improve operational efficiency, and create business value.
-
2:30
Scaling Deep Learning-Based Recommender Model Training
Saurabh Vishwas Joshi - Tech Lead, Senior Staff Engineer - ML Platform - Pinterest
- Acquire insights on operationalizing, optimizing, and efficiently scaling deep learning model training.
- Learn from case studies on managing ML platforms with web-scale data.
- Understand how modern ML computing frameworks like Ray and PyTorch can be utilized to create impactful ML products.
- Dive into deep technical discussions regarding ML/AI and infrastructure.
-
2:50
Afternoon Networking Break
-
3:10
AI-Powered Predictive Maintenance for Industry 4.0
- Explore the role of AI in predictive maintenance for Industry 4.0.
- Learn how AI can optimize maintenance schedules and prevent costly downtime.
- Discover real-world use cases and benefits of AI-powered predictive maintenance.
-
3:40
Level Up: How Gaming is Winning with AI
- Achieving the Unimaginable: See how AI enables the creation of thousands of real college football players in College Football 25, enhancing player experiences at scale.
- Breaking Barriers with GPTs: Gain actionable strategies to expand into new markets and engage diverse audiences, including neurodiverse customers, using advanced generative AI tools.
- Navigating AI Risks: Understand the pitfalls of bias in large AI models and learn best practices to mitigate its impact for more equitable and ethical outcomes.
Speakers:
- Faith McGee, Sr. AI/Analytics Research Manager, Electronic Arts
- Natashia Tjandra, Research Director, Electronic Arts
-
4:10
Images at Instacart: Generation, Visual Intelligence, VLMs, and More
Prithvi Srinivasan - Machine Learning Lead - Instacart
To follow ...
-
4:40
The Rise of AI-Powered Customer Experience: Personalization, Automation, and Beyond
- Explore the transformative impact of AI on customer experience (CX).
- Learn how to leverage AI to personalize customer interactions and automate CX processes.
- Discover real-world examples of AI-powered CX solutions driving business success.
-
5:00
Closing Remarks
Lisa M. Lum - Founder & CEO - Friends of the Metaverse
-
5:00
Networking Reception
-
6:00
End of Summit