-
8:15
Registration & Light Breakfast
-
9:00
Welcome and Opening Remarks
-
9:10
Opening Keynote: The Future of AI - Exploring Key Drivers for Transformative Growth in 2025 and Beyond
- Explore the transformative growth of AI in 2025 and beyond.
- Discuss key drivers shaping the future of AI.
- Discover emerging trends and their potential impact across industries.
-
9:40
Mastering Scalable AI Infrastructure for Efficient Deployment Across Cloud and Edge
-
10:00
Panel: From Lab to Launch: Accelerating ML Innovation from Research to Real-World Impact
- Discover how to bridge the gap between cutting-edge AI research and real-world applications.
- Learn practical strategies for implementing and scaling machine learning models.
- Hear success stories and lessons learned from industry experts.
-
10:40
Coffee & Networking Break
-
11:00
Afternoon Keynote: Redefining Business Potential for Tomorrow's Success with Gen AI
- Explore the transformative potential of generative AI across industries.
- Learn how businesses can leverage generative AI to drive innovation and growth.
- Discover practical use cases and success stories of generative AI in action.
-
11:30
Memory Optimizations in Machine Learning
Tejas Chopra - Senior Software Engineer - Netflix
As Machine Learning continues to forge its way into diverse industries and applications, optimizing computational resources, particularly memory, has become a critical aspect of effective model deployment. This session, "Memory Optimizations in Machine Learning," offers an in-depth look at the specific memory requirements of Machine Learning tasks, including Large Language Models (LLMs), and at cutting-edge strategies for minimizing memory consumption efficiently.
We'll begin by demystifying the memory footprint of typical Machine Learning data structures and algorithms, elucidating the nuances of memory allocation and deallocation during model training phases. The talk will then focus on memory-saving techniques such as data quantization, model pruning, and efficient mini-batch selection, which conserve memory resources without significant degradation in model performance.
A special emphasis will be placed on the memory footprint of LLMs during inferencing. LLMs, known for their immense size and complexity, pose unique challenges in terms of memory consumption during deployment. We will explore the factors contributing to the memory footprint of LLMs, such as model architecture, input sequence length, and vocabulary size. Additionally, we will discuss practical strategies to optimize memory usage during LLM inferencing, including techniques like model distillation, dynamic memory allocation, and efficient caching mechanisms.
By the end of this session, attendees will have a comprehensive understanding of memory optimization techniques for Machine Learning, with a particular focus on the challenges and solutions related to LLM inferencing.
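For a rough sense of the numbers involved, below is a back-of-the-envelope sketch of inference-time memory for a decoder-only LLM, split into model weights and KV cache. The model dimensions, dtypes, and sequence length are illustrative assumptions (roughly a 7B-parameter, Llama-style configuration), not figures from the talk.

```python
# Back-of-the-envelope estimate of LLM inference memory: weights + KV cache.
# All dimensions below are illustrative (roughly a 7B-parameter, Llama-style
# model), not figures from the session; adjust them for the model you deploy.

def inference_memory_gb(
    n_params: float = 7e9,        # total parameters
    n_layers: int = 32,           # transformer layers
    n_kv_heads: int = 32,         # key/value attention heads
    head_dim: int = 128,          # dimension per head
    seq_len: int = 4096,          # prompt + generated tokens held in the cache
    batch_size: int = 1,
    bytes_per_weight: float = 2,  # fp16/bf16 weights; 1 for int8, 0.5 for int4
    bytes_per_kv: float = 2,      # fp16 KV cache
) -> dict:
    weights = n_params * bytes_per_weight
    # KV cache: 2 tensors (K and V) per layer, each [batch, kv_heads, seq, head_dim]
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * seq_len * batch_size * bytes_per_kv
    gib = 1024 ** 3
    return {
        "weights_gb": weights / gib,
        "kv_cache_gb": kv_cache / gib,
        "total_gb": (weights + kv_cache) / gib,
    }

print(inference_memory_gb())                      # ~13 GiB weights + 2 GiB KV cache
print(inference_memory_gb(bytes_per_weight=0.5))  # int4 quantization cuts the weight term 4x
```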
-
12:00
Graph Neural Networks for Scalable Decision-Making
- Understand the power of graph neural networks for complex decision-making.
- Learn how to leverage graph neural networks to analyze relationships and patterns in data (a minimal sketch follows below).
- Discover practical applications of graph neural networks across industries.
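To make the idea concrete, here is a minimal graph-convolution layer in plain PyTorch: each node aggregates its neighbors' features through a normalized adjacency matrix and then applies a shared linear transform. This is a generic sketch with an invented toy graph, not material from the session.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: normalized neighborhood aggregation + linear map."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: dense [N, N] adjacency; add self-loops and symmetrically normalize
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(self.linear(a_norm @ x))

# Toy graph: 4 nodes with 3 features each, edges given as a dense adjacency matrix.
x = torch.randn(4, 3)
adj = torch.tensor([[0, 1, 0, 0],
                    [1, 0, 1, 1],
                    [0, 1, 0, 0],
                    [0, 1, 0, 0]], dtype=torch.float)
layer = SimpleGCNLayer(3, 8)
print(layer(x, adj).shape)  # torch.Size([4, 8]) -- one embedding per node
```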
-
12:10
Launching Gen AI Agents in Production: Best Practices for Enterprise Use Cases
Sai Kumar Arava - Machine Learning Manager, Gen AI / ML Applications - Adobe
To follow ...
-
12:30
Lunch
-
1:30
Images at Instacart: Generation, Visual Intelligence, VLMs, and More
Prithvi Srinivasan - Machine Learning Lead - Instacart
To follow ...
-
2:00
AI-Driven Microbiome Insights for Personalized Care and Longevity
Elsa Jungman - Founder & CEO - HelloBiome
To follow ...
-
2:30
Scaling Deep Learning-Based Recommender Model Training
Saurabh Vishwas Joshi - Tech Lead, Senior Staff Engineer - ML Platform - Pinterest
- Gain insights into operationalizing, optimizing, and efficiently scaling deep learning model training.
- Learn from case studies on managing ML platforms with web-scale data.
- Understand how modern ML computing frameworks like Ray and PyTorch can be utilized to create impactful ML products (see the sketch below).
- Dive into deep technical discussions regarding ML/AI and infrastructure.
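As a small taste of those frameworks, here is a minimal data-parallel training sketch using Ray Train with PyTorch (Ray 2.x-style API). The tiny model and synthetic data are placeholders standing in for a real recommender, not Pinterest's stack.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

import ray.train.torch
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer


def train_loop_per_worker(config: dict):
    # Toy scoring model standing in for a real deep-learning recommender.
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
    model = ray.train.torch.prepare_model(model)  # wraps in DDP, moves to device

    dataset = TensorDataset(torch.randn(4096, 32), torch.rand(4096, 1))
    loader = DataLoader(dataset, batch_size=config["batch_size"], shuffle=True)
    loader = ray.train.torch.prepare_data_loader(loader)  # adds a distributed sampler

    optim = torch.optim.Adam(model.parameters(), lr=config["lr"])
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(config["epochs"]):
        for features, labels in loader:
            optim.zero_grad()
            loss = loss_fn(model(features), labels)
            loss.backward()
            optim.step()


trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"batch_size": 256, "lr": 1e-3, "epochs": 2},
    scaling_config=ScalingConfig(num_workers=4, use_gpu=False),  # scale out by raising num_workers
)
trainer.fit()
```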
-
3:00
AI Agents: Tackling Complex Domain-Specific Challenges with Advanced Reasoning
Sungae Park - Head of Design & Research for Episource - Optum
Still think AI lacks the human touch? Discover how GPT-5-powered AI agents transcend the limitations of generic AI by integrating human-like perception, advanced reasoning, and strategic planning. Explore real-world examples of agentic flow as they tackle complex domain-specific challenges, deliver human-like decisions, and rebuild trust—addressing skepticism sparked by the shortcomings of earlier generative AI models.
- Witness a live demo of GPT-5 LLMs creating AI agents designed to tackle complex, domain-specific challenges.
- Learn advanced reasoning techniques, such as abduction, to derive the best explanations for phenomena using hypotheses from incomplete data (a toy sketch follows below).
- Discover UX strategies rooted in psychology that embed anthropomorphic traits in AI, enhancing trust and reliability.
- Explore real-world examples of AI agents revolutionizing workflows and transforming the future of complex fields such as healthcare, fintech, and more.
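As a toy illustration of abduction (inference to the best explanation), the sketch below ranks candidate hypotheses by prior plausibility times how well each one explains the observed evidence. The hypotheses and probabilities are invented for illustration and are not taken from the session.

```python
# Abduction as "inference to the best explanation": rank hypotheses by
# P(H) * P(evidence | H). All numbers below are made up for illustration
# (e.g., explaining a denied claim with a missing procedure code).
hypotheses = {
    "missing_documentation": {"prior": 0.50, "evidence_likelihood": 0.70},
    "coding_error":          {"prior": 0.30, "evidence_likelihood": 0.60},
    "eligibility_lapse":     {"prior": 0.20, "evidence_likelihood": 0.15},
}

scores = {h: v["prior"] * v["evidence_likelihood"] for h, v in hypotheses.items()}
total = sum(scores.values())
ranked = sorted(((s / total, h) for h, s in scores.items()), reverse=True)

for posterior, hypothesis in ranked:
    print(f"{hypothesis:22s} posterior ~ {posterior:.2f}")
print("best explanation:", ranked[0][1])
```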
-
3:20
Afternoon Networking Break
-
3:40
Fireside Chat: Beyond Human Reflexes: AI-Driven Real-Time Decisions
- Explore the transformative potential of AI for real-time decision making in various industries.
- Discuss the challenges and opportunities of implementing AI solutions for real-time applications.
- Learn how AI can enhance human capabilities for making rapid, informed decisions in dynamic environments.
- Examine the crucial role of human-AI interaction in ensuring responsible and effective real-time decision making.
- Discover real-world examples of AI-powered real-time decision-making systems across industries.
-
4:10
Closing Keynote: AI-Powered Predictive Maintenance for Industry 4.0
- Explore the role of AI in predictive maintenance for Industry 4.0.
- Learn how AI can optimize maintenance schedules and prevent costly downtime.
- Discover real-world use cases and benefits of AI-powered predictive maintenance.
-
4:50
Closing Remarks
-
5:00
Networking Reception
-
6:00
End of Day 1
-
8:30
Registration & Light Breakfast
-
9:30
Welcome & Opening Remarks
-
9:40
Opening Keynote: Quantum Computing and AI: Unlocking the Next Frontier in Computational Power
- Dive deep into the intersection of quantum computing and AI.
- Learn how quantum computing can accelerate AI development and unlock new possibilities.
- Discover the potential impact of quantum computing on various industries.
-
10:10
Level Up: How Gaming is Winning with AI
- Achieving the Unimaginable: See how AI enables the creation of thousands of real college football players in College Football 25, enhancing player experiences at scale.
- Breaking Barriers with GPTs: Gain actionable strategies to expand into new markets and engage diverse audiences, including neurodiverse customers, using advanced generative AI tools.
- Navigating AI Risks: Understand the pitfalls of bias in large AI models and learn best practices to mitigate its impact for more equitable and ethical outcomes.
Speakers:
- Faith McGee, Sr. AI/Analytics Research Manager, Electronic Arts
- Natashia Tjandra, Research Director, Electronic Arts
-
10:40
Securing AI Collaboration and Unlocking the Benefits of Federated Learning 2.0
- Explore the latest advancements in federated learning.
- Learn how federated learning enables secure and collaborative AI development (see the minimal sketch below).
- Discover practical applications of federated learning across industries.
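For a concrete sense of the mechanics, here is a minimal federated-averaging (FedAvg) sketch in NumPy: each client runs a few local SGD steps on its private data and only model weights are sent back for averaging. It is illustrative only and omits the secure aggregation, differential privacy, and non-IID handling that real federated systems require.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=20):
    """A few local steps of linear-regression SGD on one client's private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Each client holds its own data; only the weights ever leave the device.
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for _ in range(5):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

global_w = np.zeros(3)
for _ in range(10):  # communication rounds
    client_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(client_weights, axis=0)  # the FedAvg aggregation step

print("learned weights:", np.round(global_w, 2))  # approaches [ 2. -1.  0.5]
```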
-
11:10
Coffee & Networking Break
-
11:40
Panel: Embedding Ethics in AI Development – Best Practices for Fair and Accountable Systems
- Discuss the ethical considerations surrounding AI development and deployment.
- Learn about best practices for building fair, transparent, and accountable AI systems.
- Explore the role of governance and regulation in promoting ethical AI.
Moderator: Lisa M. Lum, Director, Board of Directors, Cal Alumni Association | UC Berkeley
Speakers:
- Sayan Maity, Principal ML Engineer, Disney Streaming
- Venkatesh Palani, Senior Director and Head of Engineering, eBay
-
12:20
Making Smaller LLMs Punch Above Their Weight - Lessons in Post-Training and Fine-Tuning
LLMs have started providing real value in enterprise use cases, and deploying smaller LLMs in production is attractive because they are less resource-hungry. Smaller LLMs, though efficient, often struggle to match the performance of their larger counterparts. In this talk, we will discuss techniques that help bridge this gap, including knowledge distillation and post-training alignment methods such as RLHF and DPO. The speakers will share success stories from open-source and enterprise applications.
Speakers:
- Sandeep Jha, Principal Staff Technical Program Manager, LinkedIn
- Aman Gupta, Sr. Staff Engineer - Applied AI Research, LinkedIn
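As a concrete example of one technique mentioned in the abstract above, here is a minimal knowledge-distillation loss in PyTorch: the student is trained against a temperature-softened teacher distribution alongside the usual cross-entropy on ground-truth labels. This is a generic sketch with toy tensors, not the speakers' recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    # Soft targets: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)  # rescale so gradients match the hard-label term
    # Hard targets: ordinary cross-entropy against the labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: batch of 4 examples over a 10-token vocabulary.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(loss.item())
```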
-
12:50
Lunch
-
1:50
Enhancing Predictive Maintenance through Robust Model Monitoring in Industry 4.0
Haonan Wang - Principal Machine Learning Engineer - OpenDrive
- See how model monitoring boosts predictive maintenance and reduces downtime.
- Learn to monitor models for consistent, accurate performance (see the drift-check sketch below).
- Discover how monitoring supports proactive, disruption-free maintenance.
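To make the monitoring step concrete, here is a minimal input-drift check using a two-sample Kolmogorov-Smirnov test from SciPy: a training-time reference window is compared against a recent production window and drifting features are flagged. The sensor features, data, and alert threshold are illustrative assumptions, not details from the talk.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Reference window captured at training time vs. a recent production window.
reference = {"vibration": rng.normal(0.0, 1.0, 5000),
             "temperature": rng.normal(60.0, 5.0, 5000)}
production = {"vibration": rng.normal(0.4, 1.2, 5000),     # drifted sensor
              "temperature": rng.normal(60.0, 5.0, 5000)}  # stable sensor

ALERT_P_VALUE = 0.01  # illustrative alert threshold

for feature in reference:
    stat, p_value = ks_2samp(reference[feature], production[feature])
    status = "DRIFT" if p_value < ALERT_P_VALUE else "ok"
    print(f"{feature:12s} KS={stat:.3f} p={p_value:.4f} -> {status}")
```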
-
2:45
AI as a Strategic Asset: How Enterprises Can Build Competitive Advantage
Dr. Astha Purohit - Director, Product (Tech) Ops - Walmart
- Explore how businesses can leverage AI as a strategic asset to gain a competitive edge.
- Learn about key strategies for building and implementing successful AI initiatives.
- Discover real-world examples of companies leveraging AI for strategic advantage.
-
2:40
Closing Remarks
-
3:00
End of Event