From Arduinos to LLMs: Exploring the Spectrum of ML // Soham Chatterjee // MLOps Podcast #162

MLOps.community - A podcast by Demetrios Brinkmann

MLOps Coffee Sessions #162 with Soham Chatterjee, From LLMs to TinyML: The Dynamic Spectrum of MLOps, co-hosted by Abi Aryan.

// Abstract
Soham explores the spectrum of MLOps from large language models (LLMs) to TinyML. He highlights the difficulties of scaling machine learning models and cautions against relying exclusively on OpenAI's API due to its limitations. Soham is particularly interested in the effective deployment of models and the integration of IoT with deep learning. He offers insights into the challenges and strategies involved in deploying models in constrained environments, such as remote areas with limited power, and on small devices like the Arduino Nano.

// Bio
Soham leads the machine learning team at Sleek, where he builds tools for automated accounting and back-office management. As an electrical engineer, Soham is passionate about the intersection of machine learning and electronics, specifically TinyML/edge computing. He has several courses on MLOps and TinyMLOps available on Udacity and LinkedIn, with more in the works.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links

--------------- ✌️ Connect With Us ✌️ ---------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Abi on LinkedIn: https://www.linkedin.com/in/goabiaryan/
Connect with Soham on LinkedIn: https://www.linkedin.com/in/soham-chatterjee

Timestamps:
[00:00] Soham's preferred coffee
[01:49] Takeaways
[05:33] Please share this episode with
[07:02] Soham's background
[09:00] From electrical engineering to machine learning
[10:40] Deep learning, edge computing, and quantum computing
[11:34] TinyML
[13:29] Favorite area in the TinyML chain
[14:03] Applications explored
[16:56] Operational challenges transformation
[18:49] Building with large language models
[25:44] Most optimal model
[26:33] LLMs path
[29:19] Prompt engineering
[33:17] Migrating infrastructures to a new product
[37:20] Your success where others failed
[38:26] API accessibility
[39:02] Reality about LLMs
[40:39] Compression angle adds to the bias
[43:28] Wrap up
