In episode 6 of the Cache-it podcast, Khawaja invites Prasad Chalasani, Co-Founder of Langroid, to dive into the company’s unique approach to Large Language Models (LLMs). Prasad has a strong background in computer science and machine learning, and his frustration with existing LLM frameworks led him to co-create Langroid.
Prasad kicks off the conversation by highlighting the impressive capabilities LLMs provide, such as text generation, reasoning, planning, and in-context learning. However, he acknowledges their limitations, including hallucinations, context-length restrictions, and token costs, which can make solving problems more complex.
Langroid is presented as a principled programming framework designed to address these challenges. It revolves around the concept of “agents” as first-class citizens. Agents encapsulate various capabilities, which can include LLMs and vector databases, making them versatile components of the system.
Prasad illustrates Langroid’s approach with an example: automating the creation of a Dockerfile for a code repository. He explains how multiple agents work together, with one agent obtaining information about the codebase from another agent and then guiding the process of Dockerfile creation.
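The two-agent pattern Prasad describes can be sketched in plain Python. This is a hypothetical illustration of the idea, not Langroid’s actual API: one agent answers questions about a repository, and a second agent queries it to assemble a Dockerfile. Real Langroid agents would delegate these steps to an LLM rather than the canned lookup used here.

```python
class CodebaseAgent:
    """Answers questions about a repository (stand-in for an LLM-backed agent)."""

    def __init__(self, repo_facts: dict[str, str]):
        self.repo_facts = repo_facts

    def ask(self, question: str) -> str:
        # An LLM-backed agent would inspect the repo; here we use canned facts.
        return self.repo_facts.get(question, "unknown")


class DockerfileAgent:
    """Builds a Dockerfile by querying another agent for the facts it needs."""

    def __init__(self, codebase_agent: CodebaseAgent):
        self.codebase = codebase_agent

    def run(self) -> str:
        # Gather the needed facts from the other agent, then compose the file.
        language = self.codebase.ask("language")
        entrypoint = self.codebase.ask("entrypoint")
        return "\n".join([
            f"FROM {language}:latest",
            "COPY . /app",
            "WORKDIR /app",
            f'CMD ["{language}", "{entrypoint}"]',
        ])


coder = CodebaseAgent({"language": "python", "entrypoint": "main.py"})
dockerfile = DockerfileAgent(coder).run()
print(dockerfile)
```

The point of the pattern is the division of labor: each agent owns one capability, and composing them yields the workflow, which is how Langroid treats agents as first-class building blocks.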
Khawaja asks about potential use cases for Langroid, including collaborative development and complex workflows. Prasad envisions scenarios where teams collaborate through a shared library of agents, contributing and reusing them to compose powerful workflows.
Make sure to subscribe to the Cache-it channel on YouTube so you never miss an episode.
About Prasad Chalasani
Prasad Chalasani is co-founder of Langroid, an open-source Python framework that simplifies building LLM applications using multi-agent programming. He has a BTech (CS) from IIT Kharagpur and a PhD (ML) from Carnegie Mellon University. He has over 25 years of experience, has worked at Los Alamos, hedge funds, and Goldman Sachs, and has led machine learning teams at Yahoo and MediaMath.