Evolution of LLM agents
What are Large Language Models (LLMs)?
Large Language Models (LLMs) represent a significant advancement in artificial intelligence, specifically within the domain of natural language processing (NLP). These models are trained on extensive corpora of text data spanning a broad spectrum of internet-sourced documents, and through this training they acquire the capability to comprehend, generate, and interact using human language. Models such as GPT are trained to predict the next word in a sequence based on the preceding context. They demonstrate remarkable proficiency across a range of linguistic tasks, including composing coherent text, generating code, and answering questions. The predictive nature of LLMs, grounded in statistical probabilities derived from their training data, enables them to apply learned patterns to new, unseen inputs, facilitating their application across diverse NLP tasks.
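To make this predictive mechanism concrete, the sketch below inspects a model’s probability distribution over the next token given a prompt. It is purely illustrative: it assumes the open-source Hugging Face transformers library and a GPT-2 checkpoint, neither of which is specific to the models discussed here.

```python
# Illustrative only: next-token prediction with a small public checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Large Language Models are trained to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Probability distribution over the vocabulary for the *next* token,
# given the preceding context.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: p={prob.item():.3f}")
```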
What are LLM-based Agents?
LLM-based agents are autonomous systems that leverage the capabilities of Large Language Models to perform a specific task, or a series of tasks, with minimal human intervention. These agents can understand and generate natural language, make decisions, solve problems, and execute actions based on the information they process. By integrating external tools and data, LLM-based agents can extend their functionality beyond the model’s pre-trained knowledge, enabling them to perform more complex and dynamic tasks. This includes everything from conducting web searches and interfacing with APIs to managing workflows and even engaging in multi-agent conversations.
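As a concrete, if simplified, illustration of this tool-use pattern, the sketch below shows an agent loop in which the model either requests an external tool call or returns a final answer, with each tool’s output fed back into the model’s context. The `complete` and `web_search` helpers are hypothetical placeholders rather than any particular framework’s API.

```python
# Illustrative agent loop: the LLM chooses between calling a tool and answering.
import json

def complete(prompt: str) -> str:
    """Placeholder for a call to an LLM completion endpoint."""
    raise NotImplementedError

def web_search(query: str) -> str:
    """Placeholder for an external search API."""
    raise NotImplementedError

TOOLS = {"web_search": web_search}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # Ask the model for its next action as JSON: a tool call or a final answer.
        response = complete(
            "\n".join(history)
            + '\nRespond with JSON: {"tool": ..., "input": ...} or {"answer": ...}'
        )
        action = json.loads(response)
        if "answer" in action:
            return action["answer"]
        # Execute the requested tool and feed its output back into the context.
        observation = TOOLS[action["tool"]](action["input"])
        history.append(f"Observation: {observation}")
    return "No answer produced within the step budget."
```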
Research Mission
Our research mission is to explore the frontiers of next-generation LLM applications enabled by automated workflows that are informed by multi-agent conversations and require minimal human effort or intervention. We aim to explore the orchestration, automation, and optimization of complex LLM workflows, focusing on how the inputs and outputs of each step in the workflow can be leveraged to maximize the performance of the underlying models and overcome their inherent limitations. This involves investigating innovative methods to integrate external data, tools, and resources seamlessly into LLM-based agents’ decision-making processes, thereby enhancing their ability to solve problems, generate insights, and perform tasks more efficiently and effectively.
Current Research
Our current research is centred on the conceptualization and development of agents as automated entities capable of handling a variety of input sources to produce one or more outputs. These agents are defined by a set of variables that trigger the start of a workflow, which then executes a series of steps. These steps can include executing sequential or parallel prompts based on the input variables, calling external APIs to gather inputs, transforming data, and informing other prompts within the workflow, among other actions. By designing these agents to be both flexible and intelligent, we aim to create systems that can autonomously navigate complex workflows, making decisions and taking actions informed by a deep understanding of the task at hand and the context in which they operate. This research not only pushes the boundaries of what LLM-based agents can achieve but also opens new opportunities for their application in fields ranging from data analysis and content creation to complex problem-solving and beyond.
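As an illustration of this abstraction, the sketch below defines a workflow whose trigger variables seed a shared context and whose steps (prompt executions, API calls, data transformations) each write their output back into that context for later steps to consume. The class and helper names are hypothetical and do not describe a published interface.

```python
# Illustrative workflow abstraction: input variables trigger a sequence of steps,
# and each step's output becomes available to the steps that follow it.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Step:
    name: str
    run: Callable[[Dict[str, str]], str]  # receives the accumulated context, returns this step's output

@dataclass
class Workflow:
    steps: List[Step]

    def execute(self, inputs: Dict[str, str]) -> Dict[str, str]:
        context = dict(inputs)  # the trigger variables seed the shared context
        for step in self.steps:
            # Sequential execution for simplicity; steps that do not depend on
            # each other's outputs could equally be dispatched in parallel.
            context[step.name] = step.run(context)
        return context

# Hypothetical usage: an API-call step feeds a prompt step.
# `fetch_report` and `summarize_with_llm` are illustrative placeholders.
# wf = Workflow(steps=[
#     Step("raw_data", run=lambda ctx: fetch_report(ctx["ticker"])),
#     Step("summary", run=lambda ctx: summarize_with_llm(ctx["raw_data"])),
# ])
# outputs = wf.execute({"ticker": "MSFT"})
```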