LAMARL: LLM-Aided Multi-Agent Reinforcement Learning for Cooperative Policy Generation

Guobin Zhu1,2 Rui Zhou1 Wenkang Ji2 Shiyu Zhao2

1 School of Automation Science and Electrical Engineering, Beihang University, Beijing, China

2 WINDY Lab, Department of Artificial Intelligence at Westlake University, Hangzhou, China

Abstract

Although Multi-Agent Reinforcement Learning (MARL) is effective for complex multi-robot tasks, it suffers from low sample efficiency and requires iterative manual reward tuning. Large Language Models (LLMs) have shown promise in single-robot settings, but their application to multi-robot systems remains largely unexplored. This paper introduces LLM-Aided MARL (LAMARL), a novel approach that integrates MARL with LLMs to significantly improve sample efficiency without requiring manual design. LAMARL consists of two modules: the first leverages LLMs to fully automate the generation of prior policies and reward functions, while the second is MARL, which uses the generated functions to guide robot policy training effectively. On a shape-assembly benchmark, both simulation and real-world experiments demonstrate the unique advantages of LAMARL. Ablation studies show that the prior policy improves sample efficiency by an average of 185.9% and enhances task completion, while structured prompts based on Chain-of-Thought (CoT) reasoning and basic APIs improve LLM output success rates by 28.5%-67.5%.
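To make the two-module design concrete, the sketch below illustrates the pipeline described in the abstract. It is not the authors' implementation: `query_llm` is a hypothetical LLM-API wrapper (stubbed here with a canned response so the sketch runs end to end), the generated functions are plain Python callables, and the environment is assumed to follow a gym-style multi-agent interface.

```python
# Illustrative sketch of the LAMARL pipeline, not the authors' code.
from typing import Callable, Tuple

def query_llm(prompt: str) -> str:
    # Stand-in for a real LLM call. A real system would send a CoT-structured
    # prompt listing the task and the basic robot APIs; the canned string
    # below mimics the kind of code such a prompt is meant to elicit.
    return (
        "def prior_policy(obs):\n"
        "    # move toward the assigned target position\n"
        "    return [t - p for p, t in zip(obs['pos'], obs['target'])]\n"
        "def reward_fn(obs, action):\n"
        "    # negative squared distance to target as a shaped reward\n"
        "    return -sum((t - p) ** 2 for p, t in zip(obs['pos'], obs['target']))\n"
    )

def generate_functions(task: str, apis: str) -> Tuple[Callable, Callable]:
    """Module 1: the LLM generates the prior policy and reward function."""
    code = query_llm(f"Task: {task}\nAvailable APIs:\n{apis}\nThink step by step.")
    ns: dict = {}
    exec(code, ns)  # in practice, validate/sandbox the generated code first
    return ns["prior_policy"], ns["reward_fn"]

def train_marl(env, prior_policy: Callable, reward_fn: Callable, episodes: int = 1000):
    """Module 2: a MARL loop guided by the generated functions, e.g. using
    the prior policy for early exploration and reward_fn as the shaped reward."""
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            actions = {a: prior_policy(obs[a]) for a in env.agents}
            next_obs, _, done, _ = env.step(actions)
            rewards = {a: reward_fn(obs[a], actions[a]) for a in env.agents}
            # ... gradient update of each agent's policy using `rewards` ...
            obs = next_obs
```

In this reading, the LLM is queried once before training, so the generated prior policy and shaped reward add no per-step inference cost to the MARL loop.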

Video

Methodology

[Figure: overview of the LAMARL methodology]

Results

To be completed.