VLMs

Agentic Skill Discovery

We propose an LLM-driven framework that enables **robots to autonomously discover useful skills from scratch**. By generating tasks, rewards, and success criteria, the LLM guides reinforcement learning, while a vision-language model verifies outcomes. This allows the robot to build a meaningful skill library without relying on predefined primitives.
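
To make the loop concrete, here is a minimal sketch of the discovery cycle described above: the LLM proposes a task with a reward and a success criterion, reinforcement learning trains a policy for it, and a VLM verifies the outcome before the skill is stored. All names (`propose_task`, `train`, `verify`, `TaskSpec`, `SkillLibrary`) are illustrative placeholders, not the framework's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class TaskSpec:
    """A task proposed by the LLM: description, reward code, success criterion."""
    description: str
    reward_fn: str          # LLM-generated reward specification for the RL trainer
    success_criterion: str  # natural-language condition the VLM checks


@dataclass
class SkillLibrary:
    skills: dict[str, object] = field(default_factory=dict)

    def add(self, task: TaskSpec, policy: object) -> None:
        self.skills[task.description] = policy


def discover_skills(llm, rl_trainer, vlm, env, iterations: int = 10) -> SkillLibrary:
    """Sketch of the agentic skill discovery loop (interfaces are assumed)."""
    library = SkillLibrary()
    for _ in range(iterations):
        # 1. The LLM proposes a new task grounded in the scene and the skills found so far.
        task: TaskSpec = llm.propose_task(env.describe(), list(library.skills))

        # 2. Reinforcement learning optimizes a policy for the LLM-written reward.
        policy = rl_trainer.train(env, task.reward_fn)

        # 3. A vision-language model checks rollout frames against the success criterion.
        frames = env.rollout(policy)
        if vlm.verify(frames, task.success_criterion):
            # 4. Only verified skills enter the growing skill library.
            library.add(task, policy)
    return library
```
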

Details Make a Difference: Object State-Sensitive Neurorobotic Task Planning

We introduce OSSA (Object State-Sensitive Agent), a task-planning agent that uses pre-trained LLMs and VLMs to generate plans sensitive to object states. We compare two methods: a modular approach that combines separate vision and language models, and a monolithic approach that uses a single VLM. On tabletop table-clearing tasks, the monolithic approach outperforms the modular one. We also provide a new multimodal benchmark dataset with object state annotations.
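
The sketch below contrasts the two planning variants in schematic form; the function and method names (`describe_objects`, `generate_plan`) and the prompt wording are assumptions for illustration, not the paper's actual interfaces.

```python
from typing import List


def modular_plan(image, instruction: str, vision_model, llm) -> List[str]:
    """Modular variant: a vision model first extracts object states,
    then an LLM plans over the resulting textual scene description."""
    object_states = vision_model.describe_objects(image)   # e.g. "cup: dirty, bowl: empty"
    prompt = f"Scene: {object_states}\nTask: {instruction}\nPlan step by step."
    return llm.generate_plan(prompt)


def monolithic_plan(image, instruction: str, vlm) -> List[str]:
    """Monolithic variant: a single VLM reasons over the image and the
    instruction directly, so object-state cues stay in the visual input."""
    prompt = f"Task: {instruction}\nProduce a plan that accounts for object states."
    return vlm.generate_plan(image, prompt)
```
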