Semantic Memory for an Embodied Agent
GitHub · ROS2 · LangGraph · GPT-4 Vision · LLaMA · Vector Store · NAV2 · Gazebo
TLDR
A ROS2 embodied AI agent with semantic mapping and long-term memory. It uses GPT-4 Vision for scene understanding and LLaMA 70B for language-driven navigation.
Detailed
Tech Stack:
ROS2, LangGraph, GPT-4 Vision, LLaMA 70B, Vector Store, NAV2, Gazebo
Goal:
Build an embodied AI agent that maintains a semantic map of its environment and uses long-term memory to inform navigation.
What I did:
- Built a knowledge graph with LangGraph, where nodes represent landmarks and obstacles and edges encode their relationships (see the graph sketch below)
- Integrated GPT-4 Vision for scene understanding from the camera feed (see the vision sketch below)
- Used LLaMA 70B for text interaction and reasoning over user commands (see the grounding sketch below)
- Stored observations in a vector store for memory retrieval (see the memory sketch below)
- Connected the agent to the NAV2 stack for robot control in Gazebo (see the goal-sending sketch below)
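A minimal sketch of what the landmark graph might look like. The project builds it with LangGraph; here plain `networkx` stands in for the graph layer, and the helper names (`add_landmark`, `relate`) and example coordinates are illustrative, not the project's API.

```python
# Semantic map as a graph: nodes are landmarks/obstacles, edges are
# relationships. networkx stands in for the project's LangGraph wiring.
import networkx as nx

semantic_map = nx.DiGraph()

def add_landmark(name: str, kind: str, x: float, y: float) -> None:
    """Register a landmark or obstacle with its map coordinates."""
    semantic_map.add_node(name, kind=kind, x=x, y=y)

def relate(a: str, relation: str, b: str) -> None:
    """Record a spatial relationship between two known nodes."""
    semantic_map.add_edge(a, b, relation=relation)

add_landmark("kitchen_table", "landmark", x=2.1, y=-0.4)
add_landmark("chair_3", "obstacle", x=2.4, y=-0.1)
relate("chair_3", "next_to", "kitchen_table")

# Query: everything related to the table, with the relation label.
for src, dst, data in semantic_map.in_edges("kitchen_table", data=True):
    print(f"{src} --{data['relation']}--> {dst}")
```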
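A hedged sketch of the scene-understanding step: encode one camera frame as JPEG and send it to GPT-4 Vision via the OpenAI SDK. The model id, prompt, and `describe_frame` helper are placeholders; the project's actual topic wiring is not shown.

```python
# Describe one ROS2 camera frame with GPT-4 Vision. Assumes the OpenAI
# Python SDK and cv_bridge are available; model name is a placeholder.
import base64

import cv2
from cv_bridge import CvBridge
from openai import OpenAI
from sensor_msgs.msg import Image

bridge = CvBridge()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def describe_frame(msg: Image) -> str:
    """Encode a sensor_msgs/Image as JPEG and ask the model to describe it."""
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    ok, buf = cv2.imencode(".jpg", frame)
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    b64 = base64.b64encode(buf.tobytes()).decode("ascii")
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the project reports GPT-4 Vision
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "List the landmarks and obstacles visible in this scene."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```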
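For the LLaMA 70B reasoning step, one plausible setup is serving the model behind an OpenAI-compatible endpoint (e.g. vLLM) and asking it to ground a command against known landmarks. The `base_url`, model id, and `ground_command` helper are assumptions, not the project's deployment.

```python
# Language-command grounding with LLaMA 70B, assuming an OpenAI-compatible
# serving endpoint. Endpoint URL and model id below are placeholders.
from openai import OpenAI

llm = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

def ground_command(command: str, landmarks: list[str]) -> str:
    """Ask the model which known landmark a user command refers to."""
    prompt = (
        f"Known landmarks: {', '.join(landmarks)}.\n"
        f"User command: {command!r}.\n"
        "Reply with exactly one landmark name from the list."
    )
    response = llm.chat.completions.create(
        model="llama-70b",  # placeholder model id
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    return response.choices[0].message.content.strip()

# e.g. ground_command("go to the table", ["kitchen_table", "chair_3"])
```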
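A minimal in-memory version of the observation memory: embed each observation, store the normalized vector, and recall by cosine similarity. The project uses a vector store; since the embedding model isn't specified, it is represented here by a pluggable callable.

```python
# Minimal in-memory vector store for observation recall. The embedding
# function is pluggable -- the real project may use a dedicated vector
# database and a specific embedding model.
from typing import Callable

import numpy as np

class ObservationMemory:
    def __init__(self, embed: Callable[[str], np.ndarray]):
        self.embed = embed
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def store(self, text: str) -> None:
        """Embed and remember one observation."""
        v = self.embed(text)
        self.vectors.append(v / np.linalg.norm(v))
        self.texts.append(text)

    def recall(self, query: str, k: int = 3) -> list[str]:
        """Return the k stored observations most similar to the query."""
        if not self.texts:
            return []
        q = self.embed(query)
        q = q / np.linalg.norm(q)
        sims = np.stack(self.vectors) @ q  # cosine similarity per entry
        top = np.argsort(sims)[::-1][:k]
        return [self.texts[i] for i in top]
```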
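One common way to hand a goal to NAV2 from Python is the `nav2_simple_commander` helper; the sketch below assumes that route, with coordinates that would come from the semantic map. The project's actual NAV2 integration may differ.

```python
# Send a pose goal to NAV2 via nav2_simple_commander. A sketch only;
# the project's control wiring in Gazebo may be different.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

def navigate_to(x: float, y: float) -> None:
    """Drive the robot to map coordinates (x, y), blocking until done."""
    rclpy.init()
    navigator = BasicNavigator()
    navigator.waitUntilNav2Active()

    goal = PoseStamped()
    goal.header.frame_id = "map"
    goal.header.stamp = navigator.get_clock().now().to_msg()
    goal.pose.position.x = x
    goal.pose.position.y = y
    goal.pose.orientation.w = 1.0  # face along +x; real code would set yaw

    navigator.goToPose(goal)
    while not navigator.isTaskComplete():
        feedback = navigator.getFeedback()  # e.g. distance remaining
    print("Result:", navigator.getResult())
    rclpy.shutdown()
```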
What was achieved:
The agent builds semantic maps, remembers past experiences, and navigates using visual context and natural-language commands.