
🎉 ICRA 2024 ACCEPTED! 🎊

LfMG: UnaRL for Autonomous Driving

📃 Learning from Multimodal Guidance: Uncertainty-aware Reinforcement Learning for Autonomous Driving with Multimodal Digital Driver Guidance

💫 This work proposes a novel learning from multimodal guidance (LfMG) approach that accounts for the multimodality and intrinsic uncertainty of human behavior within a human-in-the-loop RL framework.

🚗 LfMG aims to learn a robust, uncertainty-aware autonomous driving policy from the multimodal behaviors of multiple humans intervening concurrently (a conceptual sketch follows below).

🔧 Implemented in the SMARTS simulator on Ubuntu 20.04 with PyTorch.
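As a rough conceptual sketch of the idea (not the repository's implementation; every name below is hypothetical), guidance from several learned digital drivers can be down-weighted where they disagree, so the RL objective trusts human guidance less under high uncertainty:

```python
# Illustrative only: uncertainty-aware weighting of multimodal human guidance.
# All names are hypothetical; the repository's actual networks, losses, and
# uncertainty estimator may differ.
import torch

def guidance_weight(human_actions: torch.Tensor) -> torch.Tensor:
    """Down-weight guidance where the N digital drivers disagree.

    human_actions: (n_humans, batch, action_dim) actions proposed by the
    digital drivers for the same batch of states.
    """
    uncertainty = human_actions.var(dim=0).mean(dim=-1)  # disagreement per state
    return torch.exp(-uncertainty)                       # weight in (0, 1]

def total_loss(rl_loss: torch.Tensor,
               policy_actions: torch.Tensor,
               human_actions: torch.Tensor) -> torch.Tensor:
    mean_guidance = human_actions.mean(dim=0)            # consensus action
    w = guidance_weight(human_actions)                   # (batch,)
    imitation = ((policy_actions - mean_guidance) ** 2).mean(dim=-1)
    return rl_loss + (w * imitation).mean()
```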

General Info

Video: Unprotected Left Turn (Left_Turn_Edited.mp4)

Video: Ramp Merge (Ramp_Merge_Edited.mp4)

Framework

User Guide

Clone the repository.

cd to your workspace and clone the repo:

git clone https://github.com/OscarHuangWind/Learning-from-Intervention.git

Create a new virtual environment with dependencies.

cd <path/to/your/workspace>/Learning-from-Intervention
conda env create -f environment.yml

You can modify the virtual environment name and dependencies in the environment.yml file.

Activate the virtual environment.

conda activate UnaRL

Install PyTorch

Select the correct version based on your CUDA version and device (CPU/GPU):

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
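After installation, a quick sanity check with standard PyTorch calls confirms that the build matches your hardware:

```python
# Verify the installed PyTorch build and GPU visibility.
import torch

print(torch.__version__)          # expect 1.12.1+cu113 for the command above
print(torch.cuda.is_available())  # True if the CUDA build matches your driver
```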

Install SMARTS.

# Download SMARTS
git clone https://github.com/huawei-noah/SMARTS.git
cd <path/to/SMARTS>

# Install the system requirements.
bash utils/setup/install_deps.sh

# Install smarts.
pip install -e '.[camera_obs,test,train]'

# Install extra dependencies.
pip install -e '.[extras]'
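A plain import is enough to confirm the installation succeeded (standard Python, no SMARTS-specific assumptions):

```python
# If this runs without error, SMARTS is importable from the active environment.
import smarts
print("SMARTS installed at:", smarts.__file__)
```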

Build the scenario (only needed when adding new scenarios)

(❗ Not needed for the LeftTurn, RampMerge, and T-Intersection scenarios ❗)

e.g., to build the Roundout scenario:

cd <path/to/Learning-from-Intervention>
scl scenario build --clean Scenario/Roundout/

DRL Training

python main.py

Visualization

Type the following command in the terminal:

scl envision start

Then open http://localhost:8081/ in your browser.

Human Demonstration (collect human driver data through a Logitech G29)

python demonstration.py
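For reference, a steering wheel is usually exposed to the OS as a joystick; a minimal polling sketch with pygame (the script's actual input handling and axis mapping may differ) looks like this:

```python
# Hypothetical sketch of polling a Logitech G29 via pygame's joystick API.
# Axis indices vary by platform/driver; axis 0 is commonly the steering wheel.
import pygame

pygame.init()
pygame.joystick.init()
wheel = pygame.joystick.Joystick(0)  # first connected controller

for _ in range(100):
    pygame.event.pump()              # refresh the internal event/axis state
    steering = wheel.get_axis(0)     # value in [-1, 1]
    pedal = wheel.get_axis(1)
    print(f"steering={steering:+.2f} pedal={pedal:+.2f}")
```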

Learn N-human Digital Drivers

python N-human_policy_learning.py
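Conceptually, each digital driver is a separate policy fit to one human's demonstrations; a minimal behavior-cloning sketch in PyTorch (architecture and dimensions are hypothetical, the script's actual setup may differ):

```python
# Hypothetical behavior-cloning update for one human's demonstration data.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def bc_step(states: torch.Tensor, actions: torch.Tensor) -> float:
    """One supervised step: regress this human's actions from states."""
    loss = nn.functional.mse_loss(policy(states), actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeating this over each human's dataset yields the N digital drivers.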

UnaRL Training

Set the mode in the main.py file to "UnaRL", and run:

python main.py
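The switch might look like the line below (variable name hypothetical; check main.py for the actual one):

```python
# In main.py -- hypothetical illustration of the training-mode switch.
mode = "UnaRL"  # plain DRL training otherwise
```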

Parameters

Feel free to play with the parameters in config.yaml.
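For reference, such a YAML config is typically loaded with PyYAML (the key names in the comment are hypothetical):

```python
# Load hyperparameters from config.yaml with PyYAML.
import yaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)  # plain dict of parameters

print(cfg)                   # e.g., cfg.get("lr"), cfg.get("batch_size")
```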