Add sim2real GIF

TJU-Lu 2025-08-08 18:02:00 +08:00
parent f583e353d6
commit 8852102cc1
3 changed files with 11 additions and 3 deletions


@@ -97,7 +97,7 @@ source devel/setup.bash
rosrun sensor_simulator sensor_simulator_cuda
```
- You can refer to [config.yaml](Simulator/src/config/config.yaml) for modifications of the sensor (e.g., camera and LiDAR parameters) and environment (e.g., maze_type and obstacle density). For generalization, the policy trained in forest (type-5) can be zero-shot transferred to 3D Perlin (type-1).
+ You can refer to [config.yaml](Simulator/src/config/config.yaml) for modifications of the sensor (e.g., camera and LiDAR parameters) and environment (e.g., scenario type and obstacle density).
**3. Start the YOPO Planner**
@@ -196,11 +196,19 @@ python test_yopo_ros.py --use_tensorrt=1
**4. Adapt to Your Platform**
+ You need to change `env: simulation` at the end of `test_yopo_ros.py` to `env: 435` (this affects the unit of the depth image), and point the odometry subscriber at your own topic (in the NWU frame).
- + Configure your depth camera to match the training configuration (the pre-trained weights use a 16:9 resolution and a 90° FOV; for RealSense, you can set the resolution to 480×270).
+ + Configure your depth camera to match the training configuration (the pre-trained weights use a 16:9 resolution and a 90° FOV; for RealSense, you can set the resolution in the launch file to 480×270).
+ In real flight, you may want to use a position controller, as traditional planners do, for compatibility with your own controller: change `plan_from_reference: False` to `True` at the end of `test_yopo_ros.py`. You can test the change in simulation with the position controller: `roslaunch so3_quadrotor_simulator simulator_position_control.launch`
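As a quick offline sanity check for step 4, you can verify the camera geometry and the depth units before flying. The sketch below is not from the repository: the 480×270 resolution and 90° horizontal FOV come from the README, while the millimeter depth scale is the RealSense default and is an assumption about your setup.

```python
import math
import numpy as np

# Training setup from the README: 16:9 resolution, 90 deg horizontal FOV.
WIDTH, HEIGHT = 480, 270
HFOV_DEG = 90.0

# Focal length (pixels) implied by the horizontal FOV: fx = W / (2 * tan(HFOV / 2)).
fx = WIDTH / (2.0 * math.tan(math.radians(HFOV_DEG) / 2.0))

# Vertical FOV that follows from the 16:9 aspect ratio (assuming fy = fx).
vfov_deg = math.degrees(2.0 * math.atan(HEIGHT / (2.0 * fx)))

# RealSense cameras publish 16-bit depth in millimeters by default
# (assumption: default depth scale of 0.001), while the simulator's ground
# truth is in meters -- this unit mismatch is what `env: 435` accounts for.
depth_mm = np.array([[500, 1000], [2500, 0]], dtype=np.uint16)
depth_m = depth_mm.astype(np.float32) * 0.001

print(fx, vfov_deg, depth_m[0, 1])  # fx = 240.0, VFOV ~ 58.7 deg, 1.0 m
```

If the printed focal length and vertical FOV do not match your camera's calibration, the depth images will not match the training distribution even at the correct resolution.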
**5. Generalization**
We use random training scenes, images, and states to enhance generalization. A policy trained with ground-truth depth images can be zero-shot transferred to stereo cameras and unseen scenarios:
<p align="center">
<img src="docs/sim2real.gif" alt="sim2real" />
</p>
## RKNN Deployment
On the RK3566 chip (only 1 TOPS NPU), after deploying with RKNN and INT8 quantization, inference takes only about 20 ms (backbone: ResNet-14). Updated deployment instructions for RK3566 and RK3588 are coming soon.


@@ -49,7 +49,7 @@ window_size_max: 2.8
add_ceiling: 0 # whether to add a ceiling
# walls
wall_width_min: 0.5
- wall_width_max: 8.0
+ wall_width_max: 6.0
wall_thick: 0.5
wall_number: 100 # number of walls
wall_ceiling: 1 # whether to add a ceiling
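The wall parameters above bound the simulator's random scene generation. As a minimal illustration of how such ranges drive randomization (this is not the simulator's actual code; the uniform sampling and field names are assumptions for illustration):

```python
import random

# Ranges from config.yaml (wall_width_max was lowered from 8.0 to 6.0 in this commit).
WALL_WIDTH_MIN, WALL_WIDTH_MAX = 0.5, 6.0
WALL_THICK = 0.5
WALL_NUMBER = 100

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical wall records: each wall gets a random width within the
# configured range and the fixed thickness from config.yaml.
walls = [
    {"width": random.uniform(WALL_WIDTH_MIN, WALL_WIDTH_MAX), "thick": WALL_THICK}
    for _ in range(WALL_NUMBER)
]

print(len(walls), min(w["width"] for w in walls), max(w["width"] for w in walls))
```

Narrowing `wall_width_max` as in this commit tightens the sampled distribution, which changes the obstacle density the policy sees during training.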

BIN docs/sim2real.gif (new file, 4.8 MiB)