update readme

This commit is contained in:
TJU_Lu 2025-06-17 10:48:46 +08:00
parent 94ceefc057
commit 0fd210f37c
3 changed files with 14 additions and 9 deletions

View File

@@ -3,9 +3,6 @@ MIT License
Copyright (c) 2024, TJU-Aerial-Robotics
Tianjin University, China
This work is developed based on Flightmare Simulator.
The original LICENSE can be found in the LICENSE_FLIGHTMARE file.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights

View File

@@ -1,9 +1,11 @@
# You Only Plan Once
Original Paper: [You Only Plan Once: A Learning-Based One-Stage Planner With Guidance Learning](https://ieeexplore.ieee.org/document/10528860)
Improvements and Applications: [YOPOv2-Tracker: An End-to-End Agile Tracking and Navigation Framework from Perception to Action](https://arxiv.org/html/2505.06923v1)
Video of the paper: [YouTube](https://youtu.be/m7u1MYIuIn4), [bilibili](https://www.bilibili.com/video/BV15M4m1d7j5)
Some real-world experiments: [YouTube](https://youtu.be/LHvtbKmTwvE), [bilibili](https://www.bilibili.com/video/BV1jBpve5EkP)
@@ -15,7 +17,7 @@ Some realworld experiment: [YouTube](https://youtu.be/LHvtbKmTwvE), [bilibili](h
</tr>
</table>
**Faster and Simpler:** The code is greatly simplified and refactored in Python/PyTorch. We also replaced the simulator with our CUDA-accelerated randomized environment, which is faster, lightweight, and boundless. For the stable version consistent with our paper, please refer to the [main](https://github.com/TJU-Aerial-Robotics/YOPO/tree/main) branch.
### Hardware:
Our drone, designed by [@Mioulo](https://github.com/Mioulo), is also open-source. The hardware components are listed in [hardware_list.pdf](hardware/hardware_list.pdf), and the SolidWorks file of the carbon fiber frame can be found in [/hardware](hardware/).
@@ -107,13 +109,19 @@ python test_yopo_ros.py --trial=1 --epoch=50
**4. Visualization**
Start RVIZ to visualize the images and trajectory.
```
cd YOPO
rviz -d yopo.rviz
```
Left: Random Forest (maze_type=5); Right: 3D Perlin (maze_type=1).
<p align="center">
<img src="docs/new_env.gif" alt="new_env" />
</p>
You can click the `2D Nav Goal` in RVIZ to set the goal (the map is infinite, so the goal can be placed freely), just like the following GIF (Flightmare Simulator).
<p align="center">
<img src="docs/click_in_rviz.gif" alt="click_in_rviz" />
</p>
@@ -135,7 +143,7 @@ YOPO/
├── Controller/
├── dataset/
```
You can refer to [config.yaml](Simulator/src/config/config.yaml) for modifications of the sampling state, sensor, and environment. Besides, we use random `vel/acc/goal` states for data augmentation; the distribution can be found in [state_samples](docs/state_samples.png).
**2. Train the Policy**
```

BIN docs/new_env.gif (new executable file, 7.5 MiB; binary file not shown)