update readme

This commit is contained in:
menzzz 2025-07-28 14:02:01 +08:00
parent ae09e3fd07
commit 7838540953
2 changed files with 100 additions and 136 deletions

README.md (Normal file → Executable file)

@@ -1,36 +1,107 @@

# DPtraj

A double-polynomial description of trajectory, interfaced with a learning-based front end.

This work is presented in the paper Hierarchically Depicting Vehicle Trajectory with Stability in Complex Environments, published in Science Robotics.

The improvements to the backend trajectory optimizer build upon our previous work (available at https://github.com/ZJU-FAST-Lab/Dftpav), where singularity issues were addressed.

Moreover, the approach has recently been extended and applied to more complex multi-joint robotic platforms (see https://github.com/Tracailer/Tracailer).

If you find this repository helpful, please consider citing at least one of the following papers:

```bibtex
@article{han2025hierarchically,
  title={Hierarchically depicting vehicle trajectory with stability in complex environments},
  author={Han, Zhichao and Tian, Mengze and Gongye, Zaitian and Xue, Donglai and Xing, Jiaxi and Wang, Qianhao and Gao, Yuman and Wang, Jingping and Xu, Chao and Gao, Fei},
  journal={Science Robotics},
  volume={10},
  number={103},
  pages={eads4551},
  year={2025},
  publisher={American Association for the Advancement of Science}
}
@article{han2023efficient,
  title={An efficient spatial-temporal trajectory planner for autonomous vehicles in unstructured environments},
  author={Han, Zhichao and Wu, Yuwei and Li, Tong and Zhang, Lu and Pei, Liuao and Xu, Long and Li, Chengyang and Ma, Changjia and Xu, Chao and Shen, Shaojie and others},
  journal={IEEE Transactions on Intelligent Transportation Systems},
  volume={25},
  number={2},
  pages={1797--1814},
  year={2023},
  publisher={IEEE}
}
```

The code will be divided into several modules and gradually open-sourced in different branches. Currently, you can switch to the `backend` branch to try our efficient, singularity-free backend optimization; that branch includes a README to guide you through quick deployment.

# Stable-Time Path Planning

## 1. Setup

All tests were conducted in a Linux environment on a computer equipped with an Intel Core i7-10700 CPU and a GeForce RTX 2060 GPU.
Our software is developed and tested on Ubuntu 18.04 and 20.04 with ROS installed.
ROS can be installed from here: [ROS Installation](http://wiki.ros.org/ROS/Installation).

To build this project, ensure that you have the following dependencies installed:
- [LibTorch](https://pytorch.org/): models trained with [PyTorch](https://pytorch.org/get-started/locally/) are invoked from C++, which requires LibTorch.
- [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit): required for GPU acceleration.
- [OMPL](https://ompl.kavrakilab.org/): a comprehensive library for motion planning and control.

### 1. LibTorch

To facilitate your workflow, we provide LibTorch 2.1.0 in the **repo's Releases**; we **strongly recommend using it directly to avoid version incompatibilities**.

Matching the LibTorch and CUDA versions is **imperative**: mismatches will cause model inference failures. The following combination has been verified:

<div align="center">

| LibTorch Version | CUDA 11.8 |
|------------------|-----------|
| 2.1.0 | ✅ Supported |

</div>

### 2. CUDA Toolkit

#### 1. Install the CUDA Toolkit
Follow the instructions in the [CUDA Toolkit download archive](https://developer.nvidia.com/cuda-11-8-0-download-archive).

#### 2. Create a Symbolic Link

```bash
cd /usr/local
stat cuda        # check which version cuda currently points to
sudo rm -rf cuda # remove the old link
sudo ln -s /usr/local/cuda-11.8 /usr/local/cuda
stat cuda        # confirm it now points to cuda-11.8
```

#### 3. Set the CUDA Environment Variables
```bash
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
export PATH=$PATH:/usr/local/cuda/bin
```
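These `export` lines only affect the current shell session. A common way to make them persistent, assuming the default bash shell, is to append them to `~/.bashrc`:
```bash
# Persist the CUDA environment variables across sessions (bash assumed)
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64' >> ~/.bashrc
echo 'export PATH=$PATH:/usr/local/cuda/bin' >> ~/.bashrc
source ~/.bashrc
nvcc --version   # should now report release 11.8
```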
### 3. OMPL
```bash
sudo apt-get install libompl-dev ompl-demos
```
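A quick sanity check that the package manager actually installed OMPL (a convenience, not part of the original steps):
```bash
apt-cache policy libompl-dev   # the "Installed:" field should show a version
```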
## 2. Use the pre-trained model and test it in ROS.
### 1. Unzip the project and go to the corresponding directory.
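The archive name below is only an illustration; substitute the actual release file you downloaded:
```bash
# Hypothetical archive name; replace with the real release file
unzip STPP_DEPLOY.zip -d ~/
```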
```bash
cd ~/STPP_DEPLOY/NeuralTraj
```
### 2. Compile it.
```bash
catkin_make -DCMAKE_BUILD_TYPE=Release
```
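If CMake cannot find Torch at this step, one option is to point it at the unpacked LibTorch directory explicitly; the path below is an assumption based on where you extracted the Release archive:
```bash
# Hypothetical LibTorch location; adjust to your extraction path
catkin_make -DCMAKE_BUILD_TYPE=Release \
            -DCMAKE_PREFIX_PATH=$HOME/STPP_DEPLOY/libtorch
```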
### 3. Run.
Open a new terminal window and type:
```bash
cd ~/STPP_DEPLOY/NeuralTraj
source devel/setup.bash
```
Then, run the script:
```bash
./run.sh
```
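If the shell reports a permission error (common after unzipping), mark the script executable first:
```bash
chmod +x run.sh   # grant execute permission, then launch
./run.sh
```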
### 4. Reproduce Results.
Use the **2D Nav Goal** in **RVIZ** to trigger the planning. The program will read the test data from `/src/plan_manage/testdata` and automatically fetch the next set of data (start and end states along with the environment) for planning every 10 seconds.
Here is an example:
![reproducing](figs/reproducing.gif)
If the map does not appear after waiting for several seconds, there is no need to worry. Simply trigger it using **2D Nav Goal**, and the map will be reloaded.
Here, the green curve is the path directly output by our method, the yellow curve is the path output by Hybrid A*, and the light blue curve is the path output by Hybrid A* integrated with a Transformer. Lastly, the red curve shows the trajectory generated by further backend optimization of the path output by our network.
Switching to the terminal, you will see the computation times of the various frontend path planning algorithms for this instance, along with the corresponding backend optimization times.
Note: Due to computational performance and solver randomness, slight deviations in results may occur.
## 3. Contact
If you have any questions, please feel free to contact Zhichao HAN (<zhichaohan@zju.edu.cn>) or Mengze TIAN (<mengze.tian@epfl.ch>).
