# Stable-Time Path Planning
## 1. Setup
All tests were conducted on Linux, on a computer equipped with an Intel Core i7-10700 CPU and a GeForce RTX 2060 GPU.
Our software is developed and tested on Ubuntu 18.04 and 20.04 with ROS installed.
ROS can be installed here: [ROS Installation](http://wiki.ros.org/ROS/Installation).
To build this project, ensure that you have the following dependencies installed: 
- [LibTorch](https://pytorch.org/): Because we invoke models trained with [PyTorch](https://pytorch.org/get-started/locally/) from C++, LibTorch is required.
- [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit): Required for GPU acceleration.
- [OMPL](https://ompl.kavrakilab.org/): A comprehensive library for motion planning and control.
### 1. LibTorch
For convenience, we provide LibTorch 2.1.0 on [Google Drive](https://drive.google.com/file/d/1sW9OpkZalEzB3llRwt9eR9m20yW2g5IC/view?usp=drive_link).
Please download it, unzip it, and place it in `~/NeuralTraj`.
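If you prefer to fetch the archive from the command line, here is a minimal sketch. It assumes the archive extracts to a `libtorch/` folder and uses the optional `gdown` helper; downloading manually from the link above works just as well.
```bash
# Optional helper for downloading files from Google Drive.
pip install gdown
# File ID taken from the Google Drive link above; the output name is our choice.
gdown "https://drive.google.com/uc?id=1sW9OpkZalEzB3llRwt9eR9m20yW2g5IC" -O libtorch.zip
# Place the unzipped LibTorch (assumed to extract to libtorch/) in ~/NeuralTraj.
mkdir -p ~/NeuralTraj
unzip libtorch.zip -d ~/NeuralTraj
```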
Matching LibTorch and CUDA versions is **imperative**: mismatches will cause model inference failures. The following combination has been verified:
<div align="center">

| LibTorch Version | CUDA 11.8 |
|------------------|-----------|
| 2.1.0            | ✅ Support |

</div>
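As an optional sanity check, the commands below show which CUDA toolkits are already installed and which CUDA version your driver supports:
```bash
ls /usr/local | grep cuda    # installed toolkit directories, e.g. cuda-11.8
nvidia-smi                   # driver version and the highest CUDA version it supports
nvcc --version               # toolkit currently on PATH, if any
```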
### 2. CUDA Toolkit
#### 1. Install CUDA Toolkit
Follow the instructions in the [CUDA Toolkit 11.8 download archive](https://developer.nvidia.com/cuda-11-8-0-download-archive).
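The archive page generates the authoritative commands for your exact OS and installer type. For reference only, the network-repo route on Ubuntu 20.04 looked roughly like this at the time of writing (file names and package versions may have changed, so prefer the commands from the archive page):
```bash
# Reference sketch only -- follow the archive page for your own system.
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt-get install -y cuda-toolkit-11-8
```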
#### 2. Create a Symbolic Link
```bash
stat cuda # check the original version of cuda
cd /usr/local
sudo rm -rf cuda
sudo ln -s /usr/local/cuda-11.8 /usr/local/cuda
stat cuda # check the updated version of cuda
```
#### 3. Set Environment Variables of CUDA
```bash
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
export PATH=$PATH:/usr/local/cuda/bin
```
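These exports only affect the current shell. To make them persistent, append them to your shell's startup file (`~/.bashrc` assumed here):
```bash
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64' >> ~/.bashrc
echo 'export PATH=$PATH:/usr/local/cuda/bin' >> ~/.bashrc
source ~/.bashrc
```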
### 3. OMPL
```bash
sudo apt-get install libompl-dev ompl-demos
```
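To confirm the installation (optional), check that the OMPL packages are registered with `dpkg`:
```bash
dpkg -l | grep ompl    # should list libompl-dev and ompl-demos
```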
## 2. Use the pre-trained model and test it in ROS.
### 1. Unzip the project and go to the corresponding directory.
```bash
cd ~/DPtraj/NeuralTraj
```
### 2. Compile it.
```bash
catkin_make -DCMAKE_BUILD_TYPE=Release
```
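If CMake fails to locate LibTorch, you may need to tell it where the unzipped folder lives. The exact mechanism depends on how this project's CMakeLists finds Torch, but passing `CMAKE_PREFIX_PATH` is a common pattern (the path below is an assumption based on Section 1):
```bash
# Assumes LibTorch was unzipped to ~/NeuralTraj/libtorch; adjust if yours differs.
catkin_make -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=$HOME/NeuralTraj/libtorch
```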
### 3. Run.
Open a new terminal window and type:
```bash
cd ~/DPtraj/NeuralTraj
source devel/setup.bash
```
Then, run the script:
```bash
./run.sh
```
### 4. Reproduce Results.
Use the **2D Nav Goal** in **RVIZ** to trigger the planning. The program will read the test data from `/src/plan_manage/testdata` and automatically fetch the next set of data (start and end states along with the environment) for planning every 10 seconds.
Here is an example:
![reproducing](figs/reproducing.gif)
If the map does not appear after waiting for several seconds, there is no need to worry. Simply trigger it using **2D Nav Goal**, and the map will be reloaded.
Here, the green curve is the path directly output by our method, the yellow curve is the path output by Hybrid A*, and the light blue curve is the path output by Hybrid A* integrated with a Transformer. Finally, the red curve shows the trajectory obtained by further backend optimization of the path output by our network.
Switch to the terminal to see the computation times of the various frontend path planning algorithms for this instance, along with the corresponding backend optimization times.
Note: Due to computational performance and solver randomness, slight deviations in results may occur.
## 3. Contact
If you have any questions, please feel free to contact Zhichao HAN (<zhichaohan@zju.edu.cn>) or Mengze TIAN (<mengze.tian@epfl.ch>).