Compare commits: v1.0 ... YOPO-Simpl (2 commits: 59933f7d8d, b92ca64731)
```diff
@@ -20,7 +20,7 @@ Some realworld experiment: [YouTube](https://youtu.be/LHvtbKmTwvE), [bilibili](h
 **Faster and Simpler:** The code is greatly simplified and refactored in Python/PyTorch. We also replaced the simulator with our CUDA-accelerated randomized environment, which is faster, lightweight, and boundless. For the stable version consistent with our paper, please refer to the [main](https://github.com/TJU-Aerial-Robotics/YOPO/tree/main) branch.

 ### Hardware:

-Our drone designed by [@Mioulo](https://github.com/Mioulo) is also open-source. The hardware components are listed in [hardware_list.pdf](hardware/hardware_list.pdf), and the SolidWorks file of carbon fiber frame can be found in [/hardware](hardware/).
+Our drone designed by [@Mioulo](https://github.com/Mioulo) is also open-source. The hardware components are listed in [hardware_list.pdf](hardware/hardware_list.pdf), and the SolidWorks file of carbon fiber frame can be found in [/hardware](hardware/) (complete assembly files are included in the [Release](https://github.com/TJU-Aerial-Robotics/YOPO/releases/tag/hardware)).

 ## Introduction:

 We propose **a learning-based planner for autonomous navigation in obstacle-dense environments** which integrates (i) perception and mapping, (ii) front-end path searching, and (iii) back-end optimization of classical methods into a single network.
```
````diff
@@ -101,7 +101,7 @@ You can refer to [config.yaml](Simulator/src/config/config.yaml) for modificatio
 **3. Start the YOPO Planner**

-You can refer to [traj_opt.yaml](YOPO/config/traj_opt.yaml) for modification of the flight speed (The given weights are pretrained at 6 m/s and perform smoothly at speeds between 0 - 6 m/s).
+You can refer to [traj_opt.yaml](YOPO/config/traj_opt.yaml) for modification of the flight speed (The given weights are pretrained at 6 m/s and perform smoothly at speeds between 0 - 6 m/s, and more pretrained models are available at [Releases](https://github.com/TJU-Aerial-Robotics/YOPO/releases)).

 ```
 cd YOPO
````
```diff
@@ -66,7 +66,7 @@ class GuidanceLoss(nn.Module):
         goal_length = goal_dir.norm(dim=1)  # [B]

         # length difference along goal direction (cosine similarity)
-        parallel_diff = (goal_length - traj_along).abs()  # [B]
+        parallel_diff = F.smooth_l1_loss(goal_length, traj_along, reduction='none')

         # length perpendicular to goal direction
         traj_perp = traj_dir - traj_along.unsqueeze(1) * goal_dir_norm  # [B, 3]
```
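The hunk above swaps a plain absolute difference for `F.smooth_l1_loss`, which changes the gradient behavior near zero: smooth L1 acts like `0.5 * diff**2` for errors below the default `beta=1.0` (so the gradient vanishes smoothly at zero instead of being constant-magnitude), and like `|diff| - 0.5` for larger errors. A minimal standalone sketch with hypothetical values (not the repo's tensors) showing the two side by side:

```python
import torch
import torch.nn.functional as F

# Hypothetical per-sample values standing in for goal_length and traj_along.
goal_length = torch.tensor([0.1, 2.0, 10.0])
traj_along = torch.zeros(3)

# Old form: plain absolute difference, constant gradient magnitude everywhere.
abs_diff = (goal_length - traj_along).abs()

# New form: quadratic for |diff| < 1 (default beta), linear minus 0.5 beyond.
smooth = F.smooth_l1_loss(goal_length, traj_along, reduction='none')

print(abs_diff)  # tensor([ 0.1000,  2.0000, 10.0000])
print(smooth)    # tensor([0.0050, 1.5000, 9.5000])
```

For small residuals the smooth L1 value is `0.5 * diff**2` (0.1 → 0.005), while for large ones it tracks `|diff| - 0.5` (10.0 → 9.5), so outliers contribute a bounded gradient but near-zero errors are penalized gently.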
BIN  YOPO/saved/YOPO_1/epoch50.pth  (Executable file → Normal file) — Binary file not shown.
BIN  YOPO/saved/YOPO_1/events.out.tfevents.1763990916.610.3199703.0  (Normal file) — Binary file not shown.
```diff
@@ -140,7 +140,7 @@ class YopoNet:
         obs = np.concatenate((vel_c, acc_c, goal_c), axis=0).astype(np.float32)
         obs_norm = self.state_transform.normalize_obs(torch.from_numpy(obs[None, :]))
-        return obs_norm.to(self.device, non_blocking=True)
+        return obs_norm

     @torch.inference_mode()
     def callback_depth(self, data):
```
```diff
@@ -171,9 +171,8 @@ class YopoNet:
         # input prepare
         time1 = time.time()
         depth_input = torch.from_numpy(depth).to(self.device, non_blocking=True)  # (non_blocking: copying speed 3x)
-        obs_norm = self.process_odom()
+        obs_norm = self.process_odom().to(self.device, non_blocking=True)
         obs_input = self.state_transform.prepare_input(obs_norm)
-        obs_input = obs_input.to(self.device, non_blocking=True)
         # torch.cuda.synchronize()

         time2 = time.time()
```
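The two hunks above move the `.to(self.device, non_blocking=True)` transfer out of `process_odom` so the observation is normalized on the CPU first and copied to the device once, immediately after being produced. A small self-contained sketch of the pattern (the shapes, the `process_odom_sketch` name, and the stand-in normalization are assumptions for illustration, not the repo's code); note that `non_blocking=True` only overlaps the copy with CPU work when the source tensor is in pinned (page-locked) memory and the target is a CUDA device, and otherwise silently degrades to an ordinary blocking copy:

```python
import torch

def process_odom_sketch(obs: torch.Tensor, device: torch.device) -> torch.Tensor:
    # Normalize on the CPU (stand-in for the repo's normalize_obs), then issue
    # a single host-to-device transfer at the call site.
    obs_norm = (obs - obs.mean()) / (obs.std() + 1e-6)
    if device.type == "cuda":
        # Pinned memory is what lets non_blocking=True actually overlap the copy.
        return obs_norm.pin_memory().to(device, non_blocking=True)
    return obs_norm  # on CPU there is nothing to transfer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
obs = torch.randn(1, 9)  # e.g. velocity (3) + acceleration (3) + goal (3)
obs_norm = process_odom_sketch(obs, device)
```

Consolidating the transfer at one call site also avoids the original's redundant second `.to(self.device, ...)` on `obs_input`, which was a no-op copy when the tensor was already resident on the device.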