Pre-built CUDA extension wheels for the TRELLIS.2 o-voxel module.
```bash
# Check your versions first
python -c "import torch; print(f'CUDA: {torch.version.cuda}, PyTorch: {torch.__version__}')"
# Then pick the matching wheel index from the table below
```
| CUDA | PyTorch | Python | Platforms | Wheel Index |
|---|---|---|---|---|
| 12.4 | 2.5.1 | 3.10, 3.11, 3.12 | Linux, Windows | cu124-torch251/ |
| 12.6 | 2.6.0 | 3.10, 3.11, 3.12, 3.13 | Linux, Windows | cu126-torch260/ |
| 12.6 | 2.8.0 | 3.10, 3.11, 3.12, 3.13 | Linux, Windows | cu126-torch280/ |
| 12.8 | 2.8.0 | 3.10, 3.11, 3.12, 3.13 | Linux, Windows | cu128-torch280/ |
| 12.8 | 2.9.1 | 3.10, 3.11, 3.12, 3.13 | Linux, Windows | cu128-torch291/ |
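If you prefer to resolve the index programmatically, here is a minimal sketch that maps the installed CUDA/PyTorch pair to the wheel index directories in the table above. The base URL is taken from the install example below; the helper name `wheel_index_url` is purely illustrative and not part of the package.

```python
# Illustrative helper (not part of o_voxel): pick the wheel index for the
# installed CUDA/PyTorch pair, based on the compatibility table above.
import torch

# (CUDA version, PyTorch version) -> wheel index directory
WHEEL_INDEXES = {
    ("12.4", "2.5.1"): "cu124-torch251/",
    ("12.6", "2.6.0"): "cu126-torch260/",
    ("12.6", "2.8.0"): "cu126-torch280/",
    ("12.8", "2.8.0"): "cu128-torch280/",
    ("12.8", "2.9.1"): "cu128-torch291/",
}
BASE_URL = "https://pozzettiandrea.github.io/ovoxel-wheels/"

def wheel_index_url() -> str:
    cuda = torch.version.cuda                    # e.g. "12.8"
    torch_ver = torch.__version__.split("+")[0]  # drop local tag, e.g. "2.8.0+cu128" -> "2.8.0"
    key = (cuda, torch_ver)
    if key not in WHEEL_INDEXES:
        raise RuntimeError(f"No pre-built wheel index for CUDA {cuda} / PyTorch {torch_ver}")
    return BASE_URL + WHEEL_INDEXES[key]

if __name__ == "__main__":
    print(f"pip install o_voxel --find-links {wheel_index_url()}")
```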
For example, for CUDA 12.8 + PyTorch 2.8.0:

```bash
pip install o_voxel --find-links https://pozzettiandrea.github.io/ovoxel-wheels/cu128-torch280/
```
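A minimal post-install check, assuming the import name matches the distribution name `o_voxel` (it may differ):

```bash
# Smoke test: confirms the extension imports without triggering a source build
python -c "import o_voxel; print('o_voxel OK')"
```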
GPU Support: RTX 20/30/40/50 series, Tesla T4, A100, H100, B100/B200
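The cards listed above start at the Turing generation (compute capability 7.5, e.g. RTX 20 series and Tesla T4). A rough sketch for checking whether your GPU falls in that range, using PyTorch's device-capability query:

```python
# Rough GPU check against the supported-card list above (assumes Turing, sm_75, as the floor).
import torch

assert torch.cuda.is_available(), "No CUDA device visible to PyTorch"
major, minor = torch.cuda.get_device_capability(0)
name = torch.cuda.get_device_name(0)
print(f"{name}: sm_{major}{minor}")
if (major, minor) < (7, 5):
    print("Warning: this GPU is older than the cards listed as supported (Turing and newer).")
```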