The instructions for setting up a virtual environment are here.
cd SFA3D/
pip install -r requirements.txt
Download the 3D KITTI detection dataset from here.
The downloaded data includes:
- Velodyne point clouds (required)
- Camera calibration matrices (optional)
- Training labels (optional)
- Left color images (optional)
Please make sure that you construct the source code and dataset directory structure as shown below.
The pre-trained model has been pushed to this repo.
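To sanity-check the downloaded checkpoint, you can try loading it directly with PyTorch. This is only a minimal sketch: the path is taken from the directory tree below, and the internal structure of the .pth file is an assumption.

```python
import torch

# Path taken from the directory layout shown below; adjust if yours differs.
checkpoint_path = "checkpoints/fpn_resnet_18/fpn_resnet_18_epoch_300.pth"

# Load onto the CPU so the check also works on machines without a GPU.
checkpoint = torch.load(checkpoint_path, map_location="cpu")

# A .pth file is typically a (possibly nested) dict of tensors; list a few keys.
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys())[:5])
```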
python inference.py --no_cuda=True
python inference.py
(Figure: labels produced by inference)
python train.py --no_cuda=True
python train.py --gpu_idx 0
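The --no_cuda=True and --gpu_idx flags used by inference.py and train.py above typically boil down to a simple device choice. The sketch below shows the usual PyTorch pattern, not the repo's exact code:

```python
import torch


def select_device(no_cuda: bool = False, gpu_idx: int = 0) -> torch.device:
    """Mirror --no_cuda / --gpu_idx style flags as a torch.device choice."""
    if no_cuda or not torch.cuda.is_available():
        return torch.device("cpu")
    return torch.device(f"cuda:{gpu_idx}")


# Force CPU, matching `python train.py --no_cuda=True`.
print(select_device(no_cuda=True))   # cpu
# Use the first GPU if available, matching `--gpu_idx 0`.
print(select_device(gpu_idx=0))
```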
Single machine (node), multiple GPUs
python train.py --multiprocessing-distributed --world-size 1 --rank 0 --batch_size 64 --num_workers 8
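The distributed flags follow the common PyTorch one-process-per-GPU pattern: --world-size counts nodes, --rank is the node index, and one worker process is spawned per GPU. The sketch below illustrates that pattern with standard torch.distributed calls; it is not the repo's exact training code, and the address and port are placeholders.

```python
import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(local_rank, ngpus_per_node, node_rank, num_nodes, dist_url):
    # Global rank = node index * GPUs per node + local GPU index.
    global_rank = node_rank * ngpus_per_node + local_rank
    world_size = num_nodes * ngpus_per_node  # total number of processes
    dist.init_process_group(backend="nccl", init_method=dist_url,
                            world_size=world_size, rank=global_rank)
    torch.cuda.set_device(local_rank)
    # ... build the model, wrap it in torch.nn.parallel.DistributedDataParallel,
    # and run the training loop here ...
    dist.destroy_process_group()


if __name__ == "__main__":
    ngpus_per_node = torch.cuda.device_count()  # assumes at least one GPU
    num_nodes = 1   # corresponds to --world-size
    node_rank = 0   # corresponds to --rank
    dist_url = "tcp://127.0.0.1:29500"  # placeholder rendezvous address/port
    mp.spawn(worker, nprocs=ngpus_per_node,
             args=(ngpus_per_node, node_rank, num_nodes, dist_url))
```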
Two machines (two nodes), multiple GPUs
First machine (rank 0):
python train.py --dist-url 'tcp://IP_OF_NODE1:FREEPORT' --multiprocessing-distributed --world-size 2 --rank 0 --batch_size 64 --num_workers 8
Second machine (rank 1):
python train.py --dist-url 'tcp://IP_OF_NODE1:FREEPORT' --multiprocessing-distributed --world-size 2 --rank 1 --batch_size 64 --num_workers 8
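Here --world-size 2 is the number of nodes and --rank is the node index (0 on the first machine, 1 on the second). Both machines pass the same --dist-url: the address of the first (rank 0) machine and a port that is open on it, so replace IP_OF_NODE1 and FREEPORT with your own values. In terms of the sketch above, this corresponds to num_nodes = 2 with node_rank = 0 and 1 respectively.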
└── kitti/
    ├── image_2/   (left color camera images, optional)
    ├── calib/     (calibration files, optional)
    ├── label_2/   (annotation results / labels, optional)
    └── velodyne/  (point cloud files, required)
${ROOT}
├── checkpoints/
│   └── fpn_resnet_18/
│       └── fpn_resnet_18_epoch_300.pth  (point cloud object detection annotation model)
├── sfa/  (point cloud annotation algorithm)
│   ├── config/
│   ├── data_process/
│   ├── models/
│   ├── utils/
│   ├── inference.py
│   └── train.py
├── README.md
├── LICENSE
└── requirements.txt
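As a quick sanity check that your layout matches the trees above, you can run something like the following from ${ROOT}. The dataset location is an assumption (adjust dataset_root to wherever you placed the kitti/ folder); the listed sub-paths come directly from the trees.

```python
from pathlib import Path

# Adjust these two roots to your machine; the sub-paths come from the trees above.
source_root = Path(".")               # ${ROOT}
dataset_root = Path("path/to/kitti")  # placeholder: wherever your kitti/ folder lives

required = [
    source_root / "checkpoints/fpn_resnet_18/fpn_resnet_18_epoch_300.pth",
    source_root / "sfa/inference.py",
    source_root / "sfa/train.py",
    dataset_root / "velodyne",        # point clouds are the only required dataset part
]

for path in required:
    status = "ok" if path.exists() else "MISSING"
    print(f"{status:7} {path}")
```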