# Installation

## Preparation

### Requirements

- Ubuntu 16.04
- Anaconda with `python=3.6`
- `pytorch>=1.3`
- torchvision with `pillow<7`
- `cuda=10.1`
- others: `pip install termcolor opencv-python tensorboard h5py easydict`
- note
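Putting the requirements above together, an environment setup might look like the following. This is a sketch, not the project's official install script: the environment name `closerlook3d` is our own choice, and exact package pins may need adjusting for your CUDA driver.

```shell
# Create and activate a fresh conda environment (name is arbitrary)
conda create -n closerlook3d python=3.6 -y
conda activate closerlook3d

# PyTorch >= 1.3 built against CUDA 10.1, plus torchvision
conda install pytorch=1.3 torchvision cudatoolkit=10.1 -c pytorch -y

# torchvision in this setup requires pillow < 7
pip install "pillow<7" termcolor opencv-python tensorboard h5py easydict
```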

### Datasets

#### Shape Classification on ModelNet40

You can download ModelNet40 from here (1.6 GB). Unzip and move (or link) it to `data/ModelNet40/modelnet40_normal_resampled`.
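The "move (or link)" step can be done with a symlink so the dataset is not duplicated on disk. The source path below is a placeholder; substitute wherever you unzipped the archive. The same pattern applies to the other datasets.

```shell
# Create the expected directory and link the unzipped dataset into it.
# /path/to/modelnet40_normal_resampled is a placeholder for your unzip location.
mkdir -p data/ModelNet40
ln -s /path/to/modelnet40_normal_resampled data/ModelNet40/modelnet40_normal_resampled
```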

#### Part Segmentation on PartNet

You can download the PartNet dataset from the ShapeNet official webpage (8.0 GB). Unzip and move (or link) it to `data/PartNet/sem_seg_h5`.

#### Part Segmentation on ShapeNetPart

You can download the ShapeNetPart dataset from here (635 MB). Unzip and move (or link) it to `data/ShapeNetPart/shapenetcore_partanno_segmentation_benchmark_v0`.

#### Scene Segmentation on S3DIS

You can download the S3DIS dataset from here (4.8 GB). You only need the file named `Stanford3dDataset_v1.2.zip`; unzip and move (or link) it to `data/S3DIS/Stanford3dDataset_v1.2`.

The file structure should look like:

```text
<pt-code-root>
├── cfgs
│   ├── modelnet
│   ├── partnet
│   └── s3dis
├── data
│   ├── ModelNet40
│   │   └── modelnet40_normal_resampled
│   │       ├── modelnet10_shape_names.txt
│   │       ├── modelnet10_test.txt
│   │       ├── modelnet10_train.txt
│   │       ├── modelnet40_shape_names.txt
│   │       ├── modelnet40_test.txt
│   │       ├── modelnet40_train.txt
│   │       ├── airplane
│   │       ├── bathtub
│   │       └── ...
│   ├── PartNet
│   │   └── sem_seg_h5
│   │       ├── Bag-1
│   │       ├── Bed-1
│   │       ├── Bed-2
│   │       ├── Bed-3
│   │       ├── Bottle-1
│   │       ├── Bottle-3
│   │       └── ...
│   ├── ShapeNetPart
│   │   └── shapenetcore_partanno_segmentation_benchmark_v0
│   │       ├── README.txt
│   │       ├── synsetoffset2category.txt
│   │       ├── train_test_split
│   │       ├── 02691156
│   │       ├── 02773838
│   │       ├── 02954340
│   │       ├── 02958343
│   │       ├── 03001627
│   │       ├── 03261776
│   │       └── ...
│   └── S3DIS
│       └── Stanford3dDataset_v1.2
│           ├── Area_1
│           ├── Area_2
│           ├── Area_3
│           ├── Area_4
│           ├── Area_5
│           └── Area_6
├── init.sh
├── datasets
├── function
├── models
├── ops
└── utils
```

### Compile custom operators and pre-process the data

```bash
sh init.sh
```

## Usage

### Training

#### ModelNet

```bash
python -m torch.distributed.launch --master_port <port_num> --nproc_per_node <num_of_gpus_to_use> \
    function/train_modelnet_dist.py --cfg <config file> [--log_dir <log directory>]
```

- `<port_num>` is the port used for distributed training; any free port (e.g. 12347) works.
- `<config file>` is the YAML file that determines most experiment settings. Most config files are in the `cfgs` directory.
- `<log directory>` is the directory where the log file and checkpoints will be saved; the default is `log`.
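For concreteness, a filled-in invocation might look like the following. The port and GPU count are illustrative, and `<config file>` still stands for an actual YAML chosen from `cfgs/modelnet`.

```shell
# Train on ModelNet with 4 GPUs; 12347 is an arbitrary free port.
python -m torch.distributed.launch --master_port 12347 --nproc_per_node 4 \
    function/train_modelnet_dist.py --cfg cfgs/modelnet/<config file> --log_dir log/modelnet
```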

#### PartNet

```bash
python -m torch.distributed.launch --master_port <port_num> --nproc_per_node <num_of_gpus_to_use> \
    function/train_partnet_dist.py --cfg <config file> [--log_dir <log directory>]
```

#### ShapeNetPart

```bash
python -m torch.distributed.launch --master_port <port_num> --nproc_per_node <num_of_gpus_to_use> \
    function/train_shapenetpart_dist.py --cfg <config file> [--log_dir <log directory>]
```

#### S3DIS

```bash
python -m torch.distributed.launch --master_port <port_num> --nproc_per_node <num_of_gpus_to_use> \
    function/train_s3dis_dist.py --cfg <config file> [--log_dir <log directory>]
```

### Evaluating

For evaluation, we recommend using a single GPU for a more precise result.

#### ModelNet40

```bash
python -m torch.distributed.launch --master_port <port_num> --nproc_per_node 1 \
    function/evaluate_modelnet_dist.py --cfg <config file> --load_path <checkpoint> [--log_dir <log directory>]
```

- `<port_num>` is the port used for distributed evaluation; any free port (e.g. 12347) works.
- `<config file>` is the YAML file that determines most experiment settings. Most config files are in the `cfgs` directory.
- `<checkpoint>` is the model checkpoint used for evaluation.
- `<log directory>` is the directory where the log file will be saved; the default is `log_eval`.
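A filled-in evaluation call mirrors the training one, pointing `--load_path` at a checkpoint saved during training. The checkpoint filename below is a placeholder; `<config file>` still stands for the YAML actually used for training.

```shell
# Evaluate a ModelNet checkpoint on a single GPU; 12347 is an arbitrary free port.
python -m torch.distributed.launch --master_port 12347 --nproc_per_node 1 \
    function/evaluate_modelnet_dist.py --cfg cfgs/modelnet/<config file> \
    --load_path log/modelnet/<checkpoint> --log_dir log_eval/modelnet
```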

#### PartNet

```bash
python -m torch.distributed.launch --master_port <port_num> --nproc_per_node 1 \
    function/evaluate_partnet_dist.py --cfg <config file> --load_path <checkpoint> [--log_dir <log directory>]
```

#### ShapeNetPart

```bash
python -m torch.distributed.launch --master_port <port_num> --nproc_per_node 1 \
    function/evaluate_shapenetpart_dist.py --cfg <config file> --load_path <checkpoint> [--log_dir <log directory>]
```

#### S3DIS

```bash
python -m torch.distributed.launch --master_port <port_num> --nproc_per_node 1 \
    function/evaluate_s3dis_dist.py --cfg <config file> --load_path <checkpoint> [--log_dir <log directory>]
```

## Models

### ModelNet40

| Method | Acc | Model |
| --- | --- | --- |
| Point-wise MLP | 93.0 | Google / Baidu(fj13) |
| Pseudo Grid | 93.1 | Google / Baidu(gmh5) |
| Adapt Weights | 92.9 | Google / Baidu(bbus) |
| PosPool | 93.0 | Google / Baidu(wuuv) |
| PosPool* | 93.3 | Google / Baidu(qcc6) |

### S3DIS

| Method | mIoU | Model |
| --- | --- | --- |
| Point-wise MLP | 66.3 | Google / Baidu(53as) |
| Pseudo Grid | 65.0 | Google / Baidu(8skn) |
| Adapt Weights | 64.5 | Google / Baidu(b7zv) |
| PosPool | 65.5 | Google / Baidu(z752) |
| PosPool* | 65.5 | Google / Baidu(r96f) |

Data iteration indices: Google / Baidu(m5bp)

### PartNet

| Method | mIoU (val) | mIoU (test) | Model |
| --- | --- | --- | --- |
| Point-wise MLP | 49.1 | 82.5 | Google / Baidu(wxff) |
| Pseudo Grid | 50.6 | 53.3 | Google / Baidu(n6b7) |
| Adapt Weights | 50.5 | 52.9 | Google / Baidu(pc22) |
| PosPool | 50.5 | 53.6 | Google / Baidu(3qv5) |
| PosPool* | 51.1 | 53.7 | Google / Baidu(czyq) |

### ShapeNetPart

| Method | mIoU | msIoU | Acc | Model |
| --- | --- | --- | --- | --- |
| Point-wise MLP | 85.7 | 84.1 | 94.5 | Google / Baidu(mi2m) |
| Pseudo Grid | 86.0 | 84.3 | 94.6 | Google / Baidu(wde6) |
| Adapt Weights | 85.9 | 84.5 | 94.6 | Google / Baidu(dy1k) |
| PosPool | 85.9 | 84.6 | 94.6 | Google / Baidu(r2tr) |
| PosPool* | 86.2 | 84.8 | 94.8 | Google / Baidu(27ie) |