
Releases: Coderx7/SimpleNet

Initial ImageNet pretrained weights

15 Feb 18:13
8ae4059

Initial ImageNet pretrained weights for 1.5m, 3m, 5m and 9m variants can now be downloaded from the assets below.

ImageNet Results:

| Method | #Params | ImageNet (top-1/top-5) | ImageNet-Real-Labels (top-1/top-5) |
|---|---|---|---|
| SimpleNetV1_imagenet (36.23 MB) | 9.5m | 74.17/91.614 | 81.24/94.63 |
| SimpleNetV1_imagenet (21.91 MB) | 5.7m | 71.936/90.3 | 79.12/93.68 |
| SimpleNetV1_imagenet (12.52 MB) | 3m | 68.15/87.762 | 75.66/91.80 |
| SimpleNetV1_imagenet (5.73 MB) | 1.5m | 61.524/83.43 | 69.11/88.10 |

Note 1

These models were converted from their PyTorch counterparts using ONNX Runtime.
The original models can be accessed from our official PyTorch repository.

Note 2

Please note that since the models were converted from ONNX to Caffe, the mean, std, and crop ratio used are as follows:

```
DEFAULT_CROP_PCT = 0.875
IMAGENET_DEFAULT_MEAN = (0.485, 0.456, 0.406)
IMAGENET_DEFAULT_STD = (0.229, 0.224, 0.225)
```

Also note that images were not channel-swapped during training, so you do not need to perform any channel swap either (keep the original RGB order).
You also DO NOT need to rescale the input to [0-255]; keep it in the [0, 1] range before normalization. A preprocessing sketch is given below.
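
For reference, here is a minimal preprocessing sketch in Python (NumPy + Pillow) that follows the constants above. The 224x224 input size, the bilinear resize, and the function name are assumptions for illustration, not part of this release:

```python
import numpy as np
from PIL import Image

DEFAULT_CROP_PCT = 0.875
IMAGENET_DEFAULT_MEAN = (0.485, 0.456, 0.406)
IMAGENET_DEFAULT_STD = (0.229, 0.224, 0.225)

def preprocess(path, img_size=224):
    """Resize, center-crop, and normalize an image as described in Note 2."""
    # Resize the shorter side so the center crop covers DEFAULT_CROP_PCT of it.
    scale_size = int(round(img_size / DEFAULT_CROP_PCT))
    img = Image.open(path).convert("RGB")  # keep RGB: no channel swap
    w, h = img.size
    if w < h:
        img = img.resize((scale_size, int(scale_size * h / w)), Image.BILINEAR)
    else:
        img = img.resize((int(scale_size * w / h), scale_size), Image.BILINEAR)

    # Center crop to img_size x img_size.
    w, h = img.size
    left, top = (w - img_size) // 2, (h - img_size) // 2
    img = img.crop((left, top, left + img_size, top + img_size))

    # Scale to [0, 1] (do NOT rescale to [0, 255]), then normalize with mean/std.
    x = np.asarray(img, dtype=np.float32) / 255.0
    x = ((x - IMAGENET_DEFAULT_MEAN) / IMAGENET_DEFAULT_STD).astype(np.float32)

    # HWC -> CHW and add a batch dimension.
    return x.transpose(2, 0, 1)[None, ...]
```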

Initial ImageNet Models

31 Dec 19:57
4bff094

Initial ImageNet models: the 1.5m, 3m, and 5m variants.

ImageNet pretrained weights

14 Apr 15:10
4623818

Initial ImageNet pretrained weights for 1.5m, 3m, 5m and 9m variants can now be downloaded from the assets below.

m2 variants:

| Method | #Params | ImageNet (top-1/top-5) | ImageNet-Real-Labels (top-1/top-5) |
|---|---|---|---|
| simplenetv1_9m_m2 (36 MB) | 9.5m | 74.23/91.748 | 81.22/94.756 |
| simplenetv1_5m_m2 (22 MB) | 5.7m | 72.03/90.324 | 79.328/93.714 |
| simplenetv1_small_m2_075 (12 MB) | 3m | 68.506/88.15 | 76.283/92.02 |
| simplenetv1_small_m2_05 (5 MB) | 1.5m | 61.67/83.488 | 69.31/88.195 |

m1 variants:

| Method | #Params | ImageNet (top-1/top-5) | ImageNet-Real-Labels (top-1/top-5) |
|---|---|---|---|
| simplenetv1_9m_m1 (36 MB) | 9.5m | 73.792/91.486 | 81.196/94.512 |
| simplenetv1_5m_m1 (21 MB) | 5.7m | 71.548/89.94 | 79.076/93.36 |
| simplenetv1_small_m1_075 (12 MB) | 3m | 67.784/87.718 | 75.448/91.69 |
| simplenetv1_small_m1_05 (5 MB) | 1.5m | 61.122/82.988 | 68.58/87.64 |

Note 1

These models were converted from their PyTorch counterparts using ONNX Runtime.
The original models can be accessed from our official PyTorch repository.

Note 2

Please note that since the models were converted from ONNX to Caffe, the mean, std, and crop ratio used are as follows:

```
DEFAULT_CROP_PCT = 0.875
IMAGENET_DEFAULT_MEAN = (0.485, 0.456, 0.406)
IMAGENET_DEFAULT_STD = (0.229, 0.224, 0.225)
```

Also note that images were not channel-swapped during training, so you do not need to perform any channel swap either (keep the original RGB order).
You also DO NOT need to rescale the input to [0-255]; keep it in the [0, 1] range before normalization.
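
As a rough usage sketch, inference with pycaffe might look like the following. This assumes the release assets are a Caffe deploy prototxt plus a caffemodel; the filenames, the "data" input blob name, and the 224x224 input are assumptions, so substitute the actual names from the assets:

```python
import caffe
import numpy as np

# Hypothetical asset filenames; replace with the files attached to this release.
net = caffe.Net("simplenetv1_9m_m2_deploy.prototxt",
                "simplenetv1_9m_m2.caffemodel",
                caffe.TEST)

# `x` is a (1, 3, 224, 224) float32 batch preprocessed as described in Note 2:
# RGB order, values scaled to [0, 1], then normalized with the ImageNet mean/std.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input

net.blobs["data"].reshape(*x.shape)   # assumes the input blob is named "data"
net.blobs["data"].data[...] = x
out = net.forward()

# The output blob name depends on the deploy prototxt; take the first output here.
probs = next(iter(out.values()))
print("predicted class:", int(probs[0].argmax()))
```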