Note: for initializing training from checkpoints or pretrained parameters, refer to [Training process](#training-process) for more details.
### 6. Start validation/evaluation.
To evaluate on the validation dataset located at `/imagenet/val`, specify the pretrained weights via `--from-pretrained-params` and set `--run-scope` to `eval_only`.
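As a sketch, an evaluation-only launch could look like the following. The launch script name (`train.py`) and the local weights path are assumptions; only `--from-pretrained-params` and `--run-scope` are documented options.

```shell
# Hypothetical script name and weights path -- adjust to your checkout.
# --from-pretrained-params and --run-scope are the documented flags.
python train.py \
    --from-pretrained-params ./pretrained/resnet50 \
    --run-scope eval_only
```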
Metrics gathered through both training and evaluation:
### Checkpoints
We offer a checkpoint pretrained on the ImageNet dataset with AMP mode. It achieves 77.11% top-1 accuracy on the test dataset. You can download the checkpoint from [ResNet50 checkpoints (PaddlePaddle, AMP, ImageNet)](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/dle/models/resnet_50_paddle_ckpt) and resume training via the instructions in [Training process](#training-process).
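Assuming the checkpoint has been downloaded from NGC into a local directory, initializing training from it might look like the sketch below. The script name and local path are assumptions; see [Training process](#training-process) for the exact resume instructions.

```shell
# Hypothetical script name and download location -- the exact resume
# procedure is described in the Training process section.
python train.py \
    --from-pretrained-params ./resnet_50_paddle_ckpt
```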
### Automatic SParsity training process:
To enable the automatic sparsity training workflow, turn on both `--amp` and `--prune-mode` when launching training. Refer to [Command-line options](#command-line-options) for more details.
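A minimal sparsity-training launch could combine the two flags as below; the script name is an assumption, while `--amp` and `--prune-mode` come directly from the options above.

```shell
# Hypothetical script name; --amp enables mixed precision and
# --prune-mode turns on the automatic sparsity workflow.
python train.py \
    --amp \
    --prune-mode
```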