Deep Residual Learning for Image Recognition

Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.

The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.