User profile

malone aston

digital nomad
Joined March 2022
96 Contributions

Activity
Wide Residual Networks
was edited by malone aston
April 6, 2022 7:29 am

Wide Residual Networks

In this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts.
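The depth-for-width trade can be made concrete with a back-of-the-envelope parameter count (a hypothetical illustration; `block_params` is not from the paper's repository):

```python
def block_params(channels: int, k: int) -> int:
    """Parameters in a WRN-style residual block of two 3x3 convolutions,
    with the channel count widened by a factor k (biases omitted)."""
    w = channels * k  # widened channel count
    return 2 * (3 * 3 * w * w)

# Widening by k multiplies the block's parameters by roughly k**2,
# so a shallower-but-wider network can match the capacity of a much
# deeper thin one while being friendlier to parallel hardware.
```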

Infobox
License
BSD-2-Clause License
Related technology
Repository
https://github.com/szagoruyko/wide-residual-networks
Website
https://arxiv.org/abs/1605.07146v4
https://arxiv.org/pdf/1605.07146v4.pdf
Timeline (+1 event) (+22 characters)

May 23, 2016

Wide Residual Networks
Wide Residual Networks
was created by malone aston
"Created via: Web app"
April 6, 2022 7:27 am

Wide Residual Networks

In this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts.

mixup: Beyond Empirical Risk Minimization
was edited by malone aston
April 6, 2022 7:27 am

mixup: Beyond Empirical Risk Minimization

In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels.
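The principle is simple enough to sketch in a few lines of NumPy (an illustrative implementation, not the code from the linked repository):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mix two training examples and their one-hot labels convexly."""
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing weight drawn from Beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2     # virtual training example
    y = lam * y1 + (1.0 - lam) * y2     # correspondingly mixed (soft) label
    return x, y
```

Training on such virtual examples regularizes the network to behave linearly between training points.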

Infobox
License
Attribution-NonCommercial 4.0 International
Related technology
Repository
https://github.com/facebookresearch/mixup-cifar10
Website
https://arxiv.org/abs/1710.09412v2
https://arxiv.org/pdf/1710.09412v2.pdf
https://openreview.net/forum?id=r1Ddp1-Rb
https://openreview.net/pdf?id=r1Ddp1-Rb
Timeline (+1 event) (+41 characters)

October 25, 2017

mixup: Beyond Empirical Risk Minimization
mixup: Beyond Empirical Risk Minimization
was created by malone aston
"Created via: Web app"
April 6, 2022 7:24 am

mixup: Beyond Empirical Risk Minimization

In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels.

Squeeze-and-Excitation Networks
was edited by malone aston
April 6, 2022 7:24 am

Squeeze-and-Excitation Networks

In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels.
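A minimal NumPy sketch of the squeeze-and-excitation idea, assuming a (C, H, W) feature map and two small fully connected weight matrices (all names are illustrative):

```python
import numpy as np

def se_block(x, w1, w2):
    """Recalibrate channels of a (C, H, W) feature map.
    w1: (C//r, C) bottleneck weights, w2: (C, C//r) expansion weights."""
    z = x.mean(axis=(1, 2))                    # squeeze: global average pooling -> (C,)
    s = np.maximum(w1 @ z, 0.0)                # excitation: bottleneck FC + ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ s)))    # FC + sigmoid -> per-channel gate in (0, 1)
    return x * gates[:, None, None]            # scale each channel by its gate
```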

Infobox
License
Apache-2.0 License
Parent industry
Image Classification
Related technology
Repository
https://github.com/hujie-frank/SENet
Website
http://openaccess.thecvf.com/content_cvpr_2018/html/Hu_Squeeze-and-Excitation_Networks_CVPR_2018_paper.html
http://openaccess.thecvf.com/content_cvpr_2018/papers/Hu_Squeeze-and-Excitation_Networks_CVPR_2018_paper.pdf
https://arxiv.org/abs/1709.01507v4
https://arxiv.org/pdf/1709.01507v4.pdf
Timeline (+1 event) (+31 characters)

September 5, 2017

Squeeze-and-Excitation Networks
Squeeze-and-Excitation Networks
was created by malone aston
"Created via: Web app"
April 6, 2022 7:22 am

Squeeze-and-Excitation Networks

In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels.

Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
was edited by malone aston
April 6, 2022 7:22 am

Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation

In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries.
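The atrous (dilated) convolution the DeepLab family is built on can be sketched in one dimension (an illustrative NumPy version, not the paper's implementation):

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1-D atrous convolution: kernel taps are spaced `rate` apart,
    enlarging the receptive field without adding parameters."""
    k = len(kernel)
    span = (k - 1) * rate + 1                 # effective receptive field
    out = np.zeros(len(x) - span + 1)
    for i in range(len(out)):
        taps = x[i : i + span : rate]         # sample the input "with holes"
        out[i] = np.dot(taps, kernel)
    return out
```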

Infobox
Parent industry
Deep learning
Related technology
Website
http://openaccess.thecvf.com/content_ECCV_2018/html/Liang-Chieh_Chen_Encoder-Decoder_with_Atrous_ECCV_2018_paper.html
http://openaccess.thecvf.com/content_ECCV_2018/papers/Liang-Chieh_Chen_Encoder-Decoder_with_Atrous_ECCV_2018_paper.pdf
https://arxiv.org/abs/1802.02611v3
https://arxiv.org/pdf/1802.02611v3.pdf
Timeline (+1 event) (+81 characters)

February 7, 2018

Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
was created by malone aston
"Created via: Web app"
April 6, 2022 7:15 am

Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation

In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries.

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
was edited by malone aston
April 6, 2022 7:14 am

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization.
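The per-mini-batch normalization can be sketched directly from this description (an illustrative NumPy version with learnable scale `gamma` and shift `beta`):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift.
    x: (batch, features); gamma, beta: (features,)."""
    mu = x.mean(axis=0)                   # per-feature batch mean
    var = x.var(axis=0)                   # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps) # zero-mean, unit-variance activations
    return gamma * x_hat + beta           # restore representational freedom
```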

Infobox
Related technology
General Classification
Website
https://arxiv.org/abs/1502.03167v3
https://arxiv.org/pdf/1502.03167v3.pdf
Timeline (+1 event) (+92 characters)

February 11, 2015

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
was created by malone aston
"Created via: Web app"
April 6, 2022 7:12 am

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization.

Going Deeper with Convolutions
was edited by malone aston
April 6, 2022 7:12 am

Going Deeper with Convolutions

We propose a deep convolutional neural network architecture codenamed "Inception", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014).
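The defining move of an Inception module, parallel branches concatenated along the channel axis, can be sketched with 1x1 convolutions standing in for the full branches (illustrative only):

```python
import numpy as np

def conv1x1(x, weight):
    """Pointwise (1x1) convolution on a (C, H, W) map: a per-pixel channel mix."""
    c, h, w = x.shape
    return (weight @ x.reshape(c, -1)).reshape(weight.shape[0], h, w)

def inception_module(x, w_a, w_b):
    """Run parallel branches on the same input, concatenate along channels."""
    branch_a = conv1x1(x, w_a)   # stand-in for the 1x1 branch
    branch_b = conv1x1(x, w_b)   # stand-in for a dimension-reduced 3x3/5x5 branch
    return np.concatenate([branch_a, branch_b], axis=0)
```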

Infobox
Related technology
General Classification
Object Recognition
Repository
https://worksheets.codalab.org/worksheets/0xbcd424d2bf544c4786efcc0063759b1a
Child industry
Image Classification
Website
http://openaccess.thecvf.com/content_cvpr_2015/html/Szegedy_Going_Deeper_With_2015_CVPR_paper.html
http://openaccess.thecvf.com/content_cvpr_2015/papers/Szegedy_Going_Deeper_With_2015_CVPR_paper.pdf
https://arxiv.org/abs/1409.4842v1
https://arxiv.org/pdf/1409.4842v1.pdf
Timeline (+1 event) (+30 characters)

September 17, 2014

Going Deeper with Convolutions
Going Deeper with Convolutions
was created by malone aston
"Created via: Web app"
April 6, 2022 7:09 am

Going Deeper with Convolutions

We propose a deep convolutional neural network architecture codenamed "Inception", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014).

Image Classification
was edited by malone aston
March 22, 2022 9:07 am
Infobox
Parent industry
Computer Vision
Image Classification
was edited by malone aston
March 22, 2022 9:05 am
Infobox
Child industry
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
Deep Residual Learning for Image Recognition
Very Deep Convolutional Networks for Large-Scale Image Recognition
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Densely Connected Convolutional Networks
CSPNet: A New Backbone that can Enhance Learning Capability of CNN
MobileNetV2: Inverted Residuals and Linear Bottlenecks
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Rethinking the Inception Architecture for Computer Vision
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Dynamic Routing Between Capsules
Dynamic Routing Between Capsules
was edited by malone aston
March 22, 2022 9:04 am

Dynamic Routing Between Capsules

We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits.
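One concrete piece of the capsule machinery is the squashing nonlinearity, which preserves a capsule vector's direction while mapping its length into [0, 1) so that length can represent probability (an illustrative NumPy sketch):

```python
import numpy as np

def squash(v, eps=1e-8):
    """Capsule nonlinearity: short vectors shrink toward zero length,
    long vectors approach unit length, direction is preserved."""
    norm2 = np.sum(v * v, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * v / np.sqrt(norm2 + eps)
```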

Infobox
Related technology
Website
http://papers.nips.cc/paper/6975-dynamic-routing-between-capsules
http://papers.nips.cc/paper/6975-dynamic-routing-between-capsules.pdf
https://arxiv.org/abs/1710.09829v2
https://arxiv.org/pdf/1710.09829v2.pdf
Timeline (+1 event) (+32 characters)

October 26, 2017

Dynamic Routing Between Capsules
Dynamic Routing Between Capsules
was created by malone aston
"Created via: Web app"
March 22, 2022 9:02 am

Dynamic Routing Between Capsules

We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits.

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
was edited by malone aston
March 22, 2022 9:02 am

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks.
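Turning an image into the patch sequence a transformer consumes can be sketched as follows (illustrative; real implementations add a linear projection and position embeddings on top):

```python
import numpy as np

def to_patches(img, p):
    """Split an (H, W, C) image into a sequence of flattened p x p patches."""
    h, w, c = img.shape
    rows = [img[i:i + p, j:j + p].reshape(-1)
            for i in range(0, h, p)
            for j in range(0, w, p)]
    return np.stack(rows)   # (num_patches, p * p * c)
```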

Infobox
Related technology
Document Image Classification
Website
https://arxiv.org/abs/2010.11929v2
https://arxiv.org/pdf/2010.11929v2.pdf
https://openreview.net/forum?id=YicbFdNTTy
https://openreview.net/pdf?id=YicbFdNTTy
Timeline (+1 event) (+74 characters)

October 22, 2020

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Document Image Classification
was created by malone aston
"Created via: Web app"
March 22, 2022 9:01 am

Document Image Classification

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
was created by malone aston
"Created via: Web app"
March 22, 2022 9:00 am

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks.

Rethinking the Inception Architecture for Computer Vision
was edited by malone aston
March 22, 2022 9:00 am

Rethinking the Inception Architecture for Computer Vision

Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization.
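The parameter savings from factorized convolutions are easy to check with simple arithmetic (a hypothetical sketch; `conv_params` is not from the paper):

```python
def conv_params(kh, kw, cin, cout):
    """Weights in a single kh x kw convolution layer (biases omitted)."""
    return kh * kw * cin * cout

# Factorizing one 5x5 convolution into two stacked 3x3 convolutions
# keeps the same 5x5 receptive field but needs only 18/25 of the
# parameters, a cut of roughly 28%.
full = conv_params(5, 5, 64, 64)
factored = 2 * conv_params(3, 3, 64, 64)
```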

Infobox
Related technology
Retinal OCT Disease Classification
Website
http://openaccess.thecvf.com/content_cvpr_2016/html/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.html
http://openaccess.thecvf.com/content_cvpr_2016/papers/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.pdf
https://arxiv.org/abs/1512.00567v3
https://arxiv.org/pdf/1512.00567v3.pdf
Timeline (+1 event) (+57 characters)

December 2, 2015

Rethinking the Inception Architecture for Computer Vision