End-to-End Object Detection with Transformers

Review of paper by Nicolas Carion, Francisco Massa, Gabriel Synnaeve et al., Facebook AI Research, 2020

This paper presents DETR, an end-to-end object detection system that combines a convolutional backbone with a Transformer encoder-decoder and predicts the final set of detections directly, eliminating hand-designed components such as anchor generation and non-maximum suppression. The new model performs on par with the well-established Faster R-CNN baseline and generalizes to other tasks such as panoptic segmentation.
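
As a concrete illustration, here is a minimal PyTorch sketch of a DETR-style forward pass, not the authors' implementation: a ResNet-50 backbone feeds flattened image features into a Transformer encoder-decoder, and a fixed set of learned object queries is decoded in parallel into class and box predictions. Positional encodings and the Hungarian set-matching loss are omitted, and the hyperparameter defaults shown are assumptions based on the paper.

```python
# Minimal DETR-style model sketch (positional encodings and the
# bipartite-matching loss are omitted; hyperparameters are assumed).
import torch
from torch import nn
from torchvision.models import resnet50

class DETRSketch(nn.Module):
    def __init__(self, num_classes, hidden_dim=256, num_heads=8,
                 num_enc_layers=6, num_dec_layers=6, num_queries=100):
        super().__init__()
        backbone = resnet50(weights=None)
        # Keep the convolutional trunk, drop average pooling and the classifier
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.proj = nn.Conv2d(2048, hidden_dim, kernel_size=1)
        self.transformer = nn.Transformer(hidden_dim, num_heads,
                                          num_enc_layers, num_dec_layers)
        # Learned object queries: one slot per potential detection
        self.query_embed = nn.Parameter(torch.rand(num_queries, hidden_dim))
        self.class_head = nn.Linear(hidden_dim, num_classes + 1)  # +1: "no object"
        self.bbox_head = nn.Linear(hidden_dim, 4)  # (cx, cy, w, h), normalized

    def forward(self, images):                    # images: (B, 3, H, W)
        feats = self.proj(self.backbone(images))  # (B, D, H/32, W/32)
        src = feats.flatten(2).permute(2, 0, 1)   # (S, B, D) token sequence
        tgt = self.query_embed.unsqueeze(1).expand(-1, images.size(0), -1)
        hs = self.transformer(src, tgt)           # (num_queries, B, D)
        return self.class_head(hs), self.bbox_head(hs).sigmoid()
```

Because each query yields exactly one prediction, the output set can be matched one-to-one to ground-truth boxes during training, which is what removes the need for non-maximum suppression.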

Synthesizer: Rethinking Self-Attention in Transformer Models

Review of paper by Yi Tay, Dara Bahri, Donald Metzler et al., Google Research, 2020

Contrary to the common belief that query-key dot-product self-attention is largely responsible for the strong performance of Transformer models on NLP tasks, this paper shows that replacing the computed attention weights with learned random or simply synthesized alignment matrices achieves competitive results at lower computational cost.
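
To make the idea concrete, below is a minimal sketch of the paper's "Random Synthesizer" variant (module and dimension names are illustrative): the attention weights are a freely learned parameter over token positions, independent of the input, and can optionally be frozen after random initialization (the "fixed random" variant).

```python
# Sketch of a Random Synthesizer head: attention weights are learned
# directly as a parameter and never computed from queries and keys.
import torch
from torch import nn
import torch.nn.functional as F

class RandomSynthesizerHead(nn.Module):
    def __init__(self, max_len, d_model, trainable=True):
        super().__init__()
        # Input-independent attention logits over token positions;
        # set trainable=False for the "fixed random" variant.
        self.attn_logits = nn.Parameter(torch.randn(max_len, max_len),
                                        requires_grad=trainable)
        self.value = nn.Linear(d_model, d_model)

    def forward(self, x):                    # x: (B, seq_len, d_model)
        n = x.size(1)
        attn = F.softmax(self.attn_logits[:n, :n], dim=-1)
        return attn @ self.value(x)          # same mixing weights for every example
```

Since the mixing weights do not depend on the tokens, the query and key projections and the (n × n) dot product disappear entirely, which is the source of the efficiency gain; the paper's Dense Synthesizer variant instead predicts the weights from each token with a small feed-forward network.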

ResNeSt: Split-Attention Networks

Review of paper by Hang Zhang, Chongruo Wu, Zhongyue Zhang et al., Amazon and UC Davis, 2020

The authors propose ResNeSt, a ResNet-like network architecture that incorporates attention across groups of feature maps. Compared to previous attention-based models such as SENet and SKNet, the new Split-Attention block applies a squeeze-and-excitation-style attention operation separately to each group of feature maps, which is computationally efficient and implemented as a simple modular structure.
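
Below is a simplified sketch of the Split-Attention idea, restricted to radix splits with cardinality 1 (the reduction factor and layer shapes are illustrative, not the paper's exact configuration): several splits of feature maps are produced, a squeeze-and-excitation-style gate computes a per-channel softmax across the splits, and the splits are fused as a weighted sum.

```python
# Simplified Split-Attention block (radix splits only, cardinality 1;
# shapes and the reduction factor are illustrative).
import torch
from torch import nn
import torch.nn.functional as F

class SplitAttention(nn.Module):
    def __init__(self, channels, radix=2, reduction=4):
        super().__init__()
        self.radix = radix
        # Produce `radix` splits of feature maps from the input
        self.conv = nn.Conv2d(channels, channels * radix, 3, padding=1, bias=False)
        inter = max(channels // reduction, 32)
        self.fc1 = nn.Conv2d(channels, inter, 1)          # squeeze
        self.fc2 = nn.Conv2d(inter, channels * radix, 1)  # excite: one gate per split

    def forward(self, x):                                 # x: (B, C, H, W)
        B, C, _, _ = x.shape
        splits = self.conv(x).view(B, self.radix, C, *x.shape[2:])
        gap = splits.sum(dim=1).mean(dim=(2, 3), keepdim=True)      # (B, C, 1, 1)
        attn = self.fc2(F.relu(self.fc1(gap)))                      # (B, radix*C, 1, 1)
        attn = F.softmax(attn.view(B, self.radix, C, 1, 1), dim=1)  # across splits
        return (attn * splits).sum(dim=1)                 # fused output: (B, C, H, W)
```

In the full block the splits are additionally organized into cardinal groups as in ResNeXt, with attention computed within each cardinal group, so this sketch corresponds to the cardinality-1 case.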
