End-to-End Adversarial Text-to-Speech
The authors build a nearly end-to-end, adversarially trained text-to-speech (TTS) synthesis pipeline that maps text directly to audio, producing high-fidelity, natural-sounding speech that approaches the quality of state-of-the-art TTS systems.
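To make the adversarial text-to-waveform idea concrete, here is a heavily simplified PyTorch sketch of one hinge-loss GAN training step. The `TextToWave` and `WaveDiscriminator` modules, their shapes, and all hyper-parameters are illustrative assumptions, not the paper's actual architecture or loss terms.

```python
import torch
import torch.nn as nn

# Hypothetical, heavily simplified stand-ins for a text-to-audio generator
# and an audio discriminator; not the paper's architecture.
class TextToWave(nn.Module):
    """Maps a batch of character IDs to a raw waveform (toy architecture)."""
    def __init__(self, vocab_size=256, channels=64, upsample=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, channels)
        self.net = nn.Sequential(
            nn.Conv1d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=upsample),          # crude text-to-audio upsampling
            nn.Conv1d(channels, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, chars):                            # chars: (B, T_text)
        x = self.embed(chars).transpose(1, 2)            # (B, C, T_text)
        return self.net(x)                               # (B, 1, T_audio)

class WaveDiscriminator(nn.Module):
    """Scores raw-audio clips as real or generated."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, 15, stride=4, padding=7), nn.LeakyReLU(0.2),
            nn.Conv1d(channels, channels, 15, stride=4, padding=7), nn.LeakyReLU(0.2),
            nn.Conv1d(channels, 1, 3, padding=1),
        )

    def forward(self, wave):
        return self.net(wave).mean(dim=(1, 2))           # one score per clip

G, D = TextToWave(), WaveDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

chars = torch.randint(0, 256, (2, 16))                   # dummy character batch
real = torch.randn(2, 1, 16 * 256)                       # dummy ground-truth audio

# Discriminator step (hinge loss).
fake = G(chars).detach()
d_loss = torch.relu(1 - D(real)).mean() + torch.relu(1 + D(fake)).mean()
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step.
g_loss = -D(G(chars)).mean()
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```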
This paper proposes an approximation of self-attention in Transformer architectures whose space and time complexity is linear in the sequence length. On benchmark datasets the resulting models perform comparably to RoBERTa, which relies on the original, far less efficient quadratic attention.
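One common way to obtain linear-complexity attention is a Linformer-style low-rank projection of the sequence (length) dimension of the keys and values; the sketch below assumes that variant, uses illustrative module names and hyper-parameters, and omits multi-head splitting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankSelfAttention(nn.Module):
    """Linformer-style self-attention sketch: project the length dimension of
    keys/values down to a fixed size k, so cost is O(n*k) instead of O(n^2)."""
    def __init__(self, dim, max_len, k=64):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj_k = nn.Linear(max_len, k, bias=False)   # length-dimension projection
        self.proj_v = nn.Linear(max_len, k, bias=False)
        self.scale = dim ** -0.5

    def forward(self, x):                                 # x: (B, n, dim)
        q = self.q(x)
        k, v = self.kv(x).chunk(2, dim=-1)
        # Project the sequence (length) dimension: (B, n, d) -> (B, k, d).
        k = self.proj_k(k.transpose(1, 2)).transpose(1, 2)
        v = self.proj_v(v.transpose(1, 2)).transpose(1, 2)
        attn = F.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)  # (B, n, k)
        return attn @ v                                   # (B, n, dim)

x = torch.randn(2, 512, 256)                              # batch of 512-token sequences
out = LowRankSelfAttention(dim=256, max_len=512, k=64)(x)
print(out.shape)                                          # torch.Size([2, 512, 256])
```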
This paper describes a fully end-to-end object detection system that combines convolutional networks with Transformers, predicting the final set of detections directly rather than relying on hand-designed post-processing. The new model performs on par with Faster R-CNN and generalizes to other tasks such as panoptic segmentation.
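A minimal sketch of the detect-with-Transformers idea: a ResNet backbone produces a feature map, a Transformer decodes a fixed set of learned object queries, and small heads emit class logits and boxes. Positional encodings and the bipartite-matching loss used for training are omitted, and the module names are illustrative rather than the paper's code.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class MinimalDETR(nn.Module):
    """Sketch of a DETR-style detector: CNN features -> Transformer -> a fixed
    set of object queries decoded into class logits and bounding boxes."""
    def __init__(self, num_classes=91, hidden=256, num_queries=100):
        super().__init__()
        backbone = resnet50()                                  # no pretrained weights needed here
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])  # drop pool/fc
        self.input_proj = nn.Conv2d(2048, hidden, 1)
        self.transformer = nn.Transformer(hidden, nhead=8,
                                          num_encoder_layers=6, num_decoder_layers=6)
        self.query_embed = nn.Parameter(torch.randn(num_queries, hidden))
        self.class_head = nn.Linear(hidden, num_classes + 1)   # +1 for "no object"
        self.box_head = nn.Linear(hidden, 4)                   # (cx, cy, w, h)

    def forward(self, images):                                 # images: (B, 3, H, W)
        feats = self.input_proj(self.backbone(images))         # (B, hidden, h, w)
        B = feats.size(0)
        src = feats.flatten(2).permute(2, 0, 1)                # (h*w, B, hidden)
        tgt = self.query_embed.unsqueeze(1).repeat(1, B, 1)    # (num_queries, B, hidden)
        hs = self.transformer(src, tgt)                        # (num_queries, B, hidden)
        return self.class_head(hs), self.box_head(hs).sigmoid()

logits, boxes = MinimalDETR()(torch.randn(1, 3, 256, 256))
print(logits.shape, boxes.shape)                               # (100, 1, 92) and (100, 1, 4)
```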
Contrary to the common belief that query-key self-attention is largely responsible for the superior performance of Transformer models on various NLP tasks, this paper suggests that replacing the dot-product attention weights with random or simply synthesized alignment matrices is sufficient to achieve similar results at lower computational cost.
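A sketch of the "random" variant of this idea, in which the token-to-token alignment matrix is simply an (optionally trainable) parameter rather than a query-key dot product; the class name and sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomSynthesizerAttention(nn.Module):
    """Sketch of 'random synthesizer' attention: the token-token alignment
    matrix is a learned (or fixed) parameter instead of a query-key dot product."""
    def __init__(self, dim, max_len, trainable=True):
        super().__init__()
        self.value = nn.Linear(dim, dim)
        self.attn = nn.Parameter(torch.randn(max_len, max_len), requires_grad=trainable)

    def forward(self, x):                                 # x: (B, n, dim), n <= max_len
        n = x.size(1)
        weights = F.softmax(self.attn[:n, :n], dim=-1)    # no queries or keys involved
        return weights @ self.value(x)                    # (B, n, dim)

x = torch.randn(2, 128, 256)
out = RandomSynthesizerAttention(dim=256, max_len=512)(x)
print(out.shape)                                          # torch.Size([2, 128, 256])
```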
The authors adapt the contrastive loss, recently shown to be very effective for learning deep representations in the self-supervised setting, to fully supervised learning by using label information to pull together embeddings of same-class examples, and achieve better results than the cross-entropy loss with ResNet-50 and ResNet-200.
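A sketch of a supervised contrastive loss under simplifying assumptions: a single embedding per example (the paper uses multiple augmented views per image) and positives defined purely by shared labels; the function name and hyper-parameters are illustrative.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Sketch: embeddings of same-class examples are pulled together; all other
    samples in the batch act as negatives.
    features: (N, d) projection-head outputs, labels: (N,) integer class ids."""
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                          # (N, N) scaled cosine similarities
    n = z.size(0)
    not_self = ~torch.eye(n, dtype=torch.bool)             # exclude self-similarity
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    # Log-probability of each pair under a softmax over all non-self pairs.
    denom = torch.logsumexp(sim.masked_fill(~not_self, float('-inf')), dim=1, keepdim=True)
    log_prob = sim - denom
    # Average over positives for each anchor that has at least one positive.
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0
    loss = -(log_prob * pos_mask.float()).sum(1)[valid] / pos_counts[valid]
    return loss.mean()

features = torch.randn(8, 128, requires_grad=True)
labels = torch.randint(0, 3, (8,))
print(supervised_contrastive_loss(features, labels))
```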
The authors propose a new ResNet-like architecture that incorporates attention across groups of feature maps. In contrast to earlier channel-attention models such as SENet and SKNet, the new block applies the squeeze-and-attention operation separately to each group of channels, which keeps the computation efficient and the implementation simple and modular.
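A sketch of the split-attention idea: the feature map is split into groups, a squeezed global descriptor produces per-split channel weights via a softmax across splits, and the re-weighted splits are summed. The radix, reduction factor, and grouped convolution below are illustrative choices, not the paper's exact block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitAttention(nn.Module):
    """Sketch of a split-attention block: feature maps are split into `radix`
    groups, and channel-wise attention weights computed per split re-weight
    and sum the splits (hyper-parameters illustrative)."""
    def __init__(self, channels, radix=2, reduction=4):
        super().__init__()
        self.radix = radix
        self.conv = nn.Conv2d(channels, channels * radix, 3, padding=1, groups=radix)
        inner = max(channels // reduction, 8)
        self.fc1 = nn.Conv2d(channels, inner, 1)
        self.fc2 = nn.Conv2d(inner, channels * radix, 1)

    def forward(self, x):                                   # x: (B, C, H, W)
        B, C = x.size(0), x.size(1)
        splits = self.conv(x).view(B, self.radix, C, *x.shape[2:])  # (B, r, C, H, W)
        gap = splits.sum(1).mean((2, 3), keepdim=True)      # squeeze: (B, C, 1, 1)
        attn = self.fc2(F.relu(self.fc1(gap)))              # (B, r*C, 1, 1)
        attn = F.softmax(attn.view(B, self.radix, C, 1, 1), dim=1)  # softmax across splits
        return (attn * splits).sum(1)                       # (B, C, H, W)

x = torch.randn(2, 64, 32, 32)
print(SplitAttention(64)(x).shape)                          # torch.Size([2, 64, 32, 32])
```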