Other Posts

Understanding Google’s FNet and how it enhances the accuracy of NLP applications

Transformer architectures are widely used in Natural Language Processing applications, but they have a shortcoming: the substantial computational overhead of the self-attention mechanism. Recent research from Google proposes replacing the self-attention sublayers with simple linear transformations. In this article, we’ll see how this is done […]
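The excerpt doesn’t spell out which linear transformations are meant; in the FNet paper (Lee-Thorp et al., 2021), the self-attention sublayer is replaced with an unparameterized 2D discrete Fourier transform, keeping only the real part of the result. As a rough illustration of that token-mixing idea (the function name and shapes below are made up for this sketch, not taken from the post):

import numpy as np

def fnet_mixing(x: np.ndarray) -> np.ndarray:
    """Fourier token mixing in the style of FNet: apply a DFT along
    the hidden dimension, then along the sequence dimension, and keep
    only the real part. No learned parameters are involved.
    """
    # x has shape (seq_len, hidden_dim)
    return np.fft.fft(np.fft.fft(x, axis=-1), axis=-2).real

# Toy usage: 8 tokens with 16-dimensional embeddings.
tokens = np.random.randn(8, 16)
mixed = fnet_mixing(tokens)
print(mixed.shape)  # (8, 16)

Because the Fourier transform has no weights to train, this sublayer is far cheaper than self-attention, which is where the speedup the post refers to comes from.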
