
A Gentle Introduction to Positional Encoding in Transformer Models, Part 1

Last Updated on March 2, 2022

In languages, the order of the words and their position in a sentence really matter. The meaning of the entire sentence can change if the words are re-ordered. When implementing NLP solutions, recurrent neural networks have an inbuilt mechanism that deals with the order of sequences. The transformer model, however, does not use recurrence or convolution and treats each data point as independent of the others. Hence, positional information is added to the model explicitly to retain the information regarding the order of words in a sentence. Positional encoding is the scheme through which the knowledge of the order of objects in a sequence is maintained.

For this tutorial, we will simplify the notations used in the paper Attention Is All You Need by Vaswani et al. After completing this tutorial, you will know:

  • What positional encoding is and why it is important
  • Positional encoding in transformers
  • How to code and visualize a positional encoding matrix in Python using NumPy

Let's get started.

A Gentle Introduction to Positional Encoding in Transformer Models
Photo by Muhammad Murtaza Ghani on Unsplash, some rights reserved

Tutorial Overview

This tutorial is divided into four parts; they are:

  1. What is positional encoding
  2. Mathematics behind positional encoding in transformers
  3. Implementing the positional encoding matrix using NumPy
  4. Understanding and visualizing the positional encoding matrix

What Is Positional Encoding?

Positional encoding describes the location or position of an entity in a sequence so that each position is assigned a unique representation. There are many reasons why a single number, such as the index value, is not used to represent an item's position in transformer models. For long sequences, the indices can grow large in magnitude. If you normalize the index value to lie between 0 and 1, it can create problems for variable-length sequences, as they would be normalized differently.

Transformers use a smart positional encoding scheme, where each position/index is mapped to a vector. Hence, the output of the positional encoding layer is a matrix, where each row of the matrix represents an encoded object of the sequence summed with its positional information. An example of a matrix that encodes only the positional information is shown in the figure below.

A Quick Run-Through of the Trigonometric Sine Function

This is a quick recap of sine functions; you can work equivalently with cosine functions. The function's range is [-1, +1]. The frequency of this waveform is the number of cycles completed in one second. The wavelength is the distance over which the waveform repeats itself. The wavelength and frequency for different waveforms are shown below:
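The original waveform figure was not preserved in this copy; the following NumPy sketch generates two such sine waves (the wavelengths 1 and 4 are arbitrary choices for illustration, not values from the original figure):

```python
import numpy as np

# sin(2*pi*x / wavelength): the smaller the wavelength, the higher the frequency
x = np.linspace(0, 4, 400)
wave_short = np.sin(2 * np.pi * x / 1.0)  # wavelength 1: four full cycles on [0, 4]
wave_long = np.sin(2 * np.pi * x / 4.0)   # wavelength 4: one full cycle on [0, 4]
print(wave_short.shape)  # (400,) samples, all within [-1, +1]
```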

Positional Encoding Layer in Transformers

Let's dive straight into this. Suppose we have an input sequence of length $L$ and we require the position of the $k^{th}$ object within this sequence. The positional encoding is given by sine and cosine functions of varying frequencies:

$$
\begin{aligned}
P(k, 2i) &= \sin\Big(\frac{k}{n^{2i/d}}\Big)\\
P(k, 2i+1) &= \cos\Big(\frac{k}{n^{2i/d}}\Big)
\end{aligned}
$$

Here:

$k$: Position of an object in the input sequence, $0 \leq k < L$

$d$: Dimension of the output embedding space

$P(k, j)$: Position function for mapping a position $k$ in the input sequence to index $(k, j)$ of the positional matrix

$n$: User-defined scalar, set to 10,000 by the authors of Attention Is All You Need

$i$: Used for mapping to column indices $0 \leq i < d/2$. A single value of $i$ maps to both the sine and cosine functions

In the above expression, we can see that even positions correspond to the sine function and odd positions correspond to the cosine function.


To understand the above expression, let's take the example of the phrase "I am a robot," with n=100 and d=4. The following table shows the positional encoding matrix for this phrase. In fact, the positional encoding matrix would be the same for any four-word phrase with n=100 and d=4.
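The table itself did not survive in this copy, but it can be reconstructed directly from the two formulas above (rows are positions $k = 0, \dots, 3$; columns alternate sine and cosine for $i = 0$ and $i = 1$, so the arguments are $k$ and $k/\sqrt{100} = k/10$):

$$
P = \begin{bmatrix}
\sin(0) & \cos(0) & \sin(0) & \cos(0)\\
\sin(1) & \cos(1) & \sin(0.1) & \cos(0.1)\\
\sin(2) & \cos(2) & \sin(0.2) & \cos(0.2)\\
\sin(3) & \cos(3) & \sin(0.3) & \cos(0.3)
\end{bmatrix}
\approx
\begin{bmatrix}
0 & 1 & 0 & 1\\
0.841 & 0.540 & 0.100 & 0.995\\
0.909 & -0.416 & 0.199 & 0.980\\
0.141 & -0.990 & 0.296 & 0.955
\end{bmatrix}
$$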

Coding the Positional Encoding Matrix From Scratch

Here is a short Python code to implement positional encoding using NumPy. The code is simplified to make the understanding of positional encoding easier.
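The original listing was not preserved in this copy; below is a minimal NumPy sketch consistent with the formulas above (the function name `get_position_encoding` is this sketch's own, not necessarily the original post's):

```python
import numpy as np

def get_position_encoding(seq_len, d, n=10000):
    """Return a (seq_len, d) matrix of sinusoidal positional encodings."""
    P = np.zeros((seq_len, d))
    for k in range(seq_len):            # position in the sequence
        for i in range(d // 2):         # one (sine, cosine) pair per i
            angle = k / n ** (2 * i / d)
            P[k, 2 * i] = np.sin(angle)      # even columns: sine
            P[k, 2 * i + 1] = np.cos(angle)  # odd columns: cosine
    return P

# The "I am a robot" example: four positions, d=4, n=100
P = get_position_encoding(seq_len=4, d=4, n=100)
print(P)
```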

Understanding the Positional Encoding Matrix

To understand the positional encoding, let's start by looking at the sine wave for different positions with n=10,000 and d=512.
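The plotting code also did not survive extraction; a sketch along these lines reproduces the idea, plotting the sine component across dimension-pair indices $i$ for a few fixed positions $k$ (the particular $k$ values and file name are this sketch's own choices):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

n, d = 10000, 512
x = np.arange(100)           # dimension-pair index i
for k in [0, 4, 8, 12]:      # a few fixed positions in the sequence
    y = np.sin(k / n ** (2 * x / d))
    plt.plot(x, y, label=f"k = {k}")
plt.xlabel("i")
plt.ylabel("P(k, 2i)")
plt.legend()
plt.savefig("position_sinusoids.png")
```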

The following figure is the output of the above code:

Sine wave for varying position indices

We can see that each position $k$ corresponds to a different sinusoid, which encodes a single position into a vector. If we look closely at the positional encoding function, we can see that the wavelength for a fixed $i$ is given by:

$$\lambda_{i} = 2 \pi n^{2i/d}$$

Hence, the wavelengths of the sinusoids form a geometric progression and vary from $2\pi$ to $2\pi n$. This scheme for positional encoding has a number of advantages:

  1. The sine and cosine functions have values in [-1, 1], which keeps the values of the positional encoding matrix in a normalized range.
  2. As the sinusoid for each position is different, we have a unique way of encoding each position.
  3. We have a way of measuring or quantifying the similarity between different positions, hence enabling us to encode the relative positions of words.

Visualizing the Positional Matrix

Let's visualize the positional matrix for larger values. We will use Python's matshow() method from the matplotlib library. Setting n=10,000 as done in the original paper, we get the following:
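A sketch of that visualization, assuming the same encoding function defined earlier (the function and file names are this sketch's own):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

def get_position_encoding(seq_len, d, n=10000):
    P = np.zeros((seq_len, d))
    for k in range(seq_len):
        for i in range(d // 2):
            angle = k / n ** (2 * i / d)
            P[k, 2 * i] = np.sin(angle)
            P[k, 2 * i + 1] = np.cos(angle)
    return P

P = get_position_encoding(seq_len=100, d=512, n=10000)
plt.matshow(P)   # rows: positions k; columns: encoding dimensions
plt.colorbar()
plt.savefig("positional_matrix.png")
```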

The positional encoding matrix for n=10,000, d=512, sequence length=100

What Is the Final Output of the Positional Encoding Layer?

The positional encoding layer sums the positional vector with the word embedding and outputs this matrix for the subsequent layers. The entire process is shown below.
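As a sketch of that final step (random numbers stand in for a learned word-embedding matrix, which is an assumption of this example):

```python
import numpy as np

seq_len, d, n = 4, 4, 100
rng = np.random.default_rng(0)

# Stand-in word embeddings; in a real transformer these are learned.
word_embeddings = rng.normal(size=(seq_len, d))

# Positional encodings from the sine/cosine scheme described above.
P = np.zeros((seq_len, d))
for k in range(seq_len):
    for i in range(d // 2):
        angle = k / n ** (2 * i / d)
        P[k, 2 * i] = np.sin(angle)
        P[k, 2 * i + 1] = np.cos(angle)

# The layer's output: an element-wise sum, passed on to the next layer.
output = word_embeddings + P
print(output.shape)  # (4, 4)
```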

The positional encoding layer in the transformer

Further Reading

This section provides more resources on the topic if you are looking to go deeper.





Summary

In this tutorial, you discovered positional encoding in transformers.

Specifically, you learned:

  • What positional encoding is and why it is needed
  • How to implement positional encoding in Python using NumPy
  • How to visualize the positional encoding matrix

Do you have any questions about positional encoding discussed in this post? Ask your questions in the comments below, and I will do my best to answer.


