Using Transformers in Hugging Face

Transformer models are usually very large. With millions to tens of billions of parameters, training and deploying these models is a complicated undertaking. Furthermore, with new models being released on a near-daily basis and each having its own implementation, trying them all out is no easy task.

The 🤗 Transformers library was created to solve this problem. Its goal is to provide a single API through which any Transformer model can be loaded, trained, and saved. The library’s main features are:
- Ease of use: Downloading, loading, and using a state-of-the-art NLP model for inference can be done in just two lines of code.
- Flexibility: At their core, all models are simple PyTorch `nn.Module` or TensorFlow `tf.keras.Model` classes and can be handled like any other models in their respective machine learning (ML) frameworks.
- Simplicity: Hardly any abstractions are made across the library. "All in one file" is a core concept: a model's forward pass is entirely defined in a single file, so that the code itself is understandable and hackable.
This last feature makes 🤗 Transformers quite different from other ML libraries. The models are not built on modules that are shared across files; instead, each model has its own layers. In addition to making the models more approachable and understandable, this allows you to easily experiment on one model without affecting others.
This chapter will begin with an end-to-end example where we use a model and a tokenizer together to replicate the `pipeline()` function introduced in Chapter 1. Next, we'll discuss the model API: we'll dive into the model and configuration classes, and show you how to load a model and how it processes numerical inputs to output predictions.

Then we'll look at the tokenizer API, which is the other main component of the `pipeline()` function. Tokenizers take care of the first and last processing steps, handling the conversion from text to numerical inputs for the neural network, and the conversion back to text when it is needed. Finally, we'll show you how to handle sending multiple sentences through a model in a prepared batch, then wrap it all up with a closer look at the high-level `tokenizer()` function.
Let’s start with a complete example, taking a look at what happened behind the scenes when we executed the following code in Chapter 1:

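A minimal reconstruction of that snippet (the two example sentences are inferred from the outputs shown just below) looks like this:

```python
from transformers import pipeline

# Load the default sentiment-analysis pipeline
classifier = pipeline("sentiment-analysis")
classifier(
    [
        "I've been waiting for a HuggingFace course my whole life.",
        "I hate this so much!",
    ]
)
```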
and obtained:
```
[{'label': 'POSITIVE', 'score': 0.9598047137260437},
 {'label': 'NEGATIVE', 'score': 0.9994558095932007}]
```
As we saw in Chapter 1, this pipeline groups together three steps: preprocessing, passing the inputs through the model, and postprocessing.
Let’s quickly go over each of these.
Preprocessing with a tokenizer
Like other neural networks, Transformer models can’t process raw text directly, so the first step of our pipeline is to convert the text inputs into numbers that the model can make sense of. To do this we use a tokenizer, which will be responsible for:
- Splitting the input into words, subwords, or symbols (like punctuation) that are called tokens
- Mapping each token to an integer
- Adding additional inputs that may be useful to the model
All this preprocessing needs to be done in exactly the same way as when the model was pretrained, so we first need to download that information from the Model Hub. To do this, we use the `AutoTokenizer` class and its `from_pretrained()` method. Using the checkpoint name of our model, it will automatically fetch the data associated with the model's tokenizer and cache it (so it's only downloaded the first time you run the code below).

Since the default checkpoint of the sentiment-analysis pipeline is `distilbert-base-uncased-finetuned-sst-2-english` (you can see its model card on the Model Hub), we run the following:
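```python
from transformers import AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
```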

Once we have the tokenizer, we can directly pass our sentences to it and we’ll get back a dictionary that’s ready to feed to our model! The only thing left to do is to convert the list of input IDs to tensors.
You can use 🤗 Transformers without having to worry about which ML framework is used as a backend; it might be PyTorch or TensorFlow, or Flax for some models. However, Transformer models only accept tensors as input. If this is your first time hearing about tensors, you can think of them as NumPy arrays instead. A NumPy array can be a scalar (0D), a vector (1D), a matrix (2D), or have more dimensions. It’s effectively a tensor; other ML frameworks’ tensors behave similarly, and are usually as simple to instantiate as NumPy arrays.
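For instance, all of the following count as tensors in that sense:

```python
import numpy as np

scalar = np.array(3.5)                        # 0D: a single number
vector = np.array([1.0, 2.0, 3.0])            # 1D: a vector
matrix = np.array([[1.0, 2.0], [3.0, 4.0]])   # 2D: a matrix
print(scalar.ndim, vector.ndim, matrix.ndim)  # 0 1 2
```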
To specify the type of tensors we want to get back (PyTorch, TensorFlow, or plain NumPy), we use the `return_tensors` argument:
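```python
# The two example sentences from the pipeline call above
raw_inputs = [
    "I've been waiting for a HuggingFace course my whole life.",
    "I hate this so much!",
]
# padding and truncation are explained later; "pt" requests PyTorch tensors
inputs = tokenizer(raw_inputs, padding=True, truncation=True, return_tensors="pt")
print(inputs)
```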

Don’t worry about padding and truncation just yet; we’ll explain those later. The main things to remember here are that you can pass one sentence or a list of sentences, as well as specifying the type of tensors you want to get back (if no type is passed, you will get a list of lists as a result).
Here’s what the results look like as PyTorch tensors:

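```
{
    'input_ids': tensor([
        [  101,  1045,  1005,  2310,  2042,  3403,  2005,  1037, 17662, 12172,  2607,  2026,  2878,  2166,  1012,   102],
        [  101,  1045,  5223,  2023,  2061,  2172,   999,   102,     0,     0,     0,     0,     0,     0,     0,     0]
    ]),
    'attention_mask': tensor([
        [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
    ])
}
```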
The output itself is a dictionary containing two keys, `input_ids` and `attention_mask`. `input_ids` contains two rows of integers (one for each sentence) that are the unique identifiers of the tokens in each sentence. We'll explain what the `attention_mask` is later in this chapter.
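As a quick sanity check (not part of the pipeline itself), you can turn those IDs back into text with the tokenizer's `decode()` method:

```python
print(tokenizer.decode(inputs["input_ids"][0]))
# [CLS] i've been waiting for a huggingface course my whole life. [SEP]
```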
Going through the model
We can download our pretrained model the same way we did with our tokenizer. 🤗 Transformers provides an `AutoModel` class which also has a `from_pretrained()` method:
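```python
from transformers import AutoModel

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModel.from_pretrained(checkpoint)
```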

In this code snippet, we have downloaded the same checkpoint we used in our pipeline before (it should actually have been cached already) and instantiated a model with it.
This architecture contains only the base Transformer module: given some inputs, it outputs what we’ll call hidden states, also known as features. For each model input, we’ll retrieve a high-dimensional vector representing the contextual understanding of that input by the Transformer model.
If this doesn’t make sense, don’t worry about it. We’ll explain it all later.
While these hidden states can be useful on their own, they’re usually inputs to another part of the model, known as the head. In Chapter 1, the different tasks could have been performed with the same architecture, but each of these tasks will have a different head associated with it.
A high-dimensional vector?
The vector output by the Transformer module is usually large. It generally has three dimensions:
- Batch size: The number of sequences processed at a time (2 in our example).
- Sequence length: The length of the numerical representation of the sequence (16 in our example).
- Hidden size: The vector dimension of each model input.
It is said to be “high dimensional” because of the last value. The hidden size can be very large (768 is common for smaller models, and in larger models this can reach 3072 or more).
We can see this if we feed the inputs we preprocessed to our model:
```python
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```

```
torch.Size([2, 16, 768])
```
Note that the outputs of 🤗 Transformers models behave like namedtuples or dictionaries. You can access the elements by attributes (like we did) or by key (`outputs["last_hidden_state"]`), or even by index if you know exactly where the thing you are looking for is (`outputs[0]`).
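For example, all three of these point at the same tensor:

```python
hidden_states = outputs.last_hidden_state       # by attribute
hidden_states = outputs["last_hidden_state"]    # by key
hidden_states = outputs[0]                      # by index
```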
Model heads: Making sense out of numbers
The model heads take the high-dimensional vectors of hidden states as input and project them onto a different dimension. They are usually composed of one or a few linear layers.
The output of the Transformer model is sent directly to the model head to be processed.
In this diagram, the model is represented by its embeddings layer and the subsequent layers. The embeddings layer converts each input ID in the tokenized input into a vector that represents the associated token. The subsequent layers manipulate those vectors using the attention mechanism to produce the final representation of the sentences.
There are many different architectures available in 🤗 Transformers, with each one designed around tackling a specific task. Here is a non-exhaustive list:
- `*Model` (retrieve the hidden states)
- `*ForCausalLM`
- `*ForMaskedLM`
- `*ForMultipleChoice`
- `*ForQuestionAnswering`
- `*ForSequenceClassification`
- `*ForTokenClassification`
- and others 🤗
For our example, we will need a model with a sequence classification head (to be able to classify the sentences as positive or negative). So, we won't actually use the `AutoModel` class, but `AutoModelForSequenceClassification`:
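```python
from transformers import AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
outputs = model(**inputs)
```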

Now if we look at the shape of our outputs, the dimensionality will be much lower: the model head takes as input the high-dimensional vectors we saw before, and outputs vectors containing two values (one per label):
```python
print(outputs.logits.shape)
```

```
torch.Size([2, 2])
```
Since we have just two sentences and two labels, the result we get from our model is of shape 2 x 2.
Postprocessing the output
The values we get as output from our model don’t necessarily make sense by themselves. Let’s take a look:
```python
print(outputs.logits)
```

```
tensor([[-1.5607,  1.6123],
        [ 4.1692, -3.3464]], grad_fn=<AddmmBackward>)
```
Our model predicted `[-1.5607, 1.6123]` for the first sentence and `[4.1692, -3.3464]` for the second one. Those are not probabilities but logits, the raw, unnormalized scores output by the last layer of the model. To be converted to probabilities, they need to go through a SoftMax layer (all 🤗 Transformers models output the logits, as the loss function for training will generally fuse the last activation function, such as SoftMax, with the actual loss function, such as cross entropy):
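```python
import torch

# SoftMax over the last dimension turns the logits into probabilities
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
print(predictions)
```

which prints something like:

```
tensor([[4.0195e-02, 9.5980e-01],
        [9.9946e-01, 5.4418e-04]], grad_fn=<SoftmaxBackward>)
```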


Now we can see that the model predicted `[0.0402, 0.9598]` for the first sentence and `[0.9995, 0.0005]` for the second one. These are recognizable probability scores.
To get the labels corresponding to each position, we can inspect the `id2label` attribute of the model config (more on this in the next section):

```python
model.config.id2label
```

```
{0: 'NEGATIVE', 1: 'POSITIVE'}
```
Now we can conclude that the model predicted the following:
- First sentence: NEGATIVE: 0.0402, POSITIVE: 0.9598
- Second sentence: NEGATIVE: 0.9995, POSITIVE: 0.0005
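If you want to produce that summary programmatically, a small loop over `predictions` and `id2label` does it (this helper loop is our own addition, not part of the pipeline):

```python
# Pair each probability with its label for every sentence
for scores in predictions:
    print({model.config.id2label[i]: round(float(p), 4) for i, p in enumerate(scores)})
# {'NEGATIVE': 0.0402, 'POSITIVE': 0.9598}
# {'NEGATIVE': 0.9995, 'POSITIVE': 0.0005}
```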
We have successfully reproduced the three steps of the pipeline: preprocessing with tokenizers, passing the inputs through the model, and postprocessing! Now let’s take some time to dive deeper into each of those steps.