From the course: Hugging Face Transformers: Introduction to Pretrained Models


Summarization with pipelines


- [Instructor] The code examples for this chapter are in the notebook Code_02_XX Text Summarization. Let's open the notebook now. For summarization, we use an input text as shown here. We also remove the line feed characters before doing summarization. In real use cases, input text may need further preprocessing to strip out formatting. For summarization, Hugging Face provides a predefined pipeline called summarization. We initialize the pipeline first. By default, it performs abstractive summarization with a BART-based model. We can also set the minimum and maximum length of the desired summary. Executing the summarizer produces output that we can print to the console. On running the code, the output is printed. As usual, if the pretrained model is not cached locally, it is downloaded from the Hugging Face Hub. The result shows three lines condensed from the original text. We can also print the model checkpoint used for summarization. This model uses BART for conditional generation…
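The steps narrated above can be sketched in Python. This is a minimal reconstruction, not the notebook's exact code: the sample text is invented for illustration, and the length limits are arbitrary placeholder values. It assumes the `transformers` library is installed; the default summarization checkpoint is downloaded on first run.

```python
# Minimal sketch of the notebook's summarization steps (assumed, not verbatim).
from transformers import pipeline

# Sample input text; the real notebook uses a longer passage loaded separately.
text = (
    "Hugging Face provides pipelines that wrap pretrained models behind a "
    "simple API.\nThe summarization pipeline condenses long passages into "
    "a few sentences.\nUnder the hood it defaults to a BART-based "
    "sequence-to-sequence model.\nIf the model is not cached locally, it "
    "is downloaded from the Hugging Face Hub on first use."
)

# Remove line feed characters before summarizing, as in the transcript.
text = text.replace("\n", " ")

# Initialize the predefined "summarization" pipeline.
summarizer = pipeline("summarization")

# Constrain the summary length (measured in tokens, not characters).
result = summarizer(text, min_length=5, max_length=40)
print(result[0]["summary_text"])

# Inspect which checkpoint and model class the pipeline loaded.
print(summarizer.model.config._name_or_path)
print(type(summarizer.model).__name__)  # a BART conditional-generation model
```

The pipeline returns a list with one dictionary per input, each holding a `summary_text` key, which is why the code indexes `result[0]`.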
