Break up language strings into parts using Natural

Hannah Davis

Published 6 years ago
Updated 3 years ago

A part of Natural Language Processing (NLP) is processing text by “tokenizing” language strings. This means we can break up a string of text into parts by word, sentence, etc. In this lesson, we will use the natural library to tokenize a string. First, we will break the string into words using WordTokenizer, WordPunctTokenizer, and TreebankWordTokenizer. Then we will break the string into sentences using RegexpTokenizer.

[00:00] First, import the natural library. We'll also make a test string here. To create a new tokenizer, the syntax is new natural.WordTokenizer(). From there, all we need to do is call tokenizer.tokenize on our string.

[00:21] WordTokenizer splits text by spaces and punctuation. Note that contractions are split on their apostrophes. WordTokenizer also discards the punctuation. If you want to retain the punctuation, you can use another tokenizer called WordPunctTokenizer.

[00:47] This will retain the punctuation, placing it in its own tokens. Natural also has a TreebankWordTokenizer. This tries to preserve some of the semantics of the text: it splits contractions into their respective word parts, and it also keeps the punctuation.

[01:06] Lastly, natural has a regular expression tokenizer. Here, you have to pass a regular expression pattern. In our case, we'll look for end of sentence punctuation. This splits the text into sentences.

FED
~ 4 years ago

As someone who's unfamiliar with NLP, I find this library amazing! This is a really great overview of Natural!

Brian
~ 3 years ago

Given that the version of the package featured in this tutorial (0.4.0) is now 3 years old, is most of the content still applicable, or have there been major API changes since then?