API - Natural Language Processing¶
Natural Language Processing and Word Representation.
generate_skip_gram_batch(data, batch_size, …) - Generate a training batch for the Skip-Gram model.
sample([a, temperature]) - Sample an index from a probability array.
sample_top([a, top_k]) - Sample from the top_k probabilities.
SimpleVocabulary(vocab, unk_id) - Simple vocabulary wrapper, see create_vocab().
Vocabulary(vocab_file[, start_word, …]) - Vocabulary class built from a vocabulary file, with id-to-word and word-to-id converters, see create_vocab() and tutorial_tfrecord3.py.
process_sentence(sentence[, start_word, …]) - Convert a sentence string into a list of string words, adding start_word and end_word, see create_vocab() and tutorial_tfrecord3.py.
create_vocab(sentences, word_counts_output_file) - Create the vocabulary of word to word_id, see tutorial_tfrecord3.py.
simple_read_words([filename]) - Read the content of a file without any preprocessing.
read_words([filename, replace]) - Read the content of a file into list format.
read_analogies_file([eval_file, word2id]) - Read through an analogy question file and return the questions in id format.
build_vocab(data) - Build vocabulary.
build_reverse_dictionary(word_to_id) - Build a reverse dictionary that converts ids back to words.
build_words_dataset([words, …]) - Build the words dictionary and replace rare words with the 'UNK' token.
words_to_word_ids([data, word_to_id, unk_key]) - Given the content (words) in list format and the vocabulary, return a list of IDs that represents the content.
word_ids_to_words(data, id_to_word) - Given the content (ids) in list format and the vocabulary, return a list of words that represents the content.
save_vocab([count, name]) - Save the vocabulary to a file so the model can be reloaded.
basic_tokenizer(sentence[, _WORD_SPLIT]) - Very basic tokenizer: split the sentence into a list of tokens.
create_vocabulary(vocabulary_path, …) - Create vocabulary file (if it does not exist yet) from data file.
initialize_vocabulary(vocabulary_path) - Initialize vocabulary from file, return the word_to_id (dictionary) and id_to_word (list).
sentence_to_token_ids(sentence, vocabulary) - Convert a string to a list of integers representing token-ids.
data_to_token_ids(data_path, target_path, …) - Tokenize data file and turn it into token-ids using a given vocabulary file.
Iteration function for training embedding matrix¶
tensorlayer.nlp.generate_skip_gram_batch(data, batch_size, num_skips, skip_window, data_index=0)[source]¶
Generate a training batch for the Skip-Gram model.
Parameters: - data : a list
The context in list format (e.g. a list of word ids).
- batch_size : an int
Batch size to return.
- num_skips : an int
How many times to reuse an input to generate a label.
- skip_window : an int
How many words to consider to the left and right.
- data_index : an int
Index of the current position in the context. Instead of using yield, this function returns data_index so that successive calls can resume where the previous one stopped.
Returns: - batch : a list
Inputs.
- labels : a list
Labels.
- data_index : an int
Index of the current position in the context.
Examples
>>> # Setting num_skips=2, skip_window=1 uses the words to the left and right.
>>> # In the same way, num_skips=4, skip_window=2 uses the nearby 4 words.
>>> data = [1,2,3,4,5,6,7,8,9,10,11]
>>> batch, labels, data_index = tl.nlp.generate_skip_gram_batch(
...     data=data, batch_size=8, num_skips=2, skip_window=1, data_index=0)
>>> print(batch)
... [2 2 3 3 4 4 5 5]
>>> print(labels)
... [[3]
...  [1]
...  [4]
...  [2]
...  [5]
...  [3]
...  [4]
...  [6]]
Sampling functions¶
tensorlayer.nlp.sample(a=[], temperature=1.0)[source]¶
Sample an index from a probability array.
Parameters: - a : a list
List of probabilities.
- temperature : float or None
The higher the temperature, the more uniform the distribution; the lower, the sharper.
When a = [0.1, 0.2, 0.7]:
temperature = 0.7 sharpens the distribution to [0.05048273 0.13588945 0.81362782]
temperature = 1.0 leaves the distribution unchanged: [0.1 0.2 0.7]
temperature = 1.5 flattens the distribution to [0.16008435 0.25411807 0.58579758]
If None, the function returns np.argmax(a).
Notes
The result is renormalized, so no matter the temperature and input list, the returned probabilities always sum to one; even for an input list of [1, 100, 200], the rescaled probabilities still sum to one.
For a large vocabulary_size, choose a higher temperature to avoid numerical errors.
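Examples
A minimal usage sketch, assuming tensorlayer is imported as tl and a holds the output probabilities of a model:
>>> import tensorlayer as tl
>>> a = [0.1, 0.2, 0.7]
>>> tl.nlp.sample(a, temperature=1.0)   # stochastic; index 2 is the most likely draw
>>> tl.nlp.sample(a, temperature=None)  # deterministic, equivalent to np.argmax(a)
... 2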
Vector representations of words¶
Vocabulary class¶
class tensorlayer.nlp.SimpleVocabulary(vocab, unk_id)[source]¶
Simple vocabulary wrapper, see create_vocab().
Parameters: - vocab : A dictionary of word to word_id.
- unk_id : Id of the special ‘unknown’ word.
Methods
word_to_id(word) : Returns the integer id of a word string.
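Examples
A minimal usage sketch with a hand-built word-to-id dictionary:
>>> vocab = tl.nlp.SimpleVocabulary({'hello': 0, 'world': 1, '<UNK>': 2}, unk_id=2)
>>> vocab.word_to_id('hello')
... 0
>>> vocab.word_to_id('unseen')   # unknown words fall back to unk_id
... 2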
class tensorlayer.nlp.Vocabulary(vocab_file, start_word='<S>', end_word='</S>', unk_word='<UNK>')[source]¶
Create a Vocabulary class from a given vocabulary file, with id-to-word and word-to-id converters, see create_vocab() and tutorial_tfrecord3.py.
Parameters: - vocab_file : File containing the vocabulary, where the words are the first whitespace-separated token on each line (other tokens are ignored) and the word ids are the corresponding line numbers.
- start_word : Special word denoting sentence start.
- end_word : Special word denoting sentence end.
- unk_word : Special word denoting unknown words.
Methods
id_to_word(word_id) : Returns the word string of an integer word id.
word_to_id(word) : Returns the integer word id of a word string.
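Examples
A minimal usage sketch, assuming vocab.txt was written by tl.nlp.create_vocab() as in its example below:
>>> vocab = tl.nlp.Vocabulary('vocab.txt', start_word="<S>", end_word="</S>", unk_word="<UNK>")
>>> word_id = vocab.word_to_id('one')   # id of a known word
>>> vocab.id_to_word(word_id)           # convert the id back to the word string
... 'one'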
tensorlayer.nlp.process_sentence(sentence, start_word='<S>', end_word='</S>')[source]¶
Converts a sentence string into a list of string words, adding start_word and end_word, see create_vocab() and tutorial_tfrecord3.py.
Returns: - A list of strings; the processed caption.
Examples
>>> c = "how are you?"
>>> c = tl.nlp.process_sentence(c)
>>> print(c)
... ['<S>', 'how', 'are', 'you', '?', '</S>']
tensorlayer.nlp.create_vocab(sentences, word_counts_output_file, min_word_count=1)[source]¶
Creates the vocabulary of word to word_id, see tutorial_tfrecord3.py.
The vocabulary is saved to disk in a text file of word counts. The id of each word in the file is its corresponding 0-based line number.
Parameters: - sentences : a list of lists of strings.
- word_counts_output_file : A string
The file name.
- min_word_count : an int
Minimum number of occurrences for a word.
Returns: - tl.nlp.SimpleVocabulary object.
Examples
>>> captions = ["one two , three", "four five five"]
>>> processed_capts = []
>>> for c in captions:
>>>     c = tl.nlp.process_sentence(c, start_word="<S>", end_word="</S>")
>>>     processed_capts.append(c)
>>> print(processed_capts)
... [['<S>', 'one', 'two', ',', 'three', '</S>'], ['<S>', 'four', 'five', 'five', '</S>']]
>>> tl.nlp.create_vocab(processed_capts, word_counts_output_file='vocab.txt', min_word_count=1)
... tensorlayer.nlp:Creating vocabulary.
...   Total words: 8
...   Words in vocabulary: 8
...   Wrote vocabulary file: vocab.txt
>>> vocab = tl.nlp.Vocabulary('vocab.txt', start_word="<S>", end_word="</S>", unk_word="<UNK>")
... tensorlayer.nlp:Instantiate Vocabulary from vocab.txt : <S> </S> <UNK>
... vocabulary with 9 words (includes unk_word)
Read words from file¶
tensorlayer.nlp.simple_read_words(filename='nietzsche.txt')[source]¶
Read the content of a file without any preprocessing.
Parameters: - filename : a string
A file path (e.g. a .txt file).
Returns: - The content as a single string.
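Examples
A minimal usage sketch, assuming nietzsche.txt exists in the working directory:
>>> text = tl.nlp.simple_read_words('nietzsche.txt')
>>> print(text[:100])   # the raw text, as one string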
tensorlayer.nlp.read_words(filename='nietzsche.txt', replace=['\n', '<eos>'])[source]¶
Read the content of a file into list format. Note that this function cannot handle punctuation. For a customized read_words method, see tutorial_generate_text.py.
Parameters: - filename : a string
A file path (e.g. a .txt file).
- replace : a list
[original string, target string]; to disable replacement, use ['', ''].
Returns: - The content as a list of words, split by ' ' by default, with '\n' represented by '<eos>'.
e.g. [... 'how', 'useful', 'it', "'s" ...]
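Examples
A minimal usage sketch, assuming nietzsche.txt exists in the working directory:
>>> words = tl.nlp.read_words('nietzsche.txt')
>>> print(words[:10])   # newlines appear as '<eos>' tokens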
Read analogy question file¶
tensorlayer.nlp.read_analogies_file(eval_file='questions-words.txt', word2id={})[source]¶
Reads through an analogy question file and returns the questions in id format.
Parameters: - eval_file : a string
The file name.
- word2id : a dictionary
Mapping words to unique IDs.
Returns: - analogy_questions : a [n, 4] numpy array containing the analogy questions' word ids.
- questions_skipped : the number of questions skipped due to unknown words.
Examples
>>> # eval_file should be in this format :
>>> # : capital-common-countries
>>> # Athens Greece Baghdad Iraq
>>> # Athens Greece Bangkok Thailand
>>> # Athens Greece Beijing China
>>> # Athens Greece Berlin Germany
>>> # Athens Greece Bern Switzerland
>>> # Athens Greece Cairo Egypt
>>> # Athens Greece Canberra Australia
>>> # Athens Greece Hanoi Vietnam
>>> # Athens Greece Havana Cuba
>>> # ...
>>> words = tl.files.load_matt_mahoney_text8_dataset()
>>> data, count, dictionary, reverse_dictionary = tl.nlp.build_words_dataset(words, vocabulary_size, True)
>>> analogy_questions = tl.nlp.read_analogies_file(eval_file='questions-words.txt', word2id=dictionary)
>>> print(analogy_questions)
... [[ 3068  1248  7161  1581]
...  [ 3068  1248 28683  5642]
...  [ 3068  1248  3878   486]
...  ...,
...  [ 1216  4309 19982 25506]
...  [ 1216  4309  3194  8650]
...  [ 1216  4309   140   312]]
Build vocabulary, word dictionary and word tokenization¶
tensorlayer.nlp.build_vocab(data)[source]¶
Build vocabulary. Given the content in list format, return the vocabulary, a dictionary mapping word to id. e.g. {'campbell': 2587, 'atlantic': 2247, 'aoun': 6746, ...}
Parameters: - data : a list of string
The content in list format.
Returns: - word_to_id : a dictionary
Mapping words to unique IDs. e.g. {'campbell': 2587, 'atlantic': 2247, 'aoun': 6746, ...}
Examples
>>> data_path = os.getcwd() + '/simple-examples/data'
>>> train_path = os.path.join(data_path, "ptb.train.txt")
>>> word_to_id = tl.nlp.build_vocab(tl.nlp.read_words(train_path))
tensorlayer.nlp.build_reverse_dictionary(word_to_id)[source]¶
Given a dictionary that converts words to integer ids, returns a reverse dictionary that converts ids back to words.
Parameters: - word_to_id : dictionary
mapping words to unique ids
Returns: - reverse_dictionary : a dictionary
mapping ids to words
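Examples
A minimal usage sketch with a hand-built dictionary:
>>> word_to_id = {'the': 0, 'of': 1}
>>> reverse_dictionary = tl.nlp.build_reverse_dictionary(word_to_id)
>>> print(reverse_dictionary)
... {0: 'the', 1: 'of'}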
tensorlayer.nlp.build_words_dataset(words=[], vocabulary_size=50000, printable=True, unk_key='UNK')[source]¶
Build the words dictionary and replace rare words with the 'UNK' token. The most common word has the smallest integer id.
Parameters: - words : a list of string or byte
The content in list format. You may need to preprocess the words first, e.g. lower-casing and removing punctuation.
- vocabulary_size : an int
The maximum vocabulary size; rare words beyond this limit are replaced with the unk_key token.
- printable : boolean
Whether to print the vocabulary size of the given words.
- unk_key : a string
The token that represents unknown words.
Returns: - data : a list of integer
The content as a list of word ids.
- count : a list of tuple and list
count[0] is a list [unk_key, the number of rare words]
count[1:] are tuples (word, number of occurrences)
e.g. [['UNK', 418391], (b'the', 1061396), (b'of', 593677), (b'and', 416629), (b'one', 411764)]
- dictionary : a dictionary
word_to_id, mapping words to unique IDs.
- reverse_dictionary : a dictionary
id_to_word, mapping ids to unique words.
Examples
>>> words = tl.files.load_matt_mahoney_text8_dataset()
>>> vocabulary_size = 50000
>>> data, count, dictionary, reverse_dictionary = tl.nlp.build_words_dataset(words, vocabulary_size)
Convert words to IDs and IDs to words¶
tensorlayer.nlp.words_to_word_ids(data=[], word_to_id={}, unk_key='UNK')[source]¶
Given the content (words) in list format and the vocabulary, returns a list of IDs that represents the content.
Parameters: - data : a list of string or byte
The content in list format.
- word_to_id : a dictionary
Mapping words to unique IDs.
- unk_key : a string
The token that represents unknown words.
Returns: - A list of IDs that represents the content.
Examples
>>> words = tl.files.load_matt_mahoney_text8_dataset()
>>> vocabulary_size = 50000
>>> data, count, dictionary, reverse_dictionary = tl.nlp.build_words_dataset(words, vocabulary_size, True)
>>> context = [b'hello', b'how', b'are', b'you']
>>> ids = tl.nlp.words_to_word_ids(context, dictionary)
>>> context = tl.nlp.word_ids_to_words(ids, reverse_dictionary)
>>> print(ids)
... [6434, 311, 26, 207]
>>> print(context)
... [b'hello', b'how', b'are', b'you']
tensorlayer.nlp.word_ids_to_words(data, id_to_word)[source]¶
Given the content (ids) in list format and the vocabulary, returns a list of words that represents the content.
Parameters: - data : a list of integer
the context in list format
- id_to_word : a dictionary
mapping id to unique word.
Returns: - A list of string or byte to represent the context.
Examples
>>> # see the example of words_to_word_ids()
Save vocabulary¶
tensorlayer.nlp.save_vocab(count=[], name='vocab.txt')[source]¶
Save the vocabulary to a file so the model can be reloaded.
Parameters: - count : a list of tuple and list
count[0] is a list [unk_key, the number of rare words]
count[1:] are tuples (word, number of occurrences)
e.g. [['UNK', 418391], (b'the', 1061396), (b'of', 593677), (b'and', 416629), (b'one', 411764)]
Examples
>>> words = tl.files.load_matt_mahoney_text8_dataset()
>>> vocabulary_size = 50000
>>> data, count, dictionary, reverse_dictionary = tl.nlp.build_words_dataset(words, vocabulary_size, True)
>>> tl.nlp.save_vocab(count, name='vocab_text8.txt')
>>> # vocab_text8.txt :
... UNK 418391
... the 1061396
... of 593677
... and 416629
... one 411764
... in 372201
... a 325873
... to 316376
Functions for translation¶
Word Tokenization¶
tensorlayer.nlp.basic_tokenizer(sentence, _WORD_SPLIT=re.compile(b'([., !?"\':;)(])'))[source]¶
Very basic tokenizer: split the sentence into a list of tokens.
Parameters: - sentence : tensorflow.python.platform.gfile.GFile Object
- _WORD_SPLIT : regular expression for word splitting.
References
- Code from /tensorflow/models/rnn/translation/data_utils.py
Examples
>>> # see create_vocabulary
>>> from tensorflow.python.platform import gfile
>>> train_path = "wmt/giga-fren.release2"
>>> with gfile.GFile(train_path + ".en", mode="rb") as f:
>>>     for line in f:
>>>         tokens = tl.nlp.basic_tokenizer(line)
>>>         print(tokens)
>>>         exit()
... [b'Changing', b'Lives', b'|', b'Changing', b'Society', b'|', b'How',
...  b'It', b'Works', b'|', b'Technology', b'Drives', b'Change', b'Home',
...  b'|', b'Concepts', b'|', b'Teachers', b'|', b'Search', b'|', b'Overview',
...  b'|', b'Credits', b'|', b'HHCC', b'Web', b'|', b'Reference', b'|',
...  b'Feedback', b'Virtual', b'Museum', b'of', b'Canada', b'Home', b'Page']
Create or read vocabulary¶
tensorlayer.nlp.create_vocabulary(vocabulary_path, data_path, max_vocabulary_size, tokenizer=None, normalize_digits=True, _DIGIT_RE=re.compile(b'\\d'), _START_VOCAB=[b'_PAD', b'_GO', b'_EOS', b'_UNK'])[source]¶
Create vocabulary file (if it does not exist yet) from data file.
The data file is assumed to contain one sentence per line. Each sentence is tokenized and digits are normalized (if normalize_digits is set). The vocabulary contains the most frequent tokens up to max_vocabulary_size. It is written to vocabulary_path in a one-token-per-line format, so that the token in the first line gets id=0, the token in the second line gets id=1, and so on.
Parameters: - vocabulary_path : path where the vocabulary will be created.
- data_path : data file that will be used to create vocabulary.
- max_vocabulary_size : limit on the size of the created vocabulary.
- tokenizer : a function to use to tokenize each data sentence.
If None, basic_tokenizer will be used.
- normalize_digits : Boolean
if true, all digits are replaced by 0s.
References
- Code from /tensorflow/models/rnn/translation/data_utils.py
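Examples
A minimal usage sketch; 'train.en' is a hypothetical one-sentence-per-line text file and 'vocab40000.en' the vocabulary file to create:
>>> tl.nlp.create_vocabulary('vocab40000.en', 'train.en', max_vocabulary_size=40000)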
tensorlayer.nlp.initialize_vocabulary(vocabulary_path)[source]¶
Initialize vocabulary from file, return the word_to_id (dictionary) and id_to_word (list).
We assume the vocabulary is stored one-item-per-line, so a file:
dog
cat
will result in a vocabulary {“dog”: 0, “cat”: 1}, and this function will also return the reversed-vocabulary [“dog”, “cat”].
Parameters: - vocabulary_path : path to the file containing the vocabulary.
Returns: - vocab : a dictionary
Word to id. A dictionary mapping string to integers.
- rev_vocab : a list
Id to word. The reversed vocabulary (a list, which reverses the vocabulary mapping).
Raises: - ValueError : if the provided vocabulary_path does not exist.
Examples
>>> # Assume 'test' contains :
... dog
... cat
... bird
>>> vocab, rev_vocab = tl.nlp.initialize_vocabulary("test")
>>> print(vocab)
... {b'cat': 1, b'dog': 0, b'bird': 2}
>>> print(rev_vocab)
... [b'dog', b'cat', b'bird']
Convert words to IDs and IDs to words¶
tensorlayer.nlp.sentence_to_token_ids(sentence, vocabulary, tokenizer=None, normalize_digits=True, UNK_ID=3, _DIGIT_RE=re.compile(b'\\d'))[source]¶
Convert a string to a list of integers representing token-ids.
For example, the sentence "I have a dog" may be tokenized into ["I", "have", "a", "dog"], and with the vocabulary {"I": 1, "have": 2, "a": 4, "dog": 7} this function returns [1, 2, 4, 7].
Parameters: - sentence : tensorflow.python.platform.gfile.GFile Object
The sentence in bytes format to convert to token-ids.
see basic_tokenizer(), data_to_token_ids()
- vocabulary : a dictionary mapping tokens to integers.
- tokenizer : a function to use to tokenize each sentence;
If None, basic_tokenizer will be used.
- normalize_digits : Boolean
If true, all digits are replaced by 0s.
Returns: - A list of integers, the token-ids for the sentence.
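Examples
A minimal usage sketch, reusing the vocabulary from the description above (as a bytes-keyed dictionary, since sentences are processed in bytes format):
>>> vocabulary = {b"I": 1, b"have": 2, b"a": 4, b"dog": 7}
>>> tl.nlp.sentence_to_token_ids(b"I have a dog", vocabulary)
... [1, 2, 4, 7]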
tensorlayer.nlp.data_to_token_ids(data_path, target_path, vocabulary_path, tokenizer=None, normalize_digits=True, UNK_ID=3, _DIGIT_RE=re.compile(b'\\d'))[source]¶
Tokenize data file and turn it into token-ids using a given vocabulary file.
This function loads data line-by-line from data_path, calls the above sentence_to_token_ids, and saves the result to target_path. See comment for sentence_to_token_ids on the details of token-ids format.
Parameters: - data_path : path to the data file in one-sentence-per-line format.
- target_path : path where the file with token-ids will be created.
- vocabulary_path : path to the vocabulary file.
- tokenizer : a function to use to tokenize each sentence;
if None, basic_tokenizer will be used.
- normalize_digits : Boolean; if true, all digits are replaced by 0s.
References
- Code from /tensorflow/models/rnn/translation/data_utils.py
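Examples
A minimal usage sketch; 'train.en' and 'vocab40000.en' are hypothetical paths (e.g. produced by create_vocabulary above), and 'train.ids40000.en' is the output file to create:
>>> tl.nlp.data_to_token_ids('train.en', 'train.ids40000.en', 'vocab40000.en')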