astonzhang commented on a change in pull request #10074: Add vocabulary and embedding
URL: https://github.com/apache/incubator-mxnet/pull/10074#discussion_r174650933
 
 

 ##########
 File path: docs/api/python/gluon/text.md
 ##########
 @@ -0,0 +1,332 @@
+# Gluon Text API
+
+## Overview
+
+The `mxnet.gluon.text` APIs refer to classes and functions related to text data processing, such
+as building indices and loading pre-trained embedding vectors for text tokens, and storing them in
+the `mxnet.ndarray.NDArray` format.
+
+This document lists the text APIs in `mxnet.gluon`:
+
+```eval_rst
+.. autosummary::
+    :nosignatures:
+
+    mxnet.gluon.text.embedding
+    mxnet.gluon.text.vocab
+    mxnet.gluon.text.utils
+```
+
+All the code demonstrated in this document assumes that the following modules or packages are
+imported.
+
+```python
+>>> from mxnet import gluon
+>>> from mxnet import nd
+>>> from mxnet.gluon import text
+>>> import collections
+
+```
+
+### Access pre-trained word embeddings for indexed words
+
+As a common use case, let us access pre-trained word embedding vectors for indexed words in just a
+few lines of code.
+
+To begin with, let us create a fastText word embedding instance by specifying the embedding name
+`fasttext` and the pre-trained file name `wiki.simple.vec`.
+
+```python
+>>> fasttext = text.embedding.create('fasttext', file_name='wiki.simple.vec')
+
+```
+
+Now, suppose that we have a simple text data set in the string format. We can count word frequency
+in the data set.
+
+```python
+>>> text_data = " hello world \n hello nice world \n hi world \n"
+>>> counter = text.utils.count_tokens_from_str(text_data)
+
+```
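+
+For reference, with simple whitespace tokenization the counts for `text_data` above would be
+'world': 3, 'hello': 2, 'nice': 1 and 'hi': 1. A quick sanity check (a sketch; the returned
+object is assumed to support `collections.Counter`-style lookup by token):
+
+```python
+>>> counter['world'], counter['hello']
+(3, 2)
+
+```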
+
+The obtained `counter` has key-value pairs whose keys are words and values are word frequencies.
+Suppose that we want to build indices for all the keys in `counter` and load the defined fastText
+word embedding for all such indexed words. We need a `Vocabulary` instance with `counter` and
+`fasttext` as its arguments.
+
+```python
+>>> my_vocab = text.Vocabulary(counter, embedding=fasttext)
+
+```
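+
+Because no `max_size` was given, the vocabulary covers the unknown token plus all four words in
+`counter`, and `wiki.simple.vec` provides 300-dimensional vectors. A quick check (a sketch,
+assuming only the default unknown token is added on top of the counted words):
+
+```python
+>>> len(my_vocab)
+5
+>>> my_vocab.embedding.idx_to_vec.shape
+(5, 300)
+
+```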
+
+Now we are ready to access the fastText word embedding vectors for indexed words, such as 'hello'
+and 'world'.
+
+```python
+>>> my_vocab.embedding[['hello', 'world']]
+
+[[  3.95669997e-01   2.14540005e-01  -3.53889987e-02  -2.42990002e-01
+    ...
+   -7.54180014e-01  -3.14429998e-01   2.40180008e-02  -7.61009976e-02]
+ [  1.04440004e-01  -1.08580001e-01   2.72119999e-01   1.32990003e-01
+    ...
+   -3.73499990e-01   5.67310005e-02   5.60180008e-01   2.90190000e-02]]
+<NDArray 2x300 @cpu(0)>
+
+```
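+
+Tokens that never appeared in `counter` fall back to the unknown token, so looking them up still
+returns one vector per token (the token below is hypothetical and only for illustration; the
+content of that vector depends on how the embedding initializes unknown tokens):
+
+```python
+>>> my_vocab.embedding[['some-oov-token']].shape
+(1, 300)
+
+```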
+
+### Using pre-trained word embeddings in `gluon`
+
+To demonstrate how to use pre-trained word embeddings in the `gluon` package, let us first obtain
+the indices of the words 'hello' and 'world'.
+
+```python
+>>> my_vocab[['hello', 'world']]
+[2, 1]
+
+```
+
+We can obtain the vector representation for the words 'hello' and 'world' by specifying their
+indices (2 and 1) and the weight matrix `my_vocab.embedding.idx_to_vec` in
+`mxnet.gluon.nn.Embedding`.
+
+```python
+>>> input_dim, output_dim = my_vocab.embedding.idx_to_vec.shape
+>>> layer = gluon.nn.Embedding(input_dim, output_dim)
+>>> layer.initialize()
+>>> layer.weight.set_data(my_vocab.embedding.idx_to_vec)
+>>> layer(nd.array([2, 1]))
+
+[[  3.95669997e-01   2.14540005e-01  -3.53889987e-02  -2.42990002e-01
+    ...
+   -7.54180014e-01  -3.14429998e-01   2.40180008e-02  -7.61009976e-02]
+ [  1.04440004e-01  -1.08580001e-01   2.72119999e-01   1.32990003e-01
+    ...
+   -3.73499990e-01   5.67310005e-02   5.60180008e-01   2.90190000e-02]]
+<NDArray 2x300 @cpu(0)>
+
+```
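+
+If the pre-trained vectors should stay fixed during training, one option (a general `gluon`
+mechanism, shown here only as a sketch) is to disable gradient computation for the embedding
+weight:
+
+```python
+>>> layer.weight.grad_req = 'null'  # freeze the copied pre-trained vectors
+
+```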
+
+## Vocabulary
+
+The vocabulary builds indices for text tokens and can be assigned token embeddings. The input
+counter, whose keys are candidate tokens to be indexed, may be obtained via
+[`count_tokens_from_str`](#mxnet.gluon.text.utils.count_tokens_from_str).
+
+
+```eval_rst
+.. currentmodule:: mxnet.gluon.text.vocab
+.. autosummary::
+    :nosignatures:
+
+    Vocabulary
+```
+
+Suppose that we have a simple text data set in the string format. We can count word frequency in
+the data set.
+
+```python
+>>> text_data = " hello world \n hello nice world \n hi world \n"
+>>> counter = text.utils.count_tokens_from_str(text_data)
+
+```
+
+The obtained `counter` has key-value pairs whose keys are words and values are word frequencies.
+Suppose that we want to build indices for the 2 most frequent keys in `counter` with the unknown
+token representation '(unk)' and a reserved token '(pad)'.
+
+```python
+>>> my_vocab = text.Vocabulary(counter, max_size=2, unknown_token='(unk)', 
+...     reserved_tokens=['(pad)'])
+
+```
+
+We can access properties such as `token_to_idx` (mapping tokens to indices), `idx_to_token`
+(mapping indices to tokens), `unknown_token` (representation of any unknown token) and
+`reserved_tokens` (reserved tokens).
+
+
+```python
+>>> my_vocab.token_to_idx
+{'(unk)': 0, '(pad)': 1, 'world': 2, 'hello': 3}
+>>> my_vocab.idx_to_token
+['(unk)', '(pad)', 'world', 'hello']
+>>> my_vocab.unknown_token
+'(unk)'
+>>> my_vocab.reserved_tokens
+['(pad)']
+>>> len(my_vocab)
+4
+>>> my_vocab[['hello', 'world']]
+[3, 2]
+```
+
+Besides the specified unknown token '(unk)' and the reserved token '(pad)', the 2 most frequent
+words 'world' and 'hello' are also indexed.
+
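+Words outside the 2 most frequent ones map to the index of the unknown token, which is 0 here.
+For example (a sketch, assuming out-of-vocabulary lookups fall back to `unknown_token`):
+
+```python
+>>> my_vocab[['nice', 'hello']]  # 'nice' was cut off by max_size=2
+[0, 3]
+
+```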
+
+### Assign token embedding to vocabulary
 
 Review comment:
   Resolved

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
