astonzhang commented on a change in pull request #8763: Add mxnet.text APIs
URL: https://github.com/apache/incubator-mxnet/pull/8763#discussion_r160545648
 
 

 ##########
 File path: docs/api/python/text/text.md
 ##########
 @@ -0,0 +1,213 @@
+# Text API
+
+## Overview
+
+The mxnet.text APIs refer to classes and functions related to text data
+processing, such as building indices and loading pre-trained embedding vectors
+for text tokens, and storing them in the `mxnet.ndarray` format.
+
+This document lists the text APIs in mxnet:
+
+```eval_rst
+.. autosummary::
+    :nosignatures:
+
+    mxnet.text.utils
+    mxnet.text.indexer
+    mxnet.text.embedding
+    mxnet.text.glossary
+```
+
+## Text utilities
+
+The following functions provide utilities for text data processing.
+
+```eval_rst
+.. currentmodule:: mxnet.text.utils
+.. autosummary::
+    :nosignatures:
+
+    count_tokens_from_str
+    tokens_to_indices
+    indices_to_tokens
+```
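+
+The following is a minimal sketch of counting tokens from a string, assuming
+that `count_tokens_from_str` splits on whitespace by default and counts
+case-sensitively; the exact defaults may differ.
+
+```python
+>>> from mxnet.text.utils import count_tokens_from_str
+>>> count_tokens_from_str(' Life is great ! \n life is good . \n')
+Counter({'is': 2, 'Life': 1, 'great': 1, '!': 1, 'life': 1, 'good': 1, '.': 1})
+```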
+
+## Text token indexer
+
+The text token indexer builds indices for text tokens. Such indexed tokens can
+be used by instances of `mxnet.text.embedding.TokenEmbedding` and
+`mxnet.text.glossary.Glossary`.
+
+
+```eval_rst
+.. currentmodule:: mxnet.text.indexer
+.. autosummary::
+    :nosignatures:
+
+    TokenIndexer
+```
+
+```python
+>>> from mxnet.text.indexer import TokenIndexer
+>>> from collections import Counter
+>>> counter = Counter(['a', 'b', 'b', 'c', 'c', 'c', 'some_word$'])
+>>> token_indexer = TokenIndexer(counter, most_freq_count=None, min_freq=1,
+...                              unknown_token='<unk>', 
+...                              reserved_tokens=['<pad>'])
+>>> len(token_indexer)
+6
+>>> token_indexer.token_to_idx
+{'<unk>': 0, '<pad>': 1, 'c': 2, 'b': 3, 'a': 4, 'some_word$': 5}
+>>> token_indexer.idx_to_token
+['<unk>', '<pad>', 'c', 'b', 'a', 'some_word$']
+>>> token_indexer.unknown_token
+'<unk>'
+>>> token_indexer.reserved_tokens
+['<pad>']
+>>> token_indexer2 = TokenIndexer(counter, most_freq_count=2, min_freq=3,
+...                               unknown_token='<unk>', reserved_tokens=None)
+>>> len(token_indexer2)
+2
+>>> token_indexer2.token_to_idx
+{'<unk>': 0, 'c': 1}
+>>> token_indexer2.idx_to_token
+['<unk>', 'c']
+>>> token_indexer2.unknown_token
+'<unk>'
+```
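+
+Continuing the example above, the indexed tokens can be converted back and
+forth with the utility functions in `mxnet.text.utils`. The call signatures
+below, which pass the token indexer as the second argument and return plain
+Python lists, are assumptions; consult the utility function reference for the
+exact interface.
+
+```python
+>>> from mxnet.text.utils import tokens_to_indices, indices_to_tokens
+>>> tokens_to_indices(['c', 'b', 'a'], token_indexer)
+[2, 3, 4]
+>>> indices_to_tokens([2, 3, 4], token_indexer)
+['c', 'b', 'a']
+```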
+
+## Text token embedding
+
+The text token embedding indexes text tokens and associates each indexed
+token with an embedding vector. Such token embeddings can be used by
+instances of `mxnet.text.glossary.Glossary`.
+
+To load token embeddings from an externally hosted pre-trained token embedding
+file, such as those of GloVe and FastText, use
+`TokenEmbedding.create(embedding_name, pretrained_file_name)`. To get all the
+available `embedding_name` and `pretrained_file_name`, use
+`TokenEmbedding.get_embedding_and_pretrained_file_names()`.
+
+Alternatively, to load embedding vectors from a custom pre-trained text token
+embedding file, use `mxnet.text.embeddings.CustomEmbedding`.
+
+
+```eval_rst
+.. currentmodule:: mxnet.text.embedding
+.. autosummary::
+    :nosignatures:
+
+    TokenEmbedding
+    Glove
+    FastText
+    CustomEmbedding
+```
+
+```python
+>>> from mxnet.text.embedding import TokenEmbedding
+>>> TokenEmbedding.get_embedding_and_pretrained_file_names()
+{'glove': ['glove.42B.300d.txt', 'glove.6B.50d.txt', 'glove.6B.100d.txt',
+'glove.6B.200d.txt', 'glove.6B.300d.txt', 'glove.840B.300d.txt',
+'glove.twitter.27B.25d.txt', 'glove.twitter.27B.50d.txt',
+'glove.twitter.27B.100d.txt', 'glove.twitter.27B.200d.txt'],
+'fasttext': ['wiki.en.vec', 'wiki.simple.vec', 'wiki.zh.vec']}
+>>> glove_6b_50d = TokenEmbedding.create('glove',
+...                                      pretrained_file_name='glove.6B.50d.txt')
+>>> len(glove_6b_50d)
+400001
+>>> glove_6b_50d.vec_len
+50
+>>> glove_6b_50d.token_to_idx['hi']
+11084
+>>> glove_6b_50d.idx_to_token[11084]
+'hi'
+>>> # 0 is the index for any unknown token.
+... glove_6b_50d.idx_to_vec[0]
+
+[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
+  ...
+  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
+<NDArray 50 @cpu(0)>
+>>> glove_6b_50d.get_vecs_by_tokens('<unk$unk@unk>')
+
+[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
+  ...
+  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
+<NDArray 50 @cpu(0)>
+>>> glove_6b_50d.get_vecs_by_tokens(['<unk$unk@unk>', '<unk$unk@unk>'])
+
+[[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
+   ...
+   0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
+ [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
+   ...
+   0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]]
+<NDArray 2x50 @cpu(0)>
+
+```
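+
+For a custom pre-trained file, `CustomEmbedding` can be used instead. The
+sketch below is hypothetical: `my_vectors.txt` is a made-up file in which each
+line holds a token followed by its space-separated embedding values, and the
+`elem_delim` argument is an assumption about the constructor interface.
+
+```python
+>>> from mxnet.text.embedding import CustomEmbedding
+>>> # Contents of the hypothetical 'my_vectors.txt':
+>>> # hello 0.1 0.2 0.3
+>>> # world 0.4 0.5 0.6
+>>> my_embedding = CustomEmbedding('my_vectors.txt', elem_delim=' ')
+>>> my_embedding.vec_len
+3
+```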
+
+## Implement a new text token embedding
+
+To implement a new text token embedding, create a subclass of
+`mxnet.text.embedding.TokenEmbedding` and add `@TokenEmbedding.register`
+before this class. See
+[`embedding.py`](https://github.com/dmlc/mxnet/blob/master/python/mxnet/text/embedding.py)
+for examples.
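+
+A bare-bones registration sketch is shown below. The class name
+`MyTextEmbedding` is hypothetical and its body is left as a placeholder; see
+the `Glove` and `FastText` classes in `embedding.py` for what a real subclass
+must provide, such as how its pre-trained files are located and loaded.
+
+```python
+from mxnet.text.embedding import TokenEmbedding
+
+@TokenEmbedding.register
+class MyTextEmbedding(TokenEmbedding):
+    """A hypothetical token embedding backed by self-hosted files."""
+    # A real subclass follows the pattern of the registered classes in
+    # embedding.py to declare and load its pre-trained embedding files.
+    pass
+```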
+
+
+## Glossary
 
 Review comment:
   resolved.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
