astonzhang commented on a change in pull request #10074: Add vocabulary and embedding
URL: https://github.com/apache/incubator-mxnet/pull/10074#discussion_r174658086
########## File path: docs/api/python/gluon/text.md ##########
@@ -0,0 +1,332 @@
+# Gluon Text API
+
+## Overview
+
+The `mxnet.gluon.text` APIs refer to classes and functions related to text data processing, such
+as building indices and loading pre-trained embedding vectors for text tokens and storing them in
+the `mxnet.ndarray.NDArray` format.
+
+This document lists the text APIs in `mxnet.gluon`:
+
+```eval_rst
+.. autosummary::
+    :nosignatures:
+
+    mxnet.gluon.text.embedding
+    mxnet.gluon.text.vocab
+    mxnet.gluon.text.utils
+```
+
+All the code demonstrated in this document assumes that the following modules or packages are
+imported.
+
+```python
+>>> from mxnet import gluon
+>>> from mxnet import nd
+>>> from mxnet.gluon import text
+>>> import collections
+
+```
+
+### Access pre-trained word embeddings for indexed words
+
+As a common use case, let us access pre-trained word embedding vectors for indexed words in just a
+few lines of code.
+
+To begin with, let us create a fastText word embedding instance by specifying the embedding name
+`fasttext` and the pre-trained file name `wiki.simple.vec`.
+
+```python
+>>> fasttext = text.embedding.create('fasttext', file_name='wiki.simple.vec')
+
+```
+
+Now, suppose that we have a simple text data set in the string format. We can count word frequency
+in the data set.
+
+```python
+>>> text_data = " hello world \n hello nice world \n hi world \n"
+>>> counter = text.count_tokens_from_str(text_data)

Review comment:
   resolved
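For readers following the snippet above without MXNet installed, the token-counting step can be sketched in plain Python with `collections.Counter`. This is a simplified stand-in for `count_tokens_from_str` (assuming, as the example in the diff suggests, that tokens are delimited by whitespace); it is not the actual `mxnet.gluon.text` implementation.

```python
import collections

def count_tokens_from_str(source_str):
    # Simplified stand-in: split on any whitespace (spaces and newlines)
    # and count how often each token occurs.
    return collections.Counter(source_str.split())

text_data = " hello world \n hello nice world \n hi world \n"
counter = count_tokens_from_str(text_data)
print(counter)  # e.g. Counter({'world': 3, 'hello': 2, 'nice': 1, 'hi': 1})
```

The resulting counter maps each token to its frequency, which is the usual input for building a vocabulary of indexed words.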