[GitHub] astonzhang commented on a change in pull request #10074: Add vocabulary and embedding

2018-03-14 Thread GitBox
astonzhang commented on a change in pull request #10074: Add vocabulary and embedding
URL: https://github.com/apache/incubator-mxnet/pull/10074#discussion_r174659007
 
 

 ##
 File path: python/mxnet/gluon/text/vocab.py
 ##
 @@ -0,0 +1,325 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+# pylint: disable=consider-iterating-dictionary
+
+"""Vocabulary."""
+from __future__ import absolute_import
+from __future__ import print_function
+
+import collections
+from ... import nd
+
+from . import _constants as C
+from . import embedding as ebd
+
+
+class Vocabulary(object):
+    """Indexing and embedding assignment for text tokens.
+
+
+    Parameters
+    ----------
+    counter : collections.Counter or None, default None
+        Counts text token frequencies in the text data. Its keys will be indexed according
+        to frequency thresholds such as `max_size` and `min_freq`. Keys of `counter`,
+        `unknown_token`, and values of `reserved_tokens` must be of the same hashable type.
+        Examples: str, int, and tuple.
+    max_size : None or int, default None
+        The maximum possible number of the most frequent tokens in the keys of `counter`
+        that can be indexed. Note that this argument does not count any token from
+        `reserved_tokens`. Suppose that there are different keys of `counter` whose
+        frequencies are the same. If indexing all of them would exceed this argument value,
+        such keys will be indexed one by one according to their __cmp__() order until the
+        frequency threshold is met. If this argument is None or larger than its largest
+        possible value restricted by `counter` and `reserved_tokens`, this argument has no
+        effect.
+    min_freq : int, default 1
+        The minimum frequency required for a token in the keys of `counter` to be indexed.
+    unknown_token : hashable object, default '<unk>'
+        The representation for any unknown token. In other words, any unknown token will be
+        indexed as the same representation. Keys of `counter`, `unknown_token`, and values
+        of `reserved_tokens` must be of the same hashable type. Examples: str, int, and
+        tuple.
+    reserved_tokens : list of hashable objects or None, default None
+        A list of reserved tokens that will always be indexed, such as special symbols
+        representing padding, beginning of sentence, and end of sentence. It cannot contain
+        `unknown_token` or duplicate reserved tokens. Keys of `counter`, `unknown_token`,
+        and values of `reserved_tokens` must be of the same hashable type. Examples: str,
+        int, and tuple.
+    embedding : instance or list of instances of `embedding.TokenEmbedding`, default None
+        The embedding to be assigned to the indexed tokens. If a list of multiple
+        embeddings is provided, their embedding vectors will be concatenated for the same
+        token.
+
+
+    Properties
+    ----------
+    embedding : instance of :class:`~mxnet.gluon.text.embedding.TokenEmbedding`
+        The embedding of the indexed tokens.
+    idx_to_token : list of strs
+        A list of indexed tokens where the list indices and the token indices are aligned.
+    reserved_tokens : list of strs or None
+        A list of reserved tokens that will always be indexed.
+    token_to_idx : dict mapping str to int
+        A dict mapping each token to its index integer.
+    unknown_token : hashable object
+        The representation for any unknown token. In other words, any unknown token will be
+        indexed as the same representation.
+
+
+    Examples
+    --------
+    >>> fasttext = text.embedding.create('fasttext', file_name='wiki.simple.vec')
+    >>> text_data = " hello world \n hello nice world \n hi world \n"
+    >>> counter = text.count_tokens_from_str(text_data)
+    >>> my_vocab = text.Vocabulary(counter, embedding=fasttext)
+    >>> my_vocab.embedding[['hello', 'world']]
+    [[  3.95669997e-01   2.14540005e-01  -3.53889987e-02  -2.42990002e-01
+    ...
+       -7.54180014e-01  -3.14429998e-01   2.40180008e-02  -7.61009976e-02]
+     [  1.04440004e-01  -1.08580001e-01   2.7211e-01   1.32990003e-01
+    ...
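To make the parameter interplay above concrete for readers of this thread, here is a minimal offline sketch built from a plain `collections.Counter`, so no pre-trained embedding file is needed. The constructor signature follows the docstring under review; the index ordering shown is an assumption based on the examples in the accompanying docs.

```python
>>> import collections
>>> from mxnet.gluon import text
>>>
>>> # Token frequencies: 'world' -> 3, 'hello' -> 2, 'nice' -> 1, 'hi' -> 1.
>>> counter = collections.Counter(
...     " hello world \n hello nice world \n hi world \n".split())
>>> # max_size=2 keeps only the two most frequent counter keys; the reserved
>>> # '<pad>' token is always indexed and does not count toward max_size.
>>> vocab = text.Vocabulary(counter, max_size=2, min_freq=1,
...                         unknown_token='<unk>', reserved_tokens=['<pad>'])
>>> vocab.idx_to_token  # assumed ordering: unknown, reserved, then by frequency
['<unk>', '<pad>', 'world', 'hello']
>>> vocab.token_to_idx['world']
2
```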

[GitHub] astonzhang commented on a change in pull request #10074: Add vocabulary and embedding

2018-03-14 Thread GitBox
astonzhang commented on a change in pull request #10074: Add vocabulary and embedding
URL: https://github.com/apache/incubator-mxnet/pull/10074#discussion_r174658117
 
 

 ##
 File path: python/mxnet/gluon/text/vocab.py
 ##
 @@ -0,0 +1,325 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+# pylint: disable=consider-iterating-dictionary
+
+"""Vocabulary."""
+from __future__ import absolute_import
+from __future__ import print_function
+
+import collections
+from ... import nd
+
+from . import _constants as C
+from . import embedding as ebd
+
+
+class Vocabulary(object):
+    """Indexing and embedding assignment for text tokens.
+
+
+    Parameters
+    ----------
+    counter : collections.Counter or None, default None
+        Counts text token frequencies in the text data. Its keys will be indexed according
+        to frequency thresholds such as `max_size` and `min_freq`. Keys of `counter`,
+        `unknown_token`, and values of `reserved_tokens` must be of the same hashable type.
+        Examples: str, int, and tuple.
+    max_size : None or int, default None
+        The maximum possible number of the most frequent tokens in the keys of `counter`
+        that can be indexed. Note that this argument does not count any token from
+        `reserved_tokens`. Suppose that there are different keys of `counter` whose
+        frequencies are the same. If indexing all of them would exceed this argument value,
+        such keys will be indexed one by one according to their __cmp__() order until the
+        frequency threshold is met. If this argument is None or larger than its largest
+        possible value restricted by `counter` and `reserved_tokens`, this argument has no
+        effect.
+    min_freq : int, default 1
+        The minimum frequency required for a token in the keys of `counter` to be indexed.
+    unknown_token : hashable object, default '<unk>'
+        The representation for any unknown token. In other words, any unknown token will be
+        indexed as the same representation. Keys of `counter`, `unknown_token`, and values
+        of `reserved_tokens` must be of the same hashable type. Examples: str, int, and
+        tuple.
+    reserved_tokens : list of hashable objects or None, default None
+        A list of reserved tokens that will always be indexed, such as special symbols
+        representing padding, beginning of sentence, and end of sentence. It cannot contain
+        `unknown_token` or duplicate reserved tokens. Keys of `counter`, `unknown_token`,
+        and values of `reserved_tokens` must be of the same hashable type. Examples: str,
+        int, and tuple.
+    embedding : instance or list of instances of `embedding.TokenEmbedding`, default None
+        The embedding to be assigned to the indexed tokens. If a list of multiple
+        embeddings is provided, their embedding vectors will be concatenated for the same
+        token.
+
+
+    Properties
+    ----------
+    embedding : instance of :class:`~mxnet.gluon.text.embedding.TokenEmbedding`
+        The embedding of the indexed tokens.
+    idx_to_token : list of strs
+        A list of indexed tokens where the list indices and the token indices are aligned.
+    reserved_tokens : list of strs or None
+        A list of reserved tokens that will always be indexed.
+    token_to_idx : dict mapping str to int
+        A dict mapping each token to its index integer.
+    unknown_token : hashable object
+        The representation for any unknown token. In other words, any unknown token will be
+        indexed as the same representation.
+
+
+    Examples
+    --------
+    >>> fasttext = text.embedding.create('fasttext', file_name='wiki.simple.vec')
+    >>> text_data = " hello world \n hello nice world \n hi world \n"
 
 Review comment:
   resolved




[GitHub] astonzhang commented on a change in pull request #10074: Add vocabulary and embedding

2018-03-14 Thread GitBox
astonzhang commented on a change in pull request #10074: Add vocabulary and embedding
URL: https://github.com/apache/incubator-mxnet/pull/10074#discussion_r174658718
 
 

 ##
 File path: python/mxnet/gluon/text/embedding.py
 ##
 @@ -0,0 +1,582 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+# pylint: disable=consider-iterating-dictionary
+
+"""Text token embedding."""
+from __future__ import absolute_import
+from __future__ import print_function
+
+import io
+import logging
+import os
+import tarfile
+import warnings
+import zipfile
+
+from . import _constants as C
+from ... import nd
+from ... import registry
+from ..utils import check_sha1, download, _get_repo_file_url
+
+
+def register(embedding_cls):
+    """Registers a new token embedding.
+
+
+    Once an embedding is registered, we can create an instance of this embedding with
+    :func:`~mxnet.gluon.text.embedding.create`.
+
+
+    Examples
+    --------
+    >>> @mxnet.gluon.text.embedding.register
+    ... class MyTextEmbed(mxnet.gluon.text.embedding.TokenEmbedding):
+    ...     def __init__(self, file_name='my_pretrain_file'):
+    ...         pass
+    >>> embed = mxnet.gluon.text.embedding.create('MyTextEmbed')
+    >>> print(type(embed))
+    <class '__main__.MyTextEmbed'>
+    """
+
+    register_text_embedding = registry.get_register_func(TokenEmbedding, 'token embedding')
+    return register_text_embedding(embedding_cls)
+
+
+def create(embedding_name, **kwargs):
+    """Creates an instance of token embedding.
+
+
+    Creates a token embedding instance by loading embedding vectors from an externally
+    hosted pre-trained token embedding file, such as those of GloVe and FastText. To get
+    all the valid `embedding_name` and `file_name`, use
+    `mxnet.gluon.text.embedding.get_file_names()`.
+
+
+    Parameters
+    ----------
+    embedding_name : str
+        The token embedding name (case-insensitive).
+
+
+    Returns
+    -------
+    An instance of `mxnet.gluon.text.embedding.TokenEmbedding`:
+        A token embedding instance that loads embedding vectors from an externally hosted
+        pre-trained token embedding file.
+    """
+
+    create_text_embedding = registry.get_create_func(TokenEmbedding, 'token embedding')
+    return create_text_embedding(embedding_name, **kwargs)
+
+
+def get_file_names(embedding_name=None):
+    """Get valid token embedding names and their pre-trained file names.
+
+
+    To load token embedding vectors from an externally hosted pre-trained token embedding
+    file, such as those of GloVe and FastText, one should use
+    `mxnet.gluon.text.embedding.create(embedding_name, file_name)`. This method returns all
+    the valid names of `file_name` for the specified `embedding_name`. If `embedding_name`
+    is set to None, this method returns all the valid names of `embedding_name` with their
+    associated `file_name`.
+
+
+    Parameters
+    ----------
+    embedding_name : str or None, default None
+        The pre-trained token embedding name.
+
+
+    Returns
+    -------
+    dict or list:
+        A list of all the valid pre-trained token embedding file names (`file_name`) for
+        the specified token embedding name (`embedding_name`). If the token embedding name
+        is set to None, returns a dict mapping each valid token embedding name to a list of
+        valid pre-trained file names (`file_name`). They can be plugged into
+        `mxnet.gluon.text.embedding.create(embedding_name, file_name)`.
+    """
+
+    text_embedding_reg = registry.get_registry(TokenEmbedding)
+
+    if embedding_name is not None:
+        if embedding_name not in text_embedding_reg:
+            raise KeyError('Cannot find `embedding_name` %s. Use '
+                           '`get_file_names(embedding_name=None).keys()` to get all the '
+                           'valid embedding names.' % embedding_name)
+        return list(text_embedding_reg[embedding_name].pretrained_file_name_sha1.keys())
+    else:
+        return {embedding_name: list(embedding_cls.pretrained_file_name_sha1.keys())
+                for embedding_name, embedding_cls in text_embedding_reg.items()}
+
+
+class TokenEmbedding(object):
+    """Token 

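To show how the three registry functions quoted above fit together, here is a hedged sketch. The subclass, its file name, and its `pretrained_file_name_sha1` table are hypothetical, and the lower-cased key returned by `get_file_names()` is an assumption based on `create` documenting `embedding_name` as case-insensitive.

```python
>>> from mxnet.gluon import text
>>>
>>> @text.embedding.register
... class MyTextEmbed(text.embedding.TokenEmbedding):
...     # Hypothetical table mapping valid pre-trained file names to SHA-1 hashes.
...     pretrained_file_name_sha1 = {'my_pretrain_file.vec': '0' * 40}
...     def __init__(self, file_name='my_pretrain_file.vec'):
...         pass
>>> # All registered embedding names with their valid pre-trained file names.
>>> text.embedding.get_file_names()['mytextembed']
['my_pretrain_file.vec']
>>> embed = text.embedding.create('MyTextEmbed', file_name='my_pretrain_file.vec')
>>> type(embed)
<class '__main__.MyTextEmbed'>
```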

[GitHub] astonzhang commented on a change in pull request #10074: Add vocabulary and embedding

2018-03-14 Thread GitBox
astonzhang commented on a change in pull request #10074: Add vocabulary and embedding
URL: https://github.com/apache/incubator-mxnet/pull/10074#discussion_r174658095
 
 

 ##
 File path: docs/api/python/gluon/text.md
 ##
 @@ -0,0 +1,332 @@
+# Gluon Text API
+
+## Overview
+
+The `mxnet.gluon.text` APIs refer to classes and functions related to text data
+processing, such as building indices and loading pre-trained embedding vectors for text
+tokens, and storing them in the `mxnet.ndarray.NDArray` format.
+
+This document lists the text APIs in `mxnet.gluon`:
+
+```eval_rst
+.. autosummary::
+    :nosignatures:
+
+    mxnet.gluon.text.embedding
+    mxnet.gluon.text.vocab
+    mxnet.gluon.text.utils
+```
+
+All the code demonstrated in this document assumes that the following modules or packages
+are imported.
+
+```python
+>>> from mxnet import gluon
+>>> from mxnet import nd
+>>> from mxnet.gluon import text
+>>> import collections
+
+```
+
+### Access pre-trained word embeddings for indexed words
+
+As a common use case, let us access pre-trained word embedding vectors for indexed words in
+just a few lines of code.
+
+To begin with, let us create a fastText word embedding instance by specifying the embedding
+name `fasttext` and the pre-trained file name `wiki.simple.vec`.
+
+```python
+>>> fasttext = text.embedding.create('fasttext', file_name='wiki.simple.vec')
+
+```
+
+Now, suppose that we have a simple text data set in the string format. We can count word
+frequency in the data set.
+
+```python
+>>> text_data = " hello world \n hello nice world \n hi world \n"
+>>> counter = text.count_tokens_from_str(text_data)
+
+```
+
+The obtained `counter` has key-value pairs whose keys are words and values are word
+frequencies.
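Concretely, for the sample string above, the resulting counts would be as follows (a sketch assuming `count_tokens_from_str` tokenizes on whitespace, as the example suggests):

```python
>>> text.count_tokens_from_str(" hello world \n hello nice world \n hi world \n")
Counter({'world': 3, 'hello': 2, 'nice': 1, 'hi': 1})
```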
 
 Review comment:
   resolved




[GitHub] astonzhang commented on a change in pull request #10074: Add vocabulary and embedding

2018-03-14 Thread GitBox
astonzhang commented on a change in pull request #10074: Add vocabulary and embedding
URL: https://github.com/apache/incubator-mxnet/pull/10074#discussion_r174658086
 
 

 ##
 File path: docs/api/python/gluon/text.md
 ##
 @@ -0,0 +1,332 @@
+# Gluon Text API
+
+## Overview
+
+The `mxnet.gluon.text` APIs refer to classes and functions related to text data
+processing, such as building indices and loading pre-trained embedding vectors for text
+tokens, and storing them in the `mxnet.ndarray.NDArray` format.
+
+This document lists the text APIs in `mxnet.gluon`:
+
+```eval_rst
+.. autosummary::
+    :nosignatures:
+
+    mxnet.gluon.text.embedding
+    mxnet.gluon.text.vocab
+    mxnet.gluon.text.utils
+```
+
+All the code demonstrated in this document assumes that the following modules or packages
+are imported.
+
+```python
+>>> from mxnet import gluon
+>>> from mxnet import nd
+>>> from mxnet.gluon import text
+>>> import collections
+
+```
+
+### Access pre-trained word embeddings for indexed words
+
+As a common use case, let us access pre-trained word embedding vectors for indexed words in
+just a few lines of code.
+
+To begin with, let us create a fastText word embedding instance by specifying the embedding
+name `fasttext` and the pre-trained file name `wiki.simple.vec`.
+
+```python
+>>> fasttext = text.embedding.create('fasttext', file_name='wiki.simple.vec')
+
+```
+
+Now, suppose that we have a simple text data set in the string format. We can count word
+frequency in the data set.
+
+```python
+>>> text_data = " hello world \n hello nice world \n hi world \n"
+>>> counter = text.count_tokens_from_str(text_data)
 
 Review comment:
   resolved




[GitHub] astonzhang commented on a change in pull request #10074: Add vocabulary and embedding

2018-03-14 Thread GitBox
astonzhang commented on a change in pull request #10074: Add vocabulary and embedding
URL: https://github.com/apache/incubator-mxnet/pull/10074#discussion_r174655150
 
 

 ##
 File path: python/mxnet/gluon/text/vocab.py
 ##
 @@ -0,0 +1,325 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+# pylint: disable=consider-iterating-dictionary
+
+"""Vocabulary."""
+from __future__ import absolute_import
+from __future__ import print_function
+
+import collections
+from ... import nd
+
+from . import _constants as C
+from . import embedding as ebd
+
+
+class Vocabulary(object):
+    """Indexing and embedding assignment for text tokens.
+
+
+    Parameters
+    ----------
+    counter : collections.Counter or None, default None
+        Counts text token frequencies in the text data. Its keys will be indexed according
+        to frequency thresholds such as `max_size` and `min_freq`. Keys of `counter`,
+        `unknown_token`, and values of `reserved_tokens` must be of the same hashable type.
+        Examples: str, int, and tuple.
+    max_size : None or int, default None
+        The maximum possible number of the most frequent tokens in the keys of `counter`
+        that can be indexed. Note that this argument does not count any token from
+        `reserved_tokens`. Suppose that there are different keys of `counter` whose
+        frequencies are the same. If indexing all of them would exceed this argument value,
+        such keys will be indexed one by one according to their __cmp__() order until the
+        frequency threshold is met. If this argument is None or larger than its largest
+        possible value restricted by `counter` and `reserved_tokens`, this argument has no
+        effect.
+    min_freq : int, default 1
+        The minimum frequency required for a token in the keys of `counter` to be indexed.
+    unknown_token : hashable object, default '<unk>'
+        The representation for any unknown token. In other words, any unknown token will be
+        indexed as the same representation. Keys of `counter`, `unknown_token`, and values
+        of `reserved_tokens` must be of the same hashable type. Examples: str, int, and
+        tuple.
+    reserved_tokens : list of hashable objects or None, default None
+        A list of reserved tokens that will always be indexed, such as special symbols
+        representing padding, beginning of sentence, and end of sentence. It cannot contain
+        `unknown_token` or duplicate reserved tokens. Keys of `counter`, `unknown_token`,
+        and values of `reserved_tokens` must be of the same hashable type. Examples: str,
+        int, and tuple.
+    embedding : instance or list of instances of `embedding.TokenEmbedding`, default None
+        The embedding to be assigned to the indexed tokens. If a list of multiple
+        embeddings is provided, their embedding vectors will be concatenated for the same
+        token.
+
+
+    Properties
+    ----------
+    embedding : instance of :class:`~mxnet.gluon.text.embedding.TokenEmbedding`
+        The embedding of the indexed tokens.
+    idx_to_token : list of strs
+        A list of indexed tokens where the list indices and the token indices are aligned.
+    reserved_tokens : list of strs or None
+        A list of reserved tokens that will always be indexed.
+    token_to_idx : dict mapping str to int
+        A dict mapping each token to its index integer.
+    unknown_token : hashable object
+        The representation for any unknown token. In other words, any unknown token will be
+        indexed as the same representation.
+
+
+    Examples
+    --------
+    >>> fasttext = text.embedding.create('fasttext', file_name='wiki.simple.vec')
+    >>> text_data = " hello world \n hello nice world \n hi world \n"
+    >>> counter = text.count_tokens_from_str(text_data)
+    >>> my_vocab = text.Vocabulary(counter, embedding=fasttext)
+    >>> my_vocab.embedding[['hello', 'world']]
+    [[  3.95669997e-01   2.14540005e-01  -3.53889987e-02  -2.42990002e-01
+    ...
+       -7.54180014e-01  -3.14429998e-01   2.40180008e-02  -7.61009976e-02]
+     [  1.04440004e-01  -1.08580001e-01   2.7211e-01   1.32990003e-01
+    ...

[GitHub] astonzhang commented on a change in pull request #10074: Add vocabulary and embedding

2018-03-14 Thread GitBox
astonzhang commented on a change in pull request #10074: Add vocabulary and embedding
URL: https://github.com/apache/incubator-mxnet/pull/10074#discussion_r174651725
 
 

 ##
 File path: python/mxnet/gluon/text/vocab.py
 ##
 @@ -0,0 +1,325 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+# pylint: disable=consider-iterating-dictionary
+
+"""Vocabulary."""
+from __future__ import absolute_import
+from __future__ import print_function
+
+import collections
+from ... import nd
+
+from . import _constants as C
+from . import embedding as ebd
+
+
+class Vocabulary(object):
+    """Indexing and embedding assignment for text tokens.
+
+
+    Parameters
+    ----------
+    counter : collections.Counter or None, default None
+        Counts text token frequencies in the text data. Its keys will be indexed according
+        to frequency thresholds such as `max_size` and `min_freq`. Keys of `counter`,
+        `unknown_token`, and values of `reserved_tokens` must be of the same hashable type.
+        Examples: str, int, and tuple.
+    max_size : None or int, default None
+        The maximum possible number of the most frequent tokens in the keys of `counter`
+        that can be indexed. Note that this argument does not count any token from
+        `reserved_tokens`. Suppose that there are different keys of `counter` whose
+        frequencies are the same. If indexing all of them would exceed this argument value,
+        such keys will be indexed one by one according to their __cmp__() order until the
+        frequency threshold is met. If this argument is None or larger than its largest
+        possible value restricted by `counter` and `reserved_tokens`, this argument has no
+        effect.
+    min_freq : int, default 1
+        The minimum frequency required for a token in the keys of `counter` to be indexed.
+    unknown_token : hashable object, default '<unk>'
+        The representation for any unknown token. In other words, any unknown token will be
+        indexed as the same representation. Keys of `counter`, `unknown_token`, and values
+        of `reserved_tokens` must be of the same hashable type. Examples: str, int, and
+        tuple.
+    reserved_tokens : list of hashable objects or None, default None
+        A list of reserved tokens that will always be indexed, such as special symbols
+        representing padding, beginning of sentence, and end of sentence. It cannot contain
+        `unknown_token` or duplicate reserved tokens. Keys of `counter`, `unknown_token`,
+        and values of `reserved_tokens` must be of the same hashable type. Examples: str,
+        int, and tuple.
+    embedding : instance or list of instances of `embedding.TokenEmbedding`, default None
+        The embedding to be assigned to the indexed tokens. If a list of multiple
+        embeddings is provided, their embedding vectors will be concatenated for the same
+        token.
+
+
+    Properties
+    ----------
+    embedding : instance of :class:`~mxnet.gluon.text.embedding.TokenEmbedding`
+        The embedding of the indexed tokens.
+    idx_to_token : list of strs
+        A list of indexed tokens where the list indices and the token indices are aligned.
+    reserved_tokens : list of strs or None
+        A list of reserved tokens that will always be indexed.
+    token_to_idx : dict mapping str to int
+        A dict mapping each token to its index integer.
+    unknown_token : hashable object
+        The representation for any unknown token. In other words, any unknown token will be
+        indexed as the same representation.
+
+
+    Examples
+    --------
+    >>> fasttext = text.embedding.create('fasttext', file_name='wiki.simple.vec')
+    >>> text_data = " hello world \n hello nice world \n hi world \n"
 
 Review comment:
   resolved




[GitHub] astonzhang commented on a change in pull request #10074: Add vocabulary and embedding

2018-03-14 Thread GitBox
astonzhang commented on a change in pull request #10074: Add vocabulary and embedding
URL: https://github.com/apache/incubator-mxnet/pull/10074#discussion_r174650933
 
 

 ##
 File path: docs/api/python/gluon/text.md
 ##
 @@ -0,0 +1,332 @@
+# Gluon Text API
+
+## Overview
+
+The `mxnet.gluon.text` APIs refer to classes and functions related to text data
+processing, such as building indices and loading pre-trained embedding vectors for text
+tokens, and storing them in the `mxnet.ndarray.NDArray` format.
+
+This document lists the text APIs in `mxnet.gluon`:
+
+```eval_rst
+.. autosummary::
+    :nosignatures:
+
+    mxnet.gluon.text.embedding
+    mxnet.gluon.text.vocab
+    mxnet.gluon.text.utils
+```
+
+All the code demonstrated in this document assumes that the following modules or packages
+are imported.
+
+```python
+>>> from mxnet import gluon
+>>> from mxnet import nd
+>>> from mxnet.gluon import text
+>>> import collections
+
+```
+
+### Access pre-trained word embeddings for indexed words
+
+As a common use case, let us access pre-trained word embedding vectors for indexed words in
+just a few lines of code.
+
+To begin with, let us create a fastText word embedding instance by specifying the embedding
+name `fasttext` and the pre-trained file name `wiki.simple.vec`.
+
+```python
+>>> fasttext = text.embedding.create('fasttext', file_name='wiki.simple.vec')
+
+```
+
+Now, suppose that we have a simple text data set in the string format. We can count word
+frequency in the data set.
+
+```python
+>>> text_data = " hello world \n hello nice world \n hi world \n"
+>>> counter = text.count_tokens_from_str(text_data)
+
+```
+
+The obtained `counter` has key-value pairs whose keys are words and values are word
+frequencies. Suppose that we want to build indices for all the keys in `counter` and load
+the defined fastText word embedding for all such indexed words. We need a `Vocabulary`
+instance with `counter` and `fasttext` as its arguments.
+
+```python
+>>> my_vocab = text.Vocabulary(counter, embedding=fasttext)
+
+```
+
+Now we are ready to access the fastText word embedding vectors for indexed words, such as
+'hello' and 'world'.
+
+```python
+>>> my_vocab.embedding[['hello', 'world']]
+
+[[  3.95669997e-01   2.14540005e-01  -3.53889987e-02  -2.42990002e-01
+...
+   -7.54180014e-01  -3.14429998e-01   2.40180008e-02  -7.61009976e-02]
+ [  1.04440004e-01  -1.08580001e-01   2.7211e-01   1.32990003e-01
+...
+   -3.7340e-01   5.67310005e-02   5.60180008e-01   2.9019e-02]]
+<NDArray 2x300 @cpu(0)>
+
+```
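As a side note, the shape of the attached embedding matrix ties the two sections together. The values below assume that the vocabulary indexes the unknown token plus the four distinct words, and that `wiki.simple.vec` vectors are 300-dimensional:

```python
>>> # One row per indexed token: the unknown token plus 'world', 'hello', 'nice', 'hi'.
>>> my_vocab.embedding.idx_to_vec.shape
(5, 300)
```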
+
+### Using pre-trained word embeddings in `gluon`
+
+To demonstrate how to use pre-trained word embeddings in the `gluon` package, let us first
+obtain the indices of the words 'hello' and 'world'.
+
+```python
+>>> my_vocab[['hello', 'world']]
+[2, 1]
+
+```
+
+We can obtain the vector representations of the words 'hello' and 'world' by specifying
+their indices (2 and 1) and the weight matrix `my_vocab.embedding.idx_to_vec` in
+`mxnet.gluon.nn.Embedding`.
+
+```python
+>>> input_dim, output_dim = my_vocab.embedding.idx_to_vec.shape
+>>> layer = gluon.nn.Embedding(input_dim, output_dim)
+>>> layer.initialize()
+>>> layer.weight.set_data(my_vocab.embedding.idx_to_vec)
+>>> layer(nd.array([2, 1]))
+
+[[  3.95669997e-01   2.14540005e-01  -3.53889987e-02  -2.42990002e-01
+...
+   -7.54180014e-01  -3.14429998e-01   2.40180008e-02  -7.61009976e-02]
+ [  1.04440004e-01  -1.08580001e-01   2.7211e-01   1.32990003e-01
+...
+   -3.7340e-01   5.67310005e-02   5.60180008e-01   2.9019e-02]]
+<NDArray 2x300 @cpu(0)>
+
+```
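Since `my_vocab[['hello', 'world']]` already returns the indices `[2, 1]`, the two steps can be chained; a small sketch using the vocabulary and layer built above:

```python
>>> # Same lookup as above, going straight from token strings to vectors;
>>> # returns the same 2x300 NDArray as the previous example.
>>> layer(nd.array(my_vocab[['hello', 'world']]))
```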
+
+## Vocabulary
+
+The vocabulary builds indices for text tokens and can be assigned token embeddings. The
+input counter, whose keys are the candidate tokens to index, may be obtained via
+[`count_tokens_from_str`](#mxnet.gluon.text.utils.count_tokens_from_str).
+
+
+```eval_rst
+.. currentmodule:: mxnet.gluon.text.vocab
+.. autosummary::
+    :nosignatures:
+
+    Vocabulary
+```
+
+Suppose that we have a simple text data set in the string format. We can count word
+frequency in the data set.
+
+```python
+>>> text_data = " hello world \n hello nice world \n hi world \n"
+>>> counter = text.utils.count_tokens_from_str(text_data)
+
+```
+
+The obtained `counter` has key-value pairs whose keys are words and values are word
+frequencies. Suppose that we want to build indices for the 2 most frequent keys in
+`counter` with the unknown token representation '(unk)' and a reserved token '(pad)'.
+
+```python
+>>> my_vocab = text.Vocabulary(counter, max_size=2, unknown_token='(unk)', 
+... reserved_tokens=['(pad)'])
+
+```
+
+We can access properties such as `token_to_idx` (mapping tokens to indices), `idx_to_token`
+(mapping indices to tokens), `unknown_token` (the representation of any unknown token), and
+`reserved_tokens` (the reserved tokens).
+
+
+```python
+>>> my_vocab.token_to_idx
+{'(unk)': 0, '(pad)': 1, 'world': 2, 'hello': 3}
+>>> my_vocab.idx_to_token
+['(unk)', '(pad)', 

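Given the `token_to_idx` mapping just shown, words excluded by `max_size=2`, such as 'nice' and 'hi', would be treated as unknown. A behavior sketch, assuming indexing the vocabulary with a token list performs token-to-index lookup as in the earlier example:

```python
>>> # 'hello' is indexed; 'nice' and 'hi' fall back to index 0, i.e. '(unk)'.
>>> my_vocab[['hello', 'nice', 'hi']]
[3, 0, 0]
```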
[GitHub] astonzhang commented on a change in pull request #10074: Add vocabulary and embedding

2018-03-13 Thread GitBox
astonzhang commented on a change in pull request #10074: Add vocabulary and embedding
URL: https://github.com/apache/incubator-mxnet/pull/10074#discussion_r174324845
 
 

 ##
 File path: python/mxnet/text/embedding.py
 ##
 @@ -38,8 +38,12 @@
 
 def register(embedding_cls):
     """Registers a new token embedding.
+
+
     Once an embedding is registered, we can create an instance of this embedding with
-    :func:`~mxnet.contrib.text.embedding.create`.
+    :func:`~mxnet.text.embedding.create`.
+
+
     Examples
     --------
     >>> @mxnet.contrib.text.embedding.register
 
 Review comment:
   Thanks!

