Author: wangwei
Date: Sat Jun 13 13:49:33 2015
New Revision: 1685258

URL: http://svn.apache.org/r1685258
Log:
CMS commit to singa by wangwei

Modified:
    incubator/singa/site/trunk/content/markdown/develop/schedule.md

Modified: incubator/singa/site/trunk/content/markdown/develop/schedule.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/develop/schedule.md?rev=1685258&r1=1685257&r2=1685258&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/develop/schedule.md (original)
+++ incubator/singa/site/trunk/content/markdown/develop/schedule.md Sat Jun 13 13:49:33 2015
@@ -21,22 +21,24 @@ Notice:    Licensed to the Apache Softwa
 | Release | Module| Feature | Status |
 |---------|---------|-------------|--------|
 | 0.1     | Neural Network |1.1. Feed forward neural network, including CNN, MLP | done|
-|         |                |1.2. RBM-like model, including RBM | working|
+| Early July |             |1.2. RBM-like model, including RBM | working|
 |         |                |1.3. Recurrent neural network, including standard RNN | working|
 |         | Architecture   |1.4. One worker group on single node (with data partition)| done|
-|         |                |1.5. Multi worker groups on single node using [Hogwild](http://www.eecs.berkeley.edu/~brecht/papers/hogwildTR.pdf)|working|
-|         |                |1.6. Multi groups across nodes, like [Downpour](http://papers.nips.cc/paper/4687-large-scale-distributed-deep-networks)|working|
-|         | Resource Management |1.7. Integration with Mesos | working|
-|         | Failure recovery|1.8. Checkpoint and restore |working|
+|         |                |1.5. Multi worker groups on single node using [Shared Memory Hogwild](http://www.eecs.berkeley.edu/~brecht/papers/hogwildTR.pdf)|testing|
+|         |                |1.6. Distributed Hogwild | working|
+|         |                |1.7. Multi groups across nodes, like [Downpour](http://papers.nips.cc/paper/4687-large-scale-distributed-deep-networks)|working|
+|         |                |1.8. All-Reduce training architecture, like [DeepImage](http://arxiv.org/abs/1501.02876)| working|
+|         | Resource Management |1.9. Integration with Mesos | working|
+|         | Failure recovery|1.10. Checkpoint and restore |testing|
-|         | Tools|1.9. Installation with GNU auto tools| done|
+|         | Tools|1.11. Installation with GNU auto tools| done|
 |0.2      | Neural Network |2.1. Feed forward neural network, including auto-encoders, hinge loss layers, HDFS data layers||
-|         |                |2.2. RBM-like model, including DBM | |
-|         |                |2.3. Recurrent neural network, including LSTM| |
+| July-        |                |2.2. RBM-like model, including DBM | |
+| End of August |          |2.3. Recurrent neural network, including LSTM| |
 |         |                |2.4. Model partition ||
 |         | Communication  |2.5. MPI||
 |         | GPU            |2.6. Single GPU ||
 |         |                |2.7. Multiple GPUs on single node||
-|         | Architecture   |2.8. All-Reduce training architecture like [DeepImage](http://arxiv.org/abs/1501.02876)||
+|         | Architecture   |2.8. Update to support GPUs||
 |         | Fault Tolerance|2.9. Node failure detection and recovery||
-|         | Binding        |2.9. Python binding ||
-|         | User Interface |2.10. Web front-end for job submission and performance visualization||
+|         | Binding        |2.10. Python binding ||
+|         | User Interface |2.11. Web front-end for job submission and performance visualization||