Hi Kyle,

I'm actively working on it now. It's pretty close to completion; I'm mainly
trying to identify bottlenecks and optimize as much as possible.
As Phase 1, I implemented multi-model training for gradient descent. Instead of
performing vector-vector operations on individual rows (examples) and weight
vectors, I've batched them into matrices so that we can use Level 3 BLAS to
speed things up. I've also added support for sparse matrices
(https://github.com/apache/spark/pull/2294), as exploiting sparsity will
allow you to train more models at once.
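To make the batching idea concrete, here is a minimal NumPy sketch (not the actual Spark/MLlib code, and the variable names are my own): by stacking the weight vectors of k models into a single d-by-k matrix, one least-squares gradient step for all k models becomes a pair of matrix-matrix products (Level 3 BLAS) instead of k separate matrix-vector products (Level 2 BLAS).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 5, 3              # examples, features, models
X = rng.standard_normal((n, d))  # training data, one example per row
y = rng.standard_normal(n)       # labels, shared across the k models
W = rng.standard_normal((d, k))  # one column of weights per model

lr = 0.1

# Batched step: all k models at once via two GEMMs (Level 3 BLAS).
preds = X @ W                       # n x k predictions in one call
residuals = preds - y[:, None]      # broadcast labels across models
grads = X.T @ residuals / n         # d x k gradient matrix
W_batched = W - lr * grads

# Equivalent per-model loop: k matrix-vector products (Level 2 BLAS).
W_loop = np.column_stack(
    [W[:, j] - lr * (X.T @ (X @ W[:, j] - y)) / n for j in range(k)]
)

# Same result, but the batched form lets the BLAS library exploit
# cache blocking and vectorization across models.
assert np.allclose(W_batched, W_loop)
```

The same shape trick extends to sparse data: with a sparse X, the per-example cost of each extra model column is small, which is why sparsity lets you train more models in one pass.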

Best,
Burak

----- Original Message -----
From: "Kyle Ellrott" <kellr...@soe.ucsc.edu>
To: dev@spark.apache.org
Sent: Tuesday, September 16, 2014 3:21:53 PM
Subject: [mllib] State of Multi-Model training

I'm curious about the state of development of multi-model learning in MLlib
(training sets of models during the same training session, rather than one
at a time). The JIRA lists it as in progress, targeting Spark 1.2.0
(https://issues.apache.org/jira/browse/SPARK-1486), but there haven't been
any updates on it in over a month.
I submitted a pull request with a possible approach a little over two months
ago (https://github.com/apache/spark/pull/1292), but haven't received any
feedback on the patch yet.
Is anybody else working on multi-model training?

Kyle


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org