[
https://issues.apache.org/jira/browse/MAHOUT-792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13096570#comment-13096570
]
Lance Norskog edited comment on MAHOUT-792 at 9/4/11 12:27 AM:
---------------------------------------------------------------
Please run this:
{code}
public static void main(String[] args) {
  // Build a 30x40 RandomTrinaryMatrix and print every entry.
  RandomTrinaryMatrix rtm = new RandomTrinaryMatrix(0, 30, 40, true);
  for (int row = 0; row < 30; row++) {
    for (int column = 0; column < 40; column++) {
      System.out.println(rtm.get(row, column));
    }
  }
  // Build a second matrix with identical arguments and print it as well.
  rtm = new RandomTrinaryMatrix(0, 30, 40, true);
  for (int row = 0; row < 30; row++) {
    for (int column = 0; column < 40; column++) {
      System.out.println(rtm.get(row, column));
    }
  }
}
{code}
The "low-grade" mode never emits 0, and the "high-grade" mode never emits
negative numbers.
I'm curious: what is the math behind an even split of -1/0/+1? The Achlioptas
math calls for +1/-1 (probability 1/2 each) or +1/-1/0/0/0/0 (that is, +1 and -1
with probability 1/6 each and 0 with probability 2/3). So the low-grade mode is
correct, just not what the comment describes :)
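For reference, the Achlioptas sampling rule is easy to write down directly. Here is a minimal sketch in plain Java (not Mahout's RandomTrinaryMatrix; the class and method names here are mine) that draws entries as +1 or -1 with probability 1/6 each and 0 with probability 2/3, then prints the empirical frequencies:

```java
import java.util.Random;

public class AchlioptasDraw {
  // One Achlioptas-distributed entry: +1 w.p. 1/6, -1 w.p. 1/6, 0 w.p. 2/3.
  static int draw(Random rng) {
    int r = rng.nextInt(6);   // six equally likely outcomes
    if (r == 0) return 1;
    if (r == 1) return -1;
    return 0;                 // the remaining four outcomes map to 0
  }

  public static void main(String[] args) {
    Random rng = new Random(42);
    int n = 600000;
    int[] counts = new int[3];  // counts for -1, 0, +1
    for (int i = 0; i < n; i++) counts[draw(rng) + 1]++;
    System.out.printf("-1: %.3f   0: %.3f   +1: %.3f%n",
        counts[0] / (double) n, counts[1] / (double) n, counts[2] / (double) n);
  }
}
```

With a large sample the frequencies should settle near 1/6, 2/3, 1/6; an even -1/0/+1 split does not correspond to either of the Achlioptas schemes.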
> Add new stochastic decomposition code
> -------------------------------------
>
> Key: MAHOUT-792
> URL: https://issues.apache.org/jira/browse/MAHOUT-792
> Project: Mahout
> Issue Type: New Feature
> Reporter: Ted Dunning
> Attachments: MAHOUT-792.patch, MAHOUT-792.patch, sd-2.pdf
>
>
> I have figured out some simplification for our SSVD algorithms. This
> eliminates the QR decomposition and makes life easier.
> I will produce a patch that contains the following:
> - a CholeskyDecomposition implementation that does pivoting (and thus
> rank-revealing) or not. This should actually be useful for solution of large
> out-of-core least squares problems.
> - an in-memory SSVD implementation that should work for matrices up to
> about 1/3 of available memory.
> - an out-of-core SSVD threaded implementation that should work for very
> large matrices. It should take time about equal to the cost of reading the
> input matrix 4 times and will require working disk roughly equal to the size
> of the input.
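The "eliminates the QR decomposition" step in the quoted description can be sketched concretely: instead of computing QR(Y) for the tall projected matrix Y, one can factor the small Gram matrix Y'Y = LL' (Cholesky) and take Q = Y·inv(L)', which has the same column space and orthonormal columns. The code below is an illustrative sketch only, not the patch's API; all names are mine.

```java
import java.util.Random;

public class CholeskyOrtho {
  // Orthonormalize the columns of a tall matrix y (n x k) via the Cholesky
  // trick: factor y'y = L L' and return q = y * inv(L)', so that q'q = I.
  static double[][] orthonormalize(double[][] y) {
    int n = y.length, k = y[0].length;
    double[][] g = new double[k][k];           // Gram matrix g = y'y (k x k)
    for (double[] row : y)
      for (int i = 0; i < k; i++)
        for (int j = 0; j < k; j++)
          g[i][j] += row[i] * row[j];
    double[][] l = new double[k][k];           // Cholesky factor: g = l l'
    for (int i = 0; i < k; i++)
      for (int j = 0; j <= i; j++) {
        double s = g[i][j];
        for (int m = 0; m < j; m++) s -= l[i][m] * l[j][m];
        l[i][j] = (i == j) ? Math.sqrt(s) : s / l[j][j];
      }
    double[][] q = new double[n][k];           // q = y * inv(l)', row by row
    for (int r = 0; r < n; r++)
      for (int i = 0; i < k; i++) {            // forward substitution with l
        double s = y[r][i];
        for (int m = 0; m < i; m++) s -= l[i][m] * q[r][m];
        q[r][i] = s / l[i][i];
      }
    return q;
  }

  public static void main(String[] args) {
    Random rng = new Random(1);
    double[][] y = new double[200][5];         // random tall test matrix
    for (double[] row : y)
      for (int j = 0; j < 5; j++) row[j] = rng.nextGaussian();
    double[][] q = orthonormalize(y);
    for (int i = 0; i < 5; i++)                // verify q'q is the identity
      for (int j = 0; j < 5; j++) {
        double d = 0;
        for (double[] row : q) d += row[i] * row[j];
        if (Math.abs(d - (i == j ? 1 : 0)) > 1e-8)
          throw new AssertionError("q'q is not the identity at " + i + "," + j);
      }
    System.out.println("q'q == I");
  }
}
```

Since y'y is only k x k, the Cholesky factorization is cheap even when y itself is huge; the expensive part is the two passes over y, which fits the "read the input a small constant number of times" cost model in the description.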
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira