See that module's docstring: reading the input is slower than processing it with the stochastic decomposition.
In short: for distributed computing to make sense performance-wise, the data would already need to be pre-distributed, too. That is the case in Hadoop, so I suspect stochastic decomposition is an algorithm where Mahout could really make a difference on terabyte+ problems.

Radim
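For reference, the kind of stochastic (randomized) decomposition being discussed can be sketched in a few lines of NumPy. This is only an illustrative sketch of the general randomized-SVD idea (project, orthonormalize, decompose the small matrix); the function name and parameters are mine, not any particular library's API:

```python
import numpy as np

def randomized_svd(a, rank, oversample=10, seed=0):
    """Illustrative randomized SVD: one pass over `a` to build the sketch."""
    rng = np.random.default_rng(seed)
    m, n = a.shape
    k = min(rank + oversample, min(m, n))
    omega = rng.standard_normal((n, k))   # random test matrix
    y = a @ omega                         # sample the column space of `a`
    q, _ = np.linalg.qr(y)                # orthonormal basis for that space
    b = q.T @ a                           # small k x n matrix
    u_b, s, vt = np.linalg.svd(b, full_matrices=False)
    u = q @ u_b                           # lift back to the original space
    return u[:, :rank], s[:rank], vt[:rank]

# Tiny usage example on an exactly rank-1 matrix.
a = np.outer(np.arange(1, 6), np.arange(1, 9)).astype(float)
u, s, vt = randomized_svd(a, rank=1)
err = np.linalg.norm(a - (u * s) @ vt)
```

The point relevant to the discussion is that the expensive touches of `a` (the products `a @ omega` and `q.T @ a`) stream over the data, which is why I/O, not computation, dominates.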
