I'm not completely sure yet how to address this (code and tests in
separate modules), but I will give it a shot soon.
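
One shape that might work, as a rough untested sketch (names other than
MahoutLocalContext and DistributedContext are made up, and I may not
have the package path exactly right): keep an abstract suite in
math-scala that only asks for some DistributedContext, and bind it to
Spark in the spark module's test sources.

    // --- in math-scala (no Spark dependency) ---
    import org.scalatest.FunSuite
    import org.apache.mahout.math.drm.DistributedContext

    // Engine-agnostic test logic; an engine module supplies the context.
    trait CooccurrenceSuiteBase extends FunSuite {

      implicit def ctx: DistributedContext

      test("parallel cooccurrence") {
        // parallelize the fixtures, run cooccurrence, collect and
        // assert, all through the engine-agnostic DRM DSL
      }
    }

    // --- in spark, test scope only ---
    // Assumes MahoutLocalContext (the trait Pat mentions below) exposes
    // its Spark-backed context under some member; wire it to ctx here.
    class SparkCooccurrenceSuite extends CooccurrenceSuiteBase
      with MahoutLocalContext {
      implicit def ctx: DistributedContext = mahoutCtx // assumed member name
    }

If something like that flies, math-scala stays Spark-free at compile
time and the Spark test dependency stays confined to the spark module,
so we might avoid a new sub-project (and the SparkEngine and script
changes) entirely.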


On Mon, Jul 7, 2014 at 9:18 AM, Pat Ferrel <[email protected]> wrote:

> OK, I’m spending more time on this than I have to spare. The test class
> extends MahoutLocalContext, which provides an implicit Spark context, and I
> haven’t found a way to test parallel execution of cooccurrence without it.
> So far the only obvious option is to put cf into math-scala, but the tests
> would have to remain in spark, and that seems like trouble, so I’d rather
> not do that.
>
> I suspect that as more math-scala-consuming algorithms get implemented,
> this issue will proliferate: we will have implementations that do not
> require Spark but tests that do. We could create a new sub-project that
> allows for this, I suppose, but a new sub-project would require changes to
> SparkEngine and mahout’s script.
>
> If someone (Anand?) wants to offer a PR with some way around this, I’d be
> happy to integrate it.
>
> On Jun 30, 2014, at 5:39 PM, Pat Ferrel <[email protected]> wrote:
>
> No argument, just trying to decide whether to create core-scala or keep
> dumping anything that isn’t Spark-dependent into math-scala.
>
> On Jun 30, 2014, at 9:32 AM, Ted Dunning <[email protected]> wrote:
>
> On Mon, Jun 30, 2014 at 8:36 AM, Pat Ferrel <[email protected]> wrote:
>
> > Speaking for Sebastian and Dmitriy (with some ignorance) I think the idea
> > was to isolate things with Spark dependencies something like we did
> > before with Hadoop.
>
>
> Go ahead and speak for me as well here!
>
> I think isolating the dependencies is crucial for platform nimbleness
> (nimbility?).
>
