If that is the case, why not commit it that way already (i.e., separate modules for code and tests), since that has been the "norm" thus far (see DSSVD, DSPCA, etc.)? Fixing the code-vs-test module split could be a separate task/activity (which I'm happy to pick up), and the cf code move need not depend on it.
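For concreteness, here is a rough sketch of the kind of split I mean: a Spark-free code module plus a sibling module that holds only the Spark-dependent tests. Module and artifact names below are illustrative, not the actual Mahout POMs:

```xml
<!-- Sketch only: hypothetical parent-pom module list. The code stays
     Spark-free; a separate module carries the Spark-dependent tests. -->
<modules>
  <module>math-scala</module>             <!-- Spark-free implementations (e.g. cf) -->
  <module>math-scala-spark-tests</module> <!-- hypothetical: Spark-dependent tests only -->
</modules>

<!-- In the hypothetical math-scala-spark-tests/pom.xml: depend on the
     code module normally, and pull Spark in at test scope only. -->
<dependencies>
  <dependency>
    <groupId>org.apache.mahout</groupId>
    <artifactId>mahout-math-scala</artifactId>
    <version>${project.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.mahout</groupId>
    <artifactId>mahout-spark</artifactId>
    <version>${project.version}</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```

This keeps math-scala free of any Spark dependency while still letting tests extend a MahoutLocalContext-style fixture; the cost is one more module in the build, which is the SparkEngine/script concern Pat raises below.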
On Tue, Jul 8, 2014 at 6:14 PM, Pat Ferrel <[email protected]> wrote:

> I already did the code and tests in separate modules, that works but is
> not a good way to go imo. If there are tests that will work in
> math-scala then we can put the code in math-scala. I couldn’t find a
> way to do it.
>
> On Jul 8, 2014, at 4:40 PM, Anand Avati <[email protected]> wrote:
>
> > I'm not completely sure how to address this (code and tests in
> > separate modules) as I write, but I will give it a shot soon.
> >
> > On Mon, Jul 7, 2014 at 9:18 AM, Pat Ferrel <[email protected]> wrote:
> >
> > > OK, I’m spending more time on this than I have to spare. The test
> > > class extends MahoutLocalContext, which provides an implicit Spark
> > > context. I haven’t found a way to test parallel execution of
> > > cooccurrence without it. So far the only obvious option is to put
> > > cf into math-scala but the tests would have to remain in spark and
> > > that seems like trouble so I’d rather not do that.
> > >
> > > I suspect as more math-scala consuming algos get implemented this
> > > issue will proliferate. We will have implementations that do not
> > > require Spark but tests that do. We could create a new sub-project
> > > that allows for this I suppose but a new sub-project will require
> > > changes to SparkEngine and mahout’s script.
> > >
> > > If someone (Anand?) wants to offer a PR with some way around this
> > > I’d be happy to integrate.
> > >
> > > On Jun 30, 2014, at 5:39 PM, Pat Ferrel <[email protected]> wrote:
> > >
> > > > No argument, just trying to decide whether to create core-scala
> > > > or keep dumping anything not Spark dependent in math-scala.
> > > >
> > > > On Jun 30, 2014, at 9:32 AM, Ted Dunning <[email protected]>
> > > > wrote:
> > > >
> > > > > On Mon, Jun 30, 2014 at 8:36 AM, Pat Ferrel
> > > > > <[email protected]> wrote:
> > > > >
> > > > > > Speaking for Sebastian and Dmitriy (with some ignorance) I
> > > > > > think the idea was to isolate things with Spark dependencies
> > > > > > something like we did before with Hadoop.
> > > > >
> > > > > Go ahead and speak for me as well here!
> > > > >
> > > > > I think isolating the dependencies is crucial for platform
> > > > > nimbleness (nimbility?)
