I think separating the concerns is a good thing.  It allows us to use better
implementations, as well as handle cases where being a Writable makes no
sense.

(just like Jake said)
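
For concreteness, a rough sketch of the decorator Jake is describing might
look something like this (names here are illustrative, not what's in trunk;
DenseVector stands in for any concrete Hadoop-agnostic implementation):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Hadoop-agnostic interface: no Writable anywhere in sight.
interface Vector {
  int size();
  double get(int index);
  void set(int index, double value);
}

// The decorator is the only class that knows about Hadoop serialization.
class VectorWritable implements Vector, Writable {
  private Vector delegate;

  public VectorWritable() { }  // no-arg constructor for Hadoop's reflection
  public VectorWritable(Vector delegate) { this.delegate = delegate; }

  // Vector methods just forward to the wrapped instance.
  public int size() { return delegate.size(); }
  public double get(int index) { return delegate.get(index); }
  public void set(int index, double value) { delegate.set(index, value); }

  // Writable methods live here, so the math code never sees them.
  public void write(DataOutput out) throws IOException {
    out.writeInt(delegate.size());
    for (int i = 0; i < delegate.size(); i++) {
      out.writeDouble(delegate.get(i));
    }
  }

  public void readFields(DataInput in) throws IOException {
    int size = in.readInt();
    delegate = new DenseVector(size);  // hypothetical concrete implementation
    for (int i = 0; i < size; i++) {
      delegate.set(i, in.readDouble());
    }
  }
}

The algorithms would all be written against Vector, and only the map/reduce
plumbing would ever touch VectorWritable.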

On Fri, Oct 2, 2009 at 10:50 AM, Jake Mannix <[email protected]> wrote:

> On Thu, Oct 1, 2009 at 3:42 AM, Grant Ingersoll <[email protected]>
> wrote:
>
> >
> > On Oct 1, 2009, at 12:17 AM, Jake Mannix wrote:
> >
> >> So why do we really need vectors to be Writable?  I see the appeal; it's
> >> nice and makes the code nicely integrated, but the way I ended up going,
> >> so that you could use decomposer either with or without Hadoop, was to
> >> use a decorator - just have VectorWritable be an implementation of Vector
> >> which encapsulates the Writable methods and delegates to a Hadoop-agnostic
> >> Vector member instance.
> >>
> >> This way all the algorithms which use the Vectors don't need to care about
> >> Hadoop unless they really do.
> >>
> >
> > That sounds reasonable, just going to take a little refactoring.
> >
>
> So what do the rest of you think about doing this?  Do we want to do some
> refactoring (post 0.2, naturally) which separates the writableness from the
> Matrix/Vector-ness?
>
> Or are we fine with all of our linear algebraic classes being tied to HDFS
> at an interface level?  (Even Matrices, which will probably soon need to
> be adapted to the idea that often they won't live on any single machine,
> and thus you'll never be write()'ing them out all at once, and so it won't
> always even make sense to have them be Writable).
>
>  -jake
>



-- 
Ted Dunning, CTO
DeepDyve
