That's a good perspective. I don't think moving to the new APIs is that burdensome, especially as we're doing a better and better job of centralizing and standardizing our approach to interacting with Hadoop.
I somehow sense we're going to need to run "close to the metal" with Hadoop to squeeze maximum performance out of it, so maybe we aren't the casual users who would most benefit from a layer in between. But that's just a hunch.

So far this is an isolated issue that I'm attacking. Maybe it is just bad practice to have two mappers with different inputs, but I think it's a handy trick. For now, I may punt and keep the old code for this particular mechanism, since it's only deprecated. When/if it goes away, then we can really deal with it; by that time, maybe new solutions will be available.

On Tue, May 25, 2010 at 8:45 PM, Ted Dunning <[email protected]> wrote:

> I presume that Robin's rework addresses this, right?
>
> ---------- Forwarded message ----------
> From: Chris K Wensel <[email protected]>
> Date: Tue, May 25, 2010 at 12:43 PM
> Subject: Re: Moving to new Hadoop APIs
> To: Ted Dunning <[email protected]>
>
> you guys need to update the mail list page. still sends to lucene.apache.org
>
> On May 25, 2010, at 12:40 PM, Ted Dunning wrote:
>
> > Thanks.
> >
> > On Tue, May 25, 2010 at 12:20 PM, Chris K Wensel <[email protected]> wrote:
> >
>> I'm not on the list.
>>
>> here is my opinion on the new apis
>>
>> http://groups.google.com/group/cascading-user/browse_thread/thread/4dc26b68401bbc0f#
>>
>> and here
>>
>> http://stackoverflow.com/questions/2855167/which-hadoop-api-version-should-i-use/2859863#2859863
>>
>> I can reply to the list, but am rushing out of the office. let me know.
>>
>> ckw
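
[Editor's note: the "two mappers with different inputs" trick above corresponds to `MultipleInputs` in the old `org.apache.hadoop.mapred` API, which at the time had no counterpart in the new `mapreduce` API. The sketch below shows the deprecated-API usage being kept for now; the class names `LogMapper`, `UserMapper`, `JoinReducer` and the paths are hypothetical placeholders, not from the original thread.]

```java
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.lib.MultipleInputs;

public class TwoMapperJob {

  // Hypothetical mapper for plain-text log records.
  public static class LogMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, Text> {
    public void map(LongWritable key, Text value,
        OutputCollector<Text, Text> out, Reporter r) throws IOException {
      out.collect(value, new Text("log"));
    }
  }

  // Hypothetical mapper for a sequence file with a different record shape.
  public static class UserMapper extends MapReduceBase
      implements Mapper<Text, Text, Text, Text> {
    public void map(Text key, Text value,
        OutputCollector<Text, Text> out, Reporter r) throws IOException {
      out.collect(key, value);
    }
  }

  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(TwoMapperJob.class);
    conf.setJobName("two-mapper-join");

    // The trick: each input path gets its own InputFormat AND its own
    // Mapper class, feeding a single shared shuffle/reduce phase.
    MultipleInputs.addInputPath(conf, new Path("/data/logs"),
        TextInputFormat.class, LogMapper.class);
    MultipleInputs.addInputPath(conf, new Path("/data/users"),
        SequenceFileInputFormat.class, UserMapper.class);

    // Both mappers must emit the same key/value types for the reducer.
    conf.setMapOutputKeyClass(Text.class);
    conf.setMapOutputValueClass(Text.class);
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(Text.class);

    FileOutputFormat.setOutputPath(conf, new Path("/out/joined"));
    JobClient.runJob(conf);
  }
}
```

This is a job-configuration sketch only; it needs a Hadoop 0.20.x classpath and a running cluster (or local runner) to execute, and a `JoinReducer` would be set via `conf.setReducerClass(...)` in a real job.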
