Matt,

Very good news indeed if my current code can run satisfactorily against
this upcoming release.  I am looking forward to testing it.

Any idea as to how long till I can download it?  It would help my planning.

Thanks

On 2 February 2012 18:53, Matt Foley <mfo...@hortonworks.com> wrote:

> I would differ on the terminology.  The 0.20/1.0 code line is STABLE.
>  0.21 is not, and probably never will be.  0.22 is said to be approaching
> stability, but has not been adjudged to be so yet.  1.0 is indeed derived
> from the 0.20 (security+append) branches, so it doesn't have some features
> (stable and unstable) that are in 0.21.  It does, however, have a lot of
> stable features originally in 0.21 and even 0.23, back-ported to the 0.20
> code line.
>
> At any rate, the good news for you is, MultipleInputs and MultipleOutputs
> are indeed included in the MAPREDUCE-3607 patch, which is part of 1.0.1,
> due out for vote tomorrow.
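>
> In case it helps your testing, here is a minimal sketch of new-API
> MultipleOutputs (org.apache.hadoop.mapreduce.lib.output.MultipleOutputs)
> in a reducer; the named output "text" and the key/value types are just
> illustrative, not anything prescribed by the patch:
>
> import java.io.IOException;
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.mapreduce.Reducer;
> import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
>
> public class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
>   private MultipleOutputs<Text, LongWritable> mos;
>
>   @Override
>   protected void setup(Context context) {
>     mos = new MultipleOutputs<Text, LongWritable>(context);
>   }
>
>   @Override
>   protected void reduce(Text key, Iterable<LongWritable> values, Context context)
>       throws IOException, InterruptedException {
>     long sum = 0;
>     for (LongWritable v : values) {
>       sum += v.get();
>     }
>     // "text" must first be registered in the driver:
>     // MultipleOutputs.addNamedOutput(job, "text", TextOutputFormat.class,
>     //     Text.class, LongWritable.class);
>     mos.write("text", key, new LongWritable(sum));
>   }
>
>   @Override
>   protected void cleanup(Context context) throws IOException, InterruptedException {
>     mos.close(); // flush and close the side outputs
>   }
> }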
>
> Cheers,
> --Matt
>
>
> On Thu, Feb 2, 2012 at 3:06 PM, Geoffry Roberts <geoffry.robe...@gmail.com> wrote:
>
>> Yeesh! This is not good.  The whole reason I was using 0.21.0 was
>> MultipleOutputs.  It disappeared for a while, then came back in 0.21.
>>
>> Can you comfort me with any hope that this 1.0.1 will be soon? I gotta
>> have MultipleOutputs, and WebHDFS, if it works as billed, will be most
>> helpful.
>>
>>
>> On 2 February 2012 13:40, Harsh J <ha...@cloudera.com> wrote:
>>
>>> Yeah I was afraid the API availability would be your next question.
>>> New API MultipleOutputs should be in the 1.0.1 micro update but there
>>> could still be lots of things missing.
>>>
>>> On Fri, Feb 3, 2012 at 3:00 AM, Geoffry Roberts
>>> <geoffry.robe...@gmail.com> wrote:
>>> > A downgrade! I wouldn't have guessed. Thanks
>>> >
>>> > Do you know if anything happened to the class MultipleOutputs?
>>> >
>>> > I just tried running some of my old MR code against 1.0 and it seems
>>> > MultipleOutputs cannot be found in the new hadoop-core-1.0.0.jar.
>>> >
>>> >
>>> > On 2 February 2012 10:45, Harsh J <ha...@cloudera.com> wrote:
>>> >>
>>> >> Note that 0.21 to 1.0 is "sort-of" a downgrade in some ways,
>>> >> considering 1.0 is a rename of the 0.20-series. You probably want to
>>> >> review a lot of config params since those may not be present in 1.0.
>>> >>
>>> >> On Thu, Feb 2, 2012 at 11:47 PM, Geoffry Roberts
>>> >> <geoffry.robe...@gmail.com> wrote:
>>> >> > All,
>>> >> >
>>> >> > I seem to have solved my problem.
>>> >> >
>>> >> > In my hdfs-site.xml I had the following:
>>> >> >
>>> >> > <property>
>>> >> >   <name>dfs.name.dir</name>
>>> >> >   <value>file:///hdfs/name</value>
>>> >> > </property>
>>> >> > <property>
>>> >> >   <name>dfs.data.dir</name>
>>> >> >   <value>file:///hdfs/data</value>
>>> >> > </property>
>>> >> >
>>> >> > The above worked in version 0.21.0, but apparently not in 1.0.
>>> >> >
>>> >> > I changed them to /hdfs/name and /hdfs/data respectively (dropping
>>> >> > the file:// prefix) and, well, at least my name node is running.
>>> >> >
>>> >> >
>>> >> > On 2 February 2012 09:48, Geoffry Roberts <geoffry.robe...@gmail.com> wrote:
>>> >> >>
>>> >> >> Thanks for the quick response.
>>> >> >>
>>> >> >> Here's a snippet from my hdfs-site.xml file.
>>> >> >>
>>> >> >> <property>
>>> >> >>     <name>dfs.http.address</name>
>>> >> >>     <value>qq000:50070</value>
>>> >> >> </property>
>>> >> >> qq000 is my name node. Is this correct?
>>> >> >>
>>> >> >> I have also noticed that my name node is crashing.  It says my
>>> >> >> HDFS is in an inconsistent state. I guess I'll have to (shudder)
>>> >> >> rebuild it.
>>> >> >>
>>> >> >> The complete contents of hdfs-site.xml are below.
>>> >> >>
>>> >> >> <configuration>
>>> >> >> <property>
>>> >> >>   <name>dfs.replication</name>
>>> >> >>   <value>3</value>
>>> >> >>   <description>Default block replication.
>>> >> >>   The actual number of replications can be specified when the file
>>> >> >>   is created. The default is used if replication is not specified
>>> >> >>   at create time.
>>> >> >>   </description>
>>> >> >> </property>
>>> >> >> <property>
>>> >> >>   <name>dfs.name.dir</name>
>>> >> >>   <value>file:///hdfs/name</value>
>>> >> >> </property>
>>> >> >> <property>
>>> >> >>   <name>dfs.data.dir</name>
>>> >> >>   <value>file:///hdfs/data</value>
>>> >> >> </property>
>>> >> >> <property>
>>> >> >>   <name>dfs.hosts</name>
>>> >> >>   <value>includes</value>
>>> >> >>   <final>true</final>
>>> >> >> </property>
>>> >> >> <property>
>>> >> >>   <name>dfs.hosts.exclude</name>
>>> >> >>   <value>excludes</value>
>>> >> >>   <final>true</final>
>>> >> >> </property>
>>> >> >>
>>> >> >> <property>
>>> >> >>   <name>dfs.webhdfs.enabled</name>
>>> >> >>   <value>true</value>
>>> >> >> </property>
>>> >> >> <property>
>>> >> >>     <name>dfs.http.address</name>
>>> >> >>     <value>qq000:50070</value>
>>> >> >>     <description>The address and base port on which the dfs
>>> >> >>        namenode web UI listens.
>>> >> >>     </description>
>>> >> >>     <final>true</final>
>>> >> >> </property>
>>> >> >> </configuration>
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> On 2 February 2012 09:30, Harsh J <ha...@cloudera.com> wrote:
>>> >> >>>
>>> >> >>> Geoffry,
>>> >> >>>
>>> >> >>> What is your "dfs.http.address" set to? What's your NameNode's
>>> >> >>> HTTP address, basically? Have you tried that one?
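>>> >> >>>
>>> >> >>> If WebHDFS is enabled and the NameNode is up, a plain HTTP GET
>>> >> >>> against its REST endpoint should return a JSON listing.  Here is a
>>> >> >>> minimal sketch ("namenode-host" is a placeholder; substitute the
>>> >> >>> host from your dfs.http.address):
>>> >> >>>
>>> >> >>> import java.io.BufferedReader;
>>> >> >>> import java.io.InputStreamReader;
>>> >> >>> import java.net.HttpURLConnection;
>>> >> >>> import java.net.URL;
>>> >> >>>
>>> >> >>> public class WebHdfsCheck {
>>> >> >>>   public static void main(String[] args) throws Exception {
>>> >> >>>     // LISTSTATUS on the root path; a 200 with JSON means WebHDFS is answering.
>>> >> >>>     URL url = new URL("http://namenode-host:50070/webhdfs/v1/?op=LISTSTATUS");
>>> >> >>>     HttpURLConnection conn = (HttpURLConnection) url.openConnection();
>>> >> >>>     System.out.println("HTTP " + conn.getResponseCode());
>>> >> >>>     BufferedReader in = new BufferedReader(
>>> >> >>>         new InputStreamReader(conn.getInputStream()));
>>> >> >>>     String line;
>>> >> >>>     while ((line = in.readLine()) != null) {
>>> >> >>>       System.out.println(line); // the FileStatuses JSON
>>> >> >>>     }
>>> >> >>>     in.close();
>>> >> >>>   }
>>> >> >>> }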
>>> >> >>>
>>> >> >>> On Thu, Feb 2, 2012 at 10:54 PM, Geoffry Roberts
>>> >> >>> <geoffry.robe...@gmail.com> wrote:
>>> >> >>> > All,
>>> >> >>> >
>>> >> >>> > I have been using hadoop 0.21.0 for some time now.  This past
>>> >> >>> > Monday I installed hadoop 1.0.
>>> >> >>> >
>>> >> >>> > I've been reading about WebHDFS and it sounds like something I
>>> >> >>> > could use, but I can't seem to get it working.  I could
>>> >> >>> > definitely use some guidance; I can find little in the way of
>>> >> >>> > documentation.
>>> >> >>> >
>>> >> >>> > I added the following property to hdfs-site.xml and bounced
>>> >> >>> > hadoop, but nothing seems to be listening on port 50070, which,
>>> >> >>> > as far as I can glean, is where WebHDFS should be listening.
>>> >> >>> >
>>> >> >>> > <property>
>>> >> >>> >     <name>dfs.webhdfs.enabled</name>
>>> >> >>> >     <value>true</value>
>>> >> >>> > </property>
>>> >> >>> >
>>> >> >>> > Am I on the correct port? Is there anything else?
>>> >> >>> >
>>> >> >>> > Thanks
>>> >> >>> >
>>> >> >>> > --
>>> >> >>> > Geoffry Roberts
>>> >> >>> >
>>> >> >>>
>>> >> >>>
>>> >> >>>
>>> >> >>> --
>>> >> >>> Harsh J
>>> >> >>> Customer Ops. Engineer
>>> >> >>> Cloudera | http://tiny.cloudera.com/about
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> --
>>> >> >> Geoffry Roberts
>>> >> >>
>>> >> >
>>> >> >
>>> >> >
>>> >> > --
>>> >> > Geoffry Roberts
>>> >> >
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Harsh J
>>> >> Customer Ops. Engineer
>>> >> Cloudera | http://tiny.cloudera.com/about
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > Geoffry Roberts
>>> >
>>>
>>>
>>>
>>> --
>>> Harsh J
>>> Customer Ops. Engineer
>>> Cloudera | http://tiny.cloudera.com/about
>>>
>>
>>
>>
>> --
>> Geoffry Roberts
>>
>>
>


-- 
Geoffry Roberts
