Thought it was against trunk... I did make the diff ignore whitespace changes,
so maybe there's a problem there. I'll email you another patch off-list; let's
see if we can get it to work.
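
(For context: a whitespace-insensitive diff can fail to apply when the target tree's indentation differs from the patch's context lines; GNU patch's `-l` / `--ignore-whitespace` flag relaxes that match. A minimal local sketch of the mechanism, with made-up file names rather than the actual PIG-1680 patch:)

```shell
# Build a patch whose context line is space-indented, then apply it to a
# copy of the file that uses a tab instead -- an exact match would fail.
mkdir -p /tmp/pdemo && cd /tmp/pdemo
printf 'foo\n    bar\n' > old.txt                # four-space indent
printf 'foo\n    bar\nbaz\n' > new.txt           # same file plus one new line
diff -u old.txt new.txt > add-baz.patch || true  # diff exits 1 when files differ
printf 'foo\n\tbar\n' > target.txt               # drifted copy: tab indent
patch --ignore-whitespace target.txt < add-baz.patch  # -l tolerates the blank-run mismatch
grep -c baz target.txt                           # -> 1
```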

D

On Tue, Feb 15, 2011 at 2:45 PM, Matt Davies <[email protected]> wrote:

> Dmitriy,
>
> I saw the patch posted to PIG-1680 last night. Thanks! We have been
> trying to apply it, but the patch fails to match some of the hunks.
> Are you patching against trunk or against a specific release?
>
>
> Thanks!
> -Matt
>
> On Mon, Feb 14, 2011 at 4:14 PM, Matt Davies <[email protected]> wrote:
>
> > You are welcome! We did a FILTER before the STORE, and I confirmed that
> > there weren't any nulls. Just another undocumented feature ;)
> >
> >
> > On Mon, Feb 14, 2011 at 3:03 PM, jacob <[email protected]>
> wrote:
> >
> >> Thanks for the mention. Off the top of my head, I got these sorts of
> >> errors when trying to store either null records or records with null
> >> fields. What happens if you FILTER out any null values you might have?
> >> Does the problem persist?
> >>
> >> --jacob
> >> @thedatachef
> >>
> >> On Mon, 2011-02-14 at 14:57 -0700, Matt Davies wrote:
> >> > Hey All,
> >> >
> >> > Running into a problem storing data from a pig script storing results
> >> into
> >> > HBase.
> >> >
> >> > We are getting the following error:
> >> >
> >> > java.lang.NullPointerException
> >> >       at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:126)
> >> >       at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:81)
> >> >       at org.apache.pig.backend.hadoop.hbase.HBaseStorage.putNext(HBaseStorage.java:364)
> >> >       at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:138)
> >> >       at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:97)
> >> >       at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:523)
> >> >       at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
> >> >       at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
> >> >       at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapBase.runPipeline(PigMapBase.java:238)
> >> >       at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapBase.map(PigMapBase.java:231)
> >> >       at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapBase.map(PigMapBase.java:53)
> >> >       at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> >> >       at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639)
> >> >       at org.apache.hadoop.mapred.MapTask.run(MapTask.java:315)
> >> >       at org.apache.hadoop.mapred.Child$4.run(Child.java:217)
> >> >       at java.security.AccessController.doPrivileged(Native Method)
> >> >       at javax.security.auth.Subject.doAs(Subject.java:396)
> >> >       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1063)
> >> >       at org.apache.hadoop.mapred.Child.main(Child.java:211)
> >> >
> >> >
> >> > We are using CDH3b3 and HBase 0.90.0 (from Apache direct). We've
> >> > followed thedatachef's instructions (thanks!) to get Pig 0.8.0 working
> >> > with CDH3:
> >> > http://thedatachef.blogspot.com/2011/01/apache-pig-08-with-cloudera-cdh3.html
> >> > The relevant line from the Pig script is below. We've applied the
> >> > patch to get "-noWAL" working:
> >> >
> >> > STORE links INTO 'p' USING
> >> > org.apache.pig.backend.hadoop.hbase.HBaseStorage('a:t a:t2
> >> > a:g','-noWAL');
> >> >
> >> >
> >> >
> >> > Anyone know what could be causing this problem?
> >> >
> >> >
> >> > Thanks in advance,
> >> >
> >> >
> >> > Matt
> >>
> >>
> >>
> >
>
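
(The FILTER-before-STORE check suggested in the thread can be sketched as follows. Only the STORE line comes from the thread; the LOAD schema, relation names, and field names are illustrative:)

```pig
-- Load whatever produces 'links'; this schema is hypothetical.
links = LOAD 'input' AS (key:chararray, t:chararray, t2:chararray, g:chararray);

-- Drop records with a null row key or null stored fields before they
-- reach HBaseStorage, which is where the NPE was thrown.
clean = FILTER links BY key IS NOT NULL AND t IS NOT NULL
                     AND t2 IS NOT NULL AND g IS NOT NULL;

STORE clean INTO 'p' USING
    org.apache.pig.backend.hadoop.hbase.HBaseStorage('a:t a:t2 a:g', '-noWAL');
```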
