No, I didn't run it with HCatalog 0.5.
Besides, I found that if I put the hadoop jars at the beginning of the
CLASSPATH, it throws an NPE; whereas if I move the hadoop jars to the
end of the CLASSPATH, it runs successfully.
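For reference, the two orderings look roughly like this (the jar paths below are placeholders, not the exact ones from my environment):

```shell
# Hypothetical jar locations -- adjust to your own install.
HADOOP_JARS="$HADOOP_HOME/hadoop-core-1.1.1.jar"
HCAT_JARS="$HCATALOG_HOME/share/hcatalog/*:$HIVE_HOME/lib/*"

# Ordering that throws the NPE: hadoop jars first
export CLASSPATH="$HADOOP_JARS:$HCAT_JARS"

# Ordering that runs successfully: hadoop jars last
export CLASSPATH="$HCAT_JARS:$HADOOP_JARS"
```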

While debugging, I found that
the <key, value> output differs in the last record: the NPE is caused
by a NULL value when *itr.hasNext()* is called inside Assert.assertFalse()
- without NPE
<0, >
<0, Row #: 11>
<11, Row #: 22>
...
<1256, Row #: 9999>
<1269, Row #: 100100>
*<1269, Row #: 100100>*

- with NPE
<0, >
<0, Row #: 11>
<11, Row #: 22>
...
<1256, Row #: 9999>
<1269, Row #: 100100>
<1284, >   *------ NPE error because the value is NULL here*
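The pattern is that *hasNext()* actually pulls the next record from the underlying input stream, so a null stream surfaces as an NPE inside the *hasNext()* call itself. Here is a minimal standalone sketch of that failure mode (class and field names are hypothetical, not HCatalog's):

```java
import java.io.DataInputStream;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Sketch of the failure mode: hasNext() reads ahead on an underlying
// stream, so a null stream throws an NPE from inside hasNext() --
// exactly where Assert.assertFalse(itr.hasNext()) invokes it.
public class NpeInHasNextSketch {
    static class RecordItr implements Iterator<String> {
        private final DataInputStream in; // null when the split was never opened

        RecordItr(DataInputStream in) {
            this.in = in;
        }

        public boolean hasNext() {
            try {
                return in.available() > 0; // NPE here if 'in' is null
            } catch (java.io.IOException e) {
                return false;
            }
        }

        public String next() {
            throw new NoSuchElementException();
        }
    }

    public static void main(String[] args) {
        Iterator<String> itr = new RecordItr(null);
        try {
            itr.hasNext();
            System.out.println("no NPE");
        } catch (NullPointerException e) {
            System.out.println("NPE thrown inside hasNext()");
        }
    }
}
```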

2013/2/27 Travis Crawford <[email protected]>

> Hi Bing -
>
> Do you see this same error with HCatalog 0.5.0?
>
> Thanks!
> Travis
>
>
> On Mon, Feb 25, 2013 at 7:08 AM, Bing Li <[email protected]> wrote:
> > Hi, All
> > When I ran hcatalog-0.4.0 with hadoop-1.1.1 and hive-0.9.0, (It could
> pass
> > with hadoop-1.0.3 )
> > I got the following NPE error:
> >
> >   <testcase classname="org.apache.hcatalog.data.TestReaderWriter"
> > name="test" time="6.932">
> >     <error
> > type="java.lang.NullPointerException">java.lang.NullPointerException
> >         at
> >
> org.apache.hadoop.fs.BufferedFSInputStream.getPos(BufferedFSInputStream.java:48)
> >         at
> > org.apache.hadoop.fs.FSDataInputStream.getPos(FSDataInputStream.java:41)
> >         at
> >
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.readChunk(ChecksumFileSystem.java:219)
> >         at
> >
> org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
> >         at
> > org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
> >         at
> org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
> >         at java.io.DataInputStream.read(DataInputStream.java:94)
> >         at
> org.apache.hadoop.util.LineReader.readLine(LineReader.java:134)
> >         at
> > org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:176)
> >         at
> > org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:43)
> >         at
> >
> org.apache.hcatalog.mapreduce.HCatRecordReader.nextKeyValue(HCatRecordReader.java:188)
> >         at
> >
> org.apache.hcatalog.data.transfer.impl.HCatInputFormatReader$HCatRecordItr.hasNext(HCatInputFormatReader.java:107)
> >         at
> >
> org.apache.hcatalog.data.TestReaderWriter.runsInSlave(TestReaderWriter.java:139)
> >         at
> > org.apache.hcatalog.data.TestReaderWriter.test(TestReaderWriter.java:104)
> > </error>
> >   </testcase>
> >
> > Did you meet this before?
>
