In a Hadoop environment, reverse DNS is required. For example, the NameNode
uses reverse DNS to verify that a registering DataNode is the host it claims to be.

From the dig output, it looks like the DNS server is not configured properly on
this machine: the query could not reach any DNS server at all.
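To confirm, you can run a reverse (PTR) lookup directly and check which resolvers
the box is using. A minimal check, assuming the node's IP is 9.181.64.230 as in
your log (adjust for your host; these commands are only an illustration, not a
required procedure):

dig -x 9.181.64.230        # reverse (PTR) lookup, which is what the failing code path needs
nslookup 9.181.64.230      # same check via nslookup
cat /etc/resolv.conf       # confirm a reachable DNS server is configured

Until the reverse lookup returns the expected host name, HBase's
TableInputFormatBase will likely keep failing with the '230.64.181.9.in-addr.arpa'
error you posted.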


On Tue, Jul 1, 2014 at 10:18 AM, lulynn_2008 <lulynn_2...@163.com> wrote:

> Hi Gordon,
> Thanks for your reply. Here is the dig result; this machine cannot
> resolve this IP from the command line. Could you share why this requirement
> is needed? Is it needed for an hbase/hadoop environment? Thanks
>
> [root@hostname ~]# dig 9.181.64.230
>
> ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> 9.181.64.230
> ;; global options: +cmd
> ;; connection timed out; no servers could be reached
>
> At 2014-06-30 04:29:16, "Gordon Wang" <gw...@gopivotal.com> wrote:
> >Make sure you can resolve 9.181.64.230 from the command line. Use
> >
> >dig 9.181.64.230
> >
> >to check.
> >
> >
> >On Mon, Jun 30, 2014 at 4:14 PM, lulynn_2008 <lulynn_2...@163.com> wrote:
> >
> >> Hi All,
> >> Following are the test case and the error. Do you have any suggestions
> >> or comments? Thanks
> >>
> >> Test case:
> >>
> >> create hbase table in hbase shell:
> >> create 'employees', 'SN', 'department', 'address'
> >> put 'employees', 'Hong', 'address:country', 'China'
> >>
> >>
> >> load and dump the table in pig grunt:
> >>
> >> A = load 'hbase://employees' using
> >> org.apache.pig.backend.hadoop.hbase.HBaseStorage( 'address:country',
> >> '-loadKey true') as (SN:bytearray,country:bytearray);
> >> B = filter A by SN == 'Hong';
> >> dump B;
> >>
> >> Error:
> >> 2014-06-30 15:23:50,072 INFO org.apache.zookeeper.ZooKeeper: Client
> >> environment:java.io.tmpdir=/tmp
> >> 2014-06-30 15:23:50,072 INFO org.apache.zookeeper.ZooKeeper: Client
> >> environment:java.compiler=j9jit24
> >> 2014-06-30 15:23:50,072 INFO org.apache.zookeeper.ZooKeeper: Client
> >> environment:os.name=Linux
> >> 2014-06-30 15:23:50,072 INFO org.apache.zookeeper.ZooKeeper: Client
> >> environment:os.arch=amd64
> >> 2014-06-30 15:23:50,072 INFO org.apache.zookeeper.ZooKeeper: Client
> >> environment:os.version=2.6.32-358.el6.x86_64
> >> 2014-06-30 15:23:50,072 INFO org.apache.zookeeper.ZooKeeper: Client
> >> environment:user.name=pig
> >> 2014-06-30 15:23:50,072 INFO org.apache.zookeeper.ZooKeeper: Client
> >> environment:user.home=/home/pig
> >> 2014-06-30 15:23:50,072 INFO org.apache.zookeeper.ZooKeeper: Client
> >> environment:user.dir=/pig/bin
> >> 2014-06-30 15:23:50,073 INFO org.apache.zookeeper.ZooKeeper: Initiating
> >> client connection, connectString=hostname:2181 sessionTimeout=90000
> >> watcher=hconnection-0x363b363b
> >> 2014-06-30 15:23:50,083 INFO
> >> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Process
> >> identifier=hconnection-0x363b363b connecting to ZooKeeper
> >> ensemble=hostname:2181
> >> 2014-06-30 15:23:50,086 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server hostname/9.181.64.230:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
> >> 2014-06-30 15:23:50,087 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to hostname/9.181.64.230:2181, initiating session
> >> 2014-06-30 15:23:50,097 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server hostname/9.181.64.230:2181, sessionid = 0x146eb9f0ee5005c, negotiated timeout = 40000
> >> 2014-06-30 15:23:50,361 ERROR org.apache.hadoop.hbase.mapreduce.TableInputFormatBase: Cannot resolve the host name for hostname/9.181.64.230 because of javax.naming.CommunicationException: DNS error [Root exception is java.net.PortUnreachableException: ICMP Port Unreachable]; Remaining name: '230.64.181.9.in-addr.arpa'
> >> 2014-06-30 15:24:35,889 WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher: Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
> >> 2014-06-30 15:24:35,899 ERROR org.apache.pig.tools.pigstats.SimplePigStats: ERROR: java.io.IOException: Cannot create a record reader because of a previous error. Please look at the previous logs lines from the task's full log for more details.
> >> 2014-06-30 15:24:35,899 ERROR org.apache.pig.tools.pigstats.PigStatsUtil: 1 map reduce job(s) failed!
> >> 2014-06-30 15:24:35,931 ERROR org.apache.pig.tools.grunt.Grunt: ERROR 1066: Unable to open iterator for alias A. Backend error : java.io.IOException: Cannot create a record reader because of a previous error. Please look at the previous logs lines from the task's full log for more details.
> >
> >
> >--
> >Regards
> >Gordon Wang
>



-- 
Regards
Gordon Wang
