This is the configuration I used until now. It works, but gives the
mentioned error (although the procedure seems to return correct
results anyway).
I think /etc/hosts should also contain the line
127.0.0.1 hostname

but in that case Hadoop does not start.
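
To make the conflict concrete, the combined file I am describing would look roughly like this (hostname stands in for the machine's real name, and 10.220.55.41 is the address from the error below; the comments are mine):

```
127.0.0.1       localhost
127.0.0.1       hostname     # line suggested by the HBase book; with it, Hadoop does not start here
10.220.55.41    hostname     # static address the cluster nodes use
```

With the second line present the DNS error disappears but the cluster will not start; without it, Hadoop starts but the reverse-lookup error appears.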

Alberto

On 14 September 2012 18:19, Shumin Wu <[email protected]> wrote:
> Would that work for you?
>
> 127.0.0.1        localhost
> 10.220.55.41  hostname
>
> -Shumin
>
> On Fri, Sep 14, 2012 at 6:18 AM, Alberto Cordioli <
> [email protected]> wrote:
>
>> Hi,
>>
>> I've successfully installed Apache HBase on a cluster with Hadoop.
>> It works fine, but when I try to use Pig to load some data from an
>> HBase table I get this error:
>>
>> ERROR org.apache.hadoop.hbase.mapreduce.TableInputFormatBase - Cannot
>> resolve the host name for /10.220.55.41 because of
>> javax.naming.OperationNotSupportedException: DNS service refused
>> [response code 5]; remaining name '41.55.220.10.in-addr.arpa'
>>
>> Pig returns the correct results in any case (actually I don't know
>> how), but I'd like to solve this issue.
>>
>> I discovered that this error is due to a mistake in /etc/hosts
>> configuration file. In fact, as reported in the documentation, I
>> should add the line
>> 127.0.0.1    hostname
>> (http://hbase.apache.org/book.html#os).
>>
>> But if I add this entry my Hadoop cluster does not start, since the
>> datanode is bound to the local address instead of to the hostname/IP
>> address. For this reason many tutorials suggest removing
>> such an entry (e.g.
>>
>> http://stackoverflow.com/questions/8872807/hadoop-datanodes-cannot-find-namenode
>> ).
>>
>> Basically if I add that line Hadoop won't work, but if I keep the file
>> without the loopback address I get the above error.
>> What can I do? Which is the right configuration?
>>
>>
>> Thanks,
>> Alberto
>>
>>
>>
>>
>> --
>> Alberto Cordioli
>>



-- 
Alberto Cordioli
