I have a map reduce job using hbase table as input. when the job
starts, it says:
ERROR main org.apache.hadoop.hbase.mapreduce.TableInputFormatBase
Cannot resolve the host name for vc141/172.16.10.141 because of
javax.naming.CommunicationException: DNS error [Root exception is
Can you ping vc141 from this machine ?
Cheers
On Jun 25, 2014, at 1:29 AM, Li Li fancye...@gmail.com wrote:
I have a map reduce job using hbase table as input. when the job
starts, it says:
ERROR main org.apache.hadoop.hbase.mapreduce.TableInputFormatBase
Cannot resolve the host name for
yes
[hadoop@vc138 ~]$ ping vc141
PING vc141 (172.16.10.141) 56(84) bytes of data.
64 bytes from vc141 (172.16.10.141): icmp_seq=1 ttl=64 time=0.118 ms
On Wed, Jun 25, 2014 at 4:49 PM, Ted Yu yuzhih...@gmail.com wrote:
Can you ping vc141 from this machine ?
Cheers
On Jun 25, 2014, at 1:29 AM,
I have many map reduce jobs using hbase tables as input. The others are all correct.
This one is a little bit different because it uses both hdfs and hbase
as input sources.
btw, even though there are errors, the job can run successfully.
My code:
1. Hbase Table Mapper, mapper output key is Text and value
Do you use a DNS server for name resolution? Did you set up a reverse DNS zone
in your cluster? I have seen these errors before when there is no reverse
DNS setup. I believe that the TableInputFormatBase class requires reverse DNS
name resolution.
Regards
Samir
On Wed, Jun 25, 2014 at 10:57 AM, Li Li
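To see whether reverse resolution works from Java on a given node, here is a minimal JDK-only check. Note this is an approximation: TableInputFormatBase actually resolves through javax.naming, but `getCanonicalHostName()` exercises the same reverse (PTR or /etc/hosts) lookup path, and the class name below is my own, not from the thread.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ReverseDnsCheck {
    // Map an IP address back to a host name, the way a reverse DNS
    // lookup would. getCanonicalHostName() falls back to the literal
    // IP string when no PTR record or /etc/hosts entry exists.
    static String reverseLookup(String ip) throws UnknownHostException {
        InetAddress addr = InetAddress.getByName(ip);
        return addr.getCanonicalHostName();
    }

    public static void main(String[] args) throws Exception {
        // 127.0.0.1 should resolve via /etc/hosts on most machines;
        // try a cluster IP such as 172.16.10.141 in a real check.
        System.out.println("127.0.0.1 resolves back to: "
                + reverseLookup("127.0.0.1"));
    }
}
```

If this prints the bare IP instead of a host name, the node has no working reverse resolution, which matches the error in the original post.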
there is no DNS server for me. you mean finding the name by ip? if there is
no DNS server, will it work correctly?
On Wed, Jun 25, 2014 at 5:08 PM, Samir Ahmic ahmic.sa...@gmail.com wrote:
Do you use a DNS server for name resolution? Did you set up a reverse DNS zone
in your cluster? I have seen these errors
Yes. From IP address to name, that is a reverse DNS lookup. Yes, mapreduce will
work, but you will see that error in the logs.
Regards
On Jun 25, 2014 11:12 AM, Li Li fancye...@gmail.com wrote:
there is no DNS server for me. you mean finding the name by ip? if there is
no DNS server, will it work correctly?
On
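Without a DNS server, a common workaround (my suggestion, not something stated in this thread) is to give every node an identical /etc/hosts file so the reverse lookup resolves locally. The vc138 address below is a guess; only vc141's address appears in the thread:

```
# /etc/hosts, identical on every node in the cluster
172.16.10.138  vc138   # address assumed for illustration
172.16.10.141  vc141
```

With forward and reverse entries present on each node, the TableInputFormatBase warning should go away without a DNS server.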
Please see https://issues.apache.org/jira/browse/HBASE-10906
Cheers
On Jun 25, 2014, at 2:12 AM, Li Li fancye...@gmail.com wrote:
there is no DNS server for me. you mean finding the name by ip? if there is
no DNS server, will it work correctly?
On Wed, Jun 25, 2014 at 5:08 PM, Samir Ahmic
thanks, I got it.
On Wed, Jun 25, 2014 at 5:25 PM, Ted Yu yuzhih...@gmail.com wrote:
Please see https://issues.apache.org/jira/browse/HBASE-10906
Cheers
On Jun 25, 2014, at 2:12 AM, Li Li fancye...@gmail.com wrote:
there is no DNS server for me. you mean finding the name by ip? if there is no
DNS
Hi all,
we have been experiencing the same problem with 2 of our clusters. We
are currently using HDP 2.1 that comes with HBase 0.98.
The problem manifested itself as huge differences (hundreds of GB)
between the output of df and du on the hdfs data directories.
Eventually, other systems
Hello,
I have keys in hbase of the form `abc:xyz` and I would like to write/extend
a custom InputFormat/TableInputFormat class with a custom RecordReader which
would be able to merge all rows with similarly prefixed rowkeys, e.g. `abc:*`,
into one single record. I have no clue how to write getSplits for
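The merging step itself can be separated from the InputFormat plumbing. Below is a plain-JDK sketch of only that merge logic, as one RecordReader might apply it to the sorted keys of a split; the class name, the `:` delimiter handling, and the method are my own illustration, not an HBase API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PrefixMerger {
    // Group sorted rowkeys of the form "prefix:suffix" so that all
    // rows sharing a prefix (e.g. "abc:*") collapse into one logical
    // record. Keys without a ':' form their own single-key group.
    static Map<String, List<String>> mergeByPrefix(List<String> rowKeys) {
        Map<String, List<String>> merged = new LinkedHashMap<>();
        for (String key : rowKeys) {
            int sep = key.indexOf(':');
            String prefix = sep >= 0 ? key.substring(0, sep) : key;
            merged.computeIfAbsent(prefix, p -> new ArrayList<>()).add(key);
        }
        return merged;
    }

    public static void main(String[] args) {
        List<String> keys = Arrays.asList("abc:1", "abc:2", "xyz:1");
        System.out.println(mergeByPrefix(keys));
        // prints {abc=[abc:1, abc:2], xyz=[xyz:1]}
    }
}
```

For getSplits, the remaining (harder) problem is making sure a prefix group never straddles a split boundary, e.g. by aligning split points to prefix changes.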
Actually, it is not a server log. It is my client log. There is no stack
trace as such because no exception was thrown. Where the log ends, the
clients froze.
What do you mean by config? The HBaseConfiguration object in the client or the
config file of the HBase server? Please find attached my HBase server
Hi,
With HBase 0.96, the HConnection.getTable method used to throw an exception in
case the table did not exist. Based on this exception, I was creating tables
in HBase as required.
With HBase 0.98.3 I'm not getting the exception. Is this the expected behavior
or am I missing something?
Thanks,
Anand
It's a change in 0.98, coming from HBASE-10080.
On Wed, Jun 25, 2014 at 3:18 PM, Anand Nalya anand.na...@gmail.com wrote:
Hi,
With HBase 0.96, the HConnection.getTable method used to throw an exception in
case the table did not exist. Based on this exception, I was creating tables
in HBase as
from this
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HConnection.html
it still shows that it throws IOException
On Wed, Jun 25, 2014 at 6:48 PM, Anand Nalya anand.na...@gmail.com wrote:
Hi,
With HBase 0.96, HConnection.getTable method used to throw an exception in
case the
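Given the HBASE-10080 change above, one way to keep the old create-if-missing behavior is to check existence explicitly rather than relying on getTable() to throw. This is an uncompiled sketch against the 0.98 client API, assuming an existing `Configuration conf` and `HConnection connection`, with table and family names as placeholders:

```java
// Sketch only: explicit existence check instead of catching an
// exception from getTable() (which 0.98 no longer throws eagerly).
HBaseAdmin admin = new HBaseAdmin(conf);
TableName name = TableName.valueOf("mytable");      // placeholder
if (!admin.tableExists(name)) {
    HTableDescriptor desc = new HTableDescriptor(name);
    desc.addFamily(new HColumnDescriptor("cf"));    // placeholder family
    admin.createTable(desc);
}
HTableInterface table = connection.getTable(name);
```

Note that in 0.98, getTable() defers the existence check, so a missing table surfaces only on the first actual operation against it.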
Before I was able to acquire a stack trace, I restarted the master.
However, the issue has just happened again and I was able to get a stack
trace:
http://pastebin.com/Mz5c6AML
(The pastebin is set to never expire, so anyone viewing an archived version
of this message should still be able to see
Looks like the master was stuck in a FileSystem.listStatus() call.
I noticed the following - did this show up if you take jstack one more
time?
1. at
org.apache.hadoop.hbase.master.SplitLogManager.waitForSplittingCompletion(SplitLogManager.java:372)
2. - locked
Yes, that stack is still there:
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at
org.apache.hadoop.hbase.master.SplitLogManager.waitForSplittingCompletion(SplitLogManager.java:372)
- locked 0xbfa0a068 (a
Apparently those file descriptors were stored by the HDFS
ShortCircuit cache.
As far as I understand, this is an issue of the HDFS short-circuit-read
implementation, not HBase. HBase uses the HDFS API to access
files. Did you ask this question on the hdfs dev list? This looks like a very
serious bug.
Best
Can you look at the tail of the master log to see which WAL takes a long time
to split?
Check the Namenode log if needed.
Cheers
On Wed, Jun 25, 2014 at 12:09 PM, Tom Brown tombrow...@gmail.com wrote:
Yes, that stack is still there:
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at
Hello,
I was wondering if there is a defined client configuration for the hbase client
for connecting to a secure hbase cluster? With my client attempting to talk
to a secure cluster, I keep getting the following in my master logs:
WARN org.apache.hadoop.hbase.ipc.SecureServer: Incorrect header or
-- Forwarded message --
From: Vladimir Rodionov vrodio...@carrieriq.com
Date: Wed, Jun 25, 2014 at 12:03 PM
Subject: RE: Disk space leak when using HBase and HDFS ShortCircuit
To: user@hbase.apache.org user@hbase.apache.org
Apparently those file descriptors were
Li Li,
Were you able to figure out the cause of this? I am seeing something
similar.
On Wed, May 7, 2014 at 10:50 PM, Li Li fancye...@gmail.com wrote:
today I upgraded hbase 0.94.11 to 0.96.2-hadoop1. I have not changed
any client code except replacing the 0.94.11 client jar with 0.96.2's
When
Agreed, this seems like an hdfs issue, unless hbase itself does not close
the hfiles properly. But judging from the fact that you were able to
circumvent the problem by reducing the cache size, that does seem
unlikely.
I don't think the local block reader will be notified when a file/block
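The cache-size workaround mentioned above would be applied through the HDFS client settings that HBase region servers read. These keys exist in Hadoop 2.x; the values below are illustrative (the defaults), not tuning recommendations:

```xml
<!-- hdfs-site.xml as seen by the HBase region servers -->
<property>
  <name>dfs.client.read.shortcircuit.streams.cache.size</name>
  <value>256</value> <!-- max cached short-circuit file descriptors -->
</property>
<property>
  <name>dfs.client.read.shortcircuit.streams.cache.expiry.ms</name>
  <value>300000</value> <!-- drop cached descriptors after 5 minutes -->
</property>
```

Lowering the cache size (or the expiry) releases descriptors for deleted blocks sooner, shrinking the df/du gap at the cost of re-opening blocks more often.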
Hi Swarnim,
It seems like you have a mismatch between the client and server hbase
configurations. Do you have the configurations of both the secure and non-secure
clusters in the classpath of your application? If yes, please make sure that at
a given point of time only one set of configuration files are present