Hi St.Ack and J-D,
Thanks for looking into this.
It can definitely be a configuration problem, but I seriously doubt it
is a network or infrastructure problem. It's our own operated
infrastructure (not a cloud) and we have a lot of other services
running on it without any problem. Note that
Hello Stanley,
On Mon, Apr 11, 2011 at 4:56 PM, stanley@emc.com wrote:
hbase.cluster.distributed
default: false
The mode the cluster will be in. Possible values are false (standalone and
pseudo-distributed setups with managed ZooKeeper) and true (fully-distributed
with unmanaged ZooKeeper).
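For reference, that setting lives in hbase-site.xml; a minimal fragment (illustrative only):

```xml
<!-- hbase-site.xml: false = standalone/pseudo-distributed with managed
     ZooKeeper; true = daemons in separate processes, unmanaged ZooKeeper -->
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
```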
Hi,
Is it possible that some table cannot cover the whole key space? What I saw
was like:
hbase(main):006:0> put 'table1', 'abc', 'cfEStore:dasd', '123'
0 row(s) in 0.3030 seconds
hbase(main):007:0> put 'table1', 'LCgwzrx2XTFkB2Ymz9HeJWPY0Ok=',
'cfEStore:dasd', '123'
ERROR:
Or does this mean I've corrupted the .META. data? BTW, any way to recover
the .META.?
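For what it's worth, the coverage rule behind the question can be sketched like this: a region [startKey, endKey) serves a row key k when startKey <= k < endKey under byte-lexicographic order, with an empty end key meaning the end of the key space, so a key can only go unserved if there is a hole between consecutive regions in .META.. The class and keys below are invented for illustration; this is not HBase's actual client code:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of how a row key maps to a region.
class RegionCoverage {

    // Unsigned byte-lexicographic comparison, the order HBase uses for row keys.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // A region [startKey, endKey) covers k if startKey <= k < endKey;
    // an empty endKey means "up to the end of the key space".
    static boolean covers(String startKey, String endKey, String key) {
        byte[] s = startKey.getBytes(StandardCharsets.UTF_8);
        byte[] e = endKey.getBytes(StandardCharsets.UTF_8);
        byte[] k = key.getBytes(StandardCharsets.UTF_8);
        return compare(s, k) <= 0 && (e.length == 0 || compare(k, e) < 0);
    }

    // Index of the region covering key, or -1 if the key falls into a hole.
    static int locate(String[][] regions, String key) {
        for (int i = 0; i < regions.length; i++) {
            if (covers(regions[i][0], regions[i][1], key)) return i;
        }
        return -1;
    }
}
```

With regions {["", "m"), ["n", "")} a key like "mzz" lands between them and locate returns -1, which is exactly the "cannot cover the whole key space" symptom.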
On Mon, Apr 11, 2011 at 7:50 PM, 茅旭峰 m9s...@gmail.com wrote:
Hi,
Is it possible that some table cannot cover the whole key space? What I saw
was like:
hbase(main):006:0> put 'table1', 'abc',
Lior and Joe:
Sorry for the mvn lag. The mvn deploy system is smarter than me. The
deploy requires four full builds of HBase, running all tests. I
inevitably get distracted and forget the process, or else I am demented
and answer one of the questions just off, and then I have to start
over. I'm
It's possible with certain bugs; which HBase version are you using?
J-D
On Mon, Apr 11, 2011 at 4:50 AM, 茅旭峰 m9s...@gmail.com wrote:
Hi,
Is it possible that some table cannot cover the whole key space? What I saw
was like:
hbase(main):006:0> put 'table1', 'abc', 'cfEStore:dasd', '123'
I think it changed somewhere between 0.20 and 0.90, as I remember being
able to use a separate ZK with a standalone HBase. So for the moment
you can just set hbase.cluster.distributed to true, which will spawn
the master and the region server as 2 processes, but it will still
work without HDFS.
Same for the issue where the master isn't shutting down, we should be
shutting down region servers once checkFilesystem is called... at
least until we can find a way to ride over NN restarts.
gets/puts are probably working for files that are already opened since
HBase doesn't have to talk to the
On Sun, Apr 10, 2011 at 11:30 PM, Eran Kutner e...@gigya.com wrote:
Hi St.Ack and J-D,
Thanks for looking into this.
It can definitely be a configuration problem, but I seriously doubt it
is a network or infrastructure problem. It's our own operated
infrastructure (not a cloud) and we have
Hi all,
I had an issue recently where a scan job I frequently run caught ConnectionLoss
and subsequently failed to recover.
The stack trace looks like this:
11/04/08 12:20:04 INFO zookeeper.ZooKeeper: Session: 0x12f2497b00d03d8 closed
11/04/08 12:20:04 WARN
We are using hadoop-CDH3B4 and hbase-0.90.1-CDH3B4. I'll check the
issue further, but my understanding is the meta info and the root
region are saved by zookeeper, right? Do I need to check them there?
m9suns
On 2011-4-12, at 0:40, Jean-Daniel Cryans jdcry...@apache.org wrote:
It's possible under some
I found the code still exists in this code base for the old mapred interfaces
src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java
I'll adapt it for my needs. Thanks!
Avery
On Apr 9, 2011, at 9:55 AM, Jean-Daniel Cryans wrote:
It's weird, I thought we already did something
I'm cleaning this up in this jira
https://issues.apache.org/jira/browse/HBASE-3755
But it's a failure case I haven't seen before, really interesting.
There's an HTable that's created in the guts of HCM that will throw a
ZookeeperConnectionException, but it will bubble up as an IOE. I'll try
to
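Until that's fixed, a caller can at least recognize the wrapped ZK failure by walking the cause chain of the IOException; a rough sketch, where ZkConnectionLossException is a stand-in class for illustration, not the real ZooKeeperConnectionException:

```java
import java.io.IOException;

// Sketch: recognize a connection-loss error that bubbled up wrapped in an
// IOException. The exception type here is a placeholder, not HBase's.
class CauseWalk {

    static class ZkConnectionLossException extends Exception {
        ZkConnectionLossException(String msg) { super(msg); }
    }

    // True if any link in the cause chain is a ZkConnectionLossException.
    static boolean isConnectionLoss(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (c instanceof ZkConnectionLossException) return true;
        }
        return false;
    }
}
```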
In the HBase pom.xml, the Hadoop branch is 0.20. Will HBase work with
the Hadoop 0.20.3 append branch?
We use JProfiler and connect to the remote VM via SSH tunnel. (Our testing is
done up in EC2.)
- Andy
From: Peter Haidinyak phaidin...@local.com
Subject: RE: cpu profiling
To: user@hbase.apache.org user@hbase.apache.org
Date: Monday, April 11, 2011, 8:51 AM
I've been using JProfiler
Alright so I was able to get the logs from Eran, the HDFS errors are a
red herring, what followed in the region server log that is really
important is:
2011-04-10 10:14:27,278 INFO org.apache.zookeeper.ClientCnxn: Client
session timed out, have not heard from server in 144490ms for
sessionid
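A 144-second silence usually means the JVM was paused (GC, swapping) longer than the ZK session timeout. If the pauses can't be eliminated, the timeout can be raised in hbase-site.xml; a sketch (the value is illustrative, and the ZK server's own maxSessionTimeout must allow it):

```xml
<!-- hbase-site.xml: give region servers more headroom before ZooKeeper
     expires their session (milliseconds; example value only) -->
<property>
  <name>zookeeper.session.timeout</name>
  <value>180000</value>
</property>
```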
Thanks J-D. I'll keep an eye on the Jira.
-Original Message-
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] On Behalf Of Jean-
Daniel Cryans
Sent: Monday, April 11, 2011 11:52
To: user@hbase.apache.org
Subject: Re: Catching ZK ConnectionLoss with HTable
I'm cleaning this
What does the log look like when you start hbase without starting your
own zookeeper? The "Couldn't start ZK at requested address" message
means that it does fall into that part of the code, but something must
be blocking it from starting... The log should tell you.
J-D
On Mon, Apr 11, 2011 at
0.20 is the name of the branch; if you generate a tar from it you'll
see that it's called 0.20.3-SNAPSHOT.
J-D
On Mon, Apr 11, 2011 at 2:16 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Well, just the difference in versions, the one in HBase is listed as
0.20 whereas the latest is
We're new to hbase, but somewhat familiar with the core concepts associated
with it. We use mysql now, but have also used cassandra for portions of our
code. We feel that hbase is a better fit because of the tight integration
with mapreduce and the proven stability of the underlying hadoop
This is basically what I do only I use a Java Client to aggregate and place the
data into HBase. I can process a log with a million rows in a little over 13
seconds. To write the data to HBase takes around 40 seconds. Then we hit HBase
via a thin client, a Spring WS. Seems to work pretty well.
Head of branch-0.20-append is 0.20.3-SNAPSHOT
(http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/build.xml)
- Andy
From: Jason Rutherglen jason.rutherg...@gmail.com
Subject: Re: Hadoop 0.20.3 Append branch?
To: apurt...@apache.org, hbase-u...@hadoop.apache.org
Date:
I was confused by the reference in HBase's pom.xml to the append
jar; I neglected to mention that part.
Thanks for the assistance.
On Mon, Apr 11, 2011 at 4:56 PM, Andrew Purtell apurt...@apache.org wrote:
Head of branch-0.20-append is 0.20.3-SNAPSHOT
I thought a lot more about this issue and it could be a bigger
undertaking than I thought: basically any HTable operation can throw
ZK-related errors, and I think they should be considered fatal.
In the meantime HBase could improve the situation a bit. You say it
was spinning; do you know
Is it too many regions? Is the memory enough?
HBase-0.20.6
2011-04-12 00:16:31,844 FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: OutOfMemoryError,
aborting.
java.lang.OutOfMemoryError: Java heap space
at java.io.BufferedInputStream.<init>(BufferedInputStream.java:178)
It's really a lot yes, but it could also be weird configurations or
too big values.
J-D
On Mon, Apr 11, 2011 at 5:35 PM, 陈加俊 cjjvict...@gmail.com wrote:
Is it too many regions? Is the memory enough?
HBase-0.20.6
2011-04-12 00:16:31,844 FATAL
Re: maxHeap=3991
Seems like an awful lot of data to put in a 4gb heap.
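If the heap really is the limit, it's set in conf/hbase-env.sh; a sketch (the figure is just an example, not a sizing recommendation):

```shell
# conf/hbase-env.sh: maximum heap for HBase daemons, in MB
export HBASE_HEAPSIZE=8000
```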
-Original Message-
From: 陈加俊 [mailto:cjjvict...@gmail.com]
Sent: Monday, April 11, 2011 8:35 PM
To: hbase-u...@hadoop.apache.org
Subject: too many regions cause OME ?
Is it too many regions? Is the memory
Can I limit the number of regions on one RegionServer?
On Tue, Apr 12, 2011 at 8:37 AM, Jean-Daniel Cryans jdcry...@apache.org wrote:
It's really a lot yes, but it could also be weird configurations or
too big values.
J-D
On Mon, Apr 11, 2011 at 5:35 PM, 陈加俊 cjjvict...@gmail.com wrote:
And where will they go? The issue isn't the number of regions per se,
it's the amount of data being served by that region server. Also I
still don't know if that's really your issue or it's a configuration
issue (which I have yet to see).
J-D
On Mon, Apr 11, 2011 at 5:41 PM, 陈加俊
There is one table with 1.4T*3 (replication) of data.
On Tue, Apr 12, 2011 at 8:38 AM, Doug Meil doug.m...@explorysmedical.com wrote:
Re: maxHeap=3991
Seems like an awful lot of data to put in a 4gb heap.
-Original Message-
From: 陈加俊 [mailto:cjjvict...@gmail.com]
Sent: Monday,
Ok, that looks fine. Did the region server die under heavy load by
any chance? Or was it big scans? Or just normal load?
J-D
On Mon, Apr 11, 2011 at 5:50 PM, 陈加俊 cjjvict...@gmail.com wrote:
my configuration is as follows:
<property>
<name>hbase.client.write.buffer</name>
There is no big scan, just normal load. Also strange: when one RS
exited, then another RS exited, and other RSes followed like that.
On Tue, Apr 12, 2011 at 8:55 AM, Jean-Daniel Cryans jdcry...@apache.org wrote:
Ok that looks fine, did the region server die under heavy load by
any chance? Or was it big
Hi Harsh J,
I thought it was that way, but according to the description of the
hbase.cluster.distributed, for pseudo-distributed setup with managed
zookeeper, this value should be set to false.
I think there's some more difference between the real-distributed mode and the
pseudo one.
From the output of scan '.META.' I pasted before, we can see there are two
key ranges
which might cover the put key 'LCgwzrx2XTFkB2Ymz9HeJWPY0Ok='. They are
#1, 'LC3MILeAUy8HmRFgU5-ESE-9T7w=' - 'LD4jOJWFyt4m7A3KGFST6d-uj3A='
#2, 'LC_vN8JYweYYsnKaKbpOo67kUNA=' - 'some end key'
The output has less
BUILD FAILED
.../branch-0.20-append/build.xml:927: The following error
occurred while executing this line:
../branch-0.20-append/build.xml:933: exec returned: 1
Total time: 1 minute 17 seconds
+ RESULT=1
+ '[' 1 '!=' 0 ']'
+ echo 'Build Failed: 64-bit build not run'
Build Failed: 64-bit
Were they opening the same region by any chance?
On Mon, Apr 11, 2011 at 6:13 PM, 陈加俊 cjjvict...@gmail.com wrote:
There is no big scan, just normal load. Also strange: when one RS
exited, then another RS exited, and other RSes followed like that.
On Tue, Apr 12, 2011 at 8:55 AM, Jean-Daniel Cryans
On 4/11/2011 10:45 PM, Alex Luya wrote:
BUILD FAILED
.../branch-0.20-append/build.xml:927: The following error
occurred while executing this line:
../branch-0.20-append/build.xml:933: exec returned: 1
Total time: 1 minute 17 seconds
+ RESULT=1
+ '[' 1 '!=' 0 ']'
+ echo 'Build
Regards to all. I was reading the guest post
(http://www.cloudera.com/blog/2010/06/integrating-hive-and-hbase/) on
the Cloudera Blog from John Sichi (http://people.apache.org/~jvs/) about
the integration efforts from many HBase hackers from Cloudera, Facebook,
StumbleUpon, Trend Micro and
2011/4/11 茅旭峰 m9s...@gmail.com:
We are using hadoop-CDH3B4 and hbase0.90.1-CDH3B4. I'll check the
issue further, but my understanding is the meta info and the root
region are saved by zookeeper, right? Do I need to check them there?
The .META. table is like any other and stored out on the
Can you open the region again? (See shell commands for opening regions).
What does hbck say: ./bin/hbase hbck.
Add the -details flag.
It might tell you a story about an offlined region.
0.90.2 has some fixes for issues in and around here (CDH3 release, out
on the 14th, has most of them
2011/4/11 stanley@emc.com:
Hi Harsh J,
I thought it was that way, but according to the description of the
hbase.cluster.distributed, for pseudo-distributed setup with managed
zookeeper, this value should be set to false.
I think there's some more difference between the
Yes, I scan (or get or put) rows all the time.
On Tue, Apr 12, 2011 at 10:49 AM, Jean-Daniel Cryans jdcry...@apache.org wrote:
Were they opening the same region by any chance?
On Mon, Apr 11, 2011 at 6:13 PM, 陈加俊 cjjvict...@gmail.com wrote:
There is no big scan,and just norma load. Also strange
the results of ./bin/hbase hbck show lots of inconsistency errors, like
ERROR: Region
hdfs://cloud137:9000/hbase/table1/01c80f8b54523ad6c242c5f695544f16 on HDFS,
but not listed in META or deployed on any region server.
ERROR: Region
WARN : 04-12 12:54:13 Session 0x0 for server null, unexpected error, closing
socket connection and attempting reconnect
java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at
Anyway, it looks like the two regions below have some kind of overlap.
Does this make sense?
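For what it's worth, two regions [s1, e1) and [s2, e2) overlap exactly when each starts before the other ends (empty end key = end of the key space). A throwaway sketch, using plain String ordering in place of HBase's unsigned byte comparison (close enough for these ASCII keys from the thread):

```java
// Sketch: half-open key-range overlap test, not HBase code.
class RegionOverlap {

    // True if key a sorts before the (exclusive) end key; an empty end key
    // means the end of the key space.
    static boolean before(String a, String endKey) {
        return endKey.isEmpty() || a.compareTo(endKey) < 0;
    }

    // Regions [s1, e1) and [s2, e2) overlap iff each starts before the
    // other ends.
    static boolean overlap(String s1, String e1, String s2, String e2) {
        return before(s1, e2) && before(s2, e1);
    }
}
```

Plugging in the two .META. ranges from earlier in the thread ('LC3MILeAUy8HmRFgU5-ESE-9T7w=' → 'LD4jOJWFyt4m7A3KGFST6d-uj3A=' and 'LC_vN8JYweYYsnKaKbpOo67kUNA=' → …) reports an overlap, since the second start key sorts inside the first range.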
2011/4/12 茅旭峰 m9s...@gmail.com
From the output of scan '.META.' I pasted before, we can see there are two
key ranges
which might cover the put key 'LCgwzrx2XTFkB2Ymz9HeJWPY0Ok='. They are
#1,
Sorry, I had made a mistake in hbase.zookeeper.property.clientPort.
On Tue, Apr 12, 2011 at 1:08 PM, 陈加俊 cjjvict...@gmail.com wrote:
WARN : 04-12 12:54:13 Session 0x0 for server null, unexpected error,
closing socket connection and attempting reconnect
java.net.ConnectException: Connection