The read path is much more complex than the write one, so the response time
has much more variance.
The gap is so wide here that I would bet on Ted's or Stack's points, but
here are a few other sources of variance:
- hbase cache: as Anoop said, maybe the data is already in the hbase cache
(removing dev list)
We observed a zxid mismatch in the hbase server logs.
This looks like a bug. But 0.94.15 is quite old now...
For hbase.client.retries.number=3, HBase uses an exponential back-off between
retries, so setting the retry count to 3 will reduce the hanging time to a
few dozen seconds.
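For instance (a minimal, untested sketch; only the property name comes from
this thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RetryConf {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // With exponential back-off, 3 retries give up after a few dozen
    // seconds instead of hanging for many minutes.
    conf.setInt("hbase.client.retries.number", 3);
  }
}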
Just to make it a little bit more complex, let me repeat what Nick
already said in this thread:
However, a detail of our region recovery process
is that a region actually comes online for writes *before* it's available
for reads. That is, it can recover into a state that is
Googling; if I figure out what this is I will post it here. Would also
appreciate it if someone knows how to cut this down.
Thanks,
Dejan
On Fri, Mar 20, 2015 at 3:49 PM Nicolas Liochon nkey...@gmail.com
wrote:
The split is done by the region servers (the master coordinates). Is
there
some
for digging in here guys.
On Mon, Mar 23, 2015 at 10:13 AM, Nicolas Liochon nkey...@gmail.com
wrote:
If the node is actually down it's fine. But the node may not be that
down
(CAP theorem here); and then it's looking for trouble.
HDFS, by default, declares a node as dead after 10:30 (i.e. 2 x
dfs.namenode.heartbeat.recheck-interval + 10 x dfs.heartbeat.interval =
2 x 5 min + 10 x 3 s)
with this default timeout, at least it
doesn't work the way we expected it to work.
So thanks again (and sorry to everyone being spammed with this), and
especially thanks to Nicolas for pointing me in the right direction.
On Mon, Mar 23, 2015 at 1:37 PM Nicolas Liochon nkey...@gmail.com wrote:
the attachments are rejected by the mailing list
would be the thing that prevents this, correct? I'm surprised it
didn't help Dejan.
On Mon, Mar 23, 2015 at 11:20 AM, Nicolas Liochon nkey...@gmail.com
wrote:
@bryan: yes, you can change hbase.lease.recovery.timeout if you changed the
hdfs settings. But this setting is really for desperate
It was true all the time, together with dfs.namenode.avoid.read.stale.datanode.
On Mon, Mar 23, 2015 at 5:29 PM Nicolas Liochon nkey...@gmail.com
wrote:
Actually, double checking the final patch in HDFS-4721, the stale mode is
taken into account. Bryan is right, it's worth checking the namenode's config.
Especially, dfs.namenode.avoid.write.stale.datanode must be set to true on
the namenode.
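I.e., something like this in hdfs-site.xml on the namenode (a sketch; both
property names are the ones quoted above in this thread):

<property>
  <name>dfs.namenode.avoid.write.stale.datanode</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.avoid.read.stale.datanode</name>
  <value>true</value>
</property>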
On Mon, Mar 23, 2015 at 5:08 PM, Nicolas Liochon nkey
You've changed the value of hbase.zookeeper.timeout to 15 minutes? A very
reasonable target is 1 minute before relocating the regions. That's the
default iirc. You can push it to 20s, but then gc-stopping-the-world
becomes more of an issue. 15 minutes is really a lot. The hdfs stale mode
must
SplitLogManager task at 14:32:35 and
started transitioning just after that. So this is 15 minutes that I'm
talking about.
What am I missing?
On Fri, Mar 20, 2015 at 2:37 PM Nicolas Liochon nkey...@gmail.com wrote:
You've changed the value of hbase.zookeeper.timeout to 15 minutes? A very
If the node is dead, the NoRouteToHostException can happen.
This could be a hdfs or hbase bug (or something else).
For how long do you see the NoRouteToHostException exception?
Basically hbase will try to use that node until hdfs discovers that the
node is stale or dead.
With the default hdfs
As Bryan.
On 5 Mar 2015 17:55, Bryan Beaudreault bbeaudrea...@hubspot.com wrote:
You should run with a backup master in a production cluster. The failover
process works very well and will cause no downtime. I've done it literally
hundreds of times across our multiple production hbase
It's going to be fairly difficult imho.
What you need to look at is regions. Tables are split into regions. Regions
are allocated to region servers (i.e. hbase nodes). Reads and writes are
directed to the region server owning the region. Regions can move from one
region server to another, that's the
If I understand the issue correctly, restarting the master should solve the
problem.
On Wed, Mar 4, 2015 at 5:55 AM, Ted Yu yuzhih...@gmail.com wrote:
Please see HBASE-13067 Fix caching of stubs to allow IP address changes of
restarted remote servers
Cheers
On Tue, Mar 3, 2015 at 8:26 PM,
It's in local memory. When HBase cannot connect to a server, it puts it
into the failedServerList for 2 seconds. This is to avoid having all the
threads going into a potentially long socket timeout. Are you sure that you
can connect from the master to this machine/port?
You can change the time it
You should first try with the 'autoflush' boolean on the htable: set it to
false. It buffers the writes for you and does the writes asynchronously, so
all the multithreading / buffering work is done for you.
If you need a synchronisation point (to free the resources on the sending
side), you can
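A minimal sketch of the pattern (0.94-era API; the table and column names
are made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedPuts {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");
    table.setAutoFlush(false);                  // buffer the puts client side
    table.setWriteBufferSize(2 * 1024 * 1024);  // flush roughly every 2MB
    for (int i = 0; i < 100000; i++) {
      Put p = new Put(Bytes.toBytes("row-" + i));
      p.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes(i));
      table.put(p);                             // goes to the buffer, not the wire
    }
    table.flushCommits();                       // explicit synchronisation point
    table.close();
  }
}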
fwiw CallerDisconnectedException: Aborting call multi means that:
- the query was under execution on the server
- the client reached its timeout and disconnected
- the server saw that and stopped the execution of the query.
So it's the consequence of a slow execution, not the cause.
It would
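If the query legitimately needs that long, raising the client timeout is an
option (a sketch; hbase.rpc.timeout is the standard client rpc timeout
property, not something quoted in this thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcTimeoutConf {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // allow slow multi calls 2 minutes instead of the 60s default
    conf.setInt("hbase.rpc.timeout", 120000);
  }
}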
Hi Arul,
It's a pure client exception: it means that the client has not even tried
to send the query to the server, it failed before.
Why the client failed is another question.
I see that the pool size is 7, have you changed the default configuration?
Cheers,
Nicolas
On Tue, Nov 18, 2014 at
+1
On 31 Oct 2014 23:49, Andrew Purtell apurt...@apache.org wrote:
Based on the positive responses thus far, and unless we see an objection
between now and then, I plan to resolve HBASE-12397 next week by removing
support in 0.98 branch for Hadoop 1.0 (but not Hadoop 1.1) in time for
Hi,
I haven't seen it mentioned, but if I understand correctly each scan
returns a single row? If so you should use Scan#setSmall to save some rpc
calls.
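E.g. (a sketch; row names made up, assumes a version where Scan#setSmall
exists):

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class SmallScan {
  public static void main(String[] args) {
    // a scan expected to return a single row
    Scan scan = new Scan(Bytes.toBytes("row1"), Bytes.toBytes("row2"));
    scan.setSmall(true); // one rpc instead of open / next / close round trips
  }
}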
Cheers,
Nicolas
On Sun, Oct 5, 2014 at 11:28 AM, Qiang Tian tian...@gmail.com wrote:
when using separate HConnection instance, both its
Congrats, guys!
On Tue, Aug 26, 2014 at 7:26 AM, Jeremy Carroll phobos...@gmail.com wrote:
Congratulations. I have seen so many of these names in JIRA. ;)
On Mon, Aug 25, 2014 at 5:24 PM, Jonathan Hsieh j...@cloudera.com wrote:
On behalf of the Apache HBase PMC, I am happy to belatedly
(moving to user)
In your first scenario (put table, row1, cf:a, value1, 100 then put
table, row1, cf:a, value1, 200), there is no deletion, so the
setting KEEP_DELETED_CELLS is not used at all.
The behavior you describe is as expected: there are two versions until
the compaction occurs and removes
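To see the raw cells (a sketch of the scenario above; assumes a table 't'
with family 'cf' at the default VERSIONS=1; Scan#setRaw bypasses version
filtering, so it shows what is physically stored):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class TwoVersionsRaw {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "t");
    Put p1 = new Put(Bytes.toBytes("row1"));
    p1.add(Bytes.toBytes("cf"), Bytes.toBytes("a"), 100L, Bytes.toBytes("value1"));
    table.put(p1);
    Put p2 = new Put(Bytes.toBytes("row1"));
    p2.add(Bytes.toBytes("cf"), Bytes.toBytes("a"), 200L, Bytes.toBytes("value1"));
    table.put(p2);
    Scan scan = new Scan(Bytes.toBytes("row1"));
    scan.setRaw(true);      // return everything physically stored
    scan.setMaxVersions();  // all versions, not just the schema limit
    ResultScanner scanner = table.getScanner(scan);
    for (Result r : scanner) {
      // two cells for cf:a (ts=100 and ts=200) until a compaction removes one
      System.out.println(r.list());
    }
    scanner.close();
    table.close();
  }
}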
What's the HBase version?
You should be able to use guava 15 with HBase 0.96 or 0.98 (see HBASE-9667
for the details).
On Thu, Aug 7, 2014 at 7:57 AM, Deepa Jayaveer deepa.jayav...@tcs.com
wrote:
Are there any tutorials available on the net to connect the Spark Java API
with HBase?
Thanks
Sounds like a bug. AccessControlException should be a non-retriable
exception.
On Fri, Jul 25, 2014 at 12:55 PM, Kashif Jawed Siddiqui kashi...@huawei.com
wrote:
Hi,
In HBase, RPCServer running in secure mode, I try to
simulate authorizeConnection failure which will throw
It's a change in 0.98, coming from HBASE-10080.
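If you relied on that exception to decide when to create the table, an
explicit check works on both versions (a sketch; names made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class EnsureTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    if (!admin.tableExists("mytable")) {
      // create the table explicitly instead of catching an exception
      HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("mytable"));
      desc.addFamily(new HColumnDescriptor("cf"));
      admin.createTable(desc);
    }
    admin.close();
  }
}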
On Wed, Jun 25, 2014 at 3:18 PM, Anand Nalya anand.na...@gmail.com wrote:
Hi,
With HBase 0.96, HConnection.getTable method used to throw an exception in
case the table did not exist. Based on this exception, I was creating tables
in HBase as
What do you mean by down? Does it crash?
The server does not block on 0.96, it immediately sends back an exception.
(See HBASE-9467)
The client is implicitly slowed down by the retries, w/o blocking on the
server. It's managed by the hbase client itself, and it's transparent for
the client
codes
will send requests. And the server is too busy and gc stops the world
and zookeeper doesn't receive heartbeats and regards it as dead
On Wed, Jun 18, 2014 at 6:17 PM, Nicolas Liochon nkey...@gmail.com
wrote:
What do you mean by down? Does it crash?
The server does not block on 0.96
This can be considered as a bug. We're not going to survive long if there
is an OOM, so retrying is not very useful... You can create a jira for this
(https://issues.apache.org/jira/browse/HBASE), and submit a patch if
possible.
For the root cause, if you don't have much memory, you can lower
The version on my repo works with multiple HBase/Hadoop versions. It's at
https://github.com/nkeywal/YCSB
Default is 0.92:
mvn clean package
For 0.94, it's something like, to be changed depending on your version:
mvn clean package -Dhbase.version=0.94.12 -Dhadoop.version=1.0.4
That's standard, even if it's a behavior change compared to 0.94.
It means you're sending puts faster than the server can write.
In such situation, in 0.94, the operation was paused server side. In 0.96,
HBase sends an exception back to the client, and it's up to it to retry
(and it does retry).
It's hbase-client
On Tue, Jan 28, 2014 at 4:36 PM, shapoor esmaili_...@yahoo.com wrote:
To use the new hbase 0.96.1-hadoop2 for a client, which dependency should I
give in my pom.xml?
I have tried it for example with hbase-it and I get the following
Exception:
14/01/28 16:30:18 INFO
It's very strange that you don't see a perf improvement when you increase
the number of nodes.
Nothing in what you've done changed the performance in the end?
You may want to check:
- the number of regions for this table. Are all the region servers busy? Do
you have some splits on the table?
-
Hi,
It's fixed in HBase 0.96 (by HBASE-9667).
Cheers,
Nicolas
On Mon, Dec 16, 2013 at 11:01 AM, Kristoffer Sjögren sto...@gmail.com wrote:
Hi
At the moment HFileWriterV2.close breaks at startup when using Guava 15.
This is not a client problem - it happens because we start a master node
That means more or less backporting the patch to 0.94, no?
It should work imho.
On Mon, Dec 16, 2013 at 3:16 PM, Kristoffer Sjögren sto...@gmail.com wrote:
Thanks! But we can't really upgrade to HBase 0.96 right now, but we need to
go to Guava 15 :-(
I was thinking of overriding the
It's hbase.regionserver.metahandler.count. Not sure it causes the issue
you're facing, though. What's your HBase version?
On Tue, Dec 10, 2013 at 1:21 PM, Federico Gaule fga...@despegar.com wrote:
There is another set of handlers we haven't customized: PRI IPC (priority?).
What are those
It was written to be generic, but limited to 'put' to maintain the backward
compatibility.
Some 'Row' do not implement 'heapSize', so we have a limitation for some
types for the moment (we need the objects to implement heapSize as we need
to know when it's time to flush the buffer). This can
My understanding is that HBase requires durable sync capabilities of
HDFS (i.e. hflush() hsync()), but does *not* require file append
capabilities.
99.99% true. The remaining 0.01% is an exceptional code path during the
data recovery (as a fall back mechanism to ensure that we can start the
Congratulations, Nick!
;-)
On Wed, Sep 11, 2013 at 8:17 AM, Anoop John anoop.hb...@gmail.com wrote:
Congratulations Nick... Welcome...
-Anoop-
On Wed, Sep 11, 2013 at 10:23 AM, Marcos Luis Ortiz Valmaseda
marcosluis2...@gmail.com wrote:
Congratulations, Nick !!! Keep doing this
That's linux terminology. 0.95 is a developer release. It should not go into
production. When it's ready for production, it will be released as 0.96.
0.96 should be ready soon; tests (and fixes) are in progress. There is
already a release candidate available: 0.96.RC0.
There should be a new release
(redirected user mailing list, dev mailing list in bcc)
Various comments:
- you should not need to add the hadoop jars in your client application pom,
they will come with hbase. But this should not be the cause of your issue.
- what does the server say in its logs?
- I'm surprised by this: Client
You won't have this directly. /hbase/rs contains the regionservers that are
online. When a regionserver dies, hbase (or zookeeper if it's a silent
failure) will remove it from this list. (And obviously this is internal to
hbase and could change or not at any time :-) ). But technically you can do
There is a comment in this class that is outdated (Once set, the
parameters that specify a column cannot be changed without deleting the
column and recreating it. If there is data stored in the column, it will be
deleted when the column is deleted.). This is from 2007. I will fix this.
It's
but no success. Any
hints ?
I am running Hbase from Cloudera version 0.94.6-cdh4.3.0.
On Mon, Sep 9, 2013 at 1:25 AM, Gaetan Deputier gae...@ividence-inc.com
wrote:
Exactly what i was looking for. Thank you very much !
On Mon, Sep 9, 2013 at 12:48 AM, Nicolas Liochon nkey
It's open source. My personal point of view is that if someone is willing
to spend time on the backport, there should be no issue if the regression
risk is clearly acceptable and the rolling restart possible. If it's
necessary (i.e. there is no agreement on the risk level), then we could as
well go
It's not uncommon to bump these values to something like 5 minutes, for the
exact reason you mention.
The obvious impact is that if the clients don't close the connections the
server will have to keep the resources. It's usually manageable.
Another one is that if the machine running the server
Well done, Rajesh!
On Tue, Aug 13, 2013 at 8:44 AM, Anoop John anoop.hb...@gmail.com wrote:
Good to see this Rajesh. Thanks a lot to Huawei HBase team!
-Anoop-
On Tue, Aug 13, 2013 at 11:49 AM, rajeshbabu chintaguntla
rajeshbabu.chintagun...@huawei.com wrote:
Hi,
We have been
Seems to be the same issue as here:
http://stackoverflow.com/questions/16847319/cassandra-on-solaris-10-64-bit-crashing-with-unsafe-getlong
It says it's a jvm bug and that seems right.
You may want to try the very latest jvm on your platform (it's unlikely to
work), as well as jdk 1.6 (it could
It could be HBASE-6870?
On Mon, Jul 29, 2013 at 7:37 PM, Jean-Daniel Cryans jdcry...@apache.org wrote:
Can you tell who's doing it? You could enable IPC debug for a few secs
to see who's coming in with scans.
You could also try to disable pre-fetching, set
hbase.client.prefetch.limit to 0
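I.e. (a sketch; the property name is the one mentioned above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class NoPrefetch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // disable region location pre-fetching on the client
    conf.setInt("hbase.client.prefetch.limit", 0);
  }
}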
and stoprow
Scan scan = new Scan(Bytes.toBytes("adidas"), Bytes.toBytes("adidas1"));
Our cluster size is 15. The load average I see in the master is 78%... It
is not that overloaded, but writes are happening in the cluster...
Thanks
Kiran
On Wed, Jun 12, 2013 at 10:49 PM, Nicolas Liochon
What was your test exactly? You killed -9 a region server but kept the
datanode alive?
Could you detail the queries you were doing?
On Wed, Jun 12, 2013 at 2:10 PM, kiran kiran.sarvabho...@gmail.com wrote:
It is not possible for us to migrate to a new version immediately.
@Anoop we
You can configure the setting below to a higher value to close more regions
at a time.
<property>
  <name>hbase.regionserver.executor.closeregion.threads</name>
  <value>3</value>
</property>
On Wed, Jun 12, 2013 at 7:38 PM, Nicolas Liochon nkey...@gmail.com
wrote:
What was your test exactly? You
Hello,
Option 1:
We still have some flaky tests. You can benchmark your build against
https://builds.apache.org/job/HBase-TRUNK/ and
https://builds.apache.org/job/hbase-0.95/
You can also use this tool: https://github.com/jeffreyz88/jenkins-tools to
get a review of the last failures:
On 0.95, we
!
On Wed, May 29, 2013 at 3:12 PM, Nicolas Liochon nkey...@gmail.com
wrote:
That's HDFS.
When a file is currently written, the size is not known, as the write is in
progress. So the namenode reports a size of zero (more exactly, it does not
take into account the hdfs block being written when it calculates the
size). When you read, you go to the datanode owning the data,
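To make the behaviour concrete (a sketch; the path is made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OpenFileSize {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path log = new Path("/hbase/.logs/server1/somelog"); // still being written
    // the namenode does not count the block being written, hence often 0:
    System.out.println(fs.getFileStatus(log).getLen());
    // reading the file (fs.open(log)) goes to the datanodes and does
    // return the data that is already there
  }
}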
? Or will it always
be zero for the last one (even if it contains the data)
-Original Message-
From: Nicolas Liochon [mailto:nkey...@gmail.com]
Sent: Friday, May 17, 2013 4:39 PM
To: user
Subject: Re: Doubt Regarding HLogs
That's HDFS.
When a file is currently written, the size
You can try YourKit, they have evaluation licenses. There is one gotcha:
some classes are excluded by default, and this includes org.apache.* . So
you need to change the default config when using it with HBase.
On Thu, May 2, 2013 at 7:54 PM, Bryan Keller brya...@gmail.com wrote:
I ran one of
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:195)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:163)
at java.lang.Thread.run(Thread.java:662)
On Sat, Apr 20, 2013 at 1:16 AM, Nicolas Liochon nkey...@gmail.com
wrote
the UNDER_RECOVERY stage is for, in HDFS? Also does
the stale node stuff kick in for that state?
Thanks
Varun
On Fri, Apr 19, 2013 at 4:00 AM, Nicolas Liochon nkey...@gmail.com
wrote:
Thanks for the detailed scenario and analysis. I'm going to have a
look
-5723958680970112840_174056,
targets=[10.156.194.94:50010, 10.156.192.106:50010, 10.156.195.38:50010],
newGenerationStamp=174413)
Not sure but this looks like a case where data could be lost ?
Varun
On Fri, Apr 19, 2013 at 12:38 AM, Nicolas Liochon nkey...@gmail.com
wrote:
Hey Varun
I think there is something in the middle that could be done. It was
discussed here a while ago, but without any JIRA created. See thread:
http://mail-archives.apache.org/mod_mbox/hbase-user/201302.mbox/%3CCAKxWWm19OC+dePTK60bMmcecv=7tc+3t4-bq6fdqeppix_e...@mail.gmail.com%3E
If someone can spend
But don't forget you don't have to use pooled tables anymore. You can
create the tables you need on the fly, see 9.3.1.1. Connection Pooling.
IIRC, it's available in the version you're using (but I haven't checked).
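Roughly (a sketch; assumes a version where HConnectionManager.createConnection
and HConnection.getTable are available, names made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class OnTheFlyTables {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HConnection connection = HConnectionManager.createConnection(conf);
    HTableInterface table = connection.getTable("mytable"); // cheap to create
    try {
      Put p = new Put(Bytes.toBytes("row"));
      p.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      table.put(p);
    } finally {
      table.close();    // releases to the connection, not a full teardown
    }
    connection.close(); // once, when the application stops
  }
}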
Cheers,
Nicolas
On Wed, Apr 10, 2013 at 5:26 PM, Jim the Standing Bear
Yes, decommissioning the regionserver does not mean decommissioning the
datanode.
Here, if I understand your first step correctly, you migrated the regions to
other region servers. Physically, the data was still on the previous
machine, with the hdfs datanode. It's not used anymore for writes if all
Welcome, Sergey!
On Sat, Feb 23, 2013 at 2:47 AM, Marcos Ortiz mlor...@uci.cu wrote:
Congratulations, Sergey.
On 02/22/2013 04:39 PM, Ted Yu wrote:
Hi,
Sergey has 51 issues under his name:
*
https://issues.apache.org/jira/issues/?jql=project%20%
Looking at the code, it seems possible to do this server side within the
multi invocation: we could group the gets by region, and do a single scan.
We could also add some heuristics if necessary...
On Tue, Feb 19, 2013 at 9:02 AM, lars hofhansl la...@apache.org wrote:
I should qualify that
?
Thanks
Varun
On Tue, Feb 19, 2013 at 12:37 AM, Nicolas Liochon nkey...@gmail.com
wrote:
Looking at the code, it seems possible to do this server side within the
multi invocation: we could group the gets by region, and do a single scan.
We could also add some heuristics if necessary
and issue a Scan for each
cluster.
-- Lars
From: Nicolas Liochon nkey...@gmail.com
To: user user@hbase.apache.org
Sent: Tuesday, February 19, 2013 9:28 AM
Subject: Re: Optimizing Multi Gets in hbase
Imho, the easiest thing to do would be to write a filter
As well, an advantage of going only to the servers needed is the famous
MTTR: there is less chance of going to a dead server or to a region that
just moved.
On Tue, Feb 19, 2013 at 7:42 PM, Nicolas Liochon nkey...@gmail.com wrote:
Interesting, in the client we're doing a group by location
i) Yes, or, at least, often yes.
ii) You're right. It's difficult to guess how much it would improve the
performances (there is a lot of caching effect), but using a single scan
could be an interesting optimisation imho.
Nicolas
On Mon, Feb 18, 2013 at 10:57 AM, Varun Sharma
Congrats, Devaraj!
On Thu, Feb 7, 2013 at 2:26 PM, Marcos Ortiz mlor...@uci.cu wrote:
Congratulations, Devaraj.
On 02/07/2013 02:20 AM, Lars George wrote:
Congrats! Welcome aboard.
On Feb 7, 2013, at 6:19, Ted Yu yuzhih...@gmail.com wrote:
Hi,
We've brought in one new Apache HBase
Hi,
Looking at the code, no.
Nicolas
On Tue, Feb 5, 2013 at 9:21 AM, samar kumar samar.opensou...@gmail.com wrote:
Thanks Adrien.
Sure, but is there any other way without stopping the masters? Anything
like cleaning the zk? Is there a time-out which cleans the dead rs, or a max
count?
Hi,
From the logs, it seems you are trying to use the non-distributed mode:
WARN org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine: Not
starting a distinct region server because hbase.cluster.distributed is false
If it's the case, you don't have to launch a separate zookeeper region
Well, first you need to decide on what you want to do (i.e. distributed or
not) and act accordingly.
I would recommend trying non-distributed when you start. This means
you have a single process to launch.
not distributed is the default, so if it can't find the configuration files
it will
Yes, HTable is not thread safe, and using synchronized around them could
work, but would be implementation dependent.
You can have one HTable per request at a reasonable cost since
https://issues.apache.org/jira/browse/HBASE-4805. It seems to be
available in 0.92 as well.
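I.e. something like (a sketch; one shared pool and configuration, names made
up):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class PerRequestTable {
  static final Configuration CONF = HBaseConfiguration.create();
  static final ExecutorService POOL = Executors.newFixedThreadPool(10);

  public static void handleRequest(String row) throws Exception {
    // cheap since HBASE-4805: the heavy state is shared via CONF and POOL
    HTable table = new HTable(CONF, Bytes.toBytes("mytable"), POOL);
    try {
      table.get(new Get(Bytes.toBytes(row)));
    } finally {
      table.close();
    }
  }
}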
Cheers,
Nicolas
On
the synchronization
granularity?
Thanks so much!
Bing
On Tue, Feb 5, 2013 at 5:31 AM, Nicolas Liochon nkey...@gmail.com wrote:
Yes, HTable is not thread safe, and using synchronized around them could
work, but would be implementation dependent.
You can have one HTable per request at a reasonable
Hi,
IIRC, it's still there on 0.94. 0.96 is not yet released, it's still in
dev, so 0.94 is anyway the version to use. HBASE-6553 contains the patch to
revert if you want to build your own 0.96 version with Avro.
From the mail archive, the reasons to deprecate it and then remove it were:
Yep, I'm ok with that. It will need to be put in the interface (vs. the
implementation class). Would be nice if you could implement the two missing
methods (i.e. public HRegionLocation locateRegion(final byte [] regionName))
On Thu, Jan 3, 2013 at 7:33 PM, Jean-Marc Spaggiari
hbasetest.sh was developed before we had parallelism via maven surefire.
It's not really maintained anymore (actually it's somewhere in my todo list
to remove it). But for sure there is no magic there :-).
On Wed, Jan 2, 2013 at 4:45 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org
wrote:
If it works, it's ok: it's not going to give you wrong results. Just that
it adds nothing to a plain mvn test.
So if it worked for you on this patch, then you're done :-).
This should be enough in most cases:
mvn test
If you want to play it safe, you will run all the tests before sending the
You don't want the balancer to move a region to a region server you're
about to close. And a rolling restart is stressful enough on the system;
minimizing any extra noise is safer.
But when a region server dies, its regions are reallocated, whatever the
balancer settings.
On Thu, Dec 27, 2012
Hi,
First, check the date/time on both servers and check they don't differ;
that's what the error says.
You can configure the max allowed skew with hbase.master.maxclockskew, but
it's unlikely to be a good idea: it's always safer, in any distributed
system, to have the servers sharing the same time.
This should help:
http://hbase.apache.org/book/important_configurations.html#bigger.regions
On Mon, Dec 17, 2012 at 9:11 AM, tgh guanhua.t...@ia.ac.cn wrote:
Or what about the max size for one region and what about the max
size of region for one server?
HBASE-5718 seems to say it's reproducible only on openjdk.
HBase requires the jdk from Oracle (see
http://hbase.apache.org/book.html#basic.prerequisites).
Issues that occur on other jdks are not rejected, but usually receive a
lower priority. If someone provides a patch, it will be integrated.
I think it's safer to use a newer version (0.94): there are a lot of
improvements around performance and volumes between 0.92 and 0.94. As well,
there are many more bug fix releases on the 0.94.
For the number of regions, there is no maximum written in stone. Having too
many regions will essentially impact
After looking at the code, it seems that it's done by the mini zk
cluster: the directory is deleted at startup.
This is because the default mode (non distributed, all threads in a single
process) uses a specific piece of code (this mini* stuff). As it's for
tests, it ensures that there is no
Christophe,
What do you mean by I can not do it ?
Is it giving you an error? You don't know the steps? You don't have the
rights?
--
JM
2012/12/13, Zbierski Christophe christophe.zbier...@atos.net:
Yes, I tried, but I can not do it ... :(
-Original Message-
From: Nicolas Liochon [mailto:nkey...@gmail.com]
Sent: Thursday, December 13, 2012 13:07
To: user@hbase.apache.org
Subject: Re: loss znode
After looking
is down for less than 3 minutes,
right?
Thanks,
JM
2012/12/2, Nicolas Liochon nkey...@gmail.com:
It's not accessible, but it's more or less transparent (latency impact
aside) for the end user: the hbase client will retry the operation.
On Sun, Dec 2, 2012 at 11:10 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
?
Likely. What's the HBase version btw?
On Fri, Nov 30, 2012 at 6:51 PM, wayne li wayne.li.1...@gmail.com wrote:
Found 1 deadlock.
I have checked via a manual client; zookeeper and the regionservers are
working fine. I can getData from znode /hbase/root-region-server.
It's a regression in 3.4.4 (HBASE-6917), fixed in 3.4.5 (HBASE-7159)
Cheers,
Nicolas
On Thu, Nov 29, 2012 at 10:16 AM, Yu Li car...@gmail.com wrote:
Dear all,
I checked the Zookeeper site and found the latest stable release is 3.4.5,
and although there are only 2 bug fixes from zk 3.4.4 to
Theoretically, there is no problem in using HBase 0.94.x with ZK 3.4.5. If there
is a problem with this config, we should have it as well with HBase 0.96.
On Thu, Nov 29, 2012 at 5:33 PM, Yu Li car...@gmail.com wrote:
recently, no wonder I got zk 3.4.3 in my stale pom.xml...
Yes, it's not useful to set the master address in the client. I suppose it
was different a long time ago, hence there are some traces in different
documentation.
The master references itself in ZooKeeper. So if the master finds itself to
be localhost, ZooKeeper will contain localhost, and the
Hi Mohammad,
Your answer was right, just that specifying the master address is not
necessary (anymore I think). But it does no harm.
Changing the /etc/hosts (as you did) is right too.
Lastly, if the cluster is standalone and accessed locally, having localhost
in ZK will not be an issue. However,
Today, at this time of writing, it should be ok: the tests write in a
specific directory, and the ports are dynamic. But it's not without danger:
what if tomorrow someone creates a bug and hardcodes a port already used by
your cluster? It's unlikely, but it happened in the past. As well, the CPU
and hadoop are both running on
their own. I don't think there is a risk for the local test to find
and write on the local HBase/Hadoop files.
JM
2012/11/8, Nicolas Liochon nkey...@gmail.com:
Today, at this time of writing, it should be ok: the tests write in a
specific directory
Hi Varun,
HDFS-3703 and HDFS-3912 are about this.
The story is not over yet (and there are other stuff like HDFS-3704,
HDFS-3705, HDFS-3706), but it helps by lowering the probability to go to a
dead datanode: hdfs waits 10 minutes before deciding a datanode is dead,
with the jiras mentioned
Hi,
It's ok with a capital 'K'
mvn -PlocalTests -Dtest=TestZooKeeper test
Running org.apache.hadoop.hbase.TestZooKeeper
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 198.84 sec
;-)
Nicolas
On Tue, Oct 30, 2012 at 5:50 AM, Ted Yu yuzhih...@gmail.com wrote:
The following
Hi,
The schema design is important. There is this entry to look at at least:
http://hbase.apache.org/book.html#rowkey.design
For the config, could you pastebin the hdfs and hbase config files you used?
N.
On Tue, Oct 23, 2012 at 5:48 PM, Nick maillard
nicolas.maill...@fifty-five.com wrote:
Hi