On Tue, May 13, 2014 at 9:58 AM, Liam Slusser lslus...@gmail.com wrote:
You can also create a table via the hbase shell with pre-split tables like
this...
Here is a 32-byte key split into 16 different regions, using base16 (i.e. an MD5
hash) for the key type.
create 't1', {NAME => 'f1'},
{SPLITS =>
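For hex-encoded keys like MD5 digests, the split points can be derived from the leading hex digit. A minimal sketch (the helper name is mine, not part of the original post) computing the 15 split keys needed for 16 regions:

```python
# Derive split keys for a table whose row keys are hex-encoded digests
# (e.g. MD5). Splitting the hex keyspace into 16 regions needs 15 split
# points: '1', '2', ..., '9', 'a', ..., 'f'.
def hex_split_keys(num_regions):
    """Return num_regions - 1 evenly spaced split points over the hex digits."""
    digits = "0123456789abcdef"
    step = len(digits) // num_regions
    return [digits[i] for i in range(step, len(digits), step)][:num_regions - 1]

print(hex_split_keys(16))  # ['1', '2', ..., 'f']
```

The resulting list is what would go in the SPLITS clause of the shell's create command.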
Are you using HFileOutputFormat.configureIncrementalLoad() to set up the
partitioner and the reducers? That will take care of ordering your keys.
J-D
On Thu, May 1, 2014 at 5:38 AM, Guillermo Ortiz konstt2...@gmail.com wrote:
I have been looking at the code in HBase, but, I don't really
On Tue, Apr 15, 2014 at 12:17 AM, Hansi Klose hansi.kl...@web.de wrote:
Hi Jean-Daniel,
thank you for your answer and for bringing some light into the darkness.
You're welcome!
You can see the bad rows listed in the user logs for your MR job.
What log do you mean. The output from the command
Yeah you should use endtime, it was fixed as part of
https://issues.apache.org/jira/browse/HBASE-10395.
You can see the bad rows listed in the user logs for your MR job.
J-D
On Mon, Apr 14, 2014 at 3:06 AM, Hansi Klose hansi.kl...@web.de wrote:
Hi,
I wrote a little script which should
It's a simple leader election via ZooKeeper.
J-D
On Tue, Apr 8, 2014 at 7:18 AM, gortiz gor...@pragsis.com wrote:
Could someone explain the process for selecting the next HMaster
when the current one goes down? I've been looking for information about
it in the documentation,
On Mon, Mar 17, 2014 at 6:01 AM, Linlin Du linlindu2...@hotmail.com wrote:
Hi all,
First question:
According to the documentation, hfile.block.cache.size is by default 40
percent of the maximum heap (the -Xmx setting). If -Xmx is not used and only
-Xms is used, what will it be in this case?
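For the first question, the block cache size is simply a fraction of the maximum heap. A quick illustrative sketch (the function name is mine, not an HBase API):

```python
# Illustrative only: approximate the on-heap block cache size as
# hfile.block.cache.size (default 0.4) times the maximum heap (-Xmx).
def block_cache_bytes(max_heap_bytes, hfile_block_cache_size=0.4):
    return int(max_heap_bytes * hfile_block_cache_size)

# A 4 GB max heap with the default ratio gives roughly a 1.6 GB block cache.
print(block_cache_bytes(4 * 1024**3))
```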
Second
Resurrecting this old thread. The following error:
java.lang.RuntimeException: Failed suppression of fs shutdown hook
Is caused when HBase is compiled against Hadoop 1 and has Hadoop 2 jars on
its classpath. Someone on IRC just had the same issue and I was able to
repro after seeing the
IIRC it used to be an issue if the folder already existed, even if
empty. It's not the case anymore.
J-D
On Fri, Feb 7, 2014 at 3:38 PM, Jay Vyas jayunit...@gmail.com wrote:
Hi hbase.
In normal installations, I'm wondering who should create the hbase root.dir.
1) I have seen
The problem with having a bunch of masters racing is that it's not evident
for the operator who won, so specifying --backup to all but one master
ensures that you always easily know where the master is.
Relevant code from HMaster.java:
// If we're a backup master, stall until a primary to
That's right, round robin should only be applied when you start answering
some client request and stick to it until you're done.
J-D
On Fri, Dec 6, 2013 at 9:17 PM, Varun Sharma va...@pinterest.com wrote:
Hi everyone,
I have a question about the hbase thrift server and running scans in
Mesika asaf.mes...@gmail.com wrote:
Can you please explain why is this suspicious?
On Monday, October 7, 2013, Jean-Daniel Cryans wrote:
This line:
[CMS-concurrent-mark: 12.929/88.767 secs] [Times: user=14.30 sys=3.74, real=88.77 secs]
Is suspicious. Are you swapping?
J-D
, Oct 25, 2013 at 9:00 PM, Jean-Daniel Cryans jdcry...@apache.org
wrote:
What's happening before this stack trace in the log?
J-D
On Fri, Oct 25, 2013 at 6:10 AM, Salih Kardan karda...@gmail.com
wrote:
Hi all
I am getting the error below while starting hbase (hbase 0.94.11). I
What's happening before this stack trace in the log?
J-D
On Fri, Oct 25, 2013 at 6:10 AM, Salih Kardan karda...@gmail.com wrote:
Hi all
I am getting the error below while starting hbase (hbase 0.94.11). I guess
since hbase cannot
connect to hadoop, I get this error.
On Wed, Oct 9, 2013 at 10:59 AM, Vladimir Rodionov
vrodio...@carrieriq.com wrote:
I can't say for SCR. There is a possibility that the feature is broken, of
course.
But the fact that hbase.regionserver.checksum.verify does not affect
performance means that the OS effectively caches the HDFS checksum
This line:
[CMS-concurrent-mark: 12.929/88.767 secs] [Times: user=14.30 sys=3.74, real=88.77 secs]
Is suspicious. Are you swapping?
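The tell in that line is that wall-clock (real) time dwarfs the CPU (user + sys) time, which usually means the JVM was blocked on something outside the CPU, swapping being the usual suspect. A rough sketch of the check (illustrative only, not a complete GC-log parser):

```python
import re

def looks_blocked(gc_line, ratio_threshold=3.0):
    """Flag a GC phase whose wall-clock time far exceeds its CPU time."""
    m = re.search(r"user=([\d.]+) sys=([\d.]+),\s*real=([\d.]+)", gc_line)
    if not m:
        return False
    user, sys_t, real = map(float, m.groups())
    return real > ratio_threshold * (user + sys_t)

line = ("[CMS-concurrent-mark: 12.929/88.767 secs] "
        "[Times: user=14.30 sys=3.74, real=88.77 secs]")
print(looks_blocked(line))  # True: 88.77s real vs only ~18s of CPU time
```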
J-D
On Mon, Oct 7, 2013 at 8:34 AM, prakash kadel prakash.ka...@gmail.com wrote:
Also,
why is the CMS not kicking in early, i have set XX:+
While we're on the topic of upcoming meetups, there's also a meetup at
Facebook's NYC office the week of Strata/Hadoop World (10/28). There's
still room for about 50 attendees.
http://www.meetup.com/HBase-NYC/events/135434632/
J-D
On Mon, Oct 7, 2013 at 2:10 PM, Enis Söztutar e...@apache.org
AM, prakash kadel prakash.ka...@gmail.com
wrote:
thanks,
yup, it seems so. I have 48 gb memory. i see it swaps at that point.
btw, why is the CMS not kicking in early? do you have any idea?
sincerely
On Tue, Oct 8, 2013 at 3:00 AM, Jean-Daniel Cryans jdcry...@apache.org
wrote
hbase.master was removed when we added zookeeper, so now a client will do a
lookup in ZK instead of talking to a pre-determined master. So in a
way, hbase.zookeeper.quorum is what replaces hbase.master
FWIW that was done in 0.20.0 which was released in September of 2009, so
hbase.master has been
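In practice that means a client only needs the ZooKeeper quorum in its hbase-site.xml; a minimal sketch of the fragment (hostnames are placeholders):

```xml
<!-- Client-side hbase-site.xml: point at the ZK quorum, not a master host -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```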
I like the way you were able to dig down into multiple logs and present us
the information, but it looks more like GC than an HDFS failure. In your
region server log, go back to the first FATAL and see if it got a session
expired from ZK and other messages like a client not being able to talk to
a
That means that the master cluster isn't able to see any region servers in
the slave cluster... is cluster b up? Can you create tables?
J-D
On Fri, Sep 27, 2013 at 3:23 AM, Arnaud Lamy al...@ltutech.com wrote:
Hi,
I tried to configure a replication with 2 boxes (ab). A hosts hbase zk
and
Your details are missing important bits like your configurations,
Hadoop/HBase versions, etc.
Doing those random reads inside your MR job, especially if they are reading
cold data, will indeed make it slower. Just to get an idea, if you skip
doing the Gets, how fast does it become?
J-D
On Fri,
-Daniel Cryans jdcry...@apache.org
wrote:
Your details are missing important bits like your configurations,
Hadoop/HBase versions, etc.
Doing those random reads inside your MR job, especially if they are
reading
cold data, will indeed make it slower. Just to get an idea, if you skip
You'd need to use 0.94 (or CDH4.2+ since you are mentioning being on CDH)
to have access to TableInputFormat.SCAN_ROW_START and SCAN_ROW_STOP then
all you need to do is to copy Export's code and add what you're missing.
J-D
On Tue, Sep 24, 2013 at 5:42 PM, karunakar lkarunaka...@gmail.com
On flushing we do some cleanup, like removing deleted data that was already
in the MemStore or extra versions. Could it be that you are overwriting
recently written data?
48MB is the size of the Memstore that accumulated while the flushing
happened.
J-D
On Tue, Sep 24, 2013 at 3:50 AM, aiyoh79
On Mon, Sep 23, 2013 at 9:14 AM, John Foxinhead john.foxinh...@gmail.com wrote:
Hi all. I'm doing a project for my university so I have to know
perfectly how all the HBase ports work. Studying the documentation I found
that Zookeeper accepts connections on port 2181, HBase master on port
Could happen if a region moves since locks aren't persisted, but if I were
you I'd ask on the opentsdb mailing list first.
J-D
On Thu, Sep 19, 2013 at 10:09 AM, Tianying Chang tich...@ebaysf.com wrote:
Hi,
I have a customer who uses OpenTSDB. Recently we found that only less than
10% data
You need to create the table with pre-splits, see
http://hbase.apache.org/book.html#perf.writing
J-D
On Thu, Sep 19, 2013 at 9:52 AM, Dolan Antenucci antenucc...@gmail.com wrote:
I have about 1 billion values I am trying to load into a new HBase table
(with just one column and column family),
You can always remove the NOT clause by changing the statement, but I'm
wondering what your use case really is. HBase doesn't have secondary
indexes so, unless you are doing a short-ish scan (let's say a million
rows), it means you want to do a full table scan and that doesn't scale.
J-D
On
Ah I see, well unless you setup Secure HBase there won't be any perms
enforcement.
So in which way is your application failing to use Selector? Do you have
an error message or stack trace handy?
J-D
On Tue, Sep 17, 2013 at 5:43 AM, BG bge...@mitre.org wrote:
Well we are trying to find out
(putting cdh user in BCC, please don't cross-post)
The web UIs for both the master and the region server have a section called
Tasks, with a bunch of links like this:
Tasks
Show All Monitored Tasks Show non-RPC Tasks Show All RPC Handler Tasks Show
Active RPC Calls Show Client Operations View
HBASE-8753 doesn't seem related.
Right now there's nothing in the shell that does the equivalent of this:
Delete.deleteFamily(byte [] family)
But it's possible to run java code in the jruby shell so in the end you can
still do it, just takes more lines.
J-D
On Mon, Sep 16, 2013 at 1:45 AM,
What are you trying to do bg? If you want to setup user permissions you
also need to have a secure HBase (the link that Ted posted) which
involves Kerberos.
J-D
On Mon, Sep 16, 2013 at 1:33 PM, Ted Yu yuzhih...@gmail.com wrote:
See http://hbase.apache.org/book.html#d0e5135
On Mon, Sep 16,
Release date is: when it gets released. We are currently going through
release candidates and as soon as one gets accepted we release it. I'd like
to say it's gonna happen this month but who knows.
There's probably one or two presentations online that explain what's in
0.96.0, but the source of
Or roll back to CDH 4.2's HBase. They are fully compatible.
J-D
On Thu, Sep 12, 2013 at 10:25 AM, lars hofhansl la...@apache.org wrote:
Not that I am aware of. Reducing the HFile block size will lessen this
problem (but then cause other issues).
It's just a fix to the RegexStringFilter. You
Yeah there isn't a whole lot of documentation about metrics. Could it be
that you are still running on a default 1GB heap and you are pounding it
with multiple clients? Try raising the heap size?
FWIW I gave a presentation at HBaseCon with Kevin O'Dell about HBase
operations which could shed some
Scan.setBatch does what you are looking for, since with a Get there's no
way to iterate over multiple calls:
https://github.com/apache/hbase/blob/0.94.2/src/main/java/org/apache/hadoop/hbase/client/Scan.java#L306
Just make sure to make the Scan start at the row you want and stop right
after it.
What's your /etc/hosts on the master like? HBase does a simple lookup to
get the machine's hostname and it seems your node reports itself as being
localhost.
On Tue, Sep 3, 2013 at 6:23 AM, Omkar Joshi omkar.jo...@lntinfotech.com wrote:
I'm trying to set up a 2-node HBase cluster in distributed
You probably put a string in there that was a number, and increment expects
an 8-byte long. For example, if you did:
put 't1', '9row27', 'columnar:column1', '1'
Then did an increment on that, it would fail.
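The mismatch is one of encoding: the shell's put stores the ASCII string '1' (a single byte), while increment requires the cell value to be an 8-byte big-endian long (what Bytes.toBytes(long) produces in Java). A hedged Python sketch of the difference (variable names are mine):

```python
import struct

ascii_one = b"1"                 # what `put ... '1'` stores: one byte, 0x31
long_one = struct.pack(">q", 1)  # what increment expects: 8-byte big-endian long

# increment fails on the first cell because a 1-byte value cannot be
# decoded as a long before adding the delta.
print(len(ascii_one), len(long_one))  # 1 8
```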
J-D
On Thu, Aug 29, 2013 at 4:42 AM, yeshwanth kumar yeshwant...@gmail.com wrote:
i
Region servers replicate data written to them, so look at how your regions
are distributed.
J-D
On Tue, Aug 27, 2013 at 11:29 AM, Demai Ni nid...@gmail.com wrote:
hi, guys,
I am using hbase 0.94.9. And setup replication from a 4-nodes master(3
regserver) to a 3-nodes slave(2 regserver).
FYI you'll be in the same situation with 0.95.2, actually worse since it's
really just a developer preview release.
But if you meant 'try' in its strict sense, i.e. use it on a test cluster,
then yes please do. The more people we get to try it out the better 0.96.0
will be.
J-D
On Thu, Aug 22,
You can find a lot here: http://hbase.apache.org/replication.html
And how many logs you can queue is how much disk space you have :)
On Tue, Aug 20, 2013 at 7:23 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi,
If I have a master - slave replication, and master went down,
major compaction of 1 file(s), new
file=hdfs://x.x.x.x:9000/hbase/NOTIFICATION_HISTORY/b00086bca62ee55796a960002291aca4/n/4754838096619480671
I find a new file is created for every major compaction trigger.
Regards,
R.Monish
On Mon, Aug 19, 2013 at 11:52 PM, Jean-Daniel Cryans jdcry
stopped the HMaster, but not the Region
Servers, then restarting HDFS. What's the correct order of operations for
bouncing everything?
On Thu, Aug 1, 2013 at 5:21 PM, Jean-Daniel Cryans jdcry...@apache.org wrote:
Can you follow the life of one of those blocks through the Namenode and
datanode logs
:07 PM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
Doing a bin/stop-hbase.sh is the way to go, then on the Hadoop side
you do stop-all.sh. I think your ordering is correct but I'm not sure
you are using the right commands.
J-D
On Fri, Aug 2, 2013 at 8:27 AM, Patrick Schless
patrick.schl
I can't think of how your missing blocks would be related to
HBase replication, there's something else going on. Are all the
datanodes checking back in?
J-D
On Thu, Aug 1, 2013 at 2:17 PM, Patrick Schless
patrick.schl...@gmail.com wrote:
I'm running:
CDH4.1.2
HBase 0.92.1
Hadoop 2.0.0
nothing special about data05, and it seems to be in the cluster, the same
as anyone else.
On Thu, Aug 1, 2013 at 5:04 PM, Jean-Daniel Cryans jdcry...@apache.org wrote:
I can't think of how your missing blocks would be related to
HBase replication, there's something else going on. Are all
"Unable to load realm info from SCDynamicStore" is only a warning and
a red herring.
What seems to be happening is that your shell can't reach zookeeper.
Are Zookeeper and HBase running? What other health checks have you
done?
J-D
On Tue, Jul 30, 2013 at 10:28 PM, Seth Edwards
Can you tell who's doing it? You could enable IPC debug for a few secs
to see who's coming in with scans.
You could also try to disable pre-fetching, set hbase.client.prefetch.limit to 0
Also, is it even causing a problem or you're just worried it might
since it doesn't look normal?
J-D
On
You could always set hbase.online.schema.update.enable to true on your
master, restart it (but not the cluster), and you could do what you
are describing... but it's a risky feature to use before 0.96.0.
Did you also set hbase.replication to true? If not, you'll have to do
it on the region
0.95.1 is a developer preview release, if you are just starting with HBase
please grab the stable release from 0.94, for example
http://mirrors.sonic.net/apache/hbase/stable/
J-D
On Thu, Jul 18, 2013 at 1:51 PM, Jonathan Cardoso
jonathancar...@gmail.com wrote:
I was trying to follow the
Inline.
J-D
On Wed, Jul 17, 2013 at 7:10 AM, yonghu yongyong...@gmail.com wrote:
Thanks for your quick response!
For the question one, what will be the latency? How long we need to wait
until the daughter regions are again online?
Usually a matter of 1-2 seconds.
regards!
Yong
On
Yeah, WARN won't give us anything, and please try to get us a fat log. Post
it on pastebin or such.
Thx,
J-D
On Wed, Jul 17, 2013 at 11:03 AM, Anusauskas, Laimonas
lanusaus...@corp.untd.com wrote:
J-D,
I have log level org.apache=WARN and there is only the following in the logs
before GC
1GB is a pretty small heap and it could be that the default size for logs
to replicate is set too high. The default
for replication.source.size.capacity is 64MB. Can you set it much lower on
your master cluster (on each RS), like 2MB, and see if it makes a
difference?
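Lowering that shipment size is a per-region-server setting on the source cluster; a sketch of the hbase-site.xml fragment (the value shown is the suggested 2 MB, expressed in bytes):

```xml
<!-- Source-cluster region servers: cap each replication shipment at ~2 MB
     (default is 64 MB = 67108864) -->
<property>
  <name>replication.source.size.capacity</name>
  <value>2097152</value>
</property>
```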
The logs and the jstack seem
Yes... your master cluster must have a helluva backlog to replicate :)
Seems to make a good argument to lower the default setting. What do you
think?
J-D
On Wed, Jul 17, 2013 at 3:37 PM, Anusauskas, Laimonas
lanusaus...@corp.untd.com wrote:
Thanks, setting replication.source.size.capacity to
The local filesystem implementation doesn't support multiple drives
AFAIK, so your best bet is to RAID your disks if that's really
something you want to do.
Else, you have to use HDFS.
J-D
On Tue, Jul 16, 2013 at 8:55 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi,
In standalone
Are those incremented cells?
J-D
On Thu, Jul 11, 2013 at 10:23 AM, Patrick Schless
patrick.schl...@gmail.com wrote:
I have had replication running for about a week now, and have had a lot of
data flowing to our slave cluster over that time. Now, I'm running the
verifyrep MR job over a 1-hour
correctly?
On Thu, Jul 11, 2013 at 12:53 PM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
Are those incremented cells?
J-D
On Thu, Jul 11, 2013 at 10:23 AM, Patrick Schless
patrick.schl...@gmail.com wrote:
I have had replication running for about a week now, and have had a lot
those incremented cells out, so the response from verifyRep is
meaningful? :)
On Thu, Jul 11, 2013 at 3:44 PM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
Yeah increments won't work. I guess the warning isn't really visible
but one place you can see it is:
$ ./bin/hadoop jar ../hbase
Do you know if it's a data or meta block?
J-D
On Mon, Jul 8, 2013 at 4:28 PM, Viral Bajaria viral.baja...@gmail.com wrote:
I was able to reproduce the same regionserver asking for the same local
block over 300 times within the same 2 minute window by running one of my
heavy workloads.
Let
of size
67108864 startOffset 13006577 length 54102287 short circuit checksum true
On Mon, Jul 8, 2013 at 4:37 PM, Jean-Daniel Cryans jdcry...@apache.org wrote:
Do you know if it's a data or meta block?
Yeah that package documentation ought to be changed. Mind opening a jira?
Thx,
J-D
On Mon, Jul 1, 2013 at 1:51 PM, Patrick Schless
patrick.schl...@gmail.com wrote:
The first two tutorials for enabling replication that google gives me [1],
[2] take very different tones with regard to
On Thu, Jun 27, 2013 at 4:27 PM, Viral Bajaria viral.baja...@gmail.com wrote:
Hey JD,
Thanks for the clarification. I also came across a previous thread which
sort of talks about a similar problem.
On Fri, Jun 28, 2013 at 2:39 PM, Viral Bajaria viral.baja...@gmail.com wrote:
On Fri, Jun 28, 2013 at 9:31 AM, Jean-Daniel Cryans
jdcry...@apache.orgwrote:
On Thu, Jun 27, 2013 at 4:27 PM, Viral Bajaria viral.baja...@gmail.com
wrote:
It's not random, it picks the region with the most data
No, all your data eventually makes it into the log, just potentially
not as quickly :)
J-D
On Thu, Jun 27, 2013 at 2:06 PM, Viral Bajaria viral.baja...@gmail.com wrote:
Thanks Azuryy. Look forward to it.
Does DEFERRED_LOG_FLUSH impact the number of WAL files that will be created
? Tried
Did you find what the issue was? From your other thread it looks like
you got it working.
Thx,
J-D
On Mon, Jun 17, 2013 at 11:48 PM, Asaf Mesika asaf.mes...@gmail.com wrote:
Hi,
I have two cluster setup in a lab, each has 1 Master and 3 RS.
I'm inserting roughly 15GB into the master
.
On Sat, Jun 22, 2013 at 12:18 AM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
I think that the same way writing with more clients helped throughput,
writing with only 1 replication thread will hurt it. The clients in
both cases have to read something (a file from HDFS or the WAL) then
ship
TTL is enforced when compactions are running so there's no need to
rewrite the data. The alter is sufficient.
J-D
On Mon, Jun 24, 2013 at 4:15 PM, Kireet kir...@feedly.com wrote:
I need to remove the TTL setting from an existing HBase table and remove the
TTL from all existing rows. I think
I think that the same way writing with more clients helped throughput,
writing with only 1 replication thread will hurt it. The clients in
both cases have to read something (a file from HDFS or the WAL) then
ship it, meaning that you can utilize the cluster better since a
single client isn't
24GB is often cited as an upper limit, but YMMV.
It also depends if you need memory for MapReduce, if you are using it.
J-D
On Wed, Jun 19, 2013 at 3:17 PM, prakash kadel prakash.ka...@gmail.com wrote:
hi every one,
I am quite new to HBase and Java. I have a few questions.
1. on the web ui
None of your attachments made it across, this mailing list often (but
not always) strips them.
Are you able to jstack when the drops happen and the queue time is
high? This could be https://issues.apache.org/jira/browse/HBASE-5898
but it seems a long stretch without more info.
You could also
Replication doesn't need to know about compression at the RPC level so
it won't refer to it and as far as I can tell you need to set
compression only on the master cluster and the slave will figure it
out.
Looking at the code tho, I'm not sure it works the same way it used to
work before
You cannot use the local job tracker (that is, the one that gets
started if you don't have one running) with the TotalOrderPartitioner.
You'll need to fully install hadoop on that vmware node.
Google that error to find other relevant comments.
J-D
On Fri, May 31, 2013 at 1:19 PM, David Poisson
, May 24, 2013 at 8:21 AM, Yves S. Garret
yoursurrogate...@gmail.com wrote:
Ok, weird, it still seems to be looking towards Cox.
Here is my hbase-site.xml file:
http://bin.cakephp.org/view/628322266
On Thu, May 23, 2013 at 7:35 PM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
No, I meant
It says your event_data table isn't assigned anywhere on the cluster.
Was it disabled?
J-D
On Fri, May 24, 2013 at 6:06 AM, Vimal Jain vkj...@gmail.com wrote:
Hi Tariq/Jyothi,
Sorry to trouble you again.
I think this problem is solved but I am not able to figure out why in
client's
that will
fix it
On Fri, May 24, 2013 at 12:41 PM, Jean-Daniel Cryans jdcry...@apache.org
wrote:
Ah yeah the master advertised itself as:
Attempting connect to Master server at
ip72-215-225-9.at.at.cox.net,46122,1369408257140
So the region server cannot find it since that's the public
fwiw stop_replication is a kill switch, not a general way to start and
stop replicating, and start_replication may put you in an inconsistent
state:
hbase(main):001:0> help 'stop_replication'
Stops all the replication features. The state in which each
stream stops in is undetermined.
WARNING:
You are looking at it the wrong way. Per
http://hbase.apache.org/book.html#trouble.general, always walk up the
log to the first exception. In this case it's a session timeout.
Whatever happens next is most probably a side effect of that.
To help debug your issue, I would suggest reading this
On Thu, May 23, 2013 at 2:50 PM, Jay Vyas jayunit...@gmail.com wrote:
1) Should hbase-master be changed to localhost?
Maybe try changing /etc/hosts to match the actual non-loopback IP of your
machine... (i.e. just run ifconfig | grep 1 and see what IP comes out :))
and make sure your
yoursurrogate...@gmail.com wrote:
Here is my dump of the sole log file in the logs directory:
http://bin.cakephp.org/view/2116332048
On Thu, May 23, 2013 at 6:20 PM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
On Thu, May 23, 2013 at 2:50 PM, Jay Vyas jayunit...@gmail.com wrote:
1) Should
-projects, but I don't recall ever setting any networking
info to something other than localhost. What would cause this?
On Thu, May 23, 2013 at 6:26 PM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
That's your problem:
Caused by: java.net.BindException: Problem binding to
ip72-215-225-9
? I couldn't find
anything else in the docs. But having said that, both
are set to 0.0.0.0 by default.
Also, I checked out 127.0.0.1:60010 and 0.0.0.0:60010,
no web gui.
On Thu, May 23, 2013 at 7:19 PM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
It should only be a matter of network
I guess you are referring to
http://hbase.apache.org/book.html#client_dependencies ?
The thing is by default hbase.zookeeper.quorum is localhost, so your
client will look at your local machine to find HBase if you don't
configure anything.
J-D
On Tue, May 21, 2013 at 10:50 AM, Vimal Jain
The reference guide has a pretty good section about this:
http://hbase.apache.org/book.html#block.cache
What do you think is missing in order to fully answer your question?
Thx,
J-D
On Mon, May 20, 2013 at 5:07 AM, yun peng pengyunm...@gmail.com wrote:
Hi, All,
I am wondering what is exactly
I see:
2013-05-21 17:15:07,914 DEBUG
org.apache.hadoop.hbase.master.AssignmentManager: Handling
transition=RS_ZK_REGION_FAILED_OPEN,
server=hbase-regionserver1,60020,1369170595340, region=70236052/-ROOT-
Over and over. Look in the region server logs, you should see fat
stack traces on why it's
On Mon, May 20, 2013 at 3:48 PM, Varun Sharma va...@pinterest.com wrote:
Thanks JD for the response... I was just wondering if issues have ever been
seen with regards to moving over a large number of WAL(s) entirely from one
region server to another since that would double the replication
Yes, but the region server now has 2X the number of WAL(s) to replicate and
could suffer higher replication lag as a result...
In my experience this hasn't been an issue. Keep in mind that the RS
will only replicate what's in the queue when it was recovered and
nothing more. It means you have
It would be nice if you can isolate the use case that triggers the issue so
that we can reproduce.
You could also hit HBASE-6479 if you still have HFileV1 files around.
J-D
On Sun, May 5, 2013 at 10:49 PM, Viral Bajaria viral.baja...@gmail.com wrote:
On Sun, May 5, 2013 at 10:45 PM,
You can use the Export MR job provided with HBase, it lets you set a time
range: http://hbase.apache.org/book.html#export
J-D
On Mon, May 6, 2013 at 10:27 AM, Gaurav Pandit pandit.gau...@gmail.com wrote:
Hi Hbase users,
We have a use case where we need to know how data looked at a given time
goal is to generate data in some sort of delimited plain text file and hand
it over to the caller.
- Gaurav
On Mon, May 6, 2013 at 1:33 PM, Jean-Daniel Cryans jdcry...@apache.org
wrote:
You can use the Export MR job provided with HBase, it lets you set a time
range: http://hbase.apache.org
in sequence file format? My
goal is to generate data in some sort of delimited plain text file and
hand
it over to the caller.
- Gaurav
On Mon, May 6, 2013 at 1:33 PM, Jean-Daniel Cryans
jdcry...@apache.org
wrote:
You can use the Export MR job provided with HBase
Short answer is no, there's no knob or configuration to do that.
Longer answer is it depends. Are the reads and writes going to different
regions/tables? If so, disable the balancer and take it in charge
by segregating the offending regions on their own RS.
I also see you have the requirement to
Inline.
J-D
On Thu, Apr 25, 2013 at 1:09 PM, Aaron Zimmerman
azimmer...@sproutsocial.com wrote:
Hi,
If a region is being written to, and a scanner takes a lease out on the
region, what will happen to the writes? Is there a concept of Transaction
Isolation Levels?
There's MVCC, so
Can you run a RowCounter a bunch of times to see if it exhibits the same
issue? It would tell us if it's HBase or Pig that causes the issue.
http://hbase.apache.org/book.html#rowcounter
J-D
On Tue, Apr 9, 2013 at 3:58 AM, Eugene Morozov emoro...@griddynamics.com wrote:
Hello everyone.
I
Samir,
When you say "And at what point balancer will start redistribute regions to
second server", do you mean that when you look at the master's web UI you
see that one region server has 0 region? That would be a problem. Else,
that line you posted in your original message should be repeated for
On Tue, Mar 26, 2013 at 6:56 AM, Robert Hamilton
rhamil...@whalesharkmedia.com wrote:
I am evaluating HBase 0.94.5 on a test cluster that happens to be running
Hadoop 0.20.2-cdh3u5
I've seen the compatibility warnings but I'm just doing a first look at the
features and not even thinking
On Fri, Mar 22, 2013 at 12:12 AM, Nicolas Seyvet
nicolas.sey...@gmail.com wrote:
@J-D: Thanks, this sounds very likely.
One more thing, from the logs of one slave, I can see the following:
2013-03-21 22:27:15,041 INFO org.apache.hadoop.hbase.regionserver.Store:
Completed major compaction of 9
On Thu, Mar 21, 2013 at 6:46 AM, Brennon Church bren...@getjar.com wrote:
Hello all,
As I understand it, a common performance tweak is to disable major
compactions so that you don't end up with storms taking things out at
inconvenient times. I'm thinking that I should just write a quick
You are likely just hitting the threshold for a minor compaction and
by picking up all the files (I'm making a guess that it does) it gets
upgraded to a major compaction. The threshold is 3 by default.
So after loading 3 files you should get a compaction per region, then
every other 2 loading you
On Thu, Mar 21, 2013 at 12:06 PM, Nicolas Seyvet
nicolas.sey...@gmail.com wrote:
@Ram: You are entirely correct, I made the exact same mistake of mixing up
major and minor compaction. By looking closely, what I see is that at
around 200 HFiles per region it starts minor compacting files per
compacting will do no
good (unless you mostly do full table scans, in which case the caching
doesn't do anything for you).
You shouldn't see a big difference turning major compactions off if
you don't delete/update a lot.
Thanks.
--Brennon
On 3/21/13 10:49 AM, Jean-Daniel Cryans wrote:
On Thu