recompile 0.94 with a new version of Protobuf
(2.5), other than that it should work just fine.
-- Lars
- Original Message -
From: Timo Schaepe i...@timoschaepe.de
To: user@hbase.apache.org
Cc:
Sent: Friday, January 31, 2014 4:56 AM
Subject: Upgrade
Hello,
our cluster is running Hadoop 1.2.1 and HBase 0.94.11. Now we want to upgrade the
cluster to the newest versions, in particular Hadoop 2.2.0 and HBase 0.96.1.1.
What is the best approach to do that?
Because 0.94.11 does not run well with Hadoop 2, my first idea was to upgrade
HBase to
Hey guys,
we are searching for a consistent backup strategy with the export tool. Is this
article still up-to-date, and can I use it?
http://hadoop-hbase.blogspot.com/2012/04/timestamp-consistent-backups-in-hbase.html
Thanks for answers.
cheers,
Timo
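The approach in that article comes down to exporting every table with the same fixed end timestamp, so all exports share one cutoff. A minimal sketch using the stock Export MapReduce job (table name and output path are placeholders, not from the thread):

```shell
# Pick one cutoff timestamp (ms since epoch) and reuse it for EVERY table,
# so all exports are consistent up to the same point in time.
END_TS=$(( $(date +%s) * 1000 ))

# Usage: Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
hbase org.apache.hadoop.hbase.mapreduce.Export \
    mytable "/backups/mytable-${END_TS}" 1 0 "${END_TS}"
```

Export scans the live table, so the usual caveats about writes arriving mid-scan apply; that is exactly the subtlety the linked article discusses, so it is worth re-reading it against your HBase version.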
vrodio...@carrieriq.com:
Offline snapshots?
Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: vrodio...@carrieriq.com
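If a short window of unavailability per table is acceptable, an offline snapshot is the simplest way to get a fully consistent image. A sketch in the HBase shell (table and snapshot names are placeholders):

```shell
hbase shell <<'EOF'
disable 'mytable'
snapshot 'mytable', 'mytable-backup-20131223'
enable 'mytable'
EOF
```

With the table disabled there are no in-flight writes or flushes, so the snapshot is trivially consistent across all regions.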
From: Timo Schaepe [t...@timoschaepe.de]
Sent: Monday, December 23, 2013 10:53
at time T0 and the snapshot of Region N at time T0
+ X seconds
Matteo
On Mon, Dec 23, 2013 at 7:07 PM, Timo Schaepe t...@timoschaepe.de wrote:
Sorry, I forgot to mention. Taking the cluster offline is not an option.
We need a consistent backup of an online cluster. Our plan B is to build
. Thus it has to do a flush. If it cannot complete the flush in time,
it might cancel the split as well. Did you check that? Are your
compactions working as intended?
Enis
On Wed, Dec 18, 2013 at 10:06 AM, Timo Schaepe t...@timoschaepe.de wrote:
@Ted Yu:
Yep, nevertheless thanks a lot!
On 18.12.2013 at 10:03, Ted Yu yuzhih...@gmail.com wrote:
Timo:
I went through namenode log and didn't find much clue.
Cheers
On Tue, Dec 17, 2013 at 9:37 PM, Timo Schaepe t...@timoschaepe.de wrote:
Hey Ted Yu,
I have been digging through the name
is not splitting automatically at 2 GB.
Thanks,
Timo
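For reference, the split threshold can be overridden per table and a stuck region forced to split from the HBase shell. A sketch (table name is a placeholder; 2147483648 bytes = 2 GB):

```shell
hbase shell <<'EOF'
# Per-table override of the split threshold, in bytes (2 GB here).
alter 'mytable', MAX_FILESIZE => '2147483648'

# Force a split of the whole table (a specific region name also works).
split 'mytable'
EOF
```

The cluster-wide default comes from hbase.hregion.max.filesize in hbase-site.xml; a per-table MAX_FILESIZE takes precedence over it.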
Thanks.
-- Lars
From: Timo Schaepe t...@timoschaepe.de
To: user@hbase.apache.org
Sent: Saturday, December 14, 2013 5:27 AM
Subject: Re: Problems with hbase.hregion.max.filesize
Sorry
=512M
-Xloggc:/home/hadoop/logs/gc-hbase.log $HBASE_GC_OPTS
Thanks,
Timo
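For context, flags like the -Xloggc option above usually live in hbase-env.sh. A sketch of a typical setup for a RegionServer with a large heap (the exact collector flags and sizes are assumptions for illustration, not the poster's actual values):

```shell
# hbase-env.sh (sketch): CMS collector plus GC logging for the RegionServer.
export HBASE_GC_OPTS="-XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -Xms80g -Xmx80g -XX:MaxNewSize=512m \
  -Xloggc:/home/hadoop/logs/gc-hbase.log $HBASE_GC_OPTS"
```

An 80 GB CMS heap can see long stop-the-world pauses, so the GC log is the first place to look when region operations such as splits time out.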
On 14.12.2013 at 15:45, Azuryy Yu azury...@gmail.com wrote:
With such a large Java heap, did you check the GC log? How did you configure the JVM
options?
On 2013-12-14 9:27 PM, Timo Schaepe t...@timoschaepe.de wrote:
Sorry
long to split the files'
At 13:54:20,194 and 13:55:10,533, I don't find much of a clue in the posted log.
If you have time, mind checking the namenode log for the 1-minute interval leading
up to 13:54:20,194 and 13:55:10,533, respectively?
Thanks
On Sat, Dec 14, 2013 at 5:21 AM, Timo Schaepe t
that.
What is your hardware configuration?
Have you run hbck to see if everything is correct?
JM
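Running hbck, as suggested, is a quick health check. A sketch:

```shell
# Read-only consistency check of region assignments and table metadata.
hbase hbck

# More detail on any inconsistencies it reports.
hbase hbck -details
```

hbck in the 0.94 line also offers -fix* options, but it is worth understanding an inconsistency before attempting an automated repair.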
2013/12/13 Timo Schaepe t...@timoschaepe.de
Hello again,
digging in the logs of the specific regionserver shows me that:
2013-12-12 13:54:20,194 INFO
Heapsize for Regionserver: 80 GB
bye,
Timo
On 14.12.2013 at 14:21, Timo Schaepe t...@timoschaepe.de wrote:
Hey,
@JM: Thanks for the hint with hbase.regionserver.fileSplitTimeout. At the
moment (the import is currently running) and after I split the specific
regions manually, we do
Hello,
during the loading of data in our cluster I noticed some strange behavior of
some regions, that I don't understand.
Scenario:
We convert data from a MySQL database to HBase. The data is inserted with a put
to the specific HBase table. The row key is a timestamp. I know the problem
…
Thanks,
Timo
On 13.12.2013 at 10:22, Timo Schaepe t...@timoschaepe.de wrote:
Hello,
during the loading of data in our cluster I noticed some strange behavior of
some regions, that I don't understand.
Scenario:
We convert data from a MySQL database to HBase. The data is inserted