Hey,
@JM: Thanks for the hint about hbase.regionserver.fileSplitTimeout. The
import is working again, and after I split the affected regions manually we
no longer see any runaway region growth.
hbase hbck reports that everything is fine:
0 inconsistencies detected.
Status: OK
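For anyone hitting the same timeout: the property lives in hbase-site.xml. The value below is illustrative, not the poster's actual setting, so check your version's default before copying it:

```xml
<!-- hbase-site.xml: raise the time a RegionServer may spend splitting
     store files before the split attempt is abandoned.
     600000 ms (10 min) is an illustrative value, not a recommendation. -->
<property>
  <name>hbase.regionserver.fileSplitTimeout</name>
  <value>600000</value>
</property>
```

Manual splits like the one described above can be issued from the HBase shell with `split 'tablename'`, or `split 'regionName', 'splitKey'` to split one specific region at a chosen key.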
Sorry, I forgot our hardware configuration…
1 NameNode/SecondaryNameNode/HBase Master
31 DataNodes/RegionServers
All of them with
2x XEON E5-2640 2.5 GHz
128 GB RAM
/dev/sda 90 GB
/dev/sdb 1.1 TB
/dev/sdc 1.1 TB
where sda is an SSD holding the system and sdb and sdc are the disks for HDFS/HBase.
Timo:
Other than two occurrences of 'Took too long to split the files'
at 13:54:20,194 and 13:55:10,533, I don't find much of a clue in the posted log.
If you have time, would you mind checking the NameNode log for the one-minute
interval leading up to 13:54:20,194 and 13:55:10,533, respectively?
Thanks
On Sat, Dec
Yep, I deliberately turned it off for my use case, since MTTR is not a big
concern for me.
I just wanted to discuss the default value, not my particular case.
If you consider MTTR the main objective, then I agree one hour is a decent
default value :)
Vladimir you're right, I forgot to
Hi,
Thanks for your reply. I am actually trying to write my own framework of
sorts. I need to do a lot of in-memory operations (via memcache) within a
region, so I need to read the data directly from the region on that
RegionServer.
Is there a way I can read the data in that region on the RS?
Did you observe anything interesting with such a large Java heap?
You said you have 3 GB for the memstore; I assume most of the rest is for
the block cache.
Any long GC pauses, or other strange behavior?
Thanks.
-- Lars
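The heap-sizing question above comes down to simple arithmetic over the two sizing fractions. A minimal sketch with illustrative numbers (a 30 GB heap, a 10% memstore share, and a 60% block cache share; these are assumptions, not the poster's actual settings):

```python
# Back-of-the-envelope partitioning of a RegionServer heap.
# The HBase property names in the docstring are real; the numbers
# passed in below are illustrative assumptions.

def partition_heap(heap_gb, memstore_pct, block_cache_pct):
    """Split a heap (in GB) into memstore, block cache, and remainder.

    memstore_pct    ~ hbase.regionserver.global.memstore.upperLimit
    block_cache_pct ~ hfile.block.cache.size
    """
    memstore = heap_gb * memstore_pct // 100
    block_cache = heap_gb * block_cache_pct // 100
    return memstore, block_cache, heap_gb - memstore - block_cache

memstore, cache, rest = partition_heap(30, 10, 60)
print(memstore, cache, rest)  # 3 18 9
```

The remainder is not wasted; it holds ordinary working objects, but a heap this large is exactly where long GC pauses tend to show up, which is why Lars asks about them.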
From: Timo Schaepe t...@timoschaepe.de
Congratulations James!
Now we need to get started on the checklist of incubation work items. :-)
See you over on d...@phoenix.incubator.apache.org!
On Fri, Dec 13, 2013 at 3:50 PM, James Taylor jtay...@salesforce.com wrote:
The Phoenix team is pleased to announce that Phoenix[1] has been
-- Forwarded message --
From: x ...@gmail.com
Date: Sat, Dec 14, 2013 at 10:28 AM
Subject: Re: Errors :Undefined table and DoNotRetryIOException while
querying from phoenix to hbase
To: yyy...@gmail.com
But I could drop the table in HBase using HBase shell and
Mathan,
We already answered your question on the Phoenix mailing list. If you
have a follow up question, please post it there. This is not an HBase
issue.
Thanks,
James
On Dec 14, 2013, at 2:10 PM, mathan kumar immathanku...@gmail.com wrote:
-- Forwarded message --
From: x
Congratulations, James.
On 2013-12-14 7:50 AM, James Taylor jtay...@salesforce.com wrote:
The Phoenix team is pleased to announce that Phoenix[1] has been accepted
as an Apache incubator project[2]. Over the next several weeks, we'll move
everything over to Apache and work toward our first
It's awesome, thanks to the team.
On 2013-12-13 11:25 PM, Vivek Mishra vivek.mis...@impetus.co.in wrote:
Hi All,
We are happy to announce the release of Kundera 2.9.
Kundera is a JPA 2.0 compliant, object-datastore mapping library for NoSQL
datastores. The idea behind Kundera is to make
With such a large Java heap, did you check the GC log? How did you
configure your JVM options?
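For context, RegionServer JVM options of that era are normally set in hbase-env.sh. A sketch of a CMS-based configuration with GC logging enabled; the heap size, occupancy threshold, and log path are illustrative assumptions, not a recommendation:

```sh
# hbase-env.sh -- illustrative RegionServer JVM options (CMS-era HBase).
# Sizes, thresholds, and paths are assumptions; tune for your hardware.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -Xms31g -Xmx31g \
  -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/hbase/gc-regionserver.log"
```

Enabling the GC log as above is the quickest way to answer the question posed here: long CMS pauses or promotion failures show up directly in that file.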
On 2013-12-14 9:27 PM, Timo Schaepe t...@timoschaepe.de wrote: