Hi,
We are using HBase 0.94.1.
We have been facing a strange issue for a long time and have not found a proper solution.
Here is the issue: whenever one region server in the cluster goes down, the whole
HBase cluster won't respond for some time (around 5 to 10 minutes). We are unable
to get any clue about the issue.
Can some
Hi Ted -
You may find link [0] useful for following Nutch and HBase
integration. Link [1] is where you can download a production
release of Nutch 2.2.1 that has the HBase dependency.
Link [0] http://wiki.apache.org/nutch/Nutch2Tutorial
Link [1]
Hi Folks,
New to NoSQL; I'm designing a data model for a primary care system. I have a normalized
sample DB relationship model, e.g. for HBase 0.94.0.
Patient table:
1) Patient_id - PK
2) Added_BY
3) Gender
4) Usual_GP
Patient Name table: [one-to-many relationship: Patient (one) to Name (many)]
1)
I'm actually trying to do the same thing, and your data model is a great
starting place.
I would like to do a Google hangout to discuss this.
Anyone else in the hbase community willing to do a virtual meetup to go through
relational to multimap based storage and lookup in hbase?
On Oct 28,
Take a look at Phoenix (https://github.com/forcedotcom/phoenix) which
will allow you to issue SQL to directly create tables, insert data,
and run queries over HBase using the data model described below.
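As a sketch of what that looks like (the exact DDL is illustrative, not from the thread), Phoenix maps a SQL table onto an HBase table with the primary key as the row key, so the Patient table described above could be created and queried roughly like this:

```sql
-- Hypothetical mapping of the Patient table from the thread;
-- Phoenix uses the PRIMARY KEY columns as the HBase row key.
CREATE TABLE patient (
    patient_id BIGINT NOT NULL PRIMARY KEY,
    added_by   VARCHAR,
    gender     CHAR(1),
    usual_gp   VARCHAR
);

-- Phoenix uses UPSERT rather than INSERT.
UPSERT INTO patient VALUES (1, 'admin', 'F', 'Dr. Smith');

SELECT patient_id, usual_gp FROM patient WHERE gender = 'F';
```

The one-to-many Patient Name table would typically become a second table whose composite primary key leads with patient_id, so names for a patient are contiguous in the row key space.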
Thanks,
James
On Oct 28, 2013, at 8:47 AM, saiprabhur saiprab...@gmail.com wrote:
Hi Folks,
How much data do you have that you need a NoSQL db?
On Monday, October 28, 2013, saiprabhur wrote:
Hi Folks,
New to NoSQL; I'm designing a data model for a primary care system. I have a
normalized
sample DB relationship model, e.g. for HBase 0.94.0.
Patient table:
1) Patient_id - PK
2) Added_BY
If your query (scan) needs a region on the failed region server, the client
will fail and silently retry about 10 times. The sleep time increases as
each retry fails and can reach 10 min. On the server side, the master takes
3 min to realize the RS failed and thus issues a region move, which might take a
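The client-side part of that delay follows from the retry backoff table. A rough sketch of the arithmetic, assuming the 0.94-era backoff multipliers and the default hbase.client.pause of 1000 ms (both values are assumptions worth checking against your version's HConstants and hbase-site.xml):

```python
# Sketch of HBase 0.94-style client retry backoff.
# The multiplier table below is assumed from the 0.94 defaults.
RETRY_BACKOFF = [1, 1, 1, 2, 2, 4, 4, 8, 16, 32, 64]

def pause_ms(tries, base_pause_ms=1000):
    """Sleep before retry number `tries`, capped at the last multiplier."""
    idx = min(tries, len(RETRY_BACKOFF) - 1)
    return base_pause_ms * RETRY_BACKOFF[idx]

def total_sleep_ms(retries=10, base_pause_ms=1000):
    """Total time a client can spend sleeping across all its retries."""
    return sum(pause_ms(t, base_pause_ms) for t in range(retries))

print(total_sleep_ms(10))  # 71000 ms, i.e. ~71 s with these defaults
```

Raising hbase.client.retries.number or hbase.client.pause pushes the worst-case hang toward the multi-minute stalls described earlier in the thread.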
That seems like too many client threads. How many MB/sec did you get on that 1
RS?
On Friday, October 25, 2013, Vladimir Rodionov wrote:
You cannot saturate a region server with one client (unless perhaps you
use hbase-async) if all data is cached in RAM.
In our performance tests we have run 10
I couldn't quite get the Row Value Constructor feature.
Do you perhaps have a real-world use case to demonstrate it?
On Friday, October 25, 2013, James Taylor wrote:
The Phoenix team is pleased to announce the immediate availability of
Phoenix 2.1 [1].
More than 20 individuals contributed to the
Check through the HDFS UI that your cluster hasn't reached maximum disk
capacity.
On Thursday, October 24, 2013, Vimal Jain wrote:
Hi Ted/Jean,
Can you please help here ?
On Tue, Oct 22, 2013 at 10:29 PM, Vimal Jain vkj...@gmail.com
wrote:
Hi Ted,
Yes, I checked namenode and
Can't say I blame you, as it's a bit abstract. At Salesforce, we use it to
support query-more, where you want to be able to page through your data.
Without this feature, you have no way of establishing your prior position
to be able to get the next batch. This allows the client to jump right back
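A minimal sketch of that paging pattern (table and column names here are made up for illustration): the row value constructor compares the whole tuple of key columns against the last row of the previous page, so the next query starts exactly where the prior one left off:

```sql
-- Page 1: first 20 rows, ordered by the composite primary key.
SELECT host, ts, value
FROM metrics
ORDER BY host, ts
LIMIT 20;

-- Page 2: bind (:last_host, :last_ts) to the final row of page 1.
-- The tuple comparison resumes the scan after that exact row.
SELECT host, ts, value
FROM metrics
WHERE (host, ts) > (:last_host, :last_ts)
ORDER BY host, ts
LIMIT 20;
```

Compared with OFFSET-style paging, this avoids rescanning and discarding the earlier pages on each request.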
Hello all,
I have a couple of questions for HBase.
1. Is there any way to find out by how much time replication is lagging,
on either the peer cluster or the primary cluster?
2. Say replication is going on for some time for a table and then you
restore a snapshot on the Primary. Would
I need to copy data from Hadoop cluster A to cluster B. I know I can use
distCp tool to do that. Now the problem is: cluster A has version 1.2.1 and
cluster B has version 0.20.x. So distcp tool from either version does not
work on both versions. Is there a possible way to do that? So far I
Take a look at using the webHDFS protocol for distcp between clusters with
different versions:
On Mon, Oct 28, 2013 at 3:14 PM, S. Zhou myx...@yahoo.com wrote:
I need to copy data from Hadoop cluster A to cluster B. I know I can use
distCp tool to do that. Now the problem is: cluster A has
Sorry, the last email was accidentally sent before I could finish it.
Take a look at using the webHDFS protocol for distcp between clusters with
different versions:
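A hedged sketch of what that invocation can look like (host names, ports, and paths are placeholders; webhdfs must be enabled on the source namenode, and the command is typically run on the destination cluster so only the HTTP-based source protocol has to bridge the version gap):

```shell
# Run distcp on the destination cluster; read the source over webhdfs
# (or the read-only hftp) so the two RPC versions never have to match.
hadoop distcp \
    webhdfs://source-namenode:50070/path/to/src \
    hdfs://dest-namenode:8020/path/to/dest
```

If the older cluster predates webhdfs, hftp:// on the source side is the usual fallback, since it is read-only and HTTP-based as well.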
Hi Sandeep,
Do you have more information regarding your cluster? Number of nodes,
hadoop version, what is not responding, etc.?
Thanks,
JM
2013/10/28 Asaf Mesika asaf.mes...@gmail.com
If your query (scan) needs a region on the failed region server, the client
will fail and silently retry
1) You can use replication metrics such as ageOfLastShippedOp,
timeStampsOfLastShippedOp, sizeOfLogQueue, ageOfLastAppliedOp, and
timeStampsOfLastAppliedOp to figure out the lag.
2) To the best of my knowledge, replication won't work together with snapshots.
That is, the peers won't know there is
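One low-effort way to read those counters (the host name is a placeholder, and the port assumes the 0.94-era default region server info port) is the JSON metrics servlet each region server exposes:

```shell
# Each region server's info server (default port 60030 on 0.94)
# serves its metrics, including replication counters, as JSON at /jmx.
curl -s 'http://regionserver-host:60030/jmx' | grep -i ageOfLastShippedOp
```

The same counters are also published over JMX proper, so any JMX-based monitoring stack can poll them.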
w.r.t. virtual meetup, there is Strata / Hadoop World this week.
So some people wouldn't be able to make it.
On Mon, Oct 28, 2013 at 8:55 AM, Jay Vyas jayunit...@gmail.com wrote:
I'm actually trying to do the same thing, and your data model is a great
starting place.
I would like to do a
Please see also 'Replication Metrics' at the bottom of
http://hbase.apache.org/replication.html
On Mon, Oct 28, 2013 at 3:23 PM, hdev ml hde...@gmail.com wrote:
Thanks Demai. Need to find out how to access HBase metrics.
But thanks for your insights.
Harshad
On Mon, Oct 28, 2013 at 2:45
Hi there,
I have a cluster running vanilla Hadoop 2.1.1 and am trying to deploy
HBase 0.96 on top. At first the master was crapping out with this
when I was trying to start it:
2013-10-28 16:11:32,778 FATAL [master:r12s1:9102] master.HMaster:
Unhandled exception. Starting shutdown.
On Mon, Oct 28, 2013 at 4:22 PM, Ted Yu yuzhih...@gmail.com wrote:
Please deploy hadoop 2.2
So 2.1.1 is not compatible with 2.1.0? Looks like that class isn't in the jar
~/hbase/lib @r12s1.cs1 unzip -l hadoop-common-2.1.0-beta.jar | grep
org.apache.hadoop.util.PlatformName
~/hbase/lib
See this thread:
http://search-hadoop.com/m/0Pnu41YGCIi/hbase+tianying+0.96subj=RE+Hbase+0+96+and+Hadoop+2+2
On Mon, Oct 28, 2013 at 4:24 PM, tsuna tsuna...@gmail.com wrote:
On Mon, Oct 28, 2013 at 4:22 PM, Ted Yu yuzhih...@gmail.com wrote:
Please deploy hadoop 2.2
So 2.1.1 is not
On Mon, Oct 28, 2013 at 4:26 PM, Ted Yu yuzhih...@gmail.com wrote:
See this thread:
Yes, I've seen this thread (that's the one I referred to in my first
post). Why would it not work with 2.1.1?
--
Benoit tsuna Sigoure
OK, whatever, I copied the missing .class file from 2.1.0 into the 2.1.1 jar
~/hbase/lib @r12s1.cs1 unzip hadoop-common-2.1.0-beta.jar~
org/apache/hadoop/util/PlatformName.class
Archive: hadoop-common-2.1.0-beta.jar~
inflating: org/apache/hadoop/util/PlatformName.class
~/hbase/lib @r12s1.cs1
After downloading from
http://mvnrepository.com/artifact/org.apache.hadoop/hadoop-auth/2.1.1-beta,
I could see:
$ jar tvf hadoop-auth-2.1.1-beta.jar | grep PlatformName
1862 Tue Sep 17 05:47:36 PDT 2013
org/apache/hadoop/util/PlatformName.class
On Mon, Oct 28, 2013 at 4:28 PM, tsuna
On Mon, Oct 28, 2013 at 4:41 PM, Ted Yu yuzhih...@gmail.com wrote:
After downloading from
http://mvnrepository.com/artifact/org.apache.hadoop/hadoop-auth/2.1.1-beta,
I could see:
$ jar tvf hadoop-auth-2.1.1-beta.jar | grep PlatformName
1862 Tue Sep 17 05:47:36 PDT 2013
The files from
http://www.us.apache.org/dist/hadoop/common/hadoop-2.1.1-beta/ were dated
Sept 17th.
Arun announced the passing of 2.1.1 RC on the 24th.
FYI
On Mon, Oct 28, 2013 at 4:45 PM, tsuna tsuna...@gmail.com wrote:
On Mon, Oct 28, 2013 at 4:41 PM, Ted Yu yuzhih...@gmail.com wrote:
On Mon, Oct 28, 2013 at 4:51 PM, Ted Yu yuzhih...@gmail.com wrote:
The files from
http://www.us.apache.org/dist/hadoop/common/hadoop-2.1.1-beta/ were dated
Sept 17th.
Arun announced the passing of 2.1.1 RC on the 24th.
Interesting, so the Apache mirrors don't have a correct copy of the
2.1.1
Thanks Ted. Will take a look at it.
On Mon, Oct 28, 2013 at 3:33 PM, Ted Yu yuzhih...@gmail.com wrote:
Please see also 'Replication Metrics' at the bottom of
http://hbase.apache.org/replication.html
On Mon, Oct 28, 2013 at 3:23 PM, hdev ml hde...@gmail.com wrote:
Thanks Demai. Need to
@Jean-mar
We are using HBase 0.94.1 with Hadoop 1.0.2. We have 15 region servers, 5
ZooKeeper nodes, 1 master, 3 backup masters, and 18 datanodes. Our Java version is
jdk1.7.0_04.
The issue is that whenever a region server is down, scan/get does not work.
Thanks, Sandeep.
From: jean-m...@spaggiari.org
Google "HBase MTTR improvements in 0.96". 10-15 minutes of downtime is normal
on versions prior to 0.96 with
default HBase configuration settings.
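On pre-0.96 versions, the knobs that bound this downtime are mostly configuration. A hedged hbase-site.xml sketch; the property names are the standard ones, but the values below are illustrative, not recommendations:

```xml
<!-- Illustrative values only; tune against your own failure-detection
     vs. GC-pause tolerance trade-off. -->
<property>
  <!-- How long ZooKeeper waits before declaring a region server dead. -->
  <name>zookeeper.session.timeout</name>
  <value>30000</value>
</property>
<property>
  <!-- Number of client retries before an operation fails outright. -->
  <name>hbase.client.retries.number</name>
  <value>7</value>
</property>
<property>
  <!-- Base sleep between client retries, in milliseconds. -->
  <name>hbase.client.pause</name>
  <value>500</value>
</property>
```

Shortening the ZooKeeper session timeout speeds up dead-server detection but makes long GC pauses fatal to a healthy region server, which is why the defaults are conservative.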
Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: vrodio...@carrieriq.com
Hi all,
The first release candidate of AsyncHBase post-singularity is now
available for download. AsyncHBase remains true to its initial
promise: the API is still backward compatible, but under the hood it
continues to work with all production releases of HBase of the past
few years.
This