Thanks for those links JM - hadn't seen any of those before. I think it's
useful to have stuff like this, for new users to explore using HBase.
Re: Phoenix, I don't think it's fundamentally any more involved than any of
those, it's just a library. It exposes a JDBC driver interface, so GUI
Hello,
you can use Pentaho Data Integration that handles different types of sources.
-Original Message-
From: Aji Janis [mailto:aji1...@gmail.com]
Sent: Tuesday, 21 May 2013 23:02
To: user@hbase.apache.org
Subject: ETL tools
Hello users,
I am interested in hearing about
Hi,
I have configured Hadoop and HBase successfully (I think so). I tried
creating some tables and it works fine. When I issue the jps command to list
Java processes, I see the following:
hadoop@Rupesh:~/HadoopBase$ jps
16276 Main
13594 NameNode
14033 SecondaryNameNode
15643 HQuorumPeer
13812
Hi,
I have HBase configured in pseudo-distributed mode on Machine A.
I would like to connect to it through a Java program running on Machine B,
but I am unable to do so. What configurations are required in Java for this?
Please help.
--
Thanks and Regards,
Vimal Jain
Hi,
How are you trying to connect?
Are you setting zookeeper host and client port in HBaseConfiguration?
Regards,
Jyothi
-Original Message-
From: Vimal Jain [mailto:vkj...@gmail.com]
Sent: 22 May 2013 14:57
To: user@hbase.apache.org
Subject: Not able to connect to HBase remotely
Hi,
I
Hi,
Yes Jyothi.
Here is my Java code
Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "192.168.0.181");
config.set("hbase.zookeeper.property.clientPort", "2181");
tablePool = new HTablePool(config, Integer.MAX_VALUE);
Hi,
The code seems to be fine. Are you getting any exceptions on the client side?
Does your client machine's hosts file have an entry for 192.168.0.181?
Are both machines able to access each other?
Regards,
Jyothi
-Original Message-
From: Vimal Jain [mailto:vkj...@gmail.com]
Sent: 22 May 2013 15:18
To:
No exception on the client side.
The code just hangs on the call to htablepool.getTable(tableName).
In the ZooKeeper logs, I am able to see that it received a connection
request from the Java program, but it did not proceed ahead.
Any reason to add 192.168.0.181 in /etc/hosts? Because as per my
understanding
Is the connection between master and ZooKeeper OK? Have you verified that your
cluster is working fine (ZooKeeper-to-master connection)?
I had a connection issue even though the IP was set in configuration and the
connection was tried with the host. Adding a hosts entry resolved the issue. But no errors at
client in
Hi Jyothi,
I am running my hbase in pseudo distributed mode so zookeeper and master
are on same machine.
On Wed, May 22, 2013 at 4:28 PM, Jyothi Mandava
jyothi.mand...@huawei.com wrote:
Connection between master and zookeeper are ok? Have you verified if your
cluster is working fine
Hello Vimal,
Add the IP and hostname of your HBase machine into the hosts file
of your client machine and see if it helps.
Warm Regards,
Tariq
cloudfront.blogspot.com
On Wed, May 22, 2013 at 4:38 PM, Vimal Jain vkj...@gmail.com wrote:
Hi Jyothi,
I am running my hbase in pseudo
Hello Vimal,
HQuorumPeer = ZK
Main = Eclipse or something (I feel)
Warm Regards,
Tariq
cloudfront.blogspot.com
On Wed, May 22, 2013 at 2:01 PM, Vimal Jain vkj...@gmail.com wrote:
Hi,
I have configured Hadoop and Hbase successfully ( i think so ) . I tried
creating some tables and
Ok. Thanks Tariq.
On Wed, May 22, 2013 at 4:43 PM, Mohammad Tariq donta...@gmail.com wrote:
Hello Vimal,
HQuorumPeer = ZK
Main = Eclipse or something (I feel)
Warm Regards,
Tariq
cloudfront.blogspot.com
On Wed, May 22, 2013 at 2:01 PM, Vimal Jain vkj...@gmail.com wrote:
Hi Tariq,
I tried this but it's not helping.
Let me brief you about the problem.
I have posted this on
http://stackoverflow.com/questions/16689594/unable-to-connect-to-hbase-remotly
Please help.
On Wed, May 22, 2013 at 4:41 PM, Mohammad Tariq donta...@gmail.com wrote:
Hello Vimal,
Add
Is the cluster OK? Have you tried creating a table or any other operation using
the shell where the master is involved?
Do you see anything in the master logs? Please share your ZK log (and master log if there
is any useful info).
Regards,
Jyothi.
-Original Message-
From: Vimal Jain
Yes.
I tried creating some tables, and it works fine.
I have attached my master and zookeeper logs.
On Wed, May 22, 2013 at 5:11 PM, Jyothi Mandava
jyothi.mand...@huawei.com wrote:
Is the cluster ok? Have you tried creating table or any other operation
using Shell where master is involved?
Could not find the logs.
From: Vimal Jain [mailto:vkj...@gmail.com]
Sent: 22 May 2013 17:18
To: user@hbase.apache.org
Subject: Re: Not able to connect to HBase remotely
Yes.
I tried creating some tables, and it works fine.
I have attached my master and zookeeper logs.
On Wed, May 22, 2013 at
I had attached them. Maybe attachments are blocked.
The attachments are big. How do I send them?
On Wed, May 22, 2013 at 5:47 PM, Jyothi Mandava
jyothi.mand...@huawei.com wrote:
Could not find the logs.
From: Vimal Jain [mailto:vkj...@gmail.com]
Sent: 22 May 2013 17:18
To: user@hbase.apache.org
Hi,
I got the runtime exception below after I changed my catch block (from
catch(IOException) to catch(Exception)). Sorry for the trouble.
Exception :
org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find
region for event_data
where event_data is the table i have created in Hbase
Hi all,
I want to know how the RS eliminates unnecessary HLogs.
lastSeqNum stores (RegionName, latest KV seq id)
and
outputfiles stores (last seq id before the new HLog file, file path).
So, how does the RS guarantee that the KVs in the HLog to be cleared have
already been flushed from the memstore into an HFile?
I
Hi, just writing to say thanks folks (Jean-Marc & Lars)! I didn't know
about these tools and H-rider is so useful and easy to use.
Regards,
Shahab
On Tue, May 21, 2013 at 11:41 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Using Phoenix for that is like trying to kill a mosquito with
Thanks Tariq and Jyothi for your help.
I have solved this issue, but I am not sure which step did the trick.
I:
1) changed the configuration files (both Hadoop and HBase) and replaced
hostnames with IP addresses,
2) played with /etc/hosts a bit,
3) restarted Hadoop and HBase several times...
Thanks again.
*0.94.0*
The issue (I think) was related to a region split that didn't happen
cleanly. As a result there were references to daughter region present in
HDFS.
I believe the -fixSplitParents would have taken care of this but it is
not available in 0.94.0
I manually deleted reference files from
These tools seem just like what I want! Thank you.
I am trying to play with it now, but it looks like in our HBase
configuration HBASE_MANAGES_ZK is set to false in hbase-env and
hbase.zookeeper.property.clientPort is not set in hbase-site, and
therefore I can't use hbasemanager or hrider. I am new to
Thanks for the feedback Jay.
I helped someone who faced the same issue recently. Might deserve a fix...
(and so a JIRA with details.)
Also, I would recommend you migrate to a more recent 0.94.x version.
JM
2013/5/22 Jay Talreja jay.talr...@oracle.com
*0.94.0*
The issue (I think) was related
Still stuck on this.
I did something different, I tried version 0.92.2. This is the log from
that older version.
http://bin.cakephp.org/view/617939270
The other weird thing that I noticed with version 0.92.2 is this. I started
HBase and this is the output that I got:
$
Okay, I finally fixed the problem...
The reason it took long is because it's tricky to know whether the HMaster is failing
due to network errors vs. file system or other lower-level errors.
In my case, it was failing because of both, and I was just restarting the
master, which masked the error
Hey JM,
Can you expand on what you mean? Phoenix is a single jar, easily
deployed to any HBase cluster. It can map to existing HBase tables or
create new ones. It allows you to use SQL (a fairly popular language) to
query your data, and it surfaces its functionality as a JDBC driver so
that
Hi Aji,
With Phoenix, you pass through the client port in your connection
string, so this would not be an issue. If you're familiar with SQL
Developer, then Phoenix supports something similar with SQuirrel:
https://github.com/forcedotcom/phoenix#sql-client
Regards,
James
On 05/22/2013 07:42
FSHLog (in trunk) stores the earliest seqnums for each region in the current
memstore, and the earliest flushing seqnum (see
FSHLog::start/complete/abortCacheFlush). When old logs are cleaned up, logs
with seqnums that are above the earliest flushing/flushed seqnum for any
region are not deleted (see
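The deletion rule described above can be sketched as follows. This is a simplified illustration, not the actual FSHLog code; the class name, method name, and sequence numbers are invented for this example:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the WAL-cleanup rule: a log file may be deleted only when every
// edit it contains has already been flushed from the memstore to an HFile.
// We approximate that by comparing the log's highest sequence id against the
// smallest "earliest unflushed" sequence id across all regions.
public class WalCleanupSketch {

    /** Returns true if the log whose newest edit is maxSeqIdInLog can be deleted. */
    static boolean isLogDeletable(long maxSeqIdInLog,
                                  Map<String, Long> earliestUnflushedSeqIdPerRegion) {
        long oldestOutstanding = Long.MAX_VALUE;
        for (long seq : earliestUnflushedSeqIdPerRegion.values()) {
            oldestOutstanding = Math.min(oldestOutstanding, seq);
        }
        // Every edit in this log is older than the oldest unflushed edit,
        // so nothing in it is still needed for recovery.
        return maxSeqIdInLog < oldestOutstanding;
    }

    public static void main(String[] args) {
        Map<String, Long> unflushed = new HashMap<>();
        unflushed.put("regionA", 120L);
        unflushed.put("regionB", 95L);

        // Log containing edits up to seq 90: everything flushed -> deletable.
        System.out.println(isLogDeletable(90L, unflushed));  // true
        // Log containing edits up to seq 100: regionB still holds seq 95 -> keep.
        System.out.println(isLogDeletable(100L, unflushed)); // false
    }
}
```

This is why a log is retained as long as any region still has an unflushed edit with a seqnum at or below the log's newest seqnum.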
I have not been able to bring up a simple standalone HBase pseudo-distributed
mode cluster on a single Ubuntu 12.x machine. A couple of
co-workers who have been using HBase for years also looked at it and we
have not been able to resolve it.
Before I jump into the details, I'll just say: hey, is
OK, found the issue: it was the old Ubuntu localhost anomaly of 127.0.1.1
vs. 127.0.0.1. Changing /etc/hosts to use the latter fixed the issue.
Here is the quick start reference to it
http://hbase.apache.org/book/quickstart.html
Loopback IP
HBase expects the loopback IP address to be 127.0.0.1.
Clearly there is very high coupling to the /etc/hosts file in HBase. Is
this necessary?
On Wed, May 22, 2013 at 3:46 PM, Stephen Boesch java...@gmail.com wrote:
OK found the issue, it was the old ubuntu localhost anomaly of 127.0.1.1
vs 127.0.0.1 Changing /etc/hosts to use the latter
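For reference, a minimal /etc/hosts along the lines of the fix above might look like this (the hostname `my-hbase-host` is a placeholder, not from the thread):

```
127.0.0.1   localhost
127.0.0.1   my-hbase-host
# Ubuntu's default "127.0.1.1   my-hbase-host" line is what trips HBase up;
# per the quick start guide, HBase expects the loopback address to be 127.0.0.1.
```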
How does Zookeeper fit into this picture? For hbase.zookeeper.quorum,
I have it set to localhost. Would I need to include Zookeeper or start it
up in some way in order to get it to run?
On Wed, May 22, 2013 at 1:44 PM, Jay Vyas jayunit...@gmail.com wrote:
Yves, I'm going through the same
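Regarding how ZooKeeper fits in: with HBASE_MANAGES_ZK=true in hbase-env.sh (the default), HBase starts and stops its own ZooKeeper instance (the HQuorumPeer process seen in jps), so nothing extra needs to be started. A minimal hbase-site.xml for that single-machine setup might look like this (a sketch, not taken from the thread):

```xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```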
I found this thread on search-hadoop.com just now because I've been
wrestling with the same issue for a while and have as yet been unable to
solve it. However, I think I have an idea of the problem. My theory is
based on assumptions about what's going on in HBase and HDFS internally,
so please
Basically,
You had va-p-hbase-02 crash - that caused all the replication-related data
in ZooKeeper to be moved to va-p-hbase-01 and had it take over
replicating 02's logs. Now, each region server also maintains an in-memory
state of what's in ZK; it seems like when you start up 01, it's trying
Sandy:
Do you think the following JIRA would help with what you expect in this
regard ?
HBASE-8420 Port HBASE-6874 Implement prefetching for scanners from 0.89-fb
Cheers
On Wed, May 22, 2013 at 1:29 PM, Sandy Pratt prat...@adobe.com wrote:
I found this thread on search-hadoop.com just now
Also what version of HBase are you running ?
On Wed, May 22, 2013 at 1:38 PM, Varun Sharma va...@pinterest.com wrote:
Basically,
You had va-p-hbase-02 crash - that caused all the replication related data
in zookeeper to be moved to va-p-hbase-01 and have it take over for
replicating 02's
ls /hbase/replication/rs/va-p-hbase-01-c,60020,1369249873379
[1]
[zk: va-p-zookeeper-01-c:2181(CONNECTED) 2] ls
/hbase/replication/rs/va-p-hbase-01-c,60020,1369249873379/1
[]
I'm on hbase-0.94.2-cdh4.2.1
Thanks
On Wed, May 22, 2013 at 11:40 PM, Varun Sharma va...@pinterest.com wrote:
Also
What does this command show you ?
get /hbase/replication/rs/va-p-hbase-01-c,60020,1369249873379/1
Cheers
On Wed, May 22, 2013 at 1:46 PM, amit.mor.m...@gmail.com
amit.mor.m...@gmail.com wrote:
ls /hbase/replication/rs/va-p-hbase-01-c,60020,1369249873379
[1]
[zk:
[zk: va-p-zookeeper-01-c:2181(CONNECTED) 3] get
/hbase/replication/rs/va-p-hbase-01-c,60020,1369249873379/1
cZxid = 0x60281c1de
ctime = Wed May 22 15:11:17 EDT 2013
mZxid = 0x60281c1de
mtime = Wed May 22 15:11:17 EDT 2013
pZxid = 0x60281c1de
cversion = 0
dataVersion = 0
aclVersion = 0
Do an ls, not a get, here and give the output?
ls /hbase/replication/rs/va-p-hbase-01-c,60020,1369249873379/1
On Wed, May 22, 2013 at 1:53 PM, amit.mor.m...@gmail.com
amit.mor.m...@gmail.com wrote:
[zk: va-p-zookeeper-01-c:2181(CONNECTED) 3] get
empty return:
[zk: va-p-zookeeper-01-c:2181(CONNECTED) 10] ls
/hbase/replication/rs/va-p-hbase-01-c,60020,1369249873379/1
[]
On Thu, May 23, 2013 at 12:05 AM, Varun Sharma va...@pinterest.com wrote:
Do an ls not a get here and give the output ?
ls
How could we improve the doc Stephen? Was the problem that it was only in
the quick start section? Thanks.
St.Ack
On Wed, May 22, 2013 at 12:46 PM, Stephen Boesch java...@gmail.com wrote:
OK found the issue, it was the old ubuntu localhost anomaly of 127.0.1.1
vs 127.0.0.1 Changing
What did the logs show regarding who could not find whom? (I would like to
answer Jay Vyas but thought I'd ask here first to see if I could see what
tight coupling to /etc/hosts we are guilty of.)
Thanks,
St.Ack
On Wed, May 22, 2013 at 12:46 PM, Stephen Boesch java...@gmail.com wrote:
OK found the
I found this:
[zk: va-p-zookeeper-01-c:2181(CONNECTED) 17] ls
/hbase/replication/rs/va-p-hbase-02-d,60020,1369249862401
[1-va-p-hbase-02-e,60020,1369042377129-va-p-hbase-02-c,60020,1369042377731-va-p-hbase-02-d,60020,1369233252475,
2013-05-22 15:31:25,929 WARN
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient
ZooKeeper exception:
org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for
I see - so it looks okay - there's just a lot of deep nesting in there - if
you look into these nodes by doing ls, you should see a bunch of
WALs which still need to be replicated...
Varun
On Wed, May 22, 2013 at 2:16 PM, Varun Sharma va...@pinterest.com wrote:
2013-05-22 15:31:25,929
Can you do ls /hbase/rs and see what you get for 02-d - instead of looking
in /replication/, could you look in /hbase/replication/rs - I want to see
if the timestamps are matching or not ?
Varun
On Wed, May 22, 2013 at 2:17 PM, Varun Sharma va...@pinterest.com wrote:
I see - so looks okay -
Basically
ls /hbase/rs and what do you see for va-p-02-d ?
On Wed, May 22, 2013 at 2:19 PM, Varun Sharma va...@pinterest.com wrote:
Can you do ls /hbase/rs and see what you get for 02-d - instead of looking
in /replication/, could you look in /hbase/replication/rs - I want to see
if the
va-p-hbase-02-d,60020,1369249862401
On Thu, May 23, 2013 at 12:20 AM, Varun Sharma va...@pinterest.com wrote:
Basically
ls /hbase/rs and what do you see for va-p-02-d ?
On Wed, May 22, 2013 at 2:19 PM, Varun Sharma va...@pinterest.com wrote:
Can you do ls /hbase/rs and see what you
I believe there were cascading failures which created these deep nodes
containing still-to-be-replicated WALs - I suspect there is either some
parsing bug or something which is causing the replication source to not
work - also, which version are you using - does it have
Yes, indeed - hyphens are part of the host name (annoying legacy stuff in
my company). It's hbase-0.94.2-cdh4.2.1. I have no idea if 0.94.6 was
backported by Cloudera into their flavor of 0.94.2, but
the mysterious occurrence of the percent sign in zkcli (ls
Hi Stack,
This section of the docs is clear. I would suggest it should be included
in *all* docs, not just the standalone section. I had seen that section of
the docs before, and it was on my mind... I had made the change from
127.0.1.1 to 127.0.0.1 in the past and it had not worked - but I had
Yes, I have checked the source files of the 0.94.2-cdh4.2.1 jar, and the
HBASE-8207 issue is present in the source code, namely:
String[] parts = peerClusterZnode.split("-");
On Thu, May 23, 2013 at 12:42 AM, Amit Mor amit.mor.m...@gmail.com wrote:
yes, indeed - hyphens are part of the host name
I'd suggest patching the code with 8207; cdh4.2.1 doesn't have it.
With hyphens in the name, ReplicationSource gets confused and tries to set
data in a znode which doesn't exist.
Thanks,
Himanshu
On Wed, May 22, 2013 at 2:42 PM, Amit Mor amit.mor.m...@gmail.com wrote:
yes, indeed -
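The failure mode can be illustrated with a small standalone sketch. This is a simplification, not the actual ReplicationSource code, and the znode name below is an example modeled on the ones earlier in the thread:

```java
// A recovered replication queue znode embeds dead server names separated by
// '-', but hostnames may themselves contain hyphens, so a naive split("-")
// shatters the hostname into fragments.
public class ZnodeSplitSketch {
    public static void main(String[] args) {
        String peerClusterZnode = "1-va-p-hbase-02-e,60020,1369042377129";
        String[] parts = peerClusterZnode.split("-");
        // The hyphenated hostname is broken into pieces:
        System.out.println(parts.length); // 6, not the expected 2
        System.out.println(parts[1]);     // "va", not "va-p-hbase-02-e,60020,..."
    }
}
```

This is why the replication source ends up looking for (and writing to) znodes that do not exist when server names contain hyphens.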
It seems to be in the ballpark of what I was getting at, but I haven't
fully digested the code yet, so I can't say for sure.
Here's what I'm getting at. Looking at
o.a.h.h.client.ClientScanner.next() in the 94.2 source I have loaded, I
see there are three branches with respect to the cache:
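As a rough illustration of the caching idea being discussed (a simplified sketch, not the actual ClientScanner code; all names are invented for this example): next() serves rows from a local cache and only goes back to the server for another batch when the cache runs dry.

```java
import java.util.ArrayDeque;
import java.util.Iterator;
import java.util.List;
import java.util.Queue;

// Sketch of client-side scan caching: one "RPC" (here, one step of the
// iterator) fills the local cache with a batch of rows; subsequent next()
// calls are served from the cache without touching the server.
public class ScanCacheSketch {
    private final Iterator<List<String>> server; // each call yields one batch
    private final Queue<String> cache = new ArrayDeque<>();

    ScanCacheSketch(Iterator<List<String>> server) {
        this.server = server;
    }

    /** Returns the next row, or null when the scan is exhausted. */
    String next() {
        if (cache.isEmpty() && server.hasNext()) {
            cache.addAll(server.next()); // refill the cache from the server
        }
        return cache.poll(); // null when both cache and server are exhausted
    }

    public static void main(String[] args) {
        ScanCacheSketch scanner = new ScanCacheSketch(
            List.of(List.of("row1", "row2"), List.of("row3")).iterator());
        String r;
        while ((r = scanner.next()) != null) {
            System.out.println(r); // row1, row2, row3
        }
    }
}
```

The interesting behavior in the real client lives in how the empty-cache branch is handled, which is what the message above is digging into.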
Hi, Jean
What is the jira #?
Thanks
Tian-Ying
-Original Message-
From: Jean-Marc Spaggiari [mailto:jean-m...@spaggiari.org]
Sent: Wednesday, May 22, 2013 7:57 AM
To: user@hbase.apache.org
Subject: Re: Inconsistent Table HBCK
Thanks for the feedback Jay.
I helped someone who faced the
Sandy:
Looking at patch v6 of HBASE-8420, I think it is different from your
approach below for the case of cache.size() == 0.
Maybe log a JIRA for further discussion ?
On Wed, May 22, 2013 at 3:33 PM, Sandy Pratt prat...@adobe.com wrote:
It seems to be in the ballpark of what I was getting at,
I can work on the doc part. If you have a moment, I would suggest filing an
issue with snippets of the logs where HBase is lost.
Thanks Stephen,
St.Ack
On Wed, May 22, 2013 at 2:45 PM, Stephen Boesch java...@gmail.com wrote:
Hi Stack,
This section of the docs is clear. I would suggest it
It seems I can reproduce this - I did a few rolling restarts and got
screwed with NoNode exceptions - I am running 0.94.7 which has the fix but
my nodes don't contain hyphens - nodes are no longer coming back up...
Thanks
Varun
On Wed, May 22, 2013 at 3:02 PM, Himanshu Vashishtha
That sounds like a bug for sure. Could you create a jira with logs/znode
dump/steps to reproduce it?
Thanks,
himanshu
On Wed, May 22, 2013 at 5:01 PM, Varun Sharma va...@pinterest.com wrote:
It seems I can reproduce this - I did a few rolling restarts and got
screwed with NoNode exceptions -
Hi, Sergey.
The version of HBase in our environment is 0.94.3, and the FSHLog.java
comes from 0.95 or a later version.
It adds the following code in FSHLog::cleanOldLogs:
long oldestOutstandingSeqNum = Long.MAX_VALUE;
synchronized (oldestSeqNumsLock) {
Long oldestFlushing =