https://issues.apache.org/jira/browse/ZEPPELIN-651
Commit:
https://github.com/apache/incubator-zeppelin/commit/1940388e3422b86a322fc82a0e7868ff25126804
Looking forward to feedback and suggestions for improvements.
Rajat Venkatesh
Engg. Lead
Qubole
Dear HBase experts,
I have a Hadoop cluster which has Hive, HBase installed along with other Hadoop
components. I am currently exploring ways to automate a data migration process
from Hive to HBase which involves new columns of data added ever so often. I
was successful in creating a HBase
/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-1/
J-D
On Mon, May 16, 2011 at 2:08 PM, Venkatesh vramanatha...@aol.com wrote:
Thanks J-D
Using hbase-0.20.6, 49 node cluster
The map reduce job involves a full table scan...(region size 4 gig)
The job runs great
, then the scanner would expire. That's orthogonal tho.
You need to figure what you're blocking on, add logging and try to
jstack your Child processes for example.
J-D
On Thu, May 12, 2011 at 7:21 PM, Venkatesh vramanatha...@aol.com wrote:
Hi
Using hbase-0.20.6
mapreduce job started failing
Hi
Using hbase-0.20.6
mapreduce job started failing in the map phase (using hbase table as input for
mapper)..(ran fine for a week or so starting with empty tables)..
task tracker log:
Task attempt_201105121141_0002_m_000452_0 failed to report status for 600
seconds. Killing
Region
...@gmail.com
To: user@hbase.apache.org
Sent: Wed, Apr 20, 2011 4:30 pm
Subject: Re: java.lang.IndexOutOfBoundsException
On Wed, Apr 20, 2011 at 10:04 AM, Venkatesh vramanatha...@aol.com wrote:
On 0.90.2, do you all think using HTablePool would help with performance
problem?
What performance
.
St.Ack
On Wed, Apr 20, 2011 at 10:41 AM, Venkatesh vramanatha...@aol.com wrote:
shell is no problem..ones/twos..i've tried mass puts from shell
we can't handle our production load (even 1/3 of it)
700 mill per day is full load..same load we handled with absolutely no issues
in 0.20.6
, Venkatesh vramanatha...@aol.com wrote:
Thanks St. Ack..
Sorry I had to roll back to 0.20.6..as our system is down way too long..
so..i don't have log rt now..i'll try to recreate in a different machine at a
later time..
yes..700 mil puts per day
cluster is 20 node (20 datanode+ region
yuzhih...@gmail.com wrote:
I have seen this before.
HTable isn't thread-safe.
Please describe your usage.
Thanks
On Wed, Apr 20, 2011 at 6:03 AM, Venkatesh vramanatha...@aol.com wrote:
Using hbase-0.90.2..(sigh..) Any tip? thanks
java.lang.IndexOutOfBoundsException
: java.lang.IndexOutOfBoundsException
When using HTablePool, try not to define maxSize yourself - use the default.
On Wed, Apr 20, 2011 at 10:04 AM, Venkatesh vramanatha...@aol.com wrote:
Yeah, you and J-D both hit it..
I knew it's bad..I was trying anything and everything to solve
specifying Integer.MAX_VALUE as maxSize along with config.
On Wed, Apr 20, 2011 at 10:17 AM, Venkatesh vramanatha...@aol.com wrote:
If I use default ..i can't share/pass my HBaseConfiguration object..at least
i don't see a constructor/setter..
that would go against previous suggestion
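The pattern being discussed — one process-wide HBaseConfiguration handed to an HTablePool, since HTable itself isn't thread-safe — might be sketched roughly like this. This is a hedged sketch against the 0.90-era client API; the table, family, and qualifier names are made up, and a modest maxSize is used rather than Integer.MAX_VALUE:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PooledPutExample {
    // One Configuration and one pool for the whole process,
    // e.g. created once in servlet.init().
    private static final Configuration CONF = HBaseConfiguration.create();
    private static final HTablePool POOL = new HTablePool(CONF, 10);

    public static void doPut(byte[] row, byte[] value) throws java.io.IOException {
        HTableInterface table = POOL.getTable("mytable"); // hypothetical table name
        try {
            Put p = new Put(row);
            p.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), value); // hypothetical family/qualifier
            table.put(p);
        } finally {
            POOL.putTable(table); // 0.90.x: return the handle to the pool
        }
    }
}
```

In 0.90.x the handle goes back via putTable(); later releases let you simply close() the table to return it.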
-
From: Stack st...@duboce.net
To: user@hbase.apache.org
Sent: Wed, Apr 20, 2011 1:30 pm
Subject: Re: hbase 0.90.2 - incredibly slow response
On Tue, Apr 19, 2011 at 11:58 AM, Venkatesh vramanatha...@aol.com wrote:
I was hoping that too..
I don't have scripts to generate # requests from
Just upgraded to 0.90.2 from 0.20.6..Doing a simple put to table ( 100 bytes
per put)..
Only code change was to retrofit the HTable API to work with 0.90.2
Initializing HBaseConfiguration in servlet.init()... reusing that config for
HTable constructor doing put
Performance is very slow
same lag?
St.Ack
On Tue, Apr 19, 2011 at 10:35 AM, Venkatesh vramanatha...@aol.com wrote:
Just upgraded to 0.90.2 from 0.20.6..Doing a simple put to table ( 100
bytes
per put)..
Only code change was to retrofit the HTable API to work with 0.90.2
Initializing
of your issues since you only get it occasionally.
J-D
On Tue, Apr 12, 2011 at 7:59 AM, Venkatesh vramanatha...@aol.com wrote:
I get this occasionally..(not all the time)..Upgrading from 0.20.6 to 0.90.2
Is this issue same as this JIRA
https://issues.apache.org/jira/browse/HBASE-3578
that buried table's config, or another way to kill
the
orphaned connections?
- Ruben
From: Venkatesh vramanatha...@aol.com
To: user@hbase.apache.org
Sent: Wed, April 13, 2011 10:20:50 AM
Subject: Re: hbase -0.90.x upgrade - zookeeper exception in mapreduce
while hbase is running).
J-D
On Wed, Apr 13, 2011 at 12:04 PM, Venkatesh vramanatha...@aol.com wrote:
Reuben:
Yes..I've the exact same issue now.. I'm also kicking off from another jvm
that runs forever..
I don't have an alternate solution..either modify hbase code (or) modify my
code
deleteAllConnections works well for my case..I can live with this but not with
connection leaks
thanks for the idea
Venkatesh
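The workaround agreed on here — explicitly tearing down HBase's cached ZooKeeper connections when a long-lived JVM is done with its HBase work — could look roughly like this against the 0.90-era API (a sketch, not a recommendation; the surrounding job code is hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnectionManager;

public class ConnectionCleanup {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // ... kick off the mapreduce job / do HBase work with conf ...

        // 0.90.x caches one connection per Configuration instance. In a JVM
        // that never exits, release them explicitly or the ZK sessions linger.
        HConnectionManager.deleteAllConnections(true); // true = stop proxies too
    }
}
```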
-Original Message-
From: Ruben Quintero rfq_...@yahoo.com
To: user@hbase.apache.org
Sent: Wed, Apr 13, 2011 4:25 pm
Subject: Re: hbase -0.90.x upgrade
I get this occasionally..(not all the time)..Upgrading from 0.20.6 to 0.90.2
Is this issue same as this JIRA
https://issues.apache.org/jira/browse/HBASE-3578
I'm using HBaseConfiguration.create() setting that in job
thx
v
2011-04-12 02:13:06,870 ERROR Timer-0
Thanks St.Ack
Yes..I see these when map-reduce job is complete..but not always..I'll ignore
thanks..Getting close to 0.90.1 upgrade
-Original Message-
From: Stack st...@duboce.net
To: user@hbase.apache.org
Cc: Venkatesh vramanatha...@aol.com
Sent: Thu, Apr 7, 2011 11:55 pm
I see a lot of these warnings..everything seems to be working otherwise..Is this
something that can be ignored?
2011-04-07 21:29:15,032 WARN Timer-0-SendThread(..:2181)
org.apache.zookeeper.ClientCnxn - Session 0x0 for server :2181, unexpected
error, closing socket connection and
Sorry about this..It was indeed an environment issue..my core-site.xml was
pointing to wrong hadoop
thanks for the tips
-Original Message-
From: Venkatesh vramanatha...@aol.com
To: user@hbase.apache.org
Sent: Fri, Apr 1, 2011 4:51 pm
Subject: Re: row_counter map reduce job
A big thank you from an hbase user (sorry for the spam..but it deserves thanks)
-Original Message-
From: Jean-Daniel Cryans jdcry...@apache.org
To: user@hbase.apache.org
Sent: Sat, Apr 2, 2011 3:51 pm
Subject: Re: HBase wiki updated
2 Internets for you Doug, that's awesome!
I'm able to run this job from the hadoop machine (where job task tracker
also runs)
/hadoop jar /home/maryama/hbase-0.90.1/hbase-0.90.1.jar rowcounter usertable
But, I'm not able to run the same job from
a) hbase client machine (full hbase hadoop installed)
b) hbase server machines
.
But you shouldn't have to do the latter at least. Compare where it
works to where it doesn't. Something is different.
St.Ack
On Fri, Apr 1, 2011 at 9:26 AM, Venkatesh vramanatha...@aol.com wrote:
Definitely yes..It's all referenced in the -classpath option of the jvm of
tasktracker/jobtracker
in 0.20.6..
What are the region parameters? I tried the encoded name; it didn't like it..I tried
a name of the form
tbl_name,st_key,,
That didn't work either..
thanks
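The usual fix for running the bundled rowcounter job from a client box is getting the HBase jars and conf onto the hadoop client's classpath before submitting. A hedged sketch — the install path comes from the command quoted earlier in the thread, and `bin/hbase classpath` may not exist in every release (if it doesn't, list the hbase jar, zookeeper jar, and conf dir by hand):

```shell
# Put HBase's jars and conf on the submitting client's classpath.
export HADOOP_CLASSPATH=$(/home/maryama/hbase-0.90.1/bin/hbase classpath)

# Then launch the bundled MR driver as before.
hadoop jar /home/maryama/hbase-0.90.1/hbase-0.90.1.jar rowcounter usertable
```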
-Original Message-
From: Stack st...@duboce.net
To: user@hbase.apache.org
Cc: Venkatesh vramanatha...@aol.com
Sent: Thu
is that? Overlapping regions? Can you try merging them with
merge tool? Else, study what's in hdfs. One may have nothing in it
(check sizes). It might just be reference files only. If so, let's go
from there and I'll describe how to merge.
St.Ack
On Tue, Mar 29, 2011 at 9:25 PM, Venkatesh
Thanks Lucas..I'll give it a try
-Original Message-
From: Lukas mr.bobu...@gmail.com
To: user@hbase.apache.org
Sent: Wed, Mar 30, 2011 4:19 am
Subject: Re: hole in META
Sorry for any inconvenience. This was in reply of
it not complete?
St.Ack
On Wed, Mar 30, 2011 at 4:13 AM, Venkatesh vramanatha...@aol.com wrote:
Yes..st.ack..overlapping.. one of them has no data..
there are too many of them about 800 or so..
there are some with holes too..
-Original Message-
From: Stack st
Hi
Using hbase-0.20.6..This has happened quite often..Is this a known issue in
0.20.6 that
we wouldn't see in 0.90.1 (or) see less of?
..Attempted to fix/avoid this in earlier times by truncating the table and running
add_table.rb before
What is the best way to fix this in 0.20.6? Now it's there in
AM, Venkatesh vramanatha...@aol.com wrote:
What is the best way to fix this in 0.20.6?
Move to 0.90.1 to avoid holes in .META. and to avoid losing data. Let
us know if we can help you with upgrade.
St.Ack
Hi,
If I export existing table using Export MR job, truncate the table, increase
region size, do a Import
will it make use of the new region size?
thanks
V
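The export/resize/import cycle being asked about might be driven like this with the MR jobs bundled in the hbase jar. A hedged sketch: table name, backup path, and jar location are placeholders, and the region-size change goes in hbase-site.xml (`hbase.hregion.max.filesize`) between the two steps:

```shell
# Dump the existing table to HDFS with the bundled Export job.
hadoop jar hbase-0.90.1.jar export mytable /backup/mytable

# ...disable + truncate the table, raise hbase.hregion.max.filesize,
# restart so the new region size takes effect, then reload:
hadoop jar hbase-0.90.1.jar import mytable /backup/mytable
```

As noted in the replies, Import alone won't pre-create regions; without a pre-split table the reload funnels through few regions until splits catch up.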
keys for you.
J-D
On Tue, Mar 29, 2011 at 2:33 PM, Venkatesh vramanatha...@aol.com wrote:
Thanks J-D
We have way too much data; it won't fit in 1 region. Is Import smart enough to
create
the required # of regions?
Could you please elaborate on pre-split table creation? Steps?
Reason I'm doing
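The pre-split creation J-D suggests might be sketched like this against the 0.90-era admin API. The table name, family, and split points are made up; real split points should come from the known key distribution of the exported data:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class PresplitExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        HTableDescriptor desc = new HTableDescriptor("mytable"); // hypothetical
        desc.addFamily(new HColumnDescriptor("cf"));             // hypothetical

        // Regions are created up front at these boundaries, so a subsequent
        // Import spreads its writes instead of hammering one region.
        byte[][] splits = new byte[][] {
            Bytes.toBytes("2"), Bytes.toBytes("4"),
            Bytes.toBytes("6"), Bytes.toBytes("8")
        };
        admin.createTable(desc, splits);
    }
}
```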
: Stack st...@duboce.net
To: user@hbase.apache.org
Sent: Tue, Mar 29, 2011 12:55 pm
Subject: Re: hole in META
On Tue, Mar 29, 2011 at 9:09 AM, Venkatesh vramanatha...@aol.com wrote:
I ran into a missing jar with the hadoop jar file when running a map reduce..which
i couldn't fix..That is the only
Does anyone know how to get around this? Trying to run a mapreduce job in a
cluster..The one change was hbase upgraded to 0.90.1 (from 0.20.6)..No code
change
java.io.FileNotFoundException: File
/data/servers/datastore/mapred/mapred/system/job_201103151601_0363/libjars/zookeeper-3.2.2.jar
does
ships with zookeeper-3.3.2, not with 3.2.2.
St.Ack
On Wed, Mar 16, 2011 at 8:05 AM, Venkatesh vramanatha...@aol.com wrote:
Does anyone know how to get around this? Trying to run a mapreduce job in a
cluster..The one change was hbase upgraded to 0.90.1 (from 0.20.6)..No code
change
:
The below is pretty basic error. Reference the jar that is actually
present on your cluster.
St.Ack
On Wed, Mar 16, 2011 at 3:50 PM, Venkatesh vramanatha...@aol.com wrote:
yeah..i was aware of that..I removed that and tried with hadoop-0.20.2-core.jar
as I wasn't ready to upgrade hadoop
Hi
When I upgraded to 0.90.1, mapreduce fails with exception..
system/job_201103151601_0121/libjars/hbase-0.90.1.jar does not exist.
I have the jar file in classpath (hadoop-env.sh)
any ideas?
thanks
they serve? Are you using lots
of families per table? Are you using LZO compression?
Thanks for helping us helping you :)
J-D
On Thu, Feb 10, 2011 at 11:32 AM, Venkatesh vramanatha...@aol.com wrote:
Thanks J-D..
Can't believe i missed that..I have had it before ..i did look
..but did help the real puts
-Original Message-
From: Ted Dunning tdunn...@maprtech.com
To: user@hbase.apache.org
Sent: Thu, Feb 10, 2011 3:45 pm
Subject: Re: region servers shutdown
Are your keys sequential or randomized?
On Thu, Feb 10, 2011 at 12:35 PM, Venkatesh
for anything
else than HBase, then you should only use 1 zk server and collocate it
with the master and the namenode, then use those 3 machines as region
servers to help spread the region load.
J-D
On Thu, Feb 10, 2011 at 12:35 PM, Venkatesh vramanatha...@aol.com wrote:
Thanks J-D..
I
Is there a script?
thanks
and removing
regions. They might inspire. Also look at the Merge.java class. See
how it edits .META. after merging two adjacent regions to create a new
region that spans the key space of the two old adjacent regions.
St.Ack
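The offline merge St.Ack refers to is driven by the Merge utility; HBase must be fully shut down first. A hedged sketch — the region names below are placeholders for the full region names as they appear in .META.:

```shell
# Offline merge of two adjacent regions (cluster must be stopped).
# Arguments: <table-name> <region-1-name> <region-2-name>
hbase org.apache.hadoop.hbase.util.Merge mytable \
    "mytable,aaa,1234567890" "mytable,bbb,1234567891"
```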
On Fri, Jan 28, 2011 at 12:29 PM, Venkatesh vramanatha...@aol.com
enabling DEBUG for
hbase and re-run the job to hopefully get more information.
J-D
On Wed, Jan 26, 2011 at 8:44 AM, Venkatesh vramanatha...@aol.com wrote:
Using 0.20.6..any solutions? Occurs during mapper phase..will increasing
retry count fix this?
thanks
here's the stack trace
Hi Sean:
Thx
Size of column family is very small 100 bytes
Investigating potential bottleneck spot..Our cluster is small (relatively
speaking)..10 node
Our hardware is high end (not commodity)
venkatesh
-Original Message-
From: Sean Bigdatafun sean.bigdata...@gmail.com
for a table size
of 10 mill records?
hbase.client.scanner.caching - If set in hbase-site.xml, Scan calls should
pick that up correct?
thanks
venkatesh
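On the caching question: hbase-site.xml's hbase.client.scanner.caching sets the default, and Scan.setCaching overrides it per scan. A hedged sketch of how a scan for the MR job might be configured (values are illustrative, not recommendations):

```java
import org.apache.hadoop.hbase.client.Scan;

public class ScanCachingExample {
    public static Scan buildScan() {
        Scan scan = new Scan();
        // Rows fetched per RPC round trip; with ~100-byte rows this can be
        // set fairly high. Overrides hbase.client.scanner.caching for this scan.
        scan.setCaching(500);
        // For a one-pass full-table MR scan, block caching usually just
        // churns the region server cache, so turn it off.
        scan.setCacheBlocks(false);
        return scan;
    }
}
```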
-Original Message-
From: Jean-Daniel Cryans jdcry...@apache.org
To: user@hbase.apache.org
Sent: Thu, Oct 14, 2010 2:39 pm
or are you doing
'new HTable(conf, tablename)' in your client code? Do the latter if
not -- share the configuration with HTable instances.
St.Ack
On Mon, Oct 11, 2010 at 10:47 PM, Venkatesh vramanatha...@aol.com wrote:
I would like to tune region server to increase throughput..On a 10
)
thanks
venkatesh
)
javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
-Original Message-
From: Venkatesh vramanatha...@aol.com
To: user@hbase.apache.org
Sent: Mon, Oct 11, 2010 2:35 pm
Subject: hbase.client.retries.number
HBase was seamless for first couple of weeks..now all kinds of issues in
production
without
code change)
thx
venkatesh
Some of the region servers suddenly dying..I've pasted relevant log lines..I
don't see any error in datanodes
Any ideas?
thanks
venkatsh
.
2010-10-10 12:55:36,664 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer
Exception: java.io.IOException: Unable to create new block. at
?
thanks
venkatesh
PS. setCaching(100) ..didn't make a dent in performance
Ahh ..ok..That makes sense
I've a 10 node cluster each with 36 gig..I've allocated 4gig for HBase Region
Servers..master.jsp
reports used heap is less than half on each region server.
I've close to 800 regions total..Guess it needs to kick off a jvm to see if
data exists
in all regions..
Also, do you think if I query using rowkey instead of hbase time stamp..it
would not kick off that many tasks..
since region server knows the exact locations?
thanks
venkatesh
-Original Message-
From: Venkatesh vramanatha...@aol.com
To: user@hbase.apache.org
Sent: Wed, Oct 6
Thanks J-D
I'll hookup Ganglia (wanting but kept pushing back..) get back
V
-Original Message-
From: Jean-Daniel Cryans jdcry...@apache.org
To: user@hbase.apache.org
Sent: Wed, Oct 6, 2010 12:22 pm
Subject: Re: HBase map reduce job timing
Also, do you think if I query
that would be?...
..map phase takes about couple of minutes..
..reduce phase takes the rest..
..i'll try increasing # of reduce tasks..Open to other other suggestion for
tunables..
thanks for your input
venkatesh
can do is pointing to the existing documentation
http://wiki.apache.org/hadoop/PerformanceTuning
J-D
On Tue, Oct 5, 2010 at 7:12 PM, Venkatesh vramanatha...@aol.com wrote:
I've a mapreduce job that is taking too long..over an hour..Trying to see
what I can tune
to bring it down..One thing
@hbase.apache.org
Sent: Tue, Oct 5, 2010 11:14 pm
Subject: Re: HBase map reduce job timing
It'd be more useful if we knew where that data is coming from, and
where it's going. Are you scanning HBase and/or writing to it?
J-D
On Tue, Oct 5, 2010 at 8:05 PM, Venkatesh vramanatha...@aol.com wrote
..reports 10gig
That seems odd..Any ideas on what could be taking up space?..I don't have
permission to look at the entire hdfs..yet
Just thought i'd ask the group
thanks
venkatesh
Don't know if this helps..but here are couple of reasons when I had the issue
how i resolved it
- If zookeeper is not running (or do not have the quorum) in a cluster setup,
hbase does not go down..bring up zookeeper
- Make sure pid file is not under /tmp...sometimes files get cleaned out of
..for the mapreduce to run in a cluster?
thanks
venkatesh
I'm running map/reduce jobs from java app (table mapper reducer) in true
distributed
mode..I don't see anything in jobtracker page..Map/reduce job runs fine..Am I
missing some config?
thanks
venkatesh
effective
Is there a force shutdown option? (other than kill -9)..?
venkatesh
-Original Message-
From: Jean-Daniel Cryans jdcry...@apache.org
To: user@hbase.apache.org
Sent: Fri, Aug 27, 2010 12:10 am
Subject: Re: jobtracker.jsp
HBase needs to know about the job tracker, it could
I wrestled with that idea of time-bounded tables..Would it make it harder to
write code/run map reduce
on multiple tables? Also, how do you decide when to do the cutover (start of
a new day, week/month..)
and if you do, how do you process data that crosses those time boundaries efficiently..
Guess that
do I call close()..upon every operation (put/get/..) ? to avoid
memory leaks
thanks
venkatesh