Hi Guys,
I am running YCSB 0.1.4 against hbase 0.98.5,
bin/ycsb load hbase -P workloads/workloada -p columnfamily=f1 -p
recordcount=1000 -p threadcount=4 -s | tee -a workloada.dat is stuck as
below:
10 sec: 0 operations;
20 sec: 0 operations;
30 sec: 0 operations;
40 sec: 0 operations;
50
Cycling bits:
http://search-hadoop.com/m/DHED4N0syk1
Andrew has his ycsb repo as well.
Cheers
On Oct 21, 2014, at 2:28 AM, Qiang Tian tian...@gmail.com wrote:
Hi Guys,
I am running YCSB 0.1.4 against hbase 0.98.5,
bin/ycsb load hbase -P workloads/workloada -p columnfamily=f1 -p
Do you want some SQL-on-Hadoop engine that can access HBase files directly?
I did a quick search and found
http://www.slideshare.net/Stratio/integrating-sparkandcassandra (P35), but I am
not sure I understand it correctly.
On Tue, Oct 21, 2014 at 12:15 PM, Nick Dimiduk ndimi...@gmail.com wrote:
Not currently.
Hi,
I have an HBase table which is populated from Pig using PigStorage.
While inserting, suppose for a rowkey I have a duplicate value.
Is there a way to prevent an update?
I want to maintain the version history for my values which are unique.
Regards,
Krishna
Hi Krishna,
HBase will store them in the same row, same cell, but you will have 2
versions. If you want to keep just one, set VERSIONS=1 on the table
side and only one will be stored. Is that what you mean?
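A minimal sketch in the HBase shell (table and family names here are hypothetical):

```
disable 'mytable'
alter 'mytable', {NAME => 'f1', VERSIONS => 1}
enable 'mytable'
```

With VERSIONS => 1, older cell values are dropped at compaction time and only the latest write is returned.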
JM
2014-10-21 8:29 GMT-04:00 Krishna Kalyan krishnakaly...@gmail.com:
Hi,
I have a
Thanks Jean,
If I put the same value in my table for a particular column for a rowkey, I
want HBase to reject this value and retain the old value with the old timestamp.
In other words, update only when the value changes.
Regards,
Krishna
On Tue, Oct 21, 2014 at 6:02 PM, Jean-Marc Spaggiari
You can do check-and-puts to validate whether the value is already there, but
it's slower...
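A rough sketch of the read-then-conditionally-put approach with the 0.94-era client API (table, row, and column names are hypothetical; note that a Get followed by a Put is not atomic, so concurrent writers can still race):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class PutIfChanged {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");
    byte[] row = Bytes.toBytes("rowkey1");
    byte[] fam = Bytes.toBytes("f1");
    byte[] qual = Bytes.toBytes("q1");
    byte[] newVal = Bytes.toBytes("some value");

    // Read the current cell value first.
    Get get = new Get(row);
    get.addColumn(fam, qual);
    Result result = table.get(get);
    byte[] current = result.getValue(fam, qual);

    // Write only if the value is absent or actually different.
    if (current == null || !Bytes.equals(current, newVal)) {
      Put put = new Put(row);
      put.add(fam, qual, newVal); // Put#add in the 0.94/0.98 API
      table.put(put);
    }
    table.close();
  }
}
```

HTable#checkAndPut does an atomic server-side compare-then-put, but it applies the Put only when the current value equals the expected one, which is the opposite of what is wanted here.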
2014-10-21 8:50 GMT-04:00 Krishna Kalyan krishnakaly...@gmail.com:
Thanks Jean,
If i put the same value in my table for a particular column for a rowkey i
want HBase reject this value and retain old value with
You can achieve what you want using versions and some hackery with timestamps
Sent from my T-Mobile 4G LTE Device
Original message
From: Jean-Marc Spaggiari jean-m...@spaggiari.org
Date:10/21/2014 9:02 AM (GMT-05:00)
To: user user@hbase.apache.org
Cc:
Subject: Re:
The link is about Cassandra, not hbase.
Cheers
On Tue, Oct 21, 2014 at 2:53 AM, Qiang Tian tian...@gmail.com wrote:
Do you want some SQL-on-Hadoop engine that can access HBase files directly?
I did a quick search and found
http://www.slideshare.net/Stratio/integrating-sparkandcassandra (P35), but
not
Hi all,
we are using HBase version 0.94.6-cdh4.3.1 and I have a suspicion that a
Delete written to hbase through HFileOutputFormat might be ignored (and
not delete any data) in the following scenario:
* a Delete object is used to delete the data at the client side
* call to deleteColumn
Hi, I tried to create a snapshot and it's not enabled
: java.io.IOException: java.lang.UnsupportedOperationException: To use
snapshots, You must add to the hbase-site.xml of the HBase Master:
'hbase.snapshot.enabled' property with value 'true'.
Is it disabled by default? If yes, then why? What is the
Snapshots are off by default in 0.94 because it is a new feature backported
from the 0.96 branch.
From 0.96 onward, snapshots are on by default.
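For 0.94, the property named in the exception goes into hbase-site.xml on the HBase Master (a master restart is needed for it to take effect):

```xml
<property>
  <name>hbase.snapshot.enabled</name>
  <value>true</value>
</property>
```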
Matteo
On Tue, Oct 21, 2014 at 4:34 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, I tried to create a snapshot and it's not enabled
:
Thanks all. I will get back if I take that direction.
-Nishanth
On Tue, Oct 21, 2014 at 8:15 AM, Ted Yu yuzhih...@gmail.com wrote:
The link is about Cassandra, not hbase.
Cheers
On Tue, Oct 21, 2014 at 2:53 AM, Qiang Tian tian...@gmail.com wrote:
Do you want some sql-on-hadoop could
bq. When using Delete#deleteColumns everything seems to be working fine
Please confirm that the issue you observe was with Delete#deleteColumn
(different from the method mentioned in subject).
Can you try with 0.94.24 (the latest 0.94 release)?
If you can capture this using a unit test, that
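For reference, the two methods on the 0.94 client's Delete differ in how many versions they remove (row and column names here are hypothetical):

```java
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.util.Bytes;

Delete d = new Delete(Bytes.toBytes("rowkey1"));
byte[] fam = Bytes.toBytes("f1");
byte[] qual = Bytes.toBytes("q1");

// deleteColumn: marks only the LATEST version of the cell
// (or one specific version if a timestamp is passed).
d.deleteColumn(fam, qual);

// deleteColumns: marks ALL versions of the cell.
d.deleteColumns(fam, qual);
```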
Hi,
I have connected a client machine with two network interfaces to an
internal, isolated HBase cluster and an external network. The HBase cluster
is on its own private LAN, away from the external network. After installing
and updating the Hadoop and HBase configuration files on the client
Thanks for your replies, Jean and Dhaval.
On Tue, Oct 21, 2014 at 6:57 PM, Dhaval Shah prince_mithi...@yahoo.co.in
wrote:
You can achieve what you want using versions and some hackery with
timestamps
Sent from my T-Mobile 4G LTE Device
Original message
From: Jean-Marc
Matt,
You should create your own proto file and compile that with the Google
Protocol Buffer compiler. Take a look at the SingleColumnValueFilter's
code:
https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java#L327
You
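For illustration, a custom filter's proto might look roughly like this (message and field names are hypothetical; the real filter messages live in hbase-protocol's Filter.proto):

```protobuf
// MyFilter.proto -- compile with protoc, as HBase's own filters are.
option java_package = "com.example.hbase.filters.generated";
option java_outer_classname = "MyFilterProtos";
option optimize_for = SPEED;

message MyFilter {
  required bytes column_family = 1;
  required bytes column_qualifier = 2;
  optional bytes compare_value = 3;
}
```

The filter class then implements toByteArray() and a static parseFrom(byte[]) using the generated MyFilterProtos.MyFilter class, mirroring what SingleColumnValueFilter does.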
Do you use ipv6 ?
If so, this is related:
HBASE-12115
Cheers
On Tue, Oct 21, 2014 at 10:26 AM, Kevin kevin.macksa...@gmail.com wrote:
Hi,
I have connected a client machine with two network interfaces to an
internal, isolated HBase cluster and an external network. The HBase cluster
is on
All machines use ipv4
On Tue, Oct 21, 2014 at 1:36 PM, Ted Yu yuzhih...@gmail.com wrote:
Do you use ipv6 ?
If so, this is related:
HBASE-12115
Cheers
On Tue, Oct 21, 2014 at 10:26 AM, Kevin kevin.macksa...@gmail.com wrote:
Hi,
I have connected a client machine with two network
BTW, the error looks like you didn't distribute your custom filter to your
region servers.
On Tue, Oct 21, 2014 at 1:34 PM, Kevin kevin.macksa...@gmail.com wrote:
Matt,
You should create your own proto file and compile that with the Google
Protocol Buffer compiler. Take a look at the
Did you restart HMaster? You can check the master's runtime conf at
master-webui/conf and that should show this config.
On Sun, Oct 19, 2014 at 6:00 PM, ch huang justlo...@gmail.com wrote:
Thanks for the reply, but I did not deploy the cluster using Cloudera Manager,
so that information is not
Thanks Kevin!
I was under the impression, probably mistakenly, that as of 0.96, placing
the filter on HDFS under the HBase lib directory is sufficient and the RS should
load the filter dynamically from HDFS. Is that not the case?
On Tuesday, October 21, 2014, Kevin kevin.macksa...@gmail.com wrote:
BTW,
Hi,
I've read that on modern hardware I should increase the value of the
io.file.buffer.size parameter of HDFS, up to 128 KB or so. [1] Does this
advice still hold true in the context of HBase? We've done a series of
performance benchmarks with different values of it, but couldn't
observe a
I haven't tried dynamic loading of filters on RS, but I know it does exist.
See https://issues.apache.org/jira/browse/HBASE-9301.
If you still can't get it to work, then I suggest distributing your filters
to the RS and restarting them. Let us know how everything works out.
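If you do try the dynamic loading route, the directory the region servers scan for jars is configurable; a sketch for hbase-site.xml (the HDFS path here is hypothetical, and as far as I recall hbase.dynamic.jars.dir defaults to ${hbase.rootdir}/lib):

```xml
<property>
  <name>hbase.dynamic.jars.dir</name>
  <value>hdfs://namenode:8020/hbase/lib</value>
</property>
```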
On Tue, Oct 21, 2014 at
Also, if you do end up using dynamic loading, you'll need a way to version
your filters because the RS will not reload a JAR if it changes.
On Tue, Oct 21, 2014 at 9:46 PM, Kevin kevin.macksa...@gmail.com wrote:
I haven't tried dynamic loading of filters on RS, but I know it does
exist. See
See this blog post:
http://www.flurry.com/2012/12/06/exploring-dynamic-loading-of-custom-filters-i#.VEcNtNR4rZg
Cheers
On Tue, Oct 21, 2014 at 6:48 PM, Kevin kevin.macksa...@gmail.com wrote:
Also, if you do end up using dynamic loading, you'll need a way to version
your filters because the RS
Thanks Ted.
Do you mean I should rebuild YCSB? Could you point me to Andrew's repo?
On Tue, Oct 21, 2014 at 5:37 PM, Ted Yu yuzhih...@gmail.com wrote:
Cycling bits:
http://search-hadoop.com/m/DHED4N0syk1
Andrew has his ycsb repo as well.
Cheers
On Oct 21, 2014, at 2:28 AM, Qiang Tian
Once you clone ycsb, you should build it with your choice of 0.98
Here's the thread where Andrew mentioned his ycsb repo:
http://search-hadoop.com/m/DHED4NaxYb1/andrew+purtell+ycsb+2014subj=Re+Performance+oddity+between+AWS+instance+sizes
Cheers
On Tue, Oct 21, 2014 at 7:15 PM, Qiang Tian
As an aside, if there are changes we'd like to see in YCSB, upstream has
started taking patches again with a bit of prodding.
On Tue, Oct 21, 2014 at 9:23 PM, Ted Yu yuzhih...@gmail.com wrote:
Once you clone ycsb, you should build it with your choice of 0.98
Here's thread where Andrew
Thanks Ted,
I also got a Mapkeeper error; commenting out the module works around it
(https://github.com/brianfrankcooper/YCSB/issues/152)
On Wed, Oct 22, 2014 at 10:23 AM, Ted Yu yuzhih...@gmail.com wrote:
Once you clone ycsb, you should build it with your choice of 0.98
Here's thread where Andrew