Hey Kevin,
(Moved this to the HBase user list as it is more appropriate there,
because of the libraries you are using per your question. BCC'd
mapreduce-user and CC'd you in case you aren't subscribed to the HBase
user list.)
TableOutputFormat ignores keys, so it is safe to pass a null
object. This
All
We have our application server running on Java 7, and HBase started and running
on Java 6, using the StumbleUpon API. When we try to connect
from our code compiled on Java 7, it will not talk to HBase. Are there any
known issues?
Try to implement something like this
Class RegexStringComparator
On Tue, Jun 19, 2012 at 5:06 AM, Amitanand Aiyer amitanan...@fb.com wrote:
You could set up a scan with the criteria you want (start row, end row,
KeyOnlyFilter, etc.) and do a delete for
the rows you get.
On 6/18/12 3:08
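The scan-then-delete approach above can be sketched with the 0.92-era client API; this is a minimal, hedged sketch against a live cluster, and the table name and row range are made up:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.KeyOnlyFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanAndDelete {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");      // hypothetical table name
    Scan scan = new Scan();
    scan.setStartRow(Bytes.toBytes("startRow"));      // your criteria here
    scan.setStopRow(Bytes.toBytes("stopRow"));
    scan.setFilter(new KeyOnlyFilter());              // we only need the keys
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result r : scanner) {
        table.delete(new Delete(r.getRow()));         // delete each matching row
      }
    } finally {
      scanner.close();
      table.close();
    }
  }
}
```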
Hi
I've run into some performance issues with my hadoop MapReduce Job.
Basically what I'm doing with it is:
- read data from HDFS file
- the output also goes to HDFS files (multiple ones in my scenario)
- in my mapper I process each line and enrich it with some data read
from HBase table (I do
One thing we observed with a similar setup was that if we added a reducer
and then used something like HRegionPartitioner to partition the data, our
GET performance improved dramatically. While you take a hit for adding the
reducer, it was worth it in our case. We never quite figured out why that
You can use the HBase RowFilter to do that.
Regards,
Mohammad Tariq
On Tue, Jun 19, 2012 at 1:13 PM, shashwat shriparv
dwivedishash...@gmail.com wrote:
Try to implement something like this
Class RegexStringComparator
On Tue, Jun 19, 2012 at 5:06 AM, Amitanand Aiyer amitanan...@fb.com
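RegexStringComparator is built on java.util.regex; the following is a minimal runnable sketch of the kind of pattern matching involved (exact match semantics may differ by HBase version, and the pattern and values here are made up):

```java
import java.util.regex.Pattern;

public class RegexMatchDemo {
    // RegexStringComparator is backed by java.util.regex: the filter keeps
    // rows whose stringified value matches the configured pattern.
    static boolean valueMatches(String regex, String value) {
        return Pattern.compile(regex).matcher(value).find();
    }

    public static void main(String[] args) {
        // e.g. keep rows whose value starts with "user_"
        System.out.println(valueMatches("^user_", "user_123"));  // true
        System.out.println(valueMatches("^user_", "admin_42"));  // false
    }
}
```

The same regex string would then be passed to a RegexStringComparator wrapped in a RowFilter or ValueFilter on the scan.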
Hi
Sorry, I was not able to access your chart of put performance. Is the
performance gradually coming down and then suddenly getting better again at
some point (i.e., spikes in the graph)?
It may be because of HBASE-3484.
You can check the memstore size that you have configured and see whether the
Sure, why not?
You can always open a connection to the counter table in your Mapper.setup()
method and then increment the counters within the Mapper.map() method.
Your update of the counter is an artifact and not the output of the
Mapper.map() method.
On Jun 18, 2012, at 7:49 PM, Sid Kumar
Oleg,
Here is some code that we used for deleting all rows with user name
foo. It should be fairly portable to your situation:
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import
Hi thanks for answer,
I've attached image of the chart.
Write performance degrades over time and never goes back up.
As you can see on the chart there are spikes, but apart from that, write
performance generally goes down over time.
Thanks.
Giorgi
On Tue, Jun 19, 2012 at 3:21 PM, Anoop Sam John
Hi
What about the region size and how frequently flush is happening? Are you
using default configurations only?
Regards
Ram
From: Giorgi Jvaridze [mailto:giorgi.jvari...@gmail.com]
Sent: Tuesday, June 19, 2012 7:26 PM
To: user@hbase.apache.org
Subject: Re: Decreasing write speed
And is your RegionServer getting hotspotted? I.e., are all your requests
targeted at one particular RegionServer, or even one particular region?
Regards
Ram
From: Giorgi Jvaridze [mailto:giorgi.jvari...@gmail.com]
Sent: Tuesday, June 19, 2012 7:26 PM
To: user@hbase.apache.org
Subject:
Hi
I have a vanilla CDH4 setup and haven't changed any config.
I don't think that regionserver is getting hotspotted, because I'm using
auto-incremented and then reversed ids so it should be evenly distributed
across the regionservers.
Regards,
Giorgi
On Tue, Jun 19, 2012 at 6:22 PM,
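The reversed auto-incremented id scheme described above can be sketched in a few lines of plain Java (a minimal illustration; the key format is made up):

```java
public class ReversedKeyDemo {
    // Reversing a monotonically increasing id spreads consecutive writes
    // across the keyspace instead of hammering the newest region.
    static String rowKey(long id) {
        return new StringBuilder(Long.toString(id)).reverse().toString();
    }

    public static void main(String[] args) {
        // Adjacent ids land far apart in sort order:
        System.out.println(rowKey(1200)); // "0021"
        System.out.println(rowKey(1201)); // "1021"
    }
}
```

The trade-off is that range scans over insertion order are no longer possible, which is fine for a pure write-heavy workload.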
Samar,
If you ran a local build on your HBase installation (via mvn or so),
please undo it. You can run {{mvn clean}} to clean up any bad build
files, and then restart your HMaster to see if this goes away.
On Tue, Jun 19, 2012 at 5:44 PM, samar kumar samar.opensou...@gmail.com wrote:
Hi All,
Hi,
Can you also share how exactly you invoke the import-tsv command?
On Tue, Jun 19, 2012 at 9:02 PM, AnandaVelMurugan Chandra Mohan
ananthu2...@gmail.com wrote:
Hi,
I am trying to use importtsv map-reduce job to load data into HBase.
I am creating TSV file after fetching data from MySQL
Thank you all for the answers. I'm trying to speed up my solution and use
map/reduce over HBase.
Here is the code:
I want to use Delete (a map function to delete the row) and I pass the same
tableName to TableMapReduceUtil.initTableMapperJob
and TableMapReduceUtil.initTableReducerJob.
Question: is it
This is a common but hard problem. I do not have a good answer.
The issue with doing a random read for each line you are processing is
that there's no way to batch them, so you're basically doing this:
- Open a socket to a region server
- Send the request over the network
- The region server
You're likely to get a better response if you post your error messages and
example code that reproduces the problem. :)
Best,
Dave
On Tue, Jun 19, 2012 at 12:27 AM, Ben Cuthbert bencuthb...@ymail.comwrote:
All
We have our application server running in Java 7 and hbase started and
running on
Maybe it's something else? What's the error?
Thx,
J-D
On Tue, Jun 19, 2012 at 12:27 AM, Ben Cuthbert bencuthb...@ymail.com wrote:
All
We have our application server running in Java 7 and hbase started and
running on Java 6 using the stumbleupon API. When we are trying to connect
from our
This question was answered here already:
http://mail-archives.apache.org/mod_mbox/hbase-user/201101.mbox/%3caanlktinnw2d7dmcyfu3ptv1hu_i3xqk_1pdsgd5nt...@mail.gmail.com%3E
Counters are not idempotent; this can be hard to manage.
J-D
On Mon, Jun 18, 2012 at 5:49 PM, Sid Kumar sqlsid...@gmail.com
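The non-idempotence problem can be shown with a toy sketch in plain Java (no HBase API; the names are made up): if a failed map task is retried, its increments are applied a second time, whereas writing the final aggregated value is safe to replay.

```java
import java.util.HashMap;
import java.util.Map;

public class CounterRetryDemo {
    // Simulates an increment against a counter table: applying the same
    // increment twice (e.g. on task retry) double-counts, while a put of
    // the final value is idempotent.
    static long increment(Map<String, Long> table, String row, long delta) {
        long v = table.getOrDefault(row, 0L) + delta;
        table.put(row, v);
        return v;
    }

    public static void main(String[] args) {
        Map<String, Long> table = new HashMap<>();
        increment(table, "ad1", 5);                  // first attempt
        increment(table, "ad1", 5);                  // retried attempt
        System.out.println(table.get("ad1"));        // 10, not the intended 5
    }
}
```

This is why aggregating in the MR job and emitting final values is the safer pattern.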
I have a small cluster with 10 nodes. 8 nodes are datanodes/regionservers, and
2 nodes are running HA namenodes and HMaster. The question I have is, what
would be the best way to configure Zookeeper in my cluster? Currently I have it
running on one of the HMaster nodes. Running an instance on
I was trying to keep costs down as much as possible. Adding an 11th node just
for Zookeeper seems a bit expensive.
On Jun 19, 2012, at 1:45 PM, Mikael Sitruk wrote:
You should not put a ZK server on the RS or data nodes, since you don't want
their work to interfere with ZK responsiveness.
You can
You don't need much for it: one or two small disks (preferably SSD) and not
much memory, that's all.
On the 11th server you can also put an additional HBase master, or whatever
other process that is not pushed too hard.
On Wed, Jun 20, 2012 at 12:08 AM, Bryan Keller brya...@gmail.com wrote:
I was trying to
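For reference, pointing HBase at whatever quorum you settle on is just configuration; a sketch of the relevant hbase-site.xml entries (hostnames and data dir are made up):

```xml
<!-- hbase-site.xml -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/var/zookeeper</value>
</property>
```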
Thanks for the info. It seems safer to do the aggregations in the MR code.
Can you guys think of any better alternative?
Sid
On Tue, Jun 19, 2012 at 9:55 AM, Jean-Daniel Cryans jdcry...@apache.orgwrote:
This question was answered here already:
As the thread JD pointed out suggests - the best approach if you
want to avoid aggregations later on is to aggregate in an MR job,
output to a file with ad id and the number of impressions found for
that ad. Run a separate client application, likely single threaded if
the number of ads is not
hi,
I am running the following command from hadoop bin folder.
./hadoop jar /usr/local/hbase-0.92.1-security/hbase-0.92.1-security.jar
importtsv -Dimporttsv.columns=HBASE_ROW_KEY,report:path,report:time
tempptmd hdfs://namenode:9000/user/hadoop/temp/cbm/XYZ.tsv
My TSV file has three columns.
Hi All,
I am still facing this issue in our fully distributed cluster, but it's
working fine in pseudo-distributed mode.
We have installed ZooKeeper on our master node, which contains the
HBase Master, Namenode and Job tracker.
4 slave nodes have the HBase RegionServer, Datanode and Task tracker.
Zookeeper
Hi,
I got this fixed by changing my delimiter to | (pipe) instead of tab. Now
it loads data into HBase. Thanks!!
On Wed, Jun 20, 2012 at 10:30 AM, AnandaVelMurugan Chandra Mohan
ananthu2...@gmail.com wrote:
hi,
I am running the following command from hadoop bin folder.
./hadoop jar
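One thing to watch with a pipe delimiter if you ever parse the lines yourself: String.split() takes a regular expression, so a literal | must be escaped (a small sketch; the sample line is made up). ImportTsv also accepts a custom single-byte separator via -Dimporttsv.separator in recent versions.

```java
import java.util.Arrays;

public class PipeSplitDemo {
    // '|' is a regex metacharacter (alternation), so it must be escaped
    // to split on the literal character.
    static String[] splitPipe(String line) {
        return line.split("\\|", -1);
    }

    public static void main(String[] args) {
        String[] cols = splitPipe("row1|/some/path|1340200000");
        System.out.println(Arrays.toString(cols)); // [row1, /some/path, 1340200000]
    }
}
```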
@Harsh yes it works. I set up my Eclipse and that did an auto build.
Thanks
On Tue, Jun 19, 2012 at 9:16 PM, Harsh J ha...@cloudera.com wrote:
Samar,
If you ran a local build on your HBase installation (via mvn or so),
please undo it. You can run {{mvn clean}} to clean up any bad build
files,