Please take a look at our Apache incubator proposal, as I think that may
answer your questions: https://wiki.apache.org/incubator/PhoenixProposal
On Fri, Jan 3, 2014 at 11:47 PM, Li Li fancye...@gmail.com wrote:
So what's the relationship between Phoenix and HBase? Something like Hadoop and
Hive?
Thanks James! I have some Phoenix specific questions. I suppose the
Phoenix group is a better place to discuss those though.
Henning
On 03.01.2014 22:34, James Taylor wrote:
No worries, Henning. It's a little deceiving, because the coprocessors that
do the index maintenance are invoked on a
Hi,
I have been running a MapReduce job that joins two datasets of 1.3 and 4 GB
in size. Joining is done on the reduce side. Output is written to either HBase
or HDFS depending upon configuration. The problem I am having is that HBase
takes about 60-80 minutes to write the processed data, on the other
I'm using CDH 4.5:
Hadoop: 2.0.0-cdh4.5.0
HBase: 0.94.6-cdh4.5.0
Regards
On Sun, Jan 5, 2014 at 1:24 AM, Ted Yu yuzhih...@gmail.com wrote:
What version of HBase / hdfs are you running with ?
Cheers
On Sat, Jan 4, 2014 at 12:17 PM, Akhtar Muhammad Din
akhtar.m...@gmail.comwrote:
You can try MapReduce over snapshot files
(https://issues.apache.org/jira/browse/HBASE-8369),
but you will need to patch 0.94.
Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: vrodio...@carrieriq.com
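For reference, a rough sketch of the job setup that HBASE-8369 introduced (TableSnapshotInputFormat, as the API looks in the releases where it landed; on 0.94 you would need the backported patch). The snapshot name, restore directory, and the trivial identity mapper are all illustrative:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.mapreduce.Job;

    public class SnapshotScanJob {
      // Identity mapper: passes each row of the snapshot through unchanged.
      static class MyMapper extends TableMapper<ImmutableBytesWritable, Result> {
        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context ctx)
            throws java.io.IOException, InterruptedException {
          ctx.write(key, value);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = new Job(HBaseConfiguration.create(), "scan-over-snapshot");
        job.setJarByClass(SnapshotScanJob.class);
        // Reads the snapshot's HFiles from HDFS directly, bypassing the
        // region servers entirely.
        TableMapReduceUtil.initTableSnapshotMapperJob(
            "mysnapshot",          // create it first: snapshot 'mytable', 'mysnapshot'
            new Scan(),
            MyMapper.class,
            ImmutableBytesWritable.class,
            Result.class,
            job,
            true,                  // ship the HBase jars with the job
            new Path("/tmp/snapshot-restore"));  // scratch dir for restored file links
        job.waitForCompletion(true);
      }
    }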
bq. Output is written to either HBase
Looks like Akhtar wants to boost write performance to HBase.
MapReduce over snapshot files targets higher read throughput.
Cheers
On Sat, Jan 4, 2014 at 12:55 PM, Vladimir Rodionov
vrodio...@carrieriq.comwrote:
You can try MapReduce over snapshot files
Have you tried writing out an HFile and then bulk loading the data?
On Jan 4, 2014 4:01 PM, Ted Yu yuzhih...@gmail.com wrote:
bq. Output is written to either HBase
Looks like Akhtar wants to boost write performance to HBase.
MapReduce over snapshot files targets higher read throughput.
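A rough sketch of what Kevin's suggestion could look like with the 0.94-era API: configure the job to emit HFiles via HFileOutputFormat, then bulk load them afterwards. The table name, output path, and job wiring are placeholders:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class BulkLoadPrep {
      public static void main(String[] args) throws Exception {
        Job job = new Job(HBaseConfiguration.create(), "hfile-prepare");
        job.setJarByClass(BulkLoadPrep.class);
        // Set your input format and a mapper that emits
        // ImmutableBytesWritable -> Put here (omitted in this sketch).
        HTable table = new HTable(job.getConfiguration(), "mytable");
        // Installs a TotalOrderPartitioner matched to the table's region
        // boundaries plus the appropriate sort reducer, so each reducer
        // writes one region's worth of sorted HFiles.
        HFileOutputFormat.configureIncrementalLoad(job, table);
        FileOutputFormat.setOutputPath(job, new Path("/user/me/hfile-output"));
        job.waitForCompletion(true);
      }
    }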
Thanks guys for your precious time.
Vladimir, as Ted rightly said, I want to improve write performance currently
(of course I want to read data as fast as possible later on).
Kevin, my current understanding of bulk load is that you generate
StoreFiles and later load them through a command-line program. I
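For reference, the command-line program mentioned above is the completebulkload tool (LoadIncrementalHFiles); a typical invocation, with placeholder output path and table name:

    hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /user/me/hfile-output mytable

It moves the generated HFiles into the table's region directories, so it is much cheaper than pushing the same data through the normal write path.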
Could you give us a region server log to look at during a job?
On Jan 4, 2014 4:35 PM, Akhtar Muhammad Din akhtar.m...@gmail.com wrote:
Thanks guys for your precious time.
Vladimir, as Ted rightly said, I want to improve write performance currently
(of course I want to read data as fast as
There are 8 items under:
http://hbase.apache.org/book.html#perf.writing
I guess you have gone through all of them :-)
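Two of the client-side knobs from that section, as they look in the 0.94 API (table name, row data, and buffer size are illustrative; skipping the WAL is only safe for data you can reload):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BufferedWrites {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");
        table.setAutoFlush(false);                  // buffer Puts client-side
        table.setWriteBufferSize(8 * 1024 * 1024);  // flush in ~8 MB batches

        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        put.setWriteToWAL(false);  // faster, but rows are lost if a server crashes
        table.put(put);            // goes to the client buffer, not yet an RPC

        table.flushCommits();      // push any buffered Puts to the servers
        table.close();
      }
    }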
On Sat, Jan 4, 2014 at 1:34 PM, Akhtar Muhammad Din
akhtar.m...@gmail.comwrote:
Thanks guys for your precious time.
Vladimir, as Ted rightly said, I want to improve write
I think in this case, writing data to HDFS or to HFiles directly (for subsequent
bulk loading)
is the best option. HBase will never compete with raw HDFS on write speed.
Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: vrodio...@carrieriq.com
Hi all,
I am new to HBase and have run into a client connection problem. I
downloaded the latest stable version (0.94.15) and started the server
successfully. I can use ./bin/hbase shell to connect to the server
locally, but I can't connect to the server using a remote Java client.
My setup
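In case it helps: a common cause is that the remote client can't reach ZooKeeper, or the quorum advertises a hostname (or 127.0.0.1) that doesn't resolve from the client machine. A minimal remote-client sketch against the 0.94 API, with placeholder hostname and table name:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RemoteClient {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Must match the server side and resolve from this machine.
        conf.set("hbase.zookeeper.quorum", "hbase-server.example.com");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        HTable table = new HTable(conf, "test");
        Result r = table.get(new Get(Bytes.toBytes("row1")));
        System.out.println(r);
        table.close();
      }
    }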