Congratulations.
From: James Taylor [mailto:jamestay...@apache.org]
Sent: Fri 5/23/2014 3:16 AM
To: HBase Dev; HBase User
Subject: [ANNOUNCE] Apache Phoenix has graduated as a top level project
I'm pleased to announce that Apache Phoenix has graduated from the Apache
Incubator to become a top-level Apache project.
Hi James,
Great to see the announcement. Best wishes for the future.
Thanks & Regards,
Job M Thomas| Platform Technology
From: rajesh babu Chintaguntla [mailto:chrajeshbab...@gmail.com]
Sent: Sat 12/14/2013 7:33 AM
To: user@hbase.apache.org
Subject: Re:
I have created table in hbase via phoenix.
Why is the HBase table not dropped after it is deleted from Phoenix, and vice
versa?
Why does deleting a table from Phoenix take considerably more time than deleting
it from HBase directly?
Thanks in advance,
Job M Thomas
including phoenix dev mailing list.
How many rows are there in your table?
Cheers
On Thu, Nov 28, 2013 at 8:25 PM, Job Thomas j...@suntecgroup.com wrote:
I have created table in hbase via phoenix.
Why is the HBase table not dropped after it is deleted from Phoenix, and vice
versa?
Why
your SQL of Phoenix, thanks.
On Fri, Nov 29, 2013 at 12:47 PM, Job Thomas j...@suntecgroup.com wrote:
Hi Ted,
My table contains 10 million rows and 1 column family, which contains 15
columns.
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Fri 11/29/2013
[mailto:yuzhih...@gmail.com]
Sent: Wed 11/27/2013 11:48 AM
To: user@hbase.apache.org
Subject: Re: HBase: Paralel Query
bq. I didn't enable blockcache
What if you enable the blockcache?
Cheers
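(As a sketch of what enabling the block cache could look like with the 0.94 Java client: the table and family names 'TESTBLOCK' and 'cf' are taken from later in this thread, and in 0.94 a table must be disabled before its schema is altered. This is an illustration, not a statement that this is what was run.)

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class EnableBlockCache {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        // Rebuild the column family descriptor with the block cache enabled.
        HColumnDescriptor cf = new HColumnDescriptor("cf");
        cf.setBlockCacheEnabled(true);

        // In 0.94 the table must be disabled before altering its schema.
        admin.disableTable("TESTBLOCK");
        admin.modifyColumn("TESTBLOCK", cf);
        admin.enableTable("TESTBLOCK");
        admin.close();
    }
}
```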
On Tue, Nov 26, 2013 at 8:45 PM, Job Thomas j...@suntecgroup.com wrote:
Hello lars,
Here are the answers.
Please give your suggestions.
With Thanks,
Best Regards,
Job M Thomas
From: Job Thomas [mailto:j...@suntecgroup.com]
Sent: Wed 11/27/2013 1:52 PM
To: user@hbase.apache.org
Subject: RE: HBase: Paralel Query
Hi Ted, All,
I have set
Hi All,
How can we configure HBase in order to perform multithreaded/parallel queries
faster?
These are some bits from my analysis:
Each thread runs 10 queries (random).
Threads   H2 (msec)   Phoenix (msec)
1         34          215
2         63          222
4
, etc)?
Is the data all in the blockcache?
If not, what does the disk IO look like?
Are the clients in multiple threads in one process, or in multiple processes?
Sorry for all the questions, but we need a bit more data.
-- Lars
From: Job Thomas j
Hi all ,
Please tell me how to set the HBase block size in hbase-site.xml.
I am using HBase 0.94.12 with Hadoop 1.0.4.
I have read that by setting the block size to a small value (say 8 KB), we can
improve HBase random reads.
Is that correct? How does that property affect performance?
Hi all,
Out of these properties, which one is used to set the HFile block size in HBase
0.94.12?
hbase.hregion.max.filesize=16384
hfile.min.blocksize.size=16384
hbase.mapreduce.hfileoutputformat.blocksize=16384
Best Regards,
Job M Thomas
(HColumnDescriptor).
Even if we configure hbase.mapreduce.hfileoutputformat.blocksize during
bulkload/import, it will be overridden with the actual block size from the
table schema.
hfile.min.blocksize.size is the old name for
hbase.mapreduce.hfileoutputformat.blocksize (see HBASE-3864)
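(Since the block size lives on the column family descriptor rather than in hbase-site.xml, creating a table with an 8 KB block size from the 0.94 Java client could look like the following sketch; the table name 'TESTBLOCK' and family 'cf' are reused from this thread.)

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateSmallBlockTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Block size is a per-column-family setting on HColumnDescriptor,
        // not a global hbase-site.xml property.
        HColumnDescriptor cf = new HColumnDescriptor("cf");
        cf.setBlocksize(8 * 1024); // 8 KB; smaller blocks favor random reads

        HTableDescriptor table = new HTableDescriptor("TESTBLOCK");
        table.addFamily(cf);

        HBaseAdmin admin = new HBaseAdmin(conf);
        admin.createTable(table);
        admin.close();
    }
}
```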
2013/11/25 Job Thomas j
Hi Jean Ted,
Thank You,
I have created the table manually in hbase with blocksize='8192' ( create
'TESTBLOCK', {NAME => 'cf', BLOCKSIZE => '8192'} ), and then created a table
with the same name via Phoenix.
Best Regards,
Job M Thomas
From: Job Thomas
Sent
to declare a global property in Phoenix, you have to
declare BLOCKSIZE
in each 'create' SQL,
such as:
CREATE TABLE IF NOT EXISTS STOCK_SYMBOL (id INTEGER PRIMARY KEY, name VARCHAR)
BLOOMFILTER='ROW', VERSIONS='1', BLOCKSIZE='8192'
On Tue, Nov 26, 2013 at 12:36 PM, Job Thomas j...@suntecgroup.com wrote:
Hi
are measuring your
network :)
I would suggest you test with a few million rows. When you scan, make sure you
use scanner caching (Scan.setCaching).
-- Lars
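(A minimal sketch of the scanner-caching advice above with the 0.94 client, assuming the 'TESTBLOCK' table from this thread; the caching value of 500 is an illustrative choice, not a recommendation from the original message.)

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class CachedScan {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "TESTBLOCK");

        Scan scan = new Scan();
        // Fetch 500 rows per RPC instead of the 0.94 default of 1,
        // which cuts round trips on large scans.
        scan.setCaching(500);

        ResultScanner scanner = table.getScanner(scan);
        try {
            for (Result r : scanner) {
                // process r ...
            }
        } finally {
            scanner.close();
            table.close();
        }
    }
}
```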
From: Job Thomas j...@suntecgroup.com
To: user@hbase.apache.org
Sent: Sunday, November 24, 2013 7:45 PM
I am using HBase 0.94.9
Best Regards,
Job M Thomas
From: Job Thomas [mailto:j...@suntecgroup.com]
Sent: Wed 9/4/2013 11:08 AM
To: user@hbase.apache.org
Subject: java.lang.NegativeArraySizeException: -1 in hbase
Hi All,
I am getting the following error while running a simple HBase MapReduce job to
read data from one table and write it back to another table.
13/09/04 10:24:03 INFO mapred.JobClient: map 0% reduce 0%
13/09/04 10:24:22 INFO mapred.JobClient: Task Id :
attempt_201309031846_0023_m_00_0,