Mujtaba,

Thanks for your response. I am not including the cost of initializing the Connection as part of the read time.
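For illustration, here is a minimal sketch of how the one-time connection cost can be kept out of the measurement. The JDBC URL and table name below are placeholders, not my actual setup: a throwaway warm-up query absorbs the HConnection initialization before the timed scan.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

class PhoenixScanTiming {
    // Generic helper: time a piece of work in milliseconds.
    static long timedMillis(Runnable work) {
        long start = System.nanoTime();
        work.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        // Placeholder ZooKeeper quorum and table name -- adjust for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement()) {

            // Warm-up query: pays the one-time connection setup cost
            // so it is excluded from the measurement below.
            try (ResultSet warm = stmt.executeQuery("SELECT 1 FROM MY_TABLE LIMIT 1")) {
                while (warm.next()) { /* drain */ }
            }

            // Timed full-table scan, draining every row.
            long ms = timedMillis(() -> {
                try (ResultSet rs = stmt.executeQuery("SELECT * FROM MY_TABLE")) {
                    int rows = 0;
                    while (rs.next()) {
                        rows++;
                    }
                    System.out.println(rows + " rows");
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            System.out.println("scan took " + ms + " ms");
        }
    }
}
```

The timing I reported is for the second (measured) query only, after the warm-up has completed.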
Regards

On Thu, Jan 7, 2016 at 11:36 PM, Mujtaba Chohan <[email protected]> wrote:
> Just a pointer that if you are measuring time via a newly created JVM then
> you might also be measuring the one-time cost of initializing HConnection
> when Phoenix establishes a connection to the cluster for the first time.
>
> On Thu, Jan 7, 2016 at 9:28 AM, James Taylor <[email protected]> wrote:
>
>> Would be good to see a code snippet too. Your create table statement,
>> query, and how you're measuring time, plus the same on the native HBase
>> side.
>> Thanks,
>> James
>>
>> On Thu, Jan 7, 2016 at 9:20 AM, Thomas Decaux <[email protected]> wrote:
>>
>>> Can you update Phoenix to the latest version?
>>>
>>> 1s is really slow; could it be a network or client issue?
>>>
>>> Did you try Apache Drill to compare?
>>> On Jan 7, 2016 2:50 PM, "Sreeram" <[email protected]> wrote:
>>>
>>>> Hi,
>>>>
>>>> I am new to Phoenix and I am trying to perform a basic full table
>>>> select from a table.
>>>>
>>>> I am connecting using JDBC and I am seeing that a full table scan of
>>>> 1000 records (14 columns, approx. 150 bytes per record) always takes
>>>> more than a second. A scan of the equivalent native HBase table takes
>>>> close to 170 ms on average. The HBase table has a composite row key,
>>>> and the same columns are provided as part of the PRIMARY KEY
>>>> CONSTRAINT in the Phoenix table.
>>>>
>>>> I use a two-node cluster and have specified SALT_BUCKETS=2 as part of
>>>> table creation.
>>>>
>>>> I am using Phoenix version 4.3 and HBase version 1.0.0.
>>>>
>>>> I think I am missing something basic here - will appreciate any inputs
>>>> on how I can reduce the Phoenix read latency.
>>>>
>>>> Regards,
>>>> Sreeram
