In addition to Talat's request for more information, you can check out the following properties:
phoenix.query.queueSize
phoenix.query.timeoutMs
http://phoenix.apache.org/tuning.html

We have set these in the hbase-site.xml on the client machine where SQuirreL was running, in the cases where the tables we were querying were pretty huge. A rough example of those entries is included at the end of this message, below the quoted thread.

Regards,
Shahab

On Tue, May 12, 2015 at 2:41 AM, Talat Uyarer <[email protected]> wrote:
> Hi Kiru,
>
> Can you give more information about your data? I cannot tell what the
> exact problem is. A few questions: What is your table schema? What is
> the data size? What is your heap size? What is the average row size?
> For general HBase tuning, you can read these sources:
>
> [1] http://www.ericsson.com/research-blog/data-knowledge/hbase-performance-tuners/
> [2] http://phoenix.apache.org/tuning.html
> [3] http://hbase.apache.org/book.html#performance
>
> HTH
> Talat
>
> 2015-05-12 3:47 GMT+03:00 Kiru Pakkirisamy <[email protected]>:
> > We are trying to benchmark/test Phoenix with large tables. A 'select *
> > from table1 limit 100000' hangs on a 1.4 billion row table (in
> > sqlline.py or SQuirreL). The same select of 1 million rows works on a
> > smaller table (300 million). Mainly we wanted to create a smaller
> > version of the 1.4 billion row table and ran into this issue. Any
> > ideas why this is happening? We had quite a few problems crossing the
> > 1 billion mark even when loading the table (using CsvBulkLoadTool).
> > We are also wondering whether our HBase is configured correctly.
> >
> > Any tips on HBase configuration for loading/running Phoenix are
> > highly appreciated as well. (We are on HBase 0.98.12 and Phoenix 4.3.1)
> >
> > Regards,
> > - kiru
>
> --
> Talat UYARER
> Websitesi: http://talat.uyarer.com
> Twitter: http://twitter.com/talatuyarer
> Linkedin: http://tr.linkedin.com/pub/talat-uyarer/10/142/304
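PS: here is a rough sketch of what those entries might look like in the client-side hbase-site.xml. The values below are illustrative assumptions only, not recommendations; pick values for your own workload based on the tuning page above.

  <!-- client-side hbase-site.xml; the values shown are illustrative assumptions -->
  <property>
    <name>phoenix.query.queueSize</name>
    <!-- max number of queued tasks on the client before new work is rejected -->
    <value>10000</value>
  </property>
  <property>
    <name>phoenix.query.timeoutMs</name>
    <!-- overall query timeout in milliseconds (here 10 minutes) -->
    <value>600000</value>
  </property>

After changing these you need to restart the client (e.g. SQuirreL) so the new connection picks them up.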
