Re: high cardinality aggregation query performance

2015-02-27 Thread James Taylor
Try this code snippet to see if we can force the stats to be sent over: conn.unwrap(PhoenixConnection.class).getQueryServices().clearCache(); PTable table = PhoenixRuntime.getTable(conn, PERF.BIG_OLAP_DOC); for (GuidePostsInfo info : table.getTableStats().getGuidePosts().values()) { for
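A plausible completion of the truncated snippet above, as a sketch only: it assumes a live Phoenix connection, phoenix-core on the classpath, and the 4.x-era API in which GuidePostsInfo exposes guidepost keys as byte arrays (the accessor name is assumed, not confirmed by the thread):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.phoenix.jdbc.PhoenixConnection;
import org.apache.phoenix.schema.PTable;
import org.apache.phoenix.schema.stats.GuidePostsInfo;
import org.apache.phoenix.util.PhoenixRuntime;

public class DumpGuidePosts {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
        // Clear the client-side metadata cache so stats are re-fetched from the server.
        conn.unwrap(PhoenixConnection.class).getQueryServices().clearCache();
        PTable table = PhoenixRuntime.getTable(conn, "PERF.BIG_OLAP_DOC");
        for (GuidePostsInfo info : table.getTableStats().getGuidePosts().values()) {
            for (byte[] gp : info.getGuidePosts()) { // accessor name assumed
                System.out.println(Bytes.toStringBinary(gp));
            }
        }
    }
}
```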

Re: high cardinality aggregation query performance

2015-02-27 Thread Gary Schulte
509 guideposts according to SYSTEM.STATS; getting the table via runtime seems to work. Guide posts here: http://goo.gl/jvcFec As an aside, I am having issues getting a connection to phoenix/hbase remotely (so I can debug from my IDE). I have all the ports open that I think would play a part -
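For the remote-connection issue above: a Phoenix thick client only needs to reach the ZooKeeper quorum (2181 by default) plus the HBase master and regionserver RPC ports (60000/60020 in the HBase 0.98-era releases CDH 5.3 ships), and the JDBC URL points at ZooKeeper, not at HBase directly. A minimal sketch, hostnames hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class RemoteConnect {
    public static void main(String[] args) throws Exception {
        // ZK quorum host(s), client port, and HBase znode parent.
        String url = "jdbc:phoenix:zk1.example.com,zk2.example.com:2181:/hbase";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println(conn.getMetaData().getDatabaseProductName());
        }
    }
}
```

Note the AWS wrinkle mentioned elsewhere in this digest: ZooKeeper hands the client whatever hostnames the regionservers registered with, so internal EC2 hostnames must resolve from the client side as well.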

Re: how to drop SYSTEM.SEQUENCE table to reduce the no. of salt buckets for this table

2015-02-27 Thread James Taylor
I'd recommend dropping the SYSTEM.SEQUENCE table from the HBase shell (instead of deleting the folder in HDFS). Everything else sounded fine, but make sure to bounce your cluster and restart your clients after doing this. Thanks, James On Thu, Feb 26, 2015 at 12:28 PM, Vamsi Krishna
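A sketch of the HBase shell session for the drop (dropping from the shell rather than deleting the HDFS folder keeps HBase's own metadata consistent):

```
hbase(main):001:0> disable 'SYSTEM.SEQUENCE'
hbase(main):002:0> drop 'SYSTEM.SEQUENCE'
```

Assumption worth stating: Phoenix recreates SYSTEM.SEQUENCE on the next client connection, so the desired salt-bucket count (the `phoenix.sequence.saltBuckets` client property) should be configured before clients reconnect, and the cluster bounced as described above.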

RE: installing Phoenix on a Cloudera 5.3.1 cluster

2015-02-27 Thread Brady, John
Anil, I would also like to know the name of the configuration in Cloudera Manager? Thanks. Also, are you aware of the issue on AWS where internal and external IPs get confused and ZooKeeper can’t connect to HBase properly? The solution posted below doesn’t work for Cloudera clusters as it

Re: installing Phoenix on a Cloudera 5.3.1 cluster

2015-02-27 Thread anil gupta
Hi Job, You need to put the server jars. IMO, it's not good practice to put non-CDH jars in the lib folders of HBase, because that jar might get wiped off when a CDH upgrade happens. You can add a folder to the classpath of HBase from Cloudera Manager. This way your custom jars will not get wiped off

Re: high cardinality aggregation query performance

2015-02-27 Thread Gary Schulte
James, When I simply added the skip scan hint, I got the same exception (even with device_type criteria removed) but the indexes in the exception changed. Interesting - I wouldn't have expected adding a skip scan hint would have altered the plan, since it was already doing a skip scan. 1:
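For reference, a Phoenix skip-scan hint is written as an optimizer hint comment immediately after SELECT. The table name comes from this thread; the columns and predicate below are hypothetical, for illustration only:

```sql
SELECT /*+ SKIP_SCAN */ campaign_id, COUNT(*)
FROM PERF.BIG_OLAP_DOC
WHERE organization_id = 'abc123'
GROUP BY campaign_id;
```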

Incorrect data retrieval: Phoenix table on HBase

2015-02-27 Thread Ganesh R
Hello, I am trying to create a Phoenix table with appropriate data types on an existing HBase table. HBase table:
hbase(main):017:0> get 'P_VIEW_TEST', '1'
COLUMN                CELL
 DATA:DT_VAL          timestamp=1425066171071, value=2015-02-27
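A common cause of incorrect retrieval in this situation is a type mismatch: the HBase cell holds the literal string "2015-02-27", while a Phoenix DATE column expects Phoenix's binary date encoding, so the bytes don't decode. A typical workaround, sketched here with an assumed VARCHAR row key (the family/qualifier names come from the `get` output above), is to map the column as VARCHAR and convert at query time:

```sql
CREATE VIEW "P_VIEW_TEST" (
    pk VARCHAR PRIMARY KEY,
    "DATA"."DT_VAL" VARCHAR
);

SELECT TO_DATE("DT_VAL", 'yyyy-MM-dd') FROM "P_VIEW_TEST";
```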

Re: high cardinality aggregation query performance

2015-02-27 Thread James Taylor
See inline. Thanks for your help on this one, Gary. It'd be good to get to the bottom of it so it doesn't bite you again. On Fri, Feb 27, 2015 at 11:13 AM, Gary Schulte gschu...@marinsoftware.com wrote: James, When I simply added the skip scan hint, I got the same exception (even with

Salted Table meta information

2015-02-27 Thread Dhaval Rami
Hi All, I am using salted tables, and I am trying to understand what happens if one of my salt buckets splits (i.e., the region splits). How will scans be performed on the two newly formed regions? Does Phoenix maintain meta info mapping regions to salt keys, or will it just send scans to all
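Background that bears on the question: Phoenix derives the salt byte from a hash of the row key modulo SALT_BUCKETS and prepends it to the key, so the scan fan-out is per bucket (one key range per salt byte), independent of how regions later split. A minimal sketch of the idea; the rolling hash below is illustrative and is not guaranteed to match Phoenix's actual SaltingUtil implementation:

```java
public class SaltSketch {
    // Illustrative only: derive a bucket index in [0, buckets) from row key bytes.
    static int saltByte(byte[] rowKey, int buckets) {
        int hash = 0;
        for (byte b : rowKey) {
            hash = 31 * hash + b;   // plain Java-style rolling hash
        }
        return Math.floorMod(hash, buckets); // always non-negative
    }

    public static void main(String[] args) {
        int bucket = saltByte("row-0001".getBytes(), 8);
        // The salt byte is always a valid bucket index, whatever the key.
        System.out.println(bucket);
    }
}
```

Because the bucket is a pure function of the key, the same row always lands in the same salt range; a region split inside a bucket just means that bucket's single key range is served by two regions.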

Re: installing Phoenix on a Cloudera 5.3.1 cluster

2015-02-27 Thread anil gupta
Off the top of my head, the following are the steps to add a custom folder to the HBase classpath: 1. Under the HBase Service Environment Safety Valve, add the folder: HBASE_CLASSPATH=/ur/folder/ 2. Restart the HBase daemons. Let me know if this doesn't work. Thanks, Anil Gupta On Fri, Feb 27, 2015 at 2:05
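Sketched out, the safety-valve entry is a plain environment-variable line (the folder path below is hypothetical; use wherever you keep the Phoenix server jar):

```
# Cloudera Manager -> HBase service -> Configuration ->
# "HBase Service Environment Advanced Configuration Snippet (Safety Valve)"
HBASE_CLASSPATH=/opt/phoenix/lib/*
```

The point of the approach, per the thread above, is that this folder lives outside CDH's parcel/package directories, so a CDH upgrade won't wipe the custom jars.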

Re: high cardinality aggregation query performance

2015-02-27 Thread Gary Schulte
I have the query timeout set too low, but I believe the stats update completed as I see related rows in the stats table. Both skip and in-list queries run fine - no exceptions. Still null for the guideposts though - is it likely this is due to the timeout in the stats update? -Gary On Fri, Feb
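If the stats update is indeed being cut off by the client query timeout, raising Phoenix's timeout on the client side before re-running UPDATE STATISTICS is the usual fix. A hedged config fragment (client-side hbase-site.xml; 10 minutes shown as an arbitrary example value):

```xml
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value>
</property>
```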

Having difficulty creating a VIEW in Phoenix

2015-02-27 Thread Sergey Belousov
Hi All, Hope you guys can help me a little bit with this one. I have a table in HBase with the following structure (simplified) key: k1 (4 bytes) | k2 (4 bytes) | ts | k3 (4 bytes), where ts is an epoch in seconds rounded to the day; cf: d; cq: 0..23 (hourly counters). I have no problem creating a horizontal VIEW so I can do query
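A sketch of the "horizontal" VIEW described above, with one Phoenix column per hourly qualifier; the view name, key handling, and counter type are assumptions, and only two of the 24 counters are shown:

```sql
CREATE VIEW "hourly" (
    pk VARBINARY PRIMARY KEY,   -- the k1|k2|ts|k3 composite key, left opaque here
    "d"."0" UNSIGNED_LONG,      -- counter for hour 00
    "d"."1" UNSIGNED_LONG       -- counter for hour 01
);
```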