Re: NPE when filtering on TIMESTAMP

2017-05-23 Thread James Taylor
Answered over on SO. On Tue, May 23, 2017 at 3:34 PM, Barrett Strausser wrote: > Crossposted to SO > https://stackoverflow.com/questions/44144925/apache-phoenix-current-time-gives-npe

NPE when filtering on TIMESTAMP

2017-05-23 Thread Barrett Strausser
Crossposted to SO https://stackoverflow.com/questions/44144925/apache-phoenix-current-time-gives-npe

Re: Phoenix hbase question

2017-05-23 Thread James Taylor
FWIW, we're exposing a way to do snapshot reads (PHOENIX-3744) in our 4.11 release, starting with our MR integration (on top of which the Spark integration is built). This is about as close as you can get to reading HDFS directly while still taking into account unflushed HBase data. Thanks,
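For context, the MR integration referenced here reads through Phoenix itself, which is why unflushed edits are still seen. The sketch below (in Scala) shows roughly what such a read job looks like; it is illustrative only: the table MYTABLE with columns ID and VAL, the writable and mapper classes, and the output path are placeholders, not anything from this thread.

    import java.sql.{PreparedStatement, ResultSet}
    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.io.{NullWritable, Text}
    import org.apache.hadoop.mapreduce.{Job, Mapper}
    import org.apache.hadoop.mapreduce.lib.db.DBWritable
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
    import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil

    // Each row returned by the Phoenix query is handed to the mapper as a DBWritable.
    // Placeholder schema: MYTABLE(ID BIGINT PRIMARY KEY, VAL VARCHAR).
    class MyTableWritable extends DBWritable {
      var id: Long = _
      var value: String = _
      override def readFields(rs: ResultSet): Unit = {
        id = rs.getLong("ID")
        value = rs.getString("VAL")
      }
      // Only needed for writes back through Phoenix; unused in a read-only job.
      override def write(ps: PreparedStatement): Unit = {
        ps.setLong(1, id)
        ps.setString(2, value)
      }
    }

    class MyTableMapper extends Mapper[NullWritable, MyTableWritable, Text, Text] {
      override def map(key: NullWritable, row: MyTableWritable,
                       ctx: Mapper[NullWritable, MyTableWritable, Text, Text]#Context): Unit = {
        ctx.write(new Text(row.id.toString), new Text(row.value))
      }
    }

    object PhoenixMrReadSketch {
      def main(args: Array[String]): Unit = {
        val job = Job.getInstance(HBaseConfiguration.create(), "phoenix-mr-read")
        // Configures PhoenixInputFormat: reads go through Phoenix, not raw HFiles.
        PhoenixMapReduceUtil.setInput(job, classOf[MyTableWritable], "MYTABLE",
          "SELECT ID, VAL FROM MYTABLE")
        job.setMapperClass(classOf[MyTableMapper])
        job.setNumReduceTasks(0) // map-only export to text files
        job.setOutputKeyClass(classOf[Text])
        job.setOutputValueClass(classOf[Text])
        FileOutputFormat.setOutputPath(job, new Path("/tmp/mytable-export"))
        System.exit(if (job.waitForCompletion(true)) 0 else 1)
      }
    }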

Re: Phoenix hbase question

2017-05-23 Thread Ash N
No, nothing in particular; I was just looking to see if there was a way. Using the Spark plugin seems to be the standard approach. Thank you so much for your input. On Tue, May 23, 2017 at 4:01 PM, Jonathan Leech wrote: > There is a Phoenix / MapReduce integration. If you bypass HBase you will
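As a concrete illustration of the Spark-plugin route, here is a minimal Scala sketch that loads a Phoenix table into a DataFrame using the phoenix-spark Data Source. It assumes a Phoenix 4.x client with the phoenix-spark module on the classpath; the table name MYTABLE and the ZooKeeper quorum zkhost:2181 are placeholders, not values from this thread.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object PhoenixSparkReadSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("phoenix-spark-read"))
        val sqlContext = new SQLContext(sc)

        // Load a Phoenix table as a DataFrame via the phoenix-spark Data Source.
        // Because the read goes through Phoenix/HBase, rows still sitting in the
        // memstore/WAL (not yet flushed to HDFS) are included.
        val df = sqlContext.read
          .format("org.apache.phoenix.spark")
          .option("table", "MYTABLE")     // placeholder table name
          .option("zkUrl", "zkhost:2181") // placeholder ZooKeeper quorum
          .load()

        df.printSchema()
        println(s"row count: ${df.count()}")
      }
    }

From there the DataFrame can be handed to Spark MLlib or written back out as plain HDFS files (for example with df.write.parquet(...)) for analytics tools that really do want to read HDFS directly.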

Re: Phoenix hbase question

2017-05-23 Thread Jonathan Leech
There is a Phoenix / MapReduce integration. If you bypass HBase you will need to take care not to miss edits that are only in memory and the WAL. If you bypass both Phoenix and HBase, you will have to write code that can interpret both... Possible, yes, but not a good use of your time. Is there some

Re: Phoenix hbase question

2017-05-23 Thread Ash N
Thanks, Jonathan, but I am looking to access the data directly from HDFS, not go through Phoenix/HBase for access. Is this possible? Best regards On May 23, 2017 3:35 PM, "Jonathan Leech" wrote: I think you would use Spark for that, via the Phoenix Spark plugin. > On May

Phoenix hbase question

2017-05-23 Thread Ash N
Hi All, this may be a silly question. We are storing data through Apache Phoenix. Is there anything special we have to do so that machine learning and other analytics workloads can access this data from the HDFS layer, considering HBase stores its data in HDFS? Thanks, -ash

Re: Async Index Creation fails due to permission issue

2017-05-23 Thread anil gupta
I think you need to run the tool as the "hbase" user. On Tue, May 23, 2017 at 5:43 AM, cmbendre wrote: > I created an ASYNC index and ran the IndexTool MapReduce job to populate it. > Here is the command I used: > hbase org.apache.phoenix.mapreduce.index.IndexTool

Async Index Creation fails due to permission issue

2017-05-23 Thread cmbendre
I created an ASYNC index and ran the IndexTool MapReduce job to populate it. Here is the command I used: hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table MYTABLE --index-table MYTABLE_GLOBAL_INDEX --output-path MYTABLE_GLOBAL_INDEX_HFILE, and I can see that index HFiles are created
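For completeness, the same job can also be driven programmatically through Hadoop's ToolRunner instead of the hbase launcher script; this sketch reuses the table, index, and output-path arguments from the command above and assumes IndexTool exposes the standard Hadoop Tool entry point. However it is launched, the process still needs to run as a user with the required HBase/HDFS permissions (the "hbase" user suggested above).

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.util.ToolRunner
    import org.apache.phoenix.mapreduce.index.IndexTool

    object RunIndexTool {
      def main(args: Array[String]): Unit = {
        // Same arguments as the CLI invocation above; run this under a user
        // that has write access to HBase and the HFile output path.
        val exitCode = ToolRunner.run(HBaseConfiguration.create(), new IndexTool(),
          Array("--data-table", "MYTABLE",
                "--index-table", "MYTABLE_GLOBAL_INDEX",
                "--output-path", "MYTABLE_GLOBAL_INDEX_HFILE"))
        System.exit(exitCode)
      }
    }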