Re: Phoenix MR integration api only accepts Index of column for setting column value

2016-01-05 Thread James Taylor
With JDBC, both will already work: pstmt.setString("STOCK_NAME", stockName); pstmt.setString(1, stockName); On Tuesday, January 5, 2016, anil gupta wrote: > Hi, > > I am using Phoenix 4.4. I am trying to integrate my MapReduce job with > Phoenix following this doc:
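Binding by 1-based index is plain JDBC and works against a Phoenix UPSERT; a minimal sketch follows (the quorum address and the STOCKS table with its columns are placeholders, not from the thread — the by-name overload is as stated in the reply above):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class UpsertByIndex {
        public static void main(String[] args) throws Exception {
            // Quorum host below is a placeholder for illustration.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
                // STOCKS and its columns are hypothetical; parameters bind by 1-based index.
                try (PreparedStatement pstmt = conn.prepareStatement(
                        "UPSERT INTO STOCKS (STOCK_NAME, PRICE) VALUES (?, ?)")) {
                    pstmt.setString(1, "AAPL");
                    pstmt.setDouble(2, 100.25);
                    pstmt.executeUpdate();
                }
                conn.commit(); // Phoenix buffers mutations until commit
            }
        }
    }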

Re: Issue with connecting to Phoenix in kerberised cluster.

2016-01-05 Thread anil gupta
Hi Durga, Can you kinit using the same keytab/principal from the node where you are trying to run this program? Is your program able to read the keytab file? Can you try to run this program from the same node that is running sqlline? Also, don't pass the jaas.conf in the JDBC URL this time. This line seems to provide
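For reference, Phoenix's JDBC URL can carry the principal and keytab directly on a secured cluster; a minimal sketch of that form, with host, znode, principal, and keytab path all as placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class SecureConnect {
        public static void main(String[] args) throws Exception {
            // URL form: jdbc:phoenix:<quorum>:<port>:<hbase znode>:<principal>:<keytab>
            // All values below are placeholders for illustration.
            String url = "jdbc:phoenix:zk1.example.com:2181:/hbase-secure"
                    + ":myuser@EXAMPLE.COM:/etc/security/keytabs/myuser.keytab";
            try (Connection conn = DriverManager.getConnection(url)) {
                System.out.println("connected: " + !conn.isClosed());
            }
        }
    }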

Thin Client:: Connection Refused

2016-01-05 Thread CHARBEL . EL-KAED
Hi, I am running HBase on HDP 2.3.2 with the following parameters: Hadoop Version 2.7.1.2.3.2.0-2950 (revision 5cc60e0003e33aa98205f18bccaeaf36cb193c1c); Zookeeper Quorum sandbox.hortonworks.com:2181; Zookeeper Base Path /hbase-unsecure; HBase Version 1.1.2.2.3.2.0-2950,

Re: Thin Client:: Connection Refused

2016-01-05 Thread Thomas Decaux
Did you run the thin server? (The HTTP server that proxies to Phoenix.) On Jan 5, 2016 at 11:15 PM, wrote: > Hi, > > I am running HBase on HDP 2.3.2 with the following parameters: > > Hadoop Version 2.7.1.2.3.2.0-2950, >
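A minimal sketch of a thin-client connection once the Phoenix Query Server is up (default port 8765; the host is the sandbox one from the question, and the Phoenix thin-client jar must be on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ThinClientCheck {
        public static void main(String[] args) throws Exception {
            // The thin client speaks HTTP to the Phoenix Query Server, which
            // must be running; "Connection refused" means it is not listening.
            String url = "jdbc:phoenix:thin:url=http://sandbox.hortonworks.com:8765";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }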

Re: Thin Client:: Connection Refused

2016-01-05 Thread CHARBEL . EL-KAED
Hello Thomas, Thank you! That was what was missing! :) I noticed that dbConnection.commit(); is not supported. Is there any other method to commit? The inserted values are not persisted. Thank you, -- C From: Thomas Decaux To: user@phoenix.apache.org, Date:
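The thread's resolution isn't shown here, but a common workaround when an explicit commit() isn't honored is to enable autocommit so each UPSERT is flushed immediately; a sketch under that assumption (host and table are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class ThinClientUpsert {
        public static void main(String[] args) throws Exception {
            // Placeholder Query Server host.
            String url = "jdbc:phoenix:thin:url=http://sandbox.hortonworks.com:8765";
            try (Connection conn = DriverManager.getConnection(url)) {
                conn.setAutoCommit(true); // flush each UPSERT immediately, no commit() needed
                // STOCKS and its columns are hypothetical.
                try (PreparedStatement pstmt = conn.prepareStatement(
                        "UPSERT INTO STOCKS (STOCK_NAME, PRICE) VALUES (?, ?)")) {
                    pstmt.setString(1, "AAPL");
                    pstmt.setDouble(2, 100.25);
                    pstmt.executeUpdate();
                }
            }
        }
    }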

Can phoenix local indexes create a deadlock after an HBase full restart?

2016-01-05 Thread Pedro Gandola
Hi Guys, I have been testing out Phoenix local indexes and I'm facing an issue after restarting the entire HBase cluster. *Scenario:* I'm using Phoenix 4.4 and HBase 1.1.1. My test cluster contains 10 machines, and the main table contains 300 pre-split regions, which implies 300 regions on local
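For context, a local index shares region boundaries with its data table, which is why a 300-region table implies 300 local-index regions; a minimal setup sketch (quorum, table, and column names are hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class LocalIndexSetup {
        public static void main(String[] args) throws Exception {
            // Quorum host is a placeholder.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE TABLE IF NOT EXISTS MY_TABLE ("
                        + "ID VARCHAR PRIMARY KEY, V1 VARCHAR, V2 BIGINT)");
                // Local index data is co-located with the data table's regions.
                stmt.execute("CREATE LOCAL INDEX IF NOT EXISTS MY_IDX ON MY_TABLE (V2)");
            }
        }
    }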

array of BIGINT index

2016-01-05 Thread Kumar Palaniappan
We have a table with a BIGINT[] column. Since Phoenix doesn't support indexing this data type, our queries do a full table scan when we filter on this field. What are the alternative approaches? Tried looking into views, but no luck. Appreciate your time. Kumar

Phoenix MR integration api only accepts Index of column for setting column value

2016-01-05 Thread anil gupta
Hi, I am using Phoenix 4.4. I am trying to integrate my MapReduce job with Phoenix following this doc: https://phoenix.apache.org/phoenix_mr.html My Phoenix table has around 1000 columns. I have some hesitation regarding using *COLUMN_INDEX* for setting its value rather than *NAME*, as per
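The MR integration binds output columns through DBWritable.write(PreparedStatement), which is where the by-index requirement shows up; a minimal sketch modeled loosely on the STOCK example in the linked doc (column names and types are illustrative):

    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import org.apache.hadoop.mapreduce.lib.db.DBWritable;

    public class StockWritable implements DBWritable {
        private String stockName;
        private double price;

        // Binds by 1-based index; the order must match the column list
        // configured for the generated UPSERT statement.
        @Override
        public void write(PreparedStatement pstmt) throws SQLException {
            pstmt.setString(1, stockName);
            pstmt.setDouble(2, price);
        }

        // Reads by column name, which works on the input side.
        @Override
        public void readFields(ResultSet rs) throws SQLException {
            stockName = rs.getString("STOCK_NAME");
            price = rs.getDouble("PRICE");
        }
    }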

Re: array of BIGINT index

2016-01-05 Thread Kumar Palaniappan
Thanks, James, for the response. Our use case is that the array holds all the accounts for a particular customer, so the table and query are: CREATE TABLE T (ID VARCHAR PRIMARY KEY, A BIGINT ARRAY); find-by-account is a use case - SELECT ID FROM T WHERE ? = ANY(A); On Tue, Jan 5, 2016 at 3:34
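A sketch of that query in executable form; Phoenix's ANY comparison takes a value on the left-hand side, so the account id is bound as a parameter (the quorum host and the id value are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class FindByAccount {
        public static void main(String[] args) throws Exception {
            // Quorum host is a placeholder.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
                 PreparedStatement pstmt = conn.prepareStatement(
                         "SELECT ID FROM T WHERE ? = ANY(A)")) {
                pstmt.setLong(1, 12345L); // hypothetical account id
                try (ResultSet rs = pstmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("ID"));
                    }
                }
            }
        }
    }

Note that without an index on A this still runs as a full table scan, which is the complaint that started the thread.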