Re: Issue with connecting to Phoenix in kerberised cluster.

2015-12-29 Thread Josh Elser
Ns G wrote: Hi All, I have written a simple class to access phoenix. I am able to establish a connection. But when executing the below line I get the error. conn = DriverManager.getConnection(dbUrl); I am facing the below exception when accessing phoenix through JDBC from eclipse. INFO - Call

Re: linkage error using Groovy

2016-06-08 Thread Josh Elser
/hdp/current/phoenix-client/phoenix-client.jar and run the following groovy script, assuming zookeeper is running on zknode: import groovy.sql.Sql Sql.newInstance("jdbc:phoenix:zknode:/hbase-unsecure", 'foo', 'bar', "org.apache.phoenix.jdbc.PhoenixDriver") On Jun 6, 2016, at

Re: linkage error using Groovy

2016-06-06 Thread Josh Elser
Looks like you're knocking up against Hadoop (in o.a.h.c.Configuration). Have you checked search results without Phoenix specifically? I haven't run into anything like this before, but I'm also not a big Groovy aficionado. If you can share your environment (or some sample project that can

Re: linkage error using Groovy

2016-06-09 Thread Josh Elser
FWIW, I've also reproduced this with Groovy 2.4.3, Oracle Java 1.7.0_79 and Apache Phoenix 4.8.0-SNAPSHOT locally. Will dig some more. Brian Jeltema wrote: Groovy 2.4.3 JDK 1.8 On Jun 8, 2016, at 11:26 AM, Josh Elser <josh.el...@gmail.com <mailto:josh.el...@gmail.com>> wr

Re: phoenix on non-apache hbase

2016-06-09 Thread Josh Elser
Koert, Apache Phoenix goes through a lot of work to provide multiple versions of Phoenix for various versions of Apache HBase (0.98, 1.1, and 1.2 presently). The builds for each of these branches are tested against those specific versions of HBase, so I doubt that there are issues between

Re: linkage error using Groovy

2016-06-20 Thread Josh Elser
Negative, sorry :\ I'm not really sure how this all is supposed to work in Groovy. I'm a bit out of my element. Brian Jeltema wrote: Any luck with this? On Jun 9, 2016, at 10:07 PM, Josh Elser <josh.el...@gmail.com <mailto:josh.el...@gmail.com>> wrote: FWIW, I've als

Re: Bulk loading and index

2016-06-25 Thread Josh Elser
Hi Tongzhou, Maybe you can try `ALTER INDEX index ON table DISABLE`. And then the same command with USABLE after you update the index. Are you attempting to do this incrementally? Like, a bulk load of data then a bulk load of index data, repeat? Regarding the TTL, I assume so, but I'm not
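
A minimal sketch of the DISABLE/USABLE sequence described above, using the thick JDBC driver; the ZooKeeper quorum, table, and index names are illustrative:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class IndexBulkLoadToggle {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181:/hbase");
                 Statement stmt = conn.createStatement()) {
                // Take the index offline before loading the data table
                stmt.execute("ALTER INDEX MY_INDEX ON MY_TABLE DISABLE");
                // ... bulk load the data, then the corresponding index data ...
                // Mark the index usable again once both loads are complete
                stmt.execute("ALTER INDEX MY_INDEX ON MY_TABLE USABLE");
            }
        }
    }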

Avatica/Phoenix-Query-Server .NET driver

2016-06-27 Thread Josh Elser
Hi, I was just made aware of a neat little .NET driver for Avatica (specifically, the authors were focused on Phoenix's use of Avatica in the Phoenix Query Server). https://www.nuget.org/packages/Microsoft.Phoenix.Client/1.0.0-preview I'll have to try it out at some point, but would love to

Re: Thin Client Commits?

2016-02-22 Thread Josh Elser
I only wired up commit/rollback in Calcite/Avatica in Calcite-1.6.0 [1], so Phoenix-4.6 isn't going to have that in the binaries that you can download (Phoenix-4.6 is using 1.3.0-incubating). This should be included in the upcoming Phoenix-4.7.0. Sadly, I'm not sure why autoCommit=true
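
For reference, a hedged sketch of an explicit commit over the thin driver; this only works end-to-end once PQS is running Phoenix 4.7.0+/Avatica 1.6.0+, and the host and table names are illustrative:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ThinClientCommit {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:phoenix:thin:url=http://pqs.example.com:8765;serialization=PROTOBUF";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement()) {
                conn.setAutoCommit(false);
                stmt.executeUpdate("UPSERT INTO MY_TABLE (ID, NAME) VALUES (1, 'one')");
                conn.commit();   // forwarded to PQS only with Avatica >= 1.6.0 (Phoenix >= 4.7.0)
            }
        }
    }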

Re: Phoenix Query Server and/or Avatica Bug and/or My Misunderstanding

2016-02-16 Thread Josh Elser
Hi Steve, Sorry for the delayed response. Putting the "payload" (json or protobuf) into the POST instead of the header should be the 'recommended' way forward to avoid the limit as you ran into [1]. I think Phoenix >=4.6 was using Calcite-1.4, but my memory might be failing me. Regarding

Re: Phoenix Query Server - 413 Entity Too Large

2016-03-30 Thread Josh Elser
Hi Jared, Sounds like https://issues.apache.org/jira/browse/CALCITE-780 That version of Phoenix (probably) is using Calcite-1.2.0-incubating. You could ask the vendor to update to a newer version, or use Phoenix 4.7.0 (directly from Apache) which is using Calcite-1.6.0. Jared Katz wrote:

Re: Error while attempting join query

2016-04-06 Thread Josh Elser
(-cc dev@phoenix) Deepak, As the name suggests, that release is targeted for HBase-0.98.x release lines. Any compatibility of an older release of HBase than 0.98 is likely circumstantial. I can't speak on behalf of the HBase community, but I feel relatively confident in suggesting that it

Re: Phoenix Query Server - proper POST body format

2016-04-06 Thread Josh Elser
Hi Jared, This is just a bad error message on PQS' part. Sorry about that. IIRC, it was something obtuse like not finding the server-endpoint for the JSON message you sent. If you want to do a POST and use the body, you can just put the bytes for your JSON blob in there and that should be

Re: How do I query the phoenix query server?

2016-03-24 Thread Josh Elser
Correct, James: Phoenix-4.7.0 uses Calcite-1.6.0. This includes lots of goodies, including commit/rollback support. Phoenix-4.6.0 used Calcite-1.3.0. In general, if you want to use the QueryServer, I'd strongly recommend trying to go with Phoenix-4.7.0. You'll inherit *lots* of

Re: Kerberos ticket renewal

2016-03-24 Thread Josh Elser
Also, setting -Dsun.security.krb5.debug=true when you launch your Java application will give you lots of very helpful information about what is happening "under the hood". Sanooj Padmakumar wrote: Thanks Josh and everyone else .. Shall try this suggestion On 22 Mar 2016 09:36, &

Re: Phoenix Query Server Avatica Upsert

2016-03-04 Thread Josh Elser
Yeah, I don't think the inclusion of Python code should be viewed as a barrier to inclusion (maybe just a hurdle). I've seen other projects (Ambari, iirc) which have tons of Python code and lots of integration. The serialization for PQS can be changed via a single configuration property in

Re: Phoenix jars on http://mvnrepository.com

2016-04-04 Thread Josh Elser
Hi Pierre, 1.1.2.2.4 is not a version of Apache HBase. Might you be needing to contact a vendor for specific information? Either way, the phoenix shaded client and server (targeted for HBase server) are not attached to the Maven build which means that they are not deployed via Maven as a

Re: Non-transactional table has transaction-like behavior

2016-04-04 Thread Josh Elser
Let's think back to before transactions were added to Phoenix. With autoCommit=false, updates to HBase will be batched in the Phoenix driver, eventually flushing on their own or whenever you invoked commit() on the connection. With autoCommit=true, updates to HBase are flushed with every
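
To make the difference concrete, a small illustrative sketch of the autoCommit=false path with the thick driver; the connection URL and table are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchedUpserts {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181:/hbase")) {
                // Mutations are buffered client-side (flushed on commit() or when the buffer fills)
                conn.setAutoCommit(false);
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPSERT INTO MY_TABLE (ID, NAME) VALUES (?, ?)")) {
                    for (int i = 0; i < 1000; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "row-" + i);
                        ps.executeUpdate();
                    }
                }
                conn.commit();   // flush the buffered mutations to HBase in one batch
            }
        }
    }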

Re: Phoenix transactions not committing.

2016-04-04 Thread Josh Elser
If you invoked a commit on PQS, it should have flushed any cached values to HBase. The general messages you described in your initial post look correct at a glance. If you have an end-to-end example of this that I can play with, I can help explain what's happening inside of PQS. If you want

Re: apache phoenix json api

2016-04-13 Thread Josh Elser
For reference materials: definitely check out https://calcite.apache.org/avatica/ While JSON is easy to get started with, there are zero guarantees on compatibility between versions. If you use protobuf, we should be able to hide all schema drift from you as a client (e.g. applications you
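
As a point of reference, the serialization chosen on the client URL has to match what PQS is configured with; a small sketch (the hostname is illustrative, and the server-side property name is my recollection of phoenix.queryserver.serialization):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ThinSerializationChoice {
        public static void main(String[] args) throws Exception {
            // PROTOBUF is the compatibility-friendly choice; JSON is easier to debug by hand
            String url = "jdbc:phoenix:thin:url=http://pqs.example.com:8765;serialization=PROTOBUF";
            try (Connection conn = DriverManager.getConnection(url)) {
                // ... all wire-level (de)serialization is handled by the Avatica driver ...
            }
        }
    }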

Re: Phoenix - HBase: complex data

2016-05-06 Thread Josh Elser
Hi Mariana, You could try defining an array of whatever type you need. See https://phoenix.apache.org/array_type.html for more details. - Josh Mariana Medeiros wrote: Hello :) I have a Java class Example with a String and an ArrayList fields. I am using Apache phoenix to insert and read
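
A short, hedged sketch of writing an array column through JDBC; the table, column, and quorum names are made up for illustration:

    import java.sql.Array;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    public class ArrayColumnExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181:/hbase");
                 Statement ddl = conn.createStatement()) {
                ddl.execute("CREATE TABLE IF NOT EXISTS EXAMPLE"
                    + " (ID BIGINT NOT NULL PRIMARY KEY, TAGS VARCHAR ARRAY)");
                Array tags = conn.createArrayOf("VARCHAR", new Object[] {"a", "b", "c"});
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPSERT INTO EXAMPLE (ID, TAGS) VALUES (?, ?)")) {
                    ps.setLong(1, 1L);
                    ps.setArray(2, tags);
                    ps.executeUpdate();
                }
                conn.commit();
            }
        }
    }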

Re: Phoenix : Update protobuf-java and guava version

2016-05-01 Thread Josh Elser
Hi Naveen, The Protocol Buffer dependency on 2.5 is very unlikely to change in Phoenix as that is directly inherited from HBase (as you can imagine, these need to be kept in sync). There are efforts, in both HBase and Phoenix, underway to provide shaded-jars for each project which would

Re: apache phoenix json api

2016-04-19 Thread Josh Elser
Nope, you shouldn't need to do this. "statements" that you create using the CreateStatementRequest are very similarly treated to the JDBC Statement interface (they essentially refer to an instance of a PhoenixStatement inside PQS, actually). You should be able to create one statement and
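
To illustrate the reuse, a rough sketch of the JSON request sequence over HTTP. It assumes PQS is configured for JSON serialization, the host is illustrative, and a real client would parse the statementId out of the createStatement response rather than hard-coding it as done here for brevity:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Scanner;

    public class AvaticaJsonSketch {
        static String post(String json) throws Exception {
            HttpURLConnection http =
                (HttpURLConnection) new URL("http://pqs.example.com:8765/").openConnection();
            http.setRequestMethod("POST");
            http.setDoOutput(true);
            http.setRequestProperty("Content-Type", "application/json");
            try (OutputStream out = http.getOutputStream()) {
                out.write(json.getBytes(StandardCharsets.UTF_8));
            }
            try (Scanner s = new Scanner(http.getInputStream(), "UTF-8")) {
                return s.useDelimiter("\\A").next();
            }
        }

        public static void main(String[] args) throws Exception {
            post("{\"request\": \"openConnection\", \"connectionId\": \"conn-1\"}");
            post("{\"request\": \"createStatement\", \"connectionId\": \"conn-1\"}");
            // Reuse the same statementId (assumed to be 1 here) for each execution
            post("{\"request\": \"prepareAndExecute\", \"connectionId\": \"conn-1\","
                + " \"statementId\": 1, \"sql\": \"SELECT * FROM us_population\","
                + " \"maxRowCount\": -1}");
        }
    }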

Re: apache phoenix json api

2016-04-19 Thread Josh Elser
"statementId": 20, "sql": "SELECT * FROM us_population", "maxRowCount": -1 } And this is the commit command response (if it can give you more insights) { "response": "resultSet", "connectionId": "8", "state

Re: apache phoenix json api

2016-04-17 Thread Josh Elser
Also, you're using the wrong command. You want "prepareAndExecute" not "prepareAndExecuteBatch". Josh Elser wrote: Thanks, will fix this. Plamen Paskov wrote: Ah i found the error. It should be "sqlCommands": instead of "sqlCommands", The documentati

Re: apache phoenix json api

2016-04-17 Thread Josh Elser
", "UPSERT INTO us_population(STATE,CITY,POPULATION) VALUES('C2','City 2',100)" ] } And this is the response i receive: Error 500 HTTP ERROR: 500 Problem accessing /. Reason: com.fasterxml.jackson.core.JsonParseException: Unexpected character (',' (code 44)): was ex

Re: apache phoenix json api

2016-04-17 Thread Josh Elser
; ] } And this is the response i receive: Error 500 HTTP ERROR: 500 Problem accessing /. Reason: com.fasterxml.jackson.core.JsonParseException: Unexpected character (',' (code 44)): was expecting a colon to separate field name and value at [Source: java.io.StringReader@41709697; line: 5,

Re: prepareAndExecute with UPSERT not working

2016-04-17 Thread Josh Elser
What version of Phoenix are you using? Plamen Paskov wrote: Hey folks, I'm trying to UPSERT some data via the json api but no luck for now. My requests looks like: { "request": "openConnection", "connectionId": "6" } { "request": "createStatement", "connectionId": "6" } { "request":

Re: Phoenix-queryserver-client jar is too fat in 4.8.0

2016-08-12 Thread Josh Elser
Hi Youngwoo, The inclusion of hadoop-common is probably the source of most of the bloat. We really only needed the UserGroupInformation code, but Hadoop doesn't provide a proper artifact with just that dependency for us to use downstream. What dependency issues are you running into? There

Re: Errors while launching sqlline

2016-07-18 Thread Josh Elser
You can check the dev list for the VOTE thread which contains a link to the release candidate but it is not an official Apache Phoenix release yet. Vasanth Bhat wrote: Thanks a lot Ankit. where do I download this from? I am looking at http://mirror.fibergrid.in/apache/phoenix/ don't seem

Re: HBase checkAndPut Support

2016-07-19 Thread Josh Elser
Did you read James' response in PHOENIX-2271? [1] Restating for you: as a work-around, you could try to use the recent transaction support which was added via Apache Tephra to prevent multiple clients from modifying a cell. This would be much less efficient than the "native" checkAndPut API
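
A hedged sketch of the Tephra-based work-around mentioned above; it assumes transactions are enabled (phoenix.transactions.enabled=true) on both server and client, and the table name is illustrative:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class TransactionalWrite {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181:/hbase");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE TABLE IF NOT EXISTS GUARDED"
                    + " (ID BIGINT NOT NULL PRIMARY KEY, VAL VARCHAR) TRANSACTIONAL=true");
                conn.setAutoCommit(false);
                stmt.executeUpdate("UPSERT INTO GUARDED (ID, VAL) VALUES (1, 'x')");
                conn.commit();   // a conflicting concurrent write to the same cell fails the commit
            }
        }
    }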

Re: Phoenix error : Task rejected from org.apache.phoenix.job.JobManager

2016-07-06 Thread Josh Elser
It sounds like whatever query you were running was just causing the error to happen again locally. Like you said, if you launched a new instance of sqlline.py, you would have a new JVM and thus a new ThreadPool (and backing queue). vishnu rao wrote: hi i was using the "sqlline.py" client ..

Re: NoClassDefFoundError org/apache/hadoop/hbase/HBaseConfiguration

2016-07-05 Thread Josh Elser
Looking into this on the HDP side. Please feel free to reach out via HDP channels instead of Apache channels. Thanks for letting us know as well. Josh Mahonin wrote: Hi Robert, I recommend following up with HDP on this issue. The underlying problem is that the

Re: how dose count(1) works?

2016-07-01 Thread Josh Elser
Can you share the error that your RegionServers report in the log before they crash? It's hard to give an explanation without knowing the error you're facing. Thanks. kevin wrote: hi, all I have a test with hbase running on top of alluxio. In my hbase there is a table created by phoenix an

Re: I got a very weird message from user@phoenix.apache.org

2017-02-20 Thread Josh Elser
Short answer is (likely) that your mail provider (Gmail) is rejecting posts to user@p.a.o which hit its spam trigger but did not hit the ASF's spam trigger. This triggers the mailing list to tell you that a message it tried to send you was rejected. So, you get a warning about a message that you

Re: Phoenix Query Server tenant_id

2017-02-20 Thread Josh Elser
See https://github.com/apache/calcite/blob/5181563f9f26d1533a7d98ecca8443077e7b7efa/avatica/core/src/main/java/org/apache/calcite/avatica/remote/Service.java#L1759-L1768 This should be passed down just fine. If you can provide details as to how it isn't, that'd be great. Josh Elser wrote: I

Re: Phoenix Query Server tenant_id

2017-02-22 Thread Josh Elser
Done Done sqlline version 1.1.8 0: jdbc:phoenix:thin:url=http://pqs1.mydomain> !list 1 active connection: #0 open jdbc:phoenix:thin:url=http://pqs1.mydomain:8765;serialization=PROTOBUF Is this something that has changed in newer versions of Phoenix? On Mon, Feb 20, 2017 at 1:47 PM, Josh El

Re: Are values of a sequence deleted after an incrementation?

2017-02-23 Thread Josh Elser
I believe the sequences track the current value of the sequence. When your client requests 100 values, it would use 1-100, but Phoenix only needs to know that the next value it can give out is 101. I'm not 100% sure, but I think this is how it works. What are you concerned about? Cheyenne

Re: Phoenix Query Server tenant_id

2017-02-19 Thread Josh Elser
I thought arbitrary properties would be passed through, but I'm not sure off the top of my head anymore. Would have to dig through the Avatica JDBC driver to (re)figure this one out. Michael Young wrote: Is it possible to pass the TenantID attribute on the URL when using the phoenix

Re: How to config zookeeper quorum in sqlline command?

2017-02-19 Thread Josh Elser
Please be aware that you're now only communicating with a single ZK server instead of the three you have deployed. If that ZK server is unavailable, your client will fail upon the next read to ZK it needs to make. Presently, Phoenix doesn't support multiple ports for separate ZK servers. It
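
For comparison, the thick-driver URL accepts a comma-separated quorum that shares one client port; a small sketch with illustrative hostnames:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class QuorumUrl {
        public static void main(String[] args) throws Exception {
            // All three ZooKeeper servers, one shared port, plus the HBase root znode
            String url = "jdbc:phoenix:zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase";
            try (Connection conn = DriverManager.getConnection(url)) {
                // ... the client can fail over between quorum members ...
            }
        }
    }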

Re: Phoenix Query Server tenant_id

2017-02-22 Thread Josh Elser
in our production environment, unfortunately. We are using Phoenix 4.7 (from HDP 2.5 Community release). On Wed, Feb 22, 2017 at 4:07 PM, Josh Elser <els...@apache.org <mailto:els...@apache.org>> wrote: So, just that I'm on the same page as you, when you invoke the Java ap

Re: Protobuf serialized column

2017-02-15 Thread Josh Elser
No, PQS is just a proxy to the Phoenix (thick) JDBC driver. You are still limited to the capabilities of the Phoenix JDBC driver. You might be able to do something with a custom UDF, but I'm not sure. Sudhir Babu Pothineni wrote: Sorry for not asking the question properly, my understanding

Re: Can I use protobuf2 with Phoenix instead of protobuf3?

2017-02-15 Thread Josh Elser
This is a non-issue... Avatica's use of protobuf is completely shaded (relocated classes). You can use whatever version of protobuf in your client application you'd like. Mark Heppner wrote: If Cheyenne is talking about the query server, I'm not sure where you're getting that from, Ted. It

Re: Error at starting Phoenix shell with HBase

2017-01-18 Thread Josh Elser
nServer logs* How can the above problem be resolved? Thanks. On Mon, Jan 16, 2017 at 10:22 PM, Josh Elser <els...@apache.org <mailto:els...@apache.org>> wrote: Did you check the RegionServers logs I asked in the last message? Chetan Khatri

Re: Timeline consistency using PQS

2017-01-20 Thread Josh Elser
Tulasi, Any property which you can provide in the `Properties` object when instantiating the PhoenixDriver (outside of PQS), you can pass into PQS via the same `Properties` object when instantiating the thin Driver. The OpenConnectionRequest[1] is the RPC mechanism which passes along this
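
A rough sketch of the pass-through Josh describes; the property key shown is just one example of a thick-driver property (any key the PhoenixDriver understands travels the same way), and the PQS host is illustrative:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class ThinDriverProperties {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("phoenix.query.timeoutMs", "120000");   // forwarded via OpenConnectionRequest
            String url = "jdbc:phoenix:thin:url=http://pqs.example.com:8765;serialization=PROTOBUF";
            try (Connection conn = DriverManager.getConnection(url, props)) {
                // ... PQS opens its server-side PhoenixConnection with these properties ...
            }
        }
    }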

Re: How to recreate table?

2017-01-16 Thread Josh Elser
You could create a new table with the same schema and then flip the underlying table out. * Rename the existing table to "foo" * Create your table via Phoenix with correct schema and desired name * Delete underlying HBase table that Phoenix created * Rename "foo" to the desired name I _think_

Re: Can I use the SQL WITH clause Phoenix?

2017-01-16 Thread Josh Elser
Phoenix's grammar is documented at http://phoenix.apache.org/language/index.html Cheyenne Forbes wrote: Can I use the SQL WITH clause in Phoenix instead of "untidy" sub queries?

Re: Error at starting Phoenix shell with HBase

2017-01-16 Thread Josh Elser
Did you check the RegionServers logs I asked in the last message? Chetan Khatri wrote: Any updates for the above error guys ? On Fri, Jan 13, 2017 at 9:35 PM, Josh Elser <els...@apache.org <mailto:els...@apache.org>> wrote: (-cc dev@phoenix) phoenix-4.8.2-HBase-1.

Re: Phoenix Query Server query logging

2017-02-28 Thread Josh Elser
No, I don't believe there is any log4j logging done in PQS that would show queries being executed. Ideally, we would have a "query log" in Phoenix which would present an interface to this data and it wouldn't require anything special in PQS. However, I wouldn't be opposed to some trivial

Re: using MapReduce java.sql.SQLException: No suitable driver found for jdbc:phoenix occured.

2016-09-13 Thread Josh Elser
Hi, The trailing semi-colon on the URL seems odd, but I do not think it would cause issues in parsing when inspecting the logic in PhoenixEmbeddedDriver#acceptsURL(String). Does the Class.forName(..) call succeed? You have Phoenix properly on the classpath for your mappers? Dong-iL, Kim
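
A minimal sanity check along the lines Josh suggests; if the Class.forName call throws, the Phoenix client jar is not on the task classpath (the URL is illustrative):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class DriverSanityCheck {
        public static void main(String[] args) throws Exception {
            // Fails fast with ClassNotFoundException if the Phoenix client jar is missing
            Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk1,zk2,zk3:2181:/hbase")) {
                System.out.println("Connected: " + !conn.isClosed());
            }
        }
    }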

Re: using MapReduce java.sql.SQLException: No suitable driver found for jdbc:phoenix occured.

2016-09-14 Thread Josh Elser
phoenix-4.8.0-HBase-1.1-client.jar is the jar which should be used. The phoenix-4.8.0-HBase-1.1-hive.jar is to be used with the Hive integration. dalin.qin wrote: [root@namenode phoenix]# findjar . org.apache.phoenix.jdbc.PhoenixDriver Starting search for JAR files from directory . Looking for

Re: 回复: 回复: 回复: Can query server run with hadoop ha mode?

2016-09-08 Thread Josh Elser
I was going to say that https://issues.apache.org/jira/browse/PHOENIX-3223 might be related, but it looks like the HADOOP_CONF_DIR is already put on the classpath. Glad to see you got this working :) On Thu, Sep 8, 2016 at 5:56 AM, F21 wrote: > Glad you got it working! :)

Re: Is the JSON that is sent to the server converted to Protobufs or is the Protobufs converted to JSON to be used by Phoenix

2016-09-08 Thread Josh Elser
Yup, Francis got it right. There are POJOs in Avatica which Jackson (un)marshals the JSON in-to/out-of and logic which constructs the POJOs from Protobuf and vice versa. In some hot-code paths, there are implementations in the server which can use protobuf objects directly (to avoid extra

Re: Phoenix + Spark + JDBC + Kerberos?

2016-09-15 Thread Josh Elser
How do you expect JDBC on Spark Kerberos authentication to work? Are you using the principal+keytab options in the Phoenix JDBC URL or is Spark itself obtaining a ticket for you (via some "magic")? Jean-Marc Spaggiari wrote: Hi, I tried to build a small app all under Kerberos. JDBC to
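
For the principal+keytab route, a hedged sketch of the URL form; the principal, keytab path, quorum, and root znode are all illustrative:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class KerberosUrlLogin {
        public static void main(String[] args) throws Exception {
            // Format: jdbc:phoenix:<quorum>:<port>:<root znode>:<principal>:<keytab>
            // Adding -Dsun.security.krb5.debug=true to the JVM prints verbose Kerberos logging.
            String url = "jdbc:phoenix:zk1,zk2,zk3:2181:/hbase-secure:"
                + "myuser@EXAMPLE.COM:/etc/security/keytabs/myuser.keytab";
            try (Connection conn = DriverManager.getConnection(url)) {
                // ... the driver performs the Kerberos login before contacting HBase ...
            }
        }
    }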

Re: Joins dont work

2016-09-15 Thread Josh Elser
com> Mobile: 876-881-7889 skype: cheyenne.forbes1 On Thu, Sep 15, 2016 at 4:42 PM, Josh Elser <josh.el...@gmail.com <mailto:josh.el...@gmail.com>> wrote: The error you see would also be rather helpful. James Taylor wrote: Hi Cheyenne, Are you referrin

Re: FW: Phoenix Query Server not returning any results

2016-09-12 Thread Josh Elser
Puneeth -- One extra thing to add to Francis' great explanation; the response message told you what you did wrong: "missingStatement":true This is telling you that the server does not have a statement with the ID 12345 as you provided. F21 wrote: Hey, You mentioned that you sent a

Re: Jdbc secure connection -- exception

2016-10-04 Thread Josh Elser
Hi Vikram, See https://issues.apache.org/jira/browse/PHOENIX-1754 This is presently an open issue. You will not be able to use the convenience "Kerberos login via URL" when on Windows. You will need to manually perform your Kerberos login (via JAAS or Hadoop's UserGroupInformation class) and
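
A hedged sketch of the manual login Josh refers to, using Hadoop's UserGroupInformation before opening the connection; the principal, keytab, and quorum are illustrative:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class ManualKerberosLogin {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            UserGroupInformation.loginUserFromKeytab(
                "myuser@EXAMPLE.COM", "/etc/security/keytabs/myuser.keytab");
            try (Connection conn =
                     DriverManager.getConnection("jdbc:phoenix:zk1,zk2,zk3:2181:/hbase-secure")) {
                // ... queries now run as the logged-in Kerberos principal ...
            }
        }
    }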

Re: Phoenix + Spark + JDBC + Kerberos?

2016-09-15 Thread Josh Elser
to run command line applications directly from the worker nodes and it works, But inside the Spark Executor it doesn't... 2016-09-15 13:07 GMT-04:00 Josh Elser <josh.el...@gmail.com <mailto:josh.el...@gmail.com>>: How do you expect JDBC on Spark Kerberos authentication to work? Are

Re: Joins dont work

2016-09-15 Thread Josh Elser
The error you see would also be rather helpful. James Taylor wrote: Hi Cheyenne, Are you referring to joins through the query server? Thanks, James On Thu, Sep 15, 2016 at 1:37 PM, Cheyenne Forbes > wrote: I was

Re: Exception connection to a server with pheonix 4.7 installed

2016-09-15 Thread Josh Elser
Hi Xindian, A couple of initial things that come to mind... * Make sure that you're using HDP "bits" (jars) everywhere to remove any possibility that there's an issue between what Hortonworks ships and what's in Apache. * Make sure that your Java application/Spark job has the correct

Re: property object is being modified

2016-09-23 Thread Josh Elser
Thanks Prabhjyot. Feel free to assign it directly to me. I can help triage/fix it. Prabhjyot Singh wrote: Thank you Josh, sure I'll do that. On 2016-09-22 08:23 (+0530), Josh Elser <j...@gmail.com <mailto:j...@gmail.com>> wrote: > Sounds like the thin driver should b

Re: property object is being modified

2016-09-21 Thread Josh Elser
Sounds like the thin driver should be making a copy of the properties if it's going to be modifying it. Want to open a JIRA issue? Prabhjyot Singh wrote: Hi, I'm using DriverManager.getConnection(url, properties) using following properties url ->

Re: property object is being modified

2016-09-27 Thread Josh Elser
Thanks bud! Prabhjyot Singh wrote: Yes, I did, https://issues.apache.org/jira/browse/PHOENIX-3315. and somehow didn't tag you. On 27 September 2016 at 07:57, Josh Elser <josh.el...@gmail.com <mailto:josh.el...@gmail.com>> wrote: Did you ever create this issue, Prabhjyot? I

Re: property object is being modified

2016-09-26 Thread Josh Elser
Did you ever create this issue, Prabhjyot? I don't recall seeing it come across my inbox but I might have missed it... Josh Elser wrote: Thanks Prabhjyot. Feel free to assign it directly to me. I can help triage/fix it. Prabhjyot Singh wrote: Thank you Josh, sure I'll do that. On 2016-09-22

Re: How can I contribute clients?

2016-10-23 Thread Josh Elser
If they're generic to Apache Avatica (Apache Calcite sub-project) and not tied to Apache Phoenix, we'd also love to have you recognized, if not having the code directly committed to Avatica :). Avatica is the underlying tech to the Phoenix Query Server. Minor clarification with my Phoenix

Re: error if result size is over some limit

2016-11-23 Thread Josh Elser
Hi Noam, Can you quantify the query you run that shows this error? Also, when you change the criteria to retrieve less data, do you mean that you're fetching fewer rows? Bulvik, Noam wrote: I am using phoenix 4.5.2 and in my table the data is in an Array. When I issue a query sometime the

Re: spark 2.0.2 connect phoenix query server error

2016-11-23 Thread Josh Elser
Hi Dequn, There should be more to this stacktrace than you provided as the actual cause is not included. Can you please include the entire stacktrace? If you are not seeing this client-side, please check the Phoenix Query Server log file to see if there is more there. Dequn Zhang wrote:

Re: Salting an secondary index

2016-11-23 Thread Josh Elser
IIRC, the SALT_BUCKETS configuration from the data table is implicitly applied to any index tables you create from that data table. Pradheep Shanmugam wrote: Hi, I have a hbase table created using phoenix which is salted. Since the queries on the table required a secondary index, I created index

Re: phoenix-Hbase-client jar web application issue

2016-11-23 Thread Josh Elser
Hi Pradeep, No, this is one you will likely have to work around on your own by building a custom Phoenix client jar that does not include the javax-servlet classes. They are getting transitively pulled into Phoenix via Hadoop (IIRC). If your web application already has the classes present,

Re: Hash join out of memory error

2016-10-31 Thread Josh Elser
Is the directory containing hbase-site.xml where you have made the modification included on your overriden CLASSPATH? How are you running this query -- is it on the classpath for that program? ashish tapdiya wrote: Query: SELECT /*+ NO_STAR_JOIN*/ IP, RANK, TOTAL FROM (SELECT SOURCEIPADDR as

Re: Sample phoenix upserts using threads

2016-11-01 Thread Josh Elser
(cc: -dev +user, bcc: +dev) Hi Krishna, Might you be able to share the stacktrace that accompanied that Exception? Shiva Krishna wrote: Hi All, Can any one give me a small example of Phoenix upserts using Threads in Java. I wrote a sample it is working fine in local environment but when

Re: PrepareAndExecute statement return only 100 rows

2016-10-13 Thread Josh Elser
Hi Puneeth, What version of Phoenix are you using? Indeed per [1], maxRowCount should control the number of rows returned in the ExecuteResponse. However, given that you see 100 rows (which is the default), it sounds like the value is not being respected. The most recent docs may not align

Re: PrepareAndExecute statement return only 100 rows

2016-10-16 Thread Josh Elser
: Josh Elser [mailto:josh.el...@gmail.com] Sent: 13 October 2016 14:58 To: user@phoenix.apache.org Subject: Re: PrepareAndExecute statement return only 100 rows Hi Puneeth, What version of Phoenix are you using? Indeed per [1], maxRowCount should control the number of rows returned

Re: Ordering of numbers generated by a sequence

2016-10-16 Thread Josh Elser
Not 100% sure, but yes, I believe this is correct. One of the servers would get 0-99, the other 100-199. The server to use up that batch of 100 values would then request 200-299, etc. Setting the cache to be 0 would likely impact the performance of Phoenix. Using some external system to
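
To make the batching concrete, a small sketch (the sequence, table, and quorum names are illustrative); with CACHE 100 each client reserves a block of 100 values up front, and values from a block that are never used are simply skipped:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SequenceCacheExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181:/hbase");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE SEQUENCE IF NOT EXISTS EVENT_SEQ CACHE 100");
                stmt.execute("CREATE TABLE IF NOT EXISTS EVENTS"
                    + " (ID BIGINT NOT NULL PRIMARY KEY, NAME VARCHAR)");
                // This client draws from its locally reserved block of 100 sequence values
                stmt.executeUpdate("UPSERT INTO EVENTS (ID, NAME)"
                    + " VALUES (NEXT VALUE FOR EVENT_SEQ, 'example')");
                conn.commit();
            }
        }
    }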

Re: Upsert works using /sqlline-thin.py and /sqlline.py but not in Zeppelin or other remote jdbc clients

2016-11-26 Thread Josh Elser
Or make sure that Zeppelin is adding hbase-site.xml to the classpath. You can easily test this by making a copy of your phoenix-client.jar and manually adding in a copy of hbase-site.xml to the jar. James Taylor wrote: https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html On

Re: phoenix-Hbase-client jar web application issue

2016-11-26 Thread Josh Elser
com <mailto:pradeep.b...@gmail.com>> wrote: thanks josh looking into it. On Wed, Nov 23, 2016 at 1:48 PM, Josh Elser <josh.el...@gmail.com <mailto:josh.el...@gmail.com>> wrote: Hi Pradeep, No, this is one you will likely have to work around on your

Re: Phoenix database adapter for Python not working

2016-12-17 Thread Josh Elser
Just to clarify, like near all other services on Linux, you do not want to run the Phoenix Query Server as root. Running it as the "hbase" user (or the user you are running hbase as) is the common way to do this. Will Xu wrote: OK, this means you probably don't have Phoenix query server

Re: phoenix upsert select query fails with : java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException

2016-12-11 Thread Josh Elser
What's the rest of the stacktrace? You cut off the cause. venkata subbarayudu wrote: I faced a strange issue, that, Phoenix hbase upsert query fails with ArrayIndexOutOfBounds exception. Query looks like: upsert into table (pk,col1, col2, col3) select a.pk

Re: borken maven phoenix package

2017-01-09 Thread Josh Elser
for org.apache.phoenix:phoenix ? we need to have the jdbc driver thanks On 06.01.2017 18:38, Josh Elser wrote: There is no JAR for org.apache.phoenix:phoenix. There's a source-release tarball [1] Use org.apache.phoenix:phoenix-core [2] [1] http://repo1.maven.org/maven2/org/apache/phoenix/phoenix/4.4.0-HBase

Re: Error at starting Phoenix shell with HBase

2017-01-13 Thread Josh Elser
(-cc dev@phoenix) phoenix-4.8.2-HBase-1.2-server.jar in the top-level binary tarball of Apache Phoenix 4.8.0 is the jar which is meant to be deployed onto every HBase server's classpath. I would check the RegionServer logs -- I'm guessing that it never started correctly or failed. The error message is

Re: [ANNOUNCE] Apache Phoenix 4.9 released

2016-12-03 Thread Josh Elser
(-cc the reply-all) I'm not sure what your reasons are for not accepting "add-ons" as a valid implementation, but this release doesn't change Phoenix's use of Tephra to enable cross-row and cross-table transactions which are not natively provided by HBase. If you want to see this feature

Re: how to security phoenix

2016-12-05 Thread Josh Elser
Yes, use the HBase-provided access control mechanisms. lk_phoenix wrote: hi, all: I want to know how to add access control to the tables I create with phoenix. I need to add the privilege through hbase? 2016-12-05 lk_phoenix

Re: ArrayIndexOutOfBoundsException in PQS

2017-01-05 Thread Josh Elser
mentcache.expiryduration Statement cache expiration duration. Any statements older than this value will be discarded. Default is 5 minutes. 5 avatica.statementcache.expiryunit Statement cache expiration unit. Unit modifier applied to the value provided in avatica.statem

Re: ArrayIndexOutOfBoundsException in PQS

2017-01-05 Thread Josh Elser
Thanks. Tulasi Paradarami wrote: I created CALCITE-1565 for this issue. On Thu, Jan 5, 2017 at 12:10 PM, Josh Elser <josh.el...@gmail.com <mailto:josh.el...@gmail.com>> wrote: Hrm, that's frustrating. No stack trace is a bug. I remember there being one of these I fixed

Re: borken maven phoenix package

2017-01-06 Thread Josh Elser
There is no JAR for org.apache.phoenix:phoenix. There's a source-release tarball [1] Use org.apache.phoenix:phoenix-core [2] [1] http://repo1.maven.org/maven2/org/apache/phoenix/phoenix/4.4.0-HBase-1.1/ [2] http://repo1.maven.org/maven2/org/apache/phoenix/phoenix-core/4.4.0-HBase-1.1/

Re: How many servers are need to put Phoenix in production?

2016-12-28 Thread Josh Elser
You should really work backwards from your use cases. The amount of hardware you need is dependent on your requirements and what else you're going to be running on the hardware. You're not likely to get a good answer here because the question is so open-ended. Cheyenne Forbes wrote: are

Re: slow response on large # of columns

2016-12-27 Thread Josh Elser
Maybe you could separate some of the columns into separate column families so you have some physical partitioning on disk? Whether you select one or many columns, you presently have to read through each column on disk. AFAIK, there shouldn't really be an upper limit here (in terms of what

Re: Cannot connect Phoenix to HBase in secure cluster (Kerberos)

2017-03-21 Thread Josh Elser
e got. > > > Thanks & Regards, > Rohit R. K. > > > On Tue, 14 Mar 2017 20:50:30 +0530 Josh Elser <els...@apache.org> wrote > > > When you provide the principal and keytab options in the JDBC URL, the > ticket cache (created by your kinit invoca

Re: Csvbulkloadtool

2017-03-21 Thread Josh Elser
On Mon, Mar 20, 2017 at 12:55 AM, Adi Meller wrote: > Hello. > I need to move some (5-6) big (2 tera each) tables from hive to Phoenix > every day. > > I have cdh 5.7 and installed phoenix 4.7 through parcel. > I have 4 region servers with 94gb physical memory And 32 cores

Re: python-phoenixdb

2017-03-26 Thread Josh Elser
-11-07 18:12 GMT+01:00 Josh Elser<els...@apache.org>: +1 I was poking around with it this weekend. I had some issues (trying to use it from the Avatica side, instead of PQS, specifically), but for the most part it worked. Definitely feel free to report any issues you run into: https://b

Re: Upgrade from Phoenix 4.4.0-hbase-1.1 to 4.8.0-hbase-1.1 Error

2017-04-03 Thread Josh Elser
The root cause of your exception is a ConnectionRefusedException. This means that your client was unable to make a network connection to 192.168.1.147:52540 (from earlier in your stacktrace). Typically, this is an OS or HBase level issue. I'd first try to rule out a networking level issue

Re: Phoenix Query Server query logging

2017-04-12 Thread Josh Elser
s, >>>> Michael >>>> >>>> On Mon, Apr 3, 2017 at 2:28 PM, Ryan Templeton >>>> <rtemple...@hortonworks.com> wrote: >>>>> >>>>> I see there’s a phoenix-tracing-webapp project in the build plus this >>>>> on the web

Re: How can I "use" a hbase co-processor from a User Defined Function?

2017-04-12 Thread Josh Elser
Since you're writing the UDF yourself, you can have it do anything you'd like. However, I wouldn't think that it would be a good idea to do a remote RPC for every potential row that you're processing in a query... On Wed, Apr 12, 2017 at 5:45 PM, Cheyenne Forbes

Re: phoenix.schema.isNamespaceMappingEnabled

2017-04-20 Thread Josh Elser
default to true. thanks for the explanation why it shouldn't -Sudhir On Thu, Apr 20, 2017 at 11:56 AM, Josh Elser <els...@apache.org <mailto:els...@apache.org>> wrote: Most likely to avoid breaking existing functionality. As this mapping is a relatively new feature, we w

Re: Weird High Read throuput of SYSTEM.STATS

2017-04-13 Thread Josh Elser
the high read load of the SYSTEM.STATS table. On Thu, Apr 13, 2017 at 11:42 AM, Josh Elser <josh.el...@gmail.com <mailto:josh.el...@gmail.com>> wrote: What version of Phoenix are you using? (an Apache release? some vendors' packaging?) Academically speaking, when you bu

Re: Problem connecting JDBC client to a secure cluster

2017-04-13 Thread Josh Elser
Just some extra context here: From your original message, you noted how the ZK connection succeeded but the HBase connection didn't. The JAAS configuration file you provided is *only* used by ZooKeeper. As you have eventually realized, hbase-site.xml is the configuration file which controls

Re: Bad performance of the first resultset.next()

2017-04-19 Thread Josh Elser
I'm guessing that you're using a version of HDP? If you're using those versions from Apache, please update as they're dreadfully out of date. What is the DDL of the table you're reading from? Do you have any secondary indexes on this table (if so, on what columns)? What kind of query are you

Re: phoenix.schema.isNamespaceMappingEnabled

2017-04-20 Thread Josh Elser
Most likely to avoid breaking existing functionality. As this mapping is a relatively new feature, we wouldn't want to force it upon new users. The need for Phoenix to have the proper core-site, hdfs-site, and hbase-site XML files on the classpath is a fair knock though (although, the lack

Re: Phoenix connection to kerberized hbase fails

2017-04-19 Thread Josh Elser
Reid wouldn't have seen the message he did in the original message about a successful login if that were the case. Try adding in "-Dsun.security.krb5.debug" to your PHOENIX_OPTS (I think that is present in that version of Phoenix). It should give you a lot more debug information, providing

Re: phoenix client config and memory

2017-03-09 Thread Josh Elser
So properties like phoenix.query.timeoutMs have to be in the app-side hbase-site? But I see the above property being set in the server-side hbase-site through ambari.. is that not going to be used by the phoenix client? Thanks, Pradheep On 3/8/17, 3:30 PM, "Josh Elser"<els...@apache.org>
