Re: Joins dont work

2016-09-15 Thread Josh Elser
The error you see would also be rather helpful. James Taylor wrote: Hi Cheyenne, Are you referring to joins through the query server? Thanks, James On Thu, Sep 15, 2016 at 1:37 PM, Cheyenne Forbes <cheyenne.osanu.for...@gmail.com> wrote: I was using phoenix 4.4 then I switched to 4.

Re: Exception connection to a server with pheonix 4.7 installed

2016-09-15 Thread Josh Elser
Hi Xindian, A couple of initial things that come to mind... * Make sure that you're using HDP "bits" (jars) everywhere to remove any possibility that there's an issue between what Hortonworks ships and what's in Apache. * Make sure that your Java application/Spark job has the correct hbase-si

Re: Phoenix + Spark + JDBC + Kerberos?

2016-09-15 Thread Josh Elser
o tried to run command line applications directly from the worker nodes and it works, But inside the Spark Executor it doesn't... 2016-09-15 13:07 GMT-04:00 Josh Elser <josh.el...@gmail.com>: How do you expect JDBC on Spark Kerberos authentication to work? Are you using

Re: Phoenix + Spark + JDBC + Kerberos?

2016-09-15 Thread Josh Elser
How do you expect JDBC on Spark Kerberos authentication to work? Are you using the principal+keytab options in the Phoenix JDBC URL or is Spark itself obtaining a ticket for you (via some "magic")? Jean-Marc Spaggiari wrote: Hi, I tried to build a small app all under Kerberos. JDBC to Phoeni
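A minimal sketch of the principal+keytab form of the Phoenix JDBC URL mentioned above, assuming the thick (non-thin) driver; the ZooKeeper quorum, principal, and keytab path are placeholders, not values from this thread:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PhoenixKerberosJdbc {
        public static void main(String[] args) throws Exception {
            // Placeholder quorum, principal, and keytab path -- substitute your own.
            // With these appended to the URL, the driver can log in from the keytab itself.
            String url = "jdbc:phoenix:zk1,zk2,zk3:2181:/hbase-secure"
                    + ":myuser@EXAMPLE.COM"
                    + ":/etc/security/keytabs/myuser.keytab";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1 FROM SYSTEM.CATALOG LIMIT 1")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }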

Re: using MapReduce java.sql.SQLException: No suitable driver found for jdbc:phoenix occured.

2016-09-14 Thread Josh Elser
phoenix-4.8.0-HBase-1.1-client.jar is the jar which should be used. The phoenix-4.8.0-HBase-1.1-hive.jar is to be used with the Hive integration. dalin.qin wrote: [root@namenode phoenix]# findjar . org.apache.phoenix.jdbc.PhoenixDriver Starting search for JAR files from directory . Looking for

Re: using MapReduce java.sql.SQLException: No suitable driver found for jdbc:phoenix occured.

2016-09-13 Thread Josh Elser
Hi, The trailing semi-colon on the URL seems odd, but I do not think it would cause issues in parsing when inspecting the logic in PhoenixEmbeddedDriver#acceptsURL(String). Does the Class.forName(..) call succeed? You have Phoenix properly on the classpath for your mappers? Dong-iL, Kim wr
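As a rough illustration of the classpath check suggested above, a mapper can fail fast in setup() if the Phoenix driver class cannot be loaded on the task classpath; the ZooKeeper quorum below is a placeholder:

    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class PhoenixCheckMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
        private Connection conn;

        @Override
        protected void setup(Context context) throws IOException {
            try {
                // Fails fast if the Phoenix client jar is not on the mapper's classpath.
                Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
                conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181:/hbase");
            } catch (ClassNotFoundException | SQLException e) {
                throw new IOException("Phoenix driver not usable in this mapper", e);
            }
        }

        @Override
        protected void cleanup(Context context) throws IOException {
            try {
                if (conn != null) conn.close();
            } catch (SQLException e) {
                throw new IOException(e);
            }
        }
    }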

Re: FW: Phoenix Query Server not returning any results

2016-09-12 Thread Josh Elser
Puneeth -- One extra thing to add to Francis' great explanation; the response message told you what you did wrong: "missingStatement":true. This is telling you that the server does not have a statement with the ID 12345 as you provided. F21 wrote: Hey, You mentioned that you sent a PrepareA

Re: Is the JSON that is sent to the server converted to Protobufs or is the Protobufs converted to JSON to be used by Phoenix

2016-09-08 Thread Josh Elser
Yup, Francis got it right. There are POJOs in Avatica which Jackson (un)marshals the JSON in-to/out-of and logic which constructs the POJOs from Protobuf and vice versa. In some hot-code paths, there are implementations in the server which can use protobuf objects directly (to avoid extra dese

Re: 回复: 回复: 回复: Can query server run with hadoop ha mode?

2016-09-08 Thread Josh Elser
I was going to say that https://issues.apache.org/jira/browse/PHOENIX-3223 might be related, but it looks like the HADOOP_CONF_DIR is already put on the classpath. Glad to see you got this working :) On Thu, Sep 8, 2016 at 5:56 AM, F21 wrote: > Glad you got it working! :) > > Cheers, > Francis >

Re: [ANNOUNCE] Apache Phoenix 4.8.0 released

2016-08-19 Thread Josh Elser
(-cc other lists) Hi Afshin, The release notes you referenced are more meant to alert users about any issues in the new release that you may run into over previous releases. "Release notes provide details on issues and their fixes which may have an impact on prior Phoenix behavior" - Josh

Re: Phoenix-queryserver-client jar is too fat in 4.8.0

2016-08-12 Thread Josh Elser
es at runtime. Josh Elser wrote: Hi Youngwoo, The inclusion of hadoop-common is probably the source of most of the bloat. We really only needed the UserGroupInformation code, but Hadoop doesn't provide a proper artifact with just that dependency for us to use downstream. What dependency

Re: Phoenix-queryserver-client jar is too fat in 4.8.0

2016-08-12 Thread Josh Elser
Hi Youngwoo, The inclusion of hadoop-common is probably the source of most of the bloat. We really only needed the UserGroupInformation code, but Hadoop doesn't provide a proper artifact with just that dependency for us to use downstream. What dependency issues are you running into? There wa

Re: HBase prefix scan.

2016-07-19 Thread Josh Elser
Hi Ankit, Assuming you provide some condition such as `WHERE ROWKEY_COLUMN like "9898989898_@#$%"` in your query, I believe Phoenix will automatically execute the query via a bounded range scan over that rowKey prefix. You can verify this is happening by using the `explain ` command. ankit b
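A small, hypothetical example of checking the plan with EXPLAIN from JDBC; the table and column names are made up for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ExplainPrefixScan {
        public static void main(String[] args) throws Exception {
            // Table and column names here are placeholders.
            String query = "SELECT * FROM MY_TABLE WHERE ROWKEY_COLUMN LIKE '9898989898%'";
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("EXPLAIN " + query)) {
                // A plan mentioning a RANGE SCAN (rather than a FULL SCAN) indicates
                // Phoenix bounded the scan by the row key prefix.
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }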

Re: HBase checkAndPut Support

2016-07-19 Thread Josh Elser
Did you read James' response in PHOENIX-2271? [1] Restating for you: as a work-around, you could try to use the recent transaction support which was added via Apache Tephra to prevent multiple clients from modifying a cell. This would be much less efficient than the "native" checkAndPut API ca

Re: Errors while launching sqlline

2016-07-18 Thread Josh Elser
You can check the dev list for the VOTE thread which contains a link to the release candidate but it is not an official Apache Phoenix release yet. Vasanth Bhat wrote: Thanks a lot Ankit. where do I download this from? I am looking at http://mirror.fibergrid.in/apache/phoenix/ don't seem

Re: Phoenix error : Task rejected from org.apache.phoenix.job.JobManager

2016-07-06 Thread Josh Elser
It sounds like whatever query you were running was just causing the error to happen again locally. Like you said, if you launched a new instance of sqlline.py, you would have a new JVM and thus a new ThreadPool (and backing queue). vishnu rao wrote: hi i was using the "sqlline.py" client ..

Re: NoClassDefFoundError org/apache/hadoop/hbase/HBaseConfiguration

2016-07-05 Thread Josh Elser
Looking into this on the HDP side. Please feel free to reach out via HDP channels instead of Apache channels. Thanks for letting us know as well. Josh Mahonin wrote: Hi Robert, I recommend following up with HDP on this issue. The underlying problem is that the 'phoenix-spark-4.4.0.2.4.0.0-16

Re: how dose count(1) works?

2016-07-01 Thread Josh Elser
Can you share the error that your RegionServers report in the log before they crash? It's hard to give an explanation without knowing the error you're facing. Thanks. kevin wrote: hi,all I have a test about hbase run top of alluxio . In my hbase there is a table a create by phoenix an ha

Avatica/Phoenix-Query-Server .NET driver

2016-06-27 Thread Josh Elser
Hi, I was just made aware of a neat little .NET driver for Avatica (specifically, the authors were focused on Phoenix's use of Avatica in the Phoenix Query Server). https://www.nuget.org/packages/Microsoft.Phoenix.Client/1.0.0-preview I'll have to try it out at some point, but would love to

Re: Bulk loading and index

2016-06-25 Thread Josh Elser
Hi Tongzhou, Maybe you can try `ALTER INDEX index ON table DISABLE`. And then the same command with USABLE after you update the index. Are you attempting to do this incrementally? Like, a bulk load of data then a bulk load of index data, repeat? Regarding the TTL, I assume so, but I'm not ce
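A sketch of that disable/re-enable sequence over JDBC, assuming placeholder index and table names and that USABLE is the accepted keyword in your Phoenix version:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ToggleIndexDuringBulkLoad {
        public static void main(String[] args) throws Exception {
            // Index and table names are placeholders.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
                 Statement stmt = conn.createStatement()) {
                // Take the index offline before bulk loading the base table.
                stmt.execute("ALTER INDEX MY_INDEX ON MY_TABLE DISABLE");

                // ... run the bulk load of table and index data here ...

                // Bring the index back once its data has been updated.
                stmt.execute("ALTER INDEX MY_INDEX ON MY_TABLE USABLE");
            }
        }
    }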

Re: linkage error using Groovy

2016-06-20 Thread Josh Elser
Negative, sorry :\ I'm not really sure how this all is supposed to work in Groovy. I'm a bit out of my element. Brian Jeltema wrote: Any luck with this? On Jun 9, 2016, at 10:07 PM, Josh Elser <josh.el...@gmail.com> wrote: FWIW, I've also reproduced this with

Re: linkage error using Groovy

2016-06-09 Thread Josh Elser
FWIW, I've also reproduced this with Groovy 2.4.3, Oracle Java 1.7.0_79 and Apache Phoenix 4.8.0-SNAPSHOT locally. Will dig some more. Brian Jeltema wrote: Groovy 2.4.3 JDK 1.8 On Jun 8, 2016, at 11:26 AM, Josh Elser <josh.el...@gmail.com> wrote: Thanks for the info, B

Re: phoenix on non-apache hbase

2016-06-09 Thread Josh Elser
Koert, Apache Phoenix goes through a lot of work to provide multiple versions of Phoenix for various versions of Apache HBase (0.98, 1.1, and 1.2 presently). The builds for each of these branches are tested against those specific versions of HBase, so I doubt that there are issues between Apa

Re: linkage error using Groovy

2016-06-08 Thread Josh Elser
/hdp/current/phoenix-cient/phoenix-client.jar and run the following groovy script, assuming zookeeper is running on zknode: import groovy.sql.Sql Sql.newInstance("jdbc:phoenix:zknode:/hbase-unsecure", 'foo', 'bar', "org.apache.phoenix.jdbc.PhoenixDriver")

Re: linkage error using Groovy

2016-06-06 Thread Josh Elser
Looks like you're knocking up against Hadoop (in o.a.h.c.Configuration). Have you checked search results without Phoenix specifically? I haven't run into anything like this before, but I'm also not a big Groovy aficionado. If you can share your environment (or some sample project that can exhi

Re: Phoenix - HBase: complex data

2016-05-06 Thread Josh Elser
Hi Mariana, You could try defining an array of whatever type you need. See https://phoenix.apache.org/array_type.html for more details. - Josh Mariana Medeiros wrote: Hello :) I have a Java class Example with a String and an ArrayList fields. I am using Apache phoenix to insert and read data
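A short example along the lines of the array documentation linked above; the table and column names are invented for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PhoenixArrayExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
                 Statement stmt = conn.createStatement()) {
                // Hypothetical table: a String field plus a list-like field stored as an array.
                stmt.execute("CREATE TABLE IF NOT EXISTS EXAMPLE ("
                        + " ID VARCHAR PRIMARY KEY,"
                        + " TAGS VARCHAR ARRAY)");
                stmt.executeUpdate("UPSERT INTO EXAMPLE (ID, TAGS)"
                        + " VALUES ('row1', ARRAY['a', 'b', 'c'])");
                conn.commit();
            }
        }
    }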

Re: Phoenix : Update protobuf-java and guava version

2016-05-01 Thread Josh Elser
Hi Naveen, The Protocol Buffer dependency on 2.5 is very unlikely to change in Phoenix as that is directly inherited from HBase (as you can imagine, these need to be kept in sync). There are efforts, in both HBase and Phoenix, underway to provide shaded-jars for each project which would allo

Re: apache phoenix json api

2016-04-19 Thread Josh Elser
cute", "connectionId": 8, "statementId": 20, "sql": "SELECT * FROM us_population", "maxRowCount": -1 } And this is the commit command response (if it can give you more insights) { "response": "resultSet", "connec

Re: apache phoenix json api

2016-04-19 Thread Josh Elser
Nope, you shouldn't need to do this. "statements" that you create using the CreateStatementRequest are very similarly treated to the JDBC Statement interface (they essentially refer to an instance of a PhoenixStatement inside PQS, actually). You should be able to create one statement and just

Re: apache phoenix json api

2016-04-17 Thread Josh Elser
Also, you're using the wrong command. You want "prepareAndExecute" not "prepareAndExecuteBatch". Josh Elser wrote: Thanks, will fix this. Plamen Paskov wrote: Ah i found the error. It should be "sqlCommands": instead of "sqlCommands", The docume

Re: apache phoenix json api

2016-04-17 Thread Josh Elser
ulation(STATE,CITY,POPULATION) VALUES('C2','City 2',100)" ] } And this is the response i receive: Error 500 HTTP ERROR: 500 Problem accessing /. Reason: com.fasterxml.jackson.core.JsonParseException: Unexpected character (',' (code 44)): was expecti

Re: apache phoenix json api

2016-04-17 Thread Josh Elser
'City 1',10)", "UPSERT INTO us_population(STATE,CITY,POPULATION) VALUES('C2','City 2',100)" ] } And this is the response i receive: Error 500 HTTP ERROR: 500 Problem accessing /. Reason: com.fasterxml.jackson.core.JsonParseException: Unexp

Re: prepareAndExecute with UPSERT not working

2016-04-17 Thread Josh Elser
What version of Phoenix are you using? Plamen Paskov wrote: Hey folks, I'm trying to UPSERT some data via the json api but no luck for now. My requests looks like: { "request": "openConnection", "connectionId": "6" } { "request": "createStatement", "connectionId": "6" } { "request": "prepareA

Re: apache phoenix json api

2016-04-13 Thread Josh Elser
For reference materials: definitely check out https://calcite.apache.org/avatica/ While JSON is easy to get started with, there are zero guarantees on compatibility between versions. If you use protobuf, we should be able to hide all schema drift from you as a client (e.g. applications you wr

Re: Error while attempting join query

2016-04-06 Thread Josh Elser
(-cc dev@phoenix) Deepak, As the name suggests, that release is targeted for HBase-0.98.x release lines. Any compatibility of an older release of HBase than 0.98 is likely circumstantial. I can't speak on behalf of the HBase community, but I feel relatively confident in suggesting that it w

Re: Phoenix Query Server - proper POST body format

2016-04-06 Thread Josh Elser
Hi Jared, This is just a bad error message on PQS' part. Sorry about that. IIRC, it was something obtuse like not finding the server-endpoint for the JSON message you sent. If you want to do a POST and use the body, you can just put the bytes for your JSON blob in there and that should be su
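A bare-bones illustration of putting the JSON blob in the POST body, assuming the default query server port 8765 and that the server is configured for JSON serialization; the connection id is arbitrary:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class PqsJsonPost {
        public static void main(String[] args) throws Exception {
            // Host is a placeholder; 8765 is the usual PQS port.
            URL url = new URL("http://pqs-host:8765/");
            String body = "{\"request\": \"openConnection\", \"connectionId\": \"conn-1\"}";

            HttpURLConnection http = (HttpURLConnection) url.openConnection();
            http.setRequestMethod("POST");
            http.setDoOutput(true);
            http.setRequestProperty("Content-Type", "application/json");
            // The JSON message goes in the request body rather than in a header.
            http.getOutputStream().write(body.getBytes(StandardCharsets.UTF_8));

            try (InputStream in = http.getInputStream()) {
                System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
            }
        }
    }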

Re: Phoenix transactions not committing.

2016-04-04 Thread Josh Elser
If you invoked a commit on PQS, it should have flushed any cached values to HBase. The general messages you described in your initial post look correct at a glance. If you have an end-to-end example of this that I can play with, I can help explain what's happening inside of PQS. If you want to

Re: Phoenix jars on http://mvnrepository.com

2016-04-04 Thread Josh Elser
Hi Pierre, 1.1.2.2.4 is not a version of Apache HBase. Might you be needing to contact a vendor for specific information? Either way, the phoenix shaded client and server (targeted for HBase server) are not attached to the Maven build which means that they are not deployed via Maven as a par

Re: Non-transactional table has transaction-like behavior

2016-04-04 Thread Josh Elser
Let's think back to before transactions were added to Phoenix. With autoCommit=false, updates to HBase will be batched in the Phoenix driver, eventually flushing on their own or whenever you invoked commit() on the connection. With autoCommit=true, updates to HBase are flushed with every exe
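A small example of the autoCommit=false behavior described above; the table and columns are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class AutoCommitBatching {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host")) {
                conn.setAutoCommit(false);
                try (PreparedStatement ps =
                         conn.prepareStatement("UPSERT INTO MY_TABLE (ID, VAL) VALUES (?, ?)")) {
                    for (int i = 0; i < 1000; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "value-" + i);
                        // With autoCommit=false these rows are buffered in the driver...
                        ps.executeUpdate();
                    }
                }
                // ...and only flushed to HBase here (or when the client-side buffer fills).
                conn.commit();
            }
        }
    }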

Re: Phoenix Query Server - 413 Entity Too Large

2016-03-30 Thread Josh Elser
Hi Jared, Sounds like https://issues.apache.org/jira/browse/CALCITE-780 That version of Phoenix (probably) is using Calcite-1.2.0-incubating. You could ask the vendor to update to a newer version, or use Phoenix 4.7.0 (directly from Apache) which is using Calcite-1.6.0. Jared Katz wrote: Th

Re: Kerberos ticket renewal

2016-03-24 Thread Josh Elser
Also, setting -Dsun.security.krb5.debug=true when you launch your Java application will give you lots of very helpful information about what is happening "under the hood". Sanooj Padmakumar wrote: Thanks Josh and everyone else .. Shall try this suggestion On 22 Mar 2016 09:36, &

Re: How do I query the phoenix query server?

2016-03-24 Thread Josh Elser
Correct, James: Phoenix-4.7.0 uses Calcite-1.6.0. This included lots of goodies, including commit/rollback support. Phoenix-4.6.0 used Calcite-1.3.0. In general, if you want to use the QueryServer, I'd strongly recommend trying to go with Phoenix-4.7.0. You'll inherit *lots* of bugfixes/improvem

Re: Kerberos ticket renewal

2016-03-21 Thread Josh Elser
Keytab-based logins do not automatically spawn a renewal thread in Hadoop's UserGroupInformation library, IIRC. HBase's RPC implementation does try to automatically re-login, but if you are not actively making RPCs, you may miss the window in which you are allowed to perform a renewal. Commonl
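A sketch of a keytab login plus an explicit re-login using Hadoop's UserGroupInformation, as discussed above; the principal and keytab path are placeholders, and a long-lived client would typically trigger the re-login periodically (e.g. from a timer) rather than once:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KeytabRelogin {
        public static void main(String[] args) throws Exception {
            // Principal and keytab path are placeholders.
            Configuration conf = new Configuration();
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            UserGroupInformation.loginUserFromKeytab(
                    "myuser@EXAMPLE.COM", "/etc/security/keytabs/myuser.keytab");

            // A keytab login does not renew itself; a mostly idle client can ask
            // UGI to re-login from the keytab before the ticket expires.
            UserGroupInformation ugi = UserGroupInformation.getLoginUser();
            ugi.checkTGTAndReloginFromKeytab();
        }
    }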

Re: Phoenix Query Server Avatica Upsert

2016-03-04 Thread Josh Elser
Yeah, I don't think the inclusion of Python code should be viewed as a barrier to inclusion (maybe just a hurdle). I've seen other projects (Ambari, iirc) which have tons of Python code and lots of integration. The serialization for PQS can be changed via a single configuration property in hba

Re: Thin Client Commits?

2016-02-22 Thread Josh Elser
I only wired up commit/rollback in Calcite/Avatica in Calcite-1.6.0 [1], so Phoenix-4.6 isn't going to have that in the binaries that you can download (Phoenix-4.6 is using 1.3.0-incubating). This should be included in the upcoming Phoenix-4.7.0. Sadly, I'm not sure why autoCommit=true would
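For reference, a sketch of an explicit commit through the thin (query server) driver; the host is a placeholder and the URL and serialization details assume a Phoenix 4.7-era thin client:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ThinClientCommit {
        public static void main(String[] args) throws Exception {
            // Thin-driver URL form; the serialization parameter must match the server's config.
            String url = "jdbc:phoenix:thin:url=http://pqs-host:8765;serialization=PROTOBUF";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement()) {
                conn.setAutoCommit(false);
                stmt.executeUpdate("UPSERT INTO MY_TABLE (ID, VAL) VALUES (1, 'x')");
                // Explicit commit needs a Phoenix/Avatica combination that wires
                // commit/rollback through the query server (Phoenix 4.7.0+ per the note above).
                conn.commit();
            }
        }
    }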

Re: Phoenix Query Server and/or Avatica Bug and/or My Misunderstanding

2016-02-16 Thread Josh Elser
Hi Steve, Sorry for the delayed response. Putting the "payload" (json or protobuf) into the POST instead of the header should be the 'recommended' way forward to avoid the limit as you ran into [1]. I think Phoenix >=4.6 was using Calcite-1.4, but my memory might be failing me. Regarding th

Re: Issue with connecting to Phoenix in kerberised cluster.

2015-12-29 Thread Josh Elser
Ns G wrote: Hi All, I have written a simple class to access phoenix. I am able to establish connection. But when executing below line i get the error. conn = DriverManager.getConnection(dbUrl); I am facing below exception when accessing phoenix through JDBC from eclipse. INFO - Call except

Re: Phoenix JDBC connection pool

2015-12-18 Thread Josh Elser
Created https://issues.apache.org/jira/browse/PHOENIX-2539 James Taylor wrote: Another good contribution would be to add this question to our FAQ. On Tue, Dec 15, 2015 at 2:20 PM, Samarth Jain mailto:sama...@apache.org>> wrote: Kannan, See my response here: https://mail-archives.
