First steps: Could not find or load main class sqlline.SqlLine

2014-08-26 Thread Jean-Marc Spaggiari
Hi, I built and installed Hadoop+HBase+Phoenix successfully with BigTop. Hadoop works well, HBase too. Now it's time for Phoenix. Following the Getting Started guide here http://phoenix.apache.org/download.html#Installation I tried to run /usr/lib/phoenix/bin/sqlline.py localhost. However, I get the

Re: First steps: Could not find or load main class sqlline.SqlLine

2014-08-26 Thread Jean-Marc Spaggiari
Ok. Looked into the sqlline.py code, exported PHOENIX_LIB_DIR to the right directory, and it now works... Just posting here in case someone faces the same issue. JM 2014-08-26 12:27 GMT-04:00 Jean-Marc Spaggiari jean-m...@spaggiari.org: Hi, I built and installed Hadoop+HBase+Phoenix successfully

Re: First steps: Could not find or load main class sqlline.SqlLine

2014-08-26 Thread Jean-Marc Spaggiari
...@apache.org: On Tue, Aug 26, 2014 at 9:40 AM, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote: Ok. Looked into sqlline.py code, exported PHOENIX_LIB_DIR to the right directory, and it now works... ​You were using the Bigtop package? Please consider filing a Bigtop JIRA.​ -- Best regards

Re: First steps: Could not find or load main class sqlline.SqlLine

2014-08-26 Thread Jean-Marc Spaggiari
days if the RC holds up). On Tue, Aug 26, 2014 at 4:46 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote: I faced this and also, BigTop doesn't compile against Phoenix 4.0.1. And Phoenix 4.0 has an hbase-default.xml issue with Hadoop 2.0. Had to do some manual stuff to fix that. I

Auto-increment field?

2014-08-27 Thread Jean-Marc Spaggiari
Hi, I have data like: CustID, URL and I want to put that into Phoenix. Is there a way to have an auto-increment field to do something like: CREATE TABLE IF NOT EXISTS testdata ( id BIGINT NOT NULL, subid AUTO-INCREMENT, url VARCHAR CONSTRAINT my_pk PRIMARY KEY (id, subid)); Idea is, I have
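
Phoenix has no AUTO-INCREMENT column type, but sequences cover this use case. A minimal sketch, assuming the table and column names from the question (the sequence name is made up here):

  CREATE SEQUENCE IF NOT EXISTS testdata_seq;

  CREATE TABLE IF NOT EXISTS testdata (
      id BIGINT NOT NULL,
      subid BIGINT NOT NULL,
      url VARCHAR
      CONSTRAINT my_pk PRIMARY KEY (id, subid));

  -- generate the "auto-increment" value at write time
  UPSERT INTO testdata (id, subid, url)
      VALUES (1, NEXT VALUE FOR testdata_seq, 'http://example.org');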

ILIKE

2014-09-22 Thread Jean-Marc Spaggiari
Hi, I have pushed a small patch to add ILIKE keyword to Phoenix. It's simple and available there: PHOENIX-1273 https://issues.apache.org/jira/browse/PHOENIX-1273 I'm pretty sure it is complete but it's a first draft for review. I still need to update the PhoenixSQL.g file. Thanks, JM

Subqueries: Missing LPAREN

2014-09-24 Thread Jean-Marc Spaggiari
Hi, Is it possible to run sub-queries with Phoenix? Something like this: select * from metadata n where L = 1 AND R = (select max(R) from metadata z where n.A = z.A); Goal is to get all rows where L=1 and R=max. Field A is the key. Thanks, JM
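
The "Missing LPAREN" error comes from the parser not accepting the subquery in that Phoenix version; later releases added subquery support. Written out, the query from the question is a correlated scalar subquery, which a version with subquery support should accept (a hedged sketch, same table and column names as in the question):

  SELECT *
  FROM metadata n
  WHERE n.L = 1
    AND n.R = (SELECT MAX(z.R) FROM metadata z WHERE z.A = n.A);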

View composite key?

2014-09-24 Thread Jean-Marc Spaggiari
Hi, Is it possible to create a view on an existing HBase table and describe the composite key? I don't see anything about that in the doc http://phoenix.apache.org/views.html but it also doesn't say that it's not possible. Would like to do something like that: CREATE VIEW t1 ( USER

Re: View composite key?

2014-09-24 Thread Jean-Marc Spaggiari
, f1.W unsigned_long, f1.P bigint, f1.N varchar, f1.E varchar, f1.S unsigned_long, f1.M unsigned_long, f1.T unsigned_int, CONSTRAINT pk PRIMARY KEY (USER, ID, VERSION) ); Thanks, James On Wed, Sep 24, 2014 at 6:21 AM, Jean-Marc Spaggiari jean-m
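
The reply above is truncated; it shows the tail of a CREATE VIEW statement declaring a composite primary key over an existing HBase table, the same way a CREATE TABLE would. A hedged reconstruction (the key column types are assumptions; only the column names are visible in the snippet):

  CREATE VIEW t1 (
      USER unsigned_long NOT NULL,
      ID unsigned_long NOT NULL,
      VERSION unsigned_long NOT NULL,
      f1.W unsigned_long,
      f1.P bigint,
      f1.N varchar,
      f1.E varchar,
      f1.S unsigned_long,
      f1.M unsigned_long,
      f1.T unsigned_int
      CONSTRAINT pk PRIMARY KEY (USER, ID, VERSION));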

Recursive queries?

2014-09-24 Thread Jean-Marc Spaggiari
Hi, We have something like this that we want to translate into Phoenix (snippet): RETURN QUERY WITH RECURSIVE first_level AS ( -- non-recursive term ( SELECT a.id AS id FROM asset a WHERE a.parent_id = p_id AND TYPE = 2 ) UNION -- Recursive Term SELECT a.id AS id FROM

Re: Recursive queries?

2014-09-25 Thread Jean-Marc Spaggiari
per level) using the IN clause support we have (i.e. by generating a query)? You could use UPSERT SELECT to dump the IDs you get back at each level into a temp table if need be and join against it for the next query. Thanks, James On Wed, Sep 24, 2014 at 1:08 PM, Jean-Marc Spaggiari jean

Re: Recursive queries?

2014-09-29 Thread Jean-Marc Spaggiari
if there were cycles, you could add a WHERE NOT IN clause. Thanks, James On Thu, Sep 25, 2014 at 5:38 AM, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote: Hi James, Thanks for the feedback. My knowledge of Phoenix and SQL is not good enough for now to jump on such a big patch
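
The approach suggested in this thread is to unroll the recursion from the client: fetch one level at a time, dump the ids of each level into a scratch table with UPSERT SELECT, and use them (via IN, or a join against the scratch table) to fetch the next level. A rough sketch under hypothetical names (asset(id, parent_id, asset_type) and a scratch table level_ids); the IN-subquery form assumes a Phoenix version with subquery support, otherwise the IN list can be generated client-side as suggested:

  CREATE TABLE IF NOT EXISTS level_ids (id BIGINT NOT NULL PRIMARY KEY);

  -- seed with the non-recursive term (123 stands in for the p_id parameter)
  UPSERT INTO level_ids
      SELECT a.id FROM asset a WHERE a.parent_id = 123 AND a.asset_type = 2;

  -- repeat from the client until no new rows show up
  UPSERT INTO level_ids
      SELECT a.id FROM asset a WHERE a.parent_id IN (SELECT id FROM level_ids);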

CQ as the key?

2014-09-29 Thread Jean-Marc Spaggiari
Hi, Can I have a column qualifier as part of a key? Doing this I define columns based on the RowKey: create view asset_metadata ( L unsigned_long not null, A unsigned_long not null, R bigint not null, s.W unsigned_long, s.P bigint, s.N varchar, s.E varchar, s.S

Re: CQ as the key?

2014-09-29 Thread Jean-Marc Spaggiari
the columns you'd likely also use when you filter on s.W in a WHERE clause. Depending on your use case, you might choose immutable/mutable and local/global - take a look here for more info: http://phoenix.apache.org/secondary_indexing.html Thanks, James On Mon, Sep 29, 2014 at 6:15 AM, Jean-Marc

Re: CQ as the key?

2014-10-01 Thread Jean-Marc Spaggiari
the query optimizer deems that it'll perform better in doing so. For example, if your query filtered on s.W, then the index might be used. There's no other way than this to get a column qualifier into the row key. Thanks, James On Mon, Sep 29, 2014 at 10:31 AM, Jean-Marc Spaggiari jean-m
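
The take-away from the replies is that the only way to get a column qualifier into a row key is through a secondary index: index the column, and the optimizer may use the index when the query filters on it. A hedged sketch against the view from the question (the index name and INCLUDE columns are made up):

  CREATE INDEX idx_w ON asset_metadata (s.W) INCLUDE (s.P, s.N);

  -- a query filtering on s.W may then be served from the index
  SELECT L, A, R, s.P FROM asset_metadata WHERE s.W = 12345;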

Re: Replication?

2014-12-09 Thread Jean-Marc Spaggiari
4.2 Phoenix version may have issues on local index). There is a test case MutableIndexReplicationIT where you can see some details. Ideally Phoenix should provide a custom replication sink so that a user doesn't have to set up replication on the index table. From: Jean-Marc Spaggiari jean-m

Re: Replication?

2014-12-10 Thread Jean-Marc Spaggiari
the sequence values on a failover event. HTH. Maybe more information than you wanted? Tell us more about how you're relying on replication when you get a chance. Thanks, James On Tue, Dec 9, 2014 at 5:00 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote: Hum. Thanks for all those updates

Re: Replication?

2014-12-11 Thread Jean-Marc Spaggiari
). Another solution if that doesn't work would be if the SYSTEM.SEQUENCE table could be replicated synchronously (HBASE-12672). TMI? HTH. James On Wed, Dec 10, 2014 at 7:42 AM, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote: Thanks James (And Andrew). I think there cannot be too much

Re: TTL

2015-02-11 Thread Jean-Marc Spaggiari
Hi Ralph, Thinking out loud... If you have an index on your table, the TTL will remove some data from the table but will not clean the references in the index table. So if you query using the index, that will return some data which doesn't exist anymore on the original table. Therefore you
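
One way to keep the two in step is to apply the same TTL to the index table as to the data table, since Phoenix passes HBase table properties through on DDL. A hedged sketch with hypothetical names; this assumes CREATE INDEX accepts the TTL property the same way CREATE TABLE does:

  CREATE TABLE events (id BIGINT NOT NULL PRIMARY KEY, host VARCHAR, val DOUBLE) TTL=86400;
  CREATE INDEX events_by_host ON events (host) TTL=86400;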

Re: taking a backup of a Phoenix database

2015-08-10 Thread Jean-Marc Spaggiari
Except that you have to snapshot EVERYTHING... If you snapshot SYSTEM.CATALOG, SYSTEM.SEQUENCE and your table, and you want to restore your table, then you will also restore those 2 system tables, which might break the other tables that you have not snapshotted nor restored... 2015-08-10 13:59 GMT-04:00 Ankit

Re: Phoenix exception with CDH 5.4.4

2015-07-17 Thread Jean-Marc Spaggiari
Have you looked at those 2 links? - http://blog.cloudera.com/blog/2015/05/apache-phoenix-joins-cloudera-labs/ - http://www.cloudera.com/content/cloudera/en/developers/home/cloudera-labs/apache-phoenix/install-apache-phoenix-cloudera-labs.pdf Seems more recent than the one you are

Re: Phoenix exception with CDH 5.4.4

2015-07-17 Thread Jean-Marc Spaggiari
-17 19:00 GMT-04:00 Alex Kamil alex.ka...@gmail.com: thanks Jean-Marc, but we don't use Cloudera manager On Fri, Jul 17, 2015 at 6:56 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote: Have you looked at those 2 links? - http://blog.cloudera.com/blog/2015/05/apache-phoenix

Re: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Put.setWriteToWAL

2015-07-15 Thread Jean-Marc Spaggiari
As Serega said. You have to use the parcel available on the Cloudera Labs repo. Because Cloudera has backported some of the 1.1 features into their 1.0 branch, some signatures changed and the default Phoenix distribution will not work with CDH. You need to make sure to follow the instructions

Re: setting up community repo of Phoenix for CDH5?

2015-10-12 Thread Jean-Marc Spaggiari
Hum. Unfortunately it's not really a script but more manual work and Jenkins :( Not sure what I can share which might help to build that back :(

Re: Phoenix map reduce

2015-09-01 Thread Jean-Marc Spaggiari
Hi Gaurav, bulk load bypasses the WAL, that's correct. It's true for Phoenix, it's true for HBase (outside of Phoenix). If you have replication activated, you will have to bulkload the data into the 2 clusters. Transfer your CSV files to the other side too and bulkload from there. JM 2015-09-01

Re: Phoenix map reduce

2015-09-01 Thread Jean-Marc Spaggiari
p 2, 2015 at 12:23 AM, Jean-Marc Spaggiari < > jean-m...@spaggiari.org> wrote: > >> Hi Gaurav, >> >> bulk load bypass the WAL, that's correct. It's true for Phoenix, it's >> true for HBase (outside of Phoenix). >> >> If you have replication activated,

Re: sqlline reporting 1 row affected when it isn't

2015-09-02 Thread Jean-Marc Spaggiari
Isn't the output the number of lines of the delete command, which is one line (the command itself), rather than the number of deleted rows? Can you try to put some rows into the table and do the delete again? Or try without the WHERE clause too? 2015-09-02 9:54 GMT-04:00 James Heather
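
The diagnostic suggested here can be played out in sqlline directly; a small hedged sketch with a hypothetical table:

  UPSERT INTO my_table (id, name) VALUES (1, 'a');
  UPSERT INTO my_table (id, name) VALUES (2, 'b');
  DELETE FROM my_table WHERE id = 1;  -- check whether this reports 1 row affected
  DELETE FROM my_table;               -- no WHERE clause: check the reported count against the remaining rows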

Re: setting up community repo of Phoenix for CDH5?

2015-09-12 Thread Jean-Marc Spaggiari
Exact. There are some code changes because of what has been backported into CDH and what has not been. But overall, it should not be rocket science. Mostly method signatures... Let us know when the repo is available so we can help... Thanks, JM 2015-09-12 18:38 GMT-04:00 Krishna

Re: setting up community repo of Phoenix for CDH5?

2015-09-16 Thread Jean-Marc Spaggiari
> > James > On 16 Sep 2015 01:02, "Andrew Purtell" <apurt...@apache.org> wrote: > >> I used dev/make_rc.sh, built with Maven 3.2.2, Java 7u79. Ubuntu build >> host. >> >> >> On Tue, Sep 15, 2015 at 4:58 PM, Jean-Marc Spaggiari < >> jean

Re: setting up community repo of Phoenix for CDH5?

2015-09-15 Thread Jean-Marc Spaggiari
ny of the necessary changes so far. >> >> I chose that branch, by the way, because it's the latest release, and is >> using the same version of HBase as CDH5.4. The master branch of the Phoenix >> repo is building a snapshot of (the forthcoming) Phoenix 4.6, against HBase >&

Re: setting up community repo of Phoenix for CDH5?

2015-09-21 Thread Jean-Marc Spaggiari
> @JM how did you get on with the parcel building? > > Has anyone managed to get 4.5 working on CDH5 now? I was going to stick > with 4.3 on our cluster until we had a parcel, but I'm now needing to use > pherf, and that doesn't seem to exist in 4.3. > > James > > > On

Re: Does apache phoenix works with MapRDB aka M7?

2015-09-21 Thread Jean-Marc Spaggiari
Hi Ashutosh, If I'm not mistaken, there are many features missing in MapRDB, like coprocessors, and Phoenix relies on them. So my guess is that Phoenix will not work on MapRDB. JM 2015-09-21 12:43 GMT-04:00 Ashutosh Sharma : > > please let me know. > -- > With best

Table replication

2016-06-09 Thread Jean-Marc Spaggiari
Hi, When Phoenix is used, what is the recommended way to do replication? Replication acts as a client on the 2nd cluster, so should we simply configure Phoenix on both clusters, and on the destination it will take care of updating the index tables, etc.? Or should all the tables on the destination

Phoenix + Spark + JDBC + Kerberos?

2016-09-15 Thread Jean-Marc Spaggiari
Hi, I tried to build a small app all under Kerberos. JDBC to Phoenix works. Client to HBase works. Client (puts) on Spark to HBase works. But JDBC on Spark to HBase fails with a message like "GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]" Keytab

Re: Phoenix + Spark + JDBC + Kerberos?

2016-09-15 Thread Jean-Marc Spaggiari
pect JDBC on Spark Kerberos authentication to work? Are you > using the principal+keytab options in the Phoenix JDBC URL or is Spark > itself obtaining a ticket for you (via some "magic")? > > > Jean-Marc Spaggiari wrote: > >> Hi, >> >> I tried to build a sm

Re: Full text query in Phoenix

2016-09-19 Thread Jean-Marc Spaggiari
HBase + Lily Indexer + SOLR will do that very well. As James said, Phoenix might not help with the full-text part. Google for that and you will find many pointers to web articles or even books. JMS 2016-09-19 9:05 GMT-04:00 Cheyenne Forbes : > Hi James, > > Thanks

Re: Phoenix + Spark + JDBC + Kerberos?

2016-09-19 Thread Jean-Marc Spaggiari
UG in log4j config > > Hard to guess at the real issue without knowing more :). Any more context > you can share, I'd be happy to try to help. > > (ps. obligatory warning about PHOENIX-3189 if you're using 4.8.0) > > Jean-Marc Spaggiari wrote: > >> Using the keytab in

Re: Cloudera parcel update

2017-11-09 Thread Jean-Marc Spaggiari
tc and getting users to push changes to the project? How do you do this in >>> Phoenix? Via another mail list, right? >>> >>> Defining regression strategy is probably the most complex bit. And >>> automating it is even more complex I think. This is where more w

Re: Cloudera parcel update

2017-10-26 Thread Jean-Marc Spaggiari
It is. The parcel is not just a packaging of the Phoenix code into a different format. It requires some modifications. However, it's doable... Andrew applied those modifications on a later version and we packaged it into a Parcel. So it's definitely doable. Might be interesting to do that for the

Re: Cloudera parcel update

2017-10-27 Thread Jean-Marc Spaggiari
FYI, you can also count on me for that. At least to perform some testing or gather information, communication, etc. Flavio, what can you leading do you need there? James, I am also interested ;) So count me in... (My very personal contribution) To setup a repo we just need to have a folder on

Re: Cloudera parcel update

2017-10-27 Thread Jean-Marc Spaggiari
f my time every month ;) JMS > > Kind of those things :) > > On Fri, Oct 27, 2017 at 2:33 PM, Jean-Marc Spaggiari < > jean-m...@spaggiari.org> wrote: > >> FYI, you can also count on me for that. At least to perform some testing >> or gather information, communic

Re: where should I put configuration file hbase-site.xml?

2018-01-29 Thread Jean-Marc Spaggiari
As Ethan said. As long as it's in your classpath, it will be picked up by the application... conf is a good candidate, but you can just put it wherever you want... 2018-01-26 3:20 GMT-05:00 Ethan : > At server side hbase-site.xml usually goes into hbaseroot/conf/ folder. So >

Connection Pooling?

2018-10-18 Thread Jean-Marc Spaggiari
Hi, Is this statement in the FAQ still valid? "If Phoenix Connections are reused, it is possible that the underlying HBase connection is not always left in a healthy state by the previous user. It is better to create new Phoenix Connections to ensure that you avoid any potential issues."

Re: Connection Pooling?

2018-10-18 Thread Jean-Marc Spaggiari
is accurate (as is the majority of the rest of the > documentation ;)) > > On 10/18/18 1:14 PM, Batyrshin Alexander wrote: > > I've already asked the same question in this thread - > > > http://apache-phoenix-user-list.1124778.n5.nabble.com/Statements-caching-td4674.html > > > >