Hi,
I built and installed Hadoop + HBase + Phoenix successfully with Bigtop. Hadoop
works well, and so does HBase. Now it's time for Phoenix.
From the Getting Started guide here
http://phoenix.apache.org/download.html#Installation I tried to run
/usr/lib/phoenix/bin/sqlline.py localhost
However, I get the
Ok. Looked into sqlline.py code, exported PHOENIX_LIB_DIR to the right
directory, and it now works...
Just posting here in case someone faces the same issue.
JM
2014-08-26 12:27 GMT-04:00 Jean-Marc Spaggiari jean-m...@spaggiari.org:
On Tue, Aug 26, 2014 at 9:40 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Ok. Looked into sqlline.py code, exported PHOENIX_LIB_DIR to the right
directory, and it now works...
You were using the Bigtop package? Please consider filing a Bigtop JIRA.
--
Best regards
days if the RC holds up).
On Tue, Aug 26, 2014 at 4:46 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
I faced this too. Also, Bigtop doesn't compile against Phoenix 4.0.1, and
Phoenix 4.0 has an hbase-default.xml issue with Hadoop 2.0. I had to do some
manual work to fix that.
I
Hi,
I have data like:
CustID, URL
and I want to put that into Phoenix. Is there a way to have an
auto-increment field to do something like:
CREATE TABLE IF NOT EXISTS testdata (
    id BIGINT NOT NULL,
    subid AUTO-INCREMENT,
    url VARCHAR
    CONSTRAINT my_pk PRIMARY KEY (id, subid)
);
Idea is, I have
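For what it's worth, Phoenix doesn't have an AUTO-INCREMENT column type, but sequences can play a similar role. A minimal sketch, assuming the table and sequence names from the message above (the sequence name is made up):

```sql
-- Declare subid as a plain BIGINT in the primary key...
CREATE TABLE IF NOT EXISTS testdata (
    id BIGINT NOT NULL,
    subid BIGINT NOT NULL,
    url VARCHAR
    CONSTRAINT my_pk PRIMARY KEY (id, subid)
);

-- ...and populate it from a sequence at write time.
CREATE SEQUENCE IF NOT EXISTS testdata_seq;
UPSERT INTO testdata (id, subid, url)
    VALUES (1, NEXT VALUE FOR testdata_seq, 'http://example.org');
```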
Hi,
I have pushed a small patch to add ILIKE keyword to Phoenix. It's simple
and available there: PHOENIX-1273
https://issues.apache.org/jira/browse/PHOENIX-1273
I'm pretty sure it is complete but it's a first draft for review. I still
need to update the PhoenixSQL.g file.
Thanks,
JM
Hi,
Is it possible to run sub-queries with Phoenix? Something like this:
select * from metadata n where L = 1 AND R = (select max(R) from
metadata z where n.A = z.A);
Goal is to get all rows where L = 1 and R = max. Field A is the key.
Thanks,
JM
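In case the correlated form above isn't accepted by your Phoenix version, an equivalent formulation using a join against a grouped derived table might work. A sketch, reusing the metadata schema from the query above:

```sql
-- max_r per A is computed once in the derived table, then joined back.
SELECT n.*
FROM metadata n
JOIN (SELECT A, MAX(R) AS max_r
      FROM metadata
      GROUP BY A) z
  ON n.A = z.A AND n.R = z.max_r
WHERE n.L = 1;
```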
Hi,
Is it possible to create a view on an existing HBase table and describe
the composite key?
I don't see anything about that in the doc
http://phoenix.apache.org/views.html but it also doesn't say that it's not
possible.
Would like to do something like that:
CREATE VIEW t1 ( USER ,
f1.W unsigned_long,
f1.P bigint,
f1.N varchar,
f1.E varchar,
f1.S unsigned_long,
f1.M unsigned_long,
f1.T unsigned_int,
CONSTRAINT pk PRIMARY KEY (USER, ID, VERSION)
);
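For reference, a complete sketch of that shape; the types for the key columns USER, ID and VERSION are assumptions and would have to match how the existing row key bytes were encoded:

```sql
-- Key column types below are illustrative guesses, not taken from the thread.
CREATE VIEW t1 (
    USER varchar not null,
    ID unsigned_long not null,
    VERSION unsigned_long not null,
    f1.W unsigned_long,
    f1.P bigint,
    f1.N varchar,
    f1.E varchar,
    f1.S unsigned_long,
    f1.M unsigned_long,
    f1.T unsigned_int
    CONSTRAINT pk PRIMARY KEY (USER, ID, VERSION)
);
```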
Thanks,
James
On Wed, Sep 24, 2014 at 6:21 AM, Jean-Marc Spaggiari
jean-m
Hi,
We have something like this that we want to translate into Phoenix
(snippet):
RETURN QUERY WITH RECURSIVE first_level AS (
-- non-recursive term
(
SELECT a.id AS id FROM asset a
WHERE a.parent_id = p_id AND TYPE = 2
)
UNION
-- Recursive Term
SELECT a.id AS id FROM
per level) using the IN clause support we have (i.e. by generating a
query)? You could use UPSERT SELECT to dump the IDs you get back at
each level into a temp table if need be and join against it for the
next query.
Thanks,
James
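The level-by-level approach described above could be sketched like this; the scratch table name is made up, and the asset columns (id, parent_id, type) and the literal parent id are taken loosely from the snippet earlier in the thread:

```sql
-- Hypothetical scratch table holding the ids found so far.
CREATE TABLE IF NOT EXISTS asset_frontier (id BIGINT NOT NULL PRIMARY KEY);

-- Seed with the non-recursive term (123 stands in for p_id).
UPSERT INTO asset_frontier
    SELECT a.id FROM asset a WHERE a.parent_id = 123 AND a.type = 2;

-- Repeat this from the client until a round adds no new rows:
UPSERT INTO asset_frontier
    SELECT a.id FROM asset a
    WHERE a.parent_id IN (SELECT id FROM asset_frontier);
```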
On Wed, Sep 24, 2014 at 1:08 PM, Jean-Marc Spaggiari
jean
if there were cycles, you could add a WHERE NOT IN clause.
Thanks,
James
On Thu, Sep 25, 2014 at 5:38 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi James,
Thanks for the feedback. My knowledge of Phoenix and SQL is not good
enough
for now to jump on such a big patch
Hi,
Can I have a column qualifier part of a key?
By doing this, I define columns based on the row key:
create view asset_metadata (
L unsigned_long not null,
A unsigned_long not null,
R bigint not null,
s.W unsigned_long,
s.P bigint,
s.N varchar,
s.E varchar,
s.S
the columns you'd likely also use when you filter on
s.W in a WHERE clause. Depending on your use case, you might choose
immutable/mutable and local/global - take a look here for more info:
http://phoenix.apache.org/secondary_indexing.html
Thanks,
James
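For example, to cover the case where queries filter on s.W, a global index could look like this (the index name and the INCLUDE list are illustrative):

```sql
-- A global index keyed on s.W; INCLUDE makes the other selected
-- columns available from the index without going back to the table.
CREATE INDEX idx_w ON asset_metadata (s.W)
    INCLUDE (s.P, s.N);
```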
On Mon, Sep 29, 2014 at 6:15 AM, Jean-Marc
the query optimizer deems that it'll perform better
in doing so. For example, if your query filtered on s.W, then the
index might be used.
There's no other way than this to get a column qualifier into the row key.
Thanks,
James
On Mon, Sep 29, 2014 at 10:31 AM, Jean-Marc Spaggiari
jean-m
4.2 Phoenix version may have issues on local
index). There is a test case MutableIndexReplicationIT where you can see
some details. Ideally, Phoenix should provide a custom replication sink so
that a user doesn't have to set up replication on the index table.
From: Jean-Marc Spaggiari jean-m
the sequence values on a failover
event.
HTH. Maybe more information than you wanted? Tell us more about how
you're relying on replication when you get a chance.
Thanks,
James
On Tue, Dec 9, 2014 at 5:00 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hum. Thanks for all those updates
). Another solution if that doesn't work would be if the
SYSTEM.SEQUENCE table could be replicated synchronously (HBASE-12672).
TMI? HTH.
James
On Wed, Dec 10, 2014 at 7:42 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Thanks James (and Andrew). I think there cannot be too much
Hi Ralph,
Thinking out loud...
If you have an index on your table, the TTL will remove some data from the
table but will not clean the references in the index table. So if you query
using the index, that will return some data which doesn't exist anymore on
the original table. Therefore you
Except that you have to snapshot EVERYTHING...
If you get SYSTEM.CATALOG, SYSTEM.SEQUENCE and your table, and you want to
restore your table, then you will also restore those 2 system tables, which
might break the other tables that you have not snapshotted nor restored...
2015-08-10 13:59 GMT-04:00 Ankit
Have you looked at those 2 links?
-
http://blog.cloudera.com/blog/2015/05/apache-phoenix-joins-cloudera-labs/
-
http://www.cloudera.com/content/cloudera/en/developers/home/cloudera-labs/apache-phoenix/install-apache-phoenix-cloudera-labs.pdf
Seems more recent than the one you are
-17 19:00 GMT-04:00 Alex Kamil alex.ka...@gmail.com:
thanks Jean-Marc, but we don't use Cloudera manager
On Fri, Jul 17, 2015 at 6:56 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Have you looked at those 2 links?
-
http://blog.cloudera.com/blog/2015/05/apache-phoenix
As Serega said. You have to use the parcel available on the Cloudera Labs
repo. Because Cloudera has backported some of the 1.1 features into their
1.0 branch, some signatures changed and the default Phoenix distribution
will not work with CDH. You need to make sure to follow the instructions
Hum. Unfortunately, it's not really a script but more manual work and Jenkins
:( Not sure what I can share that might help to build that back :(
Hi Gaurav,
Bulk load bypasses the WAL, that's correct. It's true for Phoenix, and it's
true for HBase (outside of Phoenix).
If you have replication activated, you will have to bulk load the data into
the 2 clusters. Transfer your CSV files to the other side too and bulk load
from there.
JM
2015-09-01
p 2, 2015 at 12:23 AM, Jean-Marc Spaggiari <
> jean-m...@spaggiari.org> wrote:
>
>> Hi Gaurav,
>>
>> bulk load bypass the WAL, that's correct. It's true for Phoenix, it's
>> true for HBase (outside of Phoenix).
>>
>> If you have replication activated,
Isn't the output the number of lines of the delete command, which is one
line (the command itself), and not the number of deleted rows?
Can you try to put some rows into the table and do the delete again? Or try
without the WHERE clause too?
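As a quick check, reusing the testdata table from earlier in the digest (column names illustrative), upsert a couple of rows and watch the row count sqlline reports for each DELETE:

```sql
UPSERT INTO testdata (id, subid, url) VALUES (1, 1, 'a');
UPSERT INTO testdata (id, subid, url) VALUES (2, 1, 'b');
DELETE FROM testdata WHERE id = 1;  -- should report 1 row affected
DELETE FROM testdata;               -- should report 1 row affected (the remaining row)
```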
2015-09-02 9:54 GMT-04:00 James Heather
Exactly. There are some code changes because of what has been backported
into CDH and what has not. But overall, it should not be rocket science.
Mostly method signatures...
Let us know when the repo is available so we can help...
Thanks,
JM
2015-09-12 18:38 GMT-04:00 Krishna
>
> James
> On 16 Sep 2015 01:02, "Andrew Purtell" <apurt...@apache.org> wrote:
>
>> I used dev/make_rc.sh, built with Maven 3.2.2, Java 7u79. Ubuntu build
>> host.
>>
>>
>> On Tue, Sep 15, 2015 at 4:58 PM, Jean-Marc Spaggiari <
>> jean
ny of the necessary changes so far.
>>
>> I chose that branch, by the way, because it's the latest release, and is
>> using the same version of HBase as CDH5.4. The master branch of the Phoenix
>> repo is building a snapshot of (the forthcoming) Phoenix 4.6, against HBase
>&
> @JM how did you get on with the parcel building?
>
> Has anyone managed to get 4.5 working on CDH5 now? I was going to stick
> with 4.3 on our cluster until we had a parcel, but I'm now needing to use
> pherf, and that doesn't seem to exist in 4.3.
>
> James
>
>
> On
Hi Ashutosh,
If I'm not mistaken, there are many features missing in MapR-DB, like
coprocessors, and Phoenix relies on them. So my guess is that Phoenix will
not work on MapR-DB.
JM
2015-09-21 12:43 GMT-04:00 Ashutosh Sharma :
>
> please let me know.
> --
> With best
Hi,
When Phoenix is used, what is the recommended way to do replication?
Replication acts as a client on the 2nd cluster, so should we simply
configure Phoenix on both clusters, and on the destination it will take care
of updating the index tables, etc.? Or should all the tables on the
destination
Hi,
I tried to build a small app all under Kerberos.
JDBC to Phoenix works
Client to HBase works
Client (puts) on Spark to HBase works.
But JDBC on Spark to HBase fails with a message like "GSSException: No
valid credentials provided (Mechanism level: Failed to
find any Kerberos tgt)]"
Keytab
pect JDBC on Spark Kerberos authentication to work? Are you
> using the principal+keytab options in the Phoenix JDBC URL or is Spark
> itself obtaining a ticket for you (via some "magic")?
>
>
> Jean-Marc Spaggiari wrote:
>
>> Hi,
>>
>> I tried to build a sm
HBase + Lily Indexer + Solr will do that very well. As James said, Phoenix
might not help with the full-text part. Google for that and you will find
many pointers to web articles or even books.
JMS
2016-09-19 9:05 GMT-04:00 Cheyenne Forbes :
> Hi James,
>
> Thanks
UG in log4j config
>
> Hard to guess at the real issue without knowing more :). Any more context
> you can share, I'd be happy to try to help.
>
> (ps. obligatory warning about PHOENIX-3189 if you're using 4.8.0)
>
> Jean-Marc Spaggiari wrote:
>
>> Using the keytab in
tc and getting users to push changes to the project? How do you do this in
>>> Phoenix? Via another mail list, right?
>>>
>>> Defining regression strategy is probably the most complex bit. And
>>> automating it is even more complex I think. This is where more w
It is. The parcel is not just a packaging of the Phoenix code into a
different format. It requires some modifications. However, it's doable...
Andrew applied those modifications on a later version and we packaged it
into a Parcel. So it's definitely doable. Might be interesting to do that
for the
FYI, you can also count on me for that. At least to perform some testing or
gather information, communication, etc.
Flavio, what kind of leading do you need there?
James, I am also interested ;) So count me in... (My very personal
contribution)
To setup a repo we just need to have a folder on
f my
time every month ;)
JMS
>
> Kind of those things :)
>
> On Fri, Oct 27, 2017 at 2:33 PM, Jean-Marc Spaggiari <
> jean-m...@spaggiari.org> wrote:
>
>> FYI, you can also count on me for that. At least to perform some testing
>> or gather information, communic
As Ethan said. As long as it's in your classpath, it will be picked up by
the application... conf is a good candidate, but you can just put it
wherever you want...
2018-01-26 3:20 GMT-05:00 Ethan :
> At server side hbase-site.xml usually goes into hbaseroot/conf/ folder. So
>
Hi,
Is this statement in the FAQ still valid?
"If Phoenix Connections are reused, it is possible that the underlying
HBase connection is not always left in a healthy state by the previous
user. It is better to create new Phoenix Connections to ensure that you
avoid any potential issues."
is accurate (as is the majority of the rest of the
> documentation ;))
>
> On 10/18/18 1:14 PM, Batyrshin Alexander wrote:
> > I've already asked the same question in this thread -
> >
> http://apache-phoenix-user-list.1124778.n5.nabble.com/Statements-caching-td4674.html
> >
> >