Is there a standard way to move all Apache Phoenix data to a new cluster?
And how long would it take to move 2 terabytes of Phoenix rows?
Scenario:
I launched my platform using virtual servers (as it's cheaper), but now I am
ready to move to dedicated servers and I want to know the right way to
Is it possible to delete an array element by value in phoenix?
Regards,
Cheyenne O. Forbes
I want to take advantage of Phoenix sequences to create IDs for messages of
chats, but instead of "SELECT NEXT VALUE FOR chat_id" I want to do
something like "SELECT NEXT VALUE FOR message_id WHERE parent_id = {the
chat id}". Which is better: create a sequence for each chat, or find
a way to
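Phoenix's NEXT VALUE FOR takes no WHERE clause, so per-chat numbering means either one CREATE SEQUENCE per chat or an application-level scheme. A toy Python sketch of the per-parent counter idea (hypothetical helper, not a Phoenix API):

```python
from collections import defaultdict

class PerParentIds:
    """Client-side sketch of 'one sequence per chat': each parent_id
    gets its own independent counter. Illustrative only."""

    def __init__(self):
        self._next = defaultdict(lambda: 1)  # next unissued id per chat

    def next_value(self, parent_id):
        value = self._next[parent_id]
        self._next[parent_id] = value + 1
        return value
```

A real deployment would need the counter to live server-side (e.g. an atomic increment on an HBase counter cell) to stay safe under concurrent writers.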
I'm using Phoenix to store user sessions. The table's TTL is set to 3 days,
and I'd like to have the 3 days start over if the user comes back before
the previous 3 days have ended.
Thanks,
Cheyenne O. Forbes
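HBase TTL is measured from each cell's timestamp, so re-upserting the session row on every visit effectively restarts the 3-day window. A toy Python model of that refresh-on-write behavior (illustrative only, not the HBase implementation):

```python
TTL_SECONDS = 3 * 24 * 3600  # table TTL of 3 days

class SessionStore:
    """Toy model of HBase's per-cell TTL: a cell expires TTL seconds
    after its timestamp, so every upsert restarts the window."""

    def __init__(self):
        self._written_at = {}  # session_id -> last write time (epoch secs)

    def upsert(self, session_id, now):
        self._written_at[session_id] = now  # fresh cell timestamp

    def get(self, session_id, now):
        ts = self._written_at.get(session_id)
        if ts is None or now - ts > TTL_SECONDS:
            return None  # cell has aged out
        return session_id
```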
Which of these three would be more efficient, based on Phoenix's design?
1. 4 = ARRAY_ELEM(ARRAY[1,2,3,4,5,6], 4)
2. '4' = SUBSTR('123456', 0, 4)
3. '4' LIKE '123%'
Regards,
Cheyenne
Can I have something like "select id from table limit 10"?
Regards,
Cheyenne
Do joins take place on all the region servers and then the results get bundled
together and sent to the client, or is the data from all the "joining" tables
collected on a single server?
Regards,
Cheyenne O. Forbes
re of, so no.
>
> On Tue, Jun 6, 2017 at 5:27 PM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
>> Can i do
>>
>> UPSERT(id, myArray) VALUES (1, array_reduce(myArray, "index", 3))
>> the second argument would be either "index&
ad, the
>>array may be manipulated on the client-side and then upserted back in its
>>entirety.
>>
>> Thanks,
>> Sergey
>>
>> On Mon, Jun 5, 2017 at 7:25 PM, Cheyenne Forbes <
>> cheyenne.osanu.for...@gmail.com> wrote:
>>
>>> Can I delete elements from Phoenix arrays?
>>>
>>> Regards,
>>>
>>> Cheyenne O. Forbes
>>>
>>
>>
>
Can I delete elements from Phoenix arrays?
Regards,
Cheyenne O. Forbes
feature is enabled by default
> which means the column names declared in the Phoenix schema are going to be
> different from the column qualifiers in hbase. If you would like to
> disable column mapping, set the COLUMN_ENCODED_BYTES=NONE property in your DDL.
>
> On Mon, Jun 5, 2017 at 1:0
Can anyone please help?
On Sat, Jun 3, 2017 at 8:51 PM, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:
> I am doing some analytics which require me to scan through a phoenix
> created table with hbase instead of a phoenix select query.
>
> I created a column wi
I am doing some analytics which require me to scan through a phoenix
created table with hbase instead of a phoenix select query.
I created a column with the name 'fname' but a scan with hbase shell shows
"\x80\x0F":
\x00\x00Z\xF3\x10z@\x14column=PERSONAL:\x80\x0F,
timestamp=1496360923816,
Elser <els...@apache.org> wrote:
> The log4j.properties which you have configured to be on the HBase
> RegionServer classpath. I don't know how you configured your system.
>
> On 5/25/17 2:02 PM, Cheyenne Forbes wrote:
>
>> Which one of the files? I found 4
>>
>
> On 5/24/17 11:02 AM, Cheyenne Forbes wrote:
>
>> I want to output the steps of execution of my UDF but I can't find the
>> logs; I searched the region server log in /usr/local/hbase/logs/
>>
>> public static final Log LOG = LogFactory.getLog(MyUDF.class);
>
Which is more efficient in a heavy-usage platform?
1. Join 8 tables with billions of rows
2. Select the "primary row" from a table, then run multiple select
queries on the other tables using each primary key returned from the first
table in a for loop on the client side
Regards,
I want to output the steps of execution of my UDF but I can't find the logs;
I searched the region server log in /usr/local/hbase/logs/
public static final Log LOG = LogFactory.getLog(MyUDF.class);
public boolean evaluate(Tuple tuple, ImmutableBytesWritable ptr) {
    LOG.info("UDF
any date decided yet?
Is there anyway to do real time analytics on data with phoenix?
Regards,
Cheyenne O. Forbes
Chief Executive Officer
Avapno Omnitech, Limited
Chief Operating Officer
Avapno Solutions Co. Limited
Chief Technology Officer
ZirconOne Corporation
Chairman
Avapno Assets, LLC
Bethel Town P.O
t;hbase.zookeeper.quorum");```
>
>
> Cheyenne Forbes wrote:
>
>> Can I access the value of "hbase.zookeeper.quorum" in my UDF?
>>
>> Regards,
>>
>> Cheyenne O. Forbes
>>
>
Can I access the value of "hbase.zookeeper.quorum" in my UDF?
Regards,
Cheyenne O. Forbes
Any updates on how I'd go about getting "HRegion" in a UDF?
Regards,
Cheyenne O. Forbes
On Wed, Apr 19, 2017 at 6:03 PM, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:
> At postOpen the location of the lucene directory to be used for the region
>
I get "Undefined column family. familyName=f" whenever I run the following
query; it works without the ORDER BY, and works with the ORDER BY if it's not
a union but just one select statement:
SELECT
p.name
FROM
person p
JOIN
friends f
When I run "$PHOENIX_HOME/bin/bin/queryserver.py stop" I get "no Query
Server to stop because PID file not found,
/tmp/phoenix/root-queryserver.pid"
Regards,
Cheyenne O. Forbes
I created a jar with:
jar -cf $HBASE_HOME/lib/phoenix-udj.jar UDF.java
but I get "ClassNotFoundException" when I try to use the UDF in a simple
select query.
Regards,
Cheyenne O. Forbes
,
Cheyenne O. Forbes
On Mon, May 1, 2017 at 7:54 PM, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:
> I see "master.HMaster: Failed to become active master
> java.net.ConnectException: Call From 7a2df40d1596/172.17.0.2 to
> localhost:9000 failed on connection
On Mon, May 1, 2017 at 6:55 PM, York, Zach <zy...@amazon.com> wrote:
> This means that the HBase Master can’t connect to ZooKeeper. Can you check
> the HBase logs to see any exceptions there?
>
>
Also when I run "list" in "hbase shell" I get "ERROR: Can't get master
address from ZooKeeper; znode data == null"
Regards,
Cheyenne O. Forbes
On Mon, May 1, 2017 at 4:30 PM, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:
> I see [
run -it --name phoenix -p 8765:8765 avapno/apache-phoenix
>
>
>
> Regards,
> Will
>
> *From: *Cheyenne Forbes <cheyenne.osanu.for...@gmail.com>
> *Reply-To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
> *Date: *Saturday, April 29, 2017 at 1:35 PM
operations
are "tied" to hbase hooks
Regards,
Cheyenne O. Forbes
On Wed, Apr 19, 2017 at 4:21 PM, Sergey Soldatov <sergeysolda...@gmail.com>
wrote:
> How do you handle HBase region splits and merges with such architecture?
>
> Thanks,
> Sergey
>
> On W
ions wrt potential new features?
>
> On Wed, Apr 19, 2017 at 8:53 AM Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
>> I'd still need "HRegion MyVar;", because I'd still need the name of
>> the region where the row of the id passed to the U
n Tue, Apr 18, 2017 at 6:27 PM, Sergey Soldatov <sergeysolda...@gmail.com>
wrote:
> I mean you need to modify Phoenix code itself to properly support such
> kind of features.
>
> Thanks,
> Sergey
>
> On Tue, Apr 18, 2017 at 3:52 PM, Cheyenne Forbes <
> cheyenne.osan
the Phoenix layer in the way
> like global/local indexes are implemented.
>
> Thanks,
> Sergey
>
> On Tue, Apr 18, 2017 at 12:34 PM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
>> I am creating a plugin that uses Lucene to index text fields and I
nce and get this HRegion using the key that is processed by
>> evaluate function. But it's a completely wrong approach for both HBase and
>> Phoenix. And it's not clear for me why SQL query may need access to the
>> region internals.
>>
>> Thanks,
>> Sergey
>
I am creating a plugin that uses Lucene to index text fields, and I need to
access getConf() and getFilesystem() of HRegion. The Lucene indexes
are split with the regions, so I need "HRegion MyVar;". I am positive
the UDF will run on the region server and not the client.
so there is no way of getting HRegion in a UDF?
Would my_udf be executed on the region server where the row of the column
that is passed to it is located?
I was wondering if the arrays are stored in the order I add them, or if they are
sorted otherwise (maybe for performance reasons).
If I have a coprocessor of class "MyCoprocessor" which extends
"BaseRegionObserverCoprocessor" is it possible to "access" it from a
Phoenix UDF?
Regards,
Cheyenne O. Forbes
npoint any other non supported constructs.
>
> On Wed, Apr 12, 2017 at 6:49 AM Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
>> I was using this (http://rextester.com/VXZONO82847) in MySQL but I am
>> not able to use it in Phoenix. Basically what the quer
So I can't get the name of the table, the name of the column, the id
of the row, or the region that "ptr" belongs to after calling:
Expression arg = getChildren().get(0);
if (!arg.evaluate(tuple, ptr)) {
    return false;
}
Thank you James, could you also answer my other question?
https://lists.apache.org/thread.html/5dfd0aecf5e6325b707fed4533f1e727886c338be762d6aaccfcf2f3@%3Cuser.phoenix.apache.org%3E
Can someone show an example of how to declare a variable (and change its
value) in Phoenix, as I would with @ in front of the variable name in MySQL?
(example: @my_variable := "my value")
Thanks,
Cheyenne
or get the region of the table after calling:
Expression arg = getChildren().get(0);
if (!arg.evaluate(tuple, ptr)) {
    return false;
}
Anyone?
Is it possible to get the Region and ID of the current row in evaluate()
when creating a user defined function?
Could you show an example?
or should I say "declare variables in Phoenix" instead of "defining
variables in Phoenix"?
Can you show me an example of defining variables in Phoenix (not sending a
parameter)? I am getting errors; how would I do this in Phoenix:
@my_variable := CASE WHEN my_column IS NULL THEN "this value" ELSE "that
value" END
Will I be able to change the value of userId from within the query?
:userId = CASE WHEN userId > 10 THEN userId ELSE (userId + 1) END
Anyone?
Can I define parameters as I would in MySQL with @var_name := "value"?
example:
@rn := CASE WHEN my_column IS NULL THEN "this value" ELSE "that value" END
I need it for a query I used in MySQL to work in Phoenix
Regards,
Cheyenne O. Forbes
Anyone?
If I use an HBase patch/function and want to use that function on a column
in Phoenix, how should I do it?
It turns out that the only way to connect to Phoenix from the latest version
of SQuirreL is with
"jdbc:phoenix:thin:url=http://172.17.0.2:8765;serialization=PROTOBUF"
instead of just "jdbc:phoenix:thin:url=http://172.17.0.2:8765", if I'm
using the default Phoenix 4.9.0 settings.
I do see
org/apache/phoenix/shaded/org/apache/http/conn/ssl/SSLConnectionSocketFactory.class
anything else?
Can anyone try to see if they get the same error?
I've used SQuirreL SQL Client before, but now I'm trying SQuirreL's
snapshot-20170214_2214 with phoenix-4.9.0-HBase-1.2-client.jar and it won't work.
URL field: jdbc:phoenix:thin:url=http://172.17.0.2:8765
Class name: org.apache.phoenix.queryserver.client.Driver
Error:
I am not sure how sequences work in Phoenix, but something popped up in my
mind:
if I request 100 values from a sequence, will there be 100 values stored in
the database, or just one value ("100") telling Phoenix the next
number to be incremented?
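Phoenix keeps a single current value per sequence (in the SYSTEM.SEQUENCE table) and, when a cache is used, bumps that stored value once by the cache size while the client hands out the reserved range locally; it does not store 100 rows. A toy Python model of that caching behavior (illustrative, not Phoenix internals):

```python
class CachedSequence:
    """Toy model of a cached sequence: one stored value "server-side",
    bumped by cache_size per reservation; the reserved range is served
    locally without further server trips."""

    def __init__(self, start=1, cache_size=100):
        self.stored_value = start      # the one value kept "server-side"
        self.cache_size = cache_size
        self.server_round_trips = 0
        self._cache = iter(())         # locally reserved, unserved values

    def next_value(self):
        for value in self._cache:      # serve from the local cache
            return value
        # cache exhausted: one "server" update reserves the next batch
        first = self.stored_value
        self.stored_value += self.cache_size
        self.server_round_trips += 1
        self._cache = iter(range(first + 1, self.stored_value))
        return first
```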
Hi! This is the ezmlm program. I'm managing the
user@phoenix.apache.org mailing list.
Messages to you from the user mailing list seem to
have been bouncing. I've attached a copy of the first bounce
message I received.
If this message bounces too, I will send you a probe. If the probe bounces, I
I can't find the difference between the date/time types; aren't all of them
the same? Also, should I parse them as int or string?
TIME Type
TIME
The time data type. The format is yyyy-MM-dd hh:mm:ss, with both the date
and time parts maintained. Mapped to java.sql.Time. The binary
representation is
My project depends heavily on protobuf2; can I tell Phoenix which version of
protobuf to use when I am sending a request?
Can I use the SQL WITH clause in Phoenix instead of "untidy" subqueries?
are there any recommended specs for the servers?
thank you
So instead of doing:
query("select from table where c1 = ? or c2 = ?", [my_id, my_id])
i would do:
query("select from table where c1 = ?1 or c2 = ?1", [my_id])
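Standard JDBC-style placeholders are positional, so `?1` isn't accepted; a small client-side rewrite can expand numbered placeholders into plain `?` markers with a repeated parameter list. A sketch (the `?1` syntax here is an invented convenience, not Phoenix syntax):

```python
import re

def expand_numbered_params(sql, params):
    """Expand '?1'-style numbered placeholders into plain positional '?'
    markers plus the matching flat parameter list."""
    flat = []

    def repl(match):
        # ?N refers to the N-th entry of the compact params list
        flat.append(params[int(match.group(1)) - 1])
        return "?"

    return re.sub(r"\?(\d+)", repl, sql), flat
```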
Is it efficient to query a VERY huge Phoenix database this way?
(THIS IS NOT (YET) A REAL PROGRAMMING LANGUAGE)
chats = phoenix.query(" select id, name from chats where participant1 = ?
or participant2 = ? ", [user_id, user_id]);
for_each( chats ) {
    participants = phoenix.query(" select p.id,
so why would I choose Phoenix over Spark?
Why would/should I care about spark/spark plugin when I already have
phoenix?
I made three protobuf clients for Phoenix, in C++, Python and Erlang.
How can I make these "official", like lalinsky's Phoenix JSON Python
client?
The story: I first created my application in Python and used lalinsky's
JSON client, but later found out Python (and JSON) weren't my best
can I add versions of a row and select versions of a row as I can when
using hbase alone?
Is
delete from table
slower than
delete 'table', 'row'
That's the question I should've asked myself, no?
How can I get it done as paid work?
Are there people who do this for free?
Check out this post for loading data from MySQL to Ignite
https://dzone.com/articles/apache-ignite-how-to-read-data-from-persistent-sto
and this one (recommended) on how to UPSERT to Phoenix on Ignite PUT,
delete, etc.:
https://apacheignite.readme.io/docs/persistent-store#cachestore-example
May I ask which in memory db are you using?
Hi Anil,
Basically what you want to do is copy all the data you had input with
Phoenix to your in memory db?
I was wondering because it seems extra wordy
Hi James,
Thanks a lot, I found a link showing how to integrate hbase with lucene
https://itpeernetwork.intel.com/idh-hbase-lucene-integration/
this query fails:
SELECT COUNT(fr.friend_1), u.first_name
>
> FROM users AS u
>
> LEFT JOIN friends AS fr ON u.id = fr.friend_2
>
>
with:
SQLException: ERROR 1018 (42Y27): Aggregate may not contain columns not in
> GROUP BY. U.FIRST_NAME
>
TABLES:
users table with these
Hi Anil,
If I have:
users table with these columns ( id, first_name, last_name )
friends table with these columns ( friend_1, friend_2 )
> user_posts table with these columns ( user_id, post_text, date_time )
in hbase (phoenix) and I want to view all user posts (post_text) with
similar
Hi James,
I found this for Hbase
https://issues.apache.org/jira/browse/HBASE-3529
It's a patch that can be added to HBase, based on what I am seeing.
- Can it be fast?
- does it use the Hbase regex feature?
- how can I make it case insensitive? so when I do "LIKE 'query%' " the
results include "Query"
- Can I get millisecond results using "WHERE column LIKE" on a large
table? couple terabytes of data
- is it recommended
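One common workaround for case-insensitive prefix matching is comparing uppercased values, e.g. UPPER(column) LIKE 'QUERY%' (Phoenix also documents an ILIKE operator; check availability for your version). The predicate's logic, as a tiny Python sketch:

```python
def ci_prefix_match(value, prefix):
    """Case-insensitive prefix match: the same predicate as the SQL
    workaround UPPER(column) LIKE 'QUERY%'."""
    return value.upper().startswith(prefix.upper())
```

Note that wrapping the column in UPPER() generally defeats an index on the raw column; a functional index on UPPER(column) is the usual fix.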
Thank you, I got the error because I copied the queryserver jar instead of
the server jar :)
does anyone have an idea whats causing this?
here (root-queryserver.log):
> 2016-09-16 23:40:18,378 INFO
> org.apache.hadoop.hbase.client.RpcRetryingCaller:
> 2016-09-16 23:40:18,420 WARN
> org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel: Call failed on
> IOException
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
Yes James, through the query server. Josh, it doesn't show any errors, it just
hangs there for minutes.
Regards,
Cheyenne Forbes
> On Thu, Sep 15, 2016 at 4:42 PM, Josh Elser <josh.el...@gmail.com> wrote:
>
>> The error you see would also be rather helpful.
>>
>>
Yes James, through the query server. Josh, it doesn't show any errors, it just
hangs there for minutes.
Regards,
Cheyenne Forbes
I was using Phoenix 4.4, then I switched to 4.8 because I thought it was
related to version 4.4 (both on HBase 1.1.2); neither JSON nor
protobufs works.
I tried (also using the OUTER keyword):
left join
right join
inner join
If I were to use Spark (via the Python API, for example), would the query be
processed on my web servers or on a separate server, as in Phoenix?
Regards,
Cheyenne Forbes
I've been thinking: is Spark SQL faster than Phoenix (or phoenix-spark)
for selects with joins on large data (for example, Instagram's size)?
Regards,
Cheyenne Forbes
Thanks everyone, I will be using Phoenix for simple input/output and
the phoenix_spark plugin (https://phoenix.apache.org/phoenix_spark.html)
for more complex queries. Is that the smart thing to do?
Regards,
Cheyenne Forbes
esource Management (Fair Scheduling)
> * Additional SQL Features (Windowing ,etc.)
> * Machine Learning Libraries
>
>
> Regards,
> John
>
> > On Sep 11, 2016, at 2:45 AM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
> >
> > I re
I realized there is a spark plugin for phoenix, any use cases? why would I
use spark with phoenix instead of phoenix by itself?
HBase 1.1, Phoenix 4.4.0; core and server jars are in the HBase lib folder.
Everything works except join queries:
select u.id, b.book
from users as u
inner join books as b
on u.id = b.author
I am using Phoenix for my platform but I can't do full-text queries:
"SELECT ID, FirstName, Lastname FROM users
WHERE MATCH (FirstName, Lastname)
AGAINST ('first_name last_name' IN BOOLEAN MODE)
AND [Searcher not blocked by user]"
AND [Searcher not blocked by user]
Regards,
Cheyenne