Transfer all data to a new Phoenix cluster

2017-11-13 Thread Cheyenne Forbes
Is there a standard way to move all Apache Phoenix data to a new cluster?
And how long would it take to move 2 terabytes of Phoenix rows?

Scenario:

I launched my platform using virtual servers (as it's cheaper), but now I am
ready to move to dedicated servers. I want to know the right way to move
the data to a new cluster so my users don't hunt me down to torture me for
messing up the data they trusted my platform with.

Regards,
Cheyenne
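
One way to do this, sketched under the assumption that both clusters run
compatible HBase versions (table names and cluster URIs below are
hypothetical): snapshot each Phoenix-backed HBase table, export the snapshot
to the new cluster, clone it there, and re-run the original Phoenix DDL so
SYSTEM.CATALOG knows about the tables. Transfer time for 2 TB is mostly
bounded by network bandwidth, since ExportSnapshot copies HFiles with a
MapReduce job.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.snapshot.ExportSnapshot;
import org.apache.hadoop.util.ToolRunner;

public class MigrateTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // 1. Snapshot the table on the source cluster.
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            admin.snapshot("my_table_snap", TableName.valueOf("MY_TABLE"));
        }
        // 2. Copy the snapshot's HFiles to the destination cluster's HDFS;
        //    the destination URI here is hypothetical.
        ToolRunner.run(conf, new ExportSnapshot(), new String[] {
                "-snapshot", "my_table_snap",
                "-copy-to", "hdfs://new-cluster:9000/hbase",
                "-mappers", "16"});
        // 3. On the new cluster: clone_snapshot from the hbase shell, then
        //    re-run the original CREATE TABLE / CREATE INDEX DDL in Phoenix
        //    so the system catalog is rebuilt.
    }
}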


Delete array element by value

2017-09-24 Thread Cheyenne Forbes
Is it possible to delete an array element by value in Phoenix?

Regards,
Cheyenne O. Forbes


Sequence per ID

2017-09-24 Thread Cheyenne Forbes
I want to take advantage of Phoenix sequences to create IDs for messages in
chats. But instead of "*SELECT NEXT VALUE FOR chat_id*" I want to do
something like "*SELECT NEXT VALUE FOR message_id WHERE parent_id = {the
chat id}*". Which is better: creating a sequence for each chat, or finding a
way to allow "*SELECT NEXT VALUE FOR message_id WHERE parent_id = {the chat
id}*"?

Regards,
Cheyenne O. Forbes
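
For what it's worth, a minimal sketch of the sequence-per-chat variant (the
sequence naming convention is hypothetical; Phoenix's NEXT VALUE FOR does not
take a WHERE clause):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class PerChatMessageIds {
    // Hypothetical convention: one Phoenix sequence per chat, addressed by
    // name, since NEXT VALUE FOR cannot be filtered with a WHERE clause.
    static long nextMessageId(Connection conn, long chatId) throws Exception {
        String seq = "MSG_SEQ_" + chatId;
        try (Statement st = conn.createStatement()) {
            // IF NOT EXISTS makes the first use of a new chat cheap to write.
            st.execute("CREATE SEQUENCE IF NOT EXISTS " + seq);
            try (ResultSet rs = st.executeQuery("SELECT NEXT VALUE FOR " + seq)) {
                rs.next();
                return rs.getLong(1);
            }
        }
    }
}

Each sequence is a row in SYSTEM.SEQUENCE, so this trades a simple query for
one sequence row per chat.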


Can I reset a row's TTL?

2017-08-06 Thread Cheyenne Forbes
I'm using Phoenix to store user sessions. The table's TTL is set to 3 days
and I'd like to have the 3 days start over if the user comes back before
the previous 3 days have ended.

Thanks,

Cheyenne O. Forbes
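
HBase TTL is measured from each cell's timestamp, so re-upserting the row
gives its cells fresh timestamps and restarts the countdown. A minimal sketch
(table and column names hypothetical; note that only the cells actually
rewritten get new timestamps):

import java.sql.Connection;
import java.sql.PreparedStatement;

public class TouchSession {
    // Re-writing the row's cells resets the table-level TTL clock for them,
    // because HBase expires each cell based on its own timestamp.
    static void touch(Connection conn, String sessionId, String data) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPSERT INTO SESSIONS(ID, DATA) VALUES(?, ?)")) {
            ps.setString(1, sessionId);
            ps.setString(2, data);
            ps.executeUpdate();
            conn.commit();  // Phoenix buffers mutations until commit
        }
    }
}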


Which would be more efficient based on HBase/Phoenix design?

2017-06-11 Thread Cheyenne Forbes
Which of these three would be more efficient based on Phoenix's design?

   1. 4 = ARRAY_ELEM(ARRAY[1,2,3,4,5,6], 4)
   2. '4' = SUBSTR('123456', 0, 4)
   3. '4' LIKE '123%'


Regards,
Cheyenne


Maximum (LIMIT) possible results

2017-06-11 Thread Cheyenne Forbes
Can I have something like *"select id from table limit 10"*?

Regards,

Cheyenne


Where do joins take place?

2017-06-07 Thread Cheyenne Forbes
Do joins take place on all the region servers, with the results then bundled
together and sent to the client, or is the data from all the "joining" tables
collected on a single server?

Regards,
Cheyenne O. Forbes


Re: Delete from Array

2017-06-06 Thread Cheyenne Forbes
I know; I was asking whether it would work if I made a UDF that accepts the
array, removes the element, then returns the new one in an UPSERT.

Regards,
Cheyenne


On Tue, Jun 6, 2017 at 7:32 PM, James Taylor <jamestay...@apache.org> wrote:

> There's no array_reduce built-in function that I'm aware of, so no.
>
> On Tue, Jun 6, 2017 at 5:27 PM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
>> Can I do
>>
>> *UPSERT(id, myArray) VALUES (1, array_reduce(myArray, "index", 3))*
>> the second argument would be either "index" or "value" (what to delete
>> by) and the last would be the index or value to delete
>>
>> Regards,
>> Cheyenne
>>
>> On Tue, Jun 6, 2017 at 6:17 PM, James Taylor <jamestay...@apache.org>
>> wrote:
>>
>>> Please feel free to file a JIRA. Though you can delete an element by
>>> using Java code, it'd be nice to have a built-in function to do the same.
>>> Functions like ARRAY_REMOVE_ELEM, ARRAY_SUB_ARRAY would be useful.
>>> Contributions are of course much appreciated.
>>>
>>> Thanks,
>>> James
>>>
>>>
>>> On Tue, Jun 6, 2017 at 3:16 PM, Sergey Soldatov <
>>> sergeysolda...@gmail.com> wrote:
>>>
>>>> From the Apache Phoenix documentation:
>>>>
>>>>
>>>>- Partial update of an array is currently not possible. Instead,
>>>>the array may be manipulated on the client-side and then upserted back 
>>>> in
>>>>its entirety.
>>>>
>>>> Thanks,
>>>> Sergey
>>>>
>>>> On Mon, Jun 5, 2017 at 7:25 PM, Cheyenne Forbes <
>>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>>
>>>>> Can I delete elements from Phoenix arrays?
>>>>>
>>>>> Regards,
>>>>>
>>>>> Cheyenne O. Forbes
>>>>>
>>>>
>>>>
>>>
>>
>


Re: Delete from Array

2017-06-06 Thread Cheyenne Forbes
Can I do

*UPSERT(id, myArray) VALUES (1, array_reduce(myArray, "index", 3))*
the second argument would be either "index" or "value" (what to delete by)
and the last would be the index or value to delete

Regards,
Cheyenne

On Tue, Jun 6, 2017 at 6:17 PM, James Taylor <jamestay...@apache.org> wrote:

> Please feel free to file a JIRA. Though you can delete an element by using
> Java code, it'd be nice to have a built-in function to do the same.
> Functions like ARRAY_REMOVE_ELEM, ARRAY_SUB_ARRAY would be useful.
> Contributions are of course much appreciated.
>
> Thanks,
> James
>
>
> On Tue, Jun 6, 2017 at 3:16 PM, Sergey Soldatov <sergeysolda...@gmail.com>
> wrote:
>
>> From the Apache Phoenix documentation:
>>
>>
>>- Partial update of an array is currently not possible. Instead, the
>>array may be manipulated on the client-side and then upserted back in its
>>entirety.
>>
>> Thanks,
>> Sergey
>>
>> On Mon, Jun 5, 2017 at 7:25 PM, Cheyenne Forbes <
>> cheyenne.osanu.for...@gmail.com> wrote:
>>
>>> Can I delete elements from Phoenix arrays?
>>>
>>> Regards,
>>>
>>> Cheyenne O. Forbes
>>>
>>
>>
>
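
A client-side sketch of what Sergey's quoted documentation describes (table,
column, and element types are hypothetical; an INTEGER ARRAY is assumed,
which the Phoenix driver returns as a primitive int[]): read the array, drop
the element in Java, and upsert it back in its entirety.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class RemoveArrayElement {
    static void removeValue(Connection conn, long id, int victim) throws SQLException {
        int[] current;
        try (PreparedStatement sel = conn.prepareStatement(
                "SELECT myArray FROM T WHERE id = ?")) {
            sel.setLong(1, id);
            try (ResultSet rs = sel.executeQuery()) {
                rs.next();
                current = (int[]) rs.getArray(1).getArray();  // assumes INTEGER ARRAY
            }
        }
        // Partial update of an array isn't possible, so rebuild it on the
        // client without the unwanted value and upsert the whole array back.
        List<Integer> kept = new ArrayList<>();
        for (int v : current) {
            if (v != victim) kept.add(v);
        }
        try (PreparedStatement up = conn.prepareStatement(
                "UPSERT INTO T(id, myArray) VALUES(?, ?)")) {
            up.setLong(1, id);
            up.setArray(2, conn.createArrayOf("INTEGER", kept.toArray()));
            up.executeUpdate();
            conn.commit();
        }
    }
}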


Delete from Array

2017-06-05 Thread Cheyenne Forbes
Can I delete elements from Phoenix arrays?

Regards,

Cheyenne O. Forbes


Re: Scan phoenix created columns, hbase

2017-06-05 Thread Cheyenne Forbes
So there is no Java function to turn "\x80\x0F" back into "fname" if I'm
using *CellUtil.cloneQualifier(cell)*?

Regards,
Cheyenne O. Forbes

On Mon, Jun 5, 2017 at 3:13 PM, Samarth Jain <samarth.j...@gmail.com> wrote:

> Cheyenne, with Phoenix 4.10, the column mapping feature is enabled by default,
> which means the column names declared in the Phoenix schema are going to be
> different from the column qualifiers in hbase. If you would like to
> disable column mapping, set the COLUMN_ENCODED_BYTES=NONE property in your DDL.
>
> On Mon, Jun 5, 2017 at 1:09 PM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
>> Can anyone please help?
>>
>>
>> On Sat, Jun 3, 2017 at 8:51 PM, Cheyenne Forbes <
>> cheyenne.osanu.for...@gmail.com> wrote:
>>
>>> I am doing some analytics which require me to scan through a phoenix
>>> created table with hbase instead of a phoenix select query.
>>>
>>> I created a column with the name 'fname' but a scan with hbase shell
>>> shows "\x80\x0F":
>>>
>>> \x00\x00Z\xF3\x10z@\x14column=PERSONAL:\x80\x0F,
>>> timestamp=1496360923816, value=Cheyenne
>>>
>>> How can I scan using column names if, for example, the name I gave is
>>> "fname" but in HBase it is "\x80\x0F"?
>>>
>>> Regards,
>>>
>>> Cheyenne O. Forbes
>>>
>>
>>
>
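
A sketch of the DDL Samarth's advice points at (table and column names are
hypothetical); with column encoding disabled, the HBase qualifier stays the
literal column name:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateUnencodedTable {
    public static void main(String[] args) throws Exception {
        // "localhost" is a placeholder for the ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement st = conn.createStatement()) {
            // COLUMN_ENCODED_BYTES = NONE disables the column-mapping feature
            // Phoenix 4.10 enables by default, so the HBase qualifier is the
            // literal string "FNAME" rather than an encoded byte like "\x80\x0F".
            st.execute("CREATE TABLE PERSON (ID BIGINT NOT NULL PRIMARY KEY, "
                     + "PERSONAL.FNAME VARCHAR) COLUMN_ENCODED_BYTES = NONE");
        }
    }
}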


Re: Scan phoenix created columns, hbase

2017-06-05 Thread Cheyenne Forbes
Can anyone please help?

On Sat, Jun 3, 2017 at 8:51 PM, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:

> I am doing some analytics which require me to scan through a phoenix
> created table with hbase instead of a phoenix select query.
>
> I created a column with the name 'fname' but a scan with hbase shell shows
> "\x80\x0F":
>
> \x00\x00Z\xF3\x10z@\x14column=PERSONAL:\x80\x0F,
> timestamp=1496360923816, value=Cheyenne
>
> How can I scan using column names if, for example, the name I gave is
> "fname" but in HBase it is "\x80\x0F"?
>
> Regards,
>
> Cheyenne O. Forbes
>


Scan phoenix created columns, hbase

2017-06-03 Thread Cheyenne Forbes
I am doing some analytics which require me to scan through a phoenix
created table with hbase instead of a phoenix select query.

I created a column with the name 'fname' but a scan with hbase shell shows
"\x80\x0F":

\x00\x00Z\xF3\x10z@\x14column=PERSONAL:\x80\x0F,
timestamp=1496360923816, value=Cheyenne

How can I scan using column names if, for example, the name I gave is "fname"
but in HBase it is "\x80\x0F"?

Regards,

Cheyenne O. Forbes


Re: Log from inside UDF

2017-05-25 Thread Cheyenne Forbes
I have everything running inside a single Docker container:

https://github.com/CheyenneForbes/docker-apache-phoenix/blob/master/Dockerfile

Could you look at my Dockerfile and tell me what steps are missing to get
the logging?

Regards,

Cheyenne O. Forbes


On Thu, May 25, 2017 at 3:05 PM, Josh Elser <els...@apache.org> wrote:

> The log4j.properties which you have configured to be on the HBase
> RegionServer classpath. I don't know how you configured your system.
>
> On 5/25/17 2:02 PM, Cheyenne Forbes wrote:
>
>> Which one of the files? I found 4
>>
>> /usr/local/hbase-1.2.5/conf/log4j.properties
>> /usr/local/apache-phoenix-4.10.0-HBase-1.2-bin/bin/log4j.properties
>> /usr/local/apache-phoenix-4.10.0-HBase-1.2-bin/bin/sandbox-log4j.properties
>> /usr/local/apache-phoenix-4.10.0-HBase-1.2-bin/bin/config/log4j.properties
>>
>> Regards,
>>
>> Cheyenne O. Forbes
>>
>> On Thu, May 25, 2017 at 11:31 AM, Josh Elser <els...@apache.org> wrote:
>>
>> Verify HBase's log4j.properties configuration is set to print log
>> messages for your class at info (check rootLogger level, log
>> threshold, and logger class/package level).
>>
>> On 5/24/17 11:02 AM, Cheyenne Forbes wrote:
>>
>> I want to output the steps of execution of my UDF but I can't
>> find the logs. I searched the region server log in
>> /usr/local/hbase/logs/
>>
>> public static final Log LOG = LogFactory.getLog(MyUDF.class);
>>
>> public boolean evaluate(Tuple tuple, ImmutableBytesWritable ptr) {
>>     LOG.info("UDF execution started");
>> 
>>
>> Regards,
>> Cheyenne O. Forbes
>>
>>
>>


Re: Log from inside UDF

2017-05-25 Thread Cheyenne Forbes
Which one of the files? I found 4:

/usr/local/hbase-1.2.5/conf/log4j.properties
/usr/local/apache-phoenix-4.10.0-HBase-1.2-bin/bin/log4j.properties
/usr/local/apache-phoenix-4.10.0-HBase-1.2-bin/bin/sandbox-log4j.properties
/usr/local/apache-phoenix-4.10.0-HBase-1.2-bin/bin/config/log4j.properties

Regards,

Cheyenne O. Forbes

On Thu, May 25, 2017 at 11:31 AM, Josh Elser <els...@apache.org> wrote:

> Verify HBase's log4j.properties configuration is set to print log messages
> for your class at info (check rootLogger level, log threshold, and logger
> class/package level).
>
> On 5/24/17 11:02 AM, Cheyenne Forbes wrote:
>
>> I want to output the steps of execution of my UDF but I can't find the
>> logs. I searched the region server log in /usr/local/hbase/logs/
>>
>> public static final Log LOG = LogFactory.getLog(MyUDF.class);
>>
>> public boolean evaluate(Tuple tuple, ImmutableBytesWritable ptr) {
>>     LOG.info("UDF execution started");
>> 
>>
>> Regards,
>> Cheyenne O. Forbes
>>
>
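
For reference, a minimal sketch of what Josh's suggestion could look like in
the RegionServer's conf/log4j.properties (the package name is hypothetical;
use whatever package MyUDF actually lives in):

# Leave the existing rootLogger/appender lines in place, then make sure
# the UDF's package is not filtered out below INFO:
log4j.logger.com.example.myudfs=INFO

With that in place, the LOG.info() calls should land in the RegionServer log
under $HBASE_HOME/logs/ whenever the UDF is evaluated server-side.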


Requests in for loop vs joins

2017-05-25 Thread Cheyenne Forbes
Which is more efficient in a heavy-usage platform?

   1. Joining 8 tables with billions of rows
   2. Selecting the "primary row" from a table, then running multiple select
   queries on the other tables in a client-side for loop using each primary
   key returned from the first table


Regards,

Cheyenne O. Forbes


Log from inside UDF

2017-05-24 Thread Cheyenne Forbes
I want to output the steps of execution of my UDF but I can't find the logs.
I searched the region server log in */usr/local/hbase/logs/*

public static final Log LOG = LogFactory.getLog(MyUDF.class);

public boolean evaluate(Tuple tuple, ImmutableBytesWritable ptr) {
    LOG.info("UDF execution started");


Regards,
Cheyenne O. Forbes


When will we have Phoenix 5?

2017-05-21 Thread Cheyenne Forbes
Any date decided yet?


Real time data analytics

2017-05-17 Thread Cheyenne Forbes
Is there any way to do real-time analytics on data with Phoenix?

Regards,

Cheyenne O. Forbes

Chief Executive Officer
Avapno Omnitech, Limited

Chief Operating Officer
Avapno Solutions Co. Limited

Chief Technology Officer
ZirconOne Corperation

Chairman
Avapno Assets, LLC

Bethel Town P.O
Westmoreland
Jamaica

Email: cheyenne.osanu.for...@gmail.com
Mobile: +1 (876) 881-7889 <876-881-7889>
Landline: +1 (876) 957-1821
skype: cheyenne.forbes1


Re: get the value of "hbase.zookeeper.quorum" within UDF

2017-05-16 Thread Cheyenne Forbes
Will that give me the quorum servers used by Phoenix? For example, if the
value in the HBase config is "zk1.aob.net,zk2.aob.net".

Regards,

Cheyenne O. Forbes


On Tue, May 16, 2017 at 11:53 AM, Josh Elser <els...@apache.org> wrote:

> ```HBaseConfiguration.create().get("hbase.zookeeper.quorum");```
>
>
> Cheyenne Forbes wrote:
>
>> Can I access the value of "hbase.zookeeper.quorum" in my UDF?
>>
>> Regards,
>>
>> Cheyenne O. Forbes
>>
>
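
Expanded into a self-contained form (the wrapper class is hypothetical),
Josh's one-liner reads hbase-site.xml from the classpath of whichever JVM
evaluates the UDF, so it returns that JVM's configured quorum:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class QuorumLookup {
    public static String quorum() {
        // Loads hbase-site.xml from the classpath of the evaluating JVM
        // (region server, or the client for client-side evaluation), so it
        // returns that JVM's value, e.g. "zk1.aob.net,zk2.aob.net".
        Configuration conf = HBaseConfiguration.create();
        return conf.get("hbase.zookeeper.quorum");
    }
}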


get the value of "hbase.zookeeper.quorum" within UDF

2017-05-16 Thread Cheyenne Forbes
Can I access the value of "hbase.zookeeper.quorum" in my UDF?

Regards,

Cheyenne O. Forbes


Re: How can I "use" a hbase co-processor from a User Defined Function?

2017-05-12 Thread Cheyenne Forbes
Any updates on how I'd go about getting *HRegion* in a UDF?

Regards,

Cheyenne O. Forbes

On Wed, Apr 19, 2017 at 6:03 PM, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:

> At postOpen the location of the lucene directory to be used for the region
> is set using the value of *"h_region.getRegionInfo().getEncodedName();" *so
> whenever prePut is called the index of the column is stored in the
> directory that was set during postOpen. So basically the lucene operations
> are "tied" to hbase hooks
>
> Regards,
>
> Cheyenne O. Forbes
>
>
>
> On Wed, Apr 19, 2017 at 4:21 PM, Sergey Soldatov <sergeysolda...@gmail.com
> > wrote:
>
>> How do you handle HBase region splits and merges with such architecture?
>>
>> Thanks,
>> Sergey
>>
>> On Wed, Apr 19, 2017 at 9:22 AM, Cheyenne Forbes <
>> cheyenne.osanu.for...@gmail.com> wrote:
>>
>>> I created a hbase co-processor that stores/deletes text indexes with
>>> Lucene, the indexes are stored on HDFS (for back up, replication, etc.).
>>> The indexes "mirror" the regions so if the index for a column is at
>>> "hdfs://localhost:9000/hbase/region_name" the index is stored at
>>> "hdfs://localhost:9000/lucene/region_name". I did this just in case I
>>> needed to delete (or other operation) an entire region for which ever
>>> reason. The id of the row, the column and query are passed to a Lucene
>>> BooleanQuery to get a search score to use to sort the data
>>> "SEARCH_SCORE(primary_key, text_column_name, search_query)". So I am trying
>>> to find a way to get "HRegion" of the region server the code is running on
>>> to either *1.* get the region name and the hadoop FileSystem or *2. *get
>>> access to the co-processor on that server which already have the values in
>>> option *1*
>>>
>>> Regards,
>>>
>>> Cheyenne O. Forbes
>>>
>>>
>>>
>>> On Wed, Apr 19, 2017 at 10:59 AM, James Taylor <jamestay...@apache.org>
>>> wrote:
>>>
>>>> Can you describe the functionality you're after at a high level in
>>>> terms of a use case (rather than an implementation idea/detail) and we can
>>>> discuss any options wrt potential new features?
>>>>
>>>> On Wed, Apr 19, 2017 at 8:53 AM Cheyenne Forbes <
>>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>>
>>>>> I'd still need " *HRegion MyVar; ", *because I'd still need the name
>>>>> of the region where the row of the id passed to the UDF is located and the
>>>>> value returned my* "getFilesystem()" *of* "**HRegion", *what do you
>>>>> recommend that I do?
>>>>>
>>>>> Regards,
>>>>>
>>>>> Cheyenne O. Forbes
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Apr 18, 2017 at 6:27 PM, Sergey Soldatov <
>>>>> sergeysolda...@gmail.com> wrote:
>>>>>
>>>>>> I mean you need to modify Phoenix code itself to properly support
>>>>>> such kind of features.
>>>>>>
>>>>>> Thanks,
>>>>>> Sergey
>>>>>>
>>>>>> On Tue, Apr 18, 2017 at 3:52 PM, Cheyenne Forbes <
>>>>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>>>>
>>>>>>> Could you explain a little more what you mean by that?
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> Cheyenne O. Forbes
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Apr 18, 2017 at 4:36 PM, Sergey Soldatov <
>>>>>>> sergeysolda...@gmail.com> wrote:
>>>>>>>
>>>>>>>> I may be wrong, but you have chosen wrong approach. Such kind of
>>>>>>>> integration need to be (should be) done on the Phoenix layer in the way
>>>>>>>> like global/local indexes are implemented.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Sergey
>>>>>>>>
>>>>>>>> On Tue, Apr 18, 2017 at 12:34 PM, Cheyenne Forbes <
>>>>>>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> I am creating a plugin that uses Lucene to index text fields

ORDER BY not working with UNION ALL

2017-05-03 Thread Cheyenne Forbes
I get "Undefined column family. familyName=f" whenever I run the following
query. It works without the ORDER BY, and works with the ORDER BY if it's not
a union but just one select statement.

   SELECT
  p.name
FROM
  person p
JOIN
  friends f
ON
  f.person = p.id
WHERE
  567 != ANY(f.persons)
UNION ALL
SELECT
  p.name
FROM
  person p
JOIN
  friends f
ON
  f.person = p.id
WHERE
  123 != ANY(f.persons)
ORDER BY f.date_time LIMIT 20

Regards,

Cheyenne O. Forbes
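
A plausible workaround (a sketch, untested against this schema): an ORDER BY
applied to a UNION can generally only reference columns of the union's
output, so projecting f.date_time in both branches and ordering by that
output column may avoid the error:

SELECT
  p.name, f.date_time
FROM
  person p
JOIN
  friends f
ON
  f.person = p.id
WHERE
  567 != ANY(f.persons)
UNION ALL
SELECT
  p.name, f.date_time
FROM
  person p
JOIN
  friends f
ON
  f.person = p.id
WHERE
  123 != ANY(f.persons)
ORDER BY date_time LIMIT 20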


Cant stop queryserver

2017-05-03 Thread Cheyenne Forbes
When I run "$PHOENIX_HOME/bin/bin/queryserver.py stop" I get "no Query
Server to stop because PID file not found,
/tmp/phoenix/root-queryserver.pid"

Regards,

Cheyenne O. Forbes


Where to put UDF jar

2017-05-02 Thread Cheyenne Forbes
I created a jar with:

jar -cf $HBASE_HOME/lib/phoenix-udj.jar UDF.java

but I get "ClassNotFoundException" when I try to use the UDF in a simple
select query.

Regards,

Cheyenne O. Forbes
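
One likely culprit, guessing from the command shown: "jar -cf" there packages
the UDF.java source file, not a compiled class, so there is nothing for the
class loader to find. A corrected sketch (jar name and classpath are
assumptions):

# compile against the HBase/Phoenix classpath first, then jar the .class
javac -cp "$(hbase classpath)" UDF.java
jar -cf $HBASE_HOME/lib/phoenix-udf.jar UDF.class

Depending on the setup, the jar may instead need to go into the HDFS
directory named by hbase.dynamic.jars.dir, which is where Phoenix loads UDF
jars from dynamically.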


Re: I cant get the latest phoenix to work in docker

2017-05-01 Thread Cheyenne Forbes
I fixed it; it was due to "hbase.rootdir" in hbase-site.xml having
"hdfs://localhost:9000/hbase" instead of "hdfs://my_hostname:9000/hbase".

I was struggling to find a Docker image with Phoenix 4.10, so I decided to
make one; here it is for anyone who needs one.

Regards,

Cheyenne O. Forbes
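
For anyone hitting the same thing, the fix amounts to something like this in
hbase-site.xml ("my_hostname" being a placeholder for a hostname that
resolves from inside the container):

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://my_hostname:9000/hbase</value>
</property>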


On Mon, May 1, 2017 at 7:54 PM, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:

> I see "master.HMaster: Failed to become active master
> java.net.ConnectException: Call From 7a2df40d1596/172.17.0.2 to
> localhost:9000 failed on connection exception: java.net.ConnectException:
> Connection refused; For more details see:  http://wiki.apache.org/hadoop/
> ConnectionRefused"
>
> Regards,
>
> Cheyenne O. Forbes
>
> Chief Executive Officer
> Avapno Omnitech, Limited
>
> Chief Operating Officer
> Avapno Solutions Co. Limited
>
> Chief Technology Officer
> ZirconOne Corperation
>
> Chairman
> Avapno Assets, LLC
>
> Bethel Town P.O
> Westmoreland
> Jamaica
>
> Email: cheyenne.osanu.for...@gmail.com
> Mobile: +1 (876) 881-7889 <876-881-7889>
> Landline: +1 (876) 957-1821 <%28876%29%20957-1821>
> skype: cheyenne.forbes1
>
>
> On Mon, May 1, 2017 at 6:55 PM, York, Zach <zy...@amazon.com> wrote:
>
>> This means that the HBase Master can’t connect to ZooKeeper. Can you
>> check the HBase logs to see any exceptions there?
>>
>>
>>
>> *From: *Cheyenne Forbes <cheyenne.osanu.for...@gmail.com>
>> *Reply-To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
>> *Date: *Monday, May 1, 2017 at 4:26 PM
>> *To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
>> *Subject: *Re: I cant get the latest phoenix to work in docker
>>
>>
>>
>> Also when I run "list" in "hbase shell" I get "ERROR: Can't get master
>> address from ZooKeeper; znode data == null"
>>
>>
>> Regards,
>>
>>
>>
>> Cheyenne O. Forbes
>>
>>
>>
>>
>>
>> On Mon, May 1, 2017 at 4:30 PM, Cheyenne Forbes <
>> cheyenne.osanu.for...@gmail.com> wrote:
>>
>> I see [zookeeper, hbase] but when I try to use
>> "$PHOENIX_HOME/bin/sqlline-thin.py localhost" I am getting
>> java.lang.RuntimeException: org.apache.phoenix.shaded.org.
>> apache.http.conn.HttpHostConnectException: Connect to localhost:8765
>> [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection
>> refused
>>
>>
>> Regards,
>>
>>
>>
>> Cheyenne O. Forbes
>>
>>
>>
>>
>>
>> On Mon, May 1, 2017 at 12:25 PM, Will Xu <w...@hortonworks.com> wrote:
>>
>> If you run
>>
>> $> hbase zkcli
>>
>> $>ls /
>>
>>
>>
>> You should be able to see [zookeeper, hbase]
>>
>> If you don’t see this, then it means hbase service is not started or did
>> not properly register with zookeeper.
>>
>>
>>
>> Best thing to do would be remove the images of avapno/apache-phoenix and
>> try pulling the docker image again.
>>
>>
>>
>> I used this command and it worked
>>
>> docker run -it --name phoenix -p 8765:8765 avapno/apache-phoenix
>>
>>
>>
>> Regards,
>> Will
>>
>> *From: *Cheyenne Forbes <cheyenne.osanu.for...@gmail.com>
>> *Reply-To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
>> *Date: *Saturday, April 29, 2017 at 1:35 PM
>> *To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
>> *Subject: *I cant get the latest phoenix to work in docker
>>
>>
>>
>> docker file and other files: https://github.com/CheyenneFor
>> bes/docker-apache-phoenix
>>
>> the connection just hangs when I try to use the query server and if I run
>> any command in hbase shell I get "ERROR: Can't get master address from
>> ZooKeeper; znode data == null"
>>
>>
>> Regards,
>>
>>
>>
>> Cheyenne O. Forbes
>>
>>
>>
>>
>>
>
>


Re: I cant get the latest phoenix to work in docker

2017-05-01 Thread Cheyenne Forbes
I see "master.HMaster: Failed to become active master
java.net.ConnectException: Call From 7a2df40d1596/172.17.0.2 to
localhost:9000 failed on connection exception: java.net.ConnectException:
Connection refused; For more details see:
http://wiki.apache.org/hadoop/ConnectionRefused"

Regards,

Cheyenne O. Forbes

Chief Executive Officer
Avapno Omnitech, Limited

Chief Operating Officer
Avapno Solutions Co. Limited

Chief Technology Officer
ZirconOne Corperation

Chairman
Avapno Assets, LLC

Bethel Town P.O
Westmoreland
Jamaica

Email: cheyenne.osanu.for...@gmail.com
Mobile: +1 (876) 881-7889 <876-881-7889>
Landline: +1 (876) 957-1821
skype: cheyenne.forbes1


On Mon, May 1, 2017 at 6:55 PM, York, Zach <zy...@amazon.com> wrote:

> This means that the HBase Master can’t connect to ZooKeeper. Can you check
> the HBase logs to see any exceptions there?
>
>
>
> *From: *Cheyenne Forbes <cheyenne.osanu.for...@gmail.com>
> *Reply-To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
> *Date: *Monday, May 1, 2017 at 4:26 PM
> *To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
> *Subject: *Re: I cant get the latest phoenix to work in docker
>
>
>
> Also when I run "list" in "hbase shell" I get "ERROR: Can't get master
> address from ZooKeeper; znode data == null"
>
>
> Regards,
>
>
>
> Cheyenne O. Forbes
>
>
>
>
>
> On Mon, May 1, 2017 at 4:30 PM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
> I see [zookeeper, hbase] but when I try to use 
> "$PHOENIX_HOME/bin/sqlline-thin.py
> localhost" I am getting java.lang.RuntimeException:
> org.apache.phoenix.shaded.org.apache.http.conn.HttpHostConnectException:
> Connect to localhost:8765 [localhost/127.0.0.1,
> localhost/0:0:0:0:0:0:0:1] failed: Connection refused
>
>
> Regards,
>
>
>
> Cheyenne O. Forbes
>
>
>
>
>
> On Mon, May 1, 2017 at 12:25 PM, Will Xu <w...@hortonworks.com> wrote:
>
> If you run
>
> $> hbase zkcli
>
> $>ls /
>
>
>
> You should be able to see [zookeeper, hbase]
>
> If you don’t see this, then it means hbase service is not started or did
> not properly register with zookeeper.
>
>
>
> Best thing to do would be remove the images of avapno/apache-phoenix and
> try pulling the docker image again.
>
>
>
> I used this command and it worked
>
> docker run -it --name phoenix -p 8765:8765 avapno/apache-phoenix
>
>
>
> Regards,
> Will
>
> *From: *Cheyenne Forbes <cheyenne.osanu.for...@gmail.com>
> *Reply-To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
> *Date: *Saturday, April 29, 2017 at 1:35 PM
> *To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
> *Subject: *I cant get the latest phoenix to work in docker
>
>
>
> docker file and other files: https://github.com/
> CheyenneForbes/docker-apache-phoenix
>
> the connection just hangs when I try to use the query server and if I run
> any command in hbase shell I get "ERROR: Can't get master address from
> ZooKeeper; znode data == null"
>
>
> Regards,
>
>
>
> Cheyenne O. Forbes
>
>
>
>
>


Re: I cant get the latest phoenix to work in docker

2017-05-01 Thread Cheyenne Forbes
Also when I run "list" in "hbase shell" I get "ERROR: Can't get master
address from ZooKeeper; znode data == null"

Regards,

Cheyenne O. Forbes


On Mon, May 1, 2017 at 4:30 PM, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:

> I see [zookeeper, hbase] but when I try to use 
> "$PHOENIX_HOME/bin/sqlline-thin.py
> localhost" I am getting java.lang.RuntimeException:
> org.apache.phoenix.shaded.org.apache.http.conn.HttpHostConnectException:
> Connect to localhost:8765 [localhost/127.0.0.1,
> localhost/0:0:0:0:0:0:0:1] failed: Connection refused
>
> Regards,
>
> Cheyenne O. Forbes
>
>
> On Mon, May 1, 2017 at 12:25 PM, Will Xu <w...@hortonworks.com> wrote:
>
>> If you run
>>
>> $> hbase zkcli
>>
>> $>ls /
>>
>>
>>
>> You should be able to see [zookeeper, hbase]
>>
>> If you don’t see this, then it means hbase service is not started or did
>> not properly register with zookeeper.
>>
>>
>>
>> Best thing to do would be remove the images of avapno/apache-phoenix and
>> try pulling the docker image again.
>>
>>
>>
>> I used this command and it worked
>>
>> docker run -it --name phoenix -p 8765:8765 avapno/apache-phoenix
>>
>>
>>
>> Regards,
>> Will
>>
>> *From: *Cheyenne Forbes <cheyenne.osanu.for...@gmail.com>
>> *Reply-To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
>> *Date: *Saturday, April 29, 2017 at 1:35 PM
>> *To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
>> *Subject: *I cant get the latest phoenix to work in docker
>>
>>
>>
>> docker file and other files: https://github.com/CheyenneFor
>> bes/docker-apache-phoenix
>>
>> the connection just hangs when I try to use the query server and if I run
>> any command in hbase shell I get "ERROR: Can't get master address from
>> ZooKeeper; znode data == null"
>>
>>
>> Regards,
>>
>>
>>
>> Cheyenne O. Forbes
>>
>
>


Re: I cant get the latest phoenix to work in docker

2017-05-01 Thread Cheyenne Forbes
I see [zookeeper, hbase] but when I try to use
"$PHOENIX_HOME/bin/sqlline-thin.py localhost" I am gettng
java.lang.RuntimeException:
org.apache.phoenix.shaded.org.apache.http.conn.HttpHostConnectException:
Connect to localhost:8765 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1]
failed: Connection refused

Regards,

Cheyenne O. Forbes


On Mon, May 1, 2017 at 12:25 PM, Will Xu <w...@hortonworks.com> wrote:

> If you run
>
> $> hbase zkcli
>
> $>ls /
>
>
>
> You should be able to see [zookeeper, hbase]
>
> If you don’t see this, then it means hbase service is not started or did
> not properly register with zookeeper.
>
>
>
> Best thing to do would be remove the images of avapno/apache-phoenix and
> try pulling the docker image again.
>
>
>
> I used this command and it worked
>
> docker run -it --name phoenix -p 8765:8765 avapno/apache-phoenix
>
>
>
> Regards,
> Will
>
> *From: *Cheyenne Forbes <cheyenne.osanu.for...@gmail.com>
> *Reply-To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
> *Date: *Saturday, April 29, 2017 at 1:35 PM
> *To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
> *Subject: *I cant get the latest phoenix to work in docker
>
>
>
> docker file and other files: https://github.com/
> CheyenneForbes/docker-apache-phoenix
>
> the connection just hangs when I try to use the query server and if I run
> any command in hbase shell I get "ERROR: Can't get master address from
> ZooKeeper; znode data == null"
>
>
> Regards,
>
>
>
> Cheyenne O. Forbes
>


Re: How can I "use" a hbase co-processor from a User Defined Function?

2017-04-19 Thread Cheyenne Forbes
At postOpen the location of the Lucene directory to be used for the region
is set using the value of *"h_region.getRegionInfo().getEncodedName();"*, so
whenever prePut is called the index for the column is stored in the
directory that was set during postOpen. So basically the Lucene operations
are "tied" to HBase hooks.

Regards,

Cheyenne O. Forbes



On Wed, Apr 19, 2017 at 4:21 PM, Sergey Soldatov <sergeysolda...@gmail.com>
wrote:

> How do you handle HBase region splits and merges with such architecture?
>
> Thanks,
> Sergey
>
> On Wed, Apr 19, 2017 at 9:22 AM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
>> I created a hbase co-processor that stores/deletes text indexes with
>> Lucene, the indexes are stored on HDFS (for back up, replication, etc.).
>> The indexes "mirror" the regions so if the index for a column is at
>> "hdfs://localhost:9000/hbase/region_name" the index is stored at
>> "hdfs://localhost:9000/lucene/region_name". I did this just in case I
>> needed to delete (or other operation) an entire region for which ever
>> reason. The id of the row, the column and query are passed to a Lucene
>> BooleanQuery to get a search score to use to sort the data
>> "SEARCH_SCORE(primary_key, text_column_name, search_query)". So I am trying
>> to find a way to get "HRegion" of the region server the code is running on
>> to either *1.* get the region name and the hadoop FileSystem or *2. *get
>> access to the co-processor on that server which already have the values in
>> option *1*
>>
>> Regards,
>>
>> Cheyenne O. Forbes
>>
>>
>>
>> On Wed, Apr 19, 2017 at 10:59 AM, James Taylor <jamestay...@apache.org>
>> wrote:
>>
>>> Can you describe the functionality you're after at a high level in terms
>>> of a use case (rather than an implementation idea/detail) and we can
>>> discuss any options wrt potential new features?
>>>
>>> On Wed, Apr 19, 2017 at 8:53 AM Cheyenne Forbes <
>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>
>>>> I'd still need " *HRegion MyVar; ", *because I'd still need the name
>>>> of the region where the row of the id passed to the UDF is located and the
>>>> value returned my* "getFilesystem()" *of* "**HRegion", *what do you
>>>> recommend that I do?
>>>>
>>>> Regards,
>>>>
>>>> Cheyenne O. Forbes
>>>>
>>>>
>>>>
>>>> On Tue, Apr 18, 2017 at 6:27 PM, Sergey Soldatov <
>>>> sergeysolda...@gmail.com> wrote:
>>>>
>>>>> I mean you need to modify Phoenix code itself to properly support such
>>>>> kind of features.
>>>>>
>>>>> Thanks,
>>>>> Sergey
>>>>>
>>>>> On Tue, Apr 18, 2017 at 3:52 PM, Cheyenne Forbes <
>>>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>>>
>>>>>> Could you explain a little more what you mean by that?
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Cheyenne O. Forbes
>>>>>>
>>>>>>
>>>>>> On Tue, Apr 18, 2017 at 4:36 PM, Sergey Soldatov <
>>>>>> sergeysolda...@gmail.com> wrote:
>>>>>>
>>>>>>> I may be wrong, but you have chosen wrong approach. Such kind of
>>>>>>> integration need to be (should be) done on the Phoenix layer in the way
>>>>>>> like global/local indexes are implemented.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Sergey
>>>>>>>
>>>>>>> On Tue, Apr 18, 2017 at 12:34 PM, Cheyenne Forbes <
>>>>>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>>>>>
>>>>>>>> I am creating a plugin that uses Lucene to index text fields and I
>>>>>>>> need to access *getConf()* and *getFilesystem()* of *HRegion, *the
>>>>>>>> Lucene indexes are split with the regions so I need  " *HRegion
>>>>>>>> MyVar; ", *I am positive the UDF will run on the region server and
>>>>>>>> not the client*.*
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>>
>>>>>>>> Cheyenne O. Forbes
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Apr 18, 2017 at 1:22 PM, James Taylor <
>>>>>>>> jamestay...@apache.org> wrote:
>>>>>>>>
>>>>>>>>> Shorter answer is "no". Your UDF may be executed on the client
>>>>>>>>> side as well (depending on the query) and there is of course no 
>>>>>>>>> HRegion
>>>>>>>>> available from the client.
>>>>>>>>>
>>>>>>>>> On Tue, Apr 18, 2017 at 11:10 AM Sergey Soldatov <
>>>>>>>>> sergeysolda...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Well, theoretically there is a way of having a coprocessor that
>>>>>>>>>> will keep static public map of current rowkey processed by Phoenix 
>>>>>>>>>> and the
>>>>>>>>>> correlated HRegion instance and get this HRegion using the key that 
>>>>>>>>>> is
>>>>>>>>>> processed by evaluate function. But it's a completely wrong approach 
>>>>>>>>>> for
>>>>>>>>>> both HBase and Phoenix. And it's not clear for me why SQL query may 
>>>>>>>>>> need
>>>>>>>>>> access to the region internals.
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Sergey
>>>>>>>>>>
>>>>>>>>>> On Mon, Apr 17, 2017 at 10:04 PM, Cheyenne Forbes <
>>>>>>>>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> so there is no way of getting HRegion in a UDF?
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>
>


Re: How can I "use" a hbase co-processor from a User Defined Function?

2017-04-19 Thread Cheyenne Forbes
I created an HBase co-processor that stores/deletes text indexes with
Lucene; the indexes are stored on HDFS (for backup, replication, etc.).
The indexes "mirror" the regions, so if a region is at
"hdfs://localhost:9000/hbase/region_name" its index is stored at
"hdfs://localhost:9000/lucene/region_name". I did this just in case I
needed to delete (or perform another operation on) an entire region for
whatever reason. The ID of the row, the column, and the query are passed to
a Lucene BooleanQuery to get a search score used to sort the data:
"SEARCH_SCORE(primary_key, text_column_name, search_query)". So I am trying
to find a way to get the "HRegion" of the region server the code is running
on, to either *1.* get the region name and the Hadoop FileSystem or *2.*
get access to the co-processor on that server, which already has the values
in option *1*.

Regards,

Cheyenne O. Forbes



On Wed, Apr 19, 2017 at 10:59 AM, James Taylor <jamestay...@apache.org>
wrote:

> Can you describe the functionality you're after at a high level in terms
> of a use case (rather than an implementation idea/detail) and we can
> discuss any options wrt potential new features?
>
> On Wed, Apr 19, 2017 at 8:53 AM Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
>> I'd still need " *HRegion MyVar; ", *because I'd still need the name of
>> the region where the row of the id passed to the UDF is located and the
>> value returned my* "getFilesystem()" *of* "**HRegion", *what do you
>> recommend that I do?
>>
>> Regards,
>>
>> Cheyenne O. Forbes
>>
>>
>>
>> On Tue, Apr 18, 2017 at 6:27 PM, Sergey Soldatov <
>> sergeysolda...@gmail.com> wrote:
>>
>>> I mean you need to modify Phoenix code itself to properly support such
>>> kind of features.
>>>
>>> Thanks,
>>> Sergey
>>>
>>> On Tue, Apr 18, 2017 at 3:52 PM, Cheyenne Forbes <
>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>
>>>> Could you explain a little more what you mean by that?
>>>>
>>>> Regards,
>>>>
>>>> Cheyenne O. Forbes
>>>>
>>>>
>>>> On Tue, Apr 18, 2017 at 4:36 PM, Sergey Soldatov <
>>>> sergeysolda...@gmail.com> wrote:
>>>>
>>>>> I may be wrong, but you have chosen wrong approach. Such kind of
>>>>> integration need to be (should be) done on the Phoenix layer in the way
>>>>> like global/local indexes are implemented.
>>>>>
>>>>> Thanks,
>>>>> Sergey
>>>>>
>>>>> On Tue, Apr 18, 2017 at 12:34 PM, Cheyenne Forbes <
>>>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>>>
>>>>>> I am creating a plugin that uses Lucene to index text fields and I
>>>>>> need to access *getConf()* and *getFilesystem()* of *HRegion, *the
>>>>>> Lucene indexes are split with the regions so I need  " *HRegion
>>>>>> MyVar; ", *I am positive the UDF will run on the region server and
>>>>>> not the client*.*
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Cheyenne O. Forbes
>>>>>>
>>>>>>
>>>>>> On Tue, Apr 18, 2017 at 1:22 PM, James Taylor <jamestay...@apache.org
>>>>>> > wrote:
>>>>>>
>>>>>>> Shorter answer is "no". Your UDF may be executed on the client side
>>>>>>> as well (depending on the query) and there is of course no HRegion
>>>>>>> available from the client.
>>>>>>>
>>>>>>> On Tue, Apr 18, 2017 at 11:10 AM Sergey Soldatov <
>>>>>>> sergeysolda...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Well, theoretically there is a way of having a coprocessor that
>>>>>>>> will keep static public map of current rowkey processed by Phoenix and 
>>>>>>>> the
>>>>>>>> correlated HRegion instance and get this HRegion using the key that is
>>>>>>>> processed by evaluate function. But it's a completely wrong approach 
>>>>>>>> for
>>>>>>>> both HBase and Phoenix. And it's not clear for me why SQL query may 
>>>>>>>> need
>>>>>>>> access to the region internals.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Sergey
>>>>>>>>
>>>>>>>> On Mon, Apr 17, 2017 at 10:04 PM, Cheyenne Forbes <
>>>>>>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> so there is no way of getting HRegion in a UDF?
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>


Re: How can I "use" a hbase co-processor from a User Defined Function?

2017-04-19 Thread Cheyenne Forbes
I'd still need "*HRegion MyVar;*", because I'd still need the name of the
region where the row of the ID passed to the UDF is located, and the value
returned by *"getFilesystem()"* of *"HRegion"*. What do you recommend
that I do?

Regards,

Cheyenne O. Forbes



On Tue, Apr 18, 2017 at 6:27 PM, Sergey Soldatov <sergeysolda...@gmail.com>
wrote:

> I mean you need to modify Phoenix code itself to properly support such
> kind of features.
>
> Thanks,
> Sergey
>
> On Tue, Apr 18, 2017 at 3:52 PM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
>> Could you explain a little more what you mean by that?
>>
>> Regards,
>>
>> Cheyenne O. Forbes
>>
>>
>> On Tue, Apr 18, 2017 at 4:36 PM, Sergey Soldatov <
>> sergeysolda...@gmail.com> wrote:
>>
>>> I may be wrong, but you have chosen wrong approach. Such kind of
>>> integration need to be (should be) done on the Phoenix layer in the way
>>> like global/local indexes are implemented.
>>>
>>> Thanks,
>>> Sergey
>>>
>>> On Tue, Apr 18, 2017 at 12:34 PM, Cheyenne Forbes <
>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>
>>>> I am creating a plugin that uses Lucene to index text fields and I need
>>>> to access *getConf()* and *getFilesystem()* of *HRegion, *the Lucene
>>>> indexes are split with the regions so I need  " *HRegion MyVar; ", *I
>>>> am positive the UDF will run on the region server and not the client*.*
>>>>
>>>> Regards,
>>>>
>>>> Cheyenne O. Forbes
>>>>
>>>>
>>>> On Tue, Apr 18, 2017 at 1:22 PM, James Taylor <jamestay...@apache.org>
>>>> wrote:
>>>>
>>>>> Shorter answer is "no". Your UDF may be executed on the client side as
>>>>> well (depending on the query) and there is of course no HRegion available
>>>>> from the client.
>>>>>
>>>>> On Tue, Apr 18, 2017 at 11:10 AM Sergey Soldatov <
>>>>> sergeysolda...@gmail.com> wrote:
>>>>>
>>>>>> Well, theoretically there is a way of having a coprocessor that will
>>>>>> keep static public map of current rowkey processed by Phoenix and the
>>>>>> correlated HRegion instance and get this HRegion using the key that is
>>>>>> processed by evaluate function. But it's a completely wrong approach for
>>>>>> both HBase and Phoenix. And it's not clear for me why SQL query may need
>>>>>> access to the region internals.
>>>>>>
>>>>>> Thanks,
>>>>>> Sergey
>>>>>>
>>>>>> On Mon, Apr 17, 2017 at 10:04 PM, Cheyenne Forbes <
>>>>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>>>>
>>>>>>> so there is no way of getting HRegion in a UDF?
>>>>>>>
>>>>>>
>>>>>>
>>>>
>>>
>>
>


Re: How can I "use" a hbase co-processor from a User Defined Function?

2017-04-18 Thread Cheyenne Forbes
Could you explain a little more what you mean by that?

Regards,

Cheyenne O. Forbes


On Tue, Apr 18, 2017 at 4:36 PM, Sergey Soldatov <sergeysolda...@gmail.com>
wrote:

> I may be wrong, but you have chosen wrong approach. Such kind of
> integration need to be (should be) done on the Phoenix layer in the way
> like global/local indexes are implemented.
>
> Thanks,
> Sergey
>
> On Tue, Apr 18, 2017 at 12:34 PM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
>> I am creating a plugin that uses Lucene to index text fields and I need
>> to access *getConf()* and *getFilesystem()* of *HRegion, *the Lucene
>> indexes are split with the regions so I need  " *HRegion MyVar; ", *I am
>> positive the UDF will run on the region server and not the client*.*
>>
>> Regards,
>>
>> Cheyenne O. Forbes
>>
>>
>> On Tue, Apr 18, 2017 at 1:22 PM, James Taylor <jamestay...@apache.org>
>> wrote:
>>
>>> Shorter answer is "no". Your UDF may be executed on the client side as
>>> well (depending on the query) and there is of course no HRegion available
>>> from the client.
>>>
>>> On Tue, Apr 18, 2017 at 11:10 AM Sergey Soldatov <
>>> sergeysolda...@gmail.com> wrote:
>>>
>>>> Well, theoretically there is a way of having a coprocessor that will
>>>> keep static public map of current rowkey processed by Phoenix and the
>>>> correlated HRegion instance and get this HRegion using the key that is
>>>> processed by evaluate function. But it's a completely wrong approach for
>>>> both HBase and Phoenix. And it's not clear for me why SQL query may need
>>>> access to the region internals.
>>>>
>>>> Thanks,
>>>> Sergey
>>>>
>>>> On Mon, Apr 17, 2017 at 10:04 PM, Cheyenne Forbes <
>>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>>
>>>>> so there is no way of getting HRegion in a UDF?
>>>>>
>>>>
>>>>
>>
>


Re: How can I "use" a hbase co-processor from a User Defined Function?

2017-04-18 Thread Cheyenne Forbes
I am creating a plugin that uses Lucene to index text fields and I need to
access *getConf()* and *getFilesystem()* of *HRegion*. The Lucene indexes
are split with the regions, so I need "*HRegion MyVar;*". I am positive
the UDF will run on the region server and not the client.

Regards,

Cheyenne O. Forbes


On Tue, Apr 18, 2017 at 1:22 PM, James Taylor <jamestay...@apache.org>
wrote:

> Shorter answer is "no". Your UDF may be executed on the client side as
> well (depending on the query) and there is of course no HRegion available
> from the client.
>
> On Tue, Apr 18, 2017 at 11:10 AM Sergey Soldatov <sergeysolda...@gmail.com>
> wrote:
>
>> Well, theoretically there is a way of having a coprocessor that will keep
>> static public map of current rowkey processed by Phoenix and the correlated
>> HRegion instance and get this HRegion using the key that is processed by
>> evaluate function. But it's a completely wrong approach for both HBase and
>> Phoenix. And it's not clear for me why SQL query may need access to the
>> region internals.
>>
>> Thanks,
>> Sergey
>>
>> On Mon, Apr 17, 2017 at 10:04 PM, Cheyenne Forbes <
>> cheyenne.osanu.for...@gmail.com> wrote:
>>
>>> so there is no way of getting HRegion in a UDF?
>>>
>>
>>


Re: How can I "use" a hbase co-processor from a User Defined Function?

2017-04-17 Thread Cheyenne Forbes
so there is no way of getting HRegion in a UDF?


Re: How can I "use" a hbase co-processor from a User Defined Function?

2017-04-14 Thread Cheyenne Forbes
Would *my_udf* be executed on the region server where the row of the column
passed to it is located?


Are arrays stored and retrieved in the order they are added to phoenix?

2017-04-13 Thread Cheyenne Forbes
I was wondering if arrays are stored in the order I add them or are sorted
otherwise (maybe for performance reasons).


How can I "use" a hbase co-processor from a User Defined Function?

2017-04-12 Thread Cheyenne Forbes
If I have a coprocessor of class "MyCoprocessor", which extends
"BaseRegionObserverCoprocessor", is it possible to "access" it from a
Phoenix UDF?

Regards,

Cheyenne O. Forbes


Re: Why does this work in MySQL but not Phoenix

2017-04-12 Thread Cheyenne Forbes
Okay, thank you. So it is better to select one of each combination in one
query, then count how many of each combination exist in another query?

Regards,

Cheyenne O. Forbes

Chief Executive Officer
Avapno Omnitech, Limited

Chief Operating Officer
Avapno Solutions Co. Limited

Chief Technology Officer
ZirconOne Corperation

Chairman
Avapno Assets, LLC

Bethel Town P.O
Westmoreland
Jamaica

Email: cheyenne.osanu.for...@gmail.com
Mobile: +1 (876) 881-7889 <876-881-7889>
Landline: +1 (876) 957-1821
skype: cheyenne.forbes1


On Wed, Apr 12, 2017 at 11:27 AM, James Taylor <jamestay...@apache.org>
wrote:

> The non-standard @ syntax is not supported by Phoenix: (@rn:= if((@item =
> t.item_id) AND (@type = t.type),@rn + 1,
>if((@item:=t.item_id) AND (@type:=t.type),1,1)
>
> Also, I'd recommend breaking that query up into multiple queries to
> pinpoint any other non supported constructs.
>
> On Wed, Apr 12, 2017 at 6:49 AM Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
>> I was using this (http://rextester.com/VXZONO82847) in MySQL but I am
>> not able to use it in Phoenix. Basically what the query does is "select
>> only one row where a column combination (type and item_id) is common and
>> count how many more of the combination exists". It allows me to add the
>> following feature to my notifications system "John Brown and 5 others like
>> your post"
>>
>> Can anyone answer any of these?
>>
>>1. Why doesn't it work in Phoenix?
>>2. How can I get that query to work?
>>
>> Regards,
>>
>> Cheyenne
>>
>


Re: Get the Region and ID of the current row in a UDF

2017-03-28 Thread Cheyenne Forbes
So I can't get the name of the table, the name of the column, the ID of the
row, or the region that "ptr" belongs to after calling:

Expression arg = getChildren().get(0);
if (!arg.evaluate(tuple, ptr)) {
    return false;
}


Re: Define parameters in queries

2017-03-28 Thread Cheyenne Forbes
Thank you James, could you also answer my other question?
https://lists.apache.org/thread.html/5dfd0aecf5e6325b707fed4533f1e727886c338be762d6aaccfcf2f3@%3Cuser.phoenix.apache.org%3E


Re: Define parameters in queries

2017-03-28 Thread Cheyenne Forbes
Can someone show an example of how to declare a variable (and change the
value) in Phoenix as I would use @ in front of the variable name in MySQL?
(example: @my_variable := "my value")

Thanks,
Cheyenne


Re: Get the Region and ID of the current row in a UDF

2017-03-25 Thread Cheyenne Forbes
Or get the region of the table after calling:

Expression arg = getChildren().get(0);
if (!arg.evaluate(tuple, ptr)) {
    return false;
}


Re: Get the Region and ID of the current row in a UDF

2017-03-25 Thread Cheyenne Forbes
Anyone?


Get the Region and ID of the current row in a UDF

2017-03-25 Thread Cheyenne Forbes
Is it possible to get the Region and ID of the current row in evaluate()
when creating a user defined function?


Re: Define parameters in queries

2017-03-24 Thread Cheyenne Forbes
Could you show an example?


Re: Define parameters in queries

2017-03-23 Thread Cheyenne Forbes
Or should I say "declare variables in Phoenix" instead of "define
variables in Phoenix"?


Re: Define parameters in queries

2017-03-23 Thread Cheyenne Forbes
Can you show me an example of defining variables in Phoenix (not sending a
parameter)? I am getting errors. How would I do this in Phoenix:
*@my_variable := CASE WHEN my_column IS NULL THEN "this value" ELSE "that
value" END*


Re: Define parameters in queries

2017-03-23 Thread Cheyenne Forbes
Will I be able to change the value of userId from within the query?
*:userId = CASE WHEN :userId > 10 THEN :userId ELSE (:userId + 1) END*


Re: Define parameters in queries

2017-03-23 Thread Cheyenne Forbes
Anyone?


Define parameters in queries

2017-03-22 Thread Cheyenne Forbes
Can I define parameters as I would in MySQL with @var_name:="value"?
Example:
*@rn:= CASE WHEN my_column IS NULL THEN "this value" ELSE "that value" END*

I need it for a query I used in MySQL to work in Phoenix

Regards,

Cheyenne O. Forbes
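
As far as I know, Phoenix has no MySQL-style session variables; the closest
supported construct is a JDBC bind parameter, with the CASE logic written
inline. A sketch (connection URL, table, and column names are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BindParameterExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             PreparedStatement ps = conn.prepareStatement(
                 // The CASE expression is evaluated per row; the two bind
                 // parameters stand in for MySQL's "this value"/"that value".
                 "SELECT CASE WHEN my_column IS NULL THEN ? ELSE ? END FROM my_table")) {
            ps.setString(1, "this value");
            ps.setString(2, "that value");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}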


Re: Differences between the date/time types

2017-03-17 Thread Cheyenne Forbes
Anyone?


call hbase functions from phoenix

2017-03-05 Thread Cheyenne Forbes
If I use an HBase patch/function and want to use that function on a column
in Phoenix, how should I do it?


Re: Squirrel SQL Client doesnt work with phoenix-4.9.0-HBase-1.2

2017-03-05 Thread Cheyenne Forbes
Turns out the only way to connect to Phoenix from the latest version
of Squirrel is with
"jdbc:phoenix:thin:url=http://172.17.0.2:8765;serialization=PROTOBUF"
instead of just "jdbc:phoenix:thin:url=http://172.17.0.2:8765", if I'm
using the default Phoenix 4.9.0 settings.


Re: Squirrel SQL Client doesnt work with phoenix-4.9.0-HBase-1.2

2017-03-04 Thread Cheyenne Forbes
I do see
org/apache/phoenix/shaded/org/apache/http/conn/ssl/SSLConnectionSocketFactory.class

Anything else?


Re: Squirrel SQL Client doesnt work with phoenix-4.9.0-HBase-1.2

2017-03-02 Thread Cheyenne Forbes
Can anyone try to see if they get the same error?


Squirrel SQL Client doesnt work with phoenix-4.9.0-HBase-1.2

2017-03-01 Thread Cheyenne Forbes
I've used Squirrel SQL Client before, but now that I'm trying Squirrel
snapshot-20170214_2214 with phoenix-4.9.0-HBase-1.2-client.jar, it won't work.

URL field: jdbc:phoenix:thin:url=http://172.17.0.2:8765

Class name: org.apache.phoenix.queryserver.client.Driver

Error:

java.util.concurrent.ExecutionException: java.lang.RuntimeException:
java.lang.RuntimeException: Failed to construct AvaticaHttpClient
implementation org.apache.calcite.avatica.remote.AvaticaCommonsHttpClientImpl
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:206)
    at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
    at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
    at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: Failed to
construct AvaticaHttpClient implementation
org.apache.calcite.avatica.remote.AvaticaCommonsHttpClientImpl
    at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.executeConnect(OpenConnectionCommand.java:175)
    at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$000(OpenConnectionCommand.java:45)
    at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$1.run(OpenConnectionCommand.java:104)
    ... 5 more
Caused by: java.lang.RuntimeException: Failed to construct AvaticaHttpClient
implementation org.apache.calcite.avatica.remote.AvaticaCommonsHttpClientImpl
    at org.apache.calcite.avatica.remote.AvaticaHttpClientFactoryImpl.instantiateClient(AvaticaHttpClientFactoryImpl.java:106)
    at org.apache.calcite.avatica.remote.AvaticaHttpClientFactoryImpl.getClient(AvaticaHttpClientFactoryImpl.java:67)
    at org.apache.calcite.avatica.remote.Driver.getHttpClient(Driver.java:159)
    at org.apache.calcite.avatica.remote.Driver.createService(Driver.java:122)
    at org.apache.calcite.avatica.remote.Driver.createMeta(Driver.java:96)
    at org.apache.calcite.avatica.AvaticaConnection.<init>(AvaticaConnection.java:118)
    at org.apache.calcite.avatica.AvaticaJdbc41Factory$AvaticaJdbc41Connection.<init>(AvaticaJdbc41Factory.java:105)
    at org.apache.calcite.avatica.AvaticaJdbc41Factory.newConnection(AvaticaJdbc41Factory.java:62)
    at org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:143)
    at org.apache.calcite.avatica.remote.Driver.connect(Driver.java:164)
    at net.sourceforge.squirrel_sql.fw.sql.SQLDriverManager.getConnection(SQLDriverManager.java:133)
    at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.executeConnect(OpenConnectionCommand.java:167)
    ... 7 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.calcite.avatica.remote.AvaticaHttpClientFactoryImpl.instantiateClient(AvaticaHttpClientFactoryImpl.java:103)
    ... 18 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class
org.apache.phoenix.shaded.org.apache.http.conn.ssl.SSLConnectionSocketFactory
    at org.apache.phoenix.shaded.org.apache.http.impl.conn.PoolingHttpClientConnectionManager.getDefaultRegistry(PoolingHttpClientConnectionManager.java:109)
    at org.apache.phoenix.shaded.org.apache.http.impl.conn.PoolingHttpClientConnectionManager.<init>(PoolingHttpClientConnectionManager.java:116)
    at org.apache.calcite.avatica.remote.AvaticaCommonsHttpClientImpl.<init>(AvaticaCommonsHttpClientImpl.java:99)
    ... 23 more
Regards,

Cheyenne O. Forbes


Are values of a sequence deleted after an incrementation?

2017-02-22 Thread Cheyenne Forbes
I am not sure how sequences work in Phoenix, but something popped up in my
mind.

If I request 100 values from a sequence, will there be 100 values stored in
the database, or just 1 value, "100", telling Phoenix that is the next
number to be incremented?


I got a very weird message from user@phoenix.apache.org

2017-02-20 Thread Cheyenne Forbes
Hi! This is the ezmlm program. I'm managing the
user@phoenix.apache.org mailing list.


Messages to you from the user mailing list seem to
have been bouncing. I've attached a copy of the first bounce
message I received.

If this message bounces too, I will send you a probe. If the probe bounces,
I will remove your address from the user mailing list,
without further notice.


I've kept a list of which messages from the user mailing list have
bounced from your address.


Differences between the date/time types

2017-02-15 Thread Cheyenne Forbes
I can't find the difference between the date/time types; aren't they all
the same? Also, should I parse them as an int or a string?
TIME Type
TIME

The time data type. The format is yyyy-MM-dd hh:mm:ss, with both the date
and time parts maintained. Mapped to java.sql.Time. The binary
representation is an 8 byte long (the number of milliseconds from the
epoch), making it possible (although not necessarily recommended) to store
more information within a TIME column than what is provided by java.sql.Time.
Note that the internal representation is based on a number of milliseconds
since the epoch (which is based on a time in GMT), while java.sql.Time will
format times based on the client's local time zone. Please note that this
TIME type is different than the TIME type as defined by the SQL 92 standard
in that it includes year, month, and day components. As such, it is not in
compliance with the JDBC APIs. As the underlying data is still stored as a
long, only the presentation of the value is incorrect.

Example:

TIME
DATE Type
DATE

The date data type. The format is yyyy-MM-dd hh:mm:ss, with both the date
and time parts maintained to a millisecond accuracy. Mapped to java.sql.Date.
The binary representation is an 8 byte long (the number of milliseconds
from the epoch), making it possible (although not necessarily recommended)
to store more information within a DATE column than what is provided by
java.sql.Date. Note that the internal representation is based on a number
of milliseconds since the epoch (which is based on a time in GMT), while
java.sql.Date will format dates based on the client's local time zone.
Please note that this DATE type is different than the DATE type as defined
by the SQL 92 standard in that it includes a time component. As such, it is
not in compliance with the JDBC APIs. As the underlying data is still
stored as a long, only the presentation of the value is incorrect.

Example:

DATE
TIMESTAMP Type
TIMESTAMP

The timestamp data type. The format is yyyy-MM-dd hh:mm:ss[.nnnnnnnnn].
Mapped to java.sql.Timestamp with an internal representation of the number
of nanos from the epoch. The binary representation is 12 bytes: an 8 byte
long for the epoch time plus a 4 byte integer for the nanos. Note that the
internal representation is based on a number of milliseconds since the
epoch (which is based on a time in GMT), while java.sql.Timestamp will
format timestamps based on the client's local time zone.

Example:

TIMESTAMP
UNSIGNED_TIME Type
UNSIGNED_TIME

The unsigned time data type. The format is yyyy-MM-dd hh:mm:ss, with both
the date and time parts maintained to the millisecond accuracy. Mapped to
java.sql.Time. The binary representation is an 8 byte long (the number of
milliseconds from the epoch) matching the HBase.toBytes(long) method. The
purpose of this type is to map to existing HBase data that was serialized
using this HBase utility method. If that is not the case, use the regular
signed type instead.

Example:

UNSIGNED_TIME
UNSIGNED_DATE Type
UNSIGNED_DATE

The unsigned date data type. The format is yyyy-MM-dd hh:mm:ss, with both
the date and time parts maintained to a millisecond accuracy. Mapped to
java.sql.Date. The binary representation is an 8 byte long (the number of
milliseconds from the epoch) matching the HBase.toBytes(long) method. The
purpose of this type is to map to existing HBase data that was serialized
using this HBase utility method. If that is not the case, use the regular
signed type instead.

Example:

UNSIGNED_DATE
UNSIGNED_TIMESTAMP Type
UNSIGNED_TIMESTAMP

The timestamp data type. The format is yyyy-MM-dd hh:mm:ss[.nnnnnnnnn].
Mapped to java.sql.Timestamp with an internal representation of the number
of nanos from the epoch. The binary representation is 12 bytes: an 8 byte
long for the epoch time plus a 4 byte integer for the nanos with the long
serialized through the HBase.toBytes(long) method. The purpose of this type
is to map to existing HBase data that was serialized using this HBase
utility method. If that is not the case, use the regular signed type
instead.

Example:

UNSIGNED_TIMESTAMP
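
A minimal sketch of how these types are typically used, assuming a
hypothetical events table. Even though the on-disk form is an epoch-based
long, values are bound and compared as date/time values rather than as ints
or strings; TO_DATE parses 'yyyy-MM-dd HH:mm:ss' by default:

  CREATE TABLE IF NOT EXISTS events (
      id         BIGINT NOT NULL PRIMARY KEY,
      created_at TIMESTAMP
  );

  SELECT id FROM events
  WHERE created_at > TO_DATE('2017-02-01 00:00:00');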


Can I use protobuf2 with Phoenix instead of protobuf3?

2017-02-13 Thread Cheyenne Forbes
my project depends heavily on protobuf2; can I tell phoenix which version of
protobuf to read with when I send a request?


Can I use the SQL WITH clause in Phoenix?

2017-01-16 Thread Cheyenne Forbes
Can I use the SQL WITH clause in Phoenix instead of "untidy" subqueries?
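
For reference, the standard SQL form being asked about looks like the
sketch below (table and column names are hypothetical); whether a given
Phoenix version accepts it is exactly the question:

  WITH recent_users AS (
      SELECT id, first_name FROM users
      WHERE created_date > TO_DATE('2017-01-01 00:00:00')
  )
  SELECT first_name FROM recent_users WHERE first_name LIKE 'A%'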


Re: How many servers are needed to put Phoenix in production?

2016-12-27 Thread Cheyenne Forbes
Are there any recommended specs for the servers?


Re: Can I reuse parameter values in phoenix query?

2016-12-08 Thread Cheyenne Forbes
thank you


Can I reuse parameter values in phoenix query?

2016-12-08 Thread Cheyenne Forbes
So instead of doing:

  query("select from table where c1 = ? or c2  = ?",  [my_id, my_id])

i would do:

  query("select from table where c1 = ?1 or c2  = ?1",  [my_id])


Is it efficient to query a VERY huge Phoenix database this way?

2016-12-06 Thread Cheyenne Forbes
Is it efficient to query a VERY huge Phoenix database this way?

(THIS IS NOT (YET) A REAL PROGRAMMING LANGUAGE)

chats = phoenix.query(" select id, name from chats
                        where participant1 = ? or participant2 = ? ",
                      [user_id, user_id]);
for_each( chats ) {
    participants = phoenix.query(" select p.id, u.name, u.age, p.join_date
                                   from participants p
                                   join users u on p.id = u.id
                                   where chat_id = ? ", this.id);
    messages = phoenix.query(" select m.id, m.sender, m.time_date
                               from messages where chat_id = ? ", this.id);
    chat = { id: this.id, name: this.name,
             participants: participants, messages: messages };
    (chat).appendTo(chats_array);
};
return chats_array;
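
As a point of comparison, the per-chat lookups above form a classic N+1
query pattern; below is a hedged sketch of a single join that fetches the
participant rows for all matching chats in one statement, reusing the table
and column names from the pseudocode:

  SELECT c.id, c.name, u.id, u.name, u.age, p.join_date
  FROM chats c
  JOIN participants p ON p.chat_id = c.id
  JOIN users u ON u.id = p.id
  WHERE c.participant1 = ? OR c.participant2 = ?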


Re: Why is there a spark plugin?

2016-11-16 Thread Cheyenne Forbes
so why would I choose Phoenix over Spark?


Why is there a spark plugin?

2016-11-16 Thread Cheyenne Forbes
 Why would/should I care about spark/spark plugin when I already have
phoenix?


How can I contribute clients?

2016-10-23 Thread Cheyenne Forbes
 I made three protobuf clients for Phoenix in C++, Python and Erlang.

how can I make these "official", like lalinsky's Phoenix JSON Python
client?

*The story*: I first created my application in Python and used lalinsky's
JSON client, but later found out Python (and JSON) weren't my best choice,
so I split my application into two languages, C++ and Erlang. Why did I
create clients in both languages? I needed the C++ client, but maybe someone
somewhere is or will be in need of an Erlang client. Oh! Two weeks ago I
knew nothing about Erlang :D


Row versions

2016-10-17 Thread Cheyenne Forbes
 Can I add versions of a row, and select versions of a row, as I can when
using HBase alone?
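
For reference, Phoenix DDL can pass HBase table properties such as VERSIONS
through CREATE TABLE, and point-in-time reads are done by opening a
connection with the CurrentSCN property set to the desired timestamp; a
minimal sketch (table name hypothetical):

  -- keep up to 5 cell versions in the underlying HBase table
  CREATE TABLE IF NOT EXISTS my_table (
      k VARCHAR PRIMARY KEY,
      v VARCHAR
  ) VERSIONS=5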


is deleting from phoenix slower than deleting from hbase?

2016-10-17 Thread Cheyenne Forbes
is

  delete from table

slower than

  delete 'table', 'row'


Re: How and where can I get help to set up my "phoenix cluster" for production?

2016-10-13 Thread Cheyenne Forbes
That's the question I should've asked myself, no?

How can I get it done as paid work?


How and where can I get help to set up my "phoenix cluster" for production?

2016-10-13 Thread Cheyenne Forbes
 Are there people who do this for free?


Re: Region start row and end row

2016-10-13 Thread Cheyenne Forbes
Check out this post for loading data from MySQL to Ignite
https://dzone.com/articles/apache-ignite-how-to-read-data-from-persistent-sto

and this one (recommended) on how to UPSERT to Phoenix on Ignite PUT,
delete, etc.:
https://apacheignite.readme.io/docs/persistent-store#cachestore-example

Just replace the MySQL things with Phoenix things (e.g. the JDBC driver,
INSERT to UPSERT, etc.). If after reading you still have issues, feel free
to ask in this thread for more help.
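
As a tiny illustration of that substitution, the write statement from the
linked CacheStore example would change roughly as below (the person table
is hypothetical), and the connection URL becomes the Phoenix form,
jdbc:phoenix:<zookeeper quorum>:

  -- MySQL-style statement in the linked example
  INSERT INTO person (id, name) VALUES (?, ?)

  -- Phoenix equivalent, used for both inserts and updates
  UPSERT INTO person (id, name) VALUES (?, ?)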


Re: Region start row and end row

2016-10-13 Thread Cheyenne Forbes
May I ask which in-memory db you are using?


Re: Region start row and end row

2016-10-13 Thread Cheyenne Forbes
Hi Anil,

Basically, what you want to do is copy all the data you had input with
Phoenix into your in-memory db?


Re: Using COUNT() with columns that don't use COUNT() fails when the table is joined

2016-09-19 Thread Cheyenne Forbes
I was wondering because it seems extra wordy


Re: Full text query in Phoenix

2016-09-19 Thread Cheyenne Forbes
Hi James,

Thanks a lot, I found a link showing how to integrate hbase with lucene
https://itpeernetwork.intel.com/idh-hbase-lucene-integration/


Using COUNT() with columns that don't use COUNT() fails when the table is joined

2016-09-18 Thread Cheyenne Forbes
 this query fails:

SELECT COUNT(fr.friend_1), u.first_name
FROM users AS u
LEFT JOIN friends AS fr ON u.id = fr.friend_2

with:

SQLException: ERROR 1018 (42Y27): Aggregate may not contain columns not in
GROUP BY. U.FIRST_NAME

TABLES:

users table with these columns ( id, first_name, last_name )


friends table with these columns ( friend_1, friend_2 )
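
For reference, the usual fix for the ERROR 1018 above is to add the
non-aggregated column to a GROUP BY; a sketch using the two tables just
listed:

  SELECT COUNT(fr.friend_1), u.first_name
  FROM users AS u
  LEFT JOIN friends AS fr ON u.id = fr.friend_2
  GROUP BY u.first_name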


Re: Full text query in Phoenix

2016-09-18 Thread Cheyenne Forbes
Hi Anil,

If I have:

users table with these columns ( id, first_name, last_name )


friends table with these columns ( friend_1, friend_2 )



user_posts table with these columns ( user_id, post_text, date_time )


in hbase (phoenix), and I want to view all user posts (post_text) with
things similar (full text search) to "children playing at the beach" that
were posted by people in my friends list, how could I achieve this if I use
phoenix?


Re: Full text query in Phoenix

2016-09-18 Thread Cheyenne Forbes
Hi James,

I found this for Hbase
https://issues.apache.org/jira/browse/HBASE-3529

it's a patch that can be added to HBase, based on what I am seeing


Phoenix "LIKE 'query%' " performance

2016-09-18 Thread Cheyenne Forbes
   - Can it be fast?
   - Does it use the HBase regex feature?
   - How can I make it case insensitive, so that when I do "LIKE 'query%' "
   the results include "Query"? (see the sketch after this list)
   - Can I get millisecond results using "WHERE column LIKE" on a large
   table? A couple terabytes of data.
   - Is it recommended?
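
For reference, two common ways to get a case-insensitive prefix match,
sketched against a hypothetical users table; Phoenix's grammar also
documents an ILIKE operator for exactly this. Whether either form can be
served by an index depends on the schema:

  -- case-insensitive LIKE operator
  SELECT id FROM users WHERE first_name ILIKE 'query%'

  -- portable alternative
  SELECT id FROM users WHERE UPPER(first_name) LIKE 'QUERY%'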


Re: Joins don't work

2016-09-18 Thread Cheyenne Forbes
Thank you, I got the error because I copied the queryserver jar instead of
the server jar :)


Re: Joins don't work

2016-09-17 Thread Cheyenne Forbes
Does anyone have an idea what's causing this?


Re: Joins don't work

2016-09-16 Thread Cheyenne Forbes
here (root-queryserver.log):

> 2016-09-16 23:40:18,378 INFO
> org.apache.hadoop.hbase.client.RpcRetryingCaller:
> 2016-09-16 23:40:18,420 WARN
> org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel: Call failed on
> IOException
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
> attempts=35, exceptions:
> Fri Sep 16 23:31:07 EDT 2016,
> RpcRetryingCaller{globalStartTime=1474083067003, pause=100, retries=35},
> java.io.IOException: java.io.IOException: java.lang.NoClassDefFoundError:
> org/iq80/snappy/CorruptionException
> at
> org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:77)
> at
> org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:3293)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
> at
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
> at
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>

Regards,
Cheyenne


Re: Joins don't work

2016-09-15 Thread Cheyenne Forbes
Yes James, through the query server. Josh, it doesn't show any errors; it
just hangs there for minutes.

Regards,

Cheyenne Forbes


> On Thu, Sep 15, 2016 at 4:42 PM, Josh Elser <josh.el...@gmail.com> wrote:
>
>> The error you see would also be rather helpful.
>>
>> James Taylor wrote:
>>
>>> Hi Cheyenne,
>>> Are you referring to joins through the query server?
>>> Thanks,
>>> James
>>>
>>> On Thu, Sep 15, 2016 at 1:37 PM, Cheyenne Forbes
>>> <cheyenne.osanu.for...@gmail.com
>>> <mailto:cheyenne.osanu.for...@gmail.com>> wrote:
>>>
>>> I was using phoenix 4.4 then I switched to 4.8 because I thought it
>>> was related to version 4.4 (both on hbase 1.1.2), neither using json
>>> nor protobufs works.
>>>
>>> I tried (also using the outer key word):
>>>
>>> left join
>>>
>>> right join
>>>
>>> inner join
>>>
>>>
>>>
>>>
>


Joins don't work

2016-09-15 Thread Cheyenne Forbes
I was using phoenix 4.4, then I switched to 4.8 because I thought the issue
was related to version 4.4 (both on hbase 1.1.2); neither using json nor
protobufs works.

I tried (also using the outer keyword):

left join

right join

inner join


Re: When would/should I use spark with phoenix?

2016-09-13 Thread Cheyenne Forbes
If I were to use spark (via the python api, for example), would the query
be processed on my webservers or on a separate server, as in phoenix?

Regards,

Cheyenne Forbes

Chief Executive Officer
Avapno Omnitech

Chief Operating Officer
Avapno Solutions, Co.

Chairman
Avapno Assets, LLC

Bethel Town P.O
Westmoreland
Jamaica

Email: cheyenne.osanu.for...@gmail.com
Mobile: 876-881-7889
skype: cheyenne.forbes1


On Tue, Sep 13, 2016 at 3:07 PM, dalin.qin <dalin...@gmail.com> wrote:

> Hi Cheyenne ,
>
> That's a very interesting question, if secondary indexes are created well
> on phoenix table , hbase will use coprocessor to do the join operation
> (java based  map reduce job still if I understand correctly) and then
> return the result . on the contrary spark is famous for its great
> improvement vs the traditional m/r operation ,once the two tables are in
> spark dataframe , I believe spark wins all the time . however it might take
> long time to load the two big table into spark .
>
> I'll do this test in the future,right now our system is quite busy with
> ALS model tasks.
>
> Cheers,
> Dalin
>
> On Tue, Sep 13, 2016 at 3:58 PM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
>
>> i've been thinking, is spark sql faster than phoenix (or phoenix-spark)
>> with selects with joins on large data (for example instagram's size)?
>>
>> Regards,
>>
>> Cheyenne Forbes
>>
>> Chief Executive Officer
>> Avapno Omnitech
>>
>> Chief Operating Officer
>> Avapno Solutions, Co.
>>
>> Chairman
>> Avapno Assets, LLC
>>
>> Bethel Town P.O
>> Westmoreland
>> Jamaica
>>
>> Email: cheyenne.osanu.for...@gmail.com
>> Mobile: 876-881-7889
>> skype: cheyenne.forbes1
>>
>>
>> On Tue, Sep 13, 2016 at 8:41 AM, Josh Mahonin <jmaho...@gmail.com> wrote:
>>
>>> Hi Dalin,
>>>
>>> Thanks for the information, I'm glad to hear that the spark integration
>>> is working well for your use case.
>>>
>>> Josh
>>>
>>> On Mon, Sep 12, 2016 at 8:15 PM, dalin.qin <dalin...@gmail.com> wrote:
>>>
>>>> Hi Josh,
>>>>
>>>> before the project kicked off, we got the idea that hbase is more
>>>> suitable for massive writing than for batch full-table reading (I forget
>>>> where the idea came from, just some benchmark testing posted on the
>>>> website maybe). So we decided to read hbase only by primary key, for
>>>> small data query requests. We store the hbase results in json files as
>>>> each day's incremental changes (another benefit of json is that you can
>>>> put them in a time-based directory so that you only query part of those
>>>> files), then use spark to read those json files and do the ML model or
>>>> report calculation.
>>>>
>>>> Hope this could help:)
>>>>
>>>> Dalin
>>>>
>>>>
>>>> On Mon, Sep 12, 2016 at 5:36 PM, Josh Mahonin <jmaho...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Dalin,
>>>>>
>>>>> That's great to hear. Have you also tried reading back those rows
>>>>> through Spark for a larger "batch processing" job? Am curious if you have
>>>>> any experiences or insight there from operating on a large dataset.
>>>>>
>>>>> Thanks!
>>>>>
>>>>> Josh
>>>>>
>>>>> On Mon, Sep 12, 2016 at 10:29 AM, dalin.qin <dalin...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi ,
>>>>>> I've used phoenix table to store billions of rows , rows are
>>>>>> incrementally insert into phoenix by spark every day and the table was 
>>>>>> for
>>>>>> instant query from web page by providing primary key . so far so good .
>>>>>>
>>>>>> Thanks
>>>>>> Dalin
>>>>>>
>>>>>> On Mon, Sep 12, 2016 at 10:07 AM, Cheyenne Forbes <
>>>>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>>>>
>>>>>>> Thanks everyone, I will be using phoenix for simple input/output and
>>>>>>> the phoenix_spark plugin (https://phoenix.apache.org/ph
>>>>>>> oenix_spark.html) for more complex queries, is that the smart thing?
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>

Re: When would/should I use spark with phoenix?

2016-09-13 Thread Cheyenne Forbes
I've been thinking: is spark sql faster than phoenix (or phoenix-spark)
for selects with joins on large data (for example, instagram's size)?

Regards,

Cheyenne Forbes

Chief Executive Officer
Avapno Omnitech

Chief Operating Officer
Avapno Solutions, Co.

Chairman
Avapno Assets, LLC

Bethel Town P.O
Westmoreland
Jamaica

Email: cheyenne.osanu.for...@gmail.com
Mobile: 876-881-7889
skype: cheyenne.forbes1


On Tue, Sep 13, 2016 at 8:41 AM, Josh Mahonin <jmaho...@gmail.com> wrote:

> Hi Dalin,
>
> Thanks for the information, I'm glad to hear that the spark integration is
> working well for your use case.
>
> Josh
>
> On Mon, Sep 12, 2016 at 8:15 PM, dalin.qin <dalin...@gmail.com> wrote:
>
>> Hi Josh,
>>
>> before the project kicked off, we got the idea that hbase is more
>> suitable for massive writing than for batch full-table reading (I forget
>> where the idea came from, just some benchmark testing posted on the
>> website maybe). So we decided to read hbase only by primary key, for
>> small data query requests. We store the hbase results in json files as
>> each day's incremental changes (another benefit of json is that you can
>> put them in a time-based directory so that you only query part of those
>> files), then use spark to read those json files and do the ML model or
>> report calculation.
>>
>> Hope this could help:)
>>
>> Dalin
>>
>>
>> On Mon, Sep 12, 2016 at 5:36 PM, Josh Mahonin <jmaho...@gmail.com> wrote:
>>
>>> Hi Dalin,
>>>
>>> That's great to hear. Have you also tried reading back those rows
>>> through Spark for a larger "batch processing" job? Am curious if you have
>>> any experiences or insight there from operating on a large dataset.
>>>
>>> Thanks!
>>>
>>> Josh
>>>
>>> On Mon, Sep 12, 2016 at 10:29 AM, dalin.qin <dalin...@gmail.com> wrote:
>>>
>>>> Hi ,
>>>> I've used phoenix table to store billions of rows , rows are
>>>> incrementally insert into phoenix by spark every day and the table was for
>>>> instant query from web page by providing primary key . so far so good .
>>>>
>>>> Thanks
>>>> Dalin
>>>>
>>>> On Mon, Sep 12, 2016 at 10:07 AM, Cheyenne Forbes <
>>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>>
>>>>> Thanks everyone, I will be using phoenix for simple input/output and
>>>>> the phoenix_spark plugin (https://phoenix.apache.org/ph
>>>>> oenix_spark.html) for more complex queries, is that the smart thing?
>>>>>
>>>>> Regards,
>>>>>
>>>>> Cheyenne Forbes
>>>>>
>>>>> Chief Executive Officer
>>>>> Avapno Omnitech
>>>>>
>>>>> Chief Operating Officer
>>>>> Avapno Solutions, Co.
>>>>>
>>>>> Chairman
>>>>> Avapno Assets, LLC
>>>>>
>>>>> Bethel Town P.O
>>>>> Westmoreland
>>>>> Jamaica
>>>>>
>>>>> Email: cheyenne.osanu.for...@gmail.com
>>>>> Mobile: 876-881-7889
>>>>> skype: cheyenne.forbes1
>>>>>
>>>>>
>>>>> On Sun, Sep 11, 2016 at 11:07 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>>>>>
>>>>>> w.r.t. Resource Management, Spark also relies on other framework
>>>>>> such as YARN or Mesos.
>>>>>>
>>>>>> Cheers
>>>>>>
>>>>>> On Sun, Sep 11, 2016 at 6:31 AM, John Leach <jlea...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Spark has a robust execution model with the following features that
>>>>>>> are not part of phoenix
>>>>>>> * Scalable
>>>>>>> * fault tolerance with lineage (Handles large intermediate
>>>>>>> results)
>>>>>>> * memory management for tasks
>>>>>>> * Resource Management (Fair Scheduling)
>>>>>>> * Additional SQL Features (Windowing ,etc.)
>>>>>>> * Machine Learning Libraries
>>>>>>>
>>>>>>>
>>>>>>> Regards,
>>>>>>> John
>>>>>>>
>>>>>>> > On Sep 11, 2016, at 2:45 AM, Cheyenne Forbes <
>>>>>>> cheyenne.osanu.for...@gmail.com> wrote:
>>>>>>> >
>>>>>>> > I realized there is a spark plugin for phoenix, any use cases? why
>>>>>>> would I use spark with phoenix instead of phoenix by itself?
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>


Re: When would/should I use spark with phoenix?

2016-09-12 Thread Cheyenne Forbes
Thanks everyone, I will be using phoenix for simple input/output and
the phoenix_spark plugin (https://phoenix.apache.org/phoenix_spark.html)
for more complex queries; is that the smart thing to do?

Regards,

Cheyenne Forbes

Chief Executive Officer
Avapno Omnitech

Chief Operating Officer
Avapno Solutions, Co.

Chairman
Avapno Assets, LLC

Bethel Town P.O
Westmoreland
Jamaica

Email: cheyenne.osanu.for...@gmail.com
Mobile: 876-881-7889
skype: cheyenne.forbes1


On Sun, Sep 11, 2016 at 11:07 AM, Ted Yu <yuzhih...@gmail.com> wrote:

> w.r.t. Resource Management, Spark also relies on other framework such as
> YARN or Mesos.
>
> Cheers
>
> On Sun, Sep 11, 2016 at 6:31 AM, John Leach <jlea...@gmail.com> wrote:
>
>> Spark has a robust execution model with the following features that are
>> not part of phoenix
>> * Scalable
>> * fault tolerance with lineage (Handles large intermediate
>> results)
>> * memory management for tasks
>> * Resource Management (Fair Scheduling)
>> * Additional SQL Features (Windowing ,etc.)
>> * Machine Learning Libraries
>>
>>
>> Regards,
>> John
>>
>> > On Sep 11, 2016, at 2:45 AM, Cheyenne Forbes <
>> cheyenne.osanu.for...@gmail.com> wrote:
>> >
>> > I realized there is a spark plugin for phoenix, any use cases? why
>> would I use spark with phoenix instead of phoenix by itself?
>>
>>
>


Re: When would/should I use spark with phoenix?

2016-09-11 Thread Cheyenne Forbes
Thank you. For a project as big as Facebook or Snapchat, would you
recommend using Spark or Phoenix for things such as message
retrieval/insert, user search, user feed retrieval/insert, etc., and what
are the pros and cons?

Regards,
Cheyenne


On Sun, Sep 11, 2016 at 8:31 AM, John Leach <jlea...@gmail.com> wrote:

> Spark has a robust execution model with the following features that are
> not part of phoenix
> * Scalable
> * fault tolerance with lineage (Handles large intermediate results)
> * memory management for tasks
> * Resource Management (Fair Scheduling)
> * Additional SQL Features (Windowing ,etc.)
> * Machine Learning Libraries
>
>
> Regards,
> John
>
> > On Sep 11, 2016, at 2:45 AM, Cheyenne Forbes <
> cheyenne.osanu.for...@gmail.com> wrote:
> >
> > I realized there is a spark plugin for phoenix, any use cases? why would
> I use spark with phoenix instead of phoenix by itself?
>
>


When would/should I use spark with phoenix?

2016-09-11 Thread Cheyenne Forbes
I realized there is a spark plugin for phoenix, any use cases? why would I
use spark with phoenix instead of phoenix by itself?


Join query doesn't work in Squirrel client (or any other JDBC client)

2016-09-10 Thread Cheyenne Forbes
 HBase 1.1, Phoenix 4.4.0; core and server jars are in the HBase lib
folder; everything works except join queries:

select u.id, b.book
from users as u
inner join books as b
on u.id = b.author


Full text query in Phoenix

2016-09-07 Thread Cheyenne Forbes
I am using phoenix for my platform but I can't do full text queries:

"SELECT ID, FirstName, Lastname FROM users
   WHERE MATCH (FirstName, Lastname)
 AGAINST ('first_name last_name' IN BOOLEAN MODE)
   AND [Searcher not blocked by user]

Regards,
Cheyenne

