RE: Phoenix and Cloudera

2018-01-02 Thread Dor Ben Dov
Thanks for the help, Flavio.

Regards,
Dor

From: Flavio Pompermaier [mailto:pomperma...@okkam.it]
Sent: Tuesday, 02 January 2018 11:56
To: user@phoenix.apache.org
Cc: Dor Ben Dov <dor.ben-...@amdocs.com>; Shai Shapira <shai.shap...@amdocs.com>
Subject: Re: Phoenix and Cloudera

Yes, I think so.

On Tue, Jan 2, 2018 at 10:38 AM, Dor Ben Dov 
<dor.ben-...@amdocs.com> wrote:
Hi Flavio

Thanks for the quick response; it is helpful.
If I understand your answer correctly, upgrading my cluster to CDH 5.11.2 should 
make it work.

Regards,
Dor

From: Flavio Pompermaier [mailto:pomperma...@okkam.it]
Sent: Thursday, 28 December 2017 10:06
To: user@phoenix.apache.org
Subject: Re: Phoenix and Cloudera

I don't think it will work on Cloudera 5.10.1.
There are two Phoenix parcels: the latest official one (Phoenix 4.7 on CDH 4.9) and 
an unofficial one (still being released), which is Phoenix 4.13 on CDH 5.11.2.
You can find more info at
http://community.cloudera.com/t5/Cloudera-Labs/Apache-Phoenix-Support-for-CDH-5-8-x-5-9x-5-10-x-5-11-x/m-p/62687#M416?eid=1=1

Best,
Flavio

On 28 Dec 2017 08:29, "Dor Ben Dov" <dor.ben-...@amdocs.com> wrote:
Hi,

I am trying, so far with no luck due to many exceptions, to connect Phoenix to CDH 5.10.1.
Can anyone give some tips or guidance on how it should be done?

I tried following the manual on the Apache site as well as the Cloudera parcel 
distribution and activation process.

Regards,
Dor Ben Dov
Research Manager, Digital – BSS - Research
+972-9-7764043   (office)
+972-52-3573492 (mobile)





--
Flavio Pompermaier
Development Department

OKKAM S.r.l.
Tel. +(39) 0461 041809


RE: Phoenix and Cloudera

2018-01-02 Thread Dor Ben Dov
Hi Flavio

Thanks for the quick response; it is helpful.
If I understand your answer correctly, upgrading my cluster to CDH 5.11.2 should 
make it work.

Regards,
Dor

From: Flavio Pompermaier [mailto:pomperma...@okkam.it]
Sent: Thursday, 28 December 2017 10:06
To: user@phoenix.apache.org
Subject: Re: Phoenix and Cloudera

I don't think it will work on Cloudera 5.10.1.
There are two Phoenix parcels: the latest official one (Phoenix 4.7 on CDH 4.9) and 
an unofficial one (still being released), which is Phoenix 4.13 on CDH 5.11.2.
You can find more info at
http://community.cloudera.com/t5/Cloudera-Labs/Apache-Phoenix-Support-for-CDH-5-8-x-5-9x-5-10-x-5-11-x/m-p/62687#M416?eid=1=1

Best,
Flavio

On 28 Dec 2017 08:29, "Dor Ben Dov" <dor.ben-...@amdocs.com> wrote:
Hi,

I am trying, so far with no luck due to many exceptions, to connect Phoenix to CDH 5.10.1.
Can anyone give some tips or guidance on how it should be done?

I tried following the manual on the Apache site as well as the Cloudera parcel 
distribution and activation process.

Regards,
Dor Ben Dov
Research Manager, Digital – BSS - Research
+972-9-7764043   (office)
+972-52-3573492 (mobile)




Phoenix and Cloudera

2017-12-27 Thread Dor Ben Dov
Hi,

I am trying, so far with no luck due to many exceptions, to connect Phoenix to CDH 5.10.1.
Can anyone give some tips or guidance on how it should be done?

I tried following the manual on the Apache site as well as the Cloudera parcel 
distribution and activation process.

Regards,
Dor Ben Dov
Research Manager, Digital - BSS - Research
+972-9-7764043   (office)
+972-52-3573492 (mobile)




RE: ***UNCHECKED*** Re: HBase Phoenix Integration

2016-03-02 Thread Dor Ben Dov
Divya,
How much are you using it, or what kind of ‘use’ are you making of it, on top of HDP?

Dor

From: Divya Gehlot [mailto:divya.htco...@gmail.com]
Sent: Tuesday, 01 March 2016 08:11
To: user@phoenix.apache.org
Subject: Re: ***UNCHECKED*** Re: HBase Phoenix Integration

I am using the Hortonworks distribution, and it comes with Phoenix :)

No idea about the patch.
You could try posting the error in the CDH forum.
It might help.

All the Best !!

Cheers,
Divya

On 1 March 2016 at 13:15, Amit Shah wrote:
No, it doesn't work for phoenix 4.6. Attached is the error I get when I execute 
'sqlline.py :2181'

Can you please give more details about the patch?

Thanks,
Amit.

On Tue, Mar 1, 2016 at 10:39 AM, Divya Gehlot wrote:
Hi Amit,
Is it working?
No, mine is Phoenix 4.4.

Thanks,
Divya

On 1 March 2016 at 13:00, Amit Shah wrote:
Hi Divya,

Thanks for the patch. Is this for Phoenix version 4.6? Were the changes made to 
make Phoenix work with CDH 5.5.2?

Thanks,
Amit.

On Tue, Mar 1, 2016 at 10:08 AM, Divya Gehlot wrote:
Hi Amit,
Extract the attached jar and try placing it in your HBase classpath.

P.S. Please remove the 'x' from the jar extension.
Hope this helps.


Thanks,
Divya

On 26 February 2016 at 20:44, Amit Shah wrote:
Hello,

I have been trying to install Phoenix on my Cloudera HBase cluster. The Cloudera 
version is CDH 5.5.2, while the HBase version is 1.0.

I copied the server and core jars (version 4.6-HBase-1.0) to the master and region 
servers and restarted the HBase cluster. I copied the corresponding client jar 
to my SQuirreL client, but I get an exception on connect (pasted below). The 
connection URL is "jdbc:phoenix::2181".
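
[For reference: a minimal sketch of opening a Phoenix connection over JDBC from
Scala, assuming the Phoenix client jar is on the classpath. The ZooKeeper host
below is a hypothetical placeholder, not a value taken from this thread.]

import java.sql.DriverManager

object PhoenixJdbcSketch {
  def main(args: Array[String]): Unit = {
    // The Phoenix driver is normally auto-registered when the client jar is on
    // the classpath; loading it explicitly does no harm.
    Class.forName("org.apache.phoenix.jdbc.PhoenixDriver")
    // Hypothetical ZooKeeper quorum host and port; replace with your own.
    val conn = DriverManager.getConnection("jdbc:phoenix:zk-host.example.com:2181")
    try {
      // Simple smoke test against the Phoenix system catalog.
      val rs = conn.createStatement().executeQuery(
        "SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5")
      while (rs.next()) println(rs.getString("TABLE_NAME"))
    } finally {
      conn.close()
    }
  }
}
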
I even tried compiling the source by adding Cloudera dependencies as suggested 
on this post, but didn't succeed.

Any suggestions to make this work?

Thanks,
Amit.



Caused by: 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
 org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM.CATALOG: 
org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1319)
at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11715)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7388)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1776)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1758)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildDeletedTable(MetaDataEndpointImpl.java:1016)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1092)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1266)
... 10 more

P.S. The full stack trace is attached to this mail.








RE: Re: HBase Phoenix Integration

2016-03-01 Thread Dor Ben Dov
Thanks Michael for the details.
Dor

From: Michael McAllister [mailto:mmcallis...@homeaway.com]
Sent: Wednesday, 02 March 2016 01:58
To: user@phoenix.apache.org
Subject: RE: Re: HBase Phoenix Integration

Technically, it depends on which version of HDP you are on. Here are the 
versions:-

HDP 2.1 = Apache Phoenix 4.0
HDP 2.2 = Apache Phoenix 4.2
HDP 2.3 = Apache Phoenix 4.4
HDP 2.4 = Apache Phoenix 4.4

(From this page -> http://hortonworks.com/hdp/whats-new/)

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com | C: 512.423.7447 | skype: michael.mcallister.ha | 
webex: https://h.a/mikewebex

From: Bulvik, Noam [mailto:noam.bul...@teoco.com]
Sent: Tuesday, March 01, 2016 5:10 AM
To: user@phoenix.apache.org
Subject: RE: Re: HBase Phoenix Integration

4.4

From: Dor Ben Dov [mailto:dor.ben-...@amdocs.com]
Sent: Tuesday, March 1, 2016 12:34 PM
To: user@phoenix.apache.org
Subject: RE: Re: HBase Phoenix Integration

Does anyone here know which version of Phoenix is used in the Hortonworks bundle?
Dor

From: Amit Shah [mailto:amits...@gmail.com]
Sent: Tuesday, 01 March 2016 12:33
To: user@phoenix.apache.org
Subject: Re: Re: HBase Phoenix Integration

Hi Sun,

In my deployment I have only one ZooKeeper node; I do not have a cluster. What 
was your command-line string to connect? Did you specify the port number?
I specify this: ./sqlline.py :2181

Regards,
Amit.

On Tue, Mar 1, 2016 at 3:38 PM, Fulin Sun 
<su...@certusnet.com.cn> wrote:
Hi Amit,
I successfully built the git repo according to your file change, but when I 
try to use sqlline to connect to Phoenix, I run into the following error:
I ran into the following error:

dev-1, dev-2 and dev-3 are my ZooKeeper hosts. I can still see the 
region servers being killed abnormally.

Have you met this issue in your scenario? If so, please suggest 
how to resolve it.

Thanks.

 Connecting to jdbc:phoenix:dev-1,dev-2,dev-3
16/03/01 18:04:17 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Error: Failed after attempts=36, exceptions:
Tue Mar 01 18:05:30 CST 2016, null, java.net.SocketTimeoutException: 
callTimeout=6, callDuration=69350: row 'SYSTEM.SEQUENCE,,00' on 
table 'hbase:meta' at region=hbase:meta,,1.1588230740, 
hostname=dev-2,60020,1456826584858, seqNum=0 (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, 
exceptions:
Tue Mar 01 18:05:30 CST 2016, null, java.net.SocketTimeoutException: 
callTimeout=6, callDuration=69350: row 'SYSTEM.SEQUENCE,,00' on 
table 'hbase:meta' at region=hbase:meta,,1.1588230740, 
hostname=dev-2,60020,1456826584858, seqNum=0



From: Amit Shah <amits...@gmail.com>
Date: 2016-03-01 18:00
To: user@phoenix.apache.org
Subject: Re: Re: HBase Phoenix Integration
Sure Sun.
PFA.

Regards,
Amit.

On Tue, Mar 1, 2016 at 2:58 PM, Fulin Sun 
<su...@certusnet.com.cn> wrote:
Hi Amit,
Glad you found a temporary fix for that. Can you share the relevant Java file 
you modified?
Thanks a lot.

Best,
Sun.




From: Amit Shah <amits...@gmail.com>
Date: 2016-03-01 17:22
To: user@phoenix.apache.org
Subject: Re: RE: HBase Phoenix Integration
Hi All,

I had some success deploying Phoenix 4.6-HBase-1.0 on CDH 5.5.2. I resolved 
the compilation errors by commenting out the usage of the 
BinaryCompatibleIndexKeyValueDecoder and 
BinaryCompatibleCompressedIndexKeyValueDecoder classes, since they are only used 
in secondary indexing. This is a temporary fix, but yes, it works!

Hope that helps.
Waiting to see the phoenix-cloudera fix for the latest Phoenix version 4.7, 
especially since 4.7 has some new features.

Thanks,
Amit.

On Tue, Mar 1, 2016 at 2:13 PM, Fulin Sun 
<su...@certusnet.com.cn> wrote:
No idea. I had only found this relatively recent post about supporting 
Phoenix with CDH 5.5.x.
However, I cannot con

RE: RE: HBase Phoenix Integration

2016-03-01 Thread Dor Ben Dov
Sun,
Are you using it in production? I mean HBase and Phoenix?

Dor

From: Fulin Sun [mailto:su...@certusnet.com.cn]
Sent: Tuesday, 01 March 2016 10:43
To: user
Subject: Re: RE: HBase Phoenix Integration

No idea. I had only found this relatively recent post about supporting 
Phoenix with CDH 5.5.x.
However, I cannot connect to Phoenix following the post's guide. Compiling the 
git repo has also given me no luck.





From: Dor Ben Dov <dor.ben-...@amdocs.com>
Date: 2016-03-01 16:39
To: user@phoenix.apache.org
Subject: RE: Re: HBase Phoenix Integration
Sun,
That is an old post. Do you know of any news about Cloudera 
adopting the new Apache Phoenix 4.7?

Dor

From: Fulin Sun [mailto:su...@certusnet.com.cn]
Sent: Tuesday, 01 March 2016 09:14
To: user; James Taylor
Subject: Re: Re: HBase Phoenix Integration

Hi Amit,
Yeah, I meet the same error message when doing mvn package. I compared the 
Apache Phoenix 4.6.0-HBase-1.0 code with this repo's code for CDH 5.5.1, and I 
did not find the abstract class BinaryCompatiblePhoenixBaseDecoder in the former.
Hope someone can explain this and provide a workaround.

If there is no way to resolve this, I will keep using the Cloudera Labs Phoenix 
version from here:
https://blog.cloudera.com/blog/2015/11/new-apache-phoenix-4-5-2-package-from-cloudera-labs/



Thanks,
Sun.




From: Amit Shah <amits...@gmail.com>
Date: 2016-03-01 15:37
To: user@phoenix.apache.org; jamestaylor <jamestay...@apache.org>
Subject: Re: HBase Phoenix Integration
Hi James,
I get a compilation error along with multiple warnings when packaging the 
4.6-HBase-1.0-cdh5.5 branch. Attached is the error.

Also, I realized that the pom.xml indicates the branch is for Cloudera CDH 
version 5.5.1. Do you know if it would work for the latest CDH version, 5.5.2?

Thanks,
Amit.

On Tue, Mar 1, 2016 at 12:05 PM, Dor Ben Dov 
<dor.ben-...@amdocs.com> wrote:
James,
Do you have any problems working with the latest Phoenix on CDH 5.5.x?

Dor

From: James Taylor [mailto:jamestay...@apache.org]
Sent: Tuesday, 01 March 2016 08:24
To: user
Cc: Murugesan, Rani
Subject: Re: HBase Phoenix Integration

Hi Amit,

For Phoenix 4.6 on CDH, try using this git repo instead, courtesy of Andrew 
Purtell: 
https://github.com/chiastic-security/phoenix-for-cloudera/tree/4.6-HBase-1.0-cdh5.5

Thanks,
James



On Mon, Feb 29, 2016 at 10:19 PM, Amit Shah 
<amits...@gmail.com> wrote:
Hi Sergey,

I get a lot of compilation errors when I compile the source code for the 
4.6-HBase-1.0 branch or the v4.7.0-HBase-1.0-rc3 tag. Note that the source 
compilation succeeds when the changes to include the Cloudera-dependent versions 
are not included. The only difference between the code changes suggested in the 
Stack Overflow post and mine is the Cloudera CDH version; I am using CDH 5.5.2. 
I didn't quite follow the reason behind the code changes needed in Phoenix when 
deployed on CDH.

Thanks,
Amit.

On Tue, Mar 1, 2016 at 1:15 AM, Sergey Soldatov 
<sergeysolda...@gmail.com> wrote:
Hi Amit,

Switching to 4.3 means you need HBase 0.98. What kind of problem did you
experience after building 4.6 from sources with the changes suggested on
Stack Overflow?

Thanks,
Sergey

On Sun, Feb 28, 2016 at 10:49 PM, Amit Shah 
<amits...@gmail.com> wrote:
> An update -
>
> I was able to execute "./sqlline.py " command but I
> get the same exception as I mentioned earlier.
>
> Later I tried following the steps mentioned on this link with phoenix 4.3.0
> but I still get an error this time with a different stack trace (attached to
> this mail)
>
> Any help would be appreciated
>
> On Sat, Feb 27, 2016 at 8:03 AM, Amit Shah 
> <amits...@gmail.com> wrote:
>>
>> Hi Murugesan,
>>
>> What preconditions would I need on the server to execute the python
>> script? I have Python 2.7.5 installed on the zookeeper server. If I just
>> copy the sqlline script to the /etc/hbase/conf directory and execute it I
>> get the below import errors. Note this time I had 4.5.2-HBase-1.0 version
>> server and core phoenix jars in HBase/lib directory on the master and region
>> servers.
>>
>> Traceback (most recent call last):
>>   File "./sqlline.py", line 25, in 
>> import phoenix_utils
>> ImportError: No module named phoenix_utils
>>
>> Pardon me for my knowledge about python.
>>
>> Thanks,
>> Amit
>>
>> On Fri, Feb 26, 2016 at 11:26 PM, Murugesan, Rani 
>> &l

RE: Re: HBase Phoenix Integration

2016-03-01 Thread Dor Ben Dov
Sun,
That is an old post. Do you know of any news about Cloudera 
adopting the new Apache Phoenix 4.7?

Dor

From: Fulin Sun [mailto:su...@certusnet.com.cn]
Sent: Tuesday, 01 March 2016 09:14
To: user; James Taylor
Subject: Re: Re: HBase Phoenix Integration

Hi Amit,
Yeah, I meet the same error message when doing mvn package. I compared the 
Apache Phoenix 4.6.0-HBase-1.0 code with this repo's code for CDH 5.5.1, and I 
did not find the abstract class BinaryCompatiblePhoenixBaseDecoder in the former.
Hope someone can explain this and provide a workaround.

If there is no way to resolve this, I will keep using the Cloudera Labs Phoenix 
version from here:
https://blog.cloudera.com/blog/2015/11/new-apache-phoenix-4-5-2-package-from-cloudera-labs/


Thanks,
Sun.




From: Amit Shah <amits...@gmail.com>
Date: 2016-03-01 15:37
To: user@phoenix.apache.org; jamestaylor <jamestay...@apache.org>
Subject: Re: HBase Phoenix Integration
Hi James,
I get a compilation error along with multiple warnings when packaging the 
4.6-HBase-1.0-cdh5.5 branch. Attached is the error.

Also, I realized that the pom.xml indicates the branch is for Cloudera CDH 
version 5.5.1. Do you know if it would work for the latest CDH version, 5.5.2?

Thanks,
Amit.

On Tue, Mar 1, 2016 at 12:05 PM, Dor Ben Dov 
<dor.ben-...@amdocs.com> wrote:
James,
Do you have any problems working with the latest Phoenix on CDH 5.5.x?

Dor

From: James Taylor [mailto:jamestay...@apache.org]
Sent: Tuesday, 01 March 2016 08:24
To: user
Cc: Murugesan, Rani
Subject: Re: HBase Phoenix Integration

Hi Amit,

For Phoenix 4.6 on CDH, try using this git repo instead, courtesy of Andrew 
Purtell: 
https://github.com/chiastic-security/phoenix-for-cloudera/tree/4.6-HBase-1.0-cdh5.5

Thanks,
James



On Mon, Feb 29, 2016 at 10:19 PM, Amit Shah 
<amits...@gmail.com> wrote:
Hi Sergey,

I get a lot of compilation errors when I compile the source code for the 
4.6-HBase-1.0 branch or the v4.7.0-HBase-1.0-rc3 tag. Note that the source 
compilation succeeds when the changes to include the Cloudera-dependent versions 
are not included. The only difference between the code changes suggested in the 
Stack Overflow post and mine is the Cloudera CDH version; I am using CDH 5.5.2. 
I didn't quite follow the reason behind the code changes needed in Phoenix when 
deployed on CDH.

Thanks,
Amit.

On Tue, Mar 1, 2016 at 1:15 AM, Sergey Soldatov 
<sergeysolda...@gmail.com> wrote:
Hi Amit,

Switching to 4.3 means you need HBase 0.98. What kind of problem did you
experience after building 4.6 from sources with the changes suggested on
Stack Overflow?

Thanks,
Sergey

On Sun, Feb 28, 2016 at 10:49 PM, Amit Shah 
<amits...@gmail.com> wrote:
> An update -
>
> I was able to execute "./sqlline.py " command but I
> get the same exception as I mentioned earlier.
>
> Later I tried following the steps mentioned on this link with phoenix 4.3.0
> but I still get an error this time with a different stack trace (attached to
> this mail)
>
> Any help would be appreciated
>
> On Sat, Feb 27, 2016 at 8:03 AM, Amit Shah 
> <amits...@gmail.com> wrote:
>>
>> Hi Murugesan,
>>
>> What preconditions would I need on the server to execute the python
>> script? I have Python 2.7.5 installed on the zookeeper server. If I just
>> copy the sqlline script to the /etc/hbase/conf directory and execute it I
>> get the below import errors. Note this time I had 4.5.2-HBase-1.0 version
>> server and core phoenix jars in HBase/lib directory on the master and region
>> servers.
>>
>> Traceback (most recent call last):
>>   File "./sqlline.py", line 25, in 
>> import phoenix_utils
>> ImportError: No module named phoenix_utils
>>
>> Pardon me for my knowledge about python.
>>
>> Thanks,
>> Amit
>>
>> On Fri, Feb 26, 2016 at 11:26 PM, Murugesan, Rani 
>> <ranmu...@visa.com>
>> wrote:
>>>
>>> Did you test and confirm your phoenix shell from the zookeeper server?
>>>
>>> cd /etc/hbase/conf
>>>
>>> > phoenix-sqlline.py :2181
>>>
>>>
>>>
>>>
>>>
>>> From: Amit Shah [mailto:amits...@gmail.com]
>>> Sent: Friday, February 26, 2016 4:45 AM
>>> To: user@phoenix.apache.org
>>> Subject: HBase Phoenix Integration
>>>
>>>
>>>
>>> Hello,
>>&

RE: HBase Phoenix Integration

2016-02-29 Thread Dor Ben Dov
James,
Do you have any problems working with the latest Phoenix on CDH 5.5.x?

Dor

From: James Taylor [mailto:jamestay...@apache.org]
Sent: Tuesday, 01 March 2016 08:24
To: user
Cc: Murugesan, Rani
Subject: Re: HBase Phoenix Integration

Hi Amit,

For Phoenix 4.6 on CDH, try using this git repo instead, courtesy of Andrew 
Purtell: 
https://github.com/chiastic-security/phoenix-for-cloudera/tree/4.6-HBase-1.0-cdh5.5

Thanks,
James



On Mon, Feb 29, 2016 at 10:19 PM, Amit Shah wrote:
Hi Sergey,

I get a lot of compilation errors when I compile the source code for the 
4.6-HBase-1.0 branch or the v4.7.0-HBase-1.0-rc3 tag. Note that the source 
compilation succeeds when the changes to include the Cloudera-dependent versions 
are not included. The only difference between the code changes suggested in the 
Stack Overflow post and mine is the Cloudera CDH version; I am using CDH 5.5.2. 
I didn't quite follow the reason behind the code changes needed in Phoenix when 
deployed on CDH.

Thanks,
Amit.

On Tue, Mar 1, 2016 at 1:15 AM, Sergey Soldatov wrote:
Hi Amit,

Switching to 4.3 means you need HBase 0.98. What kind of problem did you
experience after building 4.6 from sources with the changes suggested on
Stack Overflow?

Thanks,
Sergey

On Sun, Feb 28, 2016 at 10:49 PM, Amit Shah wrote:
> An update -
>
> I was able to execute "./sqlline.py " command but I
> get the same exception as I mentioned earlier.
>
> Later I tried following the steps mentioned on this link with phoenix 4.3.0
> but I still get an error this time with a different stack trace (attached to
> this mail)
>
> Any help would be appreciated
>
> On Sat, Feb 27, 2016 at 8:03 AM, Amit Shah wrote:
>>
>> Hi Murugesan,
>>
>> What preconditions would I need on the server to execute the python
>> script? I have Python 2.7.5 installed on the zookeeper server. If I just
>> copy the sqlline script to the /etc/hbase/conf directory and execute it I
>> get the below import errors. Note this time I had 4.5.2-HBase-1.0 version
>> server and core phoenix jars in HBase/lib directory on the master and region
>> servers.
>>
>> Traceback (most recent call last):
>>   File "./sqlline.py", line 25, in 
>> import phoenix_utils
>> ImportError: No module named phoenix_utils
>>
>> Pardon me for my knowledge about python.
>>
>> Thanks,
>> Amit
>>
>> On Fri, Feb 26, 2016 at 11:26 PM, Murugesan, Rani wrote:
>>>
>>> Did you test and confirm your phoenix shell from the zookeeper server?
>>>
>>> cd /etc/hbase/conf
>>>
>>> > phoenix-sqlline.py :2181
>>>
>>>
>>>
>>>
>>>
>>> From: Amit Shah [mailto:amits...@gmail.com]
>>> Sent: Friday, February 26, 2016 4:45 AM
>>> To: user@phoenix.apache.org
>>> Subject: HBase Phoenix Integration
>>>
>>>
>>>
>>> Hello,
>>>
>>>
>>>
>>> I have been trying to install phoenix on my cloudera hbase cluster.
>>> Cloudera version is CDH5.5.2 while HBase version is 1.0.
>>>
>>>
>>>
>>> I copied the server & core jar (version 4.6-HBase-1.0) on the master and
>>> region servers and restarted the hbase cluster. I copied the corresponding
>>> client jar on my SQuirrel client but I get an exception on connect. Pasted
>>> below. The connection url is “jdbc:phoenix::2181".
>>>
>>> I even tried compiling the source by adding cloudera dependencies as
>>> suggested on this post but didn't succeed.
>>>
>>>
>>>
>>> Any suggestions to make this work?
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Amit.
>>>
>>>
>>>
>>> 
>>>
>>>
>>>
>>> Caused by:
>>> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException):
>>> org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM.CATALOG:
>>> org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
>>>
>>> at
>>> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
>>>
>>> at
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1319)
>>>
>>> at
>>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11715)
>>>
>>> at
>>> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7388)
>>>
>>> at
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1776)
>>>
>>> at
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1758)
>>>
>>> at
>>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
>>>
>>> at
>>> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
>>>
>>>

RE: Cloudera and Phoenix

2016-02-21 Thread Dor Ben Dov
Ben,
Thanks for the answer.
Dor

From: Benjamin Kim [mailto:bbuil...@gmail.com]
Sent: Sunday, 21 February 2016 21:36
To: user@phoenix.apache.org
Subject: Re: Cloudera and Phoenix

I don’t know if Cloudera will support Phoenix going forward. There are a few 
things that lead me to think this.

  1.  No activity on a new port of Phoenix 4.6 or 4.7 in Cloudera Labs, as 
mentioned below
  2.  In the Cloudera Community groups, I got no reply to my question about 
help compiling Phoenix 4.7 for CDH
  3.  In the Spark Users group, there’s active discussion about the Spark-on-HBase 
module that was developed by Cloudera, and it will be out in early summer.

  *   http://blog.cloudera.com/blog/2015/08/apache-spark-comes-to-apache-hbase-with-hbase-spark-module/

My bet is that Cloudera is going with the Spark solution since it’s their baby, 
and it can natively work with HBase tables directly. So, would this mean that 
Phoenix is a no-go for CDH going forward? I hope not.

Cheers,
Ben


On Feb 21, 2016, at 11:15 AM, James Taylor 
<jamestay...@apache.org> wrote:

Hi Dor,

Whether or not Phoenix becomes part of CDH is not under our control. It *is* 
under your control, though (assuming you're a customer of CDH). The *only* way 
Phoenix will transition from being in Cloudera Labs to being part of the 
official CDH distro is if you and other customers demand it.

Thanks,
James

On Sun, Feb 21, 2016 at 10:03 AM, Dor Ben Dov 
<dor.ben-...@amdocs.com> wrote:
Stephen,

Are there any plans, or do you or anyone else see the possibility, that despite 
all of the below it will become an official release?

Dor

From: Stephen Wilcoxon [mailto:wilco...@gmail.com]
Sent: Sunday, 21 February 2016 19:37
To: user@phoenix.apache.org
Subject: Re: Cloudera and Phoenix

As of a few months ago, Cloudera includes Phoenix as a "lab" (basically beta) 
but it was out-of-date.  From what I gather, the official Phoenix releases will 
not run on Cloudera without modifications (someone was doing unofficial 
Phoenix/Cloudera releases but I'm not sure if they still are or not).

On Sun, Feb 21, 2016 at 6:39 AM, Dor Ben Dov 
<dor.ben-...@amdocs.com> wrote:
Hi All,

Is Phoenix officially released in Cloudera? Any plans to do so if not?

Regards,

Dor ben Dov

From: Benjamin Kim [mailto:bbuil...@gmail.com]
Sent: Friday, 19 February 2016 19:41
To: user@phoenix.apache.org
Subject: Re: Spark Phoenix Plugin

All,

Thanks for the help. I have switched out Cloudera’s HBase 1.0.0 with the 
current Apache HBase 1.1.3. Also, I installed Phoenix 4.7.0, and everything 
works fine except for the Phoenix Spark Plugin. I wonder if it’s a version 
incompatibility issue with Spark 1.6. Has anyone tried compiling 4.7.0 using 
Spark 1.6?

Thanks,
Ben

On Feb 12, 2016, at 6:33 AM, Benjamin Kim 
<bbuil...@gmail.com> wrote:

Anyone know when Phoenix 4.7 will be officially released? And what Cloudera 
distribution versions will it be compatible with?

Thanks,
Ben

On Feb 10, 2016, at 11:03 AM, Benjamin Kim 
<bbuil...@gmail.com> wrote:

Hi Pierre,

I am getting this error now.

Error: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
SYSTEM.CATALOG,,1453397732623.8af7b44f3d7609eb301ad98641ff2611.: 
org.apache.hadoop.hbase.client.Delete.setAttribute(Ljava/lang/String;[B)Lorg/apache/hadoop/hbase/client/Delete;

I even tried to use sqlline.py to do some queries too. It resulted in the same 
error. I followed the installation instructions. Is there something missing?

Thanks,
Ben


On Feb 9, 2016, at 10:20 AM, Ravi Kiran 
<maghamraviki...@gmail.com> wrote:

Hi Pierre,

  Try your luck for building the artifacts from 
https://github.com/chiastic-security/phoenix-for-cloudera. Hopefully it helps.

Regards
Ravi .

On Tue, Feb 9, 2016 at 10:04 AM, Benjamin Kim 
<bbuil...@gmail.com> wrote:
Hi Pierre,

I found this article about how Cloudera’s version of HBase is very different 
than Apache HBase so it must be compiled using Cloudera’s repo and versions. 
But, I’m not having any success with it.

http://stackoverflow.com/questions/31849454/using-phoenix-with-cloudera-hbase-installed-from-repo

There’s also a Chinese site that does the same thing.

https://www.zybuluo.com/xtccc/note/205739

I keep getting errors like the ones below.

[ERROR] 
/opt/tools/phoenix/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexMerger.java:[110,29]
 cannot find symbol
[ERROR] symbol:   class Region
[ERROR] location: class org.apache.hadoop.hbase.regionserver.LocalIndexMerger
…

Have you tried this also?

As 

RE: Cloudera and Phoenix

2016-02-21 Thread Dor Ben Dov
James,

Understood.
Thanks for the reply,
Dor

From: James Taylor [mailto:jamestay...@apache.org]
Sent: Sunday, 21 February 2016 21:16
To: user
Subject: Re: Cloudera and Phoenix

Hi Dor,

Whether or not Phoenix becomes part of CDH is not under our control. It *is* 
under your control, though (assuming you're a customer of CDH). The *only* way 
Phoenix will transition from being in Cloudera Labs to being part of the 
official CDH distro is if you and other customers demand it.

Thanks,
James

On Sun, Feb 21, 2016 at 10:03 AM, Dor Ben Dov 
<dor.ben-...@amdocs.com> wrote:
Stephen,

Are there any plans, or do you or anyone else see the possibility, that despite 
all of the below it will become an official release?

Dor

From: Stephen Wilcoxon [mailto:wilco...@gmail.com]
Sent: Sunday, 21 February 2016 19:37
To: user@phoenix.apache.org
Subject: Re: Cloudera and Phoenix

As of a few months ago, Cloudera includes Phoenix as a "lab" (basically beta) 
but it was out-of-date.  From what I gather, the official Phoenix releases will 
not run on Cloudera without modifications (someone was doing unofficial 
Phoenix/Cloudera releases but I'm not sure if they still are or not).

On Sun, Feb 21, 2016 at 6:39 AM, Dor Ben Dov 
<dor.ben-...@amdocs.com> wrote:
Hi All,

Is Phoenix officially released in Cloudera? Any plans to do so if not?

Regards,

Dor ben Dov

From: Benjamin Kim [mailto:bbuil...@gmail.com]
Sent: Friday, 19 February 2016 19:41
To: user@phoenix.apache.org
Subject: Re: Spark Phoenix Plugin

All,

Thanks for the help. I have switched out Cloudera’s HBase 1.0.0 with the 
current Apache HBase 1.1.3. Also, I installed Phoenix 4.7.0, and everything 
works fine except for the Phoenix Spark Plugin. I wonder if it’s a version 
incompatibility issue with Spark 1.6. Has anyone tried compiling 4.7.0 using 
Spark 1.6?

Thanks,
Ben

On Feb 12, 2016, at 6:33 AM, Benjamin Kim 
<bbuil...@gmail.com> wrote:

Anyone know when Phoenix 4.7 will be officially released? And what Cloudera 
distribution versions will it be compatible with?

Thanks,
Ben

On Feb 10, 2016, at 11:03 AM, Benjamin Kim 
<bbuil...@gmail.com> wrote:

Hi Pierre,

I am getting this error now.

Error: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
SYSTEM.CATALOG,,1453397732623.8af7b44f3d7609eb301ad98641ff2611.: 
org.apache.hadoop.hbase.client.Delete.setAttribute(Ljava/lang/String;[B)Lorg/apache/hadoop/hbase/client/Delete;

I even tried to use sqlline.py to do some queries too. It resulted in the same 
error. I followed the installation instructions. Is there something missing?

Thanks,
Ben


On Feb 9, 2016, at 10:20 AM, Ravi Kiran 
<maghamraviki...@gmail.com> wrote:

Hi Pierre,

  Try your luck for building the artifacts from 
https://github.com/chiastic-security/phoenix-for-cloudera. Hopefully it helps.

Regards
Ravi .

On Tue, Feb 9, 2016 at 10:04 AM, Benjamin Kim 
<bbuil...@gmail.com> wrote:
Hi Pierre,

I found this article about how Cloudera’s version of HBase is very different 
than Apache HBase so it must be compiled using Cloudera’s repo and versions. 
But, I’m not having any success with it.

http://stackoverflow.com/questions/31849454/using-phoenix-with-cloudera-hbase-installed-from-repo

There’s also a Chinese site that does the same thing.

https://www.zybuluo.com/xtccc/note/205739

I keep getting errors like the ones below.

[ERROR] 
/opt/tools/phoenix/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexMerger.java:[110,29]
 cannot find symbol
[ERROR] symbol:   class Region
[ERROR] location: class org.apache.hadoop.hbase.regionserver.LocalIndexMerger
…

Have you tried this also?

As a last resort, we will have to abandon Cloudera’s HBase for Apache’s HBase.

Thanks,
Ben


On Feb 8, 2016, at 11:04 PM, pierre lacave 
<pie...@lacave.me> wrote:

Haven't met that one.
According to SPARK-1867, the real issue is hidden.
I'd proceed by elimination; maybe try in local[*] mode first.
https://issues.apache.org/jira/plugins/servlet/mobile#issue/SPARK-1867

On Tue, 9 Feb 2016, 04:58 Benjamin Kim 
<bbuil...@gmail.com> wrote:
Pierre,

I got it to work using phoenix-4.7.0-HBase-1.0-client-spark.jar. But, now, I 
get this error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 
3, 
prod-dc1-datanode151.pdc1i.gradientx.com<http://prod-dc1-datanode151.pdc1i.gradientx.com/>):
 java.lang.IllegalStateException: unread block data

It happens when I do:

RE: Cloudera and Phoenix

2016-02-21 Thread Dor Ben Dov
Stephen,

Are there any plans, or do you or anyone else see the possibility, that despite 
all of the below it will become an official release?

Dor

From: Stephen Wilcoxon [mailto:wilco...@gmail.com]
Sent: Sunday, 21 February 2016 19:37
To: user@phoenix.apache.org
Subject: Re: Cloudera and Phoenix

As of a few months ago, Cloudera includes Phoenix as a "lab" (basically beta) 
but it was out-of-date.  From what I gather, the official Phoenix releases will 
not run on Cloudera without modifications (someone was doing unofficial 
Phoenix/Cloudera releases but I'm not sure if they still are or not).

On Sun, Feb 21, 2016 at 6:39 AM, Dor Ben Dov 
<dor.ben-...@amdocs.com> wrote:
Hi All,

Is Phoenix officially released in Cloudera? Any plans to do so if not?

Regards,

Dor ben Dov

From: Benjamin Kim [mailto:bbuil...@gmail.com]
Sent: Friday, 19 February 2016 19:41
To: user@phoenix.apache.org
Subject: Re: Spark Phoenix Plugin

All,

Thanks for the help. I have switched out Cloudera’s HBase 1.0.0 with the 
current Apache HBase 1.1.3. Also, I installed Phoenix 4.7.0, and everything 
works fine except for the Phoenix Spark Plugin. I wonder if it’s a version 
incompatibility issue with Spark 1.6. Has anyone tried compiling 4.7.0 using 
Spark 1.6?

Thanks,
Ben

On Feb 12, 2016, at 6:33 AM, Benjamin Kim 
<bbuil...@gmail.com> wrote:

Anyone know when Phoenix 4.7 will be officially released? And what Cloudera 
distribution versions will it be compatible with?

Thanks,
Ben

On Feb 10, 2016, at 11:03 AM, Benjamin Kim 
<bbuil...@gmail.com> wrote:

Hi Pierre,

I am getting this error now.

Error: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
SYSTEM.CATALOG,,1453397732623.8af7b44f3d7609eb301ad98641ff2611.: 
org.apache.hadoop.hbase.client.Delete.setAttribute(Ljava/lang/String;[B)Lorg/apache/hadoop/hbase/client/Delete;

I even tried to use sqlline.py to do some queries too. It resulted in the same 
error. I followed the installation instructions. Is there something missing?

Thanks,
Ben


On Feb 9, 2016, at 10:20 AM, Ravi Kiran 
<maghamraviki...@gmail.com> wrote:

Hi Pierre,

  Try your luck for building the artifacts from 
https://github.com/chiastic-security/phoenix-for-cloudera. Hopefully it helps.

Regards
Ravi .

On Tue, Feb 9, 2016 at 10:04 AM, Benjamin Kim 
<bbuil...@gmail.com> wrote:
Hi Pierre,

I found this article about how Cloudera’s version of HBase is very different 
than Apache HBase so it must be compiled using Cloudera’s repo and versions. 
But, I’m not having any success with it.

http://stackoverflow.com/questions/31849454/using-phoenix-with-cloudera-hbase-installed-from-repo

There’s also a Chinese site that does the same thing.

https://www.zybuluo.com/xtccc/note/205739

I keep getting errors like the ones below.

[ERROR] 
/opt/tools/phoenix/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexMerger.java:[110,29]
 cannot find symbol
[ERROR] symbol:   class Region
[ERROR] location: class org.apache.hadoop.hbase.regionserver.LocalIndexMerger
…

Have you tried this also?

As a last resort, we will have to abandon Cloudera’s HBase for Apache’s HBase.

Thanks,
Ben


On Feb 8, 2016, at 11:04 PM, pierre lacave 
<pie...@lacave.me> wrote:

Haven't met that one.
According to SPARK-1867, the real issue is hidden.
I'd proceed by elimination; maybe try in local[*] mode first.
https://issues.apache.org/jira/plugins/servlet/mobile#issue/SPARK-1867

On Tue, 9 Feb 2016, 04:58 Benjamin Kim 
<bbuil...@gmail.com> wrote:
Pierre,

I got it to work using phoenix-4.7.0-HBase-1.0-client-spark.jar. But, now, I 
get this error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 
3, 
prod-dc1-datanode151.pdc1i.gradientx.com<http://prod-dc1-datanode151.pdc1i.gradientx.com/>):
 java.lang.IllegalStateException: unread block data

It happens when I do:

df.show()

Getting closer…

Thanks,
Ben



On Feb 8, 2016, at 2:57 PM, pierre lacave 
<pie...@lacave.me> wrote:

This is the wrong client jar; try with the one named 
phoenix-4.7.0-HBase-1.1-client-spark.jar.

On Mon, 8 Feb 2016, 22:29 Benjamin Kim 
<bbuil...@gmail.com> wrote:
Hi Josh,

I tried again by putting the settings within the spark-default.conf.

spark.driver.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar
spark.executor.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar

I still get the same error using the code below.

import org.apache.phoenix.spark._
va

Cloudera and Phoenix

2016-02-21 Thread Dor Ben Dov
Hi All,

Is Phoenix officially released in Cloudera? Any plans to do so if not?

Regards,

Dor ben Dov

From: Benjamin Kim [mailto:bbuil...@gmail.com]
Sent: Friday, 19 February 2016 19:41
To: user@phoenix.apache.org
Subject: Re: Spark Phoenix Plugin

All,

Thanks for the help. I have switched out Cloudera’s HBase 1.0.0 with the 
current Apache HBase 1.1.3. Also, I installed Phoenix 4.7.0, and everything 
works fine except for the Phoenix Spark Plugin. I wonder if it’s a version 
incompatibility issue with Spark 1.6. Has anyone tried compiling 4.7.0 using 
Spark 1.6?

Thanks,
Ben

On Feb 12, 2016, at 6:33 AM, Benjamin Kim 
<bbuil...@gmail.com> wrote:

Anyone know when Phoenix 4.7 will be officially released? And what Cloudera 
distribution versions will it be compatible with?

Thanks,
Ben

On Feb 10, 2016, at 11:03 AM, Benjamin Kim 
<bbuil...@gmail.com> wrote:

Hi Pierre,

I am getting this error now.

Error: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
SYSTEM.CATALOG,,1453397732623.8af7b44f3d7609eb301ad98641ff2611.: 
org.apache.hadoop.hbase.client.Delete.setAttribute(Ljava/lang/String;[B)Lorg/apache/hadoop/hbase/client/Delete;

I even tried to use sqlline.py to do some queries too. It resulted in the same 
error. I followed the installation instructions. Is there something missing?

Thanks,
Ben


On Feb 9, 2016, at 10:20 AM, Ravi Kiran 
<maghamraviki...@gmail.com> wrote:

Hi Pierre,

  Try your luck for building the artifacts from 
https://github.com/chiastic-security/phoenix-for-cloudera. Hopefully it helps.

Regards
Ravi .

On Tue, Feb 9, 2016 at 10:04 AM, Benjamin Kim 
<bbuil...@gmail.com> wrote:
Hi Pierre,

I found this article about how Cloudera’s version of HBase is very different 
than Apache HBase so it must be compiled using Cloudera’s repo and versions. 
But, I’m not having any success with it.

http://stackoverflow.com/questions/31849454/using-phoenix-with-cloudera-hbase-installed-from-repo

There’s also a Chinese site that does the same thing.

https://www.zybuluo.com/xtccc/note/205739

I keep getting errors like the ones below.

[ERROR] 
/opt/tools/phoenix/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexMerger.java:[110,29]
 cannot find symbol
[ERROR] symbol:   class Region
[ERROR] location: class org.apache.hadoop.hbase.regionserver.LocalIndexMerger
…

Have you tried this also?

As a last resort, we will have to abandon Cloudera’s HBase for Apache’s HBase.

Thanks,
Ben


On Feb 8, 2016, at 11:04 PM, pierre lacave 
<pie...@lacave.me> wrote:

Haven't met that one.
According to SPARK-1867, the real issue is hidden.
I'd proceed by elimination; maybe try in local[*] mode first.
https://issues.apache.org/jira/plugins/servlet/mobile#issue/SPARK-1867

On Tue, 9 Feb 2016, 04:58 Benjamin Kim 
<bbuil...@gmail.com> wrote:
Pierre,

I got it to work using phoenix-4.7.0-HBase-1.0-client-spark.jar. But, now, I 
get this error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 
3, 
prod-dc1-datanode151.pdc1i.gradientx.com<http://prod-dc1-datanode151.pdc1i.gradientx.com/>):
 java.lang.IllegalStateException: unread block data

It happens when I do:

df.show()

Getting closer…

Thanks,
Ben



On Feb 8, 2016, at 2:57 PM, pierre lacave 
<pie...@lacave.me> wrote:

This is the wrong client jar; try with the one named 
phoenix-4.7.0-HBase-1.1-client-spark.jar.

On Mon, 8 Feb 2016, 22:29 Benjamin Kim 
<bbuil...@gmail.com> wrote:
Hi Josh,

I tried again by putting the settings within the spark-default.conf.

spark.driver.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar
spark.executor.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar

I still get the same error using the code below.

import org.apache.phoenix.spark._
val df = sqlContext.load("org.apache.phoenix.spark", Map("table" -> 
"TEST.MY_TEST", "zkUrl" -> "zk1,zk2,zk3:2181"))

Can you tell me what else you’re doing?

Thanks,
Ben


On Feb 8, 2016, at 1:44 PM, Josh Mahonin 
<jmaho...@gmail.com> wrote:

Hi Ben,

I'm not sure about the format of those command line options you're passing. 
I've had success with spark-shell just by setting the 
'spark.executor.extraClassPath' and 'spark.driver.extraClassPath' options on 
the spark config, as per the docs [1].
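
[For reference: a minimal sketch of setting those options programmatically on a
SparkConf from a standalone Scala driver; the jar path is a hypothetical
placeholder. Note that spark.driver.extraClassPath usually has to be supplied at
launch time (spark-defaults.conf or --driver-class-path), since the driver JVM is
already running by the time this conf is read.]

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Hypothetical location of the Phoenix client jar; replace with your own path.
val phoenixClientJar = "/opt/tools/phoenix/phoenix-4.7.0-HBase-1.1-client-spark.jar"

val conf = new SparkConf()
  .setAppName("phoenix-spark-sketch")
  // Executors start after the conf is read, so their extra classpath can be set here.
  .set("spark.executor.extraClassPath", phoenixClientJar)

val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)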

I'm not sure if there's anything special needed for CDH or not though. I also 
have a docker image I've been toying with which has a working Spark/Phoenix 
setup using the Phoenix 4.7.0

production

2016-02-17 Thread Dor Ben Dov
Hi,

Does anyone here know of, or use, the project in production? For how long?

Do the Tephra integration and transactions over HBase work well? Can I count on 
them under production stress?


Regards,
Dor Ben Dov
