Re: table and schema rename

2016-02-09 Thread James Taylor
Hi Noam,
We don't support table rename currently - please file a JIRA. Depending on
how you're using Phoenix, you may be able to do this yourself by using
views[1]. For example, given a regular Phoenix table named my_table, you
can create a view on it like this:

CREATE VIEW my_view AS SELECT * FROM my_table;

Then from your code you can query and update the view in the same way as
the table. At this point if you want to change the name of the view, you
can just drop the view and create it again using the new name (this won't
impact your data). There are some gotchas if you're using secondary
indexes, though, as indexes would get dropped when you drop the view.
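
For instance, the whole flow from code is just a couple of statements over
JDBC. A rough sketch (the ZooKeeper quorum and the new view name below are
only placeholders):

  import java.sql.DriverManager

  // Placeholder quorum; the Phoenix client jar registers its JDBC driver automatically.
  val conn = DriverManager.getConnection("jdbc:phoenix:zk1,zk2,zk3:2181")
  val stmt = conn.createStatement()

  // "Rename" by dropping the old view and recreating it under the new name.
  // The rows live in my_table, so neither statement touches the data.
  stmt.execute("DROP VIEW my_view")
  stmt.execute("CREATE VIEW my_view2 AS SELECT * FROM my_table")

  // The new view can be queried and updated just like the table itself.
  val rs = stmt.executeQuery("SELECT COUNT(*) FROM my_view2")

  rs.close()
  stmt.close()
  conn.close()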

HTH. Thanks,

James

[1] http://phoenix.apache.org/views.html

On Tue, Feb 9, 2016 at 5:18 AM, Bulvik, Noam  wrote:

> Hi,
>
>
>
> Does Phoenix support fast rename of a table and/or schema, without the need
> to disable the table and clone the snapshot data as currently described in
> https://hbase.apache.org/book.html#table.rename
>
>
>
> If not, are there plans to support it in the future?
>
>
>
> Regards,
>
> Noam


Re: Spark Phoenix Plugin

2016-02-09 Thread Benjamin Kim
Hi Ravi,

I see that the version is still 4.6. Does it include the fix for the Spark 
plugin? https://issues.apache.org/jira/browse/PHOENIX-2503 


This is the main reason I need it.

Thanks,
Ben

> On Feb 9, 2016, at 10:20 AM, Ravi Kiran  wrote:
> 
> Hi Pierre,
> 
>   Try your luck for building the artifacts from
> https://github.com/chiastic-security/phoenix-for-cloudera. Hopefully it
> helps.
> 
> Regards
> Ravi .
> 
> On Tue, Feb 9, 2016 at 10:04 AM, Benjamin Kim wrote:
> Hi Pierre,
> 
> I found this article about how Cloudera’s version of HBase is very different 
> than Apache HBase so it must be compiled using Cloudera’s repo and versions. 
> But, I’m not having any success with it.
> 
> http://stackoverflow.com/questions/31849454/using-phoenix-with-cloudera-hbase-installed-from-repo
>  
> 
> 
> There’s also a Chinese site that does the same thing.
> 
> https://www.zybuluo.com/xtccc/note/205739 
> 
> 
> I keep getting errors like the ones below.
> 
> [ERROR] 
> /opt/tools/phoenix/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexMerger.java:[110,29]
>  cannot find symbol
> [ERROR] symbol:   class Region
> [ERROR] location: class org.apache.hadoop.hbase.regionserver.LocalIndexMerger
> …
> 
> Have you tried this also?
> 
> As a last resort, we will have to abandon Cloudera’s HBase for Apache’s HBase.
> 
> Thanks,
> Ben
> 
> 
>> On Feb 8, 2016, at 11:04 PM, pierre lacave wrote:
>> 
>> Haven't met that one.
>> 
>> According to SPARK-1867, the real issue is hidden.
>> 
>> I'd proceed by elimination; maybe try in local[*] mode first.
>> 
>> https://issues.apache.org/jira/plugins/servlet/mobile#issue/SPARK-1867 
>> 
>> On Tue, 9 Feb 2016, 04:58 Benjamin Kim wrote:
>> Pierre,
>> 
>> I got it to work using phoenix-4.7.0-HBase-1.0-client-spark.jar. But, now, I 
>> get this error:
>> 
>> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
>> stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 
>> (TID 3, prod-dc1-datanode151.pdc1i.gradientx.com): 
>> java.lang.IllegalStateException: unread block data
>> 
>> It happens when I do:
>> 
>> df.show()
>> 
>> Getting closer…
>> 
>> Thanks,
>> Ben
>> 
>> 
>> 
>>> On Feb 8, 2016, at 2:57 PM, pierre lacave wrote:
>>> 
>>> This is the wrong client jar try with the one named 
>>> phoenix-4.7.0-HBase-1.1-client-spark.jar 
>>> 
>>> 
>>> On Mon, 8 Feb 2016, 22:29 Benjamin Kim wrote:
>>> Hi Josh,
>>> 
>>> I tried again by putting the settings within spark-defaults.conf.
>>> 
>>> spark.driver.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar
>>> spark.executor.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar
>>> 
>>> I still get the same error using the code below.
>>> 
>>> import org.apache.phoenix.spark._
>>> val df = sqlContext.load("org.apache.phoenix.spark", Map("table" -> 
>>> "TEST.MY_TEST", "zkUrl" -> “zk1,zk2,zk3:2181"))
>>> 
>>> Can you tell me what else you’re doing?
>>> 
>>> Thanks,
>>> Ben
>>> 
>>> 
 On Feb 8, 2016, at 1:44 PM, Josh Mahonin wrote:
 
 Hi Ben,
 
 I'm not sure about the format of those command line options you're 
 passing. I've had success with spark-shell just by setting the 
 'spark.executor.extraClassPath' and 'spark.driver.extraClassPath' options 
 on the spark config, as per the docs [1].
 
 I'm not sure if there's anything special needed for CDH or not though. I 
 also have a docker image I've been toying with which has a working 
 Spark/Phoenix setup using the Phoenix 4.7.0 RC and Spark 1.6.0. It might 
 be a useful reference for you as well [2].
 
 Good luck,
 
 Josh
 
 [1] https://phoenix.apache.org/phoenix_spark.html 
 
 [2] https://github.com/jmahonin/docker-phoenix/tree/phoenix_spark 
 
 
 On Mon, Feb 8, 2016 at 4:29 PM, Benjamin Kim wrote:
 Hi Pierre,
 
 I tried to run in spark-shell using spark 1.6.0 by running this:
 
 spark-shell --master yarn-client --driver-class-path 
 /opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar 
 --driver-java-options 
 "-Dspark.executor.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar”
 
 The version of HBase is the one in CDH5.4.8, which is 1.0.0-cdh5.4.8.

Re: Spark Phoenix Plugin

2016-02-09 Thread Ravi Kiran
Hi Pierre,

  Try your luck for building the artifacts from
https://github.com/chiastic-security/phoenix-for-cloudera. Hopefully it
helps.

Regards
Ravi .

On Tue, Feb 9, 2016 at 10:04 AM, Benjamin Kim  wrote:

> Hi Pierre,
>
> I found this article about how Cloudera’s version of HBase is very
> different than Apache HBase so it must be compiled using Cloudera’s repo
> and versions. But, I’m not having any success with it.
>
>
> http://stackoverflow.com/questions/31849454/using-phoenix-with-cloudera-hbase-installed-from-repo
>
> There’s also a Chinese site that does the same thing.
>
> https://www.zybuluo.com/xtccc/note/205739
>
> I keep getting errors like the ones below.
>
> [ERROR]
> /opt/tools/phoenix/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexMerger.java:[110,29]
> cannot find symbol
> [ERROR] symbol:   class Region
> [ERROR] location: class
> org.apache.hadoop.hbase.regionserver.LocalIndexMerger
> …
>
> Have you tried this also?
>
> As a last resort, we will have to abandon Cloudera’s HBase for Apache’s
> HBase.
>
> Thanks,
> Ben
>
>
> On Feb 8, 2016, at 11:04 PM, pierre lacave  wrote:
>
> Haven't met that one.
>
> According to SPARK-1867, the real issue is hidden.
>
> I'd proceed by elimination; maybe try in local[*] mode first.
>
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/SPARK-1867
>
> On Tue, 9 Feb 2016, 04:58 Benjamin Kim  wrote:
>
>> Pierre,
>>
>> I got it to work using phoenix-4.7.0-HBase-1.0-client-spark.jar. But,
>> now, I get this error:
>>
>> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0
>> in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage
>> 0.0 (TID 3, prod-dc1-datanode151.pdc1i.gradientx.com):
>> java.lang.IllegalStateException: unread block data
>>
>> It happens when I do:
>>
>> df.show()
>>
>> Getting closer…
>>
>> Thanks,
>> Ben
>>
>>
>>
>> On Feb 8, 2016, at 2:57 PM, pierre lacave  wrote:
>>
>> This is the wrong client jar try with the one named
>> phoenix-4.7.0-HBase-1.1-client-spark.jar
>>
>> On Mon, 8 Feb 2016, 22:29 Benjamin Kim  wrote:
>>
>>> Hi Josh,
>>>
>>> I tried again by putting the settings within spark-defaults.conf.
>>>
>>>
>>> spark.driver.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar
>>>
>>> spark.executor.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar
>>>
>>> I still get the same error using the code below.
>>>
>>> import org.apache.phoenix.spark._
>>> val df = sqlContext.load("org.apache.phoenix.spark", Map("table" ->
>>> "TEST.MY_TEST", "zkUrl" -> “zk1,zk2,zk3:2181"))
>>>
>>> Can you tell me what else you’re doing?
>>>
>>> Thanks,
>>> Ben
>>>
>>>
>>> On Feb 8, 2016, at 1:44 PM, Josh Mahonin  wrote:
>>>
>>> Hi Ben,
>>>
>>> I'm not sure about the format of those command line options you're
>>> passing. I've had success with spark-shell just by setting the
>>> 'spark.executor.extraClassPath' and 'spark.driver.extraClassPath' options
>>> on the spark config, as per the docs [1].
>>>
>>> I'm not sure if there's anything special needed for CDH or not though. I
>>> also have a docker image I've been toying with which has a working
>>> Spark/Phoenix setup using the Phoenix 4.7.0 RC and Spark 1.6.0. It might be
>>> a useful reference for you as well [2].
>>>
>>> Good luck,
>>>
>>> Josh
>>>
>>> [1] https://phoenix.apache.org/phoenix_spark.html
>>> [2] https://github.com/jmahonin/docker-phoenix/tree/phoenix_spark
>>>
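
For reference, the configuration Josh describes above boils down to something
like the sketch below: the two extraClassPath settings go into
spark-defaults.conf (or are passed with --conf at launch), and the table is
then loaded through the phoenix-spark data source. This is only a sketch
against Spark 1.6 and the Phoenix 4.7.0 RC; the jar path, table name, and
ZooKeeper quorum are the ones quoted in this thread and stand in for your own
environment.

  // Assumed to be set before launch, e.g. in spark-defaults.conf:
  //   spark.driver.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.1-client-spark.jar
  //   spark.executor.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.1-client-spark.jar
  // (the Spark-specific client jar Pierre mentions, not the plain phoenix-*-client.jar)

  import org.apache.spark.sql.SQLContext

  // In spark-shell, sc already exists; in a standalone app, create a SparkContext first.
  val sqlContext = new SQLContext(sc)

  // Read a Phoenix table as a DataFrame via the phoenix-spark data source.
  val df = sqlContext.load("org.apache.phoenix.spark",
    Map("table" -> "TEST.MY_TEST", "zkUrl" -> "zk1,zk2,zk3:2181"))

  df.show()
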
>>> On Mon, Feb 8, 2016 at 4:29 PM, Benjamin Kim  wrote:
>>>
 Hi Pierre,

 I tried to run in spark-shell using spark 1.6.0 by running this:

 spark-shell --master yarn-client --driver-class-path
 /opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar --driver-java-options
 "-Dspark.executor.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar”

 The version of HBase is the one in CDH5.4.8, which is 1.0.0-cdh5.4.8.

 When I get to the line:

 val df = sqlContext.load("org.apache.phoenix.spark", Map("table" ->
 "TEST.MY_TEST", "zkUrl" -> "zk1,zk2,zk3:2181"))

 I get this error:

 java.lang.NoClassDefFoundError: Could not initialize class
 org.apache.spark.rdd.RDDOperationScope$

 Any ideas?

 Thanks,
 Ben


 On Feb 5, 2016, at 1:36 PM, pierre lacave  wrote:

 I don't know when the full release will be, RC1 just got pulled out,
 and expecting RC2 soon

 you can find them here

 https://dist.apache.org/repos/dist/dev/phoenix/


 there is a new phoenix-4.7.0-HBase-1.1-client-spark.jar that is all you
 need to have in spark classpath


 *Pierre Lacave*
 171 Skellig House, Custom House, Lower Mayor street, Dublin 1, Ireland
 Phone :   +353879128708

 On Fri, Feb 5, 2016 at 9:28 PM, Benjamin Kim 
 wrote:

> Hi Pierre,
>
> When will I be able to download this version?
>
> Thanks,
> Ben
>
>
>>

Re: Spark Phoenix Plugin

2016-02-09 Thread Benjamin Kim
Hi Pierre,

I found this article about how Cloudera’s version of HBase is very different 
than Apache HBase so it must be compiled using Cloudera’s repo and versions. 
But, I’m not having any success with it.

http://stackoverflow.com/questions/31849454/using-phoenix-with-cloudera-hbase-installed-from-repo

There’s also a Chinese site that does the same thing.

https://www.zybuluo.com/xtccc/note/205739

I keep getting errors like the ones below.

[ERROR] 
/opt/tools/phoenix/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexMerger.java:[110,29]
 cannot find symbol
[ERROR] symbol:   class Region
[ERROR] location: class org.apache.hadoop.hbase.regionserver.LocalIndexMerger
…

Have you tried this also?

As a last resort, we will have to abandon Cloudera’s HBase for Apache’s HBase.

Thanks,
Ben


> On Feb 8, 2016, at 11:04 PM, pierre lacave  wrote:
> 
> Haven't met that one.
> 
> According to SPARK-1867, the real issue is hidden.
> 
> I'd proceed by elimination; maybe try in local[*] mode first.
> 
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/SPARK-1867 
> 
> On Tue, 9 Feb 2016, 04:58 Benjamin Kim wrote:
> Pierre,
> 
> I got it to work using phoenix-4.7.0-HBase-1.0-client-spark.jar. But, now, I 
> get this error:
> 
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
> stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 
> (TID 3, prod-dc1-datanode151.pdc1i.gradientx.com): 
> java.lang.IllegalStateException: unread block data
> 
> It happens when I do:
> 
> df.show()
> 
> Getting closer…
> 
> Thanks,
> Ben
> 
> 
> 
>> On Feb 8, 2016, at 2:57 PM, pierre lacave wrote:
>> 
>> This is the wrong client jar try with the one named 
>> phoenix-4.7.0-HBase-1.1-client-spark.jar 
>> 
>> 
>> On Mon, 8 Feb 2016, 22:29 Benjamin Kim wrote:
>> Hi Josh,
>> 
>> I tried again by putting the settings within spark-defaults.conf.
>> 
>> spark.driver.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar
>> spark.executor.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar
>> 
>> I still get the same error using the code below.
>> 
>> import org.apache.phoenix.spark._
>> val df = sqlContext.load("org.apache.phoenix.spark", Map("table" -> 
>> "TEST.MY_TEST", "zkUrl" -> “zk1,zk2,zk3:2181"))
>> 
>> Can you tell me what else you’re doing?
>> 
>> Thanks,
>> Ben
>> 
>> 
>>> On Feb 8, 2016, at 1:44 PM, Josh Mahonin wrote:
>>> 
>>> Hi Ben,
>>> 
>>> I'm not sure about the format of those command line options you're passing. 
>>> I've had success with spark-shell just by setting the 
>>> 'spark.executor.extraClassPath' and 'spark.driver.extraClassPath' options 
>>> on the spark config, as per the docs [1].
>>> 
>>> I'm not sure if there's anything special needed for CDH or not though. I 
>>> also have a docker image I've been toying with which has a working 
>>> Spark/Phoenix setup using the Phoenix 4.7.0 RC and Spark 1.6.0. It might be 
>>> a useful reference for you as well [2].
>>> 
>>> Good luck,
>>> 
>>> Josh
>>> 
>>> [1] https://phoenix.apache.org/phoenix_spark.html 
>>> 
>>> [2] https://github.com/jmahonin/docker-phoenix/tree/phoenix_spark 
>>> 
>>> 
>>> On Mon, Feb 8, 2016 at 4:29 PM, Benjamin Kim wrote:
>>> Hi Pierre,
>>> 
>>> I tried to run in spark-shell using spark 1.6.0 by running this:
>>> 
>>> spark-shell --master yarn-client --driver-class-path 
>>> /opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar --driver-java-options 
>>> "-Dspark.executor.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar”
>>> 
>>> The version of HBase is the one in CDH5.4.8, which is 1.0.0-cdh5.4.8.
>>> 
>>> When I get to the line:
>>> 
>>> val df = sqlContext.load("org.apache.phoenix.spark", Map("table" -> 
>>> "TEST.MY_TEST", "zkUrl" -> "zk1,zk2,zk3:2181"))
>>> 
>>> I get this error:
>>> 
>>> java.lang.NoClassDefFoundError: Could not initialize class 
>>> org.apache.spark.rdd.RDDOperationScope$
>>> 
>>> Any ideas?
>>> 
>>> Thanks,
>>> Ben
>>> 
>>> 
 On Feb 5, 2016, at 1:36 PM, pierre lacave wrote:
 
 I don't know when the full release will be, RC1 just got pulled out, and 
 expecting RC2 soon
 
 you can find them here 
 
 https://dist.apache.org/repos/dist/dev/phoenix/ 
 
 
 
 there is a new phoenix-4.7.0-HBase-1.1-client-spark.jar that is all you 
 need to have in spark classpath
 
 
 Pierre Lacave
 171 Skellig House, Custom House, Lower Mayor street, Dublin 1, Ireland
 Phone : 

table and schema rename

2016-02-09 Thread Bulvik, Noam
Hi,

Does Phoenix support fast rename of a table and/or schema, without the need to
disable the table and clone the snapshot data as currently described in
https://hbase.apache.org/book.html#table.rename
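
(For reference, the snapshot-based procedure in that section of the HBase book
amounts to roughly the following, sketched here with the HBase 1.x Java Admin
API. OLD_TABLE and NEW_TABLE are placeholders, and this renames only the
underlying HBase table; it does not touch Phoenix's own metadata.)

  import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
  import org.apache.hadoop.hbase.client.ConnectionFactory

  // Placeholder names for the table being renamed.
  val oldTable = TableName.valueOf("OLD_TABLE")
  val newTable = TableName.valueOf("NEW_TABLE")

  val connection = ConnectionFactory.createConnection(HBaseConfiguration.create())
  val admin = connection.getAdmin

  // Take the table offline, snapshot it, clone the snapshot under the new
  // name, then clean up the snapshot and the old table.
  admin.disableTable(oldTable)
  admin.snapshot("rename_snapshot", oldTable)
  admin.cloneSnapshot("rename_snapshot", newTable)
  admin.deleteSnapshot("rename_snapshot")
  admin.deleteTable(oldTable)

  admin.close()
  connection.close()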

If not, are there plans to support it in the future?

Regards,
Noam





PRIVILEGED AND CONFIDENTIAL
PLEASE NOTE: The information contained in this message is privileged and 
confidential, and is intended only for the use of the individual to whom it is 
addressed and others who have been specifically authorized to receive it. If 
you are not the intended recipient, you are hereby notified that any 
dissemination, distribution or copying of this communication is strictly 
prohibited. If you have received this communication in error, or if any 
problems occur with transmission, please contact sender. Thank you.