Re: How to config zookeeper quorum in sqlline command?

2017-02-15 Thread Juvenn Woo
Hi Chanh, 

I think you need only specify one node:

./sqlline.py zoo1:2182
./sqlline.py zoo1
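
The same single-node connection works from the thick JDBC driver as well. A
minimal sketch, assuming only that the Phoenix client JAR is on the classpath
(SYSTEM.CATALOG always exists, so it makes a safe smoke-test query):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SqllineEquivalent {
        public static void main(String[] args) throws Exception {
            // Same ZooKeeper node sqlline.py points at; the driver discovers
            // the rest of the cluster once it reaches ZooKeeper.
            String url = "jdbc:phoenix:zoo1:2182";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }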

Best,
-- 
Juvenn Woo
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Thursday, 16 February 2017 at 12:41 PM, Chanh Le wrote:

> Hi everybody,
> I'm a newbie who started using Phoenix a few days ago. After doing some
> research about configuring the ZooKeeper quorum and still being stuck, I
> finally want to ask the community directly.
> 
> My current ZK quorum is a little odd: "hbase.zookeeper.quorum" = 
> "zoo1:2182,zoo1:2183,zoo2:2182". I edited env.sh and added 
> HBASE_PATH=/build/etl/hbase-1.2.4
> So I tried to run sqlline with:
>  ./sqlline.py zk://zoo1:2182,zoo1:2183,zoo2:2182/hbase
>  ./sqlline.py zoo1:2182,zoo1:2183,zoo2:2182:/hbase
> 
> Neither works.
> 
> So I tried ./queryserver.py start
> and used sqlline-thin.py, and got this error:
> Caused by: java.sql.SQLException: ERROR 102 (08001): Malformed connection 
> url. :zoo1:2182,zoo1:2183,zoo2:2182:2181:/hbase;
> 
> 
> Thank you in advance.
> Chanh
> 
> 
> 




How to config zookeeper quorum in sqlline command?

2017-02-15 Thread Chanh Le
Hi everybody,
I'm a newbie who started using Phoenix a few days ago. After doing some research 
about configuring the ZooKeeper quorum and still being stuck, I finally want to 
ask the community directly.

My current ZK quorum is a little odd: "hbase.zookeeper.quorum" = 
"zoo1:2182,zoo1:2183,zoo2:2182"
I edited env.sh and added HBASE_PATH=/build/etl/hbase-1.2.4
So I tried to run sqlline with:
 ./sqlline.py zk://zoo1:2182,zoo1:2183,zoo2:2182/hbase 

 ./sqlline.py zoo1:2182,zoo1:2183,zoo2:2182:/hbase

Neither works.

So I tried ./queryserver.py start
and used sqlline-thin.py, and got this error:
Caused by: java.sql.SQLException: ERROR 102 (08001): Malformed connection url. 
:zoo1:2182,zoo1:2183,zoo2:2182:2181:/hbase;


Thank you in advance.
Chanh

Phoenix Query Server tenant_id

2017-02-15 Thread Michael Young
Is it possible to pass the TenantID attribute on the URL when using the
phoenix query server?  For example,

/usr/hdp/2.5.0.0-1245/phoenix/bin/sqlline-thin.py
http://pqshost.myhost.com:8765;TenantId=tenant1

This works fine for me when connecting via jdbc.  Just didn't seem to work
with the query server.

Thanks,
-Michael
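
For comparison, here is a minimal thick-driver sketch of what "works via jdbc"
looks like, with the tenant passed as the TenantId connection property (the
ZooKeeper host zkhost and the view MY_TENANT_VIEW are placeholder names):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class TenantConnectionExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Phoenix reads the tenant from the "TenantId" connection property.
            props.setProperty("TenantId", "tenant1");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:phoenix:zkhost:2181", props)) {
                // Every statement on this connection is scoped to tenant1.
                conn.createStatement()
                    .executeQuery("SELECT * FROM MY_TENANT_VIEW LIMIT 1");
            }
        }
    }

Whether the thin client forwards the same property over the Avatica URL is the
open question above.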


Differences between the date/time types

2017-02-15 Thread Cheyenne Forbes
I can't find the difference between the date/time types; aren't they all the
same? Also, should I parse them as an int or a string?
TIME Type
TIME

The time data type. The format is yyyy-MM-dd hh:mm:ss, with both the date
and time parts maintained. Mapped to java.sql.Time. The binary
representation is an 8 byte long (the number of milliseconds from the
epoch), making it possible (although not necessarily recommended) to store
more information within a TIME column than what is provided by java.sql.Time.
Note that the internal representation is based on a number of milliseconds
since the epoch (which is based on a time in GMT), while java.sql.Time will
format times based on the client's local time zone. Please note that this
TIME type is different than the TIME type as defined by the SQL 92 standard
in that it includes year, month, and day components. As such, it is not in
compliance with the JDBC APIs. As the underlying data is still stored as a
long, only the presentation of the value is incorrect.

Example:

TIME
DATE Type
DATE

The date data type. The format is yyyy-MM-dd hh:mm:ss, with both the date
and time parts maintained to a millisecond accuracy. Mapped to java.sql.Date.
The binary representation is an 8 byte long (the number of milliseconds
from the epoch), making it possible (although not necessarily recommended)
to store more information within a DATE column than what is provided by
java.sql.Date. Note that the internal representation is based on a number
of milliseconds since the epoch (which is based on a time in GMT), while
java.sql.Date will format dates based on the client's local time zone.
Please note that this DATE type is different than the DATE type as defined
by the SQL 92 standard in that it includes a time component. As such, it is
not in compliance with the JDBC APIs. As the underlying data is still
stored as a long, only the presentation of the value is incorrect.

Example:

DATE
TIMESTAMP Type
TIMESTAMP

The timestamp data type. The format is yyyy-MM-dd hh:mm:ss[.nnnnnnnnn].
Mapped to java.sql.Timestamp with an internal representation of the number
of nanos from the epoch. The binary representation is 12 bytes: an 8 byte
long for the epoch time plus a 4 byte integer for the nanos. Note that the
internal representation is based on a number of milliseconds since the
epoch (which is based on a time in GMT), while java.sql.Timestamp will
format timestamps based on the client's local time zone.

Example:

TIMESTAMP
UNSIGNED_TIME Type
UNSIGNED_TIME

The unsigned time data type. The format is yyyy-MM-dd hh:mm:ss, with both
the date and time parts maintained to the millisecond accuracy. Mapped to
java.sql.Time. The binary representation is an 8 byte long (the number of
milliseconds from the epoch) matching the HBase Bytes.toBytes(long) method. The
purpose of this type is to map to existing HBase data that was serialized
using this HBase utility method. If that is not the case, use the regular
signed type instead.

Example:

UNSIGNED_TIME
UNSIGNED_DATE Type
UNSIGNED_DATE

The unsigned date data type. The format is yyyy-MM-dd hh:mm:ss, with both
the date and time parts maintained to a millisecond accuracy. Mapped to
java.sql.Date. The binary representation is an 8 byte long (the number of
milliseconds from the epoch) matching the HBase Bytes.toBytes(long) method. The
purpose of this type is to map to existing HBase data that was serialized
using this HBase utility method. If that is not the case, use the regular
signed type instead.

Example:

UNSIGNED_DATE
UNSIGNED_TIMESTAMP Type
UNSIGNED_TIMESTAMP

The unsigned timestamp data type. The format is yyyy-MM-dd hh:mm:ss[.nnnnnnnnn].
Mapped to java.sql.Timestamp with an internal representation of the number
of nanos from the epoch. The binary representation is 12 bytes: an 8 byte
long for the epoch time plus a 4 byte integer for the nanos, with the long
serialized through the HBase Bytes.toBytes(long) method. The purpose of this type
is to map to existing HBase data that was serialized using this HBase
utility method. If that is not the case, use the regular signed type
instead.

Example:

UNSIGNED_TIMESTAMP
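
To make the shared representation concrete, here is a small sketch (not from
the documentation above) showing that the signed types all carry the same
8-byte millisecond value, while the java.sql classes render it in the
client's local time zone:

    import java.sql.Date;
    import java.sql.Time;
    import java.sql.Timestamp;

    public class EpochMillisDemo {
        public static void main(String[] args) {
            long millis = 1487203200000L; // an arbitrary instant (GMT-based epoch millis)

            // DATE, TIME and TIMESTAMP all store this same long internally.
            Date date = new Date(millis);
            Time time = new Time(millis);
            Timestamp ts = new Timestamp(millis);
            ts.setNanos(123456789);       // TIMESTAMP adds a 4-byte nanos component

            // toString() formats in the JVM's local time zone, which is why the
            // displayed value can differ from the GMT-based internal value.
            System.out.println(date + " / " + time + " / " + ts);
        }
    }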


Re: Protobuf serialized column

2017-02-15 Thread Josh Elser

No, PQS is just a proxy to the Phoenix (thick) JDBC driver.

You are still limited to the capabilities of the Phoenix JDBC driver. 
You might be able to do something with a custom UDF, but I'm not sure.
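
If the cells only need to come back as opaque bytes, one workaround that stays
within the driver's capabilities is to map the column as VARBINARY and
deserialize on the client. A rough sketch, where the table T, its DATA column,
and the ZooKeeper host are placeholder names:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class ClientSideDeserialize {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost:2181");
                 ResultSet rs = conn.createStatement()
                         .executeQuery("SELECT DATA FROM T LIMIT 10")) {
                while (rs.next()) {
                    byte[] cell = rs.getBytes("DATA"); // still protobuf-encoded bytes
                    if (cell == null) {
                        continue;
                    }
                    // A generated protobuf class (e.g. MyRecord.parseFrom(cell))
                    // would decode the individual fields here, on the client.
                    System.out.println(cell.length + " bytes");
                }
            }
        }
    }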


Sudhir Babu Pothineni wrote:

Sorry for not asking the question properly. My understanding is that Avatica
(the query server) will serialize the data fetched from HBase and send it to
the JDBC thin client. But if the HBase column is one where each cell holds
protobuf-serialized bytes of multiple columns, is it possible to hack
the Avatica code to use this serialized data directly?

On Mon, Feb 13, 2017 at 7:26 PM, Sudhir Babu Pothineni
> wrote:

Is it possible to read a protobuf-serialized column in HBase into a Phoenix table?

Thanks
Sudhir




Re: Can I use protobuf2 with Phoenix instead of protobuf3?

2017-02-15 Thread Josh Elser

This is a non-issue...

Avatica's use of protobuf is completely shaded (relocated classes). You 
can use whatever version of protobuf in your client application you'd like.


Mark Heppner wrote:

If Cheyenne is talking about the query server, I'm not sure where you're
getting that from, Ted. It doesn't directly depend on protobufs; it's
pulled in from Avatica:
https://github.com/apache/phoenix/blob/v4.9.0-HBase-1.2/phoenix-queryserver/pom.xml
Avatica does use protobuf 3:
https://github.com/apache/calcite/blob/calcite-avatica-1.9.0/avatica/pom.xml

Cheyenne, could you try building with an earlier version of Avatica?

On Mon, Feb 13, 2017 at 8:41 PM, Ted Yu > wrote:

Phoenix uses protobuf 2.5
From pom.xml :

2.5.0

FYI

On Mon, Feb 13, 2017 at 4:52 PM, Cheyenne Forbes
> wrote:

my project highly depends on protobuf2, can I tell phoenix which
version of protobuf to read with when I am sending a request?





--
Mark Heppner


Re: FW: Failing on writing Dataframe to Phoenix

2017-02-15 Thread Josh Mahonin
Hi,

Spark is unable to load the Phoenix classes it needs. If you're using a
recent version of Phoenix, please ensure the "fat" *client* JAR (or for
older versions of Phoenix, the Phoenix *client*-spark JAR) is on your Spark
driver and executor classpath [1]. The 'phoenix-spark' JAR alone is insufficient
to provide Spark with all of the necessary classes.

[1] https://phoenix.apache.org/phoenix_spark.html

On Wed, Feb 15, 2017 at 10:29 AM, Nimrod Oren <
nimrod.o...@veracity-group.com> wrote:

> Hi,
>
>
>
> I'm trying to write a simple dataframe to Phoenix:
>
>  df.save("org.apache.phoenix.spark", SaveMode.Overwrite,
>
>   Map("table" -> "TEST_SAVE", "zkUrl" -> "zk.internal:2181"))
>
>
>
> I have the following in my pom.xml:
>
> <dependency>
>     <groupId>org.apache.phoenix</groupId>
>     <artifactId>phoenix-spark</artifactId>
>     <version>${phoenix-version}</version>
>     <scope>provided</scope>
> </dependency>
>
>
>
> and phoenix-spark is in spark-defaults.conf on all servers. However I'm
> getting the following error:
>
>
>
> Exception in thread "main" java.lang.NoClassDefFoundError:
> org/apache/phoenix/util/SchemaUtil
>
> at org.apache.phoenix.spark.DataFrameFunctions$$anonfun$1.
> apply(DataFrameFunctions.scala:33)
>
> at org.apache.phoenix.spark.DataFrameFunctions$$anonfun$1.
> apply(DataFrameFunctions.scala:33)
>
> at scala.collection.TraversableLike$$anonfun$map$
> 1.apply(TraversableLike.scala:244)
>
> at scala.collection.TraversableLike$$anonfun$map$
> 1.apply(TraversableLike.scala:244)
>
> at scala.collection.IndexedSeqOptimized$class.
> foreach(IndexedSeqOptimized.scala:33)
>
> at scala.collection.mutable.ArrayOps$ofRef.foreach(
> ArrayOps.scala:108)
>
> at scala.collection.TraversableLike$class.map(
> TraversableLike.scala:244)
>
> at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
>
> at org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(
> DataFrameFunctions.scala:33)
>
> at org.apache.phoenix.spark.DefaultSource.createRelation(
> DefaultSource.scala:47)
>
> at org.apache.spark.sql.execution.datasources.
> ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
>
> at org.apache.spark.sql.DataFrameWriter.save(
> DataFrameWriter.scala:148)
>
> at org.apache.spark.sql.DataFrame.save(DataFrame.scala:2045)
>
> at com.pelephone.TrueCallLoader$.main(TrueCallLoader.scala:184)
>
> at com.pelephone.TrueCallLoader.main(TrueCallLoader.scala)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:57)
>
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:606)
>
> at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>
> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
> at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.
> scala:121)
>
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
> Caused by: java.lang.ClassNotFoundException: org.apache.phoenix.util.
> SchemaUtil
>
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>
> at java.security.AccessController.doPrivileged(Native Method)
>
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>
>
>
> Am I missing something?
>
>
>
> Nimrod
>
>
>
>
>
>
>


FW: Failing on writing Dataframe to Phoenix

2017-02-15 Thread Nimrod Oren
Hi,



I'm trying to write a simple dataframe to Phoenix:

 df.save("org.apache.phoenix.spark", SaveMode.Overwrite,

  Map("table" -> "TEST_SAVE", "zkUrl" -> "zk.internal:2181"))



I have the following in my pom.xml:

<dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-spark</artifactId>
    <version>${phoenix-version}</version>
    <scope>provided</scope>
</dependency>



and phoenix-spark is in spark-defaults.conf on all servers. However I'm
getting the following error:



Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/phoenix/util/SchemaUtil

at
org.apache.phoenix.spark.DataFrameFunctions$$anonfun$1.apply(DataFrameFunctions.scala:33)

at
org.apache.phoenix.spark.DataFrameFunctions$$anonfun$1.apply(DataFrameFunctions.scala:33)

at
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)

at
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)

at
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)

at
scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)

at
scala.collection.TraversableLike$class.map(TraversableLike.scala:244)

at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)

at
org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:33)

at
org.apache.phoenix.spark.DefaultSource.createRelation(DefaultSource.scala:47)

at
org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)

at
org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)

at org.apache.spark.sql.DataFrame.save(DataFrame.scala:2045)

at com.pelephone.TrueCallLoader$.main(TrueCallLoader.scala:184)

at com.pelephone.TrueCallLoader.main(TrueCallLoader.scala)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)

at
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)

at
org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)

at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)

at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Caused by: java.lang.ClassNotFoundException:
org.apache.phoenix.util.SchemaUtil

at java.net.URLClassLoader$1.run(URLClassLoader.java:366)

at java.net.URLClassLoader$1.run(URLClassLoader.java:355)

at java.security.AccessController.doPrivileged(Native Method)

at java.net.URLClassLoader.findClass(URLClassLoader.java:354)

at java.lang.ClassLoader.loadClass(ClassLoader.java:425)

at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)

at java.lang.ClassLoader.loadClass(ClassLoader.java:358)



Am I missing something?



Nimrod