Re: Azure Blob storage backed secure cluster not working with Phoenix

2017-10-30 Thread rafa
Hi Mallieswari,

I have no experience at all with Azure, but the exception points to
missing classes in your classpath.

Perhaps you need to add the proper jars to be able to interact with an
Azure FS:
https://hadoop.apache.org/docs/stable/hadoop-azure/index.html
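
For example, on Windows you might copy the Azure filesystem jars next to the
HBase/Phoenix libs (jar locations and versions below are illustrative for a
Hadoop 2.7.x layout; verify against your installation):

copy %HADOOP_HOME%\share\hadoop\tools\lib\hadoop-azure-2.7.2.jar %HBASE_HOME%\lib
copy %HADOOP_HOME%\share\hadoop\tools\lib\azure-storage-2.0.0.jar %HBASE_HOME%\lib

After that, restart the query server so the new jars are picked up.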

There is an interesting article at: http://beadooper.com/?p=409

Regards,
rafa


On Mon, Oct 30, 2017 at 12:43 PM, Mallieswari Dineshbabu <
dmalliesw...@gmail.com> wrote:

> Hi All,
>
>
>
> I have a Hadoop (version 2.7.2) - HBase (version 1.2.5) Kerberos-enabled
> secure cluster configured with Azure Blob Storage
> <https://hadoop.apache.org/docs/r2.7.2/hadoop-azure/index.html> *(WASBS)* as
> the default file system, in a Windows environment. In this cluster I am
> trying to integrate Phoenix (4.12.0), but I could not establish a sqlline
> thin client connection successfully. The Phoenix thin and thick clients work
> fine with the Hadoop secure cluster configured with the *local file system*,
> but they fail with the *Azure blob file system* alone.
>
>
>
> *Query*:
>
> Kindly let me know if I am missing anything, or whether an Azure Blob
> storage backed secure cluster is supported by Phoenix at all.
>
>
>
> *Configuration Changes:*
>
>   1. *Added phoenix-4.12.0-HBase-1.2-client.jar and
>      phoenix-4.12.0-HBase-1.2-thin-client.jar to hbase/lib*
>
>
>
>   2. *Added the following properties for Phoenix in
>      hbase/conf/hbase-site.xml (secure cluster):*
>
>
>
> <property>
>   <name>phoenix.queryserver.withRemoteUserExtractor</name>
>   <value>true</value>
> </property>
> <property>
>   <name>phoenix.queryserver.keytab.file</name>
>   <value><!-- keytab of the HTTP user (create the HTTP user and keytab) --></value>
> </property>
> <property>
>   <name>phoenix.queryserver.kerberos.principal</name>
>   <value><!-- principal of the HTTP user --></value>
> </property>
>
>
>
>   3. *Set the environment variables:*
>
> set JAVA_HOME=C:\Hadoop\Java\jdk1.7.0_51
> set HBASE_HOME=C:\Hadoop\HBase
> set HADOOP_HOME=C:\Hadoop\Hadoop
> set Python=C:\Hadoop\WinPython\python-2.7.10.amd64
> set path=%JAVA_HOME%\bin;%HBASE_HOME%\conf;%HADOOP_HOME%\etc\hadoop;%Python%
> set HADOOP_CONF_DIR=C:\Hadoop\Hadoop\etc\hadoop
> set HBASE_CONF_DIR=C:\Hadoop\HBase\conf
> set HADOOP_CLASSPATH=C:\Hadoop\Hadoop\share\hadoop
>
>
>
>   4. *Command used to start the Phoenix thin client*
>
> python sqlline-thin.py http://namenode.AD.ONMICROSOFT.COM:8765;authentication=SPNEGO;principal=phoenixclient@AD.ONMICROSOFT.COM;keytab=C:\\Hadoop\keytabs\phoenixclient.keytab;serialization=PROTOBUF
>
>
>
>
>
> No exception is produced while starting the *query server*.
>
>
>
> *Exception details from the sqlline thin client connection:*
>
> SQLException: ERROR 103 (08004): Unable to establish connection. -> SQLException: ERROR 103 (08004): Unable to establish connection. -> IOException: java.lang.reflect.InvocationTargetException -> InvocationTargetException: (null exception message) -> ExceptionInInitializerError: (null exception message) -> RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.azure.NativeAzureFileSystem not found -> ClassNotFoundException: Class org.apache.hadoop.fs.azure.NativeAzureFileSystem not found. Error -1 (0) null
>
> java.lang.RuntimeException: java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.
>         at org.apache.calcite.avatica.jdbc.JdbcMeta.openConnection(JdbcMeta.java:621)
>         at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:285)
>         at org.apache.calcite.avatica.remote.Service$OpenConnectionRequest.accept(Service.java:1771)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>         at org.apache.phoenix.queryserver.server.QueryServer$PhoenixDoAsCallback.doAsRemoteUser(QueryServer.java:463)
>         at org.apache.calcite.avatica.server.HttpServer$Builder$1.doAsRemoteUser(HttpServer.java:725)
>         at org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:120)
>         at org.apache.phoenix.shaded.org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:542)
>         at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
> Caused by: java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.
>         at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>
>   

Re: Cannot connect phoenix client in kerberos cluster

2017-10-20 Thread rafa
Hi Mallieswari,

As far as I know, you can configure the query server to connect to a secured
cluster with a proper keytab and principal in its configuration. Once the
query server is started that way, you can connect with a simple:

 python sqlline-thin.py http://hostname:8765

Can you log in correctly to the cluster with the keytab used? Could you
regenerate the keytab?
Have you started the query server with the keytab, and does the log confirm
it has authenticated correctly?
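
For example, independent of Phoenix (principal and keytab path are
placeholders; on Windows, kinit ships with the JDK under %JAVA_HOME%\bin):

kinit -kt C:\path\to\your.keytab yourprincipal@YOUR.REALM
klist

If kinit fails here, the problem is with the keytab or KDC, not with Phoenix.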

regards,
rafa

On Thu, Oct 19, 2017 at 7:55 AM, Mallieswari Dineshbabu <
dmalliesw...@gmail.com> wrote:

> Hi Rafa,
>
> following are the checksum failed exception with additional logs gathered
> in query server side.
>
> ... 19 more
> Caused by: java.security.GeneralSecurityException: Checksum failed
>         at sun.security.krb5.internal.crypto.dk.ArcFourCrypto.decrypt(ArcFourCrypto.java:408)
>         at sun.security.krb5.internal.crypto.ArcFourHmac.decrypt(ArcFourHmac.java:91)
>         at sun.security.krb5.internal.crypto.ArcFourHmacEType.decrypt(ArcFourHmacEType.java:100)
>         ... 25 more
> 17/10/19 05:42:10 DEBUG server.AvaticaJsonHandler: HTTP request from 172.0.0.4 is unauthenticated and authentication is required
> 17/10/19 05:42:10 DEBUG server.HttpConnection: org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection$SendCallback@5891b2c8[PROCESSING][i=ResponseInfo{HTTP/1.1 404 null,278,false},cb=org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel$CommitCallback@76bf3474] generate: NEED_HEADER (null,[p=0,l=278,c=2048,r=278],true)@START
> 17/10/19 05:42:10 DEBUG server.HttpConnection: org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection$SendCallback@5891b2c8[PROCESSING][i=ResponseInfo{HTTP/1.1 404 null,278,false},cb=org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel$CommitCallback@76bf3474] generate: FLUSH ([p=0,l=210,c=8192,r=210],[p=0,l=278,c=2048,r=278],true)@COMPLETING
> 17/10/19 05:42:10 DEBUG io.WriteFlusher: write: WriteFlusher@3d86d805{IDLE}[HeapByteBuffer@58e0ca22[p=0,l=210,c=8192,r=210]={<<...z-SNAPSHOT)\r\n\r\n>>>erver: Jetty(9.2\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00},HeapByteBuffer@30ce894[p=0,l=278,c=2048,r=278]={<<<\n\n \n\n>>>\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00}]
> 17/10/19 05:42:10 DEBUG io.WriteFlusher: update WriteFlusher@3d86d805{WRITING}:IDLE-->WRITING
>
> Regards,
> Mallieswari D
>
> On Thu, Oct 12, 2017 at 11:00 AM, Mallieswari Dineshbabu <
> dmalliesw...@gmail.com> wrote:
>
>> Hi Rafa,
>>
>> As you suggested, I have updated the JCE policy and tested again; now I am
>> getting a "Checksum failed" exception. Please find the error below.
>>
>>
>>
>> GSSException: Failure unspecified at GSS-API level (Mechanism level: *Checksum failed*)
>>         at sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:788)
>>         at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:342)
>>         at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:285)
>>         at sun.security.jgss.spnego.SpNegoContext.GSS_acceptSecContext(SpNegoContext.java:871)
>>         at sun.security.jgss.spnego.SpNegoContext.acceptSecContext(SpNegoContext.java:544)
>>         at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:342)
>>         at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:285)
>>         at org.apache.phoenix.shaded.org.eclipse.jetty.security.SpnegoLoginService.login(SpnegoLoginService.java:137)
>>         at org.apache.phoenix.shaded.org.eclipse.jetty.security.authentication.LoginAuthenticator.login(LoginAuthenticator.java:61)
>>         at org.apache.phoenix.shaded.org.eclipse.jetty.security.authentication.SpnegoAuthenticator.validateRequest(SpnegoAuthenticator.java:99)
>>         at org.apache.phoenix.shaded.org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:512)
>>         at org.apache.phoenix.shaded.org.eclipse.jetty.server.ha

Re: Phoenix client failed when used HACluster name on hbase.rootdir property

2017-10-12 Thread rafa
You cannot use "hacluster" if that hostname does not resolve to an IP. That is
what I tried to explain in my last mail.

Use the IP of the machine that is running the query server, or its hostname.
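
If you do want the name "hacluster" to resolve on the client, one workaround
sometimes used is a hosts entry pointing at the query server machine (the IP
below is only a placeholder):

192.168.1.10   hacluster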

Regards
Rafa

El 12 oct. 2017 6:19, "Mallieswari Dineshbabu" <dmalliesw...@gmail.com>
escribió:

> Hi Rafa,
>
> I still face "UnknownHostException: hacluster" when I start the query
> server with the cluster name 'hacluster' and try to connect the Phoenix
> client like below.
>
>
>
> bin>python sqlline-thin.py http://hacluster:8765
>
>
>
> Setting property: [incremental, false]
>
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
>
> issuing: !connect jdbc:phoenix:thin:url=http://hacluster:8765;serialization=PROTOBUF none none org.apache.phoenix.queryserver.client.Driver
>
> Connecting to jdbc:phoenix:thin:url=http://hacluster:8765;serialization=PROTOBUF
>
> java.lang.RuntimeException: java.net.UnknownHostException: hacluster
>
>         at org.apache.calcite.avatica.remote.AvaticaCommonsHttpClientImpl.send(AvaticaCommonsHttpClientImpl.java:169)
>         at org.apache.calcite.avatica.remote.RemoteProtobufService._apply(RemoteProtobufService.java:45)
>         at org.apache.calcite.avatica.remote.ProtobufService.apply(ProtobufService.java:81)
>         at org.apache.calcite.avatica.remote.Driver.connect(Driver.java:176)
>         at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>         at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>         at sqlline.Commands.connect(Commands.java:1064)
>         at sqlline.Commands.connect(Commands.java:996)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>         at sqlline.SqlLine.dispatch(SqlLine.java:809)
>         at sqlline.SqlLine.initArgs(SqlLine.java:588)
>         at sqlline.SqlLine.begin(SqlLine.java:661)
>         at sqlline.SqlLine.start(SqlLine.java:398)
>         at sqlline.SqlLine.main(SqlLine.java:291)
>
>
>
> Regards,
>
> Mallieswari D
>
>
>
> On Wed, Oct 11, 2017 at 5:53 PM, rafa <raf...@gmail.com> wrote:
>
>> Hi Mallieswari,
>>
>> The hbase.rootdir is a filesystem resource. If you have an HA NameNode and
>> a configured nameservice, it can point to the active namenode automatically.
>> As far as I know it is not related to HBase Master HA.
>>
>> The "hacluster" used in the command "python sqlline-thin.py
>> http://hacluster:8765" is a hostname resource. What do you obtain from
>> nslookup hacluster?
>>
>> To have several Phoenix query servers and achieve HA in that layer, you
>> would need a load balancer (software or hardware) defined to balance across
>> all your available query servers.
>> Regards,
>>
>> rafa
>>
>> On Wed, Oct 11, 2017 at 1:30 PM, Mallieswari Dineshbabu <
>> dmalliesw...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> I am trying to integrate Phoenix in a high-availability enabled
>>> Hadoop-HBase cluster. I have used the nameservice ID
>>> <https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html#Configuration_overview>
>>> instead of the HMaster's hostname in the following property, so that the
>>> active HMaster will be identified automatically in case of failover:
>>>
>>> <property>
>>>   <name>hbase.rootdir</name>
>>>   <value>hdfs://hacluster:9000/HBase</value>
>>> </property>
>>>
>>>
>>>
>>> Similarly, I tried connecting to the QueryServer that is *running on one of
>>> the HMaster nodes* from my thin client with the following URL, but I get
>>> the error "No suitable driver found for http://hacluster:8765":
>>>
>>>
>>>
>>> python sqlline-thin.py http://hacluster:8765
>>>
>>>
>>>
>>>
>>>
>>> *Please tell me what configuration needs to be done to connect the
>>> QueryServer with the nameservice ID.*
>>>
>>>
>>>
>>> Note: The same works fine when I specify the HMaster's IP address in both
>>> my HBase configuration and the sqlline connection string.
>>>
>>>
>>> --
>>> Thanks and regards
>>> D.Mallieswari
>>>
>>
>>
>
>
> --
> Thanks and regards
> D.Mallieswari
>


Re: Phoenix client failed when used HACluster name on hbase.rootdir property

2017-10-11 Thread rafa
Hi Mallieswari,

The hbase.rootdir is a filesystem resource. If you have an HA NameNode and a
configured nameservice, it can point to the active namenode automatically. As
far as I know it is not related to HBase Master HA.

The "hacluster" used in the command "python sqlline-thin.py
http://hacluster:8765" is a hostname resource. What do you obtain from
nslookup hacluster?

To have several Phoenix query servers and achieve HA in that layer, you would
need a load balancer (software or hardware) defined to balance across all
your available query servers.
Regards,

rafa

On Wed, Oct 11, 2017 at 1:30 PM, Mallieswari Dineshbabu <
dmalliesw...@gmail.com> wrote:

> Hi All,
>
> I am trying to integrate Phoenix in a high-availability enabled
> Hadoop-HBase cluster. I have used the nameservice ID
> <https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html#Configuration_overview>
> instead of the HMaster's hostname in the following property, so that the
> active HMaster will be identified automatically in case of failover:
>
> <property>
>   <name>hbase.rootdir</name>
>   <value>hdfs://hacluster:9000/HBase</value>
> </property>
>
>
>
> Similarly, I tried connecting to the QueryServer that is *running on one of
> the HMaster nodes* from my thin client with the following URL, but I get
> the error "No suitable driver found for http://hacluster:8765":
>
>
>
> python sqlline-thin.py http://hacluster:8765
>
>
>
>
>
> *Please tell me what configuration needs to be done to connect the
> QueryServer with the nameservice ID.*
>
>
>
> Note: The same works fine when I specify the HMaster's IP address in both my
> HBase configuration and the sqlline connection string.
>
>
> --
> Thanks and regards
> D.Mallieswari
>


Re: Cannot connect phoenix client in kerberos cluster

2017-10-11 Thread rafa
Hi Mallieswari,

The error:

KrbException: Encryption type AES256 CTS mode with HMAC SHA1-96 is not
supported/enabled

points to JCE not installed or incorrectly installed in the JVM.
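
A quick way to check each JVM is the following one-liner (jrunscript ships
with the JDK; this is a generic JCE check, not Phoenix-specific):

jrunscript -e "print(javax.crypto.Cipher.getMaxAllowedKeyLength('AES'))"

If it prints 128 rather than 2147483647, the unlimited-strength policy files
are not active in that JVM.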

What I have configured is: the Phoenix query server connects to the secured
cluster with a valid kerberos principal and keytab.

Access to the query server: sqlline-thin.py http://hostname:8765

Regards,
rafa


Re: Cannot connect phoenix client in kerberos cluster

2017-10-09 Thread rafa
Hi Mallieswari:

*Method 1:* python sqlline-thin.py http://hostname:8765

This should be enough to connect to Phoenix query server.

Increase the Phoenix Query Server log level to see if there is a problem
with it.
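
For example, assuming the stock log4j setup, you can raise the level in the
log4j.properties used by the query server (the file location varies by
distribution):

log4j.rootLogger=DEBUG,console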

regards,
rafa

On Fri, Oct 6, 2017 at 11:28 AM, Mallieswari Dineshbabu <
dmalliesw...@gmail.com> wrote:

> Hi rafa,
> I have this Kernel32 error in a normal Hadoop cluster too, but there I can
> successfully connect to the query server using sqlline-thin.py. In the
> Kerberos cluster I am getting the following error:
> java.lang.RuntimeException: Failed to execute HTTP Request, got HTTP/404
>         at org.apache.calcite.avatica.remote.AvaticaCommonsHttpClientSpnegoImpl.send(AvaticaCommonsHttpClientSpnegoImpl.java:148)
>         at org.apache.calcite.avatica.remote.DoAsAvaticaHttpClient$1.run(DoAsAvaticaHttpClient.java:40)
>         at org.apache.calcite.avatica.remote.DoAsAvaticaHttpClient$1.run(DoAsAvaticaHttpClient.java:38)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:360)
>         at org.apache.calcite.avatica.remote.DoAsAvaticaHttpClient.send(DoAsAvaticaHttpClient.java:38)
>         at org.apache.calcite.avatica.remote.RemoteProtobufService._apply(RemoteProtobufService.java:45)
>         at org.apache.calcite.avatica.remote.ProtobufService.apply(ProtobufService.java:81)
>         at org.apache.calcite.avatica.remote.Driver.connect(Driver.java:176)
>         at java.sql.DriverManager.getConnection(DriverManager.java:664)
>         at java.sql.DriverManager.getConnection(DriverManager.java:270)
>         at multiaccess.Jobs.PhoenixJava(Jobs.java:69)
>         at multiaccess.Jobs.executeQueries(Jobs.java:39)
>         at multiaccess.MultiAccess$1.call(MultiAccess.java:61)
>         at multiaccess.MultiAccess$1.call(MultiAccess.java:56)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
>
> Note: Phoenix package version: 4.11.0-HBase-1.2
> HBase version: 1.2.5
> Hadoop version- 2.7.2
>
>
> Please help me connect to the query server through the sqlline thin client.
>
> Regards,
> Mallieswari D
>
> On Thu, Oct 5, 2017 at 7:01 PM, rafa <raf...@gmail.com> wrote:
>
>> Hi,
>>
>> Method 1 should work, as long as the query server connects to the
>> cluster successfully with the configured keytab. It seems to be a classpath
>> problem on the client side:
>>
>> [ERROR] Terminal initialization failed; falling back to unsupported
>>
>> java.lang.NoClassDefFoundError: Could not initialize class
>> org.apache.phoenix.sh
>> aded.org.fusesource.jansi.internal.Kernel32
>>
>> I have no experience with Windows. It seems jline needs to be on the
>> classpath:
>>
>> https://jline.github.io/
>>
>> check this:
>>
>> https://issues.apache.org/jira/browse/HIVE-13824
>>
>> regards
>>
>>
>> On Thu, Oct 5, 2017 at 2:29 PM, Mallieswari Dineshbabu <
>> dmalliesw...@gmail.com> wrote:
>>
>>> Yes, it is installed in all the JVMs. Any other solution?
>>>
>>>
>>> On Wed, Oct 4, 2017 at 5:30 PM, rafa <raf...@gmail.com> wrote:
>>>
>>>> Hi Mallieswari,
>>>>
>>>> Perhaps the Java Cryptography Extension (JCE) Unlimited Strength
>>>> Jurisdiction Policy Files are not installed in all the JVMs ?
>>>>
>>>> Regards,
>>>> rafa
>>>>
>>>> On Wed, Oct 4, 2017 at 1:18 PM, Mallieswari Dineshbabu <
>>>> dmalliesw...@gmail.com> wrote:
>>>>
>>>>> Hi ,
>>>>>
>>>>>
>>>>>
>>>>> I have configured a phoenix package "apache-phoenix-4.11.0-HBase-1.2-bin"
>>>>> to Hbase version "1.2.5" in kerberos cluster.
>>>>>
>>>>>
>>>>>
>>>>> For the Phoenix secure cluster configuration, I have added the following
>>>>> properties to the *hbase-site.xml* present in *phoenix/bin*, along with
>>>>> the HBase configuration properties from the hbase/conf path, and
>>>>> referenced the *core-site.xml* and *hdfs-site.xml* files in the
>>>>> phoenix/bin path.
>>>>>
>>>>>
>>>>>
>>>>> phoenix.queryserver.keytab.file
>>>>>   The key to look for keytab file. (default: *unset*)
>>>>>
>>>

Re: Phoenix- Client Shell Exception when starting phoenix thin client service in windows environment

2017-10-05 Thread rafa
Hi,

It seems the client is getting connected finally:

org.apache.phoenix.queryserver.client.Driver
Connecting to jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF
Connected to: Apache Phoenix (version unknown version)
Driver: Phoenix Remote JDBC Driver (version unknown version)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)... 92/92 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:thin:url=http://localhost:876>


Try adding jline to the classpath:

http://repo1.maven.org/maven2/jline/jline/2.14.5/
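
For example, you could invoke sqlline directly with the jline jar prepended to
the classpath (paths below are illustrative; the version matches the link
above):

java -cp C:\path\to\jline-2.14.5.jar;C:\path\to\phoenix-4.11.0-HBase-1.2-thin-client.jar sqlline.SqlLine -d org.apache.phoenix.queryserver.client.Driver -u "jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF" -n none -p none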

regards

rafa



On Thu, Oct 5, 2017 at 2:53 PM, Mallieswari Dineshbabu <
dmalliesw...@gmail.com> wrote:

> Hi all,
>
> I am facing a "NoClassDefFoundError" when executing the Phoenix thin client
> via the terminal.
>
> Exception details from the thin client:
> [ERROR] Terminal initialization failed; falling back to unsupported
> java.lang.NoClassDefFoundError: Could not initialize class org.apache.phoenix.shaded.org.fusesource.jansi.internal.Kernel32
>         at org.apache.phoenix.shaded.org.fusesource.jansi.internal.WindowsSupport.getConsoleMode(WindowsSupport.java:50)
>         at org.apache.phoenix.shaded.jline.WindowsTerminal.getConsoleMode(WindowsTerminal.java:177)
>         at org.apache.phoenix.shaded.jline.WindowsTerminal.init(WindowsTerminal.java:80)
>         at org.apache.phoenix.shaded.jline.TerminalFactory.create(TerminalFactory.java:101)
>         at org.apache.phoenix.shaded.jline.TerminalFactory.get(TerminalFactory.java:159)
>         at sqlline.SqlLineOpts.<init>(SqlLineOpts.java:45)
>         at sqlline.SqlLine.<init>(SqlLine.java:55)
>         at sqlline.SqlLine.start(SqlLine.java:397)
>         at sqlline.SqlLine.main(SqlLine.java:291)
>         at org.apache.phoenix.queryserver.client.SqllineWrapper.main(SqllineWrapper.java:93)
>
> [ERROR] Terminal initialization failed; falling back to unsupported
> java.lang.NoClassDefFoundError: Could not initialize class org.apache.phoenix.shaded.org.fusesource.jansi.internal.Kernel32
>         at org.apache.phoenix.shaded.org.fusesource.jansi.internal.WindowsSupport.getConsoleMode(WindowsSupport.java:50)
>         at org.apache.phoenix.shaded.jline.WindowsTerminal.getConsoleMode(WindowsTerminal.java:177)
>         at org.apache.phoenix.shaded.jline.WindowsTerminal.init(WindowsTerminal.java:80)
>         at org.apache.phoenix.shaded.jline.TerminalFactory.create(TerminalFactory.java:101)
>         at sqlline.SqlLine.getConsoleReader(SqlLine.java:723)
>         at sqlline.SqlLine.begin(SqlLine.java:657)
>         at sqlline.SqlLine.start(SqlLine.java:398)
>         at sqlline.SqlLine.main(SqlLine.java:291)
>         at org.apache.phoenix.queryserver.client.SqllineWrapper.main(SqllineWrapper.java:93)
>
> Setting property: [incremental, false]
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> issuing: !connect jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF none none org.apache.phoenix.queryserver.client.Driver
> Connecting to jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF
> Connected to: Apache Phoenix (version unknown version)
> Driver: Phoenix Remote JDBC Driver (version unknown version)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to true to skip)... 92/92 (100%) Done
> Done
> sqlline version 1.2.0
> 0: jdbc:phoenix:thin:url=http://localhost:876>
>
> Environment Details
>
> OS - windows 7, 8.1 & 10.
> JAVA - jdk 1.7.0_51
> Phoenix - 4.11.0-HBase1.2
>
> On Linux, no error message is reproduced.
>
> What should I do to solve this problem?
>
> --
> Thanks and regards
> D.Mallieswari
>


Re: Cannot connect phoenix client in kerberos cluster

2017-10-05 Thread rafa
Hi,

Method 1 should work, as long as the query server connects to the cluster
successfully with the configured keytab. It seems to be a classpath problem
on the client side:

[ERROR] Terminal initialization failed; falling back to unsupported

java.lang.NoClassDefFoundError: Could not initialize class
org.apache.phoenix.sh
aded.org.fusesource.jansi.internal.Kernel32

I have no experience with Windows. It seems jline needs to be on the
classpath:

https://jline.github.io/

check this:

https://issues.apache.org/jira/browse/HIVE-13824

regards


On Thu, Oct 5, 2017 at 2:29 PM, Mallieswari Dineshbabu <
dmalliesw...@gmail.com> wrote:

> Yes, it is installed in all the JVMs. Any other solution?
>
>
> On Wed, Oct 4, 2017 at 5:30 PM, rafa <raf...@gmail.com> wrote:
>
>> Hi Mallieswari,
>>
>> Perhaps the Java Cryptography Extension (JCE) Unlimited Strength
>> Jurisdiction Policy Files are not installed in all the JVMs ?
>>
>> Regards,
>> rafa
>>
>> On Wed, Oct 4, 2017 at 1:18 PM, Mallieswari Dineshbabu <
>> dmalliesw...@gmail.com> wrote:
>>
>>> Hi ,
>>>
>>>
>>>
>>> I have configured a phoenix package "apache-phoenix-4.11.0-HBase-1.2-bin"
>>> to Hbase version "1.2.5" in kerberos cluster.
>>>
>>>
>>>
>>> For the Phoenix secure cluster configuration, I have added the following
>>> properties to the *hbase-site.xml* present in *phoenix/bin*, along with
>>> the HBase configuration properties from the hbase/conf path, and
>>> referenced the *core-site.xml* and *hdfs-site.xml* files in the
>>> phoenix/bin path.
>>>
>>>
>>>
>>> phoenix.queryserver.keytab.file
>>>   The key to look for keytab file. (default: *unset*)
>>> phoenix.queryserver.kerberos.principal
>>>   The kerberos principal to use when authenticating. (default: *unset*)
>>>
>>> Phoenix Query Server:
>>>
>>>
>>>
>>> Once the above properties were updated, the query server started
>>> successfully using the keytab.
>>>
>>>
>>>
>>> *Command to Server:*
>>>
>>> *python queryserver.py*
>>>
>>>
>>>
>>> Phoenix Client:
>>>
>>>
>>>
>>> Once the query server has started successfully, port 8765 comes alive.
>>> When I try to connect the client with the following command, it returns a
>>> GSS exception. Am I missing any steps in the configuration?
>>>
>>>
>>>
>>>
>>>
>>> *Command to Client:*
>>>
>>> The following are the methods I tried to connect in the secure cluster;
>>> neither works.
>>>
>>>
>>>
>>> *Method 1:* python sqlline-thin.py http://hostname:8765
>>>
>>> *Method 2:*
>>>
>>> python sqlthin-client.py http://hostname:8765;authentication=SPNEGO;principal=phoenix/org...@xx.x.com;keytab=C:\\path\\to\\HadoopKeyTabs\\phoenix.keytab
>>>
>>>
>>>
>>>
>>>
>>> *CLIENT SIDE ERROR:*
>>>
>>> x-4.11.0-HBase-1.2-bin\bin>python sqlline-thin.py http://namenode1:8765
>>>
>>> Failed to find hbase executable on PATH, defaulting serialization to
>>> PROTOBUF.
>>>
>>> [ERROR] Terminal initialization failed; falling back to unsupported
>>>
>>> java.lang.NoClassDefFoundError: Could not initialize class org.apache.phoenix.shaded.org.fusesource.jansi.internal.Kernel32
>>>         at org.apache.phoenix.shaded.org.fusesource.jansi.internal.WindowsSupport.getConsoleMode(WindowsSupport.java:50)
>>>         at org.apache.phoenix.shaded.jline.WindowsTerminal.getConsoleMode(WindowsTerminal.java:177)
>>>         at org.apache.phoenix.shaded.jline.WindowsTerminal.init(WindowsTerminal.java:80)
>>>         at org.apache.phoenix.shaded.jline.TerminalFactory.create(TerminalFactory.java:101)
>>>         at org.apache.phoenix.shaded.jline.TerminalFactory.get(TerminalFactory.java:159)
>>>         at 

Re: Cannot connect phoenix client in kerberos cluster

2017-10-04 Thread rafa
Hi Mallieswari,

Perhaps the Java Cryptography Extension (JCE) Unlimited Strength
Jurisdiction Policy Files are not installed in all the JVMs ?

Regards,
rafa

On Wed, Oct 4, 2017 at 1:18 PM, Mallieswari Dineshbabu <
dmalliesw...@gmail.com> wrote:

> Hi ,
>
>
>
> I have configured a phoenix package "apache-phoenix-4.11.0-HBase-1.2-bin"
> to Hbase version "1.2.5" in kerberos cluster.
>
>
>
> For the Phoenix secure cluster configuration, I have added the following
> properties to the *hbase-site.xml* present in *phoenix/bin*, along with the
> HBase configuration properties from the hbase/conf path, and referenced the
> *core-site.xml* and *hdfs-site.xml* files in the phoenix/bin path.
>
>
>
> phoenix.queryserver.keytab.file
>   The key to look for keytab file. (default: *unset*)
> phoenix.queryserver.kerberos.principal
>   The kerberos principal to use when authenticating. (default: *unset*)
>
> Phoenix Query Server:
>
>
>
> Once the above properties were updated, the query server started
> successfully using the keytab.
>
>
>
> *Command to Server:*
>
> *python queryserver.py*
>
>
>
> Phoenix Client:
>
>
>
> Once the query server has started successfully, port 8765 comes alive. When
> I try to connect the client with the following command, it returns a GSS
> exception. Am I missing any steps in the configuration?
>
>
>
>
>
> *Command to Client:*
>
> The following are the methods I tried to connect in the secure cluster;
> neither works.
>
>
>
> *Method 1:* python sqlline-thin.py http://hostname:8765
>
> *Method 2:*
>
> python sqlthin-client.py http://hostname:8765;authentication=SPNEGO;principal=phoenix/org...@xx.x.com;keytab=C:\\path\\to\\HadoopKeyTabs\\phoenix.keytab
>
>
>
>
>
> *CLIENT SIDE ERROR:*
>
> x-4.11.0-HBase-1.2-bin\bin>python sqlline-thin.py http://namenode1:8765
>
> Failed to find hbase executable on PATH, defaulting serialization to
> PROTOBUF.
>
> [ERROR] Terminal initialization failed; falling back to unsupported
>
> java.lang.NoClassDefFoundError: Could not initialize class org.apache.phoenix.shaded.org.fusesource.jansi.internal.Kernel32
>         at org.apache.phoenix.shaded.org.fusesource.jansi.internal.WindowsSupport.getConsoleMode(WindowsSupport.java:50)
>         at org.apache.phoenix.shaded.jline.WindowsTerminal.getConsoleMode(WindowsTerminal.java:177)
>         at org.apache.phoenix.shaded.jline.WindowsTerminal.init(WindowsTerminal.java:80)
>         at org.apache.phoenix.shaded.jline.TerminalFactory.create(TerminalFactory.java:101)
>         at org.apache.phoenix.shaded.jline.TerminalFactory.get(TerminalFactory.java:159)
>         at sqlline.SqlLineOpts.<init>(SqlLineOpts.java:45)
>         at sqlline.SqlLine.<init>(SqlLine.java:55)
>         at sqlline.SqlLine.start(SqlLine.java:397)
>         at sqlline.SqlLine.main(SqlLine.java:291)
>         at org.apache.phoenix.queryserver.client.SqllineWrapper$1.run(SqllineWrapper.java:88)
>         at org.apache.phoenix.queryserver.client.SqllineWrapper$1.run(SqllineWrapper.java:85)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>         at org.apache.phoenix.queryserver.client.SqllineWrapper.main(SqllineWrapper.java:85)
>
> [ERROR] Terminal initialization failed; falling back to unsupported
>
> java.lang.NoClassDefFoundError: Could not initialize class org.apache.phoenix.shaded.org.fusesource.jansi.internal.Kernel32
>         at org.apache.phoenix.shaded.org.fusesource.jansi.internal.WindowsSupport.getConsoleMode(WindowsSupport.java:50)
>         at org.apache.phoenix.shaded.jline.WindowsTerminal.getConsoleMode(WindowsTerminal.java:177)
>         at org.apache.phoenix.shaded.jline.WindowsTerminal.init(WindowsTerminal.java:80)
>         at org.apache.phoenix.shaded.jline.TerminalFactory.create(TerminalFactory.java:101)
>         at sqlline.SqlLine.getConsoleReader(SqlLine.java:723)
>         at sqlline.SqlLine.begin(SqlLine.java:657)
>         at sqlline.SqlLine.start(SqlLine.java:398)
>         at sqlli

Re: Support of OFFSET in Phoenix 4.7

2017-09-06 Thread rafa
Hi Sumanta,

Here you have the answer. You already asked the same question some months
ago :)

https://mail-archives.apache.org/mod_mbox/phoenix-user/201705.mbox/browser

From 4.8 onwards.
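
In 4.8 and later the paged form looks like this (table name is illustrative):

SELECT * FROM MY_TABLE LIMIT 10 OFFSET 20;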

regards,
rafa

On Wed, Sep 6, 2017 at 9:19 AM, Sumanta Gh <sumanta...@tcs.com> wrote:

> Hi,
> From which version of Phoenix is pagination with OFFSET supported? It seems
> this is not supported in 4.7:
>
> https://phoenix.apache.org/paged.html
>
> regards,
> Sumanta
>
>
>


Re: how to modify column in phoenix?

2017-06-16 Thread rafa
Hi,

As far as I know there is no support for that in Phoenix.

James Taylor explained an alternative to accomplish that in this thread:

https://mail-archives.apache.org/mod_mbox/phoenix-user/201610.mbox/browser
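
A common workaround is sketched below (hypothetical table and column names;
see the thread above for James' exact suggestion): add a wider column, copy
the data across with UPSERT SELECT, then drop the original:

ALTER TABLE MY_TABLE ADD NEW_COL VARCHAR(100);
UPSERT INTO MY_TABLE (PK, NEW_COL) SELECT PK, OLD_COL FROM MY_TABLE;
ALTER TABLE MY_TABLE DROP COLUMN OLD_COL;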

Regards,
rafa


On Fri, Jun 16, 2017 at 10:33 AM, 曾柏棠 <zengbait...@qq.com> wrote:

> hi,
>    I want to widen one column in my Phoenix table, but I cannot find
> anything about how to modify a column in the Phoenix reference. So, can
> Phoenix do something like *alter table modify some_column varchar(100)*?
>
> *thanks!*
>


Re: pagination

2017-05-18 Thread rafa
Hi Sumanta,

It is supported from 4.8:

Apache Phoenix enables OLTP and operational analytics for Hadoop through
SQL support and integration with other projects in the ecosystem such as
Spark, HBase, Pig, Flume, MapReduce and Hive.

We're pleased to announce our 4.8.0 release which includes:
- Local Index improvements[1]
- Integration with hive[2]
- Namespace mapping support[3]
- VIEW enhancements[4]
- Offset support for paged queries[5]
- 130+ Bugs resolved[6]
- HBase v1.2 is also supported ( with continued support for v1.1, v1.0 &
v0.98)
- Many performance enhancements(related to StatsCache, distinct, Serial
query with Stats etc)[6]

The release is available in source or binary form here [7].

Release artifacts are signed with the following key:
https://people.apache.org/keys/committer/ankit.asc

Thanks,
The Apache Phoenix Team

[1] https://issues.apache.org/jira/browse/PHOENIX-1734
[2] https://issues.apache.org/jira/browse/PHOENIX-2743
[3] https://issues.apache.org/jira/browse/PHOENIX-1311
[4] https://issues.apache.org/jira/browse/PHOENIX-1508
[5] https://issues.apache.org/jira/browse/PHOENIX-2722
[6] https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334393=12315120
[7] https://phoenix.apache.org/download.html

Regards,

rafa



On Thu, May 18, 2017 at 2:04 PM, rafa <raf...@gmail.com> wrote:

> Oops... sorry, my mistake. That JIRA is for LIMIT-OFFSET with ORDER BY. Sorry.
>
> On Thu, May 18, 2017 at 2:02 PM, rafa <raf...@gmail.com> wrote:
>
>> Hi Sumanta,
>>
>> I think it is not supported yet:
>>
>> https://issues.apache.org/jira/browse/PHOENIX-3353
>>
>> Best regards,
>> rafa
>>
>> On Thu, May 18, 2017 at 1:52 PM, Sumanta Gh <sumanta...@tcs.com> wrote:
>>
>>> Hi,
>>> From which version of Phoenix is LIMIT-OFFSET based pagination
>>> supported? I am using 4.7, but am not able to use OFFSET.
>>>
>>> Regards
>>> Sumanta
>>>
>>>
>>>
>>
>


Re: pagination

2017-05-18 Thread rafa
Oops... sorry, my mistake. That JIRA is for LIMIT-OFFSET with ORDER BY. Sorry.

On Thu, May 18, 2017 at 2:02 PM, rafa <raf...@gmail.com> wrote:

> Hi Sumanta,
>
> I think it is not supported yet:
>
> https://issues.apache.org/jira/browse/PHOENIX-3353
>
> Best regards,
> rafa
>
> On Thu, May 18, 2017 at 1:52 PM, Sumanta Gh <sumanta...@tcs.com> wrote:
>
>> Hi,
>> From which version of Phoenix is LIMIT-OFFSET based pagination supported?
>> I am using 4.7, but am not able to use OFFSET.
>>
>> Regards
>> Sumanta
>>
>>
>>
>


Re: pagination

2017-05-18 Thread rafa
Hi Sumanta,

I think it is not supported yet:

https://issues.apache.org/jira/browse/PHOENIX-3353

Best regards,
rafa

On Thu, May 18, 2017 at 1:52 PM, Sumanta Gh <sumanta...@tcs.com> wrote:

> Hi,
> From which version of Phoenix is LIMIT-OFFSET based pagination supported?
> I am using 4.7, but am not able to use OFFSET.
>
> Regards
> Sumanta
>
>
>


Re: Phoenix connection to kerberized hbase fails

2017-04-19 Thread rafa
Hi Reid,

Then the most probable thing is that the provided keytab is not suitable
for the principal:

phoenix/hadoop-offline032.dx.momo.com@MOMO.OFFLINE

/opt/hadoop/etc/hadoop/security/phoenix.keytab

Can you do a

kinit -kt /opt/hadoop/etc/hadoop/security/phoenix.keytab phoenix/hadoop-offline032.dx.momo.com@MOMO.OFFLINE

?

Regards,
rafa


On Wed, Apr 19, 2017 at 12:46 PM, Reid Chan <reidddc...@outlook.com> wrote:

> Hi rafa,
>
> I followed the guides on site:
> https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/
> bk_command-line-installation/content/configuring-phoenix-
> to-run-in-a-secure-cluster.html
> , and linked those configuration files under phoenix bin directory.
>
> But problem remains.
>
> Best regards,
> ---R
>
>
>
> --
> View this message in context: http://apache-phoenix-user-
> list.1124778.n5.nabble.com/Phoenix-connection-to-kerberized-hbase-fails-
> tp3419p3422.html
> Sent from the Apache Phoenix User List mailing list archive at Nabble.com.
>


Re: Phoenix connection to kerberized hbase fails

2017-04-19 Thread rafa
Hi Reid,

Take a look at:

https://lists.apache.org/thread.html/5c0bf3199f8864421e8b7b2f5b2aa4c509691cb2fa82b1894f97a5a1@%3Cuser.phoenix.apache.org%3E

(second mail)

I reproduced a very similar problem when I did not have a correct
core-site.xml on the client machine.

Do you have a copy of the working cluster core-site.xml in your client
machine?
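
In particular, the client-side core-site.xml needs to declare the security
mode; a minimal fragment (a standard Hadoop property, not Phoenix-specific):

<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>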

regards,
rafa


On Wed, Apr 19, 2017 at 11:51 AM, Reid Chan <reidddc...@outlook.com> wrote:

> Version infomation, phoenix: phoenix-4.10.0-HBase-1.2, hbase: hbase-1.2.4
>
>
>
> --
> View this message in context: http://apache-phoenix-user-
> list.1124778.n5.nabble.com/Phoenix-connection-to-kerberized-hbase-fails-
> tp3419p3420.html
> Sent from the Apache Phoenix User List mailing list archive at Nabble.com.
>


Re: Problem connecting JDBC client to a secure cluster

2017-04-18 Thread rafa
Hi Sergey, Josh,

Thank you very much for your comments !!

Best Regards,
Rafa.

On Tue, Apr 18, 2017 at 5:29 AM, Sergey Soldatov <sergeysolda...@gmail.com>
wrote:

> That's not hbase-site.xml being loaded incorrectly. This is the behavior of
> the Java classpath: it accepts only jars and directories. So if any
> resources other than jars should be added to the classpath, you need to add
> to the classpath the directory where they are located.
>
> Thanks,
> Sergey
>
> On Tue, Apr 11, 2017 at 10:15 AM, rafa <raf...@gmail.com> wrote:
>
>> Hi all,
>>
>>
>> I have been able to track down the origin of the problem and it is
>> related to the hbase-site.xml not being loaded correctly by the application
>> server.
>>
>> Seeing the instructions given by Anil in this JIRA:
>> https://issues.apache.org/jira/browse/PHOENIX-19 it has been easy to
>> reproduce it
>>
>> java -cp /tmp/testhbase2:/opt/cloudera/parcels/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000/lib/phoenix/lib/hadoop-hdfs-2.6.0-cdh5.7.0.jar:/opt/cloudera/parcels/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000/lib/phoenix/phoenix-4.7.0-clabs-phoenix1.3.0-client.jar sqlline.SqlLine -d org.apache.phoenix.jdbc.PhoenixDriver -u jdbc:phoenix:node-01u..int:2181:phoe...@hadoop.int:/etc/security/keytabs/phoenix.keytab -n none -p none --color=true --fastConnect=false --verbose=true --incremental=false --isolation=TRANSACTION_READ_COMMITTED
>>
>> In /tmp/testhbase2 there are 3 files:
>>
>> -rw-r--r--   1 root root  4027 Apr 11 18:23 hdfs-site.xml
>> -rw-r--r--   1 root root  3973 Apr 11 18:29 core-site.xml
>> -rw-rw-rw-   1 root root  3924 Apr 11 18:49 hbase-site.xml
>>
>>
>> a) If hdfs-site.xml is missing or invalid:
>>
>> It fails with Caused by: java.lang.IllegalArgumentException:
>> java.net.UnknownHostException: nameservice1
>>
>> (with HA HDFS, hdfs-site.xml  is needed to resolve the name service)
>>
>> b) if core-site.xml is missing or invalid:
>>
>> 17/04/11 19:05:01 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
>> 17/04/11 19:05:01 WARN ipc.RpcClientImpl: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
>> 17/04/11 19:05:01 FATAL ipc.RpcClientImpl: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
>> javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
>>         at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>>         at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:181)
>> ...
>> Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
>>         at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
>>         at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
>>         at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
>>         at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
>>         at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
>>         at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
>>         at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
>>
>>
>>
>> c) If hbase-site.xml is missing or invalid:
>>
>> The zookeeeper connection works right, but not the Hbase master one:
>>
>> java -cp /tmp/testhbase2:/opt/cloudera/parcels/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000/lib/phoenix/lib/hadoop-hdfs-2.6.0-cdh5.7.0.jar:/opt/cloudera/parcels/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000/lib/phoenix/phoenix-4.7.0-clabs-phoenix1.3.0-client.jar sqlline.SqlLine -d org.apache.phoenix.jdbc.PhoenixDriver -u jdbc:phoenix:node

Re: Problem connecting JDBC client to a secure cluster

2017-04-11 Thread rafa
8 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_131
17/04/11 19:06:38 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
17/04/11 19:06:38 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.131.x86_64/jre
17/04/11 19:06:38 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/tmp/testhbase2:/opt/cloudera/parcels/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000/lib/phoenix/lib/hadoop-hdfs-2.6.0-cdh5.7.0.jar:/opt/cloudera/parcels/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000/lib/phoenix/phoenix-4.7.0-clabs-phoenix1.3.0-client.jar
17/04/11 19:06:38 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
17/04/11 19:06:38 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
17/04/11 19:06:38 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
17/04/11 19:06:38 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
17/04/11 19:06:38 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
17/04/11 19:06:38 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-573.26.1.el6.x86_64
17/04/11 19:06:38 INFO zookeeper.ZooKeeper: Client environment:user.name=root
17/04/11 19:06:38 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
17/04/11 19:06:38 INFO zookeeper.ZooKeeper: Client environment:user.dir=/tmp/testhbase2
17/04/11 19:06:38 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=node-01u..int:2181 sessionTimeout=9 watcher=hconnection-0x6f9edfc90x0, quorum=node-01u..int:2181, baseZNode=/hbase
17/04/11 19:06:39 INFO zookeeper.ClientCnxn: Opening socket connection to server node-01u..int/192.168.101.161:2181. Will not attempt to authenticate using SASL (unknown error)
17/04/11 19:06:39 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.101.161:33960, server: node-01u..int/192.168.101.161:2181
17/04/11 19:06:39 INFO zookeeper.ClientCnxn: Session establishment complete on server node-01u..int/192.168.101.161:2181, sessionid = 0x15afb9d0dee8c0f, negotiated timeout = 6
17/04/11 19:06:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/04/11 19:06:39 INFO metrics.Metrics: Initializing metrics system: phoenix
17/04/11 19:06:39 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
17/04/11 19:06:39 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
17/04/11 19:06:39 INFO impl.MetricsSystemImpl: phoenix metrics system started
17/04/11 19:06:39 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
17/04/11 19:07:28 INFO client.RpcRetryingCaller: Call exception, tries=10, retries=35, started=48483 ms ago, cancelled=false, msg=
17/04/11 19:07:48 INFO client.RpcRetryingCaller: Call exception, tries=11, retries=35, started=68653 ms ago, cancelled=false, msg=


Hbase master:

2017-04-11 19:06:40,212 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: RpcServer.listener,port=6: Caught exception while reading: Authentication is required
2017-04-11 19:06:40,418 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: RpcServer.listener,port=6: Caught exception while reading: Authentication is required
2017-04-11 19:06:40,726 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: RpcServer.listener,port=6: Caught exception while reading: Authentication is required
2017-04-11 19:06:41,233 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: RpcServer.listener,port=6: Caught exception while reading: Authentication is required


Thanks !!
Best Regards,
rafa




On Tue, Apr 11, 2017 at 4:05 PM, rafa <raf...@gmail.com> wrote:

> Hi everybody !,
>
> We have a CDH 5.8 kerberized cluster in which we have installed Apache
> Phoenix 4.7 (via CLABS parcel). Everything works as expected. The only
> problem we are facing is when trying to connect a WeblogicServer to Apache
> Phoenix via the fat client.
>
> needed files are added in classpath: hbase-site.xml,core-site.xml and
> hdfs-site.xml
>
> Jaas.conf used:
>
> Client {
> com.sun.security.auth.module.Krb5LoginModule required principal="
> phoe...@hadoop.int"
> useKeyTab=true
> keyTab=phoenix.keytab
> storeKey=true
> debug=true;
> };
>
> JDBC URL used: jdbc:phoenix:node-01u..int:2181:hbase/phoenix@HADOOP.INT:/wldoms/domcb1arqu/phoenix.keytab
>
> The secured connection is made correctly in Zookeeper, but it never
> succeeds when connecting to the HBase Master
>
> 17/04/10 12:38:23 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
> 17/04/10 12:38:23 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
> 17/04/10 12:38:23 INFO zookeeper.ZooKeeper: Client environment:os.n

Problem connecting JDBC client to a secure cluster

2017-04-11 Thread rafa
 error:


2017-04-10 12:47:15,849 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: RpcServer.listener,port=6: connection from 192.168.60.6:34380; # active connections: 5
2017-04-10 12:47:15,849 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: RpcServer.listener,port=6: Caught exception while reading: Authentication is required
2017-04-10 12:47:15,849 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: RpcServer.listener,port=6: DISCONNECTING client 192.168.60.6:34380 because read count=-1. Number of active connections:

Executing a "hbase shell" manually inside the cluster after obtaining a
ticket with the same keytab we see:


2017-04-11 12:33:12,319 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: RpcServer.listener,port=6: connection from 192.168.101.161:60370; # active connections: 5
2017-04-11 12:33:12,330 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: Kerberos principal name is hbase/node-05u.@hadoop.int
2017-04-11 12:33:12,330 DEBUG org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:hbase/node-05u.@hadoop.int (auth:KERBEROS) from:org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1354)
2017-04-11 12:33:12,331 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: Created SASL server with mechanism = GSSAPI
2017-04-11 12:33:12,331 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: Have read input token of size 640 for processing by saslServer.evaluateResponse()
2017-04-11 12:33:12,333 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: Will send token of size 108 from saslServer.
2017-04-11 12:33:12,335 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: Have read input token of size 0 for processing by saslServer.evaluateResponse()
2017-04-11 12:33:12,335 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: Will send token of size 32 from saslServer.
2017-04-11 12:33:12,336 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: Have read input token of size 32 for processing by saslServer.evaluateResponse()
2017-04-11 12:33:12,336 DEBUG org.apache.hadoop.hbase.security.HBaseSaslRpcServer: SASL server GSSAPI callback: setting canonicalized client ID: phoe...@hadoop.int
2017-04-11 12:33:12,336 DEBUG org.apache.hadoop.hbase.ipc.RpcServer: SASL server context established. Authenticated client: phoe...@hadoop.int (auth:KERBEROS). Negotiated QoP is auth
2017-04-11 12:33:12,336 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Auth successful for phoe...@hadoop.int (auth:KERBEROS)
2017-04-11 12:33:12,338 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Connection from 192.168.101.161 port: 60370 with version info: version: "1.2.0-cdh5.8.0" url: "file:///data/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hbase-1.2.0-cdh5.8.0" revision: "Unknown" user: "jenkins" date: "Tue Jul 12 16:09:11 PDT 2016" src_checksum: "b910b34d6127cf42495e0a8bf37a0e9e"
2017-04-11 12:33:12,338 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for phoe...@hadoop.int (auth:KERBEROS) for protocol=interface org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingInterface

It seems that the JDBC driver is not trying to authenticate to HBase.
Perhaps some of you have faced a similar situation or could point me in a
new direction.

Thank you very much for your help !
Best Regards,
rafa.


Re: Retiring empty regions

2017-03-24 Thread rafa
Hi,

For everyone's information, Nick has published the scripts for retiring
empty regions at:

https://issues.apache.org/jira/browse/HBASE-15712
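
For anyone merging by hand, the underlying HBase shell command looks like
this (the two arguments are encoded region names; placeholders here):

merge_region 'ENCODED_REGION_A', 'ENCODED_REGION_B'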

Nick, Thank you very much for your help and great work !!!

Best Regards,
Rafa.



On Mon, Mar 6, 2017 at 5:49 PM, rafa <raf...@gmail.com> wrote:

> Hi Nick,
>
> We are facing the same issue: an increasing number of empty regions derived
> from the TTL and a timestamp in the row key.
>
> Did you finally publish those scripts? Are they available for public
> usage?
>
> Thank you very much in advance for your work and help,
> Best Regards,
> rafa
>
>
>
>
> On Thu, Apr 21, 2016 at 1:48 AM, Andrew Purtell <apurt...@apache.org>
> wrote:
>
>> >  the shell and find the empty ones, another to merge a given region
>> into a neighbor. We've run them without incident, looks like it all works
>> fine. One thing we did notice is that the AM leaves the old "retired"
>> regions around in its counts -- the master status page shows a large number
>> of "Other Regions". This was alarming at first,
>>
>> Good to know. I had seen this recently and had a mental note to circle
>> around and confirm it's just a temporary artifact.
>>
>> On Wed, Apr 20, 2016 at 3:16 PM, Nick Dimiduk <ndimi...@gmail.com> wrote:
>>
>>> Circling back here and adding user@phoenix.
>>>
>>> I put together one script to dump region info from the shell and find
>>> the empty ones, another to merge a given region into a neighbor. We've run
>>> them without incident, looks like it all works fine. One thing we did
>>> notice is that the AM leaves the old "retired" regions around in its counts
>>> -- the master status page shows a large number of "Other Regions". This was
>>> alarming at first, but we verified it's just an artifact in the AM and in
>>> fact these regions are not on HDFS or in meta. Bouncing master resolved it.
>>>
>>> No one has volunteered any alternative schema designs, so as best we
>>> know, this will happen to anyone who has timestamp in their rowkey (ie,
>>> anyone using Phoenix's "Row timestamp" feature [0]) and is also using the
>>> TTL feature. Are folks interested in adding these scripts to our
>>> distribution and our book?
>>>
>>> -n
>>>
>>> [0]: https://phoenix.apache.org/rowtimestamp.html
>>>
>>> On Mon, Apr 4, 2016 at 8:34 AM, Nick Dimiduk <ndimi...@gmail.com> wrote:
>>>
>>>> > Crazy idea, but you might be able to take stripped down version of
>>>> region
>>>> > normalizer code and make a Tool to run? Requesting split or merge is
>>>> done
>>>> > through the client API, and the only weighing information you need is
>>>> > whether region empty or not, that you could find out too?
>>>>
>>>> Yeah, that's the direction I'm headed.
>>>>
>>>> > A bit off topic, but I think unfortunately region normalizer now
>>>> ignores
>>>> > empty regions to avoid undoing pre-split on the table.
>>>>
>>>> Unfortunate indeed. Maybe we should be keeping around the initial
>>>> splits list as a metadata attribute on the table?
>>>>
>>>> > With a right row-key design you will never have empty regions due to
>>>> TTL.
>>>>
>>>> I'd love to hear your thoughts on this design, Vlad. Maybe you'd like
>>>> to write up a post for the blog? Meanwhile, I'm sure of a couple of us on
>>>> here on the list would appreciate your Cliff's Notes version. I can take
>>>> this into account for my v2 schema design.
>>>>
>>>> > So Nick, merge on 1.1 is not recommended??? Was working very well on
>>>> > previous versions. Is ProcV2 really impact it that bad??
>>>>
>>>> How to answer here carefully... I have no reason to believe merge is
>>>> not working on 1.1. I've been on the wrong end of enough "regions stuck in
>>>> transition" support tickets that I'm not keen to put undue stress on my
>>>> master. ProcV2 insures against many scenarios that cause master trauma,
>>>> hence my interest in the implementation details and my preference for
>>>> cluster administration tasks that use it as their source of authority.
>>>>
>>>> Thanks for the thoughts folks.
>>>> -n
>>>>
>>>> On Fri, Apr 1, 2016 at 10:52 AM, Jean-Marc Spaggiari <
>>>> jean-m...@spaggiari.or

Re: Retiring empty regions

2017-03-06 Thread rafa
Hi Nick,

We are facing the same issue: an increasing number of empty regions derived
from the TTL and a timestamp in the row key.

Did you finally publish those scripts? Are they available for public usage?

Thank you very much in advance for your work and help,
Best Regards,
rafa
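
While waiting, here is a rough sketch of what we are considering as an
interim tool (this is not your actual script; it assumes the HBase 1.x
client API, and the "empty" heuristic -- no store files and an empty
memstore -- is an illustrative assumption):

// EmptyRegionMerger.java -- sketch only
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterStatus;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.RegionLoad;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class EmptyRegionMerger {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf(args[0]);
      // Regions come back ordered by start key, so each region's left
      // neighbor is simply the previous entry in the list.
      List<HRegionInfo> regions = admin.getTableRegions(table);
      ClusterStatus status = admin.getClusterStatus();
      HRegionInfo previous = null;
      for (HRegionInfo region : regions) {
        if (previous != null && isEmpty(status, region)) {
          // Merge the empty region into its left neighbor; the master does
          // the merge asynchronously. A production script would wait for
          // completion and re-list the regions before continuing.
          admin.mergeRegions(previous.getEncodedNameAsBytes(),
              region.getEncodedNameAsBytes(), false);
        }
        previous = region;
      }
    }
  }

  // Heuristic: a region whose store files and memstore are both empty
  // holds no data (its TTL-expired rows were compacted away).
  private static boolean isEmpty(ClusterStatus status, HRegionInfo region) {
    for (ServerName sn : status.getServers()) {
      RegionLoad load =
          status.getLoad(sn).getRegionsLoad().get(region.getRegionName());
      if (load != null) {
        return load.getStorefiles() == 0 && load.getMemStoreSizeMB() == 0;
      }
    }
    return false;
  }
}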



On Thu, Apr 21, 2016 at 1:48 AM, Andrew Purtell <apurt...@apache.org> wrote:

> >  the shell and find the empty ones, another to merge a given region
> into a neighbor. We've run them without incident, looks like it all works
> fine. One thing we did notice is that the AM leaves the old "retired"
> regions around in its counts -- the master status page shows a large number
> of "Other Regions". This was alarming at first,
>
> Good to know. I had seen this recently and had a mental note to circle
> around and confirm it's just a temporary artifact.
>
> On Wed, Apr 20, 2016 at 3:16 PM, Nick Dimiduk <ndimi...@gmail.com> wrote:
>
>> Circling back here and adding user@phoenix.
>>
>> I put together one script to dump region info from the shell and find the
>> empty ones, another to merge a given region into a neighbor. We've run them
>> without incident, looks like it all works fine. One thing we did notice is
>> that the AM leaves the old "retired" regions around in its counts -- the
>> master status page shows a large number of "Other Regions". This was
>> alarming at first, but we verified it's just an artifact in the AM and in
>> fact these regions are not on HDFS or in meta. Bouncing master resolved it.
>>
>> No one has volunteered any alternative schema designs, so as best we
>> know, this will happen to anyone who has a timestamp in their rowkey (i.e.,
>> anyone using Phoenix's "Row timestamp" feature [0]) and is also using the
>> TTL feature. Are folks interested in adding these scripts to our
>> distribution and our book?
>>
>> -n
>>
>> [0]: https://phoenix.apache.org/rowtimestamp.html
>>
>> On Mon, Apr 4, 2016 at 8:34 AM, Nick Dimiduk <ndimi...@gmail.com> wrote:
>>
>>> > Crazy idea, but you might be able to take a stripped-down version of
>>> > the region normalizer code and make a Tool to run? Requesting a split
>>> > or merge is done through the client API, and the only weighing
>>> > information you need is whether a region is empty or not, which you
>>> > could find out too?
>>>
>>> Yeah, that's the direction I'm headed.
>>>
>>> > A bit off topic, but I think, unfortunately, the region normalizer
>>> > now ignores empty regions to avoid undoing a pre-split on the table.
>>>
>>> Unfortunate indeed. Maybe we should be keeping around the initial splits
>>> list as a metadata attribute on the table?
>>>
>>> > With the right row-key design you will never have empty regions due
>>> > to TTL.
>>>
>>> I'd love to hear your thoughts on this design, Vlad. Maybe you'd like to
>>> write up a post for the blog? Meanwhile, I'm sure a couple of us here
>>> on the list would appreciate your Cliff's Notes version. I can take this
>>> into account for my v2 schema design.
>>>
>>> > So Nick, merge on 1.1 is not recommended??? It was working very well
>>> > on previous versions. Does ProcV2 really impact it that badly??
>>>
>>> How to answer here carefully... I have no reason to believe merge is not
>>> working on 1.1. I've been on the wrong end of enough "regions stuck in
>>> transition" support tickets that I'm not keen to put undue stress on my
>>> master. ProcV2 insures against many scenarios that cause master trauma,
>>> hence my interest in the implementation details and my preference for
>>> cluster administration tasks that use it as their source of authority.
>>>
>>> Thanks for the thoughts folks.
>>> -n
>>>
>>> On Fri, Apr 1, 2016 at 10:52 AM, Jean-Marc Spaggiari <
>>> jean-m...@spaggiari.org> wrote:
>>>
>>>> ;) That was not the question ;)
>>>>
>>>> So Nick, merge on 1.1 is not recommended??? It was working very well on
>>>> previous versions. Does ProcV2 really impact it that badly??
>>>>
>>>> JMS
>>>>
>>>> 2016-04-01 13:49 GMT-04:00 Vladimir Rodionov <vladrodio...@gmail.com>:
>>>>
>>>> > >> This is something
>>>> > >> which makes it far less useful for time-series databases with
>>>> short TTL
>>>> > on
>>>> > >> the tables.
>>

Re: problems about using phoenix over hbase

2016-04-19 Thread rafa
Hi,

Test with:

create view if not exists "test" (pk VARCHAR primary key, "family"."number"
INTEGER);
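
If quoting the column name alone does not help, note that the HBase shell
stored '123456' as a raw ASCII string, and those bytes do not match
Phoenix's fixed 4-byte INTEGER encoding. A sketch (assuming the data was
written as strings from the shell) that maps the column as VARCHAR instead:

DROP VIEW "test";
CREATE VIEW IF NOT EXISTS "test" (pk VARCHAR PRIMARY KEY,
    "family"."number" VARCHAR);
SELECT "number" FROM "test";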

regards,
rafa

On Tue, Apr 19, 2016 at 12:58 PM, 金砖 <jinzh...@wacai.com> wrote:

> I'm new to Phoenix, using phoenix-4.7.0-HBase-1.1 on hbase-1.1.3.
>
> my steps:
>
> 1. create hbase table
> create 'test', 'family'
>
> 2. put row in hbase table
> put 'test', 'row1', 'family:number', '123456'
>
> 3. create view in phoenix
> create view if not exists "test" (pk VARCHAR primary key,
> "family".number INTEGER);
>
> 4. select phoenix view
> select NUMBER from "test";
>
> But why is the result null? Is there anything wrong with the steps?
>


Re: Phoenix for CDH5 compatibility

2016-04-16 Thread rafa
Hi Swapna,

You can download the official parcel from Cloudera, although it is not the
latest Phoenix version:

http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/

If you want to use a higher version you'll have to compile it against the
CDH libraries, as in the sketch below.
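
A rough sketch of such a build (the tag and version properties are
illustrative; check the pom.xml of the release you check out, and add
Cloudera's maven repository to it first):

git clone https://github.com/apache/phoenix.git
cd phoenix
git checkout v4.6.0-HBase-1.0
mvn clean package -DskipTests \
    -Dhbase.version=1.0.0-cdh5.5.1 \
    -Dhadoop-two.version=2.6.0-cdh5.5.1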

Regards,
Rafa
El 16/04/2016 12:17, "Swapna Swapna" <talktoswa...@gmail.com> escribió:

> Hi,
>
>
> I was able to use phoenix-4.6.0-HBase-1 on a standalone machine, but when I
> tried to use it with CDH5, it throws the exception below. After doing some
> research, I noticed this seems to be a common problem faced by many others
> as well.
>
> Caused by: java.lang.NoSuchMethodError:
> org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
>
> at
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildDeletedTable(MetaDataEndpointImpl.java:973)
>
> at
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1049)
>
> at
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1223)
>
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1199)
>
> at
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
>
> at
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
>
> at
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:31913)
>
> at
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1605)
>
>
> Please suggest where I can download the Phoenix binary/source
> (for HBase 1.0) that is compatible with CDH5.
>
> Regards
>
> Swapna
>
>


Re: org.apache.phoenix.join.MaxServerCacheSizeExceededException

2016-02-10 Thread rafa
Hi Nanda,

It seems your client is still using the default value for
phoenix.query.maxServerCacheBytes:

https://phoenix.apache.org/tuning.html

phoenix.query.maxServerCacheBytes

   - Maximum size (in bytes) of the raw results of a relation before being
   compressed and sent over to the region servers.
   - Attempting to serialize the raw results of a relation with a size
   bigger than this setting will result in a
   MaxServerCacheSizeExceededException.
   - *Default: 104,857,600*
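
Since the exception is raised in HashCacheClient on the client side, this is
a client-side parameter: it is read from the hbase-site.xml on the classpath
of the application that opens the Phoenix connection. Your client-side list
already sets it, so it is worth verifying that the file carrying the
override is actually the one on the application's classpath. A sketch, using
your own value:

<!-- hbase-site.xml on the application's classpath -->
<property>
  <name>phoenix.query.maxServerCacheBytes</name>
  <value>1048576810</value>
</property>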

Regards,

rafa


On Wed, Feb 10, 2016 at 1:37 PM, Nanda <tnkish...@gmail.com> wrote:

> Hi ,
>
> I am using HDP 2.3.0 with Phoenix 4.4, and I quite often get the exception
> below:
>
> Caused by: java.sql.SQLException: Encountered exception in sub plan [0]
> execution.
> at
> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:156)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:251)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:241)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:240)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1223)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> com.brocade.nva.dataaccess.AbstractDAO.getResultSet(AbstractDAO.java:388)
> ~[nvp-data-access-1.0-SNAPSHOT.jar:na]
> at
> com.brocade.nva.dataaccess.HistoryDAO.getSummaryTOP10ReportDetails(HistoryDAO.java:306)
> ~[nvp-data-access-1.0-SNAPSHOT.jar:na]
> ... 75 common frames omitted
> Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException:
> Size of hash cache (104857651 bytes) exceeds the maximum allowed size
> (104857600 bytes)
> at
> org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:109)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:82)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:338)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:135)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ~[na:1.8.0_40]
> at
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:172)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [na:1.8.0_40]
>
>
> Below are the params I am using:
>
> server side properties:
> phoenix.coprocessor.maxServerCacheTimeToLiveMs=18
> phoenix.groupby.maxCacheSize=1572864000
> phoenix.query.maxGlobalMemoryPercentage=60
> phoenix.query.maxGlobalMemorySize=409600
> phoenix.stats.guidepost.width=524288000
>
>
> client side properties are:
> hbase.client.scanner.timeout.period=18
> phoenix.query.spoolThresholdBytes=1048576000
> phoenix.query.timeoutMs=18
> phoenix.query.threadPoolSize=240
> phoenix.query.maxGlobalMemoryPercentage=60
> phoenix.query.maxServerCacheBytes=1048576810
>
>
> and my hbase heap is set to 4GB
>
> Is there some property I need to set explicitly for this?
>
> Thanks,
> Nanda
>
>


Re: Using Sqoop to load HBase tables, Data not visible via Phoenix

2016-01-28 Thread rafa
Hi Manya,

see this thread:

http://mail-archives.apache.org/mod_mbox/incubator-phoenix-user/201512.mbox/%3CCAOnY4Jd6u9T8-Ce2Lp54CbH_a8zj41FVc=iXT=z8hp8-mxv...@mail.gmail.com%3E

http://phoenix.apache.org/faq.html#How_I_map_Phoenix_table_to_an_existing_HBase_table
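
In short (a sketch per that FAQ; table and column names are illustrative):
map the existing HBase table with a Phoenix view, and keep in mind that
Phoenix only decodes bytes that match the declared column types, so data
Sqoop wrote as plain strings generally needs VARCHAR columns:

CREATE VIEW "my_hbase_table" (pk VARCHAR PRIMARY KEY, "cf"."col1" VARCHAR);
SELECT "col1" FROM "my_hbase_table";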

regards,
rafa


On Thu, Jan 28, 2016 at 1:26 PM, manya cancerian <manyacancer...@gmail.com>
wrote:

>
>
>
> Looking for some help in the following scenario -
>
> - I have created a Phoenix table, which created the underlying HBase table.
>
> - Then I used a sqoop command to move data from a relational database
> table (Teradata) to the underlying HBase table successfully.
>
> - I can view the data through HBase, but it is not visible in the Phoenix
> table.
>
> What am I missing here?
>
> Regards
> Manya
>
>
>
>
>


Re: Question about support for ARRAY data type with Pig Integration

2016-01-11 Thread rafa
Hi Kiran,

Thank you very much for the information and your work !

Best Regards,
rafa.



On Sat, Jan 9, 2016 at 8:53 AM, Ravi Kiran <maghamraviki...@gmail.com>
wrote:

> Hi Rafa,
>
>I will be working on this ticket
> https://issues.apache.org/jira/browse/PHOENIX-2584. You can add yourself
> as a watcher to the ticket to see the progress.
>
> Regards
> Ravi
>
> On Wed, Dec 23, 2015 at 3:21 AM, rafa <raf...@gmail.com> wrote:
>
>> Hi all !!
>>
>> Just a quick question. I see in:
>>
>> https://phoenix.apache.org/pig_integration.html
>>
>> that the array data type will be supported in the future.
>>
>> Does anybody know if that work has been started and if there is a future
>> release in which this feature will be included?
>>
>> Thank you very much in advance,
>> Best Regards,
>> rafa.
>>
>>
>>
>


Question about support for ARRAY data type with Pig Integration

2015-12-23 Thread rafa
Hi all !!

Just a quick question. I see in:

https://phoenix.apache.org/pig_integration.html

that the array data type will be supported in the future.

Does anybody know if that work has been started and if there is a future
release in which this feature will be included?

Thank you very much in advance,
Best Regards,
rafa.


Re: Is it possible to run the phoenix query server on a machine other than the regionservers?

2015-12-17 Thread rafa
I think so. Copy the hbase-site.xml from the cluster onto the new Query
Server machine and put the directory where the XML resides on the classpath
of the Query Server. That should be enough.
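
A minimal sketch of that setup, assuming the Phoenix binary distribution is
unpacked on the standalone host (paths and hostnames are illustrative):

# Copy the cluster's hbase-site.xml onto the query server host; the entries
# that matter are hbase.zookeeper.quorum and zookeeper.znode.parent.
mkdir -p /opt/pqs/conf
scp hbase-master:/etc/hbase/conf/hbase-site.xml /opt/pqs/conf/

# Point the query server at that directory and start it.
export HBASE_CONF_DIR=/opt/pqs/conf
/opt/pqs/phoenix/bin/queryserver.py start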

Regards
rafa

On Thu, Dec 17, 2015 at 12:21 PM, F21 <f21.gro...@gmail.com> wrote:

> Hey Rafa,
>
> So in terms of the hbase-site.xml, I just need the entries for the
> ZooKeeper quorum location and the ZooKeeper znode for the cluster,
> right?
>
> Cheers!
>
>
> On 17/12/2015 9:48 PM, rafa wrote:
>
> Hi F21,
>
> You can install the Query Server on any server that has a network
> connection to your cluster. You'll need connectivity to ZooKeeper.
>
> Usually the Apache Phoenix Query Server is installed on the master nodes.
>
> According to the Apache Phoenix docs:
> https://phoenix.apache.org/server.html
>
> "The server is packaged in a standalone jar,
> phoenix-server--runnable.jar. This jar and HBASE_CONF_DIR on the
> classpath are all that is required to launch the server."
>
> You'll only need that jar and the HBase XML config files.
>
> Regards,
> Rafa.
>
> On Thu, Dec 17, 2015 at 11:31 AM, F21 <f21.gro...@gmail.com> wrote:
>
>> I have successfully deployed phoenix and the phoenix query server into a
>> toy HBase cluster.
>>
>> I am currently running the HTTP query server on all regionservers;
>> however, I think it would be much better if I could run the HTTP query
>> servers in separate Docker containers or machines. This way, I can
>> easily scale the
>> number of query servers and put them against a DNS name such as
>> phoenix.mycompany.internal.
>>
>> I've had a look at the configuration, but it seems to be heavily tied to
>> HBase. For example, it requires the HBASE_CONF_DIR environment variable to
>> be set.
>>
>> Is this something that's currently possible?
>>
>
>
>


Re: ConcurrentModificationException when making concurrent requests through Phoenix Query Server

2015-11-25 Thread rafa
"I don't know if there is a problem with our Phoenix Query Server
configuration, which is the default one"  I meant,

Thanks !!

On Wed, Nov 25, 2015 at 5:08 PM, rafa <raf...@gmail.com> wrote:

> Hi all !
>
> We are using Apache Phoenix 4.5.2. We are running some tests from an
> application server using the Query Server JDBC thin client driver.
>
> When making upserts, if we use more than one client thread to make
> requests, we get this error in our client:
>
> exception is java.sql.SQLException: exception while executing query:
> response code 500
> at
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:84)
> at
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
> at
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
> at
> org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:660)
> at
> org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:909)
> at
> org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:970)
> at
> org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:980)
> at
> com..x.x.xx.poc.phoenix.PhoenixClient.put(PhoenixClient.java:127)
> at
> com..x.x.xx.poc.webapp.common.Injector.call(Injector.java:31)
> at
> com..x.x.xx.poc.webapp.common.Injector.call(Injector.java:13)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: exception while executing query:
> response code 500
> at
> org.apache.calcite.avatica.Helper.createException(Helper.java:41)
> at
> org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:432)
> at
> org.apache.calcite.avatica.AvaticaPreparedStatement.executeUpdate(AvaticaPreparedStatement.java:118)
> at
> com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:410)
> at
> org.springframework.jdbc.core.JdbcTemplate$2.doInPreparedStatement(JdbcTemplate.java:916)
> at
> org.springframework.jdbc.core.JdbcTemplate$2.doInPreparedStatement(JdbcTemplate.java:909)
> at
> org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:644)
> ... 10 more
> Caused by: java.lang.RuntimeException: response code 500
> at
> org.apache.calcite.avatica.remote.RemoteService.apply(RemoteService.java:45)
> at
> org.apache.calcite.avatica.remote.JsonService.apply(JsonService.java:215)
> at
> org.apache.calcite.avatica.remote.RemoteMeta.fetch(RemoteMeta.java:190)
> at
> org.apache.calcite.avatica.MetaImpl$FetchIterator.moveNext(MetaImpl.java:809)
> at
> org.apache.calcite.avatica.MetaImpl$FetchIterator.(MetaImpl.java:780)
> at
> org.apache.calcite.avatica.MetaImpl$FetchIterable.iterator(MetaImpl.java:758)
> at
> org.apache.calcite.avatica.MetaImpl.createCursor(MetaImpl.java:98)
> at
> org.apache.calcite.avatica.AvaticaResultSet.execute(AvaticaResultSet.java:187)
> at
> org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:430)
> ... 15 more
>
>
> In  the Phoenix Query Server log we see these errors:
>
> 2015-11-25 15:44:52,495 WARN org.eclipse.jetty.server.HttpChannel: /
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.remove(HashMap.java:940)
> at
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:501)
> at
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:472)
> at
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:469)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:469)
> at
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:323)
> at
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:312)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at
> org.apache.phoenix.jdbc.PhoenixStatement.executeM

ConcurrentModificationException when making concurrent requests through Phoenix Query Server

2015-11-25 Thread rafa
)
at
org.apache.calcite.avatica.remote.JsonHandler.apply(JsonHandler.java:43)
at
org.apache.calcite.avatica.server.AvaticaHandler.handle(AvaticaHandler.java:55)
at
org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:497)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:245)
at
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)


Those errors only appear when more than one thread is making requests.

I don't know if there is a problem with our Phoenix Query Server, which is
the default one, or if there is some kind of problem when pooling
connections from an application server to the Query Server.
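
One thing we will test next, sketched below under the assumption (suggested
by the MutationState stack trace) that a single Phoenix/Avatica connection
is not safe for concurrent commits: give each worker thread its own
thin-client connection instead of sharing one through the pool. The URL,
table, and values are illustrative.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PerThreadUpsert {
  static final String URL =
      "jdbc:phoenix:thin:url=http://queryserver:8765";

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    for (int i = 0; i < 4; i++) {
      final int id = i;
      pool.submit(() -> {
        // One connection per thread avoids concurrent commits on the same
        // MutationState, which is where the exception above originates.
        try (Connection conn = DriverManager.getConnection(URL);
             PreparedStatement ps =
                 conn.prepareStatement("UPSERT INTO t VALUES (?, ?)")) {
          ps.setInt(1, id);
          ps.setString(2, "value-" + id);
          ps.executeUpdate();
          conn.commit();
        } catch (Exception e) {
          e.printStackTrace();
        }
      });
    }
    pool.shutdown();
  }
}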

Thank you very much for your help,
Best Regards,
Rafa


Problem setting an alternate Phoenix Query Server Listen Port

2015-11-20 Thread rafa
Hi all !

We are having some trouble setting the Phoenix Query Server listen port to
something other than the default one.

It seems it is not picking up the property.

Setting the property:

<property>
  <name>phoenix.queryserver.http.port</name>
  <value>8300</value>
</property>

in the hbase-site.xml inside the local bin directory, and starting the query
server:

2015-11-20 16:03:24,854 INFO org.eclipse.jetty.server.ServerConnector:
Started ServerConnector@209b9515{HTTP/1.1}{0.0.0.0:8765}
2015-11-20 16:03:24,854 INFO org.eclipse.jetty.server.Server: Started
@5025ms
2015-11-20 16:03:24,854 INFO org.apache.calcite.avatica.server.HttpServer:
Service listening on port 8765.

We are using Apache Phoenix 4.5.2

Are we doing something wrong?
Is there another way to specify the port?
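
One thing we will double-check on our side (an assumption, not a confirmed
diagnosis): whether the hbase-site.xml that carries the override is the one
the launcher actually puts on the classpath, along these lines (paths are
illustrative):

export HBASE_CONF_DIR=/opt/phoenix/conf-with-override
bin/queryserver.py start
# then look for "Service listening on port 8300." in the log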

Thank you very much for your help !!
Best Regards.