Re: Problem connecting JDBC client to a secure cluster

2017-04-13 Thread Josh Elser

Just some extra context here:

From your original message, you noted that the ZK connection succeeded 
but the HBase connection didn't. The JAAS configuration file you 
provided is *only* used by ZooKeeper. As you eventually realized, 
hbase-site.xml is the configuration file that controls how the client 
connects to HBase.
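
For readers landing here from a search: a minimal sketch of the 
client-side hbase-site.xml properties that usually matter for a secure 
connection. The property names are standard HBase ones; the principal 
values are placeholders for your own realm:

<configuration>
  <property>
    <name>hbase.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hbase.master.kerberos.principal</name>
    <value>hbase/_HOST@EXAMPLE.COM</value>
  </property>
  <property>
    <name>hbase.regionserver.kerberos.principal</name>
    <value>hbase/_HOST@EXAMPLE.COM</value>
  </property>
</configuration>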


Nice write-up for the archive :)

rafa wrote:

Hi all,


I have been able to track down the origin of the problem: it is
related to hbase-site.xml not being loaded correctly by the
application server.

Following the instructions given by Anil in this JIRA:
https://issues.apache.org/jira/browse/PHOENIX-19 it was easy to
reproduce:

java   -cp
/tmp/testhbase2:/opt/cloudera/parcels/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000/lib/phoenix/lib/hadoop-hdfs-2.6.0-cdh5.7.0.jar:/opt/cloudera/parcels/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000/lib/phoenix/phoenix-4.7.0-clabs-phoenix1.3.0-client.jar
sqlline.SqlLine -d org.apache.phoenix.jdbc.PhoenixDriver  -u
jdbc:phoenix:node-01u..int:2181:phoe...@hadoop.int:/etc/security/keytabs/phoenix.keytab
-n none -p none --color=true --fastConnect=false --verbose=true
--incremental=false --isolation=TRANSACTION_READ_COMMITTED


In /tmp/testhbase2 there are 3 files:

-rw-r--r--   1 root root  4027 Apr 11 18:23 hdfs-site.xml
-rw-r--r--   1 root root  3973 Apr 11 18:29 core-site.xml
-rw-rw-rw-   1 root root  3924 Apr 11 18:49 hbase-site.xml


a) If hdfs-site.xml is missing or invalid:

It fails with Caused by: java.lang.IllegalArgumentException:
java.net.UnknownHostException: nameservice1

(with HA HDFS, hdfs-site.xml is needed to resolve the nameservice)
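
A sketch of the HA-related hdfs-site.xml entries the client needs in 
order to resolve the nameservice; this assumes the nameservice is 
called nameservice1, as in the error above, and the NameNode hosts are 
placeholders:

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>nameservice1</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.nameservice1</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.nameservice1.nn1</name>
    <value>namenode-01.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.nameservice1.nn2</name>
    <value>namenode-02.example.com:8020</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.nameservice1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>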

b) If core-site.xml is missing or invalid:

  17/04/11 19:05:01 WARN security.UserGroupInformation:
PriviledgedActionException as:root (auth:SIMPLE)
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by
GSSException: No valid credentials provided (Mechanism level: Failed to
find any Kerberos tgt)]
17/04/11 19:05:01 WARN ipc.RpcClientImpl: Exception encountered while
connecting to the server : javax.security.sasl.SaslException: GSS
initiate failed [Caused by GSSException: No valid credentials provided
(Mechanism level: Failed to find any Kerberos tgt)]
17/04/11 19:05:01 FATAL ipc.RpcClientImpl: SASL authentication failed.
The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by
GSSException: No valid credentials provided (Mechanism level: Failed to
find any Kerberos tgt)]
 at
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
 at
org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:181)

...
Caused by: GSSException: No valid credentials provided (Mechanism level:
Failed to find any Kerberos tgt)
 at
sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
 at
sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
 at
sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
 at
sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
 at
sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
 at
sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
 at
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
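
The "auth:SIMPLE" in the first log line is the tell-tale: without 
core-site.xml on the classpath, the Hadoop client falls back to simple 
authentication instead of Kerberos. For reference, a minimal sketch of 
the core-site.xml entries involved (standard Hadoop property names, 
values as typically set on a secure cluster):

<configuration>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
  </property>
</configuration>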



c) If hbase-site.xml is missing or invalid:

The ZooKeeper connection succeeds, but the HBase master connection does not:

java  -cp
/tmp/testhbase2:/opt/cloudera/parcels/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000/lib/phoenix/lib/hadoop-hdfs-2.6.0-cdh5.7.0.jar:/opt/cloudera/parcels/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000/lib/phoenix/phoenix-4.7.0-clabs-phoenix1.3.0-client.jar
sqlline.SqlLine -d org.apache.phoenix.jdbc.PhoenixDriver  -u
jdbc:phoenix:node-01u..int:2181:phoe...@hadoop.int:/etc/security/keytabs/phoenix.keytab
-n none -p none --color=true --fastConnect=false --verbose=true
--incremental=false --isolation=TRANSACTION_READ_COMMITTED
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect
jdbc:phoenix:node-01u..int:2181:phoe...@hadoop.int:/etc/security/keytabs/phoenix.keytab
none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to
jdbc:phoenix:node-01u..int:2181:phoe...@hadoop.int:/etc/security/keytabs/phoenix.keytab
17/04/11 19:06:38 INFO query.ConnectionQueryServicesImpl: Trying to
connect to a secure cluster with keytab:/etc/security/keytabs/phoenix.keytab
17/04/11 19:06:38 INFO security.UserGroupInformation: Login successful
for user phoe...@hadoop.int  using keytab
file /etc/security/keytabs/phoenix.keytab
17/04/11 19:06:38 INFO query.ConnectionQueryServicesImpl: Successfull
login 
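
If it is ever unclear which hbase-site.xml (if any) the client actually 
loaded, a quick check like the following can help. This is just a 
hypothetical debugging snippet (the class name is invented), using the 
stock Hadoop/HBase configuration APIs:

import java.net.URL;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Hypothetical helper: print where hbase-site.xml was loaded from.
public class CheckHBaseConf {
    public static void main(String[] args) {
        // HBaseConfiguration.create() loads hbase-default.xml and
        // hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        // getResource() returns null when the file is not on the classpath.
        URL url = conf.getResource("hbase-site.xml");
        System.out.println("hbase-site.xml loaded from: " + url);
        System.out.println("hbase.security.authentication = "
                + conf.get("hbase.security.authentication"));
    }
}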

Re: Weird High Read Throughput of SYSTEM.STATS

2017-04-13 Thread Josh Elser
Thanks for the version info. I'm not sure about what's included in CDH's 
packaging -- maybe someone else knows.


I understand that your question was about the read load. However, the 
old guideposts likely needed to be updated with the new files' stats. 
So even though you're only writing data to your table, updating the 
stats for that table would involve a read of the *existing* stats data. 
But, again, that's just a guess :)
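
For what it's worth, the guideposts can be inspected directly from 
sqlline; a sketch, assuming a hypothetical table named MY_TABLE and the 
SYSTEM.STATS column names as of Phoenix 4.7:

-- Count guideposts per column family for one physical table.
SELECT COLUMN_FAMILY, COUNT(*) AS GUIDEPOSTS
FROM SYSTEM.STATS
WHERE PHYSICAL_NAME = 'MY_TABLE'
GROUP BY COLUMN_FAMILY;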


Mac Fang wrote:

Josh,

Thanks for the reply,
It is a CDH vendor version (4.7.0_1.3.0). The mutations are expected when
we do a bulk load. The question is why we see such a high read load on the
SYSTEM.STATS table.


On Thu, Apr 13, 2017 at 11:42 AM, Josh Elser wrote:

What version of Phoenix are you using? (an Apache release? some
vendors' packaging?)

Academically speaking, when you bulk load some data, the stats table
should get updated (otherwise the stats are wrong until a compaction
occurs), but I can't specifically point you at a line of code that
is doing this (nor am I 100% positive it happens).
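
Stats collection can also be triggered by hand to see whether it
reproduces the read load; this uses Phoenix's standard UPDATE
STATISTICS statement, with a hypothetical table name:

UPDATE STATISTICS MY_TABLE;
-- or restrict collection to the base table's column data:
UPDATE STATISTICS MY_TABLE COLUMNS;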


On Wed, Apr 12, 2017 at 3:12 AM, Mac Fang wrote:

Hi, Guys,


We are noticing some weirdly high read throughput in HBase.

The HBase read rate:

[image: HBase read rate chart not included in the archive]

And the SYSTEM.STATS read rate:

[image: SYSTEM.STATS read rate chart not included in the archive]
During that time frame, the system did not have a high QPS.
However, it did import some data (several million rows) via the
"Phoenix Bulk Load".

The question is: what does Phoenix do with the SYSTEM.STATS table
when we do a "Bulk Load"?

We did not find any clues when we looked into the code. Any
hints?


--
regards
macf





--
regards
macf


Are arrays stored and retrieved in the order they are added to Phoenix?

2017-04-13 Thread Cheyenne Forbes
I was wondering whether arrays are stored in the order I add them, or
whether they are sorted some other way (maybe for performance reasons).
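
One way to check empirically from sqlline; a sketch with hypothetical
table and column names (note that Phoenix arrays are 1-indexed):

CREATE TABLE array_test (id INTEGER PRIMARY KEY, vals INTEGER ARRAY);
UPSERT INTO array_test VALUES (1, ARRAY[3, 1, 2]);
-- If insertion order is preserved, this returns 3, 1, 2.
SELECT vals[1], vals[2], vals[3] FROM array_test WHERE id = 1;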