[GitHub] zeppelin issue #1157: [ZEPPELIN-1146] Zeppelin JDBC interpreter should work ...

2016-07-18 Thread rja1
Github user rja1 commented on the issue:

https://github.com/apache/zeppelin/pull/1157
  
Thanks for the tip @prabhjyotsingh, and for your work; Hive is working!

I'm running into an issue with jdbc(phoenix) now, though, and I'm hoping you can
help. My interpreter config is listed below. Note that we don't have a
/hbase-secure znode in ZooKeeper, just /hbase:
[zk: localhost:2181(CONNECTED) 1] ls /hbase
[replication, meta-region-server, rs, splitWAL, backup-masters, table-lock, 
flush-table-proc, region-in-transition, online-snapshot, acl, master, running, 
balancer, tokenauth, recovering-regions, draining, namespace, hbaseid, table]

Here's the notebook / error:
select * from USER_ACCOUNTS where USER_SEED = '1000'
Failed after attempts=1, exceptions:
Mon Jul 18 14:00:22 MDT 2016, RpcRetryingCaller{globalStartTime=1468872022128, pause=100, retries=1}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to name01.hadoop.test.company.com/10.4.59.25:6 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to name01.hadoop.test.company.com/10.4.59.25:6 is closing. Call id=0, waitTime=11
class org.apache.phoenix.exception.PhoenixIOException
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1064)
org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1370)
org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2116)
org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:828)
org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:183)
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:338)
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1326)
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2275)
org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2244)
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2244)
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:233)
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:135)
org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202)
java.sql.DriverManager.getConnection(DriverManager.java:664)
java.sql.DriverManager.getConnection(DriverManager.java:208)
org.apache.zeppelin.jdbc.JDBCInterpreter.getConnection(JDBCInterpreter.java:226)
org.apache.zeppelin.jdbc.JDBCInterpreter.getStatement(JDBCInterpreter.java:237)
org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:296)
org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(JDBCInterpreter.java:402)
org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:383)
org.apache.zeppelin.scheduler.Job.run(Job.java:176)
org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)

Interpreter Config:
"2BRGRRCBW": {
  "id": "2BRGRRCBW",
  "name": "jdbc",
  "group": "jdbc",
  "properties": {
    "phoenix.user": "zeppelin",
    "hive.url": "jdbc:hive2://cms01.hadoop.test.company.com:1/default;principal\u003dhive/_h...@hadoop.test.company.com",
    "default.driver": "org.postgresql.Driver",
 

[GitHub] zeppelin issue #1157: [ZEPPELIN-1146] Zeppelin JDBC interpreter should work ...

2016-07-14 Thread rja1
Github user rja1 commented on the issue:

https://github.com/apache/zeppelin/pull/1157
  
Thanks for your work @prabhjyotsingh. I'm still unable to get this to work
with Hive (I haven't tried Phoenix yet). It looks like the problem is related to
hive.url; it doesn't appear that we have service discovery mode enabled. Any
direction you can give is much appreciated.

hive.url: 
jdbc:hive2://data03.hadoop.test.company.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2

Error: org.apache.hive.jdbc.ZooKeeperHiveClientException: Unable to read 
HiveServer2 uri from ZooKeeper

hive.url: jdbc:hive2://cms02.hadoop.test.company.com:1

Error: Could not open client transport with JDBC Uri: 
jdbc:hive2://cms02.hadoop.test.company.com:1: Peer indicated failure: 
Unsupported mechanism type PLAIN

hive.url: 
jdbc:hive2://cms02.hadoop.test.company.com:1/default;principal=hive/_h...@hadoop.test.company.com

Error while compiling statement: FAILED: ParseException line 1:11 
extraneous input ';' expecting EOF near ''

Settings:
"2BRGRRCBW": {
  "id": "2BRGRRCBW",
  "name": "jdbc",
  "group": "jdbc",
  "properties": {
"phoenix.user": "phoenixuser",
"hive.url": 
"jdbc:hive2://cms02.hadoop.test.company.com:1/default;principal=hive/_h...@hadoop.test.company.com",
"default.driver": "org.postgresql.Driver",
"phoenix.driver": "org.apache.phoenix.jdbc.PhoenixDriver",
"hive.user": "hive",
"psql.password": "",
"psql.user": "phoenixuser",
"psql.url": "jdbc:postgresql://localhost:5432/",
"default.user": "gpadmin",
"phoenix.hbase.client.retries.number": "1",
"phoenix.url": "jdbc:phoenix:localhost:2181:/hbase-unsecure",
"tajo.url": "jdbc:tajo://localhost:26002/default",
"tajo.driver": "org.apache.tajo.jdbc.TajoDriver",
"psql.driver": "org.postgresql.Driver",
"default.password": "",
"zeppelin.interpreter.localRepo": 
"/opt/zeppelin-ZEPPELIN-1146/local-repo/2BRGRRCBW",
"zeppelin.jdbc.auth.type": "KERBEROS",
"hive.password": "",
"zeppelin.jdbc.concurrent.use": "true",
"hive.driver": "org.apache.hive.jdbc.HiveDriver",
"common.max_count": "1000",
"zeppelin.jdbc.keytab.location": 
"/opt/zeppelin-ZEPPELIN-1146/conf/zeppelin.keytab",
"phoenix.password": "",
"zeppelin.jdbc.principal": "zeppe...@hadoop.test.company.com",
"zeppelin.jdbc.concurrent.max_connection": "10",
"default.url": "jdbc:postgresql://localhost:5432/"
  },
  "interpreterGroup": [
{
  "name": "sql",
  "class": "org.apache.zeppelin.jdbc.JDBCInterpreter"
}
  ],
  "dependencies": [
{
  "groupArtifactVersion": 
"/opt/cloudera/CDH/jars/hive-jdbc-1.1.0-cdh5.5.2.jar",
  "local": false
},
{
  "groupArtifactVersion": 
"/opt/cloudera/CDH/jars/hadoop-common-2.6.0-cdh5.5.2.jar",
  "local": false
},
{
  "groupArtifactVersion": 
"/opt/cloudera/CDH/jars/hive-shims-0.23-1.1.0-cdh5.5.2.jar",
  "local": false,
  "exclusions": []
},
{
  "groupArtifactVersion": 
"/opt/cloudera/CDH/jars/hive-jdbc-1.1.0-cdh5.5.2-standalone.jar",
  "local": false
}
  ],
  "option": {
"remote": true,
"port": -1,
"perNoteSession": false,
"perNoteProcess": false,
"isExistingProcess": false
  }

I'm then testing the notebook as follows:
%jdbc(hive)
show tables;
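One note on the ParseException above ("extraneous input ';' expecting EOF"): that
error usually comes from the statement text rather than the URL, since older Hive
JDBC / HiveServer2 versions typically do not accept a trailing semicolon inside the
query itself. Assuming the kerberized hive.url is otherwise correct, it may be
worth retrying the paragraph without the trailing semicolon, e.g.:

%jdbc(hive)
show tables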









[GitHub] zeppelin issue #1157: [ZEPPELIN-1146] Zeppelin JDBC interpreter should work ...

2016-07-13 Thread rja1
Github user rja1 commented on the issue:

https://github.com/apache/zeppelin/pull/1157
  
Thanks @prabhjyotsingh. A novice question: how do you indicate in the notebook
that you want JDBC Hive vs. JDBC PostgreSQL, JDBC Phoenix, etc.?

If I run:
%jdbc
show tables;

Zeppelin defaults to trying to connect to Postgres (since that's the default
driver, which makes sense).
I'm guessing you're doing something like:
%jdbc.hive?
show tables;
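For reference, the prefix form that ends up being used in the later comments in
this thread is %jdbc(hive) rather than %jdbc.hive; the other connections presumably
follow the same pattern based on their property prefixes in the interpreter
settings, for example:

%jdbc(hive)
show tables;

%jdbc(phoenix)
select * from USER_ACCOUNTS where USER_SEED = '1000'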




 









[GitHub] zeppelin issue #1157: [ZEPPELIN-1146] Zeppelin JDBC interpreter should work ...

2016-07-12 Thread rja1
Github user rja1 commented on the issue:

https://github.com/apache/zeppelin/pull/1157
  
@prabhjyotsingh, would you mind sharing your JDBC interpreter settings, please?
I must be missing something. Thanks!




[GitHub] zeppelin issue #986: [Zeppelin 946] Permissions not honoring group

2016-06-15 Thread rja1
Github user rja1 commented on the issue:

https://github.com/apache/zeppelin/pull/986
  
Nice job @prabhjyotsingh, thanks for your work! I don't see the extra LDAP call
anymore (step 1), and I also no longer see the large LDAP call (step 4). In
addition, all of my roles are mapped as expected.

I'm not sure whether all of these changes come from your latest commit, or whether
I had missed something along the way that I picked up in the latest 946 branch.
At any rate, it looks good!

I'll continue to test it today and will let you know if I see any other 
issues.




[GitHub] zeppelin issue #986: [Zeppelin 946] Permissions not honoring group

2016-06-14 Thread rja1
Github user rja1 commented on the issue:

https://github.com/apache/zeppelin/pull/986
  
Thanks once again @prabhjyotsingh; I really appreciate your work. The
activeDirectoryRealm.principalSuffix works now. I do have some concerns, though,
about the number of LDAP calls made and the amount of data pulled back.

It looks like the app:
1) makes an LDAP bindRequest as n...@mydomain.com and fails,
2) makes an LDAP bindRequest as usern...@mydomain.com and succeeds,
3) makes a bindRequest as activedirectoryrealm.systemusern...@mydomain.com and succeeds,
4) does a searchRequest for the wholeSubTree.

Step 4 pulls back about 5 MB of data, which is a lot. That could add quite a bit
of load to AD if many users are hitting the UI at the same time. I can limit the
result set by more fully qualifying activeDirectoryRealm.searchBase; however, it
then seems to miss the group data. It seems like there should really only be a
couple of lightweight calls:
1) bind as the username, and
2) pull back the group memberships for that username (if step one succeeded).

I'm not sure if there's a more concise way to make these queries in Java. From
the command line I can do it like this:

ldapsearch -xLLL -h ldapServer -b "dc=company,dc=com" -D "CN=LDAP Bind,OU=Special,Accounts,DC=company,DC=com" -W uid=randerson

This returns everything about uid randerson, including all group memberships, and
the total size of the data is about 60 KB.
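For what it's worth, the same two-step flow (bind as the user, then fetch only
that user's group attribute) can be written fairly compactly with plain JNDI.
This is only a rough sketch with hypothetical host, base DN, credentials, and
filter values, not what the Shiro realm actually does; an AD deployment may also
need sAMAccountName instead of uid:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class LdapGroupLookup {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details -- adjust to the real directory.
        String url = "ldap://server:389";
        String userPrincipal = "username@company.com";   // step 1: bind as the user
        String password = "password";
        String searchBase = "dc=company,dc=com";

        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, userPrincipal);
        env.put(Context.SECURITY_CREDENTIALS, password);

        // Throws javax.naming.AuthenticationException if the bind (step 1) fails.
        InitialDirContext ctx = new InitialDirContext(env);

        // Step 2: fetch only this user's entry and its memberOf attribute,
        // instead of dumping the whole subtree.
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
        controls.setReturningAttributes(new String[] { "memberOf" });

        NamingEnumeration<SearchResult> results =
            ctx.search(searchBase, "(uid={0})", new Object[] { "randerson" }, controls);
        while (results.hasMore()) {
            SearchResult result = results.next();
            System.out.println(result.getAttributes().get("memberOf"));
        }
        ctx.close();
    }
}

Scoping the search with a single-user filter like this keeps the response to one
entry, closer to the ~60 KB ldapsearch result above than to the 5 MB subtree dump.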

In addition, my groups / roles are still not mapped to my username, regardless of
whether the app searches the whole tree or not. I'm not sure why; perhaps I've
missed something along the way.

Here's my shiro.ini:

[main]
activeDirectoryRealm = org.apache.zeppelin.server.ActiveDirectoryGroupRealm
activeDirectoryRealm.systemUsername = username
activeDirectoryRealm.systemPassword = password
activeDirectoryRealm.searchBase = dc=company,dc=com
activeDirectoryRealm.url = ldap://server:389
activeDirectoryRealm.groupRolesMap = "cn=g.acl.ops.bigdata,ou=unix groups,ou=groups,ou=accounts,cn=users,dc=company,dc=com":"admin"
activeDirectoryRealm.authorizationCachingEnabled=false
activeDirectoryRealm.principalSuffix=@DOMAIN.COM
shiro.loginUrl = /api/login
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 8640
shiro.loginUrl = /api/login

[roles]
admin = *

[urls]
/api/version = anon
/** = authc




[GitHub] zeppelin issue #986: [Zeppelin 946] Permissions not honoring group

2016-06-13 Thread rja1
Github user rja1 commented on the issue:

https://github.com/apache/zeppelin/pull/986
  
Thanks for your work on this! I built and deployed the zeppelin-946 branch.
Unfortunately, I'm unable to authenticate against AD. Looking at a tcpdump, it
appears activeDirectoryRealm.principalSuffix isn't being honored; as a result, the
bind request goes out for randerson rather than rander...@company.com.
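For context, the setting being discussed is the one that appears in the shiro.ini
posted in the follow-up comment above, along the lines of (the domain value here is
only illustrative):

activeDirectoryRealm.principalSuffix = @DOMAIN.COM

When it is honored, the realm should append that suffix to the short username
before issuing the bind request.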

