[jira] [Created] (ZEPPELIN-4327) Chrome autofills user name in notebook filter

2019-09-12 Thread Maziyar PANAHI (Jira)
Maziyar PANAHI created ZEPPELIN-4327:


 Summary: Chrome autofills user name in notebook filter
 Key: ZEPPELIN-4327
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-4327
 Project: Zeppelin
  Issue Type: Bug
  Components: front-end
Affects Versions: 0.8.1, 0.8.0, 0.8.2
Reporter: Maziyar PANAHI


If you have your username and password saved in Chrome, it keeps autofilling the 
Filter box with your username. As a result no notebooks are shown, and the field 
has to be cleared manually from time to time. This does not happen if you are not 
saving your username and password.

An identical issue was reported and fixed in Hue, in case it helps:

[https://issues.cloudera.org/browse/HUE-8727]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (ZEPPELIN-4028) New zpln notes impossible to delete

2019-03-04 Thread Maziyar PANAHI (JIRA)
Maziyar PANAHI created ZEPPELIN-4028:


 Summary: New zpln notes impossible to delete
 Key: ZEPPELIN-4028
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-4028
 Project: Zeppelin
  Issue Type: Bug
  Components: zeppelin-server, zeppelin-zengine
Affects Versions: 0.9.0
Reporter: Maziyar PANAHI
 Attachments: zeppelin-ghost-notes-bug.gif

Hi,

I am using 0.9.0, which supports the new zpln note format. However, I have a 
couple of problems:

1- If several users create notes with the same name at the same time, the 
permissions get mixed up, even though each note gets a unique ID appended to its 
name on HDFS. A few seconds after I create an empty note, Zeppelin tells me I 
don't have enough permission to view it.

 

2- It then becomes impossible to remove these notes: Zeppelin says the note 
doesn't exist!

!zeppelin-ghost-notes-bug.gif!

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ZEPPELIN-4027) Reload notes from storage displays user's notes to everyone

2019-03-04 Thread Maziyar PANAHI (JIRA)
Maziyar PANAHI created ZEPPELIN-4027:


 Summary: Reload notes from storage displays user's notes to 
everyone
 Key: ZEPPELIN-4027
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-4027
 Project: Zeppelin
  Issue Type: Bug
  Components: zeppelin-zengine
Affects Versions: 0.9.0
Reporter: Maziyar PANAHI
 Attachments: zeppelin-reload-notebooks-bug.gif

Hi,

 

If a user clicks on "Reload notes from storage", everyone else will see that 
user's notes on their homepage instead of their own notes!

 

!zeppelin-reload-notebooks-bug.gif!

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ZEPPELIN-4014) Zeppelin notebook filter displays different names for zpln notes

2019-02-19 Thread Maziyar PANAHI (JIRA)
Maziyar PANAHI created ZEPPELIN-4014:


 Summary: Zeppelin notebook filter displays different names for 
zpln notes
 Key: ZEPPELIN-4014
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-4014
 Project: Zeppelin
  Issue Type: Bug
  Components: GUI
Affects Versions: 0.9.0
 Environment: Ubuntu 16.04: zeppelin-server

Zeppelin 0.9.0

Cloudera/CDH 6.1

Chrome: latest

macOS latest
Reporter: Maziyar PANAHI
 Attachments: Screenshot 2019-02-19 19.58.42.png, Screenshot 2019-02-19 
20.01.13.png

Hi,

I have used the provided configs to convert the old note.json files to *.zpln. 
However, I have two problems with the filter textbox.

1- Whatever I search for, it displays the note names in the "Note ID" format:

!Screenshot 2019-02-19 19.58.42.png!

2- The other problem is that this textbox is not set to *autocomplete="nope"*, so 
every now and then something strange pops into the Filter on its own as a result 
of some sort of browser autocomplete.

!Screenshot 2019-02-19 20.01.13.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ZEPPELIN-3991) Fail to bootstrap PySpark in yarn client mode

2019-02-06 Thread Maziyar PANAHI (JIRA)
Maziyar PANAHI created ZEPPELIN-3991:


 Summary: Fail to bootstrap PySpark in yarn client mode
 Key: ZEPPELIN-3991
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-3991
 Project: Zeppelin
  Issue Type: Bug
  Components: pySpark, python-interpreter
Affects Versions: 0.9.0
 Environment: Cloudera/CDH 6.1

Spark 2.4

Hadoop 3.0

*zeppelin-env.sh*
{code:java}
export PYSPARK_PYTHON=/opt/cloudera/parcels/Anaconda/envs/py36/bin/python3
export 
PYSPARK_DRIVER_PYTHON=/opt/cloudera/parcels/Anaconda/envs/py36/bin/python3

export PYSPARKPYTHON=/opt/cloudera/parcels/Anaconda/envs/py36/bin/python3
export PYSPARKDRIVERPYTHON=/opt/cloudera/parcels/Anaconda/envs/py36/bin/python3
{code}
Reporter: Maziyar PANAHI


Hi,

PySpark fails with the following error in Zeppelin 0.9.0:
{code:java}
ERROR [2019-02-06 22:23:40,599] ({FIFOScheduler-Worker-1} Job.java[run]:174) - 
Job failed
org.apache.zeppelin.interpreter.InterpreterException: Fail to bootstrap pyspark
at 
org.apache.zeppelin.spark.PySparkInterpreter.open(PySparkInterpreter.java:124)
at 
org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
at 
org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:593)
at 
org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:502)
at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
at 
org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:121)
at 
org.apache.zeppelin.scheduler.FIFOScheduler.lambda$runJobInScheduler$0(FIFOScheduler.java:39)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Fail to run bootstrap script: 
python/zeppelin_pyspark.py
at 
org.apache.zeppelin.python.PythonInterpreter.bootstrapInterpreter(PythonInterpreter.java:581)
at 
org.apache.zeppelin.spark.PySparkInterpreter.open(PySparkInterpreter.java:122)
... 9 more
{code}
Full logs of YARN:

[https://gist.github.com/maziyarpanahi/3380e230246271217a2feb4512f5d665]

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ZEPPELIN-3987) Zeppelin 0.9.0 fail to access Notebooks from HDFS

2019-02-03 Thread Maziyar PANAHI (JIRA)
Maziyar PANAHI created ZEPPELIN-3987:


 Summary: Zeppelin 0.9.0 fail to access Notebooks from HDFS
 Key: ZEPPELIN-3987
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-3987
 Project: Zeppelin
  Issue Type: Bug
Affects Versions: 0.9.0
 Environment: Cloudera 6.1

Spark 2.4

Hadoop 3.0

Shiro, LDAP
Reporter: Maziyar PANAHI
 Attachments: Screenshot 2019-02-03 17.59.35.png

Hi,

I have built Zeppelin-0.9.0-SNAPSHOT and copied my configs from the previous 
version (0.8.2) into the new directory. All versions after 0.8.0 (0.8.1, 0.8.2) 
fetch all the notebooks from HDFS immediately after startup. In 0.9.0, however, 
the UI stays empty and the logs indicate that the notebooks were never read.
{code:java}
<property>
  <name>zeppelin.notebook.storage</name>
  <value>org.apache.zeppelin.notebook.repo.FileSystemNotebookRepo</value>
  <description>hadoop compatible file system notebook persistence layer implementation</description>
</property>
<property>
  <name>zeppelin.notebook.dir</name>
  <value>hdfs://hadoop-master-1:8020/user/zeppelin/notebook</value>
  <description>path or URI for notebook persist</description>
</property>
{code}
The startup logs:

 
{code:java}
INFO [2019-02-03 17:55:41,797] ({main} ZeppelinConfiguration.java[create]:127) 
- Load configuration from 
file:/opt/zeppelin-0.9.0-SNAPSHOT/conf/zeppelin-site.xml
INFO [2019-02-03 17:55:41,856] ({main} ZeppelinConfiguration.java[create]:135) 
- Server Host: 0.0.0.0
INFO [2019-02-03 17:55:41,857] ({main} ZeppelinConfiguration.java[create]:137) 
- Server Port: 8080
INFO [2019-02-03 17:55:41,857] ({main} ZeppelinConfiguration.java[create]:141) 
- Context Path: /
INFO [2019-02-03 17:55:41,857] ({main} ZeppelinConfiguration.java[create]:142) 
- Zeppelin Version: 0.9.0-SNAPSHOT
INFO [2019-02-03 17:55:41,876] ({main} Log.java[initialized]:193) - Logging 
initialized @440ms to org.eclipse.jetty.util.log.Slf4jLog
WARN [2019-02-03 17:55:41,994] ({main} 
ServerConnector.java[setSoLingerTime]:458) - Ignoring deprecated socket close 
linger time
INFO [2019-02-03 17:55:42,064] ({main} 
ZeppelinServer.java[setupWebAppContext]:403) - ZeppelinServer Webapp path: 
/opt/zeppelin-0.9.0-SNAPSHOT/webapps
WARN [2019-02-03 17:55:42,223] ({main} 
NotebookAuthorization.java[getInstance]:79) - Notebook authorization module was 
called without initialization, initializing with default configuration
WARN [2019-02-03 17:55:42,225] ({main} 
ZeppelinConfiguration.java[getConfigFSDir]:545) - zeppelin.config.fs.dir is not 
specified, fall back to local conf directory zeppelin.conf.dir
WARN [2019-02-03 17:55:42,225] ({main} 
ZeppelinConfiguration.java[getConfigFSDir]:545) - zeppelin.config.fs.dir is not 
specified, fall back to local conf directory zeppelin.conf.dir
INFO [2019-02-03 17:55:42,225] ({main} 
LocalConfigStorage.java[loadNotebookAuthorization]:84) - Load notebook 
authorization from file: 
/opt/zeppelin-0.9.0-SNAPSHOT/conf/notebook-authorization.json
INFO [2019-02-03 17:55:42,279] ({main} Credentials.java[loadFromFile]:121) - 
/opt/zeppelin-0.9.0-SNAPSHOT/conf/credentials.json
INFO [2019-02-03 17:55:42,350] ({main} NotebookServer.java[]:145) - 
NotebookServer instantiated: org.apache.zeppelin.socket.NotebookServer@ae13544
INFO [2019-02-03 17:55:42,350] ({main} 
NotebookServer.java[setServiceLocator]:150) - Injected ServiceLocator: 
ServiceLocatorImpl(shared-locator,0,1089504328)
INFO [2019-02-03 17:55:42,351] ({main} NotebookServer.java[setNotebook]:156) - 
Injected NotebookProvider
INFO [2019-02-03 17:55:42,353] ({main} 
NotebookServer.java[setNotebookService]:163) - Injected NotebookServiceProvider
INFO [2019-02-03 17:55:42,359] ({main} ZeppelinServer.java[main]:233) - 
Starting zeppelin server
INFO [2019-02-03 17:55:42,361] ({main} Server.java[doStart]:370) - 
jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: 
c4550056e785fb5665914545889f21dc136ad9e6; jvm 1.8.0_201-b09
INFO [2019-02-03 17:55:44,696] ({main} 
StandardDescriptorProcessor.java[visitServlet]:283) - NO JSP Support for /, did 
not find org.eclipse.jetty.jsp.JettyJspServlet
INFO [2019-02-03 17:55:44,711] ({main} 
DefaultSessionIdManager.java[doStart]:365) - DefaultSessionIdManager 
workerName=node0
INFO [2019-02-03 17:55:44,711] ({main} 
DefaultSessionIdManager.java[doStart]:370) - No SessionScavenger set, using 
defaults
INFO [2019-02-03 17:55:44,713] ({main} HouseKeeper.java[startScavenging]:149) - 
node0 Scavenging every 66ms
INFO [2019-02-03 17:55:44,720] ({main} ContextHandler.java[log]:2345) - 
Initializing Shiro environment
INFO [2019-02-03 17:55:44,720] ({main} 
EnvironmentLoader.java[initEnvironment]:133) - Starting Shiro environment 
initialization.
INFO [2019-02-03 17:55:45,078] ({main} IniRealm.java[processDefinitions]:188) - 
IniRealm defined, but there is no [users] section defined. This realm will not 
be populated with any users and it is assumed that they will be populated 
programatically. Users must be defined for this Realm instance to be useful.
INFO [2019-02-03 17:55:45,078] 

[jira] [Created] (ZEPPELIN-3986) Cannot access any JAR in yarn cluster mode

2019-02-03 Thread Maziyar PANAHI (JIRA)
Maziyar PANAHI created ZEPPELIN-3986:


 Summary: Cannot access any JAR in yarn cluster mode
 Key: ZEPPELIN-3986
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-3986
 Project: Zeppelin
  Issue Type: Bug
  Components: Interpreters
Affects Versions: 0.8.1, 0.8.2
 Environment: Cloudera/CDH 6.1

Spark 2.4

Hadoop 3.0

Zeppelin 0.8.2 (built from the latest merged pull request)
Reporter: Maziyar PANAHI


Hello,

YARN cluster mode was introduced in `0.8.0`, and the issue of not finding 
ZeppelinContext was fixed in `0.8.1`. However, I have difficulties accessing any 
JAR in order to `import` it inside my notebook.

I have a CDH cluster where everything works in deployMode `client`, but the 
moment I switch to `cluster`, so that the driver is no longer on the same machine 
as the Zeppelin server, it can't find the packages.

Working configs:

Inside interpreter:

master: yarn

spark.submit.deployMode: client

Inside `zeppelin-env.sh`:

 
{code:java}
export SPARK_SUBMIT_OPTIONS="--jars hdfs:///user/maziyar/jars/zeppelin/graphframes/graphframes-assembly-0.7.0-spark2.3-SNAPSHOT.jar"
{code}
 

Since the JAR is already on HDFS, switching to `cluster` should be as simple as 
changing `spark.submit.deployMode` to `cluster`. However, doing that results in:

 
{code:java}
import org.graphframes._

:23: error: object graphframes is not a member of package org
import org.graphframes._
{code}
I can see my JAR in Spark UI in `spark.yarn.dist.jars` and 
`spark.yarn.secondary.jars` in both cluster and client mode.

 

In `client` mode, `sc.jars` returns:

 
{code:java}
res2: Seq[String] = 
List(file:/opt/zeppelin-0.8.2-new/interpreter/spark/spark-interpreter-0.8.2-SNAPSHOT.jar){code}
 

However, in `cluster` mode the same command returns an empty list. I suspect 
something extra or missing in the Zeppelin Spark interpreter prevents the JAR 
from being used in cluster mode.
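
A quick way to compare the two modes from a notebook paragraph (a minimal sketch; 
`sc` is the SparkContext Zeppelin provides, and the graphframes class name is only 
an example):
{code:java}
// Sketch: compare what the driver actually sees in client vs cluster mode.
println(sc.jars.mkString("\n"))                               // empty in cluster mode
println(sc.getConf.get("spark.yarn.dist.jars", "<not set>"))  // shows the HDFS jar in both modes

// Try loading a class from the jar on the driver; false means the jar never
// made it onto the driver classpath.
println(scala.util.Try(Class.forName("org.graphframes.GraphFrame")).isSuccess)
{code}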

 

This is how Spark UI reports my JAR in `client` mode:

 

 

 

 
|spark.repl.local.jars 
|file:/tmp/spark-3aadfe3c-8821-4dfe-875b-744c2e35a95a/graphframes-assembly-0.7.0-spark2.3-SNAPSHOT.jar|
|spark.yarn.dist.jars 
|hdfs://hadoop-master-1:8020/user/mpanahi/jars/zeppelin/graphframes/graphframes-assembly-0.7.0-spark2.3-SNAPSHOT.jar|
|spark.yarn.secondary.jars|graphframes-assembly-0.7.0-spark2.3-SNAPSHOT.jar|
|sun.java.command|org.apache.spark.deploy.SparkSubmit --master yarn --conf 
spark.executor.memory=5g --conf spark.driver.memory=8g --conf 
spark.driver.cores=4 --conf spark.yarn.isPython=true --conf 
spark.driver.extraClassPath=:/opt/zeppelin-0.8.2-new/interpreter/spark/*:/opt/zeppelin-0.8.2-new/zeppelin-interpreter/target/lib/*::/opt/zeppelin-0.8.2-new/zeppelin-interpreter/target/classes:/opt/zeppelin-0.8.2-new/zeppelin-interpreter/target/test-classes:/opt/zeppelin-0.8.2-new/zeppelin-zengine/target/test-classes:/opt/zeppelin-0.8.2-new/interpreter/spark/spark-interpreter-0.8.2-SNAPSHOT.jar
 --conf spark.useHiveContext=true --conf spark.app.name=Zeppelin --conf 
spark.executor.cores=5 --conf spark.submit.deployMode=client --conf 
spark.dynamicAllocation.maxExecutors=50 --conf 
spark.dynamicAllocation.initialExecutors=1 --conf 
spark.dynamicAllocation.enabled=true --conf spark.driver.extraJavaOptions= 
-Dfile.encoding=UTF-8 
-Dlog4j.configuration=file:///opt/zeppelin-0.8.2-new/conf/log4j.properties 
-Dzeppelin.log.file=/var/log/zeppelin/zeppelin-interpreter-spark-mpanahi-zeppelin-hadoop-gateway.log
 --class org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer --jars 
hdfs:///user/mpanahi/jars/zeppelin/graphframes/graphframes-assembly-0.7.0-spark2.3-SNAPSHOT.jar,|

 

This is how Spark UI reports my JAR in `cluster` mode (same configs as I 
mentioned above):

  
|spark.repl.local.jars |This field does not exist in cluster mode|
|spark.yarn.dist.jars 
|hdfs://hadoop-master-1:8020/user/mpanahi/jars/zeppelin/graphframes/graphframes-assembly-0.7.0-spark2.3-SNAPSHOT.jar|
|spark.yarn.secondary.jars|graphframes-assembly-0.7.0-spark2.3-SNAPSHOT.jar|
|sun.java.command|org.apache.spark.deploy.yarn.ApplicationMaster --class 
org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer --jar 
file:/opt/zeppelin-0.8.2-new/interpreter/spark/spark-interpreter-0.8.2-SNAPSHOT.jar
 --arg 134.158.74.122 --arg 46130 --arg : --properties-file 
/yarn/nm/usercache/mpanahi/appcache/application_1547731772080_0077/container_1547731772080_0077_01_01/__spark_conf__/__spark_conf__.properties|

 

Thank you.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ZEPPELIN-3939) Spark 2.4 incompatibility with commons-lang3 in Zeppelin

2019-01-08 Thread Maziyar PANAHI (JIRA)
Maziyar PANAHI created ZEPPELIN-3939:


 Summary: Spark 2.4 incompatibility with commons-lang3 in Zeppelin
 Key: ZEPPELIN-3939
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-3939
 Project: Zeppelin
  Issue Type: Bug
  Components: Interpreters, zeppelin-interpreter
Affects Versions: 0.8.1
 Environment: Cloudera 6.1

Spark 2.4

Hadoop 3.0
Reporter: Maziyar PANAHI


Hi,

I have built Zeppelin on my Cloudera 6.1 cluster for Spark 2.4 (Hadoop 3.0), and 
the Spark 2.4 support itself works fine.

However, I can't read JSON or CSV files due to the following error:

 
{noformat}
java.io.InvalidClassException: org.apache.commons.lang3.time.FastDateParser; 
local class incompatible 
{noformat}
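
Any plain read from a notebook paragraph triggers it, for example (a minimal 
sketch; the path is only a placeholder):
{code:java}
// Minimal reproduction sketch: a plain CSV read fails on the executors,
// presumably because different commons-lang3 versions end up on the driver
// and executor classpaths. The path below is only a placeholder.
val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("hdfs:///tmp/sample.csv")

df.show(5)  // throws java.io.InvalidClassException: ...FastDateParser; local class incompatible
{code}
The full stack trace: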
 

 
{code:java}
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
stage 4.0 failed 4 times, most recent failure: Lost task 0.3 in stage 4.0 (TID 
117, hadoop-16, executor 3): java.io.InvalidClassException: 
org.apache.commons.lang3.time.FastDateParser; local class incompatible: stream 
classdesc serialVersionUID = 2, local class serialVersionUID = 3 at 
java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:699) at 
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885) at 
java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751) at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042) at 
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) at 
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at 
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) at 
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at 
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) at 
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at 
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) at 
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at 
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) at 
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at 
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) at 
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at 
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at 
java.io.ObjectInputStream.readObject(ObjectInputStream.java:431) at 
scala.collection.immutable.List$SerializationProxy.readObject(List.scala:490) 
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170) at 
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178) at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at 
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) at 
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at 
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) at 
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at 
java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at 
java.io.ObjectInputStream.readObject(ObjectInputStream.java:431) at 
scala.collection.immutable.List$SerializationProxy.readObject(List.scala:490) 
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at 

[jira] [Created] (ZEPPELIN-3847) Duplicate results in notebooks

2018-11-02 Thread Maziyar PANAHI (JIRA)
Maziyar PANAHI created ZEPPELIN-3847:


 Summary: Duplicate results in notebooks
 Key: ZEPPELIN-3847
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-3847
 Project: Zeppelin
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Maziyar PANAHI
 Attachments: Screenshot 2018-11-02 17.51.00.png, Screenshot 2018-11-02 
17.51.10.png

After upgrading to Zeppelin 0.8.0 with the same tech stack behind it, I started 
noticing that after working with any notebook (small or large) for a while, the 
notebook becomes unresponsive for a few seconds (I can't scroll) and then shows 
duplicate results for all the executed paragraphs.

I am not sure whether the page becomes unresponsive because all the paragraph 
results are being duplicated or for some other reason, but it seems related to 
every result being rendered twice in the page.

NOTE: If I refresh the page, the duplicates disappear and there is only one 
result per paragraph. This only happens while the notebook stays open.

 

To reproduce this, simply disconnect from the internet (if Zeppelin is hosted 
remotely); the WebSocket keeps trying to reconnect. Reconnect to the internet and 
you will immediately see the results being duplicated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ZEPPELIN-3668) Can't hide Spark Jobs (Spark UI) button

2018-07-27 Thread Maziyar PANAHI (JIRA)
Maziyar PANAHI created ZEPPELIN-3668:


 Summary: Can't hide Spark Jobs (Spark UI) button
 Key: ZEPPELIN-3668
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-3668
 Project: Zeppelin
  Issue Type: Bug
  Components: front-end
Affects Versions: 0.8.0
Reporter: Maziyar PANAHI


Hi,

In Zeppelin 0.8.0, after upgrading from 0.7.3, I can't manage to hide the "Spark 
Jobs" / Spark UI button in our notes.

I did a bit of digging and found that if "spark.ui.enabled" is set to false, the 
button should not be visible. However, the setting seems to be ignored and the 
button is still shown.
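
A quick sanity check from a notebook paragraph (a minimal sketch; `sc` is the 
SparkContext Zeppelin provides) shows whether the property actually reaches the 
Spark configuration:
{code:java}
// Sketch: if this prints Some(false) while the button is still shown, the
// front-end is ignoring the setting rather than the setting not being applied.
println(sc.getConf.getOption("spark.ui.enabled"))
{code}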

 

Thank you.

Maziyar



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ZEPPELIN-3646) Previous permissions are not effective and Notes are visible to everyone after upgrade

2018-07-21 Thread Maziyar PANAHI (JIRA)
Maziyar PANAHI created ZEPPELIN-3646:


 Summary: Previous permissions are not effective and Notes are 
visible to everyone after upgrade
 Key: ZEPPELIN-3646
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-3646
 Project: Zeppelin
  Issue Type: Bug
  Components: security
Affects Versions: 0.8.0
Reporter: Maziyar PANAHI


Hi,

I followed the upgrade guide by copying the *notebook* and *conf* directories 
into the new Zeppelin 0.8 directory. However, right after starting the new 
Zeppelin, all users can see all the existing notes. (I followed the same process 
for 0.7.x upgrades without a problem.)

I can confirm the permissions still exist by looking at the permissions file or 
by opening someone else's note and checking its permissions.

If I start the old Zeppelin 0.7.3, the permissions behave normally as expected. I 
don't understand why 0.8.0 fails to hide the notes a user has no READ permission 
for.

Both *ZEPPELIN_NOTEBOOK_PUBLIC* and *zeppelin.notebook.public* are set to 
*false*, but I don't think the configs are the issue, since the same settings 
work in 0.7.3 and not in 0.8.0; it looks to me like the handling of read 
permissions may have changed in 0.8.0.

Let me know if you need config/log details.

Thank you.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ZEPPELIN-3619) Multi-line code is not allowed: illegal start of definition

2018-07-12 Thread Maziyar PANAHI (JIRA)
Maziyar PANAHI created ZEPPELIN-3619:


 Summary: Multi-line code is not allowed: illegal start of 
definition
 Key: ZEPPELIN-3619
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-3619
 Project: Zeppelin
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Maziyar PANAHI


Hi,

Previously I was able to have code like this in my 0.7.3 Spark Interpreter 
(YARN cluster):

 
{code:java}
val word2Vec = new Word2Vec()
 .setInputCol("filtered")
 .setOutputCol("word2vec")
 .setVectorSize(100)
 .setMinCount(10)
 .setMaxIter(20){code}
But the same code in Zeppelin 0.8 on my local machine gives me this error (I am 
testing the new release before upgrading the cluster installation):
{code:java}
:1: error: illegal start of definition{code}
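
Wrapping the whole chained expression in a single block sometimes helps 
line-based REPL parsing treat it as one statement (a hedged workaround sketch; I 
have not confirmed it applies to this parser):
{code:java}
// Hedged workaround sketch: wrap the chained builder calls in one block so the
// interpreter sees a single statement instead of lines starting with ".".
import org.apache.spark.ml.feature.Word2Vec

val word2Vec = {
  new Word2Vec()
    .setInputCol("filtered")
    .setOutputCol("word2vec")
    .setVectorSize(100)
    .setMinCount(10)
    .setMaxIter(20)
}
{code}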
 

Many thanks. 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ZEPPELIN-3267) Users are not authorised to restart Spark interpreter

2018-02-26 Thread Maziyar PANAHI (JIRA)
Maziyar PANAHI created ZEPPELIN-3267:


 Summary: Users are not authorised to restart Spark interpreter
 Key: ZEPPELIN-3267
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-3267
 Project: Zeppelin
  Issue Type: Improvement
  Components: Interpreters, security
Affects Versions: 0.7.3
 Environment: Apache Zeppelin 0.7.3

Apache Spark 2.2 CDH

YARN Cloudera 5.14
Reporter: Maziyar PANAHI


Following up on this issue, which was merged a long time ago:

https://issues.apache.org/jira/browse/ZEPPELIN-987

I have two problems with this way of "securing" endpoints, especially for 
interpreters:
 # If users are not supposed to access these three areas, shouldn't the UI be 
smarter and hide them as well? It is not very ergonomic to display a choice and 
then say "sorry, you can't touch this".
 # The bigger issue I am facing right now is that users can't restart their own 
Spark interpreter after securing `/api/interpreter/**`. It says you are not 
authorised to access /api/interpreters/settings/restart/.

It is really important for users to be able to start a fresh Spark context, since 
sessions are not terminated after some idle time (at least not in 0.7.3) the way 
Livy does it. So users may need to restart the interpreter to get a fresh Spark 
context/session and discard old variables and the old UI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ZEPPELIN-3125) Invalid UTF-8 middle byte

2018-01-02 Thread Maziyar PANAHI (JIRA)
Maziyar PANAHI created ZEPPELIN-3125:


 Summary: Invalid UTF-8 middle byte
 Key: ZEPPELIN-3125
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-3125
 Project: Zeppelin
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.3
Reporter: Maziyar PANAHI


If a user writes any character that is not UTF-8 inside the notebook, it 
immediately results in an error.

For instance:

{code:java}
wikipediaDF.filter($"article" === "série")

{code}

{code:java}
Error with 400 StatusCode: "Invalid UTF-8 middle byte 0x72\n at [Source: 
HttpInputOverHTTP@29b395f7; line: 2, column: 107]"
{code}


Log output from Zeppelin:

{code:java}
Job 20180102-180648_179114920 is finished, status: ERROR, exception: null, 
result: %text Error with 400 StatusCode: "Invalid UTF-8 middle byte 0x72\n at 
[Source: HttpInputOverHTTP@29b395f7; line: 2, column: 107]"
{code}




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (ZEPPELIN-2993) Job manager should only display user's jobs

2017-10-13 Thread Maziyar PANAHI (JIRA)
Maziyar PANAHI created ZEPPELIN-2993:


 Summary: Job manager should only display user's jobs
 Key: ZEPPELIN-2993
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-2993
 Project: Zeppelin
  Issue Type: Bug
Affects Versions: 0.7.3
Reporter: Maziyar PANAHI


Hi,

The Job manager displays all jobs from all users to every user. This may cause 
some issues:

1- Privacy: Although it is not possible to stop, run, or view someone else's 
notebook from the Job manager, users can still see the names of those notebooks. 
One can argue that if a notebook is not accessible to a user, there is no reason 
for that user to see its status in the Job manager.
2- Management: It is much easier to deal only with your own jobs rather than 
everyone's. There is a search bar to filter, but what if the names are the same? 
Imagine 10 notebooks by 10 users all named "Test"; then the only way is trial and 
error until you find your own job.
3- Use case: The only situation where listing all jobs helps is when you are an 
admin, which brings me to my question:

How do you set permissions for the Job manager? How can someone set permissions 
so that only people in [admin] can see all the jobs while everyone else sees only 
their own jobs?
Is this possible with Shiro?

Many thanks,




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)