A small comment (as a new Zeppelin user): use S3 to store the notebooks, and use an S3 tool to upload/download them. If you need a hand setting up S3 as notebook storage, I can help; I set it up just today and it works very well :)
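For reference, a sketch of what that S3 setup looks like in conf/zeppelin-env.sh — the bucket name and user directory here are placeholders, and you should check the storage section of the Zeppelin docs for your version, since the exact property names may differ:

```shell
# Hypothetical values; notebooks end up under s3://my-zeppelin-bucket/my-user/notebook/
export ZEPPELIN_NOTEBOOK_S3_BUCKET=my-zeppelin-bucket
export ZEPPELIN_NOTEBOOK_S3_USER=my-user
# Tell Zeppelin to use the S3-backed notebook repo instead of local storage
export ZEPPELIN_NOTEBOOK_STORAGE=org.apache.zeppelin.notebook.repo.S3NotebookRepo
```

AWS credentials are picked up the usual way (environment variables, instance profile, or credentials file).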
On 4 November 2015 at 22:01, Craig Ching <craigch...@gmail.com> wrote:

> Hi Moon,
>
> Can I set that when I start zeppelin up? The last thing I want to have to
> do is tell my users they need to change this. I’m trying to introduce new
> users to spark and I feel that zeppelin is a great way to do that. So the
> less I have them do the better.
>
> Here’s what I’m doing. First, I have zeppelin containerized in a docker
> container. This docker container is parameterized with the port and the
> spark master. Then I wrote a little UI. The user gives me their name (any
> unique id really) and I fire up a docker container running zeppelin for
> them with their own ports (I find free ports for the web port and the web
> socket port and reserve them). It’s their own little zeppelin environment
> where they can create notebooks and upload and download them (I haven’t
> quite figured out the upload and download just yet).
>
> Thanks, I appreciate the response!
>
> Cheers,
> Craig
>
> On Nov 4, 2015, at 9:49 AM, moon soo Lee <m...@apache.org> wrote:
>
> Hi,
>
> I think you can change "spark.app.name" property of your spark
> interpreter setting in "Interpreter" menu.
>
> Best,
> moon
>
> On Wed, Nov 4, 2015 at 2:12 PM Craig Ching <craigch...@gmail.com> wrote:
>
>> Hi all,
>>
>> Just starting to play with zeppelin a bit. I was wondering if there was
>> a way to set spark.app.name? It appears to be hard-coded in the source
>> (SparkInterpreter), would a PR be accepted to change this? I want to be
>> able to fire up many zeppelin instances based on a user id and have the
>> spark jobs submitted to a cluster with those ids so that users can see the
>> status of their jobs in the spark UI. Thoughts?
>>
>> Cheers,
>> Craig
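For what it's worth, the "find free ports for the web port and the web socket port" step Craig describes can be sketched in Python by binding to port 0 and letting the OS pick an ephemeral port. The docker image name in the comment is hypothetical, and note there is a small race window between closing the probe socket and the container actually binding the port, so some form of reservation (as Craig mentions) is still needed:

```python
import socket

def find_free_port():
    """Ask the OS for a currently free TCP port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))
        return s.getsockname()[1]

web_port = find_free_port()
websocket_port = find_free_port()

# Hypothetical image name; the per-user container launch would then be roughly:
#   docker run -d -p {web_port}:8080 -p {websocket_port}:8081 my-zeppelin-image
# The port is free only at probe time, so track reserved ports yourself to
# avoid handing the same one to two users before their containers start.
```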