Nice to see this update. I think this serves the purpose; a deploy-time update should be enough. In my environment now, I tweak the conf and put it on HDFS, and every time, the executor pulls it down and spins up with my customized conf.
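Roughly, the per-deploy flow looks like the sketch below; the paths, tarball name, and HDFS location are placeholders rather than my actual values:

# Hypothetical deploy-time repack-and-publish flow; adjust names and paths.
set -e
STORM_DIST=storm-mesos-0.9.2                      # unpacked distro dir (placeholder)
HDFS_DEST=hdfs://namenode:8020/frameworks/storm   # where executors fetch from (placeholder)

# 1. Drop the customized config into the distro.
cp my-storm.yaml  "$STORM_DIST/conf/storm.yaml"
cp my-logback.xml "$STORM_DIST/logback/cluster.xml"

# 2. Repack the distro.
tar czf "$STORM_DIST.tgz" "$STORM_DIST"

# 3. Publish to HDFS; point the executor URI at this location.
hdfs dfs -put -f "$STORM_DIST.tgz" "$HDFS_DEST/"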
I think the issue is solved now. Thanks a lot!

-Luyi.

On Sat, Sep 20, 2014 at 12:22 PM, James DeFelice <[email protected]> wrote:

> correction: in my setup I changed the executor URI to point to
> http://MY_NIMBUS_SERVER_ADDR:MY_FILESERVER_PORT/conf/MY_CUSTOM_MESOS_STORM_SPIN.
>
> On Sat, Sep 20, 2014 at 3:21 PM, James DeFelice <[email protected]> wrote:
>
>> Right, so in my setup I changed the executor URI to point to
>> http://localhost:MY_FILESERVER_PORT/conf/MY_CUSTOM_MESOS_STORM_SPIN.
>> It was possible to do that because MY_FILESERVER_PORT is predictable
>> when you set it via the "nimbus.fileserver.port" property.
>>
>> So in my environment I customize the logback configuration at framework
>> deploy time, repack the storm distro with the updated config, and store
>> the distro in conf/; the YAML is configured as mentioned above. This
>> lets the storm supervisors pull down the storm spin with the customized
>> logback config.
>>
>> If you don't need something so dynamic, you could just tweak the
>> mesos/storm tarball once (with your updated configs), push it to a
>> local web server somewhere, and then update your executor URI to point
>> to that.
>>
>> On Fri, Sep 19, 2014 at 6:23 PM, Luyi Wang <[email protected]> wrote:
>>
>>> This is a little bit different from what I asked, but it might be on
>>> the same page.
>>>
>>> I am asking whether it should use the CONF_EXECUTOR_URI to set up the
>>> executor environment:
>>>
>>> https://github.com/jdef/storm/blob/5e18be19574ca3e09b0068033558f6e429099e65/src/storm/mesos/MesosNimbus.java#L441
>>>
>>> But still, thanks for giving me insight into the implementation.
>>>
>>> Thanks.
>>>
>>> -Luyi.
>>>
>>> On Fri, Sep 19, 2014 at 2:12 PM, James DeFelice
>>> <[email protected]> wrote:
>>>
>>>> https://github.com/mesos/storm/pull/11
>>>>
>>>> Looks like some cleanup has been requested... but it should work
>>>> as-is.
>>>>
>>>> On Fri, Sep 19, 2014 at 1:57 AM, Luyi Wang <[email protected]> wrote:
>>>>
>>>>> Thanks, James. I will pull the latest change and see what you
>>>>> committed. Thanks.
>>>>>
>>>>> On Sep 18, 2014 8:21 PM, "James DeFelice" <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> I submitted a pull request that facilitates what you're asking
>>>>>> for. It lets you specify a port number for the built-in file
>>>>>> server on nimbus. Once you have a predictable URI for that
>>>>>> built-in server, you can rebuild the storm tarball with whatever
>>>>>> config you want your executors to have and throw it in conf/ so
>>>>>> nimbus can serve it up. I've done exactly this and it's been
>>>>>> working great for us.
>>>>>>
>>>>>> On Sep 18, 2014 6:06 PM, "Luyi Wang" <[email protected]> wrote:
>>>>>>
>>>>>>> Well, after investigating the problem, it turns out to be a
>>>>>>> configuration problem.
>>>>>>>
>>>>>>> My old storm task never ran correctly because it kept trying to
>>>>>>> connect to the wrong ZooKeeper server.
>>>>>>>
>>>>>>> I started Mesos as follows. I use ZooKeeper to store
>>>>>>> configuration, but this ZooKeeper instance is embedded and runs
>>>>>>> standalone on the master node (192.168.1.11).
>>>>>>>
>>>>>>> nohup sudo /home/ubuntu/mesos/build/bin/mesos-master.sh \
>>>>>>>   --work_dir=/var/lib/mesos --zk=zk://0.0.0.0:2181/mesos \
>>>>>>>   --quorum=1 --log_dir=/var/log/mesos </dev/null >/dev/null 2>&1 &
>>>>>>>
>>>>>>> And I started the slave using the following command.
>>>>>>>
>>>>>>> nohup sudo /home/ubuntu/mesos/build/bin/mesos-slave.sh \
>>>>>>>   --master=zk://192.168.123.19:2181/mesos \
>>>>>>>   --log_dir=/var/log/mesos </dev/null >/dev/null 2>&1 &
>>>>>>>
>>>>>>> With this setup, everything looks fine.
>>>>>>>
>>>>>>> To set up the storm framework, I changed the storm.yaml in the
>>>>>>> conf folder:
>>>>>>>
>>>>>>> mesos.master.url: "zk://192.168.123.19:2181/mesos"
>>>>>>> storm.zookeeper.servers:
>>>>>>>   - "localhost"
>>>>>>> nimbus.host: "localhost"
>>>>>>>
>>>>>>> and ran "storm-mesos nimbus" and "storm ui".
>>>>>>>
>>>>>>> The problem arises with this configuration. For every task,
>>>>>>> storm-mesos creates an executor environment by downloading the
>>>>>>> full tarball (over HTTP from Mesosphere, or from HDFS), and that
>>>>>>> tarball includes a configuration file that may or may not match
>>>>>>> the one used on the master node. In my case, every executor used
>>>>>>> the configuration above, so a newly created executor would try to
>>>>>>> reach the ZooKeeper server at "localhost". A ZooKeeper server
>>>>>>> exists on the slave, but it is never used for Mesos, so the tasks
>>>>>>> were marked as LOST. To make it work, in my case the setup should
>>>>>>> look like this (I assume I also need to point nimbus.host at the
>>>>>>> master node):
>>>>>>>
>>>>>>> mesos.master.url: "zk://192.168.123.19:2181/mesos"
>>>>>>> storm.zookeeper.servers:
>>>>>>>   - "192.168.123.19"
>>>>>>> nimbus.host: "192.168.123.19"
>>>>>>>
>>>>>>> After making this change, everything works fine now. I hope this
>>>>>>> helps people who hit the same issue.
>>>>>>>
>>>>>>> Meanwhile, in my opinion, downloading the whole tarball with a
>>>>>>> fixed configuration baked in is something that should be avoided
>>>>>>> or improved.
>>>>>>>
>>>>>>> Probably worth discussing.
>>>>>>>
>>>>>>> Thanks.
>>>>>>>
>>>>>>> -Luyi.
>>>>>>>
>>>>>>> On Thu, Sep 18, 2014 at 11:36 AM, Luyi Wang <[email protected]>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> I attached nimbus.log and supervisor.log for your reference.
>>>>>>>>
>>>>>>>> On Wed, Sep 17, 2014 at 5:30 PM, Benjamin Mahler
>>>>>>>> <[email protected]> wrote:
>>>>>>>>
>>>>>>>>> logs
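To make James's nimbus-served variant concrete: below is a minimal sketch of the setup he describes, combining the "nimbus.fileserver.port" property from his pull request with the executor URI pattern quoted above. The storm.yaml key for the executor URI is assumed here to be mesos.executor.uri (the thread only names the CONF_EXECUTOR_URI constant), and the port number and tarball name are placeholders:

# Assumed storm.yaml keys and values; mesos.executor.uri is inferred
# from the CONF_EXECUTOR_URI constant referenced in the thread.
cat >> conf/storm.yaml <<'EOF'
nimbus.fileserver.port: 12321
mesos.executor.uri: "http://192.168.123.19:12321/conf/storm-mesos-custom.tgz"
EOF

# Put the repacked tarball where nimbus's built-in file server serves
# from (conf/, per the thread), then start nimbus so the supervisors
# can pull the customized spin down.
cp storm-mesos-custom.tgz conf/
storm-mesos nimbus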

