In your /etc/hosts, what do you have for localhost? It should have an entry like:

127.0.0.1    localhost.localdomain localhost
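For reference, a fuller loopback stanza typically looks like the following (the exact aliases vary by distribution, so treat this as a sketch rather than a required layout):

```shell
# /etc/hosts — typical loopback entries; alias names differ between distros
127.0.0.1   localhost localhost.localdomain
::1         localhost ip6-localhost ip6-loopback
```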
conf/slaves should have one entry in your case:

cat slaves
# A Spark Worker will be started on each of the machines listed below.
localhost

Dr Mich Talebzadeh

LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com


On 28 March 2016 at 15:32, David O'Gwynn <[email protected]> wrote:

> Greetings to all,
>
> I've searched around the mailing list, but it would seem that (nearly?)
> everyone has the opposite problem from mine. I made a stab at looking in
> the source for an answer, but I figured I might as well see whether anyone
> else has run into the same problem.
>
> I'm trying to limit my Master/Worker UI to run only on localhost. As it
> stands, I have the following two environment variables set in my
> spark-env.sh:
>
> SPARK_LOCAL_IP=127.0.0.1
> SPARK_MASTER_IP=127.0.0.1
>
> and my slaves file contains one line: 127.0.0.1
>
> The problem is that when I run "start-all.sh", I can nmap my box's public
> interface and get the following:
>
> PORT     STATE SERVICE
> 22/tcp   open  ssh
> 8080/tcp open  http-proxy
> 8081/tcp open  blackice-icecap
>
> Furthermore, I can go to my box's public IP at port 8080 in my browser and
> get the master node's UI. The UI even reports the URL/REST URLs to be
> 127.0.0.1:
>
> Spark Master at spark://127.0.0.1:7077
> URL: spark://127.0.0.1:7077
> REST URL: spark://127.0.0.1:6066 (cluster mode)
>
> I'd rather not have Spark available in any way to the outside world
> without an explicit SSH tunnel.
>
> There are variables for setting the Web UI port, but I'm not concerned
> with the port, only the network interface to which the Web UI binds.
>
> Any help would be greatly appreciated.
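A footnote for anyone finding this thread in the archives: the nmap output above shows the UI ports still reachable on the public interface even with SPARK_LOCAL_IP/SPARK_MASTER_IP set to 127.0.0.1, so one workaround is to filter those ports at the host and reach the UI only through the SSH tunnel the original poster wanted. The following is a sketch, not a definitive fix; it assumes iptables, the default UI ports 8080/8081 from the thread, and placeholder names `user` and `spark-host`:

```shell
# Drop external TCP traffic to the Master (8080) and Worker (8081) UIs,
# leaving only loopback access. Run as root; adjust ports if overridden.
iptables -A INPUT -p tcp --dport 8080 ! -i lo -j DROP
iptables -A INPUT -p tcp --dport 8081 ! -i lo -j DROP

# Then browse the Master UI through an explicit SSH tunnel from your
# workstation ('user' and 'spark-host' are placeholders):
ssh -N -L 8080:127.0.0.1:8080 user@spark-host
# ...and open http://127.0.0.1:8080 locally while the tunnel is up.
```

This does not change what interface Jetty binds to; it only prevents outside hosts from reaching it, which matches the "no access without an explicit SSH tunnel" requirement above.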
