Right, no firewalls. This is my 'toy' environment, running as virtual machines 
on my desktop computer. I'm playing with it here because I have the same 
problem on my real cluster. I will try to explicitly configure the IP address 
this SNN starts on.

-----Original Message-----
From: praveenesh kumar [mailto:praveen...@gmail.com]
Sent: lunes, 04 de junio de 2012 14:02
To: common-user@hadoop.apache.org
Subject: Re: SecondaryNameNode not connecting to NameNode : 
PriviledgedActionException

Try giving a value to dfs.secondary.http.address in hdfs-site.xml on your SNN.
In your logs, it's starting the SNN web server at 0.0.0.0:50090. It's better 
to tell it explicitly which IP it should start on.
Also, I am assuming you don't have any firewalls enabled between these 2 
machines, right?

Regards,
Praveenesh

On Mon, Jun 4, 2012 at 5:05 PM, <ramon....@accenture.com> wrote:

> I configured dfs.http.address in the SNN's hdfs-site.xml but I still get:
>
> /************************************************************
> STARTUP_MSG: Starting SecondaryNameNode
> STARTUP_MSG:   host = hadoop01/192.168.0.11
> STARTUP_MSG:   args = [-checkpoint, force]
> STARTUP_MSG:   version = 1.0.3
> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
> ************************************************************/
> 12/06/04 13:34:24 INFO namenode.SecondaryNameNode: Starting web server as: hadoop
> 12/06/04 13:34:24 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> 12/06/04 13:34:24 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 12/06/04 13:34:24 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50090
> 12/06/04 13:34:24 INFO http.HttpServer: listener.getLocalPort() returned 50090 webServer.getConnectors()[0].getLocalPort() returned 50090
> 12/06/04 13:34:24 INFO http.HttpServer: Jetty bound to port 50090
> 12/06/04 13:34:24 INFO mortbay.log: jetty-6.1.26
> 12/06/04 13:34:25 INFO mortbay.log: Started SelectChannelConnector@0.0.0.0:50090
> 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Web server init done
> 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary Web-server up at: 0.0.0.0:50090
> 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: Secondary image servlet up at: 0.0.0.0:50090
> 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Checkpoint Period   :3600 secs (60 min)
> 12/06/04 13:34:25 WARN namenode.SecondaryNameNode: Log Size Trigger    :67108864 bytes (65536 KB)
> 12/06/04 13:34:25 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.net.ConnectException: Connection refused
> 12/06/04 13:34:25 ERROR namenode.SecondaryNameNode: checkpoint: Connection refused
> 12/06/04 13:34:25 INFO namenode.SecondaryNameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down SecondaryNameNode at hadoop01/192.168.0.11
> ************************************************************/
>
> -----Original Message-----
> From: praveenesh kumar [mailto:praveen...@gmail.com]
> Sent: lunes, 04 de junio de 2012 13:15
> To: common-user@hadoop.apache.org
> Subject: Re: SecondaryNameNode not connecting to NameNode :
> PriviledgedActionException
>
> I am not sure what the exact issue could be, but when pointing a
> secondary NN at a NN, you need to tell your SNN where the actual NN resides.
> Try adding dfs.http.address in hdfs-site.xml on your secondary namenode
> machine, with <NN:port> as the value. The port should be the one your
> NN URL opens on - that is, your NN web UI's HTTP port.
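> For example, a sketch for hdfs-site.xml on the SNN (this assumes hadoop00
> is your NN host and that it uses 50070, the default NN web UI port -
> substitute your own host and port):
>
>   <property>
>     <name>dfs.http.address</name>
>     <value>hadoop00:50070</value>
>   </property>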
>
> Regards,
> Praveenesh
> On Mon, Jun 4, 2012 at 4:37 PM, <ramon....@accenture.com> wrote:
>
> > Hello. I'm facing an issue when trying to configure my
> > SecondaryNameNode on a different machine than my NameNode. When both
> > are on the same machine everything works fine, but after moving the
> > secondary to a new machine I get:
> >
> > 2012-05-28 09:57:36,832 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.net.ConnectException: Connection refused
> > 2012-05-28 09:57:36,832 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint:
> > 2012-05-28 09:57:36,834 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.net.ConnectException: Connection refused
> >        at java.net.PlainSocketImpl.socketConnect(Native Method)
> >        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
> >        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:191)
> >        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
> >        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
> >        at java.net.Socket.connect(Socket.java:546)
> >        at java.net.Socket.connect(Socket.java:495)
> >        at sun.net.NetworkClient.doConnect(NetworkClient.java:178)
> >        at sun.net.www.http.HttpClient.openServer(HttpClient.java:409)
> >        at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
> >        at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
> >        at sun.net.www.http.HttpClient.New(HttpClient.java:321)
> >        at sun.net.www.http.HttpClient.New(HttpClient.java:338)
> >        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935)
> >        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:876)
> >        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:801)
> >
> > Is there any configuration I'm missing? At this point my
> > mapred-site.xml is very simple just:
> >
> > <?xml version="1.0"?>
> > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> > <configuration>
> >  <property>
> >    <name>mapred.job.tracker</name>
> >    <value>hadoop00:9001</value>
> >  </property>
> >  <property>
> >    <name>mapred.system.dir</name>
> >    <value>/home/hadoop/mapred/system</value>
> >  </property>
> >  <property>
> >    <name>mapred.local.dir</name>
> >    <value>/home/hadoop/mapred/local</value>
> >  </property>
> >  <property>
> >    <name>mapred.jobtracker.taskScheduler</name>
> >    <value>org.apache.hadoop.mapred.FairScheduler</value>
> >  </property>
> >  <property>
> >    <name>mapred.fairscheduler.allocation.file</name>
> >    <value>/home/hadoop/hadoop/conf/fairscheduler.xml</value>
> >  </property>
> > </configuration>
> >
> >
> >
> > ________________________________
> > Subject to local law, communications with Accenture and its
> > affiliates including telephone calls and emails (including content),
> > may be monitored by our systems for the purposes of security and the
> > assessment of internal compliance with Accenture policy.
> >
> > ______________________________________________________________________________________
> >
> > www.accenture.com
> >
>
