On Wed, Aug 8, 2012 at 3:15 PM, Charitha Kankanamge <[email protected]> wrote:

> These configuration settings and deployment tips/tricks all have to be
> documented somewhere. Especially in a product like BAM, with so
> many configuration options, we should provide users with a deployment
> script to set up a BAM cluster.

+1
https://wso2.org/jira/browse/BAM-695

>
>
> /Charitha
>
> On Wed, Aug 8, 2012 at 10:57 AM, Chamara Ariyarathne <[email protected]> wrote:
>
>> This got fixed once the directory that hadoop.tmp.dir in
>> core-site.xml points to was cleaned up and the namenode was reformatted.
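For reference, the cleanup described above looks roughly like this. Both paths are assumptions for illustration; substitute whatever hadoop.tmp.dir resolves to in your core-site.xml and wherever Hadoop is actually installed:

```shell
# Sketch only -- adjust to your actual hadoop.tmp.dir value and install path.
HADOOP_TMP="/tmp/hadoop-$USER"             # assumed hadoop.tmp.dir
HADOOP_HOME="${HADOOP_HOME:-/opt/hadoop}"  # assumed install location

# 1. With all daemons stopped, wipe the stale state under hadoop.tmp.dir.
rm -rf "$HADOOP_TMP"/*

# 2. Reformat the NameNode so it starts with a fresh namespace.
if [ -x "$HADOOP_HOME/bin/hadoop" ]; then
    "$HADOOP_HOME/bin/hadoop" namenode -format
fi
```

Note this wipes HDFS state on that node, so it is only an option for a cluster you can afford to reformat.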
>>
>>
>> On Wed, Aug 8, 2012 at 10:25 AM, Deependra Ariyadewa <[email protected]> wrote:
>>
>>> Hi Chamara,
>>>
>>> Please try to start the name node alone, without all the other services,
>>> and attach the name node log. That will help isolate the issue to the
>>> Name Node.
>>>
>>> You can start the name node by executing "bin/hadoop namenode"
>>>
>>>
>>> Thanks,
>>> Deependra.
>>>
>>> On Wed, Aug 8, 2012 at 9:56 AM, Chamara Ariyarathne
>>> <[email protected]> wrote:
>>>
>>>> I get the following error [1] when starting the Hadoop namenode in the
>>>> distributed Hadoop cluster, and it prevents all the Hadoop nodes from
>>>> starting.
>>>>
>>>> /etc/hosts is:
>>>> 10.100.3.221    bamhadoop01
>>>> 10.100.3.228    bamhadoop02
>>>> 10.100.3.224    bamhadoop03
>>>> 10.100.3.244    appserver.cloud-test.wso2.com
>>>>
>>>> # The following lines are desirable for IPv6 capable hosts
>>>> ::1     ip6-localhost ip6-loopback
>>>> fe00::0 ip6-localnet
>>>> ff00::0 ip6-mcastprefix
>>>> ff02::1 ip6-allnodes
>>>> ff02::2 ip6-allrouters
>>>>
>>>> $ hostname
>>>> bamhadoop01
>>>>
>>>> Property in core-site.xml:
>>>> <property>
>>>>     <name>fs.default.name</name>
>>>>     <value>hdfs://bamhadoop01:9000</value>
>>>> </property>
>>>> I can ssh to all the other Hadoop datanode machines without a password.
>>>>
>>>> [1]
>>>> 2012-08-08 09:45:05,289 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>> hadoop-metrics2.properties
>>>> 2012-08-08 09:45:05,297 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>> MetricsSystem,sub=Stats registered.
>>>> 2012-08-08 09:45:05,297 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>> period at 10 second(s).
>>>> 2012-08-08 09:45:05,297 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JobTracker metrics
>>>> system started
>>>> 2012-08-08 09:45:05,478 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>> registered.
>>>> 2012-08-08 09:45:05,481 WARN
>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already
>>>> exists!
>>>> 2012-08-08 09:45:05,482 INFO
>>>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>>>> Updating the current master key for generating delegation tokens
>>>> 2012-08-08 09:45:05,483 INFO
>>>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>>>> Starting expired delegation token remover thread,
>>>> tokenRemoverScanInterval=60 min(s)
>>>> 2012-08-08 09:45:05,483 INFO
>>>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>>>> Updating the current master key for generating delegation tokens
>>>> 2012-08-08 09:45:05,483 INFO org.apache.hadoop.mapred.JobTracker:
>>>> Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
>>>> limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
>>>>  2012-08-08 09:45:05,484 INFO org.apache.hadoop.util.HostsFileReader:
>>>> Refreshing hosts (include/exclude) list
>>>> 2012-08-08 09:45:05,517 INFO org.apache.hadoop.mapred.JobTracker:
>>>> Starting jobtracker with owner as bamtesttmp
>>>> 2012-08-08 09:45:05,531 INFO org.apache.hadoop.ipc.Server: Starting
>>>> SocketReader
>>>> 2012-08-08 09:45:05,533 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>> RpcDetailedActivityForPort9001 registered.
>>>> 2012-08-08 09:45:05,534 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>> RpcActivityForPort9001 registered.
>>>> 2012-08-08 09:45:10,598 INFO org.mortbay.log: Logging to
>>>> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>>>> org.mortbay.log.Slf4jLog
>>>> 2012-08-08 09:45:10,634 INFO org.apache.hadoop.http.HttpServer: Added
>>>> global filtersafety
>>>> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
>>>> 2012-08-08 09:45:10,655 INFO org.apache.hadoop.http.HttpServer: Port
>>>> returned by webServer.getConnectors()[0].getLocalPort() before open() is
>>>> -1. Opening the listener on 50030
>>>> 2012-08-08 09:45:10,656 INFO org.apache.hadoop.http.HttpServer:
>>>> listener.getLocalPort() returned 50030
>>>> webServer.getConnectors()[0].getLocalPort() returned 50030
>>>> 2012-08-08 09:45:10,656 INFO org.apache.hadoop.http.HttpServer: Jetty
>>>> bound to port 50030
>>>> 2012-08-08 09:45:10,656 INFO org.mortbay.log: jetty-6.1.26
>>>> 2012-08-08 09:45:10,676 WARN org.mortbay.log: Can't reuse
>>>> /tmp/Jetty_0_0_0_0_50030_job____yn7qmk, using
>>>> /tmp/Jetty_0_0_0_0_50030_job____yn7qmk_2518359383055609936
>>>> 2012-08-08 09:45:10,811 INFO org.mortbay.log: Started
>>>> [email protected]:50030
>>>> 2012-08-08 09:45:10,815 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>> registered.
>>>> 2012-08-08 09:45:10,815 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>> JobTrackerMetrics registered.
>>>> 2012-08-08 09:45:10,815 INFO org.apache.hadoop.mapred.JobTracker:
>>>> JobTracker up at: 9001
>>>> 2012-08-08 09:45:10,815 INFO org.apache.hadoop.mapred.JobTracker:
>>>> JobTracker webserver: 50030
>>>> 2012-08-08 09:45:11,867 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: bamhadoop01/10.100.3.221:9000. Already tried 0
>>>> time(s).
>>>> 2012-08-08 09:45:12,868 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: bamhadoop01/10.100.3.221:9000. Already tried 1
>>>> time(s).
>>>> 2012-08-08 09:45:13,868 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: bamhadoop01/10.100.3.221:9000. Already tried 2
>>>> time(s).
>>>> 2012-08-08 09:45:14,869 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: bamhadoop01/10.100.3.221:9000. Already tried 3
>>>> time(s).
>>>> 2012-08-08 09:45:15,869 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: bamhadoop01/10.100.3.221:9000. Already tried 4
>>>> time(s).
>>>> 2012-08-08 09:45:16,870 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: bamhadoop01/10.100.3.221:9000. Already tried 5
>>>> time(s).
>>>> 2012-08-08 09:45:17,870 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: bamhadoop01/10.100.3.221:9000. Already tried 6
>>>> time(s).
>>>> 2012-08-08 09:45:18,871 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: bamhadoop01/10.100.3.221:9000. Already tried 7
>>>> time(s).
>>>> 2012-08-08 09:45:19,871 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: bamhadoop01/10.100.3.221:9000. Already tried 8
>>>> time(s).
>>>> 2012-08-08 09:45:20,872 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: bamhadoop01/10.100.3.221:9000. Already tried 9
>>>> time(s).
>>>> 2012-08-08 09:45:20,874 INFO org.apache.hadoop.mapred.JobTracker:
>>>> problem cleaning system directory: null
>>>> java.net.ConnectException: Call to bamhadoop01/10.100.3.221:9000 failed
>>>> on connection exception: java.net.ConnectException: Connection
>>>> refused
>>>> at org.apache.hadoop.ipc.Client.wrapException(Client.java:1057)
>>>>  at org.apache.hadoop.ipc.Client.call(Client.java:1033)
>>>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:224)
>>>>  at $Proxy5.getProtocolVersion(Unknown Source)
>>>> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:364)
>>>>  at
>>>> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>>>> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:208)
>>>>  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:175)
>>>> at
>>>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>>>>  at
>>>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1310)
>>>> at org.apache.hadoop.fs.FileSystem.access$100(FileSystem.java:65)
>>>>  at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1328)
>>>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:226)
>>>>  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:109)
>>>> at org.apache.hadoop.mapred.JobTracker$3.run(JobTracker.java:2349)
>>>>  at org.apache.hadoop.mapred.JobTracker$3.run(JobTracker.java:2347)
>>>> at java.security.AccessController.doPrivileged(Native Method)
>>>>  at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>>>>  at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2347)
>>>> at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2171)
>>>>  at
>>>> org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
>>>> at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
>>>>  at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4956)
>>>> Caused by: java.net.ConnectException: Connection refused
>>>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>>  at
>>>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>>>> at
>>>> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>>>>  at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:406)
>>>> at
>>>> org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:414)
>>>>  at
>>>> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:527)
>>>> at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:187)
>>>>  at org.apache.hadoop.ipc.Client.getConnection(Client.java:1164)
>>>> at org.apache.hadoop.ipc.Client.call(Client.java:1010)
>>>>  ... 22 more
>>>> 2012-08-08 09:45:31,877 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: bamhadoop01/10.100.3.221:9000. Already tried 0
>>>> time(s).
>>>> 2012-08-08 09:45:32,878 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: bamhadoop01/10.100.3.221:9000. Already tried 1
>>>> time(s).
>>>> 2012-08-08 09:45:33,878 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: bamhadoop01/10.100.3.221:9000. Already tried 2
>>>> time(s).
>>>> 2012-08-08 09:45:34,879 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: bamhadoop01/10.100.3.221:9000. Already tried 3
>>>> time(s).
>>>> 2012-08-08 09:45:35,879 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: bamhadoop01/10.100.3.221:9000. Already tried 4
>>>> time(s).
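The retry loop above just means nothing is accepting TCP connections on bamhadoop01:9000, i.e. the NameNode process never came up (consistent with the fix of reformatting it). A quick way to confirm that from any node, sketched here with a hypothetical port_open() helper:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable
        return False

# e.g. port_open("bamhadoop01", 9000) stays False until the NameNode is up
```

The same check works for the JobTracker port (9001) and the web UIs (50030/50070).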
>>>>
>>>>
>>>>
>>>> --
>>>> *Chamara Ariyarathne*
>>>> Senior Software Engineer - QA;
>>>> WSO2 Inc; http://www.wso2.com/.
>>>> Mobile; *+94772786766*
>>>>
>>>>
>>>> _______________________________________________
>>>> Dev mailing list
>>>> [email protected]
>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>
>>>>
>>>
>>>
>>> --
>>> Deependra Ariyadewa
>>> WSO2, Inc. http://wso2.com/ http://wso2.org
>>>
>>> email [email protected]; cell +94 71 403 5996 ;
>>> Blog http://risenfall.wordpress.com/
>>> PGP info: KeyID: 'DC627E6F'
>>>
>>>
>>
>>
>>
>>
>


-- 
*Chamara Ariyarathne*
Senior Software Engineer - QA;
WSO2 Inc; http://www.wso2.com/.
Mobile; *+94772786766*
