Jon created SPARK-15469:
---------------------------

             Summary: Datanode is not starting correctly: Retrying connect to 
server: masternode/10.18.0.50:8020. Already tried 5 time(s); maxRetries=45
                 Key: SPARK-15469
                 URL: https://issues.apache.org/jira/browse/SPARK-15469
             Project: Spark
          Issue Type: Bug
            Reporter: Jon


When I run the `jps` command the DataNode process appears, but it does not seem to be starting correctly, because when I try to start Spark I get the following error:

16/05/21 02:28:07 INFO spark.SparkContext: Successfully stopped SparkContext
16/05/21 02:28:07 INFO remote.RemoteActorRefProvider$RemotingTerminator: 
Remoting shut down.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/user/hadoopadmin/.sparkStaging/application_1463794034497_0001/spark-assembly-1.6.1-hadoop2.6.0.jar
 could only be replicated to 0 nodes instead of minReplication (=1).  There are 
0 datanode(s) running and no node(s) are excluded in this operation.
        at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)

The DataNode logs show the following:

STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 
15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 
2015-06-29T06:04Z
STARTUP_MSG:   java = 1.8.0_91
************************************************************/
2016-05-21 02:26:57,938 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
registered UNIX signal handlers for [TERM, HUP, INT]
2016-05-21 02:26:59,082 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: 
loaded properties from hadoop-metrics2.properties
2016-05-21 02:26:59,262 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
Scheduled snapshot period at 10 second(s).
2016-05-21 02:26:59,262 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
DataNode metrics system started
2016-05-21 02:26:59,269 INFO 
org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner 
with targetBytesPerSec 1048576
2016-05-21 02:26:59,273 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Configured hostname is slavenode
2016-05-21 02:26:59,295 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Starting DataNode with maxLockedMemory = 0
2016-05-21 02:26:59,331 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Opened streaming server at /0.0.0.0:50010
2016-05-21 02:26:59,345 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Balancing bandwith is 1048576 bytes/s
2016-05-21 02:26:59,345 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Number threads for balancing is 5
2016-05-21 02:26:59,498 INFO org.mortbay.log: Logging to 
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-05-21 02:26:59,509 INFO 
org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable 
to initialize FileSignerSecretProvider, falling back to use random secrets.
2016-05-21 02:26:59,524 INFO org.apache.hadoop.http.HttpRequestLog: Http 
request log for http.requests.datanode is not defined
2016-05-21 02:26:59,531 INFO org.apache.hadoop.http.HttpServer2: Added global 
filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-05-21 02:26:59,534 INFO org.apache.hadoop.http.HttpServer2: Added filter 
static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context datanode
2016-05-21 02:26:59,534 INFO org.apache.hadoop.http.HttpServer2: Added filter 
static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context logs
2016-05-21 02:26:59,534 INFO org.apache.hadoop.http.HttpServer2: Added filter 
static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context static
2016-05-21 02:26:59,554 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to 
port 38093
2016-05-21 02:26:59,554 INFO org.mortbay.log: jetty-6.1.26
2016-05-21 02:26:59,900 INFO org.mortbay.log: Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38093
2016-05-21 02:27:00,063 INFO 
org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Listening HTTP 
traffic on /0.0.0.0:50075
2016-05-21 02:27:00,376 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
dnUserName = hadoopadmin
2016-05-21 02:27:00,376 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
supergroup = supergroup
2016-05-21 02:27:00,460 INFO org.apache.hadoop.ipc.CallQueueManager: Using 
callQueue class java.util.concurrent.LinkedBlockingQueue
2016-05-21 02:27:00,478 INFO org.apache.hadoop.ipc.Server: Starting Socket 
Reader #1 for port 50020
2016-05-21 02:27:00,560 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Opened IPC server at /0.0.0.0:50020
2016-05-21 02:27:00,598 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Refresh request received for nameservices: null
2016-05-21 02:27:00,630 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Starting BPOfferServices for nameservices: <default>
2016-05-21 02:27:00,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Block pool <registering> (Datanode Uuid unassigned) service to 
masternode/10.18.0.50:8020 starting to offer service
2016-05-21 02:27:00,662 INFO org.apache.hadoop.ipc.Server: IPC Server 
Responder: starting
2016-05-21 02:27:00,664 INFO org.apache.hadoop.ipc.Server: IPC Server listener 
on 50020: starting
2016-05-21 02:27:20,868 INFO org.apache.hadoop.ipc.Client: Retrying connect to 
server: masternode/10.18.0.50:8020. Already tried 0 time(s); maxRetries=45
2016-05-21 02:27:40,894 INFO org.apache.hadoop.ipc.Client: Retrying connect to 
server: masternode/10.18.0.50:8020. Already tried 1 time(s); maxRetries=45
2016-05-21 02:28:00,915 INFO org.apache.hadoop.ipc.Client: Retrying connect to 
server: masternode/10.18.0.50:8020. Already tried 2 time(s); maxRetries=45
2016-05-21 02:28:20,935 INFO org.apache.hadoop.ipc.Client: Retrying connect to 
server: masternode/10.18.0.50:8020. Already tried 3 time(s); maxRetries=45
2016-05-21 02:28:40,948 INFO org.apache.hadoop.ipc.Client: Retrying connect to 
server: masternode/10.18.0.50:8020. Already tried 4 time(s); maxRetries=45
2016-05-21 02:29:00,967 INFO org.apache.hadoop.ipc.Client: Retrying connect to 
server: masternode/10.18.0.50:8020. Already tried 5 time(s); maxRetries=45
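The retry loop above means the DataNode cannot open a connection to the NameNode RPC endpoint at masternode:8020 at all. A quick TCP probe from the slave node can rule out DNS and firewall problems first (a sketch, not part of the original report; `probe_tcp` is just a hypothetical helper name):

```shell
# Sketch of a connectivity check from the slave node; probe_tcp is a
# hypothetical helper, not a Hadoop tool. Host/port are taken from the log above.
probe_tcp() {
  # succeed iff a TCP connection to $1:$2 opens within 5 seconds
  timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

getent hosts masternode                     # does the hostname resolve, and to what IP?
probe_tcp masternode 8020 \
  && echo "NameNode RPC reachable" \
  || echo "cannot reach masternode:8020"
```

If the probe fails, the problem is at the network/DNS level (hosts file, firewall, NameNode not bound to that interface) rather than in the DataNode itself.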

yarn-site.xml:

<configuration>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>masternode:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>masternode:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>masternode:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>masternode:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>masternode:8088</value>
  </property>

</configuration>
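For reference, the DataNode takes the NameNode address from `fs.defaultFS` in core-site.xml, not from yarn-site.xml (which only configures the ResourceManager endpoints above). A core-site.xml consistent with the log output would look roughly like this (a sketch of the expected shape, not my actual file):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://masternode:8020</value>
  </property>
</configuration>
```

If `masternode` resolves differently on the master and the slaves (for example, an /etc/hosts entry mapping it to 127.0.0.1 on the master, so the NameNode binds only to loopback), datanodes on other machines will retry exactly as in the log above.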

Do you know why it's not working?





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
