[
https://issues.apache.org/jira/browse/FLUME-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Lior Zeno resolved FLUME-2132.
------------------------------
Resolution: Incomplete
Closing, since this issue does not provide enough information to reproduce the problem.
> Exception while syncing from Flume to HDFS
> ------------------------------------------
>
> Key: FLUME-2132
> URL: https://issues.apache.org/jira/browse/FLUME-2132
> Project: Flume
> Issue Type: Bug
> Components: Sinks+Sources
> Affects Versions: v1.3.0
> Environment: Flume 1.3.0, Hadoop 1.2.0, 8 GB RAM, Intel Core 2 Duo
> Reporter: Divya R
> Labels: flume, hadoop
> Fix For: v1.7.0
>
>
> I'm running Hadoop 1.2.0 and Flume 1.3.0. Everything works fine when each is
> run independently. When I start my Tomcat, I get the exception below after
> some time.
> {quote}2013-07-17 12:40:35,640 (ResponseProcessor for block blk_5249456272858461891_436734) [WARN - org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:3015)] DFSOutputStream ResponseProcessor exception for block blk_5249456272858461891_436734
> java.net.SocketTimeoutException: 63000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/127.0.0.1:24433 remote=/127.0.0.1:50010]
> at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at java.io.DataInputStream.readLong(DataInputStream.java:416)
> at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:124)
> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2967){quote}
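> The SocketTimeoutException shows the DFSClient giving up after 63 seconds while waiting for a pipeline ack from the local datanode, which is consistent with the default 60-second client read timeout plus a small per-datanode extension. As a sketch, assuming the standard Hadoop 1.x timeout keys, both socket timeouts can be raised on the same Configuration object used in the snippet further down; the 120000 ms values are illustrative only:
> {quote}// Assumption: standard Hadoop 1.x timeout keys; the values are examples, not recommendations.
> configuration.set("dfs.socket.timeout", "120000"); // DFSClient read timeout, ms
> configuration.set("dfs.datanode.socket.write.timeout", "120000"); // pipeline write timeout, ms{quote}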
> {quote}2013-07-17 12:40:35,800 (hdfs-hdfs-write-roll-timer-0) [WARN - org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:277)] failed to close() HDFSWriter for file (hdfs://localhost:9000/flume/Broadsoft_App2/20130717/jboss/Broadsoft_App2.1374044838498.tmp). Exception follows.
> java.io.IOException: All datanodes 127.0.0.1:50010 are bad. Aborting...
> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3096)
> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2100(DFSClient.java:2589)
> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2793){quote}
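> Because there is only one datanode (127.0.0.1:50010), the write pipeline has nothing to fail over to, so a single ack timeout marks the whole pipeline as bad and aborts the .tmp file. A minimal write probe run outside Flume and Tomcat can show whether the datanode drops connections on its own; this is a sketch assuming the same fs.default.name as in the report, and the probe path is hypothetical:
> {quote}import java.io.OutputStream;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class HdfsWriteProbe {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         conf.set("fs.default.name", "hdfs://localhost:9000"); // same namenode as the report
>         FileSystem fs = FileSystem.get(conf);
>         Path p = new Path("/tmp/flume-write-probe.txt"); // hypothetical probe path
>         OutputStream out = fs.create(p, true); // overwrite if present
>         out.write("probe".getBytes("UTF-8"));
>         out.close(); // a pipeline ack timeout would surface here, as in the BucketWriter close()
>         fs.close();
>     }
> }{quote}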
> Java snippet for Configuration:
> {quote}configuration.set("fs.default.name", "hdfs://localhost:9000");
> configuration.set("mapred.job.tracker", "hdfs://localhost:9000");{quote}
> I'm using a single datanode. The files written to HDFS by Flume are read back
> by my Java program, which just prints them to the screen, nothing more (a
> minimal version is sketched below).
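> For reference, a minimal version of such a reader, assuming the Hadoop 1.x FileSystem API and a file path passed on the command line, could look like this:
> {quote}import java.io.InputStream;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.io.IOUtils;
>
> public class HdfsCat {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         conf.set("fs.default.name", "hdfs://localhost:9000"); // same namenode as the report
>         FileSystem fs = FileSystem.get(conf);
>         InputStream in = fs.open(new Path(args[0])); // path to a file Flume wrote
>         try {
>             IOUtils.copyBytes(in, System.out, 4096, false); // print the file to stdout
>         } finally {
>             IOUtils.closeStream(in);
>             fs.close();
>         }
>     }
> }{quote}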
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)