Hi,
I am trying to feed a single-node HDFS cluster, but I am getting this error:
2014-01-21 08:29:44,426 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java:56)] Serializer = TEXT, UseRawLocalFileSystem = false
2014-01-21 08:29:44,474 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:219)] Creating hdfs://xyz.16.137.81:54545/flume/FlumeData.1390289384427.tmp
2014-01-21 08:29:44,878 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:422)] process failed
java.lang.UnsupportedOperationException: This is supposed to be overridden by subclasses.
    at com.google.protobuf.GeneratedMessage.getUnknownFields(GeneratedMessage.java:180)
    at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$FsPermissionProto.getSerializedSize(HdfsProtos.java:5407)
    at com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
    at com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$CreateRequestProto.getSerializedSize(ClientNamenodeProtocolProtos.java:2371)
    at com.google.protobuf.AbstractMessageLite.toByteString(AbstractMessageLite.java:49)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.constructRpcRequest(ProtobufRpcEngine.java:149)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:193)
    at $Proxy11.create(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at $Proxy11.create(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:192)
    at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1298)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1317)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1242)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1199)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:273)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:262)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:79)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:851)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:832)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:731)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:720)
    at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:80)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:227)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:220)
    at org.apache.flume.sink.hdfs.BucketWriter$8$1.run(BucketWriter.java:557)
    at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:160)
    at org.apache.flume.sink.hdfs.BucketWriter.access$1000(BucketWriter.java:56)
    at org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:554)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
My conf file:

# Naming the components in this Agent ###############################
httpagent.sources = http-source
httpagent.sinks = local-file-sink
httpagent.channels = ch3

# Define / Configure Source ###############################
httpagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
httpagent.sources.http-source.channels = ch3
httpagent.sources.http-source.port = 44444

# Local File Sink ###############################
httpagent.sinks.local-file-sink.type = hdfs
httpagent.sinks.local-file-sink.channel = ch3
httpagent.sinks.local-file-sink.hdfs.path = hdfs://xyz.16.137.81:54545/flume
httpagent.sinks.local-file-sink.hdfs.fileType = DataStream
#httpagent.sinks.local-file-sink.hdfs.filePrefix = events-
httpagent.sinks.local-file-sink.hdfs.round = true
httpagent.sinks.local-file-sink.hdfs.roundValue = 1
httpagent.sinks.local-file-sink.hdfs.roundUnit = minute

# Channels ###############################
httpagent.channels.ch3.type = memory
httpagent.channels.ch3.capacity = 1000
httpagent.channels.ch3.transactionCapacity = 100
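In case it matters, I am posting test events to the HTTP source with a minimal client along these lines (a sketch only; it assumes the source's default JSONHandler, which accepts a JSON array of events with "headers" and "body" fields):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal test client (sketch): POSTs a single event to the HTTPSource
// configured above. The default JSONHandler expects a JSON array of
// events, each with "headers" and "body".
public class FlumeHttpTest {
    public static void main(String[] args) throws Exception {
        String json = "[{\"headers\": {\"host\": \"test\"}, \"body\": \"hello flume\"}]";
        URL url = new URL("http://localhost:44444");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(json.getBytes("UTF-8"));
        }
        System.out.println("HTTP response: " + conn.getResponseCode());
        conn.disconnect();
    }
}

From the log above, the events do seem to reach the sink; the failure happens only when the HDFS sink tries to create the file on HDFS.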
Apart from that, I have these jars for Hadoop/HDFS in the /lib folder:

commons-codec-1.4.jar
commons-configuration-1.6.jar
commons-httpclient-3.1.jar
hadoop-annotations-2.0.0-cdh4.2.0.jar
hadoop-auth-2.0.0-cdh4.2.0.jar
hadoop-client-2.0.0-mr1-cdh4.2.0.jar
hadoop-common-2.0.0-cdh4.2.0.jar
hadoop-core-2.0.0-mr1-cdh4.2.0.jar
hadoop-hdfs-2.0.0-cdh4.2.0.jar
protobuf-java-2.5.0.jar
I believe this error is coming from the protobuf-java-2.5.0.jar file. From what I understand, the CDH 4.2.0 Hadoop jars were generated against protobuf 2.4.x, whose generated classes do not override getUnknownFields(), and the GeneratedMessage base class in protobuf-java 2.5.0 throws exactly this UnsupportedOperationException in that case.
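To confirm which protobuf jar actually wins on the agent's classpath, I figure a one-off check like this might help (hypothetical class name; run it with the same classpath the Flume agent uses):

// One-off classpath check (sketch): prints which jar the protobuf
// GeneratedMessage class is actually loaded from at runtime.
public class WhichProtobuf {
    public static void main(String[] args) {
        Class<?> c = com.google.protobuf.GeneratedMessage.class;
        System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
    }
}

If it reports protobuf-java-2.5.0.jar, would replacing that jar in /lib with a protobuf-java 2.4.x jar be the right fix?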
Any suggestions would be of great help!

Thanks,
Himanshu