I am getting the same error even after building Flume with guava-11.0.2.

Thanks,
Himanshu

Date: Tue, 21 Jan 2014 17:18:06 +0800
Subject: Re: HDFS Sink Error
From: [email protected]
To: [email protected]

You can try this patch: https://issues.apache.org/jira/browse/FLUME-2172 and 
build Flume with guava-11.0.2, the same version that hadoop 2.0.x uses. Flume 
currently uses guava-10.0.1, so just change the version in this dependency:

<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>10.0.1</version>
</dependency>
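
The updated dependency would look something like this (just a sketch, assuming you change the guava version directly in flume's pom.xml and rebuild):

<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>11.0.2</version>  <!-- match the guava version used by hadoop 2.0.x -->
</dependency>

After rebuilding, check that guava-11.0.2 (and not 10.0.1) is the jar that actually ends up in Flume's lib directory.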


On Tue, Jan 21, 2014 at 3:35 PM, Himanshu Patidar 
<[email protected]> wrote:




Hi,
I am trying to feed data into a single-node HDFS cluster.
But I am getting this error:
2014-01-21 08:29:44,426 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java:56)] Serializer = TEXT, UseRawLocalFileSystem = false
2014-01-21 08:29:44,474 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:219)] Creating hdfs://xyz.16.137.81:54545/flume/FlumeData.1390289384427.tmp
2014-01-21 08:29:44,878 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:422)] process failed
java.lang.UnsupportedOperationException: This is supposed to be overridden by subclasses.
        at com.google.protobuf.GeneratedMessage.getUnknownFields(GeneratedMessage.java:180)
        at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$FsPermissionProto.getSerializedSize(HdfsProtos.java:5407)
        at com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
        at com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$CreateRequestProto.getSerializedSize(ClientNamenodeProtocolProtos.java:2371)
        at com.google.protobuf.AbstractMessageLite.toByteString(AbstractMessageLite.java:49)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.constructRpcRequest(ProtobufRpcEngine.java:149)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:193)
        at $Proxy11.create(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
        at $Proxy11.create(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:192)
        at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1298)
        at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1317)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1242)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1199)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:273)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:262)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:79)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:851)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:832)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:731)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:720)
        at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:80)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:227)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:220)
        at org.apache.flume.sink.hdfs.BucketWriter$8$1.run(BucketWriter.java:557)
        at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:160)
        at org.apache.flume.sink.hdfs.BucketWriter.access$1000(BucketWriter.java:56)
        at org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:554)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:722)

My conf file:

# Naming the components in this Agent
###############################
httpagent.sources = http-source
httpagent.sinks = local-file-sink
httpagent.channels = ch3

# Define / Configure Source
###############################
httpagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
httpagent.sources.http-source.channels = ch3
httpagent.sources.http-source.port = 44444

# Local File Sink
###############################
httpagent.sinks.local-file-sink.type = hdfs
httpagent.sinks.local-file-sink.channel = ch3
httpagent.sinks.local-file-sink.hdfs.path = hdfs://xyz.16.137.81:54545/flume
httpagent.sinks.local-file-sink.hdfs.fileType = DataStream
#httpagent.sinks.local-file-sink.hdfs.filePrefix = events-
httpagent.sinks.local-file-sink.hdfs.round = true
httpagent.sinks.local-file-sink.hdfs.roundValue = 1
httpagent.sinks.local-file-sink.hdfs.roundUnit = minute

# Channels
###############################
httpagent.channels.ch3.type = memory
httpagent.channels.ch3.capacity = 1000
httpagent.channels.ch3.transactionCapacity = 100


Apart from that, I have these jars for hadoop/hdfs in the /lib folder:
commons-codec-1.4.jar
commons-configuration-1.6.jar
commons-httpclient-3.1.jar
hadoop-annotations-2.0.0-cdh4.2.0.jar
hadoop-auth-2.0.0-cdh4.2.0.jar
hadoop-client-2.0.0-mr1-cdh4.2.0.jar
hadoop-common-2.0.0-cdh4.2.0.jar
hadoop-core-2.0.0-mr1-cdh4.2.0.jar
hadoop-hdfs-2.0.0-cdh4.2.0.jar
protobuf-java-2.5.0.jar

I believe this error is coming from the protobuf-java-2.5.0.jar file.
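
If that really is the cause (the trace dies in GeneratedMessage.getUnknownFields, which is what happens when classes generated by protoc 2.4.x run against the protobuf-java 2.5.0 runtime), one thing I could try, purely as a guess, is matching Flume's protobuf dependency to whatever version the CDH4 Hadoop jars were built with, something like this (2.4.0a is only an assumption here, not a verified version):

<dependency>
  <groupId>com.google.protobuf</groupId>
  <artifactId>protobuf-java</artifactId>
  <version>2.4.0a</version>  <!-- assumed CDH4 protobuf version; verify against the Hadoop jars -->
</dependency>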

Any suggestions will be of great help!!!



Thanks,
Himanshu


-- 
have a good day! chenshang'an

                                          
