Hi,

Thanks for adding that Jira. If this is down to framing and packets exceeding
the MTU, then it may be caused by the unusual networking I have between servers,
whereby they are linked via OpenVPN rather than normal networking. So it's
possible that OpenVPN is doing something to the packets that is affecting this.
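
As an illustration of what I think might be happening (just a hand-built byte
array, not the actual HBase code path): if the multicast payload is cut or
re-framed at the tunnel MTU, the parser can end up reading from the middle of
a message, and the first tag byte it sees may decode to a wire type that
protobuf doesn't define, which matches the error below.

    import com.google.protobuf.InvalidProtocolBufferException;
    import com.google.protobuf.UnknownFieldSet;

    public class TruncationDemo {
        public static void main(String[] args) {
            // Hypothetical input: 0x0F decodes as field number 1, wire type 7,
            // which protobuf does not define; the kind of byte the parser can
            // hit when it starts reading mid-message.
            byte[] garbled = new byte[] { 0x0F, 0x01, 0x02 };
            try {
                UnknownFieldSet.parseFrom(garbled);
            } catch (InvalidProtocolBufferException e) {
                // Should report something like "Protocol message tag had invalid wire type."
                System.out.println("Parse failed: " + e.getMessage());
            }
        }
    }

That at least reproduces the same exception type the client is logging; whether
OpenVPN is actually truncating or re-framing the datagrams is what I'd like to
confirm.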

-Ian

On Wednesday 01 October 2014 09:12:05 Andrew Purtell wrote:
> Thanks for reporting this. Please see
> https://issues.apache.org/jira/browse/HBASE-12141. Hope I've
> understood the issue correctly. We will look into it.
> 
> On Wed, Oct 1, 2014 at 4:37 AM, Ian Brooks <[email protected]> wrote:
> > Hi,
> >
> > I have a Java client that connects to HBase and reads and writes data.
> > Every now and then, I see the following stack trace in the application
> > log, and I'm not sure why it comes up.
> >
> > org.apache.hadoop.hbase.client.ClusterStatusListener - ERROR - Unexpected exception, continuing.
> > com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.
> >         at com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:99)
> >         at com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:498)
> >         at com.google.protobuf.GeneratedMessage.parseUnknownField(GeneratedMessage.java:193)
> >         at org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus.<init>(ClusterStatusProtos.java:7554)
> >         at org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus.<init>(ClusterStatusProtos.java:7512)
> >         at org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus$1.parsePartialFrom(ClusterStatusProtos.java:7689)
> >         at org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos$ClusterStatus$1.parsePartialFrom(ClusterStatusProtos.java:7684)
> >         at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:141)
> >         at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:176)
> >         at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:182)
> >         at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
> >         at org.jboss.netty.handler.codec.protobuf.ProtobufDecoder.decode(ProtobufDecoder.java:122)
> >         at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
> >         at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
> >         at org.jboss.netty.channel.socket.oio.OioDatagramWorker.process(OioDatagramWorker.java:52)
> >         at org.jboss.netty.channel.socket.oio.AbstractOioWorker.run(AbstractOioWorker.java:73)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> > I'm running hbase-0.98.3-hadoop2
> >
> > -Ian
> 