[jira] [Commented] (AVRO-1407) NettyTransceiver can cause an infinite loop when slow to connect

2017-11-13 Thread Suraj Acharya (JIRA)

[ 
https://issues.apache.org/jira/browse/AVRO-1407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250648#comment-16250648
 ] 

Suraj Acharya commented on AVRO-1407:
-

Yes.
Feel free to cherry-pick it back to 1.7.8.
For the time being, I am removing the release version 1.7.8 from the JIRA issue.

> NettyTransceiver can cause an infinite loop when slow to connect
> ---
>
> Key: AVRO-1407
> URL: https://issues.apache.org/jira/browse/AVRO-1407
> Project: Avro
>  Issue Type: Bug
>  Components: java
>Affects Versions: 1.7.5, 1.7.6
>Reporter: Gareth Davis
>Assignee: Gareth Davis
> Fix For: 1.8.0
>
> Attachments: AVRO-1407-1.patch, AVRO-1407-2.patch, 
> AVRO-1407-testcase.patch
>
>
> When a new {{NettyTransceiver}} is created it forces the channel to be 
> allocated and connected to the remote host. It waits for connectTimeout 
> ms on the [connect channel 
> future|https://github.com/apache/avro/blob/1579ab1ac95731630af58fc303a07c9bf28541d6/lang/java/ipc/src/main/java/org/apache/avro/ipc/NettyTransceiver.java#L271].
> This is obviously a good thing; it's only that on being unsuccessful, i.e. 
> {{!channelFuture.isSuccess()}}, an exception is thrown and the call to the 
> constructor fails with an {{IOException}}, but this has the potential to leave an 
> active channel associated with the {{ChannelFactory}}.
> The problem is that a Netty {{NioClientSocketChannelFactory}} will not 
> shut down while there are active channels still around, and if you have supplied 
> the {{ChannelFactory}} to the {{NettyTransceiver}} then you will not be able 
> to shut it down by calling {{ChannelFactory.releaseExternalResources()}} like 
> the [Flume Avro RPC client 
> does|https://github.com/apache/flume/blob/b8cf789b8509b1e5be05dd0b0b16c5d9af9698ae/flume-ng-sdk/src/main/java/org/apache/flume/api/NettyAvroRpcClient.java#L158].
> In order to recreate this you need a very laggy network, where the connect 
> attempt takes longer than the connect timeout but does actually work. This is 
> very hard to organise in a test case, although I do have a test setup using 
> Vagrant VMs that recreates this every time, using the Flume RPC client and 
> server.
> The following stack trace is from a production system; it won't ever recover 
> until the channel is disconnected (by forcing a disconnect at the remote 
> host) or the JVM is restarted.
> {noformat:title=Production stack trace}
> "TLOG-0" daemon prio=10 tid=0x7f581c7be800 nid=0x39a1 waiting on 
> condition [0x7f57ef9f2000]
>   java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   parking to wait for <0x0007218b16e0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2025)
>   at 
> java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1253)
>   at 
> org.jboss.netty.util.internal.ExecutorUtil.terminate(ExecutorUtil.java:103)
>   at 
> org.jboss.netty.channel.socket.nio.AbstractNioWorkerPool.releaseExternalResources(AbstractNioWorkerPool.java:80)
>   at 
> org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory.releaseExternalResources(NioClientSocketChannelFactory.java:181)
>   at 
> org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:142)
>   at 
> org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:101)
>   at 
> org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:564)
>   locked <0x0006c30ae7b0> (a org.apache.flume.api.NettyAvroRpcClient)
>   at 
> org.apache.flume.api.RpcClientFactory.getInstance(RpcClientFactory.java:88)
>   at 
> org.apache.flume.api.LoadBalancingRpcClient.createClient(LoadBalancingRpcClient.java:214)
>   at 
> org.apache.flume.api.LoadBalancingRpcClient.getClient(LoadBalancingRpcClient.java:205)
>   locked <0x0006a97b18e8> (a org.apache.flume.api.LoadBalancingRpcClient)
>   at 
> org.apache.flume.api.LoadBalancingRpcClient.appendBatch(LoadBalancingRpcClient.java:95)
>   at 
> com.ean.platform.components.tlog.client.service.AvroRpcEventRouter$1.call(AvroRpcEventRouter.java:45)
>   at 
> com.ean.platform.components.tlog.client.service.AvroRpcEventRouter$1.call(AvroRpcEventRouter.java:43)
> {noformat}
> The solution is very simple, and a patch should be along in a moment.
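A minimal sketch of the pattern described above (not from the original report), assuming the Avro 1.7.x {{NettyTransceiver}} constructor that accepts a caller-supplied Netty 3 {{ChannelFactory}} and a connect timeout; the host name and port are placeholders:

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.apache.avro.ipc.NettyTransceiver;
import org.jboss.netty.channel.ChannelFactory;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;

public class SlowConnectSketch {
  public static void main(String[] args) {
    // Caller-supplied factory, as the Flume NettyAvroRpcClient does.
    ChannelFactory factory = new NioClientSocketChannelFactory(
        Executors.newCachedThreadPool(), Executors.newCachedThreadPool());
    try {
      // If the connect takes longer than the timeout but eventually succeeds,
      // the constructor throws IOException yet can leave an active channel
      // registered with the factory. "remote-host" and 41414 are placeholders.
      new NettyTransceiver(new InetSocketAddress("remote-host", 41414),
          factory, 5000L);
    } catch (IOException e) {
      // On the laggy network described above, this call can block indefinitely,
      // because the factory will not shut down while the orphaned channel is open.
      factory.releaseExternalResources();
    }
  }
}
{code}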



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (AVRO-1407) NettyTransceiver can cause an infinite loop when slow to connect

2017-11-13 Thread Suraj Acharya (JIRA)

 [ 
https://issues.apache.org/jira/browse/AVRO-1407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suraj Acharya updated AVRO-1407:

Fix Version/s: (was: 1.7.8)




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AVRO-1597) Random data tool writes corrupt data to standard output

2017-11-13 Thread Suraj Acharya (JIRA)

[ 
https://issues.apache.org/jira/browse/AVRO-1597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16250638#comment-16250638
 ] 

Suraj Acharya commented on AVRO-1597:
-

It seems the issue is that the patch is present in the code, but it has lost 
its history.
You can see the changes from 
https://issues.apache.org/jira/secure/attachment/12677013/AVRO-1597.patch in 
https://github.com/apache/avro/blame/branch-1.7/lang/java/trevni/core/src/test/java/org/apache/trevni/TestUtil.java


> Random data tool writes corrupt data to standard output
> ---
>
> Key: AVRO-1597
> URL: https://issues.apache.org/jira/browse/AVRO-1597
> Project: Avro
>  Issue Type: Bug
>  Components: java
>Reporter: Doug Cutting
>Assignee: Doug Cutting
> Fix For: 1.7.8, 1.8.0
>
> Attachments: AVRO-1597.patch
>
>
> When the 'random' command is used to write a data file to standard output, that 
> file is corrupt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (AVRO-1340) use default to allow old readers to specify default enum value when encountering new enum symbols

2017-11-13 Thread Daniel Abrahamsson (JIRA)

[ 
https://issues.apache.org/jira/browse/AVRO-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16249501#comment-16249501
 ] 

Daniel Abrahamsson commented on AVRO-1340:
--

One difference between the alias and fallback proposals is that the former 
would affect the data format, whereas the latter would not. I think that makes 
the fallback proposal preferable.

> use default to allow old readers to specify default enum value when 
> encountering new enum symbols
> -
>
> Key: AVRO-1340
> URL: https://issues.apache.org/jira/browse/AVRO-1340
> Project: Avro
>  Issue Type: Improvement
>  Components: spec
> Environment: N/A
>Reporter: Jim Donofrio
>Priority: Minor
>
> The schema resolution page says:
> > if both are enums:
> > if the writer's symbol is not present in the reader's enum, then an
> > error is signalled.
> This makes it difficult to use enums because you can never add an enum value 
> and keep old readers compatible. Why not use the default option to refer to 
> one of the enum values so that when an old reader encounters an enum ordinal it 
> does not recognize, it can default to the optional, schema-provided one? If 
> the old schema does not provide a default then the older reader can continue 
> to fail as it does today.
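As an illustration of the fallback idea, a minimal sketch assuming a hypothetical {{default}} attribute on the reader's enum (not part of the specification at the time of this discussion); the {{Color}} schemas are invented for the example:

{code:java}
import org.apache.avro.Schema;

public class EnumDefaultSketch {
  public static void main(String[] args) {
    // Old reader: knows only RED and GREEN.
    Schema reader = new Schema.Parser().parse(
        "{\"type\":\"enum\",\"name\":\"Color\",\"symbols\":[\"RED\",\"GREEN\"]}");

    // New writer: added BLUE. Under the current resolution rules, an old reader
    // that encounters a BLUE value must signal an error.
    Schema writer = new Schema.Parser().parse(
        "{\"type\":\"enum\",\"name\":\"Color\",\"symbols\":[\"RED\",\"GREEN\",\"BLUE\"]}");

    // Proposed fallback (hypothetical attribute), declared only on the reader:
    //   {"type":"enum","name":"Color","symbols":["RED","GREEN"],"default":"GREEN"}
    // Unknown writer symbols would resolve to GREEN. Since only the reader's
    // resolution behaviour changes, the encoded bytes stay exactly the same,
    // which is the difference from the alias proposal noted in the comment above.
    System.out.println(reader + "\n" + writer);
  }
}
{code}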



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)