[ https://issues.apache.org/jira/browse/HADOOP-18546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17645079#comment-17645079 ]
ASF GitHub Bot commented on HADOOP-18546:
-----------------------------------------
pranavsaxena-microsoft commented on PR #5176:
URL: https://github.com/apache/hadoop/pull/5176#issuecomment-1343863437
> getting a test failure locally, ITestReadBufferManager failing as one of its asserts isn't valid.
>
> going to reopen the jira @pranavsaxena-microsoft can you see if you can replicate the problem and add a followup patch (use the same jira). do make sure you are running this test _first_, and that it is failing for you. thanks
>
> ```
> [INFO] Running org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager
> [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.816 s <<< FAILURE! - in org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager
> [ERROR] testPurgeBufferManagerForSequentialStream(org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager)  Time elapsed: 1.995 s <<< FAILURE!
> java.lang.AssertionError:
> [Buffers associated with closed input streams shouldn't be present]
> Expecting:
>  <org.apache.hadoop.fs.azurebfs.services.AbfsInputStream@5a709b9b{counters=((stream_read_bytes_backwards_on_seek=0) (stream_read_seek_forward_operations=0) (stream_read_seek_operations=0) (read_ahead_bytes_read=16384) (stream_read_seek_bytes_skipped=0) (stream_read_bytes=1) (action_http_get_request=0) (bytes_read_buffer=1) (seek_in_buffer=0) (remote_bytes_read=81920) (action_http_get_request.failures=0) (stream_read_operations=1) (remote_read_op=8) (stream_read_seek_backward_operations=0));
> gauges=();
> minimums=((action_http_get_request.failures.min=-1) (action_http_get_request.min=-1));
> maximums=((action_http_get_request.max=-1) (action_http_get_request.failures.max=-1));
> means=((action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)));
> }AbfsInputStream@(1517329307){StreamStatistics{counters=((stream_read_seek_bytes_skipped=0) (seek_in_buffer=0) (stream_read_bytes=1) (stream_read_seek_operations=0) (remote_bytes_read=81920) (stream_read_operations=1) (bytes_read_buffer=1) (action_http_get_request.failures=0) (action_http_get_request=0) (stream_read_seek_forward_operations=0) (stream_read_bytes_backwards_on_seek=0) (read_ahead_bytes_read=16384) (stream_read_seek_backward_operations=0) (remote_read_op=8));
> gauges=();
> minimums=((action_http_get_request.min=-1) (action_http_get_request.failures.min=-1));
> maximums=((action_http_get_request.max=-1) (action_http_get_request.failures.max=-1));
> means=((action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)));
> }}>
> not to be equal to:
>  <org.apache.hadoop.fs.azurebfs.services.AbfsInputStream@5a709b9b{counters=((bytes_read_buffer=1) (stream_read_seek_forward_operations=0) (read_ahead_bytes_read=16384) (stream_read_seek_operations=0) (stream_read_seek_bytes_skipped=0) (stream_read_seek_backward_operations=0) (remote_bytes_read=81920) (stream_read_operations=1) (stream_read_bytes_backwards_on_seek=0) (action_http_get_request.failures=0) (seek_in_buffer=0) (action_http_get_request=0) (remote_read_op=8) (stream_read_bytes=1));
> gauges=();
> minimums=((action_http_get_request.min=-1) (action_http_get_request.failures.min=-1));
> maximums=((action_http_get_request.max=-1) (action_http_get_request.failures.max=-1));
> means=((action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)));
> }AbfsInputStream@(1517329307){StreamStatistics{counters=((remote_read_op=8) (stream_read_seek_forward_operations=0) (stream_read_seek_backward_operations=0) (read_ahead_bytes_read=16384) (action_http_get_request.failures=0) (bytes_read_buffer=1) (stream_read_seek_operations=0) (stream_read_bytes=1) (stream_read_bytes_backwards_on_seek=0) (action_http_get_request=0) (seek_in_buffer=0) (stream_read_seek_bytes_skipped=0) (remote_bytes_read=81920) (stream_read_operations=1));
> gauges=();
> minimums=((action_http_get_request.failures.min=-1) (action_http_get_request.min=-1));
> maximums=((action_http_get_request.failures.max=-1) (action_http_get_request.max=-1));
> means=((action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)));
> }}>
>
>     at org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager.assertListDoesnotContainBuffersForIstream(ITestReadBufferManager.java:145)
>     at org.apache.hadoop.fs.azurebfs.services.ITestReadBufferManager.testPurgeBufferManagerForSequentialStream(ITestReadBufferManager.java:120)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.lang.Thread.run(Thread.java:750)
> ```
Thanks. I am checking on it.
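
For context on why the assert is no longer valid: the test walks the ReadBufferManager lists after a stream is closed and asserts that none of the buffers still reference that stream. A minimal sketch of that check and the race it now trips over follows; `ReadBuffer#getStream` and the AssertJ calls here are assumptions inferred from the failure output above, not the actual test source.

```java
// Sketch only, approximating assertListDoesnotContainBuffersForIstream in
// ITestReadBufferManager. ReadBuffer and AbfsInputStream are the existing
// classes in org.apache.hadoop.fs.azurebfs.services (ReadBuffer is
// package-private, so the real test sits in that package).
import java.util.List;

import org.apache.hadoop.fs.azurebfs.services.AbfsInputStream;

import static org.assertj.core.api.Assertions.assertThat;

public class ReadBufferAssertionSketch {

  // After an AbfsInputStream is closed, no buffer in the given list should
  // still reference it. The race making this assert invalid: a prefetch that
  // was in progress at close() time is no longer purged, completes afterwards,
  // and lands in the completed list, so the closed stream can legitimately
  // reappear there until the eviction timeout fires.
  static void assertListDoesnotContainBuffersForIstream(
      List<ReadBuffer> list, AbfsInputStream closedStream) {
    for (ReadBuffer buffer : list) {
      assertThat(buffer.getStream())
          .describedAs("Buffers associated with closed input streams shouldn't be present")
          .isNotEqualTo(closedStream);
    }
  }
}
```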
> disable purging list of in progress reads in abfs stream closed
> ---------------------------------------------------------------
>
> Key: HADOOP-18546
> URL: https://issues.apache.org/jira/browse/HADOOP-18546
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.3.4
> Reporter: Steve Loughran
> Assignee: Pranav Saxena
> Priority: Major
> Labels: pull-request-available
>
> turn off the prune of in-progress reads in ReadBufferManager::purgeBuffersForStream.
> this will ensure active prefetches for a closed stream complete. they will then get to the completed list and hang around until evicted by timeout, but at least prefetching will be safe.
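
To make the intended behaviour concrete, here is a sketch of how purgeBuffersForStream could look with the in-progress prune disabled. The field and accessor names (readAheadQueue, inProgressList, completedReadList, freeList, getBufferindex) are assumptions drawn from the description above, not verified against trunk:

```java
// Sketch only; ReadBuffer and AbfsInputStream are the existing ABFS classes
// in org.apache.hadoop.fs.azurebfs.services, and the collection types below
// are assumptions about ReadBufferManager's internals.
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Stack;
import java.util.concurrent.ConcurrentLinkedQueue;

class ReadBufferManagerSketch {

  private final ConcurrentLinkedQueue<ReadBuffer> readAheadQueue = new ConcurrentLinkedQueue<>();
  private final LinkedList<ReadBuffer> inProgressList = new LinkedList<>();
  private final LinkedList<ReadBuffer> completedReadList = new LinkedList<>();
  private final Stack<Integer> freeList = new Stack<>();

  synchronized void purgeBuffersForStream(AbfsInputStream stream) {
    // Queued reads have not started yet, so they are safe to drop outright.
    readAheadQueue.removeIf(buffer -> buffer.getStream() == stream);
    // Completed reads are idle, so drop them and recycle their buffer slots.
    purgeList(stream, completedReadList);
    // Deliberately NOT purging inProgressList: a worker thread may still be
    // writing into those buffers, so recycling them here would hand a byte[]
    // back to the pool mid-write. Each in-flight read is left to finish; it
    // then moves to completedReadList and ages out via the eviction timeout.
  }

  private void purgeList(AbfsInputStream stream, LinkedList<ReadBuffer> list) {
    for (Iterator<ReadBuffer> it = list.iterator(); it.hasNext();) {
      ReadBuffer buffer = it.next();
      if (buffer.getStream() == stream) {
        it.remove();
        if (buffer.getBufferindex() != -1) {
          freeList.push(buffer.getBufferindex()); // return the slot to the pool
        }
      }
    }
  }
}
```

The trade-off is the one the description states: a buffer still being filled must never be recycled, so letting it complete and age out via the normal timeout spends a little memory to keep prefetching safe.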