[ https://issues.apache.org/jira/browse/HADOOP-19098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17823287#comment-17823287 ]
ASF GitHub Bot commented on HADOOP-19098:
-----------------------------------------
steveloughran commented on PR #6604:
URL: https://github.com/apache/hadoop/pull/6604#issuecomment-1977235222
Just going to highlight that the contract test failed *badly* when reading
into direct buffers from Azure. I think this is a bug in the direct buffer
fetching logic: it always sets 0 as the position for readFully(position...).
What does that mean? It means that until this patch is in, it is *not* safe to
read into direct buffers except in the stores which provide their own native
implementations of the API.
Which really means: "don't use direct buffers as a destination, at all".
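For context, here is a minimal sketch (hypothetical code, not the actual Hadoop fallback implementation) of the direct-buffer path: the range is staged through a heap array and copied into the direct buffer. Passing 0 instead of the range's offset to readFully() would return the bytes from the start of the file for every range, which matches the repeated `was:<97>` ('a') values in the failures below.

```java
// Hypothetical sketch, not the actual Hadoop fallback code: how a default
// vectored-read path might copy one FileRange into a direct ByteBuffer by
// staging through a heap array. The bug described above corresponds to
// passing 0 instead of range.getOffset() to readFully(), so every range
// comes back with the bytes from the start of the file.
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.PositionedReadable;

final class DirectBufferReadSketch {

  static void readIntoDirectBuffer(PositionedReadable stream,
      FileRange range, ByteBuffer buffer) throws IOException {
    int length = range.getLength();
    // Direct buffers have no backing array, so read into a temporary heap
    // array first and then copy it in.
    byte[] tmp = new byte[length];
    // Correct call: read from the range's own offset, not from position 0.
    stream.readFully(range.getOffset(), tmp, 0, length);
    buffer.put(tmp);
    buffer.flip();
  }
}
```

The full failure log from the ABFS contract tests: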
```
[ERROR] Tests run: 34, Failures: 13, Errors: 2, Skipped: 0, Time elapsed:
375.803 s <<< FAILURE! - in
org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead
[ERROR] testNormalReadAfterVectoredRead[Buffer type :
direct](org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead)
Time elapsed: 1.607 s <<< FAILURE!
java.lang.AssertionError: vecRead with read offset 110: data[0] !=
DATASET[110] expected:<111> but was:<97>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at
org.apache.hadoop.fs.contract.ContractTestUtils.assertDatasetEquals(ContractTestUtils.java:1182)
at
org.apache.hadoop.fs.contract.ContractTestUtils.validateVectoredReadResult(ContractTestUtils.java:1140)
at
org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest.testNormalReadAfterVectoredRead(AbstractContractVectoredReadTest.java:347)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
[ERROR] testVectoredReadMultipleRanges[Buffer type :
direct](org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead)
Time elapsed: 0.391 s <<< FAILURE!
java.lang.AssertionError: vecRead with read offset 100: data[0] !=
DATASET[100] expected:<101> but was:<97>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at
org.apache.hadoop.fs.contract.ContractTestUtils.assertDatasetEquals(ContractTestUtils.java:1182)
at
org.apache.hadoop.fs.contract.ContractTestUtils.validateVectoredReadResult(ContractTestUtils.java:1140)
at
org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest.testVectoredReadMultipleRanges(AbstractContractVectoredReadTest.java:170)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
[ERROR] testSomeRangesMergedSomeUnmerged[Buffer type :
direct](org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead)
Time elapsed: 0.417 s <<< FAILURE!
java.lang.AssertionError: vecRead with read offset 1947: data[0] !=
DATASET[1947] expected:<124> but was:<97>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at
org.apache.hadoop.fs.contract.ContractTestUtils.assertDatasetEquals(ContractTestUtils.java:1182)
at
org.apache.hadoop.fs.contract.ContractTestUtils.validateVectoredReadResult(ContractTestUtils.java:1140)
at
org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest.testSomeRangesMergedSomeUnmerged(AbstractContractVectoredReadTest.java:245)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
[ERROR] testEOFRanges[Buffer type :
direct](org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead)
Time elapsed: 0.305 s <<< FAILURE!
java.lang.AssertionError: Expected a java.io.EOFException to be thrown, but
got the result: : java.nio.DirectByteBuffer[pos=0 lim=100 cap=100]
at
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:499)
at
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:384)
at
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:453)
at
org.apache.hadoop.test.LambdaTestUtils.interceptFuture(LambdaTestUtils.java:792)
at
org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest.testEOFRanges(AbstractContractVectoredReadTest.java:310)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
[ERROR] testVectoredReadAfterNormalRead[Buffer type :
direct](org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead)
Time elapsed: 0.416 s <<< FAILURE!
java.lang.AssertionError: vecRead with read offset 110: data[0] !=
DATASET[110] expected:<111> but was:<97>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at
org.apache.hadoop.fs.contract.ContractTestUtils.assertDatasetEquals(ContractTestUtils.java:1182)
at
org.apache.hadoop.fs.contract.ContractTestUtils.validateVectoredReadResult(ContractTestUtils.java:1140)
at
org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest.testVectoredReadAfterNormalRead(AbstractContractVectoredReadTest.java:366)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
[ERROR] testDisjointRanges[Buffer type :
direct](org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead)
Time elapsed: 1.483 s <<< FAILURE!
java.lang.AssertionError: vecRead with read offset 4101: data[0] !=
DATASET[4101] expected:<102> but was:<97>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at
org.apache.hadoop.fs.contract.ContractTestUtils.assertDatasetEquals(ContractTestUtils.java:1182)
at
org.apache.hadoop.fs.contract.ContractTestUtils.validateVectoredReadResult(ContractTestUtils.java:1140)
at
org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest.testDisjointRanges(AbstractContractVectoredReadTest.java:203)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
[ERROR] testVectoredReadCapability[Buffer type :
direct](org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead)
Time elapsed: 0.351 s <<< FAILURE!
java.lang.AssertionError:
Should have capability: in:readvectored in
org.apache.hadoop.fs.FSDataInputStream@4bb717:
org.apache.hadoop.fs.azurebfs.services.AbfsInputStream@456a59ae{counters=((remote_read_op=0)
(stream_read_seek_operations=0) (stream_read_seek_backward_operations=0)
(read_ahead_bytes_read=0) (action_http_get_request.failures=0)
(remote_bytes_read=0) (stream_read_operations=0)
(stream_read_seek_bytes_skipped=0) (stream_read_seek_forward_operations=0)
(bytes_read_buffer=0) (stream_read_bytes_backwards_on_seek=0)
(stream_read_bytes=0) (seek_in_buffer=0) (action_http_get_request=0));
gauges=();
minimums=((action_http_get_request.min=-1)
(action_http_get_request.failures.min=-1));
maximums=((action_http_get_request.max=-1)
(action_http_get_request.failures.max=-1));
means=((action_http_get_request.failures.mean=(samples=0, sum=0,
mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)));
}AbfsInputStream@(1164597678){[fs.azure.capability.readahead.safe],
StreamStatistics{counters=((seek_in_buffer=0) (action_http_get_request=0)
(stream_read_seek_forward_operations=0)
(stream_read_seek_backward_operations=0) (read_ahead_bytes_read=0)
(remote_read_op=0) (remote_bytes_read=0) (stream_read_seek_bytes_skipped=0)
(stream_read_operations=0) (bytes_read_buffer=0)
(stream_read_bytes_backwards_on_seek=0) (action_http_get_request.failures=0)
(stream_read_seek_operations=0) (stream_read_bytes=0));
gauges=();
minimums=((action_http_get_request.failures.min=-1)
(action_http_get_request.min=-1));
maximums=((action_http_get_request.failures.max=-1)
(action_http_get_request.max=-1));
means=((action_http_get_request.mean=(samples=0, sum=0, mean=0.0000))
(action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)));
}}
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.assertTrue(Assert.java:42)
at
org.apache.hadoop.fs.contract.ContractTestUtils.assertCapabilities(ContractTestUtils.java:1647)
at
org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest.testVectoredReadCapability(AbstractContractVectoredReadTest.java:149)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
[ERROR] testSomeRandomNonOverlappingRanges[Buffer type :
direct](org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead)
Time elapsed: 0.378 s <<< FAILURE!
java.lang.AssertionError: vecRead with read offset 500: data[0] !=
DATASET[500] expected:<117> but was:<97>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at
org.apache.hadoop.fs.contract.ContractTestUtils.assertDatasetEquals(ContractTestUtils.java:1182)
at
org.apache.hadoop.fs.contract.ContractTestUtils.validateVectoredReadResult(ContractTestUtils.java:1140)
at
org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest.testSomeRandomNonOverlappingRanges(AbstractContractVectoredReadTest.java:279)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
[ERROR] testAllRangesMergedIntoOne[Buffer type :
direct](org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead)
Time elapsed: 0.432 s <<< FAILURE!
java.lang.AssertionError: vecRead with read offset 3899: data[0] !=
DATASET[3899] expected:<124> but was:<97>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at
org.apache.hadoop.fs.contract.ContractTestUtils.assertDatasetEquals(ContractTestUtils.java:1182)
at
org.apache.hadoop.fs.contract.ContractTestUtils.validateVectoredReadResult(ContractTestUtils.java:1140)
at
org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest.testAllRangesMergedIntoOne(AbstractContractVectoredReadTest.java:220)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
[ERROR] testVectoredIOEndToEnd[Buffer type :
direct](org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead)
Time elapsed: 180.019 s <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 180000
milliseconds
at sun.misc.Unsafe.park(Native Method)
at
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1039)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1332)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at
org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest.testVectoredIOEndToEnd(AbstractContractVectoredReadTest.java:416)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
[ERROR] testVectoredReadAndReadFully[Buffer type :
direct](org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead)
Time elapsed: 0.945 s <<< FAILURE!
org.junit.ComparisonFailure: [Result from vectored read and readFully must
match] expected:<java.nio.[Heap]ByteBuffer[pos=0 lim...> but
was:<java.nio.[Direct]ByteBuffer[pos=0 lim...>
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at
org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest.testVectoredReadAndReadFully(AbstractContractVectoredReadTest.java:186)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
[ERROR] testMultipleVectoredReads[Buffer type :
direct](org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead)
Time elapsed: 0.331 s <<< FAILURE!
java.lang.AssertionError: vecRead with read offset 110: data[0] !=
DATASET[110] expected:<111> but was:<97>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at
org.apache.hadoop.fs.contract.ContractTestUtils.assertDatasetEquals(ContractTestUtils.java:1182)
at
org.apache.hadoop.fs.contract.ContractTestUtils.validateVectoredReadResult(ContractTestUtils.java:1140)
at
org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest.testMultipleVectoredReads(AbstractContractVectoredReadTest.java:378)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
[ERROR] testConsecutiveRanges[Buffer type :
direct](org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead)
Time elapsed: 0.419 s <<< FAILURE!
java.lang.AssertionError: vecRead with read offset 500: data[0] !=
DATASET[500] expected:<117> but was:<97>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at
org.apache.hadoop.fs.contract.ContractTestUtils.assertDatasetEquals(ContractTestUtils.java:1182)
at
org.apache.hadoop.fs.contract.ContractTestUtils.validateVectoredReadResult(ContractTestUtils.java:1140)
at
org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest.testConsecutiveRanges(AbstractContractVectoredReadTest.java:292)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
[ERROR] testVectoredReadCapability[Buffer type :
array](org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead)
Time elapsed: 0.414 s <<< FAILURE!
java.lang.AssertionError:
Should have capability: in:readvectored in
org.apache.hadoop.fs.FSDataInputStream@2ff7f68a:
org.apache.hadoop.fs.azurebfs.services.AbfsInputStream@278496ba{counters=((stream_read_bytes=0)
(bytes_read_buffer=0) (stream_read_seek_forward_operations=0)
(stream_read_operations=0) (stream_read_bytes_backwards_on_seek=0)
(stream_read_seek_backward_operations=0) (stream_read_seek_operations=0)
(read_ahead_bytes_read=0) (action_http_get_request.failures=0)
(seek_in_buffer=0) (remote_read_op=0) (remote_bytes_read=0)
(action_http_get_request=0) (stream_read_seek_bytes_skipped=0));
gauges=();
minimums=((action_http_get_request.min=-1)
(action_http_get_request.failures.min=-1));
maximums=((action_http_get_request.max=-1)
(action_http_get_request.failures.max=-1));
means=((action_http_get_request.mean=(samples=0, sum=0, mean=0.0000))
(action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)));
}AbfsInputStream@(663000762){[fs.azure.capability.readahead.safe],
StreamStatistics{counters=((remote_read_op=0) (bytes_read_buffer=0)
(stream_read_bytes_backwards_on_seek=0) (action_http_get_request=0)
(action_http_get_request.failures=0) (remote_bytes_read=0)
(stream_read_seek_forward_operations=0) (stream_read_seek_operations=0)
(stream_read_bytes=0) (seek_in_buffer=0) (stream_read_seek_bytes_skipped=0)
(stream_read_seek_backward_operations=0) (stream_read_operations=0)
(read_ahead_bytes_read=0));
gauges=();
minimums=((action_http_get_request.failures.min=-1)
(action_http_get_request.min=-1));
maximums=((action_http_get_request.failures.max=-1)
(action_http_get_request.max=-1));
means=((action_http_get_request.failures.mean=(samples=0, sum=0,
mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)));
}}
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.assertTrue(Assert.java:42)
at
org.apache.hadoop.fs.contract.ContractTestUtils.assertCapabilities(ContractTestUtils.java:1647)
at
org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest.testVectoredReadCapability(AbstractContractVectoredReadTest.java:149)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
[ERROR] testVectoredIOEndToEnd[Buffer type :
array](org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractVectoredRead)
Time elapsed: 180.015 s <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 180000
milliseconds
at sun.misc.Unsafe.park(Native Method)
at
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1039)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1332)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at
org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest.testVectoredIOEndToEnd(AbstractContractVectoredReadTest.java:416)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
[INFO]
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR]
ITestAbfsFileSystemContractVectoredRead>AbstractContractVectoredReadTest.testAllRangesMergedIntoOne:220->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
vecRead with read offset 3899: data[0] != DATASET[3899] expected:<124> but
was:<97>
[ERROR]
ITestAbfsFileSystemContractVectoredRead>AbstractContractVectoredReadTest.testConsecutiveRanges:292->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
vecRead with read offset 500: data[0] != DATASET[500] expected:<117> but
was:<97>
[ERROR]
ITestAbfsFileSystemContractVectoredRead>AbstractContractVectoredReadTest.testDisjointRanges:203->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
vecRead with read offset 4101: data[0] != DATASET[4101] expected:<102> but
was:<97>
[ERROR]
ITestAbfsFileSystemContractVectoredRead>AbstractContractVectoredReadTest.testEOFRanges:310
Expected a java.io.EOFException to be thrown, but got the result: :
java.nio.DirectByteBuffer[pos=0 lim=100 cap=100]
[ERROR]
ITestAbfsFileSystemContractVectoredRead>AbstractContractVectoredReadTest.testMultipleVectoredReads:378->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
vecRead with read offset 110: data[0] != DATASET[110] expected:<111> but
was:<97>
[ERROR]
ITestAbfsFileSystemContractVectoredRead>AbstractContractVectoredReadTest.testNormalReadAfterVectoredRead:347->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
vecRead with read offset 110: data[0] != DATASET[110] expected:<111> but
was:<97>
[ERROR]
ITestAbfsFileSystemContractVectoredRead>AbstractContractVectoredReadTest.testSomeRandomNonOverlappingRanges:279->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
vecRead with read offset 500: data[0] != DATASET[500] expected:<117> but
was:<97>
[ERROR]
ITestAbfsFileSystemContractVectoredRead>AbstractContractVectoredReadTest.testSomeRangesMergedSomeUnmerged:245->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
vecRead with read offset 1947: data[0] != DATASET[1947] expected:<124> but
was:<97>
[ERROR]
ITestAbfsFileSystemContractVectoredRead>AbstractContractVectoredReadTest.testVectoredReadAfterNormalRead:366->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
vecRead with read offset 110: data[0] != DATASET[110] expected:<111> but
was:<97>
[ERROR]
ITestAbfsFileSystemContractVectoredRead>AbstractContractVectoredReadTest.testVectoredReadAndReadFully:186
[Result from vectored read and readFully must match]
expected:<java.nio.[Heap]ByteBuffer[pos=0 lim...> but
was:<java.nio.[Direct]ByteBuffer[pos=0 lim...>
[ERROR]
ITestAbfsFileSystemContractVectoredRead>AbstractContractVectoredReadTest.testVectoredReadCapability:149->Assert.assertTrue:42->Assert.fail:89
Should have capability: in:readvectored in
org.apache.hadoop.fs.FSDataInputStream@2ff7f68a:
org.apache.hadoop.fs.azurebfs.services.AbfsInputStream@278496ba{counters=((stream_read_bytes=0)
(bytes_read_buffer=0) (stream_read_seek_forward_operations=0)
(stream_read_operations=0) (stream_read_bytes_backwards_on_seek=0)
(stream_read_seek_backward_operations=0) (stream_read_seek_operations=0)
(read_ahead_bytes_read=0) (action_http_get_request.failures=0)
(seek_in_buffer=0) (remote_read_op=0) (remote_bytes_read=0)
(action_http_get_request=0) (stream_read_seek_bytes_skipped=0));
gauges=();
minimums=((action_http_get_request.min=-1)
(action_http_get_request.failures.min=-1));
maximums=((action_http_get_request.max=-1)
(action_http_get_request.failures.max=-1));
means=((action_http_get_request.mean=(samples=0, sum=0, mean=0.0000))
(action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)));
}AbfsInputStream@(663000762){[fs.azure.capability.readahead.safe],
StreamStatistics{counters=((remote_read_op=0) (bytes_read_buffer=0)
(stream_read_bytes_backwards_on_seek=0) (action_http_get_request=0)
(action_http_get_request.failures=0) (remote_bytes_read=0)
(stream_read_seek_forward_operations=0) (stream_read_seek_operations=0)
(stream_read_bytes=0) (seek_in_buffer=0) (stream_read_seek_bytes_skipped=0)
(stream_read_seek_backward_operations=0) (stream_read_operations=0)
(read_ahead_bytes_read=0));
gauges=();
minimums=((action_http_get_request.failures.min=-1)
(action_http_get_request.min=-1));
maximums=((action_http_get_request.failures.max=-1)
(action_http_get_request.max=-1));
means=((action_http_get_request.failures.mean=(samples=0, sum=0,
mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)));
}}
[ERROR]
ITestAbfsFileSystemContractVectoredRead>AbstractContractVectoredReadTest.testVectoredReadCapability:149->Assert.assertTrue:42->Assert.fail:89
Should have capability: in:readvectored in
org.apache.hadoop.fs.FSDataInputStream@4bb717:
org.apache.hadoop.fs.azurebfs.services.AbfsInputStream@456a59ae{counters=((remote_read_op=0)
(stream_read_seek_operations=0) (stream_read_seek_backward_operations=0)
(read_ahead_bytes_read=0) (action_http_get_request.failures=0)
(remote_bytes_read=0) (stream_read_operations=0)
(stream_read_seek_bytes_skipped=0) (stream_read_seek_forward_operations=0)
(bytes_read_buffer=0) (stream_read_bytes_backwards_on_seek=0)
(stream_read_bytes=0) (seek_in_buffer=0) (action_http_get_request=0));
gauges=();
minimums=((action_http_get_request.min=-1)
(action_http_get_request.failures.min=-1));
maximums=((action_http_get_request.max=-1)
(action_http_get_request.failures.max=-1));
means=((action_http_get_request.failures.mean=(samples=0, sum=0,
mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)));
}AbfsInputStream@(1164597678){[fs.azure.capability.readahead.safe],
StreamStatistics{counters=((seek_in_buffer=0) (action_http_get_request=0)
(stream_read_seek_forward_operations=0)
(stream_read_seek_backward_operations=0) (read_ahead_bytes_read=0)
(remote_read_op=0) (remote_bytes_read=0) (stream_read_seek_bytes_skipped=0)
(stream_read_operations=0) (bytes_read_buffer=0)
(stream_read_bytes_backwards_on_seek=0) (action_http_get_request.failures=0)
(stream_read_seek_operations=0) (stream_read_bytes=0));
gauges=();
minimums=((action_http_get_request.failures.min=-1)
(action_http_get_request.min=-1));
maximums=((action_http_get_request.failures.max=-1)
(action_http_get_request.max=-1));
means=((action_http_get_request.mean=(samples=0, sum=0, mean=0.0000))
(action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)));
}}
[ERROR]
ITestAbfsFileSystemContractVectoredRead>AbstractContractVectoredReadTest.testVectoredReadMultipleRanges:170->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89
vecRead with read offset 100: data[0] != DATASET[100] expected:<101> but
was:<97>
[ERROR] Errors:
[ERROR]
ITestAbfsFileSystemContractVectoredRead>AbstractContractVectoredReadTest.testVectoredIOEndToEnd:416
» TestTimedOut
[ERROR]
ITestAbfsFileSystemContractVectoredRead>AbstractContractVectoredReadTest.testVectoredIOEndToEnd:416
» TestTimedOut
[INFO]
[ERROR] Tests run: 34, Failures: 13, Errors: 2, Skipped: 0
[INFO]
[ERROR] There are test failures.
```
> Vector IO: consistent specified rejection of overlapping ranges
> ---------------------------------------------------------------
>
> Key: HADOOP-19098
> URL: https://issues.apache.org/jira/browse/HADOOP-19098
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs, fs/s3
> Affects Versions: 3.3.6
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
>
> Related to PARQUET-2171 q: "how do you deal with overlapping ranges?"
> I believe s3a rejects this, but the other impls may not.
> Proposed: FS spec to say
> * "overlap triggers IllegalArgumentException".
> * special case: 0 byte ranges may be short circuited to return an empty buffer
> even without checking file length etc.
> Contract tests to validate this (+ common helper code to do this).
> I'll copy the validation stuff into the parquet PR for consistency with older
> releases.
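For illustration, a minimal sketch of the kind of overlap check the proposal implies (hypothetical helper and class names, not the actual Hadoop validation code):

```java
// Hypothetical sketch of the overlap check the spec change proposes; names
// are illustrative only. Ranges are sorted by offset and any pair whose
// extents intersect is rejected with IllegalArgumentException; zero-byte
// ranges are skipped so they can be short-circuited separately.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

import org.apache.hadoop.fs.FileRange;

final class RangeOverlapCheckSketch {

  static void validateNonOverlapping(List<? extends FileRange> ranges) {
    List<FileRange> sorted = new ArrayList<>(ranges);
    sorted.sort(Comparator.comparingLong(FileRange::getOffset));
    for (int i = 1; i < sorted.size(); i++) {
      FileRange prev = sorted.get(i - 1);
      FileRange next = sorted.get(i);
      // A zero-byte range never overlaps anything; the special case above
      // lets it be short-circuited to an empty buffer.
      if (prev.getLength() == 0 || next.getLength() == 0) {
        continue;
      }
      if (next.getOffset() < prev.getOffset() + prev.getLength()) {
        throw new IllegalArgumentException(
            "Overlapping ranges: " + prev + " and " + next);
      }
    }
  }
}
```

Sorting by offset keeps the check O(n log n) and means only adjacent ranges need comparing, which is what lets the zero-byte special case be handled without touching the file length at all.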