steveloughran commented on issue #584: HADOOP-16109. Parquet reading S3AFileSystem causes EOF URL: https://github.com/apache/hadoop/pull/584#issuecomment-471211465

Tested: S3 Ireland. All the new tests worked; failures in:

* `ITestAssumedRole`: my restricted-permission role (which works for 3.2+) didn't have the describeTables permission needed in branch-3.1.
* `ITestS3GuardToolDynamoDB.testDynamoDBInitDestroyCycle`: that one failed because I've moved that bucket to on-demand capacity, and attempting to change capacity fails. I think we cut that bit of code during the work on DDB throttling, as there's a limit to how often you can change capacity in a day, so the tests would fail if run too often.

```
org.apache.hadoop.fs.s3a.AWSServiceIOException: ProvisionTable on hwdev-steve-ireland-new: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: One or more parameter values were invalid: Neither ReadCapacityUnits nor WriteCapacityUnits can be specified when BillingMode is PAY_PER_REQUEST (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: M6K9JLKNI9PPUFKSG89HF5CLSFVV4KQNSO5AEMVJF66Q9ASUAAJG): One or more parameter values were invalid: Neither ReadCapacityUnits nor WriteCapacityUnits can be specified when BillingMode is PAY_PER_REQUEST (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: M6K9JLKNI9PPUFKSG89HF5CLSFVV4KQNSO5AEMVJF66Q9ASUAAJG)
    at org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:389)
    at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:181)
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
    at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:193)
    at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:215)
    at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.provisionTable(DynamoDBMetadataStore.java:1073)
    at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.provisionTableBlocking(DynamoDBMetadataStore.java:1088)
    at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.updateParameters(DynamoDBMetadataStore.java:1195)
    at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$SetCapacity.run(S3GuardTool.java:498)
    at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.exec(AbstractS3GuardToolTestBase.java:308)
    at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.exec(AbstractS3GuardToolTestBase.java:286)
    at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB.testDynamoDBInitDestroyCycle(ITestS3GuardToolDynamoDB.java:224)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: One or more parameter values were invalid: Neither ReadCapacityUnits nor WriteCapacityUnits can be specified when BillingMode is PAY_PER_REQUEST (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: M6K9JLKNI9PPUFKSG89HF5CLSFVV4KQNSO5AEMVJF66Q9ASUAAJG)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:2925)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:2901)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeUpdateTable(AmazonDynamoDBClient.java:2757)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.updateTable(AmazonDynamoDBClient.java:2733)
    at com.amazonaws.services.dynamodbv2.document.Table.updateTable(Table.java:371)
    at com.amazonaws.services.dynamodbv2.document.Table.updateTable(Table.java:463)
    at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.lambda$provisionTable$7(DynamoDBMetadataStore.java:1076)
    at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$2(Invoker.java:195)
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
    ... 24 more
```

I think I'm going to selectively cherry-pick more of the 3.2 AWS code back to 3.1; at least enough to keep all the tests happy, and address any other pressing concerns I have.
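The failure mode here is that DynamoDB rejects any `ReadCapacityUnits`/`WriteCapacityUnits` in an `UpdateTableRequest` once a table is on-demand; the fix in the 3.2+ line is essentially to probe the table's billing mode before attempting a capacity change. A minimal sketch of that guard logic (class and method names are my own for illustration, not the actual S3Guard code; the billing mode string would come from `DescribeTable`'s `BillingModeSummary`):

```java
// Hypothetical sketch, not the real S3Guard code: decide whether a
// provisioned-capacity update may be attempted, given the billing mode
// reported for the table. On-demand (PAY_PER_REQUEST) tables reject
// capacity updates with the ValidationException shown in the stack trace.
public class BillingModeGuard {

  /** Billing mode string DynamoDB reports for on-demand tables. */
  public static final String BILLING_MODE_PAY_PER_REQUEST = "PAY_PER_REQUEST";

  /**
   * @param billingMode billing mode from the table description; may be null
   *                    for tables created before on-demand existed, which
   *                    are always provisioned-capacity.
   * @return true iff a SetCapacity/provisionTable call can go ahead.
   */
  public static boolean capacityUpdateAllowed(String billingMode) {
    // equals() on the constant is null-safe: a null billing mode means
    // a provisioned table, so the update is allowed.
    return !BILLING_MODE_PAY_PER_REQUEST.equals(billingMode);
  }
}
```

With a guard like this, the `SetCapacity` path (and the test) could skip or downgrade to a no-op on on-demand tables instead of surfacing the 400 from the service.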