[ https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771911#comment-16771911 ]
Steve Loughran commented on HADOOP-15847:
-----------------------------------------

catching up with this

I couldn't find that bit of ScaleTestBase or the option fs.s3a.s3guard.ddb.table.scale.capacity.limit anywhere; I think that diff is either against a very old version of the code or it's a diff between two intermediate patches. Can you do a diff from trunk...HEAD for the full patch? thx.

* if a new config option is added for testing, it must go into {{org.apache.hadoop.fs.s3a.S3ATestConstants}}, with something in testing.md to mention it.
* the IDE shouldn't be converting a single static import to a .*: check your rules or strip those changes from patches.
* that deleteTable call should be in a finally clause in the test to guarantee it always happens. Yes, we do need that cleanup.

> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---------------------------------------------------------------
>
>                 Key: HADOOP-15847
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15847
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3, test
>    Affects Versions: 3.2.0
>            Reporter: Steve Loughran
>            Assignee: lqjacklee
>            Priority: Major
>         Attachments: HADOOP-15847-001.patch, HADOOP-15847-002.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a
> region, presumably from an interrupted test. Luckily
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less
> often



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
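[Editor's note: a minimal sketch of the finally-based cleanup the review asks for. The createTable/deleteTable methods here are hypothetical stand-ins, not the real S3Guard DynamoDBMetadataStore API; the point is only that the delete runs on every exit path.]

```java
// Sketch: guarantee table cleanup even when the test body throws,
// so no provisioned DynamoDB capacity is left running up bills.
public class TableCleanupSketch {

    // hypothetical stand-in state for "a DDB table exists in the region"
    static boolean tableExists = false;

    // hypothetical stand-ins for the real metastore create/delete calls
    static void createTable() { tableExists = true; }
    static void deleteTable() { tableExists = false; }

    static void runConcurrentTableCreationTest() {
        createTable();
        try {
            // test body: concurrent table-creation attempts would go here;
            // any assertion failure or exception still reaches the finally
        } finally {
            // always executed, interrupted or failed test included
            deleteTable();
        }
    }

    public static void main(String[] args) {
        runConcurrentTableCreationTest();
        System.out.println(tableExists ? "table leaked" : "table cleaned up");
    }
}
```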