DomGarguilo opened a new issue, #4045:
URL: https://github.com/apache/accumulo/issues/4045
**Describe the bug**
A table gets stuck in the `NEW` state in the scenario below. See the comments in the code.
**Versions (OS, Maven, Java, and others, as appropriate):**
- Affected version(s) of this project: 2.1.3-SNAPSHOT and 3.1.0-SNAPSHOT
**To Reproduce**
Run the test added in this branch: [DomGarguilo:accumulo:newTableStateBug](https://github.com/DomGarguilo/accumulo/tree/newTableStateBug). This was branched from 3.1.0-SNAPSHOT, but the changes can also be applied to 2.1. The test is pasted here in case the branch is ever deleted. It was added to ImportExportIT:
```java
@Test
public void testBug() throws Exception {
  try (AccumuloClient client = Accumulo.newClient().from(getClientProps()).build()) {
    String[] tableNames = getUniqueNames(3);
    String srcTable = tableNames[0], destTable1 = tableNames[1], destTable2 = tableNames[2];
    client.tableOperations().create(srcTable);

    try (BatchWriter bw = client.createBatchWriter(srcTable)) {
      for (int row = 0; row < 1000; row++) {
        Mutation m = new Mutation("row_" + String.format("%010d", row));
        for (int col = 0; col < 100; col++) {
          m.put(Integer.toString(col), "", Integer.toString(col * 2));
        }
        bw.addMutation(m);
      }
    }

    final Path testDir = new Path("/tmp/manysplitsTableExport");
    final Path tableExportDir = new Path(testDir, srcTable);
    final Path copyDir = new Path(testDir, "_tmp");

    FileSystem fs = cluster.getFileSystem();
    fs.delete(testDir, true); // remove the dir if it exists from a previous run
    fs.mkdirs(tableExportDir);
    fs.mkdirs(copyDir);

    log.info("Exporting table data to {}", tableExportDir);
    client.tableOperations().offline(srcTable);
    client.tableOperations().exportTable(srcTable, tableExportDir.toString());

    // copy files
    try (FSDataInputStream fsDataInputStream = fs.open(new Path(tableExportDir, "distcp.txt"));
        InputStreamReader inputStreamReader = new InputStreamReader(fsDataInputStream, UTF_8);
        BufferedReader reader = new BufferedReader(inputStreamReader)) {
      String file;
      while ((file = reader.readLine()) != null) {
        Path src = new Path(file);
        Path dest = new Path(copyDir, src.getName());
        FileUtil.copy(fs, src, fs, dest, false, true, cluster.getServerContext().getHadoopConf());
      }
    }

    // create the table before trying to import to it
    client.tableOperations().create(destTable1);

    ImportConfiguration importConfig =
        ImportConfiguration.builder().setKeepOffline(true).setKeepMappings(true).build();

    // since the table already exists we expect this to be thrown
    assertThrows(TableExistsException.class, () -> client.tableOperations()
        .importTable(destTable1, Set.of(copyDir.toString()), importConfig));

    client.tableOperations().importTable(destTable2, Set.of(copyDir.toString()), importConfig);

    Map<String,String> tableIdMap = getServerContext().tableOperations().tableIdMap();
    TableId tid = TableId.of(tableIdMap.get(destTable2));
    var context = getServerContext();

    // for some reason, when we try to import to a table that already exists and then use that
    // same dir to import to another table (that doesn't yet exist), the new table is created
    // but does not leave the NEW table state.
    Wait.waitFor(() -> context.getTableState(tid) != TableState.NEW, SECONDS.toMillis(30),
        SECONDS.toMillis(1), "Table stuck in NEW table state");
  }
}
```
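To run just this case, something like `mvn verify -Dit.test=ImportExportIT#testBug` from the test module should work, assuming the project's usual failsafe setup for ITs.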
**Expected behavior**
A table should not get stuck in the `NEW` table state.
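Independent of the IT harness, the stuck state can also be observed by reading the table's state znode directly. Below is a minimal sketch, assuming the 2.1/3.x znode layout of `/accumulo/<instance id>/tables/<table id>/state`; the connect string, instance id, and table id are placeholders to fill in for the cluster under test:

```java
import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.ZooKeeper;

public class TableStateCheck {
  public static void main(String[] args) throws Exception {
    // Placeholders: substitute the real values for the cluster under test.
    String connectString = "localhost:2181";
    String instanceId = "<instance-id>";
    String tableId = "<table-id>";

    // Assumed znode layout: /accumulo/<instance id>/tables/<table id>/state,
    // where the znode data is the TableState name (e.g. NEW, ONLINE, OFFLINE).
    String statePath = "/accumulo/" + instanceId + "/tables/" + tableId + "/state";

    ZooKeeper zk = new ZooKeeper(connectString, 30_000, event -> {});
    try {
      byte[] data = zk.getData(statePath, false, null);
      System.out.println("table state: " + new String(data, StandardCharsets.UTF_8));
    } finally {
      zk.close();
    }
  }
}
```

If the bug reproduces, this keeps printing `NEW` for destTable2's table id.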