Github user gparai commented on a diff in the pull request:
https://github.com/apache/drill/pull/653#discussion_r88306937
--- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/Metadata.java ---
@@ -495,31 +499,65 @@ private ParquetFileMetadata_v3 getParquetFileMetadata_v3(ParquetTableMetadata_v3
    * @param p
    * @throws IOException
    */
-  private void writeFile(ParquetTableMetadata_v3 parquetTableMetadata, Path p) throws IOException {
+  private void writeFile(ParquetTableMetadata_v3 parquetTableMetadata, String path) throws IOException {
     JsonFactory jsonFactory = new JsonFactory();
     jsonFactory.configure(Feature.AUTO_CLOSE_TARGET, false);
     jsonFactory.configure(JsonParser.Feature.AUTO_CLOSE_SOURCE, false);
     ObjectMapper mapper = new ObjectMapper(jsonFactory);
     SimpleModule module = new SimpleModule();
     module.addSerializer(ColumnMetadata_v3.class, new ColumnMetadata_v3.Serializer());
     mapper.registerModule(module);
-    FSDataOutputStream os = fs.create(p);
+
+    // If multiple clients are updating the metadata cache file concurrently, the cache file
+    // can get corrupted. To prevent this, write to a unique temporary file and then do an
+    // atomic rename.
+    UUID randomUUID = UUID.randomUUID();
+    Path tmpPath = new Path(path, new String(METADATA_FILENAME + "." + randomUUID));
+
+    FSDataOutputStream os = fs.create(tmpPath);
     mapper.writerWithDefaultPrettyPrinter().writeValue(os, parquetTableMetadata);
     os.flush();
     os.close();
+
+    // Use the FileContext API, as FileSystem rename is deprecated.
+    FileContext fileContext = FileContext.getFileContext(tmpPath.toUri());
+    Path finalPath = new Path(path, METADATA_FILENAME);
+
+    try {
+      fileContext.rename(tmpPath, finalPath, Options.Rename.OVERWRITE);
+    } catch (Exception e) {
+      logger.info("Rename from {} to {} failed", tmpPath.toString(), finalPath.toString(), e);
+    }
   }

-  private void writeFile(ParquetTableMetadataDirs parquetTableMetadataDirs, Path p) throws IOException {
+  private void writeFile(ParquetTableMetadataDirs parquetTableMetadataDirs, String path) throws IOException {
     JsonFactory jsonFactory = new JsonFactory();
     jsonFactory.configure(Feature.AUTO_CLOSE_TARGET, false);
     jsonFactory.configure(JsonParser.Feature.AUTO_CLOSE_SOURCE, false);
     ObjectMapper mapper = new ObjectMapper(jsonFactory);
     SimpleModule module = new SimpleModule();
     mapper.registerModule(module);
-    FSDataOutputStream os = fs.create(p);
+
+    // If multiple clients are updating the metadata cache file concurrently, the cache file
+    // can get corrupted. To prevent this, write to a unique temporary file and then do an
+    // atomic rename.
+    UUID randomUUID = UUID.randomUUID();
+    Path tmpPath = new Path(path, new String(METADATA_DIRECTORIES_FILENAME + "." + randomUUID));
+
+    FSDataOutputStream os = fs.create(tmpPath);
     mapper.writerWithDefaultPrettyPrinter().writeValue(os, parquetTableMetadataDirs);
     os.flush();
     os.close();
+
+    // Use the FileContext API, as FileSystem rename is deprecated.
+    FileContext fileContext = FileContext.getFileContext(tmpPath.toUri());
+    Path finalPath = new Path(path, METADATA_DIRECTORIES_FILENAME);
+
+    try {
+      fileContext.rename(tmpPath, finalPath, Options.Rename.OVERWRITE);
+    } catch (Exception e) {
+      logger.info("Rename from {} to {} failed", tmpPath.toString(), finalPath.toString(), e);
+    }
--- End diff ---
It looks like the function still throws an IOException. Should we figure out which code throws it and handle it there if it is related to the tmp file creation?
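The diff above is the usual write-to-a-unique-temp-file-then-atomic-rename pattern. As a minimal, self-contained sketch of that pattern, with one possible answer to the IOException question (clean up the temp file on failure, then rethrow): this uses plain `java.nio` standing in for Hadoop's `FileContext`, and the class and method names here are hypothetical, not Drill's.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.UUID;

public class AtomicWriteSketch {
  // Hypothetical helper illustrating write-temp-then-rename with java.nio
  // instead of Hadoop's FileContext/FSDataOutputStream.
  static void writeAtomically(Path dir, String fileName, String content) throws IOException {
    // Unique temp name, so concurrent writers never clobber each other's work.
    Path tmpPath = dir.resolve(fileName + "." + UUID.randomUUID());
    try {
      Files.write(tmpPath, content.getBytes());
      // The move either fully installs the new file or fails; readers never
      // observe a half-written cache file.
      Files.move(tmpPath, dir.resolve(fileName),
          StandardCopyOption.REPLACE_EXISTING, StandardCopyOption.ATOMIC_MOVE);
    } catch (IOException e) {
      // If temp-file creation or the rename fails, remove any leftover temp
      // file and propagate the IOException to the caller.
      Files.deleteIfExists(tmpPath);
      throw e;
    }
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("metadata_cache");
    writeAtomically(dir, ".drill.parquet_metadata", "{\"metadata_version\":\"v3\"}");
    System.out.println(new String(Files.readAllBytes(dir.resolve(".drill.parquet_metadata"))));
    // prints {"metadata_version":"v3"}
  }
}
```

Rethrowing keeps the method's existing `throws IOException` contract intact, while the cleanup ensures a failed writer does not leave stray `<name>.<uuid>` files in the metadata directory.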