jamesmudd commented on a change in pull request #2164:
URL: https://github.com/apache/drill/pull/2164#discussion_r584348367



##########
File path: contrib/format-hdf5/src/main/java/org/apache/drill/exec/store/hdf5/HDF5BatchReader.java
##########
@@ -237,37 +242,37 @@ public boolean open(FileSchemaNegotiator negotiator) {
   * This function is called when the default path is set and the data set is a single dimension.
    * This function will create an array of one dataWriter of the
    * correct datatype
-   * @param dsInfo The HDF5 dataset information
+   * @param dataset The HDF5 dataset
    */
-  private void buildSchemaFor1DimensionalDataset(HDF5DataSetInformation dsInfo) {
-    TypeProtos.MinorType currentDataType = HDF5Utils.getDataType(dsInfo);
+  private void buildSchemaFor1DimensionalDataset(Dataset dataset) {
+    MinorType currentDataType = HDF5Utils.getDataType(dataset.getDataType());
 
     // Case for null or unknown data types:
     if (currentDataType == null) {
-      logger.warn("Couldn't add {}", dsInfo.getTypeInformation().tryGetJavaType().toGenericString());
+      logger.warn("Couldn't add {}", dataset.getJavaType().getName());
       return;
     }
     dataWriters.add(buildWriter(currentDataType));
   }
 
-  private HDF5DataWriter buildWriter(TypeProtos.MinorType dataType) {
+  private HDF5DataWriter buildWriter(MinorType dataType) {
     switch (dataType) {
-      case GENERIC_OBJECT:
-        return new HDF5EnumDataWriter(hdf5Reader, writerSpec, readerConfig.defaultPath);
+      /*case GENERIC_OBJECT:
+        return new HDF5EnumDataWriter(hdfFile, writerSpec, readerConfig.defaultPath);*/

Review comment:
       Yes, enum types should be supported (since https://github.com/jamesmudd/jhdf/issues/121). If this isn't working for you, please open an issue with an example file. See https://github.com/jamesmudd/jhdf/blob/master/jhdf/src/test/java/io/jhdf/dataset/EnumDatasetTest.java for a usage example.
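For illustration, here is a minimal, self-contained sketch of where the commented-out `GENERIC_OBJECT` branch would slot back into `buildWriter` once jhdf enum support is confirmed. The `MinorType` enum and the non-enum writer names below are stand-ins for this sketch, not Drill's actual classes; only `HDF5EnumDataWriter` appears in the quoted diff.

```java
// Hypothetical stand-in for Drill's TypeProtos.MinorType (sketch only).
enum MinorType { INT, FLOAT8, VARCHAR, GENERIC_OBJECT }

public class WriterDispatchSketch {

  // Returns the name of the writer that would handle the given data type.
  // In Drill's HDF5BatchReader this switch constructs HDF5DataWriter instances.
  static String buildWriterName(MinorType dataType) {
    switch (dataType) {
      case GENERIC_OBJECT:
        // HDF5 enum datasets map to GENERIC_OBJECT; jhdf reads them as
        // String[] (see EnumDatasetTest linked above), so the previously
        // commented-out HDF5EnumDataWriter branch would be re-enabled here.
        return "HDF5EnumDataWriter";
      case VARCHAR:
        return "HDF5VarcharDataWriter"; // hypothetical name for the sketch
      default:
        return "HDF5NumericDataWriter"; // hypothetical catch-all for the sketch
    }
  }

  public static void main(String[] args) {
    // Enum-typed datasets route to the enum writer.
    System.out.println(buildWriterName(MinorType.GENERIC_OBJECT));
  }
}
```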

##########
File path: contrib/format-hdf5/src/main/java/org/apache/drill/exec/store/hdf5/HDF5BatchReader.java
##########
@@ -323,53 +328,20 @@ private void openFile(FileSchemaNegotiator negotiator) throws IOException {
     InputStream in = null;
     try {
      in = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
-      IHDF5Factory factory = HDF5FactoryProvider.get();
-      inFile = convertInputStreamToFile(in);
-      hdf5Reader = factory.openForReading(inFile);
+      hdfFile = HdfFile.fromInputStream(in);

Review comment:
       Choosing the approach based on the file size seems like a good idea. I will open a PR here once auto-deletion support lands in jhdf, but for now this is fine.
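The size-based choice could be sketched as below. This is a hypothetical helper, not jhdf or Drill API: small streams would be read fully in memory (as `HdfFile.fromInputStream(in)` does in the diff above), while large files would instead be spilled to a temporary file that is deleted on close. The 64 MiB cutoff is an assumption; a real threshold would need benchmarking.

```java
public class Hdf5OpenStrategy {

  // Hypothetical threshold for this sketch: files under 64 MiB are small
  // enough to buffer fully in memory; larger ones should spill to disk.
  static final long IN_MEMORY_LIMIT = 64L * 1024 * 1024;

  // Decide whether to read the whole stream into memory
  // (e.g. HdfFile.fromInputStream) or write it to a temp file first.
  static boolean useInMemory(long fileSizeBytes) {
    return fileSizeBytes >= 0 && fileSizeBytes < IN_MEMORY_LIMIT;
  }

  public static void main(String[] args) {
    System.out.println(useInMemory(1024));     // small file: buffer in memory
    System.out.println(useInMemory(1L << 31)); // 2 GiB: spill to temp file
  }
}
```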




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
