cgivre commented on a change in pull request #2164:
URL: https://github.com/apache/drill/pull/2164#discussion_r582487354



##########
File path: contrib/format-hdf5/src/main/java/org/apache/drill/exec/store/hdf5/HDF5BatchReader.java
##########
@@ -198,27 +203,27 @@ public boolean open(FileSchemaNegotiator negotiator) {
       negotiator.tableSchema(builder.buildSchema(), false);
 
       loader = negotiator.build();
-      dimensions = new long[0];
+      dimensions = new int[0];
       rowWriter = loader.writer();
 
     } else {
      // This is the case when the default path is specified. Since the user is explicitly asking for a dataset
      // Drill can obtain the schema by getting the datatypes below and ultimately mapping that schema to columns
-      HDF5DataSetInformation dsInfo = hdf5Reader.object().getDataSetInformation(readerConfig.defaultPath);
-      dimensions = dsInfo.getDimensions();
+      Dataset dataSet = hdfFile.getDatasetByPath(readerConfig.defaultPath);
+      dimensions = dataSet.getDimensions();
 
       loader = negotiator.build();
       rowWriter = loader.writer();
       writerSpec = new WriterSpec(rowWriter, negotiator.providedSchema(),
           negotiator.parentErrorContext());
       if (dimensions.length <= 1) {

Review comment:
       There are unit tests for most of the data types, including scalars.
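
For context, the diff's `dimensions.length <= 1` branch relies on the convention that a dataset's dimensions array is empty for a scalar and has one entry per axis otherwise (in the new code, `getDimensions()` returns `int[]` rather than `long[]`). A minimal self-contained sketch of that distinction; the `classify` helper is hypothetical and not part of Drill or jHDF:

```java
public class DimensionsSketch {
  // Hypothetical helper: an empty dimensions array denotes a scalar
  // dataset, a single entry a 1D array, and multiple entries an
  // n-dimensional dataset.
  static String classify(int[] dimensions) {
    if (dimensions.length == 0) {
      return "scalar";
    } else if (dimensions.length == 1) {
      return "1D array";
    } else {
      return "multi-dimensional";
    }
  }

  public static void main(String[] args) {
    System.out.println(classify(new int[0]));       // scalar
    System.out.println(classify(new int[]{10}));    // 1D array
    System.out.println(classify(new int[]{3, 4}));  // multi-dimensional
  }
}
```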




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org