GitHub user kavinderd commented on a diff in the pull request:

    https://github.com/apache/incubator-hawq/pull/1224#discussion_r113817022
  
    --- Diff: pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/HiveDataFragmenter.java ---
    @@ -466,7 +466,14 @@ private boolean buildSingleFilter(Object filter,
          */
         @Override
         public FragmentsStats getFragmentsStats() throws Exception {
    -        throw new UnsupportedOperationException(
    -                "ANALYZE for Hive plugin is not supported");
    +        Metadata.Item tblDesc = HiveUtilities.extractTableFromName(inputData.getDataSource());
    +        Table tbl = HiveUtilities.getHiveTable(client, tblDesc);
    +        Metadata metadata = new Metadata(tblDesc);
    +        HiveUtilities.getSchema(tbl, metadata);
    +
    +        long split_count = Long.parseLong(tbl.getParameters().get("numFiles"));
    --- End diff ---
    
    @sansanichfb @shivzone It is possible for a file to be larger than an HDFS block/split size. However, I think this is an anomaly, especially with ORC, where creating many smaller files is preferred to increase concurrency and parallelism. Given that this is an edge case, and that HAWQ only uses the split count to calculate its sampling ratio for statistics collection, is the current implementation acceptable?
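
    As an illustration of the tradeoff, here is a minimal, hypothetical sketch (not the PR's code; the helper name is made up) of how the "numFiles" estimate could be made defensive against oversized files, assuming the standard Hive stats parameters "numFiles" and "totalSize" are present in the table metadata:

    import java.util.Map;

    // Hypothetical helper, not part of the PR: estimates the split count
    // from table-level Hive stats while guarding against files that span
    // more than one HDFS block.
    public final class SplitCountEstimator {

        // Assumed HDFS block size; a real implementation would read
        // dfs.blocksize from the cluster configuration.
        private static final long BLOCK_SIZE = 128L * 1024 * 1024;

        public static long estimateSplits(Map<String, String> params) {
            long numFiles = Long.parseLong(params.get("numFiles"));
            long totalSize = Long.parseLong(params.get("totalSize"));
            // One split per file is the PR's assumption; if the total size
            // implies more blocks than files, at least one file crosses a
            // block boundary, so take the larger of the two estimates.
            long byBlocks = (totalSize + BLOCK_SIZE - 1) / BLOCK_SIZE;
            return Math.max(numFiles, byBlocks);
        }
    }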
    
    I personally don't think that getting an exact number of splits for the ANALYZE case is worth running a function like https://github.com/apache/incubator-hawq/blob/master/pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/HiveDataFragmenter.java#L284 just to count them; a sketch of what that enumeration would cost follows.
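
    For contrast, an exact count would mean actually enumerating the splits, roughly along these lines (a hypothetical sketch against Hadoop's mapred API, not the linked code verbatim; wiring the input-format class name and table location out of the metastore is assumed):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.InputFormat;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.util.ReflectionUtils;

    public final class ExactSplitCounter {

        // Hypothetical: count splits the expensive way, by asking the input
        // format to compute them. This lists every file and touches the
        // NameNode, which is the cost argued against above.
        static long exactSplitCount(String inputFormatName, String location,
                                    JobConf jobConf) throws Exception {
            Class<?> clazz = Class.forName(inputFormatName);
            InputFormat<?, ?> format =
                    (InputFormat<?, ?>) ReflectionUtils.newInstance(clazz, jobConf);
            FileInputFormat.setInputPaths(jobConf, new Path(location));
            return format.getSplits(jobConf, 1).length;
        }
    }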

