> On Sept. 29, 2015, 12:59 p.m., Jinfeng Ni wrote:
> > contrib/storage-hive/core/src/main/java/org/apache/drill/exec/planner/sql/logical/ConvertHiveParquetScanToDrillParquetScan.java, line 267
> > <https://reviews.apache.org/r/38796/diff/3/?file=1087071#file1087071line267>
> >
> >     I have one question about the partition column. 
> >     
> >     Let's say Hive has "year" as a partition column. For the value 2015, 
> > does Hive put "year=2015" as the directory name? If that's the case, then 
> > "year=2015" would be treated as "dir0" by the native parquet reader, 
> > instead of "2015"? Do we need to handle the difference in partition 
> > columns between the hive scan and the native scan?

Hive already stores the partition values (e.g., 2015) as strings in the 
metastore. If the partition doesn't have a location defined in the ADD 
PARTITION command, Hive creates a default partition location by appending 
partcol1=value1/partcol2=value2 to the table location. In our case we get the 
dir0 values from the metastore directory and pass them to ScanBatch to fill up 
the partition vectors, so we just need to cast them from VARCHAR to the 
partition column type.
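To make that concrete, here is a minimal Java sketch of the two points above: 
the default partition path layout and the string-to-type cast. The class and 
method names are hypothetical illustrations, not the actual Hive or Drill code:

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.StringJoiner;

    // Hypothetical illustration only; not the Drill or Hive implementation.
    public class HivePartitionExample {

      // Default partition sub-path Hive appends to the table location,
      // e.g. {year=2015, month=09} -> "year=2015/month=09".
      static String defaultPartitionPath(Map<String, String> partitionValues) {
        StringJoiner path = new StringJoiner("/");
        for (Map.Entry<String, String> e : partitionValues.entrySet()) {
          path.add(e.getKey() + "=" + e.getValue());
        }
        return path.toString();
      }

      // The metastore returns every partition value as a string ("2015"),
      // so the reader casts it to the declared partition column type before
      // filling the partition vector.
      static Object castPartitionValue(String value, String hiveType) {
        switch (hiveType) {
          case "int":    return Integer.parseInt(value);
          case "bigint": return Long.parseLong(value);
          case "double": return Double.parseDouble(value);
          default:       return value; // string and other types pass through
        }
      }

      public static void main(String[] args) {
        Map<String, String> parts = new LinkedHashMap<>();
        parts.put("year", "2015");
        System.out.println(defaultPartitionPath(parts));       // year=2015
        System.out.println(castPartitionValue("2015", "int")); // 2015 (Integer)
      }
    }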


- Venki


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/38796/#review100998
-----------------------------------------------------------


On Sept. 29, 2015, 9:23 a.m., Venki Korukanti wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/38796/
> -----------------------------------------------------------
> 
> (Updated Sept. 29, 2015, 9:23 a.m.)
> 
> 
> Review request for drill and Jinfeng Ni.
> 
> 
> Repository: drill-git
> 
> 
> Description
> -------
> 
> Please see JIRA DRILL-3209 for details.
> 
> 
> Diffs
> -----
> 
>   contrib/storage-hive/core/src/main/java/org/apache/drill/exec/planner/sql/HivePartitionDescriptor.java 11c6455 
>   contrib/storage-hive/core/src/main/java/org/apache/drill/exec/planner/sql/logical/ConvertHiveParquetScanToDrillParquetScan.java PRE-CREATION 
>   contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/HiveDrillNativeParquetScan.java PRE-CREATION 
>   contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/HiveDrillNativeParquetSubScan.java PRE-CREATION 
>   contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/HiveDrillNativeScanBatchCreator.java PRE-CREATION 
>   contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/HiveScan.java 9ada569 
>   contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/HiveStoragePlugin.java 23aa37f 
>   contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/HiveSubScan.java 2181c2a 
>   contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/schema/DrillHiveTable.java b459ee4 
>   contrib/storage-hive/core/src/test/java/org/apache/drill/exec/TestHivePartitionPruning.java f0b4bdc 
>   contrib/storage-hive/core/src/test/java/org/apache/drill/exec/TestHiveProjectPushDown.java 6423a36 
>   contrib/storage-hive/core/src/test/java/org/apache/drill/exec/hive/TestHiveStorage.java 9211af6 
>   contrib/storage-hive/core/src/test/java/org/apache/drill/exec/hive/TestInfoSchemaOnHiveStorage.java 6118be5 
>   contrib/storage-hive/core/src/test/java/org/apache/drill/exec/store/hive/HiveTestDataGenerator.java 34a7ed6 
>   exec/java-exec/src/main/java/org/apache/drill/exec/ExecConstants.java 66f9f03 
>   exec/java-exec/src/main/java/org/apache/drill/exec/server/options/SystemOptionManager.java 5838bd1 
> 
> Diff: https://reviews.apache.org/r/38796/diff/
> 
> 
> Testing
> -------
> 
> Added unit tests covering reads of all supported types, project pushdown, 
> and partition pruning. Manually tested with Hive tables containing large 
> amounts of data (these tests will become part of the regression suite).
> 
> 
> Thanks,
> 
> Venki Korukanti
> 
>
