> On 2011-03-16 16:37:44, Ning Zhang wrote:
> > trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java,
> >  line 244
> > <https://reviews.apache.org/r/489/diff/1/?file=13887#file13887line244>
> >
> >     I agree in general we should use a generic interface in the 
> > declaration, but I'd prefer leave the LinkedHashMap here. The reason is 
> > that what we need for the partSpec is an ordered map where the order of 
> > iterator.getNext() should be the same order of elements being inserted. 
> > Unfortunately Java collections doesn't have this interface but just an 
> > implementation. We could declare an interface just for that, but that 
> > should be a different issue.

Using the generic interface (java.util.Map in this context) in the declaration
does not affect the expected insertion-order (FIFO) iteration behavior; that is
a property of the LinkedHashMap instance, not of the reference type. Further,
when this map instance is passed around to the getPartition(...) method, the
reference type is already the generic Map, not the implementation type.
Here is a sample program to illustrate that using Map as the reference type for
a LinkedHashMap preserves insertion order:

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class Sample {
    public static void main(String[] args) {
        // Declared as Map, instantiated as LinkedHashMap: iteration order
        // still follows insertion order.
        Map<String, String> map = new LinkedHashMap<String, String>();

        map.put("A", "A");
        map.put("B", "B");
        map.put("C", "C");
        map.put("D", "D");
        map.put("E", "E");
        map.put("F", "F");
        map.put("G", "G");

        // Both iteration styles print the entries in insertion order (A..G).
        Iterator<String> iter = map.keySet().iterator();
        while (iter.hasNext()) {
            System.out.println(map.get(iter.next()));
        }

        for (Map.Entry<String, String> entry : map.entrySet()) {
            System.out.println(entry.getKey() + " " + entry.getValue());
        }
    }
}


> On 2011-03-16 16:37:44, Ning Zhang wrote:
> > trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java,
> >  line 216
> > <https://reviews.apache.org/r/489/diff/1/?file=13887#file13887line216>
> >
> >     I think explicitly catching HiveException and throw it away will save 
> > the creation of HiveException in the catch(Exception) block, right?

Fine.
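For reference, a minimal sketch of the suggested catch ordering: re-throwing
HiveException directly avoids allocating a second HiveException in the
catch(Exception) block. The HiveException class below is a stand-in for
illustration, not the actual org.apache.hadoop.hive.ql.metadata.HiveException,
and prune()/doWork() are hypothetical helpers:

```java
public class CatchPattern {
    // Stand-in for the real HiveException, just for this sketch.
    static class HiveException extends Exception {
        HiveException(String msg) { super(msg); }
        HiveException(Throwable cause) { super(cause); }
    }

    static void doWork() throws HiveException {
        throw new HiveException("partition pruning failed");
    }

    static void prune() throws HiveException {
        try {
            doWork();
        } catch (HiveException e) {
            // Re-throw as-is: no new HiveException object is created.
            throw e;
        } catch (Exception e) {
            // Wrap only unexpected exception types.
            throw new HiveException(e);
        }
    }

    public static void main(String[] args) {
        try {
            prune();
        } catch (HiveException e) {
            System.out.println(e.getMessage());
        }
    }
}
```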


- M


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/489/#review333
-----------------------------------------------------------


On 2011-03-16 16:50:15, Ning Zhang wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/489/
> -----------------------------------------------------------
> 
> (Updated 2011-03-16 16:50:15)
> 
> 
> Review request for hive.
> 
> 
> Summary
> -------
> 
>     *  expose HiveMetaStoreClient.listPartitionsByFilter() to Hive.java so 
> that PartitionPruner can use that for certain partition predicates.
>     * only allows {=, AND, OR} in the partition predicates that can be pushed 
> down to JDO filtering.
> 
> 
> Diffs
> -----
> 
>   
> trunk/metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java
>  1081948 
>   trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 
> 1081948 
>   
> trunk/metastore/src/java/org/apache/hadoop/hive/metastore/parser/ExpressionTree.java
>  1081948 
>   trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java 1081948 
>   trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 1081948 
>   
> trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java
>  1081948 
>   trunk/ql/src/test/queries/clientpositive/ppr_pushdown.q PRE-CREATION 
>   trunk/ql/src/test/queries/clientpositive/ppr_pushdown2.q PRE-CREATION 
>   trunk/ql/src/test/results/clientpositive/ppr_pushdown.q.out PRE-CREATION 
>   trunk/ql/src/test/results/clientpositive/ppr_pushdown2.q.out PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/489/diff
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Ning
> 
>
