-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10143/#review18562
-----------------------------------------------------------


Hi Vasanth,
thank you very much for your time. Do you think it would be possible to 
add a test case to ensure that this issue won't regress in the future?


execution/mapreduce/src/main/java/org/apache/sqoop/job/etl/HdfsExportPartitioner.java
<https://reviews.apache.org/r/10143/#comment38915>

    I'm concerned about losing precision when converting long to double for 
really big values, so I would prefer to avoid the conversion to double if 
possible. What about checking whether the remainder is non-zero and adding 1 
conditionally? Something like:
    
    if (numInputBytes % context.getMaxPartitions() != 0) {
        maxSplitSize += 1;
    }
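
    For illustration, a self-contained sketch (the names are illustrative, 
    not the actual Sqoop code) of why the cast to double loses precision 
    for large longs and how the integer-only version stays exact:

    public class CeilDivDemo {
        // Ceiling of numInputBytes / maxPartitions using integer math only.
        static long ceilDiv(long numInputBytes, long maxPartitions) {
            long maxSplitSize = numInputBytes / maxPartitions;
            if (numInputBytes % maxPartitions != 0) {
                maxSplitSize += 1;
            }
            return maxSplitSize;
        }

        public static void main(String[] args) {
            // A double significand holds 53 bits, so 2^53 + 1 is the
            // smallest positive long a double cannot represent exactly.
            long big = (1L << 53) + 1;
            System.out.println((long) (double) big == big); // false: precision lost
            System.out.println(ceilDiv(big, 10));           // exact integer result
        }
    }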


Jarcec

- Jarek Cecho


On March 26, 2013, 9:42 p.m., vasanthkumar wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/10143/
> -----------------------------------------------------------
> 
> (Updated March 26, 2013, 9:42 p.m.)
> 
> 
> Review request for Sqoop.
> 
> 
> Description
> -------
> 
> HdfsExportPartitioner does not always respect the maximal number of partitions.
> Modified the partitioning logic so that the configured maximum is honored.
> 
> 
> This addresses bug sqoop-844.
>     https://issues.apache.org/jira/browse/sqoop-844
> 
> 
> Diffs
> -----
> 
>   execution/mapreduce/src/main/java/org/apache/sqoop/job/etl/HdfsExportPartitioner.java 115ca54 
> 
> Diff: https://reviews.apache.org/r/10143/diff/
> 
> 
> Testing
> -------
> 
> Done
> 
> 
> Thanks,
> 
> vasanthkumar
> 
>
