Github user rtreffer commented on the pull request:

    https://github.com/apache/spark/pull/6796#issuecomment-115620818
  
    @liancheng it's starting to work (it compiles and a minimal initial test passed, but no guarantees). A few points still need feedback:
    - How is the compatibility mode intended to work? The settings are currently private, but I'd like to store Decimal(19), so is lifting the 18-digit precision limit correct for compatibility mode?
    - INT32/INT64 are only used when the byte length required for the precision matches their width exactly (4 or 8 bytes). FIXED_LEN_BYTE_ARRAY is therefore used e.g. to store 6-byte values (see the sketch after this list).
    - FIXED_LEN_BYTE_ARRAY means I have to create an array of exactly the right size. I've increased scratch_bytes. I'm not very happy with that code path; do you have better ideas?
    - BYTES_FOR_PRECISION needs to handle any precision, so I've reworked that code. Again, suggestions are welcome.
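    
    To make the two points above concrete, here is a minimal sketch of the logic being described. The names (`minBytesForPrecision`, `physicalType`) and the string-valued return are illustrative only, not the actual patch:
    
    ```scala
    // Minimal bytes needed to hold any unscaled decimal of the given
    // precision as a two's-complement integer: a signed n-byte integer
    // holds values up to 2^(8n - 1) - 1, so grow n until 10^precision - 1 fits.
    def minBytesForPrecision(precision: Int): Int = {
      var numBytes = 1
      while (math.pow(2.0, 8 * numBytes - 1) < math.pow(10.0, precision)) {
        numBytes += 1
      }
      numBytes
    }
    
    // The dispatch described above: INT32/INT64 only when the required
    // byte length matches their width exactly, FIXED_LEN_BYTE_ARRAY otherwise.
    def physicalType(precision: Int): String = minBytesForPrecision(precision) match {
      case 4 => "INT32"
      case 8 => "INT64"
      case n => s"FIXED_LEN_BYTE_ARRAY($n)"
    }
    ```
    
    Under this scheme precisions 12 to 14 all need 6 bytes (hence the 6-byte FIXED_LEN_BYTE_ARRAY example), and Decimal(19) needs 9 bytes, so it also falls into the FIXED_LEN_BYTE_ARRAY case.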
    
    The patch is now much smaller and less intrusive. Looks like the refactoring was well worth the effort!

