GitHub user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/6222#issuecomment-104057422
  
    I had put this patch on hold while finishing up some 1.4.0 stuff, but I 
plan to return to it soon.  The large number of test failures it causes 
suggests that, if we're not careful, this change could break user code in bad 
ways.  I think the main issue is that the old code implicitly passed through 
primitives of the wrong type and allowed implicit numeric conversions to take 
place.  For instance, if I had a table that expected double-valued columns, it 
was fine to pass integers, since we never did any Scala -> Catalyst conversion 
for integers and lower-level code implicitly handled the widening somewhere.
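    
    To make the old behavior concrete, here's a hypothetical illustration 
(not code from this patch; the column name is made up and `sqlContext` is 
assumed to be in scope, e.g. in the spark-shell): a schema declaring a 
`DoubleType` column gets populated with `Int` values, which the old 
conversion path tolerated because the integers were passed through 
unconverted and widened somewhere lower down.
    
    ```scala
    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types._
    
    // Schema declares a double-valued column...
    val schema = StructType(Seq(StructField("price", DoubleType, nullable = false)))
    // ...but the rows carry Ints.  No Scala -> Catalyst conversion ran for
    // integers, so lower-level code absorbed the Int -> Double widening.
    // (Assumes the spark-shell's `sqlContext` is in scope.)
    val rows = sqlContext.sparkContext.parallelize(Seq(Row(1), Row(2)))
    val df = sqlContext.createDataFrame(rows, schema)
    ```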
    
    I'll see if I can come up with a clean way to keep allowing these implicit 
numeric conversions while guaranteeing that they happen as part of the inbound 
Row conversion rather than implicitly in lower-level code.
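    
    A minimal sketch of the direction I have in mind (my own illustration, 
not the final implementation): do the numeric widening eagerly, keyed off the 
target Catalyst `DataType`, as part of the inbound Row conversion, so that 
nothing below it ever sees a mismatched primitive.
    
    ```scala
    import org.apache.spark.sql.types._
    
    // Widen a single field value to match its declared Catalyst type.
    // Anything already of the right type passes through unchanged; a real
    // version would also reject values that can't be coerced rather than
    // silently passing them along.
    def coerce(value: Any, dataType: DataType): Any = (value, dataType) match {
      case (null, _)              => null
      case (i: Int, LongType)     => i.toLong
      case (i: Int, DoubleType)   => i.toDouble
      case (l: Long, DoubleType)  => l.toDouble
      case (f: Float, DoubleType) => f.toDouble
      case (v, _)                 => v
    }
    ```
    
    Applied per-field against the schema while converting each incoming Row, 
something like this would preserve the old leniency while making the coercion 
explicit and centralized instead of scattered through lower-level code.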

