Github user zasdfgbnm commented on the issue:
    Hi @holdenk , I think I'm done. I created a test for this issue, and from 
that test I found that Spark has the same issue not only for float but also 
for byte and short. After several commits, `./python/run-tests 
--modules=pyspark-sql` passes on my machine.
    To be clear, only arrays with typecodes `b,h,i,l,f,d` are supported. 
Arrays with typecode `u` are not supported because it "corresponds to 
Python's obsolete unicode character"; arrays with typecodes `B,H,I,L` are not 
supported because there are no unsigned types on the JVM; and arrays with 
typecodes `q,Q` are not supported because they "are available only if the 
platform C compiler used to build Python supports C long long", which makes 
supporting them complicated. For the unsupported typecodes, a TypeError is 
raised if the user tries to create a DataFrame from such an array.
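    The split above can be sketched with Python's standard `array` module. 
The helper below is illustrative only (it is not the actual patch's code): it 
checks a typecode against the supported set and raises a TypeError for the 
rest, mirroring the behavior described.

    ```python
    from array import array

    # Typecodes that map cleanly onto JVM primitives per the comment above:
    # signed integers b/h/i/l and floats f/d. 'u', unsigned B/H/I/L, and
    # platform-dependent q/Q are rejected.
    SUPPORTED_TYPECODES = {'b', 'h', 'i', 'l', 'f', 'd'}

    def check_array_typecode(arr):
        """Hypothetical check: raise TypeError for typecodes with no JVM
        equivalent, as the patch is described to do."""
        if arr.typecode not in SUPPORTED_TYPECODES:
            raise TypeError("unsupported array typecode: %r" % arr.typecode)
        return arr.typecode

    print(check_array_typecode(array('d', [1.0, 2.0])))  # 'd' is supported
    try:
        check_array_typecode(array('B', [1, 2]))  # unsigned: no JVM type
    except TypeError as e:
        print("rejected:", e)
    ```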
    Would you, or another developer, review my code and get it merged?
