jorisvandenbossche commented on code in PR #39535:
URL: https://github.com/apache/arrow/pull/39535#discussion_r1446239096
##########
python/pyarrow/pandas_compat.py:
##########

@@ -789,9 +788,10 @@ def table_to_dataframe(
 # Set of the string repr of all numpy dtypes that can be stored in a pandas
 # dataframe (complex not included since not supported by Arrow)
 _pandas_supported_numpy_types = {
-    str(np.dtype(typ))
-    for typ in (_np_sctypes['int'] + _np_sctypes['uint'] + _np_sctypes['float'] +
-                ['object', 'bool'])
+    "int8", "int16", "int32", "int64",
+    "uint8", "uint16", "uint32", "uint64",
+    "float16", "float32", "float64", "float128",

Review Comment:
   On second thought: those types are not Arrow types, but the numpy dtypes stored in the pandas metadata, i.e. the original dtype of the pandas DataFrame column that was converted to a pyarrow.Table. So in theory you can have a pandas DataFrame with a float128 column and get that dtype recorded in the metadata (and then having it included in the list above is fine).
   That said, this is currently not actually possible: we haven't implemented the conversion of numpy float128 to a pyarrow float array, so converting such a DataFrame currently fails.
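   As a minimal sketch (not part of the PR) of what this means in practice: the strings in `_pandas_supported_numpy_types` correspond to the `numpy_type` entries in the `pandas` metadata that `Table.from_pandas` attaches to the schema, i.e. the original numpy dtype of each DataFrame column rather than an Arrow type.

   ```python
   import json

   import numpy as np
   import pandas as pd
   import pyarrow as pa

   # Converting a DataFrame records the original numpy dtype of each column
   # in the Table's "pandas" schema metadata.
   df = pd.DataFrame({"a": np.array([1.5, 2.5], dtype="float64")})
   table = pa.Table.from_pandas(df)

   pandas_meta = json.loads(table.schema.metadata[b"pandas"])
   print(pandas_meta["columns"][0]["numpy_type"])  # -> "float64"

   # A float128 column, by contrast, cannot currently be converted at all:
   #   pa.Table.from_pandas(pd.DataFrame({"a": np.array([1.5], dtype="float128")}))
   # raises, because numpy float128 -> Arrow conversion is not implemented.
   ```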