[
https://issues.apache.org/jira/browse/ARROW-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16869281#comment-16869281
]
Joris Van den Bossche commented on ARROW-2136:
----------------------------------------------
For {{Table.from_pandas}}, in the end, the actual conversion is done column by
column with {{pa.array}}.
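As a minimal sketch of that per-column conversion (the column name here is just for illustration):
{code}
import numpy as np
import pandas as pd
import pyarrow as pa

df = pd.DataFrame({"a": [1.2, 2.1, np.nan]})

# Each column is converted individually; with pandas null semantics
# (from_pandas=True, which Table.from_pandas uses) NaN becomes a null
# that the resulting array reports via null_count.
arr = pa.array(df["a"], type=pa.float64(), from_pandas=True)
print(arr.null_count)  # -> 1
{code}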
So the question for me is: do we want to bake this into {{pa.array}} (e.g. add
a {{nullable=True}} default keyword to {{pa.array}}), so that the underlying
conversion raises when it cannot satisfy {{nullable=False}}?
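Since {{pa.array}} has no such keyword today, a hypothetical wrapper can sketch the proposed semantics (the name {{array_checked}} is made up for illustration):
{code}
import pyarrow as pa

def array_checked(values, type=None, nullable=True):
    # Sketch of the proposed behaviour only; the actual proposal is a
    # nullable= keyword on pa.array itself, which does not exist yet.
    arr = pa.array(values, type=type, from_pandas=True)
    if not nullable and arr.null_count > 0:
        raise ValueError(
            "column is marked non-nullable but %d null values were found"
            % arr.null_count)
    return arr
{code}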
Alternatively, we could also rather easily verify in
{{pandas_compat.dataframe_to_arrays}} (the function that calls {{pa.array}} for
each column) that each array's {{null_count}} does not conflict with the
corresponding schema field's nullability. Raising inside {{pa.array}} during
the conversion would of course fail earlier.
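That alternative check could look roughly like this (written as a standalone function for illustration; the real check would live in {{pandas_compat.dataframe_to_arrays}}):
{code}
import pyarrow as pa

def validate_nullability(table):
    # Compare each field's declared nullability with the actual null
    # count of the corresponding column in the converted table.
    for i, field in enumerate(table.schema):
        if not field.nullable and table.column(i).null_count > 0:
            raise ValueError(
                "field %r is non-nullable but its column contains nulls"
                % field.name)
{code}
Either way, the user would get an error at conversion time instead of a table that silently violates its schema.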
> [Python] Non-nullable schema fields not checked in conversions from pandas
> --------------------------------------------------------------------------
>
> Key: ARROW-2136
> URL: https://issues.apache.org/jira/browse/ARROW-2136
> Project: Apache Arrow
> Issue Type: Bug
> Components: Python
> Affects Versions: 0.8.0
> Reporter: Matthew Gilbert
> Assignee: Joris Van den Bossche
> Priority: Major
> Fix For: 0.14.0
>
>
> If you provide a schema with {{nullable=False}} but pass a {{DataFrame}}
> which in fact contains nulls, the schema appears to be ignored. I would
> expect an error here.
> {code}
> import numpy as np
> import pyarrow as pa
> import pandas as pd
> df = pd.DataFrame({"a": [1.2, 2.1, np.nan]})
> schema = pa.schema([pa.field("a", pa.float64(), nullable=False)])
> table = pa.Table.from_pandas(df, schema=schema)
> table[0]
> <pyarrow.lib.Column object at 0x7f213bf2fb70>
> chunk 0: <pyarrow.lib.DoubleArray object at 0x7f213bf20ea8>
> [
> 1.2,
> 2.1,
> NA
> ]
> {code}