[ https://issues.apache.org/jira/browse/ARROW-17912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17612577#comment-17612577 ]

David Li commented on ARROW-17912:
----------------------------------

There is no such thing as a 'table' in Java in the first place, nor is there 
any such concept in Arrow itself. Tables are only a convenience added in some 
Arrow implementations.

You can create an empty record batch of a given schema (e.g. by using 
MakeArrayOfNull) and explicitly write that out to the stream.

Again, there are two different kinds of 'empty'. Either you have a single empty 
batch, or you have no batches at all. Hence I think it is best to be explicit. 
I also do not understand why Spark cannot be made to be more robust about this.

> [C++] IPC writer does not write an empty batch in the case of an empty table, 
> which PySpark cannot handle
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: ARROW-17912
>                 URL: https://issues.apache.org/jira/browse/ARROW-17912
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: C++
>            Reporter: Liangcai li
>            Priority: Major
>
> My current work involves PySpark cogrouped pandas UDFs, with two processes 
> involved: the JVM one (sender) and the Python one (receiver).
> [Spark uses the Arrow Java 
> `ArrowStreamWriter`|https://github.com/apache/spark/blob/branch-3.3/sql/core/src/main/scala/org/apache/spark/sql/execution/python/CoGroupedArrowPythonRunner.scala#L99]
>  to serialize Arrow tables sent from the JVM process to the Python 
> process, and `ArrowStreamWriter` handles empty tables correctly.
> [cuDF uses the Arrow C++ 
> RecordBatchWriter|https://github.com/rapidsai/cudf/blob/branch-22.10/java/src/main/native/src/TableJni.cpp#L254]
>  to do the same serialization, but this leads to the error below on the 
> Python side, where [PySpark calls the PyArrow 
> *Table.from_batches*|https://github.com/apache/spark/blob/branch-3.3/python/pyspark/sql/pandas/serializers.py#L366]
>  to deserialize the Arrow stream.
> ```
> E     File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/pandas/serializers.py", line 297, in load_stream
> E       [self.arrow_to_pandas(c) for c in pa.Table.from_batches(batch2).itercolumns()]
> E     File "pyarrow/table.pxi", line 1609, in pyarrow.lib.Table.from_batches
> E   ValueError: Must pass schema, or at least one RecordBatch
> ```



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
