ueshin commented on code in PR #40402:
URL: https://github.com/apache/spark/pull/40402#discussion_r1134796113


##########
python/pyspark/sql/connect/dataframe.py:
##########
@@ -1344,9 +1344,9 @@ def collect(self) -> List[Row]:
         if self._session is None:
             raise Exception("Cannot collect on empty session.")
         query = self._plan.to_proto(self._session.client)
-        table = self._session.client.to_table(query)
+        table, schema = self._session.client.to_table(query)
 
-        schema = from_arrow_schema(table.schema)
+        schema = schema or from_arrow_schema(table.schema)

Review Comment:
   That's interesting, and I was considering a similar approach, but I didn't 
take it because the schema would occupy space in each `RecordBatch`, which can 
be sent to the client repeatedly. That means the cumulative schema overhead 
could become huge if it is repeated many times.
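
   To make the tradeoff concrete, here is a rough back-of-the-envelope sketch (not the actual Spark Connect wire format; the byte sizes are made up for illustration) comparing a schema attached to every batch versus sent once per stream:

   ```python
   # Hypothetical illustration only: stand-ins for a serialized schema
   # and for one batch of row data.
   schema_bytes = b"struct<id:bigint,name:string,value:double>"
   batch_payload = b"\x00" * 256
   num_batches = 1000

   # Option A: schema embedded in every batch (the approach discussed above).
   per_batch_total = num_batches * (len(schema_bytes) + len(batch_payload))

   # Option B: schema sent once; batches carry only data.
   once_total = len(schema_bytes) + num_batches * len(batch_payload)

   # The overhead of Option A grows linearly with the number of batches.
   overhead = per_batch_total - once_total
   print(overhead)
   ```

   For a long-running stream of many small batches, Option A's extra bytes are `(num_batches - 1) * len(schema_bytes)`, which is why sending the schema separately (as in the diff above) avoids the repeated cost.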



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

