xinrong-meng commented on code in PR #37635:
URL: https://github.com/apache/spark/pull/37635#discussion_r963004995
##########
python/pyspark/sql/types.py:
##########
@@ -2268,12 +2268,48 @@ def convert(self, obj: "np.generic", gateway_client: GatewayClient) -> Any:
         return obj.item()
+class NumpyArrayConverter:
+    def can_convert(self, obj: Any) -> bool:
+        return has_numpy and isinstance(obj, np.ndarray) and obj.ndim == 1
+
+    def convert(self, obj: "np.ndarray", gateway_client: GatewayClient) -> JavaObject:
+        from pyspark import SparkContext
+
+        gateway = SparkContext._gateway
+        assert gateway is not None
+        plist = obj.tolist()
+        tpe_np_to_java = {
Review Comment:
We cannot import `SparkContext` at the module level, and we may want a nullability check on `SparkContext._gateway`. So `_from_numpy_type_to_java_type` is introduced instead, for code reuse. Let me know if you have a better idea :)
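
For context, here is a rough sketch of what such a helper could look like as a module-level function reused by both the scalar and the ndarray converters. The exact dtype coverage and its placement in `types.py` are my assumptions for illustration, not necessarily the final implementation:

```python
# Hedged sketch of a reusable dtype-to-Java-type helper (assumed coverage).
from typing import Optional

import numpy as np
from py4j.java_gateway import JavaClass, JavaGateway


def _from_numpy_type_to_java_type(
    nt: "np.dtype", gateway: JavaGateway
) -> Optional[JavaClass]:
    """Map a NumPy dtype to the py4j Java type used to build a Java array."""
    if nt in (np.dtype("int8"), np.dtype("int16"), np.dtype("int32")):
        return gateway.jvm.int
    elif nt == np.dtype("int64"):
        return gateway.jvm.long
    elif nt == np.dtype("float32"):
        return gateway.jvm.float
    elif nt == np.dtype("float64"):
        return gateway.jvm.double
    elif nt == np.dtype("bool"):
        return gateway.jvm.boolean
    # Unsupported dtype: let the caller decide how to fail.
    return None
```

`NumpyArrayConverter.convert` could then call something like `jtpe = _from_numpy_type_to_java_type(obj.dtype, gateway)` followed by `gateway.new_array(jtpe, obj.size)` and element-wise assignment, instead of keeping the dtype map inline.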
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]