Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/19439#discussion_r148031645
--- Diff: python/pyspark/ml/image.py ---
@@ -0,0 +1,139 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from pyspark.ml.util import *
+from pyspark.ml.param.shared import *
+from pyspark.sql.types import *
+from pyspark.sql.types import Row, _create_row
+from pyspark.sql import DataFrame, SparkSession, SQLContext
+import numpy as np
+
+undefinedImageType = "Undefined"
+
+imageFields = ["origin", "height", "width", "nChannels", "mode", "data"]
+
+
+def getOcvTypes(spark=None):
+ """
+ Returns the OpenCV type mapping supported
+
+ :param sparkSession (SparkSession): The current spark session
+ :rtype dict: The OpenCV type mapping supported
+
+ .. versionadded:: 2.3.0
+ """
+ spark = spark or SparkSession.builder.getOrCreate()
+ ctx = spark.sparkContext
+ return ctx._jvm.org.apache.spark.ml.image.ImageSchema.ocvTypes
+
+
+# DataFrame with a single column of images named "image" (nullable)
+def getImageSchema(spark=None):
+ """
+ Returns the image schema
+
+ :param spark (SparkSession): The current spark session
+ :rtype StructType: The image schema
+
+ .. versionadded:: 2.3.0
+ """
+ spark = spark or SparkSession.builder.getOrCreate()
--- End diff --
Hm, we could give it a shot and make these resemble the functions in `functions.py`, though. I think we only need JVM access here, and AFAIK that is what the functions in that file do. For example:
```python
@since(1.3)
def first(col, ignorenulls=False):
"""Aggregate function: returns the first value in a group.
The function by default returns the first values it sees. It will
return the first non-null
value it sees when ignoreNulls is set to true. If all values are null,
then null is returned.
"""
sc = SparkContext._active_spark_context
jc = sc._jvm.functions.first(_to_java_column(col), ignorenulls)
return Column(jc)
```
I haven't tried the approach you explained or looked at it closely yet, but I think it's worth following the existing pattern and testing it, if that's not hard.
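For instance, here is a rough sketch (untested, just to illustrate the idea) of what `getOcvTypes` could look like if it followed that pattern: reuse the active `SparkContext` the way `functions.py` does, and fall back to the builder only when no context is active yet:
```python
from pyspark import SparkContext
from pyspark.sql import SparkSession


def getOcvTypes(spark=None):
    """
    Returns the OpenCV type mapping supported

    :param spark (SparkSession): The current spark session
    :rtype dict: The OpenCV type mapping supported
    """
    # Prefer the already-active SparkContext, as functions.py does.
    sc = SparkContext._active_spark_context
    if sc is None:
        # Fall back to (or create) a session only if nothing is active yet.
        sc = (spark or SparkSession.builder.getOrCreate()).sparkContext
    return sc._jvm.org.apache.spark.ml.image.ImageSchema.ocvTypes
```
That would keep the `spark=None` parameter for callers who already hold a session, while avoiding the builder call in the common case where a context is already running.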
---