[ https://issues.apache.org/jira/browse/SPARK-41828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sandeep Singh updated SPARK-41828:
----------------------------------
    Description: 
{code:java}
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 99, in pyspark.sql.connect.dataframe.DataFrame.isEmpty
Failed example:
    df_empty = spark.createDataFrame([], 'a STRING')
Exception raised:
    Traceback (most recent call last):
      File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py", line 1350, in __run
        exec(compile(example.source, filename, "single",
      File "<doctest pyspark.sql.connect.dataframe.DataFrame.isEmpty[0]>", line 1, in <module>
        df_empty = spark.createDataFrame([], 'a STRING')
      File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/session.py", line 186, in createDataFrame
        raise ValueError("Input data cannot be empty")
    ValueError: Input data cannot be empty{code}

  was:
{code:java}
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 302, in pyspark.sql.connect.dataframe.DataFrame.groupBy
Failed example:
    df.groupBy(["name", df.age]).count().sort("name", "age").show()
Exception raised:
    Traceback (most recent call last):
      File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py", line 1350, in __run
        exec(compile(example.source, filename, "single",
      File "<doctest pyspark.sql.connect.dataframe.DataFrame.groupBy[4]>", line 1, in <module>
        df.groupBy(["name", df.age]).count().sort("name", "age").show()
      File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 251, in groupBy
        raise TypeError(
    TypeError: groupBy requires all cols be Column or str, but got list ['name', Column<'ColumnReference(age)'>]{code}


> Implement creating empty Dataframe
> ----------------------------------
>
>                 Key: SPARK-41828
>                 URL: https://issues.apache.org/jira/browse/SPARK-41828
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Connect
>    Affects Versions: 3.4.0
>            Reporter: Sandeep Singh
>            Priority: Major
>
> {code:java}
> File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/dataframe.py", line 99, in pyspark.sql.connect.dataframe.DataFrame.isEmpty
> Failed example:
>     df_empty = spark.createDataFrame([], 'a STRING')
> Exception raised:
>     Traceback (most recent call last):
>       File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py", line 1350, in __run
>         exec(compile(example.source, filename, "single",
>       File "<doctest pyspark.sql.connect.dataframe.DataFrame.isEmpty[0]>", line 1, in <module>
>         df_empty = spark.createDataFrame([], 'a STRING')
>       File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/session.py", line 186, in createDataFrame
>         raise ValueError("Input data cannot be empty")
>     ValueError: Input data cannot be empty{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
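Editor's note: the failing doctest expects classic PySpark semantics, where `spark.createDataFrame([], 'a STRING')` with an empty list but an explicit schema returns an empty DataFrame rather than raising. A minimal plain-Python sketch of the validation the ticket implies follows; `validate_create_dataframe` is a hypothetical helper name, not the actual Spark Connect code, which instead rejects all empty input at session.py line 186.

{code:python}
def validate_create_dataframe(data, schema=None):
    """Hypothetical sketch: empty input is acceptable when an explicit
    schema is supplied, since there is nothing to infer types from
    otherwise. Mirrors classic PySpark, where
    spark.createDataFrame([], 'a STRING') yields an empty DataFrame."""
    if len(data) == 0 and schema is None:
        # Only this case is truly unrecoverable: no rows and no schema.
        raise ValueError("Input data cannot be empty without a schema")
    return True

# Empty data plus a DDL schema string should pass validation.
validate_create_dataframe([], 'a STRING')
{code}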