[
https://issues.apache.org/jira/browse/SPARK-42002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17677094#comment-17677094
]
Sandeep Singh commented on SPARK-42002:
---
I'm working on this
> Implement DataFram
Sandeep Singh created SPARK-42073:
-
Summary: Enable pyspark.sql.tests.test_types 2 test cases
Key: SPARK-42073
URL: https://issues.apache.org/jira/browse/SPARK-42073
Project: Spark
Issue Type
[
https://issues.apache.org/jira/browse/SPARK-42012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17676751#comment-17676751
]
Sandeep Singh commented on SPARK-42012:
---
Working on this.
> Implement DataFrameRe
[
https://issues.apache.org/jira/browse/SPARK-41820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41820:
--
Description:
{code:java}
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["a
Sandeep Singh created SPARK-41922:
-
Summary: Implement DataFrame `semanticHash`
Key: SPARK-41922
URL: https://issues.apache.org/jira/browse/SPARK-41922
Project: Spark
Issue Type: Sub-task
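`DataFrame.semanticHash` returns a hash of the canonicalized logical plan, so two DataFrames whose queries are semantically equal share a hash. A minimal pure-Python sketch of that idea, assuming a hypothetical text normalization standing in for Spark's real plan canonicalization:

```python
# Illustrative sketch only: `semanticHash` hashes a canonicalized logical
# plan, so semantically equal queries collide on purpose. The plan
# normalization below (lowercasing, collapsing whitespace) is a
# hypothetical stand-in for Spark's actual canonicalization rules.
import zlib

def semantic_hash(plan: str) -> int:
    canonical = " ".join(plan.lower().split())
    return zlib.crc32(canonical.encode("utf-8"))

# Two textually different but semantically identical plans hash alike.
h1 = semantic_hash("Project [id]\n  Range (0, 10)")
h2 = semantic_hash("project [id]   range (0, 10)")
```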
[
https://issues.apache.org/jira/browse/SPARK-41874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17655329#comment-17655329
]
Sandeep Singh commented on SPARK-41874:
---
Working on this
> Implement DataFrame `s
[
https://issues.apache.org/jira/browse/SPARK-41824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41824:
--
Description:
{code:java}
df = spark.createDataFrame([(14, "Tom"), (23, "Alice"), (16, "Bob")],
[
https://issues.apache.org/jira/browse/SPARK-41824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17655293#comment-17655293
]
Sandeep Singh commented on SPARK-41824:
---
this is from the doctests
`./python/run
[ https://issues.apache.org/jira/browse/SPARK-41818 ]
Sandeep Singh deleted comment on SPARK-41818:
---
was (Author: techaddict):
Could be moved under https://issues.apache.org/jira/browse/SPARK-41279
> Support DataFrameWriter.saveAsTable
> ---
Sandeep Singh created SPARK-41921:
-
Summary: Enable doctests in connect.column and connect.functions
Key: SPARK-41921
URL: https://issues.apache.org/jira/browse/SPARK-41921
Project: Spark
Iss
Sandeep Singh created SPARK-41907:
-
Summary: Function `sampleBy` return parity
Key: SPARK-41907
URL: https://issues.apache.org/jira/browse/SPARK-41907
Project: Spark
Issue Type: Sub-task
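`DataFrame.sampleBy` draws a stratified sample: each row is kept with the fraction configured for its stratum key, seeded for reproducibility. A hedged pure-Python model of those semantics (illustrative only, not Spark's implementation):

```python
# Hedged sketch of `DataFrame.sampleBy` semantics: stratified sampling
# where each row survives with the fraction configured for its stratum.
# Strata absent from `fractions` default to 0.0 (never sampled).
import random

def sample_by(rows, key, fractions, seed):
    rng = random.Random(seed)  # seeded for reproducible draws
    return [r for r in rows if rng.random() < fractions.get(r[key], 0.0)]

rows = [{"key": i % 3, "value": i} for i in range(100)]
# Keep ~10% of stratum 0, ~20% of stratum 1, and none of stratum 2.
sampled = sample_by(rows, "key", {0: 0.1, 1: 0.2}, seed=0)
```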
[
https://issues.apache.org/jira/browse/SPARK-41907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41907:
--
Description:
{code:java}
df = self.spark.createDataFrame([Row(a=i, b=(i % 3)) for i in range(1
[
https://issues.apache.org/jira/browse/SPARK-41906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41906:
--
Description:
{code:java}
df = self.df
from pyspark.sql import functions
rnd = df.select("key"
Sandeep Singh created SPARK-41906:
-
Summary: Handle Function `rand()`
Key: SPARK-41906
URL: https://issues.apache.org/jira/browse/SPARK-41906
Project: Spark
Issue Type: Sub-task
Co
[
https://issues.apache.org/jira/browse/SPARK-41905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41905:
--
Summary: Function `slice` should handle string in params (was: Function
`slice` should expect
[
https://issues.apache.org/jira/browse/SPARK-41905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41905:
--
Description:
{code:java}
df = self.spark.createDataFrame(
[
(
[1, 2, 3
Sandeep Singh created SPARK-41905:
-
Summary: Function `slice` should expect string in params
Key: SPARK-41905
URL: https://issues.apache.org/jira/browse/SPARK-41905
Project: Spark
Issue Type:
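Spark SQL's `slice(col, start, length)` is 1-based, and this ticket asks that `start`/`length` also be accepted as column-name strings, resolved per row. A hedged sketch of that behaviour over a row-dict model (illustrative only):

```python
# Hedged sketch of the `slice` semantics under discussion: `start` and
# `length` may be int literals or column-name strings resolved against
# the row. The dict-based row model is illustrative, not PySpark's API.
def slice_array(row, col, start, length):
    s = row[start] if isinstance(start, str) else start
    n = row[length] if isinstance(length, str) else length
    return row[col][s - 1 : s - 1 + n]  # 1-based start, as in Spark SQL

row = {"x": [1, 2, 3, 4, 5], "start": 2, "length": 3}
by_literal = slice_array(row, "x", 2, 3)                 # [2, 3, 4]
by_column = slice_array(row, "x", "start", "length")     # same result
```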
[
https://issues.apache.org/jira/browse/SPARK-41904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41904:
--
Summary: Fix Function `nth_value` functions output (was: Fix `nth_value`
functions output)
>
[
https://issues.apache.org/jira/browse/SPARK-41904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41904:
--
Description:
{code:java}
from pyspark.sql import Window
from pyspark.sql.functions import nth_
Sandeep Singh created SPARK-41904:
-
Summary: Fix `nth_value` functions output
Key: SPARK-41904
URL: https://issues.apache.org/jira/browse/SPARK-41904
Project: Spark
Issue Type: Sub-task
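`nth_value(col, n)` is a window function: within each partition's frame it returns the value of the n-th row (1-based), or null when the frame holds fewer than n rows, optionally skipping nulls. A hedged pure-Python model of those semantics (not Spark's implementation):

```python
# Hedged sketch of `nth_value(col, n)` window semantics: the n-th value
# (1-based) of the frame, None if the frame is too short, and with
# `ignore_nulls` the count skips null entries.
def nth_value(frame, n, ignore_nulls=False):
    values = [v for v in frame if v is not None] if ignore_nulls else frame
    return values[n - 1] if len(values) >= n else None

partition = [None, "a", "b"]
second = nth_value(partition, 2)                              # "a"
second_non_null = nth_value(partition, 2, ignore_nulls=True)  # "b"
```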
[
https://issues.apache.org/jira/browse/SPARK-41902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41902:
--
Summary: Parity in String representation of higher_order_function's output
(was: Parity in St
[
https://issues.apache.org/jira/browse/SPARK-41902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41902:
--
Description:
{code:java}
from pyspark.sql.functions import flatten, struct, transform
df = se
[
https://issues.apache.org/jira/browse/SPARK-41903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41903:
--
Description:
{code:java}
import numpy as np
arr_dtype_to_spark_dtypes = [
("int8", [("b",
[
https://issues.apache.org/jira/browse/SPARK-41902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41902:
--
Description:
{code:java}
expected = {"a": 1, "b": 2}
expected2 = {"c": 3, "d": 4}
df = self.sp
Sandeep Singh created SPARK-41903:
-
Summary: Support data type ndarray
Key: SPARK-41903
URL: https://issues.apache.org/jira/browse/SPARK-41903
Project: Spark
Issue Type: Sub-task
Co
Sandeep Singh created SPARK-41902:
-
Summary: Fix String representation of maps created by
`map_from_arrays`
Key: SPARK-41902
URL: https://issues.apache.org/jira/browse/SPARK-41902
Project: Spark
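`map_from_arrays(keys, values)` zips two arrays positionally into a map; the ticket concerns the *string representation* of that map differing between PySpark and Connect. A hedged dict-based sketch of the value semantics only:

```python
# Hedged sketch of `map_from_arrays(keys, values)`: positional zip of
# two equal-length arrays into a map. Illustrative model only; the JIRA
# issue itself is about the string representation of the result.
def map_from_arrays(keys, values):
    if len(keys) != len(values):
        raise ValueError("keys and values must have the same length")
    return dict(zip(keys, values))

m = map_from_arrays([2, 5], ["a", "b"])
```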
[
https://issues.apache.org/jira/browse/SPARK-41901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41901:
--
Description:
{code:java}
from pyspark.sql import functions
funs = [
(functions.acosh, "AC
[
https://issues.apache.org/jira/browse/SPARK-41901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41901:
--
Description:
{code:java}
from pyspark.sql import functions
funs = [
(functions.acosh, "AC
Sandeep Singh created SPARK-41901:
-
Summary: Parity in String representation of Column
Key: SPARK-41901
URL: https://issues.apache.org/jira/browse/SPARK-41901
Project: Spark
Issue Type: Sub-t
[
https://issues.apache.org/jira/browse/SPARK-41900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41900:
--
Description:
{code:java}
import numpy as np
from pyspark.sql.functions import lit
dtype_to_sp
Sandeep Singh created SPARK-41900:
-
Summary: Support data type int8
Key: SPARK-41900
URL: https://issues.apache.org/jira/browse/SPARK-41900
Project: Spark
Issue Type: Sub-task
Compo
[
https://issues.apache.org/jira/browse/SPARK-41898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41898:
--
Description:
{code:java}
df = self.spark.createDataFrame([(1, "1"), (2, "2"), (1, "2"), (1, "2
[
https://issues.apache.org/jira/browse/SPARK-41899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41899:
--
Description:
{code:java}
dt = datetime.date(2021, 12, 27)
# Note; number var in Python gets c
Sandeep Singh created SPARK-41899:
-
Summary: DataFrame.createDataFrame converting int to bigint
Key: SPARK-41899
URL: https://issues.apache.org/jira/browse/SPARK-41899
Project: Spark
Issue Ty
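The behaviour behind this ticket is PySpark's schema inference: a Python `int` is inferred as `LongType` (rendered `bigint`), not `IntegerType`. A hedged illustrative fragment of that mapping (not PySpark's actual inference code):

```python
# Hedged sketch of the type inference behind SPARK-41899: Python ints
# map to Spark's LongType ("bigint") by default. The table below is an
# illustrative fragment, not PySpark's real inference logic.
import datetime

INFERENCE = {int: "bigint", float: "double", str: "string",
             datetime.date: "date"}

def infer_spark_type(value):
    return INFERENCE[type(value)]

inferred = infer_spark_type(1)  # "bigint", even for a small int
```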
[
https://issues.apache.org/jira/browse/SPARK-41898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41898:
--
Description:
{code:java}
from pyspark.sql.functions import assert_true
df = self.spark.range(
Sandeep Singh created SPARK-41898:
-
Summary: Window.rowsBetween should handle `float("-inf")` and
`float("+inf")` as argument
Key: SPARK-41898
URL: https://issues.apache.org/jira/browse/SPARK-41898
Pr
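The fix this ticket asks for is boundary coercion: `Window.rowsBetween` should map `float("-inf")`/`float("+inf")` onto the unbounded frame boundaries. A hedged sketch, where the sentinel constants are assumptions mirroring `Window.unboundedPreceding`/`unboundedFollowing` on a 64-bit build:

```python
# Hedged sketch of the coercion SPARK-41898 asks for: float infinities
# become the unbounded frame boundaries. The sentinels are assumptions
# approximating pyspark's Window constants; not the real implementation.
import sys

UNBOUNDED_PRECEDING = -sys.maxsize - 1  # ~ Window.unboundedPreceding
UNBOUNDED_FOLLOWING = sys.maxsize       # ~ Window.unboundedFollowing

def normalize_boundary(b):
    if b == float("-inf"):
        return UNBOUNDED_PRECEDING
    if b == float("+inf"):
        return UNBOUNDED_FOLLOWING
    return int(b)

lo = normalize_boundary(float("-inf"))
hi = normalize_boundary(float("+inf"))
```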
[
https://issues.apache.org/jira/browse/SPARK-41897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41897:
--
Description:
PySpark throws Py4JJavaError where as connect throws SparkConnectException
{code:
Sandeep Singh created SPARK-41897:
-
Summary: Parity in Error types between pyspark and connect
functions
Key: SPARK-41897
URL: https://issues.apache.org/jira/browse/SPARK-41897
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-41891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41891:
--
Summary: Enable test_add_months_function, test_array_repeat,
test_dayofweek, test_first_last_i
Sandeep Singh created SPARK-41892:
-
Summary: Add JIRAs or messages for skipped messages
Key: SPARK-41892
URL: https://issues.apache.org/jira/browse/SPARK-41892
Project: Spark
Issue Type: Sub-
[
https://issues.apache.org/jira/browse/SPARK-41878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41878:
--
Summary: Add JIRAs or messages for skipped tests (was: Add JIRAs or
messages for skipped mess
Sandeep Singh created SPARK-41891:
-
Summary: Enable 8 tests
Key: SPARK-41891
URL: https://issues.apache.org/jira/browse/SPARK-41891
Project: Spark
Issue Type: Sub-task
Components: C
Sandeep Singh created SPARK-41887:
-
Summary: Support DataFrame hint parameter to be list
Key: SPARK-41887
URL: https://issues.apache.org/jira/browse/SPARK-41887
Project: Spark
Issue Type: Sub
[
https://issues.apache.org/jira/browse/SPARK-41887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41887:
--
Description:
{code:java}
df = self.spark.range(10e10).toDF("id")
such_a_nice_list = ["itworks1
[
https://issues.apache.org/jira/browse/SPARK-41871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41871:
--
Summary: DataFrame hint parameter can be str, float or int (was: DataFrame
hint parameter can
[
https://issues.apache.org/jira/browse/SPARK-41884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41884:
--
Description:
{code:java}
import numpy as np
import pandas as pd
df = self.spark.createDataFra
Sandeep Singh created SPARK-41884:
-
Summary: DataFrame `toPandas` parity in return types
Key: SPARK-41884
URL: https://issues.apache.org/jira/browse/SPARK-41884
Project: Spark
Issue Type: Sub
[
https://issues.apache.org/jira/browse/SPARK-41878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41878:
--
Description: Add JIRAs or Messages for all the skipped messages. (was: 5
tests pass now. Shou
Sandeep Singh created SPARK-41878:
-
Summary: Add JIRAs or messages for skipped messages
Key: SPARK-41878
URL: https://issues.apache.org/jira/browse/SPARK-41878
Project: Spark
Issue Type: Sub-
[
https://issues.apache.org/jira/browse/SPARK-41877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41877:
--
Description:
{code:java}
df = self.spark.createDataFrame(
[
(1, 10, 1.0, "one"),
Sandeep Singh created SPARK-41877:
-
Summary: SparkSession.createDataFrame error parity
Key: SPARK-41877
URL: https://issues.apache.org/jira/browse/SPARK-41877
Project: Spark
Issue Type: Sub-t
Sandeep Singh created SPARK-41876:
-
Summary: Implement DataFrame `toLocalIterator`
Key: SPARK-41876
URL: https://issues.apache.org/jira/browse/SPARK-41876
Project: Spark
Issue Type: Sub-task
[
https://issues.apache.org/jira/browse/SPARK-41875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41875:
--
Description:
{code:java}
schema = StructType(
[StructField("i", StringType(), True), Struc
Sandeep Singh created SPARK-41875:
-
Summary: Throw proper errors in Dataset.to()
Key: SPARK-41875
URL: https://issues.apache.org/jira/browse/SPARK-41875
Project: Spark
Issue Type: Sub-task
Sandeep Singh created SPARK-41874:
-
Summary: Implement DataFrame `sameSemantics`
Key: SPARK-41874
URL: https://issues.apache.org/jira/browse/SPARK-41874
Project: Spark
Issue Type: Sub-task
[
https://issues.apache.org/jira/browse/SPARK-41872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41872:
--
Summary: Fix DataFrame createDataframe handling of None (was: Fix
DataFrame fillna with bool)
[
https://issues.apache.org/jira/browse/SPARK-41872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41872:
--
Description:
{code:java}
row = self.spark.createDataFrame([("Alice", None, None, None)],
sche
[
https://issues.apache.org/jira/browse/SPARK-41873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41873:
--
Summary: Implement DataFrame `pandas_api` (was: Implement DataFrameReader
`pandas_api`)
> Im
Sandeep Singh created SPARK-41873:
-
Summary: Implement DataFrameReader `pandas_api`
Key: SPARK-41873
URL: https://issues.apache.org/jira/browse/SPARK-41873
Project: Spark
Issue Type: Sub-task
[
https://issues.apache.org/jira/browse/SPARK-41873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41873:
--
Description: (was: {code:java}
File
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/
[
https://issues.apache.org/jira/browse/SPARK-41872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41872:
--
Description:
{code:java}
row = self.spark.createDataFrame([("Alice", None, None, None)],
sche
Sandeep Singh created SPARK-41872:
-
Summary: Fix DataFrame fillna with bool
Key: SPARK-41872
URL: https://issues.apache.org/jira/browse/SPARK-41872
Project: Spark
Issue Type: Sub-task
Sandeep Singh created SPARK-41871:
-
Summary: DataFrame hint parameter can be str, list, float or int
Key: SPARK-41871
URL: https://issues.apache.org/jira/browse/SPARK-41871
Project: Spark
Iss
[
https://issues.apache.org/jira/browse/SPARK-41871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41871:
--
Description:
{code:java}
df = self.spark.range(10e10).toDF("id")
such_a_nice_list = ["itworks1
Sandeep Singh created SPARK-41870:
-
Summary: Handle duplicate columns in `createDataFrame`
Key: SPARK-41870
URL: https://issues.apache.org/jira/browse/SPARK-41870
Project: Spark
Issue Type: S
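The scenario here is `createDataFrame([(1, 2)], ["c", "c"])`: a schema with duplicate column names. A hedged sketch of the kind of duplicate detection such handling needs (an illustrative helper, not Spark's actual validation):

```python
# Hedged sketch: detecting duplicate column names in a schema list,
# the situation SPARK-41870's snippet exercises. Illustrative helper
# only, not Spark's validation code.
from collections import Counter

def duplicate_columns(names):
    return sorted(name for name, n in Counter(names).items() if n > 1)

dups = duplicate_columns(["c", "c"])  # ["c"]
```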
[
https://issues.apache.org/jira/browse/SPARK-41870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41870:
--
Description:
{code:java}
df = self.spark.createDataFrame([(1, 2)], ["c", "c"]){code}
Error:
{c
[
https://issues.apache.org/jira/browse/SPARK-41869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41869:
--
Description:
{code:java}
df = self.spark.createDataFrame([("Alice", 50), ("Alice", 60)], ["nam
Sandeep Singh created SPARK-41869:
-
Summary: DataFrame dropDuplicates should throw error on non list
argument
Key: SPARK-41869
URL: https://issues.apache.org/jira/browse/SPARK-41869
Project: Spark
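The error this ticket asks for is an argument-type check: `dropDuplicates` should reject a non-list `subset` rather than accept it silently. A hedged sketch of that validation (illustrative only):

```python
# Hedged sketch of the check SPARK-41869 asks for: `dropDuplicates`
# should raise on a non-list `subset` (e.g. a bare column-name string).
# Illustrative validation only, not PySpark's actual code.
def validate_subset(subset):
    if subset is not None and not isinstance(subset, list):
        raise TypeError(
            f"Parameter 'subset' must be a list, got {type(subset).__name__}"
        )
    return subset

err = None
try:
    validate_subset("name")  # a bare string should be rejected
except TypeError as e:
    err = str(e)
```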
[
https://issues.apache.org/jira/browse/SPARK-41855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17654255#comment-17654255
]
Sandeep Singh commented on SPARK-41855:
---
[~podongfeng] there is another failure wh
[
https://issues.apache.org/jira/browse/SPARK-41856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41856:
--
Summary: Enable test_freqItems, test_input_files,
test_toDF_with_schema_string, test_to_pandas
[
https://issues.apache.org/jira/browse/SPARK-41868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41868:
--
Description:
{code:java}
import pandas as pd
from datetime import timedelta
df = self.spark.c
Sandeep Singh created SPARK-41868:
-
Summary: Support data type Duration(NANOSECOND)
Key: SPARK-41868
URL: https://issues.apache.org/jira/browse/SPARK-41868
Project: Spark
Issue Type: Sub-task
[
https://issues.apache.org/jira/browse/SPARK-41866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41866:
--
Description:
{code:java}
import array
data = [Row(longarray=array.array("l", [-92233720368547
Sandeep Singh created SPARK-41866:
-
Summary: Make `createDataFrame` support array
Key: SPARK-41866
URL: https://issues.apache.org/jira/browse/SPARK-41866
Project: Spark
Issue Type: Sub-task
[
https://issues.apache.org/jira/browse/SPARK-41856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17654239#comment-17654239
]
Sandeep Singh commented on SPARK-41856:
---
[~gurwls223] for some reason its still as
[
https://issues.apache.org/jira/browse/SPARK-41857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41857:
--
Summary: Enable test_between_function, test_datetime_functions, test_expr,
test_math_functions
[
https://issues.apache.org/jira/browse/SPARK-41857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41857:
--
Summary: Enable test_between_function, test_datetime_functions, test_expr,
test_function_parit
Sandeep Singh created SPARK-41857:
-
Summary: Enable 10 tests that pass
Key: SPARK-41857
URL: https://issues.apache.org/jira/browse/SPARK-41857
Project: Spark
Issue Type: Sub-task
Co
[
https://issues.apache.org/jira/browse/SPARK-41856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41856:
--
Description: 5 tests pass now. Should enable them. (was: These tests pass
now. Should enable
Sandeep Singh created SPARK-41856:
-
Summary: Enable test_create_nan_decimal_dataframe, test_freqItems,
test_input_files, test_toDF_with_schema_string,
test_to_pandas_required_pandas_not_found
Key: SPARK-41856
URL
[
https://issues.apache.org/jira/browse/SPARK-41852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653750#comment-17653750
]
Sandeep Singh commented on SPARK-41852:
---
[~podongfeng] these are from the doctests
[
https://issues.apache.org/jira/browse/SPARK-41851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653751#comment-17653751
]
Sandeep Singh commented on SPARK-41851:
---
[~podongfeng]
{code:java}
>>> df = spark
[
https://issues.apache.org/jira/browse/SPARK-41847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41847:
--
Description:
{code:java}
File
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/f
Sandeep Singh created SPARK-41852:
-
Summary: Fix `pmod` function
Key: SPARK-41852
URL: https://issues.apache.org/jira/browse/SPARK-41852
Project: Spark
Issue Type: Sub-task
Componen
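For context, `pmod(a, b)` is the positive remainder: Spark's `%` follows Java and can return a negative result for a negative dividend, which `pmod` normalizes into `[0, |b|)`. A hedged pure-Python model, using `math.fmod` to reproduce the Java-style truncated remainder:

```python
# Hedged sketch of `pmod(a, b)` semantics: normalize the Java-style
# (truncated) remainder into the range [0, |b|). `math.fmod` gives the
# truncated remainder here; illustrative model only.
import math

def pmod(a, b):
    r = math.fmod(a, b)
    return r + abs(b) if r < 0 else r

java_style = math.fmod(-7, 3)  # -1.0, the truncated remainder
positive = pmod(-7, 3)         # 2.0, the positive remainder
```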
[
https://issues.apache.org/jira/browse/SPARK-41852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41852:
--
Description:
{code:java}
File
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/f
Sandeep Singh created SPARK-41851:
-
Summary: Fix `nanvl` function
Key: SPARK-41851
URL: https://issues.apache.org/jira/browse/SPARK-41851
Project: Spark
Issue Type: Sub-task
Compone
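For context, `nanvl(col1, col2)` returns the first value unless it is NaN, in which case it falls back to the second; both inputs are floating point. A hedged pure-Python model of those semantics:

```python
# Hedged sketch of `nanvl(col1, col2)` semantics: return the first
# floating-point value, or the second when the first is NaN.
# Illustrative model of the function SPARK-41851 fixes.
import math

def nanvl(a, b):
    return b if math.isnan(a) else a

kept = nanvl(1.0, 2.0)               # 1.0: first value is not NaN
replaced = nanvl(float("nan"), 2.0)  # 2.0: NaN falls back to the second
```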
[
https://issues.apache.org/jira/browse/SPARK-41851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41851:
--
Description:
{code:java}
File
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/f
[
https://issues.apache.org/jira/browse/SPARK-41847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41847:
--
Description:
{code:java}
File
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/f
[
https://issues.apache.org/jira/browse/SPARK-41850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17653738#comment-17653738
]
Sandeep Singh commented on SPARK-41850:
---
This should be moved under SPARK-41283
>
[
https://issues.apache.org/jira/browse/SPARK-41850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41850:
--
Description:
{code:java}
File
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/f
[
https://issues.apache.org/jira/browse/SPARK-41850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41850:
--
Summary: Fix `isnan` function (was: Fix DataFrameReader.isnan)
> Fix `isnan` function
> -
Sandeep Singh created SPARK-41850:
-
Summary: Fix DataFrameReader.isnan
Key: SPARK-41850
URL: https://issues.apache.org/jira/browse/SPARK-41850
Project: Spark
Issue Type: Sub-task
Co
Sandeep Singh created SPARK-41849:
-
Summary: Implement DataFrameReader.text
Key: SPARK-41849
URL: https://issues.apache.org/jira/browse/SPARK-41849
Project: Spark
Issue Type: Sub-task
[
https://issues.apache.org/jira/browse/SPARK-41849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41849:
--
Description:
{code:java}
File
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/f
[
https://issues.apache.org/jira/browse/SPARK-41847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41847:
--
Description:
{code:java}
File
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/f
[
https://issues.apache.org/jira/browse/SPARK-41847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41847:
--
Summary: DataFrame mapfield,structlist invalid type (was: DataFrame
mapfield invalid type)
>
[
https://issues.apache.org/jira/browse/SPARK-41847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41847:
--
Description:
{code:java}
File
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/f
Sandeep Singh created SPARK-41847:
-
Summary: DataFrame mapfield invalid type
Key: SPARK-41847
URL: https://issues.apache.org/jira/browse/SPARK-41847
Project: Spark
Issue Type: Sub-task
[
https://issues.apache.org/jira/browse/SPARK-41846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41846:
--
Summary: DataFrame windowspec functions : unresolved columns (was:
DataFrame aggregation func
[
https://issues.apache.org/jira/browse/SPARK-41846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sandeep Singh updated SPARK-41846:
--
Description:
{code:java}
File
"/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/f