[ https://issues.apache.org/jira/browse/SPARK-36277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387766#comment-17387766 ]
Fu Chen commented on SPARK-36277:
---------------------------------
This bug still reproduces on the latest master branch.
The user-defined schema is pruned away by the rule `ColumnPruning`:
{noformat}
=== Applying Rule org.apache.spark.sql.catalyst.optimizer.ColumnPruning ===
Aggregate [count(1) AS count#29L]
Aggregate [count(1) AS count#29L]
!+- Relation [firstname#0,middlename#1,lastname#2,id#3,gender#4,salary#5] csv
+- Project
!
+- Relation [firstname#0,middlename#1,lastname#2,id#3,gender#4,salary#5] csv
{noformat}
{noformat}
*(2) HashAggregate(keys=[], functions=[count(1)], output=[count#29L])
+- ShuffleQueryStage 0
+- Exchange SinglePartition, ENSURE_REQUIREMENTS, [id=#22]
+- *(1) HashAggregate(keys=[], functions=[partial_count(1)], output=[count#32L])
+- FileScan csv [] Batched: false, DataFilters: [], Format: CSV, Location: InMemoryFileIndex(1 paths)[file:/tmp/sample.csv], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<>
{noformat}
Note that when we read CSV files in DROPMALFORMED mode, `UnivocityParser` needs the schema to judge whether a record is corrupt or not. Because `ColumnPruning` reduces the read schema to `struct<>`, `FileScan` returns every record in the CSV file, including the corrupted ones. (The record with salary = 'NA' is corrupt because the user-defined schema declares the `salary` field as Integer.)
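The mechanism can be sketched in plain Python (a standalone sketch, not Spark code; the sample rows below are hypothetical stand-ins for sample.csv, whose contents are only available as an attachment). With the full user-defined schema, the cast of `salary` to an integer fails for the 'NA' row, so DROPMALFORMED drops it; with the pruned `struct<>` read schema there is no cast to fail, so every line is counted:

{code:python}
import csv
import io

# Hypothetical rows standing in for sample.csv (assumption: the real file
# contains at least one row with salary = 'NA').
raw = """firstname,middlename,lastname,id,gender,salary
James,,Smith,36636,M,3000
Michael,Rose,,40288,M,NA
Robert,,Williams,42114,M,4000
"""

def is_valid(row):
    # Mimic DROPMALFORMED under the user-defined schema:
    # the salary field must cast cleanly to IntegerType.
    try:
        int(row["salary"])
        return True
    except ValueError:
        return False

rows = list(csv.DictReader(io.StringIO(raw)))

# Pruned read schema (struct<>): no casts are attempted, every record survives.
count_with_pruned_schema = len(rows)

# Full schema: the 'NA' salary fails the Integer cast and the record is dropped.
count_with_full_schema = sum(1 for r in rows if is_valid(r))

print(count_with_pruned_schema)  # 3
print(count_with_full_schema)    # 2
{code}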
When we exclude the rule `ColumnPruning`, the result is what we expect; in other words, `UnivocityParser` needs the full schema when it scans CSV files in DROPMALFORMED mode:
{code:java}
spark.sql("set spark.sql.optimizer.excludedRules=org.apache.spark.sql.catalyst.optimizer.ColumnPruning")
{code}
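The same exclusion can also be applied at submit time instead of via SQL (a sketch of the standard `--conf` mechanism; `repro_count.py` is a hypothetical script name):

{code:bash}
# Exclude ColumnPruning for the whole application so the CSV read schema
# is not pruned to struct<> before UnivocityParser sees it.
spark-submit \
  --conf spark.sql.optimizer.excludedRules=org.apache.spark.sql.catalyst.optimizer.ColumnPruning \
  repro_count.py
{code}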
Any suggestions or ideas to fix this bug? [~hyukjin.kwon]
> Issue with record count of data frame while reading in DropMalformed mode
> -------------------------------------------------------------------------
>
> Key: SPARK-36277
> URL: https://issues.apache.org/jira/browse/SPARK-36277
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 2.4.3
> Reporter: anju
> Priority: Major
> Attachments: 111.PNG, Inputfile.PNG, sample.csv
>
>
> I am writing the steps to reproduce the issue with the "count" PySpark API
> when using mode dropmalformed.
> I have a sample CSV file in an S3 bucket. I am reading the file with the
> PySpark CSV API into two different DataFrames: one "without schema" and one
> "with schema using mode 'dropmalformed'". When displaying the "with schema
> using mode 'dropmalformed'" DataFrame, the output looks good; it does not
> show the malformed records. But when we apply the count API on that
> DataFrame, it returns the record count of the actual file. I expect it to
> return only the count of valid records.
> here is the code used:-
> {code}
> from pyspark.sql.types import StructType, StructField, StringType, IntegerType
>
> without_schema_df = spark.read.csv("s3://noa-poc-lakeformation/data/test_files/sample.csv", header=True)
> schema = StructType([
>     StructField("firstname", StringType(), True),
>     StructField("middlename", StringType(), True),
>     StructField("lastname", StringType(), True),
>     StructField("id", StringType(), True),
>     StructField("gender", StringType(), True),
>     StructField("salary", IntegerType(), True)
> ])
> with_schema_df = spark.read.csv("s3://noa-poc-lakeformation/data/test_files/sample.csv", header=True, schema=schema, mode="DROPMALFORMED")
> print("The dataframe with schema")
> with_schema_df.show()
> print("The dataframe without schema")
> without_schema_df.show()
> cnt_with_schema=with_schema_df.count()
> print("The records count from with schema df :"+str(cnt_with_schema))
> cnt_without_schema=without_schema_df.count()
> print("The records count from without schema df: "+str(cnt_without_schema))
> {code}
> Here is the output screenshot: 111.PNG shows the outputs of the code, and
> inputfile.csv is the input to the code.
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)