twoentartian commented on pull request #31827:
URL: https://github.com/apache/spark/pull/31827#issuecomment-802221172


   I can also write a Java page if necessary. Give me a few days, since I have some coursework to finish.
   By the way, according to this [page](https://sparkbyexamples.com/pyspark/pyspark-read-csv-file-into-dataframe/#Read%20all%20CSV%20files%20in%20a%20directory), we can read all CSV files in a folder with `df = spark.read.csv("folder path")`. But when I try it in spark-shell like this:
   `val path = "examples/src/main/resources"`
   `val df3 = spark.read.option("delimiter", ";").option("header", "true").csv(path)`
   `df3` has 539 rows, so it seems to read every file in that folder, not just the CSV files. Is this API obsolete?
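   A minimal sketch of two ways to restrict the read to `.csv` files (assuming Spark 3.0+, where the generic file-source option `pathGlobFilter` is available; the folder path is illustrative):

   ```scala
   // Option 1: put a glob in the path itself so only *.csv files are listed.
   val df1 = spark.read
     .option("delimiter", ";")
     .option("header", "true")
     .csv("examples/src/main/resources/*.csv")

   // Option 2 (Spark 3.0+): keep the directory path and filter with pathGlobFilter.
   val df2 = spark.read
     .option("delimiter", ";")
     .option("header", "true")
     .option("pathGlobFilter", "*.csv")
     .csv("examples/src/main/resources")
   ```

   Without a filter, `csv(path)` on a directory tries to parse every file it lists there, which would explain the inflated row count.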


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


