ted-jenks commented on code in PR #39927:
URL: https://github.com/apache/spark/pull/39927#discussion_r1098550628


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/csv/CSVExprUtils.scala:
##########
@@ -21,38 +21,34 @@ import org.apache.commons.lang3.StringUtils
 
 object CSVExprUtils {
   /**
-   * Filter ignorable rows for CSV iterator (lines empty and starting with `comment`).
+   * Filter ignorable rows starting with `comment`.
    * This is currently being used in CSV reading path and CSV schema inference.
    */
-  def filterCommentAndEmpty(iter: Iterator[String], options: CSVOptions): Iterator[String] = {
+  def filterComment(iter: Iterator[String], options: CSVOptions): Iterator[String] = {
     if (options.isCommentSet) {
       val commentPrefix = options.comment.toString
-      iter.filter { line =>
-        line.trim.nonEmpty && !line.startsWith(commentPrefix)
-      }
+      iter.dropWhile(_.startsWith(commentPrefix))
     } else {
-      iter.filter(_.trim.nonEmpty)

Review Comment:
   I've addressed this now; things are a bit more consistent. Only `CSVUtils` is concerned with the `Dataset[String]` case, and the CSV file read path uses `CSVExprUtils`. I'd be interested to hear what you think of this, as I was very confused when doing some CSV work to find that blank lines were not in fact removed here.
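   One thing worth flagging about the diff itself: switching from `filter` to `dropWhile` changes the semantics, not just the shape of the code. `filter` removes comment lines wherever they occur in the file, while `dropWhile` only skips the leading run of comment lines. A minimal standalone sketch (plain Scala collections, not Spark's actual `CSVOptions`; the prefix and sample lines are made up for illustration):

   ```scala
   object CommentFilterDemo extends App {
     // Hypothetical comment prefix and input lines, standing in for
     // options.comment.toString and the CSV line iterator.
     val commentPrefix = "#"
     val lines = Seq("# header", "a,b", "# note", "c,d")

     // filter removes every comment line, wherever it appears.
     val filtered = lines.iterator.filter(!_.startsWith(commentPrefix)).toSeq
     assert(filtered == Seq("a,b", "c,d"))

     // dropWhile only skips the leading run of comment lines;
     // a comment in the middle of the data survives.
     val dropped = lines.iterator.dropWhile(_.startsWith(commentPrefix)).toSeq
     assert(dropped == Seq("a,b", "# note", "c,d"))
   }
   ```

   If the intent is still to strip comment lines anywhere in the file, `filter` is the right operation; `dropWhile` is only correct if comments are guaranteed to appear solely at the top.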



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

