[ https://issues.apache.org/jira/browse/SPARK-24540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17371780#comment-17371780 ]
Chandra commented on SPARK-24540:
---------------------------------
My requirement is to process a file that has multi-character row and column
delimiters.
I tried multiple options but ran into a few issues.
File sample:
127'~'127433'~''~''~'2'~'ICR'~'STDLONG'~'NR'~'NR'~'1997-06-25 14:47:37'~''~'NR'~''~'1997-06-25 14:47:37'~'BBB'~''~'Stable'~''~''~''~'Not Rated'~'CreditWatch/Outlook'~'OL'~''~''~''~'#@#@#
152'~'308044'~''~''~'2'~'ICR'~'FCLONG'~'NR'~'NR'~'1997-12-05 14:23:33'~'NM'~'NR'~'1997-12-05 14:23:33'~'1997-12-05 14:23:33'~'B+'~'Watch Pos'~'NM'~''~''~''~'Not Rated'~'CreditWatch/Outlook'~'OL'~''~''~''~'#@#@#
155'~'308044'~''~''~'2'~'ICR'~'STDLONG'~'NR'~'NR'~'1997-12-05 14:23:34'~'NM'~'NR'~'1997-12-05 14:23:34'~'1997-12-05 14:23:34'~'B+'~'Watch Pos'~'NM'~''~''~''~'Not Rated'~'CreditWatch/Outlook'~'OL'~''~''~''~'#@#@#
282'~'127812'~''~''~'2'~'ICR'~'FCLONG'~'NR'~'NR'~'1998-11-06 14:45:54'~'NM'~'NR'~'1998-11-06 14:45:54'~'1998-11-06 14:45:54'~'B+'~'Watch Pos'~'NM'~''~''~''~'Not Rated'~'CreditWatch/Outlook'~'OL'~''~''~''~'#@#@#
287'~'127812'~''~''~'2'~'ICR'~'STDLONG'~'NR'~'NR'~'1998-11-06 14:45:54'~'NM'~'NR'~'1998-11-06 14:45:54'~'1998-11-06 14:45:54'~'B+'~'Watch Pos'~'NM'~''~''~''~'Not Rated'~'CreditWatch/Outlook'~'OL'~''~''~''~'#@#@#
294'~'100899'~''~''~'2'~'ICR'~'FCLONG'~'NR'~'NR'~'1996-08-01 17:58:09'~'NM'~'NR'~'1996-08-01 17:58:09'~'1996-08-01 17:58:09'~'BB-'~'Watch Neg'~'NM'~''~''~''~'Not Rated'~'CreditWatch/Outlook'~'OL'~''~''~''~'#@#@#
303'~'100899'~''~''~'2'~'ICR'~'STDLONG'~'NR'~'NR'~'1996-08-01 17:58:09'~'NM'~'NR'~'1996-08-01 17:58:09'~'1996-08-01 17:58:09'~'BB-'~'Watch Neg'~'NM'~''~''~''~'Not Rated'~'CreditWatch/Outlook'~'OL'~''~''~''~'#@#@#
927'~'104464'~''~''~'2'~'ICR'~'STDLONG'~'NR'~'NR'~'1997-05-13 14:45:30'~''~'NR'~''~'1997-05-13 14:45:30'~'A'~''~'Stable'~''~''~''~'Not Rated'~'CreditWatch/Outlook'~'OL'~''~''~''~'#@#@#
Row delimiter: #@#@#   Column delimiter: '~'
Code:
df2 = spark.read.load("spRatingData_sample.txt",
                      format="csv",
                      sep="'~'",
                      lineSep="#@#@#")
print("two.csv rowcount: {}".format(df2.count()))
ERROR:
: java.lang.IllegalArgumentException: Delimiter cannot be more than one character: '~'
	at org.apache.spark.sql.execution.datasources.csv.CSVUtils$.toChar(CSVUtils.scala:118)
	at org.apache.spark.sql.execution.datasources.csv.CSVOptions.<init>(CSVOptions.scala:87)
	at org.apache.spark.sql.execution.datasources.csv.CSVOptions.<init>(CSVOptions.scala:45)
	at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.inferSchema(CSVFileFormat.scala:58)
	at org.apache.spark.sql.execution.datasources.DataSource.$anonfun$getOrInferFileFormatSchema$12(DataSource.scala:183)
	at scala.Option.orElse(Option.scala:447)
	at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:180)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:373)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "<stdin>", line 4, in <module>
  File "/usr/lib/spark/python/pyspark/sql/readwriter.py", line 166, in load
    return self._df(self._jreader.load(path))
  File "/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/usr/lib/spark/python/pyspark/sql/utils.py", line 79, in deco
    raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: "Delimiter cannot be more than one
character: '~'"
> Support for multiple character delimiter in Spark CSV read
> ----------------------------------------------------------
>
> Key: SPARK-24540
> URL: https://issues.apache.org/jira/browse/SPARK-24540
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Affects Versions: 2.3.1
> Reporter: Ashwin K
> Assignee: Jeff Evans
> Priority: Minor
> Fix For: 3.0.0
>
>
> Currently, the delimiter option used by Spark 2.0 to read and split CSV
> files/data only supports a single-character delimiter. If we try to provide
> a multi-character delimiter, we observe the following error message.
> eg: Dataset<Row> df = spark.read().option("inferSchema", "true")
>                                   .option("header", "false")
>                                   .option("delimiter", ", ")
>                                   .csv("C:\test.txt");
> Exception in thread "main" java.lang.IllegalArgumentException: Delimiter cannot be more than one character: ,
>     at org.apache.spark.sql.execution.datasources.csv.CSVUtils$.toChar(CSVUtils.scala:111)
>     at org.apache.spark.sql.execution.datasources.csv.CSVOptions.<init>(CSVOptions.scala:83)
>     at org.apache.spark.sql.execution.datasources.csv.CSVOptions.<init>(CSVOptions.scala:39)
>     at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.inferSchema(CSVFileFormat.scala:55)
>     at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:202)
>     at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:202)
>     at scala.Option.orElse(Option.scala:289)
>     at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:201)
>     at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:392)
>     at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
>     at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
>     at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:596)
>     at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:473)
>
> Generally, the data to be processed contains multi-character delimiters, and
> at present we need to do a manual clean-up of the source/input file, which
> doesn't scale in large applications that consume numerous files. There is a
> work-around of reading the data as text and splitting it manually, but in my
> opinion this defeats the purpose, advantage, and efficiency of a direct read
> from a CSV file.
>
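Note: the Fix Version on this ticket is 3.0.0, so on Spark 3.0+ a
multi-character column delimiter like the reporter's should be accepted
directly. A minimal PySpark equivalent of the reporter's snippet, with a
hypothetical local path:

# Spark 3.0+ (with SPARK-24540): multi-character 'delimiter' is accepted
df = (spark.read
      .option("inferSchema", "true")
      .option("header", "false")
      .option("delimiter", ", ")
      .csv("test.txt"))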
--
This message was sent by Atlassian Jira
(v8.3.4#803005)