Chandra created SPARK-35945:
-------------------------------

             Summary: Unable to parse multi-character row and column delimited files using Spark
                 Key: SPARK-35945
                 URL: https://issues.apache.org/jira/browse/SPARK-35945
             Project: Spark
          Issue Type: Bug
          Components: Spark Submit
    Affects Versions: 2.4.4
         Environment: development
            Reporter: Chandra


My requirement is to process a file that has a multi-character row delimiter
and a multi-character column delimiter.

I tried several options, but each ran into issues.

 

File sample (line breaks added after each record delimiter for readability;
in the file the records run together as one stream):

127'~'127433'~''~''~'2'~'ICR'~'STDLONG'~'NR'~'NR'~'1997-06-25 14:47:37'~''~'NR'~''~'1997-06-25 14:47:37'~'BBB'~''~'Stable'~''~''~''~'Not Rated'~'CreditWatch/Outlook'~'OL'~''~''~''~'#@#@#
152'~'308044'~''~''~'2'~'ICR'~'FCLONG'~'NR'~'NR'~'1997-12-05 14:23:33'~'NM'~'NR'~'1997-12-05 14:23:33'~'1997-12-05 14:23:33'~'B+'~'Watch Pos'~'NM'~''~''~''~'Not Rated'~'CreditWatch/Outlook'~'OL'~''~''~''~'#@#@#
155'~'308044'~''~''~'2'~'ICR'~'STDLONG'~'NR'~'NR'~'1997-12-05 14:23:34'~'NM'~'NR'~'1997-12-05 14:23:34'~'1997-12-05 14:23:34'~'B+'~'Watch Pos'~'NM'~''~''~''~'Not Rated'~'CreditWatch/Outlook'~'OL'~''~''~''~'#@#
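
For clarity: the column delimiter is the three-character sequence '~' and the
row delimiter is #@#@#. A plain-Python illustration of how one record splits
(hypothetical snippet, not part of the actual job):

record = "127'~'127433'~''~''~'2'~'ICR"
print(record.split("'~'"))
# ['127', '127433', '', '', '2', 'ICR']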

 

Code:

df2 = spark.read.load("spRatingData_sample.txt",
                      format="csv",
                      sep="'~'",
                      lineSep="#@#@#")
print("spRatingData_sample.txt rowcount: {}".format(df2.count()))

 

ERROR:

: java.lang.IllegalArgumentException: Delimiter cannot be more than one character: '~'
 at org.apache.spark.sql.execution.datasources.csv.CSVUtils$.toChar(CSVUtils.scala:118)
 at org.apache.spark.sql.execution.datasources.csv.CSVOptions.<init>(CSVOptions.scala:87)
 at org.apache.spark.sql.execution.datasources.csv.CSVOptions.<init>(CSVOptions.scala:45)
 at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.inferSchema(CSVFileFormat.scala:58)
 at org.apache.spark.sql.execution.datasources.DataSource.$anonfun$getOrInferFileFormatSchema$12(DataSource.scala:183)
 at scala.Option.orElse(Option.scala:447)
 at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:180)
 at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:373)
 at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
 at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
 at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
 at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
 at py4j.Gateway.invoke(Gateway.java:282)
 at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
 at py4j.commands.CallCommand.execute(CallCommand.java:79)
 at py4j.GatewayConnection.run(GatewayConnection.java:238)
 at java.lang.Thread.run(Thread.java:748)


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
 File "<stdin>", line 4, in <module>
 File "/usr/lib/spark/python/pyspark/sql/readwriter.py", line 166, in load
   return self._df(self._jreader.load(path))
 File "/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
 File "/usr/lib/spark/python/pyspark/sql/utils.py", line 79, in deco
   raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: "Delimiter cannot be more than one character: '~'"
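

For reference, the workaround I am experimenting with (a minimal, untested
sketch; it reads records via the Hadoop text input format with
textinputformat.record.delimiter set to the row delimiter, then splits
columns by hand, since the csv source in 2.4.4 only accepts single-character
sep and lineSep values):

# Workaround sketch: split records on the multi-character row delimiter via
# the Hadoop text input format, then split columns manually.
# Assumes an active SparkSession `spark` and the sample file from above.
conf = {"textinputformat.record.delimiter": "#@#@#"}

records = spark.sparkContext.newAPIHadoopFile(
    "spRatingData_sample.txt",
    "org.apache.hadoop.mapreduce.lib.input.TextInputFormat",
    "org.apache.hadoop.io.LongWritable",
    "org.apache.hadoop.io.Text",
    conf=conf).map(lambda kv: kv[1])        # drop the byte-offset key

rows = (records
        .filter(lambda rec: rec.strip())    # skip an empty trailing record
        .map(lambda rec: rec.split("'~'"))) # multi-character column split

df2 = rows.toDF()                           # columns default to _1, _2, ...
print("rowcount: {}".format(df2.count()))

If there is a recommended way to do this directly with the csv reader, please
advise.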


