[
https://issues.apache.org/jira/browse/SPARK-43382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17720110#comment-17720110
]
melin edited comment on SPARK-43382 at 5/7/23 8:35 AM:
-------------------------------------------------------
There is an idea to implement a custom Hadoop FileSystem backed by Apache Commons VFS, which supports reading many different archive formats.
A simple demo:
[https://github.com/melin/spark-jobserver/blob/master/jobserver-extensions/src/test/scala/com/github/melin/jobserver/extensions/sql/]
{code:java}
spark.read.option("header", "true")
.csv("vfs://tgz:ftp://fcftp:[email protected]/csv.tar.gz!/csv").show()
spark.read.option("header", "true")
.csv("vfs://tgz:s3://BxiljVd5YZa3mRUn:3Mq9dsmdMbN1JipE1TlOF7OuDkuYBYpe@cdh1:9300/demo-bucket/csv.tar.gz!/csv").show()
spark.read.option("header", "true")
.csv("vfs://tgz:sftp:///test:[email protected]:22/ftpdata/csv.tar.gz!/csv").show()
{code}
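As background on why plain *.gz already works without such a layer: gzip is a single-stream codec, which Hadoop's compression codecs decode transparently by file extension, whereas *.zip and *.tar.gz are container formats holding multiple entries; that is the gap the Commons VFS layering above addresses. A minimal JDK-only sketch of the single-stream case (the class name GzipRoundTrip is hypothetical, not part of the linked repo):
{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTrip {

    // gzip compresses exactly one byte stream; there is no directory of
    // entries to enumerate, which is why a *.gz file maps cleanly onto a
    // single logical input file for a reader like Spark.
    public static byte[] gzip(byte[] data) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream out = new GZIPOutputStream(bos)) {
            out.write(data);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    public static byte[] gunzip(byte[] data) {
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(data))) {
            return in.readAllBytes();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] csv = "a,b\n1,2\n".getBytes(StandardCharsets.UTF_8);
        byte[] back = gunzip(gzip(csv));
        System.out.println(new String(back, StandardCharsets.UTF_8).equals("a,b\n1,2\n"));
    }
}
{code}
By contrast, a *.tar.gz wraps a tar archive inside that gzip stream, so a reader must first gunzip and then walk the tar entries, which is exactly what a layered URI like vfs://tgz:...!/path expresses.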
> Read and write csv and json files. Archive files such as zip or gz are
> supported
> --------------------------------------------------------------------------------
>
> Key: SPARK-43382
> URL: https://issues.apache.org/jira/browse/SPARK-43382
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 3.5.0
> Reporter: melin
> Priority: Major
>
> Snowflake data import and export support compressed files (e.g. *.gz). For example:
>
> {code:java}
> COPY INTO @mystage/data.csv.gz
>
> COPY INTO mytable
> FROM @my_ext_stage/tutorials/dataloading/sales.json.gz
> FILE_FORMAT = (TYPE = 'JSON')
> MATCH_BY_COLUMN_NAME = 'CASE_INSENSITIVE';
>
> {code}
> Can Spark directly read archive files?
> {code:java}
> spark.read.json("/tutorials/dataloading/sales.json.gz")
> {code}
> @[~kaifeiYi]
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]