[jira] [Commented] (SPARK-29195) Can't config orc.compress.size option for native ORC writer

2019-09-27 Thread Eric Sun (Jira)


[ https://issues.apache.org/jira/browse/SPARK-29195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939155#comment-16939155 ]

Eric Sun commented on SPARK-29195:
--

It is very likely on the Spark side: in
[https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcFileFormat.scala]
only *MAPRED_OUTPUT_SCHEMA* and *COMPRESS* are set on the writer's job configuration.
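
In the meantime, two routes that may be worth trying (a sketch only; whether the native writer honors either one is unverified, and the app name and output path below are placeholders):

{code:java}
import org.apache.spark.sql.SparkSession

// Route 1: spark.hadoop.* keys are copied into the Hadoop Configuration that
// Spark hands to output formats, so the ORC writer may see the value there.
val spark = SparkSession
  .builder()
  .appName("orc-compress-size-workaround")
  .config("spark.sql.orc.impl", "native")
  .config("spark.hadoop.orc.compress.size", 512 * 1024)
  .getOrCreate()

val df = spark.range(1000).toDF("id")

// Route 2: pass the ORC property as a per-write option; whether OrcFileFormat
// forwards anything beyond the compression codec is exactly what this ticket questions.
df.write
  .option("orc.compress.size", (512 * 1024).toString)
  .orc("hdfs:///tmp/orc_compress_size_test")
{code}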

> Can't config orc.compress.size option for native ORC writer
> ---
>
> Key: SPARK-29195
> URL: https://issues.apache.org/jira/browse/SPARK-29195
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.3.0
> Environment: Linux
> Java 1.8.0
>Reporter: Eric Sun
>Priority: Minor
>  Labels: ORC
>
> Only the compression codec can be effectively configured via code; "orc.compress.size"
> and "orc.row.index.stride" cannot.
>  
> {code:java}
> // Attempted configuration: only the codec takes effect; the buffer-size
> // settings below are ignored by the native ORC writer.
> import org.apache.spark.sql.SparkSession
>
> val spark = SparkSession
>   .builder()
>   .appName(appName)
>   .enableHiveSupport()
>   .config("spark.sql.orc.impl", "native")
>   .config("orc.compress.size", 512 * 1024)
>   .config("spark.sql.orc.compress.size", 512 * 1024)
>   .config("hive.exec.orc.default.buffer.size", 512 * 1024)
>   .config("spark.hadoop.io.file.buffer.size", 512 * 1024)
>   .getOrCreate()
> {code}
> orcfiledump still shows:
>  
> {code:java}
> File Version: 0.12 with FUTURE
> Compression: ZLIB
> Compression size: 65536
> {code}
>  
> Executor Log:
> {code}
> impl.WriterImpl: ORC writer created for path: 
> hdfs://name_node_host:9000/foo/bar/_temporary/0/_temporary/attempt_20190920222359_0001_m_000127_0/part-00127-2a9a9287-54bf-441c-b3cf-718b122d9c2f_00127.c000.zlib.orc
>  with stripeSize: 67108864 blockSize: 268435456 compression: ZLIB bufferSize: 
> 65536
> File Output Committer Algorithm version is 2
> {code}
> According to [SPARK-23342], the other ORC options should be configurable. Is 
> there anything missing here?
> Is there any other way to affect "orc.compress.size"?
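
A quick diagnostic for the settings above (a sketch, assuming the SparkSession built in the snippet from the description; which configuration object the native writer actually consults is exactly what is unclear here):

{code:java}
// Where does each setting end up? The assumption that the native ORC writer
// reads "orc.compress.size" from the Hadoop Configuration is unverified.
println(spark.conf.getOption("orc.compress.size"))                        // SQL/runtime conf
println(spark.sparkContext.hadoopConfiguration.get("orc.compress.size"))  // Hadoop conf, likely null here
println(spark.sparkContext.hadoopConfiguration.get("hive.exec.orc.default.buffer.size"))
{code}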



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29195) Can't config orc.compress.size option for native ORC writer

2019-09-24 Thread Hyukjin Kwon (Jira)


[ https://issues.apache.org/jira/browse/SPARK-29195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936767#comment-16936767 ]

Hyukjin Kwon commented on SPARK-29195:
--

Can you narrow down whether this is a bug in ORC or in Spark?
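
One way to narrow it down is to take Spark out of the picture: write a small file through the ORC core API with the buffer size set directly in the Hadoop Configuration, then inspect it with orcfiledump. A rough sketch (assumes orc-core on the classpath; the output path and schema are arbitrary placeholders):

{code:java}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.orc.{OrcFile, TypeDescription}

// Set the same key the Spark job tried to set, but on a plain Hadoop Configuration.
val conf = new Configuration()
conf.setInt("orc.compress.size", 512 * 1024)

// An empty file is enough for this check; only the footer metadata matters.
val schema = TypeDescription.fromString("struct<x:int>")
val writer = OrcFile.createWriter(
  new Path("/tmp/orc-compress-size-check.orc"),
  OrcFile.writerOptions(conf).setSchema(schema))
writer.close()
{code}

If orcfiledump on that file shows the custom compression size, ORC honors the setting and the problem is on the Spark side; if it still shows 65536, the issue is in ORC itself.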



