[jira] [Updated] (SPARK-29542) [SQL][DOC] The descriptions of `spark.sql.files.*` are confused.

2019-10-22 Thread feiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

feiwang updated SPARK-29542:

Description: 
Hi, the description of `spark.sql.files.maxPartitionBytes` is shown below.

{code:java}
The maximum number of bytes to pack into a single partition when reading files.
{code}

This wording suggests that, in Spark SQL, each partition processes at most that
many bytes.

As shown in the attachment, spark.sql.files.maxPartitionBytes is set to 128MB.
For stage 1, the input is 16.3TB, but there are only 6400 tasks, i.e. about
2.6GB per task. If the limit were enforced for this scan, reading 16.3TB would
require on the order of 130,000 tasks.

I checked the code: the setting only takes effect for data source tables
(file-based scans), so its description is misleading.
The same applies to all the descriptions of `spark.sql.files.*`.
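
To make the mismatch concrete, here is a minimal sketch (not part of the
original report; the Parquet path and Hive table name are hypothetical)
contrasting a file-based scan, where the setting is honored, with a Hive table
scan, where it is not:

{code:scala}
import org.apache.spark.sql.SparkSession

object MaxPartitionBytesDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("maxPartitionBytes-demo")
      .enableHiveSupport()
      .getOrCreate()

    // Cap data source file splits at 128MB (the default in 2.4).
    spark.conf.set("spark.sql.files.maxPartitionBytes", "134217728")

    // File-based (data source) scan: split sizing honors the setting,
    // so the task count grows roughly as input size / 128MB.
    val fileDf = spark.read.parquet("/path/to/parquet") // hypothetical path
    println(s"file scan partitions: ${fileDf.rdd.getNumPartitions}")

    // Hive table scan: splits come from the Hadoop InputFormat configuration,
    // which does not consult spark.sql.files.maxPartitionBytes; this is the
    // behavior reported above.
    val hiveDf = spark.sql("SELECT * FROM db.some_hive_table") // hypothetical
    println(s"hive scan partitions: ${hiveDf.rdd.getNumPartitions}")

    spark.stop()
  }
}
{code}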

  was:
Hi, the description of `spark.sql.files.maxPartitionBytes` is shown below.

{code:java}
The maximum number of bytes to pack into a single partition when reading files.
{code}

This wording suggests that, in Spark SQL, each partition processes at most that
many bytes.

As shown in the attachment, spark.sql.files.maxPartitionBytes is set to 128MB.
For stage 1, the input is 16.3TB, but there are only 6400 tasks.

I checked the code: the setting only takes effect for data source tables, so
its description is misleading.


> [SQL][DOC] The descriptions of `spark.sql.files.*` are confused.
> 
>
> Key: SPARK-29542
> URL: https://issues.apache.org/jira/browse/SPARK-29542
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation
>Affects Versions: 2.4.4
>Reporter: feiwang
>Priority: Minor
> Attachments: screenshot-1.png
>
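
One way to resolve the confusion would be to state the scope explicitly in each
`spark.sql.files.*` entry, for example (suggested wording only, not committed
documentation text):

{code:java}
The maximum number of bytes to pack into a single partition when reading files.
This configuration is effective only when using file-based sources such as
Parquet, JSON and ORC.
{code}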






[jira] [Updated] (SPARK-29542) [SQL][DOC] The descriptions of `spark.sql.files.*` are confused.

2019-10-22 Thread feiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

feiwang updated SPARK-29542:

Summary: [SQL][DOC] The descriptions of `spark.sql.files.*` are confused.  
(was: [DOC] The description of `spark.sql.files.maxPartitionBytes` is confused.)



