[ https://issues.apache.org/jira/browse/SPARK-8008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14568047#comment-14568047 ]

Michael Armbrust commented on SPARK-8008:
-----------------------------------------

Out of curiosity, if you are not caching and going directly into a shuffle, is 
this actually bad for memory consumption?  Do we not stream into the shuffle 
files?
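
For concreteness, a minimal sketch of the scenario under discussion: a 
partitioned JDBC read feeding a shuffle with no cache() in between. The 
connection URL, table, and column names below are made up, and the comments 
only restate the open question, not a confirmed answer.

{code:scala}
// Sketch only: a partitioned JDBC read going straight into a shuffle, with no
// cache()/persist() in between. URL, table, and column names are made up.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setAppName("jdbc-shuffle-sketch"))
val sqlContext = new SQLContext(sc)

val orders = sqlContext.read
  .format("jdbc")
  .option("url", "jdbc:mysql://db-host:3306/shop")
  .option("dbtable", "orders")
  .option("partitionColumn", "id")     // numeric column used to split the table
  .option("lowerBound", "1")
  .option("upperBound", "10000000")
  .option("numPartitions", "200")      // up to 200 concurrent JDBC connections
  .load()

// Each task pulls rows from its JDBC ResultSet and feeds them into the shuffle
// write for the groupBy. Whether those rows are streamed into the shuffle files
// or buffered in memory first is exactly the question raised above.
orders.groupBy("customer_id").count().write.parquet("/tmp/per_customer_counts")
{code}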

> JDBC data source can overload the external database system due to high 
> concurrency
> ----------------------------------------------------------------------------------
>
>                 Key: SPARK-8008
>                 URL: https://issues.apache.org/jira/browse/SPARK-8008
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: Rene Treffer
>
> Spark tries to load as many partitions as possible in parallel, which can 
> overload the database even though all partitions could be loaded successfully 
> at a lower concurrency.
> It would be nice to either limit the maximum concurrency or at least warn 
> about this behavior.
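
One possible workaround until there is a first-class concurrency limit is 
sketched below. It assumes that a narrow coalesce() folds several JDBC 
partitions into a single task, so each task reads its splits sequentially and 
the number of concurrently open connections is roughly bounded by the coalesced 
partition count; the option values and connection details are illustrative only.

{code:scala}
// Sketch of a workaround: keep fine-grained JDBC range splits for balance, but
// cap read concurrency by coalescing to fewer tasks. Connection details made up.
import org.apache.spark.sql.SQLContext

def snapshotOrders(sqlContext: SQLContext): Unit = {
  val raw = sqlContext.read
    .format("jdbc")
    .option("url", "jdbc:mysql://db-host:3306/shop")
    .option("dbtable", "orders")
    .option("partitionColumn", "id")
    .option("lowerBound", "1")
    .option("upperBound", "10000000")
    .option("numPartitions", "500")    // 500 range splits against the table
    .load()

  // coalesce() without a shuffle merges the 500 splits into 16 tasks; each task
  // reads its splits one after another, so roughly 16 JDBC connections should be
  // open at once (assuming the cluster can run 16 tasks concurrently).
  raw.coalesce(16).write.parquet("/tmp/orders_snapshot")
}
{code}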


