GitHub user andrewor14 commented on the pull request:

    https://github.com/apache/spark/pull/2363#issuecomment-55351110
  
    @tgravescs Sorry I wasn't clear. There are tons of other configs in Spark
that we do not intend to expose, like `spark.cleaner.referenceTracking` and
`spark.storage.blockManagerSlaveTimeoutMs`. These exist mainly as backdoors
for features we added, in case things go wrong, and are not intended for the
average user to configure. There are already too many tunables in Spark, and
exposing more than the essential configs may confuse users. Hadoop, for
example, exposes hundreds of configs in each component, and we are making an
effort to avoid something similar.
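
    For context, here is a minimal sketch (not from this PR) of how such a
backdoor config is flipped today through `SparkConf`; the timeout value
below is illustrative, not the actual default:

```scala
import org.apache.spark.SparkConf

// Internal configs are ordinary string keys on SparkConf; nothing in the
// API marks them as internal, which is why they simply stay undocumented.
val conf = new SparkConf()
  .setAppName("example")
  // Backdoor switch: turn off ContextCleaner reference tracking if the
  // feature misbehaves.
  .set("spark.cleaner.referenceTracking", "false")
  // Backdoor knob: bump the slave timeout (value is illustrative only).
  .set("spark.storage.blockManagerSlaveTimeoutMs", "120000")
```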
    
    As for renaming the configs to `*.internal`, I don't feel strongly for or
against it. Though if we plan to do that, then we also need to do the same
for all the existing configs that we haven't exposed, and that is definitely
outside the scope of this PR. The main motivation of this patch is to make a
few particular test suites less flaky. Perhaps we can file a separate JIRA
to fix this.

