[GitHub] [spark] cloud-fan commented on a change in pull request #25460: [SPARK-25474][SQL][FOLLOW-UP] fallback to hdfs when relation table stats is not available

2019-08-16 Thread GitBox
cloud-fan commented on a change in pull request #25460: [SPARK-25474][SQL][FOLLOW-UP] fallback to hdfs when relation table stats is not available
URL: https://github.com/apache/spark/pull/25460#discussion_r314592339
 
 

 ##
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/HadoopFsRelation.scala
 ##
 @@ -72,7 +72,8 @@ case class HadoopFsRelation(
     val compressionFactor = sqlContext.conf.fileCompressionFactor
     val defaultSize = (location.sizeInBytes * compressionFactor).toLong
     location match {
-      case cfi: CatalogFileIndex if sparkSession.sessionState.conf.fallBackToHdfsForStatsEnabled =>
+      case cfi: CatalogFileIndex if sparkSession.sessionState.conf.fallBackToHdfsForStatsEnabled &&
+          location.sizeInBytes == sqlContext.conf.defaultSizeInBytes =>
 Review comment:
  The point @maropu and I were making is that `location.sizeInBytes == sqlContext.conf.defaultSizeInBytes` doesn't mean the table stats are unavailable. `sqlContext.conf.defaultSizeInBytes` is configurable, and the table stats may happen to equal it, in which case we shouldn't fall back to HDFS.
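  To make the ambiguity concrete, here is a minimal, self-contained sketch (the numbers are made up, not taken from the PR): the size a CatalogFileIndex reports when stats are missing and the size coming from real stats can coincide, so an equality check against the configured default cannot tell the two cases apart.

    // Illustrative only: both sizes compare equal to the configured default,
    // so a guard like `location.sizeInBytes == sqlContext.conf.defaultSizeInBytes`
    // would also trigger the HDFS fallback for a table that does have stats.
    object DefaultSizeAmbiguity {
      def main(args: Array[String]): Unit = {
        val defaultSizeInBytes = 8L * 1024 * 1024      // hypothetical value of spark.sql.defaultSizeInBytes
        val sizeWhenStatsMissing = defaultSizeInBytes  // stats missing: the configured default is reported
        val sizeFromRealStats = 8L * 1024 * 1024       // real stats that just happen to equal the default

        println(sizeWhenStatsMissing == defaultSizeInBytes)  // true
        println(sizeFromRealStats == defaultSizeInBytes)     // true (false positive for the fallback)
      }
    }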




[GitHub] [spark] cloud-fan commented on a change in pull request #25460: [SPARK-25474][SQL][FOLLOW-UP] fallback to hdfs when relation table stats is not available

2019-08-15 Thread GitBox
cloud-fan commented on a change in pull request #25460: [SPARK-25474][SQL][FOLLOW-UP] fallback to hdfs when relation table stats is not available
URL: https://github.com/apache/spark/pull/25460#discussion_r314559804
 
 

 ##
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/HadoopFsRelation.scala
 ##
 @@ -72,7 +72,8 @@ case class HadoopFsRelation(
     val compressionFactor = sqlContext.conf.fileCompressionFactor
     val defaultSize = (location.sizeInBytes * compressionFactor).toLong
     location match {
-      case cfi: CatalogFileIndex if sparkSession.sessionState.conf.fallBackToHdfsForStatsEnabled =>
+      case cfi: CatalogFileIndex if sparkSession.sessionState.conf.fallBackToHdfsForStatsEnabled
+        && defaultSize == sqlContext.conf.defaultSizeInBytes =>
 
 Review comment:
  Ah, good point! Basically there is no way to tell whether the table stats are available at this point. `sqlContext.conf.defaultSizeInBytes` is configurable, and the table stats may just happen to equal it.
  
  #24715 seems to be able to fix this.
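  A check that reads the catalog metadata directly would avoid the coincidence problem; a rough sketch, assuming the CatalogTable backing the CatalogFileIndex is in scope, and not claiming this is what #24715 does:

    import org.apache.spark.sql.catalyst.catalog.CatalogTable

    // Rough sketch: decide the fallback from the presence of catalog stats
    // instead of comparing a size against the configured default.
    def shouldFallBackToHdfs(catalogTable: CatalogTable, fallBackEnabled: Boolean): Boolean = {
      // `stats` is None when nothing (e.g. ANALYZE TABLE) has populated them.
      val statsAvailable = catalogTable.stats.exists(_.sizeInBytes > 0)
      fallBackEnabled && !statsAvailable
    }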

