GitHub user gatorsmile commented on the issue:

    https://github.com/apache/spark/pull/14625
  
    The original purpose of these Hive test sets is to verify the query results and behavior of [Auto Join Conversion](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+JoinOptimization#LanguageManualJoinOptimization-OptimizeAutoJoinConversion).
  
    
    This is not applicable to Spark SQL. I assume their current purpose in Spark is just to check whether the query results match the outputs of Hive.
    
    Do you want me to remove them completely, or reduce the data set to around 5 records and keep the queries untouched?
    


