Hi, everyone
I have some questions about creating a datasource table.
In HiveExternalCatalog.createDataSourceTable,
newSparkSQLSpecificMetastoreTable replaces the table schema with
EMPTY_DATA_SCHEMA plus table.partitionSchema.
So, why do we use EMPTY_DATA_SCHEMA? Why not declare the schema
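For context, here is my (assumed, simplified) mental model of what that replacement does, sketched in plain Python rather than Spark's actual Scala code: the real data schema is serialized into the table properties, and the Hive metastore only ever sees a placeholder column plus the partition columns, presumably because Hive cannot represent every Spark SQL type. The function and property names below are illustrative, not the real identifiers.

```python
import json

# Hypothetical sketch (not Spark's actual implementation): the data source
# table's real schema is stored as JSON in table properties, while the
# schema handed to the Hive metastore is a placeholder column plus the
# partition columns.
def new_metastore_table(data_schema, partition_schema):
    properties = {
        "spark.sql.sources.schema": json.dumps(data_schema + partition_schema)
    }
    # Placeholder standing in for EMPTY_DATA_SCHEMA: a single dummy column.
    empty_data_schema = [{"name": "col", "type": "array<string>"}]
    metastore_schema = empty_data_schema + partition_schema
    return metastore_schema, properties

schema, props = new_metastore_table(
    data_schema=[{"name": "id", "type": "bigint"}],
    partition_schema=[{"name": "dt", "type": "string"}],
)
print(schema)  # placeholder column + partition column
print(json.loads(props["spark.sql.sources.schema"]))  # full real schema
```

Under this model, the metastore schema is deliberately not the real one; readers recover the true schema from the table properties.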
Thank you, I will check it out.
On Mon, Dec 17, 2018 at 9:00 PM Hyukjin Kwon wrote:
> Please take a look at https://spark.apache.org/contributing.html . It
> contains virtually all the information you need for contributions.
>
> On Tue, Dec 18, 2018 at 3:54 AM, Raghunadh Madamanchi <
> mailto.raghun...@g
Please take a look at https://spark.apache.org/contributing.html . It
contains virtually all the information you need for contributions.
On Tue, Dec 18, 2018 at 3:54 AM, Raghunadh Madamanchi wrote:
> Hi,
>
> I am Raghu, and I live in Dallas, TX.
> I have 15+ years of experience in Software Development and Des
HyukjinKwon commented on issue #162: Add a note about Spark build requirement
at PySpark testing guide in Developer Tools
URL: https://github.com/apache/spark-website/pull/162#issuecomment-448075651
adding @squito as well FYI
HyukjinKwon opened a new pull request #162: Add a note about Spark build
requirement at PySpark testing guide in Developer Tools
URL: https://github.com/apache/spark-website/pull/162
I received some feedback via private email about running PySpark tests.
Unlike SBT or Maven testing, PySpa
HyukjinKwon commented on issue #162: Add a note about Spark build requirement
at PySpark testing guide in Developer Tools
URL: https://github.com/apache/spark-website/pull/162#issuecomment-448075198
adding @cloud-fan and @srowen.
Hi,
I am Raghu, and I live in Dallas, TX.
I have 15+ years of experience in Software Development and Design using
Java-related technologies, Hadoop, Hive, etc.
I would like to get involved with this group by contributing my knowledge.
Please let me know if you have something I can start working on
Hi,
On Sun, Dec 16, 2018 at 4:43 AM Wenchen Fan wrote:
> Shall we include Parquet and ORC? If they don't support it, it's hard for
> general query engines like Spark to support it.
For each of the more explicit timestamp types, we propose a single
semantics regardless of the file format. Query
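To illustrate the distinction at stake (a generic sketch in Python, not tied to Parquet, ORC, or Spark's actual type system): an instant timestamp pins one point on the global timeline and renders differently per session time zone, while a wall-clock ("local") timestamp carries no zone and reads the same everywhere.

```python
from datetime import datetime, timezone, timedelta

# Instant semantics: the value identifies one point on the global timeline,
# so the displayed wall-clock time depends on the session time zone.
instant = datetime(2018, 12, 17, 12, 0, 0, tzinfo=timezone.utc)
in_utc_minus_6 = instant.astimezone(timezone(timedelta(hours=-6)))
print(in_utc_minus_6.hour)  # 6: same instant, different wall clock

# Wall-clock ("local") semantics: the value carries no time zone at all,
# so it reads as 12:00 everywhere; comparing it to an instant is undefined.
local = datetime(2018, 12, 17, 12, 0, 0)  # naive datetime
print(local.tzinfo is None)  # True
```

Giving each proposed timestamp type exactly one of these semantics, independent of the file format, is what would let an engine interpret the stored values unambiguously.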