Hi,
I see that for MOR tables, only the Read Optimized view is supported for the
Spark data source; Snapshot and Incremental queries are not supported.
Just curious whether support for Snapshot and Incremental queries in the
Spark data source is work in progress? If so, could
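For context, reading the supported Read Optimized view today can be sketched roughly as below. This is a hedged sketch, not confirmed against the thread: the option key `hoodie.datasource.view.type` and the path glob are assumptions based on the 0.5.x-era `DataSourceReadOptions`; verify them against your Hudi version.

```scala
// Hedged sketch: reading a Hudi MOR table's Read Optimized view through the
// Spark data source. "hoodie.datasource.view.type" and the partition glob in
// the load path are assumptions for the 0.5.x line; check your release.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("HudiReadDemo").getOrCreate()

val df = spark.read
  .format("org.apache.hudi")
  .option("hoodie.datasource.view.type", "read_optimized") // only view supported for MOR here
  .load("/path/to/hudi/table/*/*")                         // hypothetical base path + partition glob
df.show()
```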
https://cwiki.apache.org/confluence/display/HUDI/20200526+Weekly+Sync+Minutes
Thanks
Vinoth
Hi evanzhao,
Done and welcome!
Best,
Vino
赵延军 wrote on Tue, May 26, 2020 at 9:41 PM:
> Hi,
>
> I want to contribute to Apache Hudi. Would you please give me the
> contributor permission? My JIRA ID is evanzhao.
>
Hi Vinoth,
I see the below comment in the Hudi code. How can I start using the metastore
client for Hive registrations? Is there a way to disable the useJdbc flag?
// Support both JDBC and metastore based implementations for backwards
// compatibility. Future users should disable jdbc and depend on
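One way this is commonly done is through the datasource Hive-sync write options. A hedged sketch follows, assuming your Hudi version exposes a `hoodie.datasource.hive_sync.use_jdbc` option (check `DataSourceWriteOptions` / `HiveSyncConfig` in your release); the table, database, and path names are illustrative.

```scala
// Hedged sketch: turning off JDBC-based Hive sync so registrations go through
// the metastore client. The "use_jdbc" key is an assumption for recent Hudi
// versions; "demo_table", "default", and the save path are hypothetical.
df.write
  .format("org.apache.hudi")
  .option("hoodie.datasource.hive_sync.enable", "true")
  .option("hoodie.datasource.hive_sync.use_jdbc", "false") // prefer metastore client
  .option("hoodie.datasource.hive_sync.database", "default")
  .option("hoodie.datasource.hive_sync.table", "demo_table")
  .mode("append")
  .save("/path/to/hudi/table")
```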
I added a comment in this wiki. Hope this works. Thanks.
On Sun, May 24, 2020 at 2:32 AM Vinoth Chandar wrote:
> Great team work everyone!
>
> Anything worth documenting here?
> https://cwiki.apache.org/confluence/display/HUDI/Troubleshooting+Guide
>
> On Thu, May 21, 2020 at 11:02 PM Lian
Hi,
I want to contribute to Apache Hudi. Would you please give me the contributor
permission? My JIRA ID is evanzhao.
One more detail: when using an apache.org mailbox, images don't come through.
Thanks
On 2020/05/26 07:23:02, "gaofeng5...@capinfo.com.cn"
wrote:
>
>
>
> The Spark version is 2.3.2.3.1.0.0-78, and the submitted code is:
> def main(args: Array[String]): Unit = {
> val spark = SparkSession.builder.appName("Demo")
> .config("spark.serializer",
Hi, since hudi-0.5.2, Spark 2.4.4 is the recommended version. In Spark 2.4.4,
spark-avro was split out into a separate module and has to be added to the
jars directory manually. Hudi is not compatible with Spark 2.3.3.
If you have other questions, contact me on WeChat (19941866946) to join the
Chinese-language WeChat group.
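The manual spark-avro step above can be handled at launch time instead of copying jars. A hedged sketch, matching the Hudi 0.5.2 quickstart pattern; the bundle and Scala versions below are illustrative and should be matched to your release:

```shell
# Hedged sketch: starting spark-shell on Spark 2.4.4 with spark-avro pulled in
# explicitly, since it ships as a separate module from Spark 2.4 onward.
# Versions (0.5.2-incubating, _2.11, 2.4.4) are illustrative.
spark-shell \
  --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.2-incubating,org.apache.spark:spark-avro_2.11:2.4.4 \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
```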
The Spark version is 2.3.2.3.1.0.0-78, and the submitted code is:
def main(args: Array[String]): Unit = {
  val spark = SparkSession.builder.appName("Demo")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .master("local[3]")
    .getOrCreate()
  //insert(spark)
  update(spark)
  query(spark)
}
Thanks a lot, understood!
On 2020/05/24 06:53:01, Vinoth Chandar wrote:
> Hi,
>
> Right now, the archive folder is just that: an archive. It's there so that
> you can use the CLI and trace audit history if needed.
> Hudi operations all use the active timeline only (i.e. the files you see on
>