Thank you so much! Any update on getting the RC1 up for vote? Jason.
________________________________
From: 郑瑞峰 <[email protected]>
Sent: Wednesday, 5 August 2020 12:54 PM
To: Jason Moore <[email protected]>; Spark dev list <[email protected]>
Subject: Re: [DISCUSS] Apache Spark 3.0.1 Release

Hi all,

I am going to prepare the release of 3.0.1 RC1, with the help of Wenchen.

------------------ Original Message ------------------
From: "Jason Moore" <[email protected]>
Sent: Thursday, 30 July 2020, 10:35 AM
To: "dev" <[email protected]>
Subject: Re: [DISCUSS] Apache Spark 3.0.1 Release

Hi all,

Discussion around 3.0.1 seems to have trickled away. What was blocking the release process from kicking off? I can see some unresolved bugs raised against 3.0.0, but conversely there are quite a few critical correctness fixes waiting to be released.

Cheers,
Jason.

From: Takeshi Yamamuro <[email protected]>
Date: Wednesday, 15 July 2020 at 9:00 am
To: Shivaram Venkataraman <[email protected]>
Cc: "[email protected]" <[email protected]>
Subject: Re: [DISCUSS] Apache Spark 3.0.1 Release

> Just wanted to check if there are any blockers that we are still waiting for
> to start the new release process.

I don't see any ongoing blocker in my area. Thanks for the notification.

Bests,
Takeshi

On Wed, Jul 15, 2020 at 4:03 AM Dongjoon Hyun <[email protected]> wrote:

Hi, Yi.

Could you explain why you think that is a blocker? For the given example from the JIRA description,

    spark.udf.register("key", udf((m: Map[String, String]) => m.keys.head.toInt))
    Seq(Map("1" -> "one", "2" -> "two")).toDF("a").createOrReplaceTempView("t")
    checkAnswer(sql("SELECT key(a) AS k FROM t GROUP BY key(a)"), Row(1) :: Nil)

Apache Spark 3.0.0 seems to work like the following.
    scala> spark.version
    res0: String = 3.0.0

    scala> spark.udf.register("key", udf((m: Map[String, String]) => m.keys.head.toInt))
    res1: org.apache.spark.sql.expressions.UserDefinedFunction = SparkUserDefinedFunction($Lambda$1958/948653928@5d6bed7b,IntegerType,List(Some(class[value[0]: map<string,string>])),None,false,true)

    scala> Seq(Map("1" -> "one", "2" -> "two")).toDF("a").createOrReplaceTempView("t")

    scala> sql("SELECT key(a) AS k FROM t GROUP BY key(a)").collect
    res3: Array[org.apache.spark.sql.Row] = Array([1])

Could you provide a reproducible example?

Bests,
Dongjoon.

On Tue, Jul 14, 2020 at 10:04 AM Yi Wu <[email protected]> wrote:

This could be a blocker: https://issues.apache.org/jira/browse/SPARK-32307

On Tue, Jul 14, 2020 at 11:13 PM Sean Owen <[email protected]> wrote:

https://issues.apache.org/jira/browse/SPARK-32234 ?

On Tue, Jul 14, 2020 at 9:57 AM Shivaram Venkataraman <[email protected]> wrote:
>
> Hi all
>
> Just wanted to check if there are any blockers that we are still waiting for
> to start the new release process.
>
> Thanks
> Shivaram

--
---
Takeshi Yamamuro
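
[Editor's note] For reference, the JIRA example discussed in the thread can be pasted into spark-shell as the self-contained snippet below. This is a sketch assuming Spark 3.0.x, where spark-shell predefines `spark` and its implicits; `checkAnswer` is a helper from Spark's own test suites, so the sketch compares the collected result directly instead.

    // Sketch of the SPARK-32307 reproduction from the thread above,
    // intended for spark-shell (Spark 3.0.x assumed).
    import org.apache.spark.sql.functions.udf

    // UDF that takes a Map column and returns its first key as an Int
    spark.udf.register("key", udf((m: Map[String, String]) => m.keys.head.toInt))

    Seq(Map("1" -> "one", "2" -> "two")).toDF("a").createOrReplaceTempView("t")

    val rows = spark.sql("SELECT key(a) AS k FROM t GROUP BY key(a)").collect()
    // On an unaffected build this matches the thread's output, Array([1])
    assert(rows.map(_.getInt(0)).sameElements(Array(1)))

In a standalone application (rather than spark-shell) you would first build a `SparkSession` and `import spark.implicits._` before the `toDF` call.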
