This is an automated email from the ASF dual-hosted git repository.
wenchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new 693d008de0e4 [SPARK-49808][SQL] Fix a deadlock in subquery execution due to lazy vals
693d008de0e4 is described below
commit 693d008de0e407183a278705dc40e0ca64e49053
Author: Ruifeng Zheng <[email protected]>
AuthorDate: Fri Oct 18 16:40:06 2024 +0800
[SPARK-49808][SQL] Fix a deadlock in subquery execution due to lazy vals
### What changes were proposed in this pull request?
1. Introduce a helper class `TransientLazy` to replace the problematic lazy vals
2. Fix a deadlock in subquery execution
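As a rough illustration of the pattern (the holder below is a simplified stand-in for the `TransientLazy` class added in this patch; `MyPlanNode` and `computeReferences` are hypothetical names, not actual Spark code):

```scala
// Simplified stand-in for the added TransientLazy helper: the lazy val lives on
// this small holder, so dereferencing it synchronizes on the holder instance
// rather than on the object that owns the field.
class LazyHolder[T](initializer: => T) extends Serializable {
  @transient private[this] lazy val value: T = initializer
  def apply(): T = value
}

// Hypothetical usage: a former `lazy val` on a plan node becomes a `def`
// backed by a per-field holder, so it no longer locks the plan node itself.
class MyPlanNode {
  // before: @transient lazy val references: Set[String] = computeReferences()
  def references: Set[String] = _references()
  private val _references = new LazyHolder(computeReferences())
  private def computeReferences(): Set[String] = Set("a", "b")
}
```

Because the cached field is `@transient`, the value is simply recomputed on first access after deserialization.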
### Why are the changes needed?
We observed a deadlock between `QueryPlan.canonicalized` and `QueryPlan.references`:
The main thread, in `TakeOrderedAndProject.doExecute`, tries to compute `outputOrdering`; it traverses the tree top-down and acquires the lock guarding `QueryPlan.canonicalized` on each node along the path. In this deadlock it has already obtained the lock on `WholeStageCodegenExec` and is waiting for the lock on `HashAggregateExec`.
Concurrently, a subquery execution thread performing code generation traverses the tree bottom-up via `def consume`, which checks `WholeStageCodegenExec.usedInputs` and dereferences the lazy val `QueryPlan.references`; it acquires the lock guarding `QueryPlan.references` on each node along the path. In this deadlock it has already obtained the lock on `HashAggregateExec` and is waiting for the lock on `WholeStageCodegenExec`.
This happens because a Scala lazy val internally calls `this.synchronized` on the instance that contains the val, so all lazy vals on the same object share one monitor, which creates the potential for deadlocks.
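A minimal sketch of the mechanism under Scala 2 (hypothetical `Node`, `canon`, and `refs`, not the actual Spark classes): each lazy val initializer runs while holding the enclosing object's monitor, so a top-down and a bottom-up traversal can block each other in opposite lock order.

```scala
// Hypothetical reproduction sketch, not Spark code: two lazy vals on the same
// node share that node's monitor under Scala 2.x lazy vals.
class Node(name: String) {
  var child: Node = null
  var parent: Node = null

  lazy val canon: String = {            // stands in for QueryPlan.canonicalized
    Thread.sleep(100)                   // widen the race window
    if (child != null) name + "/" + child.canon else name
  }

  lazy val refs: String = {             // stands in for QueryPlan.references
    Thread.sleep(100)
    if (parent != null) name + "/" + parent.refs else name
  }
}

object DeadlockDemo {
  def main(args: Array[String]): Unit = {
    val top = new Node("WholeStageCodegenExec")
    val bottom = new Node("HashAggregateExec")
    top.child = bottom
    bottom.parent = top

    val t1 = new Thread(() => top.canon)    // locks top, then wants bottom
    val t2 = new Thread(() => bottom.refs)  // locks bottom, then wants top
    t1.start(); t2.start()
    t1.join(); t2.join()                    // may never return if the two threads interleave
  }
}
```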
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
Manually tested:
before the fix, the deadlock happened twice in the first 20 runs;
after the fix, the deadlock did not happen in 100+ consecutive runs
### Was this patch authored or co-authored using generative AI tooling?
no
Closes #48391 from zhengruifeng/query_plan_lazy_ref.
Authored-by: Ruifeng Zheng <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
---
.../org/apache/spark/util/TransientLazy.scala | 43 ++++++++++++++++
.../org/apache/spark/util/TransientLazySuite.scala | 58 ++++++++++++++++++++++
.../spark/sql/catalyst/plans/QueryPlan.scala | 10 ++--
3 files changed, 107 insertions(+), 4 deletions(-)
diff --git a/core/src/main/scala/org/apache/spark/util/TransientLazy.scala b/core/src/main/scala/org/apache/spark/util/TransientLazy.scala
new file mode 100644
index 000000000000..2833ef93669a
--- /dev/null
+++ b/core/src/main/scala/org/apache/spark/util/TransientLazy.scala
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.util
+
+/**
+ * Construct to lazily initialize a variable.
+ * This may be helpful for avoiding deadlocks in certain scenarios. For example,
+ * a) Thread 1 entered a synchronized method, grabbing a coarse lock on the parent object.
+ * b) Thread 2 gets spawned off, and tries to initialize a lazy value on the same parent object
+ * (in our case, this was the logger). This causes scala to also try to grab a coarse lock on
+ * the parent object.
+ * c) If thread 1 waits for thread 2 to join, a deadlock occurs.
+ * The main difference between this and [[LazyTry]] is that this does not cache failures.
+ *
+ * @note
+ * Scala 3 uses a different implementation of lazy vals which doesn't have this problem.
+ * Please refer to <a
+ * href="https://docs.scala-lang.org/scala3/reference/changed-features/lazy-vals-init.html">Lazy
+ * Vals Initialization</a> for more details.
+ */
+private[spark] class TransientLazy[T](initializer: => T) extends Serializable {
+
+ @transient
+ private[this] lazy val value: T = initializer
+
+ def apply(): T = {
+ value
+ }
+}
diff --git a/core/src/test/scala/org/apache/spark/util/TransientLazySuite.scala b/core/src/test/scala/org/apache/spark/util/TransientLazySuite.scala
new file mode 100644
index 000000000000..c0754ee063d6
--- /dev/null
+++ b/core/src/test/scala/org/apache/spark/util/TransientLazySuite.scala
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.util
+
+import java.io.{ByteArrayOutputStream, ObjectOutputStream}
+
+import org.apache.spark.SparkFunSuite
+
+class TransientLazySuite extends SparkFunSuite {
+
+ test("TransientLazy val works") {
+ var test: Option[Object] = None
+
+ val lazyval = new TransientLazy({
+ test = Some(new Object())
+ test
+ })
+
+ // Ensure no initialization happened before the lazy value was dereferenced
+ assert(test.isEmpty)
+
+ // Ensure the first invocation creates a new object
+ assert(lazyval() == test && test.isDefined)
+
+ // Ensure the subsequent invocation serves the same object
+ assert(lazyval() == test && test.isDefined)
+ }
+
+ test("TransientLazy val is serializable") {
+ val lazyval = new TransientLazy({
+ new Object()
+ })
+
+ // Ensure serializable before the dereference
+ val oos = new ObjectOutputStream(new ByteArrayOutputStream())
+ oos.writeObject(lazyval)
+
+ val dereferenced = lazyval()
+
+ // Ensure serializable after the dereference
+ val oos2 = new ObjectOutputStream(new ByteArrayOutputStream())
+ oos2.writeObject(lazyval)
+ }
+}
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlan.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlan.scala
index 3f417644082c..9418bf298b29 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlan.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlan.scala
@@ -32,6 +32,7 @@ import org.apache.spark.sql.catalyst.trees.TreePatternBits
import org.apache.spark.sql.catalyst.types.DataTypeUtils
import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.sql.types.{DataType, StructType}
+import org.apache.spark.util.TransientLazy
import org.apache.spark.util.collection.BitSet
/**
@@ -94,10 +95,11 @@ abstract class QueryPlan[PlanType <: QueryPlan[PlanType]]
* All Attributes that appear in expressions from this operator. Note that this set does not
* include attributes that are implicitly referenced by being passed through to the output tuple.
*/
- @transient
- lazy val references: AttributeSet = {
- AttributeSet.fromAttributeSets(expressions.map(_.references)) -- producedAttributes
- }
+ def references: AttributeSet = _references()
+
+ private val _references = new TransientLazy({
+ AttributeSet(expressions) -- producedAttributes
+ })
/**
* Returns true when the all the expressions in the current node as well as all of its children
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]