Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5868#discussion_r30093761
--- Diff: core/src/main/scala/org/apache/spark/shuffle/unsafe/UnsafeShuffleManager.scala ---
@@ -0,0 +1,178 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.shuffle.unsafe
+
+import org.apache.spark._
+import org.apache.spark.serializer.Serializer
+import org.apache.spark.shuffle._
+import org.apache.spark.shuffle.sort.SortShuffleManager
+
+/**
+ * Subclass of [[BaseShuffleHandle]], used to identify when we've chosen to use the new shuffle.
+ */
+private class UnsafeShuffleHandle[K, V](
+ shuffleId: Int,
+ override val numMaps: Int,
+ override val dependency: ShuffleDependency[K, V, V])
+ extends BaseShuffleHandle(shuffleId, numMaps, dependency) {
+}
+
+private[spark] object UnsafeShuffleManager extends Logging {
+ /**
+ * Helper method for determining whether a shuffle should use the optimized unsafe shuffle
+ * path or whether it should fall back to the original sort-based shuffle.
+ */
+ def canUseUnsafeShuffle[K, V, C](dependency: ShuffleDependency[K, V, C]): Boolean = {
+ val shufId = dependency.shuffleId
+ val serializer = Serializer.getSerializer(dependency.serializer)
+ if (!serializer.supportsRelocationOfSerializedObjects) {
+ log.debug(s"Can't use UnsafeShuffle for shuffle $shufId because the
serializer, " +
--- End diff --
I considered this, but I worry that this will result in extremely chatty
logs because many operations won't be able to use this new shuffle yet. For
example, this would trigger a warning whenever `reduceByKey` is used.
This is a tricky issue, especially as the number of special-case shuffle
optimizations grows. It will be very easy for users to slightly change their
programs in ways that trigger slower code paths (e.g. by switching from LZF to
LZ4 compression). Conversely, small changes can also yield huge secondary
performance benefits in non-obvious ways: if a user were to switch from LZ4 to
LZF, the current code would hit a more efficient shuffle merge path and might
exhibit huge speed-ups, yet the user might attribute this to LZF being faster
or offering better compression in general, when it's really the optimized merge
path, enabled by LZF's concatenability, that is responsible for the speed-up.
This is a general issue that's probably worth exploring as part of a broader
discussion of how to expose internal knowledge of performance optimizations
back to end users.
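
One possible middle ground for the chattiness concern (just a sketch, not part of
this patch; the `FallbackReasonLogger` object and its `logFallback` helper are
hypothetical names) would be to warn once per distinct fallback reason and drop to
debug afterwards, reusing the same `Logging` trait the diff already mixes in:

```scala
package org.apache.spark.shuffle.unsafe

import java.util.concurrent.ConcurrentHashMap

import org.apache.spark.Logging

// Hypothetical sketch, not part of this patch: log each distinct fallback
// reason at WARN the first time it is seen in this JVM and at DEBUG afterwards.
private[spark] object FallbackReasonLogger extends Logging {

  // Fallback reasons we have already warned about.
  private val seenReasons = new ConcurrentHashMap[String, java.lang.Boolean]()

  def logFallback(shuffleId: Int, reason: String): Unit = {
    // putIfAbsent returns null only for the first caller to record this reason.
    val firstTime = seenReasons.putIfAbsent(reason, java.lang.Boolean.TRUE) == null
    if (firstTime) {
      logWarning(s"Shuffle $shuffleId is falling back to sort-based shuffle: $reason")
    } else {
      logDebug(s"Shuffle $shuffleId is falling back to sort-based shuffle: $reason")
    }
  }
}
```

That would keep the first occurrence of each reason visible without flooding the
logs for jobs that fall back on every stage (e.g. every `reduceByKey`).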