GitHub user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/6252#discussion_r30571665
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/sources/SqlNewHadoopRDD.scala ---
    @@ -0,0 +1,269 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.sql.sources
    +
    +import java.text.SimpleDateFormat
    +import java.util.Date
    +
    +import scala.reflect.ClassTag
    +
    +import org.apache.hadoop.conf.{Configurable, Configuration}
    +import org.apache.hadoop.io.Writable
    +import org.apache.hadoop.mapreduce._
    +import org.apache.hadoop.mapreduce.lib.input.{CombineFileSplit, FileSplit}
    +
    +import org.apache.spark.{Partition => SparkPartition, _}
    +import org.apache.spark.annotation.DeveloperApi
    +import org.apache.spark.broadcast.Broadcast
    +import org.apache.spark.deploy.SparkHadoopUtil
    +import org.apache.spark.executor.DataReadMethod
    +import org.apache.spark.mapreduce.SparkHadoopMapReduceUtil
    +import org.apache.spark.rdd.{HadoopRDD, RDD}
    +import org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD
    +import org.apache.spark.storage.StorageLevel
    +import org.apache.spark.util.Utils
    +
    +private[spark] class SqlNewHadoopPartition(
    +    rddId: Int,
    +    val index: Int,
    +    @transient rawSplit: InputSplit with Writable)
    +  extends SparkPartition {
    +
    +  val serializableHadoopSplit = new SerializableWritable(rawSplit)
    +
    +  override def hashCode(): Int = 41 * (41 + rddId) + index
    +}
    +
    +/**
    + * :: DeveloperApi ::
    + * An RDD that provides core functionality for reading data stored in Hadoop (e.g., files in
    + * HDFS, sources in HBase, or S3), using the new MapReduce API (`org.apache.hadoop.mapreduce`).
    + * It is based on [[org.apache.spark.rdd.NewHadoopRDD]]. It has three additions:
    + * 1. A shared broadcast Hadoop Configuration.
    + * 2. An optional closure `initDriverSideJobFuncOpt` that sets configurations at the driver side
    + *    to the shared Hadoop Configuration.
    + * 3. An optional closure `initLocalJobFuncOpt` that sets configurations at both the driver side
    + *    and the executor side to the shared Hadoop Configuration.
    + *
    + * @param sc The SparkContext to associate the RDD with.
    + * @param inputFormatClass Storage format of the data to be read.
    + * @param keyClass Class of the key associated with the inputFormatClass.
    + * @param valueClass Class of the value associated with the inputFormatClass.
    + * @param broadcastedConf The shared, broadcast Hadoop Configuration.
    + */
    +@DeveloperApi
    +class SqlNewHadoopRDD[K, V](
    +    @transient sc: SparkContext,
    +    broadcastedConf: Broadcast[SerializableWritable[Configuration]],
    +    @transient initDriverSideJobFuncOpt: Option[Job => Unit],
    +    initLocalJobFuncOpt: Option[Job => Unit],
    +    inputFormatClass: Class[_ <: InputFormat[K, V]],
    +    keyClass: Class[K],
    +    valueClass: Class[V])
    +  extends RDD[(K, V)](sc, Nil)
    +  with SparkHadoopMapReduceUtil
    +  with Logging {
    +
    +  if (initLocalJobFuncOpt.isDefined) {
    +    sc.clean(initLocalJobFuncOpt.get)
    +  }
    --- End diff --
    
    I don't have the full context here, but if we create many `SqlNewHadoopRDD`s 
it might be worthwhile not to clean this in the constructor, and instead assume 
the caller cleans it if necessary. For instance, we know that the closure 
created in `ParquetRelation2.initializeLocalJobFunc` is definitely serializable 
(because the code is in Spark), so we don't need to clean it. This might buy us 
a few seconds (same reasoning as SPARK-7718, or #6256).
    
    On the other hand, because this is a developer API, if we do this we should 
add a prominent comment stating that the constructor assumes the closure it 
takes in is either already serializable or already cleaned.
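
    Something like this, just as a sketch of the wording (exact placement and 
phrasing up to you):

```scala
/**
 * Note: this constructor does NOT clean `initLocalJobFuncOpt`. The closure
 * passed in must either be serializable as-is, or have been cleaned by the
 * caller (e.g. via `SparkContext#clean`) before this RDD is constructed.
 */
```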
    
    By the way, this is definitely not critical for this release; it's just a 
potential performance optimization to try out if you're looking for one.

