[GitHub] spark pull request: [SPARK-6973]modify total stages/tasks on the a...

2015-04-20 Thread XuTingjun
Github user XuTingjun closed the pull request at: https://github.com/apache/spark/pull/5550 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature

[GitHub] spark pull request: [SPARK-6918][YARN] Secure HBase support.

2015-04-20 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/5586#issuecomment-94367970 Yeah, LGTM, I need this feature. Can we put HBase's config into hbase-site.xml?

[GitHub] spark pull request: [SPARK-6973]modify total stages/tasks on the a...

2015-04-17 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/5550 [SPARK-6973]modify total stages/tasks on the allJobsPage Though totalStages = allStages - skippedStages is understandable, considering the problem in [SPARK-6973] I think totalStages = allStages

[GitHub] spark pull request: [SPARK-6973]modify total stages/tasks on the a...

2015-04-17 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/5550#issuecomment-93946383 Yeah, there will be this result. But considering the bug described in the JIRA, I think it's more reasonable.

[GitHub] spark pull request: [SPARK-6973]modify total stages/tasks on the a...

2015-04-17 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/5550#issuecomment-93948969 Sorry, I can't attach the screenshot of the stages, so maybe I didn't describe it clearly.

[GitHub] spark pull request: [SPARK-6207] [YARN] [SQL] Adds delegation toke...

2015-03-27 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/5031#issuecomment-87148579 I have tested on secure HBase, and it didn't work. In the executor process we got the error:

[GitHub] spark pull request: [SPARK-3168]The ServletContextHandler of webui...

2015-02-24 Thread XuTingjun
Github user XuTingjun closed the pull request at: https://github.com/apache/spark/pull/2073

[GitHub] spark pull request: [SPARK-5770] forbid user to overwrite jar usin...

2015-02-15 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/4565#issuecomment-74399664 Thanks, @srowen. I think there are users who start a long-running SparkContext and add jars to run different cases.

[GitHub] spark pull request: [SPARK-5770] forbid user to overwrite jar usin...

2015-02-15 Thread XuTingjun
Github user XuTingjun closed the pull request at: https://github.com/apache/spark/pull/4565

[GitHub] spark pull request: [SPARK-5831][Streaming]When checkpoint file si...

2015-02-15 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/4621 [SPARK-5831][Streaming] When the number of checkpoint files is bigger than 10, delete the old ones You can merge this pull request into a Git repository by running: $ git pull https://github.com
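The retention policy in this PR title can be sketched as keeping only the most recent N checkpoint files and deleting the rest. This is a minimal standalone illustration, not the PR's actual code; the function name and the default threshold of 10 are our assumptions taken from the title.

```scala
import java.io.File

// Keep only the `maxRetained` newest checkpoint files (by modification time)
// and delete the older ones. Returns the files that were kept.
// Hypothetical sketch of the retention idea in the PR title, not Spark code.
def cleanOldCheckpoints(files: Seq[File], maxRetained: Int = 10): Seq[File] = {
  val newestFirst = files.sortBy(_.lastModified())(Ordering[Long].reverse)
  newestFirst.drop(maxRetained).foreach(_.delete())
  newestFirst.take(maxRetained)
}
```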

[GitHub] spark pull request: [SPARK-5770] Fix bug: Use addJar() to upload a...

2015-02-12 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/4565 [SPARK-5770] Fix bug: when using addJar() to upload a new jar file to the executor, it can't be added to the classloader

[GitHub] spark pull request: [SPARK-5770] Fix bug: Use addJar() to upload a...

2015-02-12 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/4565#issuecomment-74059653 Sorry, I think it's very difficult to unload the old jar and load the new jar into the classloader; maybe there is no solution. So I think the good way is restricting users

[GitHub] spark pull request: [SPARK-5770] Fix bug: Use addJar() to upload a...

2015-02-12 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/4565#issuecomment-74060311 I have tested it in spark-shell. Yeah, the new jar will overwrite the old one in the local dir, but the classloader is not updated. They are inconsistent. It will confuse
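The inconsistency described here, where the file on disk is replaced but the running classloader keeps serving the old classes, is what motivates forbidding the overwrite. A hedged sketch of such a guard follows; the class and method names are hypothetical and this is not Spark's actual addJar implementation.

```scala
import scala.collection.mutable

// Track jars already added and refuse to re-add one whose path was seen
// before, since the running classloader would not pick up the replacement.
// Hypothetical sketch, not Spark's real SparkContext.addJar.
class JarRegistry {
  private val addedJars = mutable.Map[String, Long]() // path -> timestamp

  /** Returns true if registered, false if rejected as an overwrite. */
  def addJar(path: String, timestamp: Long): Boolean = {
    if (addedJars.contains(path)) {
      println(s"Ignoring jar $path: already added; overwriting would not update the classloader")
      false
    } else {
      addedJars(path) = timestamp
      true
    }
  }
}
```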

[GitHub] spark pull request: [SPARK-5764] Delete the cache and lock file af...

2015-02-12 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/4548#issuecomment-74061806 In SparkContext.scala, useCache is false, so it won't use the cached file.

[GitHub] spark pull request: [SPARK-5770] Fix bug: Use addJar() to upload a...

2015-02-12 Thread XuTingjun
GitHub user XuTingjun reopened a pull request: https://github.com/apache/spark/pull/4565 [SPARK-5770] Fix bug: when using addJar() to upload a new jar file to the executor, it can't be added to the classloader

[GitHub] spark pull request: [SPARK-5770] Fix bug: Use addJar() to upload a...

2015-02-12 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/4565#issuecomment-74190761 Though it's false by default, spark.files.overwrite is exposed to the user. So should we set overwrite to false only for jars?

[GitHub] spark pull request: [SPARK-5770] Fix bug: Use addJar() to upload a...

2015-02-12 Thread XuTingjun
Github user XuTingjun closed the pull request at: https://github.com/apache/spark/pull/4565

[GitHub] spark pull request: [SPARK-5764] Delete the cache and lock file af...

2015-02-12 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/4548#issuecomment-74040124 val cachedFileName = s"${url.hashCode}${timestamp}_cache" The cache file is named with url.hashCode and timestamp. No cache file of a jar will be the same
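The naming scheme quoted above can be reproduced as a tiny standalone helper: the cache file name is the URL's hashCode concatenated with the timestamp and a "_cache" suffix, so a different URL or timestamp yields a different name. The helper name is ours; this is a simplified sketch of what the quoted Spark line does.

```scala
// Derive a cache file name from the fetch URL and the app timestamp,
// mirroring the quoted line: s"${url.hashCode}${timestamp}_cache".
def cachedFileName(url: String, timestamp: Long): String =
  s"${url.hashCode}${timestamp}_cache"
```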

[GitHub] spark pull request: [SPARK-5770] Fix bug: Use addJar() to upload a...

2015-02-12 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/4565#issuecomment-74199949 @srowen, I think overwriting the jar file has no meaning, because the current code doesn't support loading a new jar file. So I think forbidding users to do this is more reasonable. I

[GitHub] spark pull request: [SPARK-5764] Delete the cache and lock file af...

2015-02-12 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/4548#issuecomment-74064000 I think we should consider dynamic executor allocation, right?

[GitHub] spark pull request: [SPARK-5764] Delete the cache and lock file af...

2015-02-12 Thread XuTingjun
Github user XuTingjun closed the pull request at: https://github.com/apache/spark/pull/4548

[GitHub] spark pull request: [SPARK-5770] Fix bug: Use addJar() to upload a...

2015-02-12 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/4565#issuecomment-74064920 I also think it's controversial, but can we make the log more reasonable?

[GitHub] spark pull request: [SPARK-5764] Delete the cache and lock file af...

2015-02-12 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/4548#issuecomment-74062990 Do you mean the executors on the same node will use the cached file? I think that's right.

[GitHub] spark pull request: [SPARK-5764] Delete the cache and lock file af...

2015-02-12 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/4548#issuecomment-74063531 I think the cache file should be deleted when the app finishes, not when the executor stops.

[GitHub] spark pull request: [Core][Improvement] Delete no longer used fil...

2015-02-11 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/4548 [Core][Improvement] Delete no longer used file Every time the executor fetches a jar from the http server, a lock file and a cache file are created locally. After fetching, these two

[GitHub] spark pull request: [SPARK-5530] Add executor container to executo...

2015-02-02 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/4309 [SPARK-5530] Add executor container to executorIdToContainer When the killExecutor method is called, it will only go to the else branch, because no value is ever put into the variable executorIdToContainer

[GitHub] spark pull request: [SPARK-1507][YARN]specify num of cores for AM

2015-01-07 Thread XuTingjun
Github user XuTingjun closed the pull request at: https://github.com/apache/spark/pull/3686

[GitHub] spark pull request: [SPARK-1507][YARN]specify num of cores for AM

2015-01-07 Thread XuTingjun
Github user XuTingjun closed the pull request at: https://github.com/apache/spark/pull/3806

[GitHub] spark pull request: [SPARK-1507][YARN]specify num of cores for AM

2015-01-04 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3806#issuecomment-68676616 @sryza, I have split spark.driver.memory into spark.driver.memory and spark.yarn.am.memory. Please have a look.

[GitHub] spark pull request: [SPARK-1507][YARN]specify num of cores for AM

2015-01-04 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3806#issuecomment-68672836 @sryza, do you mean spark.driver.memory works in yarn-client and yarn-cluster mode, so we should use one configuration, maybe named spark.driver.cores, to set AM cores

[GitHub] spark pull request: [SPARK-1507][YARN]specify num of cores for AM

2015-01-04 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3806#issuecomment-6857 @andrewor14

[GitHub] spark pull request: [SPARK-1507][YARN]specify num of cores for AM

2015-01-04 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3806#issuecomment-68673436 Yeah, I agree with you. I will fix this later. Thanks @sryza

[GitHub] spark pull request: [SPARK-4966][YARN]The MemoryOverhead value is ...

2014-12-25 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3797#issuecomment-68092844 @JoshRosen, I am sorry I forgot to describe this patch. I have created a JIRA for it; can you take a look?

[GitHub] spark pull request: [SPARK-1507][YARN]specify num of cores for AM

2014-12-25 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/3799 [SPARK-1507][YARN]specify num of cores for AM I added some configurations below: spark.yarn.am.cores/SPARK_MASTER_CORES/SPARK_DRIVER_CORES for yarn-client mode; spark.driver.cores for yarn

[GitHub] spark pull request: [SPARK-1507][YARN]specify num of cores for AM

2014-12-25 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3686#issuecomment-68093593 Hi all, I accidentally deleted my repository, so I created a new patch #3799 for it.

[GitHub] spark pull request: specify AM core in yarn-client and yarn-cluste...

2014-12-25 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/3806 specify AM core in yarn-client and yarn-cluster mode I added some configurations below: spark.yarn.am.cores/SPARK_MASTER_CORES/SPARK_DRIVER_CORES for yarn-client mode; spark.driver.cores

[GitHub] spark pull request: [SPARK-1507][YARN]specify num of cores for AM

2014-12-25 Thread XuTingjun
Github user XuTingjun closed the pull request at: https://github.com/apache/spark/pull/3799

[GitHub] spark pull request: [SPARK-1507][YARN]specify num of cores for AM

2014-12-25 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3806#issuecomment-68125091 @sryza, I don't agree with you. I only added the code below in cluster mode, so --driver-cores will not work in client mode. OptionAssigner(args.driverCores

[GitHub] spark pull request: Update ClientArguments.scala

2014-12-24 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/3797 Update ClientArguments.scala

[GitHub] spark pull request: [YARN] Delete confusing configurations

2014-12-23 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/3776 [YARN] Delete confusing configurations First, I think putting master/worker config into yarn mode is confusing. Second, it will add the config twice. For example, if SPARK_WORKER_INSTANCES is null

[GitHub] spark pull request: [YARN] Delete confusing configurations

2014-12-23 Thread XuTingjun
Github user XuTingjun closed the pull request at: https://github.com/apache/spark/pull/3776

[GitHub] spark pull request: [YARN] Delete confusing configurations

2014-12-23 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3776#issuecomment-68024052 ok, I have got it.

[GitHub] spark pull request: [SPARK-1507][YARN]specify num of cores for AM

2014-12-21 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3686#issuecomment-67810373 Sorry, I think this patch works in yarn-client and yarn-cluster mode. The param --driver-cores is for standalone cluster only. Am I missing something?

[GitHub] spark pull request: [SPARK-4792] Add error message when making loc...

2014-12-15 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3635#issuecomment-67105200 @JoshRosen, if users share the cluster and one accidentally deletes the dir, it will affect the application. So I think adding exists() is more secure.

[GitHub] spark pull request: [SPARK-4792] Add error message when making loc...

2014-12-15 Thread XuTingjun
Github user XuTingjun commented on a diff in the pull request: https://github.com/apache/spark/pull/3635#discussion_r21876377 --- Diff: core/src/main/scala/org/apache/spark/storage/DiskBlockManager.scala --- @@ -67,11 +67,13 @@ private[spark] class DiskBlockManager(blockManager

[GitHub] spark pull request: [SPARK-1507] specify num of cores for AM

2014-12-12 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/3686 [SPARK-1507] specify num of cores for AM

[GitHub] spark pull request: [SPARK-1507][YARN]specify num of cores for AM

2014-12-12 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3686#issuecomment-66774800 @tgravescs

[GitHub] spark pull request: [SPARK-1507][YARN]specify num of cores for AM

2014-12-12 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3686#issuecomment-66866834 I have tested it; it works in yarn-client and yarn-cluster mode.

[GitHub] spark pull request: [SPARK-4792] Add error message when making loc...

2014-12-09 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3635#issuecomment-66283335 @srowen, thank you for your suggestion; I have modified the method.

[GitHub] spark pull request: Add error message when making local dir unsucc...

2014-12-08 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/3635 Add error message when making local dir unsuccessfully

[GitHub] spark pull request: [SPARK-4598] use pagination to show tasktable

2014-11-30 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3456#issuecomment-65011962 I am sorry I did not take this into consideration. On this point, I think the application table in the HistoryServer web UI also doesn't support sorting globally by any field

[GitHub] spark pull request: [SPARK-4598] use pagination to show tasktable

2014-11-30 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3456#issuecomment-65014457 It supports clicking headers in the rendered table, but that just orders one page of applications, not globally.

[GitHub] spark pull request: [SPARK-4598] use pagination to show tasktable

2014-11-29 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3456#issuecomment-64977581 val tasks = stageData.taskData.values.toSeq.sortBy(_.taskInfo.launchTime) val showTasks = tasks.slice(actualFirst, Math.min(actualFirst + pageSize, tasks.size
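The slicing quoted above amounts to simple offset/limit pagination: sort once, then show only the window [first, first + pageSize). A self-contained sketch, generic over any sequence; the function name is ours, not the PR's:

```scala
// Return one page of `pageSize` items starting at index `first`, clamped to
// the sequence length, mirroring the tasks.slice(...) call quoted above.
def page[A](items: Seq[A], first: Int, pageSize: Int): Seq[A] =
  items.slice(first, math.min(first + pageSize, items.length))
```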

[GitHub] spark pull request: [SPARK-4598] use pagination to show tasktable

2014-11-27 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/3456#issuecomment-64843591 First, I found that showing tasks on the web costs a lot of memory. With no pagination, 5000 tasks will cause an OOM. With pagination, 2 tasks can successfully show on the web. I

[GitHub] spark pull request: [SPARK-4598] use pagination to show tasktable

2014-11-25 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/3456 [SPARK-4598] use pagination to show tasktable When the application has too many tasks, the task table with all tasks costs a lot of memory. Using pagination, each time the task table shows some tasks

[GitHub] spark pull request: [SPARK-3168]The ServletContextHandler of webui...

2014-09-10 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/2073#issuecomment-55088380 Now Spark doesn't support SSL

[GitHub] spark pull request: [yarn]The method has a never used parameter

2014-08-29 Thread XuTingjun
Github user XuTingjun closed the pull request at: https://github.com/apache/spark/pull/1761

[GitHub] spark pull request: [SPARK-2742][yarn] delete useless variables

2014-08-22 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/1614#issuecomment-53038468 How should I deal with it? Should I close it?

[GitHub] spark pull request: [SPARK-3168]The ServletContextHandler of webui...

2014-08-20 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/2073 [SPARK-3168]The ServletContextHandler of webui lacks a SessionManager

[GitHub] spark pull request: fix bug of historyserver

2014-08-06 Thread XuTingjun
Github user XuTingjun closed the pull request at: https://github.com/apache/spark/pull/1564

[GitHub] spark pull request: add function of historyserver

2014-08-06 Thread XuTingjun
Github user XuTingjun closed the pull request at: https://github.com/apache/spark/pull/1563

[GitHub] spark pull request: [yarn]The method has a never used parameter

2014-08-04 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/1761#issuecomment-51143050 I should close it, right?

[GitHub] spark pull request: [SPARK-2742][yarn] delete useless variables

2014-08-03 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/1614#issuecomment-51009498 Patrick Wendell has closed this JIRA issue.

[GitHub] spark pull request: [yarn] delete useless variables

2014-07-27 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/1614 [yarn] delete useless variables

[GitHub] spark pull request: add function of historyserver

2014-07-23 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/1563 add function of historyserver: automatically delete the log if spark.history.fs.maxsavedapplication.enable=true is set

[GitHub] spark pull request: Set configuration spark.history.retainedAppli...

2014-07-23 Thread XuTingjun
Github user XuTingjun closed the pull request at: https://github.com/apache/spark/pull/1509

[GitHub] spark pull request: fix bug of historyserver

2014-07-23 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/1564 fix bug of historyserver: the configuration spark.history.retainedApplications is invalid

[GitHub] spark pull request: Set configuration spark.history.retainedAppli...

2014-07-21 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/1509 Set configuration spark.history.retainedApplications to be effective. When setting spark.history.retainedApplications=1, the historyserver web retains more than one application.

[GitHub] spark pull request: Delete the useless import

2014-07-13 Thread XuTingjun
Github user XuTingjun closed the pull request at: https://github.com/apache/spark/pull/1284

[GitHub] spark pull request: Delete the useless import

2014-07-07 Thread XuTingjun
Github user XuTingjun commented on the pull request: https://github.com/apache/spark/pull/1284#issuecomment-48179063 it's ok

[GitHub] spark pull request: Delete the useless import

2014-07-02 Thread XuTingjun
GitHub user XuTingjun opened a pull request: https://github.com/apache/spark/pull/1284 Delete the useless import: import org.apache.spark.util.Utils is never used in HistoryServer.scala
