Repository: spark
Updated Branches:
refs/heads/master 93f92c0ed -> d2cddc88e
[SPARK-22850][CORE] Ensure queued events are delivered to all event queues.
The code in LiveListenerBus was queueing events before start in the
queues themselves; so in situations like the following:
bus.post(some
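The idea of the fix can be sketched generically (class and method names below are illustrative, not Spark's actual LiveListenerBus API): buffer pre-start events centrally in the bus, and only replay them into every registered queue on start(), so a queue added after an event was posted still receives it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Toy sketch of the fix described above; names are illustrative only.
public class BufferedBusSketch {
    private final List<String> queuedEvents = new ArrayList<>();
    private final List<Queue<String>> queues = new ArrayList<>();
    private boolean started = false;

    synchronized void addQueue(Queue<String> q) {
        queues.add(q);
    }

    synchronized void post(String event) {
        if (!started) {
            queuedEvents.add(event);  // buffer in the bus, not in the queues
        } else {
            for (Queue<String> q : queues) q.add(event);
        }
    }

    synchronized void start() {
        started = true;
        // Replay buffered events into every queue, including late arrivals.
        for (String e : queuedEvents) {
            for (Queue<String> q : queues) q.add(e);
        }
        queuedEvents.clear();
    }

    public static void main(String[] args) {
        BufferedBusSketch bus = new BufferedBusSketch();
        bus.post("appStart");                  // posted before any queue exists
        Queue<String> lateQueue = new ConcurrentLinkedQueue<>();
        bus.addQueue(lateQueue);               // registered after the post
        bus.start();
        System.out.println(lateQueue.poll());  // the late queue still sees it
    }
}
```

With the pre-fix behavior (events stored in each queue directly), `lateQueue` would have missed the event entirely.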
Repository: spark
Updated Branches:
refs/heads/master 66a7d6b30 -> ccda75b0d
[SPARK-22921][PROJECT-INFRA] Bug fix in jira assigning
Small bug fix from the last PR; ran a successful merge with this code.
Author: Imran Rashid
Closes #20117 from squito/SPARK-22921.
Project: http://git-wip-us.apa
Repository: spark
Updated Branches:
refs/heads/master 8b497046c -> 4e9e6aee4
[SPARK-22864][CORE] Disable allocation schedule in
ExecutorAllocationManagerSuite.
The scheduled task was racing with the test code and could influence
the values returned to the test, triggering assertions. The chan
Repository: spark
Updated Branches:
refs/heads/master 11a849b3a -> 8b497046c
[SPARK-20654][CORE] Add config to limit disk usage of the history server.
This change adds a new configuration option and support code that limits
how much disk space the SHS will use. The default value is pretty gene
Repository: spark
Updated Branches:
refs/heads/master 613b71a12 -> cfcd74668
[SPARK-11035][CORE] Add in-process Spark app launcher.
This change adds a new launcher that allows applications to be run
in a separate thread in the same process as the calling code. To
achieve that, some code from t
Repository: spark
Updated Branches:
refs/heads/master 8f6d5734d -> 9c21ece35
[SPARK-22836][UI] Show driver logs in UI when available.
Port code from the old executors listener to the new one, so that
the driver logs present in the application start event are kept.
Author: Marcelo Vanzin
Clo
Repository: spark
Updated Branches:
refs/heads/master 9962390af -> 7570eab6b
[SPARK-22788][STREAMING] Use correct hadoop config for fs append support.
Still look at the old one in case any Spark user is setting it
explicitly, though.
Author: Marcelo Vanzin
Closes #19983 from vanzin/SPARK-22
Repository: spark
Updated Branches:
refs/heads/master fb3636b48 -> 772e4648d
[SPARK-20653][CORE] Add cleaning of old elements from the status store.
This change restores the functionality that keeps a limited number of
different types (jobs, stages, etc) depending on configuration, to avoid
th
Repository: spark
Updated Branches:
refs/heads/master ba0e79f57 -> a83e8e6c2
[SPARK-22764][CORE] Fix flakiness in SparkContextSuite.
Use a semaphore to synchronize the tasks with the listener code
that is trying to cancel the job or stage, so that the listener
won't try to cancel a job or stag
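The synchronization pattern is straightforward (a generic sketch, not Spark's test code; all names are invented): each task releases a permit when it starts running, and the cancelling side acquires all permits before acting, so it never races ahead of task startup.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class SemaphoreSyncSketch {
    public static void main(String[] args) throws Exception {
        int numTasks = 4;
        Semaphore taskStarted = new Semaphore(0);
        AtomicBoolean cancelled = new AtomicBoolean(false);

        ExecutorService pool = Executors.newFixedThreadPool(numTasks);
        for (int i = 0; i < numTasks; i++) {
            pool.submit(() -> {
                // Each task signals that it is running before doing work.
                taskStarted.release();
                while (!cancelled.get()) {
                    Thread.yield();  // simulate work until cancelled
                }
            });
        }

        // The cancelling side waits until every task has started, so it
        // never tries to cancel work that has not begun yet.
        taskStarted.acquire(numTasks);
        cancelled.set(true);

        pool.shutdown();
        boolean done = pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(done ? "all tasks cancelled cleanly" : "timeout");
    }
}
```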
Repository: spark
Updated Branches:
refs/heads/master bc0848b4c -> 39b3f10dd
[SPARK-20649][CORE] Simplify REST API resource structure.
With the new UI store, the API resource classes have a lot less code,
since there's no need for complicated translations between the UI
types and the API types
[SPARK-20652][SQL] Store SQL UI data in the new app status store.
This change replaces the SQLListener with a new implementation that
saves the data to the same store used by the SparkContext's status
store. For that, the types used by the old SQLListener had to be
updated a bit so that they're mo
Repository: spark
Updated Branches:
refs/heads/master 4741c0780 -> 0ffa7c488
[SPARK-20648][CORE] Port JobsTab and StageTab to the new UI backend.
This change is a little larger because there's a whole lot of logic
behind these pages, all really tied to internal types and listeners,
and some of that logic had to be implemented in the new listener and
the needed data exposed
Repository: spark
Updated Branches:
refs/heads/master 11b60af73 -> 4741c0780
[SPARK-20647][CORE] Port StorageTab to the new UI backend.
This required adding information about StreamBlockId to the store,
which is not available yet via the API. So an internal type was added
until there's a need to expose that information in the API.
The UI only lists RDDs that have cached p
Repository: spark
Updated Branches:
refs/heads/master 6b19c0735 -> 6ae12715c
Repository: spark
Updated Branches:
refs/heads/master 2ca5aae47 -> 11eea1a4c
[SPARK-20646][CORE] Port executors page to new UI backend.
The executors page is built on top of the REST API, so the page itself
was easy to hook up to the new code.
Some other pages depend on the `ExecutorListener`
Repository: spark
Updated Branches:
refs/heads/master 0846a4473 -> 7475a9655
[SPARK-20645][CORE] Port environment page to new UI backend.
This change modifies the status listener to collect the information
needed to render the environment page, and populates that page and the
API with informati
Repository: spark
Updated Branches:
refs/heads/master 472db58cb -> c7f38e5ad
[SPARK-20644][core] Initial ground work for kvstore UI backend.
There are two somewhat unrelated things going on in this patch, but
both are meant to make integration of individual UI pages later on
much easier.
The first part is some tweaking of the code in the listener so that
it does less upda
Repository: spark
Updated Branches:
refs/heads/master a83d8d5ad -> 0e9a750a8
[SPARK-20643][CORE] Add listener implementation to collect app state.
The initial listener code is based on the existing JobProgressListener (and
others),
and tries to mimic their behavior as much as possible. The ch
Repository: spark
Updated Branches:
refs/heads/master 24e6c187f -> 73e64f7d5
[SPARK-19662][SCHEDULER][TEST] Add Fair Scheduler Unit Test coverage for
different build cases
## What changes were proposed in this pull request?
Fair Scheduler can be built via one of the following options:
- By se
Repository: spark
Updated Branches:
refs/heads/master b61a401da -> 0cba49512
[SPARK-20641][CORE] Add key-value store abstraction and LevelDB implementation.
This change adds an abstraction and LevelDB implementation for a key-value
store that will be used to store UI and SHS data.
The interface is described in KVStore.java (see javadoc). Specifics
of the LevelDB implement
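The real interface lives in KVStore.java and is backed by LevelDB; a toy in-memory analogue of the kind of type-scoped read/write/delete abstraction described above might look like this (interface and class names here are invented for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal illustrative key-value abstraction, loosely modeled on the idea
// of storing typed UI/SHS objects; not Spark's actual KVStore API.
interface SimpleKVStore {
    <T> void write(Class<T> type, String key, T value);
    <T> T read(Class<T> type, String key);
    <T> void delete(Class<T> type, String key);
}

class InMemoryKVStore implements SimpleKVStore {
    // One namespace per stored type, mirroring type-scoped lookups.
    private final Map<Class<?>, Map<String, Object>> data = new ConcurrentHashMap<>();

    public <T> void write(Class<T> type, String key, T value) {
        data.computeIfAbsent(type, t -> new ConcurrentHashMap<>()).put(key, value);
    }

    public <T> T read(Class<T> type, String key) {
        Map<String, Object> ns = data.get(type);
        return ns == null ? null : type.cast(ns.get(key));
    }

    public <T> void delete(Class<T> type, String key) {
        Map<String, Object> ns = data.get(type);
        if (ns != null) ns.remove(key);
    }
}

public class KVStoreSketch {
    public static void main(String[] args) {
        SimpleKVStore store = new InMemoryKVStore();
        store.write(String.class, "app-1", "Spark shell");
        System.out.println(store.read(String.class, "app-1"));
        store.delete(String.class, "app-1");
        System.out.println(store.read(String.class, "app-1"));
    }
}
```

A disk-backed implementation (like the LevelDB one) would keep the same interface and swap the map for serialized on-disk records.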
Repository: spark
Updated Branches:
refs/heads/master ac7fc3075 -> d0f36bcb1
[SPARK-20633][SQL] FileFormatWriter should not wrap FetchFailedException
## What changes were proposed in this pull request?
Explicitly handle the FetchFailedException in FileFormatWriter, so it does not
get wrapped
Repository: spark
Updated Branches:
refs/heads/master d52f63622 -> ac7fc3075
[SPARK-20288] Avoid generating the MapStatus by stageId in
BasicSchedulerIntegrationSuite
## What changes were proposed in this pull request?
ShuffleId is determined before the job is submitted, but it's hard to predict st
Repository: spark
Updated Branches:
refs/heads/master 4d57981cf -> de953c214
[SPARK-20333] HashPartitioner should be compatible with num of child RDD's
partitions.
## What changes were proposed in this pull request?
Fix test
"don't submit stage until its dependencies map outputs are register
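For context, hash partitioning maps a key to a partition by taking its hash code modulo the number of partitions, with the sign fixed up so negative hash codes still land in [0, numPartitions). The sketch below mirrors that logic generically; it is an illustration, not Spark's HashPartitioner source.

```java
// Illustrative sketch of hash partitioning with a non-negative modulo.
public class HashPartitionerSketch {
    static int nonNegativeMod(int x, int mod) {
        int r = x % mod;
        // Java's % keeps the dividend's sign, so shift negatives into range.
        return r < 0 ? r + mod : r;
    }

    static int getPartition(Object key, int numPartitions) {
        return key == null ? 0 : nonNegativeMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        System.out.println(getPartition("a", 4));   // "a".hashCode() == 97
        System.out.println(getPartition(-7, 4));    // Integer.hashCode(-7) == -7
    }
}
```

Both calls print 1: 97 % 4 == 1, and -7 % 4 == -3, which the sign fix-up shifts to 1.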
Repository: spark
Updated Branches:
refs/heads/master dbb06c689 -> 66dd5b83f
[SPARK-20391][CORE] Rename memory related fields in ExecutorSummary
## What changes were proposed in this pull request?
This is a follow-up of #14617 to make the name of memory related fields more
meaningful.
Here
Repository: spark
Updated Branches:
refs/heads/branch-2.2 34dec68d7 -> b65858bb3
[SPARK-20391][CORE] Rename memory related fields in ExecutorSummary
## What changes were proposed in this pull request?
This is a follow-up of #14617 to make the name of memory related fields more
meaningful.
He
Repository: spark
Updated Branches:
refs/heads/master 8ddf0d2a6 -> 7536e2849
[SPARK-20038][SQL] FileFormatWriter.ExecuteWriteTask.releaseResources()
implementations to be re-entrant
## What changes were proposed in this pull request?
have the`FileFormatWriter.ExecuteWriteTask.releaseResource
[SPARK-17019][CORE] Expose on-heap and off-heap memory usage in various places
## What changes were proposed in this pull request?
With [SPARK-13992](https://issues.apache.org/jira/browse/SPARK-13992), Spark
supports persisting data into off-heap memory, but the usage of on-heap and
off-heap me
Repository: spark
Updated Branches:
refs/heads/master 5a693b413 -> a4491626e
Repository: spark
Updated Branches:
refs/heads/master 13538cf3d -> 7b5d873ae
[SPARK-13369] Add config for number of consecutive fetch failures
The previously hardcoded max 4 retries per stage is not suitable for all
cluster configurations. Since Spark retries a stage at the sign of the first
Repository: spark
Updated Branches:
refs/heads/master 096df6d93 -> 12bf83240
[SPARK-19796][CORE] Fix serialization of long property values in TaskDescription
## What changes were proposed in this pull request?
The properties that are serialized with a TaskDescription can have very long
value
Repository: spark
Updated Branches:
refs/heads/master af63c52fd -> 6287c94f0
[SPARK-16554][CORE] Automatically Kill Executors and Nodes when they are
Blacklisted
## What changes were proposed in this pull request?
In SPARK-8425, we introduced a mechanism for blacklisting executors and nodes
Repository: spark
Updated Branches:
refs/heads/master 7730426cb -> 7beb227cc
[SPARK-17663][CORE] SchedulableBuilder should handle invalid data access via
scheduler.allocation.file
## What changes were proposed in this pull request?
If `spark.scheduler.allocation.file` has invalid `minShare`
Repository: spark
Updated Branches:
refs/heads/master d50d12b49 -> e20d9b156
[SPARK-19069][CORE] Expose task 'status' and 'duration' in spark history server
REST API.
## What changes were proposed in this pull request?
Although Spark history server UI shows task 'status' and 'duration'
Repository: spark
Updated Branches:
refs/heads/master 064fadd2a -> 640f94233
Repository: spark
Updated Branches:
refs/heads/master 4a4c3dc9c -> 2e139eed3
[SPARK-17931] Eliminate unnecessary task (de)serialization
In the existing code, there are three layers of serialization
involved in sending a task from the scheduler to an executor:
- A Task object is se
Repository: spark
Updated Branches:
refs/heads/master cccd64393 -> ac013ea58
[SPARK-18846][SCHEDULER] Fix flakiness in SchedulerIntegrationSuite
There is a small race in SchedulerIntegrationSuite.
The test assumes that the taskscheduler thread
processing that last task will finish before the D
Repository: spark
Updated Branches:
refs/heads/master ad67993b7 -> 8b1609beb
[SPARK-18117][CORE] Add test for TaskSetBlacklist
## What changes were proposed in this pull request?
This adds tests to verify the interaction between TaskSetBlacklist and
TaskSchedulerImpl. TaskSetBlacklist was in
Repository: spark
Updated Branches:
refs/heads/master 47776e7c0 -> 9ce7d3e54
[SPARK-17675][CORE] Expand Blacklist for TaskSets
## What changes were proposed in this pull request?
This is a step along the way to SPARK-8425.
To enable incremental review, the first step proposed here is to expa
Repository: spark
Updated Branches:
refs/heads/master 07f46afc7 -> fdf9f94f8
[SPARK-15865][CORE] Blacklist should not result in job hanging with less than 4
executors
## What changes were proposed in this pull request?
Before this change, when you turn on blacklisting with
`spark.scheduler.
Repository: spark
Updated Branches:
refs/heads/master 282158914 -> c15b552dd
[SPARK-16106][CORE] TaskSchedulerImpl should properly track executors added to
existing hosts
## What changes were proposed in this pull request?
TaskSchedulerImpl used to only set `newExecAvailable` when a new *hos
Repository: spark
Updated Branches:
refs/heads/master 1aa191e58 -> 282158914
[SPARK-16136][CORE] Fix flaky TaskManagerSuite
## What changes were proposed in this pull request?
TaskManagerSuite "Kill other task attempts when one attempt belonging to the
same task succeeds" was flaky. When ch
Repository: spark
Updated Branches:
refs/heads/master be88383e1 -> a4851ed05
[SPARK-15963][CORE] Catch `TaskKilledException` correctly in Executor.TaskRunner
## The problem
Before this change, if either of the following cases happened to a task, the
task would be marked as `FAILED` instead
Repository: spark
Updated Branches:
refs/heads/master 01277d4b2 -> cf1995a97
[SPARK-15783][CORE] Fix Flakiness in BlacklistIntegrationSuite
## What changes were proposed in this pull request?
Three changes here -- first two were causing failures w/
BlacklistIntegrationSuite
1. The testing f
Repository: spark
Updated Branches:
refs/heads/master 9bd80ad6b -> cafc696d0
[HOTFIX][CORE] fix flaky BasicSchedulerIntegrationTest
## What changes were proposed in this pull request?
SPARK-15927 exacerbated a race in BasicSchedulerIntegrationTest, so it went
from very unlikely to fairly fre
Repository: spark
Updated Branches:
refs/heads/master 0b8d69499 -> 36d3dfa59
[SPARK-15783][CORE] still some flakiness in these blacklist tests so ignore for
now
## What changes were proposed in this pull request?
There is still some flakiness in BlacklistIntegrationSuite, so turning it off
Repository: spark
Updated Branches:
refs/heads/master 190ff274f -> c2f0cb4f6
[SPARK-15714][CORE] Fix flaky o.a.s.scheduler.BlacklistIntegrationSuite
## What changes were proposed in this pull request?
BlacklistIntegrationSuite (introduced by SPARK-10372) is a bit flaky because of
some race c
Repository: spark
Updated Branches:
refs/heads/master 06bae8af1 -> dfc9fc02c
[SPARK-10372] [CORE] basic test framework for entire spark scheduler
This is a basic framework for testing the entire scheduler. The tests this
adds aren't very interesting -- the point of this PR is just to setup t
Repository: spark
Updated Branches:
refs/heads/master d3e2e2029 -> a2c7dcf61
[SPARK-7889][WEBUI] HistoryServer updates UI for incomplete apps
When the HistoryServer is showing an incomplete app, it needs to check if there
is a newer version of the app available. It does this by checking if a
Repository: spark
Updated Branches:
refs/heads/master e3735ce16 -> 6cb06e871
[SPARK-11155][WEB UI] Stage summary json should include stage duration
The json endpoint for stages doesn't include information on the stage duration
that is present in the UI. This looks like a simple oversight, the
Repository: spark
Updated Branches:
refs/heads/branch-1.5 27b5f31a0 -> 4139a4ed1
[SPARK-10666][SPARK-6880][CORE] Use properties from ActiveJob associated with a
Stage
This issue was addressed in https://github.com/apache/spark/pull/5494, but the
fix in that PR, while safe in the sense that i
Repository: spark
Updated Branches:
refs/heads/branch-1.6 4971eaaa5 -> 2aeee5696
[SPARK-10666][SPARK-6880][CORE] Use properties from ActiveJob associated with a
Stage
This issue was addressed in https://github.com/apache/spark/pull/5494, but the
fix in that PR, while safe in the sense that i
Repository: spark
Updated Branches:
refs/heads/master b9b6fbe89 -> 0a5aef753
[SPARK-10666][SPARK-6880][CORE] Use properties from ActiveJob associated with a
Stage
This issue was addressed in https://github.com/apache/spark/pull/5494, but the
fix in that PR, while safe in the sense that it wi
Repository: spark
Updated Branches:
refs/heads/master f31527227 -> e6dd23746
[SPARK-11929][CORE] Make the repl log4j configuration override the root logger.
In the default Spark distribution, there are currently two separate
log4j config files, with different default values for the root logger
Repository: spark
Updated Branches:
refs/heads/branch-1.6 42d933fbb -> fc2942d12
[SPARK-10565][CORE] add missing web UI stats to /api/v1/applications JSON
I looked at the other endpoints, and they don't seem to be missing any fields.
Added fields:
![image](https://cloud.githubusercontent.com/a
Repository: spark
Updated Branches:
refs/heads/master 9b88e1dca -> 08a7a836c
[SPARK-10565][CORE] add missing web UI stats to /api/v1/applications JSON
I looked at the other endpoints, and they don't seem to be missing any fields.
Added fields:
![image](https://cloud.githubusercontent.com/asset
Repository: spark
Updated Branches:
refs/heads/master f92f334ca -> b3aedca6b
[SPARK-11456][TESTS] Remove deprecated junit.framework in Java tests
Replace use of `junit.framework` with `org.junit`, and touch up tests in
question
Author: Sean Owen
Closes #9411 from srowen/SPARK-11456.
Proj
[SPARK-8673] [LAUNCHER] API and infrastructure for communicating with child
apps.
This change adds an API that encapsulates information about an app
launched using the library. It also creates a socket-based communication
layer for apps that are launched as child processes; the launching
applicat
Repository: spark
Updated Branches:
refs/heads/master 70f44ad2d -> 015f7ef50
Repository: spark
Updated Branches:
refs/heads/master 331f0b10f -> b78c65b03
[SPARK-5259] [CORE] don't submit stage until its dependencies map outputs are
registered
Track pending tasks by partition ID instead of Task objects.
Before this change, failure & retry could result in a case where
Repository: spark
Updated Branches:
refs/heads/master f0f563a3c -> 72f6dbf7b
[SPARK-8730] Fixes - Deser objects containing a primitive class attribute
Author: EugenCepoi
Closes #7122 from EugenCepoi/master.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-u
Repository: spark
Updated Branches:
refs/heads/master 71a077f6c -> 1502a0f6c
[YARN] [MINOR] Avoid hard code port number in YarnShuffleService test
The current port number is fixed as the default (7337) in the test, which can
introduce port contention exceptions; better to change it to a random number in unit
Repository: spark
Updated Branches:
refs/heads/branch-1.5 5e6fdc659 -> 0579f28df
[SPARK-8625] [CORE] Propagate user exceptions in tasks back to driver
This allows clients to retrieve the original exception from the
cause field of the SparkException that is thrown by the driver.
If the original
Repository: spark
Updated Branches:
refs/heads/master 3ecb37943 -> 2e680668f
[SPARK-8625] [CORE] Propagate user exceptions in tasks back to driver
This allows clients to retrieve the original exception from the
cause field of the SparkException that is thrown by the driver.
If the original exc
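The wrap-and-propagate pattern described above can be sketched as follows (the `JobFailedException` class is a made-up stand-in, not Spark's actual exception type): the framework wraps the user's original exception as the cause of its own exception, and the caller recovers it with getCause().

```java
public class CausePropagationSketch {
    // Stand-in for a framework exception type; illustrative name only.
    static class JobFailedException extends RuntimeException {
        JobFailedException(String msg, Throwable cause) { super(msg, cause); }
    }

    static void runTask() {
        throw new IllegalStateException("user code failed");
    }

    public static void main(String[] args) {
        try {
            try {
                runTask();
            } catch (Exception userError) {
                // Framework side: wrap, keeping the original as the cause.
                throw new JobFailedException("Job aborted", userError);
            }
        } catch (JobFailedException e) {
            // Client side: the original exception is recoverable via getCause().
            System.out.println(e.getCause().getMessage());
        }
    }
}
```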
Repository: spark
Updated Branches:
refs/heads/master 97906944e -> 069a4c414
[SPARK-746] [CORE] Added Avro Serialization to Kryo
Added a custom Kryo serializer for generic Avro records to reduce the network IO
involved during a shuffle. This compresses the schema and allows for users to
regist
Repository: spark
Updated Branches:
refs/heads/master ecad9d434 -> c0b7df68f
[SPARK-9366] use task's stageAttemptId in TaskEnd event
Author: Ryan Williams
Closes #7681 from ryan-williams/task-stage-attempt and squashes the following
commits:
d6d5f0f [Ryan Williams] use task's stageAttemptI
Repository: spark
Updated Branches:
refs/heads/branch-1.4 1782c0ef9 -> a292c492a
[SPARK-9193] Avoid assigning tasks to "lost" executor(s)
Now, when some executors are killed by dynamic allocation, tasks sometimes get
mis-assigned to those lost executors. This kind of mis-assignment
caus
Repository: spark
Updated Branches:
refs/heads/master df4ddb312 -> 6592a6058
[SPARK-9193] Avoid assigning tasks to "lost" executor(s)
Now, when some executors are killed by dynamic allocation, tasks sometimes get
mis-assigned to those lost executors. This kind of mis-assignment
causes t
Repository: spark
Updated Branches:
refs/heads/master d9838196f -> aa7bbc143
[SPARK-6980] [CORE] Akka timeout exceptions indicate which conf controls them
(RPC Layer)
Latest changes after refactoring to the RPC layer. I rebased against trunk to
make sure to get any recent changes since it h
Repository: spark
Updated Branches:
refs/heads/master c9e05a315 -> 37bf76a2d
[SPARK-8302] Support heterogeneous cluster install paths on YARN.
Some users have Hadoop installations on different paths across
their cluster. Currently, that makes it hard to set up some
configuration in Spark since
Repository: spark
Updated Branches:
refs/heads/master 3e7d7d6b3 -> 4615081d7
[CORE] [TEST] HistoryServerSuite failed due to timezone issue
follow up for #6377
Change time to the equivalent in GMT
/cc squito
Author: scwf
Closes #6425 from scwf/fix-HistoryServerSuite and squashes the followin
Repository: spark
Updated Branches:
refs/heads/branch-1.4 e5357132b -> 90525c9ba
[CORE] [TEST] HistoryServerSuite failed due to timezone issue
follow up for #6377
Change time to the equivalent in GMT
/cc squito
Author: scwf
Closes #6425 from scwf/fix-HistoryServerSuite and squashes the foll
Repository: spark
Updated Branches:
refs/heads/branch-1.4 4b31a07b6 -> 79bb7dcec
[CORE] [TEST] Fix SimpleDateParamTest
```
sbt.ForkMain$ForkError: 1424424077190 was not equal to 1424474477190
at
org.scalatest.MatchersHelper$.newTestFailedException(MatchersHelper.scala:160)
at
Repository: spark
Updated Branches:
refs/heads/master 43aa819c0 -> bf49c2213
[CORE] [TEST] Fix SimpleDateParamTest
```
sbt.ForkMain$ForkError: 1424424077190 was not equal to 1424474477190
at
org.scalatest.MatchersHelper$.newTestFailedException(MatchersHelper.scala:160)
at
org
Repository: spark
Updated Branches:
refs/heads/master 85b96372c -> 956c4c910
[SPARK-7657] [YARN] Add driver logs links in application UI, in cluster mode.
This PR adds the URLs to the driver logs to `SparkListenerApplicationStarted`
event, which is later used by the `ExecutorsListener` to pop
Repository: spark
Updated Branches:
refs/heads/master 5196efff5 -> a70bf06b7
[SPARK-7750] [WEBUI] Rename endpoints from `json` to `api` to allow further
extension to non-json outputs too.
Author: Hari Shreedharan
Closes #6273 from harishreedharan/json-to-api and squashes the followin
Repository: spark
Updated Branches:
refs/heads/branch-1.4 e1f7de33b -> 0d061ff9e
[SPARK-7750] [WEBUI] Rename endpoints from `json` to `api` to allow further
extension to non-json outputs too.
Author: Hari Shreedharan
Closes #6273 from harishreedharan/json-to-api and squashes the foll
Repository: spark
Updated Branches:
refs/heads/branch-1.4 0327ca2b2 -> ff8b44995