[jira] [Commented] (SPARK-19181) SparkListenerSuite.local metrics fails when average executorDeserializeTime is too short.

2018-05-09 Thread Apache Spark (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-19181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468829#comment-16468829 ]

Apache Spark commented on SPARK-19181:
--

User 'attilapiros' has created a pull request for this issue:
https://github.com/apache/spark/pull/21280

> SparkListenerSuite.local metrics fails when average executorDeserializeTime 
> is too short.
> -
>
> Key: SPARK-19181
> URL: https://issues.apache.org/jira/browse/SPARK-19181
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 2.1.0
>Reporter: Jose Soltren
>Priority: Minor
>
> https://github.com/apache/spark/blob/master/core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala#L249
> The "local metrics" test asserts that tasks should take more than 1ms on 
> average to complete, even though a code comment notes that this is a small 
> test and tasks may finish faster. I've been seeing some "failures" here on 
> fast systems that finish these tasks quite quickly.
> There are a few ways forward here:
> 1. Disable this test.
> 2. Relax this check.
> 3. Implement sub-millisecond granularity for task times throughout Spark.
> 4. (Imran Rashid's suggestion) Add buffer time by, say, having the task 
> reference a partition that implements a custom Externalizable.readExternal, 
> which always waits 1ms before returning.
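Option 4 above could be sketched roughly as follows. This is a minimal, hypothetical illustration of the idea, not code from any actual patch; the class name and its bare-bones shape are invented here:

```scala
import java.io.{Externalizable, ObjectInput, ObjectOutput}

// Hypothetical sketch of option 4: a value whose deserialization always
// takes at least 1 ms. If the test task references an instance of this
// class, executorDeserializeTime cannot round down to 0 ms even on a
// very fast machine. The name and details are illustrative only.
class SlowDeserialize extends Externalizable {
  // Nothing to write; the payload is irrelevant to the test.
  override def writeExternal(out: ObjectOutput): Unit = ()

  override def readExternal(in: ObjectInput): Unit = {
    // Guarantee that deserializing this object takes at least 1 ms.
    Thread.sleep(1)
  }
}
```

Whether the wait lives in the partition itself or in a value the task references is a design choice; the point is simply to put a floor under the measured deserialize time so the average-time assertion holds regardless of hardware speed.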



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-19181) SparkListenerSuite.local metrics fails when average executorDeserializeTime is too short.

2018-05-08 Thread Attila Zsolt Piros (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-19181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467137#comment-16467137 ]

Attila Zsolt Piros commented on SPARK-19181:


I am working on this.




[jira] [Commented] (SPARK-19181) SparkListenerSuite.local metrics fails when average executorDeserializeTime is too short.

2018-03-01 Thread Marcelo Vanzin (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-19181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16382963#comment-16382963 ]

Marcelo Vanzin commented on SPARK-19181:


Another failure (after quite some time):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/87855/testReport/junit/org.apache.spark.scheduler/SparkListenerSuite/local_metrics/




[jira] [Commented] (SPARK-19181) SparkListenerSuite.local metrics fails when average executorDeserializeTime is too short.

2017-02-02 Thread Jose Soltren (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-19181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15850408#comment-15850408 ]

Jose Soltren commented on SPARK-19181:
--

https://github.com/apache/spark/pull/16586 made some changes to create more
workers and to push the working time for this test slightly higher.

That is only a stopgap: it buys time on today's hardware but does not address
the fundamental issue in the description.




[jira] [Commented] (SPARK-19181) SparkListenerSuite.local metrics fails when average executorDeserializeTime is too short.

2017-01-11 Thread Jose Soltren (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-19181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15819186#comment-15819186 ]

Jose Soltren commented on SPARK-19181:
--

SPARK-2208 disabled a similar metric previously.
