I'll kick it off with a +1.
On Thu, Mar 5, 2015 at 6:52 PM, Patrick Wendell wrote:
> Please vote on releasing the following candidate as Apache Spark version
> 1.3.0!
>
> The tag to be voted on is v1.3.0-rc2 (commit 4aaf48d4):
> https://git-wip-us.apache.org/repos/asf?p=
[
https://issues.apache.org/jira/browse/SPARK-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-5345:
---
Fix Version/s: 1.3.0
> Flaky test: o.a.s.deploy.history.FsHistoryProviderSu
[
https://issues.apache.org/jira/browse/SPARK-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6141:
---
Fix Version/s: (was: 1.3.1)
1.3.0
> Upgrade Breeze to 0.11 to
from that I ran a set of tests on top of standalone and yarn
>> and things look good.
>>
>> On Tue, Mar 3, 2015 at 8:19 PM, Patrick Wendell wrote:
>>> Please vote on releasing the following candidate as Apache Spark version
>>> 1.3.0!
>>>
>>
Please vote on releasing the following candidate as Apache Spark version 1.3.0!
The tag to be voted on is v1.3.0-rc2 (commit 4aaf48d4):
https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=4aaf48d46d13129f0f9bdafd771dd80fe568a7dc
The release files, including signatures, digests, etc. ca
You may need to add the -Phadoop-2.4 profile. When building or release
packages for Hadoop 2.4 we use the following flags:
-Phadoop-2.4 -Phive -Phive-thriftserver -Pyarn
- Patrick
On Thu, Mar 5, 2015 at 12:47 PM, Kelly, Jonathan wrote:
> I confirmed that this has nothing to do with BigTop by ru
[
https://issues.apache.org/jira/browse/SPARK-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6175:
---
Priority: Blocker (was: Major)
> Executor log links are using internal addresses in
[
https://issues.apache.org/jira/browse/SPARK-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-6182.
Resolution: Fixed
Fix Version/s: 1.3.0
Assignee: Sean Owen
> spark-par
not be a trait
> >>
> >> object StorageLevel {
> >> private[this] case object _MemoryOnly extends StorageLevel
> >> final val MemoryOnly: StorageLevel = _MemoryOnly
> >>
> >> private[this] case object _DiskOnly extends StorageLevel
>
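The quoted fragment above sketches option #4: an enumeration as a sealed abstract class (not a trait) whose values are private case objects exposed as `final val`s in an object of the same name, mirroring StorageLevel. A minimal self-contained sketch of that pattern, with illustrative value names rather than Spark's actual definitions:

```scala
// Hedged sketch of the sealed-class enumeration pattern under discussion.
// The value names (MemoryOnly, DiskOnly) are illustrative, not Spark's API.
sealed abstract class StorageLevel

object StorageLevel {
  // Each value is a hidden case object; the public surface is only the
  // final vals typed as StorageLevel, so the concrete objects stay private.
  private case object _MemoryOnly extends StorageLevel
  final val MemoryOnly: StorageLevel = _MemoryOnly

  private case object _DiskOnly extends StorageLevel
  final val DiskOnly: StorageLevel = _DiskOnly
}
```

Because the class is sealed and the case objects are singletons, values can be compared by reference and pattern matching stays exhaustive, while the `StorageLevel`-typed vals keep the concrete subtypes out of the public API.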
Patrick Wendell created SPARK-6182:
--
Summary: spark-parent pom needs to be published for both 2.10 and
2.11
Key: SPARK-6182
URL: https://issues.apache.org/jira/browse/SPARK-6182
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-5143.
Resolution: Fixed
Fix Version/s: 1.3.0
> spark-network-yarn 2.11 depends on sp
[
https://issues.apache.org/jira/browse/SPARK-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-6149.
Resolution: Fixed
Fix Version/s: 1.3.0
> Spark SQL CLI doesn't work when
I like #4 as well and agree with Aaron's suggestion.
- Patrick
On Wed, Mar 4, 2015 at 6:07 PM, Aaron Davidson wrote:
> I'm cool with #4 as well, but make sure we dictate that the values should
> be defined within an object with the same name as the enumeration (like we
> do for StorageLevel). Ot
ince the byte array for the serialized task result
> shouldn't account for the majority of memory footprint anyways, I'm okay
> with leaving it as is, then.
>
> Thanks,
> Mingyu
>
>
> On 3/4/15, 5:07 PM, "Patrick Wendell" wrote:
>
>>Hey Min
Hey Mingyu,
I think it's broken out separately so we can record the time taken to
serialize the result. Once we've serialized it once, the second
serialization should be really simple since it's just wrapping
something that has already been turned into a byte buffer. Do you see
a specific issue with
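The point above can be sketched in a few lines: the first serialization does the real work (object graph to bytes), while the "second serialization" is just wrapping an existing byte array in a ByteBuffer, which copies nothing. This is a minimal illustration using plain Java serialization, not Spark's actual task-result code:

```scala
import java.io.{ByteArrayOutputStream, ObjectOutputStream}
import java.nio.ByteBuffer

// First pass: genuinely serialize the value into a byte array.
def serialize(value: AnyRef): Array[Byte] = {
  val bos = new ByteArrayOutputStream()
  val oos = new ObjectOutputStream(bos)
  oos.writeObject(value)
  oos.close()
  bos.toByteArray
}

val bytes = serialize("some task result")

// "Second serialization": ByteBuffer.wrap is O(1) and shares the same
// backing array, so no bytes are copied.
val wrapped: ByteBuffer = ByteBuffer.wrap(bytes)
```

Since the wrapped buffer shares its backing array with `bytes`, the extra step adds essentially no time or memory beyond the first serialization.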
[
https://issues.apache.org/jira/browse/SPARK-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347567#comment-14347567
]
Patrick Wendell commented on SPARK-5143:
Yes - good catch Sean. Curious that
[
https://issues.apache.org/jira/browse/SPARK-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6144:
---
Component/s: Spark Core
> When in cluster mode using ADD JAR with a hdfs:// sourced jar w
sider
> https://issues.apache.org/jira/browse/SPARK-6144 a serious regression
> from 1.2 (since it affects existing "addFile()" functionality if the
> URL is "hdfs:...").
>
> Will test other parts separately.
>
> On Tue, Mar 3, 2015 at 8:19 PM, Patrick Wen
[
https://issues.apache.org/jira/browse/SPARK-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347210#comment-14347210
]
Patrick Wendell commented on SPARK-6149:
Since this only affects the sbt b
[
https://issues.apache.org/jira/browse/SPARK-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6149:
---
Priority: Critical (was: Blocker)
> Spark SQL CLI doesn't work when compiled against
[
https://issues.apache.org/jira/browse/SPARK-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346519#comment-14346519
]
Patrick Wendell commented on SPARK-6149:
To be more specific, I am sugges
[
https://issues.apache.org/jira/browse/SPARK-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346515#comment-14346515
]
Patrick Wendell commented on SPARK-6149:
Yes - because of this I think si
Please vote on releasing the following candidate as Apache Spark version 1.3.0!
The tag to be voted on is v1.3.0-rc2 (commit 3af2687):
https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=3af26870e5163438868c4eb2df88380a533bb232
The release files, including signatures, digests, etc. can
This vote is cancelled in favor of RC2.
On Thu, Feb 26, 2015 at 9:50 AM, Sandor Van Wassenhove
wrote:
> FWIW, I tested the first rc and saw no regressions. I ran our benchmarks
> built against spark 1.3 and saw results consistent with spark 1.2/1.2.1.
>
> On 2/25/15, 5:51 PM, &quo
[
https://issues.apache.org/jira/browse/SPARK-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6144:
---
Target Version/s: 1.3.0
> When in cluster mode using ADD JAR with a hdfs:// sourced jar w
[
https://issues.apache.org/jira/browse/SPARK-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6144:
---
Priority: Blocker (was: Major)
> When in cluster mode using ADD JAR with a hdfs:// sour
[
https://issues.apache.org/jira/browse/SPARK-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6122:
---
Assignee: Calvin Jia
> Upgrade Tachyon dependency to 0.
[
https://issues.apache.org/jira/browse/SPARK-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6122:
---
Fix Version/s: (was: 1.3.0)
> Upgrade Tachyon dependency to 0.
[
https://issues.apache.org/jira/browse/SPARK-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6122:
---
Assignee: Patrick Wendell
> Upgrade Tachyon dependency to 0.
[
https://issues.apache.org/jira/browse/SPARK-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6122:
---
Target Version/s: 1.4.0
> Upgrade Tachyon dependency to 0.
[
https://issues.apache.org/jira/browse/SPARK-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6122:
---
Assignee: (was: Patrick Wendell)
> Upgrade Tachyon dependency to 0.
[
https://issues.apache.org/jira/browse/SPARK-6048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-6048.
Resolution: Fixed
Fix Version/s: 1.3.0
> SparkConf.translateConfKey should
[
https://issues.apache.org/jira/browse/SPARK-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-6066.
Resolution: Fixed
Fix Version/s: 1.3.0
Thanks Andrew and Marcelo for your work on
Yeah calling it Hadoop 2 was a very bad naming choice (of mine!), this
was back when CDH4 was the only real distribution available with some
of the newer Hadoop API's and packaging.
I think to not surprise people using this, it's best to keep v1 as the
default. Overall, we try not to change defaul
[
https://issues.apache.org/jira/browse/SPARK-6087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6087:
---
Labels: starter (was: )
> Provide actionable exception if Kryo buffer is not large eno
[
https://issues.apache.org/jira/browse/SPARK-6086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6086:
---
Component/s: SQL
> Exceptions in DAGScheduler.updateAccumulat
[
https://issues.apache.org/jira/browse/SPARK-6086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6086:
---
Component/s: Spark Core
> Exceptions in DAGScheduler.updateAccumulat
[
https://issues.apache.org/jira/browse/SPARK-6086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6086:
---
Description:
Class Cast Exceptions in DAGScheduler.updateAccumulators, when DAGScheduler is
[
https://issues.apache.org/jira/browse/SPARK-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14341881#comment-14341881
]
Patrick Wendell commented on SPARK-6066:
[~vanzin] - yes you are right (an e
Patrick Wendell created SPARK-6087:
--
Summary: Provide actionable exception if Kryo buffer is not large
enough
Key: SPARK-6087
URL: https://issues.apache.org/jira/browse/SPARK-6087
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-6087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6087:
---
Description:
Right now if you don't have a large enough Kryo buffer, you get a r
[
https://issues.apache.org/jira/browse/SPARK-5979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-5979.
Resolution: Fixed
Fix Version/s: 1.3.0
> `--packages` should not exclude sp
[
https://issues.apache.org/jira/browse/SPARK-6032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-6032.
Resolution: Fixed
Fix Version/s: 1.3.0
Assignee: Burak Yavuz
> Move
[
https://issues.apache.org/jira/browse/SPARK-5979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-5979:
---
Assignee: Burak Yavuz
> `--packages` should not exclude spark streaming assembly jars
[
https://issues.apache.org/jira/browse/SPARK-6070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-6070.
Resolution: Fixed
Fix Version/s: 1.3.0
Assignee: Marcelo Vanzin
> Y
[
https://issues.apache.org/jira/browse/SPARK-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6050:
---
Assignee: Marcelo Vanzin
> Spark on YARN does not work --executor-cores is specif
[
https://issues.apache.org/jira/browse/SPARK-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6066:
---
Component/s: Spark Core
> Metadata in event log makes it very difficult for exter
[
https://issues.apache.org/jira/browse/SPARK-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14341015#comment-14341015
]
Patrick Wendell commented on SPARK-6066:
Hey Marcelo,
I agree having a pu
[
https://issues.apache.org/jira/browse/SPARK-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340882#comment-14340882
]
Patrick Wendell commented on SPARK-6066:
What if as a simple fix we do t
[
https://issues.apache.org/jira/browse/SPARK-6048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340740#comment-14340740
]
Patrick Wendell commented on SPARK-6048:
Okay I just talked to [~vanzin] off
[
https://issues.apache.org/jira/browse/SPARK-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-6055:
---
Summary: Memory leak in pyspark sql due to incorrect equality check (was:
memory leak in
I think we need to just update the docs, it is a bit unclear right
now. At the time, we made it worded fairly sternly because we really
wanted people to use --jars when we deprecated SPARK_CLASSPATH. But
there are other types of deployments where there is a legitimate need
to augment the classpath
[
https://issues.apache.org/jira/browse/SPARK-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339857#comment-14339857
]
Patrick Wendell commented on SPARK-6050:
[~mrid...@yahoo-inc.com] thanks
[
https://issues.apache.org/jira/browse/SPARK-6048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339629#comment-14339629
]
Patrick Wendell edited comment on SPARK-6048 at 2/27/15 2:3
[
https://issues.apache.org/jira/browse/SPARK-6048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339629#comment-14339629
]
Patrick Wendell commented on SPARK-6048:
Hey All,
No options on which desig
org.apache.spark.scheduler.Task.run(Task.scala:64) at
>> > org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:197)
>> > at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> > at
>> &
This has been around for multiple versions of Spark, so I am a bit
surprised to see it not working in your build.
- Patrick
On Wed, Feb 25, 2015 at 9:41 AM, Patrick Wendell wrote:
> Hey Cody,
>
> What build command are you using? In any case, we can actually comment
> out the "
Hey Cody,
What build command are you using? In any case, we can actually comment
out the "unused" thing now in the root pom.xml. It existed just to
ensure that at least one dependency was listed in the shade plugin
configuration (otherwise, some work we do that requires the shade
plugin does not h
[
https://issues.apache.org/jira/browse/SPARK-3851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-3851:
---
Fix Version/s: 1.3.0
> Support for reading parquet files with different but compatible sch
Added - thanks! I trimmed it down a bit to fit our normal description length.
On Mon, Jan 5, 2015 at 8:24 AM, Thomas Stone wrote:
> Please can we add PredictionIO to
> https://cwiki.apache.org/confluence/display/SPARK/Powered+By+Spark
>
> PredictionIO
> http://prediction.io/
>
> PredictionIO is a
[
https://issues.apache.org/jira/browse/SPARK-5845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14335667#comment-14335667
]
Patrick Wendell commented on SPARK-5845:
[~kayousterhout] did you mean the
I've added it, thanks!
On Fri, Feb 20, 2015 at 12:22 AM, Emre Sevinc wrote:
>
> Hello,
>
> Could you please add Big Industries to the Powered by Spark page at
> https://cwiki.apache.org/confluence/display/SPARK/Powered+By+Spark ?
>
>
> Company Name: Big Industries
>
> URL: http://http://www.bigi
It's only been reported on this thread by Tom, so far.
On Mon, Feb 23, 2015 at 10:29 AM, Marcelo Vanzin wrote:
> Hey Patrick,
>
> Do you have a link to the bug related to Python and Yarn? I looked at
> the blockers in Jira but couldn't find it.
>
> On Mon, Feb 2
So actually, the list of blockers on JIRA is a bit outdated. These
days I won't cut RC1 unless there are no known issues that I'm aware
of that would actually block the release (that's what the snapshot
ones are for). I'm going to clean those up and push others to do so
also.
The main issues I'm a
[
https://issues.apache.org/jira/browse/SPARK-5463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14333608#comment-14333608
]
Patrick Wendell commented on SPARK-5463:
Bumping to critical. Per our off
[
https://issues.apache.org/jira/browse/SPARK-5904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-5904.
Resolution: Fixed
Fix Version/s: 1.3.0
I think rxin just forgot to close this. It
[
https://issues.apache.org/jira/browse/SPARK-3650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-3650:
---
Priority: Critical (was: Blocker)
> Triangle Count handles reverse edges incorrec
[
https://issues.apache.org/jira/browse/SPARK-5463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-5463:
---
Priority: Critical (was: Blocker)
> Fix Parquet filter push-d
[
https://issues.apache.org/jira/browse/SPARK-3511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-3511.
Resolution: Won't Fix
Never ended up doing this. It's stale so I'm just
.
>
> I think that what you are saying is exactly the issue: on my master node UI
> at the bottom I can see the list of "Completed Drivers" all with ERROR
> state...
>
> Thanks,
> Oleg
>
> -Original Message-
> From: Patrick Wendell [mailto:pwend
ontext$.blockOn(BlockContext.scala:53)
> at scala.concurrent.Await$.result(package.scala:107)
> at
> org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:127)
> at
> org.apache.spark.deploy.Sp
[
https://issues.apache.org/jira/browse/SPARK-5916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332001#comment-14332001
]
Patrick Wendell commented on SPARK-5916:
The naming conflict is unfortu
[
https://issues.apache.org/jira/browse/SPARK-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14331986#comment-14331986
]
Patrick Wendell commented on SPARK-5920:
We should definitely do this.
>
[
https://issues.apache.org/jira/browse/SPARK-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-5920:
---
Priority: Blocker (was: Critical)
> Use a BufferedInputStream to read local shuffle d
[
https://issues.apache.org/jira/browse/SPARK-5920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-5920:
---
Priority: Critical (was: Major)
> Use a BufferedInputStream to read local shuffle d
[
https://issues.apache.org/jira/browse/SPARK-2389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327903#comment-14327903
]
Patrick Wendell commented on SPARK-2389:
I've seen some variants of this
[
https://issues.apache.org/jira/browse/SPARK-5887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-5887.
Resolution: Invalid
The Datastax connector is not part of the Apache Spark distribution
[
https://issues.apache.org/jira/browse/SPARK-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-5863:
---
Priority: Critical (was: Major)
> Performance regression in Spark SQL/Parquet due
I believe the heuristic governing the way that take() decides to fetch
partitions changed between these versions. It could be that in certain
cases the new heuristic is worse, but it might be good to just look at
the source code and see, for your number of elements taken and number
of partitions, i
> UISeleniumSuite:
> *** RUN ABORTED ***
> java.lang.NoClassDefFoundError: org/w3c/dom/ElementTraversal
> ...
This is a newer test suite. There is something flaky about it, we
should definitely fix it, IMO it's not a blocker though.
>
> Patrick this link gives a 404:
> https://people.apache.org
[
https://issues.apache.org/jira/browse/SPARK-5856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-5856.
Resolution: Fixed
Fix Version/s: 1.3.0
> In Maven build script, launch Zinc w
[
https://issues.apache.org/jira/browse/SPARK-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-5864.
Resolution: Fixed
Fix Version/s: 1.3.0
Assignee: Davies Liu
> support .
[
https://issues.apache.org/jira/browse/SPARK-5850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-5850.
Resolution: Fixed
Fix Version/s: 1.3.0
> Remove experimental label for Scala 2
[
https://issues.apache.org/jira/browse/SPARK-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14325622#comment-14325622
]
Patrick Wendell commented on SPARK-4579:
[~andrewor14] Can you take a loo
[
https://issues.apache.org/jira/browse/SPARK-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-4579:
---
Labels: (was: starter)
> Scheduling Delay appears negat
[
https://issues.apache.org/jira/browse/SPARK-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-4579:
---
Assignee: Andrew Or
> Scheduling Delay appears negat
[
https://issues.apache.org/jira/browse/SPARK-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-4579:
---
Labels: starter (was: )
> Scheduling Delay appears negat
[
https://issues.apache.org/jira/browse/SPARK-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-4579:
---
Priority: Critical (was: Minor)
> Scheduling Delay appears negat
Hey Committers,
Now that Spark 1.3 rc1 is cut, please restrict branch-1.3 merges to
the following:
1. Fixes for issues blocking the 1.3 release (i.e. 1.2.X regressions)
2. Documentation and tests.
3. Fixes for non-blocker issues that are surgical, low-risk, and/or
outside of the core.
If there i
Please vote on releasing the following candidate as Apache Spark version 1.3.0!
The tag to be voted on is v1.3.0-rc1 (commit f97b0d4a):
https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=f97b0d4a6b26504916816d7aefcf3132cd1da6c2
The release files, including signatures, digests, etc. ca
Hey Niranda,
It seems to me a lot of effort to support multiple libraries inside of
Spark like this, so I'm not sure that's a great solution.
If you are building an application that embeds Spark, is it not
possible for you to continue to use Jetty for Spark's internal servers
and use tomcat for y
[
https://issues.apache.org/jira/browse/SPARK-4454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-4454:
---
Labels: backport-needed (was: )
> Race condition in DAGSchedu
[
https://issues.apache.org/jira/browse/SPARK-4454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-4454:
---
Target Version/s: 1.3.0, 1.2.2 (was: 1.3.0)
> Race condition in DAGSchedu
[
https://issues.apache.org/jira/browse/SPARK-4454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell reopened SPARK-4454:
Actually, re-opening this since we need to back port it.
> Race condition in DAGSchedu
[
https://issues.apache.org/jira/browse/SPARK-4454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-4454.
Resolution: Fixed
Fix Version/s: 1.3.0
We can't be 100% sure this is fixed be
[
https://issues.apache.org/jira/browse/SPARK-5811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-5811.
Resolution: Fixed
Assignee: Burak Yavuz
> Documentation for --packages
[
https://issues.apache.org/jira/browse/SPARK-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324787#comment-14324787
]
Patrick Wendell commented on SPARK-5864:
I merged davies PR, but per Bur
[
https://issues.apache.org/jira/browse/SPARK-4454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-4454:
---
Priority: Critical (was: Minor)
> Race condition in DAGSchedu
[
https://issues.apache.org/jira/browse/SPARK-4454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell updated SPARK-4454:
---
Target Version/s: 1.3.0
> Race condition in DAGSchedu
[
https://issues.apache.org/jira/browse/SPARK-4454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324747#comment-14324747
]
Patrick Wendell commented on SPARK-4454:
[~srowen] yeah I meant the particula