I think this new JIRA issue unfortunately blocks the release:
https://issues.apache.org/jira/browse/SPARK-16664 - Persist call on data frames
with more than 200 columns is wiping out the data
Otherwise there'll need to be a 2.0.1 pretty much right after.
Thanks,
Ewan
On 23 Jul 2016 03:46,
+1
2016-07-22 19:32 GMT-07:00 Kousuke Saruta :
> +1 (non-binding)
>
> Tested on my cluster with three slave nodes.
>
> On 2016/07/23 10:25, Suresh Thalamati wrote:
>
> +1 (non-binding)
>
> Tested the data source API and JDBC data sources.
>
>
> On Jul 19, 2016, at 7:35
> On Jul 19, 2016, at 7:35 PM, Reynold Xin wrote:
>
> Please vote on releasing the following candidate as Apache Spark version
> 2.0.0. The vote is open until Friday, July 22, 2016 at 20:00 PDT and passes
+1
Tested on Ubuntu, ran a bunch of SparkR tests, found a broken link in the
docs, but not a blocker.
From: Michael Armbrust
Sent: Friday, July 22, 2016 3:18 PM
Subject: Re: [VOTE] Release Apache Spark 2.0.0 (RC5)
+1
On Fri, Jul 22, 2016 at 2:42 PM, Holden Karau wrote:
> +1 (non-binding)
>
> Built locally on Ubuntu 14.04, basic pyspark sanity checking & tested with
> a simple structured streaming project (spark-structured-streaming-ml) &
> spark-testing-base &
+1 (non-binding)
Built locally on Ubuntu 14.04, basic pyspark sanity checking & tested with
a simple structured streaming project (spark-structured-streaming-ml) &
spark-testing-base & high-performance-spark-examples (minor changes
required from preview version but seem intentional & jetty
2016-07-22 23:05 GMT+02:00 Ramon Rosa da Silva :
> Hi Folks,
>
>
>
> What do you think about allowing an update SaveMode via
> DataFrame.write.mode("update")?
>
> Right now Spark only has JDBC insert.
I'm working on a patch that creates a new mode, 'upsert'.
In MySQL it will use
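For illustration only, here is a sketch (my assumption, not the actual patch's code) of the kind of MySQL statement such an 'upsert' mode could generate:

```python
# Sketch only: building the kind of MySQL upsert statement an 'upsert'
# SaveMode might emit. The table and column names are illustrative, and
# this is not taken from the patch itself.
def mysql_upsert_sql(table, columns):
    cols = ", ".join(columns)
    placeholders = ", ".join(["?"] * len(columns))
    # On a duplicate key, overwrite each column with the incoming value.
    updates = ", ".join("{0} = VALUES({0})".format(c) for c in columns)
    return ("INSERT INTO {t} ({c}) VALUES ({p}) "
            "ON DUPLICATE KEY UPDATE {u}").format(
                t=table, c=cols, p=placeholders, u=updates)

print(mysql_upsert_sql("users", ["id", "name"]))
# INSERT INTO users (id, name) VALUES (?, ?) ON DUPLICATE KEY UPDATE id = VALUES(id), name = VALUES(name)
```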
It seems there is an incompatibility between your program's Scala version
and the Scala version Spark was compiled against.
Either you're using Scala 2.11 and your Spark installation was built with
2.10, or the other way around.
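As a quick sanity check, the "_2.11"-style suffix on the Spark artifact is the Scala binary version it was built against, and it has to match the Scala your project compiles with. A small sketch (the values below are made-up examples, not read from any build file):

```python
# Sketch: compare a Spark artifact's Scala binary suffix against the
# project's Scala version. Example values only.
artifact = "spark-core_2.11"
project_scala = "2.10.6"

artifact_binary = artifact.rsplit("_", 1)[1]             # "2.11"
project_binary = ".".join(project_scala.split(".")[:2])  # "2.10"

if artifact_binary != project_binary:
    # A mismatch here typically surfaces at runtime as NoSuchMethodError
    # on scala.reflect internals.
    print("mismatch: %s was built for Scala %s, project uses %s"
          % (artifact, artifact_binary, project_binary))
```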
On Fri, Jul 22, 2016 at 11:06 PM, Pedro Rodriguez
The dev list is meant for working on development of Spark, not as a way of
escalating an issue, just FYI.
If someone hasn't replied on the user list, either you haven't given it
enough time or no one has a fix for you. I've definitely gotten replies
from committers multiple times to many questions
Hi Folks,
What do you think about allowing an update SaveMode via
DataFrame.write.mode("update")?
Right now Spark only has JDBC insert.
+1 (non-binding)
Found a minor issue when trying to run some of the docker tests, but
nothing blocking the release. Will create a JIRA for that.
On Tue, Jul 19, 2016 at 7:35 PM, Reynold Xin wrote:
> Please vote on releasing the following candidate as Apache Spark version
+1
Tested on Mac.
Matei
> On Jul 22, 2016, at 11:18 AM, Joseph Bradley wrote:
>
> +1
>
> Mainly tested ML/Graph/R. Perf tests from Tim Hunter showed minor speedups
> from 1.6 for common ML algorithms.
>
> On Thu, Jul 21, 2016 at 9:41 AM, Ricardo Almeida
>
> +1 (non-binding)
>
> Tested PySpark Core, DataFrame/SQL, MLlib and Streaming on a
Using 2.0.0-preview via Maven,
so all dependencies should be correct, I guess:

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.11</artifactId>
      <version>2.0.0-preview</version>
      <scope>provided</scope>
    </dependency>

I see in the Maven dependencies that this brings in
scala-reflect-2.11.4
scala-compiler-2.11.0
and so on
On Fri, Jul 22, 2016 at 11:04 PM, Aaron Ilovici
I am getting the following error:
Exception in thread "main" java.lang.NoSuchMethodError:
scala.reflect.api.JavaUniverse.runtimeMirror(Ljava/lang/ClassLoader;)Lscala/reflect/api/JavaMirrors$JavaMirror;
at org.apache.spark.ml.recommendation.ALS.fit(ALS.scala:452)
Any suggestions to resolve this?
Dev team,
Can someone please help me here.
-VG
On Fri, Jul 22, 2016 at 8:30 PM, VG wrote:
> Can someone please help here.
>
> I tried both scala 2.10 and 2.11 on the system
>
>
>
> On Fri, Jul 22, 2016 at 7:59 PM, VG wrote:
>
>> I am using version
I use sbt. Rebuilds are super fast.
Michael
> On Jul 22, 2016, at 7:54 AM, Mikael Ståldal wrote:
>
> Is there any way to speed up an incremental build of Spark?
>
> For me it takes 8 minutes to build the project with just a few code changes.
>
> --
>
>
> Mikael
I assume you have enabled Zinc.
Cheers
On Fri, Jul 22, 2016 at 7:54 AM, Mikael Ståldal
wrote:
Is there any way to speed up an incremental build of Spark?
For me it takes 8 minutes to build the project with just a few code changes.
--
Mikael Ståldal
Senior software developer
Magine TV
mikael.stal...@magine.com
Grev Turegatan 3 | 114 46 Stockholm, Sweden
Hi,
Fixed now. git pull and start over.
https://github.com/apache/spark/commit/e1bd70f44b11141b000821e9754efeabc14f24a5
Regards,
Jacek Laskowski
https://medium.com/@jaceklaskowski/
Mastering Apache Spark http://bit.ly/mastering-apache-spark
Follow me at
I get this error when trying to build from Git master branch:
[ERROR] Failed to execute goal
net.alchim31.maven:scala-maven-plugin:3.2.2:doc-jar (attach-scaladocs) on
project spark-catalyst_2.11: MavenReportException: Error while creating
archive: wrap: Process exited with an error: 1 (Exit