The problem seems to be that unpicklable RDD objects are being pulled into
function closures. In your failing doctests, it looks like the RDD created
through sc.parallelize is being pulled into the map lambda's closure.
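The failure mode can be sketched without Spark at all: the driver serializes each task function's closure, and anything unpicklable the closure references (an RDD wraps a live context, so it can't be serialized) breaks that step. A minimal stand-in, using a hypothetical FakeRDD class rather than a real RDD:

```python
import pickle

class FakeRDD:
    """Stand-in for an RDD: like the real thing, it refuses to pickle
    because it conceptually holds a live driver-side connection."""
    def __reduce__(self):
        raise pickle.PicklingError("RDD objects cannot be serialized")

rdd = FakeRDD()

# BAD: the closure-like state references the RDD itself, so serializing
# it drags the RDD along and fails.
try:
    pickle.dumps({"captured": rdd})
    capture_failed = False
except pickle.PicklingError:
    capture_failed = True

# GOOD: extract only the plain data the function needs into a local
# variable before defining the lambda, so the closure stays picklable.
threshold = 10  # e.g. a value computed on the driver, not the RDD itself
ok_bytes = pickle.dumps({"captured": threshold})
```

The fix in PySpark code follows the same pattern: don't reference the RDD variable inside the lambda you pass to map; pull out the scalar or list you actually need first.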
I opened a new Dill bug with a small test case that reproduces
https://groups.google.com/forum/#!topic/gcp-hadoop-announce/EfQms8tK5cE
I suspect they are using their own builds. Has anybody had a chance to look
at it?
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi https://twitter.com/mayur_rustagi
Hello all,
I was wondering whether Spark/mllib supports Artificial Neural Networks (ANNs)?
If not, I am currently working on an implementation of it. I re-use the code
for linear regression and gradient descent as much as possible.
Would the community be interested in such an implementation? Or
Yes, that's what we did: we added two gradient functions to Gradient.scala and
created PoissonRegression and GammaRegression using these gradients. We made
a PR on this.
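For reference, the Poisson case those gradient functions cover can be sketched in a few lines. This is a hand-rolled illustration of the math, not the code from the PR: with the log link, the per-example negative log-likelihood is exp(w·x) − y·(w·x) up to a constant, so its gradient with respect to w is (exp(w·x) − y)·x.

```python
import math

def poisson_gradient(x, y, w):
    """Gradient of the Poisson negative log-likelihood for one example.

    Model: y ~ Poisson(exp(w . x)). Dropping the constant log(y!) term,
    the per-example loss is exp(w . x) - y * (w . x), whose gradient
    with respect to w is (exp(w . x) - y) * x.
    """
    margin = sum(wi * xi for wi, xi in zip(w, x))
    scale = math.exp(margin) - y
    return [scale * xi for xi in x]

# At w = 0 the predicted rate is exp(0) = 1, so an example with y = 1
# contributes zero gradient: the model already fits it exactly.
print(poisson_gradient([1.0, 2.0], 1.0, [0.0, 0.0]))  # [0.0, 0.0]
```

Plugging such a function into gradient descent is what lets the existing linear-regression machinery be reused: only the loss/gradient pair changes.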
Please vote on releasing the following candidate as Apache Spark version 1.0.1!
The tag to be voted on is v1.0.1-rc1 (commit 7feeda3):
https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=7feeda3d729f9397aa15ee8750c01ef5aa601962
The release files, including signatures, digests, etc.
Hi,
I am a Spark newbie. I just downloaded Spark 1.0.0 and the latest IntelliJ
version 13.1 with the Scala plug-in. At the spark-1.0.0 top level, I executed
the following SBT commands and they ran successfully.
- ./sbt/sbt assembly
- ./sbt/sbt update gen-idea
After opening IntelliJ
IntelliJ's parser/analyzer/compiler behaves differently from the Scala
compiler, and this sometimes leads to inconsistent behavior. This is one of
those cases. In general, while we use IntelliJ, we don't use it to build
anything; I personally always build on the command line with sbt or Maven.
On Thu, Jun 26, 2014
Responded on the jira...
On Thu, Jun 26, 2014 at 9:17 PM, Bharath Ravi Kumar reachb...@gmail.com
wrote:
Hi,
I've been encountering an NPE when invoking reduceByKey on a JavaPairRDD since
upgrading to 1.0.0. The issue is straightforward to reproduce with 1.0.0
and doesn't occur with 0.9.0. The
Hi Ron Hu,
The Idea project generated with update gen-idea didn't work properly for me
either. My workaround is to open the corresponding Maven project in Idea
(File -> Open, look for the .pom file). To compile the opened project I use
the Maven window in Idea (View -> Show Maven). However, tests fail to
Hi Bert,
It would be extremely interesting. Do you plan to implement autoencoder as
well? It would be great to have deep learning in Spark.
Best regards, Alexander
On 27.06.2014, at 4:47, Bert Greevenbosch bert.greevenbo...@huawei.com
wrote:
Hello all,
I was wondering whether