Hi,
I am not sure I know how to. The above should have worked. Apart from the
well-known trick of redirecting stdout to stderr, it would be great to know
why you need it!
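For concreteness, a minimal sketch of that trick in plain Scala (no Spark
involved), in case a scoped redirect is all you need:

  object RedirectOut {
    def main(args: Array[String]): Unit = {
      // Console.withOut scopes the redirection to just this block,
      // so anything printed inside it lands on stderr.
      Console.withOut(Console.err) {
        println("this line goes to stderr")
      }
      println("and this one is back on stdout")
    }
  }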
On Sat, Nov 30, 2013 at 2:53 PM, Wenlei Xie wenlei@gmail.com wrote:
Hi Prashant,
I copied the
Thanks a lot Evan.
Your help is really appreciated.
BR,
Aslan
On Sun, Dec 1, 2013 at 3:00 AM, Evan R. Sparks evan.spa...@gmail.com wrote:
The MLI repo doesn't yet have support for collaborative filtering, though
we've got a private branch we're working on cleaning up that will add it
I did check the DNS scenario when I first got started on this problem - I've
been bitten by that more than once setting up Spark on various clusters
with inconsistent DNS setups. That wasn't it, though.
It turns out that there was a race condition between when the executors
were registering and
The short-term solutions have already been discussed: decrease the
number of reducers (and mappers, if you need them to be tied) or
potentially turn off compression if Snappy is holding too much buffer
space.
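A minimal sketch of both knobs, assuming Spark 0.8-era system-property
configuration (the property name and the explicit-reducer-count overload
are the standard ones, but verify against your version):

  import org.apache.spark.SparkContext
  import org.apache.spark.SparkContext._

  object ShuffleTuning {
    def main(args: Array[String]): Unit = {
      // Disable shuffle compression so each open reducer stream stops
      // holding a compression buffer; set this before creating the context.
      System.setProperty("spark.shuffle.compress", "false")

      val sc = new SparkContext("local[2]", "shuffle-tuning-sketch")
      val pairs = sc.parallelize(1 to 100000).map(i => (i % 1000, 1))

      // Pass an explicit, smaller number of reducers to the shuffle.
      val counts = pairs.reduceByKey(_ + _, 16)
      println(counts.count())
      sc.stop()
    }
  }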
Just to follow up on this (sorry for the delay; I was busy/out for
Thanksgiving),
Ah, interesting, thanks for reporting that. Do you mind opening a JIRA issue
for it? I think the right way would be to wait at least X seconds after start
before deciding that some blocks don’t have preferred locations available.
Matei
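(In the meantime, a hedged stopgap sketch, assuming the root cause is
executors registering after the first job is scheduled; the sleep duration
is an arbitrary placeholder:)

  import org.apache.spark.SparkContext

  object WaitForExecutors {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(args(0), "wait-for-executors-sketch")
      // Give executors a few seconds to register so block preferred
      // locations are known before the first job runs.
      Thread.sleep(5000)
      // ... submit jobs as usual ...
      sc.stop()
    }
  }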
On Dec 1, 2013, at 9:08 AM, Erik Freed
Shouldn't? I imported the new 0.8.0 jars into my build path and had to
update my imports accordingly. The only way I upload the Spark jars myself
is that they get packaged into my executable jar. The cluster should have
the right version based on the flag used to launch it (and it does).
On
Hi, all.
I have a question about Spark execution on Mesos.
I am hitting a task deserialization error on the mesos-slave.
My environment is below:
JDK : 1.7.0_45(OpenJDK)
Spark : 0.8.0-incubating
CDH : 4.4.0
Mesos : 0.14.0-rc4
I built the Spark executor from spark-0.8.0-incubating-bin-cdh4.tgz and put the
executor to
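(For reference, a minimal sketch of how the executor tarball is usually
wired up on Mesos for this Spark version, via the documented
spark.executor.uri property; the HDFS path and master URL below are
placeholders:)

  import org.apache.spark.SparkContext

  object MesosExecutorSetup {
    def main(args: Array[String]): Unit = {
      // Point Mesos slaves at the executor tarball; replace the path with
      // wherever you uploaded the -bin-cdh4 package.
      System.setProperty("spark.executor.uri",
        "hdfs://namenode:8020/tmp/spark-0.8.0-incubating-bin-cdh4.tgz")

      // The master URL points at the Mesos master (host:port placeholder).
      val sc = new SparkContext("mesos://mesos-master:5050", "mesos-sketch")
      println(sc.parallelize(1 to 100).count())
      sc.stop()
    }
  }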
Has this been resolved?
Forgive me if I missed the follow-up, but I've been having the exact same
problem.
- Horia
On Fri, Nov 22, 2013 at 5:38 AM, Maxime Lemaire digital@gmail.com wrote:
Hi all,
When I'm building Spark with Hadoop 2.2.0 support, workers can't connect to the
Spark master