Yes, at least for my query scenarios, I have been able to use Spark 1.1 with
Hadoop 2.4 against Hadoop 2.5. Note, Hadoop 2.5 is considered a relatively
minor release
(http://hadoop.apache.org/releases.html#11+August%2C+2014%3A+Release+2.5.0+available)
whereas Hadoop 2.4 and 2.3 were considered mo
Hi, I want to know how to use JdbcRDD in Java, not Scala. I'm trying to figure
out the last parameter in the constructor of JdbcRDD.
thanks
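For reference, the last constructor parameter of JdbcRDD is a Scala Function1 that maps each ResultSet row to your record type. Below is a hedged sketch of one way to wire this up from Java against the Spark 1.x Scala API, by extending scala.runtime.AbstractFunction0/AbstractFunction1; the JDBC URL, table name, and class names are made up for illustration, and this is not runnable without Spark and a JDBC database on the classpath:

```java
import java.io.Serializable;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import org.apache.spark.SparkContext;
import org.apache.spark.rdd.JdbcRDD;
import scala.reflect.ClassTag$;
import scala.runtime.AbstractFunction0;
import scala.runtime.AbstractFunction1;

// Connection factory: a Scala Function0<Connection> written in Java.
class ConnFactory extends AbstractFunction0<Connection> implements Serializable {
    public Connection apply() {
        try {
            return DriverManager.getConnection("jdbc:h2:mem:demo"); // placeholder URL
        } catch (Exception e) { throw new RuntimeException(e); }
    }
}

// This is the "last parameter": map the current ResultSet row to a value.
class MapRow extends AbstractFunction1<ResultSet, Object[]> implements Serializable {
    public Object[] apply(ResultSet rs) {
        try {
            return new Object[] { rs.getLong(1), rs.getString(2) };
        } catch (Exception e) { throw new RuntimeException(e); }
    }
}

public class JdbcRddFromJava {
    public static void main(String[] args) {
        SparkContext sc = new SparkContext("local[2]", "jdbc-demo");
        JdbcRDD<Object[]> rdd = new JdbcRDD<Object[]>(
            sc,
            new ConnFactory(),
            "SELECT id, name FROM mytable WHERE id >= ? AND id <= ?",
            1L, 100L,   // bounds substituted into the two ?s
            3,          // number of partitions
            new MapRow(),
            // the implicit ClassTag becomes an explicit argument from Java
            ClassTag$.MODULE$.apply(Object[].class));
        System.out.println(rdd.count());
        sc.stop();
    }
}
```

Note both function classes implement Serializable, since they are shipped to executors.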
Please correct me if I'm wrong, but I was under the impression, as per the Maven
repositories, that it was just to stay more in sync with the various versions of
Hadoop. Looking at the latest documentation
(https://spark.apache.org/docs/latest/building-with-maven.html), there are
multiple Hadoop v
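As a sketch of the pattern from that guide (the exact profile names and properties varied across releases, so check the doc for your version), the Hadoop version is selected at build time:

```
# Build Spark against the Hadoop 2.4-line profile but a 2.5.0 dependency version.
# -Phadoop-2.4 and -Dhadoop.version follow the building-with-maven guide.
mvn -Phadoop-2.4 -Dhadoop.version=2.5.0 -DskipTests clean package
```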
It is in the DAG scheduler. Look for broadcast.
On Thursday, September 11, 2014, Sandy Ryza wrote:
> Hmm, well I can't find it now, must have been hallucinating. Do you know
> off the top of your head where I'd be able to find the size to log it?
>
> On Thu, Sep 11, 2014 at 6:33 PM, Reynold Xin
Hmm, well I can't find it now, must have been hallucinating. Do you know
off the top of your head where I'd be able to find the size to log it?
On Thu, Sep 11, 2014 at 6:33 PM, Reynold Xin wrote:
> I didn't know about that
>
> On Thu, Sep 11, 2014 at 6:29 PM, Sandy Ryza
> wrote:
>
>> It used to be available on the UI, no?
I’m not sure if I’m completely answering your question here but I’m currently
working (on OS X) with Hadoop 2.5, and I used the Spark 1.1 build for Hadoop 2.4
without any issues.
On September 11, 2014 at 18:11:46, Haopu Wang (hw...@qilinsoft.com) wrote:
I see the binary packages include hadoop 1, 2.3
I didn't know about that
On Thu, Sep 11, 2014 at 6:29 PM, Sandy Ryza wrote:
> It used to be available on the UI, no?
>
> On Thu, Sep 11, 2014 at 6:26 PM, Reynold Xin wrote:
>
> > I don't think so. We should probably add a line to log it.
> >
> >
> > On Thursday, September 11, 2014, Sandy Ryza wrote:
It used to be available on the UI, no?
On Thu, Sep 11, 2014 at 6:26 PM, Reynold Xin wrote:
> I don't think so. We should probably add a line to log it.
>
>
> On Thursday, September 11, 2014, Sandy Ryza
> wrote:
>
>> After the change to broadcast all task data, is there any easy way to
>> discover the serialized size of the data getting sent down for a task?
I don't think so. We should probably add a line to log it.
On Thursday, September 11, 2014, Sandy Ryza wrote:
> After the change to broadcast all task data, is there any easy way to
> discover the serialized size of the data getting sent down for a task?
>
> thanks,
> -Sandy
>
After the change to broadcast all task data, is there any easy way to
discover the serialized size of the data getting sent down for a task?
thanks,
-Sandy
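Outside of Spark's internals, one rough way to gauge a payload's serialized size is to run it through a serializer yourself and count the bytes. A minimal sketch using plain java.io serialization (not Spark's actual code path, which may use Kryo and compression, so treat the numbers as a lower-level approximation):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializedSize {
    // Byte count the default Java serializer produces for obj.
    static int serializedSize(Serializable obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        return bytes.size();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a task's broadcast data: 1024 ints is a bit over 4 KB serialized.
        int[] payload = new int[1024];
        System.out.println(serializedSize(payload));
    }
}
```

Logging this value right before the data is handed to the broadcast is one way to get the size into the driver logs.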
Nice work everybody! I'm looking forward to trying out this release!
On Thu, Sep 11, 2014 at 8:12 PM, Patrick Wendell wrote:
> I am happy to announce the availability of Spark 1.1.0! Spark 1.1.0 is
> the second release on the API-compatible 1.X line. It is Spark's
> largest release ever, with contributions from 171 developers!
I am happy to announce the availability of Spark 1.1.0! Spark 1.1.0 is
the second release on the API-compatible 1.X line. It is Spark's
largest release ever, with contributions from 171 developers!
This release brings operational and performance improvements in Spark
core including a new implement
Hi All, I currently have 3 questions regarding memory usage:
1)
Regarding overall memory usage:
If I set SPARK_DRIVER_MEMORY to x GB, Spark reports
14/09/11 15:36:41 INFO MemoryStore: MemoryStore started with capacity
~0.55*x GB
*Question:*
Does this relate to spark.storage.memoryFraction (defau
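On question 1, my understanding (hedged; verify against your release's configuration docs) is that in Spark 1.x the MemoryStore capacity is heap size × spark.storage.memoryFraction (default 0.6) × spark.storage.safetyFraction (default 0.9), i.e. ~0.54 of the heap, which matches the ~0.55*x you observe; the JVM's reported max heap is itself slightly below -Xmx, which shifts the ratio a little. A worked example of the arithmetic:

```java
public class MemoryStoreCapacity {
    public static void main(String[] args) {
        double heapGb = 10.0;        // stand-in for SPARK_DRIVER_MEMORY = x GB
        double memoryFraction = 0.6; // spark.storage.memoryFraction default (1.x)
        double safetyFraction = 0.9; // spark.storage.safetyFraction default (1.x)
        double capacityGb = heapGb * memoryFraction * safetyFraction;
        System.out.println(capacityGb); // ~0.54 * x
    }
}
```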
I purposely disabled the Maven master tests while we’re debugging why they’re
always failing the Hive tests. We’ll post an update later when we learn more;
in the meantime, please don’t trigger those tests, since we don’t want them to
clobber some of the work that we’re doing by hand in those w
it was part of the review queue, but it looks like the runs have been
gc'd. oh well!
best,
matt
On 09/11/2014 12:18 PM, shane knapp wrote:
you can just click on 'rebuild', if you'd like. what project
specifically? (i had forgotten that i'd killed
https://amplab.cs.berkeley.edu/jenkins/job/
you can just click on 'rebuild', if you'd like. what project specifically?
(i had forgotten that i'd killed
https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-with-YARN/557/,
which i just started a rebuild on)
On Thu, Sep 11, 2014 at 9:15 AM, Matthew Farrellee wrote:
> shane,
>
> is there anything we should do for pull requests that failed, but for
> unrelated issues?
shane,
is there anything we should do for pull requests that failed, but for
unrelated issues?
best,
matt
On 09/11/2014 11:29 AM, shane knapp wrote:
...and the restart is done.
On Thu, Sep 11, 2014 at 7:38 AM, shane knapp wrote:
jenkins is now in quiet mode, and a restart is happening
...and the restart is done.
On Thu, Sep 11, 2014 at 7:38 AM, shane knapp wrote:
> jenkins is now in quiet mode, and a restart is happening soon.
>
> On Wed, Sep 10, 2014 at 3:44 PM, shane knapp wrote:
>
>> that's kinda what we're hoping as well. :)
>>
>> On Wed, Sep 10, 2014 at 2:46 PM, Nicholas Chammas <
>> nicholas.cham...@gmail.com> wrote:
jenkins is now in quiet mode, and a restart is happening soon.
On Wed, Sep 10, 2014 at 3:44 PM, shane knapp wrote:
> that's kinda what we're hoping as well. :)
>
> On Wed, Sep 10, 2014 at 2:46 PM, Nicholas Chammas <
> nicholas.cham...@gmail.com> wrote:
>
>> I'm looking forward to this. :)
>>
>>
This is my case about broadcast variable:
14/07/21 19:49:13 INFO Executor: Running task ID 4
14/07/21 19:49:13 INFO DAGScheduler: Completed ResultTask(0, 2)
14/07/21 19:49:13 INFO TaskSetManager: Finished TID 2 in 95 ms on localhost (progress: 3/106)
14/07/21 19:49:13 INFO TableOutputFormat:
Hi,
Can you attach more logs to see if there is some entry from ContextCleaner?
I met a very similar issue before… but never got it resolved
Best,
--
Nan Zhu
On Thursday, September 11, 2014 at 10:13 AM, Dibyendu Bhattacharya wrote:
> Dear All,
>
> Not sure if this is a false alarm.
Thanks everyone for weighing in on this.
I had backported the kinesis module from master to Spark 1.0.2, so just to
confirm I am not missing anything, I did a dependency-graph compare of
my spark build with spark-master
and org.apache.httpcomponents:httpclient:jar does seem to resolve to 4.1.2
depen
Hi, there
I am trying to enable authentication on Spark in standalone mode.
It seems like only SparkSubmit loads the properties from spark-defaults.conf;
org.apache.spark.deploy.master.Master does not really load the default
settings from spark-defaults.conf.
Does it mean the Spark authentic
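That matches my understanding: spark-defaults.conf is read by SparkSubmit on the driver side, while the standalone Master/Worker daemons pick up their configuration from spark-env.sh. A sketch of one way to pass the authentication settings to the daemons (the secret value is a placeholder; check the security documentation for your release):

```
# spark-env.sh on every node -- standalone daemons don't read spark-defaults.conf
SPARK_DAEMON_JAVA_OPTS="-Dspark.authenticate=true -Dspark.authenticate.secret=changeme"

# spark-defaults.conf -- picked up by spark-submit for the driver and executors
spark.authenticate        true
spark.authenticate.secret changeme
```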