Today, the entire Indian nation is mourning the demise of Irfan Khan, a
true Indian Muslim. And this idiot Zahid Amin, or whoever created this
bot (not sure if it's a bot or something), is spreading rumors about India.
The rights given to Muslims in India are far more open than in any other Muslim
majority coun
How about Python?
Java vs. Scala vs. Python vs. R: which is better?
On Sat, Oct 27, 2018 at 3:34 AM karan alang wrote:
> Hello,
> Is there a "performance" difference when using Java or Scala for Apache
> Spark?
>
> I understand there are other obvious differences (less code with Scala,
> easier t
Try:
--num-executors 3 --executor-cores 4 --executor-memory 2G \
--conf spark.scheduler.mode=FAIR
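Written out as a full spark-submit invocation, the suggestion above might look like the following sketch (the application script name is a placeholder, not from the thread):

```shell
# Hypothetical invocation; my_job.py is a placeholder for the actual application.
spark-submit \
  --master yarn \
  --num-executors 3 \
  --executor-cores 4 \
  --executor-memory 2G \
  --conf spark.scheduler.mode=FAIR \
  my_job.py
```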
On Mon, Jun 11, 2018 at 2:43 PM, Aakash Basu
wrote:
> Hi,
>
> I have submitted a job on a *4-node cluster*, where I see most of the
> operations happening at one of the worker nodes while the other two are s
Allocating all the cores alone won't serve the purpose; you'll also have to
specify the number of executors and the executor memory to match.
On Tue 27 Feb, 2018, 12:15 AM Vadim Semenov, wrote:
> All used cores aren't getting reported correctly in EMR, and YARN itself
> has no control over it, so whatever you p
<property>
  <name>yarn.scheduler.fair.preemption</name>
  <value>true</value>
</property>
<property>
  <name>yarn.scheduler.fair.preemption.cluster-utilization-threshold</name>
  <value>0.8</value>
</property>
On Sat, Feb 24, 2018 at 6:26 PM, Jörn Franke wrote:
> The FairScheduler in YARN gives you the ability to use more resources
> than configured if they are available
>
> On 24. Feb 2018, at 13:47,
>
> it sure is not able to get sufficient resources from YARN to start the
> containers.
>
That's right. It worked when I reduced the executors for Thrift, but it also
reduced Thrift's performance.
But that is not the solution I am looking for. My Sqoop import job
runs just once a day, and Thrift
Hello Vijay,
I appreciate your reply.
> What was the error when you were trying to run the MapReduce import job when
> the
> Thrift server was running?
It didn't throw any error; it just gets stuck at
INFO mapreduce.Job: Running job: job_151911053
and resumes the moment I kill Thrift.
Thanks
On Tue,
Hello,
I was trying to optimize my Spark cluster. I managed it to some extent by
making some changes in yarn-site.xml and spark-defaults.conf. Before the
changes, the MapReduce import job was running fine alongside a slow Thrift
server.
After the changes, I have to kill the Thrift server to execute my
A small hint would be very helpful.
On Wed, Feb 14, 2018 at 5:17 PM, akshay naidu
wrote:
> Hello Siva,
> Thanks for your reply.
>
> Actually, I'm trying to generate online reports for my clients. For this, I
> want the jobs to be executed faster without putt
> I would recommend the slow-running job be configured in a separate pool.
>
> Regards
> Shiv
>
> On Feb 14, 2018, at 5:44 AM, akshay naidu wrote:
>
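The "separate pool" advice above could be sketched in Spark's fairscheduler.xml roughly like this (the pool names, weights, and minShare values are illustrative assumptions, not settings from the thread):

```xml
<?xml version="1.0"?>
<allocations>
  <!-- fast, user-facing report jobs get a larger share -->
  <pool name="reports">
    <schedulingMode>FAIR</schedulingMode>
    <weight>3</weight>
    <minShare>4</minShare>
  </pool>
  <!-- slow background jobs run in their own pool -->
  <pool name="batch">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>1</minShare>
  </pool>
</allocations>
```

A job would then opt into a pool at runtime with sc.setLocalProperty("spark.scheduler.pool", "batch").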
On Tue, Feb 13, 2018 at 4:43 PM, akshay naidu
wrote:
> Hello,
> I'm trying to run multiple Spark jobs on a cluster running on YARN.
> The master is a 24GB server with 6 slaves of 12GB each.
>
> fairscheduler.xml settings are -
>
> FAIR
> 10
> 2
>
>
> I am
Hello,
I'm trying to run multiple Spark jobs on a cluster running on YARN.
The master is a 24GB server with 6 slaves of 12GB each.
fairscheduler.xml settings are:
FAIR
10
2
I am running 8 jobs simultaneously; the jobs run in parallel, but not all of
them. At a time, only 7 of them run simultaneously, whi
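As a rough sanity check on why not all 8 jobs get containers at once, here is a back-of-the-envelope sketch (the function and overhead fraction are illustrative assumptions, not part of the Spark or YARN APIs):

```python
def executors_per_node(node_mem_gb, executor_mem_gb, overhead_frac=0.10):
    """Roughly how many executors fit on one node.

    YARN sizes each container as the executor memory plus an overhead
    (by default max(384 MB, 10% of executor memory)), so a node's
    memory divides by that larger per-container figure.
    """
    container_gb = executor_mem_gb * (1 + overhead_frac)
    return int(node_mem_gb // container_gb)

# With 12 GB worker nodes and 2 GB executors, each node fits:
per_node = executors_per_node(12, 2)   # 5 executors per node
total = per_node * 6                   # 30 executors across 6 slaves
print(per_node, total)
```

With 8 drivers plus their executors all competing for that fixed pool of containers, it is plausible that one job simply cannot get a container and waits, which would match seeing only 7 run at a time.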
u an
>> option to select package type. In that, there is an option to select
>> "Pre-Built for Apache Hadoop 2.7 and later". I am assuming it means that it
>> does support Hadoop 3.0.
>>
>> http://spark.apache.org/downloads.html
>>
>> Thanks,
>
Hello users,
I need to know whether we can run the latest Spark on the latest Hadoop
version, i.e., Spark 2.2.1 (released on 1st Dec) and Hadoop 3.0.0 (released
on 13th Dec).
Thanks.