rchase
> any loan, security or any other financial product or instrument, nor is it
> an offer to sell or a solicitation of an indication of interest to purchase
> any products or services to any persons who are prohibited from receiving
> such information under applicable law. The contents of this communication
> may not be accurate or complete and are subject to change without notice.
> As such, Orchard App, Inc. (including its subsidiaries and affiliates,
> "Orchard") makes no representation regarding the accuracy or completeness
> of the information contained herein. The intended recipient is advised to
> consult its own professional advisors, including those specializing in
> legal, tax and accounting matters. Orchard does not provide legal, tax or
> accounting advice.
>
--
Michael Gummelt
Software Engineer
Mesosphere
security, data locality, queues, etc. (or I might be simply biased
> after having spent months with Spark on YARN mostly?).
>
> Jacek
>
> ---------
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>
> scheduler across applications.
> >>
> >> Is this sentence still true? Any progress on this? It would be really
> >> helpful. Is there a roadmap?
> >>
> >> Thanks
> >>
> >> Teng
> >>
> >
>
>
k
> '20160808-170425-2365980426-5050-4372-0034'
>
> However, the process doesn’t quit after all. This is critical, because I’d
> like to use SparkLauncher to submit such jobs. If my job doesn’t end, jobs
> will pile up and fill up the memory. Pls help. :-|
>
> —
> BR,
have spark installed in the docker
> container).
>
> Can someone tell me what I'm missing?
>
> Thanks
> Jim
>
>
>
>
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-on-mesos-in-docker-not-getting-parameters-tp27500.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
>
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/mesos-or-kubernetes-tp27530.html
>
>>>>> W0816 23:17:01.985508 16360 sched.cpp:1195] Attempting to accept an
>>>>> unknown offer b859f2f3-7484-482d-8c0d-35bd91c1ad0a-O162910496
>>>>>
>>>>> W0816 23:17:01.985651 16360 sched.cpp:1195] Attempting to accept an
>>>>> unknown offer b859f2f3-7484-482d-8c0d-35bd91c1ad0a-O162910497
>>>>>
>>>>> W0816 23:17:01.985801 16360 sched.cpp:1195] Attempting to accept an
>>>>> unknown offer b859f2f3-7484-482d-8c0d-35bd91c1ad0a-O162910498
>>>>>
>>>>> W0816 23:17:01.985961 16360 sched.cpp:1195] Attempting to accept an
>>>>> unknown offer b859f2f3-7484-482d-8c0d-35bd91c1ad0a-O162910499
>>>>>
>>>>> W0816 23:17:01.986121 16360 sched.cpp:1195] Attempting to accept an
>>>>> unknown offer b859f2f3-7484-482d-8c0d-35bd91c1ad0a-O162910500
>>>>>
>>>>> 2016-08-16 23:18:41,877:16226(0x7f71271b6700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 13ms
>>>>>
>>>>> 2016-08-16 23:21:12,007:16226(0x7f71271b6700):ZOO_WARN@zookeeper_interest@1557: Exceeded deadline by 11ms
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
not gone and installed Yarn without installing Hadoop.
>>
>> What is the overriding reason to have the Spark on its own?
>>
>> You can use Spark in Local or Standalone mode if you do not want Hadoop
>> core.
>>
>> HTH
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>> On 24 August 2016 at 21:54, kant kodali wrote:
>>
>> What do I lose if I run Spark without using HDFS or Zookeeper? Which of
>> them is almost a must in practice?
>>
>>
>>
>>
>>
>>
>>
>>
ache spark 2.0"
> RUN git clone git://github.com/apache/spark.git
> WORKDIR /spark
> RUN ./build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests
> clean package
>
>
> Could anyone assist pls?
>
> kindest regards
> Marco
>
>
the
>> initiative to provide alternatives to ZK. I am really looking forward
>> to this.
>>
>> https://issues.apache.org/jira/browse/MESOS-3797
>>
>>
>>
>> On Thu, Aug 25, 2016 2:00 PM, Michael Gummelt mgumm...@mesosphere.io
>> wrote:
>
Cheers and many thanks
>
>
>
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/zookeeper-mesos-logging-in-spark-tp27607.html
>
:)
On Thu, Aug 25, 2016 at 2:29 PM, Marco Mistroni wrote:
> No i wont accept that :)
> I can't believe i have wasted 3 hrs for a space!
>
> Many thanks MIchael!
>
> kr
>
> On Thu, Aug 25, 2016 at 10:01 PM, Michael Gummelt
> wrote:
>
>> You have a space
build with the
>> command
>> [ERROR] mvn -rf :spark-mllib_2.11
>> The command '/bin/sh -c ./build/mvn -Pyarn -Phadoop-2.4
>> -Dhadoop.version=2.4.0 -DskipTests clean package' returned a non-zero code:
>> 1
>>
>> what am i forgetting?
>> on
But how is it possible to launch it before starting the
> application, if the given Spark is downloaded to the Mesos executor
> after the executor launches, yet it looks for a running external shuffle
> service in advance?
>
> Is it possible I can't use spark.executor.uri
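For what it's worth, the shuffle service is indeed expected to be launched out-of-band on the agents, independently of the tarball that spark.executor.uri points at. A minimal sketch, assuming a local Spark install on each agent (the /opt/spark path is illustrative):

```shell
# Run on every Mesos agent before submitting the application.
# Assumes a local Spark install at /opt/spark (illustrative path),
# separate from the tarball fetched via spark.executor.uri.
export SPARK_HOME=/opt/spark
"$SPARK_HOME"/sbin/start-mesos-shuffle-service.sh
```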
Error in sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap, :
>
> JVM is not ready after 10 seconds
>
>
>
>
>
> I couldn’t find any information on this subject in the docs – am I missing
> something?
>
>
>
> Thanks for any hints,
>
> Peter
>
"id")
> output.coalesce(1000).write.format("com.databricks.spark.csv").save('/tmp/...')
>
> Cheers for any help/pointers! There are a couple of memory leak tickets
> fixed in v1.6.2 that may affect the driver so I may try an upgrade (the
> executors are fine).
>
> Adrian
>
>
e.url=http://some-url
>
> I tried the option specified in http://stackoverflow.com/questions/35872093/missing-java-system-properties-when-running-spark-streaming-on-mesos-cluster?noredirect=1&lq=1
>
> and still got no change in the result.
>
> Any idea how to achieve this?
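One approach that is sometimes suggested (a hedged sketch; the service.url property, dispatcher host, and jar name are illustrative placeholders) is to route system properties through the extraJavaOptions settings rather than the bare spark-submit command line, since plain -D flags are not forwarded to the driver JVM in Mesos cluster mode:

```shell
# Sketch: forward -D system properties to the driver and executor JVMs.
spark-submit \
  --master mesos://dispatcher:7077 \
  --deploy-mode cluster \
  --conf "spark.driver.extraJavaOptions=-Dservice.url=http://some-url" \
  --conf "spark.executor.extraJavaOptions=-Dservice.url=http://some-url" \
  app.jar
```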
rs...)
>
> How can this be accomplished?
> thanks in advance,
> Richard
>
>
nning on Executors. That is something that Spark
> doesn't try to do by default, and changing that behavior has been an open
> issue for a long time -- cf. SPARK-17064
>
> On Wed, Oct 5, 2016 at 2:07 PM, Michael Gummelt
> wrote:
>
>> If running in client mode, just kill
head into consideration for resources allocation.
> Example: spark.executor.memory=3g spark.memory.offHeap.size=1g ==> Mesos
> reports 3.4g allocated for the executor.
> Is there any configuration to use both heap and offheap for mesos
> allocation ?
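As far as I know there is no single switch for this; one workaround sketch (values illustrative) is to fold the off-heap size into the Mesos memory overhead so the container is sized for both pools:

```shell
# Sketch: Mesos sizes the container as spark.executor.memory plus
# spark.mesos.executor.memoryOverhead (default: max(384 MiB, 10% of
# executor memory)). Folding the off-heap size into the overhead keeps
# the container allocation honest.
spark-submit \
  --conf spark.executor.memory=3g \
  --conf spark.memory.offHeap.enabled=true \
  --conf spark.memory.offHeap.size=1g \
  --conf spark.mesos.executor.memoryOverhead=1024 \
  ...
```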
>
>
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/No-way-to-set-mesos-cluster-driver-memory-overhead-tp27897.html
>
99.0.19:7077 \
>> --deploy-mode cluster \
>> --supervise \
>> --executor-memory 5G \
>> --driver-memory 2G \
>> --total-executor-cores 4 \
>> --jars /build/analytics/kafkajobs/spark-streaming-kafka_2.10-1.6.2.jar \
>> /build/analytics/kafkajobs/kafkajobs-prod.jar
>>
>> It threw me an error: Exception in thread "main" java.sql.SQLException:
>> No suitable driver found for jdbc:postgresql://psqlhost:5432/kafkajobs
>> which means my --conf didn't work and the config I put in
>> /build/analytics/kafkajobs/prod.conf
>> wasn't loaded. It only loaded things I put in application.conf (the
>> default config).
>>
>> How to make MCD load my config?
>>
>> Regards,
>> Chanh
>>
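Assuming the app reads Typesafe Config (the mention of application.conf suggests so), one hedged sketch is to ship the file into the driver's sandbox with --files and point config.file at it; the file and jar paths are taken from the message above:

```shell
# Sketch: in Mesos cluster mode the driver runs in its own sandbox, so
# the config file must be shipped there; -Dconfig.file then resolves
# against the sandbox working directory.
spark-submit \
  --deploy-mode cluster \
  --files /build/analytics/kafkajobs/prod.conf \
  --conf "spark.driver.extraJavaOptions=-Dconfig.file=prod.conf" \
  /build/analytics/kafkajobs/kafkajobs-prod.jar
```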
>> --
> Daniel Carroza Santana
> Vía de las Dos Castillas, 33, Ática 4, 3ª Planta.
> 28224 Pozuelo de Alarcón. Madrid.
> Tel: +34 91 828 64 73 // *@stratiobd <https://twitter.com/StratioBD>*
>
>
>
submitted a long
> running job succeeded.
>
> Then I want to kill the job.
> How could I do that? Are there any similar commands as when launching
> Spark on YARN?
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
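For cluster mode through the Mesos dispatcher, spark-submit does accept --kill and --status with the submission ID printed at submit time; a sketch (the dispatcher host and the ID are placeholders):

```shell
# Kill a driver submitted through the Mesos cluster dispatcher.
spark-submit --master mesos://dispatcher:7077 --kill driver-20160808170425-0034
# Check its state afterwards.
spark-submit --master mesos://dispatcher:7077 --status driver-20160808170425-0034
```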
named
> Mesos frameworks and that there are plenty of CPU core and memory resources
> on our cluster.
>
> I am using Spark 2.0.1 on Mesos 0.28.1. Any ideas that y'all may have
> would be very much appreciated.
>
> Thanks! :)
>
> --John
>
>
should I check?
>
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
n Fine grained? How can these CPUs be released when the job is done, so
> that other jobs can start.
>
>
> Regards
> Sumit Chawla
>
>
er when there is demand. This feature is
> particularly useful if multiple applications share resources in your Spark
> cluster.
>
> - Original Message -
> From: "Sumit Chawla"
> To: "Michael Gummelt"
> Cc: u...@mesos.apache.org, "Dev" , "U
s.
>>> > When the program starts running, in the Mesos UI it shows 48 tasks and
>>> > 48 CPUs allocated to the job. Now as the tasks get done, the number of
>>> > active tasks starts decreasing. However, the number of CPUs does not
>>> > decrease proportionally. When the job was about to finish, there was a
>>> > single remaining task, yet the CPU count was still 20.
>>> >
>>> > My question is why there is no one-to-one mapping between tasks and
>>> > CPUs in fine-grained mode? How can these CPUs be released when the job
>>> > is done, so that other jobs can start?
>>> >
>>> >
>>> > Regards
>>> > Sumit Chawla
>>>
>>
>>
>
>
>
> Regards
> Sumit Chawla
>
>
> On Mon, Dec 19, 2016 at 12:45 PM, Michael Gummelt
> wrote:
>
>> > I should presume that the number of executors should be less than the
>> > number of tasks.
>>
>> No. Each executor runs 0 or more tasks.
>>
> >> On Mon, Dec 19, 2016 at 2:45 PM, Mehdi Meziane
> >> wrote:
> >> > We would be interested in the results if you give dynamic allocation
> >> > with Mesos a try!
> >> >
> >> >
> >> > - Original Message -
> >
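For reference, the Mesos dynamic allocation wiring discussed above boils down to a few properties (a sketch under the assumption of coarse-grained mode, with the external shuffle service already running on every agent; the executor bounds are illustrative):

```shell
# Sketch: enable dynamic allocation on Mesos; requires the external
# shuffle service to be running on each agent beforehand.
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  ...
```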
It seems that CPU usage is
> just a "label" for an executor on Mesos. Where's this in the code?
>
> Pozdrawiam,
> Jacek Laskowski
>
> https://medium.com/@jaceklaskowski/
> Mastering Apache Spark 2.0 https://bit.ly/mastering-apache-spark
> Follow me at
> Ji
>
>
> The information in this email is confidential and may be legally
> privileged. It is intended solely for the addressee. Access to this email
> by anyone else is unauthorized. If you are not the intended recipient, any
> disclosure, copying, distribution or any action take
>> I really don't understand why this is happening since the same
>> configuration but using a Spark 2.0.0 is running fine within Vagrant.
>> Could someone please help?
>>
>> thanks in advance,
>> Richard
>>
>>
>>
>>
>>
>>
>>
>> --
>> *Abhishek J Bhandari*
>> Mobile No. +1 510 493 6205 (USA)
>> Mobile No. +91 96387 93021 (IND)
>> *R & D Department*
>> *Valent Software Inc. CA*
>> Email: *abhis...@valent-software.com *
>>
>
>
> *Olivier Girardot* | Associé
> o.girar...@lateral-thoughts.com
> +33 6 24 09 17 94
>
packaged in the
> final dist of my app…
> So everything should work in theory.
>
>
>
> On Tue, Jan 10, 2017 7:22 PM, Michael Gummelt mgumm...@mesosphere.io
> wrote:
>
>> Just build with -Pmesos: http://spark.apache.org/docs/latest/building-spark.html#building
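For the record, since Spark 2.1 the Mesos bindings live behind an optional Maven profile; a build sketch (the Hadoop profile chosen here is an assumption):

```shell
# Build Spark with Mesos support enabled.
./build/mvn -Pmesos -Phadoop-2.7 -DskipTests clean package
```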
erence to libmesos-1.0.0.so.
>
> So just for the record, setting the env variable
> MESOS_NATIVE_JAVA_LIBRARY="//libmesos-1.0.0.so" fixed the whole thing.
>
> Thanks for the help !
>
> @michael if you want to talk about the setup we're using, we can talk
> framework like HDFS or Cassandra ?
>
> V
>
> wrote:
> I have found this but I am not sure how it can help...
> https://github.com/mesosphere/spark-build/blob/a9efef8850976f787956660262f3b77cd636f3f5/conf/spark-env.sh
>
>
> 2017-01-12 20:16 GMT+01:00 Michael Gummelt :
>
>> That's a good point. I hadn
am not sure each worker will connect to C* nodes on the same Mesos
> agent?
>
> 2017-01-12 21:13 GMT+01:00 Michael Gummelt :
>
>> The code in there w/ docs that reference CNI doesn't actually run when
>> CNI is in effect, and doesn't have anything to do with lo
>
h the current
>> implementation?
>>
>> Thanks
>> Ji
>>
>>
>
>
utilization on the cluster if, when another job starts up
> that has a hard requirement on resources, the extra resources given to the
> first job can be flexibly re-allocated to the second job.
>
> On Sat, Jan 28, 2017 at 2:32 PM, Michael Gummelt
> wrote:
>
>> We've talked about t
ccepted any resources; check your cluster UI to ensure that workers are
> registered and have sufficient resources"
>
> Also it is confusing to me that --cpu_cores specifies the number of cpu
> cores across all executors, but --memory specifies per executor memory
> requirement.
r memory,
> and --total-executor-cores for cpu cores
>
> On Thu, Feb 2, 2017 at 12:56 PM, Michael Gummelt
> wrote:
>
>> What CLI args are you referring to? I'm aware of spark-submit's
>> arguments (--executor-memory, --total-executor-cores, and --executor-cor
t the configuration is the same.
On Thu, Feb 2, 2017 at 1:06 PM, Ji Yan wrote:
> I was mainly confused why this is the case with memory, but with CPU
> cores it is not specified at the per-executor level
>
> On Thu, Feb 2, 2017 at 1:02 PM, Michael Gummelt
> wrote:
>
>> It sou
s being overridden
>
> On Thu, Feb 2, 2017 at 1:30 PM, Michael Gummelt
> wrote:
>
>> As of Spark 2.0, Mesos mode does support setting cores on the executor
>> level, but you might need to set the property directly (--conf
>> spark.executor.cores=). I've written ab
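A sketch of the distinction (numbers illustrative): spark.executor.cores is per executor, while --total-executor-cores (spark.cores.max) caps the whole application, so the settings below yield up to four 2-core executors:

```shell
# 8 total cores, 2 per executor => at most 4 executors.
spark-submit \
  --conf spark.executor.cores=2 \
  --total-executor-cores 8 \
  --executor-memory 4g \
  ...
```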
one Spark application through spark submit. However I
>> want this application to run on only a subset of these machines,
>> disregarding data locality. (e.g. 10 machines)
>>
>> Is this possible?. Is there any option in the standalone scheduler, YARN
>> or Mesos that allows such thing?.
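On Mesos specifically, one way to pin an application to a subset of machines is offer constraints on agent attributes; a hedged sketch (the rack:analytics attribute is invented, and the chosen agents must be started with a matching --attributes flag):

```shell
# Only accept offers from agents whose attributes match the constraint.
spark-submit \
  --conf spark.mesos.constraints="rack:analytics" \
  ...
```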
>>
>>
>>
mean jobs inside a Spark application or jobs among
> different applications? Maybe you can read http://spark.apache.org/docs/latest/job-scheduling.html for help.
>
> On Jan 31, 2017, at 03:34, Michael Gummelt wrote:
>
>
>
> On Mon, Jan 30, 2017 at 9:47 AM, Ji Yan wrote:
PM, Sun Rui wrote:
> Michael,
> No. We directly launch the external shuffle service by specifying a larger
> heap size than default on each worker node. It is observed that the
> processes are quite stable.
>
> On Feb 9, 2017, at 05:21, Michael Gummelt wrote:
>
> Sun, a
111)
> at java.lang.Thread.run(Thread.java:745)
>
> I was trying to follow instructions here:
> https://github.com/apache/spark/pull/15120
> So in my Marathon json I'm defining the ports to use for the spark driver,
> spark ui and block manager.
>
> Can anyone help me get this running in bridge networking mode?
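A sketch of the driver-side settings that PR 15120 introduces for this scenario (port numbers are illustrative and must match the host ports mapped in the Marathon json): bind inside the container, advertise the host's address, and pin the ports.

```shell
spark-submit \
  --conf spark.driver.bindAddress=0.0.0.0 \
  --conf spark.driver.host=<host-ip> \
  --conf spark.driver.port=31000 \
  --conf spark.driver.blockManager.port=31001 \
  --conf spark.ui.port=31002 \
  ...
```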
>
>
>
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-on-Mesos-with-Docker-in-bridge-networking-mode-tp28397.html
>
>
fewer workers than the possible
> maximum) and the maximum threshold in the Spark configuration is not
> reached, and the queue has a lot of pending tasks.
>
> Maybe I have a wrong Spark or Mesos configuration? Does anyone have the
> same problems?
>
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Not-able-pass-3rd-party-jars-to-mesos-executors-tp26918p28689.html
>
>
val connection = Utils.getHbaseConnection(propsObj)._1
>
> val table = …
>
> partition.foreach { json =>
>
>
>
> }
>
> table.put(puts)
>
> table.close()
>
> connection.close()
>
> }
>
> }
>
> }
>
>
>
>
>
> The keytab file is not getting copied to the YARN staging/temp directory;
> we are not getting it via SparkFiles.get… and if we pass the keytab with
> --files, spark-submit fails because it's already there via --keytab.
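One workaround sometimes used (a hedged sketch; file and principal names are assumed): since --keytab already ships the file for the YARN login and spark-submit rejects the same path in --files, ship a copy under a different name for the executors to read:

```shell
# Ship the keytab a second time under a different name so executors can
# pick it up via SparkFiles without clashing with --keytab.
cp user.keytab user-for-executors.keytab
spark-submit \
  --principal user@EXAMPLE.COM \
  --keytab user.keytab \
  --files user-for-executors.keytab \
  app.jar
```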
>
>
>
> Thanks,
>
> Sudhir
>
>
> Is it possible to tweak some configuration so that my job submission fails
> gracefully (instead of queuing up) if sufficient resources are not found on
> Mesos cluster?
>
> Regards,
>
> Vatsal
>
Is there any configurable timeout which controls queuing of the driver in
> Mesos cluster mode, or will the driver remain queued indefinitely until
> it finds resources on the cluster?
>
>
>
> *From:* Michael Gummelt [mailto:mgumm...@mesosphere.io]
> *Sent:* Friday, May 26, 2017