Re: how to use spark.mesos.constraints

2016-08-03 Thread Michael Gummelt
If you run your job with debug logging enabled, the Mesos scheduler backend
should log why each offer is being declined:
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L301
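One way to turn that on, assuming the stock log4j setup that ships with Spark
1.6/2.x, is to raise the log level for the Mesos scheduler package in
conf/log4j.properties (a sketch):

```properties
# conf/log4j.properties -- surface the offer-decline messages from the
# Mesos scheduler backend (package name as of Spark 1.6/2.x)
log4j.logger.org.apache.spark.scheduler.cluster.mesos=DEBUG
```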

On Tue, Jul 26, 2016 at 6:38 PM, Rodrick Brown 
wrote:

> The shuffle service has nothing to do with constraints; it is, however,
> advisable to run the mesos-shuffle-service on each of your agent nodes
> running Spark.
>
> Here is the command I use to run a typical Spark job on my cluster using
> constraints (it is generated by another script we run, but it should give
> you a clear idea):
>
> A job not being accepted by any resources can mean that what you're asking
> for is larger than the resources you have available.
>
> /usr/bin/timeout 3600 /opt/spark-1.6.1/bin/spark-submit
> --master "mesos://zk://prod-zk-1:2181,prod-zk-2:2181,prod-zk-3:2181/mesos"
> --conf spark.ui.port=40046
> --conf spark.mesos.coarse=true
> --conf spark.sql.broadcastTimeout=3600
> --conf spark.cores.max=5
> --conf spark.mesos.constraints="rack:spark"
> --conf spark.sql.tungsten.enabled=true
> --conf spark.shuffle.service.enabled=true
> --conf spark.dynamicAllocation.enabled=true
> --conf spark.mesos.executor.memoryOverhead=3211
> --class
> com.orchard.dataloader.library.originators..LoadAccountDetail_LC
> --total-executor-cores 5
> --driver-memory 5734M
> --executor-memory 8028M
> --jars /data/orchard/etc/config/load-accountdetail-accumulo-prod.jar
> /data/orchard/jars/dataloader-library-assembled.jar 1
>
> The nodes used for my Spark jobs all carry the attribute 'rack:spark'.
>
> I hope this helps!
>
>
> On Tue, Jul 26, 2016 at 7:10 PM, Jia Yu  wrote:
>
>> Hi,
>>
>> I am also trying to use spark.mesos.constraints, but it gives me the
>> same error: the job has not been accepted by any resources.
>>
>> I suspect that I need to start some additional service, such as
>> ./sbin/start-mesos-shuffle-service.sh. Am I correct?
>>
>> Thanks,
>> Jia
>>
>> On Tue, Dec 1, 2015 at 5:14 PM, rarediel 
>> wrote:
>>
>>> I am trying to add Mesos constraints to my spark-submit command; in my
>>> Marathon file I am setting spark.mesos.coarse=true.
>>>
>>> Here is an example of a constraint I am trying to set.
>>>
>>>  --conf spark.mesos.constraint=cpus:2
>>>
>>> I want to use the constraints to control the number of executors created
>>> so I can control the total memory of my Spark job.
>>>
>>> I've tried many variations of resource constraints, but no matter which
>>> resource or what number, range, etc. I try, I always get the error "Initial
>>> job has not accepted any resources; check your cluster UI...".  My cluster
>>> has the available resources.  Are there any examples I can look at where
>>> people use resource constraints?
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://apache-spark-user-list.1001560.n3.nabble.com/how-to-use-spark-mesos-constraints-tp25541.html
>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>>
>>> -
>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>> For additional commands, e-mail: user-h...@spark.apache.org
>>>
>>>
>>
>
>
> --
>
>
> *Rodrick Brown* / *DevOps*
>
> 9174456839 / rodr...@orchardplatform.com
>
> Orchard Platform
> 101 5th Avenue, 4th Floor, New York, NY
>
> *NOTICE TO RECIPIENTS*: This communication is confidential and intended
> for the use of the addressee only. If you are not an intended recipient of
> this communication, please delete it immediately and notify the sender by
> return email. Unauthorized reading, dissemination, distribution or copying
> of this communication is prohibited. This communication does not constitute
> an offer to sell or a solicitation of an indication of interest to purchase
> any loan, security or any other financial product or instrument, nor is it
> an offer to sell or a solicitation of an indication of interest to purchase
> any products or services to any persons who are prohibited from receiving
> such information under applicable law. The contents of this communication
> may not be accurate or complete and are subject to change without notice.
> As such, Orchard App, Inc. (including its subsidiaries and affiliates,
> "Orchard") makes no representation regarding the accuracy or completeness
> of the information contained herein. The intended recipient is advised to
> consult its own professional advisors, including those specializing in
> legal, tax and accounting matters. Orchard does not provide legal, tax or
> accounting advice.
>



-- 
Michael Gummelt
Software Engineer
Mesosphere


Re: how to use spark.mesos.constraints

2016-07-26 Thread Rodrick Brown
The shuffle service has nothing to do with constraints; it is, however,
advisable to run the mesos-shuffle-service on each of your agent nodes
running Spark.

Here is the command I use to run a typical Spark job on my cluster using
constraints (it is generated by another script we run, but it should give
you a clear idea):

A job not being accepted by any resources can mean that what you're asking
for is larger than the resources you have available.

/usr/bin/timeout 3600 /opt/spark-1.6.1/bin/spark-submit
--master "mesos://zk://prod-zk-1:2181,prod-zk-2:2181,prod-zk-3:2181/mesos"
--conf spark.ui.port=40046
--conf spark.mesos.coarse=true
--conf spark.sql.broadcastTimeout=3600
--conf spark.cores.max=5
--conf spark.mesos.constraints="rack:spark"
--conf spark.sql.tungsten.enabled=true
--conf spark.shuffle.service.enabled=true
--conf spark.dynamicAllocation.enabled=true
--conf spark.mesos.executor.memoryOverhead=3211
--class
com.orchard.dataloader.library.originators..LoadAccountDetail_LC
--total-executor-cores 5
--driver-memory 5734M
--executor-memory 8028M
--jars /data/orchard/etc/config/load-accountdetail-accumulo-prod.jar
/data/orchard/jars/dataloader-library-assembled.jar 1

The nodes used for my Spark jobs all carry the attribute 'rack:spark'.
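For completeness, the 'rack:spark' value has to be declared as an attribute on
the Mesos agents themselves; a sketch of how that might look when starting an
agent (flag per the Mesos agent docs; the master address and work dir here are
illustrative):

```shell
# Declare the attribute on each agent that should match the constraint;
# spark.mesos.constraints is matched against these agent attributes.
mesos-agent --master=zk://prod-zk-1:2181/mesos \
  --work_dir=/var/lib/mesos \
  --attributes="rack:spark"
```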

I hope this helps!


On Tue, Jul 26, 2016 at 7:10 PM, Jia Yu  wrote:

> Hi,
>
> I am also trying to use spark.mesos.constraints, but it gives me the
> same error: the job has not been accepted by any resources.
>
> I suspect that I need to start some additional service, such as
> ./sbin/start-mesos-shuffle-service.sh. Am I correct?
>
> Thanks,
> Jia
>
> On Tue, Dec 1, 2015 at 5:14 PM, rarediel 
> wrote:
>
>> I am trying to add Mesos constraints to my spark-submit command; in my
>> Marathon file I am setting spark.mesos.coarse=true.
>>
>> Here is an example of a constraint I am trying to set.
>>
>>  --conf spark.mesos.constraint=cpus:2
>>
>> I want to use the constraints to control the number of executors created
>> so I can control the total memory of my Spark job.
>>
>> I've tried many variations of resource constraints, but no matter which
>> resource or what number, range, etc. I try, I always get the error "Initial
>> job has not accepted any resources; check your cluster UI...".  My cluster
>> has the available resources.  Are there any examples I can look at where
>> people use resource constraints?
>>
>>
>>
>>
>>
>




Re: how to use spark.mesos.constraints

2016-07-26 Thread Jia Yu
Hi,

I am also trying to use spark.mesos.constraints, but it gives me the
same error: the job has not been accepted by any resources.

I suspect that I need to start some additional service, such as
./sbin/start-mesos-shuffle-service.sh. Am I correct?

Thanks,
Jia

On Tue, Dec 1, 2015 at 5:14 PM, rarediel 
wrote:

> I am trying to add Mesos constraints to my spark-submit command; in my
> Marathon file I am setting spark.mesos.coarse=true.
>
> Here is an example of a constraint I am trying to set.
>
>  --conf spark.mesos.constraint=cpus:2
>
> I want to use the constraints to control the number of executors created
> so I can control the total memory of my Spark job.
>
> I've tried many variations of resource constraints, but no matter which
> resource or what number, range, etc. I try, I always get the error "Initial
> job has not accepted any resources; check your cluster UI...".  My cluster
> has the available resources.  Are there any examples I can look at where
> people use resource constraints?
>
>
>
>
>


how to use spark.mesos.constraints

2015-12-01 Thread rarediel
I am trying to add Mesos constraints to my spark-submit command; in my
Marathon file I am setting spark.mesos.coarse=true.

Here is an example of a constraint I am trying to set.

 --conf spark.mesos.constraint=cpus:2 

I want to use the constraints to control the number of executors created
so I can control the total memory of my Spark job.

I've tried many variations of resource constraints, but no matter which
resource or what number, range, etc. I try, I always get the error "Initial
job has not accepted any resources; check your cluster UI...".  My cluster
has the available resources.  Are there any examples I can look at where
people use resource constraints?
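One detail worth noting against the example above: the property is
spark.mesos.constraints (plural), and it matches agent attributes rather than
resources such as cpus, which would explain offers never matching. Executor
sizing is governed by resource settings instead; a rough sketch (all host
names and values illustrative):

```shell
# Sketch: bound executor resources on coarse-grained Mesos via resource
# settings; constraints only filter which agents are eligible.
spark-submit \
  --master mesos://zk://zk-host:2181/mesos \
  --conf spark.mesos.coarse=true \
  --conf spark.cores.max=4 \
  --conf spark.executor.memory=4g \
  --conf spark.mesos.constraints="rack:spark" \
  my-app.jar
```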


