The `py-spark` framework looks to be driver-based, i.e., it uses the 
`MesosSchedulerDriver` underneath. You would need to use the `/teardown` 
endpoint, which takes the `frameworkId` as a query parameter, to tear it 
down. For more details, see: 
http://mesos.apache.org/documentation/latest/endpoints/master/teardown/
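Since the driver registers over the old wire protocol, the teardown request goes straight to the master's `/teardown` endpoint as form-encoded POST data. A minimal sketch of building that request in Python; the master address is the placeholder host from this thread and the framework ID is the one reported below, so substitute your own values:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Placeholder master address from the thread; use your master's host:port.
MASTER = "http://cluster:5050"
FRAMEWORK_ID = "0c540ad0-a050-4c20-82df-7bd14ce95f51-0090"

# /master/teardown expects the frameworkId as form-encoded POST data.
body = urlencode({"frameworkId": FRAMEWORK_ID}).encode()
req = Request(f"{MASTER}/master/teardown", data=body, method="POST")

# urllib.request.urlopen(req) would actually send it; it is omitted here
# so the sketch stays self-contained and runnable without a live master.
print(req.full_url)       # http://cluster:5050/master/teardown
print(req.get_method())   # POST
print(req.data.decode())  # frameworkId=0c540ad0-a050-4c20-82df-7bd14ce95f51-0090
```

The equivalent one-liner is `curl -X POST -d 'frameworkId=<id>' http://<master>:5050/master/teardown`.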

The `TEARDOWN` call to the `/api/v1/scheduler` endpoint only works if your 
framework is using the new Scheduler HTTP API 
(http://mesos.apache.org/documentation/latest/scheduler-http-api/). Hope this 
helps.

-anand

> On Apr 15, 2016, at 12:56 PM, June Taylor <[email protected]> wrote:
> 
> We're getting the highlighted error message returned when attempting to tear 
> down a framework on our cluster:
> 
> june@cluster:~$ mesos frameworks
>                     ID                        NAME          HOST    ACTIVE  TASKS  CPU    MEM       DISK
> 0c540ad0-a050-4c20-82df-7bd14ce95f51-0090  pyspark-shell  cluster  True    4      115.0  450560.0  0.0
> 
> 
> june@cluster:~$ curl -XPOST http://cluster:5050/api/v1/scheduler -d '{ "framework_id": { "value": 
> "0c540ad0-a050-4c20-82df-7bd14ce95f51-0090" }, "type": "TEARDOWN"}' -H 
> Content-Type:application/json
> Framework is not connected via HTTP
> 
> We cannot get this framework to shut down. I'm not sure why we're getting 
> this type of error message, as the same POST command has worked against other 
> framework IDs in the past.
> 
> Your thoughts are much appreciated.
> 
> Thanks,
> June Taylor
> System Administrator, Minnesota Population Center
> University of Minnesota