Re: Persistent volumes

2017-11-29 Thread Benjamin Mahler
+jpeach

The polling mechanism is used by the "disk/du" isolator to handle the case
where we don't have filesystem support for enforcing a quota on a
per-directory basis. I believe the "disk/xfs" isolator will stop writes
with EDQUOT without killing the task:

http://mesos.apache.org/documentation/latest/isolators/disk-xfs/
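
For reference, the two isolators are enabled through agent flags, roughly as
below (a sketch only; the XFS project-ID range is an example value, so check
the agent documentation for your Mesos version):

    # "disk/du" polls du(1); with enforcement on, a task that exceeds its
    # disk allocation is killed:
    mesos-agent --isolation="disk/du" --enforce_container_disk_quota ...

    # "disk/xfs" uses XFS project quotas; writes beyond the quota fail
    # with EDQUOT while the task keeps running:
    mesos-agent --isolation="disk/xfs" --xfs_project_range="[5000-10000]" ...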

On Tue, Nov 28, 2017 at 1:19 PM, Gabriel Hartmann <gabr...@mesosphere.io>
wrote:

> I agree with pretty much everything Hendrik just said with the exception
> of the use of disk quota.  The polling mechanism employed for enforcing
> disk usage means that any breach of the disk usage limit by a Task also
> implies loss of access to that data forever.  This is true for ROOT volumes
> at least.  MOUNT volumes can be configured to map to "real" devices which
> can provide normal write failures when exceeding disk limits instead of
> essentially revoking all access to data forever.
>
> On Mon, Nov 27, 2017 at 11:34 PM Hendrik Haddorp <hendrik.hadd...@gmx.net>
> wrote:
>
>> As said, I only use persistent volumes with my own scheduler straight
>> on Mesos, so I do not know exactly how this works in Marathon...
>>
>> The persistent volume is created on a Mesos agent and basically ends up
>> being a folder on that host's disk. So yes, you cannot use the volume on
>> a different agent/slave. For Marathon you would need to set a hostname
>> constraint that makes sure the same host is used when restarting the
>> task. You won't be able to fail over to different agents; just have
>> Marathon restart your task once it fails. Also, only one task at a time
>> can have the volume bound.
>>
>> Yes, you can achieve persistence in pretty much the same way by using a
>> hostpath, but then you are using implicit knowledge about your
>> environment, which is not very clean in my opinion, and thus have a
>> tighter coupling. The nice thing about persistent volumes is that they
>> are managed by Mesos. I do not need to tell the Mesos admin that I need
>> space at some location. I do not need to do anything special if I have
>> multiple instances running, as they all get their own directory. And I
>> can programmatically destroy the volume, and the directory on the
>> host then gets deleted again (at least since Mesos 1.0). So in my opinion the
>> usage of persistent volumes is much cleaner. But there are certainly use
>> cases that do not really work with them, like being able to fail over to a
>> different host. For that you would either need a shared network mount or
>> storage like HDFS. Btw, the Mesos containerizer should also enforce disk
>> quotas, so your task would not be able to fill the filesystem.
>>
>> On 27.11.2017 16:11, Dino Lokmic wrote:
>> > yes I did. So I don't have to prepare it before the task? I can't use a
>> > volume created on slave A from slave B.
>> >
>> > Once a task fails, where will it be restarted? Do I have to specify the host?
>> >
>> > If I do, it means I can achieve "persistence" the same way I deploy now,
>> > by specifying a hostpath for the volume and a hostname:
>> >
>> > 
>> >   "constraints": [
>> > [
>> >   "hostname",
>> >   "CLUSTER",
>> >   "MYHOSTNAME"
>> > ]
>> >   ],
>> >   "container": {
>> > "type": "DOCKER",
>> > "volumes": [
>> >   {
>> > "containerPath": "/opt/storm/storm-local",
>> > "hostPath": "/opt/docker_data/storm/storm-local",
>> > "mode": "RW"
>> >   },
>> >   {
>> > "containerPath": "/opt/storm/logs",
>> > "hostPath": "/opt/docker_logs/storm/logs",
>> > "mode": "RW"
>> >   },
>> >   {
>> > "containerPath": "/home/xx/runtime/storm",
>> > "hostPath": "/home/xx/runtime/storm",
>> > "mode": "RO"
>> >   }
>> > ],
>> > "docker": {
>> >   "image": "xxx/storm-1.1.0",
>> >   "network": "HOST",
>> >   "portMappings": [],
>> >   "privileged": false,
>> >   "parameters": [],
>> >   "forcePullImage": true
>> > }
>> >   },
>> >
>> > 

Re: Persistent volumes

2017-11-28 Thread Gabriel Hartmann
I agree with pretty much everything Hendrik just said with the exception of
the use of disk quota.  The polling mechanism employed for enforcing disk
usage means that any breach of the disk usage limit by a Task also
implies loss of access to that data forever.  This is true for ROOT volumes
at least.  MOUNT volumes can be configured to map to "real" devices which
can provide normal write failures when exceeding disk limits instead of
essentially revoking all access to data forever.
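
For context, a MOUNT volume is advertised by starting the agent with a disk
resource that has a MOUNT source, along these lines (a sketch; the mount
root and size are examples) passed as the value of the agent's --resources
flag:

    [
      {
        "name": "disk",
        "type": "SCALAR",
        "scalar": { "value": 102400 },
        "disk": {
          "source": {
            "type": "MOUNT",
            "mount": { "root": "/mnt/data1" }
          }
        }
      }
    ]

Such a disk is offered as a whole and maps to a dedicated mount point, which
is what makes the ordinary out-of-space write failures possible.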

On Mon, Nov 27, 2017 at 11:34 PM Hendrik Haddorp <hendrik.hadd...@gmx.net>
wrote:

> As said, I only use persistent volumes with my own scheduler straight
> on Mesos, so I do not know exactly how this works in Marathon...
>
> The persistent volume is created on a Mesos agent and basically ends up
> being a folder on that host's disk. So yes, you cannot use the volume on
> a different agent/slave. For Marathon you would need to set a hostname
> constraint that makes sure the same host is used when restarting the
> task. You won't be able to fail over to different agents; just have
> Marathon restart your task once it fails. Also, only one task at a time
> can have the volume bound.
>
> Yes, you can achieve persistence in pretty much the same way by using a
> hostpath, but then you are using implicit knowledge about your
> environment, which is not very clean in my opinion, and thus have a
> tighter coupling. The nice thing about persistent volumes is that they
> are managed by Mesos. I do not need to tell the Mesos admin that I need
> space at some location. I do not need to do anything special if I have
> multiple instances running, as they all get their own directory. And I
> can programmatically destroy the volume, and the directory on the
> host then gets deleted again (at least since Mesos 1.0). So in my opinion the
> usage of persistent volumes is much cleaner. But there are certainly use
> cases that do not really work with them, like being able to fail over to a
> different host. For that you would either need a shared network mount or
> storage like HDFS. Btw, the Mesos containerizer should also enforce disk
> quotas, so your task would not be able to fill the filesystem.
>
> On 27.11.2017 16:11, Dino Lokmic wrote:
> > yes I did. So I don't have to prepare it before the task? I can't use a
> > volume created on slave A from slave B.
> >
> > Once a task fails, where will it be restarted? Do I have to specify the host?
> >
> > If I do, it means I can achieve "persistence" the same way I deploy now,
> > by specifying a hostpath for the volume and a hostname:
> >
> > 
> >   "constraints": [
> > [
> >   "hostname",
> >   "CLUSTER",
> >   "MYHOSTNAME"
> > ]
> >   ],
> >   "container": {
> > "type": "DOCKER",
> > "volumes": [
> >   {
> > "containerPath": "/opt/storm/storm-local",
> > "hostPath": "/opt/docker_data/storm/storm-local",
> > "mode": "RW"
> >   },
> >   {
> > "containerPath": "/opt/storm/logs",
> > "hostPath": "/opt/docker_logs/storm/logs",
> > "mode": "RW"
> >   },
> >   {
> > "containerPath": "/home/xx/runtime/storm",
> > "hostPath": "/home/xx/runtime/storm",
> > "mode": "RO"
> >   }
> > ],
> > "docker": {
> >   "image": "xxx/storm-1.1.0",
> >   "network": "HOST",
> >   "portMappings": [],
> >   "privileged": false,
> >   "parameters": [],
> >   "forcePullImage": true
> > }
> >   },
> >
> > 
> >
> >
> >
> > On Mon, Nov 27, 2017 at 3:05 PM, Hendrik Haddorp
> > <hendrik.hadd...@gmx.net> wrote:
> >
> > I have my own scheduler that is performing a create operation. As
> > you are using Marathon this call would have to be done by Marathon.
> > Did you read
> > https://mesosphere.github.io/marathon/docs/persistent-volumes.html ?
> >
> > On 27.11.2017 14:59, Dino Lokmic wrote:
> >
> > @hendrik
> >
> > How did you create this
> > "my-volume-227927c2-3266-412b-8572-92c5c93c

Re: Persistent volumes

2017-11-27 Thread Hendrik Haddorp
As said, I only use persistent volumes with my own scheduler straight
on Mesos, so I do not know exactly how this works in Marathon...


The persistent volume is created on a Mesos agent and basically ends up
being a folder on that host's disk. So yes, you cannot use the volume on
a different agent/slave. For Marathon you would need to set a hostname
constraint that makes sure the same host is used when restarting the
task. You won't be able to fail over to different agents; just have
Marathon restart your task once it fails. Also, only one task at a time
can have the volume bound.


Yes, you can achieve persistence in pretty much the same way by using a
hostpath, but then you are using implicit knowledge about your
environment, which is not very clean in my opinion, and thus have a
tighter coupling. The nice thing about persistent volumes is that they
are managed by Mesos. I do not need to tell the Mesos admin that I need
space at some location. I do not need to do anything special if I have
multiple instances running, as they all get their own directory. And I
can programmatically destroy the volume, and the directory on the
host then gets deleted again (at least since Mesos 1.0). So in my opinion the
usage of persistent volumes is much cleaner. But there are certainly use
cases that do not really work with them, like being able to fail over to a
different host. For that you would either need a shared network mount or
storage like HDFS. Btw, the Mesos containerizer should also enforce disk
quotas, so your task would not be able to fill the filesystem.
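
For the Marathon case, a local persistent volume is declared on the app
itself instead of via a hostPath. A minimal sketch following the Marathon
persistent-volumes docs (the app id, image, and size are example values):

    {
      "id": "/storm",
      "container": {
        "type": "DOCKER",
        "docker": { "image": "xxx/storm-1.1.0" },
        "volumes": [
          {
            "containerPath": "storm-local",
            "mode": "RW",
            "persistent": { "size": 1024 }
          }
        ]
      },
      "residency": { "taskLostBehavior": "WAIT_FOREVER" },
      "upgradeStrategy": {
        "minimumHealthCapacity": 0.5,
        "maximumOverCapacity": 0
      }
    }

With this, Marathon itself pins the restarted task to the agent holding the
volume, so no explicit hostname constraint is needed.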


On 27.11.2017 16:11, Dino Lokmic wrote:
yes I did. So I don't have to prepare it before the task? I can't use a
volume created on slave A from slave B.


Once a task fails, where will it be restarted? Do I have to specify the host?

If I do, it means I can achieve "persistence" the same way I deploy now,
by specifying a hostpath for the volume and a hostname:



  "constraints": [
    [
      "hostname",
      "CLUSTER",
      "MYHOSTNAME"
    ]
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/opt/storm/storm-local",
        "hostPath": "/opt/docker_data/storm/storm-local",
        "mode": "RW"
      },
      {
        "containerPath": "/opt/storm/logs",
        "hostPath": "/opt/docker_logs/storm/logs",
        "mode": "RW"
      },
      {
        "containerPath": "/home/xx/runtime/storm",
        "hostPath": "/home/xx/runtime/storm",
        "mode": "RO"
      }
    ],
    "docker": {
      "image": "xxx/storm-1.1.0",
      "network": "HOST",
      "portMappings": [],
      "privileged": false,
      "parameters": [],
      "forcePullImage": true
    }
  },





On Mon, Nov 27, 2017 at 3:05 PM, Hendrik Haddorp 
<hendrik.hadd...@gmx.net> wrote:


I have my own scheduler that is performing a create operation. As
you are using Marathon this call would have to be done by Marathon.
Did you read
https://mesosphere.github.io/marathon/docs/persistent-volumes.html ?

On 27.11.2017 14:59, Dino Lokmic wrote:

    @hendrik

How did you create this
"my-volume-227927c2-3266-412b-8572-92c5c93c051a" volume?

On Mon, Nov 27, 2017 at 7:59 AM, Hendrik Haddorp
<hendrik.hadd...@gmx.net> wrote:

    Hi,

    I'm using persistent volumes directly on Mesos, without Marathon.
    For that the scheduler (like Marathon) has to first reserve disk
    space and then create a persistent volume with that. The next
    resource offer message then contains the volume in the "disk"
    resource part of the offer. Now you can start your task. In the
    request you would need to include the resources, and for the
    "container" part of the request you would have:
        volumes {
            container_path: "/mount/point/in/container"
            host_path: "my-volume-227927c2-3266-412b-8572-92c5c93c051a"
            mode: RW
        }

    The container path is the mount point in your container and the
    host path is the id of your persistent volume.

    In case you use Marathon the documentation should be this:
    https://mesosphere.github.io/marathon/docs/persistent-volumes.html

Re: Persistent volumes

2017-11-27 Thread Dino Lokmic
yes I did. So I don't have to prepare it before the task? I can't use a volume
created on slave A from slave B.

Once a task fails, where will it be restarted? Do I have to specify the host?

If I do, it means I can achieve "persistence" the same way I deploy now, by
specifying a hostpath for the volume and a hostname:


  "constraints": [
[
  "hostname",
  "CLUSTER",
  "MYHOSTNAME"
]
  ],
  "container": {
"type": "DOCKER",
"volumes": [
  {
"containerPath": "/opt/storm/storm-local",
"hostPath": "/opt/docker_data/storm/storm-local",
"mode": "RW"
  },
  {
"containerPath": "/opt/storm/logs",
"hostPath": "/opt/docker_logs/storm/logs",
"mode": "RW"
  },
  {
"containerPath": "/home/xx/runtime/storm",
"hostPath": "/home/xx/runtime/storm",
"mode": "RO"
  }
],
"docker": {
  "image": "xxx/storm-1.1.0",
  "network": "HOST",
  "portMappings": [],
  "privileged": false,
  "parameters": [],
  "forcePullImage": true
}
  },






On Mon, Nov 27, 2017 at 3:05 PM, Hendrik Haddorp <hendrik.hadd...@gmx.net>
wrote:

> I have my own scheduler that is performing a create operation. As you are
> using Marathon this call would have to be done by Marathon.
> Did you read https://mesosphere.github.io/marathon/docs/persistent-volumes.html ?
>
> On 27.11.2017 14:59, Dino Lokmic wrote:
>
>> @hendrik
>>
>> How did you create this "my-volume-227927c2-3266-412b-8572-92c5c93c051a"
>> volume?
>>
>> On Mon, Nov 27, 2017 at 7:59 AM, Hendrik Haddorp <hendrik.hadd...@gmx.net>
>> wrote:
>>
>> Hi,
>>
>> I'm using persistent volumes directly on Mesos, without Marathon.
>> For that the scheduler (like Marathon) has to first reserve disk
>> space and then create a persistent volume with that. The next
>> resource offer message then contains the volume in the "disk" resource
>> part of the offer. Now you can start your task. In the request you
>> would need to include the resources and for the "container" part
>> of the request you would have:
>> volumes {
>> container_path: "/mount/point/in/container"
>> host_path: "my-volume-227927c2-3266-412b-8572-92c5c93c051a"
>> mode: RW
>> }
>>
>> The container path is the mount point in your container and the
>> host path is the id of your persistent volume.
>>
>> In case you use Marathon the documentation should be this:
>> https://mesosphere.github.io/marathon/docs/persistent-volumes.html
>>
>> regards,
>> Hendrik
>>
>>
>> On 23.11.2017 10:00, Dino Lokmic wrote:
>>
>> I have a few machines on Linode and I run Mesos there. Can
>> someone explain to me how to set up volumes right.
>>
>> Now I run tasks via Marathon like this
>>
>> ...
>>
>> "constraints": [
>> [
>>   "hostname",
>>   "CLUSTER",
>>   "HOSTNAME"
>> ]
>>   ],
>>   "container": {
>> "type": "DOCKER",
>> "volumes": [
>>   {
>> "containerPath": "/opt/storm/storm-local",
>> "hostPath": "/opt/docker_data/storm/storm-local",
>> "mode": "RW"
>>   }
>> ],
>> "docker": {
>>   "image": "",
>>   "network": "HOST",
>>   "portMappings": [],
>>   "privileged": false,
>>   "parameters": [],
>>   "forcePullImage": true
>> }
>>   },
>> ...
>>
>> So if the task is restarted I can be sure it has access to
>> previously used data.
>> You can see I have a scaling problem and my task depends on
>> this node.
>>
>> I would like my apps to be node independent and also to
>> have redundant data.
>>
>> What is best practice for this?
>>
>> I want to scale the application to 2 instances, I1 and I2
>>
>> Instance I1 runs on agent A1 and uses volume V1
>> Instance I2 runs on agent A2 and uses volume V2
>>
>> If agent A1 stops, I1 is restarted on A3 and uses V1.
>> If V1 fails, I1 uses a copy of the data from V3...
>>
>>
>> Can someone point to an article describing this, or at least give
>> me a few "keywords"?
>>
>>
>> Thanks
>>
>>
>>
>>
>>
>


Re: Persistent volumes

2017-11-27 Thread Hendrik Haddorp
I have my own scheduler that is performing a create operation. As you 
are using Marathon this call would have to be done by Marathon.
Did you read 
https://mesosphere.github.io/marathon/docs/persistent-volumes.html ?


On 27.11.2017 14:59, Dino Lokmic wrote:

@hendrik

How did you create this 
"my-volume-227927c2-3266-412b-8572-92c5c93c051a" volume?


On Mon, Nov 27, 2017 at 7:59 AM, Hendrik Haddorp 
<hendrik.hadd...@gmx.net> wrote:


    Hi,

    I'm using persistent volumes directly on Mesos, without Marathon.
    For that the scheduler (like Marathon) has to first reserve disk
    space and then create a persistent volume with that. The next
    resource offer message then contains the volume in the "disk"
    resource part of the offer. Now you can start your task. In the
    request you would need to include the resources, and for the
    "container" part of the request you would have:
        volumes {
            container_path: "/mount/point/in/container"
            host_path: "my-volume-227927c2-3266-412b-8572-92c5c93c051a"
            mode: RW
        }

    The container path is the mount point in your container and the
    host path is the id of your persistent volume.

    In case you use Marathon the documentation should be this:
    https://mesosphere.github.io/marathon/docs/persistent-volumes.html

regards,
Hendrik


On 23.11.2017 10:00, Dino Lokmic wrote:

I have a few machines on Linode and I run Mesos there. Can
someone explain to me how to set up volumes right.

Now I run tasks via Marathon like this

...

"constraints": [
    [
      "hostname",
      "CLUSTER",
      "HOSTNAME"
    ]
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/opt/storm/storm-local",
        "hostPath": "/opt/docker_data/storm/storm-local",
        "mode": "RW"
      }
    ],
    "docker": {
      "image": "",
      "network": "HOST",
      "portMappings": [],
      "privileged": false,
      "parameters": [],
      "forcePullImage": true
    }
  },
...

So if the task is restarted I can be sure it has access to
previously used data.
You can see I have a scaling problem and my task depends on
this node.

I would like my apps to be node independent and also to
have redundant data.

What is best practice for this?

I want to scale the application to 2 instances, I1 and I2

Instance I1 runs on agent A1 and uses volume V1
Instance I2 runs on agent A2 and uses volume V2

If agent A1 stops, I1 is restarted on A3 and uses V1.
If V1 fails, I1 uses a copy of the data from V3...


Can someone point to an article describing this, or at least give
me a few "keywords"?


Thanks








Re: Persistent volumes

2017-11-27 Thread Dino Lokmic
@hendrik

How did you create this "my-volume-227927c2-3266-412b-8572-92c5c93c051a"
volume?

On Mon, Nov 27, 2017 at 7:59 AM, Hendrik Haddorp <hendrik.hadd...@gmx.net>
wrote:

> Hi,
>
> I'm using persistent volumes directly on Mesos, without Marathon. For that
> the scheduler (like Marathon) has to first reserve disk space and then
> create a persistent volume with that. The next resource offer message then
> contains the volume in the "disk" resource part of the offer. Now you can start
> your task. In the request you would need to include the resources and for
> the "container" part of the request you would have:
> volumes {
> container_path: "/mount/point/in/container"
> host_path: "my-volume-227927c2-3266-412b-8572-92c5c93c051a"
> mode: RW
> }
>
> The container path is the mount point in your container and the host path
> is the id of your persistent volume.
>
> In case you use Marathon the documentation should be this:
> https://mesosphere.github.io/marathon/docs/persistent-volumes.html
>
> regards,
> Hendrik
>
>
> On 23.11.2017 10:00, Dino Lokmic wrote:
>
>> I have a few machines on Linode and I run Mesos there. Can someone explain
>> to me how to set up volumes right.
>>
>> Now I run tasks via Marathon like this
>>
>> ...
>>
>> "constraints": [
>> [
>>   "hostname",
>>   "CLUSTER",
>>   "HOSTNAME"
>> ]
>>   ],
>>   "container": {
>> "type": "DOCKER",
>> "volumes": [
>>   {
>> "containerPath": "/opt/storm/storm-local",
>> "hostPath": "/opt/docker_data/storm/storm-local",
>> "mode": "RW"
>>   }
>> ],
>> "docker": {
>>   "image": "",
>>   "network": "HOST",
>>   "portMappings": [],
>>   "privileged": false,
>>   "parameters": [],
>>   "forcePullImage": true
>> }
>>   },
>> ...
>>
>> So if the task is restarted I can be sure it has access to previously used
>> data.
>> You can see I have a scaling problem and my task depends on this node.
>>
>> I would like my apps to be node independent and also to have
>> redundant data.
>>
>> What is best practice for this?
>>
>> I want to scale the application to 2 instances, I1 and I2
>>
>> Instance I1 runs on agent A1 and uses volume V1
>> Instance I2 runs on agent A2 and uses volume V2
>>
>> If agent A1 stops, I1 is restarted on A3 and uses V1.
>> If V1 fails, I1 uses a copy of the data from V3...
>>
>>
>> Can someone point to an article describing this, or at least give me a few
>> "keywords"?
>>
>>
>> Thanks
>>
>>
>>
>


Re: Persistent volumes

2017-11-27 Thread Dino Lokmic
Thanks for the answers.

Yes, I use Marathon and Mesos.

On Mon, Nov 27, 2017 at 7:59 AM, Hendrik Haddorp <hendrik.hadd...@gmx.net>
wrote:

> Hi,
>
> I'm using persistent volumes directly on Mesos, without Marathon. For that
> the scheduler (like Marathon) has to first reserve disk space and then
> create a persistent volume with that. The next resource offer message then
> contain the volume in "disk" resource part of the offer. Now you can start
> your task. In the request you would need to include the resources and for
> the "container" part of the request you would have:
> volumes {
> container_path: "/mount/point/in/container"
> host_path: "my-volume-227927c2-3266-412b-8572-92c5c93c051a"
> mode: RW
> }
>
> The container path is the mount point in your container and the host path
> is the id of your persistent volume.
>
> In case you use Marathon the documentation should be this:
> https://mesosphere.github.io/marathon/docs/persistent-volumes.html
>
> regards,
> Hendrik
>
>
> On 23.11.2017 10:00, Dino Lokmic wrote:
>
>> I have a few machines on Linode and I run Mesos there. Can someone explain
>> to me how to set up volumes right.
>>
>> Now I run tasks via Marathon like this
>>
>> ...
>>
>> "constraints": [
>> [
>>   "hostname",
>>   "CLUSTER",
>>   "HOSTNAME"
>> ]
>>   ],
>>   "container": {
>> "type": "DOCKER",
>> "volumes": [
>>   {
>> "containerPath": "/opt/storm/storm-local",
>> "hostPath": "/opt/docker_data/storm/storm-local",
>> "mode": "RW"
>>   }
>> ],
>> "docker": {
>>   "image": "",
>>   "network": "HOST",
>>   "portMappings": [],
>>   "privileged": false,
>>   "parameters": [],
>>   "forcePullImage": true
>> }
>>   },
>> ...
>>
>> So if the task is restarted I can be sure it has access to previously used
>> data.
>> You can see I have a scaling problem and my task depends on this node.
>>
>> I would like my apps to be node independent and also to have
>> redundant data.
>>
>> What is best practice for this?
>>
>> I want to scale the application to 2 instances, I1 and I2
>>
>> Instance I1 runs on agent A1 and uses volume V1
>> Instance I2 runs on agent A2 and uses volume V2
>>
>> If agent A1 stops, I1 is restarted on A3 and uses V1.
>> If V1 fails, I1 uses a copy of the data from V3...
>>
>>
>> Can someone point to an article describing this, or at least give me a few
>> "keywords"?
>>
>>
>> Thanks
>>
>>
>>
>


Re: Persistent volumes

2017-11-26 Thread Hendrik Haddorp

Hi,

I'm using persistent volumes directly on Mesos, without Marathon. For 
that the scheduler (like Marathon) has to first reserve disk space and 
then create a persistent volume with that. The next resource offer 
message then contains the volume in the "disk" resource part of the offer.
Now you can start your task. In the request you would need to include 
the resources, and for the "container" part of the request you would have:

    volumes {
        container_path: "/mount/point/in/container"
        host_path: "my-volume-227927c2-3266-412b-8572-92c5c93c051a"
        mode: RW
    }

The container path is the mount point in your container and the host 
path is the id of your persistent volume.
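
For completeness, the volume itself is produced by accepting an offer with a
CREATE operation (after a RESERVE if the disk is not already reserved). A
rough sketch of the operation in the same notation; the role, principal,
size, and persistence id are example values:

    operations {
      type: CREATE
      create {
        volumes {
          name: "disk"
          type: SCALAR
          scalar { value: 1024 }
          role: "my-role"
          reservation { principal: "my-principal" }
          disk {
            persistence { id: "my-volume-227927c2-3266-412b-8572-92c5c93c051a" }
            volume { container_path: "volume" mode: RW }
          }
        }
      }
    }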


In case you use Marathon the documentation should be this:
https://mesosphere.github.io/marathon/docs/persistent-volumes.html


regards,
Hendrik

On 23.11.2017 10:00, Dino Lokmic wrote:
I have a few machines on Linode and I run Mesos there. Can someone
explain to me how to set up volumes right.


Now I run tasks via Marathon like this

...

"constraints": [
    [
      "hostname",
      "CLUSTER",
      "HOSTNAME"
    ]
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [
      {
        "containerPath": "/opt/storm/storm-local",
        "hostPath": "/opt/docker_data/storm/storm-local",
        "mode": "RW"
      }
    ],
    "docker": {
      "image": "",
      "network": "HOST",
      "portMappings": [],
      "privileged": false,
      "parameters": [],
      "forcePullImage": true
    }
  },
...

So if the task is restarted I can be sure it has access to previously used
data.

You can see I have a scaling problem and my task depends on this node.

I would like my apps to be node independent and also to
have redundant data.


What is best practice for this?

I want to scale the application to 2 instances, I1 and I2

Instance I1 runs on agent A1 and uses volume V1
Instance I2 runs on agent A2 and uses volume V2

If agent A1 stops, I1 is restarted on A3 and uses V1.
If V1 fails, I1 uses a copy of the data from V3...


Can someone point to an article describing this, or at least give me a few
"keywords"?



Thanks






Re: Persistent volumes

2017-11-26 Thread Judith Malnick
Are you using DC/OS or "vanilla" Mesos and Marathon, without DC/OS? If you
are using Mesos, you might get a better answer on the Mesos mailing list
<http://mesos.apache.org/community/#mailing-lists>, and you can also check
out these docs on persistent volumes
<http://mesos.apache.org/documentation/latest/persistent-volume/>.

Hope this helps!
Judith

On Thu, Nov 23, 2017 at 1:00 AM, Dino Lokmic <dino.lok...@ngs.ba> wrote:

> I have a few machines on Linode and I run Mesos there. Can someone explain
> to me how to set up volumes right.
>
> Now I run tasks via Marathon like this
>
> ...
>
> "constraints": [
> [
>   "hostname",
>   "CLUSTER",
>   "HOSTNAME"
> ]
>   ],
>   "container": {
> "type": "DOCKER",
> "volumes": [
>   {
> "containerPath": "/opt/storm/storm-local",
> "hostPath": "/opt/docker_data/storm/storm-local",
> "mode": "RW"
>   }
> ],
> "docker": {
>   "image": "",
>   "network": "HOST",
>   "portMappings": [],
>   "privileged": false,
>   "parameters": [],
>   "forcePullImage": true
> }
>   },
> ...
>
> So if the task is restarted I can be sure it has access to previously used
> data.
> You can see I have a scaling problem and my task depends on this node.
>
> I would like my apps to be node independent and also to have
> redundant data.
>
> What is best practice for this?
>
> I want to scale the application to 2 instances, I1 and I2
>
> Instance I1 runs on agent A1 and uses volume V1
> Instance I2 runs on agent A2 and uses volume V2
>
> If agent A1 stops, I1 is restarted on A3 and uses V1.
> If V1 fails, I1 uses a copy of the data from V3...
>
>
> Can someone point to an article describing this, or at least give me a few
> "keywords"?
>
>
> Thanks
>
>
>


-- 
Judith Malnick
Community Manager
310-709-1517


Persistent volumes

2017-11-23 Thread Dino Lokmic
I have a few machines on Linode and I run Mesos there. Can someone explain to
me how to set up volumes right.

Now I run tasks via Marathon like this

...

"constraints": [
[
  "hostname",
  "CLUSTER",
  "HOSTNAME"
]
  ],
  "container": {
"type": "DOCKER",
"volumes": [
  {
"containerPath": "/opt/storm/storm-local",
"hostPath": "/opt/docker_data/storm/storm-local",
"mode": "RW"
  }
],
"docker": {
  "image": "",
  "network": "HOST",
  "portMappings": [],
  "privileged": false,
  "parameters": [],
  "forcePullImage": true
}
  },
...

So if the task is restarted I can be sure it has access to previously used data.
You can see I have a scaling problem and my task depends on this node.

I would like my apps to be node independent and also to have
redundant data.

What is best practice for this?

I want to scale the application to 2 instances, I1 and I2

Instance I1 runs on agent A1 and uses volume V1
Instance I2 runs on agent A2 and uses volume V2

If agent A1 stops, I1 is restarted on A3 and uses V1.
If V1 fails, I1 uses a copy of the data from V3...


Can someone point to an article describing this, or at least give me a few
"keywords"?


Thanks


Re: Mesos persistent volumes as Docker volumes

2016-06-25 Thread Jie Yu
Hendrik,

Sorry about the late response on that. I am glad that you figured it out
yourself.

Currently, for local persistent volumes (which consume 'disk' resources on the
agent), the container_path has to be relative. We made that decision based
on a couple of reasons:
1) Not all the systems Mesos supports have bind mounts (e.g., OS X, Windows).
Because of that, we don't want to introduce an API that works on some
systems but not on others. A relative container_path (e.g., "abc") means
that the volume can be accessed at $MESOS_SANDBOX/abc. This can be
supported on all systems (i.e., using bind mounts on Linux, symlinks on
others).
2) Mesos allows the resources of a container to expand when the executor
receives a new task from the framework. The new task might contain a local
persistent volume. Dynamically adding volumes to a running container was not
supported by the Docker daemon until recently, and even then one needs to set
up mount propagation properly beforehand. Supporting an absolute
container_path would make this very hard to implement (you'd need to make
sure the parent mount of `container_path` is a shared mount). To solve that
issue, we only set mount propagation for the Mesos sandbox directory. That's
the reason why we enforce that container_path has to be relative for local
persistent volumes.
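
Concretely, a volume declared with a relative container_path like this (a
sketch; the persistence id is an example):

    disk {
      persistence { id: "my-volume" }
      volume { container_path: "data" mode: RW }
    }

is made available at $MESOS_SANDBOX/data inside the container.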

The workaround you have is perfectly fine for command tasks. We should
definitely document this. I'll make sure that we follow up on that.

Thanks!
- Jie

On Fri, Jun 24, 2016 at 1:25 PM, Hendrik Haddorp <hendrik.hadd...@gmx.net>
wrote:

> Basically the same issue was actually reported recently in MESOS-3413
> [1]. The discussion there resulted in a code change in the Mesos
> framework for ArangoDB [2]. One first has to create a volume with some
> container path and then add a volume to the ContainerInfo when launching
> the Docker container that uses the same path as the host path for the
> volume. Now the container path needs to be absolute, like /data. Files
> that are then created below /data show up in the persistent volume,
> and newly launched containers with the same mapping can reuse the data.
> Not that obvious in my opinion, but it works :-)
>
> [1] https://issues.apache.org/jira/browse/MESOS-3413
> [2]
>
> https://github.com/arangodb/arangodb-mesos-framework/commit/98ccbdbaa5ae41f83b02ca42e7325746ad044099
>
> p.s.: looks like jira is currently down ...
>
> On 23/06/16 16:56, Hendrik Haddorp wrote:
> > Hi,
> >
> > I'm trying to write a Mesos framework that should create persistent
> > volumes and then start a Docker container that uses this. So far I was
> > able to dynamically reserve resources (cpu, memory and disk) and create
> > a persistent volume in the reserved disk space. I'm also able to launch
> > a Docker container. I just can't figure out how to connect these
> > correctly. I either get told that some fields are not set correctly, end
> > up mounting a path on the host system, or nothing at all seems to
> > happen.
> >
> > It would be nice if somebody could show how a TaskInfo protobuf would need
> > to be filled to achieve this.
> >
> > thanks,
> > Hendrik
>
>


Re: Mesos persistent volumes as Docker volumes

2016-06-24 Thread Hendrik Haddorp
Basically the same issue was actually reported recently in MESOS-3413
[1]. The discussion there resulted in a code change in the Mesos
framework for ArangoDB [2]. One first has to create a volume with some
container path and then add a volume to the ContainerInfo when launching
the Docker container that uses the same path as the host path for the
volume. Now the container path needs to be absolute, like /data. Files
that are then created below /data show up in the persistent volume,
and newly launched containers with the same mapping can reuse the data.
Not that obvious in my opinion, but it works :-)

[1] https://issues.apache.org/jira/browse/MESOS-3413
[2]
https://github.com/arangodb/arangodb-mesos-framework/commit/98ccbdbaa5ae41f83b02ca42e7325746ad044099
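
In sketch form the two mappings look like this (the persistence id and paths
are example values). First the persistent volume is created with some
container path:

    disk {
      persistence { id: "my-volume" }
      volume { container_path: "myvol" mode: RW }
    }

Then the ContainerInfo of the Docker container reuses that path as the host
path, with an absolute container path:

    volumes {
      container_path: "/data"
      host_path: "myvol"
      mode: RW
    }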

p.s.: looks like jira is currently down ...

On 23/06/16 16:56, Hendrik Haddorp wrote:
> Hi,
>
> I'm trying to write a Mesos framework that should create persistent
> volumes and then start a Docker container that uses this. So far I was
> able to dynamically reserve resources (cpu, memory and disk) and create
> a persistent volume in the reserved disk space. I'm also able to launch
> a Docker container. I just can't figure out how to connect these
> correctly. I either get told that some fields are not set correctly, end
> up mounting a path on the host system, or nothing at all seems to
> happen.
>
> It would be nice if somebody could show how a TaskInfo protobuf would need
> to be filled to achieve this.
>
> thanks,
> Hendrik



Re: Mesos persistent volumes as Docker volumes

2016-06-23 Thread Hendrik Haddorp
Hi Guangya,

that seems to be pretty much the same as what Vaibhav pointed me to, that
is, using Docker volume drivers to mount external storage. I would like
to leverage the built-in Mesos persistent volumes but mount them at any
position, just like you can with normal Docker volumes from the host
filesystem.

On 23/06/16 23:41, Guangya Liu wrote:
> Hi Hendrik,
>
> You can take a look at how Mesos 1.0 supports Docker volume driver
> integration with the Mesos Containerizer
> here: https://github.com/apache/mesos/blob/master/docs/docker-volume.md
>
> Both the Mesos Containerizer and the Docker Containerizer support
> integration with Docker volume drivers now; you can take a look
> at https://reviews.apache.org/r/36440/ for how to test a Docker volume
> driver with the Docker Containerizer.
>
> Thanks,
>
> Guangya
>
> On Fri, Jun 24, 2016 at 2:54 AM, Hendrik Haddorp
> <hendrik.hadd...@gmx.net> wrote:
>
> Thanks for the tip, I did actually notice the project when trying
> to find a solution for my problem. This project seems to be about
> leveraging external Docker volume drivers, which is certainly also
> interesting, but I'm trying to use the built-in Mesos persistent
> storage.
>
> Actually I just noticed that I can mount the volumes that I
> created with Mesos; it is just that when I specify a relative path
> for the container path in the volume, the mount shows up below
> /mnt/mesos/sandbox, and when I specify an absolute path it is being
> ignored, as according to the Mesos log slashes are not allowed in
> the container path. So now the problem is that I would like to have
> the mount at a different location.
>
>
> On 23/06/16 20:43, Vaibhav Khanduja wrote:
>> Hi Hendrik,
>>
>> If you want to run Docker jobs, it may be a good idea to get
>> volumes from a “Docker” volume plugin.
>>
>> There is a project by EMC - mesos-dvdi, which abstracts the
>> volume creation. Please check this out and it should work with
>> your scheduler …
>>
>>
>> https://github.com/emccode/mesos-module-dvdi
>>
>> Thx
>>
>>
>>
>> On Thu, Jun 23, 2016 at 7:56 AM, Hendrik Haddorp
>> <hendrik.hadd...@gmx.net> wrote:
>>
>> Hi,
>>
>> I'm trying to write a Mesos framework that should create
>> persistent
>> volumes and then start a Docker container that uses this. So
>> far I was
>> able to dynamically reserve resources (cpu, memory and disk)
>> and create
>> a persistent volume in the reserved disk space. I'm also able
>> to launch
>> a Docker container. I just can't figure out how to connect these
>> correctly. I either get told that some fields are not set
>> correctly, end
>> up with mounting a path on the host system or nothing at all
>> seems to
>> happen.
>>
>> It would be nice if somebody could show how a TaskInfo protobuf
>> would need
>> to be filled to achieve this.
>>
>> thanks,
>> Hendrik
>>
>>
>
>



Re: Mesos persistent volumes as Docker volumes

2016-06-23 Thread Guangya Liu
Hi Hendrik,

You can take a look at how Mesos 1.0 supports Docker volume driver
integration with the Mesos Containerizer here:
https://github.com/apache/mesos/blob/master/docs/docker-volume.md

Both the Mesos Containerizer and the Docker Containerizer support integration
with Docker volume drivers now; you can take a look at
https://reviews.apache.org/r/36440/ for how to test a Docker volume driver
with the Docker Containerizer.
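
For reference, with the docker/volume isolator a volume backed by a Docker
volume driver is declared on the container roughly like this (a sketch based
on the linked docs; the driver and volume name are examples):

    volumes {
      container_path: "/data"
      mode: RW
      source {
        type: DOCKER_VOLUME
        docker_volume {
          driver: "rexray"
          name: "my-volume"
        }
      }
    }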

Thanks,

Guangya

On Fri, Jun 24, 2016 at 2:54 AM, Hendrik Haddorp <hendrik.hadd...@gmx.net>
wrote:

> Thanks for the tip, I did actually notice the project when trying to find
> a solution for my problem. This project seems to be about leveraging
> external Docker volume drivers, which is certainly also interesting, but I'm
> trying to use the built-in Mesos persistent storage.
>
> Actually I just noticed that I can mount the volumes that I created with
> Mesos; it is just that when I specify a relative path for the container path
> in the volume, the mount shows up below /mnt/mesos/sandbox, and when I
> specify an absolute path it is being ignored, as according to the Mesos log
> slashes are not allowed in the container path. So now the problem is that I
> would like to have the mount at a different location.
>
>
> On 23/06/16 20:43, Vaibhav Khanduja wrote:
>
> Hi Hendrik,
>
> If you want to run Docker jobs, it may be a good idea to get volumes from
> a “Docker” volume plugin.
>
> There is a project by EMC - mesos-dvdi, which abstracts the volume
> creation. Please check this out and it should work with your scheduler …
>
>
> https://github.com/emccode/mesos-module-dvdi
>
> Thx
>
>
>
> On Thu, Jun 23, 2016 at 7:56 AM, Hendrik Haddorp <hendrik.hadd...@gmx.net>
> wrote:
>
>> Hi,
>>
>> I'm trying to write a Mesos framework that should create persistent
>> volumes and then start a Docker container that uses this. So far I was
>> able to dynamically reserve resources (cpu, memory and disk) and create
>> a persistent volume in the reserved disk space. I'm also able to launch
>> a Docker container. I just can't figure out how to connect these
>> correctly. I either get told that some fields are not set correctly, end
>> up mounting a path on the host system, or nothing at all seems to
>> happen.
>>
>> It would be nice if somebody could show how a TaskInfo protobuf would need
>> to be filled to achieve this.
>>
>> thanks,
>> Hendrik
>>
>
>
>


Re: Mesos persistent volumes as Docker volumes

2016-06-23 Thread Hendrik Haddorp
Thanks for the tip, I did actually notice the project when trying to
find a solution for my problem. This project seems to be about
leveraging external Docker volume drivers, which is certainly also
interesting, but I'm trying to use the built-in Mesos persistent storage.

Actually I just noticed that I can mount the volumes that I created with
Mesos; it is just that when I specify a relative path for the container
path in the volume, the mount shows up below /mnt/mesos/sandbox, and when
I specify an absolute path it is being ignored, as according to the Mesos
log slashes are not allowed in the container path. So now the problem is
that I would like to have the mount at a different location.

On 23/06/16 20:43, Vaibhav Khanduja wrote:
> Hi Hendrik,
>
> If you want to run Docker jobs, it may be a good idea to get volumes
> from a “Docker” volume plugin.
>
> There is a project by EMC - mesos-dvdi, which abstracts the volume
> creation. Please check this out and it should work with your scheduler …
>
>
> https://github.com/emccode/mesos-module-dvdi
>
> Thx
>
>
>
> On Thu, Jun 23, 2016 at 7:56 AM, Hendrik Haddorp
> <hendrik.hadd...@gmx.net> wrote:
>
> Hi,
>
> I'm trying to write a Mesos framework that should create persistent
> volumes and then start a Docker container that uses this. So far I was
> able to dynamically reserve resources (cpu, memory and disk) and
> create
> a persistent volume in the reserved disk space. I'm also able to
> launch
> a Docker container. I just can't figure out how to connect these
> correctly. I either get told that some fields are not set
> correctly, end
> up mounting a path on the host system, or nothing at all seems to
> happen.
>
> It would be nice if somebody could show how a TaskInfo protobuf would
> need
> to be filled to achieve this.
>
> thanks,
> Hendrik
>
>



Re: Mesos persistent volumes as Docker volumes

2016-06-23 Thread Vaibhav Khanduja
Hi Hendrik,

If you want to run Docker jobs, it may be a good idea to get volumes from a
“Docker” volume plugin.

There is a project by EMC, mesos-dvdi, which abstracts the volume
creation. Please check it out; it should work with your scheduler:


https://github.com/emccode/mesos-module-dvdi

Thx


On Thu, Jun 23, 2016 at 7:56 AM, Hendrik Haddorp <hendrik.hadd...@gmx.net>
wrote:

> Hi,
>
> I'm trying to write a Mesos framework that should create persistent
> volumes and then start a Docker container that uses this. So far I was
> able to dynamically reserve resources (cpu, memory and disk) and create
> a persistent volume in the reserved disk space. I'm also able to launch
> a Docker container. I just can't figure out how to connect these
> correctly. I either get told that some fields are not set correctly, end
> up mounting a path on the host system, or nothing at all seems to
> happen.
>
> It would be nice if somebody could show how a TaskInfo protobuf would need
> to be filled to achieve this.
>
> thanks,
> Hendrik
>


Mesos persistent volumes as Docker volumes

2016-06-23 Thread Hendrik Haddorp
Hi,

I'm trying to write a Mesos framework that should create persistent
volumes and then start a Docker container that uses them. So far I was
able to dynamically reserve resources (cpu, memory and disk) and create
a persistent volume in the reserved disk space. I'm also able to launch
a Docker container. I just can't figure out how to connect these
correctly. I either get told that some fields are not set correctly, end
up mounting a path on the host system, or nothing at all seems to
happen.

It would be nice if somebody could show how a TaskInfo protobuf would need
to be filled to achieve this.

thanks,
Hendrik


URL for viewing persistent volumes

2016-02-29 Thread Zhitao Li
Hi,

Is there an HTTP URL to list and view the persistent volumes created so
far? I'm running 0.27.1 and couldn't find how to obtain this info.

Thanks!

-- 
Cheers,

Zhitao Li