@hendrik

How did you create this "my-volume-227927c2-3266-412b-8572-92c5c93c051a"
volume?

On Mon, Nov 27, 2017 at 7:59 AM, Hendrik Haddorp <[email protected]>
wrote:

> Hi,
>
> I'm using persistent volumes directly on Mesos, without Marathon. For
> that, the scheduler (like Marathon) first has to reserve disk space and
> then create a persistent volume on top of that reservation. The next
> resource offer then contains the volume in the "disk" resource part of
> the offer. Now you can start your task. In the request you would need to
> include the resources, and in the "container" part of the request you
> would have:
>     volumes {
>         container_path: "/mount/point/in/container"
>         host_path: "my-volume-227927c2-3266-412b-8572-92c5c93c051a"
>         mode: RW
>     }
>
> The container path is the mount point in your container and the host path
> is the id of your persistent volume.
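>
> Roughly, the reservation and volume creation happen when the scheduler
> accepts an offer with a RESERVE and a CREATE operation. A sketch in the
> same protobuf text form (the role, principal, and size here are
> placeholders; the persistence id is what later becomes the host path):
>
>     operations {
>         type: RESERVE
>         reserve {
>             resources {
>                 name: "disk"
>                 type: SCALAR
>                 scalar { value: 1024 }
>                 role: "my-role"
>                 reservation { principal: "my-principal" }
>             }
>         }
>     }
>     operations {
>         type: CREATE
>         create {
>             volumes {
>                 name: "disk"
>                 type: SCALAR
>                 scalar { value: 1024 }
>                 role: "my-role"
>                 reservation { principal: "my-principal" }
>                 disk {
>                     # this id is what you later pass as host_path in the task
>                     persistence { id: "my-volume-227927c2-3266-412b-8572-92c5c93c051a" }
>                     volume {
>                         container_path: "volume"
>                         mode: RW
>                     }
>                 }
>             }
>         }
>     }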
>
> In case you use Marathon, the documentation should be this:
> https://mesosphere.github.io/marathon/docs/persistent-volumes.html
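>
> For completeness: with Marathon you don't create the volume yourself;
> you declare a persistent volume in the app definition and Marathon does
> the reserve/create steps for you. A minimal fragment based on that page
> (the size is in MiB and a placeholder; for persistent volumes the
> containerPath must be a relative path, and Marathon picks the host path):
>
>     "container": {
>       "volumes": [
>         {
>           "containerPath": "data",
>           "mode": "RW",
>           "persistent": { "size": 1024 }
>         }
>       ]
>     },
>     "residency": { "taskLostBehavior": "WAIT_FOREVER" }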
>
> regards,
> Hendrik
>
>
> On 23.11.2017 10:00, Dino Lokmic wrote:
>
>> I have a few machines on Linode and I run Mesos there. Can someone
>> explain to me how to set up volumes correctly?
>>
>> Now I run tasks via Marathon like this:
>>
>> ...
>>
>> "constraints": [
>>     [
>>       "hostname",
>>       "CLUSTER",
>>       "HOSTNAME"
>>     ]
>>   ],
>>   "container": {
>>     "type": "DOCKER",
>>     "volumes": [
>>       {
>>         "containerPath": "/opt/storm/storm-local",
>>         "hostPath": "/opt/docker_data/storm/storm-local",
>>         "mode": "RW"
>>       }
>>     ],
>>     "docker": {
>>       "image": "xxxx",
>>       "network": "HOST",
>>       "portMappings": [],
>>       "privileged": false,
>>       "parameters": [],
>>       "forcePullImage": true
>>     }
>>   },
>> ...
>>
>> So if the task is restarted I can be sure it has access to the
>> previously used data.
>> As you can see, I have a scaling problem and my task depends on this
>> node.
>>
>> I would like my apps to be node-independent and to have redundant
>> data.
>>
>> What is the best practice for this?
>>
>> I want to scale the application to 2 instances, I1 and I2:
>>
>> Instance I1 runs on agent A1 and uses volume V1
>> Instance I2 runs on agent A2 and uses volume V2
>>
>> If agent A1 stops, I1 is restarted on A3 and uses V1.
>> If V1 fails, I1 uses a copy of the data from V3...
>>
>>
>> Can someone point me to an article describing this, or at least give me
>> a few "keywords"?
>>
>>
>> Thanks
