[ 
https://issues.apache.org/jira/browse/MESOS-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Mahler updated MESOS-6765:
-----------------------------------
    Labels: performance  (was: )

> Consider making the Resources wrapper "copy-on-write" to improve performance.
> -----------------------------------------------------------------------------
>
>                 Key: MESOS-6765
>                 URL: https://issues.apache.org/jira/browse/MESOS-6765
>             Project: Mesos
>          Issue Type: Improvement
>            Reporter: Benjamin Mahler
>              Labels: performance
>
> Resources currently stores the underlying resource objects directly:
> {code}
> class Resources
> {
>   ...
>   std::vector<Resource_> resources;
> };
> {code}
> This means that copying a {{Resources}} collection (which happens frequently) is 
> expensive, since copying a {{Resource}} object is relatively heavy-weight.
> One strategy, explored in MESOS-4770, is to avoid protobuf in favor of C++ types (i.e. 
> replace {{Value::Scalar}}, {{Value::Set}}, and {{Value::Ranges}} with C++ 
> equivalents). However, metadata like reservations, disk info, etc. is still 
> fairly expensive to copy even when avoiding protobufs.
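> For reference, such C++ equivalents might look roughly like the following (purely illustrative stand-ins, not the actual MESOS-4770 proposal):
> {code}
> #include <cstdint>
> #include <set>
> #include <string>
> #include <vector>
>
> // Illustrative plain C++ counterparts of the protobuf value types.
> struct Scalar { double value; };
>
> struct Range { uint64_t begin; uint64_t end; };
> struct Ranges { std::vector<Range> ranges; };
>
> struct Set { std::set<std::string> items; };
> {code}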
> An approach to reduce copying further would be to copy a resource object only upon 
> writing, and only when there are multiple references to it. If there is 
> a single reference to the resource object, we can safely mutate it in place 
> without copying. E.g.
> {code}
> class Resources
> {
>   ...
>   std::vector<std::shared_ptr<Resource_>> resources;
> };
> // Mutation function:
> void Resources::mutate(size_t index)
> {
>   // Copy if there are multiple references.
>   if (resources[index].use_count() > 1) {
>     resources[index] = std::make_shared<Resource_>(*resources[index]);
>   }
>   // Mutate safely in place.
>   resources[index]->some_mutation();
> }
> {code}
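> To make the copy-on-write behavior concrete, here is a small self-contained sketch (purely illustrative; {{Resource_}} is a trivial stand-in and {{setCpus}} a hypothetical mutation, not the actual Mesos API). Copying a {{Resources}} then only bumps reference counts, and the underlying object is copied lazily on the first mutation of a shared element:
> {code}
> #include <cstddef>
> #include <iostream>
> #include <memory>
> #include <utility>
> #include <vector>
>
> struct Resource_ { int cpus; };  // Stand-in for the heavy Resource object.
>
> class Resources
> {
> public:
>   void add(Resource_ r)
>   {
>     resources.push_back(std::make_shared<Resource_>(std::move(r)));
>   }
>
>   // Copy-on-write mutation: copy the element only if it is shared.
>   void setCpus(std::size_t index, int cpus)
>   {
>     if (resources[index].use_count() > 1) {
>       resources[index] = std::make_shared<Resource_>(*resources[index]);
>     }
>     resources[index]->cpus = cpus;
>   }
>
>   int cpus(std::size_t index) const { return resources[index]->cpus; }
>
> private:
>   std::vector<std::shared_ptr<Resource_>> resources;
> };
>
> int main()
> {
>   Resources a;
>   a.add(Resource_{4});
>
>   Resources b = a;   // Cheap copy: only the shared_ptrs are duplicated.
>   b.setCpus(0, 8);   // First mutation of a shared element triggers a copy.
>
>   std::cout << a.cpus(0) << " " << b.cpus(0) << std::endl;  // Prints "4 8".
>   return 0;
> }
> {code}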
> On the other hand, this introduces an additional level of pointer chasing, so 
> we would need to weigh the approaches against each other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
