I agree with your suggestion. Will it be a default disk allocation which a user can change via the UI? If so, it seems like a good solution for now. We can do disks_for_mongo = floor(free_disks * mongo_compute_coeff) and take mongo_compute_coeff from openstack.yaml.
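To make the proposal concrete, here is a minimal sketch of the allocation logic discussed in this thread. It is purely illustrative, not Nailgun's actual code: the coefficient is hard-coded here, whereas in the proposal it would be read from openstack.yaml, and all names (allocate_disks, MONGO_COMPUTE_COEFF) are invented for the example.

```python
import math

# Hypothetical coefficient; the proposal is to read it from openstack.yaml
# rather than hard-code it. 0.5 reproduces floor(free_disks / 2).
MONGO_COMPUTE_COEFF = 0.5

def allocate_disks(total_disks, with_compute, coeff=MONGO_COMPUTE_COEFF):
    """Sketch of the scheme from this thread; returns whole-disk
    counts for the OS, MongoDB, and virtual storage."""
    if total_disks < 2:
        # Single disk: default OS calculator, MongoDB gets the leftover
        # space on the same disk, no whole disk of its own.
        return {"os": 1, "mongo": 0, "virtual": 0}
    free_disks = total_disks - 1          # one disk reserved for the OS
    if not with_compute:
        # Dedicated mongodb node: all remaining disks go to MongoDB.
        return {"os": 1, "mongo": free_disks, "virtual": 0}
    # mongodb combined with compute: split the free disks by the
    # coefficient; the rest goes to virtual storage (which also shares
    # the first disk with the OS).
    mongo = int(math.floor(free_disks * coeff))
    return {"os": 1, "mongo": mongo, "virtual": free_disks - mongo}
```

For example, a 5-disk combined mongodb+compute node with coeff 0.5 would give 2 whole disks to MongoDB and 2 to virtual storage, with the first disk shared by the OS and virtual storage.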
On Thu, Mar 27, 2014 at 6:33 PM, Mike Scherbakov <[email protected]> wrote:

> Sorry,
> I meant disks_for_mongo = floor(free_disks / 2)
>
> On Thu, Mar 27, 2014 at 6:19 PM, Andrey Danin <[email protected]> wrote:
>
>> What does floor() do? The free_disks variable is an integer and we cannot
>> round it again.
>>
>> On Thu, Mar 27, 2014 at 4:47 PM, Mike Scherbakov <[email protected]> wrote:
>>
>>> My suggestion would be the following for now (let's improve it when we
>>> get more results):
>>>
>>> 1. For a separate mongodb node:
>>>    1. If there is more than 1 disk, use 1 disk for the OS and all other
>>>       disks for mongodb.
>>>    2. If there is 1 disk, use the default OS calculator and leave the
>>>       rest of the space for mongodb.
>>> 2. For mongodb combined with compute:
>>>    1. free_disks = disks - 1 (1 is for the OS); disks_for_mongo =
>>>       floor(free_disks), the rest goes to virtual storage, and the first
>>>       disk is shared by the OS and virtual storage.
>>>
>>> So the idea is to allocate separate disks for mongodb where possible,
>>> while still leaving space for virtual storage and other roles.
>>>
>>> What do you think?
>>>
>>> On Wed, Mar 26, 2014 at 6:59 PM, Maksim Mazur <[email protected]> wrote:
>>>
>>>> Hi!
>>>>
>>>> >> disk allocation scheme should be researched by the team implementing
>>>> >> the feature. In this case, hardware requirements information and a
>>>> >> disk allocation scheme draft should be written down into the
>>>> >> blueprint. Only after this is done can we start discussing the
>>>> >> particular implementation.
>>>>
>>>> As far as I know, the "team" implementing this feature is only one
>>>> person, and I do not have hardware to run performance tests right now.
>>>> I am also a little overloaded with the Mirantis OpenStack Express
>>>> project.
>>>>
>>>> As a temporary solution I can propose the following:
>>>>
>>>> 1. For now we leave disk allocation as is.
>>>> 2.
On MOE we have the hardware to test, so if Ceilometer is included in
>>>> the next release we will be able to do full testing on real hardware,
>>>> up to 20 hardware nodes.
>>>> 3. After the tests I will be able to give recommendations about disk
>>>> allocation.
>>>>
>>>> Otherwise I need help with the testing and with creating the design
>>>> documents.
>>>>
>>>> On Wed, Mar 26, 2014 at 4:08 PM, Mike Scherbakov <[email protected]> wrote:
>>>>
>>>>> Why can't we track storage for mongo as just one of the tasks in the
>>>>> existing blueprint for ceilometer-mongodb? It looks pretty granular to
>>>>> me. Moreover, I think we need to keep such details in one single
>>>>> design document for the bigger story.
>>>>>
>>>>> There was a document with numbers regarding disk consumption; I am not
>>>>> sure about IO. Max, do you have this info? Based on the disk IO and
>>>>> space requirements, we can all propose and come to an agreement on the
>>>>> logic for disk partitioning. For example, if IO is high, we may always
>>>>> require separate disk(s) for it. There might be a recommendation on
>>>>> the fs type for mongodb as well. Where mongodb is collocated with some
>>>>> other roles, such as compute, we will need to come up with logic for
>>>>> how to distribute disk space between the "virtual storage" LVM and
>>>>> mongodb (50/50 or another ratio?).
>>>>
>>>> As far as I know, the amount of data in the ceilometer DB depends on
>>>> the number of VMs in the stack, so it is impossible to predict how much
>>>> space we will need.
>>>>
>>>>> Thanks,
>>>>> On Mar 26, 2014 5:43 PM, "Bogdan Dobrelya" <[email protected]> wrote:
>>>>>
>>>>>> On 03/26/2014 02:15 PM, Maksim Mazur wrote:
>>>>>> > Hi!
>>>>>> >
>>>>>> > I need to create a disk allocation rule for the MongoDB role.
>>>>>> > (MongoDB is the NoSQL backend for Ceilometer.)
>>>>>> >
>>>>>> > I have no experience with high-loaded MongoDB single instances or
>>>>>> > ReplicaSets.
>>>>>> >
>>>>>> > At first view, having a dedicated drive or RAID array for MongoDB
>>>>>> > data looks like a good idea for large installations, but it is
>>>>>> > overkill for small stacks.
>>>>>> > And I'm not sure whether it is possible to use a dedicated drive if
>>>>>> > the MongoDB role is applied on a Controller.
>>>>>> >
>>>>>> > Could you please help me with this task?
>>>>>> >
>>>>>> > Best Regards,
>>>>>> > Max Mazur.
>>>>>>
>>>>>> Hi all.
>>>>>>
>>>>>> I suggest following the existing guideline Fuel UI provides for disk
>>>>>> configuration on nodes. As a start, you could create a new blueprint
>>>>>> to define a new storage ("Mongo DB Storage") for the Nailgun UI, to
>>>>>> be used by Fuel at the provision stage.
>>>>>>
>>>>>> There are similar tasks for "Openstack DB Storage" (in the TODO list)
>>>>>> and a separate "Logs Storage" (see
>>>>>> https://etherpad.openstack.org/p/manage-logs-with-free-space-consideration
>>>>>> for the BP
>>>>>> https://blueprints.launchpad.net/fuel/+spec/manage-logs-with-free-space-consideration
>>>>>> ).
>>>>>>
>>>>>> Note: the blueprint has not been approved yet, but I believe some of
>>>>>> its patterns could be reused for the Mongo/Openstack DB storages as
>>>>>> well, e.g. the sections 'Fuel-UI requirements' and 'Planning disk
>>>>>> partitions, RAID type, logical volume for Remote Logs Storage'.
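To illustrate Bogdan's suggestion of defining a new "Mongo DB Storage", here is a purely hypothetical YAML sketch of what such a volume definition might look like. Every key and generator name below is invented for the example; this is not the actual Nailgun volume metadata schema, which the blueprint would need to pin down.

```yaml
# Hypothetical "Mongo DB Storage" volume group definition; keys and
# generator names are illustrative only, not the real Nailgun format.
- id: "mongo"
  type: "vg"
  label: "Mongo DB Storage"
  min_size:
    generator: "calc_min_mongo_size"   # invented generator name
  volumes:
    - mount: "/var/lib/mongo"
      type: "lv"
      name: "mongodb"
      size:
        generator: "calc_total_mongo_vg"   # invented generator name
```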
>>>>>> >>>>>> -- >>>>>> Best regards, >>>>>> Bogdan Dobrelya, >>>>>> Skype #bogdando_at_yahoo.com >>>>>> Irc #bogdando >>>>>> >>>>>> -- >>>>>> Mailing list: https://launchpad.net/~fuel-dev >>>>>> Post to : [email protected] >>>>>> Unsubscribe : https://launchpad.net/~fuel-dev >>>>>> More help : https://help.launchpad.net/ListHelp >>>>>> >>>>> >>>> >>>> Best Regards, >>>> Max Mazur >>>> >>> >>> >>> >>> -- >>> Mike Scherbakov >>> #mihgen >>> >>> -- >>> Mailing list: https://launchpad.net/~fuel-dev >>> Post to : [email protected] >>> Unsubscribe : https://launchpad.net/~fuel-dev >>> More help : https://help.launchpad.net/ListHelp >>> >>> >> >> >> -- >> Andrey Danin >> [email protected] >> skype: gcon.monolake >> > > > > -- > Mike Scherbakov > #mihgen > -- Andrey Danin [email protected] skype: gcon.monolake
--
Mailing list: https://launchpad.net/~fuel-dev
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~fuel-dev
More help   : https://help.launchpad.net/ListHelp

