[openstack-dev] [MagnetoDB] IRC team meeting canceled
Hello team, Let's cancel today's meeting due to overlap with the summit. Have a nice day. -- With best regards Ilya Sviridov __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [MagnetoDB] Kilo-3 milestone release
Hello everybody, The MagnetoDB team is happy to announce the Kilo-3 release [1]. Thank you to everyone who made this possible. [1] https://launchpad.net/magnetodb/kilo/kilo-3 -- With best regards Ilya Sviridov
[openstack-dev] [MagnetoDB] Kilo-2 development milestone available
Hello everyone, MagnetoDB Kilo-2 development milestone has been released https://launchpad.net/magnetodb/kilo/kilo-2 Have a nice day, Ilya Sviridov
[openstack-dev] MagnetoDB kilo-1 milestone release
Hello openstackers, The MagnetoDB team is happy to announce the release of the kilo-1 milestone [1]. In this milestone we focused on the operability and monitoring aspects of MagnetoDB: * A Backup/Restore API has been introduced * A Monitoring API has been introduced and implemented for Cassandra * The API URI format has been changed to split logically different functionality and simplify deployment * Updated documentation [2] A number of bugs have been fixed as well [1]. Thanks everyone for making it possible! With best regards, Ilya Sviridov [1] https://launchpad.net/magnetodb/kilo/kilo-1 [2] https://magnetodb.readthedocs.org/en/2015.1.0b1/
[Openstack] MagnetoDB kilo-1 milestone release
Hello OpenStack users, MagnetoDB, a key-value store for OpenStack, has released its kilo-1 milestone [1]. In this milestone we focused on the operability and monitoring aspects of MagnetoDB: * A Backup/Restore API has been introduced * A Monitoring API has been introduced and implemented for Cassandra * The API URI format has been changed to split logically different functionality and simplify deployment * Updated documentation [2] A number of bugs have been fixed as well [1]. With best regards, Ilya Sviridov [1] https://launchpad.net/magnetodb/kilo/kilo-1 [2] https://magnetodb.readthedocs.org/en/2015.1.0b1/ ___ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack@lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
[openstack-dev] [MagnetoDB] IRC weekly meeting minutes 30-10-2014
Hello team, Thank you for coming to the meeting today. You can find the meeting minutes and logs below [1][2][3]. Please note that next week's meeting is canceled due to the summit.

[1] http://eavesdrop.openstack.org/meetings/magentodb/2014/magentodb.2014-10-30-14.00.html
[2] http://eavesdrop.openstack.org/meetings/magentodb/2014/magentodb.2014-10-30-14.00.txt
[3] http://eavesdrop.openstack.org/meetings/magentodb/2014/magentodb.2014-10-30-14.00.log.html

Thank you, Ilya Sviridov isviridov @ FreeNode

Meeting started by isviridov at 14:00:52 UTC (full logs [3]).

Meeting summary
1. Action items (isviridov, 14:03:42)
   - https://review.openstack.org/#/c/126335/ (isviridov, 14:05:49)
2. Kilo roadmap, https://etherpad.openstack.org/p/kilo-crossproject-summit-topics (isviridov, 14:12:29)
   - https://etherpad.openstack.org/p/magnetodb-kilo-roadmap (ikhudoshyn, 14:13:11)
   - ACTION: dukhlov: data encryption support blueprint (14:31:06)
   - ACTION: ikhudoshyn: file a bug about DynamoDB version support documentation (14:35:25)
   - ACTION: isviridov, ikhudoshyn: clarify roadmap item (14:51:48)
3. Design session topics, https://etherpad.openstack.org/p/magnetodb-kilo-design-summit (isviridov, 14:51:51)
4. Next meeting (isviridov, 14:53:18)
   - Next week's meeting is canceled due to the summit (14:53:44)

Meeting ended at 15:01:15 UTC (full logs [3]).

Action items
1. dukhlov: data encryption support blueprint
2. ikhudoshyn: file a bug about DynamoDB version support documentation
3. isviridov, ikhudoshyn: clarify roadmap item

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [MagnetoDB] MagnetoDB Juno release announce
Hello openstackers, The MagnetoDB team is proud to announce the release of the Juno milestone [1]. Thanks to everyone who participated and contributed! [1] https://launchpad.net/magnetodb/juno/2014.2 -- Ilya Sviridov isviridov @ FreeNode
[openstack-dev] [MagnetoDB] PTL Candidacy
Hello openstackers, I'd like to announce my candidacy as PTL of the MagnetoDB [1][2][3] project. As PTL of MagnetoDB I'll continue my work on building a great environment for contributors and on making MagnetoDB a well-known, great software product. [1] https://launchpad.net/magnetodb [2] http://stackalytics.com/report/contribution/magnetodb/90 [3] http://stackalytics.com/?release=juno&metric=commits&project_type=stackforge&module=magnetodb-group Thank you, Ilya Sviridov isviridov @ FreeNode
[openstack-dev] [MagnetoDB] IRC weekly meeting minutes 02-10-2014
Hello team, Thank you for attending the meeting today. I'm putting the meeting minutes and links to the logs here [1][2][3]. As usual, the meeting agenda is free to extend [4].

[1] http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.html
[2] http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.txt
[3] http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html
[4] https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda

Regards, Ilya Sviridov isviridov @ FreeNode

Meeting summary
1. Go through action items (isviridov, 13:04:06)
   - ACTION: dukhlov, ikhudoshyn: review spec for https://blueprints.launchpad.net/magnetodb/+spec/monitoring-health-check (13:09:54)
   - https://github.com/openstack/nova-specs (13:25:45)
   - https://review.openstack.org/#/q/status:open+openstack/nova-specs,n,z (13:25:56)
   - http://docs-draft.openstack.org/41/125241/1/check/gate-nova-specs-docs/e966557/doc/build/html/specs/juno/add-all-in-list-operator-to-extra-spec-ops.html (13:26:12)
   - ACTION: ikhudoshyn, dukhlov: review https://wiki.openstack.org/wiki/MagnetoDB/specs/rbac (13:29:28)
   - ACTION: isviridov: start creating a spec repo like https://github.com/openstack/nova-specs (13:29:59)
2. Support and enforce user roles defined in Keystone (isviridov, 13:30:28)
   - https://blueprints.launchpad.net/magnetodb/+spec/support-roles (13:30:33)
3. Monitoring: healthcheck HTTP request (isviridov, 13:31:39)
   - https://blueprints.launchpad.net/magnetodb/+spec/monitoring-health-check (13:31:47)
4. Monitoring API (isviridov, 13:32:34)
   - https://blueprints.launchpad.net/magnetodb/+spec/monitoring-api (13:32:46)
   - https://wiki.openstack.org/wiki/MagnetoDB/specs/monitoring-api (ikhudoshyn, 13:41:36)
   - ACTION: ominakov: describe the security impact at https://wiki.openstack.org/wiki/MagnetoDB/specs/monitoring-api (13:44:02)
5. Migrate the MagnetoDB API to the pecan lib (isviridov, 13:55:32)
   - https://blueprints.launchpad.net/magnetodb/+spec/migration-to-pecan (13:55:40)
6. Open discussion (isviridov, 13:59:47)

Meeting ended at 14:00:28 UTC (full logs [3]).

Action items
1. dukhlov, ikhudoshyn: review spec for https://blueprints.launchpad.net/magnetodb/+spec/monitoring-health-check
2. ikhudoshyn, dukhlov: review https://wiki.openstack.org/wiki/MagnetoDB/specs/rbac
3. isviridov: start creating a spec repo like https://github.com/openstack/nova-specs
4. ominakov: describe the security impact at https://wiki.openstack.org/wiki/MagnetoDB/specs/monitoring-api
Re: [openstack-dev] [MagnetoDB] PKI tokens size performance impact
Hello Aleksandr, Thank you for your efforts and for sharing this. Looking more closely at the figures, I can assume that a lightweight session won't help us a lot but will introduce additional complexity. So, I'm marking the BP as Obsolete. Ilya Sviridov isviridov @ FreeNode

On Mon, Sep 29, 2014 at 1:21 PM, Aleksandr Chudnovets achudnov...@mirantis.com wrote: Hello team, As discussed at the IRC meeting yesterday, I'm glad to share the results of my testing of the performance impact of PKI token validation. My research is connected with a BP [1] about adding a lightweight session to MDB to improve overall performance. For my tests I used PKI tokens with and without the service catalog. (Actually, PKI tokens are the only option for MagnetoDB, because MagnetoDB was designed to handle a huge number of requests per second.) Here are my results: - for lightweight requests, like list_tables, we can get a 5%-8% performance boost using tokens without the service catalog; - for big and slow requests, like batch_write, the performance boost is smaller; - disabling keystone support in api-paste gives us a 20% performance boost for PKI tokens :) So it can be good practice for MagnetoDB clients to use PKI tokens without the service catalog. Another conclusion I can make: PKI token validation works fast enough, and adding an additional session mechanism won't give a big performance boost. Please share your views. [1] https://blueprints.launchpad.net/magnetodb/+spec/light-weight-session Thanks, Aleksandr Chudnovets
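Aleksandr's numbers are plausible from first principles: a PKI token ships its entire body in every request's X-Auth-Token header, so a catalog-free token means fewer bytes per request and less decoding work in the auth middleware. A toy sketch of why (all field values below are made up, and this is not MagnetoDB's actual token format):

```python
import json

# Toy illustration only: field names mimic a Keystone v2-era token body,
# not MagnetoDB's real payloads. A PKI token embeds this whole document
# in every request header, so dropping the service catalog shrinks what
# has to be shipped and decoded on every call.
token_with_catalog = {
    "access": {
        "token": {"id": "...", "expires": "2014-09-30T12:00:00Z"},
        "user": {"name": "demo", "roles": [{"name": "member"}]},
        "serviceCatalog": [
            {
                "type": "kv-store",
                "name": "magnetodb",
                "endpoints": [{"publicURL": "http://mdb.example.com:8480/v1/data"}],
            },
            # a real catalog lists every service endpoint in the cloud
        ],
    }
}

token_without_catalog = {
    "access": {
        "token": {"id": "...", "expires": "2014-09-30T12:00:00Z"},
        "user": {"name": "demo", "roles": [{"name": "member"}]},
        # serviceCatalog omitted, as with catalog-free tokens
    }
}

size_with = len(json.dumps(token_with_catalog))
size_without = len(json.dumps(token_without_catalog))
print(size_with, size_without)  # the catalog-free body is smaller
```

The gap grows with the number of services deployed, which is why the boost shows up most on cheap calls like list_tables, where header handling is a larger fraction of total request time.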
[openstack-dev] [MagnetoDB] IRC weekly meeting minutes 25-09-2014
Hello team, Thank you for attending the meeting today. I'm putting the meeting minutes and links to the logs here [1][2]. Please note that we are meeting in #magnetodb because of a schedule conflict. The meeting agenda is free to update [3].

[1] http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html
[2] http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.txt
[3] https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda

Meeting summary
1. Minutes from the last meeting: http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-18-13.01.html (isviridov, 13:02:11)
2. Go through action items (isviridov, 13:02:35)
   - https://wiki.openstack.org/wiki/MagnetoDB/specs/async-schema-operations (ikhudoshyn, 13:03:38)
   - https://review.openstack.org/#/c/122404/ (ikhudoshyn, 13:07:23)
   - ACTION: provide numbers on the performance impact of big PKI tokens in the ML (13:09:01)
3. Asynchronous table creation and removal (isviridov, 13:09:25)
4. Monitoring API (isviridov, 13:16:26)
   - https://blueprints.launchpad.net/magnetodb/+spec/monitoring-api (13:18:28)
5. Lightweight session for authorization (isviridov, 13:24:48)
6. Review tempest tests and move to the stable test dir (isviridov, 13:33:49)
   - https://blueprints.launchpad.net/magnetodb/+spec/review-tempest-tests (13:35:09)
7. Monitoring: healthcheck HTTP request (isviridov, 13:41:16)
   - AGREED: file missed tests as bugs (13:42:02)
   - ACTION: aostapenko: write a spec about healthcheck (13:46:00)
8. Log management (isviridov, 13:46:16)
   - https://blueprints.launchpad.net/magnetodb/+spec/log-rotating (13:46:23)
   - AGREED: put log rotation configs in the mdb config; no separate logging config (13:54:35)
9. Open discussion (isviridov, 13:56:05)
   - https://blueprints.launchpad.net/magnetodb/+spec/oslo-notify (ikhudoshyn, 14:00:13)
   - ACTION: ikhudoshyn: write a spec for migration to oslo.messaging.notify (14:01:04)
   - ACTION: isviridov: look at how to create a magnetodb-spec repo (14:02:12)
   - ACTION: ajayaa: write a spec for RBAC (14:03:43)

Meeting ended at 14:07:00 UTC (full logs [1]).

Action items
1. provide numbers on the performance impact of big PKI tokens in the ML
2.
[openstack-dev] [MagnetoDB] Tomorrow IRC meeting
Hello stackers, The MagnetoDB team is having an IRC meeting tomorrow at 13:00 UTC. The agenda can be found here [1]. Feel free to join and add items to the agenda. [1] https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda Have a nice day, Ilya Sviridov isviridov @ FreeNode
[openstack-dev] [MagnetoDB] Core developer nomination
Hello magnetodb contributors, I'm glad to nominate Charles Wang to the core developers of MagnetoDB. He is the top non-core reviewer [1], implemented notifications [2] in mdb, and made great progress with performance, stability, and scalability testing of MagnetoDB. [1] http://stackalytics.com/report/contribution/magnetodb/90 [2] https://blueprints.launchpad.net/magnetodb/+spec/magnetodb-notifications Welcome to the team, Charles! Looking forward to your contributions. -- Ilya Sviridov isviridov @ FreeNode
[openstack-dev] [MagnetoDB] Blueprint approval process changes
Hello mdb contributors, As we are growing and are no longer the single solid team we used to be, it is important to take a step toward openness in technical decisions and agreements. The OpenStack community has great experience with this, but I'm not sure we need such a formal process, with spec approval via the gerrit workflow, right now. I suggest using the wiki for drafting, and IRC with the ML for discussion and approval. For this purpose a wiki section has been created [1]. The old drafts have been moved to the new location as well. Feel free to share your thoughts about it. [1] https://wiki.openstack.org/wiki/MagnetoDB/specs/ Thanks -- Ilya Sviridov isviridov @ FreeNode
[openstack-dev] [MagnetoDB] magnetodb juno-3 milestone release
Hello magnetians, Let me announce the magnetodb juno-3 milestone release [1]. The new version is available in the pypi repo as well [2]. [1] https://launchpad.net/magnetodb/juno/juno-3 [2] https://pypi.python.org/pypi?:action=display&name=magnetodb&version=2014.2.b3 -- Ilya Sviridov isviridov @ FreeNode
[openstack-dev] [MagnetoDB] python-magnetodbclient 1.0.1 released
Hello contributors, We have just released the first version of the python-magnetodbclient package [1]. It can be found on pypi here [2]. [1] https://launchpad.net/python-magnetodbclient/+milestone/1.0.1 [2] https://pypi.python.org/pypi?:action=display&name=python-magnetodbclient&version=1.0.1 -- Ilya Sviridov isviridov @ FreeNode
[openstack-dev] [MagnetoDB] OpenStack versioning adoption
Hello contributors, As already discussed in the #magnetodb IRC channel, we are making one more step toward community processes by adopting the OpenStack versioning approach. Development of the 2.0.x branch [1] has stopped and it is not going to be supported. The last released version is 2.0.5 [2]. The current scope will land in master as juno-3 [3]. [1] https://launchpad.net/magnetodb/2.0 [2] https://launchpad.net/magnetodb/2.0/2.0.5 [3] https://launchpad.net/magnetodb/+milestone/juno-3 -- Ilya Sviridov isviridov @ FreeNode
[openstack-dev] [MagnetoDB] PTL Candidacy
Hello openstackers, I'd like to announce my candidacy as PTL of the MagnetoDB [1] project. A few words about me: I'm a software developer at Mirantis. I've been working with OpenStack for a bit more than a year. At the beginning it was integration and customization of Heat for a customer. After that I contributed to Trove, and now I'm working on MagnetoDB 100% [2] of my time. I started MagnetoDB as an idea [3] at the Hong Kong summit, and now it is a project with a community from two major companies [4], with regular releases, a roadmap [5], and plans for incubation. As PTL of MagnetoDB I'll continue my work on building a great environment for contributors, making MagnetoDB a successful software product, and eventually getting it integrated into OpenStack. [1] https://launchpad.net/magnetodb [2] http://www.stackalytics.com/report/contribution/magnetodb/90 [3] http://lists.openstack.org/pipermail/openstack-dev/2013-October/017930.html [4] http://www.stackalytics.com/?release=all&metric=commits&project_type=stackforge&module=magnetodb [5] https://etherpad.openstack.org/p/magnetodb-juno-roadmap Thank you, Ilya Sviridov isviridov @ FreeNode
Re: [openstack-dev] [MagnetoDB] MagnetoDB events notifications
Hello Charles, The specification looks very good, and we can start with it for table and data item events. The only thing I'm not quite sure about is the implementation of table.usage information, I mean this part of the description: "Periodic usage notification generated by the magnetodb-table-usage-audit cron job". Until now we don't have a central place for running such a job for all tables, and I wouldn't want to introduce one; but having it run on each MDB API instance will multiply the number of notifications. Also, I'm not sure it will scale well with a growing count of tables. I believe that MDB should be polled for table.usage, as for other data describing the general state of the system, by the monitoring system. What do you think? Thanks, Ilya

On Tue, May 27, 2014 at 5:56 PM, Charles Wang charles_w...@symantec.com wrote: Hi Dmitriy, Thank you very much for your feedback. Although the MagnetoDB events notifications component has some similarities to Ceilometer, its scope is much narrower. We only plan to provide immediate and periodic notifications of MagnetoDB table/data item CRUD activities, based on Oslo Notification. There's no backend database storing them, and no query API for those notifications. They are different from Ceilometer metrics and events. In the future, when we integrate with Ceilometer, the MagnetoDB notifications will be fed into Ceilometer to collect Ceilometer metrics and/or generate Ceilometer events. Basically, Ceilometer will be a consumer of MagnetoDB notifications. I'll update the wiki further to define our scope more clearly, and possibly drop the word "events" to indicate we focus on notifications.
Regards, Charles

From: Dmitriy Ukhlov dukh...@mirantis.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Monday, May 26, 2014 at 7:28 AM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [MagnetoDB] MagnetoDB events notifications

Hi Charles! It looks to me like we are duplicating functionality of the Ceilometer project. Am I wrong? Have you considered Ceilometer integration for monitoring MagnetoDB?

On Fri, May 23, 2014 at 6:55 PM, Charles Wang charles_w...@symantec.com wrote: Folks, Please take a look at the initial draft of the MagnetoDB Events and Notifications wiki page: https://wiki.openstack.org/wiki/MagnetoDB/notification. Your feedback will be appreciated. Thanks, Charles Wang charles_w...@symantec.com

-- Best regards, Dmitriy Ukhlov Mirantis Inc.
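For context, the notifications discussed here ride on the standard Oslo notification envelope (message_id, publisher_id, event_type, priority, timestamp, payload). A minimal sketch of what a table-create message might carry; the event name, publisher id, and payload keys are illustrative guesses, not taken from the spec:

```python
import datetime
import uuid

def build_notification(event_type, tenant_id, payload):
    """Build an OpenStack-style notification envelope (illustrative shape,
    mirroring what oslo notifiers emit; not MagnetoDB's actual code)."""
    return {
        "message_id": str(uuid.uuid4()),
        "publisher_id": "magnetodb.api",  # hypothetical publisher id
        "event_type": event_type,         # e.g. "magnetodb.table.create"
        "priority": "INFO",
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "payload": dict(payload, tenant_id=tenant_id),
    }

# A hypothetical immediate notification for a table-create event:
msg = build_notification(
    "magnetodb.table.create",
    tenant_id="abc123",
    payload={"table_name": "users", "schema": {"hash_key": "id"}},
)
print(msg["event_type"])  # magnetodb.table.create
```

A consumer such as Ceilometer would subscribe to the notifications topic and translate envelopes like this into its own samples and events, which matches Charles's point that no storage or query API is needed on the MagnetoDB side.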
Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept
Team, I believe this is quite a complex task and we have to spend more time on the concept. So I've postponed it to the next 3.0 series; that is a month from now, and we can keep our focus on stabilization of the current version. Let us return to this discussion later. Thanks, Ilya Sviridov isviridov @ FreeNode

On Mon, May 5, 2014 at 4:06 AM, Illia Khudoshyn ikhudos...@mirantis.com wrote: I can't speak for others, but I'm personally not really happy with Charles and Dima's approach. As Charles pointed out (or hinted), QUORUM during a write may be equal to both EVENTUAL and STRONG, depending on the consistency level chosen for the later read. The same holds for QUORUM on a read. I'm afraid that this way MDB will become way too complex, and it will take more effort to predict its behaviour from the user's point of view. I'd rather prefer it to be as straightforward as possible: take full control and responsibility, or follow reasonable defaults. And please note, we're aiming for multi-DC support sooner or later, and for that we'll need more flexible consistency control, so a binary option would not be enough. Thanks

On Thu, May 1, 2014 at 12:10 AM, Charles Wang charles_w...@symantec.com wrote: Discussed further with Dima. Our consensus is to have the WRITE consistency level defined in the table schema, and READ consistency controlled at the data item level. This should satisfy our use cases for now. For example, a user-defined table has Eventual Consistency (Quorum). After the user writes data using the consistency level defined in the table schema, when the user tries to read the data back asking for Strong consistency, MagnetoDB can do a READ Eventual Consistency (Quorum) to satisfy the user's Strong consistency requirement.
Thanks, Charles

From: Charles Wang charles_w...@symantec.com Date: Wednesday, April 30, 2014 at 10:19 AM To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org, Illia Khudoshyn ikhudos...@mirantis.com Cc: Keith Newstadt keith_newst...@symantec.com Subject: Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept

Sorry for being late to the party. Since we mostly follow DynamoDB, it makes sense not to deviate too much from DynamoDB's consistency model. From what I've read about DynamoDB, READ consistency is defined to be either strong or eventual. From the API reference: "ConsistentRead (boolean): If set to true, then the operation uses strongly consistent reads; otherwise, eventually consistent reads are used. Strongly consistent reads are not supported on global secondary indexes. If you query a global secondary index with ConsistentRead set to true, you will receive an error message." (http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html) WRITE consistency is not clearly defined anywhere. From Werner Vogels' description, it seems writes are replicated across availability zones/data centers synchronously; I guess that inside a data center writes are replicated asynchronously. And the API doesn't allow the user to specify a WRITE consistency level. http://www.allthingsdistributed.com/2012/01/amazon-dynamodb.html Considering the above factors and Cassandra's capabilities, I propose we use the following model.
READ:
- Strong consistency (synchronously read from all replicas; maps to the Cassandra READ ALL consistency level)
- Eventual consistency (quorum read; maps to Cassandra READ QUORUM)
- Weak consistency (not in DynamoDB; maps to Cassandra READ ONE)

WRITE:
- Strong consistency (synchronously replicate to all; maps to the Cassandra WRITE ALL consistency level)
- Eventual consistency (quorum write; maps to Cassandra WRITE QUORUM)
- Weak consistency (not in DynamoDB; maps to Cassandra WRITE ANY)

For conditional writes (conditional putItem/deleteItem), only strong and eventual consistency should be supported. Thoughts? Thanks, Charles

From: Dmitriy Ukhlov dukh...@mirantis.com Reply-To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Date: Tuesday, April 29, 2014 at 10:43 AM To: Illia Khudoshyn ikhudos...@mirantis.com Cc: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept

Hi Illia, WEAK/QUORUM instead of true/false is OK for me. But we also have STRONG. What does STRONG mean? In the current concept we are using QUORUM and say that it is strong. I guess it is confusing (at least for me) and can have different behavior for different backends. I believe that from the user's point of view only 4 use cases exist: write and read
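The point debated in this thread, that a QUORUM write is "strong" or "eventual" only relative to the later read, follows from the standard replica-overlap rule: a read is guaranteed to see the latest write when the acknowledged replica counts satisfy R + W > N. A toy check, using Cassandra-style level names ("ANY" is omitted since it gives no overlap guarantee at all):

```python
def replicas(level, n):
    """Number of replicas that must acknowledge at a given consistency level."""
    return {"ONE": 1, "QUORUM": n // 2 + 1, "ALL": n}[level]

def read_is_strong(n, write_level, read_level):
    """True when the write and read replica sets must intersect (R + W > N),
    i.e. the pair of levels yields strongly consistent reads."""
    return replicas(write_level, n) + replicas(read_level, n) > n

# With 3 replicas, QUORUM write + QUORUM read behaves as strong consistency:
print(read_is_strong(3, "QUORUM", "QUORUM"))  # True
# ...while QUORUM write + ONE read is merely eventual:
print(read_is_strong(3, "QUORUM", "ONE"))  # False
```

This is exactly why a per-table WRITE level combined with a per-request READ level can still deliver a strong read on demand, as in Charles and Dima's example above.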
[openstack-dev] [MagnetoDB] Design session followup
Hello everybody, Thank you for coming to the MagnetoDB design session. It is very important to hear your feedback. If you missed the session, you can look at the presentation here: http://www.slideshare.net/zaletniy/openstack-magnetodb-atlanta-2014-keyvalue-storage-openstack-usecases A demo script for the Postman REST client (https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm) is available at https://www.getpostman.com/collections/b50a9fda5560fedc4a16 If you have any questions, feel free to catch me during the design summit. Ilya Sviridov isviridov @ FreeNode
Re: [openstack-dev] Point of Contact Request for MagnetoDB
Hello Tiffany, Thank you for your interest in MagnetoDB. It is great to see you here! My name is Ilya Sviridov and I'm leading the project. There is a wiki page: https://wiki.openstack.org/wiki/MagnetoDB. MagnetoDB is under active development right now, so the page is a bit outdated. Also take a look at our screencast: http://www.mirantis.com/blog/introducing-magnetodb-nosql-database-service-openstack/ If you have any questions, feel free to ask on the mailing list with the [MagnetoDB] tag or in the #magnetodb IRC channel. Are you planning to attend the Atlanta Summit? We are having the design session on *Tuesday*: http://junodesignsummit.sched.org/event/c6474b1697193bcb88598209fe929f93#.U20Vgfl5N-c On Fri, May 9, 2014 at 10:10 AM, Mathews, Tiffany J. (LARC-E301)[SCIENCE SYSTEMS AND APPLICATIONS, INC] tiffany.j.math...@nasa.gov wrote: I am interested in establishing an expert POC for questions and concerns regarding MagnetoDB, as I am working on creating a technology repository for one of the NASA Data Centers to identify and track technologies that we may not be currently using but would like to consider for potential future use. MagnetoDB is a technology that we are interested in learning more about, especially with regard to security. We are also interested in seeing if there are any white papers or demonstrations that could help us better understand this technology. Any guidance is greatly appreciated! Tiffany Mathews ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [MagnetoDB] PyCharm Professional Edition licence
Hello MagnetoDB community, Thanks to JetBrains, we have a PyCharm Professional Edition licence for every MagnetoDB project contributor. We have been issued an OS license for our project. The license key should arrive in your email in a separate message shortly. Please feel free to share this key with other project contributors (via secured channels only: please do not use a public forum or mailing list to share license keys). If you are interested, please contact me via e-mail or IRC. Ilya Sviridov isviridov @FreeNode ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] MagnetoDB CLI client
Hello Andrey, Great! Looking closer at the blueprint, I've realized that the parameter naming is confusing. I would suggest using a --request-file parameter instead of the --description-file used now. Also, I believe that table-list will be the most popular call, and it has only two parameters, so it would be better to avoid JSON for it in the CLI and pass all info via the command line, like:

magnetodb table-list --exclusive-start-table-name table_1 --count 10

Probably we also have to think about the default behavior when no JSON is passed, or let the required arguments be passed as CLI arguments for easier usage. Scan looks like a good example. BTW: we have a dedicated mail prefix in order not to spam everybody, but only the audience interested in the MagnetoDB project :) so just add [openstack-dev][MagnetoDB] at the beginning of the email subject next time. Thank you, Ilya

On Fri, Apr 25, 2014 at 4:29 PM, ANDREY OSTAPENKO (CS) andrey_ostape...@symantec.com wrote: Hello, everyone! Now I'm starting to implement the CLI client for the KeyValue Storage service MagnetoDB. I'm going to use the heat approach for CLI commands, e.g. heat stack-create --template-file FILE, because we have too many parameters to pass to the command. For example, the table creation command:

magnetodb create-table --description-file FILE

The file will contain JSON data, e.g.:

{
    "table_name": "data",
    "attribute_definitions": [
        {"attribute_name": "Attr1", "attribute_type": "S"},
        {"attribute_name": "Attr2", "attribute_type": "S"},
        {"attribute_name": "Attr3", "attribute_type": "S"}
    ],
    "key_schema": [
        {"attribute_name": "Attr1", "key_type": "HASH"},
        {"attribute_name": "Attr2", "key_type": "RANGE"}
    ],
    "local_secondary_indexes": [
        {
            "index_name": "IndexName",
            "key_schema": [
                {"attribute_name": "Attr1", "key_type": "HASH"},
                {"attribute_name": "Attr3", "key_type": "RANGE"}
            ],
            "projection": {"projection_type": "ALL"}
        }
    ]
}

Blueprint: https://blueprints.launchpad.net/magnetodb/+spec/magnetodb-cli-client If you have any comments, please let me know.
Best regards, Andrey Ostapenko ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
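A CLI built around a --request-file/--description-file would load and sanity-check the JSON before calling the API. A minimal sketch, assuming hypothetical helper names (this is not the actual python-magnetodbclient implementation):

```python
import json

# Hypothetical sketch: load the create-table request file described in
# the thread and check that key_schema only references defined attributes.
def load_table_request(text):
    request = json.loads(text)
    for required in ("table_name", "attribute_definitions", "key_schema"):
        if required not in request:
            raise ValueError("missing required field: %s" % required)
    defined = {a["attribute_name"] for a in request["attribute_definitions"]}
    for key in request["key_schema"]:
        if key["attribute_name"] not in defined:
            raise ValueError("key attribute %s is not defined"
                             % key["attribute_name"])
    return request

# Example request file content, following the structure from the email.
example = """{
  "table_name": "data",
  "attribute_definitions": [
    {"attribute_name": "Attr1", "attribute_type": "S"},
    {"attribute_name": "Attr2", "attribute_type": "S"}
  ],
  "key_schema": [
    {"attribute_name": "Attr1", "key_type": "HASH"},
    {"attribute_name": "Attr2", "key_type": "RANGE"}
  ]
}"""
```

Validating early in the client keeps malformed requests from ever reaching the service, which matches the thread's goal of making the JSON-file workflow less error-prone.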
Re: [openstack-dev] [MagnetoDB] Confusing Cassandra behavior
Hi Dmitriy, I have taken a look and found out that the cassandra-cli output after UPDATE and INSERT operations is different. On first update it is:
update test SET value=null where id='1';
[default@test] list test;
---
RowKey: 1
On first insert:
[default@test] list test;
---
RowKey: 1
=> (name=, value=, timestamp=1398166276501000)
On Wed, Apr 23, 2014 at 8:56 AM, Dmitriy Ukhlov dukh...@mirantis.com wrote: Hello everyone! Today I faced unexpected Cassandra behavior. Please keep in mind that if you execute an UPDATE query and set all fields to null (or to empty collections for collection types), it can delete your record, but it can also only set the values to null and keep the record alive. It depends on how the record was created: with an INSERT query or an UPDATE query. Please take a look at the reproduce steps here: https://gist.github.com/dukhlov/11195881. FYI: Cassandra 2.0.7 has been released. As we know, there were some fixes for conditional operations and batch operations which are necessary for us. So it would be nice to update the MagnetoDB devstack to use Cassandra 2.0.7. -- Best regards, Dmitriy Ukhlov Mirantis Inc. ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [MagnetoDB] Confusing Cassandra behavior
Sorry, the previous message was sent by mistake. So, I have taken a look and found out that the cassandra-cli output after UPDATE and INSERT operations is different. On first update it is:
update test SET value=null where id='1';
[default@test] list test;
---
RowKey: 1
On first insert:
insert into test (id, value) VALUES('1', null);
[default@test] list test;
---
RowKey: 1
=> (name=, value=, timestamp=1398166276501000)
It seems those operations are handled in different ways in the CQL part or something. I had no chance to look closer today, but I hope it helps. About the Cassandra version update, I think it is time and we have to do it. For now I've filed a BP for that: https://blueprints.launchpad.net/magnetodb/+spec/update-cassandra-to-2.0.7 - let us discuss it. With best regards, Ilya Sviridov On Wed, Apr 23, 2014 at 2:41 PM, Ilya Sviridov isviri...@mirantis.com wrote: Hi Dmitriy, I have taken a look and found out that the cassandra-cli output after UPDATE and INSERT operations is different. On first update it is:
update test SET value=null where id='1';
[default@test] list test;
---
RowKey: 1
On first insert:
[default@test] list test;
---
RowKey: 1
=> (name=, value=, timestamp=1398166276501000)
On Wed, Apr 23, 2014 at 8:56 AM, Dmitriy Ukhlov dukh...@mirantis.com wrote: Hello everyone! Today I faced unexpected Cassandra behavior. Please keep in mind that if you execute an UPDATE query and set all fields to null (or to empty collections for collection types), it can delete your record, but it can also only set the values to null and keep the record alive. It depends on how the record was created: with an INSERT query or an UPDATE query. Please take a look at the reproduce steps here: https://gist.github.com/dukhlov/11195881. FYI: Cassandra 2.0.7 has been released. As we know, there were some fixes for conditional operations and batch operations which are necessary for us. So it would be nice to update the MagnetoDB devstack to use Cassandra 2.0.7. -- Best regards, Dmitriy Ukhlov Mirantis Inc.
___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
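The behavior Dmitriy describes follows from CQL's row marker: an INSERT also writes a liveness marker for the primary key, while an UPDATE writes only the touched columns. A rough conceptual model of this, as a deliberately simplified Python simulation (an assumption about the semantics, not Cassandra code):

```python
# Illustrative simulation of the INSERT-vs-UPDATE null behavior discussed
# in the thread. Simplified model; not real Cassandra internals.
class Table:
    def __init__(self):
        self.rows = {}  # key -> {"marker": bool, "cols": {name: value}}

    def insert(self, key, **cols):
        # CQL INSERT also writes a row marker for the primary key.
        row = self.rows.setdefault(key, {"marker": False, "cols": {}})
        row["marker"] = True
        self._set(row, cols)

    def update(self, key, **cols):
        # CQL UPDATE writes only the columns, never a row marker.
        row = self.rows.setdefault(key, {"marker": False, "cols": {}})
        self._set(row, cols)

    @staticmethod
    def _set(row, cols):
        for name, value in cols.items():
            if value is None:
                row["cols"].pop(name, None)  # setting null deletes the cell
            else:
                row["cols"][name] = value

    def exists(self, key):
        # A row is visible if it has a row marker or at least one live cell.
        row = self.rows.get(key)
        return bool(row) and (row["marker"] or bool(row["cols"]))
```

In this model, a row created only by UPDATE disappears once every column is set to null, while a row created by INSERT stays alive with no columns, which matches the two cassandra-cli outputs quoted above.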
[openstack-dev] [Openstack] magnetodb 2.0.2 released
Hello openstackers, The MagnetoDB community is proud to announce the release of the magnetodb-2.0.2 milestone. It is publicly available here:
https://pypi.python.org/pypi/magnetodb/2.0.2
http://tarballs.openstack.org/magnetodb/
The version contains the following features and major fixes:
+ devstack integration
+ a devstack gate for MagnetoDB has been introduced and added to the development process
+ BatchWrite API implemented
+ error handling API defined and implemented
+ tempest coverage improvement
* redesigned index implementation for better performance
More details can be found here: https://launchpad.net/magnetodb/2.0/2.0.2 -- MagnetoDB community #magnetodb at FreeNode ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [OpenStack-Infra][Ceilometer][MagnetoDB] HBase database in devstack
Hello infra and devstack, I would like to start a thread about adding NoSQL database support to devstack for development and gating purposes. Currently the MagnetoDB project needs HBase and Cassandra for running tempest tests. We have implemented Cassandra as part of the MagnetoDB devstack integration (https://github.com/stackforge/magnetodb/tree/master/contrib/devstack) and have started working on HBase now (https://blueprints.launchpad.net/magnetodb/+spec/devstack-add-hbase). On the other side, HBase and Cassandra are supported as database backends in Ceilometer, and it can be useful for development and gating to have them in devstack. So, it looks like a common task for both projects that will eventually be integrated into devstack, so I'm suggesting we start that discussion in order to push ahead with it. Cassandra and HBase are both Java applications, so they come with a JDK as a dependency. It is proven that we can use the OpenJDK available in the Debian repos. The databases themselves are distributed in two ways:
- as Debian packages built and hosted by software vendors
  HBase: deb http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x HDP main
  Cassandra: deb http://debian.datastax.com/community stable main
- as tar.gz hosted on the Apache Download Mirrors
  HBase: http://www.apache.org/dyn/closer.cgi/hbase/
  Cassandra: http://www.apache.org/dyn/closer.cgi/cassandra/
The distributions provided by the Apache Foundation look more reliable, but I have heard that third-party sources may not be stable enough to introduce as dependencies in devstack gating. I have registered a BP in the devstack project about adding HBase (https://blueprints.launchpad.net/devstack/+spec/add-hbase-to-devstack) and we have started working on it. Please share your thoughts about it to help make it real. Thank you. Have a nice day, Ilya Sviridov isviridov @ FreeNode ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[OpenStack-Infra] Meetbot for #magnetodb channel
Hello Infra team, While working on MagnetoDB on a daily basis, we have faced the necessity of the OpenStack meetbot for our project channel. Having daily scrum meetings, we would love to use it for tracking action items and making them available for everyone. Is it possible to add the meetbot to the #magnetodb IRC channel at FreeNode? Feel free to contact me via IRC or e-mail. Thanks Ilya Sviridov isviridov @ FreeNode ___ OpenStack-Infra mailing list OpenStack-Infra@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
[openstack-dev] [MagnetoDB] Weekly meeting summary
Hello openstackers, You can find the MagnetoDB team weekly meeting notes below.

Meeting summary
1. General project status overview (isviridov, 13:02:15)
2. MagnetoDB API Draft status (isviridov, 13:08:37)
   1. https://wiki.openstack.org/wiki/MagnetoDB/api (isviridov, 13:09:28)
   2. ACTION: achudnovets start ML thread with API discussion (isviridov, 13:13:19)
   3. https://launchpad.net/magnetodb/+milestone/2.0.1 (isviridov, 13:14:24)
3. Third party CI status (isviridov, 13:14:41)
   1. https://blueprints.launchpad.net/magnetodb/+spec/third-party-ci (isviridov, 13:16:39)
   2. ACTION: achuprin discuss with infra the best way for our CI (isviridov, 13:27:36)
   3. ACTION: achuprin create wiki page with CI description (isviridov, 13:28:01)
4. Support of database backends other than Cassandra. Support of HBase (isviridov, 13:29:24)
   1. ACTION: isviridov ikhudoshyn start mail thread about evaluation of other databases as a backend for MagnetoDB (isviridov, 13:38:16)
5. Devstack integration status (isviridov, 13:38:35)
   1. https://blueprints.launchpad.net/magnetodb/+spec/devstack-integration (isviridov, 13:39:07)
   2. https://github.com/pcmanus/ccm (vnaboichenko, 13:40:13)
   3. ACTION: vnaboichenko write devstack integration guide in OpenStack wiki (isviridov, 13:42:15)
6. Weekly meeting time slot (isviridov, 13:42:33)
   1. ACTION: isviridov find a better time slot for the meeting (isviridov, 13:44:47)
   2. ACTION: isviridov start ML voting on meeting time (isviridov, 13:45:05)
7. Open discussion (isviridov, 13:45:31)

For more details, please follow the links:
Minutes: http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-03-13-13.01.html
Minutes (text): http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-03-13-13.01.txt
Log: http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-03-13-13.01.log.html
Have a nice day, Ilya Sviridov ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [MagnetoDB] Weekly IRC meeting schedule
Hello magnetodb contributors, I would like to suggest weekly IRC meetings on Thursdays, 1300 UTC. More technical details later. Let us vote by replying to this email. With best regards, Ilya Sviridov ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [MagnetoDB] Weekly IRC meeting schedule
Any other opinions? On Thu, Mar 6, 2014 at 4:31 PM, Illia Khudoshyn ikhudos...@mirantis.comwrote: 1300UTC is fine for me On Thu, Mar 6, 2014 at 4:24 PM, Ilya Sviridov isviri...@mirantis.comwrote: Hello magnetodb contributors, I would like to suggest weekly IRC meetings on Thursdays, 1300 UTC. More technical details later. Let us vote by replying this email. With best regards, Ilya Sviridov ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best regards, Illia Khudoshyn, Software Engineer, Mirantis, Inc. 38, Lenina ave. Kharkov, Ukraine www.mirantis.com http://www.mirantis.ru/ www.mirantis.ru Skype: gluke_work ikhudos...@mirantis.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [MagnetoDB] a key-value storage service for OpenStack. The pilot implementation is available now
Hello openstackers, I'm excited to share that we have finished work on the pilot implementation of MagnetoDB, a key-value storage service for OpenStack. During pilot development we reached the following goals:
- evaluated the maturity of Python Cassandra clients
- evaluated the maturity of the Python web stack for handling high-availability and high-load scenarios
- found a number of performance bottlenecks and analyzed approaches to address them
- drafted and checked the service architecture
- drafted and checked deployment procedures
The API implementation is compatible with the AWS DynamoDB API, and the pilot version already supports the basic operations with tables and items. We tested with the boto library and the Java AWS SDK, and MagnetoDB works seamlessly with both. Currently we are working on a RESTful API that will follow OpenStack tenets in addition to the current AWS DynamoDB API. We have chosen Cassandra for the pilot as the most suitable storage for the service functionality. However, the cost of ownership and administration of an additional type of software can be a decisive factor in choosing a solution. That is why backend database pluggability is important. Currently we are evaluating HBase as one of the alternatives, since Hadoop-powered analytics often co-exist with OpenStack installations or run on top of them, as with Savanna. You can find more details on MagnetoDB along with the screencast on the Mirantis blog [1]. We will be publishing more details on each area of the findings during the course of the next few weeks. Any questions and ideas are very welcome. For those who are interested in contributing, you can always find us on #magnetodb.
Links [1] http://www.mirantis.com/blog/introducing-magnetodb-nosql-database-service-openstack/ [2] https://github.com/stackforge/magnetodb [3] https://wiki.openstack.org/wiki/MagnetoDB [4] https://launchpad.net/magnetodb With best regards, Ilya Sviridov ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Trove][Savanna][Murano] Unified Agent proposal discussion at Summit
Thinking in that direction, the Trove team had a design session about the current status of the agent in the project. Just take a look: https://etherpad.openstack.org/p/TroveGuestAgents With best regards, Ilya Sviridov http://www.mirantis.ru/ On Tue, Nov 12, 2013 at 4:29 PM, Igor Marnat imar...@mirantis.com wrote: Just to summarize, there was interest expressed from the Murano, Trove, Savanna and Heat teams in regard to the implementation of this unified agent. Nothing specific was decided except the suggestion to keep pushing. I'd suggest we keep pushing this way: - create an etherpad - each team interested in having a unified agent writes detailed use cases for an agent to this etherpad - based on these use cases we can generate very specific and detailed requirements for the agent - based on these requirements we can agree on an architecture and approach to implementation. Teams? Regards, Igor Marnat On Tue, Nov 5, 2013 at 6:10 AM, Alexander Tivelkov ativel...@mirantis.com wrote: Hi guys, Recently we had several discussions about the guest VM agents: lots of projects have similar needs to run some special logic on the side of guest virtual machines. As far as I know, there are such agents in Savanna, Trove, Murano and maybe some other projects as well. The obvious idea is to unite the efforts and have a unified solution which may satisfy everybody's needs. We've discussed this topic before with some of the teams, and got the promising-looking idea to create a kind of unified agent library and put it in Oslo or some other shared project. We've scheduled an unconference session at the Summit, this Friday at 3:10 pm. Let's continue discussing the idea there: we may gather the common requirements, discuss the basic design concepts etc. See you there! -- Kind Regards, Alexander Tivelkov Principal Software Engineer OpenStack Platform Product division Mirantis, Inc +7(495) 640 4904, ext 0236 +7-926-267-37-97(cell) Vorontsovskaya street 35 B, building 3, Moscow, Russia.
ativel...@mirantis.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Trove][Savanna][Murano] Unified Agent proposal discussion at Summit
Igor, it would be better to create another one to track the requirements for such an agent framework, since this etherpad is the official result of the design session. With best regards, Ilya Sviridov http://www.mirantis.ru/ On Tue, Nov 12, 2013 at 5:02 PM, Igor Marnat imar...@mirantis.com wrote: Ilya, that's cool! Mind if the Murano and Savanna teams join the same etherpad? Regards, Igor Marnat On Tue, Nov 12, 2013 at 6:58 PM, Ilya Sviridov isviri...@mirantis.com wrote: Thinking in that direction, the Trove team had a design session about the current status of the agent in the project. Just take a look: https://etherpad.openstack.org/p/TroveGuestAgents With best regards, Ilya Sviridov http://www.mirantis.ru/ On Tue, Nov 12, 2013 at 4:29 PM, Igor Marnat imar...@mirantis.com wrote: Just to summarize, there was interest expressed from the Murano, Trove, Savanna and Heat teams in regard to the implementation of this unified agent. Nothing specific was decided except the suggestion to keep pushing. I'd suggest we keep pushing this way: - create an etherpad - each team interested in having a unified agent writes detailed use cases for an agent to this etherpad - based on these use cases we can generate very specific and detailed requirements for the agent - based on these requirements we can agree on an architecture and approach to implementation. Teams? Regards, Igor Marnat On Tue, Nov 5, 2013 at 6:10 AM, Alexander Tivelkov ativel...@mirantis.com wrote: Hi guys, Recently we had several discussions about the guest VM agents: lots of projects have similar needs to run some special logic on the side of guest virtual machines. As far as I know, there are such agents in Savanna, Trove, Murano and maybe some other projects as well. The obvious idea is to unite the efforts and have a unified solution which may satisfy everybody's needs.
We've discussed this topic before with some of the teams, and got the promising-looking idea to create kind of unified agent library and put it in Oslo or some other shared project. We've scheduled an unconference session on the Summit, this Friday at 3:10 pm. Let's continue discussing the idea there: we may gather the common requirements, discuss the basic design concepts etc. See you there! -- Kind Regards, Alexander Tivelkov Principal Software Engineer OpenStack Platform Product division Mirantis, Inc +7(495) 640 4904, ext 0236 +7-926-267-37-97(cell) Vorontsovskaya street 35 B, building 3, Moscow, Russia. ativel...@mirantis.com ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] Proposal for the new service: MagnetoDB, DynamoDB API implementation for OpenStack
Hi, Openstackers, I would like to propose a new initiative to implement the AWS DynamoDB API for OpenStack. I believe this is the right moment for the OpenStack community to start the MagnetoDB discussion and design around such a service, as more data processing and management services, such as Savanna and Trove, are appearing in OpenStack, creating a sufficient basis for higher-level data APIs like the DynamoDB API. With Trove you can provision and manage different databases, including NoSQL databases, whose support will appear soon. It minimizes the maintenance process and turns database deployment and management into API calls. DynamoDB [1] is a well-known key-value NoSQL storage provided by Amazon with an HTTP interface. It uses JSON as its transfer data model. The data is stored as key-value, where the value is a list of attributes. An attribute's value can be String, Number, Binary, or sets of those types. Read consistency can be strong or eventual per request. Secondary indexes are also supported. Simplicity, reliability, and pleasant documentation have made it actively used in a lot of applications, but hosting such applications on OpenStack is impossible due to the absence of such storage. DynamoDB is already widely used and has its own niche [2] despite being closed and proprietary. Let me share the idea to implement a similar service which imitates the DynamoDB API [3], called MagnetoDB. MagnetoDB will fully implement the DynamoDB API, thus enabling cloud users to migrate their applications that use DynamoDB from AWS to OpenStack. For backend storage provisioning and management, the OpenStack DBaaS Trove will be used. The backend database should be pluggable, to provide flexibility in choosing the solution which best matches the existing or planned OpenStack installation technology stack.
MySQL is one of the obvious options, being the de-facto standard for use with OpenStack and being supported by Trove right away. However, users of DynamoDB expect high performance, scalability, and availability. Those can be achieved more easily by using one of the NoSQL solutions in MagnetoDB. Apache Cassandra [4] looks very suitable for that case, due to its tunable consistency, easy scalability, key-value data model, ring topology and other features that give us predictable high performance and fault tolerance. The Cassandra data model perfectly fits MagnetoDB needs. Moreover, support of NoSQL databases is on the Trove roadmap [5], and the first version of Cassandra provisioning is currently in progress [6]. The AWS SDK [7] can be used to query data. Provided by Amazon, it hides all the protocol stuff, giving a high-level API. In the Python world you can use the rather common boto library [8][9]. An initial draft of the more official proposal is available here: https://wiki.openstack.org/wiki/MagnetoDB Any contributions, comments and pieces of advice are very welcome. Also, I plan to give the lightning talk on Tuesday, November 5 at Expo Breakout Room 1. See you at the summit.
[1] http://aws.amazon.com/dynamodb
[2] http://aws.amazon.com/dynamodb/testimonials/
[3] http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/API.html
[4] http://cassandra.apache.org/
[5] https://blueprints.launchpad.net/trove/+spec/cassandra-db-support
[6] https://review.openstack.org/#/c/51884/
[7] http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/UsingAWSSDK.html
[8] https://github.com/boto/boto
[9] http://boto.readthedocs.org/en/latest/dynamodb2_tut.html
With best regards, Ilya Sviridov http://www.mirantis.ru/ ___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
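The data model the proposal targets - items as attribute maps typed String, Number, Binary, or sets of those - follows the public DynamoDB JSON convention. A minimal illustrative marshaller for that wire format (this sketches the general DynamoDB attribute-value convention, not MagnetoDB code):

```python
import base64

# Minimal illustration of the DynamoDB attribute-value wire convention
# described in the proposal: S (string), N (number), B (binary, base64),
# and SS/NS sets. Not MagnetoDB code.
def marshal_value(value):
    if isinstance(value, str):
        return {"S": value}
    if isinstance(value, bool):
        raise TypeError("booleans are not a DynamoDB scalar type here")
    if isinstance(value, (int, float)):
        return {"N": str(value)}  # numbers travel as strings
    if isinstance(value, bytes):
        return {"B": base64.b64encode(value).decode("ascii")}
    if isinstance(value, set):
        if all(isinstance(v, str) for v in value):
            return {"SS": sorted(value)}
        return {"NS": sorted(str(v) for v in value)}
    raise TypeError("unsupported attribute type: %r" % type(value))

def marshal_item(item):
    """Marshal a dict of attributes into DynamoDB-style typed JSON."""
    return {name: marshal_value(value) for name, value in item.items()}
```

For example, an item {"id": "user-1", "age": 30} marshals to {"id": {"S": "user-1"}, "age": {"N": "30"}}, which is the shape a DynamoDB-compatible API such as the proposed MagnetoDB would accept over HTTP.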
Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance
Totally agree; however, the current concept supposes working with type and version as different entities, even if they are attributes of one thing - the configuration. The reason for storing them as separate models can be cases when we are going to use them separately. It sounds really reasonable to keep it as one model, but another question comes to mind: how will 'list datastore_type' look? That is important because the API should be unambiguous. Following OpenStack tenets, each entity exposed via the API has an id and can be referenced by it. If we store the datastore as one entity, we are not able to query versions or types by their ids alone. But it is agreed as the API:
/{tenant_id}/datastore_types
/{tenant_id}/datastore_types/{datastore_type}/versions
/{tenant_id}/datastore_types/versions/{id}
So, with the current concept it seems better to keep version and type as separate entities in the database. With best regards, Ilya Sviridov http://www.mirantis.ru/ On Fri, Oct 25, 2013 at 10:25 PM, Nikhil Manchanda nik...@manchanda.me wrote: It seems strange to me to treat the datastore_type and version as two separate entities, when they aren't really independent of each other. (You can't really deploy a mysql type with a cassandra version, and vice versa, so why have separate datastore-list and version-list calls?) I think it's a better idea to store in the db (and list) actual representations of the datastore type/versions that an image we can deploy supports. Any disambiguation could then happen based on what entries actually exist here. Let me illustrate what I'm trying to get at with a few examples:

Database has:
id | type      | version | active
---+-----------+---------+-------
a  | mysql     | 5.6.14  | 1
b  | mysql     | 5.1.0   | 0
c  | postgres  | 9.3.1   | 1
d  | redis     | 2.6.16  | 1
e  | redis     | 2.6.15  | 1
f  | cassandra | 2.0.1   | 1
g  | cassandra | 2.0.0   | 0

Config specifies: default_datastore_id = a

1. trove-cli instance create ...
Just works - since nothing is specified, this uses the default_datastore_id from the config (mysql 5.6.14, a). No need for disambiguation.

2. trove-cli instance create --datastore_id e
The datastore_id specified always identifies a unique datastore type/version, so no other information is needed for disambiguation. (In this case redis 2.6.15, identified by e.)

3. trove-cli instance create --datastore_type postgres
The datastore_type in this case uniquely identifies postgres 9.3.1 (c), so no disambiguation is necessary.

4. trove-cli instance create --datastore_type cassandra
In this case, there is only one _active_ datastore with the given datastore_type, so no further disambiguation is needed and cassandra 2.0.1 (f) is uniquely identified.

5. trove-cli instance create --datastore_type redis
In this case, there are _TWO_ active versions of the specified datastore_type (2.6.15 and 2.6.16), so the call should return that further disambiguation _is_ needed.

6. trove-cli instance create --datastore_type redis --datastore_version 2.6.16
We have both datastore_type and datastore_version, and that uniquely identifies redis 2.6.16 (d). No further disambiguation is needed.

7. trove-cli instance create --datastore_type cassandra --version 2.0.0, or trove-cli instance create --datastore_id g
Here, we are attempting to deploy a datastore which is _NOT_ active, and this call should fail with an appropriate error message.

Cheers, -Nikhil

Andrey Shestakov writes: 2. it can be confusing coz it is not clear to what type a version belongs (possibly add a type field in version). Also, if you have a default type, then a specified version is recognized as a version of the default type (no lookup in version.datastore_type_id), but I think we can do a lookup in version.datastore_type_id before picking the default. 4. if a default version is needed, then it should be specified in the db, coz switching between versions can be frequent and restarting the service to reload the config every time is not good.
On 10/21/2013 05:12 PM, Tim Simpson wrote:

Thanks for the feedback, Andrey.

> 2. Got this case in irc, and decided to pass type and version together to avoid confusion.

I don't understand how allowing the user to pass only the version would confuse anyone. Could you elaborate?

> 3. Names of types and maybe versions can be good, but in the irc conversation this case was rejected; I can't remember the exact reason.

Hmm. Does anyone remember the reason for this?

> 4. Actually, the active field in a version marks it as the default within its type. Specifying a default version in the config can be useful if you have more than one active version in the default type.

If 'active' is allowed to be set for multiple rows of the 'datastore_versions' table, then it isn't a good substitute for the functionality I'm seeking, which is to allow operators to specify a *single* default version for each datastore_type in the database. I still think we should
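The id-addressable API argued for in this thread (separate datastore_types and versions resources) implies two models rather than one merged entity. A minimal, hypothetical illustration of why that matters (this is a sketch, not Trove's actual schema):

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# Hypothetical models: separate type/version entities keep every API
# resource addressable by its own id, as the agreed URIs require.

@dataclass
class DatastoreType:
    id: str
    name: str

@dataclass
class DatastoreVersion:
    id: str
    datastore_type_id: str
    version: str
    active: bool

TYPES: Dict[str, DatastoreType] = {
    "t1": DatastoreType("t1", "mysql"),
}
VERSIONS: Dict[str, DatastoreVersion] = {
    "a": DatastoreVersion("a", "t1", "5.6.14", True),
    "b": DatastoreVersion("b", "t1", "5.1.0", False),
}


def get_version(version_id: str) -> Optional[DatastoreVersion]:
    # Serves GET /{tenant_id}/datastore_types/versions/{id}: the version
    # id alone suffices, which a single merged entity cannot offer cleanly.
    return VERSIONS.get(version_id)


def list_versions(type_name: str) -> List[DatastoreVersion]:
    # Serves GET /{tenant_id}/datastore_types/{datastore_type}/versions.
    type_ids = {t.id for t in TYPES.values() if t.name == type_name}
    return [v for v in VERSIONS.values() if v.datastore_type_id in type_ids]
```

With one merged table, a "version id" and a "type id" would be the same key, and listing types without duplicates requires a DISTINCT over the merged rows; separate models avoid both issues.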
Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance
So, we have two places for configuration management: the database and the config file. The config file is for tuning all datastore type behavior during installation; the database is for all configuration that can change during the usage and administration of a Trove installation.

Database use cases:
- updated/custom image
- updated/custom packages
- activating/deactivating a datastore_type

Config file use cases:
- security group policy
- provisioning mechanism
- guest configuration parameters per database engine
- provisioning parameters, templates
- manager class
- ...

Suppose I need to register one more MySQL installation with the following customization:
- a custom heat template
- custom packages and an additional monitoring tool package
- a specific port opened on the instance for my monitoring tool

According to the current concept, should I add one more section in addition to the existing mysql one, like below?

[monitored_mysql]
mount_point=/var/lib/mysql
# 8080 is the port of my monitoring tool
trove_security_group_rule_ports = 3306, 8080
heat_template=/etc/trove/heat_templates/monitored_mysql.yaml
...

and put the additional packages into the database configuration?

With best regards,
Ilya Sviridov
http://www.mirantis.ru/

On Wed, Oct 23, 2013 at 9:37 PM, Michael Basnight mbasni...@gmail.com wrote:

On Oct 23, 2013, at 10:54 AM, Ilya Sviridov wrote:

Besides the strategy for selecting the default behavior, let me share my ideas on configuration management in Trove and how the datastore concept can help with it.

Initially there was only one database, and all configuration was in one config file. With the addition of new databases and the heat provisioning mechanism, we are introducing more options: not only assigning a specific image_id, but custom packages, heat templates, and probably specific strategies for working with security groups. Such needs already exist because we have a lot of optional things in the config, and any new feature is implemented with an eye to already existing legacy installations of Trove.
What actually is datastore_type + datastore_version? It is the model which glues all the bricks together, so let us use it for all the variable parts of the *service type* configuration. From the current config file:

# Trove DNS
trove_dns_support = False
# Trove Security Groups for Instances
trove_security_groups_support = True
trove_security_groups_rules_support = False
trove_security_group_rule_protocol = tcp
trove_security_group_rule_port = 3306
trove_security_group_rule_cidr = 0.0.0.0/0
#guest_config = $pybasedir/etc/trove/trove-guestagent.conf.sample
#cloudinit_location = /etc/trove/cloudinit
block_device_mapping = vdb
device_path = /dev/vdb
mount_point = /var/lib/mysql

All of that configuration can be moved to the datastore (some of it defined in heat templates) and be manageable by the operator whenever any default behavior should be changed. The Trove config then becomes specific to core functionality only.

It's fine for it to be in the config or the heat templates... I'm not sure it matters. What I would like to see is that things specific to each service be in their own config group in the configuration:

[mysql]
mount_point=/var/lib/mysql
...

[redis]
volume_support=False
...

and so on.

___ OpenStack-dev mailing list OpenStack-dev@lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
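The per-datastore sections proposed in this thread ([monitored_mysql], [mysql], [redis]) would be read like any other INI-style config group. A minimal illustration using Python's stdlib configparser (the section and option names are taken from the hypothetical snippets above, not from Trove's actual config):

```python
import configparser

# Hypothetical per-datastore config section, as sketched in the thread.
SAMPLE = """
[monitored_mysql]
mount_point = /var/lib/mysql
# 8080 is the port of the monitoring tool
trove_security_group_rule_ports = 3306, 8080
heat_template = /etc/trove/heat_templates/monitored_mysql.yaml
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

# Each datastore gets its own namespace; adding a new variant is just
# adding a new section, with no changes to the core options.
section = cfg["monitored_mysql"]
ports = [int(p) for p in section["trove_security_group_rule_ports"].split(",")]
```

In a real deployment the Trove services use oslo.config rather than raw configparser, but the grouping idea (one named option group per datastore) is the same.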
Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance
Besides the strategy for selecting the default behavior, let me share my ideas on configuration management in Trove and how the datastore concept can help with it.

Initially there was only one database, and all configuration was in one config file. With the addition of new databases and the heat provisioning mechanism, we are introducing more options: not only assigning a specific image_id, but custom packages, heat templates, and probably specific strategies for working with security groups. Such needs already exist because we have a lot of optional things in the config, and any new feature is implemented with an eye to already existing legacy installations of Trove.

What actually is datastore_type + datastore_version? It is the model which glues all the bricks together, so let us use it for all the variable parts of the *service type* configuration. From the current config file:

# Trove DNS
trove_dns_support = False
# Trove Security Groups for Instances
trove_security_groups_support = True
trove_security_groups_rules_support = False
trove_security_group_rule_protocol = tcp
trove_security_group_rule_port = 3306
trove_security_group_rule_cidr = 0.0.0.0/0
#guest_config = $pybasedir/etc/trove/trove-guestagent.conf.sample
#cloudinit_location = /etc/trove/cloudinit
block_device_mapping = vdb
device_path = /dev/vdb
mount_point = /var/lib/mysql

All of that configuration can be moved to the datastore (some of it defined in heat templates) and be manageable by the operator whenever any default behavior should be changed. The Trove config then becomes specific to core functionality only.

What do you think about it?

With best regards,
Ilya Sviridov
http://www.mirantis.ru/

On Tue, Oct 22, 2013 at 8:21 PM, Michael Basnight mbasni...@gmail.com wrote:

On Oct 22, 2013, at 9:34 AM, Tim Simpson wrote:

> It's not intuitive to the user if they are specifying a version alone. You don't boot a 'version' of something without specifying what that something is.
> I would rather they only specified the datastore_type alone, and not have them specify a version at all.

I agree that for most users just selecting the datastore_type would be most intuitive. However, when they specify a version it's going to be a GUID, which they could only possibly know if they have recently enumerated all versions and thus *know* the version is for the given type they want. In that case I don't think most users would appreciate having to also pass the type; it would just be redundant. So in that case, why not make it optional?

I'm ok w/ making either optional if the criteria for selecting the _other_ is not ambiguous.
Re: [openstack-dev] [TROVE] Thoughts on DNS refactoring, Designate integration.
On Tue, Oct 1, 2013 at 6:45 PM, Tim Simpson tim.simp...@rackspace.com wrote:

Hi fellow Trove devs, with the Designate project ramping up, it's time to refactor the ancient DNS code that's in Trove to work with Designate. The good news is that since the beginning it has been possible to add new drivers for DNS in order to use different services. Right now we only have a driver for the Rackspace DNS API, but it should be possible to write one for Designate as well.

How does this correlate with Trove's direction to use Heat for provisioning and managing all cloud resources? There are BPs for a Designate resource (https://blueprints.launchpad.net/heat/+spec/designate-resource) and for Rackspace DNS (https://blueprints.launchpad.net/heat/+spec/rax-dns-resource) as well, and it seems logical to use Heat for that. Currently Trove has logic for provisioning instances, a DNS driver, and creation of security groups, but by switching to the Heat way we end up having to support duplicates of the same functionality.

However, there's a bigger topic here. In a gist sent to me recently by Dennis M. with his thoughts on how this work should proceed, he included the comment that Trove should *only* support Designate: https://gist.github.com/crazymac/6705456/raw/2a16c7a249e73b3e42d98f5319db167f8d09abe0/gistfile1.txt

I disagree. I have been waiting for a canonical DNS solution such as Designate to enter the OpenStack umbrella for years now, and am looking forward to having Trove consume it. However, changing all the code so that nothing else works is premature.

All non-mainstream resources, such as cloud-provider-specific ones, can be implemented as Heat plugins (https://wiki.openstack.org/wiki/Heat/Plugins).

Instead, let's start work to play well with Designate now, using the open interface that has always existed. In the future, after Designate enters integration status, we can then make the code closed and only support Designate.

Do we really need to play with Designate and then replace it?
I expect the Designate resource will come together with Designate, or even earlier.

With best regards,
Ilya Sviridov

Denis also had some other comments about the DNS code, such as not passing a single object as a parameter because it could be None. I think this is in reference to passing around a DNS entry which gets formed by the DNS instance entry factory. I see how someone might think this is brittle, but in actuality it has worked for several years, so if anything, changing it would introduce bugs.

The interface was also written to use a full object in order to be flexible; a full object should make it easier to work with different types of DnsDriver implementations, as well as allowing more options to be set from the DnsInstanceEntryFactory. This latter class creates a DnsEntry from an instance_id. It is possible that two deployments of Trove, even if they are using Designate, might opt for different DnsInstanceEntryFactory implementations in order to give the DNS entries associated with databases different patterns. If the DNS entry is created at this point, it's easier to further customize and tailor it. This will hold true even when Designate is ready to become the only DNS option we support (if such a thing is desirable).

Thanks,
Tim
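The pattern Tim describes, a pluggable factory that builds a full DnsEntry which is then handed to a pluggable driver, can be sketched roughly as below. The class names DnsEntry, DnsDriver, and DnsInstanceEntryFactory come from the discussion; the method signatures and the in-memory driver are hypothetical illustrations, not Trove's real interfaces.

```python
from dataclasses import dataclass

@dataclass
class DnsEntry:
    name: str
    content: str
    type: str = "A"
    ttl: int = 3600

class DnsInstanceEntryFactory:
    """Builds a DnsEntry from an instance id. Deployments subclass this
    to impose their own naming pattern on database DNS entries."""
    def __init__(self, domain):
        self.domain = domain

    def create_entry(self, instance_id, ip_address):
        return DnsEntry(name="%s.%s" % (instance_id, self.domain),
                        content=ip_address)

class DnsDriver:
    """Abstract driver; a Designate-backed or Rackspace-backed
    implementation would override this method."""
    def create_entry(self, entry):
        raise NotImplementedError

class InMemoryDnsDriver(DnsDriver):
    """Toy driver for illustration: stores records in a dict."""
    def __init__(self):
        self.records = {}

    def create_entry(self, entry):
        # Passing the full DnsEntry keeps the driver flexible: it can
        # read name, type, ttl, etc. without a widening argument list.
        self.records[entry.name] = entry

# Usage: the factory decides the naming pattern, the driver persists it.
factory = DnsInstanceEntryFactory("db.example.com")
driver = InMemoryDnsDriver()
driver.create_entry(factory.create_entry("abc123", "10.0.0.5"))
```

Because the factory and driver vary independently, two Designate-backed deployments can share a driver while using different entry-naming factories, which is exactly the flexibility argued for above.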
[openstack-dev] [trove] MySQL HA BP
The blueprint: https://blueprints.launchpad.net/trove/+spec/mysql-ha

In order to become a production-ready DBaaS, Trove should provide the ability to deploy and manage highly available databases.

There are several approaches to achieving HA in MySQL: clusters driven by high-availability resource managers like Pacemaker [1], master-master replication, Percona XtraDB Cluster [2] based on the Galera library [3], and so on. But since Trove DB instances run in a cloud environment, a general-purpose approach may not always be the best-suited option, and this should be discussed.

[1] http://clusterlabs.org/
[2] http://www.percona.com/software/percona-xtradb-cluster
[3] https://launchpad.net/galera

With best regards,
Ilya Sviridov
http://www.mirantis.ru/
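As one concrete detail of the Galera-based option mentioned above: cluster health is typically read from the wsrep status variables (wsrep_cluster_status, wsrep_cluster_size, wsrep_ready). A hypothetical health check, with the SHOW STATUS output stubbed rather than fetched from a live server:

```python
# Hypothetical Galera health check; in a real deployment these rows
# would come from `SHOW STATUS LIKE 'wsrep_%'` over a MySQL connection.
def cluster_healthy(status_rows, expected_size):
    status = dict(status_rows)
    return (status.get("wsrep_cluster_status") == "Primary"
            and int(status.get("wsrep_cluster_size", 0)) == expected_size
            and status.get("wsrep_ready") == "ON")

# Stubbed status output for a healthy 3-node cluster.
SAMPLE = [
    ("wsrep_cluster_status", "Primary"),
    ("wsrep_cluster_size", "3"),
    ("wsrep_ready", "ON"),
]
```

A check like this is what an HA-aware Trove would need to run before treating a Galera node as usable; the Pacemaker and master-master approaches need analogous, but different, liveness probes.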