Re: [openstack-dev] [telemetry] Triggering alarms on Swift notifications

2017-02-16 Thread Denis Makogon
Sorry for the confusion. I just wanted to set up telemetry and try it
with Picasso (serverless functions).

And your suggestion worked well. So I think it is worth opening issues
against Aodh and Ceilometer to implement this configuration option in their
devstack plugins, just as the Panko devstack plugin already does.

Kind regards,
Denis Makogon

On Thu, Feb 16, 2017 at 15:54, Mehdi Abaakouk <sil...@sileht.net> wrote:

> On Thu, Feb 16, 2017 at 03:33:43PM +0200, Denis Makogon wrote:
> >Hello Mehdi. Thanks for response. See comments inline.
> >
> >2017-02-16 15:25 GMT+02:00 Mehdi Abaakouk <sil...@sileht.net>:
> >
> >> Hi,
> >>
> >> On Thu, Feb 16, 2017 at 03:04:52PM +0200, Denis Makogon wrote:
> >>
> >>> Greetings.
> >>>
> >>> Could someone provide any guidelines for checking why alarm or
> ok-action
> >>> webhooks are not being executed. Is it possible to investigate an
> issue by
> >>> analyzing ceilometer and/or aodh logs?
> >>>
> >>> So, my devstack setup based on master branch with local.conf (see
> >>> https://gist.github.com/denismakogon/4d88bdbea4bf428e55e88d25d52735f6)
> >>>
> >>> Once devstack installed i'm checking if notifications are being emitted
> >>> while uploading files to Swift and i'm able to see events in Ceilometer
> >>> (see https://gist.github.com/denismakogon/c6ad75899dcc50ce2a9b9f6
> >>> a4e0612f7).
> >>>
> >>> After that i'm trying to setup Aodh event alarm (see
> >>> https://gist.github.com/denismakogon/f6449e71ba9bb04cdd0065b52918b5af)
> >>>
> >>> And that's where i'm stuck, while working with Swift i see
> notifications
> >>> are coming from ceilometermiddleware to Panko and available in
> Ceilometer
> >>> via event-list but no alarm being triggered in Aodh.
> >>>
> >>> So, could someone explain me what am i doing wrong or am i missing
> >>> something?
> >>>
> >>
> >> I think the devstack plugins currently don't set up the Ceilometer pieces
> >> needed to be able to use events in Aodh.
> >>
> >>
> >So, if I dropped both Aodh and Panko from this setup, might the issue
> >somehow be solved?
>
> I don't get it; I thought your goal was to create an alarm triggered on an
> event. And for this you need Aodh and Ceilometer (not Panko).
>
> You have to apply
> https://docs.openstack.org/developer/aodh/event-alarm.html#configuration
> manually so that events emitted by Ceilometer are received by Aodh.
>
> >> Note that Aodh doesn't query Panko; by default it listens for events on
> >> the alarm.all topic. I'm guessing the Ceilometer conf/pipeline has to be
> >> tweaked to send events to Aodh somehow.
> >>
> >
> >Yes, I know that. In this setup, Ceilometer depends on Panko as the event
> >source, is that right?
>
> No, Panko is for storing events in a database (it's an API and a
> Ceilometer dispatcher plugin). Events are still built from notifications
> by Ceilometer (the agent-notification service).
>
> To rephrase from Ceilometer's point of view: Ceilometer sends events to
> both Aodh AND Panko.
>
> Regards,
>
> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>


Re: [openstack-dev] [telemetry] Triggering alarms on Swift notifications

2017-02-16 Thread Denis Makogon
Hello Mehdi. Thanks for the response. See comments inline.

2017-02-16 15:25 GMT+02:00 Mehdi Abaakouk <sil...@sileht.net>:

> Hi,
>
> On Thu, Feb 16, 2017 at 03:04:52PM +0200, Denis Makogon wrote:
>
>> Greetings.
>>
>> Could someone provide any guidelines for checking why alarm or ok-action
>> webhooks are not being executed. Is it possible to investigate an issue by
>> analyzing ceilometer and/or aodh logs?
>>
>> So, my devstack setup based on master branch with local.conf (see
>> https://gist.github.com/denismakogon/4d88bdbea4bf428e55e88d25d52735f6)
>>
>> Once devstack installed i'm checking if notifications are being emitted
>> while uploading files to Swift and i'm able to see events in Ceilometer
>> (see https://gist.github.com/denismakogon/c6ad75899dcc50ce2a9b9f6
>> a4e0612f7).
>>
>> After that i'm trying to setup Aodh event alarm (see
>> https://gist.github.com/denismakogon/f6449e71ba9bb04cdd0065b52918b5af)
>>
>> And that's where i'm stuck, while working with Swift i see notifications
>> are coming from ceilometermiddleware to Panko and available in Ceilometer
>> via event-list but no alarm being triggered in Aodh.
>>
>> So, could someone explain me what am i doing wrong or am i missing
>> something?
>>
>
> I think the devstack plugins currently don't set up the Ceilometer pieces
> needed to be able to use events in Aodh.
>
>
So, if I dropped both Aodh and Panko from this setup, might the issue
somehow be solved?


> Note that Aodh doesn't query Panko; by default it listens for events on
> the alarm.all topic. I'm guessing the Ceilometer conf/pipeline has to be
> tweaked to send events to Aodh somehow.
>

Yes, I know that. In this setup, Ceilometer depends on Panko as the event
source, is that right?


>
> Maybe this [1] has to be done manually.
>
> [1] https://docs.openstack.org/developer/aodh/event-alarm.html#configuration


Ok, will try.


>
>
> Regards,
>
> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>


[openstack-dev] [telemetry] Triggering alarms on Swift notifications

2017-02-16 Thread Denis Makogon
Greetings.

Could someone provide guidelines for checking why alarm or ok-action
webhooks are not being executed? Is it possible to investigate such an
issue by analyzing the Ceilometer and/or Aodh logs?

My devstack setup is based on the master branch, with this local.conf:
https://gist.github.com/denismakogon/4d88bdbea4bf428e55e88d25d52735f6

Once devstack is installed, I check that notifications are being emitted
while uploading files to Swift, and I am able to see events in Ceilometer
(see https://gist.github.com/denismakogon/c6ad75899dcc50ce2a9b9f6a4e0612f7).

After that I try to set up an Aodh event alarm (see
https://gist.github.com/denismakogon/f6449e71ba9bb04cdd0065b52918b5af).
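For illustration, the alarm definition amounts to something along these
lines (the event type and webhook URLs here are placeholders, not the
exact gist contents):

    aodh alarm create --name swift-upload-alarm --type event \
      --event-type objectstore.http.request \
      --alarm-action http://localhost:9999/alarm \
      --ok-action http://localhost:9999/ok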

And that's where I'm stuck: while working with Swift, I see notifications
coming from ceilometermiddleware to Panko, available in Ceilometer via
event-list, but no alarm is triggered in Aodh.

So, could someone explain what I am doing wrong, or what I am missing?

Kind regards,
Denis Makogon


[openstack-dev] [Swift][swiftclient][horizon] Improve UX by enabling HTTP headers configuration in UI and CLI

2017-02-10 Thread Denis Makogon
Greetings.

I've been developing a Swift middleware that depends on specific HTTP
headers, and I found there's only one way to specify them on the client
side: programmatically, I can add HTTP headers to each Swift HTTP API
method. (The CLI and the dashboard do not support configuring HTTP
headers, except for cases enabled by default, such as the "copy"
middleware, because swiftclient defines it as a separate API method.)

My point is that, as a developer, I don't have an OpenStack-aligned way to
exercise header-dependent middleware without hacking into both swiftclient
and the dashboard, which makes me fall back to cURL and brings a lot of
overhead while working with Swift.

So, is there any interest in having such a thing in swiftclient and,
subsequently, in the dashboard?
If yes, let me know (it shouldn't be that complicated, because at the
swiftclient Python API level we are already able to send HTTP headers).
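For example, at the Python API level this already works; a minimal sketch
with python-swiftclient (endpoint and credentials are placeholders):

    from swiftclient import client

    conn = client.Connection(authurl='http://127.0.0.1:5000/v2.0',
                             user='demo', key='secret',
                             tenant_name='demo', auth_version='2')
    # Arbitrary headers reach the middleware with every request.
    conn.put_object('container', 'object', contents=b'payload',
                    headers={'X-My-Middleware-Header': 'value'})

The ask is simply to expose the same headers knob in the swift CLI and in
the dashboard.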

Kind regards,
Denis Makogon


Re: [openstack-dev] [meteos] Meteos Released !!

2016-12-16 Thread Denis Makogon
Hello Hiroyuki.

Congrats on the public release. I'd like to clarify a few things. I know
that for running ML jobs Meteos needs Spark instances, and Meteos talks to
Sahara to deploy them. I'm not very familiar with the infrastructure
drivers in Sahara, but would it make sense to work with Docker containers
rather than deploying virtual machines? The thing is, spinning up a VM
takes a lot of time, while a Docker container takes about 300 ms to start.

Kind regards,
Denis Makogon


2016-12-16 7:14 GMT+02:00 Hiroyuki Eguchi <h-egu...@az.jp.nec.com>:

> Hi all,
>
>
>
> I'm pleased to announce the release of Meteos.
>
>
>
> Meteos is Machine Learning as a Service (MLaaS) in Apache Spark.
>
>
>
> Meteos allows users to analyze huge amounts of data and predict values
> using data mining and machine learning algorithms.
>
> Meteos creates a Machine Learning workspace via the Sahara Spark plugin
> and manages the resources and jobs related to Machine Learning.
>
>
>
> Everyone can participate in this project as a user, developer, or
> reviewer, in the same way as in other OpenStack projects.
>
>
>
> Please give it a try.
>
> If you find any requests and comments, please feel free to feedback.
>
>
>
> See the following documents to find the relevant information:
>
>
>
> [Wiki]
>
> https://wiki.openstack.org/wiki/Meteos
>
>
>
> [Installation Document]
>
> https://wiki.openstack.org/wiki/Meteos/Devstack
>
>
>
> [Examples]
>
> Predict a Sales Figures by using LinearRegression Model
>
> https://wiki.openstack.org/wiki/Meteos/ExampleLinear
>
>
>
> Make a Decision to buy a stock by using DecisionTree Model
>
> https://wiki.openstack.org/wiki/Meteos/ExampleDecisionTree
>
>
>
> Recommend a Movie by using Recommendation Model
>
> https://wiki.openstack.org/wiki/Meteos/ExampleRecommend
>
>
>
> Search Synonyms by using Word2Vec Model
>
> https://wiki.openstack.org/wiki/Meteos/ExampleWord2Vec
>
>
>
>
>
> Thanks.
>
> Hiroyuki Eguchi
>


[openstack-dev] [Zun][Glare][Glance] Building Docker images

2016-12-12 Thread Denis Makogon
Hello to All.

I'd like to get initial feedback on the idea of building Docker images
through Zun, with Glare as the artifact store for all static components
required by an image.

So, the idea is to be able to build a Docker image through the Zun API,
storing all the static data required for the build in Glare or Swift. To
keep the same UX as plain Docker, it would be best to use a Dockerfile as
the description format for image building.

In the image creation process, Glare could play the role of the artifact
store where users keep, say, the source code of the applications that will
run in containers, static data, etc. Those artifacts would be pulled during
image creation and injected into the image (similar to the context creation
that happens during a Docker image build with the native CLI). Note that
artifacts are completely optional for images, but they would make it
possible to keep artifacts in dedicated storage instead of transferring all
data through the Zun API (the opposite of the Docker build-context concept).

Once an image is created, it can be stored in the Docker instance
underlying Zun, or published to Glance or Swift for further consumption (if
a user needs to save an image, they would use the Glance image download
API). I mention Swift vs. Glance because Swift has the concept of temp URLs
that can be accessed without authorization. That feature allows Swift to be
used as storage from which an image can be exported to Docker using the
import API [1].
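To illustrate the Swift path (a sketch; the key, account, and host are
placeholders):

    # Set a temp URL key on the account, then generate a time-limited
    # GET URL for the stored image tarball.
    swift post -m "Temp-URL-Key:secretkey"
    swift tempurl GET 3600 /v1/AUTH_demo/images/app.tar secretkey
    # Docker can then import the image from the resulting URL without
    # authenticating against OpenStack.
    docker import "http://swift-host:8080/v1/AUTH_demo/images/app.tar?temp_url_sig=...&temp_url_expires=..." app:latest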


Any feedback on the idea is appreciated.

Kind regards,

Denis Makogon

[1] https://docs.docker.com/engine/reference/commandline/import/


Re: [openstack-dev] [Zun] About k8s integration

2016-12-07 Thread Denis Makogon
Hello Hongbin.

See inline comments.

Kind regards,
Denis Makogon

2016-12-07 2:56 GMT+02:00 Hongbin Lu <hongbin...@huawei.com>:

> Hi all,
>
>
>
> This is a continued discussion of the k8s integration blueprint [1].
> Currently, Zun exposes a container-oriented API that provides a service for
> end-users to operate on containers (i.e. CRUD). At the last team meeting,
> we discussed how to introduce k8s to Zun as an alternative to the Docker
> driver. There are two approaches that have been discussed:
>
>
>
> 1. Introduce the concept of Pod. If we go with this approach, an API
> endpoint (i.e. /pods) will be added to the Zun APIs. Both Docker driver and
> k8s driver need to implement this endpoint. In addition, all the future
> drivers need to implement this endpoint as well (or throw a NotImplemented
> exception). Some of our team members raised concerns about this approach.
> The main concern is that this approach will hide a lot of k8s-specific
> features (i.e. replication controller) or there will be a lot of work to
> bring all those features to Zun.
>

Exactly; I think the Pods concept shouldn't appear in Zun (that's all about
Magnum, isn't it?). The problem is that a k8s Pod is too different from a
Docker Swarm node, and different again from rkt, while Zun aims to be an
abstraction on top of different container technologies. So all infra
management should be delegated to Magnum.

I think it would make more sense to introduce an abstraction, let's say a
"Datastore"; behind this abstraction we can hide different types of
technologies (required connection attributes, etc.). If I needed to create
a container in Swarm, I'd use "--datastore swarm.production.com"; if I
needed to attach a volume, I'd ask Magnum to do that, and likewise for
whatever else is needed to deploy the required Zun container.
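A purely hypothetical CLI sketch of that abstraction (none of these
commands exist today):

    # Register an external COE endpoint once...
    zun datastore-register --name swarm-prod --type swarm \
        --endpoint tcp://swarm.production.com:2376
    # ...then target it when creating containers.
    zun create --datastore swarm-prod --image cirros --command "ping 8.8.8.8"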


>
>
>   $ zun pod-create … # this create a k8s pod (if k8s driver is used), or
> create a sandbox with a set of containers (if docker driver is used)
>
>   $ zun create … # this create a k8s pod with one container, or create a
> sandbox with one container
>
>
>
> 2. Introduce a dedicated k8s endpoint that acts as a proxy to k8s APIs.
> This will expose all the k8s features but users won’t have a unified APIs
> across drivers.
>
>
>

This is exactly an intersection with Magnum. Zun is meant to be
Containers-as-a-Service, not Container-infra-management-as-a-Service. So,
if I needed to deploy a container on a specific pod, I would like the
capability to deploy it on that pod (no matter whether the pod was deployed
by Magnum or by third-party tools outside of OpenStack); of course, there
would be problems with Cinder volumes.

  $ zun k8s pod create … # this create a k8s pod
>
>   $ zun docker container create … # this create a docker container
>
>   $ zun create … # the behavior of this command is unclear
>
>
>
> So far, we haven’t decided which approach to use (or use a third
> approach), but we wanted to collect more feedback before making a decision.
> Thoughts?
>
>
>

So, overall, Zun should remain agnostic to any container technology:
Docker, k8s, rkt, or any other COE. All infra management should be
delegated to Magnum, and Zun should consume each container technology's
CRUD API, using Magnum to modify the underlying Nova/Cinder resources.

Another question: why does Zun need a k8s pod CRUD API? Can't Zun talk to
Magnum to work with pods?


> [1] https://blueprints.launchpad.net/zun/+spec/k8s-integration
>
>
>
> Best regards,
>
> Hongbin
>


Re: [openstack-dev] [All] Finish test job transition to Ubuntu Xenial

2016-11-22 Thread Denis Makogon
Hello Neil

See comments inline

Kind regards,
Denis Makogon

2016-11-22 12:58 GMT+02:00 Neil Jerram <n...@tigera.io>:

> On Mon, Nov 7, 2016 at 9:50 PM Clark Boylan <cboy...@sapwetik.org> wrote:
>
>> [...]
>
> If you have jobs still running on trusty the next step is to fire up a
>> Xenial instance locally and run that test to see if it works. Usually
>> this will mean running the appropriate tox target or if using
>> devstack-gate you can grab the reproduce.sh script for that job and run
>> that script locally.
>>
>>
> Is there doc somewhere about what needs adding to a fresh Xenial server
> image, to allow reproduce.sh script to run successfully?
>
> So far, I've hit:
> - virtualenv
>

In Python 3.x you can create a virtualenv without installing anything extra
by using the standard library venv module: python3 -m venv .venv


> - gcc
>
> Neil
>
>


Re: [openstack-dev] [new][meteos] New project: Meteos

2016-10-20 Thread Denis Makogon
Hi.

Congrats.

Same questions.

Basically, we'd all like to know the project's scope. Any additional
information about the project is appreciated.

Kind regards,
Denis Makogon

On Thursday, October 20, 2016, Trinath Somanchi wrote:

> Hi-
>
>
>
> Congratulations on this announcement.
>
>
>
> What are the initial steps to start working on this project?
>
> What are the plans and deliverables?
>
>
>
> /Trinath
>
>
>
> *From:* Hiroyuki Eguchi [mailto:h-egu...@az.jp.nec.com]
> *Sent:* Thursday, October 20, 2016 12:38 PM
> *To:* 'openstack-dev@lists.openstack.org' <openstack-dev@lists.openstack.org>
> *Subject:* [openstack-dev] [new][meteos] New project: Meteos
>
>
>
> Hello.
>
>
>
> I'm pleased to introduce a new project called Meteos.
>
>
>
> Meteos is Machine Learning as a Service (MLaaS) in Apache Spark.
>
>
>
> Meteos allows users to analyze huge amounts of data and predict values
> using data mining and machine learning algorithms.
>
> Meteos creates a Machine Learning workspace via OpenStack Sahara's Spark
> plugin and manages the resources and jobs related to Machine Learning.
>
>
>
> This project has just started.
>
> I plan to release the initial version by the end of this year.
>
>
>
> If you have any questions concerning this project, please feel free to
> contact me.
>
>
>
> For more details:
>
>
>
> [Wiki]
>
> https://wiki.openstack.org/wiki/Meteos
>
>
>
> [Use Case(Predict sales using Meteos)]
>
> https://wiki.openstack.org/wiki/Meteos/Usecase
>
>
>
> [Launchpad]
>
> https://launchpad.net/meteos
>
> https://launchpad.net/python-meteosclient
>
>
>
> Thanks.
>
>
>
> --
>
> Hiroyuki Eguchi
>
>
>


Re: [openstack-dev] ERROR: Not Authorized

2016-10-10 Thread Denis Makogon
Hello Courage.

It may appear that you didn't set authorization environment variables to
your OpenStack (i assume you're running devstack).
So just do

*source $PATHTODEVSTACKREPO/keystone.rc*
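Sourcing it exports the OS_* variables the client needs, roughly (the
values here are examples):

    export OS_AUTH_URL=http://192.168.8.101:5000/v2.0
    export OS_USERNAME=admin
    export OS_PASSWORD=secret
    export OS_TENANT_NAME=admin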

Kind regards,
Denis Makogon


2016-10-10 17:39 GMT+03:00 courage angeh <couragean...@gmail.com>:

> I have problems running Zun. When I try to run commands like:
>
> zun start test or
> zun create --name test --image cirros --command "ping -c 4 8.8.8.8"
>
> I get the error: ERROR: Not Authorized
>
> Further searching, it seems like I can't connect to
> http://192.168.8.101:5000/v2.0
>
> Please can someone help me?
>
> Thanks
>
>


Re: [openstack-dev] PyCon Canada Call for Proposals

2016-08-17 Thread Denis Makogon
Hello Anita.

Thanks for pointing out this opportunity. I've submitted a couple of talks.

Kind regards,
Denys Makogon


2016-08-17 18:22 GMT+03:00 Anita Kuno :

> I spoke at this conference last year and it is a nice small friendly
> little conf. It was in Toronto last year and will be again this year. It is
> being held in November, most of the audience is university students.
>
> CFP is open for two more weeks: https://cfp.pycon.ca/?mc_cid=c7c6b43fde_eid=3eccad7039
>
> Would be great to see you there.
>
> I'm not involved with the organization, just receiving their emails and
> thought I would share.
>
> Thanks,
> Anita.
>


Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-07-13 Thread Denis Makogon
Hello to All.


I have free capacity to work on porting code to Python 3. So, if any PTL
is short on team capacity, I can help work on their project to enable
Python 3 support.

Kind regards,
Denys Makogon


2016-07-06 13:01 GMT+03:00 Flavio Percoco :

> On 24/06/16 12:17 -0400, Sean Dague wrote:
>
>> On 06/24/2016 11:48 AM, Doug Hellmann wrote:
>>
>>> Excerpts from Dmitry Tantsur's message of 2016-06-24 10:59:14 +0200:
>>>
 On 06/23/2016 11:21 PM, Clark Boylan wrote:

> On Thu, Jun 23, 2016, at 02:15 PM, Doug Hellmann wrote:
>
>> Excerpts from Thomas Goirand's message of 2016-06-23 23:04:28 +0200:
>>
>>> On 06/23/2016 06:11 PM, Doug Hellmann wrote:
>>>
 I'd like for the community to set a goal for Ocata to have Python
 3 functional tests running for all projects.

 As Tony points out, it's a bit late to have this as a priority for
 Newton, though work can and should continue. But given how close
 we are to having the initial phase of the port done (thanks
 Victor!),
 and how far we are from discussions of priorities for Ocata, it
 seems very reasonable to set a community-wide goal for our next
 release cycle.

 Thoughts?

 Doug

>>>
>>> +1
>>>
>>> Just think about it for a while. If we get Nova to work with Py3, and
>>> everything else is working, including all functional tests in
>>> Tempest,
>>> then after Ocata, we could even start to *REMOVE* Py2 support after
>>> Ocata+1. That would be really awesome to stop all the compat layer
>>> madness and use the new features available in Py3.
>>>
>>
>> We'll need to get some input from other distros and from deployers
>> before we decide on a timeline for dropping Python 2. For now, let's
>> focus on making Python 3 work. Then we can all rejoice while having
>> the
>> discussion of how much longer to support Python 2. :-)
>>
>>
>>> I really would love to ship a full stack running Py3 for Debian
>>> Stretch.
>>> However, for this, it'd be super helpful to have as much visibility as
>>> possible. Are we setting a hard deadline for the Ocata release? Or is
>>> this just a goal we only "would like" to reach, but it's not really a
>>> big deal if we don't reach it?
>>>
>>
>> Let's see what PTLs have to say about planning, but I think if not
>> Ocata then we'd want to set one for the P release. We're running
>> out of supported lifetime for Python 2.7.
>>
>
> Keep in mind that there is interest in running OpenStack on PyPy which
> is python 2.7. We don't have to continue supporting CPython 2.7
> necessarily but we may want to support python 2.7 by way of PyPy.
>

 PyPy folks have been working on python 3 support for some time already:
 http://doc.pypy.org/en/latest/release-pypy3.3-v5.2-alpha1.html
 It's an alpha, but by the time we consider dropping Python 2 it will
 probably be released :)

>>>
>>> We're targeting Python >=3.4, right now.  We'll have to decide as
>>> a community whether PyPy support is a sufficient reason to keep
>>> support for "older" versions (either 2.x or earlier versions of 3).
>>> Before we can have that discussion, though, we need to actually run on
>>> Python 3, so let's focus on that and evaluate the landscape of other
>>> interpreters when the porting work is done.
>>>
>>
>> +1, please don't get ahead of things until there is real full stack
>> testing running on python3.
>>
>> It would also be good if some of our operators were running on python 3
>> and providing feedback that it works in the real world before we even
>> talk about dropping. Because our upstream testing (even the full stack
>> testing) only can catch so much.
>>
>> So next steps:
>>
>> 1) full stack testing of everything we've got on python3 - (are there
>> volunteers to get that going?)
>> 2) complete Nova port to enable full stack testing on python3 for iaas
>> base
>> 3) encourage operators to deploy with python3 in production
>> 4) gather real world feedback, develop rest of plan
>>
>
>
> Just one to +1 the above steps. I'd be very hesitant to make any plan
> until we
> are able to get not only nova but all the projects in the
> starter-kit:compute[0]
> running pn python3 (and w/ a full stack test).
>
> [0]
> https://governance.openstack.org/reference/tags/starter-kit_compute.html
>
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>

Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Denis Makogon
2016-07-04 16:47 GMT+03:00 Roman Podoliaka <rpodoly...@mirantis.com>:

> That's exactly what https://github.com/koder-ua/os_api is for: it
> polls status changes in a separate thread and then updates the
> futures, so that you can wait on multiple futures at once.
>
>
This is exactly what I want to avoid: a new thread. I'm using an event loop
with the uvloop policy, so I must stay non-blocking within the main thread
and not fight the GIL by instantiating a new thread. With asyncio's
coroutine concept I can do non-blocking operations, relying on epoll under
the hood.
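A minimal sketch of that kind of non-blocking polling coroutine (the URL
and status field are illustrative):

    import asyncio

    import aiohttp

    async def wait_until_active(session, url, retries=30, delay=2.0):
        # Each await yields control to the event loop, so other
        # coroutines keep running while this one polls.
        for _ in range(retries):
            async with session.get(url) as resp:
                body = await resp.json()
            if body.get('status') == 'ACTIVE':
                return body
            await asyncio.sleep(delay)
        raise TimeoutError('resource never became ACTIVE')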

Kind regards,
Denys Makogon


> On Mon, Jul 4, 2016 at 2:19 PM, Denis Makogon <lildee1...@gmail.com>
> wrote:
> >
> >
> > 2016-07-04 13:22 GMT+03:00 Roman Podoliaka <rpodoly...@mirantis.com>:
> >>
> >> Denis,
> >>
> >> >  Major problem
> >> > appears when you trying to provision resource that requires to have
> some
> >> > time to reach ACTIVE/COMPLETED state (like, nova instance, stack,
> trove
> >> > database, etc.) and you have to use polling for status changes and in
> >> > general polling requires to send HTTP requests within specific time
> >> > frame
> >> > defined by number of polling retries and delays between them (almost
> all
> >> > PaaS solutions in OpenStack are doing it that might be the case of
> >> > distributed backend services, but not for async frameworks).
> >>
> >> How would an asynchronous client help you avoid polling here? You'd
> >> need some sort of a streaming API producing events on the server side.
> >>
> >
> > No, it would not help me to get rid of polling, but using async requests
> > will allow to proceed with next independent async tasks while awaiting
> > result on async HTTP request.
> >
> >>
> >> If you are simply looking for a better API around polling in OS
> >> clients, take a look at https://github.com/koder-ua/os_api , which is
> >> based on futures (be aware that HTTP requests are still *synchronous*
> >> under the hood).
> >>
> >> Thanks,
> >> Roman
> >>
> >>


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Denis Makogon
2016-07-04 13:22 GMT+03:00 Roman Podoliaka :

> Denis,
>
> >  Major problem
> > appears when you trying to provision resource that requires to have some
> > time to reach ACTIVE/COMPLETED state (like, nova instance, stack, trove
> > database, etc.) and you have to use polling for status changes and in
> > general polling requires to send HTTP requests within specific time frame
> > defined by number of polling retries and delays between them (almost all
> > PaaS solutions in OpenStack are doing it that might be the case of
> > distributed backend services, but not for async frameworks).
>
> How would an asynchronous client help you avoid polling here? You'd
> need some sort of a streaming API producing events on the server side.
>
>
No, it would not help me get rid of polling, but using async requests
allows proceeding with the next independent async tasks while awaiting the
result of an async HTTP request.


> If you are simply looking for a better API around polling in OS
> clients, take a look at https://github.com/koder-ua/os_api , which is
> based on futures (be aware that HTTP requests are still *synchronous*
> under the hood).
>
> Thanks,
> Roman
>


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Denis Makogon
2016-07-04 12:40 GMT+03:00 Antoni Segura Puimedon <
toni+openstac...@midokura.com>:

>
>
> On Mon, Jul 4, 2016 at 11:16 AM, Julien Danjou <jul...@danjou.info> wrote:
>
>> On Sun, Jun 26 2016, Denis Makogon wrote:
>>
>> > I know that some work in progress to bring Python 3.4 compatibility to
>> > backend services and it is kinda hard question to answer, but i'd like
>> to
>> > know if there are any plans to support asynchronous HTTP API client in
>> the
>> > nearest future using aiohttp [1] (PEP-3156)?
>>
>
> We were not sure if aiohttp would be taken in as a requirement, so in our
> kuryr kubernetes
> prototype we did our own asyncio http request library (it only does GET
> for now)[2]
>
> <https://github.com/midonet/kuryr/blob/k8s/kuryr/raven/aio/methods.py>
>

Good to see that someone is already using async features. But I'd be wary
of re-inventing the wheel: aiohttp is a good choice for implementing both
clients and servers; going deeper into asyncio's core parts is only needed
when you're doing something very protocol-specific at the transport layer.

So, the current question is about having a common piece of code that
provides async HTTP coroutines for the current SDKs. Unfortunately,
backward compatibility has to remain, because not all projects have moved
to Py3.4 or greater yet, while the pace of feature delivery should remain
as-is.


>
>
>>
>> I don't think there is, unfortunately. Most clients now rely on
>> `requests', and it's not async, nor did it seem ready to be the
>> last time I checked.
>>
>
> for the neutron clients we now use a thread executor from the asyncio loop
> any time we do a neutron client request.
>
> [2] https://github.com/midonet/kuryr/blob/k8s/kuryr/raven/aio/methods.py
>
>>
>> --
>> Julien Danjou
>> // Free Software hacker
>> // https://julien.danjou.info
>>


Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Denis Makogon
2016-07-04 12:16 GMT+03:00 Julien Danjou <jul...@danjou.info>:

> On Sun, Jun 26 2016, Denis Makogon wrote:
>
> > I know that some work in progress to bring Python 3.4 compatibility to
> > backend services and it is kinda hard question to answer, but i'd like to
> > know if there are any plans to support asynchronous HTTP API client in
> the
> > nearest future using aiohttp [1] (PEP-3156)?
>
> I don't think there is, unfortunately. Most clients now rely on
> `requests', and it's not async, nor did it seem ready to be the
> last time I checked.
>
>
Unfortunately, it is what it is. So I guess this is worth discussing during
the summit, to find a way (and the capacity) to support an async HTTP API
in the next release. I'll start working on a general concept that would
satisfy both Python 2.7 and Python 3.4 or greater.

What would be the best place to submit a spec?


--
> Julien Danjou
> // Free Software hacker
> // https://julien.danjou.info
>

Kind regards,
Denys Makogon


Re: [openstack-dev] New Python35 Jobs coming

2016-07-04 Thread Denis Makogon
2016-07-04 8:18 GMT+03:00 Andreas Jaeger :

> On 07/03/2016 09:26 PM, Henry Gessau wrote:
> > Clark Boylan  wrote:
> >> The infra team is working on taking advantage of the new Ubuntu Xenial
> >> release including running unittests on python35. The current plan is to
> >> get https://review.openstack.org/#/c/336272/ merged next Tuesday (July
> >> 5, 2016). This will add non voting python35 tests restricted to >=
> >> master/Newton on all projects that had python34 testing.
> >>
> >> The expectation is that in many cases python35 tests will just work if
> >> python34 testing was also working. If this is the case for your project
> >> you can propose a change to openstack-infra/project-config to make these
> >> jobs voting against your project. You should only need to edit
> >> jenkins/jobs/projects.yaml and zuul/layout.yaml and remove the '-nv'
> >> portion of the python35 jobs to do this.
> >>
> >> We do however expect that there will be a large group of failed tests
> >> too. If your project has a specific tox.ini py34 target to restrict
> >> python3 testing to a specific list of tests you will need to add a tox
> >> target for py35 that does the same thing as the py34 target. We have
> >> also seen bug reports against some projects whose tests rely on stable
> >> error messages from Python itself which isn't always the case across
> >> version changes so these tests will need to be updated as well.
> >>
> >> Note this change will not add python35 jobs for cases where projects
> >> have special tox targets. This is restricted just to the default py35
> >> unittesting.
> >>
> >> As always let us know if you questions,
> >> Clark
> >
> > How soon can projects replace py34 with py35?
>
> As soon as you think your project is ready, you can replace py34 with
> py35 for master.
>
> >
> > I tried py35 for neutron locally, and it ran without errors.
>
> Then let it run for a day or two in our CI, discuss with neutron team,
> and send a patch for project-config to change the setup,
>
>
I can confirm that the nova, glance, cinder, and heat clients are py35
compatible.
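For projects whose py34 target restricts the test list, mirroring it for
py35 amounts to roughly this tox.ini stanza (a sketch that reuses the
existing py34 commands):

    [testenv:py35]
    basepython = python3.5
    commands = {[testenv:py34]commands}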


> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>


[openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-06-26 Thread Denis Makogon
Hello stackers.


I know that work is in progress to bring Python 3.4 compatibility to the
backend services, and this is a somewhat hard question to answer, but I'd
like to know whether there are any plans to support asynchronous HTTP API
clients in the near future using aiohttp [1] (PEP 3156).

If yes, could someone describe current state?

I'm asking because I've been working on AIOrchestra [2] (an async TOSCA
orchestration framework) and its OpenStack plugin [3]. By design, I need
asynchronous HTTP API clients to get the full benefit of uvloop, an asyncio
event loop for Python 3.5, for fast, lightweight, and reliable
orchestration. But the current clients are still synchronous, and at best
only parser-compatible with Py3.4 or greater. The major problem appears
when you provision a resource that needs some time to reach an
ACTIVE/COMPLETED state (a nova instance, a stack, a trove database, etc.):
you have to poll for status changes, and polling generally requires sending
HTTP requests within a specific time frame defined by the number of polling
retries and the delays between them (almost all PaaS solutions in OpenStack
do this, which may be fine for distributed backend services, but not for
async frameworks).
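To illustrate what async clients would buy us, independent provisioning
tasks could overlap on a single event loop; a toy sketch with stand-ins
for real API calls:

    import asyncio

    async def provision(name, seconds):
        # Stand-in for "create resource, then poll until ACTIVE".
        await asyncio.sleep(seconds)
        return name, 'ACTIVE'

    async def main():
        # Both "resources" are awaited concurrently: the total time is
        # max(2, 3) seconds instead of 2 + 3.
        results = await asyncio.gather(provision('nova-instance', 2),
                                       provision('trove-database', 3))
        print(results)

    asyncio.get_event_loop().run_until_complete(main())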


[1] https://github.com/KeepSafe/aiohttp
[2] https://github.com/aiorchestra/aiorchestra
[3] https://github.com/aiorchestra/aiorchestra-openstack-plugin


Kind regards,
Denys Makogon


Re: [openstack-dev] [all] Proposal: Architecture Working Group

2016-06-20 Thread Denis Makogon
Hello Clint.

I'd like to take part as well, so count me in.

Kind regards,
Denys Makogon


2016-06-20 10:23 GMT+03:00 Ghe Rivero :

> Hi all!
> I really like the idea of the group, so count me in!
>
> Ghe Rivero
>
> Quoting Clint Byrum (2016-06-17 23:52:43)
> > ar·chi·tec·ture
> > ˈärkəˌtek(t)SHər/
> > noun
> > noun: architecture
> >
> > 1.
> >
> > the art or practice of designing and constructing buildings.
> >
> > synonyms:building design, building style, planning, building,
> construction;
> >
> > formalarchitectonics
> >
> > "modern architecture"
> >
> > the style in which a building is designed or constructed, especially
> with regard to a specific period, place, or culture.
> >
> > plural noun: architectures
> >
> > "Victorian architecture"
> >
> > 2.
> >
> > the complex or carefully designed structure of something.
> >
> > "the chemical architecture of the human brain"
> >
> > the conceptual structure and logical organization of a computer or
> computer-based system.
> >
> > "a client/server architecture"
> >
> > synonyms:structure, construction, organization, layout, design,
> build, anatomy, makeup;
> >
> > informalsetup
> >
> > "the architecture of a computer system"
> >
> >
> > Introduction
> > =
> >
> > OpenStack is a big system. We have debated what it actually is [1],
> > and there are even t-shirts to poke fun at the fact that we don't have
> > good answers.
> >
> > But this isn't what any of us wants. We'd like to be able to point
> > at something and proudly tell people "This is what we designed and
> > implemented."
> >
> > And for each individual project, that is a possibility. Neutron can
> > tell you they designed how their agents and drivers work. Nova can
> > tell you that they designed the way conductors handle communication
> > with API nodes and compute nodes. But when we start talking about how
> > they interact with each other, it's clearly just a coincidental mash of
> > de-facto standards and specs that don't help anyone make decisions when
> > refactoring or adding on to the system.
> >
> > Oslo and cross-project initiatives have brought some peace and order
> > to the implementation and engineering processes, but not to the design
> > process. New ideas still start largely in the project where they are
> > needed most, and often conflict with similar decisions and ideas in other
> > projects [dlm, taskflow, tooz, service discovery, state machines, glance
> > tasks, messaging patterns, database patterns, etc. etc.]. Often times
> this
> > creates a log jam where none of the projects adopt a solution that would
> > align with others. Most of the time when things finally come to a head
> > these things get done in a piecemeal fashion, where it's half done here,
> > 1/3 over there, 1/4 there, and 3/4 over there..., which to the outside
> > looks like  chaos, because that's precisely what it is.
> >
> > And this isn't always a technical design problem. OpenStack, for
> instance,
> > isn't really a micro service architecture. Of course, it might look like
> > that in diagrams [2], but we all know it really isn't. The compute node
> is
> > home to agents for every single concern, and the API interactions between
> > the services is too tightly woven to consider many of them functional
> > without the same lockstep version of other services together. A game to
> > play is ask yourself what would happen if a service was isolated on its
> > own island, how functional would its API be, if at all. Is this something
> > that we want? No. But there doesn't seem to be a place where we can go
> > to actually design, discuss, debate, and ratify changes that would help
> > us get to the point of gathering the necessary will and capability to
> > enact these efforts.
> >
> > Maybe nova-compute should be isolated from nova, with an API that
> > nova, cinder and neutron talk to. Maybe we should make the scheduler
> > cross-project aware and capable of scheduling more than just nova
> > instances. Maybe we should have experimental groups that can look at how
> > some of this functionality could perhaps be delegated to non-openstack
> > projects. We hear that Mesos, for example to help with the scheduling
> > aspects, but how do we discuss these outside hijacking threads on the
> > mailing list? These are things that we all discuss in the hallways
> > and bars and parties at the summit, but because they cross projects at
> > the design level, and are inherently a lot of social and technical and
> > exploratory work, Many of us fear we never get to a place of turning
> > our dreams into reality.
> >
> > So, with that, I'd like to propose the creation of an Architecture
> Working
> > Group. This group's charge would not be design by committee, but a place
> > for architects to share their designs and gain support across projects
> > to move forward with and ratify architectural decisions.

Re: [openstack-dev] [higgins] Docker-compose support

2016-06-01 Thread Denis Makogon
Hello Hongbin.

I would disagree with what you are saying, because having Higgins do only
very basic things is not very valuable. For those of us who work on
development and continuous delivery, how can Higgins address, for example,
micro-service chaining?

In any case, Higgins will eventually end up having its own DSL (or TOSCA,
or the compose DSL), because there is not much benefit in an API that only
spins up containers separately. Developers will, again, have to build a
solution on top of Higgins to support more advanced things like service
chaining, and that would mean Higgins doesn't meet their requirements for
further service consumption.


Kind regards,
Denys Makogon


2016-05-31 23:15 GMT+03:00 Hongbin Lu <hongbin...@huawei.com>:

> I don't think it is a good idea to re-invent docker-compose in Higgins.
> Instead, we should leverage existing libraries/tools if we can.
>
>
>
> Frankly, I don't think Higgins should interpret any docker-compose-like
> DSL on the server side, but maybe it is a good idea to have a CLI extension
> that interprets a specific DSL and translates it into a set of REST API
> calls to the Higgins server. The solution should be generic enough that we
> can re-use it to interpret other DSLs (e.g. pod, TOSCA, etc.) in the future.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Denis Makogon [mailto:lildee1...@gmail.com]
> *Sent:* May-31-16 3:25 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [higgins] Docker-compose support
>
>
>
> Hello.
>
>
>
> It is hard to tell if given API will be final version, but i tried to make
> it similar to CLI and its capabilities. So, why not?
>
>
>
> 2016-05-31 22:02 GMT+03:00 Joshua Harlow <harlo...@fastmail.com>:
>
> Cool good to know,
>
> I see
> https://github.com/docker/compose/pull/3535/files#diff-1d1516ea1e61cd8b44d000c578bbd0beR66
>
> Would that be the primary API? Hard to tell what is the API there
> actually, haha. Is it the run() method?
>
> I was thinking more along the line that higgins could be a 'interpreter'
> of the same docker-compose format (or similar format); if the library that
> is being created takes a docker-compose file and turns it into a
> 'intermediate' version/format that'd be cool. The compiled version would
> then be 'executable' (and introspectable to) by say higgins (which could
> say traverse over that intermediate version and activate its own code to
> turn the intermediate versions primitives into reality), or a
> docker-compose service could or ...
>
>
>
> What abou TOSCA? From my own perspective compose format is too limited, so
> it is really necessary to consider regarding use of TOSCA in Higgins
> workflows.
>
>
>
>
> Libcompose also seems to be targeted at a higher level library, from at
> least reading the summary, neither seem to be taking a compose yaml file,
> turning it into a intermediate format, exposing that intermediate format to
> others for introspection/execution (and also likely providing a default
> execution engine that understands that format) but instead both just
> provide an equivalent of:
>
>
>
> That's why i've started this thread, as community we have use cases for
> Higgins itself and for compose but most of them are not formalized or even
> written. Isn't this a good time to define them?
>
>
>
>   project = make_project(yaml_file)
>   project.run/up()
>
> Which probably isn't the best API for something like a web-service that
> uses that same library to have. IMHO having a long running run() method
>
>
>
> Well, compose allows to run detached executions for most of its API calls.
> By use of events, we can track service/containers statuses (but it is not
> really trivial).
>
>
>
> exposed, without the necessary state tracking, ability to
> interrupt/pause/resume that run() method and such is not going to end well
> for users of that lib (especially a web-service that needs to periodically
> be `service webservice stop` or restart, or ...).
>
>
>
> Yes, agreed. But docker or swarm by itself doesn't provide such API (can't
> tell the same for K8t).
>
>
>
> Denis Makogon wrote:
>
> Hello Stackers.
>
>
> As part of discussions around what Higgins is and what its mission there
> are were couple of you who mentioned docker-compose [1] and necessity of
> doing the same thing for Higgins but from scratch.
>
> I don't think that going that direction is the best way to spend
> development cycles. So, that's why i ask you to take a look at recent
> patchset submitted to docker-compose upstream [2] that makes this tool
> (initially designed as CLI) to become a library with Python API.  The
> whole idea is to make docker-compose look similar to libcompose [3]
> (written on Go).

Re: [openstack-dev] [higgins] Docker-compose support

2016-06-01 Thread Denis Makogon
2016-05-31 22:56 GMT+03:00 Joshua Harlow <harlo...@fastmail.com>:

> Denis Makogon wrote:
>
>> Hello.
>>
>> It is hard to tell if given API will be final version, but i tried to
>> make it similar to CLI and its capabilities. So, why not?
>>
>> 2016-05-31 22:02 GMT+03:00 Joshua Harlow <harlo...@fastmail.com
>> <mailto:harlo...@fastmail.com>>:
>>
>> Cool good to know,
>>
>> I see
>>
>> https://github.com/docker/compose/pull/3535/files#diff-1d1516ea1e61cd8b44d000c578bbd0beR66
>>
>> Would that be the primary API? Hard to tell what is the API there
>> actually, haha. Is it the run() method?
>>
>> I was thinking more along the line that higgins could be a
>> 'interpreter' of the same docker-compose format (or similar format);
>> if the library that is being created takes a docker-compose file and
>> turns it into a 'intermediate' version/format that'd be cool. The
>> compiled version would then be 'executable' (and introspectable to)
>> by say higgins (which could say traverse over that intermediate
>> version and activate its own code to turn the intermediate versions
>> primitives into reality), or a docker-compose service could or ...
>>
>>
>> What abou TOSCA? From my own perspective compose format is too limited,
>> so it is really necessary to consider regarding use of TOSCA in Higgins
>> workflows.
>>
>
> Does anyone in the wider world actually use TOSCA anywhere? Has it gained
> any adoption? I've watched the TOSCA stuff, but have really been unable to
> tell what kind of an impact TOSCA actually has had (everyone seems to make
> there own format, and not care that much about TOSCA in general, for better
> or worse).


At a cursory glance, within OpenStack only Tacker and Murano support
TOSCA. Outside of OpenStack, I know that Cloudify/ARIA uses TOSCA.


>
>
>
>>
>> Libcompose also seems to be targeted at a higher level library, from
>> at least reading the summary, neither seem to be taking a compose
>> yaml file, turning it into a intermediate format, exposing that
>> intermediate format to others for introspection/execution (and also
>> likely providing a default execution engine that understands that
>> format) but instead both just provide an equivalent of:
>>
>>
>> That's why i've started this thread, as community we have use cases for
>> Higgins itself and for compose but most of them are not formalized or
>> even written. Isn't this a good time to define them?
>>
>>project = make_project(yaml_file)
>>project.run/up()
>>
>> Which probably isn't the best API for something like a web-service
>> that uses that same library to have. IMHO having a long running
>> run() method
>>
>>
>> Well, compose allows to run detached executions for most of its API
>> calls. By use of events, we can track service/containers statuses (but
>> it is not really trivial).
>>
>
> That's not exactly the same as what I was thinking,
>
> Let's take a compose yaml file,
> https://github.com/DataDog/docker-compose-example/blob/master/docker-compose.yml
>
> At some point this is turned into a set of actions to run (a workflow
> perhaps) to turn that yaml file into an actual running solution, now likely
> the creators of libcompose or the python version embedded those actions
> directly into the interpretation and made them inseparable but that doesn't
> need to be the case.
>
>
Well, following that logic, in Higgins we don't need compose at all; we
need a DSL translator with the ability to embed custom actions over
service/container descriptions.


>
>> exposed, without the necessary state tracking, ability to
>> interrupt/pause/resume that run() method and such is not going to
>> end well for users of that lib (especially a web-service that needs
>> to periodically be `service webservice stop` or restart, or ...).
>>
>>
>> Yes, agreed. But docker or swarm by itself doesn't provide such API
>> (can't tell the same for K8t).
>>
>
> Meh, that's not such a good excuse to try to do it (or at least to think
> about it). If we only did what was already done, we probably wouldn't be
> doing things over email or driving cars or... :-P
>
>
>> Denis Makogon wrote:
>>
>> Hello Stackers.
>>
>>
>> As part of discussions around what Higgins is and what its
>> mission there
>> are were couple of you who mentioned docker-compose [1] and

Re: [openstack-dev] [higgins] Docker-compose support

2016-05-31 Thread Denis Makogon
Hello.

It is hard to tell whether the given API will be the final version, but I
tried to make it similar to the CLI and its capabilities. So, why not?

2016-05-31 22:02 GMT+03:00 Joshua Harlow <harlo...@fastmail.com>:

> Cool good to know,
>
> I see
> https://github.com/docker/compose/pull/3535/files#diff-1d1516ea1e61cd8b44d000c578bbd0beR66
>
> Would that be the primary API? Hard to tell what is the API there
> actually, haha. Is it the run() method?
>
> I was thinking more along the line that higgins could be a 'interpreter'
> of the same docker-compose format (or similar format); if the library that
> is being created takes a docker-compose file and turns it into a
> 'intermediate' version/format that'd be cool. The compiled version would
> then be 'executable' (and introspectable to) by say higgins (which could
> say traverse over that intermediate version and activate its own code to
> turn the intermediate versions primitives into reality), or a
> docker-compose service could or ...
>

What about TOSCA? From my own perspective the compose format is too
limited, so the use of TOSCA in Higgins workflows really needs to be
considered.


>
> Libcompose also seems to be targeted at a higher level library, from at
> least reading the summary, neither seem to be taking a compose yaml file,
> turning it into a intermediate format, exposing that intermediate format to
> others for introspection/execution (and also likely providing a default
> execution engine that understands that format) but instead both just
> provide an equivalent of:
>
>
That's why I started this thread: as a community we have use cases for
Higgins itself and for compose, but most of them are not formalized or even
written down. Isn't this a good time to define them?


>   project = make_project(yaml_file)
>   project.run/up()
>
> Which probably isn't the best API for something like a web-service that
> uses that same library to have. IMHO having a long running run() method


Well, compose allows detached execution for most of its API calls. By
using events, we can track service/container statuses (though it is not
exactly trivial).


> exposed, without the necessary state tracking, ability to
> interrupt/pause/resume that run() method and such is not going to end well
> for users of that lib (especially a web-service that needs to periodically
> be `service webservice stop` or restart, or ...).
>
>
Yes, agreed. But Docker or Swarm by itself doesn't provide such an API
(can't say the same for K8s).


> Denis Makogon wrote:
>
>> Hello Stackers.
>>
>>
>> As part of the discussions around what Higgins is and what its mission is,
>> there were a couple of you who mentioned docker-compose [1] and the
>> necessity of doing the same thing for Higgins from scratch.
>>
>> I don't think that going in that direction is the best way to spend
>> development cycles. That's why I ask you to take a look at a recent
>> patchset submitted to docker-compose upstream [2] that makes this tool
>> (initially designed as a CLI) become a library with a Python API. The
>> whole idea is to make docker-compose look similar to libcompose [3]
>> (written in Go).
>>
>> If we need to utilize docker-compose features in Higgins, I'd recommend
>> working on this with the Docker community and convincing them to land
>> that patch upstream.
>>
>> If you have any questions, please let me know.
>>
>> [1] https://docs.docker.com/compose/
>> [2] https://github.com/docker/compose/pull/3535
>> [3] https://github.com/docker/libcompose
>>
>>
>> Kind regards,
>> Denys Makogon
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [higgins] Docker-compose support

2016-05-31 Thread Denis Makogon
Hello Stackers.


As part of the discussions around what Higgins is and what its mission is,
there were a couple of you who mentioned docker-compose [1] and the
necessity of doing the same thing for Higgins from scratch.

I don't think that going in that direction is the best way to spend
development cycles. That's why I ask you to take a look at a recent patchset
submitted to docker-compose upstream [2] that makes this tool (initially
designed as a CLI) become a library with a Python API. The whole idea is to
make docker-compose look similar to libcompose [3] (written in Go).

If we need to utilize docker-compose features in Higgins, I'd recommend
working on this with the Docker community and convincing them to land that
patch upstream.

If you have any questions, please let me know.

[1] https://docs.docker.com/compose/
[2] https://github.com/docker/compose/pull/3535
[3] https://github.com/docker/libcompose


Kind regards,
Denys Makogon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-25 Thread Denis Makogon
Hello to All.

This message is not about arguing whether OpenStack needs Go or another
language.

This is a good discussion. So, the main question here is "Go along with
Python for OpenStack", and the problem is supporting Go code, starting with
the necessity of skilled Go developers up to the infrastructure for CI/CD,
etc.

Correct me if I'm wrong, but none of the messages above mentioned supporting
Go extensions for Python (C extensions were mentioned a couple of times).
Starting with Go 1.5 it is possible to develop extensions for Python [1]
(and there is a lib that helps to develop such extensions [2]).
The idea is:
  - "If you think that your project is an exceptional one (Swift, Designate,
etc.) and you really think that Golang is what you need,
 then why can't you develop your own Go extensions, write Python libs
that utilize that code,
 and then add that new Python dependency to your project?"
  - distribute your Go extensions (*.so files) as DEB/RPM for further
consumption, in DevStack for example (like we do for multiple components -
Kafka, Cassandra, MySQL, RMQ, etc.); a sketch follows below.
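
To make the second bullet concrete, here is a minimal sketch of the Python
side, assuming a shared object built from annotated Go code with `go build
-buildmode=c-shared -o libfast.so` (available since Go 1.5); the library and
function names are purely illustrative, not an existing project:

    # Minimal sketch: consume a hypothetical Go-built shared object from
    # Python via ctypes; libfast.so and fast_sum are made-up names.
    import ctypes

    lib = ctypes.CDLL('./libfast.so')
    lib.fast_sum.argtypes = [ctypes.c_long, ctypes.c_long]
    lib.fast_sum.restype = ctypes.c_long

    print(lib.fast_sum(40, 2))  # dispatches into compiled Go code

A thin Python wrapper around such a library is what would be packaged and
listed as a regular Python dependency of the project.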

As I see it, such an approach would keep Python as the main language for
OpenStack development, we wouldn't have the overhead of building new
infrastructure for Go, and projects could still use Go for developing their
extensions outside of the OpenStack Big Tent.

[1] https://blog.filippo.io/building-python-modules-with-go-1-5/
[2] https://github.com/sbinet/go-python

Kind regards,
Denys Makogon


2016-05-25 14:21 GMT+03:00 Flavio Percoco :

> On 25/05/16 06:48 -0400, Sean Dague wrote:
>
> [snip]
>
>
> 4. Do we want to be in the business of building data plane services that
>> will all run into python limitations, and will all need to be rewritten
>> in another language?
>>
>> This is a slightly different spin on the question Thierry is asking.
>>
>> Control Plane services are very unlikely to ever hit a scaling concern
>> where rewriting the service in another language is needed for
>> performance issues. These are orchestrators, and the time spent in them
>> is vastly less than the operations they trigger (start a vm, configure a
>> switch, boot a database server). There was a whole lot of talk in the
>> threads of "well that's not innovative, no one will want to do just
>> that", which seems weird, because that's most of OpenStack. And it's
>> pretty much where all the effort in the containers space is right now,
>> with a new container fleet manager every couple of weeks. So thinking
>> that this is a boring problem no one wants to solve, doesn't hold water
>> with me.
>>
>> Data Plane services seem like they will all end up in the boat of
>> "python is not fast enough". Be it serving data from disk, mass DNS
>> transfers, time series database, message queues. They will all
>> eventually hit the python wall. Swift hit it first because of the
>> maturity of the project and they are now focused on this kind of
>> optimization, as that's what their user base demands. However I think
>> all other data plane services will hit this as well.
>>
>> Glance (which is partially a data plane service) did hit this limit, and
>> the way it is largely mitigated by folks is by using Ceph and exposing
>> that
>> directly to Nova so now Glance is only in the location game and metadata
>> game, and Ceph is in the data plane game.
>>
>
> Sorry for nitpicking here but Glance's API keeps being a data API. Sure it
> stores locations and sure you can do fancy things with those locations
> but, as
> far as end users go, it's still a data API. It is not used as
> intensively as
> Swift's, though. Ceph's driver allows for fancier things to be done but
> there
> are deployments which don't use Ceph.
>
> I believe it'd be better to separate data services that *own* the data from
> those that integrate other backends. Swift owns the data. You upload it to
> swift, it stores the data using its own strategies and it serves it.
> Glance gets
> the data, puts it in some other store and then you can either access it
> (not
> always) directly from the store or have Glance serving it back.
>
>
> When it comes to doing data plan services in OpenStack, I'm quite mixed.
>> The technology concerns for data plane
>> services are quite different. All the control plane services kind of
>> look and feel the same. An API + worker model, a DB for state, message
>> passing / rpc to put work to the workers. This is a common pattern and
>> is something which even for all the project differences, does end up
>> kind of common between parts. Projects that follow this model are
>> debuggable as a group not too badly.
>>
>> 5. Where does Swift fit?
>>
>> This I think has always been a tension point in the community (at least
>> since I joined in 2012). Swift is an original service of OpenStack, as
>> it started as Swift and Nova. But they were very different things. Swift
>> is a data service, Nova was a control plane. Much of what is now
>> OpenStack is Nova derivative in some way (some times 

Re: [openstack-dev] [higgins] Continued discussion from the last team meeting

2016-05-25 Thread Denis Makogon
Hello to All.

See inline comments.

Kind regards,
Denys Makogon

2016-05-24 23:55 GMT+03:00 Hongbin Lu :

> Hi all,
>
>
>
> At the last team meeting, we tried to define the scope of the Higgins
> project. In general, we agreed to focus on the following features as an
> initial start:
>
> · Build a container abstraction and use docker as the first
> implementation.
>
> · Focus on basic container operations (i.e. CRUD), and leave
> advanced operations (i.e. keep container alive, rolling upgrade, etc.) to
> users or other projects/services.
>
> · Start with non-nested container use cases (e.g. containers on
> physical hosts), and revisit nested container use cases (e.g. containers on
> VMs) later.
>
> The items below need further discussion, so I started this ML thread to
> discuss them.
>
> 1.   Container composition: implement a docker compose like feature
>

In Docker-compose, at this point of time i'm working to extracting core
functionality into something similar to libcompose (written on Go) but with
Python API.
I can tell that it is not that fast, so that work would take some time
(couple releases). My suggestion is to implement abstraction layer that
will consume your own implementation of compose features and once
docker-compose will be ready to be consumed then in Higgins we will switch
to it.

Another thing, it is worth considering to use TOSCA modeling (see how
Tacker is doing it) for container orchestration.


> 2.   Container host management: abstract container host
>
> For #1, it seems we broadly agreed that this is a useful feature. The
> argument is where this feature belongs to. Some people think this feature
> belongs to other projects, such as Heat, and others think it belongs to
> Higgins so we should implement it. For #2, we were mainly debating two
> things: where the container hosts come from (provisioned by Nova or
> provided by operators); should we expose host management APIs to end-users?
> Thoughts?
>

Here's what I think: if we take a look at Solum, it uses a swarm cluster API
endpoint defined in its config, and for me, as an operator, that is not very
useful.

As a first step we can live with that, but once you think about multi-site
OpenStack container orchestration, that approach wouldn't work at all. As a
proposal, I'd like to see a dedicated DB model that represents a swarm
cluster entity (all the credentials necessary to connect to it: TLS certs,
user/password, endpoint, etc.), plus an advanced placement algorithm that
helps decide where a container should land, or lets users pick a concrete
swarm cluster to deploy their container to; a rough sketch follows below.
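
A rough sketch of what such a model could look like (field names are purely
hypothetical, not an existing Higgins schema; SQLAlchemy is assumed, as in
most OpenStack projects):

    # Hypothetical swarm-cluster registration model.
    from sqlalchemy import Column, String, Text
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class SwarmCluster(Base):
        __tablename__ = 'swarm_clusters'

        id = Column(String(36), primary_key=True)
        name = Column(String(255), nullable=False)
        endpoint = Column(String(255), nullable=False)  # swarm API endpoint
        tls_ca = Column(Text)      # credentials needed to connect
        tls_cert = Column(Text)
        tls_key = Column(Text)
        username = Column(String(255))
        password = Column(String(255))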


>
>
> Best regards,
>
> Hongbin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Can RPCClient.call() be used with a subset of servers?

2015-02-09 Thread Denis Makogon
On Monday, February 9, 2015, Gravel, Julie Chongcharoen julie.gra...@hp.com
wrote:

  Hello,

 I want to use oslo.messaging.RPCClient.call() to invoke a
 method on multiple servers, but not all of them. Can this be done and how?
 I read the code documentation (client.py and target.py). I only saw either
 the call used for one server at a time, or for all of them using the fanout
 param. Neither option is exactly what I want.

 Any response/explanation would be highly appreciated.



Hello. I would say that there's no need for such an ability, since
oslo.messaging is unaware of your servers; what you need is to write your
own code on top of it to accomplish your mission (see the sketch below).
Even if you want to execute the call procedures at the same time, you can
parallelize your code. Would that work for you?
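
For illustration, a minimal sketch of such code on top of oslo.messaging's
RPCClient.prepare() (topic, method and server names are illustrative):

    # Minimal sketch: call the same RPC method on a chosen subset of servers
    # by narrowing the target to one named server at a time with prepare().
    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='my_topic')
    client = oslo_messaging.RPCClient(transport, target)

    def call_subset(ctxt, method, servers, **kwargs):
        results = {}
        for server in servers:
            cctxt = client.prepare(server=server)
            results[server] = cctxt.call(ctxt, method, **kwargs)
        return results

    # e.g.: call_subset({}, 'ping', ['server-1', 'server-3'])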



 Regards,

 Julie Gravel




Kind regards,
DenisM.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][project-config][oslo.messaging] Pre-driver requirements split-up initiative

2015-02-06 Thread Denis Makogon
Hello to All.


As part of the oslo.messaging initiative to split up requirements into a
per-messaging-driver list of dependencies
(https://review.openstack.org/#/c/83150/),
it was figured out that we needed a way to use pip "inner dependencies", and
we were able to do that. Short info on our solution and how it works:



   - This is how a regular requirements.txt looks:

dep1
…
dep n

   - This is how a requirements.txt with inner dependencies looks:

dep1
-r somefolder/another-requirements.txt
-r completelyanotherfolder/another-requirements.txt
…
dep n

That's what we did for oslo.messaging. But we ran into a problem that was
identified as an openstack-infra/project-config tool issue; the tool is
called project-requirements-change
https://github.com/openstack-infra/project-config/blob/master/jenkins/scripts/project-requirements-change.py
and it is not able to handle inner dependencies in any of the
requirements.txt files - it expects to parse only an explicit set of
requirements (see the regular requirements.txt definition above).

So, I decided to fix that tool to make it able to resolve inner
dependencies; here's what I had as of yesterday:
https://review.openstack.org/#/c/153227/

Taking into account a suggestion from Monty Taylor, I'm bringing this
discussion to a much wider audience.

And the question is: aren't we doing something overly complex, or are there
any less complex ways to accomplish the initial idea of splitting
requirements?


Kind regards,

Denis M.
IRC: denis_makogon at Freenode
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][project-config][oslo.messaging] Pre-driver requirements split-up initiative

2015-02-06 Thread Denis Makogon
On Fri, Feb 6, 2015 at 4:00 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-02-06 14:37:08 +0200 (+0200), Denis Makogon wrote:
  As part of oslo.messaging initiative to split up requirements into
  certain list of per messaging driver dependencies
 [...]

 I'm curious what the end goal is here... when someone does `pip
 install oslo.messaging` what do you/they expect to get installed?
 Your run-parts style requirements.d plan is sort of
 counter-intuitive to me in that I would expect it to contain
 number-prefixed sublists of requirements which should be processed
 collectively in an alphanumeric sort order, but I get the impression
 this is not the goal of the mechanism (I'll be somewhat relieved if
 you tell me I'm mistaken in that regard).


Yes, that's the main goal as I foresee it: to have the ability to install
oslo.messaging with the dependencies for a specific driver.


  Taking into account suggestion from Monty Taylor i’m bringing this
  discussion to much wider audience. And the question is: aren’t we
  doing something complex or are there any less complex ways to
  accomplish the initial idea of splitting requirements?

 As for taking this to a wider audience we (OpenStack) are already
 venturing into special snowflake territory with PBR, however
 requirements.txt is a convention used at least somewhat outside of
 OpenStack-related Python projects. It might make sense to get input
 from the broader Python packaging community on something like this
 before we end up alienating ourselves from them entirely.


Sure, that's what I'm looking for.


 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Kind regards,
Denis M.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][project-config][oslo.messaging] Pre-driver requirements split-up initiative

2015-02-06 Thread Denis Makogon
On Fri, Feb 6, 2015 at 4:16 PM, Donald Stufft don...@stufft.io wrote:


  On Feb 6, 2015, at 9:00 AM, Jeremy Stanley fu...@yuggoth.org wrote:
 
  On 2015-02-06 14:37:08 +0200 (+0200), Denis Makogon wrote:
  As part of oslo.messaging initiative to split up requirements into
  certain list of per messaging driver dependencies
  [...]
 
  I'm curious what the end goal is here... when someone does `pip
  install oslo.messaging` what do you/they expect to get installed?
  Your run-parts style requirements.d plan is sort of
  counter-intuitive to me in that I would expect it to contain
  number-prefixed sublists of requirements which should be processed
  collectively in an alphanumeric sort order, but I get the impression
  this is not the goal of the mechanism (I'll be somewhat relieved if
  you tell me I'm mistaken in that regard).
 
  Taking into account suggestion from Monty Taylor i’m bringing this
  discussion to much wider audience. And the question is: aren’t we
  doing something complex or are there any less complex ways to
  accomplish the initial idea of splitting requirements?
 
  As for taking this to a wider audience we (OpenStack) are already
  venturing into special snowflake territory with PBR, however
  requirements.txt is a convention used at least somewhat outside of
  OpenStack-related Python projects. It might make sense to get input
  from the broader Python packaging community on something like this
  before we end up alienating ourselves from them entirely.

 I’m not sure what exactly is trying to be achieved here, but I still assert
 that requirements.txt is the wrong place for pbr to be looking and it
 should
 instead look for dependencies specified inside of a setup.cfg.

 Sorry, I should have explained what I meant by saying 'inner dependency'.
Let me be more clear at this step to avoid a misunderstanding in
terminology. An inner dependency is a redirection from requirements.txt to
another file that contains additional dependencies (-r another_deps.txt).

 More on topic, I'm not sure what inner dependencies are, but if what
 you're
 looking for is optional dependencies that only are needed in specific
 situation
 then you probably want extras, defined like:

 setup(
     extras_require={
         "somename": [
             "dep1",
             "dep2",
         ],
     },
 )


That might be the case, but since we want to split up requirements into
per-driver dependencies, it would require checking whether setup.py/setup.cfg
can handle the use of inner dependencies, for example:

setup(
    extras_require={
        "somename": [
            "-r another_file_with_deps.txt",
        ],
    },
)


 Then if you do ``pip install myproject[somename]`` it'll include dep1 and
 dep2
 in the list of dependencies, you can also depend on this in other projects
 like:

 setup(
     install_requires=["myproject[somename]>=1.0"],
 )


That's what I've been looking for. For future installations it'll be very
useful: if a cloud deployer knows which AMQP service will be used, then he'd
be able to install only the flavor of oslo.messaging that he wants, i.e.

project/requirements.txt:
...
oslo.messaging[amqp1]>=${version}
...

Really great input, thanks Donald. Appreciate it.

---
 Donald Stufft
 PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Kind regards,
Denis M.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][project-config][oslo.messaging] Pre-driver requirements split-up initiative

2015-02-06 Thread Denis Makogon
On Fri, Feb 6, 2015 at 5:54 PM, Doug Hellmann d...@doughellmann.com wrote:



 On Fri, Feb 6, 2015, at 09:56 AM, Denis Makogon wrote:
  On Fri, Feb 6, 2015 at 4:16 PM, Donald Stufft don...@stufft.io wrote:
 
  
On Feb 6, 2015, at 9:00 AM, Jeremy Stanley fu...@yuggoth.org
 wrote:
   
On 2015-02-06 14:37:08 +0200 (+0200), Denis Makogon wrote:
As part of oslo.messaging initiative to split up requirements into
certain list of per messaging driver dependencies
[...]
   
I'm curious what the end goal is here... when someone does `pip
install oslo.messaging` what do you/they expect to get installed?
Your run-parts style requirements.d plan is sort of
counter-intuitive to me in that I would expect it to contain
number-prefixed sublists of requirements which should be processed
collectively in an alphanumeric sort order, but I get the impression
this is not the goal of the mechanism (I'll be somewhat relieved if
you tell me I'm mistaken in that regard).
   
Taking into account suggestion from Monty Taylor i’m bringing this
discussion to much wider audience. And the question is: aren’t we
doing something complex or are there any less complex ways to
accomplish the initial idea of splitting requirements?
   
As for taking this to a wider audience we (OpenStack) are already
venturing into special snowflake territory with PBR, however
requirements.txt is a convention used at least somewhat outside of
OpenStack-related Python projects. It might make sense to get input
from the broader Python packaging community on something like this
before we end up alienating ourselves from them entirely.
  
   I’m not sure what exactly is trying to be achieved here, but I still
 assert
   that requirements.txt is the wrong place for pbr to be looking and it
   should
   instead look for dependencies specified inside of a setup.cfg.
  
   Sorry, i had to explain what i meant by saying 'inner dependency'. Let
 me
  be more clear at this step to avoid misunderstanding in terminology.
  Inner  dependency - is a redirection from requirements.txt to another
  file
  that contains additional dependencies (-r another_deps.txt)
 
   More on topic, I'm not sure what inner dependencies are, but if what
   you're
   looking for is optional dependencies that only are needed in specific
   situation
   then you probably want extras, defined like:
  
    setup(
        extras_require={
            "somename": [
                "dep1",
                "dep2",
            ],
        },
    )
  
  
  That might be the case, but since we want to split up requirements into
  per-driver dependecies, it would require to check if setup.cfg can handle
  use of inner dependencies. for example:
 
   setup(
       extras_require={
           "somename": [
               "-r another_file_with_deps.txt",
           ],
       },
   )

 Let's see if we can make pbr add the extras_require values. We can then
 either specify the requirements explicitly in setup.cfg, or use a naming
 convention for separate requirements files. Either way, we shouldn't
 need setuptools to understand that we are managing the list of
 requirements in files.


That might be the case. And probably pbr is the only place where we can put
that logic, since setuptools itself already supports extras.

Doug, I will take a look at pbr and will try to figure out the easiest way
to get extras_require into it. Thanks for the input.
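
For illustration only, a hypothetical shape this could take in setup.cfg
(pbr does not support this at the time of writing; the section, option and
dependency names here are assumptions):

    [extras]
    amqp1 =
        pyngus>=1.2
    rabbit =
        kombu>=2.5

A deployer could then run `pip install oslo.messaging[amqp1]` and pull in
only that driver's dependencies.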


 
   Then if you do ``pip install myproject[somename]`` it'll include dep1
 and
   dep2
   in the list of dependencies, you can also depend on this in other
 projects
   like:
  
    setup(
        install_requires=["myproject[somename]>=1.0"],
    )
  
  
  That's i've been looking for, so, for future installations it'll be very
  useful if cloud deployer knows which AMQP service will be used,
  then he'd be able to install only that type of oslo.messaging that he
  wants
  i.e.
 
  project/requirements.txt:
  ...
  oslo.messaging[amqp1]>=${version}
 
  ...
 
  Really great input, thanks Donald. Appreciate it.
 
  ---
   Donald Stufft
   PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
  
  
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  Kind regards,
  Denis M.
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack

Re: [openstack-dev] [Trove] Core reviewer update

2015-02-05 Thread Denis Makogon
+1

Congratulations Peter, Victoria and Edmond.


On Thu, Feb 5, 2015 at 6:26 PM, Nikhil Manchanda slick...@gmail.com wrote:

 Hello Trove folks:

 Keeping in line with other OpenStack projects, and attempting to keep
 the momentum of reviews in Trove going, we need to keep our core-team up
 to date -- folks who are regularly doing good reviews on the code should
 be brought in to core and folks whose involvement is dropping off should
 be considered for removal since they lose context over time, not being
 as involved.

 For this update I'm proposing the following changes:
 - Adding Peter Stachowski (peterstac) to trove-core
 - Adding Victoria Martinez De La Cruz (vkmc) to trove-core
 - Adding Edmond Kotowski (edmondk) to trove-core
 - Removing Michael Basnight (hub_cap) from trove-core
 - Removing Tim Simpson (grapex) from trove-core

 For context on Trove reviews and who has been active, please see
 Russell's stats for Trove at:
 - http://russellbryant.net/openstack-stats/trove-reviewers-30.txt
 - http://russellbryant.net/openstack-stats/trove-reviewers-90.txt

 Trove-core members -- please reply with your vote on each of these
 proposed changes to the core team. Peter, Victoria and Eddie -- please
 let me know of your willingness to be in trove-core. Michael, and Tim --
 if you are planning on being substantially active on Trove in the near
 term, also please do let me know.

 Thanks,
 Nikhil

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-28 Thread Denis Makogon
On Tue, Jan 27, 2015 at 10:26 PM, Gordon Sim g...@redhat.com wrote:

 On 01/27/2015 06:31 PM, Doug Hellmann wrote:

 On Tue, Jan 27, 2015, at 12:28 PM, Denis Makogon wrote:

 I'd like to build tool that would be able to profile messaging over
 various deployments. This tool would give me an ability to compare
 results of performance testing produced by native tools and
 oslo.messaging-based tool, eventually it would lead us into digging into
 code and trying to figure out where bad things are happening (that's
 the
 actual place where we would need to profile messaging code). Correct me
 if
 i'm wrong.


 It would be interesting to have recommendations for deployment of rabbit
 or qpid based on performance testing with oslo.messaging. It would also
 be interesting to have recommendations for changes to the implementation
 of oslo.messaging based on performance testing. I'm not sure you want to
 do full-stack testing for the latter, though.

 Either way, I think you would be able to start the testing without any
 changes in oslo.messaging.


 I agree. I think the first step is to define what to measure and then
 construct an application using olso.messaging that allows the data of
 interest to be captured using different drivers and indeed different
 configurations of a given driver.

 I wrote a very simple test application to test one aspect that I felt was
 important, namely the scalability of the RPC mechanism as you increase the
 number of clients and servers involved. The code I used is
 https://github.com/grs/ombt, its probably stale at the moment, I only
 link to it as an example of approach.

 Using that test code I was then able to compare performance in this one
 aspect across drivers (the 'rabbit', 'qpid' and new amqp 1.0 based drivers
 - I wanted to try zmq, but couldn't figure out how to get it working at the
 time), and for different deployment options using a given driver (amqp 1.0
 using qpidd or qpid dispatch router in either standalone or with multiple
 connected routers).

 There are of course several other aspects that I think would be important
 to explore: notifications, more specific variations in the RPC 'topology'
 i.e. number of clients on given server number of servers in single group
 etc, and a better tool (or set of tools) would allow all of these to be
 explored.

 From my experimentation, I believe the biggest differences in scalability
 are going to come not from optimising the code in oslo.messaging so much as
 choosing different patterns for communication. Those choices may be
 constrained by other aspects as well of course, notably approach to
 reliability.



After a couple of internal discussions and hours of investigation, I think
I've found the most applicable solution, one that will support the
performance testing approach and will eventually yield recommendations for
messaging driver configuration and AMQP service deployment.

The solution I've been talking about is already pretty well known across
OpenStack components - Rally and its scenarios.
Why would it be the best option? Rally scenarios would not touch the
messaging core part, and scenarios are gate-able.
Even if we're talking about internal testing, scenarios are very useful in
this case,
since they are something that can be tuned/configured taking environment
needs into account; a sketch of what such a scenario would measure follows
below.
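
For illustration, the body of such a scenario might boil down to something
like this minimal sketch (topic, server and payload are illustrative; only a
running AMQP service and an 'echo' endpoint are assumed):

    # Minimal sketch of an RPC round-trip timing scenario.
    import time

    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
    target = oslo_messaging.Target(topic='perf_topic', server='perf_server')
    client = oslo_messaging.RPCClient(transport, target)

    start = time.time()
    client.call({}, 'echo', payload='x' * 1024)
    print('round-trip: %.6fs' % (time.time() - start))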

Doug, Gordon, what do you think about bringing scenarios into messaging?

__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Kind regards,
Denis M.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-28 Thread Denis Makogon
On Wed, Jan 28, 2015 at 11:39 AM, Flavio Percoco fla...@redhat.com wrote:

 On 28/01/15 10:23 +0200, Denis Makogon wrote:



 On Tue, Jan 27, 2015 at 10:26 PM, Gordon Sim g...@redhat.com wrote:

On 01/27/2015 06:31 PM, Doug Hellmann wrote:
  On Tue, Jan 27, 2015, at 12:28 PM, Denis Makogon wrote:
  I'd like to build tool that would be able to profile
 messaging over
various deployments. This tool would give me an ability to
compare
results of performance testing produced by native tools and
oslo.messaging-based tool, eventually it would lead us into
 digging
into
code and trying to figure out where bad things are happening
(that's
the
actual place where we would need to profile messaging code).
Correct me
if
i'm wrong.


It would be interesting to have recommendations for deployment of
rabbit
or qpid based on performance testing with oslo.messaging. It would
 also
be interesting to have recommendations for changes to the
implementation
of oslo.messaging based on performance testing. I'm not sure you
 want
to
do full-stack testing for the latter, though.

Either way, I think you would be able to start the testing without
 any
changes in oslo.messaging.

I agree. I think the first step is to define what to measure and then
construct an application using olso.messaging that allows the data of
interest to be captured using different drivers and indeed different
configurations of a given driver.

I wrote a very simple test application to test one aspect that I felt
 was
important, namely the scalability of the RPC mechanism as you increase
 the
number of clients and servers involved. The code I used is https://
github.com/grs/ombt, its probably stale at the moment, I only link to
 it as
an example of approach.

Using that test code I was then able to compare performance in this one
aspect across drivers (the 'rabbit', 'qpid' and new amqp 1.0 based
 drivers
_ I wanted to try zmq, but couldn't figure out how to get it working
 at the
time), and for different deployment options using a given driver (amqp
 1.0
using qpidd or qpid dispatch router in either standalone or with
 multiple
connected routers).

There are of course several other aspects that I think would be
 important
to explore: notifications, more specific variations in the RPC
 'topology'
i.e. number of clients on given server number of servers in single
 group
etc, and a better tool (or set of tools) would allow all of these to be
explored.

From my experimentation, I believe the biggest differences in
 scalability
are going to come not from optimising the code in oslo.messaging so
 much as
choosing different patterns for communication. Those choices may be
constrained by other aspects as well of course, notably approach to
reliability.




 After a couple of internal discussions and hours of investigation, I think
 I've found the most applicable solution, one that will support the
 performance testing approach and will eventually yield recommendations
 for messaging driver configuration and AMQP service deployment.

 The solution I've been talking about is already pretty well known across
 OpenStack components - Rally and its scenarios.
 Why would it be the best option? Rally scenarios would not touch the
 messaging core part, and scenarios are gate-able.
 Even if we're talking about internal testing, scenarios are very useful
 in this case,
 since they are something that can be tuned/configured taking environment
 needs into account.

 Doug, Gordon, what do you think about bringing scenarios into messaging?


 I personally wouldn't mind having them but I'd like us to first
 discuss what kind of scenarios we want to test.

 I'm assuming these scenarios would be pure oslo.messaging scenarios
 and they won't require any of the openstack services. Therefore, I
 guess these scenarios would test things like performance with many

consumers, performance with several (a)synchronous calls, etc. What
 performance means in this context will have to be discussed as well.


Correct, oslo.messaging scenarios would expect to have only an AMQP service
and nothing else.
Yes, that's what I've been thinking about. Also, I'd like to share a doc
that I've found, see [1].
As I see it, it would be more than useful to enable the following scenarios:

   - Single multi-threaded publisher (RPC client) against a single
   multi-threaded consumer
  - using RPC cast/call methods, try to measure the time between a request
  and its response.
   - Multiple multi-threaded publishers against a single multi-threaded
   consumer
  - using RPC cast/call methods, try to measure the time between requests
  and responses for multiple publishers.
   - Multiple multi-thread

Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-27 Thread Denis Makogon
On Thu, Jan 15, 2015 at 8:56 PM, Doug Hellmann d...@doughellmann.com
wrote:


  On Jan 15, 2015, at 1:30 PM, Denis Makogon dmako...@mirantis.com
 wrote:
 
  Good day to All,
 
  The question that i’d like to raise here is not simple one, so i’d like
 to involve as much readers as i can. I’d like to speak about oslo.messaging
 performance testing. As community we’ve put lots of efforts in making
 oslo.messaging widely used drivers stable as much as possible. Stability is
 a good thing, but is it enough for saying “works well”? I’d say that it’s
 not.
  Since oslo.messaging uses driver-based messaging workflow, it makes
 sense to dig into each driver and collect all required/possible performance
 metrics.
  First of all, it does make sense to figure out how to perform
 performance testing, first that came into my mind is to simulate high load
 on one of corresponding drivers. Here comes the question of how it can be
 accomplished withing available oslo.messaging tools - high load on any
 driver can perform an application that:
• can populate multiple emitters(rpc clients) and consumers (rpc
 servers).
• can force clients to send messages of pre-defined number of
 messages of any length.

 That makes sense.

  Another thing is why do we need such thing. Profiling, performance
 testing can improve the way in which our drivers were implemented. It can
 show us actual “bottlenecks” in messaging process, in general. In some
 cases it does make sense to figure out where problem takes its place -
 whether AMQP causes messaging problems or certain driver that speaks to
 AMQP fails.
  Next thing that i want to discuss the architecture of
 profiling/performance testing. As i can see it seemed to be a “good” way to
 add profiling code to each driver. If there’s any objection or better
 solution, please bring them to the light.

 What sort of extra profiling code do you anticipate needing?


As I can foresee (taking into account [1]): a couple of decorators, possibly
one that handles the metering process - something like the sketch below. The
biggest part of the code will be the high-load tool that'll be a part of
messaging. Another open question is adding certain dependencies to the
project.
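
Something along these lines, purely as a sketch (not existing oslo.messaging
code):

    # Minimal sketch of a metering decorator recording per-call wall time.
    import functools
    import time

    def timed(metrics):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                start = time.time()
                try:
                    return func(*args, **kwargs)
                finally:
                    metrics.setdefault(func.__name__, []).append(
                        time.time() - start)
            return wrapper
        return decorator

    # usage: decorate a driver's send/poll methods, then inspect `metrics`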


  Once we’d have final design for profiling we would need to figure out
 tools for profiling. After searching over the web, i found pretty
 interesting topic related to python profiling [1]. After certain
 investigations it does makes sense discuss next profiling options(apply one
 or both):
• Line-by-line timing and execution frequency with a profiler
 (there are possible Pros and Cons, but i would say the per-line statistics
 is more than appreciable at initial performance testing steps)
• Memory/CPU consumption
  Metrics. The most useful metric for us is time, any time-based metric,
 since it is very useful to know at which step or/and by whom delay/timeout
 caused, for example, so as it said, we would be able to figure out whether
 AMQP or driver fails to do what it was designed for.
  Before proposing spec i’d like to figure out any other requirements, use
 cases and restrictions for messaging performance testing. Also, if there
 any stories of success in boosting python performance - feel free to share
 it.

 The metrics to measure depend on the goal. Do we think the messaging code
 is using too much memory? Is it too slow? Or is there something else
 causing concern?

 It does make sense to have profiling for cases when trying to scale up a
cluster, and it'll be good to have the ability to figure out whether the
scaled AMQP service has its best configuration (I guess here comes the
question of doing performance testing with well-known tools). The most
interesting question is how much the messaging driver decreases (or leaves
untouched) the throughput between RPC client and server. These metering
results can be compared against those of tools that were designed for
performance testing. That's why having profiling/performance testing based
on a high-load technique would be a good step forward.


 
 
 
  [1] http://www.huyng.com/posts/python-performance-analysis/
 
  Kind regards,
  Denis Makogon
  IRC: denis_makogon
  dmako...@mirantis.com
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Kind regards,
Denis M.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org

Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-27 Thread Denis Makogon
On Tue, Jan 27, 2015 at 7:15 PM, Doug Hellmann d...@doughellmann.com
wrote:



 On Tue, Jan 27, 2015, at 10:56 AM, Denis Makogon wrote:
  On Thu, Jan 15, 2015 at 8:56 PM, Doug Hellmann d...@doughellmann.com
  wrote:
 
  
On Jan 15, 2015, at 1:30 PM, Denis Makogon dmako...@mirantis.com
   wrote:
   
Good day to All,
   
The question that i’d like to raise here is not simple one, so i’d
 like
   to involve as much readers as i can. I’d like to speak about
 oslo.messaging
   performance testing. As community we’ve put lots of efforts in making
   oslo.messaging widely used drivers stable as much as possible.
 Stability is
   a good thing, but is it enough for saying “works well”? I’d say that
 it’s
   not.
Since oslo.messaging uses driver-based messaging workflow, it makes
   sense to dig into each driver and collect all required/possible
 performance
   metrics.
First of all, it does make sense to figure out how to perform
   performance testing, first that came into my mind is to simulate high
 load
   on one of corresponding drivers. Here comes the question of how it can
 be
   accomplished withing available oslo.messaging tools - high load on any
   driver can perform an application that:
  • can populate multiple emitters(rpc clients) and consumers
 (rpc
   servers).
  • can force clients to send messages of pre-defined number of
   messages of any length.
  
   That makes sense.
  
Another thing is why do we need such thing. Profiling, performance
   testing can improve the way in which our drivers were implemented. It
 can
   show us actual “bottlenecks” in messaging process, in general. In some
   cases it does make sense to figure out where problem takes its place -
   whether AMQP causes messaging problems or certain driver that speaks to
   AMQP fails.
Next thing that i want to discuss the architecture of
   profiling/performance testing. As i can see it seemed to be a “good”
 way to
   add profiling code to each driver. If there’s any objection or better
   solution, please bring them to the light.
  
   What sort of extra profiling code do you anticipate needing?
  
  
   As I can foresee (taking into account [1]): a couple of decorators,
   possibly one
   that handles the metering process. The biggest part of the code will be
   the high-load
   tool that'll be a part of messaging. Another open question is adding
   certain dependencies to the project.
 
 
Once we’d have final design for profiling we would need to figure out
   tools for profiling. After searching over the web, i found pretty
   interesting topic related to python profiling [1]. After certain
   investigations it does makes sense discuss next profiling
 options(apply one
   or both):
  • Line-by-line timing and execution frequency with a profiler
   (there are possible Pros and Cons, but i would say the per-line
 statistics
   is more than appreciable at initial performance testing steps)
  • Memory/CPU consumption
Metrics. The most useful metric for us is time, any time-based
 metric,
   since it is very useful to know at which step or/and by whom
 delay/timeout
   caused, for example, so as it said, we would be able to figure out
 whether
   AMQP or driver fails to do what it was designed for.
Before proposing spec i’d like to figure out any other requirements,
 use
   cases and restrictions for messaging performance testing. Also, if
 there
   any stories of success in boosting python performance - feel free to
 share
   it.
  
   The metrics to measure depend on the goal. Do we think the messaging
 code
   is using too much memory? Is it too slow? Or is there something else
   causing concern?
  
    It does make sense to have profiling for cases when trying to scale up
   a cluster, and it'll be good to have the ability to figure out whether
   the scaled AMQP service has its best configuration (I guess here comes
   the question of doing performance testing with well-known tools). The
   most interesting question is how much the messaging driver decreases (or
   leaves untouched) the throughput between RPC client and server. These
   metering results can be compared against those of tools designed for
   performance testing. That's why having profiling/performance testing
   based on a high-load technique would be a good step forward.

 That makes it sound like you want to build performance testing tools for
 the infrastructure oslo.messaging is using, and not for oslo.messaging
 itself. Is that right?

 I'd like to build a tool that would be able to profile messaging over
various deployments. This tool would give me the ability to compare the
results of performance testing produced by native tools and by an
oslo.messaging-based tool; eventually it would lead us into digging into the
code and trying to figure out where the bad things are happening (that's the
actual place where we would need to profile the messaging code). Correct me
if I'm wrong.

Doug

 
 
   
   
   
[1] http://www.huyng.com/posts/python

Re: [openstack-dev] oslo_log/oslo_config initialization

2015-01-21 Thread Denis Makogon
On Wed, Jan 21, 2015 at 12:16 PM, Qiming Teng teng...@linux.vnet.ibm.com
wrote:

 Hi,

 In the oslo_log 0.1.0 release, the setup() function demands for a conf
 parameter, but I have failed to find any hint about setting this up.

 The problem is cfg.CONF() returns None, so the following code fails:

   conf = cfg.CONF(name='prog', project='project')
   # conf is always None here, so the following call fails
   log.setup(conf, 'project')

 Another attempt also failed, because it cannot find any options:

   log.setup(cfg.CONF, 'project')

 Any hint or sample code to setup logging if I'm abandoning the log
 module from oslo.incubator?


You might take a look at
https://github.com/openstack/oslo.log/blob/master/oslo_log/_options.py
Those options are what oslo_log expects to find in service configuration
files.
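
For illustration, a minimal working setup looks roughly like this (assuming
the graduated oslo.log API; the key point is that calling cfg.CONF(...)
configures the object in place - hence the None you saw - so you pass CONF
itself to log.setup()):

    # Minimal sketch of oslo.log setup; 'project' is illustrative.
    from oslo_config import cfg
    from oslo_log import log

    CONF = cfg.CONF
    log.register_options(CONF)   # register oslo.log's options on CONF first
    CONF([], project='project')  # parses/configures CONF in place
    log.setup(CONF, 'project')   # CONF now carries the logging options

    LOG = log.getLogger(__name__)
    LOG.info('logging configured')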


 Thanks!

 Regards,
   Qiming


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Kind regards,
Denis M.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove][Cassandra] Cassandra clustering implementation progress

2015-01-16 Thread Denis Makogon
Hello to All.


I'd like to notify those of you who are interested in the clustering
implementation in Trove that I've made big progress with the Cassandra
clustering implementation.
Here are a couple of patches that may be useful for those of you who want to
verify the implementation:

   - python-troveclient patchset https://review.openstack.org/#/c/145801/
   - trove-integration patchset https://review.openstack.org/#/c/146098/
   - trove patches:
  - API https://review.openstack.org/#/c/146145/
  - Taskmanager + Guestagent + Integration testing
  https://review.openstack.org/#/c/146146/


A couple of tasks were left for discussion:

   1. Infra gate job - on a regular basis or experimental?
   2. The given integration testing covers the minimal case - cluster
   provisioning. Additional tests will be proposed later.


If there are any questions, feel free to ask; a rough provisioning sketch
with python-troveclient follows below. Thanks.
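
For illustration, with the python-troveclient patch above applied, cluster
provisioning roughly looks like this sketch (credentials, flavor and volume
values are illustrative):

    # Minimal sketch using python-troveclient's v1 API with the clustering
    # patch applied.
    from troveclient.v1 import client

    trove = client.Client('user', 'password', project_id='tenant',
                          auth_url='http://keystone:5000/v2.0')
    cluster = trove.clusters.create(
        'cassandra-cluster', 'cassandra', '2.1.0',
        instances=[{'flavorRef': '2', 'volume': {'size': 2}}
                   for _ in range(3)])
    print(cluster.id)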


Kind regards,

Denis M.

IRC: denis_makogon

dmako...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-15 Thread Denis Makogon
Good day to All,

The question that I'd like to raise here is not a simple one, so I'd like to
involve as many readers as I can. I'd like to talk about oslo.messaging
performance testing. As a community we've put lots of effort into making
oslo.messaging's widely used drivers as stable as possible. Stability is a
good thing, but is it enough to say they "work well"? I'd say that it's not.

Since oslo.messaging uses a driver-based messaging workflow, it makes sense
to dig into each driver and collect all the required/possible performance
metrics.

First of all, it makes sense to figure out how to perform the performance
testing; the first idea that came to my mind is to simulate high load on
each of the corresponding drivers. Here comes the question of how this can
be accomplished within the available oslo.messaging tools - high load on any
driver can be produced by an application that:

   - can spawn multiple emitters (RPC clients) and consumers (RPC servers).
   - can force clients to send a pre-defined number of messages of any
   length.

Another thing is why we need such a tool. Profiling and performance testing
can improve the way our drivers are implemented. They can show us the actual
"bottlenecks" in the messaging process in general. In some cases it makes
sense to figure out where the problem lies - whether AMQP causes the
messaging problems or the driver that speaks to AMQP fails.

The next thing that I want to discuss is the architecture of the
profiling/performance testing. As I see it, the obvious way would be to add
profiling code to each driver. If there are any objections or better
solutions, please bring them to light.

Once we had a final design for profiling, we would need to figure out the
tools for it. After searching the web, I found a pretty interesting topic
related to Python profiling [1]. After some investigation it makes sense to
discuss the following profiling options (apply one or both):

   - Line-by-line timing and execution frequency with a profiler (there are
   possible pros and cons, but I would say the per-line statistics are more
   than welcome at the initial performance testing steps)
   - Memory/CPU consumption

Metrics. The most useful metric for us is time - any time-based metric -
since it is very useful to know at which step and/or by whom a delay/timeout
was caused; as said above, we would then be able to figure out whether AMQP
or the driver fails to do what it was designed for.

Before proposing a spec I'd like to collect any other requirements, use
cases and restrictions for messaging performance testing. Also, if there are
any success stories in boosting Python performance - feel free to share
them.



[1] http://www.huyng.com/posts/python-performance-analysis/

Kind regards,

Denis Makogon

IRC: denis_makogon

dmako...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Openstack Havana installation using devstack

2015-01-15 Thread Denis Makogon
On Thu, Jan 15, 2015 at 2:36 PM, abhishek jain ashujain9...@gmail.com
wrote:

 I'm also facing the same trouble.
 On Jan 15, 2015 12:05 PM, masoom alam masoom.a...@gmail.com wrote:

 No

 I want Havana purposefully.

 Thanks

 On Thu, Jan 15, 2015 at 10:45 AM, iKhan ik.ibadk...@gmail.com wrote:

 Go with stable/icehouse then.

 On Thu Jan 15 2015 at 11:13:37 AM masoom alam masoom.a...@gmail.com
 wrote:

 Hi every one,

 How can I install Openstack Havana using devstack.

 The problem is that Havana branch does not exist on Github.

 git clone https://github.com/openstack-dev/devstack.git -b
 stable/havana

 Please guide.



GitHub is not something you should rely on. Feel free to use git.o.o:
http://git.openstack.org/cgit/openstack-dev/devstack/



 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Kind regards,
Denis M.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] confused about trove-guestagent need nova's auth info

2015-01-11 Thread Denis Makogon
Hello to all.

On Sunday, January 11, 2015, Mark Kirkwood mark.kirkw...@catalyst.net.nz
wrote:

 On 18/12/14 14:30, 乔建 wrote:

 When using trove, we need to configure nova’s user information in the
 configuration file of trove-guestagent, such as

  nova_proxy_admin_user
  nova_proxy_admin_pass
  nova_proxy_admin_tenant_name




 Is it necessary? In a public cloud environment, It will lead to serious
 security risks.


 I traced the code, and noticed that the auth data mentioned above is
 packaged in a context object, then passed to the trove-conductor via
 message queue.

 Is it more suitable for trove-conductor to get the corresponding
 information from its own conf file?



The guest agent doesn't need the configuration options described above;
IIRC, only the taskmanager needs them.
About passing auth data: what are the benefits of changing the way in which
auth data is shipped? If you are still concerned about security risks - you
may use SSL, which is available in most messaging services.


 Yes - all good points. Experimenting with devstack Juno branch, it seems
 you can happily remove these three settings.

 However the guest agent does seem to need the rabbit host and password,
 which is probably undesirable for the same reasons that you mentioned above.

 Regards

 Mark


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Kind regards,
Denis M.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Should region name be case insensitive?

2015-01-07 Thread Denis Makogon
Hello Zhou.

On Wed, Jan 7, 2015 at 10:39 AM, Zhou, Zhenzan zhenzan.z...@intel.com
wrote:

 Hi,



 I meet such an issue when using glance/nova client deployed with Devstack
 to talk with a cloud deployed with TripleO:



 [minicloud@minicloud allinone]$ glance image-list

 public endpoint for image service in RegionOne region not found



Both the glance and nova Python client libraries allow users to specify a
region name (see
http://docs.openstack.org/user-guide/content/sdk_auth_nova.html).
So, you are free to mention any region you want, for example:
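
A minimal sketch with the legacy novaclient auth (credential values are
placeholders):

    # Minimal sketch: pin the client to a specific region.
    from novaclient import client

    nova = client.Client('2', 'user', 'password', 'tenant',
                         'http://keystone:5000/v2.0',
                         region_name='regionOne')
    print(nova.servers.list())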


  The reason is that Devstack uses “RegionOne” as default but TripleO uses
 “regionOne” and

 keystoneclient/service_catalog.py: get_endpoints() does a case sensitive
 string compare.



 I'm not a DB expert, but normally a database does case-insensitive
 collation, so should we do a case-insensitive compare here?

 Thanks a lot.



 BR

 Zhou Zhenzan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Kind regards,
Denis M.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder bug commit] How to commit a bug which depends on another bug that have not been merged?

2015-01-07 Thread Denis Makogon
Hello liuxinguo.

On Wed, Jan 7, 2015 at 9:13 AM, liuxinguo liuxin...@huawei.com wrote:

  Hi all,



  · I have committed a fix for a bug and it is not yet merged. Now I want to
  commit a fix for another bug, but it depends on the previous one, which
  has not been merged.

  · So what should I do? Should I commit the latter fix directly, or wait
  for the previous one to be merged?




You are free to make dependent commits. More info can be found at
https://ask.openstack.org/en/question/31633/gerrit-best-way-to-make-a-series-of-dependent-commits/


  Any input will be appreciated, thanks!



 liu



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Kind regards,
Denis M.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI][TripleO][Ironic] check-tripleo-ironic-xxx fails

2014-12-26 Thread Denis Makogon
On Fri, Dec 26, 2014 at 11:01 AM, Adam Gandelman ad...@ubuntu.com wrote:

 This is fallout from a new upstream release of pip that went out earlier
 in the week.  It looks like no formal bug ever got filed, though the same
 problem was discovered in devstack and trove's integration testing repository.
 Added some comments to the bug.


A proper solution was merged into devstack (see [1]) and proposed for
trove-integration (see [2]). So it seems that we've faced the same
problem across multiple projects that rely on pip.


[1] https://review.openstack.org/#/c/143501
[2] https://review.openstack.org/#/c/143746/

Kind regards,
Denis M.


 On Thu, Dec 25, 2014 at 10:59 PM, James Polley j...@jamezpolley.com wrote:

 Thanks for the alert

 The earliest failure I can see because of this is
 http://logs.openstack.org/43/141043/6/check-tripleo/check-tripleo-ironic-overcloud-precise-nonha/36c9771/

 I've raised https://bugs.launchpad.net/tripleo/+bug/1405732 and I've put
 some preliminary notes on
 https://etherpad.openstack.org/p/tripleo-ci-breakages

 On Fri, Dec 26, 2014 at 3:42 AM, ZhiQiang Fan aji.zq...@gmail.com
 wrote:

 check-tripleo-ironic-xxx failed because:

 rm -rf /home/jenkins/.cache/image-create/pypi/mirror/
 rm: cannot remove `/home/jenkins/.cache/image-create/pypi/mirror/':
 Permission denied

 see

 http://logs.openstack.org/37/143937/1/check-tripleo/check-tripleo-ironic-overcloud-precise-nonha/9be729b/console.html

 search on logstash.openstack.org:
 message:rm: cannot remove
 `/home/jenkins/.cache/image-create/pypi/mirror/\': Permission denied
 there are 59 hits in the last 48 hours



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [project-config][infra] Advanced way to run specific jobs against code changes

2014-12-11 Thread Denis Makogon
Good day Stackers.


I’d like to raise a question about implementing custom pipelines for Zuul.
For those of you who are pretty familiar with project-config and infra
itself, it won’t be news that for now the Zuul layout supports only a few
pipeline types [1]
https://github.com/openstack-infra/project-config/blob/17990d544f5162b9eebaa6b9781e7abbeab57b42/zuul/layout.yaml
.

Most OpenStack projects maintain more than one type of driver
(for Nova - virt drivers, Trove - datastore drivers,
Cinder - volume backends, etc.). And, as can be seen, the existing jenkins
check jobs do not utilize infra resources wisely.

This is a real problem; just remember the end of every release - the number
of check/recheck jobs is huge.


So, how can we utilize resources more wisely and run only the needed check
jobs? It would be similar to how we already handle unstable new check jobs -
putting them into the ‘experimental’ pipeline. So why can’t we give projects
the ability to define their own pipelines?


For example, as a code reviewer, I see that a patch touches specific
functionality of driver A, and I know that the project's testing
infrastructure provides an ability
to examine the specific workflow for driver A. Then it seems more than
valid to post a comment on the review like “check driver-a”. As you can see,
I want to ask gerrit to trigger a custom pipeline for the given project. Let
me describe a more concrete example from the “real world”. In Trove we
maintain 5 different
drivers for different datastores, and it doesn’t look like a good thing to
run all check jobs against code that doesn’t actually touch any of the
existing datastore
drivers (this is what we have right now [2]
https://github.com/openstack-infra/project-config/blob/17990d544f5162b9eebaa6b9781e7abbeab57b42/zuul/layout.yaml#L1500-L1526
).

   Now here comes my proposal. I’d like to extend the existing Zuul
pipelines to support any of the needed check jobs (see the example of
Triple-O, [3]
https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L174-L220
).

   But, as I can see, there are possible problems with such an
approach, so I also have an alternative proposal to the one above. The
only way to deal with it
   is to use a REGEX ‘files’ attribute in the job definitions (example:
the requirements check job [4]
https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L659-L665).
In this case we’d still maintain only one ‘experimental’ pipeline
   for all second-priority jobs. To make a small summary, two ways
were proposed:



   - Pipeline(s) per project. Pros: a reviewer can trigger a specific
     pipeline by himself. Cons: spamming status/zuul.

   - REGEX ‘files’ per additional job.


Sorry, but I’m not able to describe all the pros/cons for each of the
proposals, so if you know them, please help me figure them out.


All thoughts/suggestions are welcome.


Kind regards

Denis M.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][oslo.messaging] Multi-versioning RPC Service API support

2014-11-19 Thread Denis Makogon
Hello Stackers.




	When I was browsing through the bugs of oslo.messaging [1] I found one
[2] that is pretty interesting (it’s as old as the universe), but it doesn’t
seem like a bug - it is more like a blueprint.

Digging into the code of oslo.messaging I’ve found that, at least for now,
there’s no way to launch a single service that would be able to handle
multiple versions (actually it can, if the manager implementation can handle
requests for different RPC API versions).


	So, I’d like to understand whether it’s still valid. And if it is, I’d
like to collect use cases from all projects and see if oslo.messaging can
handle such a case.

	But, as a first step to understanding the multi-versioning/multi-manager
strategy for RPC services, I want to clarify a few things. The current code
maps a single version to a list of RPC service endpoint implementations, so
here comes the question:

- Does a set of endpoints represent a single RPC API version cap?

	If so, how should we represent multi-versioning? If we followed the
existing pattern - each RPC API version cap represents its own set of
endpoints -
let me provide some implementation details here: for now ‘endpoints’ is a
list of classes for a single version cap, but if we supported multiple
version
caps, ‘endpoints’ would become a dictionary that contains pairs of
‘version_cap’-’endpoints’. This type of multi-versioning seems to be the
easiest.
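
To make the idea concrete, here is a hedged sketch (using the present-day
oslo_messaging import path; the service names are illustrative) of how one
server could expose two version caps by giving each endpoint set its own
Target version - the dispatcher already routes requests by version
compatibility:

    import oslo_messaging
    from oslo_config import cfg

    class V1Endpoint(object):
        # Serves clients that speak the 1.x API.
        target = oslo_messaging.Target(version='1.3')

        def create(self, ctxt, name):
            return 'created %s (v1)' % name

    class V2Endpoint(object):
        # Serves clients that speak the 2.x API (incompatible major).
        target = oslo_messaging.Target(version='2.0')

        def create(self, ctxt, name, flavor):
            return 'created %s/%s (v2)' % (name, flavor)

    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='myservice', server='host-1')
    server = oslo_messaging.get_rpc_server(
        transport, target, [V1Endpoint(), V2Endpoint()],
        executor='threading')
    # A client then picks the cap it speaks:
    #   client = oslo_messaging.RPCClient(transport, target)
    #   client.prepare(version='2.0').call(ctxt, 'create',
    #                                      name='db1', flavor='small')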


Thoughts/Suggestion?


[1] https://bugs.launchpad.net/oslo.messaging

[2] https://bugs.launchpad.net/oslo.messaging/+bug/1050374


Kind regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Clustering API next steps

2014-11-17 Thread Denis Makogon
Good day, Stackers/Trovers.


During the Paris Design Summit clustering session [1] we, as a community,
came up with the need to change the existing clustering API. By changing the
API I mean deprecating an existing clustering action because of its close
connection to the MongoDB datastore. There were no disagreements or concerns
about deprecating the existing “add_shard” action in favor of something more
generic.

But here comes the question of API compatibility. To ensure that we
don’t break it, we would need to add another action (that would
eventually substitute the “add_shard” action) and maintain the “add_shard”
action for N releases (IIRC one more release would be enough).

Based on the suggestions given during the clustering session I’ve written a
spec [2] that reflects all the needed changes in the clustering framework.

I’d like to collect all suggestions/concerns about the given spec, and I’d
like to discuss it during the next BP meeting [3].

[1] Session etherpad
https://etherpad.openstack.org/p/kilo-summit-trove-clusters
[2] Spec proposal https://review.openstack.org/#/c/134583/
[3] BP review schedule
https://wiki.openstack.org/wiki/Meetings/TroveBPMeeting#Nov._24_Meeting


Kind regards,
Denis M.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Denis Makogon
Good day, Stackers.


During the Paris Design Summit oslo.messaging session a good question was
raised about maintaining the ZeroMQ driver upstream (see the section
“dropping ZeroMQ support in oslo.messaging” at [1]). As we all know, good
thoughts always come afterwards. I’d like to propose several improvements to
the process of maintaining and developing the ZeroMQ driver upstream.


Contribution focus. As we can all see, there are quite a few patches trying
to address certain problems related to the ZeroMQ driver.

A few of them try to add functional tests, which is definitely good, but …
there’s always a ‘but’: they are not “gate”-able.

My proposal for this topic is to change the contribution focus from
oslo.messaging itself to the OpenStack/Infra project and DevStack
(and subsequently to devstack-gate too).

I guess there will be questions of “why?”.  I think the answer is pretty
obvious: we have a driver that is not being tested at all within DevStack
and project integration.

Also I’d say that such a focus re-orientation would eventually be very
useful as a source of use cases and bugs. Here’s a list of what we, as a
team, should do first:

   1. Ensure that DevStack can successfully:

      1. Install ZeroMQ.

      2. Configure each project to work with the zmq driver from
         oslo.messaging.

   2. Ensure that we can successfully run a simple test plan for each
      project (like boot a VM, fill an object store container, spin up a
      volume, etc.).


ZeroMQ driver maintainers community organization. During the design session
a question was raised about who uses the zmq driver in production.

I’ve seen folks from Canonical and a few other companies. So, here are my
proposals around improving the process of maintaining the given driver:

   1. With respect to best practices of the driver maintenance procedure,
      we might need to set up a community sub-group. What would it give us,
      and the project subsequently? It’s not entirely obvious, at least for
      now, but I’d try to highlight a couple of points:

      1. continuous driver stability

      2. continuous community support (across all OpenStack projects that
         use the same model: a driver should have a maintaining team,
         whether a company or a community sub-group)

   2. As a sub-group we would need to have our own weekly meeting. A
      separate meeting would keep us, as a sub-group, focused on the zmq
      driver only (but it doesn’t mean that we should not participate in
      the regular meetings). Same question: what would it give us and the
      project? I’d say that the only valid answer is: we would not disturb
      other folks who are not actually interested in the given topic or in
      the zmq driver.


So, in the end, taking into account the words above, I’d like to get
feedback from all folks. I’m pretty open to discussion, and if needed I
can commit myself to driving such activities in oslo.messaging.


[1] https://etherpad.openstack.org/p/kilo-oslo-oslo.messaging


Kind regards,

Denis M.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Zero MQ remove central broker. Architecture change.

2014-11-17 Thread Denis Makogon
On Mon, Nov 17, 2014 at 4:26 PM, Russell Bryant rbry...@redhat.com wrote:

 On 11/17/2014 05:44 AM, Ilya Pekelny wrote:
  Hi, all!
 
  We want to discuss the opportunity of implementing the p2p messaging
  model in oslo.messaging for the ZeroMQ driver.

 On a related note, have you looked into AMQP 1.0 at all?  I have been
 hopeful about the development to support it because of these same reasons.

 The AMQP 1.0 driver is now merged.  I'd really like to see some work
 around trying it out with the dispatch router [1].  It seems like using
 amqp 1.0 + a distributed network of dispatch routers could be a very
 scalable approach.  We still need to actually try it out and do some
 scale and performance testing, though.


Russell, thanks for pointing it out. We'll definitely take a look at
this.
But the question of performance and integration/functional testing is still
a tough one.
We, as the oslo.messaging community, are trying to do our best on it.

[1] http://qpid.apache.org/components/dispatch-router/

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Best regards,
Denis M.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Denis Makogon
On Mon, Nov 17, 2014 at 4:26 PM, James Page james.p...@ubuntu.com wrote:


 Hi Denis

 On 17/11/14 07:43, Denis Makogon wrote:
  During the Paris Design Summit oslo.messaging session a good
  question was raised about maintaining the ZeroMQ driver upstream (see
  the section “dropping ZeroMQ support in oslo.messaging” at [1]). As we
  all know, good thoughts always come afterwards. I’d like to propose
  several improvements to the process of maintaining and developing the
  ZeroMQ driver upstream.
 
  Contribution focus. As we can all see, there are quite a few patches
  trying to address certain problems related to the ZeroMQ
  driver.
 
  A few of them try to add functional tests, which is definitely
  good, but … there’s always a ‘but’: they are not “gate”-able.

 I'm not sure I understand your statement about them not being
 gate-able - the functional/unit tests currently proposed for the zmq
 driver run fine as part of the standard test suite execution - maybe
 the confusion is over what 'functional' actually means, but in my
 opinion until we have some level of testing of this driver, we can't
 effectively make changes and fix bugs.


I do agree that there's confusion about what functional testing means.
Another thing: what is the best solution? Unit tests are welcome, but they
still remain units (they are using mocks, etc.).
I'd try to define what 'functional testing' means for me. Functional testing
in oslo.messaging means that we are using a real service for messaging
(in this case - a deployed 0mq). So, by the simple definition, in terms of
OpenStack integration, we should be able to run the full Tempest test suite
for OpenStack services that are using oslo.messaging with the zmq driver
enabled. Am I right or not?


  My proposal for this topic is to change the contribution focus from
  oslo.messaging itself to the OpenStack/Infra project and DevStack
  (subsequently to devstack-gate too).
 
  I guess there will be questions of “why?”.  I think the answer is
  pretty obvious: we have a driver that is not being tested at all
  within DevStack and project integration.

 This was discussed in the oslo.messaging summit session, and
 re-enabling zeromq support in devstack is definitely on my todo list,
 but I don't think that should block landing of the currently proposed
 unit tests on oslo.messaging.

 For example, https://review.openstack.org/#/c/128233/ talks about adding
functional and unit tests. I'm OK with units, but what about the functional
tests? Which oslo.messaging gate job runs them?


  Also I’d say that such a focus re-orientation would eventually be
  very useful as a source of use cases and bugs. Here’s a list of what
  we, as a team, should do first:
 
  1. Ensure that DevStack can successfully:
 
     1. Install ZeroMQ.
 
     2. Configure each project to work with the zmq driver from
        oslo.messaging.
 
  2. Ensure that we can successfully run a simple test plan for each
     project (like boot a VM, fill an object store container, spin up a
     volume, etc.).

 A better objective would be to be able to run a full tempest suite as
 conducted with the RabbitMQ driver, IMHO.


I do agree with this too. But we should define a step-by-step plan for this
type of testing. Since we want to see quick gate feedback, adding the full
test suite would be overhead, at least for now.


  ZeroMQ driver maintainers community organization. During the design
  session a question was raised about who uses the zmq driver in
  production.
 
  I’ve seen folks from Canonical and a few other companies. So, here
  are my proposals around improving the process of maintaining the
  given driver:
 
  1. With respect to best practices of the driver maintenance
     procedure, we might need to set up a community sub-group. What
     would it give us and the project subsequently? It’s not entirely
     obvious, at least for now, but I’d try to highlight a couple of
     points:
 
     1. continuous driver stability
 
     2. continuous community support (across all OpenStack projects
        that use the same model: a driver should have a maintaining
        team, whether a company or a community sub-group)
 
  2. As a sub-group we would need to have our own weekly meeting. A
     separate meeting would keep us, as a sub-group, focused on the zmq
     driver only (but it doesn’t mean that we should not participate in
     the regular meetings). Same question: what would it give us and
     the project? I’d say that the only valid answer is: we would not
     disturb other folks who are not actually interested in the given
     topic or in the zmq driver.

 I'd prefer that we continue to discuss ZMQ in the broader
 oslo.messaging context; I'm keen that the OpenStack community
 understands that we want ZMQ to be a first-tier driver like qpid and
 rmq, and I'm not convinced that pushing discussion out to a separate
 sub-group reinforces that message...


The only thing that I'm worried about is that we could eventually eat all the
meeting time. That's why I'm trying to build out the driver maintaining/contribution

Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Denis Makogon
On Mon, Nov 17, 2014 at 5:12 PM, Eric Windisch e...@windisch.us wrote:



 On Mon, Nov 17, 2014 at 8:43 AM, Denis Makogon dmako...@mirantis.com
 wrote:

 Good day, Stackers.


  During the Paris Design Summit oslo.messaging session a good
  question was raised about maintaining the ZeroMQ driver upstream (see the
  section “dropping ZeroMQ support in oslo.messaging” at [1]). As we all
  know, good thoughts always come afterwards. I’d like to propose several
  improvements to the process of maintaining and developing the ZeroMQ
  driver upstream.



 I'm glad to see the community looking to revive this driver. What I think
 could be valuable if there are enough developers is a sub-team as is done
 with Nova. That doesn't mean to splinter the community, but to provide a
 focal point for interested developers to interact.


Yes, that's what I've been trying to say; sub-group'ing doesn't mean a
completely new community. The reason why I've proposed it is the need to
have the driver maintained by those who are interested in it. As already
said, there are not so many of us who use (or are considering) the zmq
driver. So, eventually, we're in the same boat - let's co-work on making it
better than it is now.


 I agree with the idea that this should be tested via Tempest. It's easy
 enough to mask off the failing tests and enable more tests as either the
 driver itself improves, or support in consuming projects and/or
 oslo.messaging itself improves. I'd suggest that effort is better spent
 there than building new bespoke tests.

 Thanks and good luck! :)

 Regards,
 Eric Windisch

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Best regards,
Denis M.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Denis Makogon
On Monday, November 17, 2014, Mehdi Abaakouk wrote:



 On 2014-11-17 15:26, James Page wrote:

  This was discussed in the oslo.messaging summit session, and
  re-enabling zeromq support in devstack is definitely on my todo list,
  but I don't think that should block landing of the currently proposed
  unit tests on oslo.messaging.


 I would like to see these tests land too, even if we need to install redis
 or whatever and
 to start them manually. This will help a lot to review zmq stuff and
 ensure fixed things are not broken again.


I do agree that we need to find a way to prevent blocking zmq
development. But I don't think that such a way of testing will eventually
lead us to failure. Why not just focus on setting up a testing environment
that can be used for gating? Just as another example, we can consider
getting at least a third-party CI for the zmq driver until we have an infra
gating environment.


 ---
 Mehdi Abaakouk
 mail: sil...@sileht.net
 irc: sileht



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Kind regards,
Denis M.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Denis Makogon
On Monday, November 17, 2014, James Page wrote:


 On 17/11/14 09:01, Denis Makogon wrote:
  I'm not sure I understand your statement about them not being
  gate-able - the functional/unit tests currently proposed for the
  zmq driver run fine as part of the standard test suite execution -
  maybe the confusion is over what 'functional' actually means, but
  in my opinion until we have some level of testing of this driver,
  we can't effectively make changes and fix bugs.
 
  I do agree that there's confusion about what functional testing
  means. Another thing: what is the best solution? Unit tests are
  welcome, but they still remain units (they are using
  mocks, etc.). I'd try to define what 'functional testing' means for
  me. Functional testing in oslo.messaging means that we've been
  using a real service for messaging (in this case - a deployed 0mq). So,
  by the simple definition, in terms of OpenStack integration, we should
  be able to run the full Tempest test suite for OpenStack services that
  are using oslo.messaging with the zmq driver enabled. Am I right or
  not?

 0mq is just a set of messaging semantics on top of tcp/ipc sockets; so
 it's possible to test the entire tcp/ipc messaging flow standalone, i.e.
 without involving any openstack services.  That's what the current
 test proposal includes - unit tests which mock out most things, and
 base functional tests that validate the tcp/ipc messaging flows via
 the zmq-receiver proxy.  These are things that *just work* under a tox
 environment and don't require any special setup.

 Hm, I see what you've been trying to say. But unfortunately it breaks the
whole idea of TDD. Why can't we just spend some time on getting non-voting
gates? OK, let me describe what would satisfy all of us: let's write up docs
that describe how to set up (manually) an environment to test out incoming
patches.

Anyway, this topic is not for disagreement; it's for building out a team
relationship.
I'd like to discuss the next steps in developing the zmq driver.

Kind regards,
Denis M.







 This will be supplemented with good devstack support and a full tempest
 gate, but let's start from the bottom up please!  The work that's been
 done so far allows us to move forward with bug fixing and re-factoring
 that's backed by having a base level of unit/functional tests.

 --
 James Page
 Ubuntu and Debian Developer
 james.p...@ubuntu.com
 jamesp...@debian.org

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-11-17 Thread Denis Makogon
On Tuesday, November 18, 2014, Mehdi Abaakouk wrote:





 On 2014-11-17 22:53, Doug Hellmann wrote:

  That’s a good goal, but that’s not what I had in mind for in-tree
 functional tests.


 An interesting idea that might be useful is what taskflow implemented/has
 done...

 The examples @
 https://github.com/openstack/taskflow/tree/master/taskflow/examples
 all get tested during unit test runs to ensure they work as expected. This
 seems close to your 'simple app' (where app is replaced with example); it
 might be nice to have a similar approach for oslo.messaging that has
 'examples' - apps that get run to probe the functionality of oslo.messaging
 (as well as being useful for documentation to show people how to use it,
 which is the other usage taskflow has for these examples)

 The hacky example tester could likely be shared (or refactored, since it
 probably needs it),
 https://github.com/openstack/taskflow/blob/master/taskflow/tests/test_examples.py#L91


 Sure, that would be a good way to do it, too.


 We already have some work done in that direction. Gordon Sim has written
 some tests that use only the public API to test a driver:
 https://github.com/openstack/oslo.messaging/blob/master/tests/functional/test_functional.py

 You just have to set the TRANSPORT_URL environment variable to run them.

 I'm working on running them on a devstack VM for the rabbit, qpid, and
 amqp1.0 drivers; the infra patch that adds the experimental jobs has just
 landed:
 https://review.openstack.org/#/c/130370/


Amazing work, Mehdi.


 I have two other patches waiting to make it work:
 * https://review.openstack.org/#/c/130370/
 * https://review.openstack.org/#/c/130437/


I'll take a look at them ASAP.


 So if zmq driver support in devstack is fixed, we can easily add a new job
 to run them in the same way.


By the way, this is a good question. I will take a look at the current state
of zmq in devstack.
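
For reference, the pattern those TRANSPORT_URL-driven tests use can be
sketched like this: the transport URL comes from the environment, so the same
round trip exercises whichever driver (rabbit, qpid, amqp 1.0, zmq) the job
deploys. A minimal sketch, not the actual test code:

    import os
    from oslo_config import cfg
    import oslo_messaging

    url = os.environ.get('TRANSPORT_URL', 'rabbit://localhost:5672//')
    transport = oslo_messaging.get_transport(cfg.CONF, url=url)
    target = oslo_messaging.Target(topic='functional-test', server='t1')

    class Echo(object):
        def echo(self, ctxt, arg):
            return arg

    server = oslo_messaging.get_rpc_server(transport, target, [Echo()],
                                           executor='threading')
    server.start()
    client = oslo_messaging.RPCClient(transport, target)
    # A full round trip through the driver under test.
    assert client.call({}, 'echo', arg='ping') == 'ping'
    server.stop()
    server.wait()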



 ---
 Mehdi Abaakouk
 mail: sil...@sileht.net
 irc: sileht


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Guest RPC API improvements. Messages, topics, queues, consumption.

2014-10-31 Thread Denis Makogon
Hello, Stackers/Trovers.



I’d like to start a discussion about how we use the guestagent API; this
will eventually be evaluated as a spec. Those of you who know Trove’s
codebase well know how Trove acts when provisioning a new instance.

I’d like to point out the following moments:

   1. When we provision a new instance, we expect that the guest will
      create its topic/queue for RPC messaging needs.

   2. The taskmanager doesn’t validate that the guest is really up before
      sending the ‘prepare’ call.

And here comes the problem: what if the guest wasn’t able to start properly
and consume the ‘prepare’ message due to certain circumstances? In this case
the ‘prepare’ message would never be consumed.


 Sergey Gotliv and I were looking for a proper solution for this case, and
we ended up with the following requirements for the provisioning workflow:

   1. We must be sure that the ‘prepare’ message will be consumed by the
      guest.

   2. The taskmanager should handle topic/queue management for the guest.

   3. The guest just needs to consume incoming messages from an already
      existing topic/queue.

 As a concrete proposal (or at least a topic for discussion) I’d like to
discuss the following improvement:

We need to add a new guest RPC API that represents a “ping-pong” action. So,
before sending any cast- or call-type messages, we need to make sure that
the guest is really running (see the sketch below).
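
A hedged sketch of what that pre-check could look like on the taskmanager
side ('ping' is the proposed - not yet existing - guest method, and the topic
naming is illustrative); a blocking call() fails fast when nobody consumes,
unlike a fire-and-forget cast():

    import oslo_messaging

    def guest_is_alive(transport, instance_id, ctxt, timeout=10):
        target = oslo_messaging.Target(
            topic='guestagent.%s' % instance_id, version='1.0')
        client = oslo_messaging.RPCClient(transport, target,
                                          timeout=timeout)
        try:
            # call() blocks for a reply, so it fails fast if the guest
            # never came up; cast() would silently enqueue forever.
            client.call(ctxt, 'ping')
            return True
        except oslo_messaging.MessagingTimeout:
            return False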


Pros/cons of such a solution:

   1. The guest will only do consuming.

   2. The guest would not manage its topics/queues.

   3. We’ll be 100% sure that no messages will be lost.

   4. Fast-fail during provisioning.

   5. Other minor/major improvements.



Thoughts?


P.S.: I’d like to discuss this topic during the upcoming Paris summit (during
the contribution meetup on Friday).



Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Guest RPC API improvements. Messages, topics, queues, consumption.

2014-10-31 Thread Denis Makogon
On Fri, Oct 31, 2014 at 7:49 PM, Tim Simpson tim.simp...@rackspace.com
wrote:

  Hi Denis,

  It seems like the issue you're trying to solve is that these 'prepare'
 messages can't be consumed by the guest.

Not only the 'prepare' call. I want to point out that each RPC API call that
uses the RPC 'cast' messaging type may remain forever inside the AMQP service.

 So, if the guest never actually comes online and therefore can't consume
 the prepare call, then you'll be left with the message in the queue
 forever.

 Yes, it still may fail, but at least we can be sure that when we want to
'cast' something to the guest, it's alive. And the only way to check whether
it's alive is to use the 'call' messaging type, because if the guest is down
for some reason, 'call' will fail as soon as possible.


  If you use a ping-pong message, you'll still be left with a stray
 message in the queue if it fails.

 OK, let's discuss it. What do you think would give us confidence that the
guest is really up and ready to consume?


  I think the best fix is if we delete the queue when deleting an
 instance. This way you'll never have more queues in rabbit than are needed.

 I do agree that it may work. But as for me, it would be safer if the
taskmanager handled topic initialization (when an instance gets created) and
removal (when an instance gets deleted).


  Thanks,

  Tim



Best regards,
Denis M.

  --
 *From:* Denis Makogon [dmako...@mirantis.com]
 *Sent:* Friday, October 31, 2014 4:32 AM
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [Trove] Guest RPC API improvements. Messages,
 topics, queues, consumption.

Hello, Stackers/Trovers.



   I’d like to start a discussion about how we use the guestagent API; this
  will eventually be evaluated as a spec. Those of you who know Trove’s
  codebase well know how Trove acts when provisioning a new instance.

  I’d like to point out the following moments:

 1. When we provision a new instance, we expect that the guest will
    create its topic/queue for RPC messaging needs.

 2. The taskmanager doesn’t validate that the guest is really up before
    sending the ‘prepare’ call.

    And here comes the problem: what if the guest wasn’t able to start
  properly and consume the ‘prepare’ message due to certain circumstances? In
  this case the ‘prepare’ message would never be consumed.


   Sergey Gotliv and I were looking for a proper solution for this case, and
  we ended up with the following requirements for the provisioning workflow:

 1. We must be sure that the ‘prepare’ message will be consumed by the
    guest.

 2. The taskmanager should handle topic/queue management for the guest.

 3. The guest just needs to consume incoming messages from an already
    existing topic/queue.

    As a concrete proposal (or at least a topic for discussion) I’d like to
  discuss the following improvement:

  We need to add a new guest RPC API that represents a “ping-pong” action.
  So, before sending any cast- or call-type messages, we need to make sure
  that the guest is really running.


   Pros/cons of such a solution:

 1. The guest will only do consuming.

 2. The guest would not manage its topics/queues.

 3. We’ll be 100% sure that no messages will be lost.

 4. Fast-fail during provisioning.

 5. Other minor/major improvements.



   Thoughts?


   P.S.: I’d like to discuss this topic during the upcoming Paris summit
  (during the contribution meetup on Friday).



  Best regards,

  Denis Makogon


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Proposal to add Iccha Sethi to trove-core

2014-10-30 Thread Denis Makogon
+1

On Thu, Oct 30, 2014 at 3:02 PM, Tim Simpson tim.simp...@rackspace.com
wrote:

 +1

 
 From: Nikhil Manchanda [nik...@manchanda.me]
 Sent: Thursday, October 30, 2014 3:47 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Trove] Proposal to add Iccha Sethi to trove-core

 Hello folks:

 I'm proposing to add Iccha Sethi (iccha on IRC) to trove-core.

 Iccha has been working with Trove for a while now. She has been a
 very active reviewer, and has provided insightful comments on
 numerous reviews. She has submitted quality code for multiple bug-fixes
 in Trove, and most recently drove the per datastore volume support BP in
 Juno. She was also a crucial part of the team that implemented
 replication in Juno, and helped close out multiple replication related
 issues during Juno-3.

 https://review.openstack.org/#/q/reviewer:iccha,n,z
 https://review.openstack.org/#/q/owner:iccha,n,z

 Please respond with +1/-1, or any further comments.

 Thanks,
 Nikhil

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Trove Blueprint Meeting on 13 October canceled

2014-10-13 Thread Denis Makogon
On Mon, Oct 13, 2014 at 8:05 AM, Nikhil Manchanda nik...@manchanda.me
wrote:

 Hey folks:

 We have an empty agenda for the Trove Blueprint meeting tomorrow, so I'm
 going to go ahead and cancel it.

 We do have a few blueprints that are in-flight which need review
 comments, so please take this time to review these blueprints and
 provide feedback:
 https://review.openstack.org/#/c/123571/
 https://review.openstack.org/#/c/124717/
 https://review.openstack.org/#/c/122736/
 https://review.openstack.org/#/c/122767/

 To all Trove contributors and active community members: we _must_ put
enough effort into reviewing specs; they have been hanging around long enough
to justify spending 1 hour (the BP meeting time frame) per week looking at
them.
FYI:

oslo.concurrency - starting Sept. 23 (reviewed only by: Robert Myers)

Cassandra clustering - starting Sept. 24 (reviewed only by: Amrith Kumar)

Added datastore log operation spec - starting Sept. 29 (no reviews)

Oracle 12c support - starting Sept. 19 (no reviews)


Let's stay productive, and let's get enough features in for the next release.


See you guys at the regular Trove meeting on Wednesday!

 Thanks,
 Nikhil

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Best regards,
Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] No one replying on tempest issue? Please share your experience

2014-09-23 Thread Denis Makogon
Best regards,
Denis Makogon.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Attach an USB disk to VM [nova]

2014-09-15 Thread Denis Makogon
Agreed with Max.
With nova you can use the file injection mechanism: you just need to build a
dictionary of file paths and file contents. I do agree that it's not exactly
what you want, but it's a more than
valid way to inject files.
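
For illustration, a minimal sketch with python-novaclient (the auth values,
image and flavor IDs below are placeholders) - the `files` argument is the
API counterpart of `nova boot --file`:

    from novaclient import client

    nova = client.Client('2', 'user', 'password', 'project',
                         'http://keystone.example.com:5000/v2.0')
    server = nova.servers.create(
        name='vm-with-data',
        image='IMAGE_ID',
        flavor='FLAVOR_ID',
        # Maps guest paths to file contents; injected at boot time.
        files={'/etc/myapp/data.conf': 'key = value\n'})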

Best regards,
Denis Makogon

On Monday, September 15, 2014, Maksym Lobur wrote:

 Try to use the Nova Metadata Service [1] or Nova Config Drive [2]. There are
 options to pass key-value data as well as whole files during VM boot.

 [1]
 http://docs.openstack.org/grizzly/openstack-compute/admin/content/metadata-service.html
 [2]
 http://docs.openstack.org/grizzly/openstack-compute/admin/content/config-drive.html

 Best regards,
 Max Lobur,
 OpenStack Developer, Mirantis, Inc.

 Mobile: +38 (093) 665 14 28
 Skype: max_lobur

 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru

 On Sun, Sep 14, 2014 at 10:21 PM, pratik maru fipuzz...@gmail.com
 wrote:

 Hi Xian,

 Thanks for replying.

 I have some data which I want to be passed to the VM. To pass this data, I
 have planned to attach it as a USB disk, and this disk will be used
 inside the VM to read the data.

 What I am looking for is functionality similar to the -usb option of the
 qemu-kvm command.

 Please let me know how it can be achieved in an OpenStack environment.

 Thanks
 fipuzzles

 On Mon, Sep 15, 2014 at 8:14 AM, Xiandong Meng mengxiand...@gmail.com
 wrote:

 What is your concrete user scenario for this request?
 Where do you expect to plug in the USB disk? On the compute node that
 hosts the VM or from somewhere else?

 On Mon, Sep 15, 2014 at 3:01 AM, pratik maru fipuzz...@gmail.com
 wrote:

 Hi,

 Is there any way to attach a USB disk as an external disk to a VM while
 booting up the VM?

 Any help in this respect will be really helpful.


 Thanks
 fipuzzles

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Regards,

 Xiandong Meng
 mengxiand...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to inject files inside VM using Heat [heat]

2014-09-15 Thread Denis Makogon
On Mon, Sep 15, 2014 at 9:06 PM, pratik maru fipuzz...@gmail.com wrote:

 Hi All,

 I am trying to inject a file from outside into a guest using heat, what
 heat properties can i use for the same ?


You might take a look at
https://github.com/openstack/heat-templates/blob/7ec1eb98707dc759c699ad59d46e098e6c06e42c/cfn/F17/PuppetMaster_Single_Instance.template#L80-L156

Also, you are able to parametrize the file content.


 If I am not wrong, there is a nova boot --file option to do the
 same; do we have an equivalent option in heat as well?


Correct.


 Thanks in advance.

 Regards
 Fipuzzles

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Best regards,
Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Not able to create volume backup using cinder

2014-09-02 Thread Denis Makogon
Hello, Vinod.


Sorry, but the dev mailing list is not for usage questions. But it seems that
you haven't launched the cinder-backup service (see "Service cinder-backup
could not be found."). Take a look at
https://github.com/openstack/cinder/blob/master/bin/cinder-backup.

Best regards,
Denis Makogon


On Tue, Sep 2, 2014 at 10:49 AM, Vinod Borol vinod.bor...@gslab.com wrote:

 I am unable to create a backup of the volume using the cinder command, even
 though all the required conditions are satisfied. I am getting an HTTP 500
 error. I am not sure what the problem could be here. I'd really appreciate
 it if someone could give me some pointers on where I can look.

 I am using Openstack Havana

 C:\>cinder list
 +--------------------------------------+-----------+--------------------------+------+-------------+----------+-------------+
 | ID                                   | Status    | Display Name             | Size | Volume Type | Bootable | Attached to |
 +--------------------------------------+-----------+--------------------------+------+-------------+----------+-------------+
 | db9cd065-1906-4bb7-b00c-f3f04245f514 | available | ShitalVmNew1407152394855 | 50   | None        | true     |             |
 +--------------------------------------+-----------+--------------------------+------+-------------+----------+-------------+

 C:\>cinder backup-create db9cd065-1906-4bb7-b00c-f3f04245f514
 ERROR: Service cinder-backup could not be found. (HTTP 500) (Request-ID:
 req-734e6a87-33f6-4654-8a14-5e3242318e87)

 Below is the exception in cinder logs

 Creating new backup {u'backup': {u'volume_id':
 u'f57c9a6f-cba2-4c4f-aca1-c4633e6bbbe4', u'container': None,
 u'description': None, u'name': u'vin-vol-cmd-bck'}} from (pid=4193) create
 /opt/stack/cinder/cinder/api/contrib/backups.py:218 2014-08-29 11:11:52.805
 AUDIT cinder.api.contrib.backups [req-12ec811b-a5b8-4043-8656-09a832e407d7
 433bc3b202ae4b438b2530cb487b97a5 7da65e0e61124e54a8bd0d91f22a1ac0] Creating
 backup of volume f57c9a6f-cba2-4c4f-aca1-c4633e6bbbe4 in container None
 2014-08-29 11:11:52.833 INFO cinder.api.openstack.wsgi
 [req-12ec811b-a5b8-4043-8656-09a832e407d7 433bc3b202ae4b438b2530cb487b97a5
 7da65e0e61124e54a8bd0d91f22a1ac0] HTTP exception thrown: Service
 cinder-backup could not be found. 2014-08-29 11:11:52.834 INFO
 cinder.api.openstack.wsgi [req-12ec811b-a5b8-4043-8656-09a832e407d7
 433bc3b202ae4b438b2530cb487b97a5 7da65e0e61124e54a8bd0d91f22a1ac0]
 http://10.193.72.195:8776/v1/7da65e0e61124e54a8bd0d91f22a1ac0/backups (
 http://10.193.72.195:8776/v1/7da65e0e...) returned with HTTP 500
 2014-08-29 11:11:52.835 INFO eventlet.wsgi.server
 [req-12ec811b-a5b8-4043-8656-09a832e407d7 433bc3b202ae4b438b2530cb487b97a5
 7da65e0e61124e54a8bd0d91f22a1ac0] 10.65.53.105 - - [29/Aug/2014 11:11:52]
 POST /v1/7da65e0e61124e54a8bd0d91f22a1ac0/backups HTTP/1.1 500 359
 0.043090



 Regards
 VB

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnetodb] Backup procedure for Cassandra backend

2014-08-29 Thread Denis Makogon
Hello, stackers. I'd like to start a thread related to the backup procedure
for MagnetoDB - to be precise, for the Cassandra backend.

In order to implement a backup procedure for Cassandra we need to
understand how backups work.

To perform a backup we need to:

   1. SSH into each node.

   2. Call ‘nodetool snapshot’ with appropriate parameters.

   3. Collect the backup.

   4. Send the backup to remote storage.

   5. Remove the initial snapshot.


Let’s take a look at how ‘nodetool snapshot’ works. Cassandra backs up
data by taking a snapshot of all on-disk data files (SSTable files) stored
in the data directory. Each time an SSTable gets flushed and snapshotted, the
snapshot becomes a hard link to the initial SSTable, pinned to a specific
timestamp.

Snapshots are taken per keyspace or per CF, and while the system is online.
However, nodes must be taken offline in order to restore a snapshot.

Using a parallel ssh tool (such as pssh), you can flush and then snapshot
an entire cluster. This provides an eventually consistent backup. Although
no one node is guaranteed to be consistent with its replica nodes at the
time a snapshot is taken, a restored snapshot can resume consistency using
Cassandra's built-in consistency mechanisms.

After a system-wide snapshot has been taken, you can enable incremental
backups on each node (disabled by default) to backup data that has changed
since the last snapshot was taken. Each time an SSTable is flushed, a hard
link is copied into a /backups subdirectory of the data directory.

Now let’s see how we can deal with a snapshot once it’s taken. Below you can
see the list of commands that need to be executed to prepare a snapshot:

Flushing SSTables for consistency

'nodetool flush'

Creating snapshots (for example, of all keyspaces)

nodetool snapshot -t %(backup_name)s 1>/dev/null

where

   - backup_name - the name of the snapshot


Once it’s done we need to collect all the hard links into a common
directory (keeping the initial file hierarchy):

sudo tar cpzfP /tmp/all_ks.tar.gz \
    $(sudo find %(datadir)s -type d -name %(backup_name)s)

where

   - backup_name - the name of the snapshot

   - datadir - the storage location (/var/lib/cassandra/data by default)


Note that this operation can be extended:

   - if cassandra was launched with more than one data directory (see
     cassandra.yaml
     http://www.datastax.com/documentation/cassandra/2.0/cassandra/configuration/configCassandra_yaml_r.html
     )

   - if we want to back up only:

     - certain keyspaces at the same time
     - one keyspace
     - a list of CFs for a given keyspace

A scripted sketch of the whole per-node sequence follows below.
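
The sketch (the remote-upload step is a placeholder, and in practice this
would be run on every node via a parallel-ssh tool such as pssh):

    import subprocess

    def snapshot_node(backup_name, datadir='/var/lib/cassandra/data'):
        # Flush memtables so the snapshot is consistent on disk.
        subprocess.check_call(['nodetool', 'flush'])
        subprocess.check_call(['nodetool', 'snapshot', '-t', backup_name])
        # Collect every snapshot directory into one archive, keeping the
        # original keyspace/table hierarchy.
        dirs = subprocess.check_output(
            ['find', datadir, '-type', 'd', '-name',
             backup_name]).decode().split()
        subprocess.check_call(
            ['tar', 'cpzfP', '/tmp/%s.tar.gz' % backup_name] + dirs)
        # ... upload /tmp/<backup_name>.tar.gz to remote storage here ...
        subprocess.check_call(['nodetool', 'clearsnapshot',
                               '-t', backup_name])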


Useful links

http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsNodetool_r.html

Best regards,
Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnetodb] Backup procedure for Cassandra backend

2014-08-29 Thread Denis Makogon
On Fri, Aug 29, 2014 at 4:29 PM, Dmitriy Ukhlov dukh...@mirantis.com
wrote:

 Hello Denis,
 Thank you for the very useful knowledge sharing.

 But I have one more question. As far as I understand, if we have a
 replication factor of 3, it means that our backup may contain three copies
 of the same data. Also it may contain a set of non-compacted SSTables. Do we
 have any ability to compact the collected backup data before moving it to
 the backup storage?


Thanks for the fast response, Dmitriy.

With a replication factor of 3 - yes, this looks like a feature that allows
backing up only one node instead of 3 of them. In other cases, we would need
to iterate over each node, as you know.
Correct, it is possible to have non-compacted SSTables. To accomplish
compaction we might need to use the compaction mechanism provided by
nodetool, see
http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsCompact.html;
we just need to take into account that it's possible that an SSTable was
already compacted, and forcing compaction wouldn't give valuable benefits.


Best regards,
Denis Makogon



 On Fri, Aug 29, 2014 at 2:01 PM, Denis Makogon dmako...@mirantis.com
 wrote:

 Hello, stackers. I'd like to start a thread related to the backup procedure
 for MagnetoDB - to be precise, for the Cassandra backend.

 In order to implement a backup procedure for Cassandra we need to
 understand how backups work.

 To perform a backup we need to:

    1. SSH into each node.

    2. Call ‘nodetool snapshot’ with appropriate parameters.

    3. Collect the backup.

    4. Send the backup to remote storage.

    5. Remove the initial snapshot.


 Let’s take a look at how ‘nodetool snapshot’ works. Cassandra backs up
 data by taking a snapshot of all on-disk data files (SSTable files) stored
 in the data directory. Each time an SSTable gets flushed and snapshotted,
 the snapshot becomes a hard link to the initial SSTable, pinned to a
 specific timestamp.

 Snapshots are taken per keyspace or per CF, and while the system is
 online. However, nodes must be taken offline in order to restore a snapshot.

 Using a parallel ssh tool (such as pssh), you can flush and then snapshot
 an entire cluster. This provides an eventually consistent backup.
 Although no one node is guaranteed to be consistent with its replica nodes
 at the time a snapshot is taken, a restored snapshot can resume consistency
 using Cassandra's built-in consistency mechanisms.

 After a system-wide snapshot has been taken, you can enable incremental
 backups on each node (disabled by default) to back up data that has changed
 since the last snapshot was taken. Each time an SSTable is flushed, a hard
 link is copied into a /backups subdirectory of the data directory.

 Now let’s see how we can deal with a snapshot once it’s taken. Below you
 can see the list of commands that need to be executed to prepare a snapshot:

 Flushing SSTables for consistency

 'nodetool flush'

 Creating snapshots (for example, of all keyspaces)

 nodetool snapshot -t %(backup_name)s 1>/dev/null

 where

    - backup_name - the name of the snapshot

 Once it’s done we need to collect all the hard links into a common
 directory (keeping the initial file hierarchy):

 sudo tar cpzfP /tmp/all_ks.tar.gz \
     $(sudo find %(datadir)s -type d -name %(backup_name)s)

 where

    - backup_name - the name of the snapshot

    - datadir - the storage location (/var/lib/cassandra/data by default)


 Note that this operation can be extended:

    - if cassandra was launched with more than one data directory (see
      cassandra.yaml
      http://www.datastax.com/documentation/cassandra/2.0/cassandra/configuration/configCassandra_yaml_r.html
      )

    - if we want to back up only:

      - certain keyspaces at the same time
      - one keyspace
      - a list of CFs for a given keyspace


 Useful links

 http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsNodetool_r.html

 Best regards,
 Denis Makogon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Best regards,
 Dmitriy Ukhlov
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core

2014-08-26 Thread Denis Makogon
+1. Congratulations, Amrith.

Best regards,
Denis M.

On Tuesday, August 26, 2014, Sergey Gotliv wrote:

 Strong +1 from me!


  -Original Message-
  From: Nikhil Manchanda [mailto:nik...@manchanda.me]
  Sent: August-26-14 3:48 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Trove] Proposal to add Amrith Kumar to
 trove-core
 
  Hello folks:
 
  I'm proposing to add Amrith Kumar (amrith on IRC) to trove-core.
 
  Amrith has been working with Trove for a while now. He has been a
  consistently active reviewer, and has provided insightful comments on
  numerous reviews. He has submitted quality code for multiple bug-fixes in
  Trove, and most recently drove the audit and clean-up of log messages
 across
  all Trove components.
 
  https://review.openstack.org/#/q/reviewer:amrith,n,z
  https://review.openstack.org/#/q/owner:amrith,n,z
 
  Please respond with +1/-1, or any further comments.
 
  Thanks,
  Nikhil
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Datastore/Versions API improvements

2014-08-05 Thread Denis Makogon
On Tue, Aug 5, 2014 at 11:06 PM, Craig Vyvial cp16...@gmail.com wrote:




 On Wed, Jul 30, 2014 at 10:10 AM, Denis Makogon dmako...@mirantis.com
 wrote:

 Hello, Stackers.



 I’d like to gather the Trove team around a question related to the
 Datastores/Versions API responses (request/response payloads and HTTP
 codes).

 Small INFO

 When a deployer creates a datastore and versions for it, Trove's backend
 receives a request to store DBDatastore and DBDatastoreVersion objects with
 certain parameters. The most interesting attribute of DBDatastoreVersion is
 “packages” - it’s stored as a String object (and that’s totally fine).
 But when we query a given datastore version through the Datastores API, the
 “packages” attribute is returned as a String object too.
 And it seems that this breaks the response pattern: “if a given attribute
 represents a complex value, such as a list, dict, or tuple - it should be
 returned as is”.

 So, the first question is - are we able to change it in terms of V1?

 If it does not break the public API then I do not think there is an issue
 making the change.


If the modification means breaking it, then yes. I would say that the type of
the 'packages' attribute should be changed to a more appropriate one, such as
a list of strings. But it seems that this modification would only be possible
in an abstract V2.


 I made a change not long ago around making the packages a list that's sent
 to the guest. I'm a bit confused about what you want to change here.
 Are you suggesting changing the data that is stored for packages (string
 to a json.dumps list or something)?
 Or making the model parse the string into a list when you request the
 packages for a datastore version?

I guess the last thing. If I want to iterate over packages I would need to
manually split the string and build an appropriate data type.
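
That manual step looks roughly like this (assuming the comma-separated
convention - a sketch, not actual Trove code):

    # What the API returns today: one flat string.
    packages = 'mysql-server-5.5,percona-xtrabackup'
    # What every caller has to rebuild by hand.
    package_list = [p.strip() for p in packages.split(',') if p.strip()]
    assert package_list == ['mysql-server-5.5', 'percona-xtrabackup']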




  The second question is about the admin_context decorator (see [1]). This
 method executes methods of a given controller and verifies that the user is
 allowed to execute a certain procedure.

 Taking into account RFC 2616, this method should raise HTTP Forbidden
 (code 403) if the user tried to execute a request that he’s not allowed to.

 But the given method returns HTTP Unauthorized (code 401), which seems
 weird since the user is authorized.

 I think this is a valid bug for the error code although the message makes
 it clear why you get the 401.
 https://github.com/openstack/trove/blob/master/trove/common/auth.py#L85


The problem is that the user is authorized but doesn't have the required
permissions. Unauthorized means that the user passed wrong credentials;
Forbidden (in REST terms) means authorized but not permitted.

Craig, after digging into the problem I found out where the current code is
broken, see
https://github.com/openstack/trove/blob/master/trove/common/wsgi.py#L316-L318
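
A minimal sketch of the kind of fix I have in mind, assuming the
webob-based middleware in trove/common/auth.py (the helper name and the
context attribute below are illustrative):

    import webob.exc

    def check_admin_role(context):
        # The user is already authenticated at this point; if the admin
        # role is missing, the request is forbidden (403), not
        # unauthorized (401).
        if 'admin' not in [role.lower() for role in context.roles]:
            raise webob.exc.HTTPForbidden(
                "User does not have admin permissions.")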




 This is definitely a bug. And it comes from [2].


 [1]
 https://github.com/openstack/trove/blob/master/trove/common/auth.py#L72-L87

 [2]
 https://github.com/openstack/trove/blob/master/trove/common/wsgi.py#L316-L318



 Best regards,

 Denis Makogon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Service configuration options modification

2014-08-04 Thread Denis Makogon
Hello, Stackers.



I’d like to propose a policy for configuration option deprecation, taking
into account all the requirements related to production deployment.

As you all know, there are a lot of patches that propose modifications to
the existing configuration. And options are being modified without any (or
at least not always with) documentation that reflects the behavioural
differences between the old and new options.

What we’re doing right now doesn’t seem to be the most valid way; we
shouldn’t delete options without any signs (a DocImpact section, at least).

We should find a more appropriate way to handle such cases. The most
appropriate way is to use oslo.config's abilities - mark the option as
“Deprecated”.

Here’s the proposed workflow for modifications (a sketch using oslo.config
follows below). Once an option is being modified:

   1. Leave the option that is going to be modified as is, and add the
      Deprecated flag. Clean up the code that uses the deprecated option.
   2. Add the new option. Add its usage to the existing code as a
      substitution for the previous option.
   3. Add documentation that reflects the differences between the two
      options: the new one and the deprecated one.
   4. Add documentation that reflects the behaviour of the existing code
      with the new option.
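
A minimal sketch of steps 1-2 with oslo.config (the option names below are
made up for illustration):

    from oslo.config import cfg   # oslo_config in later releases

    opts = [
        # New option; deprecated_name keeps the old name working in
        # existing config files, and oslo.config logs a deprecation
        # warning whenever the old name is used.
        cfg.StrOpt('guest_log_dir',
                   default='/var/log/trove/',
                   deprecated_name='log_dir',
                   help='Directory for guest log files (replaces the '
                        'deprecated "log_dir" option).'),
    ]

    cfg.CONF.register_opts(opts)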



Thoughts?


Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Step by step OpenStack Icehouse Installation Guide

2014-08-03 Thread Denis Makogon
On Sun, Aug 3, 2014 at 1:49 PM, chayma ghribi chaym...@gmail.com wrote:

 Dear All,

 I want to share with you our OpenStack Icehouse Installation Guide for
 Ubuntu 14.04.


 https://github.com/ChaimaGhribi/OpenStack-Icehouse-Installation/blob/master/OpenStack-Icehouse-Installation.rst

 An additional  guide for Heat service installation is also available ;)


 https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/OpenStack-Heat-Installation.rst


All these docs are awesome! Great work! But what about a Trove (incubated
and integrated) installation guide?


 Hope these manuals will be helpful and simple!
 Your contributions are welcome, as are questions and suggestions :)

 Regards,
 Chaima Ghribi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Best regards,
Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Backup/restore namespace config move has leftovers

2014-08-02 Thread Denis Makogon
On Saturday, August 2, 2014, Mark Kirkwood wrote:

 On 02/08/14 18:24, Denis Makogon wrote:

  Mark, we don't have to add backup/restore namespace options to datastores
 that don't support the backup/restore feature.
 You should take a look at how the backup procedure is executed on the
 Trove API service side; see
 https://github.com/openstack/trove/blob/master/trove/backup/models.py
 (the method called _validate_can_perform_action).

 If you'll have another questions, feel free to catch me up at IRC
 (denis_makogon).


 Thanks Denis - I did wonder if it was an optional specification! Doh!
 However, while I'm a bit ignorant wrt redis and cassandra, I do have a bit
 to do with mongo and that certainly *does* support backup/restore...


Thanks to you too, Mark. Several BPs related to the backup/restore
procedure have already been filed: for Cassandra, and for MongoDB (through
mongodump and Tayra). Some of them are already in the review queue.

Best regards,
Denis Makogon


 Cheers

 Mark


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Backup/restore namespace config move has leftovers

2014-08-01 Thread Denis Makogon
On Fri, Aug 1, 2014 at 2:30 AM, Mark Kirkwood mark.kirkw...@catalyst.net.nz
 wrote:

 In my latest devstack pull I notice that

 backup_namespace
 restore_namespace

 have moved from the default conf group to per datastore (commit 61935d3).
 However they still appear in the common_opts section of


 trove/common/cfg.py



Correct, they are still there; see
https://github.com/openstack/trove/blob/master/trove/common/cfg.py#L177-L182

These options should be dropped from the DEFAULT section, or at least
marked as DEPRECATED.
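
For reference, a sketch of what the per-datastore variant looks like with
oslo.config groups (the defaults shown are illustrative):

    from oslo.config import cfg

    mysql_group = cfg.OptGroup('mysql')
    mysql_opts = [
        cfg.StrOpt('backup_namespace',
                   default='trove.guestagent.strategies.backup.mysql_impl',
                   help='Namespace to load backup strategies from.'),
        cfg.StrOpt('restore_namespace',
                   default='trove.guestagent.strategies.restore.mysql_impl',
                   help='Namespace to load restore strategies from.'),
    ]

    CONF = cfg.CONF
    CONF.register_group(mysql_group)
    CONF.register_opts(mysql_opts, group=mysql_group)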



 This seems like an oversight - or is there something I'm missing?


You're not missing anything; you are right. I'd suggest filing a bug report
and fixing the given issue.


Best regards,
Denis Makogon


 Cheers

 Mark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] V1 Client improvements

2014-08-01 Thread Denis Makogon
Hello, Stackers.

I’d like to raise a question about the list of API calls that are
implemented on the Trove side but not exposed as part of the V1 client.

Ignored V1 Client APIs:

https://github.com/openstack/python-troveclient/blob/master/troveclient/v1/client.py#L72-L79

The problem is that the listed API endpoints are available in Trove but
can’t be used through the V1 client (see the link above). The given API
endpoints can be used only through the compat client.

So, as I can see, we have two options:

   - Clean up the V1 client: remove the ignored APIs and develop plans
     for V2.
   - Make the ignored APIs available through the V1 client (this will take
     a certain effort to add CLI representations of these calls; likewise,
     on the Trove side we will need to add integration tests to verify that
     the given endpoints are accessible via the V1 client).


Thoughts?


Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Configuration Groups CLI improvements

2014-07-30 Thread Denis Makogon
Hello, Stackers.





Since Trove gives the ability to create post-deployment configuration for
instances, it would be nice to have the ability to pass a database
configuration file location to the configuration-create call.

I’d like to propose a feature that would improve the usability of the CLI
for configuration-create. To avoid text duplication, I filed a BP and wrote
a spec for it:


BP: https://blueprints.launchpad.net/trove/+spec/configuration-improvements

Wiki: https://wiki.openstack.org/wiki/Trove/ConfigurationShellImprovements


Any early feedback is appreciated.


Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Datastore/Versions API improvements

2014-07-30 Thread Denis Makogon
Hello, Stackers.



I’d like to gather the Trove team around a question related to the
Datastores/Versions API responses (request/response payloads and HTTP codes).

Small INFO

When a deployer creates a datastore and versions for it, Trove’s backend
receives a request to store DBDatastore and DBDatastoreVersion objects with
certain parameters. The most interesting attribute of DBDatastoreVersion is
“packages” - it is stored as a String object (and that’s totally fine).
But when we query a given datastore version through the Datastores API,
the “packages” attribute is returned as a String object too. And it seems
that this breaks the response pattern - “if a given attribute represents a
complex value, such as a list, dict or tuple, it should be returned as is”.

So, the first question is - are we able to change this in terms of V1?

The second question is about the admin_context decorator (see [1]). This
method executes methods of a given controller and verifies that the user is
allowed to execute a certain procedure.

Taking into account RFC 2616, this method should raise HTTP Forbidden
(code 403) if the user tries to execute a request that he’s not allowed to.

But the given method returns HTTP Unauthorized (code 401), which seems
weird since the user is authorized.

This is definitely a bug. And it comes from [2].


[1]
https://github.com/openstack/trove/blob/master/trove/common/auth.py#L72-L87

[2]
https://github.com/openstack/trove/blob/master/trove/common/wsgi.py#L316-L318



Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Specs improvements. Review request.

2014-07-30 Thread Denis Makogon
Hello, Stackers.


I’ve been working on several specs and I’d like to receive early feedback
on the updated specs for:


   - Database log manipulations (initial feature description):
     https://wiki.openstack.org/wiki/Trove/DBInstanceLogOperation
   - Events notifications (Ceilometer integration):
     https://wiki.openstack.org/wiki/Trove/ceilometer_integration
   - Datastores/Versions Management APIs:
     https://wiki.openstack.org/wiki/Trove/DatastoreManagementAPI


Folks, if you don’t mind, I’d like to receive feedback by the end of
Friday, so I can submit the specs for Monday's blueprint review.


Best regards,
Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Dynamic extension loading using stevedore -- BP ready for review

2014-07-28 Thread Denis Makogon
On Fri, Jul 25, 2014 at 9:00 PM, boden bo...@linux.vnet.ibm.com wrote:

 Gents,
 As we discussed at the BP meeting on July 14 - I've created a new BP and
 BP wiki to outline the dynamic extension loading using stevedore.

 BP: https://blueprints.launchpad.net/trove/+spec/dynamic-extension-loading
 Wiki: https://wiki.openstack.org/wiki/Trove/DynamicExtensionLoading
 PoC code: https://github.com/bodenr/trove/commit/
 fa06e1d96e6a49a2a54057e8feb8e624edeaf728

 I've also added this to the agenda for the next BP meeting:
 https://wiki.openstack.org/wiki/Meetings/TroveBPMeeting

 Please feel free to add comments to wiki or via email / IRC (@boden);
 otherwise we can sync-up on Monday's BP meeting.
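
  For context, the stevedore loading itself is tiny - a minimal sketch,
  with a hypothetical entry-point namespace (real plugins would be
  registered under it via setuptools entry points):

      from stevedore import extension

      mgr = extension.ExtensionManager(
          namespace='trove.api.extensions',   # made-up namespace
          invoke_on_load=True)

      for ext in mgr:
          # ext.obj is the instantiated plugin object
          print(ext.name, ext.obj)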


My only strong concern about refactoring the API extensions is that all of
this will be deprecated in the K release. Trove _already_ uses legacy code
which is not maintainable, because the WSGI REST framework was dropped from
oslo-incubator (during the Icehouse release) and Trove will migrate to
Pecan before K2.

So, I'd like to freeze all refactoring/re-implementation around the legacy
WSGI code until the Pecan support implementation lands.

Best regards,
Denis Makogon


 Thank you


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove][Monitoring][Ceilometer] Trove monitoring feature. Database metrics.

2014-07-28 Thread Denis Makogon
Hello, Stackers.


I’d like to start a thread related to monitoring of provisioned resources.

Taking a look at production-ready monitoring solutions shows us that almost
all of these systems require plugin/agent deployment to provide online
server monitoring, up-to-date statistics, etc.

The monitoring question was raised at the previous design summit, but no
discussions or proposals were made.

I’d like to start a discussion about in-VM database monitoring.

For now the trove-guestagent sends its status only once it changes. From a
monitoring perspective this type of reporting covers only the
availability/accessibility of the deployed database.

But what about other metrics? After some research I’ve collected the
following abstract set of metrics and their units for databases:

CPUUtilization

The percentage of CPU utilization.

Units: Percent

DatabaseConnections

The number of database connections in use.

Units: Count

DiskQueueDepth

The number of outstanding IOs (read/write requests) waiting to access the
disk.

Units: Count

FreeableMemory

The amount of available random access memory.

Units: Bytes

FreeStorageSpace

The amount of available storage space.

Units: Bytes

SwapUsage

The amount of swap space used on the DB Instance.

Units: Bytes

ReadIOPS

The average number of disk read I/O operations per second.

Units: Count/Second

WriteIOPS

The average number of disk write I/O operations per second.

Units: Count/Second

ReadLatency

The average amount of time taken per disk read I/O operation.

Units: Seconds

WriteLatency

The average amount of time taken per disk write I/O operation.

Units: Seconds

ReadThroughput

The average number of bytes read from disk per second.

Units: Bytes/Second

WriteThroughput

The average number of bytes written to disk per second.

Units: Bytes/Second

NetworkReceiveThroughput

The incoming (Receive) network traffic on the DB instance, including both
customer database traffic and Amazon RDS traffic used for monitoring and
replication.

Units: Bytes

NetworkTransmitThroughput

The outgoing (Transmit) network traffic on the DB instance, including both
customer database traffic and Amazon RDS traffic used for monitoring and
replication.

Units: Bytes
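
Most of the generic metrics above can be obtained from inside the guest
with stock tooling. A sketch, assuming psutil is available on the guest
image (note that the IOPS counters below are cumulative; per-second rates
would need deltas between two samples):

    import psutil

    def collect_generic_metrics():
        mem = psutil.virtual_memory()
        disk = psutil.disk_io_counters()
        return {
            'CPUUtilization': psutil.cpu_percent(interval=1),  # Percent
            'FreeableMemory': mem.available,                   # Bytes
            'SwapUsage': psutil.swap_memory().used,            # Bytes
            'ReadIOPS': disk.read_count,    # Count (cumulative)
            'WriteIOPS': disk.write_count,  # Count (cumulative)
        }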


And the list of metrics specific to particular datastores:

   1. Cassandra (see [1])
   2. MongoDB (see [2])
   3. Redis (see [3])
   4. Couchbase (see [4])
   5. MySQL (see [5]):

BinLogDiskUsage

The amount of disk space occupied by binary logs on the master. Applies to
MySQL read replicas.

Units: Bytes

ReplicaLag

The amount of time a Read Replica DB Instance lags behind the source DB
Instance. Applies to MySQL read replicas.

The ReplicaLag metric reports the value of the Seconds_Behind_Master field
of the MySQL SHOW SLAVE STATUS command. For more information, see [6]
http://dev.mysql.com/doc/refman/5.6/en/show-slave-status.html

Units: Seconds


To receive all of these metrics we might need to adapt the guestagent to
send them as part of the notification process (by using the periodic task
mechanism), as part of the Ceilometer integration.
So, the major goal of this thread is to collect all use cases and
requirements and build out a suitable monitoring feature design (step by
step, of course).
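
A rough sketch of the periodic-task reporting mentioned above (the class,
the notifier and the datastore-specific helper are hypothetical; the
decorator is assumed to be the openstack.common periodic_task module that
Trove ships):

    from trove.openstack.common import periodic_task

    class MetricsReportingTask(periodic_task.PeriodicTasks):

        @periodic_task.periodic_task(spacing=60)
        def report_metrics(self, context):
            payload = collect_generic_metrics()  # see the sketch above
            payload.update(self._datastore_specific_metrics())
            # A notifier (e.g. the oslo notifier that Ceilometer listens
            # to) would emit these as samples:
            self._notifier.info(context, 'trove.instance.metrics', payload)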

Thoughts?

Links:

[1]
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_monitoring_c.html

[2]
http://blog.mongodb.org/post/62152249344/the-top-5-metrics-to-watch-in-mongodb

[3]
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CacheMetrics.Redis.html

[4] http://blog.couchbase.com/monitoring-couchbase-cluster

[5] http://www.hyperic.com/products/mysql-monitoring

[6] http://dev.mysql.com/doc/refman/5.6/en/show-slave-status.html




Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Dynamic extension loading using stevedore -- BP ready for review

2014-07-28 Thread Denis Makogon
On Mon, Jul 28, 2014 at 6:40 PM, boden bo...@linux.vnet.ibm.com wrote:


 On 7/28/2014 8:40 AM, Denis Makogon wrote:




 On Fri, Jul 25, 2014 at 9:00 PM, boden bo...@linux.vnet.ibm.com wrote:

 Gents,
 As we discussed at the BP meeting on July 14 - I've created a new BP
 and BP wiki to outline the dynamic extension loading using stevedore.

 BP:
 https://blueprints.launchpad.net/trove/+spec/dynamic-extension-loading
 Wiki:
 https://wiki.openstack.org/wiki/Trove/DynamicExtensionLoading
 PoC code:
 https://github.com/bodenr/trove/commit/fa06e1d96e6a49a2a54057e8feb8e624edeaf728

 I've also added this to the agenda for the next BP meeting:
 https://wiki.openstack.org/wiki/Meetings/TroveBPMeeting

 Please feel free to add comments to wiki or via email / IRC
 (@boden); otherwise we can sync-up on Monday's BP meeting.


  My only strong concern about refactoring the API extensions is that all
  of this will be deprecated in the K release. Trove _already_ uses legacy
  code which is not maintainable, because the WSGI REST framework was
  dropped from oslo-incubator (during the Icehouse release) and Trove will
  migrate to Pecan before K2.

  So, I'd like to freeze all refactoring/re-implementation around the
  legacy WSGI code until the Pecan support implementation lands.


 You are implying that Pecan will introduce a new way to discover and load
 extensions? Admittedly I'm not up to speed on Pecan yet, thus any refs you
 can provide are appreciated.


I wasn't able to find anything about extension management in the Pecan docs
(see http://pecan.readthedocs.org/en/latest/). I'm just saying that
everything that would possibly be done around the deprecated code will be
eliminated once the K branch is opened for development. That's why I'd
suggest not working on extensions (they were deprecated; see
https://gist.github.com/denismakogon/a7ce440297ebe2ec65ae).

Actually, I would suggest talking about the Pecan migration instead of the
proposed BP.




 Best regards,
 Denis Makogon

 Thank you


 _
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Should we stop using wsgi-intercept, now that it imports from mechanize? this is really bad!

2014-07-26 Thread Denis Makogon
This actually is a good question. The WSGI framework was deprecated in the
Icehouse release (as far as I can recall), so Trove should migrate to the
Pecan REST framework as soon as possible during the Kilo release.
So, for now, the short answer is: unfortunately, it's impossible to fix
Trove to be ready for Py3.4.


Best regards,
Denis Makogon


On Saturday, July 26, 2014, Thomas Goirand wrote:

 Hi,

 Trove is using wsgi-intercept, so it ended up in
 global-requirements.txt. It was OK until what's below...

 I was trying to fix this bug:
 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=755315

 then I realize that the last version had the fix for Python 3.4. So I
 tried upgrading. But doing so, I have found out that wsgi-intercept now
 imports mechanize.

 The mechanize package from pypi is in a *very* bad state. It embeds all
 sorts of Python modules, like request, rfc3986, urllib2, beautifulsoup,
 and probably a lot more. It also isn't Python 3 compatible. I tried
 patching it. I ended up with:

  _beautifulsoup.py |   12 ++--
  _form.py  |   12 ++--
  _html.py  |8 
  _http.py  |4 ++--
  _mechanize.py |2 +-
  _msiecookiejar.py |4 ++--
  _opener.py|2 +-
  _sgmllib_copy.py  |   28 ++--
  _urllib2_fork.py  |   14 +++---
  9 files changed, 43 insertions(+), 43 deletions(-)

 probably that's not even enough to make it work with Python 3.4.

 Then I tried running the unit tests. First, they fail with Python 2.7 (2
 errors). It's to be noted that the unit tests were not even run at build
 time for the package. Then for Python 3, there's all sorts of errors
 that needs to be fixed as well...

 At this point, I gave-up with mechanize. But then, this makes me wonder:
 can we continue to use wsgi-intercept if it depends on such a bad Python
 module.

 If we are to stick to an older version of wsgi-intercept (which I do not
 recommend, for maintainability reasons), could someone help me to fix
 the Python 3.4 issue I'm having with wsgi-intercept? Removing Python 3
 support would be sad... :(

 Your thoughts?

 Cheers,

 Thomas Goirand (zigo)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org javascript:;
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Denis Makogon
On Thu, Jul 24, 2014 at 6:10 AM, Baohua Yang yangbao...@gmail.com wrote:

 Hi, all
  The current oslo.cfg module provides an easy way to load name known
 options/groups from he configuration files.
   I am wondering if there's a possible solution to dynamically load
 them?

   For example, I do not know the group names (section name in the
 configuration file), but we read the configuration file and detect the
 definitions inside it.

 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2

Then I want to automatically load group1.key1 and group1.key2,
 without knowing the name of group1 first.


That's actually a good question, but it seems a bit complicated: you have
to tell the option loader the type of each configuration item.
I was also thinking about this kind of feature, and here's what came to my
mind.

I found JSON or YAML to be very useful formats for describing options.
Here's a simple example of a file that describes a dynamic configuration.

options.yaml

- groups:
  - DEFAULT
  - NOT_DEFAULT
  - ANOTHER_ONE

- list:
  - option_a:
    - group: DEFAULT
    - value: [a, b, c]
    - description: description

- dict:
  - option_b:
    - group: DEFAULT
    - value: {a: b, c: d}
    - description: description

and so on ...

Explanation:

The `groups` attribute defines which groups oslo.config should register.
`list` is the option type. `option_b` is an option descriptor; each
descriptor is a dict (string: list) where the key is the option name and
the attributes inside it describe which group it belongs to, its value,
and its description.

oslo.config would just need to parse the YAML file and register all the
options, and in the end you'd receive a set of registered options per
group.

This looks to me like the best variant of dynamic option loading, but I'm
open for discussion.
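
To make the idea concrete, a sketch of such a loader, using a simplified
flat layout (a mapping of option name to descriptor) rather than the exact
nesting above:

    import yaml
    from oslo.config import cfg

    TYPE_MAP = {'str': cfg.StrOpt, 'list': cfg.ListOpt, 'dict': cfg.DictOpt}

    def load_dynamic_opts(conf, path):
        with open(path) as f:
            spec = yaml.safe_load(f)
        for name, desc in spec['options'].items():
            opt_cls = TYPE_MAP[desc.get('type', 'str')]
            opt = opt_cls(name,
                          default=desc.get('value'),
                          help=desc.get('description'))
            conf.register_opt(opt, group=desc.get('group'))

    load_dynamic_opts(cfg.CONF, 'options.yaml')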


Best regards,
Denis Makogon

 Thanks a lot!

 --
 Best wishes!
 Baohua

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Neutron integration test job

2014-07-24 Thread Denis Makogon
On Thu, Jul 24, 2014 at 12:32 PM, Nikhil Manchanda nik...@manchanda.me
wrote:


  On Wed, Jul 23, 2014 at 7:28 AM, Denis Makogon dmako...@mirantis.com
 wrote:
  [...]
 
  Add Neutron-based configuration for DevStack to let folks try it

 This makes sense to tackle; now that the neutron integration pieces have
 merged in Trove (yahoo!).

 However, it looks like the changes you propose in your DevStack patchset
 [1] have been copied directly from the trove-integration scripts at
 [2]. I have two primary concerns with this:

 a. Most of these values are only required for the trove functional
 tests to pass -- they aren't required for a user install of trove with
 Neutron. For such values, the trove-integration scripts seem like a
 better place for this configuration.

 b. Since the trove functional tests run based on the trove-integration
 scripts, what this means is that if this change is merged, this
 configuration code will be run twice, once in devstack, and once again
 as part of the test-init script from trove-integration.

 [1] https://review.openstack.org/#/c/108966
 [2]
 https://github.com/openstack/trove-integration/blob/master/scripts/redstack#L406-427



  Implementing/providing a new type of testing job that will run all
  Trove tests with Neutron enabled on a regular basis, to verify that all
  our networking preparations for instances are fine.
 
  The last item is the most interesting, and I'd like to discuss it with
  all of you, folks.
  I've written an initial job template taking into account the specific
  configuration required by DevStack and Trove-integration (see [4]), and
  I'd like to receive all possible feedback as soon as possible.
 

 So it looks like the test job you propose [3] is based on a current
 experimental job template: gate-trove-functional-dsvm-{datastore}
 [4]. Since pretty much most of it is an exact copy (except for the
 NEUTRON_ENABLED bit) I'd suggest working that in as a parameter to the
 current job template instead of duplicating the exact same code as part
 of another job.

 Nikhil, I already did lots of refactoring inside trove.yaml (see patchset
[1] and its dependent patchsets), and I'm going to do the same thing here.
I know that there's lots of duplication; I just wanted to describe the
complete template.

The actual question is: is the given template correct? Would it work with
trove-integration and with pure devstack in the near future?

P.S. I've got only basic knowledge of Jenkins jobs inside infra.

[1] https://review.openstack.org/#/c/100601/



 [3] https://gist.github.com/denismakogon/76d9bd3181781097c39b
 [4]
 https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/jenkins_job_builder/config/trove.yaml#L30-63


 Thanks,
 Nikhil

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove][stevedore] Datastore configuration opts refactoring. Stevedore integration.

2014-07-24 Thread Denis Makogon
[2]
https://github.com/openstack/trove/blob/master/trove/guestagent/datastore/couchbase/manager.py

[3]
https://github.com/openstack/trove/blob/master/trove/guestagent/datastore/mysql/manager.py

[4]
https://github.com/openstack/trove/blob/master/trove/guestagent/datastore/redis/manager.py

[5]
https://github.com/openstack/trove/blob/master/trove/guestagent/datastore/cassandra/manager.py

[6]
https://github.com/openstack/trove/blob/master/trove/guestagent/datastore/mongodb/manager.py#L96-L160



Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance][Trove] Metadata Catalog

2014-07-24 Thread Denis Makogon
Hello, Stackers.

 I’d like to discuss the future of the Trove metadata API. But first, some
small history info (mostly taken from the Trove metadata spec, see [1]):
Instance metadata is a feature that has been requested frequently by our
users. They need a way to store critical information for their instances
and have that be associated with the instance so that it is displayed
whenever that instance is listed via the API. This also becomes very usable
from a testing perspective when doing integration/ci. We can utilize the
metadata to store things like what process created the instance, what the
instance is being used for, etc... The design for this feature is modeled
heavily on the Nova metadata API with a few tweaks in how it works
internally.

And here comes a conflict. Glance devs are working on the “Glance Metadata
Catalog” feature (see [2]). And as for me, we don’t have to “reinvent the
wheel” in Trove. It seems that we would be able to use the Glance API to
interact with the Metadata Catalog, and it seems redundant to write our own
API for metadata CRUD operations.



From a Trove perspective, we need to define a list of concrete use cases
for metadata usage (e.g. the goals given at [1] that are out of scope of
the Database program, etc.).

From a development and cross-project integration perspective, we need to
delegate all development to the Glance devs. But we are still able to help
the Glance devs with this feature by taking an active part in polishing the
proposed spec (see [2]).



Unfortunately, we (Trove devs) are halfway to metadata - the patch for
python-troveclient has already merged. So, we need to consider
deprecating/reverting the merged patchset and blocking the merge of the
proposed one (see [3]) in favor of the Glance Metadata Catalog.


Thoughts?

[1] https://wiki.openstack.org/wiki/Trove-Instance-Metadata

[2] https://review.openstack.org/#/c/98554/11

[3] https://review.openstack.org/#/c/82123/


Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Trove] Metadata Catalog

2014-07-24 Thread Denis Makogon
On Thu, Jul 24, 2014 at 5:46 PM, Arnaud Legendre alegen...@vmware.com
wrote:

  Hi Denis,

  I think this is a perfect time for you to review the spec for the glance
 metadata catalog https://review.openstack.org/#/c/98554/ and see if it
 fits your use case.
 Also, we have a session tomorrow at 9:00am PST at the Glance meetup to
 discuss this topic. I think it would be useful if you could join (in person
 or remotely). Please see the details:
 https://wiki.openstack.org/wiki/Glance/JunoCycleMeetup

 I will try to take part, unfortunately only remotely. Also, I'm reviewing
the metadata spec right now. If there are gaps or missing abilities, I'll
leave comments. But at a cursory glance, it looks like exactly what we
need, apart from our own Trove-specific things.

Thanks,
Denis M.


  Thank you,
 Arnaud

  On Jul 24, 2014, at 6:32 AM, Denis Makogon dmako...@mirantis.com wrote:

   Hello, Stackers.

  I’d like to discuss the future of the Trove metadata API. But first, some
 small history info (mostly taken from the Trove metadata spec, see [1]):
  Instance metadata is a feature that has been requested frequently by our
 users. They need a way to store critical information for their instances
 and have that be associated with the instance so that it is displayed
 whenever that instance is listed via the API. This also becomes very usable
 from a testing perspective when doing integration/ci. We can utilize the
 metadata to store things like what process created the instance, what the
 instance is being used for, etc... The design for this feature is modeled
 heavily on the Nova metadata API with a few tweaks in how it works
 internally.

  And here comes a conflict. Glance devs are working on the “Glance Metadata
 Catalog” feature (see [2]). And as for me, we don’t have to “reinvent the
 wheel” in Trove. It seems that we would be able to use the Glance API to
 interact with the Metadata Catalog, and it seems redundant to write our
 own API for metadata CRUD operations.


  From a Trove perspective, we need to define a list of concrete use cases
 for metadata usage (e.g. the goals given at [1] that are out of scope of
 the Database program, etc.).
  From a development and cross-project integration perspective, we need to
 delegate all development to the Glance devs. But we are still able to help
 the Glance devs with this feature by taking an active part in polishing
 the proposed spec (see [2]).


  Unfortunately, we (Trove devs) are halfway to metadata - the patch
 for python-troveclient has already merged. So, we need to consider
 deprecating/reverting the merged patchset and blocking the merge of the
 proposed one (see [3]) in favor of the Glance Metadata Catalog.


 Thoughts?

  [1] https://wiki.openstack.org/wiki/Trove-Instance-Metadata
  [2] https://review.openstack.org/#/c/98554/11
  [3] https://review.openstack.org/#/c/82123/


  Best regards,
  Denis Makogon
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Guest prepare call polling mechanism issue

2014-07-23 Thread Denis Makogon
Hello, Stackers.


I’d like to discuss the guestagent prepare call polling mechanism issue
(see [1]).

Let me first describe why this is actually an issue and why it should be
fixed. Those of you who are familiar with Trove know that Trove can
provision instances through the Nova API and the Heat API (see [2] and [3]).



What’s the difference between these two ways (in general)? The answer is
simple:

- The Heat-based provisioning method has a polling mechanism that verifies
that stack provisioning completed in a successful state (see [4]), which
means that all stack resources are in ACTIVE state.

- The Nova-based provisioning method doesn’t do any polling, which is
wrong, since an instance can’t fail as fast as possible: the
Trove-taskmanager service doesn’t verify that the launched server has
reached ACTIVE state. That’s issue #1 - the compute instance state is
unknown, whereas resources delivered by Heat are already in ACTIVE state
right away.

Once method [2] or [3] finishes, the taskmanager tries to prepare data for
the guest (see [5]) and then tries to send the prepare call to the guest
(see [6]). Here comes issue #2 - the polling mechanism makes at least 100
API calls to Nova to determine the compute instance status.

The taskmanager also makes almost the same number of calls to the Trove
backend to discover the guest status, which is totally normal.

So, here comes the question: why should I call Nova 99 more times for the
same value if the value asked for the first time was completely acceptable?



There’s only one way to fix it. Since Heat-based provisioning delivers an
instance with a status validation procedure, the same thing should be done
for Nova-based provisioning (we should extract the compute instance status
polling from the guest prepare polling mechanism and integrate it into [2])
and leave only guest status discovery in the guest prepare polling
mechanism.
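
A sketch of the extracted compute-status polling, built on Trove's
poll_until helper (the wrapper function name is mine):

    from trove.common import exception
    from trove.common import utils

    def wait_for_server_active(nova_client, server_id, time_out=600):

        def get_server():
            return nova_client.servers.get(server_id)

        def is_done(server):
            if server.status == 'ERROR':
                # Fail fast: no point polling 99 more times, or sending
                # the guest prepare call at all.
                raise exception.TroveError(
                    "Server %s failed to provision." % server_id)
            return server.status == 'ACTIVE'

        utils.poll_until(get_server, is_done,
                         sleep_time=3, time_out=time_out)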




Benefits? The proposed fix will give the ability to fail fast for corrupted
instances, and it will reduce the amount of redundant Nova API calls made
while attempting to discover the guest status.


Proposed fix for this issue - [7].

[1] - https://launchpad.net/bugs/1325512

[2] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L198-L215

[3] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L190-L197

[4] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L420-L429

[5] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L217-L256

[6] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L254-L266

[7] - https://review.openstack.org/#/c/97194/


Thoughts?

Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Neutron integration test job

2014-07-23 Thread Denis Makogon
Hello, Stackers.


For those of you who are interested in Trove, just letting you know that
Trove can now work with Neutron (hooray!!) instead of Nova-network; see [1]
and [2]. It’s a huge step forward on the road to advanced OpenStack
integration.

But let’s admit it’s not the end; we should deal with:

   1. Adding a Neutron-based configuration for DevStack to let folks try it
      (see [3]).
   2. Implementing/providing a new type of testing job that will run all
      Trove tests with Neutron enabled on a regular basis, to verify that
      all our networking preparations for instances are fine.


The last item is the most interesting, and I’d like to discuss it with all
of you, folks.
I’ve written an initial job template taking into account the specific
configuration required by DevStack and Trove-integration (see [4]), and I’d
like to receive all possible feedback as soon as possible.



[1] - Trove.
https://github.com/openstack/trove/commit/c68fef2b7a61f297b9fe7764dd430eefd4d4a767

[2] - Trove integration.
https://github.com/openstack/trove-integration/commit/9f42f5c9b1a0d8844b3e527bcf2eb9474485d23a

[3] - DevStack patchset. https://review.openstack.org/108966

[4] - POC. https://gist.github.com/denismakogon/76d9bd3181781097c39b


Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Neutron integration test job

2014-07-23 Thread Denis Makogon
On Wed, Jul 23, 2014 at 8:12 PM, Kyle Mestery mest...@mestery.com wrote:

 On Wed, Jul 23, 2014 at 7:28 AM, Denis Makogon dmako...@mirantis.com
 wrote:
  Hello, Stackers.
 
 
 
  For those of you who are interested in Trove, just letting you know that
  Trove can now work with Neutron (hooray!!) instead of Nova-network; see
  [1] and [2]. It’s a huge step forward on the road to advanced OpenStack
  integration.
 
  But let’s admit it’s not the end; we should deal with:
 
  Adding a Neutron-based configuration for DevStack to let folks try it
  (see [3]).
 
 I have some comments on this patch which I've posted in the review.


Thanks for keeping an eye on it. So, you've suggested using
PRIVATE_NETWORK_NAME and PRIVATE_SUBNET_NAME.

Correct me if I'm wrong: according to [1] and [2], when Neutron gets
deployed, it uses a pre-defined network name (defined at [1]) and a
sub-network name (defined at [2]).

If that's it, I'm totally fine with updating the patchset with the
suggested changes.

[1]
https://github.com/openstack-dev/devstack/blob/89a8a15ebe31f4b06e40ecadd4918e687087874c/stackrc#L418-L420
[2]
https://github.com/openstack-dev/devstack/blob/1ecd43da5434b8ef7dafb49b9b30c9c1b18afffe/lib/neutron



  Implementing/providing new type of testing job that will test on a
 regular
  basis all Trove tests with enabled Neutron to verify that all our
 networking
  preparations for instance are fine.
 
 
  The last thing is the most interesting. And i’d like to discuss it with
 all
  of you, folks.
  So, i’ve wrote initial job template taking into account specific
  configuration required by DevStack and Trove-integration, see [4], and
 i’d
  like to receive all possible feedbacks as soon as possible.
 
 This is great! I'd like to see this work land as well, thanks for
 taking this on. I'll add this to my backlog of items to review and
 provide some feedback as well.

 Sounds amazing; thanks for keeping an eye on it. The most interesting part
for me is the job template; I'd like to hear feedback on it as well.

P.S.: sorry about putting the job template in a gist instead of sending it
to review, but I thought it would be good enough to receive feedback.

 Best regards,
Denis Makogon

 Thanks,
 Kyle

 
 
  [1] - Trove.
 
 https://github.com/openstack/trove/commit/c68fef2b7a61f297b9fe7764dd430eefd4d4a767
 
  [2] - Trove integration.
 
 https://github.com/openstack/trove-integration/commit/9f42f5c9b1a0d8844b3e527bcf2eb9474485d23a
 
  [3] - DevStack patchset. https://review.openstack.org/108966
 
  [4] - POC.
 https://gist.github.com/denismakogon/76d9bd3181781097c39b
 
 
 
  Best regards,
 
  Denis Makogon
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Guest prepare call polling mechanism issue

2014-07-23 Thread Denis Makogon
On Wed, Jul 23, 2014 at 7:33 PM, Tim Simpson tim.simp...@rackspace.com
wrote:

  To summarize, this is a conversation about the following LaunchPad bug:
 https://launchpad.net/bugs/1325512
 and Gerrit review: https://review.openstack.org/#/c/97194/6

   You are saying that the function _service_is_active, in addition to
  polling the datastore service status, also polls the status of the Nova
  resource. At first I thought this wasn't the case; however, looking at
  your pull request I was surprised to see that line 320 (
  https://review.openstack.org/#/c/97194/6/trove/taskmanager/models.py)
  polls Nova using the get method (which I wish was called refresh, as to
  me it sounds like a lazy-loader or something despite making a full GET
  request each time).
 So moving this polling out of there into the two respective
 create_server methods, as you have done, is not only going to be useful for
 Heat and avoid the issue of calling Nova 99 times that you describe, but it
 will actually help operations teams to see more clearly that the issue was
 with a server that didn't provision. We actually had an issue in Staging
 the other day that took us forever to figure out because the

Agreed. I guess I need to update the bug report to add more info about the
given issue, but I'm really glad to hear that the proposed change would be
useful. And I agree that for the operations/support team it would be useful
to track provisioning issues that have nothing to do with Trove itself but
are tied to the infrastructure.


 server wasn't provisioning, but before anything checked that it was ACTIVE
 the DNS code detected the server had no ip address (never mind it was in a
 FAILED state) so the logs surfaced this as a DNS error. This change should
 help us avoid such issues.

  Thanks,

  Tim


  --
 *From:* Denis Makogon [dmako...@mirantis.com]
 *Sent:* Wednesday, July 23, 2014 7:30 AM
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [Trove] Guest prepare call polling mechanism
 issue

Hello, Stackers.


  I’d like to discuss the guestagent prepare call polling mechanism issue
 (see [1]).

  Let me first describe why this is actually an issue and why it should be
 fixed. Those of you who are familiar with Trove know that Trove can
 provision instances through the Nova API and the Heat API (see [2] and [3]).



 What’s the difference between these two ways (in general)? The answer
 is simple:

 - The Heat-based provisioning method has a polling mechanism that verifies
 that stack provisioning completed in a successful state (see [4]), which
 means that all stack resources are in ACTIVE state.

 - The Nova-based provisioning method doesn’t do any polling, which is
 wrong, since an instance can’t fail as fast as possible: the
 Trove-taskmanager service doesn’t verify that the launched server has
 reached ACTIVE state. That’s issue #1 - the compute instance state is
 unknown, whereas resources delivered by Heat are already in ACTIVE state
 right away.

  Once method [2] or [3] finishes, the taskmanager tries to prepare data
 for the guest (see [5]) and then tries to send the prepare call to the
 guest (see [6]). Here comes issue #2 - the polling mechanism makes at
 least 100 API calls to Nova to determine the compute instance status.

 The taskmanager also makes almost the same number of calls to the Trove
 backend to discover the guest status, which is totally normal.

  So, here comes the question: why should I call Nova 99 more times for
 the same value if the value asked for the first time was completely
 acceptable?



 There’s only one way to fix it. Since Heat-based provisioning
 delivers an instance with a status validation procedure, the same thing
 should be done for Nova-based provisioning (we should extract the compute
 instance status polling from the guest prepare polling mechanism and
 integrate it into [2]) and leave only guest status discovery in the guest
 prepare polling mechanism.




  Benefits? The proposed fix will give the ability to fail fast for
 corrupted instances, and it will reduce the amount of redundant Nova API
 calls made while attempting to discover the guest status.


  Proposed fix for this issue - [7].

  [1] - https://launchpad.net/bugs/1325512

 [2] -
 https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L198-L215

 [3] -
 https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L190-L197

 [4] -
 https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L420-L429

 [5] -
 https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L217-L256

 [6] -
 https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L254-L266

 [7] - https://review.openstack.org/#/c/97194/


  Thoughts?

  Best regards,

 Denis Makogon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev

Re: [openstack-dev] [Trove] Guest prepare call polling mechanism issue

2014-07-23 Thread Denis Makogon
On Thu, Jul 24, 2014 at 1:01 AM, Nikhil Manchanda nik...@manchanda.me
wrote:


 Tim Simpson writes:

  To summarize, this is a conversation about the following LaunchPad
  bug: https://launchpad.net/bugs/1325512
  and Gerrit review: https://review.openstack.org/#/c/97194/6
 
  You are saying the function _service_is_active in addition to
  polling the datastore service status also polls the status of the Nova
  resource. At first I thought this wasn't the case, however looking at
  your pull request I was surprised to see on line 320
  (https://review.openstack.org/#/c/97194/6/trove/taskmanager/models.py)
  polls Nova using the get method (which I wish was called refresh
  as to me it sounds like a lazy-loader or something despite making a
  full GET request each time).  So moving this polling out of there into
  the two respective create_server methods as you have done is not
  only going to be useful for Heat and avoid the issue of calling Nova
  99 times you describe but it will actually help operations teams to
  see more clearly that the issue was with a server that didn't
  provision. We actually had an issue in Staging the other day that took
  us forever to figure out because the server wasn't provisioning, but
  before anything checked that it was ACTIVE the DNS code detected the
  server had no ip address (never mind it was in a FAILED state) so the
  logs surfaced this as a DNS error. This change should help us avoid
  such issues.
 

 Thanks for bringing this up, Tim / Denis.

 As Tim mentions, it does look like the '_service_is_active' call in
 the taskmanager also polls Nova to check whether the instance is in
 ERROR, causing some unnecessary, extra polling while figuring out the
 state of the Trove instance.

 Given this, it does seem reasonable to split up the polling into two
 separate methods, in a manner similar to what [1] is trying to
 accomplish. However, [1] does seem a bit rough around the edges and
 needs a bit of cleaning up -- and I've commented on the review to this
 effect.


Of course; all the comments are reasonable. Will send a patchset soon.

Thanks,
Denis


 [1] https://review.openstack.org/#/c/97194

 Hope this helps,

 Thanks,
 Nikhil

 
  [...]

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TROVE] Guest prepare call polling mechanism issue

2014-07-21 Thread Denis Makogon
Hello Stackers.


I’d like to discuss a raised issue related to the Trove-guestagent prepare
call polling mechanism (see [1]).

Let me first describe why this is actually an issue and why it should be
fixed. Those of you who are familiar with Trove know that Trove can
provision instances through the Nova API and the Heat API (see [2] and [3]).



What’s the difference between these two ways (in general)? The answer is
simple:

- The Heat-based provisioning method has a polling mechanism that verifies
that stack provisioning completed in a successful state (see [4]), which
means that all stack resources are in ACTIVE state.

- The Nova-based provisioning method doesn’t do any polling, which is
wrong, since an instance can’t fail as fast as possible: the
Trove-taskmanager service doesn’t verify that the launched server has
reached ACTIVE state. That’s issue #1 - the compute instance state is
unknown, whereas resources delivered by Heat are already in ACTIVE state
right away.

Once method [2] or [3] finishes, the taskmanager tries to prepare data for
the guest (see [5]) and then tries to send the prepare call to the guest
(see [6]). Here comes issue #2 - the polling mechanism makes at least 100
API calls to Nova to determine the compute instance status.

The taskmanager also makes almost the same number of calls to the Trove
backend to discover the guest status, which is totally normal.

So, here comes the question: why should I call Nova 99 more times for the
same value if the value asked for the first time was completely OK?



There’s only one way to fix it. Since Heat-based provisioning delivers an
instance with a status validation procedure, the same thing should be done
for Nova-based provisioning (we should extract the compute instance status
polling from the guest prepare polling mechanism and integrate it into [2])
and leave only guest status discovery in the guest prepare polling
mechanism.




Benefits? The proposed fix will give the ability to fail fast for corrupted
instances, and it will reduce the amount of redundant Nova API calls made
while attempting to discover the guest status.


Proposed fix for this issue - [7].

[1] - https://launchpad.net/bugs/1325512

[2] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L198-L215

[3] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L190-L197

[4] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L420-L429

[5] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L217-L256

[6] -
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L254-L266

[7] - https://review.openstack.org/#/c/97194/


Thoughts?

Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Postgresql Anyone working on DBaaS?

2014-07-10 Thread Denis Makogon
Hello Mark.

There are several patches for PostgreSQL hanging in the review queue.

Here's useful link:

https://blueprints.launchpad.net/trove/+spec/postgresql-support

Contact point: Kevin Conway (you can ping him in IRC and ask if he needs
any help)


Best regards,
Denis Makogon


On Thu, Jul 10, 2014 at 9:24 AM, Mark Kirkwood 
mark.kirkw...@catalyst.net.nz wrote:

 Where I work we make use of Postgresql for most of our database needs. It
 would be nice to be able to offer a Postgresql flavor within the Trove
 framework. Is anyone working on adding it in?

 If noone else is, then I might look at doing it, if there are folks
 working on it - let me know if I can help with any part thereof.

 Regards

 Mark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What projects need help?

2014-07-10 Thread Denis Makogon
On Thu, Jul 10, 2014 at 7:13 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-07-08 20:31:26 -0600 (-0600), Brian Jarrett wrote:
 [...]
  Are there any projects that could use more developers?
 [...]

 That's a more complex question than you might think. On the whole
 OpenStack does not need more developers writing new features. What
 we *are* in desperate need of is developers thoroughly reviewing
 proposed patches, writing (good) tests, documenting what's already
 there, fixing bugs, and helping new users and operators/deployers by
 answering their questions on mailing lists, in IRC, and on the Ask
 OpenStack site.


I completely agree with Jeremy. As I see it, OpenStack has two big issues:
 - documentation (deployment, API docs, etc.);
 - bugs, tons of them.

I guess the simple answer to your initial question depends on which type of
cloud service you prefer (IaaS, PaaS (data-plane or data-source API), SaaS,
and of course the OpenStack ecosystem - the oslo.* projects).

So, choose wisely =)



 The fastest way to make your mark is to get involved with
 cross-project (sometimes referred to as horizontal) programs like
 documentation, quality assurance, project infrastructure, release
 cycle management, et cetera.

 Also, welcome aboard!
 --
 Jeremy Stanley

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


And, of course. Welcome, dear friend.


Best regards,
Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Trove] Trove instance got stuck in BUILD state

2014-07-07 Thread Denis Makogon
On Mon, Jul 7, 2014 at 2:33 PM, Syed Hussain syed_huss...@persistent.co.in
wrote:

  Hi,



 I’m installing and configuring Trove (DBaaS) for an existing OpenStack setup.




 I have openstack setup and able to boot nova instances with following
 components:

 1.   keystone

 2.   glance

 3.   neutron

 4.   nova

 5.   cinder



 Followed below documentation for *manual installation of trove*:

 http://docs.openstack.org/developer/trove/dev/manual_install.html  and
 few correction given in this mail thread
 https://www.mail-archive.com/openstack%40lists.openstack.org/msg05262.html
 .


Those docs are useless, since they do not reflect a significant step -
creating custom Trove images. You need to create an image with Trove
installed in it, and create an upstart script to launch the
Trove-guestagent with appropriate configuration files, which come to the
compute instance through file injection.
Vanilla images are good, but they don't have Trove in them at all.

Here are some useful steps:
1. Create a custom image with the Trove code in it (upstart scripts, etc.).
2. Register a datastore and associate the given image with the appropriate
datastore/version.

FYI, Trove is not fully integrated with devstack, so personally I'd suggest
using https://github.com/openstack/trove-integration for a simple
(3 clicks) Trove + DevStack deployment.




 Booted up a trove instance

 trove create myTrove 7 --size=2 --databases=db3 --datastore_version
 mysql-5.5 --datastore mysql --nic
 net-id=752554ef-800c-46d8-b991-361db6c58226



 Trove instance got created but is STUCK IN BUILD state.






 · nova instance associated with db instance got created
 successfully.

Correct.

  · Cinder volumes, security groups etc are also getting created
 successfully.

Correct.

  · I checked nova, cinder logs everything looks fine but in
 trove-taskmanager.log below error got logged:

 PollTimeOut: Polling request timed out


Correct - the Trove-guestagent service wasn't able to report its state.

 I am also unable to access mysql in the booted-up trove instance via:
 mysql -h instance-IP

 · Also I’m unable to delete this instance.

 oERROR: Instance 23c8f4d5-4905-47d2-9992-13118dfa003f is not ready.
 (HTTP 422) (may be this is expected)

Correct. You cannot modify/use instances that remain in BUILD state.


  I’m a novice in OpenStack, and new to Trove.

 Thanks in advance; any help is greatly appreciated.



 Thanks  Regards,

 *Syed Afzal Hussain | **Software Engineer | OpenStack*




I'd be glad to help you with other questions related to Trove deployment.


Best regards,
Denis Makogon


Re: [openstack-dev] [Openstack] [Trove] Trove instance got stuck in BUILD state

2014-07-07 Thread Denis Makogon
On Mon, Jul 7, 2014 at 3:40 PM, Amrith Kumar amr...@tesora.com wrote:

 Denis Makogon (dmako...@mirantis.com) writes:



 | Those docs are useless, since they do not reflect a significant step:
 | creating custom Trove images. You need to create an image with Trove
 | installed in it and create an upstart script to launch Trove-guestagent
 | with the appropriate configuration files that come to the compute
 | instance through file injection.
 | Vanilla images are good, but they don't have Trove in them at all.



 I think it is totally ludicrous (and to all the technical writers who work
 on OpenStack, downright offensive) to say the “docs are useless”. Not only
 have I been able to install and successfully operate an OpenStack
 installation by (largely) following the documentation, but
 “trove-integration” and “redstack”, while useful for developers, are tools
 I would highly doubt a production deployment of Trove would use.


Amrith, those docs don't reflect any post-deployment steps; even more, the doc
still suggests using trove-cli, which was deprecated a long time ago. I do
agree that the trove-integration project can't be used as a production
deployment system, but for first try-outs it is more than enough.



 Syed, maybe you need to download a guest image for Trove, or maybe there
 is something else amiss with your setup. Happy to catch up with you on IRC
 and help you with that. Optionally, email me and I’ll give you a hand.



Syed, I'd suggest using the heat-jeos tools
(http://docs.openstack.org/developer/heat/getting_started/jeos_building.html)
to build custom images for Trove, since they don't force you to
rely on any pre-baked images built for other production deployments.
 Or there's another way to accomplish Trove instance provisioning: you
can use the cloud-init mechanism (for more information see
https://github.com/openstack/trove/blob/master/trove/common/cfg.py#L237-L239,
an option for the Trove-taskmanager service). Each cloud-init script should be
placed under {{cloud-init-script-location}}/{{datastore}}
(/etc/trove/cloud-init/mysql, etc.); a sketch follows.
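
Such a script can be a plain shell bootstrap; here is a minimal hypothetical
sketch, assuming a Debian/Ubuntu guest and a packaged guest agent (adjust it
to however your image is actually built):

    #!/bin/bash
    # Hypothetical /etc/trove/cloud-init/mysql: bootstrap the guest on first boot.
    apt-get update
    apt-get install -y mysql-server-5.5 trove-guestagent
    service trove-guestagent restart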


 Good job on getting all the core services installed and running, and
 welcome to the OpenStack community.



 -amrith



 --



 Amrith Kumar, CTO, Tesora



 Phone: +1-978-563-9590

 Twitter: @amrithkumar

 Skype: amrith.skype

 Web: http://www.tesora.com

 IRC: amrith @freenode #openstack-trove #tesora







Re: [openstack-dev] [Openstack] [Trove] Trove instance got stuck in BUILD state

2014-07-07 Thread Denis Makogon
Mark, there is also no documentation about service tuning (no description
of service-related options; the sample configs in the Trove repo are not
enough). So, I think we should extend your list of significant things to
document.

Thanks,
Denis M.

On Tuesday, July 8, 2014, Mark Kirkwood wrote:

 On 08/07/14 00:40, Amrith Kumar wrote:



  I think it is totally ludicrous (and to all the technical writers who
 work on OpenStack, downright offensive) to say the “docs are useless”. Not
 only have I been able to install and successfully operate an OpenStack
 installation by (largely) following the documentation, but
 “trove-integration” and “redstack”, while useful for developers, are tools
 I would highly doubt a production deployment of Trove would use.



 Syed, maybe you need to download a guest image for Trove, or maybe there
 is something else amiss with your setup. Happy to catch up with you on IRC
 and help you with that. Optionally, email me and I’ll give you a hand.




 It is a bit harsh, to be sure. However, critical areas are light/thin or
 not covered at all, and this is bound to generate a bit of frustration for
 folks wanting to use this feature.

 In particular:

 - guest image preparation
 - guest file injection (/etc/guest_info) nova interaction
 - dns requirements for guest image (self hostname resolv)
 - swift backup config authorization
 - api_extensions_path setting and how critical that is

 There are probably more that I have forgotten (repressed perhaps...)!

 Regards

 Mark




Re: [openstack-dev] [trove] guestagent config for overriding managers

2014-07-02 Thread Denis Makogon
Hi Craig.

This seems like a perfect task for stevedore and its plugin system. I do agree
that it looks very nasty to have a huge dict of managers.
I don't like the idea of placing 'manager' under config groups, because
each config group has to be registered, and only once that's done can you
use its options.

There should be another way to deal with it. As I already said, we should
take a look at stevedore; a rough sketch of that approach follows.
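
For illustration only: the entry-point namespace, module paths, and helper
below are hypothetical, but the stevedore API calls are real.

    # In the setup.cfg of the package shipping the managers (hypothetical
    # namespace and paths):
    #
    # [entry_points]
    # trove.guestagent.datastore =
    #     mysql = my.guestagent.datastore.mysql.manager:Manager
    #     percona = my.guestagent.datastore.mysql.manager:Manager

    from stevedore import driver

    def load_datastore_manager(datastore):
        # Resolve and instantiate the manager registered for this datastore.
        return driver.DriverManager(
            namespace='trove.guestagent.datastore',
            name=datastore,
            invoke_on_load=True).driver

    manager = load_datastore_manager('mysql')

Each datastore would then only need an entry point in its package, instead of
one huge registry string in the config file.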

Best regards,
Denis Makogon


On Wed, Jul 2, 2014 at 7:34 AM, Craig Vyvial cp16...@gmail.com wrote:

 If you want to override the Trove guestagent managers, it looks really
 nasty to have EVERY manager on a single line here.

 datastore_registry_ext =
 mysql:my.guestagent.datastore.mysql.manager.Manager,percona:my.guestagent.datastore.mysql.manager.Manager,...

 This needs to be tidied up and split out in some way.
 Ideally, each of these should be on its own line.

 datastore_registry_ext =
 mysql:my.guestagent.datastore.mysql.manager.Manager
 datastore_registry_ext =
 percona:my.guestagent.datastore.mysql.manager.Manager

 or maybe...

 datastores = mysql,precona
 [mysql]
 manager = my.guestagent.datastore.mysql.manager.Manager
 [percona]
 manager = my.guestagent.datastore.percona.manager.Manager

 After typing out the second idea, I don't like it as much as something like
 the first way.

 Thoughts?

 Thanks,
 - Craig Vyvial



Re: [openstack-dev] [trove] how to trigger a recheck of reddwarf CI?

2014-06-29 Thread Denis Makogon
Hello, Matt.

You need to log into https://rdjenkins.dyndns.org/job/Trove-Gate/ (the auth
system uses Launchpad OpenID).
Then you need to find the job in question and click Retrigger. That's it.

FYI, I already retriggered your job. You're welcome.

Best regards,
Denis Makogon




On Sun, Jun 29, 2014 at 4:34 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:

 The reddwarf 3rd party CI is failing on an oslo sync patch [1] but Jenkins
 is fine. I'm unable to find any wiki or guideline on how to recheck just
 the reddwarf CI; is that possible?

 [1] https://review.openstack.org/#/c/103232/
 --

 Thanks,

 Matt Riedemann




[openstack-dev] [Trove] Heat integration

2014-06-11 Thread Denis Makogon
Good day, Stackers, Trove community.


I'd like to start a thread related to orchestration-based resource
management. In its current state, Heat support in Trove is nothing more than
experimental. Trove should be able to fully support Heat as a resource
management driver.

Why is it so important?

Because Trove should not do what it does now (cloud service orchestration
is not part of the OS Database Program). Trove should delegate all such
tasks to the Cloud Orchestration Service (Heat).

How can Heat help Trove?

Easily: the Trove API allows performing the following resource operations:

   - Trove instance provisioning (a combination of a Nova compute instance
     and a Cinder volume).
   - Resize instances (compute instance flavor resize).
   - Volume resize (Cinder volume resize).
   - Security groups management (nova-network, Neutron):
     - create rules in a group;
     - create a group;
     - update rule CIDRs.


Heat allows doing almost all of the given tasks; a sketch of such a stack is
below.
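
For illustration, a Trove instance stack could look roughly like the following
HOT template. This is a sketch only (the flavor and image names are
placeholders), not the template Trove actually ships:

    heat_template_version: 2013-05-23

    resources:
      db_volume:
        type: OS::Cinder::Volume
        properties:
          size: 2

      db_server:
        type: OS::Nova::Server
        properties:
          flavor: m1.small
          image: trove-mysql-guest   # placeholder guest image name

      db_volume_attachment:
        type: OS::Cinder::VolumeAttachment
        properties:
          instance_uuid: { get_resource: db_server }
          volume_id: { get_resource: db_volume }
          mountpoint: /dev/vdb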

Resource management interface

What is a management interface? It is an abstract class that describes the
required tasks to accomplish. From the Trove-taskmanager perspective, a
management interface is nothing more than the RPC service manager that is
used at service start [1]; a minimal sketch follows.
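
As a minimal sketch (class and method names here are hypothetical, not the
actual Trove code):

    import abc


    class ResourceManager(object):
        """Hypothetical base interface shared by NATIVES and ORCHESTRATOR."""

        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def create_instance(self, context, instance_id, flavor, volume_size):
            """Provision compute and storage for a Trove instance."""

        @abc.abstractmethod
        def resize_flavor(self, context, instance_id, new_flavor):
            """Resize the underlying compute instance."""

        @abc.abstractmethod
        def resize_volume(self, context, instance_id, new_size):
            """Resize the attached volume."""

        @abc.abstractmethod
        def delete_instance(self, context, instance_id):
            """Tear down every resource owned by the instance."""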

Why is it needed?

The first answer is: to split out two completely different resource
management engines: the Nova/Cinder/Neutron engine, called “NATIVES”, and
the Heat engine, called “ORCHESTRATOR”.

As you all know, they cannot work together, because they act on resources
in their own manners. But both engines share more than enough common code
inside Trove.


Is it backward compatible?

Here comes the third (mixed) manager, called “MIGRATION”. It allows working
with previously provisioned instances through the NATIVES engine (resizes,
migration, deletion), while new instances provisioned in the future will be
created within stacks through the ORCHESTRATOR.

So, there are three valid options:

   - use NATIVES if there's no Heat available;
   - use ORCHESTRATOR to work with Heat only;
   - use MIGRATION to work with the mixed manager.


TODO list

Trove

   - provide an abstract manager interface;
   - extract common code shared between natives/heat/migration;
   - implement native management support;
   - implement orchestrator management support;
   - implement migration management support;

 Heat


   - implement instance resize; Done:
     https://github.com/openstack/heat/blob/master/heat/engine/resources/instance.py#L564-L648
   - implement volume resize; Done:
     https://github.com/openstack/heat/commit/34e215c3c930b3b79bc3795dca3b5a73678f2a36


Testing environment

In terms of this topic I'd like to propose two new experimental gates for
Trove:

   - gate-trove-heat-integration (integration testing);
   - gate-trove-heat-integration-faked (testing based upon a fake
     implementation of Heat).

For the first iteration, gate-trove-heat-integration would be proposed and
used.

Review process

For the next BP review meeting (Trove) I will revisit this BP, since all
required tasks were done.

As for the bug reports, I'd like to ask the Trove core team to take a look
at them and review/approve/merge them as soon as possible so we can start
working on Heat integration.

Several blueprints have already been filed which would give Trove the
ability to fully support orchestrator-based provisioning:

[TROVE BP SPACE]

https://blueprints.launchpad.net/trove/+spec/stack-id

https://blueprints.launchpad.net/trove/+spec/resource-manager-interface

[TROVE BUG-REPORT SPACE]

https://bugs.launchpad.net/trove/+bug/1276228

https://bugs.launchpad.net/trove/+bug/1325512

https://bugs.launchpad.net/trove/+bug/1328464


Best regards,
Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Datastore integration testing

2014-06-11 Thread Denis Makogon
Good day, Stackers, Trove community.


I’d like to start a thread related to the Datastore testing infrastructure.

Why do we need it?

At this time Trove has more than one database integrated. To be precise:

   - MySQL
     - mysql-server
     - percona
   - Cassandra
   - MongoDB
   - Redis
   - Couchbase


For now, Trove integration tests are pinned to the MySQL database, and we need
to be able to run integration tests for all of them automatically, to avoid
manual verification (spin up redstack, build an image, run tests) of each new
datastore that is proposed for integration.

How can it be accomplished?

There’s one thing (https://bugs.launchpad.net/trove-integration/+bug/1328557)
that blocks us from building new gates for each datastore, and we have a
proposed solution (https://review.openstack.org/#/c/84964/) that will deal
with the described issue.

The other thing is building the additional gates. For those of you who are
interested, please take a look at what Nikhil has proposed
(https://review.openstack.org/#/c/98517/). That type of gate will give us
the ability to accomplish continuous delivery of datastore-specific images.

Plans TODO

I’d like to involve the Trove core team in this discussion, since there are
issues that should be reviewed/approved/merged as soon as possible.

After that, as for me, the plan looks like:


   1. Cassandra-image building/delivery gate.
   2. Cassandra-integration-tests gate.
   3. MongoDB-image building/delivery gate.
   4. MongoDB-integration-tests gate.
   5. Redis-image building/delivery gate.
   6. Redis-integration-tests gate.
   7. Couchbase-image building/delivery gate.
   8. Couchbase-integration-tests gate.


P.S.: For those of you who are interested in building more gates: before
building another image/integration-tests gate, please ensure that the chosen
datastore passes all integration tests.


Best regards,

Denis Makogon


[openstack-dev] [Trove][Notifications] Notifications refactoring

2014-05-23 Thread Denis Makogon
Good day, Trove community.


    I would like to start a thread related to the Trove notification framework.

Notification design was defined as: “Trove will emit events for
resources as they are manipulated. These events can be used to meter the
service and possibly used to calculate bills.”

    The actual reason for this mail is to start a discussion related to
re-implementing/refactoring the notifications. For now, notifications are
hard-pinned to Nova provisioning.

What kind of issues/problems do notifications have?

Let's first take a look at how they are implemented:
[1] https://wiki.openstack.org/wiki/Trove/trove-notifications - this is how
the notification design was defined and approved.
[2] https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L73-L133 -
this is how notifications are currently implemented.
[5] https://wiki.openstack.org/wiki/Trove/trove-notifications-v2 - this is
how notifications should look.

First of all, there are a lot of issues with [2]
(https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L73-L133):

   - pinning notifications to the nova client - this is the wrong way,
     because Trove is going to support Heat for resource management
     (https://blueprints.launchpad.net/trove/+spec/resource-manager-interface);
   - availability zone - should be used in the “trove.instance.create”
     notification only; there is no need to send it each time
     “trove.instance.modify_*” happens (* - flavor, volume);
   - instance_size - this payload attribute refers to the amount of RAM
     defined by the flavor;
   - instance_type - this payload attribute refers to the flavor name, which
     seems odd;
   - instance_type_id - same thing, a payload attribute referring to the
     flavor id, which seems odd;
   - nova_instance_id - to be more generic, we should refrain from using
     engine-specific names;
   - state_description and state - both refer to the instance service
     status, an actual duplication;
   - nova_volume_id - same as for nova_instance_id, it should be more
     generic, since an instance can have a Cinder volume that has nothing in
     common with Nova at all.

We need more generic, more flexible notifications that can be used with any
provisioning engine, no matter what it actually is (Nova/Heat).

    How can we re-write the notifications, taking into account the described
issues?

   1. We need to re-write the send_usage_event method
      (https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L88).
   2. It should not ask Nova for the flavor, server and AZ, because that is
      redundant. So, the beginning of the method should look like [3]
      (https://gist.github.com/denismakogon/9c2d802e2a61eb6164d2).
   3. The payload should be re-written. It should have the following form [4]
      (https://gist.github.com/denismakogon/c4a784d364f0af0fc543); a
      hypothetical sketch is below.
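
Purely for illustration (the real proposal lives in gist [4] above; every key
below is hypothetical), an engine-agnostic payload might look roughly like:

    # Hypothetical sketch of a generic payload; not the gist's actual content.
    payload = {
        'instance_id': instance.id,
        'instance_name': instance.name,
        'tenant_id': instance.tenant_id,
        'created_at': instance.created,
        'flavor_id': flavor.id,      # generic, engine-agnostic naming
        'server_id': server.id,      # instead of nova_instance_id
        'volume_id': volume.id,      # instead of nova_volume_id
        'service_status': status,    # replaces state/state_description pair
    }
    # availability_zone would be added only for trove.instance.create events.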


    What is the actual value-add of this refactoring?

    Notifications would be reusable for any kind of action (create, delete,
resizes), no matter what kind of provisioning engine was used.

    What are the next steps after the suggested refactoring?

    The next steps will cover the required notifications that were described
as part of the Ceilometer integration
(https://blueprints.launchpad.net/trove/+spec/ceilometer-integration).


Best regards,

Denis Makogon

www.mirantis.com

dmako...@mirantis.com


Re: [openstack-dev] [Trove] Proposal to add Craig Vyvial to trove-core

2014-05-06 Thread Denis Makogon
+1


On Tue, May 6, 2014 at 5:06 PM, Peter Stachowski pe...@tesora.com wrote:

 +1

 -Original Message-
 From: Nikhil Manchanda [mailto:nik...@manchanda.me]
 Sent: May-06-14 5:32 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Trove] Proposal to add Craig Vyvial to trove-core


 Hello folks:

 I'm proposing to add Craig Vyvial (cp16net) to trove-core.

 Craig has been working with Trove for a while now. He has been a
 consistently active reviewer, and has provided insightful comments on
 numerous reviews. He has submitted quality code to multiple features in
 Trove, and most recently drove the implementation of configuration groups
 in Icehouse.

 https://review.openstack.org/#/q/reviewer:%22Craig+Vyvial%22,n,z
 https://review.openstack.org/#/q/owner:%22Craig+Vyvial%22,n,z

 Please respond with +1/-1, or any further comments.

 Thanks,
 Nikhil



Re: [openstack-dev] Monitoring as a Service

2014-05-04 Thread Denis Makogon
Hello to All.

I also +1 this idea. As I can see, the Telemetry program (according to
Launchpad) covers infrastructure metrics (networking, etc.) and
in-compute-instance metrics/monitoring.
So, the best option, I guess, is to propose adding such a great feature to
Ceilometer. In-compute-instance monitoring would be a great value-add to
upstream Ceilometer.
As for me, it's a good chance to integrate well-known, production-ready
monitoring systems that have tons of specific plugins (like Nagios, etc.).

Best regards,
Denis Makogon

On Sunday, May 4, 2014, John Griffith wrote:




 On Sun, May 4, 2014 at 9:37 AM, Thomas Goirand z...@debian.org wrote:

 On 05/02/2014 05:17 AM, Alexandre Viau wrote:
  Hello Everyone!
 
  My name is Alexandre Viau from Savoir-Faire Linux.
 
  We have submited a Monitoring as a Service blueprint and need feedback.
 
  Problem to solve: Ceilometer's purpose is to track and *measure/meter*
 usage information collected from OpenStack components (originally for
 billing). While Ceilometer is useful for cloud operators and
 infrastructure metering, it is not a *monitoring* solution for the tenants
 and their services/applications running in the cloud, because it does not
 allow for service/application-level monitoring and it ignores detailed and
 precise guest system metrics.
 
  Proposed solution: We would like to add Monitoring as a Service to
 Openstack
 
  Just like Rackspace's Cloud Monitoring, the new monitoring service -
 let's call it OpenStackMonitor for now - would let users/tenants keep track
 of their resources in the cloud and receive instant notifications when
 they require attention.
 
  This RESTful API would enable users to create multiple monitors with
 predefined checks, such as PING, CPU usage, HTTPS and SMTP or custom checks
 performed by a Monitoring Agent on the instance they want to monitor.
 
  Predefined checks such as CPU and disk usage could be polled from
 Ceilometer. Other predefined checks would be performed by the new
 monitoring service itself. Checks such as PING could be flagged to be
 performed from multiple sites.
 
  Custom checks would be performed by an optional Monitoring Agent. Their
 results would be polled by the monitoring service and stored in Ceilometer.
 
  If you wish to collaborate, feel free to contact me at
 alexandre.v...@savoirfairelinux.com
  The blueprint is available here:
 https://blueprints.launchpad.net/openstack-ci/+spec/monitoring-as-a-service
 
  Thanks!

 I would prefer it if monitoring capabilities were added to Ceilometer
 rather than adding yet another project to deal with.

 What's the reason for not adding the feature to Ceilometer directly?

 Thomas




 I'd also be interested in the overlap between your proposal and
 Ceilometer. It seems at first thought that it would be better to introduce
 the monitoring functionality into Ceilometer and make that project more
 diverse, as opposed to yet another project.


