[openstack-dev] [cinder] HA issues

2014-12-08 Thread Dulko, Michal
Hi all!

At the summit during crossproject HA session there were multiple Cinder issues 
mentioned. These can be found in this etherpad: 
https://etherpad.openstack.org/p/kilo-crossproject-ha-integration

Is there any ongoing effort to fix these issues? Is there an idea how to 
approach any of them?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] HA issues

2014-12-09 Thread Dulko, Michal
And what about the lack of recovery in case of a failure mid-task? I can see that 
there's some TaskFlow integration done. This lib seems to address these issues (if 
used with the taskflow.persistence submodule, which Cinder isn't using). Any plans 
for further integration with TaskFlow?

-Original Message-
From: John Griffith [mailto:john.griffi...@gmail.com] 
Sent: Monday, December 8, 2014 11:28 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] HA issues

On Mon, Dec 8, 2014 at 8:18 AM, Dulko, Michal  wrote:
> Hi all!
>
>
>
> At the summit during crossproject HA session there were multiple 
> Cinder issues mentioned. These can be found in this etherpad:
> https://etherpad.openstack.org/p/kilo-crossproject-ha-integration
>
>
>
> Is there any ongoing effort to fix these issues? Is there an idea how 
> to approach any of them?
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Thanks for the nudge on this; personally I hadn't seen it. The items are pretty 
vague, but there are definitely plans to try to address a number of race conditions 
etc. I'm not aware of any specific plans to focus on HA from this perspective, or of 
anybody stepping up to work on it, but it certainly would be great for somebody to 
dig in and start fleshing this out.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] meeting place for event Monday night in Tokyo ...

2015-10-25 Thread Dulko, Michal
On Sun, 2015-10-25 at 22:27 +0900, Mike Perez wrote:
> On Oct 25, 2015, at 18:19, Mike Perez  wrote:
> 
> >> On Oct 25, 2015, at 10:48, Mike Perez  wrote:
> >> 
> >> On 15:54 Oct 21, Jay S. Bryant wrote:
> >> 
> >> 
> >> 
> >>> Not sure where the evening will take us, but we are planning to meet
> >>> by registration at the Convention Center.  Looking at the map, if I
> >>> am reading it properly, in front of the Huawei Community Lounge on
> >>> the first floor looks like a good place to meet that is right near
> >>> registration.  So, lets go for that.  :-)
> >> 
> >> I know a place with local craft beer and pizza. The only time this week 
> >> you'll
> >> probably have it in Tokyo with the other festivities happening.
> >> 
> >> venue: http://bairdbeer.com/en/tap/nakameguro.html
> >> map: https://goo.gl/maps/umqvsjPag3S2
> >> 
> >> I delegate someone to make reservations for 15 people.
> > 
> > Taking initiative here and booked our restaurant for 15 people. Done.
> > 
> > T.Y. HARBOR
> > Japan, 〒140-0002 Tokyo, Shinagawa, Higashishinagawa, 2 Chome−1, 東品川2−1−3
> > https://goo.gl/maps/bg6awGGf2GT2
> 
> This will be at 7pm Monday.

Are we still meeting at 7:30pm near the Summit venue or should we head
directly to the restaurant?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] status of the live upgrade session

2015-11-02 Thread Dulko, Michal
On Sat, 2015-10-31 at 12:08 +0800, Gareth wrote:
> Hey guys,
> 
> In this summary
> https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads, there
> is a live upgrade session. But the linked the etherpad is empty:
> https://etherpad.openstack.org/p/mitaka-crossproject-upgrades .
> 
> So what's the status or conclusion of this session?
> 

During the session Dan Smith explained how Nova achieved live upgrade
capabilities and walked through the different parts of that effort. Apart
from that we discussed the approach other projects - Neutron, Cinder, Heat -
should take on that.

I've added my own notes to the Etherpad. I'm not sure if these are very
helpful, but it's probably better than nothing. :)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Specifications to support High Availability Active-Active configurations in Cinder

2015-11-03 Thread Dulko, Michal
On Tue, 2015-10-20 at 20:17 +0200, Gorka Eguileor wrote:
> Hi,
> 
> We finally have ready for review all specifications required to support
> High Availability Active/Active configurations in Cinder's Volume nodes.
> 
> There is a Blueprint to track this effort [1] and the specs are as follow:
> 
> - General description of the issues and solutions [2]
> - Removal of Races on API nodes [3]
> - Job distribution to clusters [4]
> - Cleanup process of crashed nodes [5]
> - Data corruption prevention [6]
> - Removing local file locks from the manager [7]
> - Removing local file locks from drivers [8]
> 
> (snip)
> 
> [1]: 
> https://blueprints.launchpad.net/cinder/+spec/cinder-volume-active-active-support
> [2]: https://review.openstack.org/232599
> [3]: https://review.openstack.org/207101
> [4]: https://review.openstack.org/232595
> [5]: https://review.openstack.org/236977
> [6]: https://review.openstack.org/237076
> [7]: https://review.openstack.org/237602
> [8]: https://review.openstack.org/237604

I just want to give a heads up that during the Summit we discussed this topic
and the specs will be modified to reflect decisions made there. General notes
from the sessions can be found in [1], [2]. The main points are that during the
DLM session [3] it was decided that projects can hard-depend on a DLM - which
may make things easier for us. Also, we want to disable automatic cleanup of
stale resources in the first version of c-vol A/A, because such an
implementation should be simpler and safer.

[1] https://etherpad.openstack.org/p/mitaka-cinder-cvol-aa
[2] https://etherpad.openstack.org/p/mitaka-cinder-volmgr-locks
[3] https://etherpad.openstack.org/p/mitaka-cross-project-dlm


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Adding a new feature in Kilo. Is it possible?

2015-11-03 Thread Dulko, Michal
On Tue, 2015-11-03 at 18:57 +0100, Michał Dubiel wrote:
> Hi all,

> We have a simple patch allowing to use OpenContrail's vrouter with
> vhostuser vif types (currently only OVS has support for that). We
> would like to contribute it. 

> However, We would like this change to land in the next maintenance
> release of Kilo. Is it possible? What should be the process for this?
> Should we prepare a blueprint and review request for the 'master'
> branch first? It is small self contained change so I believe it does
> not need a nova-spec.

> Regards,
> Michal

The policy is that backports to Kilo are now possible for security fixes
only [1]. Even if your commit fell into the security bugfix category, it
would need to be merged to master first.

I think your best call is contributing the feature to the current master
(Mitaka) and preparing a downstream backport for your internal needs.

[1] https://wiki.openstack.org/wiki/Releases
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]Do we have project scope for cinder?

2015-11-29 Thread Dulko, Michal
On Sat, 2015-11-28 at 10:56 +0800, hao wang wrote:
> Hi guys,
> 
> I notice nova have a clarification of project scope:
> http://docs.openstack.org/developer/nova/project_scope.html
> 
> I want to find cinder's, but failed,  do you know where to find it?
> 
> It's important to let developers know what feature should be
> introduced into cinder and what shouldn't.
> 
> BR
> Wang Hao

I believe the Nova team needed to formalize the scope to have an explanation
for all the "this doesn't belong in Nova" comments on feature requests.
Does Cinder suffer from similar problems? From my perspective it's not
critically needed.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]New Quota Subteam on Nova

2015-12-02 Thread Dulko, Michal
On Tue, 2015-12-01 at 11:45 -0800, Vilobh Meshram wrote:

> Having worked in the area of Quotas for a while now by introducing
> features like Cinder Nested Quota Driver [1] [2] I strongly feel that
> something like a Nova Quota sub-team will definitely help. Mentioning
> about Cinder Quota driver since it was accepted in Mitaka design
> summit that Nova Nested Quota Driver[3] would like to pursue the route
> taken by Cinder.  Since Nested quota is a one part of Quota subsystem
> and working in small team helped to iterate quickly for Nested Quota
> patches[4][5][6][7] so IMHO forming a Nova quota subteam will help.

Just FYI - recently we've identified several caveats in Cinder's nested
quotas approach. The main issue is the inability to function without the
Keystone V3 API. I'm not sure if dropping support for V2 was intentional. Apart
from that, some exceptions are silenced, which results in odd behavior when
calling quotas as a non-admin user.

I don't want to diminish the work you've done, but just to signal that
quota management functionality isn't trivial to work on.

On the whole topic - I think this may even be a cross-project effort. In the
case of Cinder, we have quota code that's very similar to Nova's, so I will be
watching the subteam's work very closely for any improvements that can be
applied to Cinder. We're struggling with quotas getting out of sync all the
time.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Taskflow]Review help to cinder bp:Implement function to manage/unmanage snapshots

2015-07-10 Thread Dulko, Michal
Feel free to start the work. Right now I'm working on c-vol's create_volume 
flow. The review is here: https://review.openstack.org/#/c/193167/

From: hao wang [mailto:sxmatch1...@gmail.com]
Sent: Friday, July 10, 2015 4:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] [Taskflow]Review help to cinder 
bp:Implement function to manage/unmanage snapshots

Sorry for the long delay. I'd like to help review those patches, and I haven't 
found the patch about refactoring the manage_existing flow in the manager yet, so 
maybe I can give it a try.

2015-06-02 17:45 GMT+08:00 Dulko, Michal <michal.du...@intel.com>:
Right now we're working on refactoring the current TaskFlow implementations in 
Cinder to make them more readable and clean. Then we'll be able to decide if we 
want to get more TaskFlow into Cinder or step back from using it. The deadline for 
the refactoring work is around the 1st of July.

Here's the related patch for the scheduler's create_volume workflow: 
https://review.openstack.org/#/c/186439/

Currently I'm working on a patch for the API's create_volume and John Griffith 
agreed to work on the manager's one (I don't know the current status). If you want 
to help with these efforts - reviews are always welcome. You may also take a shot 
at refactoring the manage_existing flow in the manager. It seems simple enough, but 
maybe there are some improvements we can make to increase its readability.

From: hao wang [mailto:sxmatch1...@gmail.com]
Sent: Tuesday, June 2, 2015 11:30 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Cinder] [Taskflow]Review help to cinder bp:Implement 
function to manage/unmanage snapshots

Hi, folks,

There is a Cinder BP, "Implement function to manage/unmanage snapshots" 
(https://review.openstack.org/#/c/144590/), which uses TaskFlow to implement this 
feature.

So I need your help (Cinder & TaskFlow folks) to push this forward.

Thanks.



--

Best Wishes For You!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Best Wishes For You!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How should we expose host capabilities to the scheduler

2015-08-03 Thread Dulko, Michal
> -Original Message-
> From: Dugger, Donald D [mailto:donald.d.dug...@intel.com]
> Sent: Monday, August 3, 2015 7:40 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] How should we expose host capabilities to the
> scheduler
> 
> Without going into the solution space, the first thing we need to do is make
> sure we know what the requirements are for exposing host capabilities.  At a
> minimum we need to:
> 
> 1. Enumerate the capabilities.  This will involve both quantitative values
> (amount of RAM, amount of disk, ...) and Booleans (magic instructions
> present).  Also, there will be static capabilities that are discovered at boot
> time and don't change afterwards and dynamic capabilities that vary during
> node operation.
> 2. Expose the capabilities to both users and operators.
> 3. Request specific capabilities.  A way of requesting an instance with an
> explicit list of specific capabilities is a minimal requirement.  It would
> probably also be good to have a way to easily specify an aggregate that
> encompasses a set of capabilities.
> 
> Note that I'm not saying we should remove flavors, but we might need a
> different way to specify what makes up a flavor.
> 
> As I said, I don't have the answer to how to do this but I want to start a
> discussion on where we go from here.
> 
> --
> Don Dugger
> "Censeo Toto nos in Kansa esse decisse." - D. Gale
> Ph: 303/443-3786

There already is a Glance Metadata Catalog, which enumerates and exposes 
different meaningful extra_specs that can be attached to a flavor. The list of 
capabilities is defined here: 
https://github.com/openstack/glance/tree/master/etc/metadefs. An example 
definition of flavor extra_specs: 
https://github.com/openstack/glance/blob/master/etc/metadefs/compute-host-capabilities.json.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-10 Thread Dulko, Michal
Hi,

In the Kilo cycle [1] was merged. It started passing the AZ of a booted VM to 
Cinder to make volumes appear in the same AZ as the VM. This is certainly a good 
approach, but I wonder how to deal with a use case where the administrator cares 
about the AZ of the VM's compute node, but wants to ignore the AZ of the volume. 
Such a case would be when fault tolerance of storage is maintained at another 
level - for example using Ceph replication and failure domains.

Normally I would simply disable the AvailabilityZoneFilter in cinder.conf, but it 
turns out cinder-api validates whether the availability zone is correct [2]. This 
means that if Cinder has no AZs configured, all requests from Nova will fail at 
the API level.
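
(By "disable the AvailabilityZoneFilter" I mean roughly the following in 
cinder.conf - a sketch only, assuming the default scheduler filter list:

    [DEFAULT]
    # default is AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
    scheduler_default_filters = CapacityFilter,CapabilitiesFilter

but as noted above, the API-level check would still reject the request.)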

Configuring fake AZs in Cinder is also problematic, because an AZ cannot be 
configured in a per-backend manner. I can only configure it per c-vol node, so I 
would need N extra nodes running c-vol, where N is the number of AZs, to achieve 
that.

Is there any solution to satisfy such use case?

[1] https://review.openstack.org/#/c/157041
[2] 
https://github.com/openstack/cinder/blob/master/cinder/volume/flows/api/create_volume.py#L279-L282

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [grenade][cinder] Updates of rootwrap filters

2015-08-26 Thread Dulko, Michal
Hi,

Recently, when working on a simple bug [1], I ran into a need to change the 
rootwrap filter rules for a few commands. After sending the fix to Gerrit [2] it 
turned out that when testing the upgraded cloud, grenade hadn't copied my updated 
volume.filters file and therefore failed the check. I wonder how I should approach 
the issue:
1. Make a grenade script for Cinder that copies the new file to the upgraded cloud.
2. Divide the patch into two parts - first add the new rules, leaving the old 
ones in place, then fix the bug and remove the old rules (see the sketch below).
3. ?
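
For illustration, option 2 would mean something like this in 
etc/cinder/rootwrap.d/volume.filters (the rule names here are made up - see [2] 
for the real ones):

    [Filters]
    # old rule kept for one release so a not-yet-upgraded node keeps working
    lvrename: CommandFilter, lvrename, root
    # new rule introduced by the fix
    lvrename_lcall: EnvFilter, env, root, LC_ALL=C, lvrename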

Any opinions?

[1] https://bugs.launchpad.net/cinder/+bug/1488433
[2] https://review.openstack.org/#/c/216675/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade][cinder] Updates of rootwrap filters

2015-08-27 Thread Dulko, Michal
> -Original Message-
> From: Eric Harney [mailto:ehar...@redhat.com]
> Sent: Wednesday, August 26, 2015 5:15 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [grenade][cinder] Updates of rootwrap filters
> 
> On 08/26/2015 09:57 AM, Dulko, Michal wrote:
> > Hi,
> >
> > Recently when working on a simple bug [1] I've run into a need to change
> rootwrap filters rules for a few commands. After sending fix to Gerrit [2] it
> turns out that when testing the upgraded cloud grenade haven't copied my
> updated volume.filters file, and therefore failed the check. I wonder how
> should I approach the issue:
> > 1. Make grenade script for Cinder to copy the new file to upgraded cloud.
> > 2. Divide the patch into two parts - at first add new rules, leaving the old
> ones there, then fix the bug and remove old rules.
> > 3. ?
> >
> > Any opinions?
> >
> > [1] https://bugs.launchpad.net/cinder/+bug/1488433
> > [2] https://review.openstack.org/#/c/216675/
> 
> 
> I believe you have to go with option 1 and add code to grenade to handle
> installing the new rootwrap filters.
> 
> grenade is detecting an upgrade incompatibility that requires a config
> change, which is a good thing.  Splitting it into two patches will still 
> result in
> grenade failing, because it will test upgrading kilo to master, not patch A to
> patch B.
> 
> Example for neutron:
> https://review.openstack.org/#/c/143299/
> 
> A different example for nova (abandoned for unrelated reasons):
> https://review.openstack.org/#/c/151408/
> 
> 
> 
> /me goes to investigate whether he can set the system locale to something
> strange in the full-lio job, because he really thought we had fixed all of the
> locale-related LVM parsing bugs by now.

Thanks, I've addressed that in the following patch: 
https://review.openstack.org/#/c/217625/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-27 Thread Dulko, Michal
There was a little IRC discussion on that [1] and I've started working on a spec 
for Mitaka. I've gotten a bit busy lately, but finishing it is still in my backlog. 
I'll make sure to post it up for review once the Mitaka specs bucket opens.

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2015-08-11.log.html#t2015-08-11T14:48:49

> -Original Message-
> From: Ivan Kolodyazhny [mailto:e...@e0ne.info]
> Sent: Thursday, August 27, 2015 4:44 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Rogon, Kamil
> Subject: Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability
> zones
> 
> Hi,
> 
> Looks like we need to be able to set AZ per backend. What do you think
> about such option?
> 
> 
> Regards,
> Ivan Kolodyazhny
> 
On Mon, Aug 10, 2015 at 7:07 PM, John Griffith <john.griffi...@gmail.com> wrote:
> 
> 
> 
> 
>   On Mon, Aug 10, 2015 at 9:24 AM, Dulko, Michal <michal.du...@intel.com> wrote:
> 
> 
>   Hi,
> 
>   In Kilo cycle [1] was merged. It started passing AZ of a booted
> VM to Cinder to make volumes appear in the same AZ as VM. This is certainly
> a good approach, but I wonder how to deal with an use case when
> administrator cares about AZ of a compute node of the VM, but wants to
> ignore AZ of volume. Such case would be when fault tolerance of storage is
> maintained on another level - for example using Ceph replication and failure
> domains.
> 
>   Normally I would simply disable AvailabilityZoneFilter in
> cinder.conf, but it turns out cinder-api validates if availability zone is 
> correct
> [2]. This means that if Cinder has no AZs configured all requests from Nova
> will fail on an API level.
> 
>   Configuring fake AZs in Cinder is also problematic, because AZ
> cannot be configured on a per-backend manner. I can only configure it per c-
> vol node, so I would need N extra nodes running c-vol,  where N is number
> of AZs to achieve that.
> 
>   Is there any solution to satisfy such use case?
> 
>   [1] https://review.openstack.org/#/c/157041
>   [2]
> https://github.com/openstack/cinder/blob/master/cinder/volume/flows/api/create_volume.py#L279-L282
> 
> 
> 
> 
> 
>   Seems like we could introduce the capability in Cinder to ignore that if
> it's desired?  It would probably be worth looking on the Cinder side at being
> able to configure multiple AZ's for a volume (perhaps even an aggregate
> Zone just for Cinder).  That way we still honor the setting but provide a way
> to get around it for those that know what they're doing.
> 
> 
>   John
> 
> 
> 
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-28 Thread Dulko, Michal
> From: Ben Swartzlander [mailto:b...@swartzlander.org]
> Sent: Thursday, August 27, 2015 8:11 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> On 08/27/2015 10:43 AM, Ivan Kolodyazhny wrote:
> 
> 
>   Hi,
> 
>   Looks like we need to be able to set AZ per backend. What do you
> think about such option?
> 
> 
> 
> I dislike such an option.
> 
> The whole premise behind an AZ is that it's a failure domain. The node
> running the cinder services is in exactly one such failure domain. If you 
> have 2
> backends in 2 different AZs, then the cinder services managing those
> backends should be running on nodes that are also in those AZs. If you do it
> any other way then you create a situation where a failure in one AZ causes
> loss of services in a different AZ, which is exactly what the AZ feature is 
> trying
> to avoid.
> 
> If you do the correct thing and run cinder services on nodes in the AZs that
> they're managing then you will never have a problem with the one-AZ-per-
> cinder.conf design we have today.
> 
> -Ben

I disagree. You may have failure domains handled at a different level, for example 
using Ceph's own mechanisms for that. In such a case you want to provide the user 
with a single backend regardless of compute AZ partitioning. To address such needs 
you would need to be able to set multiple AZs per backend.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [Nova] [Cinder] Need to add selection of availability zone for new volume

2015-08-28 Thread Dulko, Michal
Hi,

If I recall correctly your Horizon-based solution won't be possible because of 
how Nova's code works internally - it just passes Nova's AZ to the Cinder API, 
without allowing it to be overridden.

We're discussing this particular issue in another ML thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-August/071732.html. I'm 
planning to create a BP and spec to sort out all Cinder AZ issues in Mitaka.

Apart from that, there's a bug report [1] and a patch [2] aiming to temporarily 
fix it for the Liberty cycle.

[1] https://bugs.launchpad.net/cinder/+bug/1489575
[2] https://review.openstack.org/#/c/217857/

> -Original Message-
> From: Timur Nurlygayanov [mailto:tnurlygaya...@mirantis.com]
> Sent: Monday, August 17, 2015 2:19 PM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [Horizon] [Nova] [Cinder] Need to add selection of
> availability zone for new volume
> 
> Hi OpenStack dev team,
> 
> 
> we found issue [1] in Horizon (probably, in Nova API too), which blocks the
> ability to boot VMs with option Instance Boot Source = "Boot from image
> (creates new volume)" in case when we have several Availability Zones in
> Nova and Cinder - it will fail with error "Failure prepping block device".
> 
> 
> Looks like it is issue in the initial design of "Boot from image (creates new
> volume)" feature, because when we creates new volume we need to
> choose the Availability zone for this volume or use some default value (with
> depends on AZs configuration). In the same time Nova AZs and Cinder AZs
> are different Availability Zones and we need to manage them separately.
> 
> 
> For now, when we are using "Boot from image (creates new volume)"
> feature, Nova tries to create volume is selected Nova Availability Zone, which
> can be not presented in Cinder. In the result we will see error "Failure
> prepping block device".
> 
> I think Horizon UI should provide something like drop down list with the list
> of Cinder availability zones when user wants to boot VM with option "Boot
> from image (creates new volume)" - we can prepare the fix for the existing
> Horizon UI (to support many AZs for Nova & Cinder use case in Kilo and
> Liberty releases).
> 
> 
> Also, I know that Horizon team works on the new UI for Instance creation
> workflow, so, we need to make sure that it will be supported with new UI
> [2].
> 
> 
> Thank you!
> 
> 
> [1] https://bugs.launchpad.net/horizon/+bug/1485578
> [2] https://openstack.invisionapp.com/d/#/projects/2472307
> 
> --
> 
> 
> 
> Timur,
> Senior QA Engineer
> OpenStack Projects
> Mirantis Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Cinder and Nova availability zones

2015-08-28 Thread Dulko, Michal
> From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
> Sent: Friday, August 28, 2015 2:31 PM
> 
> Except your failure domain includes the cinder volume service, independent
> of the resiliency of you backend, so if they're all on one node then you don't
> really have availability zones.
> 
> I have historically strongly espoused the same view as Ben, though there are
> lots of people who want fake availability zones... No strong use cases though

In case you have a Ceph backend (actually I think this applies to any non-LVM 
backend), you normally run c-vol on your controller nodes in an A/P manner. c-vol 
becomes more of a control plane service, and we don't provide AZs for the control 
plane. Nova doesn't do it either; AZs are only for compute nodes.

Given that Nova now assumes that Cinder has the same set of AZs, we should be able 
to create fake ones (or have a fallback option like in the patch provided by Ned).
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] PTL Non-Candidacy

2015-09-14 Thread Dulko, Michal
> -Original Message-
> From: Mike Perez [mailto:thin...@gmail.com]
> Sent: Monday, September 14, 2015 6:16 PM
> 
> Hello all,
> 
> I will not be running for Cinder PTL this next cycle. Each cycle I ran was 
> for a
> reason [1][2], and the Cinder team should feel proud of our
> accomplishments:
> 
> * Spearheading the Oslo work to allow *all* OpenStack projects to have their
> database being independent of services during upgrades.
> * Providing quality to OpenStack operators and distributors with over
> 60 accepted block storage vendor drivers with reviews and enforced CI [3].
> * Helping other projects with third party CI for their needs.
> * Being a welcoming group to new contributors. As a result we grew greatly
> [4]!

As someone who started contributing to Cinder just after Mike took the PTL role I 
can say that this is true and it felt really great to participate in the project 
right from the beginning. Thanks!

> * Providing documentation for our work! We did it for Kilo [5], and I was very
> proud to see the team has already started doing this on their own to prepare
> for Liberty.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is down

2015-09-15 Thread Dulko, Michal
> From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
> Sent: Tuesday, September 15, 2015 4:04 PM
> 
> Hi,
> 
> This all started when we were testing Evacuate with our storage driver.
> We thought we found a bug
> (https://bugs.launchpad.net/cinder/+bug/1491276) then Scott replied that
> we should be running cinder-volume service separate from nova-compute.
> For some internal reasons we can't do that - yet, but we have some
> questions regarding the behavior of the service:
> 
> - on our original test setup we have 3 nodes (1 controller + compute + cinder,
> 2 compute + cinder).
> -- when we shutdown the second node and tried to evacuate, the call was
> routed to cinder-volume of the halted node instead of going to other nodes
> (there were still 2 cinder-volume services up) - WHY?

Cinder assumes that each c-vol can control only volumes which were scheduled 
onto it. As volume services are differentiated by hostname, a known workaround is 
to set the same value for the host option in cinder.conf on each of the c-vols. 
This will make the c-vols listen on the same queue. You may, however, encounter 
some race conditions when running such a configuration in an Active/Active manner. 
The generally recommended approach is to use Pacemaker and run such c-vols in 
Active/Passive mode. Also expect that the scheduler's decisions will generally be 
ignored, as all the nodes are listening on the same queue.
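
A rough sketch of that workaround (the value itself is just an example - the point 
is that it's identical on every c-vol node):

    [DEFAULT]
    # same value on every node running cinder-volume, so that all of them
    # consume from a single RPC queue
    host = cluster-c-vol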

> - on the new planned setup we will have 6 nodes (3 dedicated controller +
> cinder-volume, 3 compute)
> -- in this case which cinder-volume will manage which volume on which
> compute node?

Same situation - a volume will be controlled by the c-vol which created it.

> -- what if: one compute node and one controller go down - will the Evacuate
> still work if one of the cinder-volume services is down? How can we tell - for
> sure - that this setup will work in case ANY 1 controller and 1 compute nodes
> go down?

The best idea, I think, is to use c-vol + Pacemaker in an A/P manner. Pacemaker 
will make sure that on failure a new c-vol is spun up. Where are the volumes 
physically stored in the case of your driver? Is it like the LVM driver (the 
volume lives on the node which is running c-vol) or like Ceph (Ceph takes care of 
where the volume lands physically; c-vol is just a proxy)?

> 
> Hypothetical:
> - if 3 dedicated controller + cinder-volume nodes work can perform evacuate
> when one of them is down (at the same time with one compute), WHY can't
> the same 3 nodes perform evacuate when compute services is running on
> the same nodes (so 1 cinder is down and 1 compute)

I think I've explained that.

> - if the answer to above question is "They can't " then what is the purpose of
> running 3 cinder-volume services if they can't handle one failure?

Running 3 c-vols is beneficial if you have multiple backends or use the LVM driver.

> - and if the answer to above question is "You only run one cinder-volume"
> then how can it handle failure of controller node?

I've explained that too. There are efforts in the community to make it possible 
to run c-vol in A/A, but I don't think there's an ETA yet.

> 
> Thanks,
> 
> Eduard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is down

2015-09-15 Thread Dulko, Michal
> From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
> Sent: Tuesday, September 15, 2015 4:54 PM
> 
> Hi,
> 
> Let me see if i got this:
> - running 3 (multiple) c-vols won't automatically give you failover
> - each c-vol is "master" of a certain number of volumes
> -- if the c-vol is "down" then those volumes cannot be managed by another
> c-vol
> 
> What i'm trying to achieve is making sure ANY volume is managed
> (manageable) by WHICHEVER c-vol is running (and gets the call first) - sort of
> A/A - so this means i need to look into Pacemaker and virtual-ips, or i should
> try first the "same name".
> 

I think you should try a Pacemaker A/P configuration with the same hostname in 
cinder.conf. That's the only safe option here.

I don't quite understand John's idea of how a virtual IP can help with c-vol, as 
this service only listens on an AMQP queue. I think a VIP is useful only for 
running the c-api service.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pycharm License for OpenStack developers

2015-09-16 Thread Dulko, Michal
> From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
> Sent: Wednesday, September 16, 2015 11:20 AM
> Hi Devs,
> 
> 
> 
> I am using Pycharm for development and current license is about to expire.
> 
> Please let me know if anyone has a new license key for the same.
> 
> 
> 
> Thank you in advance.
> 
> 
> 
> Abhishek


I applied for the license for OpenStack a moment ago. I'll send an update to 
the ML once I get a response from JetBrains.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Servicegroup refactoring for the Control Plane - Mitaka

2015-09-25 Thread Dulko, Michal
On Wed, 2015-09-23 at 11:11 -0700, Vilobh Meshram wrote:

> Accepted in Liberty [1] [2] :
> [1] Services information be stored in respective backend configured
> by CONF.servicegroup_driver and all the interfaces which plan to
> access service information go through servicegroup layer.
> [2] Add tooz specific drivers e.g. replace existing nova servicegroup
> zookeeper driver with a new zookeeper driver backed by Tooz zookeeper
> driver.
> 
> 
> Proposal for Mitaka [3][4] :
> [3] Services information be stored in nova.services (nova database)
> and liveliness information be managed by CONF.servicegroup_driver
> (DB/Zookeeper/Memcache)
> [4] Stick to what is accepted for #2. Just that the scope will be
> decided based on whether we go with #1 (as accepted for Liberty) or #3
> (what is proposed for Mitaka)
> 
I like the Mitaka (#3) proposal more. We still have the whole data set in the
persistent database and the SG driver only reports whether a host is alive. This
would make transitions between SG drivers easier for administrators, and after
all this is why you want to use ZooKeeper - to learn about a failure early and
not schedule new VMs to such a non-responding host.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Convergence: Detecting and handling worker failures

2015-09-30 Thread Dulko, Michal
On Wed, 2015-09-30 at 02:29 -0700, Clint Byrum wrote:
> Excerpts from Anant Patil's message of 2015-09-30 00:10:52 -0700:
> > Hi,
> > 
> > One of remaining items in convergence is detecting and handling engine
> > (the engine worker) failures, and here are my thoughts.
> > 
> > Background: Since the work is distributed among heat engines, by some
> > means heat needs to detect the failure and pick up the tasks from failed
> > engine and re-distribute or run the task again.
> > 
> > One of the simple way is to poll the DB to detect the liveliness by
> > checking the table populated by heat-manage. Each engine records its
> > presence periodically by updating current timestamp. All the engines
> > will have a periodic task for checking the DB for liveliness of other
> > engines. Each engine will check for timestamp updated by other engines
> > and if it finds one which is older than the periodicity of timestamp
> > updates, then it detects a failure. When this happens, the remaining
> > engines, as and when they detect the failures, will try to acquire the
> > lock for in-progress resources that were handled by the engine which
> > died. They will then run the tasks to completion.
> > 
> > Another option is to use a coordination library like the community owned
> > tooz (http://docs.openstack.org/developer/tooz/) which supports
> > distributed locking and leader election. We use it to elect a leader
> > among heat engines and that will be responsible for running periodic
> > tasks for checking state of each engine and distributing the tasks to
> > other engines when one fails. The advantage, IMHO, will be simplified
> > heat code. Also, we can move the timeout task to the leader which will
> > run time out for all the stacks and sends signal for aborting operation
> > when timeout happens. The downside: an external resource like
> > Zookeper/memcached etc are needed for leader election.
> > 
> 
> It's becoming increasingly clear that OpenStack services in general need
> to look at distributed locking primitives. There's a whole spec for that
> right now:
> 
> https://review.openstack.org/#/c/209661/
> 
> I suggest joining that conversation, and embracing a DLM as the way to
> do this.
> 
> Also, the leader election should be per-stack, and the leader selection
> should be heavily weighted based on a consistent hash algorithm so that
> you get even distribution of stacks to workers. You can look at how
> Ironic breaks up all of the nodes that way. They're using a similar lock
> to the one Heat uses now, so the two projects can collaborate nicely on
> a real solution.

It is worth mentioning that there's also an idea of using both Tooz and a
hash ring approach [1].

There was an enormously big discussion on this list when Cinder faced a
similar problem [2]. It finally became a discussion on whether we need a
common solution for DLM in OpenStack [3]. In the end, Cinder is currently
trying to achieve A/A capabilities by using compare-and-swap (CAS) DB
operations. The detection of failed services is still being discussed, but the
most mature solution to this problem was described in [4]. It is based on
database checks.
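
For illustration, a compare-and-swap style DB operation is just a conditional
UPDATE that only succeeds if the row is still in the expected state, so two
services racing for the same resource cannot both win. A minimal sketch (the
model and state names are made up; this is not Cinder's actual code):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Volume(Base):  # minimal stand-in for the real model
        __tablename__ = 'volumes'
        id = sa.Column(sa.String(36), primary_key=True)
        status = sa.Column(sa.String(255))

    def claim_volume_for_deletion(session, volume_id):
        # Conditional UPDATE: only claims the volume if it is still
        # 'available'; returns the number of rows actually changed.
        rows = session.query(Volume).\
            filter_by(id=volume_id, status='available').\
            update({'status': 'deleting'}, synchronize_session=False)
        if not rows:
            raise RuntimeError('volume %s changed state under us' % volume_id)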

Given that many projects are facing similar problems (well, it's not a
surprise that a distributed system faces the general problems of
distributed systems…), we should certainly discuss how to approach that
class of issues. That's why a cross-project Design Summit session on the
topic was proposed [5] (this one is by harlowja, but I know that Mike
Perez also wanted to propose such a session).

[1] https://review.openstack.org/#/c/195366/
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-July/070683.html
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-August/071262.html
[4] http://gorka.eguileor.com/simpler-road-to-cinder-active-active/
[5] http://odsreg.openstack.org/cfp/details/8
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Google Hangout recording of volume manger locks

2015-10-07 Thread Dulko, Michal
On Wed, 2015-10-07 at 11:01 -0700, Walter A. Boring IV wrote:
> Hello folks,
>I just wanted to post up the YouTube link for the video hangout that 
> the Cinder team just had.
> 
> We had a good discussion about the local file locks in the volume 
> manager and how it affects the interaction
> of Nova with Cinder in certain cases.  We are trying to iron out how to 
> proceed ahead with removing the
> volume manager locks in a way that doesn't break the world.  The hope of 
> this is to eventually allow Cinder
> to run active/active HA c-vol services.
> 
> The Youtube.com link for the recording is here on my personal account:
> https://www.youtube.com/watch?v=D_iXpNcWDv8
> 
> 
> We discussed several things in the meeting:
> * The etherpad that was used as a basis for discussion:
> https://etherpad.openstack.org/p/cinder-active-active-vol-service-issues
> * What to do with the current volume manager locks and how do we remove 
> them?
> * How do we move forward with checking 'ING' states for volume actions?
> * What is the process for moving forward with the compare/swap patches 
> that Gorka has in gerrit.
> 
> 
> Action Items:
> *  We agreed to take a deeper look into the main compare/swap changes 
> that Gorka has in gerrit and see if we can get those to land.
>* https://review.openstack.org/#/c/205834/
>* https://review.openstack.org/#/c/218012/
> * Gorka is to update the patches and add the references to the 
> specs/blueprints for reference.
> * Gorka is going to post up follow up patch sets to test the removal of 
> each lock and see if it is sufficient to remove each individual lock.
> 
> 
> Follow up items:
> * Does it make sense for the community to create an OpenStack Cinder 
> youtube account, where the PTL owns the account, and we run
> each of our google hangouts through that.  The advantage of this is to 
> allow the community to participate openly, as well as record each of
> our Cinder hangouts for folks that can't attend the live event.  We 
> could use this account for the meetups as well as the conference sessions,
> and have them all recorded and saved in one spot.

Unfortunately I wasn't able to attend, but after watching the video I feel like
I'm on the same page, so to me it seems like a brilliant idea! I think
recordings are very beneficial in a cross-timezone community.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-13 Thread Dulko, Michal
On Mon, 2015-10-12 at 10:58 -0700, Joshua Harlow wrote:
> Just a related thought/question. It really seems we (as a community) 
> need some kind of scale testing ground. Internally at yahoo we were/are 
> going to use a 200 hypervisor cluster for some of this and then expand 
> that into 200 * X by using nested virtualization and/or fake drivers and 
> such. But this is a 'lab' that not everyone can have, and therefore 
> isn't suited toward community work IMHO. Has there been any thought on 
> such a 'lab' that is directly in the community, perhaps trystack.org can 
> be this? (users get free VMs, but then we can tell them this area is a 
> lab, so don't expect things to always work, free isn't free after all...)
> 
> With such a lab, there could be these kinds of experiments, graphs, 
> tweaks and such...

https://www.mirantis.com/blog/intel-rackspace-want-cloud/

"The plan is to build out an OpenStack developer cloud that consists of
two 1,000 node clusters available for use by anyone in the OpenStack
community for scaling, performance, and code testing. Rackspace plans to
have the cloud available within the next six months."

The stuff you've described has actually been worked on for a few months now. :)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-13 Thread Dulko, Michal
On Mon, 2015-10-12 at 10:13 -0700, Clint Byrum wrote:
> Zookeeper sits in a very different space from Cassandra. I have had good
> success with it on OpenJDK as well.
> 
> That said, we need to maybe go through some feature/risk matrices and
> compare to etcd and Consul (this might be good to do as part of filling
> out the DLM spec). The jvm issues goes away with both of those, but then
> we get to deal Go issues.
> 
> Also, ZK has one other advantage over those: It is already in Debian and
> Ubuntu, making access for developers much easier.

What about RHEL/CentOS? Maybe I'm mistaken, but I think these two don't have
it packaged.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-14 Thread Dulko, Michal
On Tue, 2015-10-13 at 08:47 -0700, Joshua Harlow wrote:
> Well great!
> 
> When is that going to be accessible :-P
> 
> Dulko, Michal wrote:
> > On Mon, 2015-10-12 at 10:58 -0700, Joshua Harlow wrote:
> >> Just a related thought/question. It really seems we (as a community)
> >> need some kind of scale testing ground. Internally at yahoo we were/are
> >> going to use a 200 hypervisor cluster for some of this and then expand
> >> that into 200 * X by using nested virtualization and/or fake drivers and
> >> such. But this is a 'lab' that not everyone can have, and therefore
> >> isn't suited toward community work IMHO. Has there been any thought on
> >> such a 'lab' that is directly in the community, perhaps trystack.org can
> >> be this? (users get free VMs, but then we can tell them this area is a
> >> lab, so don't expect things to always work, free isn't free after all...)
> >>
> >> With such a lab, there could be these kinds of experiments, graphs,
> >> tweaks and such...
> >
> > https://www.mirantis.com/blog/intel-rackspace-want-cloud/
> >
> > "The plan is to build out an OpenStack developer cloud that consists of
> > two 1,000 node clusters available for use by anyone in the OpenStack
> > community for scaling, performance, and code testing. Rackspace plans to
> > have the cloud available within the next six months."
> >
> > Stuff you've described is actually being worked on for a few months. :)

Judging from the 6-month ETA and the fact that the work started in August, it
seems that the answer is the beginning of 2016.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] New extension API for detecting cinder-backup ?

2015-10-19 Thread Dulko, Michal
On Fri, 2015-10-16 at 17:36 +, Ramakrishna, Deepti wrote:
> Thanks Duncan. 
>  
> Should I publish a BP and spec for this? And follow it up with code
> changes to the server, client, horizon and documentation? 
>  
> Thanks,
> Deepti 
> 

I believe a BP and spec are required, as this is a new API call being added.
Also, having a spec makes it easier to discuss the whole idea with the rest of
the team.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Rolling upgrades - missing pieces

2015-10-19 Thread Dulko, Michal
Hi all,

One of our priority goals for Liberty was the adoption of
oslo.versionedobjects in order for Cinder to achieve the ability to do
rolling upgrades. We weren't successful with that in L, and the work got
postponed to Mitaka. I want to highlight the remaining work on that topic as
well as other pieces that are still missing in order for Cinder to
support no-downtime upgrades.

Basically, in order to be able to perform such an upgrade we need to make sure
that our services are compatible between versions. There is a set of
problems that needs to be solved:
* Non-compatible DB migrations (e.g. dropping or altering DB columns).
* Non-compatible RPC API changes (e.g. rename of an argument of a RPC
method).
* Non-compatible changes inside objects/dicts sent over RPC (e.g.
removal of a key there).

Good explanation of how Nova solves these can be found in a series of
posts by Dan Smith - [1][2][3]. I'll walk through all of these.

DB migrations
-
Since Juno no non-compatible DB migration has been merged. We may stick to
this approach and allow only additive migrations to be performed (we could
probably allow dropping columns in a later release by requiring that only two
subsequent releases are compatible). This is easy to enforce with a unit
test [4]. Another solution would be to implement online schema migrations.
This was implemented in Nova [5], but is considered unstable and experimental.
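
The idea behind such a test, in a minimal sketch (this mirrors the approach of
the Nova test in [4], but the class and error below are illustrative, not an
existing Cinder fixture): monkey-patch the destructive schema operations so any
migration that tries to use them fails loudly.

    import fixtures

    class BannedDBSchemaOperations(fixtures.Fixture):
        """Fail any attempt to drop a column or table.

        Intended to be active (self.useFixture(...)) while the migration
        test walks through all schema migrations.
        """
        def setUp(self):
            super(BannedDBSchemaOperations, self).setUp()

            def _banned(*args, **kwargs):
                raise Exception('Destructive schema operations are banned')

            self.useFixture(
                fixtures.MonkeyPatch('sqlalchemy.Column.drop', _banned))
            self.useFixture(
                fixtures.MonkeyPatch('sqlalchemy.Table.drop', _banned))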

RPC API compatibility
-
We're already versioning our RPC layer, but we aren't actually
benefiting from it - we don't support RPC API pinning and don't pay
attention to merging only changes that are backward compatible. This
requires a cultural change in reviewing, and I think we should discuss the
approach at the Design Summit sprint.
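
To make "pinning" concrete, here's a minimal sketch using oslo.messaging (the
method name and version numbers are made up; this is not Cinder's actual RPC
API): the client is capped at the version the oldest running service still
understands, and new arguments are only sent when the cap allows it.

    import oslo_messaging as messaging
    from oslo_config import cfg

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='cinder-volume', version='1.0')
    # version_cap would normally come from a config option / service records.
    client = messaging.RPCClient(transport, target, version_cap='1.23')

    def extend_volume(ctxt, volume_id, new_size):
        if client.can_send_version('1.24'):
            # Every service understands 1.24, safe to use the new argument.
            cctxt = client.prepare(version='1.24')
            cctxt.cast(ctxt, 'extend_volume', volume_id=volume_id,
                       new_size=new_size)
        else:
            # Fall back to the older signature during a rolling upgrade.
            cctxt = client.prepare(version='1.23')
            cctxt.cast(ctxt, 'extend_volume', volume_id=volume_id)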

Versioned Objects
-
Right now there are a few outstanding DB-based objects:
* CGSnapshot (in review).
* Volume (partly in review).

You can find the patches in [6].

Apart from that, I think we need to convert the dictionaries sent over RPC to
versioned objects. This would include:
* request_spec (scheduler.rpcapi)
* filter_properties (scheduler.rpcapi)
* capabilities (scheduler.rpcapi) - I'm not sure on this one…

Changing this is required for us to be able to remove or rename fields
in these dictionaries and still maintain interoperability between
services running different versions.
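
As a rough sketch of what such a conversion could look like (the object name,
fields and versions below are illustrative, not an actual Cinder object), an
oslo.versionedobjects class carries its own version and can downgrade itself
for older receivers:

    from oslo_versionedobjects import base as ovo_base
    from oslo_versionedobjects import fields

    @ovo_base.VersionedObjectRegistry.register
    class RequestSpec(ovo_base.VersionedObject):
        # 1.0: initial version
        # 1.1: added 'consistencygroup_id'
        VERSION = '1.1'

        fields = {
            'volume_id': fields.UUIDField(nullable=True),
            'volume_properties': fields.DictOfStringsField(nullable=True),
            'consistencygroup_id': fields.UUIDField(nullable=True),
        }

        def obj_make_compatible(self, primitive, target_version):
            # Strip fields that a service pinned to 1.0 would not understand
            # (a real object would compare versions properly, not by equality).
            super(RequestSpec, self).obj_make_compatible(
                primitive, target_version)
            if target_version == '1.0':
                primitive.pop('consistencygroup_id', None)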

I would love to get some feedback on these thoughts and possibly start a
pre-summit discussion on the whole topic.

[1] http://www.danplanet.com/blog/2015/10/05/upgrades-in-nova-rpc-apis/
[2] http://www.danplanet.com/blog/2015/10/06/upgrades-in-nova-objects/
[3] 
http://www.danplanet.com/blog/2015/10/07/upgrades-in-nova-database-migrations/
[4] 
https://github.com/openstack/nova/blob/master/nova/tests/unit/db/test_migrations.py#L186-L227
[5] 
http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/online-schema-changes.html
[6] 
https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/cinder-objects,n,z

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Rolling upgrades - missing pieces

2015-10-19 Thread Dulko, Michal
On Mon, 2015-10-19 at 11:19 -0500, Sean McGinnis wrote:
> On Mon, Oct 19, 2015 at 03:10:16PM +0000, Dulko, Michal wrote:
> > Hi all,
> > 
> > One of our priority goals for Liberty was the adoption of
> > oslo.versionedobjects in order for Cinder to achieve ability to do
> > rolling upgrades. We weren't successful with that in L, and work got
> > postponed to Mitaka. I want to highlight remaining work in that topic as
> > well as other pieces that are still missing in order for Cinder to
> > support no-downtime-upgrades.
> > 
> 
> > 
> > Changing this is required for us to be able to remove or rename fields
> > in these dictionaries and still be able to provide interoperability of
> > services working in different versions.
> > 
> > I would love to get some feedback on these thoughts and possibly start a
> > pre-summit discussion on the whole topic.
> 
> Thanks for bringing this up Michal. Will you be around for the weekly
> meeting this week? It would be great if we could get this on the agenda
> just to make sure everyone is aware of it. 
> 
> That may help to make sure more folks have had a chance to think about
> this, even briefly, before the design summit.
> 
> Thanks!
> Sean

I've added an item to the agenda. This is a big topic, but I'll try to
prepare some general questions to shape the discussion a little.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Taskflow]Review help to cinder bp:Implement function to manage/unmanage snapshots

2015-06-02 Thread Dulko, Michal
Right now we're working on refactoring the current TaskFlow implementations in 
Cinder to make them more readable and clean. Then we'll be able to decide if we 
want to get more TaskFlow into Cinder or step back from using it. The deadline for 
the refactoring work is around the 1st of July.

Here's the related patch for the scheduler's create_volume workflow: 
https://review.openstack.org/#/c/186439/

Currently I'm working on a patch for the API's create_volume and John Griffith 
agreed to work on the manager's one (I don't know the current status). If you want 
to help with these efforts - reviews are always welcome. You may also take a shot 
at refactoring the manage_existing flow in the manager. It seems simple enough, but 
maybe there are some improvements we can make to increase its readability.

From: hao wang [mailto:sxmatch1...@gmail.com]
Sent: Tuesday, June 2, 2015 11:30 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Cinder] [Taskflow]Review help to cinder bp:Implement 
function to manage/unmanage snapshots

Hi, folks,

There is a Cinder BP, "Implement function to manage/unmanage snapshots" 
(https://review.openstack.org/#/c/144590/), which uses TaskFlow to implement this 
feature.

So I need your help (Cinder & TaskFlow folks) to push this forward.

Thanks.



--

Best Wishes For You!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Looking for help getting git-review to work over https

2015-06-11 Thread Dulko, Michal
In our environment we're using a SOCKS proxy to bypass the firewall. Maybe it's an 
option for you? I just execute tsocks git-review instead of plain git-review and it 
seems to work.

I've just tried the solution you've mentioned and it doesn't help in my case.

> -Original Message-
> From: KARR, DAVID [mailto:dk0...@att.com]
> Sent: Thursday, June 11, 2015 5:00 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] Looking for help getting git-review to work over
> https
> 
> I could use some help with setting up git-review in a slightly unfriendly
> firewall situation.
> 
> I'm trying to set up git-review on my CentOS7 VM, and our firewall blocks the
> non-standard ssh port.  I'm following the instructions at
> http://docs.openstack.org/infra/manual/developers.html#accessing-gerrit-
> over-https , for configuring git-review to use https on port 443, but this 
> still
> isn't working (times out with "Could not connect to gerrit").  I've confirmed
> that I can reach other external sites on port 443.
> 
> Can someone give me a hand with this?
> 
> --
> David M. Karr | AT&T | Service Standards - Open Platform for Network
> Function Virtualization
> (425) 580-4547 work
> (206) 909-0664 cell
> (425) 892-5432 cell
> 
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [taskflow] Returning information from reverted flow

2015-06-12 Thread Dulko, Michal
Hi,

In Cinder we merged a complicated piece of code [1] to be able to return 
something from a flow that was reverted. Basically, outside the flow we needed to 
know whether the volume was rescheduled or not. Right now this is done by 
injecting the needed information into the exception thrown from the flow. Another 
idea was to use TaskFlow's notifications mechanism. Both ways are workarounds 
rather than real solutions.

I wonder if TaskFlow couldn't provide a mechanism to mark a stored element so it is 
not removed when a revert occurs. Or maybe another way of returning something from 
a reverted flow?

Any thoughts/ideas?

[1] https://review.openstack.org/#/c/154920/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [taskflow] Returning information from reverted flow

2015-06-15 Thread Dulko, Michal
> -Original Message-
> From: Joshua Harlow [mailto:harlo...@outlook.com]
> Sent: Friday, June 12, 2015 5:49 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [taskflow] Returning information from reverted
> flow
> 
> Dulko, Michal wrote:
> > Hi,
> >
> > In Cinder we had merged a complicated piece of code[1] to be able to
> > return something from flow that was reverted. Basically outside we
> > needed an information if volume was rescheduled or not. Right now this
> > is done by injecting information needed into exception thrown from the
> > flow. Another idea was to use notifications mechanism of TaskFlow.
> > Both ways are rather workarounds than real solutions.
> 
> Unsure about notifications being a workaround (basically u are notifying to
> some other entities that rescheduling happened, which seems like exactly
> what it was made for) but I get the point ;)

Please take a look at this review - https://review.openstack.org/#/c/185545/. 
Notifications cannot help if some further revert decision needs to be based on 
something that happened earlier.
> 
> >
> > I wonder if TaskFlow couldn't provide a mechanism to mark stored element
> > to not be removed when revert occurs. Or maybe another way of returning
> > something from reverted flow?
> >
> > Any thoughts/ideas?
> 
> I have a couple, I'll make some paste(s) and see what people think,
> 
> How would this look (as pseudo-code or other) to you, what would be your
> ideal, and maybe we can work from there (maybe u could do some paste(s)
> to and we can prototype it), just storing information that is returned
> from revert() somewhere? Or something else? There has been talk about
> task 'local storage' (or something like that/along those lines) that
> could also be used for this similar purpose.

I think that the easiest idea from the perspective of an end user would be to 
save items returned from revert into the flow engine's storage *and* not remove 
them from storage when the whole flow gets reverted. This is completely backward 
compatible, because currently revert doesn't return anything. And if revert has 
to record some information for further processing - this will also work.
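
As a rough illustration of the proposal (this is *not* existing TaskFlow
behaviour - it assumes the engine would persist whatever revert() returns -
and the task, requirement and backend names are made up):

# Sketch only, under the assumption above; names are illustrative.
from taskflow import task


class ScheduleVolumeTask(task.Task):
    default_provides = 'host'

    def execute(self, volume_id):
        # Pick a backend for the volume (details omitted).
        return 'hostname@lvmdriver-1'

    def revert(self, volume_id, result, flow_failures, **kwargs):
        # Under the proposed mechanism this return value would stay in the
        # engine's storage even after the whole flow is reverted, so the
        # caller could check whether the volume was rescheduled.
        return {'rescheduled': True, 'volume_id': volume_id}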

> 
> >
> > [1] https://review.openstack.org/#/c/154920/
> >
> >
> __
> 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [taskflow] Returning information from reverted flow

2015-06-17 Thread Dulko, Michal


> -Original Message-
> From: Joshua Harlow [mailto:harlo...@outlook.com]
> Sent: Tuesday, June 16, 2015 4:52 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [taskflow] Returning information from reverted
> flow
> 
> Dulko, Michal wrote:
> >> -Original Message-
> >> From: Joshua Harlow [mailto:harlo...@outlook.com]
> >> Sent: Friday, June 12, 2015 5:49 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [taskflow] Returning information from
> >> reverted flow
> >>
> >> Dulko, Michal wrote:
> >>> Hi,
> >>>
> >>> In Cinder we had merged a complicated piece of code[1] to be able to
> >>> return something from flow that was reverted. Basically outside we
> >>> needed an information if volume was rescheduled or not. Right now
> >>> this is done by injecting information needed into exception thrown
> >>> from the flow. Another idea was to use notifications mechanism of
> TaskFlow.
> >>> Both ways are rather workarounds than real solutions.
> >> Unsure about notifications being a workaround (basically u are
> >> notifying to some other entities that rescheduling happened, which
> >> seems like exactly what it was made for) but I get the point ;)
> >
> > Please take a look at this review -
> https://review.openstack.org/#/c/185545/. Notifications cannot help if some
> further revert decision needs to be based on something that happened
> earlier.
> 
> That sounds like conditional reverting, which seems like it should be handled
> differently anyway, or am I misunderstanding something?

The current version of the patch takes another approach, which I think handles it 
correctly. So you were probably right. :)

> 
> >>> I wonder if TaskFlow couldn't provide a mechanism to mark stored
> >>> element to not be removed when revert occurs. Or maybe another way
> >>> of returning something from reverted flow?
> >>>
> >>> Any thoughts/ideas?
> >> I have a couple, I'll make some paste(s) and see what people think,
> >>
> >> How would this look (as pseudo-code or other) to you, what would be
> >> your ideal, and maybe we can work from there (maybe u could do some
> >> paste(s) to and we can prototype it), just storing information that
> >> is returned from revert() somewhere? Or something else? There has
> >> been talk about task 'local storage' (or something like that/along
> >> those lines) that could also be used for this similar purpose.
> >
> > I think that the easiest idea from the perspective of an end user would be
> to save items returned from revert into flow engine's storage *and* do not
> remove it from storage when whole flow gets reverted. This is completely
> backward compatible, because currently revert doesn't return anything. And
> if revert has to record some information for further processing - this will 
> also
> work.
> >
> 
> Ok, let me see what this looks like and maybe I can have a POC in the next
> few days, I don't think its impossible to do (obviously) and hopefully will be
> useful for this.

Great!
> 
> >>> [1] https://review.openstack.org/#/c/154920/
> >>>
> >>>
> >>
> __
> >> 
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe: OpenStack-dev-
> >> requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> __
> >> 
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-
> >> requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> 
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to properly detect and fence a compromised host (and why I dislike TrustedFilter)

2015-06-24 Thread Dulko, Michal
> -Original Message-
> From: Sylvain Bauza [mailto:sba...@redhat.com]
> Sent: Wednesday, June 24, 2015 9:39 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] How to properly detect and fence a
> compromised host (and why I dislike TrustedFilter)
> 
> (general point, could we please try not top-posting ? It makes a little harder
> to follow the conversation)
> 
> Replies inline.
> 
> Le 24/06/2015 08:15, Wei, Gang a écrit :
> > Only if all the hosts managed by OpenStack are capable for measured boot
> process, then let 3rd-party tool call nova fencing API might be better than
> using TrustedFilter.
> >
> > But if not all the hosts support measured boot, then with TrustedFilter we
> can schedule VM to only measured and trusted host, but in 3rd-party tool
> case, only untrusted/compromised hosts will be fenced, the host with
> unknown trustworthiness will still be able to run VM but the owner is not
> willing to do it that way.
> You don't need a specific filter for fencing one host from being scheduled.
> Just calling the Nova os-services API to explicitly disable the service (and
> providing a reason) just makes the hosts belonging to the service not able to
> be elected (thanks to the ComputeFilter)
> 
> To be clear, I would love to see the logic inverted, ie. something which would
> call the OAT service for a specific host would then fire a service disable.
> 
> 
> > So I would suggest using the 3rd-party tools as enhancing way to
> supplement our TCP/trustedfilter feature. And the 3rd party tools can also
> call attestation API for host attestation.
> 
> I don't see much benefits of keeping such filter for the reasons I mentioned
> below. Again, if you want to fence one host, you can just disable its service,
> that's enough.

This won't address the case in which you have a heterogeneous environment and you 
want only some important VMs to run on trusted hosts (and for the rest of the 
VMs you don't care).

> 
> > Thanks
> > Jimmy
> >
> > -Original Message-
> > From: Bhandaru, Malini K [mailto:malini.k.bhand...@intel.com]
> > Sent: Wednesday, June 24, 2015 1:13 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [nova] How to properly detect and fence a
> > compromised host (and why I dislike TrustedFilter)
> >
> > Would like to add to Shane's points below.
> >
> > 1) The Trust filter can be treated as an API, with different underlying
> implementations. Its default could even be "Not Implemented" and always
> return false.
> >   And Nova.conf could specify use the OAT trust implementation. This
> would not break present day users of the functionality.
> 
> Don't get me wrong, I'm not against OAT, I'm just saying that the
> TrustedFilter design is wrong. Even if another alternative would come up to
> serve the TrustedComputePool model of things, it would still be bad for the
> reasons I mentioned below, and wouldn't cover the usecase I quoted.
> 
> 
> > 2) The issue in the original bug is a a VM waking up after a reboot on a 
> > host
> that has not pre-determined whether the host is still trustable.
> >   This is essentially begging a feature to check that all constraints
> requested by a VM during launch are confirmed to hold when it re-awakens,
> even if it is not
> >   going through Nova scheduler at this point.
> 
> So I think we are in agreement that for covering that usecase, it can't be
> done at the scheduler level.
> Using TrustedFilter just ensures that at the instance creation time, the host 
> is
> checked but confuses people because they think it will be enforced for the
> whole instance lifecyle.
> 
> 
> >   This holds even for aggregates that might be specified by geo, or even
> reservation such as "Coke" or "Pepsi".
> >   What if a host, even without a reboot and certainly before a reboot 
> > was
> assigned from Coke to Pepsi, there is cross contamination.
> >   Perhaps we need Nova hooks that can be registered with functions that
> check expected aggregate values.
> 
> I don't honestly see the point of an host aggregate. Given the failure domain
> is an host, you only need to trust that host or not. The fact that the host
> belongs to an aggregate or not is orthogonal to our problem IMHO.
> 
> >   Better still have  libvirt functionality that makes a call back for 
> > each VM
> on a host to ensure its constraints are satisfied on start-up/boot, and 
> re-start
> when it comes out of pause.
> 
> Hum, doesn't it sound weird to have the host being the source of truth ?
> Also, if an host gets compromised, why couldn't we assume that the
> instances can be compromised too and need to be resurrected (ie.
> evacuated) ?
> 
> 
> >   Using aggregate for trust with a cron job to check for trust is 
> > inefficient
> in this case, trust status gets updated only on a host reboot. Intel TXT is a
> boot
> >   time authentication.
> 
> Isn't 

Re: [openstack-dev] [nova] How to properly detect and fence a compromised host (and why I dislike TrustedFilter)

2015-06-25 Thread Dulko, Michal
> -Original Message-
> From: John Garbutt [mailto:j...@johngarbutt.com]
> Sent: Thursday, June 25, 2015 2:22 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] How to properly detect and fence a
> compromised host (and why I dislike TrustedFilter)
> 
> On 24 June 2015 at 09:35, Dulko, Michal  wrote:
> >> -Original Message-
> >> From: Sylvain Bauza [mailto:sba...@redhat.com]
> >> Sent: Wednesday, June 24, 2015 9:39 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [nova] How to properly detect and fence
> >> a compromised host (and why I dislike TrustedFilter)

(snip)

> >> > So I would suggest using the 3rd-party tools as enhancing way to
> >> supplement our TCP/trustedfilter feature. And the 3rd party tools can
> >> also call attestation API for host attestation.
> >>
> >> I don't see much benefits of keeping such filter for the reasons I
> >> mentioned below. Again, if you want to fence one host, you can just
> >> disable its service, that's enough.
> >
> > This won't address the case in which you have heterogenic environment
> and you want only some important VMs to run on trusted hosts (and for the
> rest of the VMs you don't care).
> 
> This is an interesting one to dig into.
> 
> I had assumed in this case you put all the VMs that want the attestation
> check in a subset of nodes that are setup to use that set.
> You can do that using host aggregates and our existing filters.
> 
> An external system could then just disable hosts within that subset of hosts
> that have the attestation check working.
> 
> Does that work for your use case?

It should be fine for this case. But then - why not go further and remove the SG 
API? Let's leave monitoring of services to Pacemaker and Nagios and let them 
disable a service if they consider it down.

My point is that following this logic we may use external services to replace 
any filter that has such simple logic. Is this the right direction?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot

2015-06-29 Thread Dulko, Michal
There are also some similar situations where we actually don’t lock on resources. 
For example – a cgsnapshot may get deleted while creating a consistencygroup 
from it.

From my perspective it seems best to have atomic state changes and state-based 
exclusion in the API. We would need some kind of 
currently_used_to_create_snapshot/volumes/consistencygroups states to achieve 
that. Then we would also be able to return VolumeIsBusy exceptions, so retrying 
a request would be on the user side.
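
Just to sketch what I mean by state-based exclusion - a compare-and-swap style
UPDATE. The session handling, model and state names below are only
illustrative assumptions, not actual Cinder code:

from cinder import exception
from cinder.db.sqlalchemy import api as db_api
from cinder.db.sqlalchemy import models


def mark_volume_busy(volume_id):
    session = db_api.get_session()
    with session.begin():
        # Atomically flip the state only if the volume is still 'available'.
        rows = (session.query(models.Volume).
                filter_by(id=volume_id, status='available').
                update({'status': 'creating-snapshot'}))
    if not rows:
        # Someone else grabbed the volume first (or it's in the wrong state),
        # so return the error and let the user retry.
        raise exception.VolumeIsBusy(volume_name=volume_id)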

From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
Sent: Sunday, June 28, 2015 12:16 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [cinder][oslo] Locks for create from 
volume/snapshot


We need mutual exclusion for several operations. Whether that is done by entity 
queues, locks, state based locking at the api later, or something else, we need 
mutual exclusion.

Our current api does not lend itself to looser consistency, and I struggle to 
come up with a sane api that does - nobody doing an operation on a volume  
wants it to happen maybe, at some time...
On 28 Jun 2015 07:30, "Avishay Traeger" 
mailto:avis...@stratoscale.com>> wrote:
Do we really need any of these locks?  I'm sure we could come up with some way 
to remove them, rather than make them distributed.

On Sun, Jun 28, 2015 at 5:07 AM, Joshua Harlow 
mailto:harlo...@outlook.com>> wrote:
John Griffith wrote:


On Sat, Jun 27, 2015 at 11:47 AM, Joshua Harlow 
mailto:harlo...@outlook.com>
>> wrote:

Duncan Thomas wrote:

We are working on some sort of distributed replacement for the
locks in
cinder, since file locks are limiting our ability to do HA. I'm
afraid
you're unlikely to get any traction until that work is done.

I also have a concern that some backend do not handle load well,
and so
benefit from the current serialisation. It might be necessary to
push
this lock down into the driver and allow each driver to choose it's
locking model for snapshots.


IMHO (and I know this isn't what everyone thinks) but I'd rather
have cinder (and other projects) be like this from top gear (
https://www.youtube.com/watch?v=xnWKz7Cthkk ) where that toyota
truck is virtually indestructible vs. trying to be a
high-maintenance ferrari (when most openstack projects do a bad job
of trying to be one). So, maybe for a time (and I may regret saying
this) we could consider focusing on reliability, consistency, being
the toyota vs. handling some arbitrary amount of load (trying to be
a ferrari).

Also I'd expect/think operators would rather prefer a toyota at this
stage of openstack :) Ok enough analogies, ha.


​Well said Josh, I guess I've been going about this all wrong by not
using the analogies :)​

Exactly!! IMHO should be the new 'openstack mantra, built from 
components/projects that survive like a toyota truck' haha. Part 2 
(https://www.youtube.com/watch?v=xTPnIpjodA8) and part 3 
(https://www.youtube.com/watch?v=kFnVZXQD5_k) are funny/interesting also :-P

Now we just need openstack to be that reliable and tolerant of 
failures/calamities/...


-Josh


On 27 Jun 2015 06:18, "niuzhenguo" 
mailto:niuzhen...@huawei.com>
>
 


Re: [openstack-dev] [cinder][oslo] Locks for create from volume/snapshot

2015-06-29 Thread Dulko, Michal
That’s right, it might be painful. A V3 API implementation would also be hard, 
because then we would need different manager behavior for requests from V2 and 
V3… So maybe we need some config flag with a deprecation procedure scheduled?

From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
Sent: Monday, June 29, 2015 2:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder][oslo] Locks for create from 
volume/snapshot

On 29 June 2015 at 15:23, Dulko, Michal 
mailto:michal.du...@intel.com>> wrote:
There’s also some similar situations when we actually don’t lock on resources. 
For  example – a cgsnapshot may get deleted while creating a consistencygroup 
from it.

From my perspective it seems best to have atomic state changes and state-based 
exclusion in API. We would need some kind of 
currently_used_to_create_snapshot/volums/consistencygroups states to achieve 
that. Then we would be also able to return VolumeIsBusy exceptions so retrying 
a request would be on the user side.


I'd agree, except that gives quite a big behaviour change in the tenant-facing 
API, which will break clients and scripts. Not sure how to square that 
circle... I'd say V3 API except Mike might kill me...
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Why doesn't Gerrit email me?

2015-06-30 Thread Dulko, Michal
I'm also experiencing some difficulties with Gerrit email notifications. Around 
the time Kilo was released it became unreliable. Some notifications arrive 
after a few days, some of them instantly. In particular I'm often receiving 
comments on a patch in the wrong order.

> -Original Message-
> From: Louis Taylor [mailto:lo...@kragniz.eu]
> Sent: Tuesday, June 30, 2015 3:45 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Why doesn't Gerrit email me?
> 
> On Tue, Jun 30, 2015 at 02:08:45PM +0100, Neil Jerram wrote:
> > Apologies if this is an FAQ - I tried a quick search, but that didn't
> > find anything that looked both up to date and authoritative.
> >
> > I keep going back to Gerrit jobs that I've reviewed or commented on,
> > and finding that there have been other comments since mine, but that
> > Gerrit didn't email me about.
> >
> > Does anyone know why that happens?  It's really important to my
> > workflow, and to continuing review conversations effectively, that
> > Gerrit emails new comments reliably and in good time.  Could I be
> > doing something wrong that is causing this not to happen?
> 
> This is probably a silly question, but have you enabled email notifications 
> for
> all comments in your watched projects? You can create a watch item for a
> particular project with 'owner:me' in the 'Only If' field to prevent a deluge 
> of
> comment notifications.
> 
> Cheers,
> Louis

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] no more expand/contract for live upgrade?

2016-03-19 Thread Dulko, Michal
On Fri, 2016-03-18 at 08:27 +0000, Tan, Lin wrote:
> Hi,
> 
> I noticed that expand/migrate/contract was revert in 
> https://review.openstack.org/#/c/239922/
> There is a new CMD 'online_data_migrations' was introduced to Nova and some 
> data-migration scripts have been added.
> So I wonder will Nova keep expand the DB schema at beginning of live upgrade 
> like before Or Nova have some new ways to handle DB Schema change?
> The upgrade doc was not update for a long time 
> http://docs.openstack.org/developer/nova/upgrade.html
> 
> Thanks a lot.
> 
> Best Regards,
> 
> Tan

[1] will help you understand the current way of doing live schema upgrades.

[1] 
http://www.danplanet.com/blog/2015/10/07/upgrades-in-nova-database-migrations/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Block subtractive schema changes

2016-03-24 Thread Dulko, Michal
On Thu, 2016-03-24 at 10:45 +0000, stuart.mcla...@hp.com wrote:
> I think this makes sense (helps us spot things which could impact upgrade).
> 
> >Hi Glance Team,
> >
> >I have registered a blueprint [1] for blocking subtractive schema changes.
> >Cinder and Nova are already supporting blocking of subtractive schema 
> >operations. Would like to add similar support here.
> >
> >Please let me know your opinion on the same.
> >
> >[1] 
> >https://blueprints.launchpad.net/glance/+spec/block-subtractive-operations
> >
> >
> >Thank you,
> >
> >Abhishek Kekane

You'll probably need some way to actually perform such migrations when
needed. In Cinder we've introduced guidelines [1], which allow us to
ALTER or DROP a column with a process stretching over 2-3 releases.

Nova does a little better by not allowing nova-compute to access the DB
(nova-conductor acts as a proxy).

Also note that a unit test won't catch all of the cases. It won't, for
example, detect DB-specific migrations written in plain SQL, as in [2].
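
For reference, a very rough sketch of what such a check could look like
(the actual Cinder/Nova tests are smarter; the migrations path and the list
of forbidden tokens here are just assumptions for illustration):

import os
import unittest

MIGRATIONS_PATH = 'glance/db/sqlalchemy/migrate_repo/versions'  # assumption
FORBIDDEN = ('drop_column', 'drop_table', 'alter_column')


class TestBlockSubtractiveOperations(unittest.TestCase):
    def test_no_subtractive_operations(self):
        # Scan every migration script for obviously subtractive calls.
        for root, _dirs, files in os.walk(MIGRATIONS_PATH):
            for name in files:
                if not name.endswith('.py'):
                    continue
                with open(os.path.join(root, name)) as f:
                    source = f.read()
                for token in FORBIDDEN:
                    self.assertNotIn(
                        token, source,
                        '%s looks subtractive (%s), see the online schema '
                        'upgrade guidelines' % (name, token))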

[1] 
http://specs.openstack.org/openstack/cinder-specs/specs/mitaka/online-schema-upgrades.html
[2] https://review.openstack.org/#/c/190300
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Resuming of workflows/tasks

2015-02-24 Thread Dulko, Michal
Hi all,

I was working on a spec [1] and a prototype [2] to make Cinder able to resume 
workflows in case of server or service failure. The problem of lost requests and 
resources left in unresolved states in case of failure was signaled at the 
Paris Summit [3].

What I was able to prototype was resuming running tasks locally after a service 
restart using the persistence API provided by TaskFlow. However, the core team 
agreed that we should aim at resuming workflows globally, even by other service 
instances (which I think is a good decision).

There are a few major problems blocking this approach:

1. The need for a distributed lock to avoid the same task being resumed by two 
instances of a service. Do we need tooz to do that or is there any other 
solution? (See the rough tooz sketch after this list.)
2. Are we going to step away from using TaskFlow? Such an idea came up at the 
mid-cycle meetup - what's the status of it? Without TaskFlow's persistence, 
implementing task resumption would be a lot more difficult.
3. In the case of the cinder-api service we're unable to monitor its state using 
the servicegroup API. Do we have alternatives here to decide whether a particular 
workflow being processed by cinder-api is abandoned?
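
Regarding point 1, a rough sketch of what I have in mind with tooz (the
coordinator backend URL, member id, lock name and the resume_flow helper are
just assumptions for illustration, not a worked-out design):

from tooz import coordination


def resume_exclusively(flow_uuid, resume_flow):
    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'cinder-volume-host1')
    coordinator.start()
    try:
        lock = coordinator.get_lock(b'resume-flow-' + flow_uuid.encode())
        if lock.acquire(blocking=False):
            try:
                # resume_flow is a hypothetical helper that loads the flow
                # from TaskFlow's persistence backend and runs it.
                resume_flow(flow_uuid)
            finally:
                lock.release()
        # else: another service instance is already resuming this flow.
    finally:
        coordinator.stop()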

As this topic is deferred to the Liberty release, I want to start the discussion 
here to be continued at the summit.

[1] https://review.openstack.org/#/c/147879/
[2] https://review.openstack.org/#/c/152200/
[3] https://etherpad.openstack.org/p/kilo-crossproject-ha-integration

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Module_six_moves_urllib_parse error

2015-02-25 Thread Dulko, Michal
Normally “apt-get remove python-six” helps.

From: Jordan Pittier [mailto:jordan.pitt...@scality.com]
Sent: Wednesday, February 25, 2015 12:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Module_six_moves_urllib_parse error

Hi
You probably have an old python-six version installed system-wide.

Jordan

On Wed, Feb 25, 2015 at 11:56 AM, Manickam, Kanagaraj 
mailto:kanagaraj.manic...@hp.com>> wrote:
Hi,

I see the below error in my devstack and is raised from the package ‘six’

AttributeError: 'Module_six_moves_urllib_parse' object has no attribute 
'SplitResult'

Currently my devstack setup is having six 1.9.0 version. Could anyone help here 
to fix the issue? Thanks.

Regards
Kanagaraj M

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [cinder] API service in service group API

2015-05-08 Thread Dulko, Michal
Hi,

I wonder why nova-api or cinder-api aren't present in the service group API of each 
project:

mdulko:devstack/ (master) $ cinder service-list
+--+---+--+-+---++-+
|  Binary  |  Host | Zone |  Status | State |   
  Updated_at | Disabled Reason |
+--+---+--+-+---++-+
|  cinder-backup   |   mdulko-VirtualBox   | nova | enabled |   up  | 
2015-05-08T11:58:50.00 |-|
| cinder-scheduler |   mdulko-VirtualBox   | nova | enabled |   up  | 
2015-05-08T11:58:49.00 |-|
|  cinder-volume   | mdulko-VirtualBox@lvmdriver-1 | nova | enabled |   up  | 
2015-05-08T11:58:50.00 |-|
|  cinder-volume   | mdulko-VirtualBox@lvmdriver-2 | nova | enabled |   up  | 
2015-05-08T11:58:50.00 |-|
+--+---+--+-+---++-+

Are there any technical limitations to including API services there? The use case is 
that when a service dies during request processing it leaves some garbage in 
the DB and quotas. This could be cleaned up by another instance of the service. 
For that, the aforementioned instance would need to know whether the service that 
was processing the request is down.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] API service in service group API

2015-05-08 Thread Dulko, Michal
Is there a blueprint or spec for that? Or is this currently just an open idea?

Can you explain what exactly such an idea makes easier in versioning? 

From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Friday, May 8, 2015 2:14 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] [cinder] API service in service group API

In the case of cinder, there is a proposal to add it in, since it makes some of 
the versioning work easier
On 8 May 2015 15:08, "Dulko, Michal"  wrote:
Hi,

I wonder why nova-api or cinder-api aren't present service group API of each 
project:

mdulko:devstack/ (master) $ cinder service-list
+--+---+--+-+---++-+
|      Binary      |              Host             | Zone |  Status | State |   
      Updated_at         | Disabled Reason |
+--+---+--+-+---++-+
|  cinder-backup   |       mdulko-VirtualBox       | nova | enabled |   up  | 
2015-05-08T11:58:50.00 |        -        |
| cinder-scheduler |       mdulko-VirtualBox       | nova | enabled |   up  | 
2015-05-08T11:58:49.00 |        -        |
|  cinder-volume   | mdulko-VirtualBox@lvmdriver-1 | nova | enabled |   up  | 
2015-05-08T11:58:50.00 |        -        |
|  cinder-volume   | mdulko-VirtualBox@lvmdriver-2 | nova | enabled |   up  | 
2015-05-08T11:58:50.00 |        -        |
+--+---+--+-+---++-+

Are there any technical limitations to include API services there? Use case is 
that when service dies during request processing - it leaves some garbage in 
the DB and quotas. This could be cleaned up by another instance of a service. 
For that aforementioned instance would need to know if service that was 
processing the request is down.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] API-WG PTG recap

2017-03-07 Thread Dulko, Michal
On Fri, 2017-03-03 at 17:12 +0000, Chris Dent wrote:



> 
> # Capabilities Discovery
> 
> https://etherpad.openstack.org/p/capabilities-pike
> 
> This began as an effort to refine a proposed guideline for
> expressing what a cloud can do:
> 
>  https://review.openstack.org/#/c/386555/
> 
> That was modeled as what a cloud can do, what a type of resource in
> that cloud can do, and what this specific instance of this resource
> can do.
> 
> Discussion was wide ranging but eventually diverged into two
> separate directions:
> 
> * Expressing cloud-level capabilities (e.g., does this cloud do floating
>    ips) at either the deployment or service level. The use of the
>    URL /capabilities is in the original spec, but since swift already
>    provides an implementation of an idea like this at /info we should
>    go with that. It's not clear what the next steps with this are,
>    other than to iterate the spec. We need volunteers to work on at
>    least reviewing that, and perhaps picking up the authorship.

Unfortunately I won't be able to dedicate time to the spec in the nearest
timeframe. Anyone should feel free to grab it and update it with the PTG's
decisions.

> 
> 
> * Satisfying the use case that prompted the generic idea above:
>    Making the right buttons show up in dashboards like horizon that
>    indicate whether or not an instance can be snapshotted and other
>    similar features.
> 
> The next steps on that latter direction are to modify the server
> info representation in the compute api to include a new key which
> answers the top 5 questions that horizon wants to be able to answer.
> Once we see how well that's working, move on.
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Emails for OpenStack R Release Name voting going out - please be patient

2017-04-12 Thread Dulko, Michal
On Wed, 2017-04-12 at 06:57 -0500, Monty Taylor wrote:
> On 04/06/2017 07:34 AM, Monty Taylor wrote:
> > 
> > Hey all!
> > 
> > I've started the R Release Name poll and currently am submitting
> > everyone's email address to the system. In order to not make our fine
> > friends at Carnegie Mellon (the folks who run the CIVS voting service)
> > upset, I have a script that submits the emails one at a time with a
> > half-second delay between each email. That means at best, since there
> > are 40k people to process it'll take ~6 hours for them all to go out.
> > 
> > Which is to say - emails are on their way - but if you haven't gotten
> > yours yet, that's fine. I'll send another email when they've all gone
> > out, so don't worry about not receiving one until I've sent that mail.
> Well- that took longer than I expected. Because of some rate limiting, 
> 1/2 second delay was not long enough...
> 
> Anyway - all of the emails should have gone out now. Because that took 
> so long, I'm going to hold the poll open until next Wednesday.
> 
> Monty

Not sure why, but I haven't received an email yet.

Thanks,
Michal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cleaning up inactive meetbot channels

2017-07-20 Thread Dulko, Michal
On Wed, 2017-07-19 at 19:24 +0000, Jeremy Stanley wrote:
> For those who are unaware, Freenode doesn't allow any one user to
> /join more than 120 channels concurrently. This has become a
> challenge for some of the community's IRC bots in the past year,
> most recently the "openstack" meetbot (which not only handles
> meetings but also takes care of channel logging to
> eavesdrop.openstack.org and does the nifty bug number resolution
> some people seem to like).
> 
> I have run some rudimentary analysis and come up with the following
> list of channels which have had fewer than 10 lines said by anyone
> besides a bot over the past three months:
> 
> #craton
> #openstack-api
> #openstack-app-catalog
> #openstack-bareon
> #openstack-cloudpulse
> #openstack-community
> #openstack-cue
> #openstack-diversity
> #openstack-gluon
> #openstack-gslb
> #openstack-ko
> #openstack-kubernetes
> #openstack-networking-cisco
> #openstack-neutron-release
> #openstack-opw
> #openstack-pkg
> #openstack-product
> #openstack-python3
> #openstack-quota
> #openstack-rating
> #openstack-solar
> #openstack-swauth
> #openstack-ux
> #openstack-vmware-nsx
> #openstack-zephyr
> 
> I have a feeling many of these are either no longer needed, or what
> little and infrequent conversation they get used for could just as
> easily happen in a general channel like #openstack-dev or #openstack
> or maybe in the more active channel of their parent team for some
> subteams. Who would miss these if we ceased logging/using them? Does
> anyone want to help by asking around to people who might not see
> this thread, maybe by popping into those channels and seeing if any
> of the sleeping denizens awaken and say they still want to keep it
> around?
> 
> Ultimately we should improve our meetbot deployment to support
> sharding channels across multiple bots, but that will take some time
> to implement and needs volunteers willing to work on it. In the
> meantime we're running with the meetbot present in 120 channels and
> have at least one new channel that desires logging and can't get it
> until we whittle that number down.

Would it be possible to *add* the #openstack-helm channel during those
changes? I have a review doing that [1], which has been hanging for some time
now, and the #openstack-helm channel is currently logged only by a chatbot
from the k8s Slack. That's a pretty active channel, by the way.

Thanks,
Michal

[1] https://review.openstack.org/#/c/455742/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cleaning up inactive meetbot channels

2017-07-20 Thread Dulko, Michal
On Thu, 2017-07-20 at 13:06 +0000, Jeremy Stanley wrote:

On 2017-07-20 07:49:08 +0000 (+0000), Dulko, Michal wrote:
[...]


Would it be possible to *add* #openstack-helm channel during those
changes?


[...]

Absolutely! I've left a comment on your change linking this ML
thread, but I expect we'll have the situation (temporarily) resolved
over the next few days.

Great, thank you!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Destructive / HA / fail-over scenarios

2016-11-30 Thread Dulko, Michal
On Mon, 2016-11-28 at 15:51 +0300, Timur Nurlygayanov wrote:
> Hi OpenStack developers and operators,
> 
> we are going to create the test suite for destructive testing of
> OpenStack clouds. We want to hear your feedback and ideas
> about possible destructive and failover scenarios which we need
> to check.

In Cinder we're pursuing A/A for our cinder-volume service. It would be
useful to run some destructive tests on patch chain [1] to make sure no
volume operations fail while a clustered cinder-volume service gets
killed. In the future we should have a CI job testing that in the
periodic Zuul queue.

[1] https://review.openstack.org/#/c/355968

> 
> Which scenarios we need to check if we want to make sure that
> some OpenStack cluster is configured in High Availability mode
> and can be published as a "production/enterprise" cluster.
> 
> Your ideas are welcome, let's discuss the ideas of test scenarios in
> this email thread.
> 
> The spec for High Availability testing is on review: [1]
> The user story for destructive testing of OpenStack clouds is
> on review: [2].
> 
> Thank you!
> 
> [1] https://review.openstack.org/#/c/399618/
> [2] https://review.openstack.org/#/c/396142
> 
> -- 
> 
> Timur,
> QA Manager
> OpenStack Projects
> Mirantis Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] drbdmanage is no more GPL2

2016-12-12 Thread Dulko, Michal
On Mon, 2016-12-12 at 07:58 +0100, Mehdi Abaakouk wrote:

Hi,

I have recently seen that drbdmanage python library is no more GPL2 but
need a end user license agreement [1].

Is this compatible with the driver policy of Cinder ?

[1] 
http://git.drbd.org/drbdmanage.git/commitdiff/441dc6a96b0bc6a08d2469fa5a82d97fc08e8ec1



Issues with licensing are mostly around the possibility of including the official 
driver in any OpenStack distro. It seems to me that the following statement in the 
new license prohibits that for drbdmanage:

3.4) Without prior written consent of LICENSOR or an authorized partner,
LICENSEE is not allowed to:



b) provide commercial turn-key solutions based on the LICENSED SOFTWARE or
commercial services for the LICENSED SOFTWARE or its modifications to any
third party (e.g. software support or trainings).

I think we need to collect feedback from distro vendors and the DRBD team and then 
decide whether we should remove the driver from Cinder.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Does cinder use novaclient?

2016-12-20 Thread Dulko, Michal
On Tue, 2016-12-20 at 14:44 +0800, lij...@gohighsec.com wrote:
> Hi Cinder community,
> 
> Does cinder use novaclient? What's the use of cinder/compute/nova.py?
> 
> I find that there seems to have some problems when construct
> novaclient in the file  cinder/compute/nova.py,
>  but I think the file is not being used. 
> 
> Would you explain the use of this file or how does cinder communicate
> with nova?
> 
> Looking forward to your comments. Thanks.
> 
> 
> BR,
> 
> Jane

Hi,

A quick check tells me that the module is used in
cinder.scheduler.filters.InstanceLocalityFilter [1]. It's a cinder-
scheduler filter that tries to schedule a volume on the same node as a
given instance.

It's likely we're not exercising this filter in the gate. Can you
elaborate on what's broken in the cinder.compute.nova module and file a
bug so we can track and fix it?

Thanks,
Michal

[1] 
https://github.com/openstack/cinder/blob/41bbdbc8a9d445cda51b61d81abbd0e427216c59/cinder/scheduler/filters/instance_locality_filter.py
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Using \ for multiline statements

2016-12-23 Thread Dulko, Michal
On Thu, 2016-12-22 at 17:52 -0600, Matt Riedemann wrote:
> On 12/22/2016 5:28 PM, Sean McGinnis wrote:
> > 
> > Looking for input from everyone, particularly those with more in-
> > depth
> > Python knowledge.
> > 
> > In Cinder for some time we have been trying to enforce using () or
> > reformatting code to avoid using \ to have statements span multiple
> > lines. I'm not sure when this actually started, but I think it may
> > be one of those things where someone got a review disagreement, so
> > then that person started downvoting on it, then the next person,
> > etc.
> > 
> > I've seen some allusions to the use of \ having some issues, but I
> > can't find any concrete examples where this can cause problems. I
> > do
> > seem to remember trying to write a hacking check or a code parsing
> > tool to do something that choked on these, but it's long enough ago
> > that I don't remember the details, and I could very well be mixing
> > that up with something else.
> > 
> > So my question is - is there a technical reason for enforcing this
> > rule, or is this just a bad downvote that's gotten out of control?
> > 
> > Thanks!
> > 
> > Sean
> > 
> > 
> > ___
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsu
> > bscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> I wouldn't -1 it. I've noticed \ showing up a bit more in Nova
> recently 
> simply for the exact reason I think people used to -1 it, because it
> was 
> considered ugly to use. But we've also had cases of () gone haywire.
> I 
> typically see \ used in unit tests or in DB API code when chaining 
> sqlalchemy ORM objects together to generate a single query.
> 
> Like most things like this, I'd rather than squabble over it, and
> take 
> it on a case by case basis. If a patch is hard to read and could be 
> improved using one or the other, then I'd comment as such, but
> wouldn't 
> -1 for using \ as a rule.
> 

I'm one of those people who tend to complain about the use of backslashes in
reviews. To be honest, I simply remembered that rule from the first
time I read the OpenStack Style Guidelines [1]. I think if we want to be
more permissive in this matter, we should indicate it there.
[1] http://docs.openstack.org/developer/hacking/#general

Thanks,
Michal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] PTG planning etherpad

2017-01-10 Thread Dulko, Michal
Hi,

The PTG planning etherpad wasn't advertised on the list, so I'm linking it
below. It's still pretty empty, so I guess it's time to start filling
it up.

https://etherpad.openstack.org/p/ATL-cinder-ptg-planning

Thanks,
Michal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] [qa] [infra] Proposed new Cinder gate jobs

2017-01-19 Thread Dulko, Michal
Hi all,

I've seen some confusion around the new Cinder CI jobs being proposed to
project-config in yesterday's IRC scrollback. This email aims to sum
this up and explain the purpose of what's being proposed.

Background
==

For a few releases we've been aiming to increase our functional and
integration test coverage. This was manifested by adding new Tempest
tests, enabling functional tests, providing CIs for open source volume
drivers and enabling multinode Grenade testing of rolling upgrades.
We're continuing these efforts with various new jobs:

Multinode grenade
=

In Newton we introduced a job that tests master c-api and c-sch with
stable c-vol and c-bak.

We would like to be able to test other combinations as well. Currently
Grenade doesn't support upgrading services on a node one by one while
running tests in between, which is why we've decided to create
multiple jobs. This is being developed in [1].

I understand that two more multinode jobs put a lot of burden on the gate's
resources, and that's why we plan to keep these jobs in the experimental
queue. We can fire them up on potentially breaking changes like RPC API
modifications and DB migrations.

Zero downtime
=

This was triggered by the introduction of the assert:supports-zero-downtime-
upgrade tag [2], and Cinder's implementation is being worked on in [3].
The exact testing solution is currently being evaluated in Nova and Cinder's
implementation is following that. I think adding this job for Cinder is
future work - we'll let the Nova team spearhead this.

Note that at first, patch [3] was to introduce 3 more multinode jobs. I
don't think this will be necessary and we will require only a single
job. Anyway - that's the future.

Volume migration


This is being worked on in [4] and is Cinder's equivalent of gate-
tempest-dsvm-multinode-live-migration-ubuntu-xenial in Nova.

Run in-tree tests
=

This effort aims to increase Cinder's community control over what
Tempest tests are run in Cinder jobs. It's gathered under the run-intree-
tests topic [5].

ZeroMQ (merged)
===

This case is pretty simple: gate-tempest-dsvm-zeromq-multibackend-
ubuntu-xenial in the experimental queue aims to test the multibackend
scenario with ZeroMQ. Such a scenario wasn't functional until [6] was
merged. I believe that we can pretty easily identify patches that can
potentially break ZeroMQ support, so this will stay in experimental for
now and be run only on demand.

I hope this helps to clear up some doubts. As you can see, some of the
jobs with the highest demand for gate resources are intended to stay
only in the experimental queue, to be run by Cinder reviewers on demand.

[1] https://review.openstack.org/#/c/384836/
[2] 
https://governance.openstack.org/tc/reference/tags/assert_supports-zero-downtime-upgrade.html
[3] https://review.openstack.org/#/c/420375/
[4] https://review.openstack.org/#/c/381737
[5] https://review.openstack.org/#/q/topic:run-intree-tests
[6] https://review.openstack.org/#/c/398452/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder]Can we run cinder-volume and cinder-backup on a same host?

2017-01-20 Thread Dulko, Michal
On Fri, 2017-01-20 at 14:15 +0900, Rikimaru Honjo wrote:
> Hi Cinder devs,
> 
> I have a question about cinder.
> Can I run cinder-volume and cinder-backup on a same host when I using
> iscsi backend?
> 
> I afraid that iscsi operations will be conflicted between cinder-
> volume and cinder-backup.
> In my understanding, iscsi operations are serialized for each
> individual process.
> But these could be raced between processes.
> 
> e.g.(Caution: This is just a forecast.)
> If cinder-backup execute "multipath -r" while cinder-volume is
> terminating connection,
> a multipath garbage will remain unexpectedly.

Hi,

Before Mitaka it was *required* to place cinder-volume and cinder-
backup on the same node. As both services shared the same file lock
directory, it was safe. In fact, cinder-backup simply imported cinder-
volume code.

Since Mitaka, cinder-backup doesn't do any iSCSI operations directly and
attaches volumes by calling cinder-volume over RPC. This means that
it's possible to place cinder-backup on a node other than cinder-volume,
but it's still totally safe to place them together.

If you're able to reproduce a scenario that fails these assumptions,
please file a bug report and we'll be happy to investigate and provide
a fix.

Thanks,
Michal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] cinder 10.0.0.0rc1 (ocata)

2017-02-07 Thread Dulko, Michal
Hi,

With Cinder master being open for Pike development, I ask that we not merge
any changes with DB schema migrations before the sanity check migration [1]
gets in. It's supposed to block going forward until all of Ocata's
data migrations have been executed, and it should be the first migration in
Pike.

[1] https://review.openstack.org/#/c/414549/
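
To give an idea of what such a sanity check does, here's a simplified
sketch in the sqlalchemy-migrate style - this is *not* the contents of [1];
the table, condition and message are illustrative assumptions only:

from sqlalchemy import func, MetaData, select, Table


def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    volumes = Table('volumes', meta, autoload=True)

    # Count rows that Ocata's online data migrations should already have
    # converted; refuse to proceed if any are left.
    query = (select([func.count()]).select_from(volumes)
             .where(volumes.c.migration_status == 'pending'))
    pending = migrate_engine.execute(query).scalar()
    if pending:
        raise Exception('Run "cinder-manage db online_data_migrations" to '
                        'complete Ocata data migrations before upgrading '
                        'the schema (%d rows left).' % pending)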

Thanks,
Michal

On Mon, 2017-02-06 at 23:38 +0000, no-re...@openstack.org wrote:
> Hello everyone,
> 
> A new release candidate for cinder for the end of the Ocata
> cycle is available!  You can find the source code tarball at:
> 
> https://tarballs.openstack.org/cinder/
> 
> Unless release-critical issues are found that warrant a release
> candidate respin, this candidate will be formally released as the
> final Ocata release. You are therefore strongly
> encouraged to test and validate this tarball!
> 
> Alternatively, you can directly test the stable/ocata release
> branch at:
> 
> http://git.openstack.org/cgit/openstack/cinder/log/?h=stable/ocata
> 
> Release notes for cinder can be found at:
> 
> http://docs.openstack.org/releasenotes/cinder/
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][oslo] ZeroMQ support with multibackend

2016-11-24 Thread Dulko, Michal
Hi,

Cinder is lacking ZeroMQ messaging support in multibackend
configurations. This is due to the fact that we're abusing the
``Target.server`` property by appending a "@backend" suffix to the
hostname. This works just fine with RabbitMQ, as it routes messages
using queue names, but fails with ZeroMQ when the 'server' property
stops matching hostnames.

There is a workaround for this in oslo.messaging [1], but it breaks
multibackend configurations, as backend A on a host can get a message
addressed to backend B on the same host.

We've seen attempts [2], [3], [4] to fix it, but these implementations
were based on doing special operations when ZeroMQ is configured. This
invalidates the concept of oslo.messaging being an abstraction layer.

My current idea to fix this [5] is to switch the whole multibackend message
routing from being based on ``Target.server`` to ``Target.topic``.
The implementation is based on keeping the old RPC server that listens on
the "cinder-volume" topic for fanout messages and creating a new RPC server
which listens on the "cinder-volume.host@backend" topic and "host" server.
This is fine for rolling upgrades, as '$topic.$server' is how server
queue names are built in oslo.messaging, so we'll still listen on the old
queues.
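
In simplified code the difference looks roughly like this (this isn't a
copy of [5]; config handling and endpoints are omitted or illustrative):

from oslo_config import cfg
import oslo_messaging as messaging

transport = messaging.get_transport(cfg.CONF)
endpoints = []  # the volume manager endpoint objects would go here

# Old style - 'server' abused with the host@backend value:
#   target = messaging.Target(topic='cinder-volume', server='host@backend')

# New style - route per-backend messages by topic, keep 'server' a real host:
target = messaging.Target(topic='cinder-volume.host@backend', server='host')
server = messaging.get_rpc_server(transport, target, endpoints,
                                  executor='eventlet')
server.start()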

Please note that only the messaging layer is being changed. Internally
volumes will still be seen as existing on "host@backend" servers.

I've added an experimental gate-tempest-dsvm-zeromq-multibackend job,
which passes fine on [5], but fails on master, which proves that the
fix works.

Does anyone see any dangers in this approach? If not - I'm asking for
reviews on [5].

Thanks,
Michal

[1] 
https://github.com/openstack/oslo.messaging/blob/7b5bec3133b6d74c4144fb8a33b9c9d2803e8858/oslo_messaging/_drivers/zmq_driver/zmq_address.py#L36-L39
[2] https://review.openstack.org/#/c/271848/
[3] https://review.openstack.org/#/c/277113/
[4] https://review.openstack.org/#/c/351862
[5] https://review.openstack.org/#/c/398452
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Exposing project team's metadata in README files

2016-11-25 Thread Dulko, Michal
On Fri, 2016-11-25 at 12:38 +0100, Flavio Percoco wrote:
> Greetings,
> 
> Just a heads up for everyone. The work on this front has moved
> forward and the
> badges are now being generated as part of the governance CI[0].
> 
> You can find the list of badges here[1] and the pattern is quite
> obvious, the
> name of the image is based on the project repo name.
> 
> I've edited the README files for all repositories listed in the
> projects.yaml
> file and I've started to submit these patches[2]. I'm not a fan of
> "viral
> changes" but I've done my best to explain what's changing, provide
> references
> and examples on the commit message. These changes are being submitted
> using the
> tag 'project-badges'[2].

Looks like the Cinder patch is missing there?

> 
> Note that these badges are *JUST* a graphical representation of
> what's in the
> governance repo. If you don't want to have them in the README file, I
> guess it's
> fine. I'd, however, encourage everyone to add them to provide
> consistency and a
> more immediate information of what the project is about, what some of
> the
> project capabilities are and what its status is.
> 
> Ideally this should also be added in projects documentation as well
> but I'll
> leave that to every team to do.
> 
> Happy to answer questions,
> Flavio
> 
> P.S: The current layout is being improved[3], if you have better
> ideas please
> help out.
> 
> [0] https://review.openstack.org/#/c/391588/
> [1] http://governance.openstack.org/badges/
> [2] https://review.openstack.org/#/q/topic:project-badges
> [3] https://review.openstack.org/#/c/399278/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev