Re: [openstack-dev] [cinder][nova] Proper behavior for os-force_detach

2018-07-18 Thread Walter Boring
The whole purpose of this test is to simulate the case where Nova no longer
knows where the VM is, or the VM may simply not exist, but we still need to
clean up the Cinder side of things.   That being said, with the new attach
API, the connector is saved in the Cinder database for each volume attachment.
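
As an illustration, here is a minimal sketch of how a backend driver could
handle a None connector in terminate_connection by removing every connection
it still has for the volume. The _get_all_exports/_find_export/_remove_export
helpers are hypothetical stand-ins for whatever a given backend actually
provides:

    class MyBackendDriver(object):
        # Illustrative only; the helper methods are hypothetical.

        def terminate_connection(self, volume, connector, **kwargs):
            if connector is None:
                # Force-detach: the caller couldn't say which host, so tear
                # down every connection the backend still has for the volume.
                for export in self._get_all_exports(volume):
                    self._remove_export(volume, export)
                return
            # Normal detach: remove only this connector's connection.
            self._remove_export(volume, self._find_export(volume, connector))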

Walt

On Wed, Jul 18, 2018 at 5:02 AM, Gorka Eguileor  wrote:

> On 17/07, Sean McGinnis wrote:
> > On Tue, Jul 17, 2018 at 04:06:29PM -0300, Erlon Cruz wrote:
> > > Hi Cinder and Nova folks,
> > >
> > > Working on some tests for our drivers, I stumbled upon the tempest test
> > > 'force_detach_volume', which calls the Cinder API passing a 'None'
> > > connector. At the time this was added, several CIs went down, and people
> > > started discussing whether this (accepting/sending a None connector) is
> > > the proper behavior and what a driver is expected to do[1]. So, some CIs
> > > started just skipping that test[2][3][4] and others implemented fixes
> > > that made the driver disconnect the volume from all hosts if a None
> > > connector was received[5][6][7].
> >
> > Right, it was determined the correct behavior for this was to disconnect
> > the volume from all hosts. The CIs that are skipping this test should stop
> > doing so (once their drivers are fixed, of course).
> >
> > >
> > > While implementing this fix seems to be straightforward, I feel that
> > > just removing the volume from all hosts is not the correct thing to do,
> > > mainly considering that we can have multi-attach.
> > >
> >
> > I don't think multiattach makes a difference here. Someone is forcibly
> > detaching the volume and not specifying an individual connection. So based
> > on that, Cinder should be removing any connections, whether that is to one
> > or several hosts.
> >
>
> Hi,
>
> I agree with Sean, drivers should remove all connections for the volume.
>
> Even without multiattach there are cases where you'll have multiple
> connections for the same volume, like in a Live Migration.
>
> It's also very useful when Nova and Cinder get out of sync and your
> volume has leftover connections. In this case if you try to delete the
> volume you get a "volume in use" error from some drivers.
>
> Cheers,
> Gorka.
>
>
> > > So, my questions are: What is the best way to fix this problem? Should
> > > the Cinder API continue to accept detachments with None connectors? If
> > > so, what would be the effects on other Nova attachments for the same
> > > volume? Are there any side effects if the volume is not multi-attached?
> > >
> > > In addition to this thread, I will bring this topic to tomorrow's Cinder
> > > meeting, so please join if you have something to share.
> > >
> >
> > +1 - good plan.
> >


Re: [openstack-dev] [cinder][nova] Update attachments on replication failover

2018-02-27 Thread Walter Boring
I think you might be able to get away with just calling os-brick's
connect_volume again, without needing to call disconnect_volume first.
Calling disconnect_volume just to refresh the connection_info wouldn't be
good for volumes that are in use.
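
Roughly what I have in mind, as a sketch; the 'ISCSI' protocol string and the
new_connection_info variable below are illustrative (Nova would use whatever
the refreshed attachment info from Cinder contains):

    from os_brick.initiator import connector

    # Build a connector object for the transport named in the refreshed
    # connection info that Cinder hands back after failover.
    conn = connector.InitiatorConnector.factory(
        'ISCSI', root_helper='sudo', use_multipath=True)

    # Call connect_volume again with the new connection properties, instead
    # of disconnect_volume followed by connect_volume, so an in-use volume
    # isn't torn down underneath the guest.
    device_info = conn.connect_volume(new_connection_info['data'])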

On Tue, Feb 27, 2018 at 2:56 PM, Matt Riedemann  wrote:

> On 2/27/2018 10:02 AM, Matthew Booth wrote:
>
>> Sounds like the work Nova will have to do is identical to volume update
>> (swap volume). i.e. Change where a disk's backing store is without actually
>> changing the disk.
>>
>
> That's not what I'm hearing. I'm hearing disconnect/reconnect. Only the
> libvirt driver supports swap volume, but I assume all other virt drivers
> could support this generically.
>
>
>> Multi-attach! There might be more than 1 instance per volume, and we
>> can't currently support volume update for multi-attached volumes.
>>
>
> Good point - cinder would likely need to reject a request to replicate an
> in-use multiattach volume if the volume has more than one attachment.
>
>
> --
>
> Thanks,
>
> Matt
>


Re: [openstack-dev] removing screen from devstack - RSN

2017-09-07 Thread Walter Boring
I completely agree with you here, John.   I still prefer to use screen for my
devstack installs; it's just far, far easier to use for development.  systemd
is a pain to use in comparison.
This is a major step backwards for developers.

:(

On Thu, Sep 7, 2017 at 1:29 PM, John Griffith 
wrote:

> Please don't; some of us have no issues with screen and use it extensively
> for debugging.  Unless there's a viable option using systemd, I fail to
> understand why this is such a big deal.  I've been using devstack in screen
> for a long time without issue, and I still use rejoin, which supposedly
> didn't work, without issue.
>
> I completely get the "run like customers" argument, but in theory I'm not
> sure how screen makes it much different from what customers do; it's
> executing the same binary at the end of the day.  I'd also ask, then: is
> devstack no longer "dev" stack, but now a preferred method of install for
> running production clouds?  Anyway, I'd just ask to leave it as an option,
> unless there are equivalent options for things like using pdb.  It's
> annoying enough that we lost that capability for the API services; is there
> a possibility we can reconsider not allowing this as an option?
>
> Thanks,
> John
>
> On Thu, Sep 7, 2017 at 7:31 AM, Davanum Srinivas 
> wrote:
>
>> w00t!
>>
>> On Thu, Sep 7, 2017 at 8:45 AM, Sean Dague  wrote:
>> > On 08/31/2017 06:27 AM, Sean Dague wrote:
>> >> The work that started last cycle to make devstack only have a single
>> >> execution mode, the same between automated QA and local use, is nearing
>> >> its completion.
>> >>
>> >> https://review.openstack.org/#/c/499186/ is the patch that will remove
>> >> screen from devstack (which was only left as a fallback for things like
>> >> grenade during Pike). Tests are currently passing on all the gating jobs
>> >> for it. And experimental looks mostly useful.
>> >>
>> >> The intent is to merge this in about a week (right before the PTG). So,
>> >> if you have a complicated devstack plugin you think might be affected by
>> >> this (and were previously making jobs pretend to be grenade to keep
>> >> screen running), now is the time to run tests against this patch and see
>> >> where things stand.
>> >
>> > This patch is in the gate and now merging, and with it devstack now has
>> > a single run mode, using systemd units, which is the same between test
>> > and development.
>> >
>> > Thanks to everyone helping with the transition!
>> >
>> > -Sean
>> >
>> > --
>> > Sean Dague
>> > http://dague.net
>> >
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims


Re: [openstack-dev] [Cinder][Nova][Requirements] Lib freeze exception for os-brick

2017-07-31 Thread Walter Boring
Do it +1

On Mon, Jul 31, 2017 at 7:37 AM, Sean McGinnis 
wrote:

> I am requesting a library release of os-brick during the feature freeze
> in order to fix an issue with the recently landed online volume extend
> feature across Nova and Cinder.
>
> Patches have landed in both projects to add this feature. It wasn't until
> later that Matt was able to get tempest tests in that found an issue with
> some of the logic in the os-brick library. That has now been fixed in the
> stable/pike branch in os-brick with this patch:
>
> https://review.openstack.org/#/c/489227/
>
> We can get a new library release out as soon as the freeze is over, but
> due to the fact that we do not raise global requirements for stable
> branches after release, there could be some deployments that would still
> use the old ("broken") lib. We would need to get this release out before
> the final pike branching of Cinder and Nova to be able to raise G-R to
> make sure the new release is used with this fix.
>
> I see this change as low risk for other regressions, and it would allow us
> to not ship a broken feature.
>
> Thanks,
> Sean
>


Re: [openstack-dev] [Cinder] Proposing TommyLikeHu for Cinder core

2017-07-25 Thread Walter Boring
+1

On Tue, Jul 25, 2017 at 1:07 AM, Sean McGinnis 
wrote:

> I am proposing we add TommyLike as a Cinder core.
>
> DISCLAIMER: We work for the same company.
>
> I have held back on proposing him for some time because of this conflict,
> but I think from his number of reviews [1] and code contributions [2] it's
> hopefully clear that my motivation does not have anything to do with this.
>
> TommyLike has consistently done quality code reviews. He has contributed a
> lot of bug fixes and features. And he has been available in the IRC channel
> answering questions and helping out, despite some serious timezone
> challenges.
>
> I think it would be great to add someone from this region so we can get
> more perspective from the APAC area, as well as having someone around that
> may help as more developers get involved in non-US and non-EU timezones.
>
> Cinder cores, please respond with your opinion. If no reason is given to do
> otherwise, I will add TommyLike to the core group in one week.
>
> And absolutely call me out if you see any bias in my proposal.
>
> Thanks,
> Sean
>
> [1] http://stackalytics.com/report/contribution/cinder-group/90
> [2] https://review.openstack.org/#/q/owner:%22TommyLike+%253Ctommylikehu%2540gmail.com%253E%22++status:merged
>


Re: [openstack-dev] [kolla] Looking for Docker images for Cinder, Glance etc for oVirt

2017-07-14 Thread Walter Boring
Also, for what it's worth, Cinder has a docker-compose contrib directory that
we merged a month or so ago for standing up Cinder:

https://github.com/openstack/cinder/tree/master/contrib/block-box

On Sat, Jul 8, 2017 at 2:03 PM, Leni Kadali Mutungi 
wrote:

> Hello all.
>
> I am trying to use the Cinder and Glance Docker images you provide in
> relation to the setup here:
> http://www.ovirt.org/develop/release-management/features/cinderglance-docker-integration/
>
> I tried to run `sudo docker pull
> kollaglue/centos-rdo-glance-registry:latest` and got a "not found" error. I
> thought it could be possible to use a Dockerfile to spin up an equivalent
> of it, so I would like some guidance on how to go about doing that: best
> practices and so on. Alternatively, if possible, could you point me to the
> equivalent images mentioned in the guides if they have been superseded by
> something else?
> Thanks.
>
> CCing the oVirt users and devel lists to see if anyone has experienced
> something similar.
>
> --
> - Warm regards
> Leni Kadali Mutungi
>


Re: [openstack-dev] [cinder] Target classes in Cinder

2017-06-09 Thread Walter Boring
I had initially looked into this for the 3PAR drivers when we were first
working on the target driver code.   The problem I found was that it would
take a fair amount of time to refactor the code, with marginal benefit.
Yes, the design is better, but I couldn't justify the refactoring time,
effort and testing of the new driver model just to get the same
functionality.   Also, we would still need 2 CIs to ensure that the FC and
iSCSI target drivers for 3PAR work correctly, so it doesn't really save much
CI effort.   I guess what I'm trying to say is that, even though it's a
better model, we always have to weigh the time investment against the reward,
and I couldn't justify it with all the other efforts I was involved with at
the time.

I kind of assume that, for the most part, developers don't even understand
why we have the target driver model, and that even if they were educated on
it, they'd run into the same issue I had.
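
For anyone who hasn't dug into it, this is roughly what the decoupled model
buys you from a driver's point of view. The class paths and the config option
name below are only illustrative (the real classes live under
cinder/volume/targets):

    from oslo_utils import importutils

    # Map a config value to a target/transport class instead of baking the
    # transport into the driver's inheritance chain.
    TARGET_CLASSES = {
        'tgtadm': 'cinder.volume.targets.tgt.TgtAdm',
        'lioadm': 'cinder.volume.targets.lio.LioAdm',
    }

    def load_target_driver(configuration, db, executor):
        # Instantiate whichever target helper the operator configured.
        path = TARGET_CLASSES[configuration.target_helper]
        return importutils.import_object(
            path, configuration=configuration, db=db, executor=executor)

    # The driver then delegates create_export/ensure_export/remove_export and
    # the target half of initialize_connection to this object, rather than
    # inheriting them from an iSCSI- or FC-specific base class.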

On Fri, Jun 2, 2017 at 12:47 PM, John Griffith 
wrote:

> Hey Everyone,
>
> So quite a while back we introduced a new model for dealing with target
> management in the drivers (ie initialize_connection, ensure_export etc).
>
> Just to summarize a bit:  The original model was that all of the target
> related stuff lived in a base class of the base drivers.  Folks would
> inherit from said base class and off they'd go.  This wasn't very flexible,
> and it's why we ended up with things like two drivers per backend in the
> case of FibreChannel support.  So instead of just say having "driver-foo",
> we ended up with "driver-foo-iscsi" and "driver-foo-fc", each with their
> own CI, configs etc.  Kind of annoying.
>
> So we introduced this new model for targets, independent connectors or
> fabrics so to speak that live in `cinder/volume/targets`.  The idea being
> that drivers were no longer locked in to inheriting from a base class to
> get the transport layer they wanted, but instead, the targets class was
> decoupled, and your driver could just instantiate whichever type they
> needed and use it.  This was great in theory for folks like me: if I ever
> did FC, rather than create a second driver (the pattern of 3 classes:
> common, iSCSI and FC), it would just be a config option for my driver, and
> I'd use whichever one was selected in config (or both).
>
> Anyway, I won't go too far into the details around the concept (unless
> somebody wants to hear more), but the reality is it's been a couple years
> now and currently it looks like there are a total of 4 out of the 80+
> drivers in Cinder using this design: blockdevice, solidfire, lvm and drbd
> (and I implemented 3 of them, I think... so that's not good).
>
> What I'm wondering is, even though I certainly think this is a FAR
> SUPERIOR design to what we had, I don't like having both code-paths and
> designs in the code base.  Should we consider reverting the drivers that
> are using the new model back and remove cinder/volume/targets?  Or should
> we start flagging those new drivers that don't use the new model during
> review?  Also, what about the legacy/burden of all the other drivers that
> are already in place?
>
> Like I said, I'm biased and I think the new approach is much better in a
> number of ways, but that's a different debate.  I'd be curious to see what
> others think and what might be the best way to move forward.
>
> Thanks,
> John
>


Re: [openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-03 Thread Walter Boring
Actually, this is incorrect.

The sticking point in all of this was the coordination and initiation of the
workflow from Nova.   Cinder already has the ability to call the driver to
resize the volume.  Cinder just prevents this now, because there is work that
has to be done on the attached side to make the new size actually show up.

What needs to happen is: a new Nova API needs to be created to initiate and
coordinate this effort. That API would call Cinder to extend the size, then
get the connection information from Cinder for that volume, then call
os-brick to extend the size on the host, and then update the domain XML to
tell libvirt about the new size.   The end user inside the VM would still
have to issue the same SCSI bus rescan and refresh that os-brick does on the
host, to make the kernel and filesystem in the VM recognize the new size.

os-brick does all of the heavy lifting already on the host side of things.
The Connector API entry point:
https://github.com/openstack/os-brick/blob/master/os_brick/initiator/initiator_connector.py#L153

iSCSI example:
https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connectors/iscsi.py#L370

os-brick's code works for both single-path and multipath-attached volumes.
Multipath adds a bunch of complexity to resize, which should already be taken
care of here:
https://github.com/openstack/os-brick/blob/master/os_brick/initiator/linuxscsi.py#L375
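
Strung together, the host-side part of that flow looks roughly like this.
This is only a sketch: the cinderclient call and the function name are
illustrative, and the libvirt/domain XML update is left to the caller:

    from os_brick.initiator import connector

    def extend_attached_volume(cinder, volume_id, new_size_gb,
                               connection_info, root_helper='sudo'):
        # 1. Ask Cinder to grow the volume on the backend.
        cinder.volumes.extend(volume_id, new_size_gb)

        # 2. Have os-brick rescan the attached device so the host sees the
        #    new size, via the connector API linked above.
        conn = connector.InitiatorConnector.factory(
            connection_info['driver_volume_type'].upper(), root_helper,
            use_multipath=True)
        new_size = conn.extend_volume(connection_info['data'])

        # 3. The caller (the new Nova API) would then update the domain XML
        #    so libvirt exposes the new size; the guest still has to rescan
        #    and grow its own filesystem.
        return new_size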


Walt

On Sat, Apr 1, 2017 at 10:17 AM, Jay Bryant  wrote:

> Matt,
>
> I think discussion on this goes all the way back to Tokyo.  There was work
> on the Cinder side to send the notification to Nova which I believe all the
> pieces were in place for.  The missing part (sticking point) was doing a
> rescan of the SCSI bus in the node that had the extended volume attached.
>
> Has doing that been solved since Tokyo?
>
> Jay
>
>
> On 4/1/2017 10:34 AM, Matt Riedemann wrote:
>
>> On 3/31/2017 8:55 PM, TommyLike Hu wrote:
>>
>>> There was a time when this feature was proposed in both Cinder [1] and
>>> Nova [2], but unfortunately no one (correct me if I am wrong) is going to
>>> handle it during Pike. We think extending an online volume is a
>>> beneficial feature that is supported by most vendors. We really don't
>>> want this feature missing from OpenStack and would like to continue the
>>> work. Could anyone share how much work is left and where I should start?
>>>
>>> Thanks
>>> TommyLike.Hu
>>>
>>> [1] https://review.openstack.org/#/c/272524/
>>> [2]
>>> https://blueprints.launchpad.net/nova/+spec/nova-support-attached-volume-extend
>>>
>> The nova blueprint description does not contain much in the way of
>> details, but from what is there it sounds a lot like the existing volume
>> swap operation, which is triggered from Cinder by a volume migration or
>> retype operation. How do those existing operations not already solve this
>> use case?
>>
>>
>


Re: [openstack-dev] [Cinder] Project mascot available

2017-02-14 Thread Walter Boring
How does a horse translate to a cinder block?
I don't get it.  Our original cinder block mascot made a lot of sense,
considering the name of the project.


This is... just an odd horse for no particular reason.

On Mon, Feb 13, 2017 at 6:43 PM, Sean McGinnis 
wrote:

> For your viewing and slide designing pleasure...
>
> We have the official mascot completed from the illustrators. Multiple
> formats
> are available from here:
>
> https://www.dropbox.com/sh/8s3859c6qulu1m3/AABu_rIyuBM_bZmfGkagUXhua?dl=0
>
> Stay golden, pony boy.
>
> Sean
>