We only bump the version if something has changed IIRC. I think bumping
when nothing has changed would create a burden for implementers of client
software. So it's not like you get a chance to sneak this in "for free".
Does this information really need to be available in the host OS? Its
I think a good start would be a concrete list of the places you felt you
needed to change upstream and the specific reasons for each that it wasn't
done as part of the community.
For example, I look at your nova fork and it has a "don't allow this call
during an upgrade" decorator on many API
Further to last week's example of how to add a new privsep'ed call in Nova,
I thought I'd write up how to add privsep to a new OpenStack project. I've
used Cinder in this worked example, but it really applies to any project
which wants to do things with escalated permissions.
The write up is
I was asked yesterday for a guide on how to write new escalated methods
with oslo privsep, so I wrote up a blog post about it this morning. It
might be useful to others here.
I intend to write up how to add privsep to a
I'm confused about the design of AE to be honest. Is there a good reason
that this functionality couldn't be provided by cloud-init? I think there's
a lot of cost in deviating from the industry standard, so the reasons to do
so have to be really solid.
I'm also a bit confused by what seems to be
The more I think about it, the more I dislike how the proposed driver also
"lies" about it using iso9660. That's definitely wrong:
if CONF.config_drive_format in ['iso9660']:
# cloud-init only support iso9660 and vfat, but in z/VM
# implementation, can't link a
https://review.openstack.org/#/c/527658 is a z/VM patch which introduces
their support for config drive. They do this by attaching a tarball to the
instance, having pretended in the nova code that it is an iso9660. This
In the past we've been concerned about adding new
https://review.openstack.org/#/c/523387 proposes adding a z/VM specific
dependency to nova's requirements.txt. When I objected, the counter argument
was that we have examples of Windows specific dependencies (os-win) and
PowerVM specific dependencies in that file already.
I think perhaps all
Thanks to jichenjc for fixing the pep8 failures I was seeing on master. I'd
decided they were specific to my local dev environment given no one else
was seeing them.
As I said in the patch that fixed the issue, I think it's worth
exploring how these got through the gate in the first place.
On Fri, Mar 16, 2018 at 7:34 AM, Michael Still <mi...@stillhq.com> wrote:
>> Thanks for this. I read the README for the project after this and I do
>> now realise you're using notifications for some of these events.
>> I guess I'm still pondering if it's reasonable to
I've just stumbled across Tatu and the design presentation, and I am
wondering how you handle cleaning up instances when they are deleted given
that nova vendordata doesn't expose a "delete event".
Specifically I'm wondering if we should add support for such an event to
>> Try setting the ironic_log_dir variable to /var/log/ironic, or setting
>> [default] log_dir to the same in ironic.conf.
>> I'm surprised it's not logging to a file by default.
>> On 4 Mar 2018 8:33 p.m., "Michael Still" <mi...@stillhq.com> wrote:
>> Ok, so I applied your patch an
I was thinking about this the other day... How do you de-register instances
from freeipa when the instance is deleted? Is there a missing feature in
vendordata there that you need?
On Fri, Nov 11, 2016 at 2:01 AM, Rob Crittenden wrote:
> Wanted to let you know I'm
cause the required management
> and power interfaces were not enabled. The patch should address that but
> please let us know if there are further issues.
> On 4 Mar 2018 7:59 p.m., "Michael Still" <mi...@stillhq.com> wrote:
> Replying to a single email
Replying to a single email because I am lazier than you.
I would have included logs, except /var/log/ironic on the bifrost machine
is empty. There are entries in syslog, but nothing that seems related (it's
all periodic task kind of stuff).
However, Mark is right. I had an /etc/ironic/ironic.conf
I've been playing with bifrost to help me manage some lab machines. I must
say the install process was well documented and smooth, so that was a
That said, I am struggling to get a working node enrolment. I'm resisting
using the JSON file / ansible playbook approach,
Sorry for the slow reply, I've spent the last month camping in a tent and
it was wonderful.
The privsep transition isn't complete in Nova, but it was never intended to
be in Queens. We did get further than we envisaged, and it's doable to
finish off in Rocky.
That said, I feel like we have a nice
Do we continue to support the previous two releases as stable branches?
Doesn't that mean we double the amount of time we need to keep older CI
setups around? Isn't that already a pain point for the stable teams?
On Wed, Dec 13, 2017 at 8:17 AM, Thierry Carrez
I'm out of my depth a little here. I've done the following:
- installed kubernetes
- followed the deploy guide for kolla-kubernetes 
- except where I didn't because I had to fix it 
I can download an openrc and even send a "nova boot" that looks like it
works. However, I have this
Thanks for this summary. I'd say the cinder-booted IPA is definitely of
interest to the operators I've met. Building new IPAs, especially when
trying to iterate on what drivers are needed, is a pain, so being able to
iterate faster would be very useful. That said, I guess this implies
That does work for me, except it means I'll still need to port it to
privsep to hit my goal of no rootwrap in Queens. I can live with that.
On Wed, Nov 8, 2017 at 4:54 PM, Matt Riedemann <mriede...@gmail.com> wrote:
> On 11/8/2017 12:24 PM, Michael Still wrote:
A really, really long time ago (think 2011), we added support in Nova for
configuring the mkfs commands that are run for new ephemeral disks using
the virt_mkfs command. The current implementation is in
nova/virt/disk/api.py for your reading pleasure.
I'm battling a little with how to move
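For context, virt_mkfs takes per-os_type mkfs command templates. A hedged example of the shape, with the exact substitution keys from memory rather than verified against current code, so check nova/virt/disk/api.py before trusting them:

```ini
# nova.conf fragment (illustrative). virt_mkfs is multi-valued, one
# os_type=command entry per line; %(fs_label)s and %(target)s are
# substituted when the ephemeral disk is created.
[DEFAULT]
virt_mkfs = default=mkfs.ext4 -L %(fs_label)s -F %(target)s
virt_mkfs = windows=mkfs.ntfs --force --fast --label %(fs_label)s %(target)s
```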
On Mon, Nov 6, 2017 at 1:26 PM, Dan Smith wrote:
> > I hope everyone travelling to the Sydney Summit is enjoying jet lag
> > just as much as I normally do. Revenge is sweet! My big advice is that
> > caffeine is your friend, and to not lick any of the wildlife.
> I wasn't
The privsep session doesn't appear to be in that list. Did it get dropped
On Wed, Nov 1, 2017 at 12:04 AM, Thierry Carrez
> Hi everyone,
> Etherpads for the Forum sessions in Sydney can be found here:
I hope everyone travelling to the Sydney Summit is enjoying jet lag just as
much as I normally do. Revenge is sweet! My big advice is that caffeine is
your friend, and to not lick any of the wildlife.
On a more serious note, I want to give a checkpoint for the Nova privsep
I think new-keypair-on-rebuild makes sense for some forms of key rotation
as well. For example, I've worked with a big data ironic customer who uses
rebuild to deploy new OS images onto their ironic managed machines.
Presumably if they wanted to do a keypair rotation they'd do it in a very
discuss anything in flight for Queens, what
> we're working on, and have a chance to ask questions of operators/users for
> feedback. For example, we plan to add vGPU support but it will be quite
> simple to start, similar with volume multi-attach.
> 4. Michael Still had an item
One thing I'd like to explore is what the functional difference between a
rebuild and a delete / create cycle is. With a rebuild you get to keep your
IP I suppose, but that could also be true of floating IPs for a delete /
create as well.
Operationally, why would I want to inject a new keypair?
This email is a courtesy message to make sure you're all aware that at the
PTG we decided to try to convert all of nova-compute to privsep for the
Queens release. This will almost certainly have an impact on out of tree
drivers, although I am hoping the fallout is minimal.
A change like
Dims, I'm not sure that's actually possible though. Many of these files
have been through rewrites and developed over a large number of years.
Listing all authors isn't practical.
Given the horse has bolted on forking these files, I feel like a comment
acknowledging the original source file is
"Matt Riedemann" <mriede...@gmail.com> wrote:
> On 8/20/2017 1:11 AM, Michael Still wrote:
>> Specifically we could do something like this:
> Sounds like we're OK with doing this in Queens given the other discussion
On Sun, Aug 20, 2017 at 03:43:22PM +1000, Michael Still wrote:
> > Hi,
> > nova.virt.libvirt.storage.lvm.clear_volume() has a comment that we could
> > use shred to zero out volumes efficiently if we could assume that shred
> > 8.22 was in all our downstream distr
I'm going to take the general silence on this as permission to remove the
idmapshift binary from nova. You're welcome.
On Sat, Jul 29, 2017 at 10:09 AM, Michael Still <mi...@stillhq.com> wrote:
> I'm working through the process of converting the libvirt
Specifically we could do something like this:
On Sun, Aug 20, 2017 at 3:43 PM, Michael Still <mi...@stillhq.com> wrote:
> nova.virt.libvirt.storage.lvm.clear_volume() has a comment that we could
> use shred to zero out v
nova.virt.libvirt.storage.lvm.clear_volume() has a comment that we could
use shred to zero out volumes efficiently if we could assume that shred
8.22 was in all our downstream distros. shred 8.22 shipped in 2013.
Can we assume that now? Xenial appears to ship with 8.25 for
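For the curious, what a shred-based clear_volume() boils down to can be sketched as follows. This runs against a throwaway file rather than a logical volume, and whether Nova would want exactly `-n 0 -z` is an assumption on my part, not something settled in the thread:

```python
import os
import subprocess
import tempfile


def clear_with_shred(path):
    """Zero out a volume (or here, a file) using coreutils shred.

    -n 0 skips the random overwrite passes and -z adds a final pass of
    zeros, giving the cheap "just zero it" behaviour clear_volume() wants.
    Assumes shred >= 8.22 is available, per the thread.
    """
    subprocess.run(["shred", "-n", "0", "-z", path], check=True)


if __name__ == "__main__":
    # Stand-in for the logical volume: a temp file full of random bytes.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(4096))
        target = f.name
    clear_with_shred(target)
    with open(target, "rb") as f:
        data = f.read()
    # Every byte should now be zero (shred may round the file up to a
    # filesystem block size, but the padding is zeros too).
    print("zeroed" if data and set(data) == {0} else "not zeroed")
    os.remove(target)
```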
I'm working through the process of converting the libvirt driver in Nova to
privsep with the assistance of Tony Breeds. For various reasons, I started
with removing all the calls to the chown binary and am replacing them with
privsep equivalents. You can see this work at:
I'm cc'ing openstack-dev because your email is the same as the comment you
made on the relevant review, and I think getting visibility with the wider
Nova team is a good idea.
Unfortunately this is a risk of having an out of tree Nova driver, which
has never been the recommended path for
Certainly removing the "--no-binary :all:" results in a build that builds.
I'll test and see if it works todayish.
On Mon, Jun 12, 2017 at 9:56 PM, Chris Smart <m...@csmart.io> wrote:
> On Mon, 12 Jun 2017, at 21:36, Michael Still wrote:
> > The experiment
> On 06/12/2017 04:29 AM, Michael Still wrote:
> > Hi,
> > I'm trying to explain this behaviour in stable/newton, which specifies
> > Routes==2.3.1 in upper-constraints:
> > $ pip install --no-binary :all: Routes==2.3.
I'm trying to explain this behaviour in stable/newton, which specifies
Routes==2.3.1 in upper-constraints:
$ pip install --no-binary :all: Routes==2.3.1
Could not find a version that satisfies the requirement Routes==2.3.1
(from versions: 1.5, 1.5.1, 1.5.2, 1.6, 1.6.1, 1.6.2, 1.6.3,
> On Wed, Sep 24, 2014 at 08:26:44AM +1000, Michael Still wrote:
> > On Tue, Sep 23, 2014 at 8:58 PM, Daniel P. Berrange <berra...@redhat.com>
> > > On Tue, Sep 23, 2014 at 02:27:52PM +0400, Roman Bogorodskiy wrote:
> > >> Michael Still wrote:
This sort of question comes up every six months or so, it seems.
The issue is that for config drive users we don't have a way of rebuilding
all of the config drive (for example, the root password is gone). That's
probably an issue for rescue because it's presumably one of the things you
It would be interesting for this to be built in a way where other endpoints
could be added to the list that have extra headers added to them.
For example, we could end up with something quite similar to EC2 IAMS if we
could add headers on the way through for requests to OpenStack endpoints.
Config drive over read-only NFS anyone?
On Sun, Feb 19, 2017 at 6:12 AM, Steve Gordon wrote:
> - Original Message -
> > From: "Artom Lifshitz"
> > To: "OpenStack Development Mailing List (not for usage questions)" <
We have had this discussion several times in the past for other reasons.
The reality is that some people will never deploy the metadata API, so I
feel like we need a better solution than what we have now.
However, I would consider it probably unsafe for the hypervisor to read the
At a previous employer we had a policy that all passwords started with "/"
because of the sheer number of times someone typed the root password into a
public IRC channel.
On Thu, Feb 9, 2017 at 10:04 AM, Jay Pipes wrote:
> On 02/08/2017 03:36 PM, Kendall Nelson
What version of nova is tripleo using here? This won't work quite right if
you're using Mitaka until https://review.openstack.org/#/c/427547/ lands
and is released.
Also, I didn't know novajoin existed and am pleased to have discovered it.
On Fri, Feb 3, 2017 at 11:27 AM, Juan Antonio
I think #3 is the right call for now. The person we had working on privsep
has left the company, and I don't have anyone I could get to work on this
right now. Oh, and we're out of time.
On Thu, Jan 26, 2017 at 3:49 PM, Matt Riedemann wrote:
> The patch to add
Jan 3, 2017 19:29, "Matt Riedemann" <mrie...@linux.vnet.ibm.com> wrote:
> On 1/3/2017 8:48 PM, Michael Still wrote:
>> Our python3 tests hate my exception handling for continued
>> vendordata implementation.
Our python3 tests hate my exception handling for continued vendordata
Basically, it goes a bit like this -- I need to move from using requests to
keystoneauth1 for external vendordata requests. This is because we're
adding support for sending keystone headers with
I'd be remiss if I didn't point out that the nova LXC driver is much better
supported than the nova-docker driver.
On Thu, Dec 29, 2016 at 8:01 PM, Esra Celik
> Hi Sam,
> nova-lxc is not recommended in production. And LXD is built on top of
radar was an antique effort to import some outside-OpenStack code that did
CI reliability dashboarding. It was never really a thing, and has been
abandoned over time.
The last commit that wasn't part of a project wide change series was in
Does anyone object to me following the
+1, I'd value him on the team.
On Sat, Dec 3, 2016 at 2:22 AM, Matt Riedemann
> I'm proposing that we add Stephen Finucane to the nova-core team. Stephen
> has been involved with nova for at least around a year now, maybe longer,
> my ability to tell
On Mon, Nov 28, 2016 at 4:37 PM, Jay Pipes wrote:
> I don't see any compelling reason not to work with the Nova and Ironic
> projects and add the functionality you wish to see in those respective
Jay, I agree and I don't. First off, I think improving
This is a good summary, thanks. I finally uploaded the spec which describes
the decisions from the summit. It's here:
On Thu, Nov 10, 2016 at 7:11 AM, Matt Riedemann <mrie...@linux.vnet.ibm.com>
> Michael Still led a session on c
I've been asked to let you all know about a single day OpenStack conference
in Canberra that's coming up in a few weeks. The event is being run by the
OpenStack Foundation along with the various meetup organizers.
The conference is on Monday 14 November and has two tracks -- a management
Doh, and I'd already done one cleanup patch against it.
On Fri, Oct 28, 2016 at 1:24 PM, Devdatta Kulkarni <
> Hi Steve,
> Your observation is correct.
> Solum team had created solum-infra-guestagent repository to investigate
> the idea of a build
So, it's good that you're working on third party CI, but I see that as a
blocker before we can start this conversation -- we need to have a solid
history there before we can do much. You also don't mention (that I can
find) where the code to the driver is. Can I have a pointer to that please?
There is a bit of a wish list of things people want in metadata for Ocata
at https://etherpad.openstack.org/p/ocata-nova-metadata-wishlist -- it
might be worth adding your requirements and a link to your spec there?
On Sat, Oct 8, 2016 at 5:04 AM, Bence Romsics
On Thu, Sep 1, 2016 at 11:58 AM, Adam Young <ayo...@redhat.com> wrote:
> On 08/31/2016 07:56 AM, Michael Still wrote:
> There is a quick sketch of what a service account might look like at
> https://review.openstack.org/#/c/363606/ -- I need to do some more
> On 8/30/2016 4:36 PM, Michael Still wrote:
>> Sorry for being slow on this one, I've been pulled into some internal
>> things at work.
>> So... Talking to Matt Riedemann just now, it seems like we should
Sorry for being slow on this one, I've been pulled into some internal
things at work.
So... Talking to Matt Riedemann just now, it seems like we should continue
to pass through the user authentication details when we have them to the
plugin. The problem is what to do in the case where we do not
It's a shame so many are -2'ed. There is a lot there I could have merged
yesterday if it wasn't for that.
On Mon, Aug 22, 2016 at 9:00 PM, Sean Dague <s...@dague.net> wrote:
> On 08/22/2016 12:10 AM, Michael Still wrote:
>> So, if this is about preserving CI time, th
So, if this is about preserving CI time, then it's cool for me to merge
these on a US Sunday when the gate is otherwise idle, right?
On Fri, Aug 19, 2016 at 7:02 AM, Sean Dague <s...@dague.net> wrote:
> On 08/18/2016 04:46 PM, Michael Still wrote:
> > We're still ok with m
We're still ok with merging existing ones though?
On Fri, Aug 19, 2016 at 5:18 AM, Jay Pipes wrote:
> Roger that.
> On 08/18/2016 11:48 AM, Matt Riedemann wrote:
>> We have a lot of open changes for the centralize / cleanup config option
On Fri, Aug 19, 2016 at 1:00 AM, Matt Riedemann
> It's that time of year again to talk about killing this job, at least from
> the integrated gate (move it to experimental for people that care about
> postgresql, or make it gating on a smaller subset of
On Thu, Aug 11, 2016 at 7:38 AM, Doug Hellmann
> Excerpts from Michael Still's message of 2016-08-11 07:27:07 +1000:
> > On Thu, Aug 11, 2016 at 7:12 AM, Doug Hellmann
> > wrote:
> > > Excerpts from Michael Still's message of 2016-08-11
On Thu, Aug 11, 2016 at 7:12 AM, Doug Hellmann
> Excerpts from Michael Still's message of 2016-08-11 07:01:37 +1000:
> > On Thu, Aug 11, 2016 at 2:24 AM, Doug Hellmann
> > wrote:
> > > It's time to make sure we have all of our active
On Thu, Aug 11, 2016 at 2:24 AM, Doug Hellmann
> It's time to make sure we have all of our active technical contributors
> (ATCs) identified for Newton.
> Following the Foundation bylaws  and TC Charter , Project
> teams should identify contributors who have
On Tue, Jul 26, 2016 at 4:44 PM, Fox, Kevin M wrote:
> The issue is, as I see it, a parallel activity to one of the ones that is
> currently accepted into the Big Tent, aka Containerized Deployment
This seems to be the crux of the matter as best as I can tell. Is
On 16 Jul 2016 1:27 PM, "Thomas Herve" wrote:
> On Fri, Jul 15, 2016 at 8:36 PM, Fox, Kevin M wrote:
> > Some specific things:
> > Magnum trying to not use Barbican as it adds an addition dependency.
See the discussion on the devel mailing list for
So, is now a good time to mention that "Quamby" is the name of a local
On Fri, Jul 15, 2016 at 7:50 PM, Eoghan Glynn wrote:
> > (top posting on purpose)
> > I have re-started the Q poll and am slowly adding all of you fine folks
> > to it. Let's
On Wed, Jun 22, 2016 at 11:13 PM, Sean Dague <s...@dague.net> wrote:
> On 06/22/2016 09:03 AM, Matt Riedemann wrote:
> > On 6/21/2016 12:53 AM, Michael Still wrote:
> >> So, https://review.openstack.org/#/c/317739 is basically done I think.
> >> I'm after people's
So, https://review.openstack.org/#/c/317739 is basically done I think. I'm
after people's thoughts on:
- I need to do some more things, as described in the commit message. Are
we ok with them being in later patches to get reviews moving on this?
- I'm unsure what level of tempest testing makes
On Fri, Jun 10, 2016 at 7:18 AM, Tony Breeds
> On Wed, Jun 08, 2016 at 08:10:47PM -0500, Matt Riedemann wrote:
> > Agreed, but it's the worked example part that we don't have yet,
> > chicken/egg. So we can drop the hammer on all new things until someone
On Thu, Jun 9, 2016 at 7:10 AM, Matt Riedemann
> While sitting in Angus' cross-project session on oslo.privsep at the
> Austin summit I believe I had a conversation with myself in my head that
> Nova should stop adding new rootwrap filters and anything
On Tue, Jun 7, 2016 at 7:41 AM, Clif Houck wrote:
> Hello all,
> At Rackspace we're running into an interesting problem: Consider a user
> who boots an instance in Nova with an image which only supports SSH
> public-key authentication, but the user doesn't provide a public
I've always done it manually by eyeballing the review, but the script is
On 27 May 2016 8:42 PM, "Sean Dague" <s...@dague.net> wrote:
> On 05/27/2016 05:36 AM, Michael Still wrote:
> > Hi,
> > I've spent some time today aband
I've spent some time today abandoning old reviews from the Nova queue.
Specifically, anything which hadn't been updated before February this year
has been abandoned with a message like this:
"This patch has been idle for a long time, so I am abandoning it to keep
the review queue sane. If
On Tue, May 24, 2016 at 11:42 PM, Muneeb Ahmad
> If not, can I add it's support? any ideas how can I do that?
> On Sat, May 21, 2016 at 10:23 PM, Muneeb Ahmad
>> Hey guys,
>> Does OpenStack support Xvisor?
On Tue, May 10, 2016 at 2:12 AM, Markus Zoeller wrote:
> We're close to have all options moved to "nova/conf/". At the bottom
> is a list of the remaining options and their open reviews.
> The documentation of the options in "nova/conf/" is done for ~ 150
> options. Which
On Fri, May 6, 2016 at 12:50 AM, Matthew Booth wrote:
> I mentioned in the meeting last Tuesday that there are now 2 of us working
> on the persistent storage metadata patches: myself and Diana Clarke. I've
> also been talking to Paul Carlton today trying to work out how he
I can't think of a reason. In fact it's a bit warty because we've changed
the way we name the instance directories at least once. It's just how this
code was written back in the day.
Cleaning this up would be a fair bit of work though. Is it really worth the
effort just so people can have different
On Wed, May 4, 2016 at 11:03 AM, Davanum Srinivas wrote:
> The stackalytics bots do not have access to gerrit at the moment. We
> noticed it last friday and talked to infra folks:
The instance of stackalytics run by the openstack-infra team seems to be
gummed up. It alleges that the last time there was a nova code review was
April 17, which seems... unlikely.
Who looks after this thing so I can ping them gently?
On Sun, May 1, 2016 at 10:27 PM, ZhiQiang Fan wrote:
> Hi Nova cores,
> There is a spec submitted to Telemetry project for Newton release,
> mentioned that a new feature requires libvirt >= 1.3.4 , I'm not sure if
> this will have bad impact to Nova service, so I open
On 12 Apr 2016 12:19 AM, "Sean Dague" wrote:
> On 04/11/2016 10:08 AM, Ed Leafe wrote:
> > On 04/11/2016 08:38 AM, Julien Danjou wrote:
> >> There's a lot of assumption in oslo.log about Nova, such as talking
> >> about "instance" and "context" in a lot of the code by
On Tue, Apr 12, 2016 at 6:49 AM, Matt Riedemann
> A few people have been asking about planning for the nova midcycle for
> newton. Looking at the schedule  I'm thinking weeks R-15 or R-11 work
> the best. R-14 is close to the US July 4th holiday, R-13 is
On Wed, Apr 6, 2016 at 7:28 AM, Ian Cordasco <sigmaviru...@gmail.com> wrote:
> -Original Message-
> From: Michael Still <mi...@stillhq.com>
> Reply: OpenStack Development Mailing List (not for usage questions) <
As a recent newcomer to using our client libraries, my only real objection
to this plan is that our client libraries are a mess. The interfaces
we expect users to use are quite different for basic things like initial
auth between the various clients, and by introducing another library we
I normally do this in one big batch, but haven't had a chance yet. I'll do
that later this week.
On 17 Mar 2016 7:50 AM, "Matt Riedemann" wrote:
> Specs are proposed to the 'approved' subdirectory and when they are
> completely implemented in launchpad (the
On Sun, Feb 7, 2016 at 8:07 PM, Jay Pipes wrote:
> Many contributors submit talks to speak at the conference part of an
> OpenStack Summit because their company says it's the only way they will pay
> for them to attend the design summit. This is, IMHO, a terrible
On Mon, Feb 8, 2016 at 1:51 AM, Monty Taylor wrote:
> Fifth - if we do this, the real need for the mid-cycles we currently have
> probably goes away since the summit week can be a legit wall-to-wall work
Another reply to a specific point...
I know it's late to ask, but what is the parking situation at the office? Is
driving reasonable as a plan, or should we walk from the Holiday Inn?
On 25 Jan 2016 4:58 PM, "Murray, Paul (HP Cloud)" wrote:
> See updated event detail information for the mid-cycle at:
I am not aware of anyone working on this. That said, it's also not clear to
me that this is actually a good idea. Why can't you just loop through the
instances and delete them one at a time?
On Wed, Jan 20, 2016 at 12:08 AM, vishal yadav
> Hey guys,