Re: [openstack-dev] [watcher] Stepping down as Watcher spec core

2017-08-15 Thread Joe Cropper
Antoine,

It’s been a pleasure getting to work alongside you in the open source 
community.  Your leadership was paramount to the project and I wish you all the 
best in your next chapter!

Best,
Joe (jwcroppe)


> On Aug 15, 2017, at 1:26 PM, Susanne Balle  wrote:
> 
> Thanks for all the hard work. best of luck, 
> 
> Susanne
> 
> On Fri, Jul 21, 2017 at 3:48 AM, Чадин Александр (Alexander Chadin) wrote:
> Antoine,
> 
> Congratulations on this new step in your life!
> You set a high bar for project management, and it is a big honour for me to 
> try to live up to it.
> Hope to see you in Vancouver!
> 
> Best Regards,
> _
> Alexander Chadin
> OpenStack Developer
> 
>> On 21 Jul 2017, at 03:44, Hidekazu Nakamura wrote:
>> 
>> Hi Antoine,
>> 
>> I am grateful for the support you have given me since I started contributing to Watcher.
>> Thanks to you, I am now contributing to Watcher actively. 
>> 
>> I wish you a happy life and a successful career.
>> 
>> Hidekazu Nakamura
>> 
>> 
>>> -----Original Message-----
>>> From: Antoine Cabot [mailto:antoinecabo...@gmail.com]
>>> Sent: Thursday, July 20, 2017 6:35 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: [openstack-dev] [watcher] Stepping down as Watcher spec core
>>> 
>>> Hey guys,
>>> 
>>> It's been a long time since the last summit and our last discussions!
>>> I hope Watcher is going well and that you are getting more traction
>>> every day in the OpenStack community!
>>> 
>>> As you may guess, my last 2 months have been very busy with my
>>> relocation to Vancouver with my family. After 8 weeks of actively
>>> searching for a job in the cloud industry here in Vancouver, I've landed
>>> a Senior Product Manager position at Parsable, a start-up leading the
>>> Industry 4.0 revolution. I will continue to work with very large
>>> customers, but in different industries (Oil & Gas, Manufacturing...), to
>>> build the best possible product, leveraging cloud and mobile technologies.
>>> 
>>> It was a great pleasure to lead the Watcher initiative from its
>>> infancy to the OpenStack Big Tent and to be able to work with all of you.
>>> I hope to be part of another open source community in the near future,
>>> but for now, given my new responsibilities, I need to step down as a core
>>> contributor to Watcher specs. Feel free to reach out to me if I
>>> still hold restricted rights on Launchpad or anywhere else.
>>> 
>>> I hope to see you all in Vancouver next year for the summit and be
>>> part of the traditional Watcher dinner (I will try to find the best
>>> place for you guys).
>>> 
>>> Cheers,
>>> 
>>> Antoine
>>> 
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Watcher] Nominating Prudhvi Rao Shedimbi to Watcher Core

2017-02-14 Thread Joe Cropper
+1 !

> On Feb 14, 2017, at 4:05 AM, Vincent FRANÇOISE wrote:
> 
> Team,
> 
> I would like to promote Prudhvi Rao Shedimbi to the core team. He's done
> great work reviewing many patchsets [1] and I believe that he has a
> good vision of Watcher as a whole.
> 
> I think he would make an excellent addition to the team.
> 
> Please vote
> 
> [1] http://stackalytics.com/report/contribution/watcher/90
> 
> Vincent FRANCOISE
> B<>COM
> 


Re: [openstack-dev] [watcher] Nominate Prashanth Hari as core for watcher-specs

2016-08-16 Thread Joe Cropper
+1

> On Aug 16, 2016, at 11:59 AM, Jean-Émile DARTOIS wrote:
> 
> +1
> 
> Jean-Emile
> DARTOIS
> 
> {P} Software Engineer
> Cloud Computing
> 
> {T} +33 (0) 2 56 35 8260
> {W} www.b-com.com
> 
> 
> From: Antoine Cabot 
> Sent: Tuesday, August 16, 2016 4:57 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [watcher] Nominate Prashanth Hari as core for watcher-specs
> 
> Hi Watcher team,
> 
> I'd like to nominate Prashanth Hari as a core contributor for watcher-specs.
> As a potential end user of Watcher, Prashanth has given us a lot of good
> feedback since the Mitaka release.
> Please vote for his candidacy.
> 
> Antoine
> 


Re: [openstack-dev] [watcher] Stepping down from core

2016-08-03 Thread Joe Cropper
Thanks Taylor… thanks so much for your contributions!

Best,
Joe

> On Aug 3, 2016, at 11:32 AM, Taylor D Peoples  wrote:
> 
> Hi all,
> 
> I'm stepping down from Watcher core and will be leaving the OpenStack 
> community to pursue other opportunities.
> 
> I wasn't able to contribute to Watcher anywhere near as much as I had 
> hoped, but I have enjoyed the little that I was able to contribute.
> 
> Good luck to the Watcher team going forward.
> 
> 
> Best,
> Taylor Peoples


Re: [openstack-dev] [watcher] Mascot final choice

2016-07-28 Thread Joe Cropper
+2 to Jellyfish!

> On Jul 28, 2016, at 4:08 PM, Antoine Cabot  wrote:
> 
> Hi Watcher team,
> 
> Last week during the mid-cycle, we came up with a list of possible mascots 
> for Watcher. The only one in conflict with other projects is the bee. 
> So we have this final list:
> 1. Jellyfish
> 2. Eagle
> 3. Hammerhead shark
> 
> I'm going to confirm the jellyfish as the Watcher mascot by EOW unless any 
> contributor objects to this choice. Please let me know.
> 
> Antoine


Re: [openstack-dev] [watcher] Watcher meetings in Austin

2016-04-25 Thread Joe Cropper
Hi Watcher team,

Resending this in case folks missed it.

Thanks,

Joe (jwcroppe)


> On Apr 22, 2016, at 4:37 AM, Antoine Cabot  wrote:
> 
> Hi Watcher team,
> 
> We will have a couple of meetings next week to discuss Watcher
> achievements during the Mitaka cycle and define priorities for Newton.
> 
> Meetings will be held in the developer lounge in Austin
> https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads#Watcher
> 
> A proposed open agenda is available
> https://etherpad.openstack.org/p/watcher-newton-design-session
> 
> If you want to meet with Watcher team and discuss your use cases,
> feel free to join us and add your discussion topic to the agenda.
> 
> Thanks,
> 
> Antoine (acabot)
> 


Re: [openstack-dev] [Watcher] Nominating Vincent Francoise to Watcher Core

2016-02-17 Thread Joe Cropper
+1

> On Feb 17, 2016, at 8:05 AM, David TARDIVEL  wrote:
> 
> Team,
> 
> I’d like to promote Vincent Francoise to the core team. Vincent has done 
> great work on code review and has proposed a lot of patchsets. He is 
> currently the most active non-core reviewer on the Watcher project, and 
> today he has a very good vision of Watcher. 
> I think he would make an excellent addition to the team.
> 
> Please vote
> 
> David TARDIVEL
> b<>COM
> 


[openstack-dev] [watcher] Travel logistics for mid-cycle in Austin

2016-01-28 Thread Joe Cropper
Hi Watcher Team,

For those of you attending the Watcher mid-cycle next week in Austin 
(Tue-Thurs), please take a quick look at the travel details [1] — particularly 
the bit about where to “check in” each day once you arrive at the IBM campus.

As another FYI, light breakfast foods, lunch and an afternoon snack will be 
served daily.

Looking forward to seeing everyone next week!

[1] 
https://wiki.openstack.org/wiki/Watcher_mitaka_mid-cycle_meetup_agenda#Travel

Thanks,
Joe


Re: [openstack-dev] [tc] [watcher] New project: Watcher

2015-11-02 Thread Joe Cropper

> On Nov 2, 2015, at 4:50 AM, Thierry Carrez  wrote:
> 
> Antoine CABOT wrote:
>> We are pleased to introduce Watcher, a new project in the OpenStack 
>> ecosystem. We believe that a "resource optimization" service in an 
>> OpenStack-based cloud is a crucial component that has been missing to 
>> date and we have started working towards filling that gap.
>> 
>> OpenStack Watcher provides a flexible and scalable resource 
>> optimization service for multi-tenant OpenStack-based clouds. 
>> [...]
> 
> Please note that this project is not an official OpenStack project yet.
> In particular, it's not been approved by the Technical Committee to join
> the "big tent" yet, and it was actually never proposed there.
> 
> If you intend to propose this new project for inclusion, please propose
> an openstack/governance change to that effect. You can find the current
> project requirements at:
> 
> http://governance.openstack.org/reference/new-projects-requirements.html
> 
> In the mean time, please refrain from calling your project "OpenStack
> Watcher". Thanks in advance!

No problem, Thierry!  Our apologies for the terminology hiccup.  We do intend 
to propose it as part of the “big tent” in the coming months—the project’s wiki 
and source code repository were just recently configured on openstack.org and 
we’re actively working through the checklist of items you referenced.  :-)

Regards,
Joe (jwcroppe)

> 
> -- 
> Thierry Carrez (ttx)
> 


Re: [openstack-dev] questions about nova compute monitors extensions

2015-09-12 Thread Joe Cropper
The new framework does indeed support user-defined monitors.  You just extend 
whatever monitor you’d like (e.g., nova.compute.monitors.cpu.virt_driver.Monitor) 
and add your customized logic.  And since the new framework uses 
stevedore-based extension points, you just need to be sure to add the 
appropriate entry to your project’s setup.py file (or entry_points.txt in your 
egg) so that stevedore can load it properly.
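For anyone new to the plugin mechanics, here is a minimal, self-contained sketch of the pattern described above. The nova base class is stubbed out so the snippet runs standalone, and the entry-point namespace and names shown in the comment are illustrative, not guaranteed to match your tree:

```python
# Sketch of wiring a custom compute monitor through entry points.
# VirtDriverCPUMonitor stands in for the real nova base class
# (nova.compute.monitors.cpu.virt_driver.Monitor) so this runs standalone.

class VirtDriverCPUMonitor:
    def get_metrics(self):
        # Pretend data pulled from the hypervisor.
        return {"cpu.percent": 12}

class MyCPUMonitor(VirtDriverCPUMonitor):
    """Custom monitor: extend the base class and add your own logic."""
    def get_metrics(self):
        metrics = super().get_metrics()
        # Add a custom, derived metric on top of the base data.
        metrics["cpu.weighted.percent"] = metrics["cpu.percent"] * 2
        return metrics

# In your project's setup.cfg you would then register the class so stevedore
# can discover it (namespace and name below are illustrative):
#
#   [entry_points]
#   nova.compute.monitors.cpu =
#       mycpu = mypackage.monitors:MyCPUMonitor
#
# stevedore loads everything registered under the namespace, which is why no
# nova source changes are needed.

if __name__ == "__main__":
    print(MyCPUMonitor().get_metrics())
```

The key design point is that discovery happens through the Python packaging metadata, so the custom monitor lives entirely in your own package.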

Hope this helps!

Thanks,
Joe
> On Sep 10, 2015, at 2:52 AM, Hou Gang HG Liu  wrote:
> 
> Hi all, 
> 
> I notice that the nova compute monitor now only tries to load monitors from the 
> namespace "nova.compute.monitors.cpu", and only one monitor per namespace can be 
> enabled 
> (https://review.openstack.org/#/c/209499/6/nova/compute/monitors/__init__.py).
> 
> Is there a plan to make MonitorHandler.NAMESPACES configurable, or will it 
> remain a hard-coded constraint as it is now? And how can the compute monitor 
> support user-defined monitors as it did before? 
> 
> Thanks! 
> B.R 
> 
> Hougang Liu (刘侯刚) 
> Developer - IBM Platform Resource Scheduler 
> Systems and Technology Group 
> 
> Mobile: 86-13519121974 | Phone: 86-29-68797023 | Tie-Line: 87023 
> E-mail: liuh...@cn.ibm.com 
> 3F, Zhongqing Building, No. 42 Gaoxin 6th Road, Xi'an, Shaanxi Province 710075, China
> 
> 


[openstack-dev] [nova] Intended behavior for instance.host on reschedule?

2015-03-03 Thread Joe Cropper
Hi Folks,

I was wondering if anyone can comment on the intended behavior of how 
instance.host is supposed to be set during reschedule operations.  For 
example, take this scenario:

1. Assume an environment with a single host… call it host-1
2. Deploy a VM, but force an exception in the spawn path somewhere to simulate 
some hypervisor error
3. The scheduler correctly attempts to reschedule the VM, and ultimately ends 
up (correctly) with a NoValidHost error because there was only 1 host
4. However, the instance.host (e.g., [nova show vm]) is still showing 
‘host-1’ — is this the expected behavior?

It seems like perhaps the claim should be reverted (read: instance.host nulled 
out) when we take the exception path during spawn in step #2 above, but maybe 
I’m overlooking something?  This behavior was observed on a Kilo base from a 
couple weeks ago, FWIW.
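To make the suggestion concrete, here is a hedged sketch of the kind of cleanup being described. The class and function names are illustrative, not the actual nova code path: on the spawn-failure branch, the host fields are nulled out before the VM goes back to the scheduler, so the stale claim stops counting against host-1:

```python
# Illustrative sketch: revert the "claim" by clearing instance.host when
# spawn fails, instead of leaving the failed host recorded on the instance.

class Instance:
    def __init__(self, host):
        self.host = host
        self.node = None

def spawn(instance):
    # Step #2 in the scenario above: a forced hypervisor error.
    raise RuntimeError("simulated hypervisor error")

def build_with_cleanup(instance):
    try:
        spawn(instance)
    except RuntimeError:
        # Revert on the exception path: null out the host/node so the
        # resource tracker frees the room on the failed host before the
        # request is rescheduled (or ends in NoValidHost).
        instance.host = None
        instance.node = None

vm = Instance(host="host-1")
build_with_cleanup(vm)
print(vm.host)  # None, instead of the stale "host-1"
```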

Thoughts/comments?

Thanks,
Joe


Re: [openstack-dev] [nova] Intended behavior for instance.host on reschedule?

2015-03-03 Thread Joe Cropper

 On Mar 3, 2015, at 8:34 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:
 
 I’m pretty sure it has always done this: leave the host set on the final 
 scheduling attempt. I agree that this could be cleared which would free up 
 room for future scheduling attempts.
 

Thanks, Vish, for the comment.  Do we know if this is an intended feature or 
would we consider it a bug?  It seems like we could free this up, as you 
said, to allow room for additional VMs, especially since we know the VM didn’t 
successfully deploy anyway.

 Vish
 
 On Mar 3, 2015, at 12:15 AM, Joe Cropper cropper@gmail.com wrote:
 
 Hi Folks,
 
 I was wondering if anyone can comment on the intended behavior of how 
 instance.host is supported to be set during reschedule operations.  For 
 example, take this scenario:
 
 1. Assume an environment with a single host… call it host-1
 2. Deploy a VM, but force an exception in the spawn path somewhere to 
 simulate some hypervisor error
 3. The scheduler correctly attempts to reschedule the VM, and ultimately 
 ends up (correctly) with a NoValidHost error because there was only 1 host
 4. However, the instance.host (e.g., [nova show vm]) is still showing 
 ‘host-1’ — is this the expected behavior?
 
 It seems like perhaps the claim should be reverted (read: instance.host 
 nulled out) when we take the exception path during spawn in step #2 above, 
 but maybe I’m overlooking something?  This behavior was observed on a Kilo 
 base from a couple weeks ago, FWIW.
 
 Thoughts/comments?
 
 Thanks,
 Joe


Re: [openstack-dev] [nova] Intended behavior for instance.host on reschedule?

2015-03-03 Thread Joe Cropper
Logged a bug [1] and submitted a fix [2].  Review away!

[1] https://bugs.launchpad.net/nova/+bug/1427944
[2] https://review.openstack.org/#/c/161069/

- Joe


 On Mar 3, 2015, at 4:42 PM, Jay Pipes jaypi...@gmail.com wrote:
 
 On 03/03/2015 06:55 AM, Joe Cropper wrote:
 On Mar 3, 2015, at 8:34 AM, Vishvananda Ishaya
 vishvana...@gmail.com wrote:
 
 I’m pretty sure it has always done this: leave the host set on the
 final scheduling attempt. I agree that this could be cleared which
 would free up room for future scheduling attempts.
 
 Thanks Vish for the comment.  Do we know if this is an intended
 feature or would we consider this a bug?  It seems like we could free
 this up, as you said, to allow room for additional VMs, especially
 since we know it didn’t successfully deploy anyway?
 
 Seems like a bug to me. Feel free to create one in Launchpad and we'll get on 
 it.
 
 Best,
 -jay
 


Re: [openstack-dev] [nova] Question about force_host skip filters

2015-02-17 Thread Joe Cropper
+1 to using a filter property to indicate whether the filter needs to be run on 
force_hosts.  As others have said, there are certain cases that need to be 
checked even if the admin is trying to intentionally place a VM somewhere such 
that we can fail early vs. letting the hypervisor blow up on the request in the 
future (i.e., to help prevent the user from stepping on their own toes).  :-)
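The idea can be sketched as a per-filter class attribute; the attribute name below is hypothetical, not an actual nova property, and the filters are simplified stand-ins:

```python
# Sketch of the "filter property" idea: each filter declares whether it must
# still run when the admin forces a host. Attribute name is hypothetical.

class BaseFilter:
    run_on_force_hosts = False          # most filters are skipped for forced hosts

    def host_passes(self, host, request):
        return True

class AggregateFilter(BaseFilter):
    # Placement policy the admin de-facto overrides by naming a host.
    pass

class PCIFilter(BaseFilter):
    run_on_force_hosts = True           # resource checks must always run
    def host_passes(self, host, request):
        return host["pci_free"] >= request["pci"]

def host_passes_filters(host, request, filters, forced=False):
    # On a forced host, only run the filters flagged as mandatory.
    active = [f for f in filters if not forced or f.run_on_force_hosts]
    return all(f.host_passes(host, request) for f in active)

filters = [AggregateFilter(), PCIFilter()]
# A forced host still fails early on PCI instead of blowing up at spawn time:
print(host_passes_filters({"pci_free": 0}, {"pci": 1}, filters, forced=True))  # False
print(host_passes_filters({"pci_free": 2}, {"pci": 1}, filters, forced=True))  # True
```

Making the flag an attribute of the filter class (rather than a list in config) means new filters pick a sensible default without anyone having to remember a second registration step.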

Along these lines—dare I bring up the topic of providing an enhanced mechanism 
to determine which filter(s) contributed to NoValidHost exceptions?  Do others 
ever hear about operators getting this, and then having no idea why a VM deploy 
failed?  This is likely another thread, but thought I’d pose it here to see if 
we think this might be a potential blueprint as well.

- Joe

 On Feb 17, 2015, at 10:20 AM, Nikola Đipanov ndipa...@redhat.com wrote:
 
 On 02/17/2015 04:59 PM, Chris Friesen wrote:
 On 02/16/2015 01:17 AM, Nikola Đipanov wrote:
 On 02/14/2015 08:25 AM, Alex Xu wrote:
 
 Agree with Nikola, the claim is already checking that, and instance booting
 must fail if there isn't a PCI device. But I still think it should go
 through the filters, because in the future we may move the claim into
 the scheduler. And we don't need any new options; I didn't see any
 behavior change.
 
 
 I think that it's not as simple as just re-running all the filters. When
 we want to force a host - there are certain things we may want to
 disregard (like aggregates? affinity?) that the admin de-facto overrides
 by saying they want a specific host, and there are things we definitely
 need to re-run to set the limits and for the request to even make sense
 (like NUMA, PCI, maybe some others).
 
 So what I am thinking is that we need a subset of filters that we flag
 as - we need to re-run this even for force-host, and then run them on
 every request.
 
 Yeah, that makes sense.  Also, I think that flag should be an attribute
 of the filter itself, so that people adding new filters don't need to
 also add the filter to a list somewhere.
 
 
 This is basically what I had in mind - definitely a filter property!
 
 N.
 
 


Re: [openstack-dev] [nova] review closure for nova blueprint review.openstack.org/#/c/140133/

2015-01-05 Thread Joe Cropper
Prefixing subject with [nova] so folks’ mail rules catch this.

- Joe

 On Jan 5, 2015, at 1:47 PM, Kenneth Burger burg...@us.ibm.com wrote:
 
 Hi, I am trying to get approval on this nova blueprint: 
 https://review.openstack.org/#/c/140133/.  There was a +2 from Michael 
 Still (twice in prior patches) and a +1 from Jay Bryant from a Cinder 
 perspective.  The only change since the patches that received the +2 was 
 a directory change of the spec location in the repository.
 
 Is it still possible to get approval for this blueprint?
 
 Thanks,
 Ken Burger
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [qa] host aggregate's availability zone

2014-12-21 Thread Joe Cropper
Did you enable the AvailabilityZoneFilter in the nova.conf that the scheduler uses? 
 And enable the FilterScheduler?  These are two common causes of this kind of issue.
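For reference, a nova.conf fragment along these lines might look like the following (option names as they existed in this era; the filter list is illustrative and a real deployment would normally keep the other default filters too):

```ini
[DEFAULT]
# Use the filter scheduler so scheduler filters are applied at all.
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
# AvailabilityZoneFilter is what honors --availability-zone on boot.
scheduler_default_filters = AvailabilityZoneFilter,RamFilter,ComputeFilter
```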

- Joe

 On Dec 21, 2014, at 10:28 AM, Danny Choi (dannchoi) dannc...@cisco.com wrote:
 
 Hi,
 
 I have a multi-node setup with 2 compute hosts, qa5 and qa6.
 
 I created 2 host aggregates, each with its own availability zone, and assigned 
 one compute host to each:
 
 localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-1
 +----+-----------------------+-------------------+-------+--------------------------+
 | Id | Name                  | Availability Zone | Hosts | Metadata                 |
 +----+-----------------------+-------------------+-------+--------------------------+
 | 9  | host-aggregate-zone-1 | az-1              | 'qa5' | 'availability_zone=az-1' |
 +----+-----------------------+-------------------+-------+--------------------------+
 localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-2
 +----+-----------------------+-------------------+-------+--------------------------+
 | Id | Name                  | Availability Zone | Hosts | Metadata                 |
 +----+-----------------------+-------------------+-------+--------------------------+
 | 10 | host-aggregate-zone-2 | az-2              | 'qa6' | 'availability_zone=az-2' |
 +----+-----------------------+-------------------+-------+--------------------------+
 
 My intent is to control which compute host a VM is launched on via the 
 host aggregate’s availability-zone parameter.
 
 To test, I specify --availability-zone=az-1 for vm-1 and 
 --availability-zone=az-2 for vm-2:
 
 localadmin@qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 
 1 --nic net-id=5da9d715-19fd-47c7-9710-e395b5b90442 --availability-zone az-1 
 vm-1
 +--------------------------------------+----------------------------------------------------------------+
 | Property                             | Value                                                          |
 +--------------------------------------+----------------------------------------------------------------+
 | OS-DCF:diskConfig                    | MANUAL                                                         |
 | OS-EXT-AZ:availability_zone          | nova                                                           |
 | OS-EXT-SRV-ATTR:host                 | -                                                              |
 | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
 | OS-EXT-SRV-ATTR:instance_name        | instance-0066                                                  |
 | OS-EXT-STS:power_state               | 0                                                              |
 | OS-EXT-STS:task_state                | -                                                              |
 | OS-EXT-STS:vm_state                  | building                                                       |
 | OS-SRV-USG:launched_at               | -                                                              |
 | OS-SRV-USG:terminated_at             | -                                                              |
 | accessIPv4                           |                                                                |
 | accessIPv6                           |                                                                |
 | adminPass                            | kxot3ZBZcBH6                                                   |
 | config_drive                         |                                                                |
 | created                              | 2014-12-21T15:59:03Z                                           |
 | flavor                               | m1.tiny (1)                                                    |
 | hostId                               |                                                                |
 | id                                   | 854acae9-b718-4ea5-bc28-e0bc46378b60                           |
 | image                                | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) |
 | key_name                             | -                                                              |
 | metadata                             | {}                                                             |
 | name                                 | vm-1                                                           |
 | os-extended-volumes:volumes_attached | []                                                             |
 | progress                             | 0                                                              |
 | security_groups                      | default                                                        |
 | status                               | BUILD                                                          |
 | tenant_id

Re: [openstack-dev] [All] Maintenance mode in OpenStack during patching/upgrades

2014-10-17 Thread Joe Cropper
I’m glad to see this topic getting some focus once again.  :-)

From several of the administrators I talk with, when they think of putting a 
host into maintenance mode, the common requests I hear are:

1. Don’t schedule more VMs to the host
2. Provide an optional way to automatically migrate all (usually active) VMs 
off the host so that users’ workloads remain “unaffected” by the maintenance 
operation

#1 can easily be achieved, as has been mentioned several times, by simply 
disabling the compute service.  However, #2 involves a little more work, 
although certainly possible using all the operations provided by nova today 
(e.g., live migration, etc.).  I believe these types of discussions have come 
up several times over the past several OpenStack releases—certainly since 
Grizzly (i.e., when I started watching this space).

It seems that the general direction is to have the type of workflow needed for 
#2 outside of nova (which is certainly a valid stance).  To that end, it would 
be fairly straightforward to build some code that logically sits on top of 
nova, that when entering maintenance:

1. Prevents VMs from being scheduled to the host;
2. Maintains state about the maintenance operation (e.g., not in maintenance, 
migrations in progress, in maintenance, or error);
3. Provides mechanisms to, upon entering maintenance, dictates which VMs 
(active, all, none) to migrate and provides some throttling capabilities to 
prevent hundreds of parallel migrations on densely packed hosts (all done via a 
REST API).
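Step 3's throttling idea can be sketched in a few lines. The migrate() function below is a stand-in for nova's live-migration call, not a real API, and the bounded thread pool plays the role of the throttle:

```python
# Sketch of throttled host draining: migrate a host's VMs a few at a time
# rather than launching hundreds of parallel migrations on a dense host.

from concurrent.futures import ThreadPoolExecutor

def migrate(vm):
    # Placeholder for the real live-migration call
    # (e.g., novaclient's servers.live_migrate(...)).
    return f"{vm} migrated"

def drain_host(vms, max_parallel=2):
    # max_workers bounds how many migrations are in flight at once;
    # pool.map preserves the input order of the VMs.
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(migrate, vms))

# Densely packed host: 6 active VMs, migrated at most 2 at a time.
print(drain_host([f"vm-{i}" for i in range(6)]))
```

A real workflow layered on top of nova would also track the per-host maintenance state (not in maintenance, migrations in progress, in maintenance, error) and surface it through the REST API mentioned above.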

If anyone has additional questions, comments, or would like to discuss some 
options, please let me know.  If interested, upon request, I could even share a 
video of how such cases might work.  :-)  My colleagues and I have given these 
use cases a lot of thought and consideration and I’d love to talk more about 
them (perhaps a small session in Paris would be possible).

- Joe

On Oct 17, 2014, at 4:18 AM, John Garbutt j...@johngarbutt.com wrote:

 On 17 October 2014 02:28, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
 
 
 On 10/16/2014 7:26 PM, Christopher Aedo wrote:
 
 On Tue, Sep 9, 2014 at 2:19 PM, Mike Scherbakov
 mscherba...@mirantis.com wrote:
 
 On Tue, Sep 9, 2014 at 6:02 PM, Clint Byrum cl...@fewbar.com wrote:
 
 The idea is not to simply deny or hang requests from clients, but to tell
 them “we are in maintenance mode, retry in X seconds”
 
 You probably would want 'nova host-servers-migrate host'
 
 yeah for migrations - but as far as I understand, it doesn't help with
 disabling this host in scheduler - there is can be a chance that some
 workloads will be scheduled to the host.
 
 
 Regarding putting a compute host in maintenance mode using nova
 host-update --maintenance enable, it looks like the blueprint and
 associated commits were abandoned a year and a half ago:
 https://blueprints.launchpad.net/nova/+spec/host-maintenance
 
 It seems that nova service-disable host nova-compute effectively
 prevents the scheduler from trying to send new work there.  Is this
 the best approach to use right now if you want to pull a compute host
 out of an environment before migrating VMs off?
 
 I agree with Tim and Mike that having something respond down for
 maintenance rather than ignore or hang would be really valuable.  But
 it also looks like that hasn't gotten much traction in the past -
 anyone feel like they'd be in support of reviving the notion of
 maintenance mode?
 
 -Christopher
 
 
 
 host-maintenance-mode is definitely a thing in nova compute via the os-hosts
 API extension and the --maintenance parameter, the compute manager code is
 here [1].  The thing is the only in-tree virt driver that implements it is
 xenapi, and I believe when you put the host in maintenance mode it's
 supposed to automatically evacuate the instances to some other host, but you
 can't target the other host or tell the driver, from the API, which
 instances you want to evacuate, e.g. all, none, running only, etc.
 
 [1]
 http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2014.2#n3990
 
 We should certainly make that more generic. It doesn't update the VM
 state, so its really only admin focused in its current form.
 
 The XenAPI logic only works when using XenServer pools with shared NFS
 storage, if my memory serves me correctly. Honestly, its a bit of code
 I have planned on removing, along with the rest of the pool support.
 
 In terms of requiring DB downtime in Nova, the current efforts are
 focusing on avoiding downtime all together, via expand/contract style
 migrations, with a little help from objects to avoid data migrations.
 
 That doesn't mean maintenance mode if not useful for other things,
 like an emergency patching of the hypervisor.
 
 John
 
 

Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-09-11 Thread Joe Cropper
Great to hear.  I started a blueprint for this [1].  More detail can be added 
once the kilo nova-specs directory is created… for now, I’ve tried to put some 
fairly detailed notes on the blueprint’s description.

[1] https://blueprints.launchpad.net/nova/+spec/dynamic-server-groups

- Joe
On Sep 11, 2014, at 2:11 AM, Sylvain Bauza sba...@redhat.com wrote:

 
 On 11/09/2014 01:10, Joe Cropper wrote:
 Agreed - I’ll draft up a formal proposal in the next week or two and we can 
 focus the discussion there. Thanks for the feedback - this provides a good 
 framework for implementation considerations.
 
 Count me in; I'm interested in discussing the next stage.
 
 When preparing the scheduler split, I discovered that it was unnecessary to 
 keep the instance-group setup in the scheduler, because it was creating 
 dependencies on other Nova objects that the scheduler doesn't necessarily need 
 to handle.
 I accordingly proposed a patch to move the logic to the conductor instead; 
 see the proposal here:
 https://review.openstack.org/110043
 
 Reviews are welcome of course.
 
 -Sylvain
 
 
 - Joe
 On Sep 10, 2014, at 6:00 PM, Russell Bryant rbry...@redhat.com wrote:
 
 On 09/10/2014 06:46 PM, Joe Cropper wrote:
 Hmm, not sure I follow the concern, Russell.  How is that any different
 from putting a VM into the group when it’s booted, as is done today?
 This simply defers the ‘group insertion time’ to some time after
 the VM has initially been spawned, so I’m not sure this creates any more race
 conditions than what’s already there [1].
 
 [1] Sure, the to-be-added VM could be in the midst of a migration or
 something, but that would be pretty simple to check to make sure its task
 state is None or some such.
 The way this works at boot is already a nasty hack.  It does policy
 checking in the scheduler, and then has to re-do some policy checking at
 launch time on the compute node.  I'm afraid of making this any worse.
 In any case, it's probably better to discuss this in the context of a
 more detailed design proposal.
 
 -- 
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-09-11 Thread Joe Cropper
I would be a little wary about the DB-level locking for stuff like that — it’s 
certainly doable, but also comes at the expense of things behaving 
ever-so-slightly differently from DBMS to DBMS.  Perhaps there are multiple 
“logical efforts” here—i.e., adding some APIs and cleaning up existing code.

In any case, I’ve started a blueprint on this [1] and we can continue iterating 
in the nova-spec once kilo opens up.  Thanks all for the good discussion on 
this thus far.

[1] https://blueprints.launchpad.net/nova/+spec/dynamic-server-groups

- Joe
On Sep 11, 2014, at 5:04 PM, Chris Friesen chris.frie...@windriver.com wrote:

 On 09/11/2014 03:01 PM, Jay Pipes wrote:
 On 09/11/2014 04:51 PM, Matt Riedemann wrote:
 On 9/10/2014 6:00 PM, Russell Bryant wrote:
 On 09/10/2014 06:46 PM, Joe Cropper wrote:
 Hmm, not sure I follow the concern, Russell.  How is that any different
 from putting a VM into the group when it’s booted, as is done today?
 This simply defers the ‘group insertion time’ to some time after
 the VM has initially been spawned, so I’m not sure this creates any more
 race
 conditions than what’s already there [1].
 
 [1] Sure, the to-be-added VM could be in the midst of a migration or
 something, but that would be pretty simple to check to make sure its task
 state is None or some such.
 
 The way this works at boot is already a nasty hack.  It does policy
 checking in the scheduler, and then has to re-do some policy checking at
 launch time on the compute node.  I'm afraid of making this any worse.
 In any case, it's probably better to discuss this in the context of a
 more detailed design proposal.
 
 
 This [1] is the hack you're referring to right?
 
 [1]
 http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2014.2.b3#n1297
 
 
 That's the hack *I* had in the back of my mind.
 
 I think that's the only boot hack related to server groups.
 
 I was thinking that it should be possible to deal with the race more cleanly 
 by recording the selected compute node in the database at the time of 
 scheduling.  As it stands, the host is implicitly encoded in the compute node 
 to which we send the boot request and nobody else knows about it.
 
 Chris
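Chris's suggestion — persisting the chosen host at scheduling time so it isn't known only to the compute node that received the boot request — might look like this toy sqlite3 sketch (illustrative schema, not Nova's):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE instances (uuid TEXT PRIMARY KEY, host TEXT)")

def schedule(db, uuid, host):
    # Record the destination host in the DB before sending the boot
    # request, so any later policy check can see in-flight placements
    # instead of the host living only in the RPC message.
    with db:  # single transaction
        db.execute("INSERT INTO instances (uuid, host) VALUES (?, ?)",
                   (uuid, host))

schedule(db, "vm-1", "host-a")
print(db.execute("SELECT host FROM instances "
                 "WHERE uuid='vm-1'").fetchone()[0])  # host-a
```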
 
 


Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-09-10 Thread Joe Cropper
I agree, Chris.  I think a number of folks put in a lot of really great work 
into the existing server groups and there has been a lot of interest on their 
usage, especially given that the scheduler already has some constructs in place 
to piggyback on them.

I would like to craft up a blueprint proposal for Kilo to add two simple 
extensions to the existing server group APIs that I believe will make them 
infinitely more usable in any ‘real world’ scenario.  I’ll put more details in 
the proposal, but in a nutshell:

1. Adding a VM to a server group
Only allow it to succeed if its policy wouldn’t be violated by the addition of 
the VM

2. Removing a VM from a server group
Just allow it

I think this would round out the support that’s there and really allow us to 
capitalize on the hard work everyone’s already put into them.

- Joe

On Aug 26, 2014, at 6:39 PM, Chris Friesen chris.frie...@windriver.com wrote:

 On 08/25/2014 11:25 AM, Joe Cropper wrote:
 I was thinking something simple such as only allowing the add
 operation to succeed IFF no policies are found to be in violation...
 and then nova wouldn't need to get into all the complexities you
 mention?
 
 Personally I would be in favour of this...nothing fancy, just add it if it 
 already meets all the criteria.  This is basically just a database operation 
 so I would hope we could make it reliable in the face of simultaneous things 
 going on with the instance.
 
 And remove would be fairly straightforward as well since no
 constraints would need to be checked.
 
 Agreed.
 
 Chris
 


Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-09-10 Thread Joe Cropper
Hmm, not sure I follow the concern, Russell.  How is that any different from 
putting a VM into the group when it’s booted, as is done today?  This simply 
defers the ‘group insertion time’ to some time after the VM has initially been 
spawned, so I’m not sure this creates any more race conditions than what’s 
already there [1].

[1] Sure, the to-be-added VM could be in the midst of a migration or something, 
but that would be pretty simple to check to make sure its task state is None or 
some such.

- Joe
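The task-state guard in the footnote above amounts to a one-line check; a hypothetical sketch (the `task_state` field follows Nova's instance model, the function name is invented):

```python
def can_modify_group_membership(instance):
    # task_state is None when no operation (migration, resize,
    # rebuild, ...) is currently in flight on the instance.
    return instance.get("task_state") is None

print(can_modify_group_membership({"task_state": None}))         # True
print(can_modify_group_membership({"task_state": "migrating"}))  # False
```

By itself this only narrows the window; it does not close the race against a concurrent operation starting right after the check.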
On Sep 10, 2014, at 5:16 PM, Russell Bryant rbry...@redhat.com wrote:

 
 
 On Sep 10, 2014, at 2:03 PM, Joe Cropper cropper@gmail.com wrote:
 
 I agree, Chris.  I think a number of folks put in a lot of really great work 
 into the existing server groups and there has been a lot of interest on 
 their usage, especially given that the scheduler already has some constructs 
 in place to piggyback on them.
 
 I would like to craft up a blueprint proposal for Kilo to add two simple 
 extensions to the existing server group APIs that I believe will make them 
 infinitely more usable in any ‘real world’ scenario.  I’ll put more details 
 in the proposal, but in a nutshell:
 
 1. Adding a VM to a server group
 Only allow it to succeed if its policy wouldn’t be violated by the addition 
 of the VM
 
 
 I'm not sure that determining this at the time of the API request is possible 
 due to the parallel and async nature of the system. I'd love to hear ideas on 
 how you think this might be done, but I'm really not optimistic and would 
 rather just not go down this road. 
 
 2. Removing a VM from a server group
 Just allow it
 
 I think this would round out the support that’s there and really allow us to 
 capitalize on the hard work everyone’s already put into them.
 
 - Joe
 
 On Aug 26, 2014, at 6:39 PM, Chris Friesen chris.frie...@windriver.com 
 wrote:
 
 On 08/25/2014 11:25 AM, Joe Cropper wrote:
 I was thinking something simple such as only allowing the add
 operation to succeed IFF no policies are found to be in violation...
 and then nova wouldn't need to get into all the complexities you
 mention?
 
 Personally I would be in favour of this...nothing fancy, just add it if it 
 already meets all the criteria.  This is basically just a database 
 operation so I would hope we could make it reliable in the face of 
 simultaneous things going on with the instance.
 
 And remove would be fairly straightforward as well since no
 constraints would need to be checked.
 
 Agreed.
 
 Chris
 


Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-09-10 Thread Joe Cropper
+1, Chris.

I think the key thing here is that such race conditions can already happen if 
timed just right, unless there have been some additional checks put in place in 
the compute API layer since I last scanned the code.  We could even look at 
some cross-process locking mechanisms as well if we think that’s necessary.

Overall, this seems like a feasible, fairly minor enhancement--one that wouldn’t 
make us any worse off than we are today (and might even address some existing 
race conditions), all whilst providing a large improvement to the overall 
usability of server groups.  :-)

- Joe
On Sep 10, 2014, at 5:54 PM, Chris Friesen chris.frie...@windriver.com wrote:

 On 09/10/2014 04:16 PM, Russell Bryant wrote:
 
 
 On Sep 10, 2014, at 2:03 PM, Joe Cropper cropper@gmail.com
 wrote:
 
 I would like to craft up a blueprint proposal for Kilo to add two
 simple extensions to the existing server group APIs that I believe
 will make them infinitely more usable in any ‘real world’ scenario.
 I’ll put more details in the proposal, but in a nutshell:
 
 1. Adding a VM to a server group Only allow it to succeed if its
 policy wouldn’t be violated by the addition of the VM
 
 
 I'm not sure that determining this at the time of the API request is
 possible due to the parallel and async nature of the system. I'd love
 to hear ideas on how you think this might be done, but I'm really not
 optimistic and would rather just not go down this road.
 
 I can see a possible race against another instance booting into the group, or 
 another already-running instance being added to the group.  I think the 
 solution is to do the update as an atomic database transaction.
 
 It seems like it should be possible to create a database operation that does 
 the following in a single transaction:
 --look up the hosts for the instances in the group
 --check that the scheduler policy would be satisfied (at least for the basic 
 affinity/anti-affinity policies)
 --add the instance to the group
 
 Chris
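The single-transaction operation Chris describes could be sketched like this, covering only the basic affinity/anti-affinity policies; the schema and names are illustrative, not Nova's data model:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE instances (uuid TEXT PRIMARY KEY, host TEXT);
CREATE TABLE groups (name TEXT PRIMARY KEY, policy TEXT);
CREATE TABLE members (grp TEXT, uuid TEXT);
""")

def add_to_group(db, grp, uuid):
    # One transaction: look up hosts, check the policy, add the member.
    with db:
        policy = db.execute("SELECT policy FROM groups WHERE name=?",
                            (grp,)).fetchone()[0]
        host = db.execute("SELECT host FROM instances WHERE uuid=?",
                          (uuid,)).fetchone()[0]
        member_hosts = [r[0] for r in db.execute(
            "SELECT i.host FROM members m "
            "JOIN instances i ON i.uuid = m.uuid WHERE m.grp=?", (grp,))]
        if policy == "anti-affinity" and host in member_hosts:
            raise ValueError("policy violated: host already used by group")
        if policy == "affinity" and member_hosts and host not in member_hosts:
            raise ValueError("policy violated: group pinned to another host")
        db.execute("INSERT INTO members (grp, uuid) VALUES (?, ?)",
                   (grp, uuid))

db.executescript("""
INSERT INTO groups VALUES ('g1', 'anti-affinity');
INSERT INTO instances VALUES ('vm-1', 'host-a'), ('vm-2', 'host-a');
INSERT INTO members VALUES ('g1', 'vm-1');
""")
try:
    add_to_group(db, "g1", "vm-2")  # both VMs on host-a: rejected
except ValueError as e:
    print(e)
```

Whether this stays race-free in practice depends on the database's isolation level, which is part of the DBMS-to-DBMS variability raised earlier in the thread.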
 


Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-09-10 Thread Joe Cropper
Agreed - I’ll draft up a formal proposal in the next week or two and we can 
focus the discussion there.  Thanks for the feedback - this provides a good 
framework for implementation considerations.

- Joe
On Sep 10, 2014, at 6:00 PM, Russell Bryant rbry...@redhat.com wrote:

 On 09/10/2014 06:46 PM, Joe Cropper wrote:
 Hmm, not sure I follow the concern, Russell.  How is that any different
 from putting a VM into the group when it’s booted, as is done today?
 This simply defers the ‘group insertion time’ to some time after
 the VM has initially been spawned, so I’m not sure this creates any more race
 conditions than what’s already there [1].
 
 [1] Sure, the to-be-added VM could be in the midst of a migration or
 something, but that would be pretty simple to check to make sure its task
 state is None or some such.
 
 The way this works at boot is already a nasty hack.  It does policy
 checking in the scheduler, and then has to re-do some policy checking at
 launch time on the compute node.  I'm afraid of making this any worse.
 In any case, it's probably better to discuss this in the context of a
 more detailed design proposal.
 
 -- 
 Russell Bryant
 


[openstack-dev] [nova] nova-specs for Kilo?

2014-09-10 Thread Joe Cropper
Hi Folks,

Just wondering if the nova-specs master branch will have a ‘kilo’ directory 
created soon for Kilo proposals?  I have a few things I’d like to submit, just 
looking for the proper home.

Thanks,
Joe


Re: [openstack-dev] [nova] nova-specs for Kilo?

2014-09-10 Thread Joe Cropper
Thanks!  Exactly what I was looking for.

On Sep 11, 2014, at 12:38 AM, Russell Bryant rbry...@redhat.com wrote:

 On 09/11/2014 01:32 AM, Joe Cropper wrote:
 Hi Folks,
 
 Just wondering if the nova-specs master branch will have a ‘kilo’
 directory created soon for Kilo proposals? I have a few things I’d like
 to submit, just looking for the proper home.
 
 There's some more info on that here:
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/044431.html
 
 -- 
 Russell Bryant
 


Re: [openstack-dev] Feature freeze + Juno-3 milestone candidates available

2014-09-06 Thread Joe Cropper
Thanks, ttx.

If there’s anyone who can do a final review on 
https://review.openstack.org/#/c/118535/ — it would be much appreciated, and I’m 
happy to let the i18n folks know once it merges.

- Joe

On Sep 6, 2014, at 9:36 AM, Thierry Carrez thie...@openstack.org wrote:

 In that precise case, given how early it is in the freeze, I think
 giving a quick heads-up to the -i18n team/list should be enough :) Also
 /adding/ a string is not as disruptive to their work as modifying a
 potentially-already-translated one.
 
 Joe Cropper wrote:
 +1 to what Jay said.
 
 I’m not sure whether the string freeze applies to bugs, but the defect
 that Matt mentioned (for which I authored the fix) adds a string, albeit
 to fix a bug.  Hoping it’s more desirable to have an untranslated
 correct message than a translated incorrect message.  :-)
 
 - Joe
 On Sep 5, 2014, at 3:41 PM, Jay Bryant jsbry...@electronicjungle.net wrote:
 
 Matt,
 
 I don't think that is the right solution.
 
 If the string changes, I think the only problem is that it won't be
 translated if it is thrown.  That is better than breaking the coding
 standard, IMHO.
 
 Jay
 
 On Sep 5, 2014 3:30 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
 
 
 
On 9/5/2014 5:10 AM, Thierry Carrez wrote:
 
Hi everyone,
 
We just hit feature freeze[1], so please do not approve
changes that add
features or new configuration options unless those have been
granted a
feature freeze exception.
 
This is also string freeze[2], so you should avoid changing
translatable
strings. If you have to modify a translatable string, you
should give a
heads-up to the I18N team.
 
Finally, this is also DepFreeze[3], so you should avoid adding new
dependencies (bumping oslo or openstack client libraries is OK
until
RC1). If you have a new dependency to add, raise a thread on
openstack-dev about it.
 
The juno-3 development milestone was tagged, it contains more
than 135
features and 760 bugfixes added since the juno-2 milestone 6
weeks ago
(not even counting the Oslo libraries in the mix). You can
find the full
list of new features and fixed bugs, as well as tarball
downloads, at:
 
 https://launchpad.net/keystone/juno/juno-3
 https://launchpad.net/glance/juno/juno-3
 https://launchpad.net/nova/juno/juno-3
 https://launchpad.net/horizon/juno/juno-3
 https://launchpad.net/neutron/juno/juno-3
 https://launchpad.net/cinder/juno/juno-3
 https://launchpad.net/ceilometer/juno/juno-3
 https://launchpad.net/heat/juno/juno-3
 https://launchpad.net/trove/juno/juno-3
 https://launchpad.net/sahara/juno/juno-3
 
Many thanks to all the PTLs and release management liaisons
who made us
reach this important milestone in the Juno development cycle.
Thanks in
particular to John Garbutt, who keeps on doing an amazing job
at the
impossible task of keeping the Nova ship straight in troubled
waters
while we head toward the Juno release port.
 
Regards,
 
 [1] https://wiki.openstack.org/wiki/FeatureFreeze
 [2] https://wiki.openstack.org/wiki/StringFreeze
 [3] https://wiki.openstack.org/wiki/DepFreeze
 
 
I should probably know this, but at least I'm asking first. :)
 
Here is an example of a new translatable user-facing error message
[1].
 
From the StringFreeze wiki, I'm not sure if this is small or large.
 
Would a compromise to get this in be to drop the _() so it's just
a string and not a message?
 
Maybe I should just shut-up and email the openstack-i18n mailing
list [2].
 
 [1] https://review.openstack.org/#/c/118535/
 [2] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n
 
-- 
 
Thanks,
 
Matt Riedemann
 
 

Re: [openstack-dev] Feature freeze + Juno-3 milestone candidates available

2014-09-05 Thread Joe Cropper
+1 to what Jay said.

I’m not sure whether the string freeze applies to bugs, but the defect that 
Matt mentioned (for which I authored the fix) adds a string, albeit to fix a 
bug.  Hoping it’s more desirable to have an untranslated correct message than a 
translated incorrect message.  :-)

- Joe
On Sep 5, 2014, at 3:41 PM, Jay Bryant jsbry...@electronicjungle.net wrote:

 Matt,
 
 I don't think that is the right solution.
 
 If the string changes, I think the only problem is that it won't be translated if 
 it is thrown.  That is better than breaking the coding standard, IMHO.
 
 Jay
 
 On Sep 5, 2014 3:30 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
 
 
 On 9/5/2014 5:10 AM, Thierry Carrez wrote:
 Hi everyone,
 
 We just hit feature freeze[1], so please do not approve changes that add
 features or new configuration options unless those have been granted a
 feature freeze exception.
 
 This is also string freeze[2], so you should avoid changing translatable
 strings. If you have to modify a translatable string, you should give a
 heads-up to the I18N team.
 
 Finally, this is also DepFreeze[3], so you should avoid adding new
 dependencies (bumping oslo or openstack client libraries is OK until
 RC1). If you have a new dependency to add, raise a thread on
 openstack-dev about it.
 
 The juno-3 development milestone was tagged, it contains more than 135
 features and 760 bugfixes added since the juno-2 milestone 6 weeks ago
 (not even counting the Oslo libraries in the mix). You can find the full
 list of new features and fixed bugs, as well as tarball downloads, at:
 
 https://launchpad.net/keystone/juno/juno-3
 https://launchpad.net/glance/juno/juno-3
 https://launchpad.net/nova/juno/juno-3
 https://launchpad.net/horizon/juno/juno-3
 https://launchpad.net/neutron/juno/juno-3
 https://launchpad.net/cinder/juno/juno-3
 https://launchpad.net/ceilometer/juno/juno-3
 https://launchpad.net/heat/juno/juno-3
 https://launchpad.net/trove/juno/juno-3
 https://launchpad.net/sahara/juno/juno-3
 
 Many thanks to all the PTLs and release management liaisons who made us
 reach this important milestone in the Juno development cycle. Thanks in
 particular to John Garbutt, who keeps on doing an amazing job at the
 impossible task of keeping the Nova ship straight in troubled waters
 while we head toward the Juno release port.
 
 Regards,
 
 [1] https://wiki.openstack.org/wiki/FeatureFreeze
 [2] https://wiki.openstack.org/wiki/StringFreeze
 [3] https://wiki.openstack.org/wiki/DepFreeze
 
 
 I should probably know this, but at least I'm asking first. :)
 
 Here is an example of a new translatable user-facing error message [1].
 
 From the StringFreeze wiki, I'm not sure if this is small or large.
 
 Would a compromise to get this in be to drop the _() so it's just a string 
 and not a message?
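For context on why dropping `_()` yields an untranslated-but-correct message: only strings wrapped in the marker get extracted into translation catalogs. A small stand-alone illustration using the stdlib gettext module (the `_` binding here is a stand-in for the project's translation hook):

```python
import gettext

# NullTranslations returns messages unchanged, like a missing catalog.
_ = gettext.NullTranslations().gettext

marked = _("Instance disk is too small")   # extracted into the catalog
unmarked = "Instance disk is too small"    # invisible to translators

# Identical until a real catalog is loaded; the unmarked string stays
# English forever, which is the compromise being discussed.
print(marked == unmarked)  # True
```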
 
 Maybe I should just shut-up and email the openstack-i18n mailing list [2].
 
 [1] https://review.openstack.org/#/c/118535/
 [2] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n
 
 -- 
 
 Thanks,
 
 Matt Riedemann
 
 


[openstack-dev] [nova] Call for review w.r.t. scheduler being passed requested networks

2014-09-03 Thread Joe Cropper
Hi Stackers,

I was wondering if I could get a few folks to look at
https://review.openstack.org/#/c/118010/ -- basically with the switch
to conductor having a bigger role in the deployment process (and
lessening the scheduler's role), the scheduler lost the ability to
look at the networks being requested for the initial deployment
(unless I'm missing something, which could be the case).

This patch aims to 'fix' that.  Comments are welcomed.

Thanks,
Joe



[openstack-dev] [nova] Server Groups - remove VM from group?

2014-08-25 Thread Joe Cropper
Hello,

Is our long-term vision to allow VMs to be dynamically added/removed
from a group?  That is, unless I'm overlooking something, it appears
that you can only add a VM to a server group at VM boot time and
effectively remove it by deleting the VM?

Just curious if this was a design point, or merely an approach at a
staged implementation [that might welcome some additions]?  :)

Thanks,
Joe



Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-08-25 Thread Joe Cropper
Thanks Jay.  Those are the same types of questions I was pondering as
well when debating how someone might use this.  I think what we have
is fine for a first pass, but that's what I was poking at... whether
some of the abilities to add/remove members dynamically could exist
(e.g., I no longer want this VM to have an anti-affinity policy
relative to the others, etc.).

- Joe

On Mon, Aug 25, 2014 at 10:16 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 08/25/2014 11:10 AM, Joe Cropper wrote:

 Hello,

 Is our long-term vision to allow VMs to be dynamically added/removed
 from a group?  That is, unless I'm overlooking something, it appears
 that you can only add a VM to a server group at VM boot time and
 effectively remove it by deleting the VM?

 Just curious if this was a design point, or merely an approach at a
 staged implementation [that might welcome some additions]?  :)


 See here:

 http://lists.openstack.org/pipermail/openstack-dev/2014-April/033746.html

 If I had my druthers, I would revert the whole extension.

 -jay



Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-08-25 Thread Joe Cropper
That was indeed a rather long (and insightful) thread on the topic.
It sounds like there are still some healthy discussions worth having
on the subject -- either exploring your [potentially superseding]
proposal, or minimally rounding out the existing server group API to
support "add existing VM" [1] and "remove VM" -- I think these would
make it a lot more usable (I'm thinking of the poor cloud
administrator who makes a mistake when they boot an instance and
either forgets to put it in a group or puts it in the wrong group --
it's square 1 for them).

Is this queued up as a discussion point for Paris?  If so, count me in!

Thanks,
Joe

On Mon, Aug 25, 2014 at 11:08 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 08/25/2014 11:31 AM, Joe Cropper wrote:

 Thanks Jay.  Those are the same types of questions I was pondering as
 well when debating how someone might use this.  I think what we have
 is fine for a first pass, but that's what I was poking at... whether
 some of the abilities to add/remove members dynamically could exist
 (e.g., I no longer want this VM to have an anti-affinity policy
 relative to the others, etc.).


 I guess what I was getting at is that I think the whole interface is flawed
 and it's not worth putting in the effort to make it slightly less flawed.

 Best,
 -jay


 - Joe

 On Mon, Aug 25, 2014 at 10:16 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/25/2014 11:10 AM, Joe Cropper wrote:


 Hello,

  Is our long-term vision to allow VMs to be dynamically added/removed
 from a group?  That is, unless I'm overlooking something, it appears
 that you can only add a VM to a server group at VM boot time and
 effectively remove it by deleting the VM?

 Just curious if this was a design point, or merely an approach at a
 staged implementation [that might welcome some additions]?  :)



 See here:

 http://lists.openstack.org/pipermail/openstack-dev/2014-April/033746.html

 If I had my druthers, I would revert the whole extension.

 -jay



Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-08-25 Thread Joe Cropper
I was thinking something simple such as only allowing the add operation to 
succeed IFF no policies are found to be in violation... and then nova wouldn't 
need to get into all the complexities you mention?

And remove would be fairly straightforward as well since no constraints would 
need to be checked. 

Thoughts?

Thanks,
Joe

 On Aug 25, 2014, at 12:10 PM, Russell Bryant rbry...@redhat.com wrote:
 
 On 08/25/2014 12:56 PM, Joe Cropper wrote:
 That was indeed a rather long (and insightful) thread on the topic.
 It sounds like there are still some healthy discussions worth having
 on the subject -- either exploring your [potentially superseding]
 proposal, or minimally rounding out the existing server group API to
  support "add existing VM" [1] and "remove VM" -- I think these would
 make it a lot more usable (I'm thinking of the poor cloud
 administrator that makes a mistake when they boot an instance and
 either forgets to put it in a group or puts it in the wrong group --
 it's square 1 for them)?
 
 Is this queued up as a discussion point for Paris?  If so, count me in!
 
 Adding a VM is far from trivial and is why we ripped it out before
 merging.  That implies a potential reshuffling of a bunch of existing
 VMs.  Consider an affinity group of instances A and B and then you add
 running instance C to that group.  What do you expect to happen?  Live
 migrate C to the host running A and B?  What if there isn't room?
 Reschedule all 3 to find a host and live migrate all of them?  This kind
 of orchestration is a good bit outside of the scope of what's done
 inside of Nova today.
 
 -- 
 Russell Bryant
 


Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-08-25 Thread Joe Cropper
 Even something like this is a lot more complicated than it sounds due to
the fact that several operations can be happening in parallel.

That's fair, but I was thinking that the 'add existing' VM is fairly
close in behavior to 'add new' VM to the group, less of course any
parallel operations happening on the VM itself.

 I think we just need to draw a line for Nova that just doesn't include this
functionality.

If that's our general direction, no problem.  I'm just thinking about
this from a user's perspective; this would be very difficult for any
administrator to use in its current form because you essentially can't
make a mistake in group management--any mistake means you have to
delete the VM and start over, which is a pretty major usability issue
IMO, at least in most production environments.

Don't get me wrong, I think server groups have a lot of interesting
use cases (I actually would really like to use them) in their current
form, and as a starting point this is great.  But without some of
these added flexibilities, I think it would be very challenging for
any IT administrator to use them--hence why I'm exploring adding some
additional functionality; I'm even happy to help implement this if we
can get any kind of concurrence on the subject.
:-)

- Joe

On Mon, Aug 25, 2014 at 12:58 PM, Russell Bryant rbry...@redhat.com wrote:
 On 08/25/2014 01:25 PM, Joe Cropper wrote:
 I was thinking something simple such as only allowing the add operation to 
 succeed IFF no policies are found to be in violation... and then nova 
 wouldn't need to get into all the complexities you mention?

 Even something like this is a lot more complicated than it sounds due to
 the fact that several operations can be happening in parallel.  I think
 we just need to draw a line for Nova that just doesn't include this
 functionality.

 And remove would be fairly straightforward as well since no constraints 
 would need to be checked.

 Right, remove is straightforward, but it seems a bit odd to have without
 add.  I'm not sure there's much value in it.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova] Suggestion that Fix to logic of diskfilter

2014-08-25 Thread Joe Cropper
You're always welcome to submit a patch for a valid bug. Just put:

Closes-Bug: #number

At the bottom of the commit message to link the change set to the bug.  :)
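A minimal example of the footer placement (the subject and body here are made up; the bug number is the one from the report below):

```
Fix DiskFilter to skip the local disk check for boot-from-volume VMs

Describe what changed and why.

Closes-Bug: #1358566
```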

- Joe

 On Aug 25, 2014, at 9:44 PM, Jae Sang Lee hyan...@gmail.com wrote:
 
 Hi, all.
 
 DiskFilter is based on host disk usage: it checks the usable host disk size 
 against the requested VM disk size.
 But when a VM is created with a boot volume, DiskFilter still filters out 
 hosts even though the VM doesn't use any host disk. 
 (Usually an iSCSI volume is attached.)
 
 So I have filed the following bug to fix the logic so that the disk check is 
 skipped when a VM is created with a boot volume.
 https://bugs.launchpad.net/nova/+bug/1358566
 
 Could anyone please let me know whether I can fix this for Juno? 
 
 Thanks.
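The logic change proposed above could be sketched like this (a rough illustration in Python, not the actual DiskFilter code):

```python
# Sketch of the proposed DiskFilter behavior: when an instance boots
# from a volume, its root disk lives on the volume backend (e.g. iSCSI),
# so the host's local disk capacity check should be skipped.

def host_passes(free_disk_mb, requested_disk_gb, boot_from_volume):
    if boot_from_volume:
        # Root disk is not on the host's local disk -- nothing to check.
        return True
    return free_disk_mb >= requested_disk_gb * 1024
```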
 


[openstack-dev] [nova] Server Group API: add 'action' to authorizer?

2014-08-23 Thread Joe Cropper
Hi Folks,

Would anyone be opposed to adding the 'action' checking to the v2/v3
authorizers?  This would allow administrators more fine-grained
control over  who can read vs. create/update/delete server groups.

Thoughts?

If folks are supportive, I'd be happy to add this... but not sure if
we'd treat this as a 'bug' or whether there is a blueprint under which
this could be done?
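To make the idea concrete, action checking might look roughly like this (hypothetical policy key names and roles; not actual nova code):

```python
# Hypothetical sketch of an action-aware authorizer: read operations can
# be opened to regular users while create/update/delete stay admin-only.

POLICY = {
    "compute_extension:server_groups:index": {"admin", "member"},
    "compute_extension:server_groups:show": {"admin", "member"},
    "compute_extension:server_groups:create": {"admin"},
    "compute_extension:server_groups:delete": {"admin"},
}

def authorize(roles, action):
    """Allow the call iff one of the caller's roles may perform action."""
    allowed = POLICY.get("compute_extension:server_groups:%s" % action, set())
    return bool(set(roles) & allowed)
```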

Thanks,
Joe



Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Joe Cropper
There may also be specific software entitlement issues that make it useful
to deterministically know which host your VM will be placed on.  This can
be quite common in large organizations that have certain software tied to
specific hardware, or to hardware with a certain CPU capacity, etc.

Regards,
Joe


On Mon, Jun 9, 2014 at 11:32 AM, Chris Friesen chris.frie...@windriver.com
wrote:

 On 06/09/2014 07:59 AM, Jay Pipes wrote:

 On 06/06/2014 08:07 AM, Murray, Paul (HP Cloud) wrote:

 Forcing an instance to a specific host is very useful for the
 operator - it fulfills a valid use case for monitoring and testing
 purposes.


 Pray tell, what is that valid use case?


 I find it useful for setting up specific test cases when trying to validate
 things: put *this* instance on *this* host, put *those* instances on
 *those* hosts, now pull the power plug on *this* host... etc.

 I wouldn't expect the typical openstack end-user to need it though.

 Chris



Re: [openstack-dev] [nova] Arbitrary extra specs for compute nodes?

2014-06-09 Thread Joe Cropper
On Mon, Jun 9, 2014 at 10:07 AM, Chris Friesen
chris.frie...@windriver.com wrote:
 On 06/07/2014 12:30 AM, Joe Cropper wrote:

 Hi Folks,

 I was wondering if there was any such mechanism in the compute node
 structure to hold arbitrary key-value pairs, similar to flavors'
 extra_specs concept?

 It appears there are entries for things like pci_stats, stats and
 recently added extra_resources -- but these all tend to have more
 specific usages vs. just arbitrary data that may want to be maintained
 about the compute node over the course of its lifetime.

 Unless I'm overlooking an existing construct for this, would this be
 something that folks would welcome a Juno blueprint for--i.e., adding
 extra_specs style column with a JSON-formatted string that could be
 loaded as a dict of key-value pairs?


 If nothing else, you could put the compute node in a host aggregate and
 assign metadata to it.

Yeah, I recognize this could be done, but I think that would be using
the host aggregate metadata a little too loosely, since the metadata
I'm after is really tied explicitly to the compute node.  It would
present too many challenges for anyone wanting to use both host
aggregates and compute node-specific metadata.
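For what it's worth, the extra_specs-style column proposed above could be as simple as this (an illustrative sketch, not actual nova code; the key names are just examples):

```python
import json

# Sketch: persist arbitrary per-node key-value pairs as a JSON-encoded
# string column and load them back as a dict, mirroring the way flavors
# expose extra_specs.

def dump_extra_specs(specs):
    """Serialize a dict of node metadata for storage in a text column."""
    return json.dumps(specs)

def load_extra_specs(column_value):
    """Deserialize the column back into a dict (empty if unset)."""
    return json.loads(column_value) if column_value else {}
```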


 Chris



Re: [openstack-dev] [nova] Arbitrary extra specs for compute nodes?

2014-06-09 Thread Joe Cropper
On Mon, Jun 9, 2014 at 5:17 AM, Day, Phil philip@hp.com wrote:
 Hi Joe,



 Can you give some examples of what that data would be used for ?

Sure!  For example, in the PowerKVM world, hosts can be dynamically
configured to run in split-core processor mode.  This setting can be
dynamically changed and it'd be nice to allow the driver to track this
somehow -- and it probably doesn't warrant its own explicit field in
compute_node.  Likewise, PowerKVM also has a concept of the maximum
SMT level in which its guests can run (which can also vary dynamically
based on the split-core setting) and it would also be nice to tie such
settings to the compute node.

Overall, this would give folks writing compute drivers the ability to
attach the extra spec style data to a compute node for a variety of
purposes -- two simple examples provided above, but there are many
more.  :-)
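As a rough illustration of that use case (the key names and values below are made up for the PowerKVM example above; nothing here is a real driver API):

```python
# Hypothetical driver-side update: re-record host settings that change
# when split-core mode is toggled, using plain key-value pairs.

def refresh_node_specs(node_specs, split_core_enabled):
    node_specs["split_core"] = "on" if split_core_enabled else "off"
    # Pretend the max guest SMT level depends on the split-core setting.
    node_specs["max_smt_level"] = 1 if split_core_enabled else 8
    return node_specs
```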




 It sounds on the face of it that what you’re looking for is pretty similar
 to what Extensible Resource Tracker sets out to do
 (https://review.openstack.org/#/c/86050 and
 https://review.openstack.org/#/c/71557)

Thanks for pointing this out.  I actually ran across these while I was
searching the code to see what might already exist in this space.  The
compute node 'stats' was always a first guess, but those are clearly
reserved for the resource tracker and wind up getting purged/deleted
over time, whereas the 'extra specs' I reference above aren't
necessarily tied to the spawning/deleting of instances--in other
words, they're not really consumable resources, per se.  Unless I'm
overlooking a way (perhaps I am) to use the
extensible-resource-tracker blueprint for arbitrary key-value pairs
**not** related to instances, I think we need something additional?

I'd happily create a new blueprint for this as well.




 Phil



 From: Joe Cropper [mailto:cropper@gmail.com]
 Sent: 07 June 2014 07:30
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] Arbitrary extra specs for compute nodes?



 Hi Folks,

 I was wondering if there was any such mechanism in the compute node
 structure to hold arbitrary key-value pairs, similar to flavors'
 extra_specs concept?

 It appears there are entries for things like pci_stats, stats and recently
 added extra_resources -- but these all tend to have more specific usages vs.
 just arbitrary data that may want to be maintained about the compute node
 over the course of its lifetime.

 Unless I'm overlooking an existing construct for this, would this be
 something that folks would welcome a Juno blueprint for--i.e., adding
 extra_specs style column with a JSON-formatted string that could be loaded
 as a dict of key-value pairs?

 Thoughts?

 Thanks,

 Joe




Re: [openstack-dev] [nova] Arbitrary extra specs for compute nodes?

2014-06-09 Thread Joe Cropper
On Mon, Jun 9, 2014 at 12:56 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 06/09/2014 01:38 PM, Joe Cropper wrote:

 On Mon, Jun 9, 2014 at 5:17 AM, Day, Phil philip@hp.com wrote:

 Hi Joe,



 Can you give some examples of what that data would be used for ?


 Sure!  For example, in the PowerKVM world, hosts can be dynamically
 configured to run in split-core processor mode.  This setting can be
 dynamically changed and it'd be nice to allow the driver to track this
 somehow -- and it probably doesn't warrant its own explicit field in
 compute_node.  Likewise, PowerKVM also has a concept of the maximum
 SMT level in which its guests can run (which can also vary dynamically
 based on the split-core setting) and it would also be nice to tie such
 settings to the compute node.


 That information is typically stored in the compute_node.cpu_info field.


 Overall, this would give folks writing compute drivers the ability to
 attach the extra spec style data to a compute node for a variety of
 purposes -- two simple examples provided above, but there are many
 more.  :-)


 If it's something that the driver can discover on its own and that the
 driver can/should use in determining the capabilities that the hypervisor
 offers, then at this point, I believe compute_node.cpu_info is the place to
 put that information. It's probably worth renaming the cpu_info field to
 just capabilities instead, to be more generic and indicate that it's a
 place the driver stores discoverable capability information about the
 node...

Thanks, that's a great point!  While that's fair for items the driver
can discover on its own and that are cpu_info'ish in nature, there are
some additional use cases I should mention.  Imagine some higher-level
projects [above nova] want to associate arbitrary bits of information
with the compute host for project-specific uses.  For example, suppose
I have an orchestration project that does coordinated live migrations
and I want to put specific restrictions on the # of concurrent
migrations that should occur for the respective compute node (and let
the end-user adjust these values).  Having that data directly
associated with the compute node in nova gives us some nice ways to
maintain data consistency.  I think this would be a great way to gain
additional parity with other nova structures such as flavors'
extra_specs and instances' metadata/system_metadata.

Thanks,
Joe


 Now, for *user-defined* taxonomies, I'm a big fan of simple string tagging,
 as is proposed for the server instance model in this spec:

 https://review.openstack.org/#/c/91444/

 Best,
 jay





 It sounds on the face of it that what you’re looking for is pretty
 similar
 to what Extensible Resource Tracker sets out to do
 (https://review.openstack.org/#/c/86050
 https://review.openstack.org/#/c/71557)


 Thanks for pointing this out.  I actually ran across these while I was
 searching the code to see what might already exist in this space.
 Actually, the compute node 'stats' was always a first guess, but these
 are clearly heavily reserved for the resource tracker and wind up
 getting purged/deleted over time since the 'extra specs' I reference
 above aren't necessarily tied to the spawning/deleting of instances.
 In other words, they're not really consumable resources, per-se.
 Unless I'm overlooking a way (perhaps I am) to use this
 extensible-resource-tracker blueprint for arbitrary key-value pairs
 **not** related to instances, I think we need something additional?

 I'd happily create a new blueprint for this as well.




 Phil



 From: Joe Cropper [mailto:cropper@gmail.com]
 Sent: 07 June 2014 07:30
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] Arbitrary extra specs for compute nodes?



 Hi Folks,

 I was wondering if there was any such mechanism in the compute node
 structure to hold arbitrary key-value pairs, similar to flavors'
 extra_specs concept?

 It appears there are entries for things like pci_stats, stats and
 recently
 added extra_resources -- but these all tend to have more specific usages
 vs.
 just arbitrary data that may want to be maintained about the compute node
 over the course of its lifetime.

 Unless I'm overlooking an existing construct for this, would this be
 something that folks would welcome a Juno blueprint for--i.e., adding
 extra_specs style column with a JSON-formatted string that could be
 loaded
 as a dict of key-value pairs?

 Thoughts?

 Thanks,

 Joe



[openstack-dev] Arbitrary extra specs for compute nodes?

2014-06-07 Thread Joe Cropper
Hi Folks,

I was wondering if there was any such mechanism in the compute node
structure to hold arbitrary key-value pairs, similar to flavors'
extra_specs concept?

It appears there are entries for things like pci_stats, stats and recently
added extra_resources -- but these all tend to have more specific usages
vs. just arbitrary data that may want to be maintained about the compute
node over the course of its lifetime.

Unless I'm overlooking an existing construct for this, would this be
something that folks would welcome a Juno blueprint for--i.e., adding
extra_specs style column with a JSON-formatted string that could be loaded
as a dict of key-value pairs?

Thoughts?

Thanks,
Joe