Re: [Openstack] Project wrap-up videos from Portland

2013-05-21 Thread Eric Windisch
Yes, thank you. 

I'm not sure if the videos weren't posted when I checked, or if I may have 
accidentally been looking at the Grizzly summit video listing. Either way, 
they're now where they belong (as long as one looks in the right place). 

Regards,
Eric Windisch


On Tuesday, May 21, 2013 at 11:55 AM, Stefano Maffulli wrote:

 On 05/16/2013 12:24 PM, Eric Windisch wrote:
  This message from Russell reminded me that among the videos uploaded
  from Portland, the ones that seem vitally important, yet are missing, are
  the project wrap-up talks given by the various PTLs.
  
 
 
 I think you're referring to these:
 
 https://www.openstack.org/summit/portland-2013/session-videos/presentation/project-update-compute-nova
 https://www.openstack.org/summit/portland-2013/session-videos/presentation/project-update-networking
 https://www.openstack.org/summit/portland-2013/session-videos/presentation/project-updates-dashboard-and-image-service
 https://www.openstack.org/summit/portland-2013/session-videos/presentation/project-updates-oslo-and-keystone
 https://www.openstack.org/summit/portland-2013/session-videos/presentation/project-update-block-storage
 https://www.openstack.org/summit/portland-2013/session-videos/presentation/project-update-heat
 https://www.openstack.org/summit/portland-2013/session-videos/presentation/project-overview-openstack-queuing-and-notification-service-marconi
 https://www.openstack.org/summit/portland-2013/session-videos/presentation/project-update-object-storage
 https://www.openstack.org/summit/portland-2013/session-videos/presentation/hyper-v-grizzly-features-exposed
 
 Sometimes some videos take a bit longer to appear on that page.
 
 /stef
 
 -- 
 Ask and answer questions on https://ask.openstack.org
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Project wrap-up videos from Portland

2013-05-16 Thread Eric Windisch
Forwarded message:
 From: Russell Bryant rbry...@redhat.com
 As for those that would normally lurk in the back of the room, if they
 can be there, great. If not, it's not the end of the world. There
 should not be anything that is *exclusively* discussed and decided at
 the design summit. Things should be documented, discussed on the
 mailing list, and vetted on gerrit all the same, so everyone should
 still have visibility into what is going on.
 
 
 


This message from Russell reminded me that among the videos uploaded from 
Portland, the ones that seem vitally important, yet are missing, are the 
project wrap-up talks given by the various PTLs.

Perhaps someone (Stefano?) knows if those videos might be available?  Having 
videos such as these available following Hong Kong will be all the more 
important for those who are unable to attend.

Regards,
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] I release naming (calling APAC community)

2013-05-10 Thread Eric Windisch
Extending the rules to things of cultural or political significance would
have precedent, established by the 'Grizzly exception'.

If such an exception were useful in naming this release, I'd suggest it
become a standard part of the naming scheme.
On May 10, 2013 6:40 AM, Matt Joyce matt.jo...@cloudscaling.com wrote:

 Alternatively... we name it after a kung fu movie.

 Invincible Fist, for instance, was a Shaw Brothers film.  Hong Kong is
 famous for its Kung Fu films =P


 On Thu, May 9, 2013 at 8:57 PM, Wang, Shane shane.w...@intel.com wrote:

   If the next release summit is going to be held in “Japan”, it should
 be easier to give a name :)


 --

 Shane Wang

 *From:* Openstack [mailto:openstack-bounces+shane.wang=
 intel@lists.launchpad.net] *On Behalf Of *Rajesh Vellanki
 *Sent:* Friday, May 10, 2013 3:50 AM
 *To:* Atul Jha; Matt Joyce; John Wong

 *Cc:* Thierry Carrez; openstack@lists.launchpad.net
 *Subject:* Re: [Openstack] I release naming (calling APAC community)


 +1 


 Rajesh
  --

 *From:* Openstack [openstack-bounces+rvellank=
 rackspace@lists.launchpad.net] on behalf of Atul Jha [
 atul@csscorp.com]
 *Sent:* Thursday, May 09, 2013 2:31 PM
 *To:* Matt Joyce; John Wong
 *Cc:* Thierry Carrez; openstack@lists.launchpad.net
 *Subject:* Re: [Openstack] I release naming (calling APAC community)

 Hi all,

 I was too late to respond here. I would love to see the next release named
 India.  I am saying this because India is in APAC; still, I need to find
 the rule book for release naming.

 Just my $1 suggestion. :)

 Cheers!!

 Atul 
  --

 *From:* Openstack [openstack-bounces+atul.jha=
 csscorp@lists.launchpad.net] on behalf of Matt Joyce [
 matt.jo...@cloudscaling.com]
 *Sent:* Friday, May 10, 2013 12:58 AM
 *To:* John Wong
 *Cc:* Thierry Carrez; openstack@lists.launchpad.net
 *Subject:* Re: [Openstack] I release naming (calling APAC community)

 'Impossible' might be a good name... seeing as how impossible it is
 becoming to find a word beginning with I in the region.


 On Thu, May 9, 2013 at 12:04 PM, John Wong gokoproj...@gmail.com wrote:
 

  Good point. A long time ago we used to call Beijing Peking, but
  journalists now will always pick up the pinyin version: Beijing. Since
  we are after I, we are pretty much down to either Ichang or Ili. Should
  either one be chosen, I think that we should document the pinyin version as
  well. The difference between Y and I is historical, something we
  are all familiar with. For example, there are more Wongs than Wangs, and
  more Chans than Chens, in HK. 


 John


 On Thu, May 9, 2013 at 2:21 PM, Yi Yang yyos1...@gmail.com wrote:

 As the official transliterations of the city names are Yichang and Yili, we
 risk picking a Chinese city name that most Chinese won't
 recognize.

 If we really have to choose a city, IMO, Yichang (Ichang) would be a
 better choice, as Yili (Ili) is more than 2,300 miles away from Hong
 Kong, while Yichang (Ichang) is (only) 600 miles away.

 Just my $.02

 Yi 




 On 5/6/13 5:00 AM, Thierry Carrez wrote:

 Jacob Godin wrote:

 +1 Ili

  Thanks for all the suggestions! We probably have enough place names that
  we don't need to extend the naming rules to things that are not
  place names.

 So far, the only suggestion that fits in our strict naming rules is
 Ili (a city or county in the country/state where the design summit is
 held, single word of 10 characters or less). To have more than one
 option, we'll probably extend the rules to include other places (like
 street names) in Hong Kong itself.

  We'll go through name checks and set up a vote soon; I'll keep you posted.
 


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


 http://www.csscorp.com/common/email-disclaimer.php 



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RPC call timeouts

2013-04-12 Thread Eric Windisch
 
 Finally, it appears that when an RPC call times out, the original message, at 
 least in some cases like the 'network.hostname' queue, is left in place to be 
 consumed when a consumer is available. This seems like an odd design--if I 
 take corrective action in response to an RPC timeout, I don't think I want 
 that message processed ever. 
 


For what it is worth, Grizzly introduces TTLs for individual messages as 
they're injected into RabbitMQ and Qpid.  This was already the behavior for 
ZeroMQ, in Folsom.
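
For anyone curious what a per-message TTL looks like at the AMQP level, here is 
a minimal sketch using the pika client. This is an illustration only, not the 
oslo/nova rpc code; the queue name and timeout are placeholders.

import pika

# Illustration only -- shows how a per-message TTL is expressed at the AMQP
# level: the broker discards the message if no consumer picks it up within
# the expiration window.
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='network.hostname')

rpc_timeout_seconds = 60
channel.basic_publish(
    exchange='',
    routing_key='network.hostname',
    body='{"method": "example", "args": {}}',
    properties=pika.BasicProperties(
        # AMQP expects the expiration as a string of milliseconds.
        expiration=str(rpc_timeout_seconds * 1000)))
connection.close()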

Regards,
Eric Windisch


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] ODS schedule app for Android?

2013-04-11 Thread Eric Windisch
Is there a good reason there is no Android app published for ODS? 
I've used the website on my phone in the past and it is okay, but not great.

I see there is an iOS app, but sched.org offers applications for both 
ecosystems.

Regards,
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ODS schedule app for Android?

2013-04-11 Thread Eric Windisch


On Thursday, April 11, 2013 at 18:54, Stefano Maffulli wrote:

 On Thu 11 Apr 2013 03:22:46 PM PDT, Eric Windisch wrote:
  Is there a good reason there is no Android app published for ODS?
 
 
 I have no idea... is there supposed to be one?
 
 I use no app, I'm in the "I can't stand apps" phase. I get the calendar 
 via .ics in my calendar applications. The ics feed can be accessed from 
 the mobile url:
 
 http://openstacksummitapril2013.sched.org/mobile-site#.UWc-XkmJSoM
 

Right, there are viable alternatives to the app. However, since there *is* an 
app available from sched.org for organizers, and the iOS app is being made 
available to attendees, the question is: why the disparity?

Regards,
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ODS schedule app for Android?

2013-04-11 Thread Eric Windisch
Someone has just informed me of the @OpenStack twitter feed which reads: 

Coming to the Summit? Download the new iphone app! (Android coming soon) 
awe.sm/dE87S (http://awe.sm/dE87S)

Case closed, I guess ;-) 

Regards,
Eric Windisch


On Thursday, April 11, 2013 at 19:48, Eric Windisch wrote:

 
 
 On Thursday, April 11, 2013 at 18:54 PM, Stefano Maffulli wrote:
 
  On Thu 11 Apr 2013 03:22:46 PM PDT, Eric Windisch wrote:
   Is there a good reason there is no Android app published for ODS?
  
  
  I have no idea... is there supposed to be one?
  
  I use no app, I'm in the I can't stand apps phase. I get the calendar 
  via .ics in my calendar applications. The ics feed can be accessed from 
  the mobile url:
  
  http://openstacksummitapril2013.sched.org/mobile-site#.UWc-XkmJSoM
  
 
 Right, there are viable alternatives to the app. However, since there *is* an 
 app available from sched.org (http://sched.org) for organizers, and the iOS 
 app is being made available to attendees, the question is: why the disparity?
 
 Regards,
 Eric Windisch
 
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Gerrit Review + SSH

2013-04-05 Thread Eric Windisch
 As the OS dev cycle involves the Gerrit review tool, which requires ssh into the
 gerrit server, I was wondering if any of you guys face problems where your
 company/org does not allow ssh to external hosts.


I'm not sure if we support it or not (I don't think we do), but Gerrit
generally supports pushing over HTTPS. Someone on the QA team could comment
on whether or not it is worth providing support.

More information:
https://groups.google.com/d/msg/repo-discuss/7BXB_t7cHs8/KQ6TCLKYEh4J

Regards,
Eric Windisch
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] TC candidacy

2013-03-18 Thread Eric Windisch
Hello,

I'd like to run for a seat on the Technical Committee. I am a Principal 
Engineer at Cloudscaling, but I am running as an individual.

For over two years, beginning with Bexar, I have been working to deliver working 
OpenStack solutions to customers. My first code contributions landed in Cactus, 
but my contributions have not been limited to code alone. I propose and drive 
summit and mailing list discussions, contribute to those discussions led by 
others, have begun contributing to packaging efforts, and soon intend to land 
documentation changes.

The code I do write and review tends to be in Oslo. Working with Oslo, and with 
OpenStack deployments, I have the perspective to work across projects.

Before my involvement with OpenStack, in my former role as the owner and 
technical director of a VPS hosting company, I was already building open source 
cloud infrastructure as early as 2007.  For years preceding the existence of 
OpenStack, I've worked on many of the technical problems and challenges facing 
a cloud, not only those related to virtualization, networking, and storage, but 
also billing, metering, and user interfaces.

The number of companies and contributors involved in OpenStack is exploding. 
Each conference has been shockingly larger than the last. We have new projects 
being added in each 6-month release. These are problems that the TC must be 
prepared to deal with. We need technical leadership that understands the 
problems of these projects, can help them succeed, and especially for those 
individually elected seats -- can bring them together.

I am deeply committed to the success of OpenStack and to open source cloud. I 
thank you for your consideration and ask that you please vote for me. 

Regards,
Eric Windisch



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RabbitMQ

2013-01-02 Thread Eric Windisch


On Tuesday, January 1, 2013 at 07:16 AM, Daniel He wrote:

 I also thought of zeromq, but we need some quantitative measure. Has
 anyone tested zeromq already? Thanks.
 
I've seen ZeroMQ based messaging running in multiple production OpenStack 
installations.

As the primary author of the zeromq messaging driver, my goal has been to 
achieve partition tolerance and high availability. Performance is not a primary 
motivator and is not something I have concerned myself with too deeply. I 
believe the pattern implemented by the zeromq driver should prove performant, 
but I also recognize that I have chosen not to prematurely optimize, and there 
is room for improvement.

In terms of partition tolerance and high availability, I believe (perhaps with 
some bias) that the ZeroMQ driver is superior to AMQP-based messaging systems 
for the specific use-cases within Nova, Cinder, Glance, and Quantum.  However, 
the complexity of supporting fanout and topic exchanges with ZeroMQ makes the 
driver the most difficult to configure in the short-term. For Nova, the 
configuration is actually quite basic, but for Quantum, the zeromq driver is 
currently impractical with upstream code. Specifically, considerable 
improvements must be made to the matchmaker modules, which handle host 
discovery for fanout and topic exchanges. The matchmaker is pluggable, so I 
hesitate to say what is out there now is wholly incapable of supporting 
Quantum, but we need something better upstream.

I'm seeking to make the driver configuration-free, hopefully within Grizzly, to 
make it the easiest out-of-the-box experience available within OpenStack. You 
wouldn't need a dedicated broker (or brokers), configuration of messaging 
hosts, usernames, or passwords. If successful, messaging will just work out 
of the box with ZeroMQ selected.

Regards,
Eric Windisch





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Using nova-volumes openstack LVM group for other pourposes

2012-10-24 Thread Eric Windisch


On Wednesday, October 24, 2012 at 14:56, Daniel Vázquez wrote:

 Hi here!
  
 Can we create and use new logical volumes for our own/custom use (outside of
 OpenStack) in the nova-volumes OpenStack LVM group, and use them alongside
 OpenStack's operations?
 IMO it's LVM and no problem, but does it have collateral consequences for OpenStack?
  
  
  

I generally advise not to do this due to potential security concerns.

In practice, your concerns will be with deleting manually created volumes and 
creating volumes that match the pattern set in the nova-volumes/cinder 
configuration.

Regards,
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Use of MAC addresses in Openstack VMs

2012-10-23 Thread Eric Windisch
  
  The issue reported with that is that it creates a smaller space for
  collision amongst OpenStack users. Encouraging people to use
  locally-assigned OUI or buy their own might therefore be a better strategy.
  
I see the problem being that we shouldn't automatically choose a random OUI. 
Administrators might be using local OUIs elsewhere.   Isn't collision with 
other devices that might have locally-assigned OUIs more immediately 
threatening than colliding with deployments in (presumably) physically separate 
domains?

If a random OUI might be dangerous and we're going to choose a non-random OUI, 
why not make it our own?

Xen, VMware, and Hyper-V provide their own OUIs for automatically generated 
addresses. Yes, these collide between deployments. XenSource goes so far as 
to recommend using one of the following strategies, 
in order of preference:

 1) Use your own OUI
 2) Use a random OUI
 3) Use the XenSource-assigned OUI
 

When I first deployed Xen, it would choose #3 by default, because while it 
wasn't the best option overall, it worked deterministically out of the box 
without risk of collisions (outside of other Xen hosts' VMs).  Colliding with 
artifacts of your own software is better than colliding with local operator 
configurations and preferences.

Regards,
Eric Windisch


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Use of MAC addresses in Openstack VMs

2012-10-20 Thread Eric Windisch


Sent from my iPad

On Oct 20, 2012, at 10:19, Salvatore Orlando sorla...@nicira.com wrote:

 I understand your concerns about conflicts with already assigned OUIs.
 It is however my opinion that it is not up to the Openstack Foundation, but 
 to entities deploying Openstack, to buy MAC OUIs.

For comparison and reference, Xen has its own OUI which is used by default for 
its VMs.  Users of Xen do not need to apply for their own OUIs and I don't 
believe they should.

It isn't necessary for VMs to have globally unique MACs, but they shouldn't 
overlap with devices from other vendors.

I believe OpenStack should have its own OUI.
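
To illustrate the idea, here is a minimal sketch of generating MAC addresses 
under a single fixed OUI. The prefix below is only a placeholder, not a claim 
about any assigned OUI; the point is that the first three octets stay constant 
while the tail is randomized.

import random

# Sketch only: build MAC addresses under a single fixed OUI. The prefix below
# is a placeholder; substitute an OUI the project (or the deployer) controls.
OUI = (0xfa, 0x16, 0x3e)

def generate_mac(oui=OUI):
    """Return a MAC with a constant 3-octet OUI and a random 3-octet tail."""
    tail = [random.randint(0x00, 0xff) for _ in range(3)]
    return ':'.join('%02x' % octet for octet in list(oui) + tail)

print(generate_mac())  # e.g. fa:16:3e:4a:91:0c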

Regards,
Eric Windisch
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Versioning for notification messages

2012-10-09 Thread Eric Windisch



On Tuesday, October 9, 2012 at 15:58, David Ripton wrote:

 On 10/09/2012 01:07 PM, Day, Phil wrote:
 
  What do people think about adding a version number to the notification
  systems, so that consumers of notification messages are protected to
  some extent from changes in the message contents ?
 

Right now, there is no appropriate or acceptable way to consume notifications. 
Plainly, while notifications exist, they shouldn't be considered stable or even 
usable until this exists.

Message formats and versioning should be a consideration in the effort to 
create a reusable consumption pattern.
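
To make the idea concrete, here is a rough sketch of a notification envelope 
with a top-level version field added. The surrounding field names follow the 
existing notifier format; the 'version' key itself is hypothetical, along the 
lines Phil is proposing.

# Rough sketch of a notification envelope with a proposed top-level version
# field. 'version' is hypothetical and does not exist today.
notification = {
    'message_id': '...',
    'publisher_id': 'compute.host1',
    'event_type': 'compute.instance.create.end',
    'priority': 'INFO',
    'timestamp': '2012-10-09 19:58:00.000000',
    'version': '1.0',  # proposed: bump whenever the payload format changes
    'payload': {'instance_id': '...', 'state': 'active'},
}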

Regards,
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Announcing candidacy for Technical Committee

2012-09-19 Thread Eric Windisch
I am announcing my candidacy for the Technical Committee.

For nearly 10 years, I had been the owner of a small web hosting and VPS hosting
company. There, I provided leadership and performed managerial duties, in addition to
architecting and implementing our own (open source) virtualization/cloud
management solution, an effort we began in 2006.

For over a year, I've been at Cloudscaling helping deliver similar
automation technology to others.



I have been a contributor to Nova since Cactus, but recently have become
more involved in openstack-common.

I believe in providing fault-tolerant / resilient, scalable architecture,
modular software design, and a security-minded approach to implementation.

To date, most of my contributions have been to advance these points.
In Folsom, I contributed a ZeroMQ-based RPC driver to provide scalability
and fault tolerance. In Cactus, I had contributed a simple, but extensive, fix
to utils.execute as a significant security improvement. I intend to make further
such improvements to the scalability and security of OpenStack, both directly,
and through leadership.

Furthermore, I have lent my voice to security-related topics such as improving
rootwrap and configuration-drive / metadata injection, and I own a blueprint
to introduce message signing in Grizzly. At summits, I have aimed to voice the
promotion of scalable patterns and security-mindedness.

In Grizzly, besides message signing, I would like to see improvements
in the database handling, increased cooperation with eventlet coroutines, and
a more extensive security review with subsequent fixes. I also believe
that reasonable tradeoffs in the architecture have been made at times,
but that these are owed re-evaluation, because continued improvements
mean that valid decisions do not necessarily remain valid in an evolving
architecture.

I believe that OpenStack requires leaders with experience 'in the trenches' of
operations, implementation, and of course, leadership. I ask for your trust,
and your votes, in this coming Technical Committee election.



Thank you,
Eric Windisch



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-14 Thread Eric Windisch
On Tuesday, August 14, 2012 at 14:30, Matt Joyce wrote:
 I have to ask.  Wasn't FUSE designed to do a lot of this stuff?  It is 
 userspace and it doesn't do nasty stuff to file systems.  Why aren't we going 
 that route?
FUSE was really designed for the opposite scenario. FUSE modules run as 
daemons, they're not libraries.  These daemons attach to a character device and 
map userspace code into the VFS.  Instead, we want to access a filesystem from 
userspace code.  It is a shame, however, because you're right… there is plenty 
of code there that knows how to read filesystems in userspace.  Unfortunately, 
the FUSE design really doesn't do us any favors.

That said, there are some crazy options to fix that. One could theoretically 
replace the FUSE character device with one that spoke to userspace processes, 
instead of interacting with the VFS.  There has even been work into creating 
user-space character devices.  One could also make FUSE work with Unix sockets 
as an alternative to character devices…

None of this is out of the box, tested, or even in existence...

Regards,
Eric Windisch


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-14 Thread Eric Windisch



On Tuesday, August 14, 2012 at 16:41, Matt Joyce wrote:

 I get what you are saying.  And for the sake of compatibility with other 
 clouds and their images obviously that's the way to go, but my inner nerd is 
 screaming Well, about that...  and wanting me to rally people to the idea 
 of putting the logic inside the images rather than inside of the cloud.   Let 
 init negotiate the api access and produce the filesystems it needs to get 
 booted up properly.  
 
Are we having the same conversation? :-)  You were arguing for FUSE; I simply 
said that particular user-space solution isn't very viable, due to its design.  
Otherwise, I believe you and I agree.

I agree that the approach being taken here isn't ideal. However, I also 
advocate that if this path is going to be traveled, it should be done in the 
safest way possible - in userspace, and write-once-read-never, if at all 
possible.  I'm not too confident of libguestfs, but I understand why 
it is attractive in the absence of good userspace filesystem tools.  Several have 
pointed to mtools as one, and I'll also add debugfs to this list, for those of 
strong conviction.

Regards,
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Setting Expectations

2012-08-10 Thread Eric Windisch


On Aug 10, 2012, at 20:49, Nathanael Burton nathanael.i.bur...@gmail.com 
wrote:

 I personally equate OpenStack to the Linux Kernel. It's the foundation and 
 core components that, in OpenStack's case, make up an Infrastructure as as 
 Service (IaaS) system, a cloud kernel.  We should expect the core 
 components and APIs to be stable with sane deprecation policies, but 
 OpenStack shouldn't do everything for everyone. It should facilitate and 
 provide the stable framework or foundation in which to build production 
 quality, large scale (and small) public and private IaaS systems. In and of 
 itself I believe OpenStack is not an IaaS distribution, ala Linux 
 distributions (Debian, Fedora, RedHat, SuSe, Ubuntu) which take the Linux 
 kernel and build all the user-space and complementary services that make up a 
 manageable, secure, monitored system.
 

An even better example might be Apache. They have their own foundation and have 
a number of services that get installed to machines, but they don't have a 
distribution or any clear deployment solutions.  Some of their applications, 
such as httpd, are just core pieces that get installed on a single system, 
where services on multiple machines don't communicate; others are 
horizontally scaling solutions with inter-process communication, such as 
Hadoop.  Whatever the case, they're still not building a distribution.

OpenStack in some ways appears to be the kernel on which applications run, but 
its applications are just applications. If the question is where the foundation 
draws the line at acceptance of projects, it is an interesting one... as long 
as there is a foundation, you can't really use Linux as any sort of example.  
Instead, if you want to draw parallels to operating systems, you'll have to 
look more closely to the BSD systems.

With BSD, they've coupled the kernels and the distributions. I do not think 
this is a model that OpenStack should follow, but I do see a tendency of some 
toward this. Instead, I believe the community and the foundation should move 
into the direction of Apache.

If someone wants to create their own independent distribution, they should, but 
it shouldn't be part of the project or blessed by the foundation. Instead, they 
would follow the steps of Slackware, Debian, and Gentoo; not the steps taken by 
FreeBSD. The community already has a number of emerging proprietary and/or 
corporate-sponsored distributions; it would not do the community a favor for 
the foundation to create its own. 

Regards,
Eric Windisch
(sent from my iPad)___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Setting Expectations

2012-08-10 Thread Eric Windisch


On Friday, August 10, 2012 at 22:53, George Reese wrote:

 First, thanks to Andrew for starting this thread. I think it's an important 
 discussion.
  
 Second, the problem with the analogies made so far is, IMHO, at the API level 
 and what it does to the ecosystem.
  
 Let's take, for example, Apache HTTP Client stuff. 3.1 and 4.x are wildly 
 different object models. But that's OK for the Apache model in a way that 
 isn't for OpenStack.
Aren't you making the wrong comparison here?  This might apply to the OS API 
client libraries, or euca2ools (which is NOT an OpenStack project), but this 
comparison does not apply to the OpenStack services or EC2 endpoint 
compatibility.  Instead, the comparison of EC2 endpoint compatibility should be 
made to Apache's HTTP endpoint, their httpd.

A better question is: Has the Apache httpd broken HTTP compatibility in any 
significant way? Aside, perhaps, from allowing you to configure it in a 
broken manner?

In fact, this might be a good comparison, because Apache's httpd doesn't HAVE 
to be deployed in a standards-compliant way, although tools (i.e. browsers) and 
operators have figured out how to make this work, and to a large degree, people 
are able to successfully load webpages served by the Apache httpd.

In a similar way, OpenStack Nova installations may vary greatly in their 
behavior. They don't HAVE to be deployed in a standards-compliant way (whatever 
that means); yet, hopefully, people will be able to successfully launch 
instances served by OpenStack Nova with a common set of tools.

I'm often advocating compatible clouds over advanced client API wrappers (such 
as Dasein), but the reality is that it needs to happen on both ends. The cloud 
server software needs to enable a compatible and standards-compliant service 
endpoint, enforced or not… and the client API libraries need to be flexible 
enough to handle a variety of services that might not be 100% identical.  Just 
like the HTTP servers and client libraries that we have today.

Regards,
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] Making the RPC backend a required configuration parameter

2012-08-09 Thread Eric Windisch
  
 I'm not talking about all configuration options. I'm talking about this
 single configuration option. Existing installations that did not
 specify rpc_backend because they did not need to will break if the
 default is changed.
  
They would only break in Grizzly, following a one-release deprecation 
schedule. If they run Folsom, they'll only get warnings.  When 
it becomes required, if they haven't yet updated their configuration, and if 
they don't see it clearly stated in release-notes, they'll discover the change 
quickly within their testing.

If they ignore the warnings in Folsom, don't notice the release-notes, and 
don't do any testing before deploying; if they don't test Grizzly, deploy into 
production, and run into this… well, I hope they're a small deployment, in 
which case it won't be a very big thorn.  If they're a large deployment and 
they ignore all of this, including ignoring any need for testing before doing a 
large rollout…

Regards,
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-08 Thread Eric Windisch
 
 I think the first step is to make sure that a filesystem that the guest
 touched never gets used by the host again; not doing so is just way too
 much of a security risk.
 
 Second, there are lots of options to create filesystems entirely in
 userspace with contents that can later be written to:

 Especially UDF is a very interesting option, as just about any modern
 operating system supports it. The same is true for vfat, but vfat is
 fairly limiting for many use cases.


Agreed on all points. 

 
 Why do we ever read a filesystem touched by a guest in the host?
I believe this is more about reading filesystems that were uploaded by users into 
Glance. However, it is essentially the same thing.

I don't think we need to do this and don't think we should do this. Clearly, 
however, someone somewhere, at some point, thought they wanted this.

Regards,
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Making the RPC backend a required configuration parameter

2012-08-08 Thread Eric Windisch
I believe that the RPC backend should no longer have any default.



Historically, it seems that the Kombu driver is the default only because it existed 
before all others and before there was an abstraction. With multiple 
implementations now available, it may be time for a change.

Why?
* A default skews the attitudes and subsequent architectures toward a specific 
implementation


* A default skews the practical testing scenarios, ensuring maturity of one 
driver over others.
* The kombu driver does not work out of the box, so it is no more reasonable 
as a default than impl_fake.
* The RPC code is now in openstack-common, so addressing this later will only 
create additional technical debt.

My proposal is that for Folsom, we introduce a future_required flag on the 
configuration option, rpc_backend. This will trigger a WARNING message if the 
rpc_backend configuration value is not set.  In Grizzly, we would make the 
rpc_backend variable mandatory in the configuration.
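
As an illustration of the Folsom-stage behavior, here is a minimal sketch. The 
future_required flag does not exist yet, and the import path below is an 
assumption; the sketch simply shows the warning the proposal describes, using 
plain option registration and logging.

import logging

# Import path is an assumption; adjust for the project at hand.
from nova.openstack.common import cfg

LOG = logging.getLogger(__name__)

CONF = cfg.CONF
CONF.register_opts([
    cfg.StrOpt('rpc_backend',
               default=None,
               help='The messaging driver to use (no default under this '
                    'proposal)'),
])

def check_rpc_backend():
    # Folsom-stage behavior under the proposal: warn loudly, keep working.
    if CONF.rpc_backend is None:
        LOG.warning('rpc_backend is not set; it will become a required '
                    'option in the next release.')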

Mark McLoughlin wisely suggested this come before the mailing list, as it will 
affect a great many people. I welcome feedback and discussion.

Regards,
Eric Windisch



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Making the RPC backend a required configuration parameter

2012-08-08 Thread Eric Windisch
 
 
 Regardless of the actual default in openstack-common, the devstack 
 default is going to skew all of this as well (if not more so), and 
 devstack does need a default. Much like db backend.

Devstack doesn't need a default, necessarily. Or more clearly, it doesn't need 
to have a hard default. It could have a soft-default, via a prompt on first-run 
unless defined in the localrc, similar to how passwords are currently handled.

Regards,
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-07 Thread Eric Windisch

 Pádraig Brady from Red Hat discovered that the fix implemented for
 CVE-2012-3361 (OSSA-2012-008) was not covering all attack scenarios. By
 crafting a malicious image with root-readable-only symlinks and
 requesting a server based on it, an authenticated user could still
 corrupt arbitrary files (all setups affected) or inject arbitrary files
 (Essex and later setups with OpenStack API enabled and a libvirt-based
 hypervisor) on the host filesystem, potentially resulting in full
 compromise of that compute node.
  

Unfortunately, this won't be the end of vulnerabilities coming from this 
feature.

Even if all the edge-cases around safely writing files are handled (and I'm not 
sure they are), simply mounting a filesystem is a very dangerous operation for 
the host.

The idea had been suggested early on to support ISO9660 filesystems created 
with mkisofs, which can be created in userspace, are read-only, and are fairly 
safe to produce, even as root on a compute host.

That idea was apparently shot down because the people who 
documented/requested the blueprint wanted a read-write filesystem, which you 
cannot obtain with ISO9660.  Now, everyone has to live with a serious 
technical blunder.
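
For reference, producing such an ISO9660 image is a purely userspace 
operation. A hedged sketch follows; the genisoimage invocation and the volume 
label are illustrative assumptions, not what Nova actually ships.

import os
import subprocess
import tempfile

def build_config_iso(source_dir, label='config-2'):
    """Build a read-only ISO9660 image from source_dir, entirely in userspace.

    Sketch only: flags and label are assumptions for illustration.
    """
    fd, path = tempfile.mkstemp(suffix='.iso')
    os.close(fd)
    subprocess.check_call(['genisoimage', '-o', path, '-V', label,
                           '-r', '-J', '-quiet', source_dir])
    return path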

Per the summit discussion Etherpad:
 injecting files into a guest is a very popular desire.

Popular desires are not necessarily smart desires. We should remove all file 
injection post-haste.

Regards,
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-07 Thread Eric Windisch
 
 - if the user specifies an image from glance for the injection to occur
 to. This is almost certainly functionality that you're not going to like
 for the reasons stated above. It's there because v1 did it, and I'm
 willing to remove it if there is a consensus that's the right thing to
 do. However, file IO on this image mount is done as the nova user, not
 root, so that's a tiny bit safer (I hope).
 

This might be kind-of okay if it uses libguestfs, but I'd need to look more 
closely at libguestfs before considering it safe. If it is only updating vfat, 
another option is mtools which is entirely userspace and can be run with some 
safety on the host. 

Regards,
Eric Windisch


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-07 Thread Eric Windisch
  
 This might be kind-of okay if it uses libguestfs, but I'd need to look more 
 closely at libguestfs before considering it safe. If it is only updating 
 vfat, another option is mtools which is entirely userspace and can be run 
 with some safety on the host.  
  


I just realized you said glance… I'm assuming these are probably ext2/3/4 or 
other Linux filesystems.  Libguestfs might be the best option, besides simply 
not having that feature.

Regards,
Eric Windisch


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-07 Thread Eric Windisch


 What's the security vulnerability here? It's writing to something which
 might be a symlink to somewhere special, right?


Mounting filesystems tends to be a source of vulnerabilities in and of
itself. There are userspace tools as an alternative, but a standard OS
mount is clearly not secure. While libguestfs is such a userspace
alternative, and guestmount is in some ways safer than a standard mount, it
is not used by Nova in a way that has any clear advantage over a standard
mount, as it runs as root.

As this CVE indicates, injecting data into a mounted filesystem has its own
problems, whether or not that filesystem is mounted directly in-kernel or
via FUSE. There are also solutions here, some very complex, few if any are
foolproof.

The solution here may be to use libguestfs, which seems to be a modern
alternative to mtools, but to use it as a non-privileged user and to forego
any illusions of mounting the filesystem anywhere via the kernel or FUSE.

-- 
Regards,
Eric Windisch
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-07 Thread Eric Windisch


 Also notice that libguestfs is supported as an injection mechanism
 which mounts images in a separate VM, with one of the big advantages
 of that being better security.


Are you sure about this? Reading the driver source, it appears to be using
'guestmount' as glue between libguestfs and FUSE. Worse, this is done as
root.  This mounts the filesystem in userspace on the host, but the
userspace process runs as root.  Because the filesystem is mounted, all
reads and writes must also happen as root, leading to potential escalation
scenarios.

It does seem that libguestfs could be used securely, but it isn't.

-- 
Regards,
Eric Windisch
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-08-03 Thread Eric Windisch
 
 Ah, yes. I've urged the team to use the term ServiceGroup instead of the
 Zookeeper membership terminology -- as membership has other
 connotations in Glance and Nova -- for instance, membership in a
 project/tenant really has nothing to do with the concept of service
 groups that can monitor response/hearbeat of service daemons.
 
I see.  For some additional context, I'm looking to use this to manage consumers 
of round-robin and fanout queues with the ZeroMQ driver, instead of the static 
hashmap that is used currently.

Regards,
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-08-02 Thread Eric Windisch


On Monday, July 23, 2012 at 12:04 PM, Doug Hellmann wrote:

   Sorry if this rekindles old arguments, but could someone summarize the
   reasons for an openstack-common PTL without voting rights? I would
   have defaulted to giving them a vote *especially* because the code in
   common is, well, common to all of the projects.
  
  
  So far, the PPB considered openstack-common to be driven by all PTLs,
  so it didn't have a specific PTL.
  
  As far as future governance is concerned (technical committee of the
  Foundation), openstack-common would technically be considered a
  supporting library (rather than a core project) -- those can have leads,
  but those do not get granted an automatic TC seat.
 
 
 OK, I can see the distinction there. I think the project needs an official 
 leader, even if we don't call them a PTL in the sense meant for other 
 projects. And I would expect anyone willing to take on the PTL role for 
 common to be qualified to run for one of the open positions on the new TC, if 
 they wanted to participate there.
The scope of common is expanding. I believe it is time to seriously consider a 
proper PTL. Preferably, before the PTL elections.

The RPC code is there now. We're talking about putting the membership services 
there too, for the sake of RPC, and even the low-level SQLAlchemy/MySQL access 
code for the sake of membership services. A wrapper around pyopenssl is likely 
to land there too, for the sake of RPC. These are just some of the changes that 
have already landed, or are expected to land within Folsom.

Common contains pieces essential to the success of OpenStack which are 
currently lacking (official) leadership. Everyone's problem is nobody's problem.

Consider this my +1 on assigning a PTL for common.

Regards,
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-08-02 Thread Eric Windisch
 
 What do you mean by membership services?
See the email today from Yun Mao.  This is a proposal to have a pluggable 
framework for integration services that maintain memberships. This was 
originally designed to replace the MySQL heartbeats in Nova, although there will 
be a mysql-heartbeat backend by default as a drop-in replacement. There is a 
zookeeper backend in the works, and we've discussed the possibility of building 
a backend that can poll RabbitMQ's list_consumers.

This is useful for more than just Nova's heartbeats, however.  This will 
largely supplant the requirement for the matchmaker to build these backends in 
itself, which had been my original plan (the matchmaker is already in 
openstack-common).  As such, it had already been my intent to have a 
MySQL-backed matchmaker.  The only thing new is that someone has actually 
written the code.

In the first pass, the intention is to leave the matchmaker in and introduce 
the membership modules.  Then, the matchmaker would either use the new 
membership modules as a backend, or even replaced entirely.

Regards,
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Suspend/Stop VM

2012-07-30 Thread Eric Windisch
I'm not sure where the actions are documented, but you can infer them from here:
https://github.com/openstack/python-novaclient/blob/master/novaclient/v1_1/servers.py
 

Below, I've pasted a few of the methods related to this thread.  These are 
POST'ed to the action URI, as Mark suggested. 

Regards,
Eric Windisch

def stop(self, server):
    """Stop the server."""
    return self._action('os-stop', server, None)

def start(self, server):
    """Start the server."""
    self._action('os-start', server, None)

def pause(self, server):
    """Pause the server."""
    self._action('pause', server, None)

def unpause(self, server):
    """Unpause the server."""
    self._action('unpause', server, None)

def lock(self, server):
    """Lock the server."""
    self._action('lock', server, None)

def unlock(self, server):
    """Unlock the server."""
    self._action('unlock', server, None)

def suspend(self, server):
    """Suspend the server."""
    self._action('suspend', server, None)

def resume(self, server):
    """Resume the server."""
    self._action('resume', server, None)
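
For completeness, a hedged usage sketch with python-novaclient; the 
credentials, auth URL, and server name below are placeholders.

from novaclient.v1_1 import client

# Placeholders: substitute real credentials and endpoints.
nova = client.Client('admin', 'secret', 'demo', 'http://keystone:5000/v2.0/')

server = nova.servers.find(name='my-instance')
nova.servers.stop(server)   # POSTs {"os-stop": null} to /servers/<id>/action
nova.servers.start(server)  # POSTs {"os-start": null}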



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] Proposal for Sean Dague to join nova-core

2012-07-23 Thread Eric Windisch
On Friday, July 20, 2012 at 13:49, Vishvananda Ishaya wrote:
 
 When I was going through the list of reviewers to see who would be good for 
 nova-core a few days ago, I left one out. Sean has been doing a lot of 
 reviews lately[1] and did the refactor and cleanup of the driver loading 
 code. I think he would also be a great addition to nova-core.
+1.  I've read through the list and gerrit. Sean seems to be doing a great job. 

Regards,
Eric Windisch


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] High Available queues in rabbitmq

2012-07-23 Thread Eric Windisch
 
 The only problem is, it breaks backward compatibility a bit: my patch
 assumes you have a flag rabbit_addresses which should look like
 rmq-host1:5672,rmq-host2:5672 instead of the prior rabbit_host and
 rabbit_port flags.
 
 Guys, can you advise on a way to do this without being ugly and
 without breaking compatibility?
 
 

One way would be to use the matchmaker, which I introduced to solve a similar 
problem with the ZeroMQ driver. The matchmaker is a client-side emulation of 
bindings/exchanges for mapping topic keys to an array of topic/host pairs.

You would query the matchmaker with a topic (key) and it would return tuples in 
the form of:
 (topic, broker_ip)

In the ZeroMQ case, the broker_ip is always the peer, but with RabbitMQ, this 
would be one (or more) of your selected brokers.  Generally, you would return 
multiple brokers when you're doing fanout messaging.
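
A minimal, hypothetical sketch of that lookup shape follows; it is not the 
actual openstack-common matchmaker API, just an illustration of the idea. The 
broker addresses echo the rmq-host examples above.

# Hypothetical, minimal matchmaker-style lookup.
class StaticMatchMaker(object):
    def __init__(self, ring):
        # 'ring' maps a topic to the list of broker (or peer) addresses for it.
        self.ring = ring

    def queues(self, topic):
        """Return (topic, host) tuples for every host serving this topic."""
        return [(topic, host) for host in self.ring.get(topic, [])]

mm = StaticMatchMaker({'scheduler': ['rmq-host1:5672', 'rmq-host2:5672']})
print(mm.queues('scheduler'))
# [('scheduler', 'rmq-host1:5672'), ('scheduler', 'rmq-host2:5672')]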


Regards,
Eric Windisch
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] Proposal to add Yun Mao to nova-core

2012-07-18 Thread Eric Windisch


On Wednesday, July 18, 2012 at 19:10, Vishvananda Ishaya wrote:

 Hello Everyone!
 
 Yun has been putting a lot of effort into cleaning up our state management, 
 and has been contributing a lot to reviews[1]. I think he would make a great 
 addition to nova-core.
+1

Regards,
Eric Windisch


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] File injection support

2012-07-13 Thread Eric Windisch



On Friday, July 13, 2012 at 20:35, Pádraig Brady wrote:

   
  More interesting though, and what might be of use to other people, the
  kpartx -a calls get run and then the code in nova/virt/disk/mount.py
  immediately checks for whether or not the newly created
  /dev/mapper/nbdXXpX partitions exist. They'd actually get created but
  the os.exists call would fail. Apparently the os.exists call was
  getting run too soon. I added a time.sleep() after both the 'kpartx
  -a' and 'kpartx -d' calls, to give things time to catch up before the
  os.exists calls and things worked much better.
  
 Sigh.
  
 The amount of synchronization bugs I've noticed in lower
 Linux layers lately is worrying.
  


The kernel tools can certainly be asynchronous at times.

Perhaps try to see if using inotify would solve this. It is platform-dependent, 
but it is the best way to solve these problems without race conditions.  If 
we're calling kpartx, platform independence is unlikely to be an issue anyway.

However, if compatibility is desired, BSD/MacOS provide similar functionality 
via kqueue, and Windows provides FindFirstChangeNotification.
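
As a hedged sketch of the inotify approach (using the pyinotify library; the 
directory and device name are placeholders, and this is not what 
nova/virt/disk does today):

import os
import pyinotify

def wait_for_device(directory, name, timeout_ms=5000):
    """Wait for a device node (e.g. an nbdXXpX mapping) to appear, without sleeping."""
    path = os.path.join(directory, name)
    if os.path.exists(path):
        return True

    seen = {'found': False}

    class _OnCreate(pyinotify.ProcessEvent):
        def process_IN_CREATE(self, event):
            if event.name == name:
                seen['found'] = True

    wm = pyinotify.WatchManager()
    notifier = pyinotify.Notifier(wm, _OnCreate(), timeout=timeout_ms)
    wm.add_watch(directory, pyinotify.IN_CREATE)
    try:
        notifier.process_events()
        if notifier.check_events():
            notifier.read_events()
            notifier.process_events()
    finally:
        notifier.stop()
    # A final check covers the window between the first exists() test and
    # the watch being registered.
    return seen['found'] or os.path.exists(path)

# wait_for_device('/dev/mapper', 'nbd0p1')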

Regards,
Eric Windisch


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova Cells

2012-07-12 Thread Eric Windisch


On Thursday, July 12, 2012 at 15:13, Chris Behrens wrote:

 Partially developed. This probably isn't much use, but I'll throw it out 
 there: http://comstud.com/cells.pdf
 
We're going to have to sync once more on removing _to_server calls from the RPC 
layer.

With the matchmaker upstream now, we should be able to use this to provide 
N-broker support to the AMQP drivers, although we'd need to work this into 
amqp.py.  Also, since the design summit, I should note the code has moved in 
the direction of providing a Bindings/Exchanges metaphor, which I hope should 
be easier to digest from the perspective of the queue-server buffs.

Let me know when you're ready to have a chat about it, it might do better to do 
this on the phone or IRC than by email.

-- 
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Eric Windisch
 
 
 Excellent points. Let me make the following proposal:
 
 1) Leave the code in nova-volume for now.
 2) Document and test a clear migration path to cinder.
 3) Take the working example upgrade to the operators list and ask them for 
 opinions.
 4) Decide based on their feedback whether it is acceptable to cut the 
 nova-volume code out for folsom.
 
Finally something I can put a +1 against.

-- 
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Openstack-Common] RPC

2012-07-10 Thread Eric Windisch
 
 In addition to this I have a few additional questions and or concerns:
 1. When we import code from openstack common the test cases for the 
 modules are not imported (maybe I missed something with running setup). 
 When the code is copied the imports are updated. It would be nice to 
 know that the auto tests are also run in the context of the project 
 importing the code.


As Russell said, the tests inside common are intended to ensure independent 
functionality of the features within common. There should be tests in your own 
project that test your project's use of RPC. There has been some discussion on 
having integration tests for common that test official OpenStack projects for 
compatibility/breakages.

You might also want to look at and contribute to the thread "best practices for 
merging common into specific projects".
 2. I based my integration on the patch 
 https://review.openstack.org/#/c/9166/. A number of files were missing. 
 Should this have specifically mentioned the missing files or should the 
 rpc part have taken care of this?


Are you concerned that the RPC module itself has dependencies on 
openstack-common which are not being automatically resolved and included when 
you run update.py?

Regards,
Eric Windisch 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack-Common ZeroMQ

2012-07-10 Thread Eric Windisch


On Tuesday, July 10, 2012 at 13:24, Jason Kölker wrote:

 The zeromq tests are failing in jenkins. I created bug
 https://bugs.launchpad.net/openstack-common/+bug/1023060 for this.
 Anyone with an interest in ZeroMQ support, please help to resolve this
 bug.
  

I'm maintaining this code and would love to see it working again.

There were already bugs filed for this and changes already in Gerrit for 
review that, once committed, should fix the tests.

The bigger issue is getting people to do the reviews...

--  
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack-Common ZeroMQ

2012-07-10 Thread Eric Windisch
 
 The bigger issue is getting people to do the reviews...
 

Here is the link for those that want to help: 
https://review.openstack.org/#/q/status:open+project:openstack/openstack-common+branch:master+topic:bug/1021459,n,z

Regards,
Eric Windisch


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Don't bypass code-review.

2012-07-10 Thread Eric Windisch
I've had code reviews sitting out for over a week, looking to fix issues with 
the ZeroMQ driver in openstack-common. I'd love to get it fixed, and I nudged a 
couple of people to get the reviews in, but figured that it would get in 
eventually - and if not, I'd prod harder, or perhaps someone else who wanted 
to see these problems fixed would help review.  Instead... 

Today, I had someone ignore my reviews in progress, push his own review, and 
even APPROVED his own review.

What the hell?

Review in question:
https://review.openstack.org/#/c/9594/
(My) Reviews in progress:
https://review.openstack.org/#/q/status:open+project:openstack/openstack-common+branch:master+topic:bug/1021459,n,z

-- 
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Don't bypass code-review.

2012-07-10 Thread Eric Windisch
 
 Jenkins was failing to merge *ANY* code reviews for openstack-common.
 The root of the dependency tree of your patch sets is insufficient to
 make the test suite pass. The set of patches needed to go in in one
 commit. I was ping'd on IRC to check into the tests that were failing
 when the build hosts got pyzmq installed on them by another patch set. 
 
 I'm sorry that I bypassed the code review, but in light of nothing
 getting through due to the ZeroMQ tests being broken to begin with,
 there was not much I could do. The RPC bits got added in prior to test
 gating, and further these tests only trigger when pyzmq is installed.
 
I understand the dilemma, but it wasn't like I didn't have reviews in progress 
to address this.  If the patches weren't sufficient, contacting me or giving me 
a -1 would have been a better option. Alternatively, you could have 
investigated why pyzmq might have gotten installed, and seen if it could have 
been disabled.  Instead, you merged your change 30 minutes before contacting 
the mailing list or otherwise attempting communication.  It took, what, 10 
minutes for me to receive, read, and reply to your post?

I've already pushed an updated change that collapses the changes into a larger 
patch that should get Jenkins going again (tests pass locally; we'll see how 
Jenkins feels about it).

Right now, I'd just like to get past this. I'm sorry if I've been at all too 
rough on you.

I'd just appreciate that in the future, even if the build is broken, code 
review is not bypassed.  Additionally, when there is a reasonable way to reach 
the author of the code, especially if there is already a patch in review, 
opposing patches shouldn't be shoe-horned in without review or oversight.

 
Regards,
Eric Windisch 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-04 Thread Eric Windisch
On Tuesday, July 3, 2012 at 21:17 PM, Monty Taylor wrote:
 Interestingly enough - gerrit supports submodules pretty well... and it
 does exactly what Eric said below ... if both the project and
 superproject are in gerrit, and a change is made to the project, gerrit
 can automatically update the superproject reference.



Actually, I may not have been clear enough. I meant to say that we do NOT need 
to update the project reference automatically; this can be done manually, which 
is more reflective of the current development pattern. Yet, even if done 
manually, it would greatly minimize the double-commit scenario we have today. 
Someone could manually update the reference, which is a single-line change in a 
single commit, rather than merging and reintegrating many patches to many files.

 At least as an external library you can freeze a version requirement until 
 such time as you see fit to properly updated that code and *ensure* 
 compatibility in your project
Isn't this what we get with git submodules? Sure, that version is just a 
commit-id (or tag), but it isn't tracking HEAD, either. For stable releases, we 
can tag and update the reference to point to that tag.
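
To illustrate, bumping the pinned submodule reference would be a single small 
commit in the superproject (paths and tag names here are hypothetical):

    # inside the superproject checkout
    cd openstack-common            # the submodule directory
    git fetch origin
    git checkout 2012.1            # tag or commit to pin against
    cd ..
    git add openstack-common       # records only the new commit reference
    git commit -m "Update openstack-common reference to 2012.1"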

-- 
Eric Windisch


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Single global dependency list

2012-07-03 Thread Eric Windisch
I have to agree with others that copying files around is not ideal, and I can 
see the maintenance of this getting more involved as Nova becomes more coupled 
with common.

  Additionally, we'd make the copy only copy in the versions from
  openstack-common for package that were already listed in the target
  project, so that we wouldn't add django to python-swiftclient, for instance.
 

 


This seems to be a reasonable argument against using git submodules, but I'm 
afraid we might be losing more than we're gaining here.

Just because python-swiftclient depends on openstack-common, and django-using 
code exists there, doesn't mean that django needs to be installed for 
python-swiftclient. We might do better to use git submodules and solve the 
dependency problem than to continue down this copy-everything path.

Alternatively, speed up the movement from incubation to library.

Regards,
Eric Windisch
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] best practices for merging common into specific projects

2012-07-03 Thread Eric Windisch
git submodules don't have to be linked to the head of a branch. Instead of 
double-committing (for every commit), we can do a single commit in each project 
to change the git reference of the submodule. This would not be too far from 
the existing behavior, except that it would minimize the double commits. 

-- 
Eric Windisch


On Tuesday, July 3, 2012 at 15:47 PM, Andrew Bogott wrote:

 On 7/3/12 1:59 PM, Gabriel Hurley wrote:
  The notion that copying code is any protection against APIs that may change 
  is a red herring. It's the exact same effect as pegging a version of a 
  dependency (whether it's a commit hash or a real version number), except 
  now you have code duplication. An unstable upgrade path is an unstable 
  upgrade path, and copying the code into the project doesn't alleviate the 
  pain for the project if the upstream library decides to change its APIs.
  
  Also, we're really calling something used by more or less all the core 
  projects incubated? ;-) Seems like it's past the proof-of-concept phase 
  now, at least for many parts of common. Questions of API stability are an 
  issue unto themselves.
  
  Anyhow, I'm +1 on turning it into a real library of its own, as a couple 
  people suggested already.
  
  - Gabriel
 
 I feel like I should speak up since I started this fight in the first 
 place :)
 
 Like most people in this thread, I too long for an end to the weird 
 double-commit process that we're using now. So I'm happy to set aside 
 my original Best Practices proposal until there's some consensus 
 regarding how much longer we're going to use that process. Presumably 
 opinions about how to handle merge-from-common commits will vary in the 
 meantime, but that's something we can live with.
 
 In terms of promoting common into a real project, though, I want to 
 raise another option that's guaranteed to be unpopular: We make 
 openstack-common a git-submodule that is automatically checked out 
 within the directory tree of each other project. Then each commit to 
 common would need to be gated by the full set of tests on every project 
 that includes common.
 
 I haven't thought deeply about the pros and cons of code submodule vs. 
 python project, but I want to bring up the option because it's the 
 system that I'm the most familiar with, and one that's been discussed a 
 bit off and on.
 
 -Andrew
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack special sauce checking tool??

2012-06-28 Thread Eric Windisch
I've recently discovered that running code against Cython tends to catch things 
that pep8/pylint miss.  

One great thing it does is detect if a required import is missing. The other 
tools don't do this.  The only downside I've found so far has been that it has 
very limited support for __future__.

--  
Eric Windisch


On Thursday, June 28, 2012 at 13:48 PM, Timothy Daly wrote:

 nova has tools/hacking.py, which looks like it does check some import stuff, 
 among other things.
  
 -tim
  
 On Jun 28, 2012, at 10:15 AM, Joshua Harlow wrote:
  
  Hi all,
   
  I remember hearing once that someone had a openstack import/hacking style 
  checking tool.
   
  I was wondering if such a thing existed to verify same the openstack way of 
  doing imports and other special checks to match the openstack style.
   
  I know a lot of us run pep8/pylint, but those don’t catch these special 
  sauce checks
   
  And it would be nice to not have to manually check these (visually...) but 
  be able to run a tool that would just do the basic checks.
   
  Does such a thing exist??
   
  Thx,
   
  Josh
  
  
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
  
  


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RPC Semantics

2012-06-12 Thread Eric Windisch
We actually do have ACKs in ZeroMQ, as far as I understand how they work in 
AMQP, but they're really simple. The send() method is synchronous with the 
message being received on the other end. However, we don't wait for this; we 
spawn an eventlet coroutine instead, because there is no benefit to blocking 
the caller.
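
In rough terms, the pattern is this (a sketch, not the actual driver code):

    import eventlet

    def cast(send, message, timeout=30):
        """Fire the send in a green thread; the caller is never blocked."""
        def _send():
            # send() returns once the peer has received the message;
            # False tells Timeout to exit the block silently on expiry.
            with eventlet.Timeout(timeout, False):
                send(message)
        eventlet.spawn_n(_send)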

All calls have a timeout (TTL). The ZeroMQ driver also implements a TTL on the 
casts, and I'm quite sure we should support this in Kombu/Qpid as well to avoid 
a thundering-herd.

While there is no message persistence in the ZeroMQ driver, there is some 
limited benefit of having this on casts. There is limited or no benefit for 
calls, because the return value won't be received -- the calling stack is no 
longer going to do anything with the return value. (This would be a good case 
for a better Actor-driven model, because we could actually handle return values 
across a relaunched caller)

Anyway, in the ZeroMQ driver, we could have a local queue to track casts and 
remove them when the send() coroutine completes.  This would provide restart 
protection for casts. 

-- 
Eric Windisch


On Tuesday, June 12, 2012 at 09:55 AM, Johannes Erdfelt wrote:

 As part of a patch to add idempotency to the xenapi driver, I've
 modified impl_kombu driver to implement delayed ACKs. This is so RPC
 messages are retried after nova-compute restarts.
 
 https://review.openstack.org/#/c/7323/
 
 However, this only works with impl_kombu.
 
 impl_qpid does not ACK anything, although since it's AMQP based there
 should be an ACK somewhere, I just don't see it in impl_qpid or the Qpid
 module so I must be missing something.
 
 0MQ does not have any concept of ACKing messages, which makes sense
 since there is no message persistence.
 
 I'm not aware of any formal discussion on what the RPC layer semantics
 are, but a set of implied semantics have developed. In particular, that
 messages in Openstack cannot reliably be persistent at the RPC layer.
 
 My changes to impl_kombu are obviously not the path forward since they
 cannot be implemented in the other RPC drivers, but it's not clear what
 the best path forward is.
 
 Is it a custom layer on top of RPC?
 
 Is it an orchestration layer?
 
 Something else?
 
 Related, should queue persistence be disabled in impl_kombu?
 
 JE
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RPC Semantics

2012-06-12 Thread Eric Windisch
 
 For instance, an instance migration can take a while since we need to
 copy many gigabytes of disks to another host. If we want to do a
 software upgrade, we either need to wait a long time for the migration
 to finish, or we need to restart the service and then restart the
 processing of the message.
 
 

You wait a long time, period. If you wait a long time and it fails, you're 
restarting. Having it do so automatically on the consumer side isn't 
necessarily a good thing. 
 
 If all software gets restarted, then persistence is important.
Again, I see an argument for callers having limited persistence, but not 
consumers.
 
  All calls have a timeout (TTL). The ZeroMQ driver also implements a TTL
  on the casts, and I'm quite sure we should support this in Kombu/Qpid
  as well to avoid a thundering-herd.
  
 
 
 What thundering herd problems exist in Openstack?

Say we have one api service, one scheduler.  If the scheduler fails, API 
requests to create an instance will pile up, until the scheduler returns. The 
returning scheduler will get all of those instance creation requests and will 
launch those instances. (This would also be applicable for messages between the 
scheduler and a compute service)

The end-user will see the run-instance command as potentially failing and may 
attempt to launch again. The queue will hold all of these requests and they 
will all get processed when the scheduler returns.

This is especially problematic with auto-scaling. How well will Rightscale or 
Enstratus run against a system that takes hours and hours to launch instances?  
They'll just retry and retry. You don't want these to just queue up.
 I do know there are problems with queuing combined with timeouts. It
 makes less sense to process a get_nw_info request if the requestor has
 already timed out and will ignore the response. Is that what you're
 referring to with TTLs?
 
 
 


That is important too, in the case of calls, but not all that important.  I'm 
not so concerned about machines sending useless replies; we can ignore them.
 
 
 

 
 Idempotent actions want persistence so it will actually complete the
 action requested in the message. For instance, if nova-compute is
 stopped in the middle of an instance-create, we want to actually finish
 the create after the process is restarted.
Only if it hasn't timed out. Otherwise, you'd only be asking for a thundering herd.

What has happened on the caller side? Has it timed out and given the user an 
error?  What about manager methods (RPC methods) that themselves call RPC? How 
deep does that stack go?

Perhaps it is better that, if nova-compute is stopped in the middle of an 
instance-create, it can *clean up* on restart, rather than attempting to 
continue an arguably pointless and potentially dangerous path of actually 
creating that instance?
 There is no process waiting for a return value, but we certainly would
 like for the message to be persisted so we can restart it.
 
I'm not sure about that.
 
 
  Anyway, in the ZeroMQ driver, we could have a local queue to track
  casts and remove them when the send() coroutine completes. This would
  provide restart protection for casts. 
  
 
 
 Assuming the requesting process remains running the entire time?
I meant ONLY persisting in the requesting process. If the requesting process 
fails before that message is consumed, the requesting process can attempt to 
resubmit that message for consumption upon relaunch.  The requesting process 
would track the amount of time spent waiting for the message to be consumed and 
would subtract that time from the remaining timeout. 
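
A rough sketch of what that caller-side tracking could look like (names are 
hypothetical; nothing like this exists in the driver today):

    import time

    class PendingCasts(object):
        """Track casts until the send coroutine confirms delivery."""

        def __init__(self):
            self._pending = {}  # msg_id -> (message, absolute deadline)

        def add(self, msg_id, message, timeout):
            self._pending[msg_id] = (message, time.time() + timeout)

        def done(self, msg_id):
            # Called when the send() coroutine completes successfully.
            self._pending.pop(msg_id, None)

        def resubmit(self, send):
            # On relaunch, retry anything still pending, with the elapsed
            # time subtracted from the original timeout.
            now = time.time()
            for msg_id, (message, deadline) in list(self._pending.items()):
                remaining = deadline - now
                if remaining <= 0:
                    self._pending.pop(msg_id)   # already expired; drop it
                else:
                    send(message, timeout=remaining)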

Regards,
Eric Windisch
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RPC Semantics

2012-06-12 Thread Eric Windisch
 
 When will the sending service know that it should resend a message?
 Wouldn't this be best done a pull basis be the receiving service?
 


I'm obviously approaching this from the perspective of the ZeroMQ driver, which 
has a PUSH-PULL pair that is tightly coupled. A successful PULL is a successful PUSH.
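
For anyone unfamiliar with that socket pair, here is a minimal pyzmq 
illustration (the address is arbitrary):

    import zmq

    ctx = zmq.Context()

    # consumer side: bind a PULL socket for a topic
    pull = ctx.socket(zmq.PULL)
    pull.bind("ipc:///tmp/zmq-demo-topic")

    # caller side: connect a PUSH socket and send
    push = ctx.socket(zmq.PUSH)
    push.connect("ipc:///tmp/zmq-demo-topic")
    push.send_json({"method": "ping", "args": {}})

    print(pull.recv_json())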

Regards,
Eric Windisch


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ZFS/ZVol + iscsi for volume

2012-06-11 Thread Eric Windisch
Also, there is a serious problem with the divergence of EOL schedules between 
the FreeBSD and Debian camps. Basically, the FreeBSD kernel and other bits will 
go EOL before the Debian bits will. You need to have a certain amount of faith 
that the small Debian/kFreeBSD team and/or your own team will be able to 
sufficiently provide the necessary security backports and hotfixes.

For this reason, the support lifetime of stable Debian/kFreeBSD releases may, 
in a sense, be considered significantly shorter than that of standard Debian 
releases.  

--  
Eric Windisch


On Monday, June 11, 2012 at 13:49 PM, Alberto Molina Coballes wrote:

 El lun, 11-06-2012 a las 10:26 -0500, Narayan Desai escribió:
  How integrated is the network target support for zfs on freebsd? One
  of the most compelling features (IMHO) of ZFS on illumos is the whole
  comstar stack. On the zfs linux port at least, there are just
  integration hooks out to the standard linux methods (kernel-nfs, etc)
  for nfs, iscsi, etc.
   
  I'm really interested to hear how this goes on FreeBSD. I've been
  playing with zfs on linux as well, but the results haven't been so
  good for me.
  -nld
   
  
  
 AFAIK, support for ZFS on FreeBSD is good. The main problem for using
 Debian/KFreeBSD at this moment is nova-volume: it depends on lvm2 and
 tgt [1], packages that are not available in Debian/KfreeBSD port.
  
 [1] http://qa.debian.org/debcheck.php?dist=unstablepackage=nova
  
  
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
  
  


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Question on nova disk injection...

2012-06-06 Thread Eric Windisch
 
 
 
 What implementation suboption would have your preference ? Is
 nova-rootwrap now universally used ? Should we prefer compatibility or
 absence of confusion ?

There is an issue of how to extend rootwrap from third-party backend drivers. 
If this was (is?) addressed, universal use of rootwrap would be an easier sell.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Question on nova disk injection...

2012-06-05 Thread Eric Windisch


On Tuesday, June 5, 2012 at 19:18 PM, Joshua Harlow wrote:

 Why couldn’t nova just 
 escalate pythons privileges to the super user when writing a file (thus 
 allowing it to use python file writing functions and such).

Because we use Eventlet. os.setuid applies to the entire process. Coroutine 
switching during this would be dangerous.

Three options seem to exist:

1. We can fork, but then we'll need use IPC, which in our case would be 
implemented via the RPC abstraction.  We would need to make changes to 
services.py and/or the binaries and possibly the RPC abstraction itself.  This 
would work well with ZeroMQ as it would be actual IPC, but the brokered RPC 
solutions would be less efficient. Overall, this reeks of complexity and 
danger, but the end result should be a clear net positive.

2. One less elegant, but easy, solution might just be to extend the rootwrap 
functionality. Have it support calling commands on the system *and* executing 
trusted Python methods with trusted arguments.  We'd still be shelling out to 
rootwrap, but rootwrap could internally provide 'mkdir' and 'chmod' style 
commands around the os stdlib, rather than shelling out a second time, as it 
does currently.

3. rootwrap itself could actually be implemented as a Nova service, if we could 
trust the RPC mechanism with direct access to the rootwrap methods -- which is 
not all too safe, currently. This would effectively be a mix of options 1 and 2.

I'm inclined to suggest option #2 as it is a relatively simple improvement that 
would give us short-term gains without much friction. This wouldn't exclude the 
other options from being worked on and seems to be a requirement for #3, anyway.
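
As a very rough sketch of option #2 (this is not actual rootwrap code; the 
names and dispatch are purely illustrative):

    import os

    # Internal equivalents of commands we currently shell out for.  The
    # wrapper is already running as root, so it can call the stdlib directly
    # instead of exec'ing /bin/mkdir, /bin/chmod, etc. a second time.
    INTERNAL_COMMANDS = {
        'mkdir': lambda path, mode: os.makedirs(path, int(mode, 8)),
        'chmod': lambda path, mode: os.chmod(path, int(mode, 8)),
        'chown': lambda path, uid, gid: os.chown(path, int(uid), int(gid)),
    }

    def run_internal(argv):
        """Dispatch a whitelisted internal command, e.g. ['chmod', '/x', '0644']."""
        name, args = argv[0], argv[1:]
        if name not in INTERNAL_COMMANDS:
            raise SystemExit("unknown or unauthorized command: %s" % name)
        return INTERNAL_COMMANDS[name](*args)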

--  
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Question on nova disk injection...

2012-06-05 Thread Eric Windisch
Yun,

The setuid bit is unnecessary; Python can be launched by the root user and then 
drop privileges. For instance, the sshd daemon does not require a setuid bit; 
it is simply executed by root. It uses privilege separation and performs the 
set(e)uid for users that log in through it.

Having a compiled program and a setuid bit destroys a number of the reasons why 
you would want to have this run as root. For one thing, if your daemon runs as 
root and drops to the 'nova' user, compromises within the context of running as 
the nova user cannot allow the daemon to be modified and re-executed.  Without 
having any 'sudo' requirements, the nova user would be quite constrained, 
relative to the current situation.  
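
A sketch of what that looks like in Python (the username is illustrative):

    import os
    import pwd

    def drop_privileges(username='nova'):
        """Started as root, switch the whole process to an unprivileged user."""
        user = pwd.getpwnam(username)
        os.setgroups([])          # drop supplementary groups
        os.setgid(user.pw_gid)    # group first, while we are still root
        os.setuid(user.pw_uid)    # then give up root for good

    if os.getuid() == 0:
        # ... do any setup that genuinely needs root here, then ...
        drop_privileges()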

--  
Eric Windisch


On Tuesday, June 5, 2012 at 21:18 PM, Yun Mao wrote:

 Python is a scripting language. To get setuid work, you usually have
 to give the setuid permission to /usr/bin/python which is a big no no.
  
 One work around is to have a customized compiled program (e.g. from
 C), which takes a python file as input, do all kinds of sanity check,
 and switch to root user to execute Python. But in that case it's not
 that much more appealing from the rootwrap.
  
 my 2c.
 Yun
  
 On Tue, Jun 5, 2012 at 5:42 PM, Joshua Harlow harlo...@yahoo-inc.com 
 (mailto:harlo...@yahoo-inc.com) wrote:
  Hi all,
   
  Just some questions that I had about how nova is doing disk injection and
  such.
   
  I was noticing that it the main disk/api.py does a lot of tee, cat and
  similar commands. Is there any reason it couldn’t just use the standard
  python open and write data and such.
   
  Is it because of sudo access (which is connected to rootwrap?), just
  wondering since it seems sort of odd that to write a file there a tee call
  has to be done with piped input, when python already has file operators and
  such...
   
  Thx!
   
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net 
  (mailto:openstack@lists.launchpad.net)
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
   
  
  
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
  
  


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ZeroMQ driver (review needed)

2012-05-30 Thread Eric Windisch
Still needing reviews. I've addressed all the concerns to date… and to make it 
easier for reviewers, I've split out the generic testing changes and the 
matchmaker addition (as dependencies).  

The changes are:
https://review.openstack.org/#/c/7633/  # zeromq
https://review.openstack.org/#/c/7921/2  # matchmaker  
https://review.openstack.org/#/c/7770/ # new common rpc tests and fake_impl.py 
bugfix

--  
Eric Windisch


On Wednesday, May 23, 2012 at 11:34 AM, Eric Windisch wrote:

 Looking for code reviews of the ZeroMQ driver:  
 https://review.openstack.org/#/c/7633/
  
 I believe I have addressed all the concerns of the previous reviewers and 
 have ironed out all the pep8/hacking and unit testing issues.  
  
 This patch also introduces the matchmaker and two new common RPC unit tests 
 (and a bug-fix for nova/rpc/impl_fake.py)
  
 --  
 Eric Windisch
  

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] rabbit MQ and mirrored queues

2012-05-25 Thread Eric Windisch
I feel that the best way to deploy RabbitMQ is to run multiple independent 
queue servers and have separate consumers on each of these servers. You can then 
do client-side balancing to select which Rabbit server messages go to. Getting 
that into Nova today would be pretty minor -- especially after the matchmaker 
lands (it can provide client-side balancing across servers).
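
A sketch of the client-side selection I have in mind (host names are made up):

    import random

    # Independent, non-clustered brokers.
    RABBIT_HOSTS = ['rabbit1.example.com:5672', 'rabbit2.example.com:5672']

    def broker_for_publish():
        # Publishers pick a broker at random (or via the matchmaker).
        return random.choice(RABBIT_HOSTS)

    def brokers_for_consume():
        # Consumers attach to every broker so no queue is left unserviced.
        return list(RABBIT_HOSTS)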


-- 
Eric Windisch


On Friday, May 25, 2012 at 09:18 AM, Stephen Gran wrote:

 Hello,
 
 I am investigating various high availability options for a pending
 deploy of open stack. One of the obvious services to make resilient is
 the mq service. We're going to be using rabbitmq, and we'll most likely
 have N of them in a standard rabbit mq cluster behind a load balancer
 configured as active/passive. One of the obvious improvements on this
 would be to use mirrored queues to protect against message loss as well
 as service downtime.
 
 Are there recommended ways of doing this? I see that I can use durable
 queues, which might work around the problems of openstack checking queue
 parameters on reconnect, but it seems a shame there's not an obvious way
 to do this out of the box. Unless I'm missing something?
 
 Cheers,
 -- 
 Stephen Gran
 Senior Systems Integrator - guardian.co.uk (http://guardian.co.uk)
 
 Please consider the environment before printing this email.
 --
 Visit guardian.co.uk (http://guardian.co.uk) - newspaper of the year
 
 www.guardian.co.uk (http://www.guardian.co.uk) www.observer.co.uk 
 (http://www.observer.co.uk) www.guardiannews.com 
 (http://www.guardiannews.com) 
 
 On your mobile, visit m.guardian.co.uk (http://m.guardian.co.uk) or download 
 the Guardian
 iPhone app www.guardian.co.uk/iphone (http://www.guardian.co.uk/iphone)
 
 To save up to 30% when you subscribe to the Guardian and the Observer
 visit www.guardian.co.uk/subscriber (http://www.guardian.co.uk/subscriber) 
 -
 This e-mail and all attachments are confidential and may also
 be privileged. If you are not the named recipient, please notify
 the sender and delete the e-mail and all attachments immediately.
 Do not disclose the contents to another person. You may not use
 the information for any purpose, or store, or copy, it in any way.
 
 Guardian News  Media Limited is not liable for any computer
 viruses or other material transmitted with or as part of this
 e-mail. You should employ virus checking software.
 
 Guardian News  Media Limited
 
 A member of Guardian Media Group plc
 Registered Office
 PO Box 68164
 Kings Place
 90 York Way
 London
 N1P 2AP
 
 Registered in England Number 908396
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] rabbit MQ and mirrored queues

2012-05-25 Thread Eric Windisch
Sebastien,  

For my part, I don't do *any of it*. I'm the author of the ZeroMQ 
implementation, where this is a non-issue.

I think that having the Rabbit queues decoupled makes a lot of sense, 
especially since the code to do this can be generalized across multiple RPC 
implementations (i.e., this would be a win for Qpid as well). I'm clearly not 
a die-hard RabbitMQ admin -- is there a reason to use clustering over a 
decoupled solution for a greenfield application?  

--  
Eric Windisch


On Friday, May 25, 2012 at 17:54 PM, Sébastien Han wrote:

 Why don't you use the RabbitMQ builtin cluster solution?
 I setup an active/active cluster with the buildin mecanism and put an HAProxy 
 on top with a priority on a specific node. (weight and backup options).
  
 For the mirrored queues don't we need to edit the openstack code?
  
 Cheers.
 ~Seb
  
  
  
 On Fri, May 25, 2012 at 10:10 PM, Eric Windisch e...@cloudscaling.com 
 (mailto:e...@cloudscaling.com) wrote:
  I feel that the best way to deploy RabbitMQ is to run multiple 
  independently queue servers and have separate consumers to these servers. 
  You can then do client-side balancing to select which Rabbit server 
  messages go. To get that in Nova today would be pretty minor -- especially 
  after the matchmaker lands (this can provide client-side balancing of 
  servers).  
   
  --  
  Eric Windisch
   
   
  On Friday, May 25, 2012 at 09:18 AM, Stephen Gran wrote:
   
   Hello,

   I am investigating various high availability options for a pending
   deploy of open stack. One of the obvious services to make resilient is
   the mq service. We're going to be using rabbitmq, and we'll most likely
   have N of them in a standard rabbit mq cluster behind a load balancer
   configured as active/passive. One of the obvious improvements on this
   would be to use mirrored queues to protect against message loss as well
   as service downtime.

   Are there recommended ways of doing this? I see that I can use durable
   queues, which might work around the problems of openstack checking queue
   parameters on reconnect, but it seems a shame there's not an obvious way
   to do this out of the box. Unless I'm missing something?

   Cheers,
   --  
   Stephen Gran
   Senior Systems Integrator - guardian.co.uk (http://guardian.co.uk)

   Please consider the environment before printing this email.
   --
   Visit guardian.co.uk (http://guardian.co.uk) - newspaper of the year

   www.guardian.co.uk (http://www.guardian.co.uk) www.observer.co.uk 
   (http://www.observer.co.uk) www.guardiannews.com 
   (http://www.guardiannews.com)  

   On your mobile, visit m.guardian.co.uk (http://m.guardian.co.uk) or 
   download the Guardian
   iPhone app www.guardian.co.uk/iphone (http://www.guardian.co.uk/iphone)

   To save up to 30% when you subscribe to the Guardian and the Observer
   visit www.guardian.co.uk/subscriber 
   (http://www.guardian.co.uk/subscriber)  
   -
   This e-mail and all attachments are confidential and may also
   be privileged. If you are not the named recipient, please notify
   the sender and delete the e-mail and all attachments immediately.
   Do not disclose the contents to another person. You may not use
   the information for any purpose, or store, or copy, it in any way.

   Guardian News  Media Limited is not liable for any computer
   viruses or other material transmitted with or as part of this
   e-mail. You should employ virus checking software.

   Guardian News  Media Limited

   A member of Guardian Media Group plc  
   Registered Office
   PO Box 68164
   Kings Place
   90 York Way
   London
   N1P 2AP

   Registered in England Number 908396


   ___
   Mailing list: https://launchpad.net/~openstack
   Post to : openstack@lists.launchpad.net 
   (mailto:openstack@lists.launchpad.net)
   Unsubscribe : https://launchpad.net/~openstack
   More help : https://help.launchpad.net/ListHelp



   
   
   
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net 
  (mailto:openstack@lists.launchpad.net)
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
   
  

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] ZeroMQ driver (review needed)

2012-05-23 Thread Eric Windisch
Looking for code reviews of the ZeroMQ driver: 
https://review.openstack.org/#/c/7633/

I believe I have addressed all the concerns of the previous reviewers and have 
ironed out all the pep8/hacking and unit testing issues. 

This patch also introduces the matchmaker and two new common RPC unit tests 
(and a bug-fix for nova/rpc/impl_fake.py)

-- 
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] adding a worker pool notion for RPC consumers

2012-05-22 Thread Eric Windisch
Bringing my conversation with Doug back on-list...

 In nova.rpc with fanout=True every consumer gets a copy of the event because 
 every consumer has its own queue. With fanout=False only one consumer *at 
 all* will get a copy of the event, since they all listen on the same queue. 
 The changes I made come somewhere in between that. It allows multiple 
 consumers to receive a given message, but it also allows several consumers to 
 declare that they are collaborating so that only one of the subset receives a 
 copy. That means that multiple types of consumers can be listening to 
 notifications (metering and audit logging, for example) and each type of 
 consumer can have a load balanced pool of workers so that messages are only 
 processed once for metering and once for logging.
 

We can do this today with the Matchmaker. You can use a standard fanout, but 
make one of the hosts a DNS entry with multiple A or CNAME records for 
round-robin DNS, so that host acts as a pool of workers.  It would be trivial 
to update the matchmaker to support nested lists, enabling the same thing with 
IP addresses as well, doing round-robin or random selection of hosts without a 
pool of workers.
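
For illustration, a static topic-to-hosts map along these lines is all the 
matchmaker needs (host names are made up); the scheduler entry here would be a 
round-robin DNS name with several A records:

    {
        "scheduler": ["scheduler-pool.example.com"],
        "compute":   ["compute1", "compute2", "compute3"]
    }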

Unfortunately, doing this in the AMQP fashion of registering workers is 
difficult to do via the matchmaker. Not impossible, but it requires that the 
matchmakers have a (de)centralized datastore. This could be solved by having 
get_workers and/or create_consumer communicate to the matchmaker and update 
mysql, zookeeper, redis, etc.  While I think this is a viable approach, I've 
avoided /requiring/ this paradigm as the alternatives of using hash maps and/or 
DNS are significantly less complex and easier to scale and keep available.

We should consider to what degree dynamic vs. static configuration is necessary, 
whether dynamic is truly required, and how a method like get_workers should 
behave on a statically configured system.

Regards,
Eric Windisch
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] adding a worker pool notion for RPC consumers

2012-05-22 Thread Eric Windisch

 I wanted our ops team to be able to bring more collector service instances 
 online when our cloud starts seeing an increase in the sorts of activity that 
 generates metering events, without having to explicitly register the new 
 workers in a configuration file. It sounds like having the zeromq driver 
 (optionally?) communicate to a central registry would let it reproduce some 
 of the features built into AMQP to achieve that sort of dynamic 
 self-configuration.

Understood.  Supporting the dynamic case is viable, I just don't want to 
(blindly) do it at the expense of static configurations.  Here, I think we can 
simply warn/error if a static configuration is in place.

I'm thinking that the zeromq driver would support create_workers by passing the 
call into the matchmaker.  Some matchmakers would support it; others would not 
(and would remain static), logging a message.  The question might be whether we 
should also define and raise an exception for this, but I'm leaning toward not.
 I mentioned off-list that I'm not a messaging expert, and I wasn't around 
 when the zeromq driver work was started. Is the goal of that work to 
 eventually permanently replace AMQP, or just to provide a compatible 
 alternative?
 
 
 
 

It is currently a compatible alternative.

We do intend for this to remain compatible, and for the abstraction to be 
useful across all the available messaging  plugins.  It remains to be seen 
which, if any, messaging platform will be the /default/ in Nova/OpenStack 
long-term.  Currently, RabbitMQ is the default, but Essex introduced Qpid 
messaging, and we'll have ZeroMQ messaging if we can get it out of review ;-)

Regards,
Eric Windisch
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] adding a worker pool notion for RPC consumers

2012-05-22 Thread Eric Windisch

 
 If a consumer is trying to subscribe to a worker pool but the underlying 
 implementation for the messaging system does not support those semantics, we 
 should fail loudly and explicitly instead of configuring the consumer using 
 other semantics that may result in subtle bugs or data corruption.
If we were doing that right now with the ZeroMQ driver, we'd be raising some 
ugly exceptions up without any benefit.  It only consumes the 
'service.host' topics.   Fanout and round-robin of direct exchanges (bare 
topics without a dot-character) are handled by the *sender* and are thus not 
consumed, which I realize is 180-degrees from how this is handled in AMQP.

My suggestion is that for static matchmakers, on the registration of a 
consumer, we do a host lookup in the matchmaker to see if that host has been 
pre-registered.  If it is not in the map/lookup, then we raise an ugly 
Exception.

Regards,
Eric Windisch 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [metering] Choice of a messaging queue

2012-05-18 Thread Eric Windisch
The nova RPC implementation is moving into openstack-common; I agree with using 
this abstraction.

As for ZeroMQ, I'm the author of that plugin. There is a downloadable plugin 
for Essex, and I'm preparing to make a Folsom merge prop within the next week or 
so, if all goes well.

Sent from my iPad

On May 18, 2012, at 7:26, Doug Hellmann doug.hellm...@dreamhost.com wrote:

 
 
 On Fri, May 18, 2012 at 4:42 AM, Nick Barcet nick.bar...@canonical.com 
 wrote:
 Hello everyone,
 
 Next week's irc meeting will have for goal to choose a reference
 messaging queue service for the ceilometer project.  For this meeting to
 be able to be successful, a discussion on the choices that we have to
 make need to occur first right here.
 
 To open the discussion here are a few requirements that I would consider
 important for the queue to support:
 
 a) the queue must guaranty the delivery of messages.
 To the contrary of monitoring, loss of events may have important billing
 impacts, it therefore cannot be an option that message be lost.
 
 b) the client should be able to store and forward.
 As the load of system or traffic increases, or if the client is
 temporarily disconnected, client element of the queue should be able to
 hold messages in a local queue to be emitted as soon as condition permit.
 
 c) client must authenticate
 Only client which hold a shared private key should be able to send
 messages on the queue.
 
 Does the username/password authentication of rabbitmq meet this requirement?
  
 
 d) queue may support client signing of individual messages
 Each message should be individually signed by the agent that emits it in
 order to guaranty non repudiability.  This function can be done by the
 queue client or by the agent prior to en-queuing of messages.
 
 We can embed the message signature in the message, so this requirement 
 shouldn't have any bearing on the bus itself. Unless I'm missing something?
  
 
 d) queue must be highly available
 the queue servers must be able to support multiple instances running in
 parallel in order to support continuation of operations with the loss of
 one server.  This should be achievable without the need to use complex
 fail over systems and shared storage.
 
 e) queue should be horizontally scalable
 The scalability of queue servers should be achievable by increasing the
 number of servers.
 
 Not sure this list is exhaustive or viable, feel free to comment on it,
 but the real question is: which queue should we be using here?
 
 While I see the benefit of discussing requirements for the message bus 
 platform in general, I'm not sure we need to dictate a specific 
 implementation. If we say we are going to use the nova RPC library to 
 communicate with the bus for sending and receiving messages, then we can use 
 all of the tools for which there are drivers in nova -- rabbit, qpid, zeromq 
 (assuming someone releases a driver, which I think is being worked on 
 somewhere), etc. This leaves the decision of which bus to use up to the 
 deployer, where the decision belongs. It also means we won't end up choosing 
 a tool for which the other projects have no driver, leading us to have to 
 create one and add a new dependency to the project.
 
 Doug
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [metering] Choice of a messaging queue

2012-05-18 Thread Eric Windisch

 
 
 a) the queue must guaranty the delivery of messages.
 To the contrary of monitoring, loss of events may have important billing
 impacts, it therefore cannot be an option that message be lost.

Losing messages should always be an option in extreme cases. If a message 
is undeliverable for an excessive amount of time, it should be dropped. 
Otherwise, you'll need the queuing equivalent of a DBA doing periodic cleanup, 
which isn't very cloudy (or scalable).

I agree that the failure cases here are different from those we'd normally see 
with Nova. Timeouts on messages would need to be much higher, and potentially 
possible to disable (but I do insist that a timeout, even if high, should be used).

 
 b) the client should be able to store and forward.
 As the load of system or traffic increases, or if the client is
 temporarily disconnected, client element of the queue should be able to
 hold messages in a local queue to be emitted as soon as condition permit.

The zeromq driver definitely does this (kind of). It will try and send all 
messages at once via green threads, which is effectively the same thing. The 
nice thing is that with 0mq, when a message is sent, delivery to a peer is 
confirmed. 

I think, but may be wrong, that rabbit and qpid essentially do the same for 
store and forward, blocking their green threads until they hit a successful 
connection to the queue, or a timeout. With the amqp drivers, the sender only 
has a confirmation of delivery to the queuing server, not to the destination.
 
One thing the zeromq driver doesn't do is resume sending attempts across a 
service restart. Messages aren't durable in that fashion. This is largely 
because the timeout in Nova does not need to be very large, so there would be 
very little benefit. This goes back to your point in 'a'. Adding this feature 
would be relatively minor; it just wasn't needed in Nova. Actually, this 
limitation would presumably be true of rabbit and qpid as well, in the 
store-and-forward case.

 c) client must authenticate
 Only client which hold a shared private key should be able to send
 messages on the queue.
 d) queue may support client signing of individual messages
 Each message should be individually signed by the agent that emits it in
 order to guaranty non repudiability.  This function can be done by the
 queue client or by the agent prior to en-queuing of messages


There is a Folsom blueprint to add signing and/or encryption to the rpc layer.

 d) queue must be highly available
 the queue servers must be able to support multiple instances running in
 parallel in order to support continuation of operations with the loss of
 one server.  This should be achievable without the need to use complex
 fail over systems and shared storage.


 e) queue should be horizontally scalable
 The scalability of queue servers should be achievable by increasing the
 number of servers.

d/e are NOT properties of the rabbit (and qpid?) drivers today in Nova, but they 
could (should) be made to work this way. You get this with the zeromq driver, 
of course ;)

 
 Not sure this list is exhaustive or viable, feel free to comment on it,
 but the real question is: which queue should we be using here?

The OpenStack common rpc mechanism, for sure. I'm biased, but I believe that 
while the zeromq driver is the newest, it is the only driver that meets all of 
the above requirements, subject to the exceptions noted above.

Improving the other implementations should be done, but I don't know of anyone 
committed to that work.

Regards,
Eric Windisch
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] fake_flags + common breakout

2012-05-04 Thread Eric Windisch
Russell,

FYI, with the flags patch, it is no longer possible to set rpc-implementation 
dependent flags in fake_flags.

Even Rabbit has a flag in there (fake_rabbit), but Rabbit flags are currently 
global, so it works. That won't be true for long… We're going to have to fix 
this.

One option is to initialize the RPC layer from fake_flags.py and then set the 
options.

Another option, for now, might just be to push this problem into the 
implementations and have a global testing flag that is implementation/backend 
independent. This is uglier on the implementation/driver side, but cleaner on 
the unit tests.  This is basically what 'fake_rabbit' is now, anyway.

Thoughts?  

--  
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] fake_flags + common breakout

2012-05-04 Thread Eric Windisch
In my branch, I added an entry to fake_flags which sets a flag defined in the ZeroMQ driver.

This was fine before, but is a problem now, because tests/__init__.py doesn't 
do rpc.register_opts. Instead, it does FLAGS.register_opts(rpc.rpc_opts), which 
only loads the common RPC flags.

Of course, the flags file *defines* the RPC driver to be used, so there would 
be no benefit to running rpc.register_opts until after fake_flags is loaded.

The easiest solutions I see are:
A. load fake_flags.py, then rpc.register_opts, then run a fake_rpc_flags.py.
B. import rpc, add rpc.register_opts to fake_flags.py, then add any flags we 
want (sketched below).
C. not support testing flags on RPC drivers, have a common testing flag.
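
Option B, roughly sketched (the calling conventions here are from memory and 
purely illustrative):

    # fake_flags.py
    from nova import flags
    from nova import rpc

    FLAGS = flags.FLAGS
    rpc.register_opts(FLAGS)     # pulls in the driver-specific options
    FLAGS.set_default('rpc_zmq_matchmaker',
                      'nova.rpc.matchmaker.MatchMakerLocalhost')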


--  
Eric Windisch


On Friday, May 4, 2012 at 6:08 PM, Russell Bryant wrote:

 On 05/04/2012 11:53 AM, Eric Windisch wrote:
  Russell,
   
  FYI, with the flags patch, it is no longer possible to set
  rpc-implementation dependent flags in fake_flags.
   
  Even Rabbit has a flag in there (fake_rabbit), but Rabbit flags are
  currently global, so it works. That won't be true for long… We're going
  to have to fix this.
   
  One option is to initialize the RPC layer from fake_flags.py and then
  set the options.
   
  
  
 Each place that fake_flags is imported, rpc gets initialized first. See
 these lines of code, and the fake_flags import shortly after:
  
 https://github.com/openstack/nova/blob/master/nova/tests/__init__.py#L63
 https://github.com/openstack/nova/blob/master/bin/nova-dhcpbridge#L103
  
  Another option, for now, might just be to push this problem into the
  implementations and have a global testing flag that is
  implementation/backend independent. This is uglier on the
  implementation/driver side, but cleaner on the unit tests. This is
  basically what 'fake_rabbit' is now, anyway.
   
  
  
 As far as I can tell, this isn't actually a problem with the uses of
 fake_flags right now. What problem did you hit?
  
 --  
 Russell Bryant
  
  


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] fake_flags + common breakout

2012-05-04 Thread Eric Windisch
 
 I guess another question is, why do you need to set ZeroMQ related flags
 in fake_flags? I think those should only be settings that apply for
 *all* unit tests. I would just register your flags in your unit tests.
 
 https://github.com/openstack/nova/blob/master/nova/tests/rpc/test_qpid.py#L69
 
 
 


The fake_rabbit flag doesn't, but this is otherwise a good point.   For now, 
I've just hard-coded the flag into the test.

The specific flag was to set the MatchMaker. I was forcing messages to use a 
Localhost matchmaker. I could override the flag from the test itself, but 
there might be reasons why someone would want to override this. In fact, I have 
overridden this to test the various matchmakers.  It is also more transparent.
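
Hard-coding it looks roughly like this (the flag and class names are from 
memory; treat them as illustrative):

    from nova import test

    class ZmqMatchMakerTestCase(test.TestCase):
        def setUp(self):
            super(ZmqMatchMakerTestCase, self).setUp()
            # Force the localhost matchmaker for this test only.
            self.flags(rpc_zmq_matchmaker='nova.rpc.matchmaker.MatchMakerLocalhost')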

To be honest, this specific requirement will be much lessened once I write 
independent tests for the matchmaker classes.

However, it would also be *nice* to be able to override various settings to 
test them without modifying the modules or tests themselves. I'd like it if I 
could bump the number of ZeroMQ I/O threads up or down during tests (it defaults 
to 1).   While modifying the test/module isn't much worse than modifying 
fake_flags, I find the fake_flags approach cleaner.

-- 
Eric Windisch
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] extending rootwrap securely

2012-04-30 Thread Eric Windisch
These are all installation-specific. Devstack is the closest thing there is to 
an official installer, and it clearly doesn't do all the right things: it is 
built to be *easy* to work with and test, rather than production-ready.  I 
think most of the integrators are doing the right thing, or at least 
trying/intending to.

The typical deployment is a 'nova' user running nova with a sudoers 
configuration allowing the execution of rootwrap.  Because the Nova user 
executes the nova commands, the nova user is able to re-execute its commands 
with different arguments.  The nova.conf is a different story, but largely 
irrelevant.
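
For concreteness, that sudoers entry usually looks something like this (paths 
vary by distribution and installer):

    nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap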

The only ways to extend rootwrap right now are to use sudoers-only (foregoing 
some of the checks it does), to wrap rootwrap itself, or to use some entirely 
different setuid wrapper.

I'd really like to see this security mechanism overhauled. Rootwrap was an 
improvement over what was there before; however, I don't believe that rootwrap 
is a viable long-term solution as currently designed.  Rootwrap has resulted in 
the use of potentially insecure shell-outs for the purposes of privilege 
escalation in cases where pure Python would be safer.

-- 
Eric Windisch


On Sunday, April 29, 2012 at 7:41 PM, Andrew Bogott wrote:

 As part of the plugin framework, I'm thinking about facilities for 
 adding commands to the nova-rootwrap list without directly editing the 
 code in nova-rootwrap. This is, naturally, super dangerous; I'm worried 
 that I'm going to open a security hole big enough to pass a herd of 
 elephants.
 
 It doesn't help that I mostly know about devstack, and don't know a 
 whole lot about the variety of ways that Nova is installed on actual 
 production systems. So, my questions:
 
 a) Is the nova code on a production system generally owned by root and 
 read-only? (If the answer to this one is ever 'no' then we're done, 
 because we're already 100% insecure.)
 
 b) Does nova usually run as root user? (Again, thinking 'no' because 
 otherwise we wouldn't need a rootwrap tool in the first place.)
 
 c) Who generally has rights to modify nova.conf and/or add command-line 
 args to the nova launch? (I want the answer to this to be 'just root' 
 but I fear the answer is 'both root and the nova user.')
 
 The crux: If additional commands can be added to rootwrap via nova.conf 
 or the commandline, does that open security holes that aren't already 
 open? Such a facility will give root to anyone who can modify the 
 nova.conf or the nova commandline. So, if the nova user can modify the 
 commandline, the question is: did the nova user /already/ have root access?
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] proposal for Russell Bryant to be added to Nova Core

2012-04-27 Thread Eric Windisch
+1

-- 
Eric Windisch


On Friday, April 27, 2012 at 11:09 AM, Dan Prince wrote:

 Russell Bryant wrote the Nova Qpid rpc implementation and is a member of the 
 Nova security team. He has been helping chipping away at reviews and 
 contributing to discussions for some time now.
 
 I'd like to seem him Nova core so he can help out w/ reviews... definitely 
 the RPC ones.
 
 Dan
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Canonical AWSOME

2012-04-25 Thread Eric Windisch
I've heard a few people mention pulling messages off the queue, or 
communicating via RPC outside of the project, or outside of Python. In theory, 
this sounds nice, but the RPC implementations are strictly making sure that A 
can execute calls on target B and that responses get back to A.

This has little to do with message queues, other than that message queues are 
optionally supported. You shouldn't be peeking behind that curtain. This is 
specific to each RPC mechanism and enforcing something this early might be more 
problematic than you expect.

I think that in the Folsom time frame, if not beyond, we should suggest a best 
practice of using the OpenStack RPC modules if you intend to communicate with 
Nova services. They're moving into OpenStack common, so this will become more 
friendly to use. It will require Python, but I don't see a reasonable 
alternative at the moment.

Additionally, each RPC driver can provide a guide to complying with its 
protocol, which extends beyond simply the transport (i.e. AMQP or ZeroMQ). This 
might be harder than it sounds and might vary between, or even within, releases.

--  
Eric Windisch


On Wednesday, April 25, 2012 at 1:24 PM, Joshua Harlow wrote:

 So if what u are talking about is anything 
 RPC/MQ based, then I would say those are not internal API’s.
  
 Once a RPC/MQ mechanism is introduced they don’t really become internal API’s 
 anymore (if we are talking about the same API’s, haha).
  
 Since other stuff can be reading those messages on the MQ its very useful to 
 have a schema to know exactly what to read :-)
  
 On 4/24/12 11:46 AM, Russell Bryant rbry...@redhat.com wrote:
  
  On 04/24/2012 01:25 PM, Joshua Harlow wrote:
   I’m more in favor of just having a schema, I don’t care if that compiles
   to protocol buffers, json, NEWAWESOMEhipsterMSGFORMAT.
  
   That schema will force people to think a little more when they add
   messages, and it will automatically document the messages that are being
   sent around.
  
   That’s a big win I think and is a step to getting those schemas
   versioned...
   
  I'm not sure a schema is really necessary aside from the Python classes
  themselves.  They're internal APIs, so they shouldn't be used from
  outside of Nova.  A schema would be useful if we had to define the
  interfaces in some language neutral-format, but I don't think that
  really matters here.
   
  I'm going to work on a proposal / prototype for how we can handle
  versioning, though.  The big goal here is making sure that we can
  maintain compatibility with the previous release.
   
  --
  Russell Bryant
   
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
   
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
  
  


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] rpc APIs (was: Canonical AWSOME)

2012-04-25 Thread Eric Windisch
Sure, but then the contract becomes between the notifier and the client, 
presumably? I'm not as familiar with the notification system as I should be. 

I haven't written a ZeroMQ notifier yet, figuring that task would be better 
delayed until the move to openstack-common. 

-- 
Eric Windisch


On Wednesday, April 25, 2012 at 3:29 PM, Russell Bryant wrote:

 On 04/25/2012 03:22 PM, Eric Windisch wrote:
  I've heard a few people mention pulling messages off the queue, or
  communicating via RPC outside of the project, or outside of Python. In
  theory, this sounds nice, but the RPC implementations are strictly
  making sure that A can execute calls on target B and that responses get
  back to A.
  
  This has little to do with message queues, other than that message
  queues are optionally supported. You shouldn't be peeking behind that
  curtain. This is specific to each RPC mechanism and enforcing something
  this early might be more problematic than you expect.
  
 
 
 I agree with you that any discussion of other things poking at the rpc
 communications is broken and wrong.
 
 The only case where it does make sense is notifications. In that case,
 the fact that it's using rpc is just an implementation detail. If you
 enable the rabbit notifier (should probably be renamed at some point),
 there is a specific AMQP message exchange where external applications
 are to receive notifications from nova.
 
 This implementation detail means that this can also be used with zeromq
 ... though I'm not sure that makes sense. There would probably be a
 notifier implementation specific to zeromq that could make better use of
 that messaging model.
 
 -- 
 Russell Bryant
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] rpc APIs (was: Canonical AWSOME)

2012-04-25 Thread Eric Windisch
Looking at the rabbitmq notifier, it actually just wraps rpc (so it is really 
an rpc notifier)… it won't work with the ZeroMQ driver because it breaks the 
presumption that topics are $topic.$host. Granted, that might have been 
presumptuous, but nothing besides the notifier did this. I'd much rather that a 
dash or underscore was used here, if possible.

Then, the ZeroMQ driver would just work with the existing notifier by 
implementing fanout_cast() for notify().  
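
As a very rough sketch (treat the field names as illustrative, not the
documented notification format), such a notifier could be as small as:

    # Hedged sketch of a notify() built on rpc.fanout_cast().
    import uuid

    from nova import context
    from nova import rpc

    def notify(publisher_id, event_type, priority, payload):
        msg = {'message_id': str(uuid.uuid4()),
               'publisher_id': publisher_id,
               'event_type': event_type,
               'priority': priority,
               'payload': payload}
        rpc.fanout_cast(context.get_admin_context(), 'notifications',
                        {'method': 'notify', 'args': {'message': msg}})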

--  
Eric Windisch


On Wednesday, April 25, 2012 at 6:23 PM, Monsyne Dragon wrote:

 The notification system  is simply 'borrowing'  some code from rpc to push 
 notifications.  The notifications have a specified JSON message format, 
 documented on the wiki: http://wiki.openstack.org/NotificationSystem  
 As far as the notification drivers, they are very very simple.  
  
  
 On Apr 25, 2012, at 2:45 PM, Eric Windisch wrote:  
  Sure, but then the contract becomes between the notifier and the client, 
  presumably? I'm not as familiar with the notification system as I should 
  be.  
   
  I haven't written a ZeroMQ notifier yet, figuring that task would be better 
  delayed until the move to openstack-common.  
   
  --  
  Eric Windisch  
   
   
  On Wednesday, April 25, 2012 at 3:29 PM, Russell Bryant wrote:
   
   On 04/25/2012 03:22 PM, Eric Windisch wrote:
I've heard a few people mention pulling messages off the queue, or
communicating via RPC outside of the project, or outside of Python. In
theory, this sounds nice, but the RPC implementations are strictly
making sure that A can execute calls on target B and that responses get
back to A.
 
This has little to do with message queues, other than that message  
queues are optionally supported. You shouldn't be peeking behind that
curtain. This is specific to each RPC mechanism and enforcing something
this early might be more problematic than you expect.
 
 


   I agree with you that any discussion of other things poking at the rpc  
   communications is broken and wrong.

   The only case where it does make sense is notifications. In that case,  
   the fact that it's using rpc is just an implementation detail. If you
   enable the rabbit notifier (should probably be renamed at some point),
   there is a specific AMQP message exchange where external applications
   are to receive notifications from nova.

   This implementation detail means that this can also be used with zeromq  
   ... though I'm not sure that makes sense. There would probably be a
   notifier implementation specific to zeromq that could make better use of
   that messaging model.

   --  
   Russell Bryant



   
   
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net 
  (mailto:openstack@lists.launchpad.net)
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
  
 --  
 Monsyne M. Dragon
 OpenStack/Nova  
 cell 210-441-0965
 work x 5014190
  
  
  
  

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Canonical AWSOME

2012-04-24 Thread Eric Windisch
 
 Actually, I think JSON schema for our message-bus messages might be a really 
 good idea (tm).  Violations could just be warnings until we get things locked 
 down... maybe I should propose a blueprint? (Although I have enough of a 
 blueprint backlog as it is...) 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Canonical AWSOME

2012-04-24 Thread Eric Windisch
 
 Actually, I think JSON schema for our message-bus messages might be a really 
 good idea (tm).  Violations could just be warnings until we get things locked 
 down... maybe I should propose a blueprint? (Although I have enough of a 
 blueprint backlog as it is...)

This was discussed at the summit in (I believe) the API versioning talk. There 
is a strong bias toward JSON inside RPC messages. However, this is actually 
transparent to Nova as the RPC implementations do the decoding and encoding. It 
is also hard to test and trigger such warnings because, as far as Nova knows, all 
the interfaces pass python data types, not JSON.  Some RPC mechanisms could 
transparently serialize.  As long as there is an abstraction layer, it should 
be oblivious to the serialization and we should not care too strongly.
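
For illustration, the kind of envelope I mean (field names hypothetical)
round-trips through JSON without the caller ever noticing:

    # Hedged illustration of a JSON-serialized rpc message.
    import json

    message = {'method': 'run_instance',
               'args': {'instance_id': 42, 'availability_zone': 'nova'}}
    wire = json.dumps(message)
    assert json.loads(wire) == message  # the abstraction hides this entirely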

There was a patch a few weeks ago that has enforced using a common RPC 
exception serializer using JSON, which I'm not sure is, or is not, a good 
thing. I haven't yet updated the ZeroMQ driver to use this, but am in the 
process of making these changes as I update for Folsom.

All that said, I do intend that the ZeroMQ driver will use JSON when it lands 
in Folsom.  (It currently pickles, but only because there was a bug in Essex 
at one point that broke JSON serialization.)

-- 
Eric Windisch
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Canonical AWSOME

2012-04-24 Thread Eric Windisch
 
 The change was just to the fake rpc backend to help catch more errors in
 unit tests where non-primitive types are getting passed into rpc.
 
 


My current code should still work, but the tests seem to have eliminated the 
more generic exception handling case with the assumption that testing the 
exceptions and the exception serialization/deserialization is sufficient.

That said, I'd rather utilize *more* stuff from rpc.common, so won't complain 
too badly. I know that the ZeroMQ driver could be thinned a bit, by better 
using some of the primitives available in rpc.common and we might be able to 
refactor some of the stuff in amqp.py to be more generically useful (maybe).  
Right now, none of that is a huge concern to me, we can get it integrated and 
do the DRY later.

-- 
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Canonical AWSOME

2012-04-23 Thread Eric Windisch
Creating a contract on the private API will allow the external APIs to be 
created and tested without needing a translation layer, even if contributory 
APIs were developed outside of the project (such as in AWSOME).

It is clearly better, architecturally, if the EC2/OCCI apis can access the 
internal apis directly. The RPC and database can be made to scale in Nova, but 
a REST endpoint won't be as reliable or scale as well. 

-- 
Eric Windisch


On Monday, April 23, 2012 at 11:15 AM, Justin Santa Barbara wrote:

  What's the advantage of replacing the native EC2 compatibility layer with 
  AWSOME from a user / operator point of view?
 
 Although I wasn't able to attend the design summit session, right now we have 
 two native APIs, which means we have two paths into the system.  That is 
 poor software engineering, because we must code and debug everything twice.  
 Some developers will naturally favor one API over the other, and so 
 disparities happen.  Today, both APIs are effectively using an undocumented 
 private API, which is problematic.  We also can't really extend the EC2 API, 
 so it is holding us back as we extend OpenStack's capabilities past those of 
 the legacy clouds.  
 
 With one native API, we can focus all our energies on making sure that API 
 works.  Then, knowing that the native API works, we can build other APIs on 
 top through simple translation layers, and they will work also.  Other APIs 
 can be built on top in the same way (e.g. OCCI) 
 
 Which is a long way of saying the external approach will result in _all_ APIs 
 (OpenStack, EC2, OCCI etc) becoming more reliable, more secure and just more 
 AWSOME.
 
 Justin
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Canonical AWSOME

2012-04-23 Thread Eric Windisch



On Monday, April 23, 2012 at 3:42 PM, Justin Santa Barbara wrote:

 I didn't realize people were willing to do so.
 

Ah yes, well, that problem might still remain.  There certainly seem to be 
volunteers to work on the versioning code, but defining, tagging, and adhering 
to API contracts are another matter...

-- 
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Canonical AWSOME

2012-04-23 Thread Eric Windisch



On Monday, April 23, 2012 at 4:00 PM, Joshua Harlow wrote:

 How are REST endpoints not reliable or 
 scalable ;-)
  
 I’d like to know, seeing as the web is built on them :-)
The resiliency of the internet is actually built on BGP. REST endpoints fall 
over constantly. Look no further than Google 500 errors, the fail-whale, etc. 
Even the EC2 API has been known to fall-over.  Making HTTP services reliable is 
not as trivial as it should be. The reason is because they are single points.

It is possible, through running many services and doing intelligent load 
balancing and failover, to make REST reasonably reliable. However, I'd rather 
not broker my requests through a questionably reliable REST broker, and send 
messages directly to their destinations to RPC consumers which are already 
running (and required) on those machines.  If the destination is offline, it 
doesn't need my message.  If the REST broker is offline, the recipient on the 
other end of that broker should still be guaranteed delivery…

The problem can be simplified as:
* How many REST endpoints do you need to service 100 compute machines? How many 
points of failure exist?
* How many compute machines do you need to service 100 compute machines? How 
many points of failure exist?

It is unclear how many REST endpoints you'll need. The compute machines scale 
as they scale; they're not dependent on a REST broker. Every compute machine 
itself can fail, although this failure is likely trivial (messages to a dead 
machine are generally sent in vain). Meanwhile, the REST service has to deal with 
dead compute machines *and* the death of the REST services supporting the architecture.

--  
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Metadata and File Injection (code summit session?)

2012-04-10 Thread Eric Windisch
I maintain my stance from pre-Diablo, that the configuration drive should be 
exported as a virtual cdrom device with an ISO9660 filesystem. We can generate 
the filesystem without root access and the filesystem is well-supported.  
Additionally, it lacks the patent-related issues associated with the other 
many-platform filesystems (i.e. FAT).
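
As a sketch of how simple that generation can be (the paths, label, and flags
here are illustrative, not a proposed implementation):

    # Hedged sketch: build an ISO9660 image as an unprivileged user by
    # shelling out to genisoimage (mkisofs behaves the same way).
    import subprocess

    def build_config_drive(source_dir, output_path):
        subprocess.check_call(['genisoimage',
                               '-o', output_path,
                               '-V', 'config',   # volume label, illustrative
                               '-r', '-J',       # Rock Ridge + Joliet
                               source_dir])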

Also, doing the above happens to make the configuration-drive surprisingly 
similar to the optional sub-feature of OVF. I'm not sure what priority OVF is 
for Nova (it is a low priority for me), but it might be worth considering, 
especially since Glance seems to advertise some OVF support.

-- 
Eric Windisch


On Tuesday, April 10, 2012 at 11:52 AM, Scott Moser wrote:

 On Tue, 10 Apr 2012, Andrew Bogott wrote:
 
  I'm reviving this ancient thread to ask: Will there be a code summit session
  about this? And/or are there plans to start developing a standard set of
  guest agents for Folsom?
  
 
 
 http://summit.openstack.org/sessions/view/100
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Metadata and File Injection (code summit session?)

2012-04-10 Thread Eric Windisch
EC2 is strategically important.  I don't believe that building gold images is 
the right focus for OpenStack.  

Anyone wanting to use config-drive would need to support it in their images, 
that is a no-brainer. It should not be required that images launching in Nova 
via the EC2 API support config-drive. The EC2 metadata service must remain.
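
For reference, this is the guest-side behavior that has to keep working for
unmodified EC2-style images (a minimal sketch):

    # Hedged sketch: how a guest pulls its data from the EC2 metadata service.
    import urllib2

    base = 'http://169.254.169.254/latest/meta-data/'
    instance_id = urllib2.urlopen(base + 'instance-id').read()
    ssh_keys = urllib2.urlopen(base + 'public-keys/').read()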

The EC2 API is intended to mimic EC2 behavior and provide compatibility. The 
OpenStack implementations should not diverge or break that compatibility.  

--  
Eric Windisch


On Tuesday, April 10, 2012 at 2:05 PM, Justin Santa Barbara wrote:

 Config drive can support all EC2 functionality, I believe.
  
 Images would need to be respun for OpenStack with config-drive, unless we 
 populated the config drive in a way that worked with cloud-init.  (Scott?)  
  
 Personally, I'd rather our effort went into producing great images for 
 OpenStack, than into compatibility with last-generation clouds.  Any idea 
 what the important EC2 images are?  Are there a handful of images that we 
 could duplicate and then just forget about EC2?  
  
  
 On Tue, Apr 10, 2012 at 10:30 AM, Joshua Harlow harlo...@yahoo-inc.com 
 (mailto:harlo...@yahoo-inc.com) wrote:
  Except for the fact that the config drive is non-EC2 right?
   
  That might blow it out of the water to start, as I know a lot of people 
  want the EC2 equivalents/compat.
   
  But maybe if done right it shouldn’t matter (ie cloud-init could instead of 
  calling out to urls could also call out to “local files” on a config drive).
   
  I just worry that config drive is openstack api only, afaik.  
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
  
  


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] EC2 compat.

2012-04-10 Thread Eric Windisch
I agree that it is important to access the limitations of the OpenStack EC2 API 
implementation.

To that end, make sure to take a look at 
https://github.com/cloudscaling/aws-compat

--  
Eric Windisch


On Tuesday, April 10, 2012 at 7:39 PM, Joshua Harlow wrote:

 Hi all,
  
 I’ve started gathering tools/docs and possibly a mock ec2 server (wip) that 
 can allow openstack to figure out exactly what is broken with there ec2 
 implementation.
  
 The process of course starts with figuring out what is there currently, what 
 is broken and what needs fixing.
  
 I’ve started this @ https://github.com/yahoo/Openstack-EC2/wiki
  
 This includes XSD’s grabbed from amazon (converted from there wsdls), 
 https://github.com/yahoo/Openstack-EC2/tree/master/data/xsds
  
 I’ve started to fill in what might be a good template for the run instances 
 call @ https://github.com/yahoo/Openstack-EC2/wiki/RunInstances
  
 Do people think this would be useful? Or any suggestions on how to do it 
 better...  
  
 Since EC2 is so key to openstack, the only way to make it better is to have a 
 very detailed level of docs on how compatible it is, hopefully this is a 
 start.
  
 Of course contributions are welcome, since documenting what is in amazon EC2  
 responses/requests and what is in openstack responses/requests/code is a very 
 laborious task (but it has to be done).
  
 -Josh
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
  
  


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] EC2 compat.

2012-04-10 Thread Eric Windisch
Josh, as a follow-up, it would be good to keep an open dialogue on this. 
When/if you get a chance to review the aws-compat branch, I'd like to get your 
feedback as well.

PS  I meant to write assess, not access. I only noticed when I read back my 
email. I'm too pedantic to not correct myself.  

--  
Eric Windisch


On Tuesday, April 10, 2012 at 8:30 PM, Eric Windisch wrote:

 I agree that it is important to access the limitations of the OpenStack EC2 
 API implementation.
  
 To that end, make sure to take a look at 
 https://github.com/cloudscaling/aws-compat
  
 --  
 Eric Windisch
  
  
 On Tuesday, April 10, 2012 at 7:39 PM, Joshua Harlow wrote:
  
  Hi all,
   
  I’ve started gathering tools/docs and possibly a mock ec2 server (wip) that 
  can allow openstack to figure out exactly what is broken with there ec2 
  implementation.
   
  The process of course starts with figuring out what is there currently, 
  what is broken and what needs fixing.
   
  I’ve started this @ https://github.com/yahoo/Openstack-EC2/wiki
   
  This includes XSD’s grabbed from amazon (converted from there wsdls), 
  https://github.com/yahoo/Openstack-EC2/tree/master/data/xsds
   
  I’ve started to fill in what might be a good template for the run instances 
  call @ https://github.com/yahoo/Openstack-EC2/wiki/RunInstances
   
  Do people think this would be useful? Or any suggestions on how to do it 
  better...  
   
  Since EC2 is so key to openstack, the only way to make it better is to have 
  a very detailed level of docs on how compatible it is, hopefully this is a 
  start.
   
  Of course contributions are welcome, since documenting what is in amazon 
  EC2  responses/requests and what is in openstack responses/requests/code is 
  a very laborious task (but it has to be done).
   
  -Josh
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net 
  (mailto:openstack@lists.launchpad.net)
  Unsubscribe : https://launchpad.net/~openstack
  More help : https://help.launchpad.net/ListHelp
   
   
   
  
  

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Notifiers

2012-03-14 Thread Eric Windisch
To make the notifiers common, the QAL / RPC will also have to be made common. 
The Glance code for notifications currently implements queue/rpc-implementation 
specific notifiers, so this might not be a bad thing…

If there is an effort to move the RPC into openstack.common, then I would like 
to be involved, if only as a reviewer to ensure the queue abstraction layer 
makes it over safely. [I still question if we need an rpc.notify()…]
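
For context, the call surface that matters to callers is tiny and never
mentions rpc at all; roughly (argument names per my reading of the current
notifier code, so treat this as a sketch):

    # Hedged sketch: emitting a notification never touches the queue or rpc
    # directly; only the configured notification driver does.
    from nova.notifier import api as notifier_api

    notifier_api.notify('compute.host1',             # publisher_id
                        'compute.instance.create',   # event_type
                        'INFO',                      # priority
                        {'instance_id': 42})         # payload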

--  
Eric Windisch


On Wednesday, March 14, 2012 at 5:19 AM, Juan Antonio García Lebrijo wrote:

 Hi,
  
 we are thinking to contribute to increase the types of notifications (events) 
 through the rabbit server. We are thinking to notify from each Openstack 
 component (nova, glance,  quantum in the future ...)
  
 We think that unify the code on the openstack-common stuff is a great idea.
  
 We are really interested to collaborate on this topic.
  
 Cheers,
 Juan
  
 On 12/03/12 14:33, Swaminathan Venkataraman wrote:  
  Hi,  
  I've been playing around with openstack for a month now and was looking to 
  see how I can contribute. I saw that nova and glance use different methods 
  to send out notifications. It looked relatively straight forward to make 
  them use one common library for notifications. That way, when we make 
  changes or add new transports we do not have to do it twice (or more number 
  of times depending on how other components of openstack do notifications). 
  Let me know if someone is already working on this. If not, let me know if 
  this makes sense and I can start by creating a blueprint. Also, is there 
  any preference to use object oriented (or not)?
   
  Cheers,  
  Venkat
   
   
  ___ Mailing list: 
  https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net 
  (mailto:openstack@lists.launchpad.net) Unsubscribe : 
  https://launchpad.net/~openstack More help : 
  https://help.launchpad.net/ListHelp  
  
 --  
  
 Juan Antonio García Lebrijo
 CTO
 www.stackops.com (http://www.stackops.com/) |  juan.lebr...@stackops.com 
 (mailto:juan.lebr...@stackops.com) | skype:juan.lebrijo.stackops
  
  
  
  
  
  
  
  
  
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
  
  


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-05 Thread Eric Windisch
 an rpc implementation that writes to disk and returns,

A what? I'm not sure what problem you're looking to solve here or what you 
think the RPC mechanism should do. Perhaps you're speaking of a Kombu or AMQP 
specific improvement?

There is no absolute need for persistence or durability in RPC. I've done quite 
a bit of analysis of this requirement and it simply isn't necessary. There is 
some need in AMQP for this due to implementation-specific issues, but not 
necessarily unsolvable. However, these problems simply do not exist for all RPC 
implementations...

-- 
Eric Windisch




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] eventlet weirdness

2012-03-01 Thread Eric Windisch
Just because MySQL is a C library doesn't necessarily mean it can't be made to 
work with coroutines. ZeroMQ is supported through eventlet.green.zmq and there 
exists geventmysql (although it appears to me as more a proof-of-concept).

Moving to a pure-python mysql library might be the path of least resistance as 
long as we're committed to eventlet. 
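
A minimal sketch of what that looks like, assuming a pure-Python driver such
as pymysql (the driver choice here is an assumption, not a recommendation):

    # Hedged sketch: with monkey patching, a pure-Python MySQL driver yields
    # to other green threads while waiting on the socket.
    import eventlet
    eventlet.monkey_patch()   # patches socket, time, thread, etc.

    import pymysql            # pure Python, so the patched socket applies

    conn = pymysql.connect(host='127.0.0.1', user='nova',
                           passwd='secret', db='nova')
    cur = conn.cursor()
    cur.execute('SELECT count(*) FROM instances')
    print(cur.fetchone())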

-- 
Eric Windisch


On Thursday, March 1, 2012 at 3:36 PM, Vishvananda Ishaya wrote:

 Yes it does. We actually tried to use a pool at diablo release and it was 
 very broken. There was discussion about moving over to a pure-python mysql 
 library, but it hasn't been tried yet.
 
 Vish
 
 On Mar 1, 2012, at 11:45 AM, Yun Mao wrote:
 
  There are plenty eventlet discussion recently but I'll stick my
  question to this thread, although it's pretty much a separate
  question. :)
  
  How is MySQL access handled in eventlet? Presumably it's external C
  library so it's not going to be monkey patched. Does that make every
  db access call a blocking call? Thanks,
  
  Yun
  
  On Wed, Feb 29, 2012 at 9:18 PM, Johannes Erdfelt johan...@erdfelt.com 
  (mailto:johan...@erdfelt.com) wrote:
   On Wed, Feb 29, 2012, Yun Mao yun...@gmail.com 
   (mailto:yun...@gmail.com) wrote:
Thanks for the explanation. Let me see if I understand this.

1. Eventlet will never have this problem if there is only 1 OS thread
-- let's call it main thread.
   
   
   
   In fact, that's exactly what Python calls it :)
   
2. In Nova, there is only 1 OS thread unless you use xenapi and/or the
virt/firewall driver.
3. The python logging module uses locks. Because of the monkey patch,
those locks are actually eventlet or green locks and may trigger a
green thread context switch.

Based on 1-3, does it make sense to say that in the other OS threads
(i.e. not main thread), if logging (plus other pure python library
code involving locking) is never used, and we do not run a eventlet
hub at all, we should never see this problem?
   
   
   
   That should be correct. I'd have to double check all of the monkey
   patching that eventlet does to make sure there aren't other cases where
   you may inadvertently use eventlet primitives across real threads.
   
   JE
   
   
   ___
   Mailing list: https://launchpad.net/~openstack
   Post to : openstack@lists.launchpad.net 
   (mailto:openstack@lists.launchpad.net)
   Unsubscribe : https://launchpad.net/~openstack
   More help : https://help.launchpad.net/ListHelp
  
  
  
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net 
  (mailto:openstack@lists.launchpad.net)
  Unsubscribe : https://launchpad.net/~openstack
  More help : https://help.launchpad.net/ListHelp
 
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Running for Nova PTL

2012-02-24 Thread Eric Windisch
On Friday, February 24, 2012 at 12:01 PM, ed_con...@dell.com wrote:
 That can work and may be the only choice if there is an extended feature 
 freeze. Although, that may end up creating a service provider-specific 
 fork...which may not be a bad thing.


It can also be a very, very bad thing. Segmentation of the community and an 
exponentially increased complexity for those of us playing both sides of the 
private/public fence.  I really can't see any advantage of forking, even 
temporarily.

I'm opposed to a broad Folsom feature freeze as it would too greatly limit 
progression. However, I also agree that there needs to be a better core focus, 
rather than on sprawl.  I'm not opposed to selective feature inclusion.  By the 
same token, I believe we should either follow the Linux kernel model of 
including the kitchen sink, or not; if not, Nova would be the core framework 
into which various drivers would be provided and plumbed.

If today, for instance, it was announced that Folsom won't include new 
features, then it would be impossible for Coraid, Pillar, or some other storage 
solution provider to offer a driver in Folsom; they would have to wait until G.  
Yet, Nexenta just got their driver into Essex. Nexenta's 6 month lead just 
turned into a 12 month lead! Sure, their competitor could ship separately, but 
there *is* a difference between inclusion and not, if only politically and 
from the perspective of marketing.

If new drivers and new code won't be accepted easily, then these third party 
drivers should be maintained as external plugins. While nice in theory, I don't 
agree with it at this time. These are contributions to OpenStack and are core, 
essential to the success of the community. Breaking out drivers, while easier, 
would fracture the community in potentially devastating ways.

-- 
Eric Windisch

 ___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ZeroMQ RPC Driver - FF-Exception request

2012-02-21 Thread Eric Windisch
It has been a pretty long month. I absorbed the community's comments, made 
improvements where they've been raised, and have also refactored to  
use fully asynchronous messaging.

I won't say the driver is without bugs, but it is again fully-functional and 
feature complete. I've been working with Russell Bryant and his helpful reviews 
this morning to polish up for inclusion, if it can get the approvals. 

-- 
Eric Windisch


On Tuesday, February 7, 2012 at 4:05 PM, Eric Windisch wrote:

 The ZeroMQ RPC driver is now feature-complete. I'm cleaning up for a 
 merge-proposal! 
 
 -- 
 Eric Windisch 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Question on i8ln?

2012-02-13 Thread Eric Windisch
Josh, you raise an interesting point. This also affects the ability to search 
on these terms. On the other hand, having log messages in a language that the 
user doesn't understand will require those users to seek assistance when they 
might otherwise be able to solve the problems themselves.

It might be an interesting exercise to provide log messages in two languages 
(one always being English), if we don't simply standardize on English.

--  
Eric Windisch


On Monday, February 13, 2012 at 12:50 PM, Joshua Harlow wrote:

 Hi all,
  
 I was just wondering if I could get clarification on something I never 
 understood related to i8ln.
  
 In nova HACKING.rst there is a line that mentions how log messages should be 
 using gettext for i8ln.
  
 Is it common in other companies to attempt to internationalize log messages?  
  
 I’ve seen this throughout the different openstack code and never quite 
 understood why.
  
 I can understand horizon being internationalized, but debugging/error/warning 
 (logging) messages? Isn’t that meant to be read by a developer, who will most 
 likely understand english (to some degree).
  
 ??
  
 -Josh
  
  
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
  
  


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Question on i8ln?

2012-02-13 Thread Eric Windisch
Josh,  

I think the problem is that there are different levels of logging. DEBUG 
messages can more safely be forced to English than INFO messages, in my opinion.

The compromise solution might be to i18n all logs, but provide error codes 
which can be looked up and are thus universal and remain useful for debugging 
purposes.  
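
To illustrate the compromise (the code value and message are invented; the
gettext plumbing mirrors what HACKING already asks for):

    # Hedged sketch: a stable, searchable error code paired with a
    # translated, human-readable message.
    import gettext
    import logging

    gettext.install('nova', unicode=1)   # makes _() a builtin
    LOG = logging.getLogger(__name__)

    volume_id, instance_id = 7, 42
    LOG.error(_('NOVA-3021: unable to attach volume %(vol)s to instance '
                '%(inst)s') % {'vol': volume_id, 'inst': instance_id})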

--  
Eric Windisch


On Monday, February 13, 2012 at 1:15 PM, Joshua Harlow wrote:

 Sure but to contribute they have to 
 understand python which itself is english based??
 I can understand for sys-ops people that can’t understand english this might 
 be useful, but then they are running in unix which is also english based.
  
 For FE (front-end facing) sites I completely agree that, those must be made 
 i8ln accessible. But at a code/logging level? I thought english won at that 
 level (since if not all programming languages that I know of are english 
 based – minus brainfuck, haha). But maybe this is just my bias, from working 
 at a company that does this (code in english, comments in english).  
  
 Do other open source projects attempt to do this? I’ve never seen apache log 
 messages in different languages, or hadoop logging in different languages, 
 but maybe that’s just my lack of experience/not using that feature.  
  
 -Josh
  
 On 2/13/12 1:09 PM, Ronald Bradford ronald.bradf...@gmail.com wrote:
  
  Hi Josh,
   
  I cannot speak about the i18n specifics in OpenStack, but let me give you 
  an answer to the general question.
  I added a slide to a presentation last week that uses the graph at 
  http://www.internetworldstats.com/top20.htm  I recommend you check it out.
   
  The title of the slide is English is not the only language; in fact, for 
  the majority of Internet users, English is not the first language. Yes, the 
  tech savvy people probably do understand English, but they do not use it by 
  default.  As a speaker in many different countries, I always see people use 
  native search engines (e.g. German, Espanol etc), it is because that is 
  what they know.  Many web sites (my own ronaldbradford.com 
  (http://ronaldbradford.com), for example) do not rate well in foreign languages.
   
  The same thought applies to the actual software that runs site and products 
  and is supported by those that do not use English as a first language.  Why 
  should a message be in English and not Spanish?
   
  The ability for foreign language error messages enables more developers 
  from those countries to actively contribute without a requirement of a 
  language.
   
  Regards
   
  Ronald

   
  On Mon, Feb 13, 2012 at 3:50 PM, Joshua Harlow harlo...@yahoo-inc.com 
  wrote:
   Hi all,

   I was just wondering if I could get clarification on something I never 
   understood related to i8ln.

   In nova HACKING.rst there is a line that mentions how log messages should 
   be using gettext for i8ln.

   Is it common in other companies to attempt to internationalize log 
   messages?  

   I’ve seen this throughout the different openstack code and never quite 
   understood why.

   I can understand horizon being internationalized, but 
   debugging/error/warning (logging) messages? Isn’t that meant to be read 
   by a developer, who will most likely understand english (to some degree).

   ??

   -Josh



   ___
   Mailing list: https://launchpad.net/~openstack 
   https://launchpad.net/%7Eopenstack  
   Post to : openstack@lists.launchpad.net
   Unsubscribe : https://launchpad.net/~openstack 
   https://launchpad.net/%7Eopenstack  
   More help   : https://help.launchpad.net/ListHelp

   
   
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net (mailto:openstack@lists.launchpad.net)
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
  
  


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Question on i8ln?

2012-02-13 Thread Eric Windisch
Open source? I haven't seen it there so much as with enterprise apps from shops 
like IBM.  They're valuable as long as they're accompanied by readable text, 
and not just a code.

I'm guessing that you're looking at this from a developer's perspective rather 
than a support and operations perspective. Developers will understand English, 
but the operations and especially the support team may not. Having native 
language log messages has the potential to significantly decrease support costs 
for users both domestic and abroad (where domestic users might outsource 
support).

-- 
Eric Windisch


On Monday, February 13, 2012 at 1:50 PM, Joshua Harlow wrote:

 Interesting, that is one way of doing it.
 
 Is this pretty common with other major open-source projects?
 
 Logging just seems like a different place to me, where english seems like it 
 should be required. I am biased of course ;)
 
 On 2/13/12 1:39 PM, Eric Windisch e...@cloudscaling.com wrote:
 
  
   Josh, 
  
  I think the problem is that there are different levels of logging. DEBUG 
  messages can more safely be forced to English than INFO messages, in my 
  opinion.
  
  The compromise solution might be to i18n all logs, but provide error codes 
  which can be looked up and are thus universal and remain useful for 
  debugging purposes.
  

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Question on i8ln?

2012-02-13 Thread Eric Windisch
Error codes are also searchable, but I agree that text is even more 
search-aware. However, also as a Yankee, I often find search results for an 
error string leading me to a site in, say, Russian or Korean.  This is a direct 
result of log messages NOT being i18n'ed.  I suppose it swings both ways...  

--  
Eric Windisch


On Monday, February 13, 2012 at 2:41 PM, Joshua Harlow wrote:

 Agreed, I do that as well.
  
 But I’m also a biased yankee, now a californian (not hippie/ster yet, haha).
  
 On 2/13/12 2:37 PM, Andrew Bogott abog...@wikimedia.org wrote:
  
On 2/13/12 3:58 PM, Eric Windisch wrote:  

 
   I'm guessing that you're looking at this from a developer's perspective 
   rather than a support and operations perspective. Developers will 
   understand English, but the operations and especially the support team 
   may not. Having native language log messages has the potential to 
   significantly decrease support costs for users both domestic and abroad 
   (where domestic users might outsource support).


 

  The one thing I consistently use log messages for is googling.  If everyone 
  in the world gets the same log message for a given error, that drastically 
  increases the chances that I'll find that log message in a forum post 
  someplace.  Doesn't localizing log messages fragment the world of support 
  forums into a zillion language-specific shards?  (Full disclosure:  I speak 
  English, so this argument may be an unconscious front for Yankee 
  Imperialism.)




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ZeroMQ RPC Driver - FF-Exception request

2012-02-07 Thread Eric Windisch
The ZeroMQ RPC driver is now feature-complete. I'm cleaning up for a 
merge-proposal! 

-- 
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ZeroMQ RPC Driver - FF-Exception request

2012-01-30 Thread Eric Windisch
On Mon, Jan 30, 2012 at 5:05 AM, Rosa, Andrea andrea.r...@hp.com wrote:

 Hi

 In my opinion there is another point to consider: at this moment it's
 possible, in rabbitmq by auth-mechanism-ssl plugin, to use client and
 server authentication through certificates.
 I don't know 0MQ, so maybe the answer to my question is addressed by 0MQ
 documentation (I'll have a look), but is there some feature to easily
 implement this security feature?


The ZeroMQ driver will neither encrypt nor authenticate packets in the
Essex release. With ZeroMQ, it will be the requirement of the serialization
protocol to manage this. Currently, data is simply pickled. Perhaps for
Folsom we can create a blueprint for the signing and verification of messages.
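
As a sketch of the idea (not a proposed wire format), signing could be as
simple as an HMAC over the serialized payload with a shared key:

    # Hedged sketch: sign and verify a serialized message with HMAC-SHA256.
    import hashlib
    import hmac
    import json

    SHARED_KEY = 'not-a-real-key'   # assumption: some out-of-band shared secret

    def sign(message):
        body = json.dumps(message)
        mac = hmac.new(SHARED_KEY.encode(), body.encode(),
                       hashlib.sha256).hexdigest()
        return {'body': body, 'hmac': mac}

    def verify(envelope):
        expected = hmac.new(SHARED_KEY.encode(), envelope['body'].encode(),
                            hashlib.sha256).hexdigest()
        return expected == envelope['hmac']  # a real check should be constant-time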

-- 
Regards,
Eric Windisch
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ZeroMQ RPC Driver - FF-Exception request

2012-01-25 Thread Eric Windisch
Alexis, 

It is also obvious that the link I provided is a particularly biased source, so 
it should be taken with a grain of salt. I only mentioned Qpid because Nova 
just received a driver for it, I didn't know the differences in such detail.

One of the problems Nova has is that it registers N queues for N hosts, with 
one host pulling from each queue (1:1). This is why ZeroMQ is a good fit, 
whereby messages can be sent directly to those hosts. There are also a small 
(but active) number of N to N queues which remain centralized and for which 
running Rabbit or Qpid is a good fit.
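
The direct-addressing idea, very roughly (the port number and envelope are
illustrative, not the driver's actual wire format):

    # Hedged sketch: push a message straight to the consumer that owns the
    # topic, with no central broker in the path.
    import json

    import zmq

    ctx = zmq.Context()
    sock = ctx.socket(zmq.PUSH)
    sock.connect('tcp://compute-host1:9501')
    sock.send(json.dumps({'topic': 'compute.compute-host1',
                          'method': 'ping', 'args': {}}).encode())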

It would be an interesting exercise to allow the ZeroMQ driver to defer back to 
the Kombu or Qpid driver for those messages which must remain centralized.

-- 
Eric Windisch


On Wednesday, January 25, 2012 at 1:18 AM, Alexis Richardson wrote:

 On Wed, Jan 25, 2012 at 4:46 AM, Eric Windisch e...@cloudscaling.com 
 (mailto:e...@cloudscaling.com) wrote:
  Sorry, I had originally sent only to Yun Mao. Sending to list.
  
  ---
  
  Rather than attempt to answer this, I defer to the ZeroMQ guide. It should
  be noted that the designers of AMQP, iMatix, designed and built ZeroMQ.
  (RabbitMQ and Qpid implement AMQP.)
  
 
 
 Hold on a second there...
 
 There has been a LOT of muddle and fud (fuddle?) around AMQP. Let
 me try to clear that up.
 
 Qpid's default option is AMQP 0-10. While feature-rich, 0-10 is not
 widely used and was described by the AMQP chairman as too long and
 complicated not long after it was published. See also commentary on
 the web, on the subject of its length. Rabbit does not and will not
 support this version, and other folks have not implemented it either.
 
 WHEREAS --
 
 RabbitMQ implements AMQP 0-91, a 40 page spec. It's the one most people use.
 
 0-9-1 is the version of AMQP that is used across a very large number
 of use cases, that is quite easy to implement. It was created by all
 the implementers of AMQP that existed at time of writing including
 Rabbit, Redhat, JPMorgan, and of course iMatix. Pieter @iMatix was
 the spec editor, and did a fantastic job. 0-9-1 provides
 interoperable messaging as witnessed by the large number (100s) of
 clients and add-ons that have been created by the community. There
 have also been several servers implemented, that all just work with
 those clients. For example Kaazing, StormMQ, and OpenAMQ. I believe
 that Qpid also supports it, which might be important for this
 community (Redhat guys please note).
 
 This is what Pieter said: Read AMQP/0.9.1, it is a beautiful, clean,
 minimalist work essentially created by cooperation in the working
 group to improve AMQP/0.8. I edited AMQP/0.9.1, based on a hundred or
 more fixes made by the best individual brains in that group. Alexis is
 right - this is not disappearing. (Ref - comments here:
 http://it.toolbox.com/blogs/open-source-smb/whats-the-future-of-amqp-44450)
 
 I agree with Pieter. We also like the way that 0-9-1 can play nicely
 with 0mq and other protocols. Note that Rabbit supports these via
 plugins.
 
 alexis
 
 
 
 
 
 
  http://zguide.zeromq.org/page:all#Why-We-Needed-MQ
  
  
  --
  Eric Windisch
  
  On Tuesday, January 24, 2012 at 5:20 PM, Yun Mao wrote:
  
  Hi I'm curious and unfamiliar with the subject. What's the benefit of
  0MQ vs Kombu? Thanks,
  
  Yun
  
  On Tue, Jan 24, 2012 at 7:08 PM, Eric Windisch e...@cloudscaling.com 
  (mailto:e...@cloudscaling.com)
  wrote:
  
  Per today's meeting, I am proposing the ZeroMQ RPC driver for a
  feature-freeze exception.
  
  I am making good progress on this blueprint, it adds a new optional module
  and service without modifying any existing code or modules. I have been
  pushing to complete this work by E3, so I am close to completion, but cannot
  finish by tonight's deadline.
  
  The ZeroMQ driver will provide an alternative to Kombu (RabbitMQ) and QPID
  for messaging within Nova. Currently, the code passes unit tests but fails
  on smoketests. I expect to have the code viable for a merge proposal in less
  than a week, tomorrow if I'm smart, lucky, and the store doesn't sell out of
  RedBull. A two week grace would give me a nice buffer.
  
  Thanks,
  Eric Windisch
  
  
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net 
  (mailto:openstack@lists.launchpad.net)
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
  
  
  
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net 
  (mailto:openstack@lists.launchpad.net)
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
  
 
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack

[Openstack] ZeroMQ RPC Driver - FF-Exception request

2012-01-24 Thread Eric Windisch
Per today's meeting, I am proposing the ZeroMQ RPC driver for a feature-freeze 
exception. 

I am making good progress on this blueprint, it adds a new optional module and 
service without modifying any existing code or modules. I have been pushing to 
complete this work by E3, so I am close to completion, but cannot finish by 
tonight's deadline.

The ZeroMQ driver will provide an alternative to Kombu (RabbitMQ) and QPID for 
messaging within Nova. Currently, the code passes unit tests but fails on 
smoketests. I expect to have the code viable for a merge proposal in less than 
a week, tomorrow if I'm smart, lucky, and the store doesn't sell out of 
RedBull. A two week grace would give me a nice buffer.

Thanks,
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ZeroMQ RPC Driver - FF-Exception request

2012-01-24 Thread Eric Windisch
Sorry, I had originally sent only to Yun Mao. Sending to list.
---
Rather than attempt to answer this, I defer to the ZeroMQ guide. It should be 
noted that the designers of AMQP, iMatix, designed and built ZeroMQ. (RabbitMQ 
and Qpid implement AMQP.)
http://zguide.zeromq.org/page:all#Why-We-Needed-MQ


-- 
Eric Windisch


On Tuesday, January 24, 2012 at 5:20 PM, Yun Mao wrote:

 Hi I'm curious and unfamiliar with the subject. What's the benefit of
 0MQ vs Kombu? Thanks,
 
 Yun
 
 On Tue, Jan 24, 2012 at 7:08 PM, Eric Windisch e...@cloudscaling.com 
 (mailto:e...@cloudscaling.com) wrote:
  Per today's meeting, I am proposing the ZeroMQ RPC driver for a
  feature-freeze exception.
  
  I am making good progress on this blueprint, it adds a new optional module
  and service without modifying any existing code or modules. I have been
  pushing to complete this work by E3, so I am close to completion, but cannot
  finish by tonight's deadline.
  
  The ZeroMQ driver will provide an alternative to Kombu (RabbitMQ) and QPID
  for messaging within Nova. Currently, the code passes unit tests but fails
  on smoketests. I expect to have the code viable for a merge proposal in less
  than a week, tomorrow if I'm smart, lucky, and the store doesn't sell out of
  RedBull. A two week grace would give me a nice buffer.
  
  Thanks,
  Eric Windisch
  
  
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net 
  (mailto:openstack@lists.launchpad.net)
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
  
 
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ZeroMQ RPC Driver - FF-Exception request

2012-01-24 Thread Eric Windisch



On Tuesday, January 24, 2012 at 6:05 PM, Zhongyue Luo wrote:

 I assume the messages will be delivered directly to the destination rather 
 than piling up on a queue server?
 
 
 


Although the blueprint doesn't specify this level of detail, the intention had 
originally been to deliver a queue-server model in Essex with a 
distributed-model in Folsom.  The first-cut code which passes tests does pile 
onto a queue server.  I now have code which implements a distributed model, but 
it is in a raw state. There are a number of edge-cases a distributed model can 
face, especially in regard to threading mechanisms (i.e. eventlet).

My current blockers for having this code ready for a merge-proposal are not 
related to a single-server or distributed model. Once I address these problems, 
there can be a decision on how much time is left to debug a distributed model 
within a FFe time-frame, or if this can be accepted as a separate blueprint 
within E4. If I get a solid 1-2 weeks or more for a FFe, I'll be more confident 
in the distributed model as there is more room to test, debug, and fix.

By the way, I must note in this thread, that I haven't done this alone.  A good 
amount of this work, i.e. the first-first-cut, was completed by Zed Shaw and 
was made possible through his QAL. Duncan McGreggor has been on hand, helpful, 
and has contributed some commits as well.


Regards,
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Memory quota in nova-compute nodes.

2012-01-20 Thread Eric Windisch


On Thursday, January 19, 2012 at 12:52 PM, Jorge Luiz Correa wrote:

 I would like to know if it's possible to configure quota in each nova-compute 
 node. For example, I set up a new hardware with 8 GB of memory and install 
 the nova-compute, but I wish only 4 GB of memory are used (dedicated to 
 nova-compute). Is it possible? If so, how can I configure that?
 
 I've seen quotas in projects, configured using nova-manage command line tool. 
 But it isn't what I'm looking for. 
 
In Essex, you can use 'reserved_host_memory_mb' with the ZoneManager to reserve 
a certain amount of memory per host.

If you're on Diablo, Joe Gordon made a pluggable scheduler based on the 
SimpleScheduler to do the same:
  https://github.com/cloudscaling/cs-nova-simplescheduler
The relevent key here would be 'cs_host_reserved_memory_mb'.

Note that both of these define how much memory goes to your OS and 
applications, rather than how much memory is set aside for Nova / VMs.  If you 
had 8GB and wanted to give Nova 6GB, you would reserve 2GB for your host OS.  
This is a soft limit; your OS will happily take more memory absent the 
aforementioned cgroup support.
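
Concretely, and hedging on the exact configuration format of your deployment,
that is a single flag on each compute node, e.g.:

    # nova.conf (flag-file style): reserve 2GB for the host OS, leaving
    # roughly 6GB of an 8GB box for instances.
    --reserved_host_memory_mb=2048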

-- 
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] lunr reference iSCSI target driver

2011-05-02 Thread Eric Windisch

On May 2, 2011, at 12:50 PM, FUJITA Tomonori wrote:

 Hello,
 
 Chuck told me at the conference that lunr team are still working on
 the reference iSCSI target driver design and a possible design might
 exploit device mapper snapshot feature.


You're involved in the tgt project and it is the tgt project's purgative to add 
features as seen fit, but are you sure that you want to support this feature?

I can see the advantages of having Swift support in tgt; however, it is 
considerably complex.  You're mapping a file-backed block device on a remote 
filesystem to a remote block storage protocol (iSCSI).  Might it not be better, 
albeit less integrated, to improve the Swift FUSE driver and use a standard 
loopback device?  This loopback device would be supported by the existing AIO 
driver in tgt.

To clarify on the subject of snapshots: The naming of snapshots in Nova and 
their presence on disk is more confusing than it should be. There was some 
discussion of attempting to clarify the naming conventions.  Storage snapshots 
as provided by the device mapper are copy-on-write block devices, while Nova 
will also refer to file-backing stores as snapshots.  This latter definition is 
also used by EC2, but otherwise unknown and unused in the industry.

I foresee that Lunr could use dm-snapshot to facilitate backups and/or to 
provide a COW against dm-zero for the purpose of preventing information 
disclosure.  Both of these use-cases would be most applicable to local storage, 
whereas most iSCSI targets would provide these as part of their filer API and 
it would probably not be very useful at all for Swift.  The only reasons to 
perform storage-snapshots/COW for iSCSI targets would be for relatively dumb 
filers that cannot do this internally, or for deployments where their smart 
filers have edge-cases preventing or breaking the use of these features.
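
To make the dm-zero case concrete, here is a rough sketch (device names and 
sizes are invented, but the table format is standard device mapper):

  SIZE=20971520   # 10 GiB expressed in 512-byte sectors

  # 1. a virtual, all-zero origin; reads of untouched blocks return zeros
  dmsetup create zero0 --table "0 $SIZE zero"

  # 2. a writable COW snapshot on top of it; every write lands in the COW
  #    device, so no previous tenant's data can leak through the volume
  dmsetup create vol0 \
    --table "0 $SIZE snapshot /dev/mapper/zero0 /dev/vg0/cow0 P 8"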

Regards,
Eric Windisch
e...@cloudscaling.com




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] lunr reference iSCSI target driver

2011-05-02 Thread Eric Windisch
 
 You're involved in the tgt project and it is the tgt project's purgative to 
 add features as seen fit, but are you sure that you want to support this 
 feature?

Major spell check fail: prerogative ;-)


Regards,
Eric Windisch
e...@cloudscaling.com




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] lunr reference iSCSI target driver

2011-05-02 Thread Eric Windisch

 Surely, FUSE is another possible option, I think. I heard that lunr
 team was thinking about the approach too.

I'm concerned about the performance/stability of FUSE, but I'm not sure that 
using iSCSI is a significantly better option when the access is likely to be 
local. If I had to choose something in-between, I'd evaluate whether NBD is any 
better a solution.

I expect there will be great demand for an implementation of Swift as a block 
device client.  Care should be taken in deciding what will be the 
best-supported method/implementation. That said, you have an implementation, 
and that goes a long way versus the alternatives, which don't currently exist.


 As I wrote in the previous mail, the tricky part of the dm-snapshot
 approach is getting the delta of snaphosts (I assume that we want to
 store only deltas on Swift). dm-snapshot doesn't provide the
 user-space API to get the deltas. So Lunr needs to access to
 dm-snapshot volume directly. It's sorta backdoor approach (getting the
 information that Linux kernel doesn't provide to user space). As a
 Linux kernel developer, I would like to shout at people who do such :)


With dm-snapshot, the solution is to look at the device mapper table (via the 
device mapper API) and access the backend volume. I don't see why this is a bad 
solution. In fact, considering that the device mapper table could be 
arbitrarily complex and some backend volumes might be entirely virtual (e.g. 
dm-zero), this seems fairly reasonable to me.
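
For example (the device name is hypothetical and the output is from memory, so 
treat it as illustrative), the table itself tells you where the COW store 
lives:

  $ dmsetup table vol0
  0 20971520 snapshot 253:0 253:1 P 8
  # the last two device fields are the origin (253:0) and the COW store
  # (253:1); user space can open the COW device directly to read the delta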

I really don't see at all how Swift-as-block-device relates at all to (storage) 
snapshots, other than the fact that this makes it possible to use Swift with 
dm-snapshot.

Regards,
Eric Windisch
e...@cloudscaling.com




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

