Re: [openstack-dev] [Heat] Meeting time redux

2014-04-25 Thread Huruifeng (Victor)
Agree with shardy that the meeting structure should be inclusive. Since it's a 
global project, it would be better to allow people from different time zones to 
attend the meeting. Of course, having the PTL present is most important.

For me, personally, I'm new to OpenStack and willing to contribute to Heat. I 
learned a lot from the website and mailing list, and I'm planning to join the IRC 
meeting starting next week, as I think it would be a great help in getting involved. 
00:00 UTC and 12:00 UTC (8 am and 8 pm here) both suit me (and the people on 
my team) well.

Though there will never be a time convenient for everyone, I believe passion 
will conquer inconvenience :)

Regards,
胡瑞丰(Victor)
2014-04-25

From: Steven Hardy <sha...@redhat.com>
Sent: 2014-04-24 19:17
To: OpenStack Development Mailing List (not for usage 
questions) <openstack-dev@lists.openstack.org>
Cc:
Subject: Re: [openstack-dev] [Heat] Meeting time redux


On Wed, Apr 23, 2014 at 03:12:40PM -0400, Zane Bitter wrote:

 At the beginning of this year we introduced alternating times for
 the Heat weekly IRC meeting, in the hope that our contributors in
 Asia would be able to join us. The consensus is that this hasn't
 worked out as well as we had hoped - even the new time falls at 8am
 in Beijing, so folks are regularly unable to make the meeting. It
 also falls at 5pm on the west coast of the US, so folks from there
 are regularly unable to make the meeting as well. And of course it
 is in the middle of the night for Europe, so the meeting room looks
 like a ghost town.

 Since we are in a new development cycle (with the PTL in a different
 location) and daylight savings has kicked in/out in many places,
 let's review our options. Here are our choices as I see them:

 * Keep going with the current system or some minor tweak to it.

Do we know why it's not working? 8am and 5pm seem like relatively
reasonable times from where I'm sitting, so would tweaking it by a couple
of hours make any difference?

 * Flip the alternate meeting by 12 hours to 1200 UTC. (8pm in China,
 late night in Oceania, early morning on the east coast of the US, and
 we lose the rest of the US.)

 * Lose all US-based folks and have a meeting for the rest of the
 world at around 0700 UTC. (US-based folks include me, so I would
 have to ask someone else to take care of passing on
 messages-from-the-PTL.)

Personally I think it's important for the PTL to attend most meetings, so
-1 on this.

 * Abandon the alternating meetings altogether.

I'm mildly in favour of this, but probably only for selfish reasons, since
the current alternating time is bad for me personally ;)

I definitely want our meeting structure to be inclusive, but I don't really
have a clear idea of how many are excluded from participating (i.e. those
who will actually turn up if we continue alternating), so unless we get
feedback from folks saying they will turn up at $alternating_time to solve
the ghost town problem, I'm in favour of abandoning.

My £0.002 :)

Steve




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting minutes April 24

2014-04-25 Thread Sergey Lukjanov
Thanks to everyone who joined the Sahara meeting.

Here are the logs from the meeting:

Minutes: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-04-24-18.00.html
Log: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-04-24-18.00.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] nominating Pablo Andres Fuente for the Climate core reviewers team

2014-04-25 Thread Nikolay Starodubtsev
Congrats, Pablo! I was out of the office with no internet access, so I couldn't
give you my +1 :(



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1


2014-04-24 17:43 GMT+04:00 Fuente, Pablo A <pablo.a.fue...@intel.com>:

 Thanks, it's an honor!

 On Thu, 2014-04-24 at 13:20 +, Sanchez, Cristian A wrote:
  Congratulations Pablo!
 
  From: Sylvain Bauza <sylvain.ba...@gmail.com>
  Reply-To: OpenStack Development Mailing List (not for usage questions)
 <openstack-dev@lists.openstack.org>
  Date: Thursday, 24 April 2014 10:16
  To: OpenStack Development Mailing List (not for usage questions)
 <openstack-dev@lists.openstack.org>
 
  Subject: Re: [openstack-dev] [Climate] nominating Pablo Andres Fuente
 for the Climate core reviewers team
 
  Welcome Pablo !
 
 
  2014-04-24 15:06 GMT+02:00 Dina Belova <dbel...@mirantis.com>:
  Well, as 3/4 core team members are okay with it, I'll do this)
 
 
  On Thu, Apr 24, 2014 at 2:14 PM, Sylvain Bauza <sylvain.ba...@gmail.com> wrote:
  http://russellbryant.net/openstack-stats/climate-reviewers-90.txt
 
  As per the stats, +1 to this.
 
 
  2014-04-24 12:10 GMT+02:00 Dina Belova <dbel...@mirantis.com>:
  I propose to add Pablo Andres Fuente (pafuent on IRC) to the Climate
 core team.
 
  He's a Python contributor from Intel, and he has played a great part in
 Climate development, including design suggestions and great ideas. He has
 been quite active during Icehouse, and given his skills, interest and
 background, I think it would be great to add him to the team.
 
 
  Best regards,
 
  Dina Belova
 
  Software Engineer
 
  Mirantis Inc.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
 
  Best regards,
 
  Dina Belova
 
  Software Engineer
 
  Mirantis Inc.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Todo / Wish list management

2014-04-25 Thread Kashyap Chamarthy
On Thu, Apr 24, 2014 at 03:10:01PM -0400, Sean Dague wrote:
 In the QA program we've now got the qa-specs repository to help craft
 the details of a blueprint before someone implements it. However there
 are problems we know we need to solve, which we only have a more general
 thought on, and currently have no owner.
 
 Many of these would actually be very reasonable chunks of work for new
 folks to dive in on. Many aren't all that complicated, just need a
 little time.
 
 I'm thinking a single TODO.rst at the top of qa-specs might be one way
 to keep track of these things. It has the advantage that it would go
 through review, so it would include items the core team was in agreement
 on.
 
 Other thoughts or options here?

FWIW, sounds reasonable to me. Most open source projects maintain some
sort of TODO in their canonical source repositories and update/modify it
progressively.
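
For what it's worth, a sketch of what such a file might look like, in the
reStructuredText style qa-specs already uses (the entries here are invented
placeholders, not actual agreed items):

    QA TODO
    =======

    Known problems without an owner; pick one and propose a qa-spec.

    * Improve parallel-run stability of the stress tests.
    * Document how to debug a failing gate job locally.

Since changes to it would go through Gerrit like any other qa-specs patch, the
list would only ever contain items the core team has agreed on.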

-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Floating IPs and vgw

2014-04-25 Thread Michał Dubiel
You're absolutely right. I addressed the openstack list instead of the
opencontrail list by mistake. Sorry for the unnecessary noise.

Michal.


On 25 April 2014 01:09, Edgar Magana Perdomo (eperdomo)
eperd...@cisco.comwrote:

  I don’t think this is the right mailing list for this question.
 Contrail is not supported in the Icehouse release. You may want to contact
 the Juniper people directly.

  Edgar

    From: Michał Dubiel <m...@semihalf.com>
  Reply-To: OpenStack Development Mailing List (not for usage questions)
 <openstack-dev@lists.openstack.org>
  Date: Thursday, April 24, 2014 at 2:44 PM
  To: OpenStack Development Mailing List (not for usage questions)
 <openstack-dev@lists.openstack.org>
  Subject: [openstack-dev] Floating IPs and vgw

   Hi,

   I've got problems setting up floating IPs using vgw. I did all the
 steps from:

 https://github.com/Juniper/contrail-controller/wiki/How-to-setup-floating-ip-with-Neutron
 but cannot ping or ssh into the VM.

   The floating IP shows:
 +---------------------+--------------------------------------------+
 | Field               | Value                                      |
 +---------------------+--------------------------------------------+
 | fixed_ip_address    |                                            |
 | floating_ip_address | 192.168.0.253                              |
 | floating_network_id | 2a507f13-a5eb-426a-8335-6dc6e0aca537       |
 | id                  | d37ab0f0-66a2-48d2-a3ef-0182ba6b23e9       |
 | port_id             | ba125194-b9b9-4c91-a780-989d17d4b8bc       |
 | router_id           |                                            |
 | tenant_id           | 74b2f138fec64943ab1ed06ade35d67a           |
 +---------------------+--------------------------------------------+

   The routing table on the host is:
 Destination     Gateway         Genmask          Flags  MSS Window  irtt  Iface
 default         10.100.0.254    0.0.0.0          UG     0   0       0     vhost0
 10.100.0.0      *               255.255.0.0      U      0   0       0     vhost0
 169.254.0.4     *               255.255.255.255  UH     0   0       0     vhost0
 192.168.0.0     *               255.255.255.0    U      0   0       0     vgw
 192.168.122.0   *               255.255.255.0    U      0   0       0     virbr0

   Is it OK that fixed_ip_address is null for the floating IP above?
 Do you see anything wrong here? Could you suggest what I should check
 next?

  Regards,
 Michal.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Use Case Question

2014-04-25 Thread Carlos Garza
Trevor is referring to our plans to use the SSL session ID of the 
ClientHello to provide session persistence.
See RFC 5246 section 7.4.1.2, which sends an SSL session ID in the clear 
(unencrypted), so that a load balancer without the decrypting key can use it to 
decide which back-end node to send the request to. Users' browsers will 
typically reuse the same session ID across connections for a while.

Also note this is supported in TLS 1.1 as well, in the same section of 
RFC 4346, and likewise in TLS 1.0 (RFC 2246).

So we can offer HTTP-cookie-based persistence as you described only if we have 
the key; if not, we can still offer SSL-session-ID-based persistence.
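
For readers unfamiliar with the mechanism, here is a minimal sketch
(illustrative only, not code from any existing balancer) of how the session ID
can be read from a raw ClientHello without any key material; a balancer would
then hash this value to pick a back-end, much as it would hash a cookie:

    # Layout per RFC 5246 section 7.4.1.2: 5-byte record header, 4-byte
    # handshake header, 2-byte client_version, 32-byte random, then a
    # 1-byte session_id length followed by the session ID itself.
    def extract_tls_session_id(data):
        if len(data) < 44 or ord(data[0]) != 0x16 or ord(data[5]) != 0x01:
            return None  # not a handshake record, or not a ClientHello
        sid_len = ord(data[43])
        if sid_len == 0 or len(data) < 44 + sid_len:
            return None  # empty ID: nothing to pin persistence to
        return data[44:44 + sid_len]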



On Apr 24, 2014, at 7:53 PM, Stephen Balukoff <sbaluk...@bluebox.net> wrote:

Hi Trevor,

If the use case here requires the same client (identified by session cookie) to 
go to the same back-end, the only way to do this with HTTPS is to decrypt on 
the load balancer. Re-encryption of the HTTP request may or may not happen on 
the back-end depending on the user's needs. Again, if the client can 
potentially change IP addresses, and the session still needs to go to the same 
back-end, the only way the load balancer is going to know this is by decrypting 
the HTTPS request. I know of no other way to make this work.

Stephen


On Thu, Apr 24, 2014 at 9:25 AM, Trevor Vardeman
<trevor.varde...@rackspace.com> wrote:
Hey,

I'm looking through the use-cases doc for review, and I'm confused about one of 
them.  I'm familiar with HTTP cookie based session persistence, but to satisfy 
secure-traffic for this case would there be decryption of content, injection of 
the cookie, and then re-encryption?  Is there another session persistence type 
that solves this issue already?  I'm copying the doc link and the use case 
specifically; not sure if the document order would change so I thought it would 
be easiest to include both :)

Use Cases:  
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis

Specific Use Case:  A project-user wants to make his secured web based 
application (HTTPS) highly available. He has n VMs deployed on the same private 
subnet/network. Each VM is installed with a web server (ex: apache) and 
content. The application requires that a transaction which has started on a 
specific VM will continue to run against the same VM. The application is also 
available to end-users via smart phones, a case in which the end user IP might 
change. The project-user wishes to represent them to the application users as a 
web application available via a single IP.

-Trevor Vardeman

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer][gantt] Dynamic scheduling

2014-04-25 Thread Sylvain Bauza
2014-04-25 1:38 GMT+02:00 Jay Lau jay.lau@gmail.com:

 Seems http://summit.openstack.org/cfp/details/262 can cover this? Thanks.


Well, that will depend on the number of sessions we can get. It was agreed
that this #262 proposal would be merged with #140 if there are not enough
slots.




 2014-04-25 5:09 GMT+08:00 Sylvain Bauza sylvain.ba...@gmail.com:


 On 24 Apr 2014 19:20, Henrique Truta <henriquecostatr...@gmail.com>
 wrote:

 
  Donald,
 
  By selection, I think Jenny means identifying whether and which
 active VM should be migrated, since the current Nova scheduler only deals
 with a VM at the moment of its creation or upon a specific user input.
 

 As Don said, we're beginning the process of spinning off the scheduler by
 defining a line in the sand between the scheduler and the other Nova bits.

 That's the first step before the real fork, which will lead to a separate
 project standing on its own: Gantt. That said, the scope of Gantt is
 currently still subject to discussion, which will happen during a Summit
 design session.

 Regarding the need to have a dynamic scheduler acting on notifications and
 not only on boot requests: that could be seen either as part of Gantt, or
 as a separate service which would send requests to Gantt for selecting
 destinations. IMHO, with respect to the timeline, I would like to see the
 final endpoint going to Gantt, with a temporary Stackforge project if
 necessary.

 Again, IMHO, this feature request deserves its own session proposal for
 the Summit. As the deadline for submissions has passed, we need to know
 whether it has already been proposed, so we can add it as a subject of
 interest for Gantt in an etherpad.

 Thanks,
 -Sylvain

 
  2014-04-24 12:08 GMT-03:00 Dugger, Donald D <donald.d.dug...@intel.com>:
 
  Jenny-
 
 
 
  You should look at the 'Propose Scheduler Library' blueprint:
 
 
 
  https://review.openstack.org/#/c/82133/9
 
 
 
  This BP is to create a client library for making calls to the
 scheduler.  If you base your work upon this library then you shouldn’t need
 to care about whether the Core Scheduler is the Nova integrated scheduler
 or the Gantt separated scheduler, the library will call `a` scheduler as
 appropriate.
 
 
 
  Having said that, I’m not sure I understand the distinction you are
 seeing between 'selection' and 'placement'. The current scheduler filters
 all hosts based upon filters (the selection part) and then the weighting
 function finds the best node to host the VM (the placement part). Seems to
 me the current scheduler does both of those tasks. (We can argue the
 effectiveness/efficiency of the current implementation but I think it’s
 functionally complete.)
 
 
 
  Also, have you proposed a session at the Juno summit on your proposal
 for dynamic scheduling, seems like it would be appropriate.
 
 
 
  --
 
  Don Dugger
 
  Censeo Toto nos in Kansa esse decisse. - D. Gale
 
  Ph: 303/443-3786
 
 
 
  From: Jiangying (Jenny) [mailto:jenny.jiangy...@huawei.com]
  Sent: Thursday, April 24, 2014 3:36 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [nova][ceilometer][gantt] Dynamic
 scheduling
 
 
 
  Hi,
 
  We have checked that Gantt is now just a synced-up copy of the scheduler
 code in nova.
 
  We still think dynamic scheduling will benefit the nova scheduler (or
 Gantt later). The main difference between static and dynamic scheduling is
 that static scheduling is a VM placement problem, while dynamic scheduling
 deals with both VM selection and VM placement.
 
 
 
  Our scheduling mechanism consists of three parts:
 
  1.   Controller, which triggers the scheduling;
 
  2.   Data Collector, which collects the resource usage and topology for
 scheduling;

  3.   Core Scheduler, which decides how to schedule the VMs.
 
 
 
  We prefer to reuse the nova scheduler as the Core Scheduler, in order
 to avoid possible inconsistency between static scheduling and dynamic
 scheduling. The VM selection function will be added to the nova scheduler.
 For the Data Collector, we expect to get performance data from ceilometer
 and topology from nova.
 
 
 
  One question remains: where should the controller be implemented?

  We regard implementing the controller in the nova scheduler as the first
 choice. We are also considering extending ceilometer (i.e., when ceilometer
 discovers an overloaded host, an alarm can be reported and trigger a VM
 evacuation).
 
 
 
  Do you have any comments?
 
 
 
  Jenny
 
 
 
  From: Henrique Truta [mailto:henriquecostatr...@gmail.com]
  Sent: April 12, 2014 1:00
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [nova] Dynamic scheduling
 
 
 
  Is there anyone currently working on Neat/Gantt projects? I'd like to
 contribute to them, as well.
 
 
 
  2014-04-11 11:37 GMT-03:00 Andrew Laski andrew.la...@rackspace.com:
 
  On 04/10/14 at 11:33pm, Oleg Gelbukh wrote:
 
  Andrew,
 
  Thank you for 

Re: [openstack-dev] [ceilometer] Exposing Ceilometer alarms as SNMP traps

2014-04-25 Thread Florian Haas
Hi Eric,

On Thu, Apr 24, 2014 at 7:02 PM, Eric Brown <bro...@vmware.com> wrote:
 I'm pretty familiar with SNMP, as I have worked with it for a number of
 years. I know telcos like it, but I feel it's a protocol that is near end
 of life. It hasn't kept up with security guidelines. SNMPv1 and v2c are
 totally insecure and SNMPv3 is barely usable. And even SNMPv3 still uses
 MD5 and SHA1.

I agree, but at least with my limited SNMP experience I've seen quite
a few v2c deployments out there, so forgoing that altogether doesn't
seem like a good idea to me.

 That being said, the Alarm MIB would be my choice of MIB.  A custom MIB
 would be a mess and a nightmare to maintain.

Thanks for confirming. :)

 Can pysnmp do v3 notifications?  You might want to also consider informs
 rather than traps since they are acknowledged.

Yes, pysnmp can do INFORMs:
http://pysnmp.sourceforge.net/examples/current/v3arch/oneliner/agent/ntforg/inform-v3.html
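
For reference, a minimal sketch modelled on that linked example: sending an
acknowledged INFORM with pysnmp's one-liner API. The user, keys and target are
placeholders, and a real notification would use an Alarm MIB OID rather than
coldStart (and, per Eric's point, the usr-md5-des defaults mean MD5/DES):

    from pysnmp.entity.rfc3413.oneliner import ntforg

    ntf = ntforg.NotificationOriginator()
    # 'inform' (instead of 'trap') makes the manager acknowledge delivery.
    error_indication = ntf.sendNotification(
        ntforg.UsmUserData('usr-md5-des', 'authkey1', 'privkey1'),
        ntforg.UdpTransportTarget(('nms.example.com', 162)),
        'inform',
        ntforg.MibVariable('SNMPv2-MIB', 'coldStart'),
    )
    if error_indication:
        print('notification not delivered: %s' % error_indication)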

However, speaking of acknowledgments, is the concept of an alert being
acknowledged even present in Ceilometer?

I'm afraid I've opened a can of worms here. :)

Cheers,
Florian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-25 Thread Yuriy Taraday
Hello.

On Wed, Apr 23, 2014 at 2:40 AM, James E. Blair <jebl...@openstack.org> wrote:

 * The new Workflow label will have a -1 Work In Progress value which
   will replace the Work In Progress button and review state.  Core
   reviewers and change owners will have permission to set that value
   (which will be removed when a new patchset is uploaded).


Wouldn't it be better to make this label more persistent?
As I remember, there were some ML threads about keeping the WIP mark across
patch sets. There were even talks about changing git-review to support this.
How about we make it better with the new version of Gerrit?

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [nova][ceilometer][gantt] Dynamic scheduling

2014-04-25 Thread Jiangying (Jenny)
Exactly. Thank you for your clarification.
This is part of my job, and my company would like to contribute to OpenStack.
I also hope we can have more discussion about scheduling.

From: Henrique Truta [mailto:henriquecostatr...@gmail.com]
Sent: April 25, 2014 1:16
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][ceilometer][gantt] Dynamic scheduling

Donald,
By selection, I think Jenny means identifying whether and which active VM 
should be migrated, since the current Nova scheduler only deals with a VM at 
the moment of its creation or upon a specific user input.

2014-04-24 12:08 GMT-03:00 Dugger, Donald D <donald.d.dug...@intel.com>:
Jenny-

You should look at the 'Propose Scheduler Library' blueprint:

https://review.openstack.org/#/c/82133/9

This BP is to create a client library for making calls to the scheduler.  If 
you base your work upon this library then you shouldn’t need to care about 
whether the Core Scheduler is the Nova integrated scheduler or the Gantt 
separated scheduler, the library will call `a` scheduler as appropriate.

Having said that, I’m not sure I understand the distinction you are seeing 
between 'selection' and 'placement'. The current scheduler filters all hosts 
based upon filters (the selection part) and then the weighting function finds 
the best node to host the VM (the placement part). Seems to me the current 
scheduler does both of those tasks. (We can argue the effectiveness/efficiency 
of the current implementation but I think it’s functionally complete.)

Also, have you proposed a session at the Juno summit on your proposal for 
dynamic scheduling, seems like it would be appropriate.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

From: Jiangying (Jenny) [mailto:jenny.jiangy...@huawei.com]
Sent: Thursday, April 24, 2014 3:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][ceilometer][gantt] Dynamic scheduling

Hi,
We have checked that Gantt is now just a synced-up copy of the scheduler code 
in nova.
We still think dynamic scheduling will benefit the nova scheduler (or Gantt 
later). The main difference between static and dynamic scheduling is that 
static scheduling is a VM placement problem, while dynamic scheduling deals 
with both VM selection and VM placement.

Our scheduling mechanism consists of three parts:
1.   Controller, which triggers the scheduling;
2.   Data Collector, which collects the resource usage and topology for scheduling;
3.   Core Scheduler, which decides how to schedule the VMs.

We prefer to reuse the nova scheduler as the Core Scheduler, in order to avoid 
possible inconsistency between static scheduling and dynamic scheduling. The 
VM selection function will be added to the nova scheduler. For the Data 
Collector, we expect to get performance data from ceilometer and topology from 
nova.

One question remains: where should the controller be implemented?
We regard implementing the controller in the nova scheduler as the first 
choice. We are also considering extending ceilometer (i.e., when ceilometer 
discovers an overloaded host, an alarm can be reported and trigger a VM 
evacuation; a rough sketch of that variant follows).
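
As a rough illustration only (none of this is existing Nova/Gantt code; the
alarm payload layout and the victim-selection policy are assumptions), the
ceilometer-triggered controller could be as small as:

    import json

    # 'nova' would be a python-novaclient Client constructed elsewhere.
    def handle_host_overload_alarm(request_body, nova):
        alarm = json.loads(request_body)
        host = alarm['reason_data']['host']  # assumed alarm payload field

        # Victim selection is exactly the piece this thread proposes adding
        # to the scheduler; taking the first VM is only a placeholder.
        servers = nova.servers.list(
            search_opts={'host': host, 'all_tenants': 1})
        if servers:
            # host=None lets the (Gantt) scheduler pick the destination.
            nova.servers.live_migrate(servers[0], None, False, False)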

Do you have any comments?

Jenny

From: Henrique Truta [mailto:henriquecostatr...@gmail.com]
Sent: April 12, 2014 1:00
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Dynamic scheduling
主题: Re: [openstack-dev] [nova] Dynamic scheduling

Is there anyone currently working on Neat/Gantt projects? I'd like to 
contribute to them, as well.

2014-04-11 11:37 GMT-03:00 Andrew Laski <andrew.la...@rackspace.com>:
On 04/10/14 at 11:33pm, Oleg Gelbukh wrote:
Andrew,

Thank you for clarification!


On Thu, Apr 10, 2014 at 3:47 PM, Andrew Laski
<andrew.la...@rackspace.com> wrote:


The scheduler as it currently exists is a placement engine.  There is
sufficient complexity in the scheduler with just that responsibility so I
would prefer to see anything that's making runtime decisions separated out.
 Perhaps it could just be another service within the scheduler project once
it's broken out, but I think it will be beneficial to have a clear
distinction between placement decisions and runtime monitoring.


Do you think that auto-scaling could be considered another facet of this
'runtime monitoring' functionality? Right now it is a combination of Heat and
Ceilometer. Is it worth moving it to the hypothetical runtime mobility service
as well?

Auto-scaling is certainly a facet of runtime monitoring.  But auto-scaling 
performs actions based on a set of user defined rules and is very visible while 
the enhancements proposed below are intended to benefit deployers and be very 
invisible to users.  So the set of allowable actions is very constrained 
compared to what auto-scaling can do.
In my opinion what's being 

Re: [openstack-dev] [MagnetoDB] Doubt about test data correctness

2014-04-25 Thread Aleksandr Minakov
Hello, Dmitriy!

In this particular case the scenario is “Put item to INSERT 1 attribute (an
existing table, 1 correct attribute ‘S’)”. It means that we want to put one
item with one attribute of type S into an existing table.

In terms of DynamoDB Item (
https://github.com/stackforge/magnetodb/blob/master/tempest/api/keyvalue/stable/rest/test_put_item.py#L38):


A map of attribute name/value pairs, one for each attribute. Only the
primary key attributes are required; you can optionally provide other
attribute name-value pairs for the item.

You must provide all of the attributes for the primary key.  For example,
with a hash type primary key, you only need to specify the hash attribute.
For a hash-and-range type primary key, you must specify both the hash
attribute and the range attribute.

(In our test case we have only one attribute value in the attribute definition
and key schema, and a table without an index.)

If you specify any attributes that are part of an index key, then the data
types for those attributes must match those of the schema in the table's
attribute definition.

For the test case we have created the following table:

Create table (
https://github.com/stackforge/magnetodb/blob/master/tempest/api/keyvalue/stable/rest/test_put_item.py#L34
).

We create the table with the request: POST
http://127.0.0.1:8480/v1/default_tenant/data/tables

and request body:

{
    "key_schema": [
        {
            "key_type": "HASH",
            "attribute_name": "message"
        }
    ],
    "table_name": "testtempest757038441",
    "attribute_definitions": [
        {
            "attribute_type": "S",
            "attribute_name": "message"
        }
    ]
}


Put Item (
https://github.com/stackforge/magnetodb/blob/master/tempest/api/keyvalue/stable/rest/test_put_item.py#L41
):

After successfully creating the table, we put an item with the request: POST
http://127.0.0.1:8480/v1/default_tenant/data/tables/testtempest757038441/put_item

and request body:

{
    "item": {
        "message": {
            "S": "message_text"
        }
    }
}


Expected response is:

Response Status: 200

Response Body: {}

Actual response is:

Response Status: 200

Response Body: {}

This behavior is equivalent to DynamoDB's.

What seems to be incorrect in this test case?
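
For reference, the behavior above is easy to replay outside of tempest; a
sketch with python-requests, using the endpoint and table name from this
thread and assuming no auth middleware on that devstack endpoint:

    import json
    import requests

    base = 'http://127.0.0.1:8480/v1/default_tenant/data/tables'
    body = {'item': {'message': {'S': 'message_text'}}}
    resp = requests.post(base + '/testtempest757038441/put_item',
                         data=json.dumps(body),
                         headers={'Content-Type': 'application/json'})
    # Expected: HTTP 200 with an empty JSON body, as shown above.
    assert resp.status_code == 200 and resp.json() == {}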



Best Regards,
Aleksandr Minakov
Junior Software Engineer
Mirantis, Inc
Skype: m._.a._.g
Phone: +38 095 043 06 43


On Thu, Apr 24, 2014 at 9:34 AM, Dmitriy Ukhlov <dukh...@mirantis.com> wrote:

 Hello everyone!

 I found out that some MagnetoDB tests use test data with an empty value. Is
 that correct?
 Does DynamoDB allow such behavior? Please take a look:

 https://github.com/stackforge/magnetodb/blob/master/tempest/api/keyvalue/stable/rest/test_put_item.py#L39

 --
 Best regards,
 Dmitriy Ukhlov
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-25 Thread Roman Podoliaka
Hi all,

 Wouldn't it be better to make this label more persistent?

+1. It's really annoying to press the Work in Progress button every time
you upload a new patch set.

Thanks,
Roman

On Fri, Apr 25, 2014 at 11:02 AM, Yuriy Taraday <yorik@gmail.com> wrote:
 Hello.

 On Wed, Apr 23, 2014 at 2:40 AM, James E. Blair jebl...@openstack.org
 wrote:

 * The new Workflow label will have a -1 Work In Progress value which
   will replace the Work In Progress button and review state.  Core
   reviewers and change owners will have permission to set that value
   (which will be removed when a new patchset is uploaded).


 Wouldn't it be better to make this label more persistent?
 As I remember there were some ML threads about keeping WIP mark across patch
 sets. There were even talks about changing git-review to support this.
 How about we make it better with the new version of Gerrit?

 --

 Kind regards, Yuriy.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-25 Thread Ricardo Carrillo Cruz
It would be nice to have a switch in git-review to push patch sets as WIP,
the same way you can with drafts.

Does anyone know if there is an enhancement request/bug open against Gerrit
so we could implement it?

Regards


2014-04-25 10:49 GMT+02:00 Roman Podoliaka <rpodoly...@mirantis.com>:

 Hi all,

  Wouldn't it be better to make this label more persistent?

  +1. It's really annoying to press the Work in Progress button every time
  you upload a new patch set.

 Thanks,
 Roman

 On Fri, Apr 25, 2014 at 11:02 AM, Yuriy Taraday yorik@gmail.com
 wrote:
  Hello.
 
  On Wed, Apr 23, 2014 at 2:40 AM, James E. Blair jebl...@openstack.org
  wrote:
 
  * The new Workflow label will have a -1 Work In Progress value which
will replace the Work In Progress button and review state.  Core
reviewers and change owners will have permission to set that value
(which will be removed when a new patchset is uploaded).
 
 
  Wouldn't it be better to make this label more persistent?
  As I remember there were some ML threads about keeping WIP mark across
 patch
  sets. There were even talks about changing git-review to support this.
  How about we make it better with the new version of Gerrit?
 
  --
 
  Kind regards, Yuriy.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [qa] EC2 status and call for assistance

2014-04-25 Thread Alexandre Levine

Joe,

In regard to your first question - yes, we'll be going in this direction 
very soon. It's being discussed with Randy now.
As for the second question - we'd love to participate in fixing it (in 
fact we've done it for OCS already) and probably in maintaining it, but I'm 
not sure what it takes and means to commit to this - we'll discuss that as 
well.


Best regards,
  Alex Levine

On 24.04.2014 23:33, Joe Gordon wrote:




On Thu, Apr 24, 2014 at 10:10 AM, Alexandre Levine
<alev...@cloudscaling.com> wrote:


Christopher,

FYI in regard to:

Its the sort of direction that we tried to steer the GCE
API folks in Icehouse, though I don't know what they ended up doing



We ended up perfectly OK. The project has been on Stackforge for some
time (https://github.com/stackforge/gce-api). It works.
I believe this is exactly what should be done with EC2 as well. We even
considered it and tried to estimate it once.

I can tell you even more: we have lots of AWS Tempest tests
specifically for checking various compatibility issues in OpenStack.
And we've created a number of fixes for a proprietary implementation
of a cloud based on OpenStack. Some of them are in the EC2 layer, some
are in nova core.



Any plans to contribute this to the community?


But anyways, I'm completely convinced that:

1. Any further improvements to the EC2 layer should be done after its
separation from nova.



So the fundamental problem we are having with Nova's EC2 
implementation is that no one is maintaining it upstream. If pulling 
EC2 out of nova into its own repo solves this problem, then wonderful. 
But the status quo is untenable: Nova does not want to ship code that 
we know to be broken, so we need folks interested in it to help fix it.


2. EC2 should still somehow be supported by OpenStack because as
far as I know lots of people use euca2ools to access it.


Best regards,
  Alex Levine

On 24.04.2014 19:24, Christopher Yeoh wrote:

On Thu, 24 Apr 2014 09:10:19 +1000, Michael Still <mi...@stillhq.com>
wrote:

These seem like the obvious places to talk to people about
helping us
get this code maintained before we're forced to drop it.
Unfortunately
we can't compel people to work on things, but we can make
it in their
best interests.

A followup question as well -- there's a proposal to
implement the
Nova v2 API on top of the v3 API. Is something similar
possible with
EC2? Most of the details of EC2 have fallen out of my
brain, but I'd
be very interested in if such a thing is possible.

So there's sort of a couple of ways we suggested doing a V2
API on top
of V3 long term. The current most promising proposal (and I think
Kenichi has covered this a bit in another email) is a very
thin layer
inside the Nova API code. This works well because the V2 and
V3 APIs in
many areas are very closely related anyway - so emulation is
straightforward.

However there is another alternative (which I don't think is
necessary for V2), and that is to have a more fully fledged proxy
where translation is done between receiving V2 requests and
translating them into native V3 API requests. Responses are
similarly translated, but in reverse. It's the sort of direction
that we tried to steer the GCE API folks in during Icehouse, though
I don't know what they ended up doing - IIRC I think they said it
would be possible.

Longer term, I suspect it's something we should consider if we
could do something like that for the EC2 API, and then be able to
rip out the EC2-API-specific code from the nova API part of the
tree. The messiness of any UUID or state-map translation could
then perhaps be handled in a manner very isolated from the core
Nova code (though I won't pretend to understand the specifics of
what is required here). I guess the critical question will be
whether the emulation of the EC2 API is good enough, but as Sean
points out, there are lots of existing issues already, so it may
end up not perfect, but still much better than what we have now.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [SWIFT] swift and authorization policy

2014-04-25 Thread Chmouel Boudjnah
I haven't done a full review, but I like what you did, and this should be the
proper way to handle ACLs for keystoneauth.

I am not sure, though, that forking oslo.common.policy is any better than
copy/pasting it with its dependencies.

I would suggest we move `swift-keystoneauth` to its own project as part of the
swift umbrella projects to handle such things and be able to include those
policy modules directly.

I think we have enough swift cores now who use keystone every day to
sponsor the core reviews for that project.

That's something I'd like to talk about with the folks at Atlanta, even
though there is no session for it.

Chmouel



On Fri, Apr 25, 2014 at 11:25 AM, Nassim Babaci
<nassim.bab...@cloudwatt.com> wrote:

 Hi everyone

 I would like to point out the bp
 https://blueprints.launchpad.net/swift/+spec/authorization-policy to maybe
 get some early feedback from the community.
 I have submitted a first patch which I hope will serve as an example/base
 for discussion.

 Specifically, I was wondering what would be the best way to integrate the
 policy engine (or not) within swift.
 For now I have roughly adapted the policy engine found here (
 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/policy.py)
 and removed all the unnecessary dependencies on modules like oslo.config,
 log, etc., keeping only the parts that deal with parsing and rule
 checking, but any advice on this (or more globally) would be highly
 appreciated.
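
 A rough sketch of the kind of call site this enables (assuming the trimmed
 engine keeps the incubator's Enforcer/enforce interface; the module path,
 rule name and creds layout below are illustrative, not the actual patch):

     from openstack.common import policy  # the adapted engine module

     enforcer = policy.Enforcer(policy_file='/etc/swift/policy.json')

     def container_put_allowed(tenant_id, roles, target_account):
         # creds come from the keystoneauth middleware; target describes
         # the object the rule is being evaluated against.
         creds = {'tenant_id': tenant_id, 'roles': roles}
         target = {'account': target_account}
         return enforcer.enforce('swift:put_container', target, creds)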



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate] FW: Open Source @ OpenStack Submission

2014-04-25 Thread Joe Mcbride
Good news, team.  I’m waiting to hear back on how much time we will be given.

On 4/24/14, 8:58 PM, Chris Hoge <chris.h...@puppetlabs.com> wrote:

Dear Graham and Joe,

Thank you for your submission of Designate to the new Open Source @ OpenStack 
program. We're happy to inform you that your developer session was accepted and 
is scheduled for the afternoon of Wednesday, May 14 in room B308. If this time 
works for you, please respond to this e-mail as soon as possible so that we can 
add your session to the official schedule. Once you're confirmed, we will 
follow up with additional resource and session information.

Thanks for your contributions to OpenStack and open source software in general, 
and we look forward to seeing you in Atlanta!

-Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [PTLs] No release meeting next week

2014-04-25 Thread Thierry Carrez
Hi,

We'll skip the project/release meeting next week (during the recommended
off week).

The next project/release meeting will be held on May 6th, and we'll
(ab)use it to finalize the Design Summit agenda and collectively resolve
the last scheduling conflicts.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Neutron Routers and LLAs

2014-04-25 Thread Xuhan Peng
Sean and Robert,

Sorry for replying this late, but after giving this a second thought, I
think it makes sense not to allow a subnet with an LLA gateway IP address to
be attached to a neutron router, for the following reasons:

1. A subnet with an LLA gateway address specified is only used to receive RAs
from the provider router. I cannot think of any other use case in which a
user would want to specify an LLA gateway address for a Neutron subnet.

2. Attaching a subnet with an LLA gateway (or indeed any address outside the
subnet's CIDR) will leave the subnet gateway port (qr-) with two LLAs (or an
extra address outside the subnet's CIDR). This will confuse dnsmasq about
which address to bind to.

3. To allow RAs to be sent from dnsmasq on the gateway port, we can use the
ip command to get the LLA. Currently I use a calculation method to get the
source address, but I will improve it to use the ip command to make sure the
source IP is right. (A sketch of the calculation method is below.)
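
For context, a minimal sketch of such a calculation (not the actual Neutron
patch): deriving the EUI-64 link-local address from the port's MAC with
netaddr, which Neutron already depends on. The MAC here is just an example:

    import netaddr

    def lla_from_mac(mac):
        # Flip the universal/local bit and insert ff:fe, per RFC 4291.
        return netaddr.EUI(mac).ipv6_link_local()

    print(lla_from_mac('fa:16:3e:aa:bb:cc'))  # fe80::f816:3eff:feaa:bbcc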

Thoughts? If we all agree, I will open a bug to disallow attaching a subnet
whose gateway is outside its CIDR to a router.

Xuhan


On Wed, Mar 26, 2014 at 9:52 PM, Robert Li (baoli) <ba...@cisco.com> wrote:

 Hi Sean,

 Unless I have missed something, this is my thinking:
   -- I understand that the goal is to allow RAs from designated sources
 only.
   -- initially, xuhanp posted a diff for
 https://review.openstack.org/#/c/72252. And my comment was that subnet
 that was created with gateway ip not on the same subnet can't be added
 into the neutron router.
   -- as a result, https://review.openstack.org/#/c/76125/ was posted to
 address that issue. With that diff, LLA would be allowed. But a
 consequence of that is a gateway port would end up having two LLAs: one
 that is automatically generated, the other from the subnet gateway IP.
   -- with xuhanp's new diff for https://review.openstack.org/#/c/72252, if
 openstack native RA is enabled, then the automatically generated LLA will
 be used; and if it's not enabled, it will use the external gateway's LLA.
 And the diff seems to indicate this LLA comes from the subnet's gateway
 IP.
   -- Therefore, the change of https://review.openstack.org/#/c/76125/
 seems to be able to add the gateway IP as an external gateway.
   -- Thus, my question is: should such a subnet be allowed to add in a
 router? And if it should, what would the semantics be? If not, proper
 error should be provided to the user. I'm also trying to figure out the
 reason that such a subnet needs to be created in neutron (other than
 creating L2 ports for VMs).

 -- Another thought is that if the RA is coming from the provider net, then
 the provider net should have installed mechanisms to prevent rogue RAs
 from entering the network. There are a few RFCs that address the rogue RA
 issue.

 see inline as well.

 I hope that I didn't confuse you guys.

 Thanks,
 Robert


 On 3/25/14 2:18 PM, Collins, Sean sean_colli...@cable.comcast.com
 wrote:

During the review[0] of the patch that only allows RAs from known
addresses, Robert Li brought up a bug in Neutron where an IPv6 subnet
could be created with a link-local address for the gateway, and creating
the Neutron router would then fail, because the IP address assigned to
the router's port was a link-local address that was not on the subnet.

This may or may not have been run before the force_gateway_on_subnet flag
was introduced. Robert - if you can give us the version of Neutron you were
running, that would be helpful.

 [Robert] I'm using the latest


 
 Here's the full text of what Robert posted in the review, which shows
 the bug, which was later filed[1].
 
 This is what I've tried, creating a subnet with an LLA gateway address:

 neutron subnet-create --ip-version 6 --name myipv6sub --gateway
 fe80::2001:1 mynet :::/64

 Created a new subnet:

 +------------------+------------------------------------------+
 | Field            | Value                                    |
 +------------------+------------------------------------------+
 | allocation_pools | {"start": ":::1", "end": "::::::fffe"}   |
 | cidr             | :::/64                                   |
 | dns_nameservers  |                                          |
 | enable_dhcp      | True                                     |
 | gateway_ip       | fe80::2001:1                             |
 | host_routes      |                                          |
 | id               | a1513aa7-fb19-4b87-9ce6-25fd238ce2fb     |
 | ip_version       | 6                                        |
 | name             | myipv6sub                                |
 | network_id       | 9c25c905-da45-4f97-b394-7299ec586cff     |
 | tenant_id        | fa96d90f267b4a93a5198c46fc13abd9         |
 +------------------+------------------------------------------+

 openstack@devstack-16:~/devstack$ neutron router-list

 +--------------------------------------+---------+------------------------------------------------------------------------------+
 | id                                   | name    | external_gateway_info                                                        |
 +--------------------------------------+---------+------------------------------------------------------------------------------+
 | 7cf084b4-fafd-4da2-9b15-0d25a3e27e67 | router1 | {"network_id": "02673c3c-35c3-40a9-a5c2-9e5c093aca48", "enable_snat": true}  |
 +--------------------------------------+---------+------------------------------------------------------------------------------+
 

Re: [openstack-dev] [Solum] New Solum Team Meeting Schedule

2014-04-25 Thread Angus Salkeld
Nice! Now I can come along.

-Angus

On 4/25/14, 2:38 AM, Adrian Otto <adrian.o...@rackspace.com> wrote:

Team,

We have a new alternating meeting schedule to accommodate participation
from our globally distributed team. The new meeting schedule has been
posted to the OpenStack team meeting schedule [1] and links are provided
to convert these times to your local timezone at the Meetings/Solum wiki
page [2]. Our next meeting is 2014-04-29 at 2200 UTC.

Regards,

Thanks,

Adrian

[1] https://wiki.openstack.org/wiki/Meetings
[2] https://wiki.openstack.org/wiki/Meetings/Solum
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [SWIFT] swift and authorization policy

2014-04-25 Thread Julien Danjou
On Fri, Apr 25 2014, Chmouel Boudjnah wrote:

 I am not sure tho that forking oslo.common.policy is any better than
 copy/pasting it with its dependences.

+1, forking is not an option.

 I would suggest we move `swift-keystoneauth` to its own project part of the
 swift umbrella projects to handle such things and be able to include those
 policy module directly.

 I think we have enough swift core now that uses keystone everyday to
 sponsors the core review for that project.

I like that plan. :)

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal

2014-04-25 Thread Eugene Nikanorov
Hi Stephen,

Thanks for the great document. As I promised, I'll try to make a few action
items out of it.
First of all, I'd like to say that the API you have proposed is very close
to what is proposed in the blueprint
https://review.openstack.org/#/c/89903/ with several differences I'd
like to address here and turn into action items.

So, first of all, I think the API described in the doc seems to account
for all the cases we had in mind. I didn't check on a case-by-case basis;
it's just a first-glance impression from looking at the set of REST
resources and their attributes.

The general idea of the whole API/object-model improvement is to create a
baseline for all the advanced features/use cases we have in the roadmap,
which means those features can then be added incrementally. Incrementally
means that resources or attributes may be added, but the workflow remains
and backward compatibility is preserved. That was not the case with
multiple listeners and L7.

So, a couple on general comments:

1. The whole discussion about the API/object-model improvement had the goal
of allowing multiple pools and multiple listeners.
For that purpose the loadbalancer instance might be an extra. The good thing
about 'loadbalancer' is that it can be introduced into the API in an
incremental way.
So, VIP+listeners is in itself already a quite flexible construct (where the
VIP is a root object playing the loadbalancer role) that addresses our
immediate needs.
So I'd like to extract the loadbalancer API and the corresponding use cases
into another blueprint.
You know that the loadbalancer concept has raised very heated discussions,
so it makes sense to continue discussing it separately, keeping in mind that
introducing a loadbalancer is not very complex and may be done on top of
the VIPs/listeners API.

2. SSL-related objects. SSL is a rather big deal, from both the API and the
object-model perspective; it was a separate blueprint in Icehouse and I
think it makes sense to work on it separately.
What I mean is that SSL doesn't affect the core API (VIPs/listeners/pools)
other than adding some attributes to listeners.

3. L7 is also separate work; it will not be covered by the 'API
improvement' blueprint. You can sync with Samuel on this, as we already
have pretty detailed blueprints for it.

4. Attribute differences in REST resources.
This falls into two categories:
- existing attributes that should belong to one resource or another;
- attributes that should be added (e.g. they don't exist in the current API).
The first class is better addressed in the blueprint review. The
second class could become small action items/blueprints or even bugs.
Examples:
1) custom_503 - that attribute deserves its own mini-blueprint; I'd
keep it out of scope of the 'API improvement' work.
2) ipv6_subnet_id/addressv6 - that IMO also deserves its own mini-blueprint
(the whole question of IPv6 support).

So, I'd like to make the following action items out of the document:

1. Extract the 'core API' - VIPs/Listeners/Pools/Members/Healthmonitors.
This action item is actually the blueprint that I've filed, and it's what
I'm going to implement.

2. Work on defining a single-call API that goes along with the single-object
core API (above).
Your document already does a very good job on this front.

3. Extract the 'Loadbalancer' portion of the API into an additional
doc/blueprint. It deserves its own discussion and use cases.
I think separating it will also help to reduce discussion contention.

4. Work with https://review.openstack.org/#/c/89903/ to define proper
attribute placement of existing attributes

5. Define the set of attributes that are missing from the proposed API and
make a list of work items based on that.
(I assume there could also be some that would make sense to include in the
proposal.)

I think following this list will actually help us to make iterative
progress and also to work on items in parallel.

Thanks again for the great document!

Eugene.



On Thu, Apr 24, 2014 at 4:07 AM, Stephen Balukoff <sbaluk...@bluebox.net> wrote:

 Hi Brandon!

 Thanks for the questions. Responses inline:


  On Wed, Apr 23, 2014 at 2:51 PM, Brandon Logan
  <brandon.lo...@rackspace.com> wrote:

  Hey Stephen!
 Thanks for the proposal and spending time on it (I know it is a large
 time investment).  This is actually very similar in structure to something
 I had started on except a load balancer object was the root and it had a
 one-to-many relationship to VIPs and each VIP had a one-to-many
 relationship to listeners.  We decided to scrap that because it became a
 bit complicated and the concept of sharing VIPs across load balancers
 (single port and protocol this time), accomplished the same thing but with
 a more streamlined API.  The multiple VIPs having multiple listeners was
 the main complexity and your proposal does not have that either.

 Anyway, some comments and questions on your proposal are listed below.
 Most are minor quibbles, questions and suggestions that can probably be
 fleshed out later when we decide on one proposal and I am going to use your
 object names as 

Re: [openstack-dev] [Heat] [Glance] How about managing heat template like flavors in nova?

2014-04-25 Thread Alexander Tivelkov
Hi Randall,

The current design document on artifacts in glance is available here [1].
It was just published; we are currently gathering feedback on it.
Please feel free to add comments to the document or write any
suggestions or questions to the ML.
There was a little discussion during yesterday's IRC meeting ([2]); I've
answered some questions there.
I will be happy to answer any questions directly (I am ativelkov on
IRC), or we may discuss the topic in more detail at the next Glance
meeting next Thursday.

Looking forward to collaborating with the Heat team on this topic.

[1] 
https://docs.google.com/document/d/1tOTsIytVWtXGUaT2Ia4V5PWq4CiTfZPDn6rpRm5In7U/edit?usp=sharing
[2] 
http://eavesdrop.openstack.org/meetings/glance/2014/glance.2014-04-24-14.01.log.html
--
Regards,
Alexander Tivelkov


On Mon, Apr 21, 2014 at 4:29 PM, Randall Burt
randall.b...@rackspace.com wrote:
 We discussed this with the Glance community back in January and it was
 agreed that we should extend Glance's scope to include Heat templates as
 well as other artifacts. I'm planning on submitting some patches around this
 during Juno.

 Adding the Glance tag as this is relevant to them as well.


  Original message 
 From: Mike Spreitzer
 Date:04/19/2014 9:43 PM (GMT-06:00)
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Heat] How about managing heat template like
 flavors in nova?

Gouzongmei <gouzong...@huawei.com> wrote on 04/19/2014 10:37:02 PM:

 We can supply APIs for getting, putting, adding and deleting current
 templates in the system, then when creating heat stacks, we just
 need to specify the name of the template.

 Look for past discussion of Heat Template Repository (Heater).  Here is part
 of it: https://wiki.openstack.org/wiki/Heat/htr

 Regards,
 Mike

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] cancel next team meeting May 1

2014-04-25 Thread Sergey Lukjanov
Hey folks,

May 1 is a non-working day in Russia and I start traveling the next
day, so I won't be able to chair the meeting.

So, I'm proposing to cancel this meeting.

Any thoughts/objections?

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] cinder not support quota for user

2014-04-25 Thread Harshada Kakad
Hi Everyone,

I see that most projects allow quotas per tenant as well as per user.
Here is an example of nova supporting quotas for both tenant and user.

ubuntu@harshada-dev:~/harshada/temp/cinder$ nova help quota-show
usage: nova quota-show [--tenant tenant-id] [--user user-id]

List the quotas for a tenant/user.

Optional arguments:
  --tenant tenant-id  ID of tenant to list the quotas for.
  --user user-id  ID of user to list the quotas for.


But cinder allows quotas per tenant only. I am not aware of why it was
designed this way; ideally it should also allow quotas per user. Could
anyone let me know why it is designed only for tenants and not for
users?

ubuntu@harshada-dev:~/harshada/temp/cinder$ cinder help quota-show
usage: cinder quota-show tenant_id

List the quotas for a tenant.

Positional arguments:
  tenant_id  UUID of tenant to list the quotas for.


-- 
Regards,
Harshada Kakad
Sr. Software Engineer
C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune - 411013, India
Mobile: 9689187388
Email: harshada.ka...@izeltech.com
Website: www.izeltech.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][keystone] Best practices for admin roles and policies

2014-04-25 Thread Frittoli, Andrea (HP Cloud)
In the QA meeting yesterday, we discussed accounts, admin roles and 
policies, and how we use them in tempest and in our test environments (e.g. 
devstack and toci).

The conversation was triggered by a discussion about running tempest without 
admin credentials – see [0], [1].

 

So we are looking for recommendations and best practices on how to set up 
roles and policies in keystone when running keystone v3.

 

A little more background on this.

 

Tempest uses admin accounts for two purposes: 

- setting up test accounts on the fly, which is useful for running tests in 
parallel in isolation; 

- running tests which require service-specific admin privileges, such as 
listing all VMs in a cloud for nova, or managing public routers and networks 
in neutron.

 

When running tests with keystone v2, we use an identity admin account, which 
also acts as admin for all other services – the only exception being swift, for 
which a dedicated admin role is defined.

We need to define what we want the accounts and roles setup to look like with 
keystone v3. 

 

Keystone V3 provides (at least) two levels of admin role: overall identity admin 
and domain admin. 

 

Domain admin is sufficient to create users and tenants within a certain domain, 
which is nice as it could allow tempest to run with a domain admin account only 
and still use tenant isolation.
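
For illustration, keystone's v3 sample policy expresses this kind of split with 
rules along the following lines (a rough sketch only; the rule names and match 
attributes here are illustrative, not a drop-in policy.json):

{
    "cloud_admin": "role:admin and domain_id:admin_domain_id",
    "domain_admin": "role:admin and domain_id:%(domain_id)s",
    "identity:create_user": "rule:cloud_admin or rule:domain_admin",
    "identity:create_project": "rule:cloud_admin or rule:domain_admin"
}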

But how does that map to the service admin roles, given the fact that services 
are not domain aware?

 

For instance, a nova list --all-tenants will show all VMs in the cloud, and 
there is no option to see only the VMs owned by users in a certain domain.

Thus the administrator of a single domain should not be able to assign a 
system-wide role (such as nova admin) to one user in his/her domain, as it 
would give such a user visibility into all the VMs in all domains.

 

[0] https://blueprints.launchpad.net/tempest/+spec/run-without-admin

[1] https://review.openstack.org/#/c/86967/

 

-- 

Andrea Frittoli

QA Tech Lead

HP Cloud ☁



smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove][Heat] Use cases required for Trove

2014-04-25 Thread Denis Makogon
 Hello Trove/Heat community.

I'd like to start a thread related to required use cases (from the Trove
perspective). To support Heat completely in Trove, it needs to support
required operations like:

   1. Resize operations:
      - resize a volume (to a bigger size);
      - consistent resize of instance flavor.

   2. Migration operation (from host to host).

   3. Security group/rules operations:
      - add new rules to an already existing group;
      - swap existing rules for new ones.

   4. Designate (DNSaaS) resource provisioning.


The current Trove master branch code is not ready to fully support all of the
proposed cases. The implementation can be found at:

https://github.com/denismakogon/trove/tree/bp/resource-management-driver


I'd like to involve the Heat community in helping the Trove community migrate
from using native clients (novaclient, cinderclient, neutronclient,
designateclient) to heatclient as a single point of resource management.
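
As a rough sketch of what that single point of management could look like from
Trove's side (the names and values below are illustrative placeholders, not a
proposed driver API):

    from heatclient.client import Client

    # placeholders; in Trove these would come from the request context
    heat = Client('1', endpoint=HEAT_ENDPOINT, token=AUTH_TOKEN)

    # e.g. resize a volume by updating a template parameter, instead of
    # calling cinderclient directly
    heat.stacks.update(stack_id,
                       template=template_body,
                       parameters={'volume_size': 20})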

Some of the blueprints are already filed, and some of them are already
discussed with the community, and some of them are not (yet).

So, I'd like to clarify which use cases are already covered and which are
not. I'd appreciate any response.

Also, I'd like to propose a cross-project discussion about this topic outside
of the ATL Summit schedule.

Here some useful links:

* Trove blueprints*

https://blueprints.launchpad.net/trove/+spec/resource-management-driver

* Heat related blueprints*

https://blueprints.launchpad.net/heat/+spec/handle-update-for-security-groups

https://blueprints.launchpad.net/heat/+spec/update-cinder-volume

Best regards,

Denis Makogon

dmako...@mirantis.com

www.mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-04-25 Thread Matt Riedemann



On 4/18/2014 1:18 PM, Doug Hellmann wrote:

Nice work, Victor!

I left a few comments on the commits that were made after the original
history was exported from the incubator. There were a couple of small
things to address before importing the library, and a couple that can
wait until we have the normal code review system. I'd say just add new
commits to fix the issues, rather than trying to amend the existing
commits.

We haven't really discussed how to communicate when we agree the new
repository is ready to be imported, but it seems reasonable to use the
patch in openstack-infra/config that will be used to do the import:
https://review.openstack.org/#/c/78955/

Doug

On Fri, Apr 18, 2014 at 10:28 AM, Victor Sergeyev
vserge...@mirantis.com wrote:

Hello all,

During Icehouse release cycle our team has been working on splitting of
openstack common db code into a separate library blueprint [1]. At the
moment the issues, mentioned in this bp and [2] are solved and we are moving
forward to graduation of oslo.db. You can find the new oslo.db code at [3]

So, before moving forward, I want to ask Oslo team to review oslo.db
repository [3] and especially the commit, that allows the unit tests to pass
[4].

Thanks,
Victor

[1] https://blueprints.launchpad.net/oslo/+spec/oslo-db-lib
[2] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
[3] https://github.com/malor/oslo.db
[4]
https://github.com/malor/oslo.db/commit/276f7570d7af4a7a62d0e1ffb4edf904cfbf0600

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I'm probably just late to the party, but simple question: why is it in 
the malor group in github rather than the openstack group, like 
oslo.messaging and oslo.rootwrap?  Is that temporary or will it be moved 
at some point?


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-04-25 Thread Julien Danjou
On Fri, Apr 25 2014, Matt Riedemann wrote:

 I'm probably just late to the party, but simple question: why is it in the
 malor group in github rather than the openstack group, like oslo.messaging
 and oslo.rootwrap?  Is that temporary or will it be moved at some point?

That's just for creating and validating it; it will then be moved to
git.openstack.org, as usual.

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] cinder not support quota for user

2014-04-25 Thread Cazzolato, Sergio J
There is a blueprint to add this to cinder 
https://blueprints.launchpad.net/cinder/+spec/per-project-user-quotas-support

It was implemented and then rejected because there are some issues around 
the current quota implementation that need to be fixed before introducing this 
new feature.



From: Harshada Kakad [mailto:harshada.ka...@izeltech.com] 
Sent: Friday, April 25, 2014 8:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [cinder] cinder not support quota for user



Hi Everyone,

I see most of the project allows quota for tenant as well as user.
Here is example of nova supporting quota for both tenant and user.

ubuntu@harshada-dev:~/harshada/temp/cinder$ nova help quota-show
usage: nova quota-show [--tenant tenant-id] [--user user-id]

List the quotas for a tenant/user.

Optional arguments:
  --tenant tenant-id  ID of tenant to list the quotas for.
  --user user-id      ID of user to list the quotas for.


But cinder allows quota for just tenant only. Why is it so designed, I am 
actually not aware about it.
Ideally it show also allow quota for user. Could any one let me know why is it 
so designed only for tenant and not for user?

ubuntu@harshada-dev:~/harshada/temp/cinder$ cinder help quota-show
usage: cinder quota-show tenant_id

List the quotas for a tenant.

Positional arguments:
  tenant_id  UUID of tenant to list the quotas for.


-- 
Regards,
Harshada Kakad

Sr. Software Engineer
C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune - 411013, India
Mobile-9689187388
Email-Id : harshada.ka...@izeltech.com
website : www.izeltech.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-04-25 Thread Doug Hellmann
On Fri, Apr 25, 2014 at 8:33 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:


 On 4/18/2014 1:18 PM, Doug Hellmann wrote:

 Nice work, Victor!

 I left a few comments on the commits that were made after the original
 history was exported from the incubator. There were a couple of small
 things to address before importing the library, and a couple that can
 wait until we have the normal code review system. I'd say just add new
 commits to fix the issues, rather than trying to amend the existing
 commits.

 We haven't really discussed how to communicate when we agree the new
 repository is ready to be imported, but it seems reasonable to use the
 patch in openstack-infra/config that will be used to do the import:
 https://review.openstack.org/#/c/78955/

 Doug

 On Fri, Apr 18, 2014 at 10:28 AM, Victor Sergeyev
 vserge...@mirantis.com wrote:

 Hello all,

 During Icehouse release cycle our team has been working on splitting of
 openstack common db code into a separate library blueprint [1]. At the
 moment the issues, mentioned in this bp and [2] are solved and we are
 moving
 forward to graduation of oslo.db. You can find the new oslo.db code at
 [3]

 So, before moving forward, I want to ask Oslo team to review oslo.db
 repository [3] and especially the commit, that allows the unit tests to
 pass
 [4].

 Thanks,
 Victor

 [1] https://blueprints.launchpad.net/oslo/+spec/oslo-db-lib
 [2] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
 [3] https://github.com/malor/oslo.db
 [4]

 https://github.com/malor/oslo.db/commit/276f7570d7af4a7a62d0e1ffb4edf904cfbf0600

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 I'm probably just late to the party, but simple question: why is it in the
 malor group in github rather than the openstack group, like oslo.messaging
 and oslo.rootwrap?  Is that temporary or will it be moved at some point?

This is the copy of the code being prepared to import into a new
oslo.db repository. It's easier to set up that temporary hosting on
github. The repo has been approved to be imported, and after that
happens it will be hosted on our git server like all of the other oslo
libraries.

Doug


 --

 Thanks,

 Matt Riedemann



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] cancel next team meeting May 1

2014-04-25 Thread Matthew Farrellee

On 04/25/2014 07:23 AM, Sergey Lukjanov wrote:

Hey folks,

May 1 is a non-working day in Russia and I'm starting traveling next
day, so, I'll not be able to chair it.

So, I'm proposing to cancel this meeting.

Any thoughts/objections?


if folks have topics they'd like to cover, use the mailing list

see you all at summit!

best,


matt


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] cancel next team meeting May 1

2014-04-25 Thread Sergey Lukjanov
We'll have one more meeting before the summit - May 8 :)

On Fri, Apr 25, 2014 at 5:09 PM, Matthew Farrellee m...@redhat.com wrote:
 On 04/25/2014 07:23 AM, Sergey Lukjanov wrote:

 Hey folks,

 May 1 is a non-working day in Russia and I'm starting traveling next
 day, so, I'll not be able to chair it.

 So, I'm proposing to cancel this meeting.

 Any thoughts/objections?


 if folks have topics they'd like to cover, use the mailing list

 see you all at summit!

 best,


 matt


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Design Summit Sessions

2014-04-25 Thread Kyle Mestery
Hi everyone:

I've pushed out the Neutron Design Summit Schedule to sched.org [1].
Like the other projects, it was tough to fit everything in. If your
proposal didn't make it, there will still be opportunities to talk
about it at the Summit in the project Pod area. Also, I encourage
you to still file a BP using the new Neutron BP process [2].

I expect some slight juggling of the schedule may occur as the entire
Summit schedule is set, but this should be approximately where things
land.

Thanks!
Kyle

[1] http://junodesignsummit.sched.org/overview/type/neutron
[2] https://wiki.openstack.org/wiki/Blueprints#Neutron

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Results of the TC Election

2014-04-25 Thread Anita Kuno
Please join me in congratulating the 7 newly elected members of the TC.

* Thierry Carrez
* Jay Pipes
* Vishvananda Ishaya
* Michael Still
* Jim Blair
* Mark McClain
* Devananda van der Veen

Full results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_d34934c9fd1f6282

Thank you to all candidates who stood for election; having a good group
of candidates helps engage the community in our democratic process.

Thank you to all who voted and who encouraged others to vote. We need to
ensure your voice is heard.

Thanks to my fellow election official, Tristan Cacqueray, I appreciate
your help and perspective.

Thank you for another great round.

Here's to Juno,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

2014-04-25 Thread Jan Provazník

Hello,
one of the missing bits for running multiple control nodes in Overcloud is 
setting up endpoints in Keystone to point to HAProxy, which will listen 
on a virtual IP and non-standard ports.


HAProxy ports are defined in heat template, e.g.:

haproxy:
  nodes:
  - name: control1
ip: 192.0.2.5
  - name: control2
ip: 192.0.2.6
  services:
  - name: glance_api_cluster
proxy_ip: 192.0.2.254 (=virtual ip)
proxy_port: 9293
port: 9292


means that Glance's Keystone endpoint should be set to:
http://192.0.2.254:9293/
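
For example, with the template values above, registering the Glance endpoint 
would look something like this (the region and service id are placeholders):

  keystone endpoint-create --region RegionOne \
    --service-id <glance-service-id> \
    --publicurl http://192.0.2.254:9293 \
    --internalurl http://192.0.2.254:9293 \
    --adminurl http://192.0.2.254:9293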

ATM Keystone setup is done from devtest_overcloud.sh when Overcloud 
stack creation successfully completes. I wonder which of the following 
options for setting up endpoints in HA mode is preferred by the community:
1) leave it in the post-stack-create phase and extend the init-keystone script. 
But then how to deal with the list of non-standard ports (proxy_port in the 
example above)?
  1a) consider these non-standard ports as static and just hardcode 
them (similar to what we do with SSL ports already). But ports would be 
hardcoded in 2 places (the heat template and this script). If a user changes 
them in the heat template, he has to change them in the init-keystone script too.
  1b) the init-keystone script would fetch the list of ports from the heat stack 
description (if that's possible?)


The long-term plan seems to be to rewrite the Keystone setup in os-cloud-config:
https://blueprints.launchpad.net/tripleo/+spec/tripleo-keystone-cloud-config
So an alternative to extending the init-keystone script would be to implement it 
as part of that blueprint; either way, the concept of keeping the Keystone setup 
in the post-stack-create phase remains.



2) do Keystone setup from inside the Overcloud:
Extend the keystone element; the steps done in the init-keystone script would 
be done in keystone's os-refresh-config script. This script would have to 
be called on only one of the nodes in the cluster and only once (though we 
already do a similar check for other services - mysql/rabbitmq - so I don't 
think this is a problem). Then this script can easily get the list of 
haproxy ports from heat metadata. This looks like the more attractive option 
to me - it eliminates an extra post-create config step.


Related to Keystone setup is also the plan around keys/cert setup 
described here:

http://lists.openstack.org/pipermail/openstack-dev/2014-March/031045.html
But I think this plan would remain same no matter which of the options 
above would be used.



What do you think?

Jan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] status of VPNaaS and FWaaS APIs in Icehouse

2014-04-25 Thread Akihiro Motoki
We need to correct the previous reply.

Both should still be considered experimental, because the
multivendor work was NOT completed in Icehouse.

We can use only one service backend for each service, and
there is no way to choose a backend when creating a service instance.

In addition, the FWaaS API does not provide a way to specify
the router which a firewall instance is applied to.
It will be addressed in the service insertion blueprint.

Akihiro

(2014/04/25 6:55), McCann, Jack wrote:
 Thanks Mark.

 What steps are necessary to promote these APIs beyond experimental?

 - Jack

 -Original Message-
 From: Mark McClain [mailto:mmccl...@yahoo-inc.com]
 Sent: Thursday, April 24, 2014 11:07 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron] status of VPNaaS and FWaaS APIs in
 Icehouse


 On Apr 23, 2014, at 6:20 PM, McCann, Jack jack.mcc...@hp.com wrote:

 Are VPNaaS and FWaaS APIs still considered experimental in Icehouse?

 For VPNaaS, [1] says This extension is experimental for the Havana 
 release.
 For FWaaS, [2] says The Firewall-as-a-Service (FWaaS) API is an 
 experimental
 API...


 Thanks for asking.  Both should still be considered experimental because of 
 the
 multivendor work was completed in Icehouse.


 [1] 
 http://docs.openstack.org/api/openstack-network/2.0/content/vpnaas_ext.html
 [2] http://docs.openstack.org/admin-guide-cloud/content/fwaas.html



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor(?) Framework

2014-04-25 Thread Akihiro Motoki
Hi,

I have the same question as Mark. Why is flavor not desired?
My first vote is for flavor, and then type.

There are similar cases in other OpenStack projects.
Nova uses flavor and Cinder uses (volume) type for similar cases.
Both cases are similar to our use cases, and I think it is better to use
either of them to avoid more naming confusion for users and operators.

Cinder volume_type details are available at [1]. In Cinder,
we can define multiple volume_types for one driver
(more precisely, a volume_type is associated with one backend definition,
and we can define multiple backend definitions for one backend driver).

In addition, I prefer to managing flavor/type through API and decoupling
flavor/type definition from provider definitions in configuration files
as Cinder and Nova do.

[1] http://docs.openstack.org/admin-guide-cloud/content/multi_backend.html

Thanks,
Akihiro

(2014/04/24 0:05), Eugene Nikanorov wrote:
Hi neutrons,

A quick question of the ^^^
I heard from many of you that a term 'flavor' is undesirable, but so far there 
were no suggestions for the notion that we are going to introduce.
So please, suggest you name for the resource.
Names that I've been thinking of:
 - Capability group
 - Service Offering

Thoughts?

Thanks,
Eugene.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor(?) Framework

2014-04-25 Thread Eugene Nikanorov
 In addition, I prefer to managing flavor/type through API and decoupling
 flavor/type definition from provider definitions in configuration files
 as Cinder and Nova do.

Yes, that's the current proposal.

Thanks,
Eugene.


On Fri, Apr 25, 2014 at 5:41 PM, Akihiro Motoki mot...@da.jp.nec.comwrote:

  Hi,

 I have a same question from Mark. Why is flavor not desired?
 My first vote is flavor first, and then type.

 There is similar cases in other OpenStack projects.
 Nova uses flavor and Cinder uses (volume) type for similar cases.
 Both cases are similar to our use cases and I think it is better to use
 either of them to avoid more confusion from naming for usesr and operators.

 Cinder volume_type detail is available at [1]. In Cinder volume_type,
 we can define multiple volume_type for one driver.
 (more precisely, volume_type is associated to one backend defintion
 and we can define multiple backend definition for one backend driver).

 In addition, I prefer to managing flavor/type through API and decoupling
 flavor/type definition from provider definitions in configuration files
 as Cinder and Nova do.

 [1] http://docs.openstack.org/admin-guide-cloud/content/multi_backend.html

 Thanks,
 Akihiro


 (2014/04/24 0:05), Eugene Nikanorov wrote:

 Hi neutrons,

  A quick question of the ^^^
 I heard from many of you that a term 'flavor' is undesirable, but so far
 there were no suggestions for the notion that we are going to introduce.
 So please, suggest you name for the resource.
 Names that I've been thinking of:
  - Capability group
  - Service Offering

  Thoughts?

  Thanks,
 Eugene.


 ___
 OpenStack-dev mailing 
 listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Selecting Compute tests by driver/hypervisor

2014-04-25 Thread Christopher Yeoh
On Tue, 22 Apr 2014 19:46:06 -0400
Sean Dague s...@dague.net wrote:
 
 I think the policy about what's allowed to be not implemented or not
 shouldn't be changing so quickly that it needs to be left to the
 projects to decide after the fact.
 

+1. I think it is extremely important that we gate on this so we don't
accidentally start not supporting a feature and not realise for a long
time.

If the current situation is very complicated then perhaps a tool could
be used to produce information about how things currently look based on
where we currently get not implemented errors. But that still needs to
be reviewed manually to ensure it's reasonable, and Tempest should not be
trying to adapt dynamically to feature support disappearing. 

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Results of the TC Election

2014-04-25 Thread Chris K
I would like to thank the departing TC members for their service and
congratulate the new TC members on the upcoming cycle!

Well Done, and Great Job ALL!

Chris



On Fri, Apr 25, 2014 at 8:30 AM, Anita Kuno ante...@anteaya.info wrote:

 Please join me in congratulating the 7 newly elected members of the TC.

 * Thierry Carrez
 * Jay Pipes
 * Vishvananda Ishaya
 * Michael Still
 * Jim Blair
 * Mark McClain
 * Devananda van der Veen

 Full results:
 http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_d34934c9fd1f6282

 Thank you to all candidates who stood for election, having a good group
 of candidates helps engage the community in our democratic process,

 Thank you to all who voted and who encouraged others to vote. We need to
 ensure your voice is heard.

 Thanks to my fellow election official, Tristan Cacqueray, I appreciate
 your help and perspective.

 Thank you for another great round.

 Here's to Juno,
 Anita.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] design summit sessions

2014-04-25 Thread Doug Hellmann
The second draft of the Oslo design summit sessions has been posted
to http://junodesignsummit.sched.org/overview/type/oslo. I don't
expect any changes, but we may juggle slots if we find there are
conflicts with sessions from other projects where we need cross-over
attendance.

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Use Case Question

2014-04-25 Thread Nathan Kinder


On 04/25/2014 12:50 AM, Carlos Garza wrote:
 Trevor is referring to our plans on using the SSL session ID of the
 ClientHello to provide session persistence.
 See RFC 5246 section 7.4.1.2, which sends an SSL session ID in the clear
 (unencrypted) so that a load balancer without the decrypting key can
 use it to make decisions on which
 back-end node to send the request to.  Users' browsers will typically
 use the same session ID for a while between connections. 

This is a nice approach, as it eliminates the need to decrypt/encrypt on
the load balancer.  I know that HAProxy has the ability to do this, so
it's definitely possible.

I do recall reading that the session ID isn't always sent as a part of
the client hello, so you would need to check the server hello as well.
If there is a blueprint on this, it would be good to make sure it
mentions that the server hello should be checked as well.
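
For reference, the well-known HAProxy pattern for this looks roughly like the
following sketch (the backend and server names are made up); note the
store-response line, which covers exactly that server hello case:

  backend https-servers
      mode tcp
      # stick table keyed on the 32-byte TLS session ID
      stick-table type binary len 32 size 30k expire 30m
      acl clienthello req_ssl_hello_type 1
      acl serverhello rep_ssl_hello_type 2
      # wait until a complete hello has been buffered before matching
      tcp-request inspect-delay 5s
      tcp-request content accept if clienthello
      tcp-response content accept if serverhello
      # the session ID sits at a fixed offset in the hello payload
      stick on payload_lv(43,1) if clienthello
      stick store-response payload_lv(43,1) if serverhello
      server web1 10.0.0.11:443 check
      server web2 10.0.0.12:443 check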

 
 Also note this is supported in TLS 1.1 as well in the same section
 according to RFC 4346. And in TLS 1.0 RFC2246 as well.
 
 So we have the ability to offer HTTP cookie based persistence as you
 described only if we have the key; but if not, we can also offer SSL
 session ID based persistence.

One other use-case that calls for re-encrypting on the load balancer is
when you want to use different certificates on the backend network (such
as a completely separate internal PKI).

I wrote up some thoughts on SSL/TLS with load balancers for the API
endpoints a few weeks ago.  It wasn't related to LBaaS, but I think it's
applicable.  It specifically discusses affinity via SSL/TLS session ID
as well as separate backend PKI use cases:

https://blog-nkinder.rhcloud.com/?p=7

I think the important take away is that everyone has different security
requirements, which requires flexibility.  There are arguments to be
made for termination at the load balancer, passing SSL/TLS through the
load balancer, and re-encryption at the load balancer.  Providing
support for all of these should be the goal.

Thanks,
-NGK

 
 
 
 On Apr 24, 2014, at 7:53 PM, Stephen Balukoff sbaluk...@bluebox.net
 mailto:sbaluk...@bluebox.net wrote:
 
 Hi Trevor,

 If the use case here requires the same client (identified by session
 cookie) to go to the same back-end, the only way to do this with HTTPS
 is to decrypt on the load balancer. Re-encryption of the HTTP request
 may or may not happen on the back-end depending on the user's needs.
 Again, if the client can potentially change IP addresses, and the
 session still needs to go to the same back-end, the only way the load
 balancer is going to know this is by decrypting the HTTPS request. I
 know of no other way to make this work.

 Stephen


 On Thu, Apr 24, 2014 at 9:25 AM, Trevor Vardeman
 trevor.varde...@rackspace.com mailto:trevor.varde...@rackspace.com
 wrote:

 Hey,

 I'm looking through the use-cases doc for review, and I'm confused
 about one of them.  I'm familiar with HTTP cookie based session
 persistence, but to satisfy secure-traffic for this case would
 there be decryption of content, injection of the cookie, and then
 re-encryption?  Is there another session persistence type that
 solves this issue already?  I'm copying the doc link and the use
 case specifically; not sure if the document order would change so
 I thought it would be easiest to include both :)

 Use Cases:
  
 https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis

 Specific Use Case:  A project-user wants to make his secured web
 based application (HTTPS) highly available. He has n VMs deployed
 on the same private subnet/network. Each VM is installed with a
 web server (ex: apache) and content. The application requires that
 a transaction which has started on a specific VM will continue to
 run against the same VM. The application is also available to
 end-users via smart phones, a case in which the end user IP might
 change. The project-user wishes to represent them to the
 application users as a web application available via a single IP.

 -Trevor Vardeman

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 -- 
 Stephen Balukoff
 Blue Box Group, LLC
 (800)613-4305 x807
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Neutron] Flavor(?) Framework

2014-04-25 Thread Jay Pipes
On Fri, 2014-04-25 at 13:41 +, Akihiro Motoki wrote:
 Hi,
 
 I have a same question from Mark. Why is flavor not desired?
 My first vote is flavor first, and then type.

Some reasons:

First, flavor, in English, can and often is spelled differently
depending on where you live in the world (flavor vs. flavour).

Second, type is the appropriate term for what this is describing, and
doesn't have connotations of taste, which flavor does.

I could also mention that the term flavor is a vestige of the
Rackspace Cloud API and, IMO, should be killed off in place of the more
common and better understood instance type which is used by the EC2
API.

 There is similar cases in other OpenStack projects.
 Nova uses flavor and Cinder uses (volume) type for similar cases.
 Both cases are similar to our use cases and I think it is better to
 use
 either of them to avoid more confusion from naming for usesr and
 operators.
 
 Cinder volume_type detail is available at [1]. In Cinder volume_type,
 we can define multiple volume_type for one driver. 
 (more precisely, volume_type is associated to one backend
 defintion
 and we can define multiple backend definition for one backend
 driver).
 
 In addition, I prefer to managing flavor/type through API and
 decoupling
 flavor/type definition from provider definitions in configuration
 files
 as Cinder and Nova do.

Yes, I don't believe there's any disagreement on that particular point.
This effort is all about trying to provide a more comfortable and
reasonable way for classification of these advanced services to be
controlled by the user.

Best,
-jay

 [1]
 http://docs.openstack.org/admin-guide-cloud/content/multi_backend.html
 
 Thanks,
 Akihiro
 
 (2014/04/24 0:05), Eugene Nikanorov wrote:
 
  Hi neutrons, 
  
  
  A quick question of the ^^^
  I heard from many of you that a term 'flavor' is undesirable, but so
  far there were no suggestions for the notion that we are going to
  introduce.
  So please, suggest you name for the resource.
  Names that I've been thinking of:
   - Capability group
   - Service Offering
  
  
  Thoughts?
  
  
  Thanks,
  Eugene.
  
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

2014-04-25 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Hello,

I am somewhat hesitant to bring up the stunnel topic in this thread, but it 
needs to be considered: an endpoint naming solution and a certificate 
creation/distribution solution need to account for both the haproxy and stunnel 
requirements, because there are many similarities. I am currently looking at a 
DevTest deployment that includes stunnel on one node and am trying to figure 
out how to modify all of the configuration files in OpenStack that reference 
the Keystone IP address and the hard-coded ports 5000 and 35357 to make use 
of the SSL-hardened stunnel proxy.

Regards,

Mark


-Original Message-
From: Jan Provazník [mailto:jprov...@redhat.com] 
Sent: Friday, April 25, 2014 6:31 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

Hello,
one of the missing bits for running multiple control nodes in Overcloud is setting 
up endpoints in Keystone to point to HAProxy, which will listen on a virtual IP 
and non-standard ports.

HAProxy ports are defined in heat template, e.g.:

 haproxy:
   nodes:
   - name: control1
 ip: 192.0.2.5
   - name: control2
 ip: 192.0.2.6
   services:
   - name: glance_api_cluster
 proxy_ip: 192.0.2.254 (=virtual ip)
 proxy_port: 9293
 port: 9292


means that Glance's Keystone endpoint should be set to:
http://192.0.2.254:9293/

ATM Keystone setup is done from devtest_overcloud.sh when Overcloud stack 
creation successfully completes. I wonder which of the following options for 
setting up endpoints in HA mode is preferred by the community:
1) leave it in the post-stack-create phase and extend the init-keystone script. 
But then how to deal with the list of non-standard ports (proxy_port in the 
example above)?
   1a) consider these non-standard ports as static and just hardcode them 
(similar to what we do with SSL ports already). But ports would be hardcoded in 
2 places (the heat template and this script). If a user changes them in the heat 
template, he has to change them in the init-keystone script too.
   1b) the init-keystone script would fetch the list of ports from the heat stack 
description (if that's possible?)

The long-term plan seems to be to rewrite the Keystone setup in os-cloud-config:
https://blueprints.launchpad.net/tripleo/+spec/tripleo-keystone-cloud-config
So an alternative to extending the init-keystone script would be to implement it 
as part of that blueprint; either way, the concept of keeping the Keystone setup 
in the post-stack-create phase remains.


2) do Keystone setup from inside the Overcloud:
Extend the keystone element; the steps done in the init-keystone script would be 
done in keystone's os-refresh-config script. This script would have to be called 
on only one of the nodes in the cluster and only once (though we already do a 
similar check for other services - mysql/rabbitmq - so I don't think this is a 
problem). Then this script can easily get the list of haproxy ports from heat 
metadata. This looks like the more attractive option to me - it eliminates an 
extra post-create config step.

Related to Keystone setup is also the plan around keys/cert setup described 
here:
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031045.html
But I think this plan would remain same no matter which of the options above 
would be used.


What do you think?

Jan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] looking to add support for server groups to heat...any comments?

2014-04-25 Thread Chris Friesen


I'm looking to add support for server groups to heat.  I've got working 
code, but I thought I'd post the overall design here in case people had 
objections.


Basically, what I propose is to add a class NovaServerGroup resource. 
 Currently it would only support a policy property to store the 
scheduler policy for the server group.  The scheduler policy would not 
support updating on the fly.


The LaunchConfiguration and Instance classes would be extended with 
an optional ServerGroup property.  In the Instance class if the 
ServerGroup property is set then the group name is added to the 
scheduler_hints when building the instance.


The Server class would be extended with an optional server_group 
property.  If it is set then the group name is added to the 
scheduler_hints when building the server.
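
To illustrate, a template using the proposal might look something like this 
(the resource type and property names are placeholders pending review):

  heat_template_version: 2013-05-23

  resources:
    group:
      type: OS::Nova::ServerGroup      # proposed resource class
      properties:
        policy: anti-affinity

    server_one:
      type: OS::Nova::Server
      properties:
        image: my-image                # placeholder
        flavor: m1.small               # placeholder
        server_group: { get_resource: group }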


All in all, it's around a hundred lines of code.

Any comments?

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] looking to add support for server groups to heat...any comments?

2014-04-25 Thread Mike Spreitzer
Chris Friesen chris.frie...@windriver.com wrote on 04/25/2014 12:23:00 
PM:

 I'm looking to add support for server groups to heat.  I've got working 
 code, but I thought I'd post the overall design here in case people had 
 objections.
 
 Basically, what I propose is to add a class NovaServerGroup resource. 
   Currently it would only support a policy property to store the 
 scheduler policy for the server group.  The scheduler policy would not 
 support updating on the fly.
 
 The LaunchConfiguration and Instance classes would be extended with 
 an optional ServerGroup property.  In the Instance class if the 
 ServerGroup property is set then the group name is added to the 
 scheduler_hints when building the instance.
 
 The Server class would be extended with an optional server_group 
 property.  If it is set then the group name is added to the 
 scheduler_hints when building the server.
 
 All in all, its around a hundred lines of code.
 
 Any comments?

Sounds exactly right to me, and timely too.

Cheers,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] looking to add support for server groups to heat...any comments?

2014-04-25 Thread Zane Bitter

On 25/04/14 12:23, Chris Friesen wrote:


I'm looking to add support for server groups to heat.  I've got working
code, but I thought I'd post the overall design here in case people had
objections.

Basically, what I propose is to add a class NovaServerGroup resource.
  Currently it would only support a policy property to store the
scheduler policy for the server group.  The scheduler policy would not
support updating on the fly.


If I correctly understood Mike when he previously talked about this, a 
server group policy is an actual thing in Nova with a UUID that gets 
passed to servers when you create them. If that is the case, then +1 for 
this design.



The LaunchConfiguration and Instance classes would be extended with
an optional ServerGroup property.  In the Instance class if the
ServerGroup property is set then the group name is added to the
scheduler_hints when building the instance.


-1 for making changes to AWS resources. These only exist for portability 
from/to CloudFormation; if people want to use OpenStack-only features 
then they should use the native resource types.


In the case of autoscaling, I'd say you probably want to add the 
property to e.g. InstanceGroup rather than to the LaunchConfiguration. 
(I guess this will become somewhat academic in the future, as I believe 
the plan for new native autoscaling resources is to have the launch 
configuration defined as part of the scaling group.)



The Server class would be extended with an optional server_group
property.  If it is set then the group name is added to the
scheduler_hints when building the server.


Given that we already expose the scheduler_hints directly, can you talk 
about why it would be advantageous to have a separate property as well? 
(e.g. syntax would be really finicky?)
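
For instance, something along these lines already works today (the group UUID 
is a placeholder):

  server:
    type: OS::Nova::Server
    properties:
      image: my-image
      flavor: m1.small
      scheduler_hints: { group: <server-group-uuid> }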


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] looking to add support for server groups to heat...any comments?

2014-04-25 Thread Mike Spreitzer
Zane Bitter zbit...@redhat.com wrote on 04/25/2014 12:36:00 PM:

 On 25/04/14 12:23, Chris Friesen wrote:
 ...
  The LaunchConfiguration and Instance classes would be extended 
with
  an optional ServerGroup property.  In the Instance class if the
  ServerGroup property is set then the group name is added to the
  scheduler_hints when building the instance.
 
 -1 for making changes to AWS resources. These only exist for portability 

 from/to CloudFormation; if people want to use OpenStack-only features 
 then they should use the native resource types.

Oh yes, I overlooked that in my enthusiasm.  Good catch.

 In the case of autoscaling, I'd say you probably want to add the 
 property to e.g. InstanceGroup rather than to the LaunchConfiguration. 
 (I guess this will become somewhat academic in the future, as I believe 
 the plan for new native autoscaling resources is to have the launch 
 configuration defined as part of the scaling group.)

Two of our current four kinds of group have already dispatched 
LaunchConfig to,
well, pick your favorite from
http://www.nytimes.com/1983/10/16/magazine/on-language-dust-heaps-of-history.html
As pointed out above, one of the two LaunchConfig-philes, 
AWS::AutoScaling::AutoScalingGroup,
should be left alone.  That leaves OS::Heat::InstanceGroup --- which is, 
in the Python, a superclass
of the AWS ASG --- so it would be oddly irregular to add something to 
InstanceGroup but not the AWS ASG.
More important is Zane's following question.

  The Server class would be extended with an optional server_group
  property.  If it is set then the group name is added to the
  scheduler_hints when building the server.
 
 Given that we already expose the scheduler_hints directly, can you talk 
 about why it would be advantageous to have a separate property as well? 
 (e.g. syntax would be really finicky?)

Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Gerrit downtime and upgrade on 2014-04-28

2014-04-25 Thread James E. Blair
Hi,

This is the third and final reminder that next week Gerrit will be
unavailable for a few hours starting at 1600 UTC on April 28th.

You may read about the changes that will impact you as a developer
(please note that the SSH host key change is particularly important) at
this location:

  https://wiki.openstack.org/wiki/GerritUpgrade

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Gerrit downtime and upgrade on 2014-04-28

2014-04-25 Thread Jay Faulkner
Can you guys publish the ssh host key we should expect from the new
gerrit server?

Thanks,
Jay Faulkner

On 4/25/14, 10:08 AM, James E. Blair wrote:
 Hi,

 This is the third and final reminder that next week Gerrit will be
 unavailable for a few hours starting at 1600 UTC on April 28th.

 You may read about the changes that will impact you as a developer
 (please note that the SSH host key change is particularly important) at
 this location:

   https://wiki.openstack.org/wiki/GerritUpgrade

 -Jim

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor(?) Framework

2014-04-25 Thread Anne Gentle
On Fri, Apr 25, 2014 at 11:01 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Fri, 2014-04-25 at 13:41 +, Akihiro Motoki wrote:
  Hi,
 
  I have a same question from Mark. Why is flavor not desired?
  My first vote is flavor first, and then type.

 Some reasons:

 First, flavor, in English, can and often is spelled differently
 depending on where you live in the world (flavor vs. flavour).

 Second, type is the appropriate term for what this is describing, and
 doesn't have connotations of taste, which flavor does.

 I could also mention that the term flavor is a vestige of the
 Rackspace Cloud API and, IMO, should be killed off in place of the more
 common and better understood instance type which is used by the EC2
 API.



I agree with Jay on all these points about flavor. See if you can use a
classification of type.

Thanks,
Anne


  There is similar cases in other OpenStack projects.
  Nova uses flavor and Cinder uses (volume) type for similar cases.
  Both cases are similar to our use cases and I think it is better to
  use
  either of them to avoid more confusion from naming for usesr and
  operators.
 
  Cinder volume_type detail is available at [1]. In Cinder volume_type,
  we can define multiple volume_type for one driver.
  (more precisely, volume_type is associated to one backend
  defintion
  and we can define multiple backend definition for one backend
  driver).
 
  In addition, I prefer to managing flavor/type through API and
  decoupling
  flavor/type definition from provider definitions in configuration
  files
  as Cinder and Nova do.

 Yes, I don't believe there's any disagreement on that particular point.
 This effort is all about trying to provide a more comfortable and
 reasonable way for classification of these advanced services to be
 controlled by the user.

 Best,
 -jay

  [1]
  http://docs.openstack.org/admin-guide-cloud/content/multi_backend.html
 
  Thanks,
  Akihiro
 
  (2014/04/24 0:05), Eugene Nikanorov wrote:
 
   Hi neutrons,
  
  
   A quick question of the ^^^
   I heard from many of you that a term 'flavor' is undesirable, but so
   far there were no suggestions for the notion that we are going to
   introduce.
   So please, suggest you name for the resource.
   Names that I've been thinking of:
- Capability group
- Service Offering
  
  
   Thoughts?
  
  
   Thanks,
   Eugene.
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Design Summit Sessions

2014-04-25 Thread Matthew Farrellee

On 04/24/2014 10:51 AM, Sergey Lukjanov wrote:

Hey folks,

I've pushed the draft schedule for Sahara sessions on ATL design
summit. The description isn't fully completed, I'm working on it. I'll
do it till the end of week and add an etherpad to each session.

Sahara folks, please, take a look on a schedule and share your
thoughts / comments.

Thanks.

http://junodesignsummit.sched.org/overview/type/sahara+%28ex-savanna%29


will you swap v2-api and scalable slots? part of it will flow into ux re 
image-registry.


maybe add some error handling / state machine to the ux improvements

best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor(?) Framework

2014-04-25 Thread Mohammad Banikazemi

As I understand the proposed flavor framework, the intention is to provide
a mechanism for specifying different flavors of a given service type
as they are already defined. So using the term type may be confusing.
Here we want to specify a possibly different set of capabilities within a
given defined service type.

Mohammad




From:   Jay Pipes jaypi...@gmail.com
To: openstack-dev@lists.openstack.org,
Date:   04/25/2014 12:09 PM
Subject:Re: [openstack-dev] [Neutron] Flavor(?) Framework



On Fri, 2014-04-25 at 13:41 +, Akihiro Motoki wrote:
 Hi,

 I have a same question from Mark. Why is flavor not desired?
 My first vote is flavor first, and then type.

Some reasons:

First, flavor, in English, can and often is spelled differently
depending on where you live in the world (flavor vs. flavour).

Second, type is the appropriate term for what this is describing, and
doesn't have connotations of taste, which flavor does.

I could also mention that the term flavor is a vestige of the
Rackspace Cloud API and, IMO, should be killed off in place of the more
common and better understood instance type which is used by the EC2
API.

 There is similar cases in other OpenStack projects.
 Nova uses flavor and Cinder uses (volume) type for similar cases.
 Both cases are similar to our use cases and I think it is better to
 use
 either of them to avoid more confusion from naming for usesr and
 operators.

 Cinder volume_type detail is available at [1]. In Cinder volume_type,
 we can define multiple volume_type for one driver.
 (more precisely, volume_type is associated to one backend
 defintion
 and we can define multiple backend definition for one backend
 driver).

 In addition, I prefer to managing flavor/type through API and
 decoupling
 flavor/type definition from provider definitions in configuration
 files
 as Cinder and Nova do.

Yes, I don't believe there's any disagreement on that particular point.
This effort is all about trying to provide a more comfortable and
reasonable way for classification of these advanced services to be
controlled by the user.

Best,
-jay

 [1]
 http://docs.openstack.org/admin-guide-cloud/content/multi_backend.html

 Thanks,
 Akihiro

 (2014/04/24 0:05), Eugene Nikanorov wrote:

  Hi neutrons,
 
 
  A quick question of the ^^^
  I heard from many of you that a term 'flavor' is undesirable, but so
  far there were no suggestions for the notion that we are going to
  introduce.
  So please, suggest you name for the resource.
  Names that I've been thinking of:
   - Capability group
   - Service Offering
 
 
  Thoughts?
 
 
  Thanks,
  Eugene.
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Gerrit downtime and upgrade on 2014-04-28

2014-04-25 Thread James E. Blair
Jay Faulkner j...@jvf.cc writes:

 Can you guys publish the ssh host key we should expect from the new
 gerrit server?

Certainly!  As the wiki page[1] notes, you can see the current ssh host
key fingerprints at:

  https://review.openstack.org/#/settings/ssh-keys

Of course, right now, that's for the current key.  After the upgrade
when you visit that page it will display the values for the new key.

It might seem odd to verify the fingerprints for the server you are
connecting to by visiting a web page on the same server; however, since
it is over HTTPS, some additional confidence is provided by the trust in
the CA system.

Of course, for some of us, that's not a lot.  So on Monday, we'll send a
GPG signed email with the fingerprints as well.  And this is just
another reminder that as a community, we should endeavor to build our
GPG web of trust.  See you at the Summit!
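
In the meantime, a simple way to fetch and inspect the new key yourself is the
standard OpenSSH tooling (29418 being the usual Gerrit SSH port):

  $ ssh-keyscan -p 29418 review.openstack.org > gerrit_host_key
  $ ssh-keygen -lf gerrit_host_key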

-Jim

[1] https://wiki.openstack.org/wiki/GerritUpgrade

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor(?) Framework

2014-04-25 Thread Jay Pipes
On Fri, 2014-04-25 at 13:32 -0400, Mohammad Banikazemi wrote:
 As I understand the proposed flavor framework, the intention is to
 provide a mechanism for specifying different flavors of a given
 service type  as they  are already defined. So using the term type
 may be confusing. Here we want to specify possibly different set of
 capabilities within a given defined service type.

Hi Mohammad,

Yes, the trouble in Neutron is the existing service type usage... I
proposed to rename that to service family or service class in a previous
email, and use a type for each service class, so:

load balancer type
firewall type
VPN type

I'd also recommend simplifying the API and CLI by removing the
implementation-focused provider type stuff eventually, as well, since
a service type framework would essentially make that no longer needed --
at least on the public API side of things.

Best,
-jay

 
 From: Jay Pipes jaypi...@gmail.com
 To: openstack-dev@lists.openstack.org, 
 Date: 04/25/2014 12:09 PM
 Subject: Re: [openstack-dev] [Neutron] Flavor(?) Framework
 
 
 
 
 
 
 On Fri, 2014-04-25 at 13:41 +, Akihiro Motoki wrote:
  Hi,
  
  I have a same question from Mark. Why is flavor not desired?
  My first vote is flavor first, and then type.
 
 Some reasons:
 
 First, flavor, in English, can and often is spelled differently
 depending on where you live in the world (flavor vs. flavour).
 
 Second, type is the appropriate term for what this is describing, and
 doesn't have connotations of taste, which flavor does.
 
 I could also mention that the term flavor is a vestige of the
 Rackspace Cloud API and, IMO, should be killed off in place of the
 more
 common and better understood instance type which is used by the EC2
 API.
 
  There is similar cases in other OpenStack projects.
  Nova uses flavor and Cinder uses (volume) type for similar
 cases.
  Both cases are similar to our use cases and I think it is better to
  use
  either of them to avoid more confusion from naming for usesr and
  operators.
  
  Cinder volume_type detail is available at [1]. In Cinder
 volume_type,
  we can define multiple volume_type for one driver. 
  (more precisely, volume_type is associated to one backend
  defintion
  and we can define multiple backend definition for one backend
  driver).
  
  In addition, I prefer to managing flavor/type through API and
  decoupling
  flavor/type definition from provider definitions in configuration
  files
  as Cinder and Nova do.
 
 Yes, I don't believe there's any disagreement on that particular
 point.
 This effort is all about trying to provide a more comfortable and
 reasonable way for classification of these advanced services to be
 controlled by the user.
 
 Best,
 -jay
 
  [1]
 
 http://docs.openstack.org/admin-guide-cloud/content/multi_backend.html
  
  Thanks,
  Akihiro
  
  (2014/04/24 0:05), Eugene Nikanorov wrote:
  
   Hi neutrons, 
   
   
   A quick question of the ^^^
   I heard from many of you that the term 'flavor' is undesirable, but so
   far there have been no suggestions for the notion that we are going to
   introduce.
   So please suggest your preferred name for the resource.
   Names that I've been thinking of:
- Capability group
- Service Offering
   
   
   Thoughts?
   
   
   Thanks,
   Eugene.
   
   
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove][Heat] Use cases required for Trove

2014-04-25 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-04-25 10:05:31 -0700:
 On 25/04/14 08:30, Denis Makogon wrote:
Hello Trove/Heat community.
 
  I'd like to start thread related to required use cases (from Trove
  perspective). To support (completely) Heat in Trove, it neeeds to
  support required operations like:
 
 
   1.
 
  Resize operations:
 
  - resize a volume (to bigger size);
 
 This is kind of messy, because it involves operations (i.e. association 
 with an instance) that are, in theory, expressed independently of the 
 Volume itself in the template. It should be do-able though.
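 
 To make that concrete, a minimal sketch (resource names illustrative) of how
 the volume and its association are expressed separately, so a resize would
 amount to updating the size property on the Volume resource during a stack
 update:
 
   db_volume:
     type: OS::Cinder::Volume
     properties:
       size: 10    # a resize would update this property
 
   db_volume_attachment:
     type: OS::Cinder::VolumeAttachment
     properties:
       volume_id: { get_resource: db_volume }
       instance_uuid: { get_resource: db_server }  # assumed server resource
       mountpoint: /dev/vdb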
 
  - consistent resize of instance flavor.
 
 Server resize is implemented already in Heat. I believe what is missing 
 is a hook to allow Trove to confirm (or not) the resize. Heat always 
 confirms at the moment.
 
 http://junodesignsummit.sched.org/event/d422b859492cd1dbcb9763c8b7245f23
 

Right, this is similar to the rebuild case that TripleO needs. I believe
software deployments has laid a nice framework down for this, and I'm
working right now on a POC for rebuild that will quite easily translate
into resize as well.

   2.
 
  Migration operation (from host to host).
 
 It's not clear to me how this would fit into Heat's declarative model 
 (since it's an action, rather than a thing or a relationship between 
 things). Given that this is presumably an admin-type operation, maybe it 
 is OK for this to continue to be done with the Nova client.
 

Assuming you mean nova migration, which is an administrative action
in Nova, I agree with Zane, except that I wouldn't say maybe, but
most likely.

However, the statement is a bit vague, so you could mean _trove_ host
to _trove_ host. As in, migrating a whole database and then a VIP to
another database.

I think there is a role for Heat to play there, as you would change your
desired end-goal to have two servers, and even use Heat to communicate
the details how the migration path to the new one, and then change the
goal again to just have the new server, with the VIP moved.

   3.
 
  Security group/rules operations:
 
  - add new rules to already existing group;
 
  - swap existing rules to a new ones.
 
 This appears to be implemented already (for Neutron security groups), 
 though the implementation looks a bit suspect to me:
 
 https://github.com/openstack/heat/blob/master/heat/engine/resources/neutron/security_group.py#L237
 
   4.
 
  Designate (DNSaaS) resource provisioning.
 
 +1! If someone wanted to write this we would be more than happy to see 
 it in /contrib right now, and moving to the main tree as soon as 
 Designate is accepted for incubation.
 

Agreed. DNS all the things.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] looking to add support for server groups to heat...any comments?

2014-04-25 Thread Chris Friesen

On 04/25/2014 11:01 AM, Mike Spreitzer wrote:

Zane Bitter zbit...@redhat.com wrote on 04/25/2014 12:36:00 PM:

  On 25/04/14 12:23, Chris Friesen wrote:



More important is Zane's following question.

   The Server class would be extended with an optional server_group
   property.  If it is set then the group name is added to the
   scheduler_hints when building the server.
 
  Given that we already expose the scheduler_hints directly, can you talk
  about why it would be advantageous to have a separate property as well?
  (e.g. syntax would be really finicky?)



I was thinking it'd be more intuitive for the end-user (and more 
future-proof if novaclient changes), but maybe I'm wrong.


In the version I have currently it looks something like this:

  cirros_server1:
type: OS::Nova::Server
properties:
  name: cirros1
  image: 'cirros'
  flavor: 'm1.tiny'
  server_group: { get_resource: my_heat_group }



In the nova boot command we pass the group uuid like this:

--hint group=e4cf5dea-4831-49a1-867d-e263f2579dd0

If we were to make use of the scheduler hints, how would that look? 
Something like this?  (I'm not up to speed on my YAML, so forgive me if 
this isn't quite right.)  And how would this look if we wanted to 
specify other scheduler hints as well?


  cirros_server1:
type: OS::Nova::Server
properties:
  name: cirros1
  image: 'cirros'
  flavor: 'm1.tiny'
  scheduler_hints: {group: { get_resource: my_heat_group }}


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2014-04-25 Thread Dmitriy Ukhlov
In my opinion it would be enough to read the table schema
from stdin; then it is possible to pipe the input in from any stream.
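
For example, a hypothetical invocation once the CLI reads the schema from
stdin (command name and flags are illustrative, not final):

    magnetodb create-table < table-schema.json
    # or, from a pipeline:
    cat table-schema.json | magnetodb create-table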


On Fri, Apr 25, 2014 at 6:25 AM, ANDREY OSTAPENKO (CS) 
andrey_ostape...@symantec.com wrote:

 Hello, everyone!

 Now I'm starting to implement a CLI client for the KeyValue Storage service
 MagnetoDB.
 I'm going to use the heat approach for CLI commands, e.g. heat stack-create
 --template-file FILE,
 because we have too many parameters to pass to the command.
 For example, table creation command:

 magnetodb create-table --description-file FILE

 File will contain json data, e.g.:

 {
     "table_name": "data",
     "attribute_definitions": [
         {
             "attribute_name": "Attr1",
             "attribute_type": "S"
         },
         {
             "attribute_name": "Attr2",
             "attribute_type": "S"
         },
         {
             "attribute_name": "Attr3",
             "attribute_type": "S"
         }
     ],
     "key_schema": [
         {
             "attribute_name": "Attr1",
             "key_type": "HASH"
         },
         {
             "attribute_name": "Attr2",
             "key_type": "RANGE"
         }
     ],
     "local_secondary_indexes": [
         {
             "index_name": "IndexName",
             "key_schema": [
                 {
                     "attribute_name": "Attr1",
                     "key_type": "HASH"
                 },
                 {
                     "attribute_name": "Attr3",
                     "key_type": "RANGE"
                 }
             ],
             "projection": {
                 "projection_type": "ALL"
             }
         }
     ]
 }

 Blueprint:
 https://blueprints.launchpad.net/magnetodb/+spec/magnetodb-cli-client

 If you have any comments, please let me know.

 Best regards,
 Andrey Ostapenko




-- 
Best regards,
Dmitriy Ukhlov
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Meeting minutes

2014-04-25 Thread Martinez, Christian
Hello,
One comment regarding 
https://blueprints.launchpad.net/climate/+spec/before-end-notification-crud :
One of Dina’s comments on https://review.openstack.org/#/c/89833/ was that 
it is her intention not to add this functionality to the v1 API.
If that’s the case, then the changes I proposed for the climateclient at 
https://review.openstack.org/#/c/89837/ won’t make sense, since the client only 
works with the v1 API.
I see a couple of options here:

· Give support for v1 and change the client accordingly.

· Give support only on v2, and open a bp for climateclient v2 support.

Hope I make myself clear.
I’ll be waiting for your feedback ☺

Regards,
Christian
From: Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
Sent: Friday, April 25, 2014 1:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Climate] Meeting minutes

Hi,

Sorry again about my absence for 20 mins, I had an IRC client/connection 
issue.
That impacted the discussions quite a bit; feel free to reply to this email with any 
concerns you didn't have time to raise at the meeting, so we can continue.

That said, meeting minutes can be found here :

(18:00:32) openstack: Minutes: 
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-25-15.00.html
(18:00:33) openstack: Minutes (text): 
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-25-15.00.txt
(18:00:34) openstack: Log: 
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-25-15.00.log.html

Thanks,
-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

2014-04-25 Thread Clint Byrum
Excerpts from Jan Provazník's message of 2014-04-25 06:30:31 -0700:
 Hello,
 one of the missing bits for running multiple control nodes in the Overcloud is 
 setting up endpoints in Keystone to point to HAProxy, which will listen 
 on a virtual IP and non-standard ports.
 
 HAProxy ports are defined in heat template, e.g.:
 
  haproxy:
nodes:
- name: control1
  ip: 192.0.2.5
- name: control2
  ip: 192.0.2.6
services:
- name: glance_api_cluster
  proxy_ip: 192.0.2.254 (=virtual ip)
  proxy_port: 9293
   port: 9292
 
 
 means that Glance's Keystone endpoint should be set to:
 http://192.0.2.254:9293/
 
 ATM Keystone setup is done from devtest_overcloud.sh when Overcloud 
 stack creation successfully completes. I wonder which of the following 
 options for setting up endpoints in HA mode is preferred by the community:
 1) leave it in the post-stack-create phase and extend the init-keystone script. 
 But then how do we deal with the list of non-standard ports (proxy_port in 
 the example above)?
    1a) consider these non-standard ports as static and just hardcode 
 them (similar to what we do with SSL ports already). But ports would be 
 hardcoded in 2 places (the heat template and this script). If a user changes 
 them in the heat template, he has to change them in the init-keystone script too.
    1b) the init-keystone script would fetch the list of ports from the heat stack 
 description (if that's possible?)
 
 Long-term plan seems to be rewrite Keystone setup into os-cloud-config:
 https://blueprints.launchpad.net/tripleo/+spec/tripleo-keystone-cloud-config
 So an alternative to extending the init-keystone script would be to implement it 
 as part of the blueprint; either way, the concept of keeping Keystone setup 
 in the post-stack-create phase remains.
 

We may want to consider making use of Heat outputs for this.

Rather than assuming hard coding, create an output on the overcloud
template that is something like 'keystone_endpoint'. It would look
something like this:

Outputs:
  keystone_endpoint:
Fn::Join:
  - ''
      - - "http://"
        - {Fn::GetAtt: [ haproxy_node, first_ip ]} # fn select and yada
        - ":"
        - {Ref: KeystoneEndpointPort} # that's a parameter
        - "/v2.0"


These are then made available via heatclient as stack.outputs in
'stack-show'.

That way as we evolve new stacks that have different ways of controlling
the endpoints (LBaaS anybody?) we won't have to change os-cloud-config
for each one.
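
For what it's worth, a post-create script (or os-cloud-config itself) could
then read that output with python-heatclient along these lines; a minimal
sketch, assuming an existing token and a stack named 'overcloud':

    # HEAT_ENDPOINT and AUTH_TOKEN are assumed to come from the environment.
    from heatclient.client import Client

    heat = Client('1', HEAT_ENDPOINT, token=AUTH_TOKEN)
    stack = heat.stacks.get('overcloud')
    # stack.outputs is a list of {'output_key': ..., 'output_value': ...}
    outputs = dict((o['output_key'], o['output_value'])
                   for o in stack.outputs)
    keystone_endpoint = outputs['keystone_endpoint']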

 
 2) do Keystone setup from inside Overcloud:
 Extend keystone element, steps done in init-keystone script would be 
 done in keystone's os-refresh-config script. This script would have to 
 be called only on one of nodes in cluster and only once (though we 
 already do similar check for other services - mysql/rabbitmq, so I don't 
 think this is a problem). Then this script can easily get list of 
 haproxy ports from heat metadata. This looks like more attractive option 
 to me - it eliminates an extra post-create config step.

Things that can be done from outside the cloud should be done from
outside the cloud. This helps encourage the separation of concerns and
also makes it simpler to reason about which code is driving the cloud
versus code that is creating the cloud.

 
 Related to Keystone setup is also the plan around keys/cert setup 
 described here:
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/031045.html
 But I think this plan would remain same no matter which of the options 
 above would be used.
 
 
 What do you think?
 
 Jan
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] Configuring consistency draft of concept

2014-04-25 Thread MAKSYM IARMAK (CS)
Hi openstackers,

In order to implement 
https://blueprints.launchpad.net/magnetodb/+spec/support-tuneable-consistency 
we need tunable consistency support in MagnetoDB, which is described here: 
https://blueprints.launchpad.net/magnetodb/+spec/configurable-consistency

So, here is specification draft of concept.

1. First of all, there is a list of suggested consistency levels for MagnetoDB:

 *   STRONG - Provides the highest consistency and the lowest availability of 
any level. (A write must be written to the commit log and memory table on 
all replica nodes in the cluster for that row. Read returns the record with the 
most recent timestamp after all replicas have responded.)
 *   WEAK - Provides low latency. Delivers the lowest consistency and highest 
availability compared to other levels. (A write must be written to the commit 
log and memory table of at least one replica node. Read returns a response from 
at least one replica node)
 *   QUORUM - Provides strong consistency if you can tolerate some level of 
failure. (A write must be written to the commit log and memory table on a 
quorum of replica nodes. Read returns the record with the most recent timestamp 
after a quorum of replicas has responded regardless of data center.)

And special Multi Data Center consistency levels:

 *   MDC_EACH_QUORUM - Used in multiple data center clusters to strictly 
maintain consistency at the same level in each data center. (A write must be 
written to the commit log and memory table on a quorum of replica nodes in all 
data centers. Read returns the record with the most recent timestamp once a 
quorum of replicas in each data center of the cluster has responded.)
 *   MDC_LOCAL_QUORUM - Used in multiple data center clusters to maintain 
consistency in local (current) data center. (A write must be written to the 
commit log and memory table on a quorum of replica nodes in the same data 
center as the coordinator node. Read returns the record with the most recent 
timestamp once a quorum of replicas in the current data center as the 
coordinator node has reported. Avoids latency of inter-data center 
communication.)

BUT: We can't use inconsistent writes if we use indexed tables and the 
conditional operations that the indexes are based on, because this stuff 
requires a consistent state of the data. So it seems that we can:
1) tune consistent read/write operations in the following combinations: 
QUORUM/QUORUM, MDC_LOCAL_QUORUM/MDC_EACH_QUORUM, 
MDC_EACH_QUORUM/MDC_LOCAL_QUORUM, STRONG/WEAK.
We also have an inconsistent read operation with CL=WEAK.
2) if we really need inconsistent writes, we can allow them for tables without 
indexing. In this case we provide more flexibility and room for optimization, 
but on the other hand we make MagnetoDB more complicated.



2. JSON request examples.

I suggest adding a new 'consistency_level' attribute. We should check the 
corresponding naming in the backend API, because it can be a little different there.



For a read operation we will use, for example, a get item request:

{
    "key": {
        "ForumName": {
            "S": "MagnetoDB"
        },
        "Subject": {
            "S": "What about configurable consistency support?"
        }
    },
    "attributes_to_get": ["LastPostDateTime", "Message", "Tags"],
    "consistency_level": "STRONG"
}

Here we use consistency level STRONG, which means that the response returns the 
record with the most recent timestamp after all replicas have responded. In 
this case we will have the highest consistency but the lowest availability of 
any level.

For a write operation we will use, for example, a put item request:

{
    "item": {
        "LastPostDateTime": {
            "S": "201303190422"
        },
        "Tags": {
            "SS": ["Update", "Multiple items", "HelpMe"]
        },
        "ForumName": {
            "S": "Amazon DynamoDB"
        },
        "Message": {
            "S": "I want to update multiple items."
        },
        "Subject": {
            "S": "How do I update multiple items?"
        },
        "LastPostedBy": {
            "S": "f...@example.com"
        }
    },
    "expected": {
        "ForumName": {
            "exists": false
        },
        "Subject": {
            "exists": false
        }
    },
    "consistency_level": "WEAK"
}


Here we use consistency level WEAK, which means that the write will be written to 
the commit log and memory table of at least one replica node. In this case we 
will have the lowest consistency but the highest availability compared to other 

Re: [openstack-dev] [Neutron] Flavor(?) Framework

2014-04-25 Thread Eugene Nikanorov
 I'd also recommend simplifying the API and CLI by removing the
 implementation-focused provider type stuff eventually, as well, since
 a service type framework would essentially make that no longer needed --
 at least on the public API side of things.
Correct, that's part of the proposed change, although we probably need
to support it for one more release.

Eugene.


On Fri, Apr 25, 2014 at 9:40 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Fri, 2014-04-25 at 13:32 -0400, Mohammad Banikazemi wrote:
  As I understand the proposed flavor framework, the intention is to
  provide a mechanism for specifying different flavors of a given
  service type as they are already defined. So using the term type
  may be confusing. Here we want to specify a possibly different set of
  capabilities within a given defined service type.

 Hi Mohammad,

 Yes, the trouble in Neutron is the existing service type usage... I
 proposed to rename that to service family or service class in a previous
 email, and use a type for each service class, so:

 load balancer type
 firewall type
 VPN type

 I'd also recommend simplifying the API and CLI by removing the
 implementation-focused provider type stuff eventually, as well, since
 a service type framework would essentially make that no longer needed --
 at least on the public API side of things.

 Best,
 -jay

 
  From: Jay Pipes jaypi...@gmail.com
  To: openstack-dev@lists.openstack.org,
  Date: 04/25/2014 12:09 PM
  Subject: Re: [openstack-dev] [Neutron] Flavor(?) Framework
 
 
 
  __
 
 
 
  On Fri, 2014-04-25 at 13:41 +, Akihiro Motoki wrote:
   Hi,
  
   I have the same question as Mark. Why is flavor not desired?
   My first vote is flavor first, and then type.
 
  Some reasons:
 
  First, flavor, in English, can be, and often is, spelled differently
  depending on where you live in the world (flavor vs. flavour).
 
  Second, type is the appropriate term for what this is describing, and
  doesn't have connotations of taste, which flavor does.
 
  I could also mention that the term flavor is a vestige of the
  Rackspace Cloud API and, IMO, should be killed off in favor of the
  more
  common and better understood instance type which is used by the EC2
  API.
 
   There are similar cases in other OpenStack projects.
   Nova uses flavor and Cinder uses (volume) type for similar cases.
   Both cases are similar to our use cases and I think it is better to use
   either of them to avoid more confusion from naming for users and
   operators.
  
   Cinder volume_type detail is available at [1]. In Cinder volume_type,
   we can define multiple volume_types for one driver.
   (More precisely, a volume_type is associated with one backend definition,
   and we can define multiple backend definitions for one backend driver.)
  
   In addition, I prefer managing flavor/type through the API and decoupling
   flavor/type definitions from provider definitions in configuration files,
   as Cinder and Nova do.
 
  Yes, I don't believe there's any disagreement on that particular
  point.
  This effort is all about trying to provide a more comfortable and
  reasonable way for classification of these advanced services to be
  controlled by the user.
 
  Best,
  -jay
 
   [1]
  
  http://docs.openstack.org/admin-guide-cloud/content/multi_backend.html
  
   Thanks,
   Akihiro
  
   (2014/04/24 0:05), Eugene Nikanorov wrote:
  
Hi neutrons,
   
   
A quick question of the ^^^
    I heard from many of you that the term 'flavor' is undesirable, but so
    far there have been no suggestions for the notion that we are going to
    introduce.
    So please suggest your preferred name for the resource.
Names that I've been thinking of:
 - Capability group
 - Service Offering
   
   
Thoughts?
   
   
Thanks,
Eugene.
   
   
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [Heat] looking to add support for server groups to heat...any comments?

2014-04-25 Thread Zane Bitter

On 25/04/14 13:50, Chris Friesen wrote:

On 04/25/2014 11:01 AM, Mike Spreitzer wrote:

Zane Bitter zbit...@redhat.com wrote on 04/25/2014 12:36:00 PM:

  On 25/04/14 12:23, Chris Friesen wrote:



More important is Zane's following question.

   The Server class would be extended with an optional server_group
   property.  If it is set then the group name is added to the
   scheduler_hints when building the server.
 
  Given that we already expose the scheduler_hints directly, can you
talk
  about why it would be advantageous to have a separate property as
well?
  (e.g. syntax would be really finicky?)



I was thinking it'd be more intuitive for the end-user (and more
future-proof if novaclient changes), but maybe I'm wrong.

In the version I have currently it looks something like this:

   cirros_server1:
 type: OS::Nova::Server
 properties:
   name: cirros1
   image: 'cirros'
   flavor: 'm1.tiny'
   server_group: { get_resource: my_heat_group }



In the nova boot command we pass the group uuid like this:

--hint group=e4cf5dea-4831-49a1-867d-e263f2579dd0

If we were to make use of the scheduler hints, how would that look?
Something like this?  (I'm not up to speed on my YAML, so forgive me if
this isn't quite right.)  And how would this look if we wanted to
specify other scheduler hints as well?

   cirros_server1:
 type: OS::Nova::Server
 properties:
   name: cirros1
   image: 'cirros'
   flavor: 'm1.tiny'
   scheduler_hints: {group: { get_resource: my_heat_group }}


Something like that (I don't think you need the quotes around group). 
Or, equivalently:


  cirros_server1:
type: OS::Nova::Server
properties:
  name: cirros1
  image: 'cirros'
  flavor: 'm1.tiny'
  scheduler_hints:
group: { get_resource: my_heat_group }
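
Additional hints would presumably just be further keys in the same map; a
minimal sketch, combining the group hint with the existing different_host
hint (the referenced instance UUID is illustrative):

  cirros_server2:
    type: OS::Nova::Server
    properties:
      name: cirros2
      image: 'cirros'
      flavor: 'm1.tiny'
      scheduler_hints:
        group: { get_resource: my_heat_group }
        different_host: e4cf5dea-4831-49a1-867d-e263f2579dd0  # illustrative UUID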

- ZB


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-25 Thread Yuriy Taraday
On Fri, Apr 25, 2014 at 8:10 PM, Zaro zaro0...@gmail.com wrote:

 Gerrit 2.8 allows setting label values on patch sets either thru the
 command line[1] or REST API[2].  Since we will set up WIP as a -1 score
 on a label, this will just be a matter of updating git-review to set
 the label on new patchsets.  I'm not sure if there's a bug entered in
 our issue tracker for this, but you are welcome to create one.

 [1] https://review-dev.openstack.org/Documentation/cmd-review.html
 [2]
 https://review-dev.openstack.org/Documentation/rest-api-changes.html#set-review


Why do you object to making it the default behavior on the Gerrit side?
Is there any issue with making this label pass on to new patch sets?

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove][Heat] Use cases required for Trove

2014-04-25 Thread Denis Makogon
On Fri, Apr 25, 2014 at 8:44 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Zane Bitter's message of 2014-04-25 10:05:31 -0700:
  On 25/04/14 08:30, Denis Makogon wrote:
 Hello Trove/Heat community.
  
   I'd like to start a thread related to required use cases (from the Trove
   perspective). To support Heat (completely) in Trove, it needs to
   support required operations like:
  
  
1.
  
   Resize operations:
  
   - resize a volume (to bigger size);
 
  This is kind of messy, because it involves operations (i.e. association
  with an instance) that are, in theory, expressed independently of the
  Volume itself in the template. It should be do-able though.
 

   Agreed. But as for me, this use case is the last one on the list that's
not fully covered in Heat. Only the VolumeAttachment class has a handle_update
task; it is missing for the base Volume class. Last question: can the volume-update
blueprint be approved, or does it need to be discussed with the whole Heat community?

   - consistent resize of instance flavor.
 
  Server resize is implemented already in Heat. I believe what is missing
  is a hook to allow Trove to confirm (or not) the resize. Heat always
  confirms at the moment.
 

  From the Trove perspective, the best case is when Heat
handles most of the logic around the resize and its confirmation. My best
guess is that Trove needs to do only a stack-update procedure, and nothing else.
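
I.e., something along the lines of the following, where the stack and
parameter names are illustrative:

 heat stack-update my-db-stack -f db-template.yaml -P "instance_flavor=m1.medium"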


  http://junodesignsummit.sched.org/event/d422b859492cd1dbcb9763c8b7245f23
 

 Right, this is similar to the rebuild case that TripleO needs. I believe
 software deployments has laid a nice framework down for this, and I'm
 working right now on a POC for rebuild that will quite easily translate
 into resize as well.

2.
  
   Migration operation (from host to host).
 
  It's not clear to me how this would fit into Heat's declarative model
  (since it's an action, rather than a thing or a relationship between
  things). Given that this is presumably an admin-type operation, maybe it
  is OK for this to continue to be done with the Nova client.
 

 Assuming you mean nova migration, which is an administrative action
 in Nova, I agree with Zane, except that I wouldn't say maybe, but
 most likely.

 However, the statement is a bit vague, so you could mean _trove_ host
 to _trove_ host. As in, migrating a whole database and then a VIP to
 another database.

 I think there is a role for Heat to play there, as you would change your
 desired end-goal to have two servers, and even use Heat to communicate
 the details of the migration path to the new one, and then change the
 goal again to just have the new server, with the VIP moved.

   Agreed, migrations will still be part of the responsibilities of
Nova.

3.
  
   Security group/rules operations:
  
   - add new rules to already existing group;
  
   - swap existing rules to a new ones.
 
  This appears to be implemented already (for Neutron security groups),
  though the implementation looks a bit suspect to me:
 
 
 https://github.com/openstack/heat/blob/master/heat/engine/resources/neutron/security_group.py#L237
 

  For now Trove cannot work with Neutron as its network management service,
so we are mostly interested in the nova-network-based flow. But in the future we'll
also need a Neutron-based flow.

4.
  
   Designate (DNSaaS) resource provisioning.
 
  +1! If someone wanted to write this we would be more than happy to see
  it in /contrib right now, and moving to the main tree as soon as
  Designate is accepted for incubation.
 

 Agreed. DNS all the things.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-25 Thread Zaro
Do you mean making it default to WIP on every patchset that gets
uploaded?  That isn't possible with Gerrit version 2.8 because it
doesn't allow you to set a default score.  Also I don't think that
would be appropriate because I definitely would not want that as the
default setting for my workflow.  I think it would be more appropriate
to build that into git-review, maybe a configuration setting that
would set every patchset to WIP upon upload.

Gerrit 2.8 does allow you to carry the same label score forward[1] if
it's either a trivial rebase or no code has changed.  We plan to set
these options for the 'Code-Review' label, but not the Workflow label.

[1] https://gerrit-review.googlesource.com/Documentation/config-labels.html
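
For reference, with Gerrit 2.8 a tool like git-review could set such a label
on a patch set from the command line roughly as follows; the change/patchset
numbers, and the assumption that the label is named Workflow, are illustrative:

    ssh -p 29418 review.openstack.org gerrit review --label Workflow=-1 12345,2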

On Fri, Apr 25, 2014 at 11:58 AM, Yuriy Taraday yorik@gmail.com wrote:
 On Fri, Apr 25, 2014 at 8:10 PM, Zaro zaro0...@gmail.com wrote:

 Gerrit 2.8 allows setting label values on patch sets either thru the
  command line[1] or REST API[2].  Since we will set up WIP as a -1 score
  on a label, this will just be a matter of updating git-review to set
  the label on new patchsets.  I'm not sure if there's a bug entered in
  our issue tracker for this, but you are welcome to create one.

 [1] https://review-dev.openstack.org/Documentation/cmd-review.html
 [2]
 https://review-dev.openstack.org/Documentation/rest-api-changes.html#set-review


 Why do you object to making it the default behavior on the Gerrit side?
 Is there any issue with making this label pass on to new patch sets?

 --

 Kind regards, Yuriy.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] looking to add support for server groups to heat...any comments?

2014-04-25 Thread Chris Friesen

On 04/25/2014 12:00 PM, Zane Bitter wrote:

On 25/04/14 13:50, Chris Friesen wrote:



In the nova boot command we pass the group uuid like this:

--hint group=e4cf5dea-4831-49a1-867d-e263f2579dd0

If we were to make use of the scheduler hints, how would that look?
Something like this?  (I'm not up to speed on my YAML, so forgive me if
this isn't quite right.)  And how would this look if we wanted to
specify other scheduler hints as well?

   cirros_server1:
 type: OS::Nova::Server
 properties:
   name: cirros1
   image: 'cirros'
   flavor: 'm1.tiny'
   scheduler_hints: {group: { get_resource: my_heat_group }}


Something like that (I don't think you need the quotes around group).
Or, equivalently:

   cirros_server1:
 type: OS::Nova::Server
 properties:
   name: cirros1
   image: 'cirros'
   flavor: 'm1.tiny'
   scheduler_hints:
 group: { get_resource: my_heat_group }



Okay...assuming it works like that then that looks fine to me.

If we go this route then the changes are confined to a single new file. 
 Given that, do we need a blueprint or can I just submit the code for 
review once I port it to the current codebase?


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Cryptography audit by OSSG

2014-04-25 Thread Nathan Kinder


On 04/18/2014 06:55 AM, Lisa Clark wrote:
 Barbicaneers,
 
Is anyone following the openstack-security list and/or part of the
 OpenStack Security Group (OSSG)?  This sounds like another group and list
 we should keep our eyes on.
 
In the below thread on the security list, Nathan Kinder is conducting a
 security audit of the various integrated OpenStack projects.  He's
 answering questions such as what crypto libraries are being used in the
 projects, algorithms used, sensitive data, and potential improvements that
 can be made.  Check the links out in the below thread.
 
Though we're not yet integrated, it might be beneficial to put together
 our security audit page under Security/Icehouse/Barbican.

I've started a page for you (but for Juno).  There is a lot to fill in
still (by folks more familiar with the Barbican code than I), but it's a
start.

https://wiki.openstack.org/wiki/Security/Juno/Barbican

It would be great if the Barbican team can fill this in and keep it up
to date as development continues.

I've also added the rest of the projects currently in incubation on the
top-level Security page for Juno in case other projects are interested
in filling in their info as well:

https://wiki.openstack.org/wiki/Security/Juno

Thanks,
-NGK

 
Another thing to consider as you're reviewing the security audit pages
 of Keystone and Heat (and others as they are added): Would Barbican help
 to solve any of the security concerns/issues that these projects are
 experiencing?
 
 -Lisa
 

 Message: 5
 Date: Thu, 17 Apr 2014 16:27:30 -0700
 From: Nathan Kinder nkin...@redhat.com
 To: Bryan D. Payne bdpa...@acm.org, Clark, Robert Graham
  robert.cl...@hp.com
 Cc: openstack-secur...@lists.openstack.org
  openstack-secur...@lists.openstack.org
 Subject: Re: [Openstack-security] Cryptographic Export Controls and
  OpenStack
 Message-ID: 53506362.3020...@redhat.com
 Content-Type: text/plain; charset=windows-1252

 On 04/16/2014 10:28 AM, Bryan D. Payne wrote:
 I'm not aware of a list of the specific changes, but this seems quite
 related to the work that Nathan has started playing with... discussed on
 his blog here:

 https://blog-nkinder.rhcloud.com/?p=51

 This is definitely related to the security audit effort that I'm
 driving.  It's hard to make recommendations on configurations and
 deployment architectures from a security perspective when we don't even
 have a clear picture of the current state of things in the code from
 a security standpoint.  This clear picture is what I'm trying to get to
 right now (along with keeping this picture up to date so it doesn't get
 stale).

 Once we know things such as what crypto algorithms are used and how
 sensitive data is being handled, we can see what is configurable and
 make recommendations.  We'll surely find that not everything is
 configurable and sensitive data isn't well protected in areas, which are
 things that we can turn into blueprints and bugs and work on improving
 in development.

 It's still up in the air as to where this information should be
 published once it's been compiled.  It might be on the wiki, or possibly
 in the documentation (Security Guide seems like a likely candidate).
 There was some discussion of this with the PTLs at the project meeting
 two weeks ago:


 http://eavesdrop.openstack.org/meetings/project/2014/project.2014-04-08-21
 .03.html

 I'm not so worried myself about where this should be published, as that
 doesn't matter if we don't have accurate and comprehensive information
 collected in the first place.  My current focus is on the collection and
 maintenance of this info on a project by project basis.  Keystone and
 Heat have started, which is great!:

  https://wiki.openstack.org/wiki/Security/Icehouse/Keystone
  https://wiki.openstack.org/wiki/Security/Icehouse/Heat

 If any other OSSG members are developers on any of the projects, it
 would be great if you could help drive this effort within your project.

 Thanks,
 -NGK

 Cheers,
 -bryan



 On Tue, Apr 15, 2014 at 1:38 AM, Clark, Robert Graham
 robert.cl...@hp.com mailto:robert.cl...@hp.com wrote:

 Does anyone have a documented run-down of changes that must be made
 to OpenStack configurations to allow them to comply with EAR
 requirements?
 http://www.bis.doc.gov/index.php/policy-guidance/encryption

 It seems like something we should consider putting into the security
 guide. I realise that most of the time it's just "don't use your own
 libraries, call to others, make algorithms configurable" etc but
 it's a question I'm seeing more and more; the security guide's
 compliance section looks like a great place to have something about
 EAR.

 -Rob

 ___
 Openstack-security mailing list
 openstack-secur...@lists.openstack.org
 mailto:openstack-secur...@lists.openstack.org
 
 

Re: [openstack-dev] [Horizon] [UX] Summary of Horizon Usability Testing and plan for Summit session

2014-04-25 Thread Jason Rist
On 04/24/2014 09:10 AM, Liz Blanchard wrote:
 Hi All,
 
 One of the sessions that I proposed for the Horizon track is to review the 
 results that we got from the Usability Test that was run on Horizon in early 
 March. I wanted to share some of the background of this test and the high 
 level results with you all so that we can start the conversation on this list 
 and then continue with agreeing on next steps during Summit. There will be a 
 few follow-ups to this e-mail from myself and Jacki Bauer which will propose 
 some potential solutions to the high priority findings, so be on the look out 
 for those :)
 
 ---Quick overview of Usability Testing...What is it? Why do it?---
 Usability testing is a technique used in user-centered interaction design to 
 evaluate a product by testing it on users. This can be seen as an 
 irreplaceable usability practice, since it gives direct input on how real 
 users use the system.
 
 ---Who was involved? What did we need to do to prepare?---
 A number of user experience engineers from a bunch of different companies got 
 together and helped plan for a usability test that would focus on 
 self-service end users and the ease of use of the Horizon Dashboard as it 
 exists for the Icehouse release. This effort spun off from the Persona work 
 that we've been doing together. Some folks in the group are just getting into 
 contributing to the design of OpenStack and doing a baseline usability test 
 of Horizon was a great introduction to how the usability of the Horizon UI 
 could continue to be improved based on users' direct feedback.
 
 What we needed to get done before actually jumping into the testing:
 1) Agree on the goals of the testing.
 2) Create a screener and send out to the OpenStack community.
 3) Create a list of tasks that the user would complete and give feedback 
 on during the testing.
 
 ---Who we tested?---
 6 people who we considered to be self-service end users based on their 
 screener responses.
 
 ---What were the tasks that were tested?---
 
 Scenario 1: Launching an instance
 Individual Tasks:
 -Create a security key pair.
 -Create a network.
 -Boot from cirros image.
 -Confirm that instance was launched successfully.
  
 Scenario 2: Understand how many vCPUs are currently in use vs. how much quota 
 is left.
 Individual Tasks:
 -Check out Overview Page to review current quota use/limit details.
  
 Scenario 3: Take a snapshot of an Instance to save for later use.
 Individual Tasks:
 -Either Launch an instance successfully, or identify a running instance in 
 the instance view.
 -Choose to take a snapshot of that instance.
 -Confirm that the snapshot was successful.
  
 Scenario 4: Launch an instance from a snapshot.
 Individual Tasks:
 -Choose to either create an instance and boot from a snapshot, or identify a 
 snapshot to create an instance from.
 -Confirm that the instance was started successfully.
  
 Scenario 5: Launch an instance that boots from a specific volume.
 Individual Tasks:
 -Create a volume.
 -Launch an instance using Volume X as a boot source.
 
 ---When and how we ran the tests?---
 These hour-long tests were run over the first two weeks of March 2014. We 
 focused on the latest bits that could be seen in the Icehouse release. The 
 moderator (a UX researcher from HP) would ask the questions and the rest of 
 the group would vigorously take notes :) After all of the testing was 
 complete, we spent some time together debriefing and agreeing on the 
 prioritized list of updates that would be best to make to the Horizon UI 
 based on user feedback.
 
 ---What were the findings?---
 
 High Priority
 * Improve error messages and error message catalog.
 * Fix Launch Instance workflow for end user and power user.
 * Improve informational help information about form fields.
 * Fix terminology. (e.g. launch instance, boot, shutoff, shutdown, etc.)
 * Show details for key pair and network in Launch Instance workflow.
 * Recommend a new Information Architecture.
  
 Medium Priority
 * Create UI guidelines (of best practices) for Developers to use.
 * Improve Online Help.
 * Provide a clearer indication that the application is working when a 
 button is clicked and the application doesn’t respond immediately.
 * Ensure consistency of network selection. (Drag and drop of networks is very 
 inconsistent with the other pieces of the launch instance modal.)
 * Create consistency of visualizations and selection of action button 
 recommendations on the Instance table.
 * Suggest defaults for the form entry fields.
 * Provide Image information details during image selection.
  
 Low Priority
 * Allow users to edit the network of an instance after launching it.
 * Resolve confusion around the split inline actions button.
 * Explain what the Instance Boot Source field in the Create Instance modal means.
 * Provide description/high level information about flavors for flavor 
 selection.
 * Make sorting clearer visually.
 * Provide solution for subnet 

[openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-25 Thread Jay Pipes
Hi Stackers,

When recently digging in to the new server group v3 API extension
introduced in Icehouse, I was struck with a bit of cognitive dissonance
that I can't seem to shake. While I understand and support the idea
behind the feature (affinity and anti-affinity scheduling hints), I
can't help but feel the implementation is half-baked and results in a
very awkward user experience.

The use case here is very simple: 

Alice wants to launch an instance and make sure that the instance does
not land on a compute host that contains other instances of that type.

The current user experience is that the user creates a server group
like so:

nova server-group-create $GROUP_NAME --policy=anti-affinity

and then, when the user wishes to launch an instance and make sure it
doesn't land on a host with another of that instance type, the user does
the following:

nova boot --group $GROUP_UUID ...

There are myriad problems with the above user experience and
implementation. Let me explain them.

1. The user isn't creating a server group when they issue a nova
server-group-create call. They are creating a policy and calling it a
group. Cognitive dissonance results from this mismatch.

2. There's no way to add an existing server to this group. What this
means is that the user needs to effectively have pre-considered their
environment and policy before ever launching a VM. To realize why this
is a problem, consider the following:

 - User creates three VMs that consume high I/O utilization
 - User then wants to launch three more VMs of the same kind and make
sure they don't end up on the same hosts as the others

No can do, since the first three VMs weren't started using a --group
scheduler hint.

3. There's no way to remove members from the group

4. There's no way to manually add members to the server group

5. The act of telling the scheduler to place instances near or away from
some other instances has been hidden behind the server group API, which
means that users doing a nova help boot will see a --group option that
doesn't make much sense, as it doesn't describe the scheduling policy
activity.

Proposal


I propose to scrap the server groups API entirely and replace it with a
simpler way to accomplish the same basic thing.

Create two new options to nova boot:

 --near-tag TAG
and
 --not-near-tag TAG

The first would tell the scheduler to place the new VM near other VMs
having a particular tag. The latter would tell the scheduler to place
the new VM *not* near other VMs with a particular tag.

What is a tag? Well, currently, since the Compute API doesn't have a
concept of a single string tag, the tag could be a key=value pair that
would be matched against the server extra properties.

Once a real user-controlled simple string tags system is added to the
Compute API, a tag would be just that: a simple string that may be
attached to or detached from some object (in this case, a server object).

How does this solve all the issues highlighted above? In order, it
solves the issues like so:

1. There's no need to have any server group object any more. Servers
have a set of tags (key/value pairs in v2/v3 API) that may be used to
identify a type of server. The activity of launching an instance would
now have options for the user to indicate their affinity preference,
which removes the cognitive dissonance that happens due to the user
needing to know what a server group is (a policy, not a group).

2. Since there is no more need to maintain a separate server group
object, if a user launched 3 instances and then wanted to make sure that
3 new instances don't end up on the same hosts, all the user needs to do
is tag the existing instances with a tag, and issue a call to:

 nova boot --not-near-tag $TAG ...

and the affinity policy is applied properly.
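
As a sketch of that proposed workflow (nova meta is existing novaclient
syntax for setting instance metadata; --not-near-tag is the proposed,
not-yet-existing option, and the server/tag names are illustrative):

 # tag the three existing high-I/O servers
 nova meta vm-1 set workload=high-io

 # then boot the new instance away from anything so tagged
 nova boot --not-near-tag workload=high-io ...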

3. Removal of members of the server group is no longer an issue.
Simply untag a server to remove it from the set of servers you wish to
use in applying some affinity policy.

4. Likewise, since there's no server group object, the way to relate an
existing server to another is to simply place a tag on the server.

5. The act of applying affinity policies is now directly related to the
act of launching instances, which is where it should be.

I'll type up a real blueprint spec for this, but wanted to throw the
idea out there, since it's something that struck me recently when I
tried to explain the new server groups feature.

Thoughts and feedback welcome,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-25 Thread Chris Behrens

On Apr 25, 2014, at 2:15 PM, Jay Pipes jaypi...@gmail.com wrote:

 Hi Stackers,
 
 When recently digging in to the new server group v3 API extension
 introduced in Icehouse, I was struck with a bit of cognitive dissonance
 that I can't seem to shake. While I understand and support the idea
 behind the feature (affinity and anti-affinity scheduling hints), I
 can't help but feel the implementation is half-baked and results in a
 very awkward user experience.

I agree with all you said about this.

 Proposal
 
 
 I propose to scrap the server groups API entirely and replace it with a
 simpler way to accomplish the same basic thing.
 
 Create two new options to nova boot:
 
 --near-tag TAG
 and
 --not-near-tag TAG
 
 The first would tell the scheduler to place the new VM near other VMs
 having a particular tag. The latter would tell the scheduler to place
 the new VM *not* near other VMs with a particular tag.
 
 What is a tag? Well, currently, since the Compute API doesn't have a
 concept of a single string tag, the tag could be a key=value pair that
 would be matched against the server extra properties.

You can actually already achieve this behavior… although with a little more 
work. There’s the Affinity filter which allows you to specify a 
same_host/different_host scheduler hint where you explicitly specify the 
instance uuids you want…  (the extra work is having to know the instance uuids).
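
For reference, that existing approach looks roughly like this, assuming the
SameHostFilter/DifferentHostFilter are enabled in the scheduler (the UUID is
illustrative):

 nova boot --image cirros --flavor m1.tiny \
     --hint different_host=e4cf5dea-4831-49a1-867d-e263f2579dd0 my-server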

But yeah, I think this makes more sense to me.

- Chris



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] looking to add support for server groups to heat...any comments?

2014-04-25 Thread Zane Bitter

On 25/04/14 16:07, Chris Friesen wrote:

On 04/25/2014 12:00 PM, Zane Bitter wrote:

On 25/04/14 13:50, Chris Friesen wrote:



In the nova boot command we pass the group uuid like this:

--hint group=e4cf5dea-4831-49a1-867d-e263f2579dd0

If we were to make use of the scheduler hints, how would that look?
Something like this?  (I'm not up to speed on my YAML, so forgive me if
this isn't quite right.)  And how would this look if we wanted to
specify other scheduler hints as well?

   cirros_server1:
 type: OS::Nova::Server
 properties:
   name: cirros1
   image: 'cirros'
   flavor: 'm1.tiny'
   scheduler_hints: {group: { get_resource: my_heat_group }}


Something like that (I don't think you need the quotes around group).
Or, equivalently:

   cirros_server1:
 type: OS::Nova::Server
 properties:
   name: cirros1
   image: 'cirros'
   flavor: 'm1.tiny'
   scheduler_hints:
 group: { get_resource: my_heat_group }



Okay...assuming it works like that then that looks fine to me.


Cool, +1 for that then.


If we go this route then the changes are confined to a single new file.
  Given that, do we need a blueprint or can I just submit the code for
review once I port it to the current codebase?


I guess wearing my PTL hat I ought to say that you should still raise a 
blueprint (no real content necessary though, or just link to this thread).


Wearing my core team hat, I personally couldn't care less either way ;) 
The change is self-explanatory and you've already done a good job of 
consulting on the changes before submitting them.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] design summit sessions

2014-04-25 Thread Mark Washenberger
The first draft of the Glance design summit sessions has been posted at
http://junodesignsummit.sched.org/overview/type/glance. We may still
shuffle the times and the exact split of the topics around a bit if there
are opportunities for improvement.

I would like to ask at this time if key contributors to these sessions
would please let me know if this schedule creates any significant time
conflicts for you.

Otherwise, session leads, please start assembling etherpads for your
sessions and invite comments from some other Glance folks. Our goal at this
point is to ensure that our design summit sessions are as engaging,
relevant, and productive as possible. We will discuss progress on this
front at the upcoming glance team meeting. [1]

Thanks!


[1] -
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Glance+Meetingiso=20140501T20ah=1
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] Proposed tools and workflows for OpenStack User Experience contributors

2014-04-25 Thread Toshiyuki Hayashi
Hi,

Thank you for bringing up this topic. It is a big step forward
for the OpenStack UX process.
(It's been a while since I've posted to the ML, so I'm kinda hesitant.., haha)

 Wiki
+1

Mailing list - [UX]
+1; for general discussion, it would be the best way to
communicate with the community.

 Discussion forum - (terminate)
Askbot doesn't work well, but as Jacki said, I think the ML
wouldn't work well for all of the UX discussion either.
Just as Gerrit is used for code review, it is better to have another
discussion tool for detailed UI/UX design discussion.
At least Google+ worked well compared to Askbot. I thought StoryBoard
would be the tool, won't it?

 IRC meetings
Big +1 for this. It's gonna help keep everyone on the same page.
It is also important for making the UX activity visible.


 Launchpad (StoryBoard in the future) and Wishlist (currently Launchpad)
+1

 Storage place (GitHub) and Templates library
I think these two things can be the same; what's the difference? Also I like
this idea of using GitHub: it is easy to track
the activity, and recently many designers have gotten used to using it.

Thanks again for bringing up this topic!

Regards,
Toshi



On Thu, Apr 24, 2014 at 9:30 AM, Jacki Bauer jacki.ba...@rackspace.com wrote:
 Thanks for starting this conversation. It's really important and there's a 
 ton of work to be done!


 On Apr 23, 2014, at 9:46 AM, Liz Blanchard lsure...@redhat.com wrote:


 On Apr 23, 2014, at 8:13 AM, Jaromir Coufal jcou...@redhat.com wrote:

 Dear OpenStack UX community and everybody else who is interested in 
 OpenStack's user experience,


 Thanks very much for taking the time to write this up, Jarda. I think this 
 would be an awesome list of topics to cover in the User Experience 
 cross-project sessions scheduled for the Summit on Tuesday afternoon. What 
 do others think? I'll also add some thoughts below to start to drive the 
 conversation further on this list.

 When there are more contributors appearing over time, I would like us to 
 establish a formal process of how the UX work should be organized. 
 Therefore I am suggesting a few tools below for us to be more effective, 
 transparent and to provide a single way to all contributors so that it is 
 easy for everybody to start, to contribute and to get oriented in our world.


 Wiki
 
 = introduction to OpenStack User Experience
 = how to contribute guide
 = documentation of processes
 = redirecting people to the right places of their interest (IRC, Launchpad, 
 etc)

 +1. It would be awesome to include some basics about using the mailing list 
 for communication along with IRC and how to balance the two.

 +1



 Mailing list - [UX]
 ---
 = discussions about various issues
 = openstack-dev mailing list, using [UX] tag in the subject
 + brings more attention to the UX issues
 + not separated from other OpenStack's projects
 + threading is already there (e-mail clients)
 + no need for setting up and maintaining additional server to run our own 
 forum
 - requires to store attachments somewhere else (some other server)
   ... similar case with current askbot anyway
 - requires contributors to register to the openstack-dev mailing list
   ... each contributor should do that anyway

 A big +1 to this. Currently there is a mailing list called 
 openstack-personas that has been meant just for the persona effort, but I've 
 been trying to get folks who have been involved in that effort to be sure to 
 subscribe to this list and start generating any conversations that are pure 
 UX here on the dev list instead of that personas mailing list. The personas 
 mailing list was really just meant to kick off all of the work that would be 
 done and then we'd bring high level details to this list anyways. Having 
 more or less all UX conversations in one place makes way more sense to me.

 There are a lot of discussions on the persona list that I don't think belong 
 on dev - things like the logistics of planning user research, methodologies 
 and so on. There will also be discussions that require feedback from 
 designers, but would really confuse devs (designs in early stages). One 
 negative impact of using the dev list is that the content we want devs to 
 respond to - research results, designs in later stages,etc - might be ignored 
 or missed because of the other 'noise'. Could we use the dev list for 
 anything we want wider community feedback on, and use another tool (ux 
 mailing list, invision, something else) for the rest of the conversations?



 Discussion forum - (terminate)
 --
 + more interactive
 + easier for newcomers
 - separating UX outside the OpenStack world
 - we haven't found any other suitable tool for discussions yet (current 
 AskBot is not doing very well)
 - in order not to fragment discussions into multiple places, I am 
 suggesting termination of current AskBot and keeping discussions in mailing 
 list

 Another idea would be to use the general OpenStack Askbot, 

[openstack-dev] [Glance] Announcing glance-specs repo

2014-04-25 Thread Mark Washenberger
Hey hey glancy glance,

Recently glance drivers made a somewhat snap decision to adopt the -specs
gerrit repository approach for new blueprints.

Pursuant to that, Arnaud has been kind enough to put forward some infra
patches to set things up. After the patches to create the repo [1] and
enable tests [2] land, we will need one more patch to add the base
framework to the glance-specs repo, so there is a bit of time needed before
people will be able to submit their specs.

I'd like to see us use this system for Juno blueprints. I think it would
also be very helpful if any blueprints being discussed at the design summit
could adopt this format in time for review prior to the summit (which is
just over two weeks away). I understand that this is all a bit late in the
game to make such requirements, so obviously we'll try to be very
understanding of any difficulties.

Additionally, if any glance folks have serious reservations about adopting
the glance-specs repo, please speak up now.

Thanks again to Arnaud for spearheading this effort. And thanks to the Nova
crew for paving a nice path for us to follow.

Cheers,
markwash


[1] - https://review.openstack.org/#/c/90461/
[2] - https://review.openstack.org/#/c/90469/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-25 Thread Steve Gordon
- Original Message -
 5. The act of applying affinity policies is now directly related to the
 act of launching instances, which is where it should be.

Well, that may be another discussion :). Something I have been pondering of 
late is the intersection of the server groups feature with the find host and 
evacuate instance proposal that allows evacuation without specifying a target 
host - instead re-scheduling the instance [1]. 

User expectations, I think, in this case would be that the group policies 
(however implemented) would stay with the VM throughout its lifecycle (or at 
least until the policy was explicitly removed/changed) and be taken into 
account when evacuating in this fashion. With the current model this 
expectation appears to be met for anti-affinity, but when it comes to affinity 
I would expect that the scheduler will place the evacuated instance right back 
where it came from as it doesn't have enough understanding to attempt to 
evacuate the entire group at once.

-Steve

[1] https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [UX] Summary of Horizon Usability Testing and plan for Summit session

2014-04-25 Thread Toshiyuki Hayashi
Hi Liz,

Thank you for sharing this; it's really interesting and insightful.
This activity is also really important for steering Horizon's UI/UX
work in the right direction. I think Horizon's UI still needs more
improvement for production and commercial use, and this is a really
efficient way to move forward.

I'm really looking forward to this session.

Thanks,
Toshi


On Fri, Apr 25, 2014 at 2:04 PM, Jason Rist jr...@redhat.com wrote:
 On 04/24/2014 09:10 AM, Liz Blanchard wrote:
 Hi All,

 One of the sessions that I proposed for the Horizon track is to review the 
 results that we got from the Usability Test that was run on Horizon in early 
 March. I wanted to share some of the background of this test and the high 
 level results with you all so that we can start the conversation on this 
 list and then continue with agreeing on next steps during Summit. There will 
 be a few follow-ups to this e-mail from myself and Jacki Bauer which will 
 propose some potential solutions to the high priority findings, so be on the 
 lookout for those :)

 ---Quick overview of Usability Testing...What is it? Why do it?---
 Usability testing is a technique used in user-centered interaction design to 
 evaluate a product by testing it on users. This can be seen as an 
 irreplaceable usability practice, since it gives direct input on how real 
 users use the system.

 ---Who was involved? What did we need to do to prepare?---
 A number of user experience engineers from a bunch of different companies 
 got together and helped plan for a usability test that would focus on 
 self-service end users and the ease of use of the Horizon Dashboard as it 
 exists for the Icehouse release. This effort spun off from the Persona work 
 that we've been doing together. Some folks in the group are just getting 
 into contributing to the design of OpenStack and doing a baseline usability 
 test of Horizon was a great introduction to how the usability of the Horizon 
 UI could continue to be improved based on users' direct feedback.

 What we needed to get done before actually jumping into the testing:
 1) Agree on the goals of the testing.
 2) Create a screener and send out to the OpenStack community.
 3) Create a list of tasks that the user would complete and give feedback 
 on during the testing.

 ---Who we tested?---
 6 people who we considered to be self-service end users based on their 
 screener responses.

 ---What were the tasks that were tested?---

 Scenario 1: Launching an instance
 Individual Tasks:
 -Create a security key pair.
 -Create a network.
 -Boot from cirros image.
 -Confirm that instance was launched successfully.

 Scenario 2: Understand how many vCPUs are currently in use vs. how much 
 quota is left.
 Individual Tasks:
 -Check out Overview Page to review current quota use/limit details.

 Scenario 3: Take a snapshot of an Instance to save for later use.
 Individual Tasks:
 -Either Launch an instance successfully, or identify a running instance in 
 the instance view.
 -Choose to take a snapshot of that instance.
 -Confirm that the snapshot was successful.

 Scenario 4: Launch an instance from a snapshot.
 Individual Tasks:
 -Choose to either create an instance and boot from a snapshot, or identify a 
 snapshot to create an instance from.
 -Confirm that the instance was started successfully.

 Scenario 5: Launch an instance that boots from a specific volume.
 Individual Tasks:
 -Create a volume.
 -Launch an instance using Volume X as a boot source.

 ---When and how we ran the tests?---
 These hour long tests were run over the first two weeks of March 2014. We 
 focused on the latest bits that could be seen in the Icehouse release. The 
 moderator (a UX researcher from HP) would ask the questions and the rest of 
 the group would vigorously take notes :) After all of the testing was 
 complete, we spent some time together debriefing and agreeing on the 
 prioritized list of updates that would be best to make to the Horizon UI 
 based on user feedback.

 ---What were the findings?---

 High Priority
 * Improve error messages and error message catalog.
 * Fix Launch Instance workflow for end user and power user.
 * Improve informational help information about form fields.
 * Fix terminology. (e.g. launch instance, boot, shutoff, shutdown, etc.)
 * Show details for key pair and network in Launch Instance workflow.
 * Recommend a new Information Architecture.

 Medium Priority
 * Create UI guidelines (of best practices) for Developers to use.
 * Improve Online Help.
 * Provide a clearer indication that the application is working when a 
 button has been clicked and the application doesn't respond immediately.
 * Ensure consistency of network selection. (Drag and drop of networks is 
 very inconsistent with the other pieces of the launch instance modal.)
 * Create consistency of visualizations and selection of action button 
 recommendations on the Instance table.
 * Suggest defaults for the forms entry fields.
 * Provide Image 

Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-25 Thread Day, Phil
Hi Jay,

I'm going to disagree with you on this one, because:

i) This is a feature that was discussed in at least one if not two Design 
Summits and went through a long review period, it wasn't one of those changes 
that merged in 24 hours before people could take a good look at it.  Whatever 
you feel about the implementation,  it is now in the API and we should assume 
that people have started coding against it.  I don't think it gives any 
credibility to Openstack as a platform if we yank features back out just after 
they've landed. 

ii) Server Group - It's a way of defining a group of servers, and the initial 
thing (only thing right now) you can define for such a group is the affinity or 
anti-affinity for scheduling.  Maybe in time we'll add other group properties 
or operations - like delete all the servers in a group (I know some QA folks 
that would love to have that feature).  I don't see why it shouldn't be 
possible to have a server group that doesn't have a scheduling policy 
associated to it.   I don't see any cognitive dissonance here - I think you're 
just assuming that the only reason for being able to group servers is for 
scheduling.

iii) If the issue is that you can't add or remove servers from a group, then 
why don't we add those operations to the API (you could add a server to a group 
provided doing so doesn't break any policy that might be associated with the 
group).   Seems like a useful addition to me.

iv) Since the user created the group, and chose a name for it that is 
presumably meaningful, then I don't understand why you think --group XXX 
isn't going to be meaningful to that same user ?

So I think there are a bunch of API operations missing, but I don't see any 
advantage in throwing away what's now in place and  replacing it with a tag 
mechanism that basically says everything with this tag is in a sort of group.

Cheers,
Phil


PS: Congrats on the TC election


 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: 25 April 2014 22:16
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [nova] Proposal: remove the server groups feature
 
 Hi Stackers,
 
 When recently digging in to the new server group v3 API extension
 introduced in Icehouse, I was struck with a bit of cognitive dissonance that I
 can't seem to shake. While I understand and support the idea behind the
 feature (affinity and anti-affinity scheduling hints), I can't help but feel 
 the
 implementation is half-baked and results in a very awkward user experience.
 
 The use case here is very simple:
 
 Alice wants to launch an instance and make sure that the instance does not
 land on a compute host that contains other instances of that type.
 
 The current user experience is that the user creates a server group
 like so:
 
 nova server-group-create $GROUP_NAME --policy=anti-affinity
 
 and then, when the user wishes to launch an instance and make sure it
 doesn't land on a host with another of that instance type, the user does the
 following:
 
 nova boot --group $GROUP_UUID ...
 
 There are myriad problems with the above user experience and
 implementation. Let me explain them.
 
 1. The user isn't creating a server group when they issue a nova server-
 group-create call. They are creating a policy and calling it a group. 
 Cognitive
 dissonance results from this mismatch.
 
 2. There's no way to add an existing server to this group. What this means
 is that the user needs to effectively have pre-considered their environment
 and policy before ever launching a VM. To realize why this is a problem,
 consider the following:
 
  - User creates three VMs that consume high I/O utilization
  - User then wants to launch three more VMs of the same kind and make
 sure they don't end up on the same hosts as the others
 
 No can do, since the first three VMs weren't started using a --group
 scheduler hint.
 
 3. There's no way to remove members from the group
 
 4. There's no way to manually add members to the server group
 
 5. The act of telling the scheduler to place instances near or away from some
 other instances has been hidden behind the server group API, which means
 that users doing a nova help boot will see a --group option that doesn't make
 much sense, as it doesn't describe the scheduling policy activity.
 
 Proposal
 
 
 I propose to scrap the server groups API entirely and replace it with a 
 simpler
 way to accomplish the same basic thing.
 
 Create two new options to nova boot:
 
  --near-tag TAG
 and
  --not-near-tag TAG
 
 The first would tell the scheduler to place the new VM near other VMs
 having a particular tag. The latter would tell the scheduler to place the 
 new
 VM *not* near other VMs with a particular tag.
 
 What is a tag? Well, currently, since the Compute API doesn't have a
 concept of a single string tag, the tag could be a key=value pair that would 
 be
 matched against the server extra properties.
 
 Once a real 

Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-25 Thread Jay Pipes
On Fri, 2014-04-25 at 22:00 +, Day, Phil wrote:
 Hi Jay,
 
 I'm going to disagree with you on this one, because:

No worries, Phil, I expected some dissention and I completely appreciate
your feedback and perspective :)

 i) This is a feature that was discussed in at least one if not two Design 
 Summits and went through a long review period, it wasn't one of those changes 
 that merged in 24 hours before people could take a good look at it.

Completely understood. That still doesn't mean we can't propose to get
rid of it early instead of letting it sit around when an alternate
implementation would be better for the user of OpenStack.

   Whatever you feel about the implementation,  it is now in the API and we 
 should assume that people have started coding against it.

Sure, maybe. AFAIK, it's only in the v2 API, though, not in the v3 API
(sorry, I made a mistake about that in my original email). Is there a
reason it wasn't added to the v3 API?

   I don't think it gives any credibility to Openstack as a platform if we 
 yank features back out just after they've landed.

Perhaps not, though I think we have less credibility if we don't
recognize when a feature isn't implemented with users in mind and leave
it in the code base to the detriment and confusion of users. We
absolutely must, IMO, as a community, be able to say this isn't right
and have a path for changing or removing something.

If that path is deprecation vs outright removal, so be it, I'd be cool
with that. I'd just like to nip this anti-feature in the bud early so
that it doesn't become the next feature like file-injection to persist
in Nova well after its time has come and passed.

 ii) Sever Group - It's a way of defining a group of servers, and the initial 
 thing (only thing right now) you can define for such a group is the affinity 
 or anti-affinity for scheduling.

We already had ways of defining groups of servers. This new feature
doesn't actually define a group of servers. It defines a policy, which
is not particularly useful, as it's something that is better specified
at the time of launching.

   Maybe in time we'll add other group properties or operations - like delete 
 all the servers in a group (I know some QA folks that would love to have 
 that feature).

We already have the ability to define a group of servers using key=value
tags. Deleting all servers in a group is a three-line bash script that
loops over the results of a nova list command and calls nova delete.
Trust me, I've done group deletes in this way many times.
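
Roughly like this, as a sketch (the metadata key, image and flavor names
are made up, and it parses the CLI table output rather than calling the
API directly):

    # Tag servers at boot time with a key=value pair
    nova boot --image cirros --flavor m1.tiny --meta group=high-io vm1

    # ...and later delete everything carrying that tag: pull the server
    # IDs out of the table output and check each one's metadata
    for id in $(nova list | grep -oE '[0-9a-f-]{36}'); do
        nova show $id | grep -q "group.*high-io" && nova delete $id
    done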

   I don't see why it shouldn't be possible to have a server group that 
 doesn't have a scheduling policy associated to it.

I don't think the grouping of servers should have *anything* to do with
scheduling :) That's the point of my proposal. Servers can and should be
grouped using simple tags or key=value pair tags.

The grouping of servers together doesn't have anything of substance to
do with scheduling policies.

 I don't see any cognitive dissonance here - I think you're just assuming 
 that the only reason for being able to group servers is for scheduling.

Again, I don't think scheduling and grouping of servers has anything to
do with each other, thus my proposal to remove the relationship between
groups of servers and scheduling policies, which is what the existing
server group API and implementation does.

 iii) If the issue is that you can't add or remove servers from a group, then 
 why don't we add those operations to the API (you could add a server to a 
 group providing doing so  doesn't break any policy that might be associated 
 with the group). 

We already have this ability today, thus my proposal to get rid of
server groups.

   Seems like a useful addition to me.

It's an addition that isn't needed, as we already have this today.

 iv) Since the user created the group, and chose a name for it that is 
 presumably meaningful, then I don't understand why you think --group XXX 
 isn't going to be meaningful to that same user ?

See point above about removing the unnecessary relationship between
grouping of servers and scheduling policies.

 So I think there are a bunch of API operations missing, but I don't see any 
 advantage in throwing away what's now in place and  replacing it with a tag 
 mechanism that basically says everything with this tag is in a sort of 
 group.

We already have the tag group mechanism in place, that's kind of what
I've been saying...

 PS: Congrats on the TC election

Cheers, appreciated. Looking forward to healthy debates and buying you a
few beers at the summit :)

best,
-jay

 
  -Original Message-
  From: Jay Pipes [mailto:jaypi...@gmail.com]
  Sent: 25 April 2014 22:16
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] [nova] Proposal: remove the server groups feature
  
  Hi Stackers,
  
  When recently digging in to the new server group v3 API extension
  introduced in Icehouse, I was struck with a bit of cognitive 

Re: [openstack-dev] [Heat] [Glance] How about managing heat template like flavors in nova?

2014-04-25 Thread Adrian Otto
Alexander,

I put a bunch of feedback into the Google doc, and request edit access to it so 
I can make the language more crisp in a bunch of areas.

Adrian

On Apr 25, 2014, at 4:12 AM, Alexander Tivelkov ativel...@mirantis.com wrote:

 Hi Randall,
 
 The current design document on artifacts in glance is available here [1].
 It was just published, and we are currently gathering feedback on it.
 Please feel free to add comments to the document or write any
 suggestions or questions to the ML.
 There was a little discussion at yesterday's IRC meeting ([2]); I've
 answered some questions there.
 I will be happy to answer any questions directly (I am ativelkov in
 IRC), or we can discuss the topic in more detail at the next Glance
 meeting next Thursday.
 
 Looking forward to collaborating with the Heat team on this topic.
 
 [1] 
 https://docs.google.com/document/d/1tOTsIytVWtXGUaT2Ia4V5PWq4CiTfZPDn6rpRm5In7U/edit?usp=sharing
 [2] 
 http://eavesdrop.openstack.org/meetings/glance/2014/glance.2014-04-24-14.01.log.html
 --
 Regards,
 Alexander Tivelkov
 
 
 On Mon, Apr 21, 2014 at 4:29 PM, Randall Burt
 randall.b...@rackspace.com wrote:
 We discussed this with the Glance community back in January and it was
 agreed that we should extend Glance's scope to include Heat templates as
 well as other artifacts. I'm planning on submitting some patches around this
 during Juno.
 
 Adding the Glance tag as this is relevant to them as well.
 
 
  Original message 
 From: Mike Spreitzer
 Date:04/19/2014 9:43 PM (GMT-06:00)
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Heat] How about managing heat template like
 flavors in nova?
 
 Gouzongmei gouzong...@huawei.com wrote on 04/19/2014 10:37:02 PM:
 
 We can supply APIs for getting, putting, adding and deleting current
 templates in the system, then when creating heat stacks, we just
 need to specify the name of the template.
 
 Look for past discussion of Heat Template Repository (Heater).  Here is part
 of it: https://wiki.openstack.org/wiki/Heat/htr
 
 Regards,
 Mike
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Glance][Solum] How about managing heat template like flavors in nova?

2014-04-25 Thread Adrian Otto
+Solum team.

We have plans to use this too.

Thanks,

Adrian

On Apr 25, 2014, at 4:03 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 Alexander,
 
 I put a bunch of feedback into the Google doc, and request edit access to it 
 so I can make the language more crisp in a bunch of areas.
 
 Adrian
 
 On Apr 25, 2014, at 4:12 AM, Alexander Tivelkov ativel...@mirantis.com 
 wrote:
 
 Hi Randall,
 
 The current design document on artifacts in glance is available here [1].
 It was just published, and we are currently gathering feedback on it.
 Please feel free to add comments to the document or write any
 suggestions or questions to the ML.
 There was a little discussion at yesterday's IRC meeting ([2]); I've
 answered some questions there.
 I will be happy to answer any questions directly (I am ativelkov in
 IRC), or we can discuss the topic in more detail at the next Glance
 meeting next Thursday.
 
 Looking forward to collaborating with the Heat team on this topic.
 
 [1] 
 https://docs.google.com/document/d/1tOTsIytVWtXGUaT2Ia4V5PWq4CiTfZPDn6rpRm5In7U/edit?usp=sharing
 [2] 
 http://eavesdrop.openstack.org/meetings/glance/2014/glance.2014-04-24-14.01.log.html
 --
 Regards,
 Alexander Tivelkov
 
 
 On Mon, Apr 21, 2014 at 4:29 PM, Randall Burt
 randall.b...@rackspace.com wrote:
 We discussed this with the Glance community back in January and it was
 agreed that we should extend Glance's scope to include Heat templates as
 well as other artifacts. I'm planning on submitting some patches around this
 during Juno.
 
 Adding the Glance tag as this is relevant to them as well.
 
 
  Original message 
 From: Mike Spreitzer
 Date:04/19/2014 9:43 PM (GMT-06:00)
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Heat] How about managing heat template like
 flavors in nova?
 
 Gouzongmei gouzong...@huawei.com wrote on 04/19/2014 10:37:02 PM:
 
 We can supply APIs for getting, putting, adding and deleting current
 templates in the system, then when creating heat stacks, we just
 need to specify the name of the template.
 
 Look for past discussion of Heat Template Repository (Heater).  Here is part
 of it: https://wiki.openstack.org/wiki/Heat/htr
 
 Regards,
 Mike
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] detailed git commit messages

2014-04-25 Thread Devananda van der Veen
Hi all,

We've all been pretty lax about the amount of detail that we put in commit
messages sometimes, and I'd like to change that as we start Juno
development. Why? Well, just imagine that, six months from now, you're
going to write a document describing *all* the changes in Juno, just based
on the commit messages...

The git commit message should be a succinct but complete description of the
changes in your patch set. If you can't summarize the change in a few
paragraphs, perhaps that's a sign the patch should be split up! So, I'm
going to start -1'ing patches if I don't think the commit message has
enough detail in it. I would like to encourage other cores to do the same.

What's enough detail? It's subjective, but there are some lengthy and
detailed guidelines here that everyone should be familiar with :)
  https://wiki.openstack.org/wiki/GitCommitMessages
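
As a purely illustrative sketch (the feature and bug number below are
made up), a commit message with enough detail might look like:

    Add periodic sync of node power states

    Previously, if a node's power state changed out-of-band, Ironic
    would keep reporting the stale state until the next explicit power
    action. This adds a periodic task to the conductor that polls each
    driver for the actual power state, updates the database when the
    two diverge, and logs a warning so operators can investigate.

    Closes-Bug: #1234567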


Cheers,
Devananda


(If English isn't your native language, feel free to ask in channel for a
little help writing the summary.)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-25 Thread Kyle Mestery
According to this page [1], update-device is supported from libvirt
0.8.0 onwards. So in theory, this should be working with the 0.9.8
version you have. If you continue to hit issues here Oleg, I'd suggest
sending an email to the libvirt mailing list with the specifics of the
problem. I've found in the past that there are lots of very helpful
people on that mailing list.

Thanks,
Kyle

[1] http://libvirt.org/sources/virshcmdref/html-single/#sect-update-device
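
For anyone following along, the sequence being attempted is roughly the
following (the domain and bridge names here are placeholders, and the
XML edit is a manual step):

    # Dump the running domain's XML and pull out the interface section
    virsh dumpxml instance-0000000a > dom.xml
    # copy the <interface type='bridge'>...</interface> element into
    # iface.xml and change <source bridge='br100'/> to the new bridge,
    # e.g. <source bridge='brq-12345'/>

    # Apply the updated definition to the live domain
    virsh update-device instance-0000000a iface.xml --live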

On Thu, Apr 24, 2014 at 7:42 AM, Oleg Bondarev obonda...@mirantis.com wrote:
 So here is the etherpad for the migration discussion:
 https://etherpad.openstack.org/p/novanet-neutron-migration
 I've also filed a design session on this:
 http://summit.openstack.org/cfp/details/374

 Currently I'm still struggling with instance vNic update, trying to move it
 from one bridge to another.
 Tried the following on ubuntu 12.04 with libvirt 0.9.8:
 https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sect-dynamic-vNIC.html
 virsh update-device shows success but nothing actually changes in the
 instance interface config.
 Going to try this with later libvirt version.

 Thanks,
 Oleg



 On Wed, Apr 23, 2014 at 3:24 PM, Rossella Sblendido rsblend...@suse.com
 wrote:


 Very interesting topic!
 +1 Salvatore

 It would be nice to have an etherpad to share the information and organize
 a plan. This way it would be easier for interested people  to join.

 Rossella


 On 04/23/2014 12:57 AM, Salvatore Orlando wrote:

 It's great to see that there is activity on the launchpad blueprint as
 well.
 From what I heard Oleg should have already translated the various
 discussion into a list of functional requirements (or something like that).

 If that is correct, it might be a good idea to share them with relevant
 stakeholders (operators and developers), define an actionable plan for Juno,
 and then distribute tasks.
 It would be a shame if it turns out several contributors are working on
 this topic independently.

 Salvatore


 On 22 April 2014 16:27, Jesse Pretorius jesse.pretor...@gmail.com wrote:

 On 22 April 2014 14:58, Salvatore Orlando sorla...@nicira.com wrote:

 From previous requirements discussions,


 There's a track record of discussions on the whiteboard here:
 https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-25 Thread Russell Bryant
On 04/25/2014 05:15 PM, Jay Pipes wrote:
 Thoughts and feedback welcome,

I'd love to talk about how to improve this functionality, while still
allowing for future policy expansion.

Can we punt this discussion to summit though?  I don't think we need
much time.  Perhaps over a break or something.  I'd like to be involved
with this discussion and decision, but I'm out for the next month or so
(but will be at the summit).

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] cinder not support query volume/snapshot with regular expression

2014-04-25 Thread Zhangleiqiang (Trump)
Hi, all:

I see Nova allows searching instances by the name, ip and ip6 fields, 
which can be a normal string or a regular expression:

[stack@leiqzhang-stack cinder]$ nova help list

List active servers.

Optional arguments:
--ip <ip-regexp>      Search with regular expression match by IP address
                      (Admin only).
--ip6 <ip6-regexp>    Search with regular expression match by IPv6 address
                      (Admin only).
--name <name-regexp>  Search with regular expression match by name.
--instance-name <name-regexp>
                      Search with regular expression match by server name
                      (Admin only).

I think this is also needed for Cinder when querying 
volumes/snapshots/backups by name. Any advice?
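
For now the closest workaround seems to be client-side filtering, e.g.
something like the sketch below (the name pattern is made up, and the
awk column position assumes the default table layout):

    # Regex-match anywhere in the table output
    cinder list | grep -E 'web-[0-9]+'

    # ...or match only the Name column
    cinder list | awk -F'|' '$4 ~ /web-[0-9]+/'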

--
zhangleiqiang (Trump)

Best Regards


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-25 Thread Mike Spreitzer
Jay Pipes jaypi...@gmail.com wrote on 04/25/2014 06:28:38 PM:

 On Fri, 2014-04-25 at 22:00 +, Day, Phil wrote:
  Hi Jay,
  
  I'm going to disagree with you on this one, because:
 
 No worries, Phil, I expected some dissention and I completely appreciate
 your feedback and perspective :)

I myself sit between the two camps on this one.  I share Jay's unhappiness 
with server groups, as they are today.  However, I see an evolutionary 
path forward from today's server groups to something that makes much more 
sense to me and my colleagues.  I do not see as clear a path forward from 
Jay's proposal, but am willing to think more about that.  I will start by 
outlining where I want to go, and then address the specific points that 
have been raised in this email thread so far.

I would like to see the OpenStack architecture have a place for what I 
have been calling holistic scheduling.  That is making a simultaneous 
scheduling decision about a whole collection of virtual resources of 
various types (Nova VMs, Cinder storage volumes, network bandwidth, ...), 
taking into account a rich composite policy statement.  This is not just a 
pipe dream, my group has been doing this for years.  What we are 
struggling with is finding an evolutionary path to a place where it can be 
done in an OpenStack context.  One part of the struggle is due to the fact 
that in our previous work the part that is analogous to Heat is not 
optional, while in OpenStack Heat is most definitely optional.  Some of 
the things I have written in the past have not clearly separated 
scheduling and Heat and left Heat optional, but please rest assured that I 
am making no proposal now to violate those things.  I see scheduling and 
orchestration as distinct functions; the potential for confusion arises 
because (a) holistic scheduling needs input that has some similarity to 
what you see in a Heat template today and (b) making the scheduling 
simultaneous requires moving it from its current place (downstream from 
orchestration) to an earlier place (upstream from orchestration).

The OpenStack community has historically used the word scheduling to 
refer to placement problems, always in the time-invariant 
now-and-forseeable future, and I am following that usage here.  Other 
communities consider scheduling to also include interesting variation 
over time, but I am not trying to bring that into this debate.  (Nor am I 
denying its interest and value, I am just trying to keep this discussion 
focused.)

The discussion in this email thread has recognized that scheduler hints 
are applied only at creation time today, but it has already been noted 
(e.g., in http://summit.openstack.org/cfp/details/99) that scheduling 
policy statements should be retained for the lifetime of the virtual 
resource.  That is true regardless of whether the policy statements come 
in through today's server groups, the alternate proposal from Jay Pipes, 
or some other alternative or evolution.

I agree with Jay that groups have no inherent connection to scheduling. My 
colleagues and I have found grouping to be a useful technique to make APIs 
and documents more concise, and we find a top-level group to be the 
natural scope for a simultaneous decision.  We have been working example 
problems with a non-trivial size and amount of structure; when you get 
beyond small simple examples you see the usefulness of grouping more 
clearly.  For a couple of examples, see a 3-tier web application in 
https://docs.google.com/drawings/d/1nridrUUwNaDrHQoGwSJ_KXYC7ik09wUuV3vXw1MyvlY 
and a deployment of an IBM product called Connections in 
https://docs.google.com/file/d/0BypF9OutGsW3ZUYwYkNjZGJFejQ (this latter 
example has been shorn of its networking policies, and is a literal 
abstract of something we did using software that could not cope with 
policies applied directly to virtual resources, so some of its groups are 
not well motivated --- but others *are*).  The groups are handy for making 
it possible to draw pictures without too many lines, and write documents 
that are readably concise.  But everything said with groups could be said 
without groups, if we allowed policy statements to be placed on virtual 
resources and on pairs of virtual resources --- it would just take a heck 
of a lot more policy statements.

If you want to make a simultaneous decision about several virtual 
resources, you need a description of all those virtual resources up-front. 
 So even in a totally Heat-free environment you find yourself wanting 
something that looks like a document or data structure describing multiple 
virtual resources --- and the policies that apply to them, and thus also 
the groups that allow for concise applications of policies; note also that 
the whole set of virtual resources involved is a group.

When you have an example of non-trivial size and structure, you generally 
do not want to make a change by a collection of atomic edits, each 
individually scheduled.