Re: [openstack-dev] Curvature and Donabe repos are now public!

2013-10-04 Thread Debojyoti Dutta
Edgar

Thanks for your feedback during its early days ;)

debo

On Thu, Oct 3, 2013 at 12:34 PM, Edgar Magana emag...@plumgrid.com wrote:
 Debo,

 Congratulations on this move! The entire Cisco team is doing awesome
 work around OpenStack.

 Cheers,

 Edgar

 On 10/3/13 11:43 AM, Debojyoti Dutta ddu...@gmail.com wrote:

Hi!

We @Cisco just made the following repos public
https://github.com/CiscoSystems/donabe
https://github.com/CiscoSystems/curvature

Donabe was pitched as a recursive container before Heat days.
Curvature is an alternative interactive GUI front end to OpenStack
that can handle virtual resources and templates, and can instantiate
Donabe workloads. The D3 + JS work was incorporated into Horizon. A
short demo was shown last summit and can be found at
http://www.openstack.org/summit/portland-2013/session-videos/presentation/
interactive-visual-orchestration-with-curvature-and-donabe

Congrats to the primary developers: @CaffeinatedBrad @John_R_Davidge
@Tehsmash_ @JackPeterFletch ... Special thanks to @lewtucker for
supporting this.

Hope this leads to more cool stuff for the OpenStack community!

--
-Debo~

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






-- 
-Debo~



Re: [openstack-dev] Nominating Zhi Yan Liu for glance-core

2013-10-04 Thread Flavio Percoco



On 03/10/13 00:25 -0400, Iccha Sethi wrote:

Hey,

I would like to nominate Zhi Yan Liu (lzydev) for glance core. I think Zhi has
been an active reviewer/contributor to the glance community [1] and has always 
been on top of reviews.


Big +1 from me!

--
@flaper87
Flavio Percoco



Re: [openstack-dev] Nominating Fei Long Wang for glance core

2013-10-04 Thread Flavio Percoco

On 03/10/13 00:28 -0400, Iccha Sethi wrote:

Hey,

I would like to nominate Fei Long Wang (flwang) for glance core. I think Fei has
been an active reviewer/contributor to the glance community [1] and has always 
been on top of reviews.


Absolute +1!

--
@flaper87
Flavio Percoco



Re: [openstack-dev] [nova] RFC: adding on_shared_storage field to instance

2013-10-04 Thread Cristian Tomoiaga
Hello Chris,

Just a note regarding this. I was thinking of using local plus shared
storage for an instance (e.g. root disk local and another disk as a cinder
volume).
If I understand this correctly, flagging the instance as having local
storage may not be such a good idea in this particular case, right?
Maybe root_on_local?

Regards



-- 
Regards,
Cristian Tomoiaga


Re: [openstack-dev] [nova] RFC: adding on_shared_storage field to instance

2013-10-04 Thread Caitlin Bestler
On Oct 3, 2013 1:45 PM, Chris Friesen chris.frie...@windriver.com wrote:

 On 10/03/2013 02:02 PM, Caitlin Bestler wrote:



 On October 3, 2013 12:44:50 PM Chris Friesen
 chris.frie...@windriver.com wrote:


 I was wondering if there is any interest in adding an
 on_shared_storage field to the Instance class.  This would be set
 once at instance creation time and we would then be able to avoid
 having the admin manually pass it in for the various API calls
 (evacuate/rebuild_instance/migration/etc.)

 It could even be set automatically for the case of booting off block
 storage, and maybe we add a config flag indicating that a given
 compute node is using shared storage for its instances.

 This would also allow for nova host-evacuate to work properly if
 some of the instances are on unshared storage and some are booting
 from block storage (which is shared).  As it stands, the host-evacuate
 command assumes that they're all the same.

 Thoughts?

 Chris


 *What* is on shared storage?

 The boot drive?
 A snapshot of the running VM?



Meaning that this is not an attribute of the instance; it is an attribute
of the Cinder drive, or more precisely of the Volume
Driver responsible for that drive.

I believe reporting of attributes that are of potential meaning to
users of a drive is a feature that should be added (and documented) for all
Volume Drivers. But drive vendors want one
place to report these things.

Further, the question can actually be complex. Is a thin local volume backed
by a remote volume local? If so, at what hit rate for the local cache?


[openstack-dev] [Climate] DB unit testing and the use of Fixture ?

2013-10-04 Thread Sylvain Bauza
Hi team,

As far as I can see, the current way to unit-test the DB APIs in OpenStack is 
either mocking the DB or using a tmp/RAM SQLite engine.
I went through Fixture SQLAlchemy DataSets [1] and would like to give it a try. 
Anyone shouting no?

Maybe other OpenStack folks could shed some light on the right way/best 
practices in OpenStack for unit-testing ORM calls?

Thanks,
-Sylvain

[1] : 
http://farmdev.com/projects/fixture/using-loadable-fixture.html#an-example-of-loading-data-using-sqlalchemy
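For comparison, the tmp/RAM SQLite approach can be sketched with nothing but the standard library. (The table and test below are purely illustrative; they are not Climate's real schema or DB API.)

```python
import sqlite3
import unittest


class TestLeaseDbApi(unittest.TestCase):
    """Each test gets a fresh in-memory database, so nothing leaks
    between tests and no cleanup of a real DB is needed."""

    def setUp(self):
        # ':memory:' creates a throwaway per-test SQLite engine.
        self.conn = sqlite3.connect(':memory:')
        self.conn.execute(
            'CREATE TABLE leases (id INTEGER PRIMARY KEY, name TEXT)')

    def tearDown(self):
        self.conn.close()

    def test_create_lease(self):
        self.conn.execute("INSERT INTO leases (name) VALUES ('lease-1')")
        rows = self.conn.execute('SELECT name FROM leases').fetchall()
        self.assertEqual([('lease-1',)], rows)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLeaseDbApi)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The Fixture DataSet approach layers declarative seed data on top of exactly this kind of throwaway engine, so the two options are not mutually exclusive.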


[openstack-dev] [savanna] team meeting minutes October 3

2013-10-04 Thread Sergey Lukjanov
Thanks to everyone who joined the Savanna meeting.

Here are the logs from the meeting:

Minutes: 
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-10-03-18.05.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-10-03-18.05.txt
Log: 
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-10-03-18.05.log.html

P.S. 0.3-rc1 - https://launchpad.net/savanna/0.3/0.3-rc1

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.




[openstack-dev] Hi All

2013-10-04 Thread Yamini
Hi All,
I am Yamini. I am new to OpenStack and would like to contribute my knowledge to 
the project.
Can anyone please guide me on how to get started?

Thanks,
Yamini.


Re: [openstack-dev] [Climate] DB unit testing and the use of Fixture ?

2013-10-04 Thread Dina Belova
I think that's not a bad idea, but I would prefer to get input from more
experienced OpenStack folks too.
So please, if you have ideas on this, share them.


On Fri, Oct 4, 2013 at 2:23 PM, Sylvain Bauza sylvain.ba...@bull.netwrote:

  Hi team,

 As far as I can see, the current way to unit-test the DB APIs in OpenStack
 is either mocking the DB or using a tmp/RAM SQLite engine.
 I went through Fixture SQLAlchemy DataSets [1] and would like to give it a try.
 Anyone shouting no?

 Maybe other OpenStack folks could shed some light on the right way/best
 practices in OpenStack for unit-testing ORM calls?

 Thanks,
 -Sylvain

 [1] :
 http://farmdev.com/projects/fixture/using-loadable-fixture.html#an-example-of-loading-data-using-sqlalchemy





-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


[openstack-dev] Keystone + AD / LDAP problem

2013-10-04 Thread Endre Karlson
Hi, I have a problem with my keystone where it doesn't honor the
user_id_attribute setting (set to 'sAMAccountName') under the [identity]
section. I have added a comment to an existing bug:
https://bugs.launchpad.net/keystone/+bug/1210141

Any clues on what I am doing wrong?

I am running Windows Server 2008 as the AD / LDAP server.
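One thing that may be worth double-checking: in the keystone.conf layouts I have seen, the LDAP attribute mappings live under the [ldap] section, with [identity] only selecting the driver. A sketch, where every host name, DN and password is a placeholder for illustration:

```ini
[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ad.example.com
user = CN=keystone,OU=Services,DC=example,DC=com
password = secret
suffix = DC=example,DC=com
user_tree_dn = OU=Users,DC=example,DC=com
user_objectclass = person
user_id_attribute = sAMAccountName
user_name_attribute = sAMAccountName
```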

Endre


[openstack-dev] [horizon]Template for mobile browsers

2013-10-04 Thread Maxime Vidori
Hi,

I have to work on this blueprint 
https://blueprints.launchpad.net/horizon/+spec/horizon-mobile-ui, and I am 
wondering whether anything has ever been done on it. There has been no activity 
since mid-August. I will soon upload the specifications, features and designs I 
need, so if anyone is interested, I will be happy to hear their ideas.

Thanks

Max



Re: [openstack-dev] [nova] RFC: adding on_shared_storage field to instance

2013-10-04 Thread Chris Friesen

On 10/04/2013 03:31 AM, Cristian Tomoiaga wrote:

Hello Chris,

Just a note regarding this. I was thinking of using local plus shared
storage for an instance (e.g. root disk local and another disk as a
cinder volume).
If I understand this correctly, flagging the instance as having local
storage may not be such a good idea in this particular case, right?
Maybe root_on_local?


Here's how I understand it.  (I don't have a lot of practical experience 
with OpenStack though; I just started working on it in the last couple of 
months.)


Suppose you store the instance files locally on the compute node and have 
a cinder volume for block storage.  If your compute node dies and you 
need to evacuate the instance, the contents of the cinder volume will 
persist over the evacuation, but the instance rootfs will be regenerated 
from the image file.
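Roughly, the behaviour being discussed could look like the sketch below. This is illustrative Python only: evacuate_host, rebuild and the on_shared_storage flag are hypothetical names, not nova's actual API.

```python
# Hypothetical sketch of a per-instance storage flag driving evacuation.
# Instances are plain dicts here; a real implementation would use nova's
# Instance objects and compute RPC calls.

def rebuild(inst, host, preserve_disk):
    """Record where the instance lands and whether its disk survived."""
    inst['host'] = host
    inst['disk_preserved'] = preserve_disk


def evacuate_host(instances, target_host):
    for inst in instances:
        if inst.get('on_shared_storage'):
            # Disk lives on shared storage: it survives the dead host,
            # so only the instance definition needs rebuilding.
            rebuild(inst, target_host, preserve_disk=True)
        else:
            # Local disk died with the host: regenerate the root
            # filesystem from the original image.
            rebuild(inst, target_host, preserve_disk=False)
```

With such a flag stored per instance, nova host-evacuate would no longer need to assume every instance on a host is stored the same way.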


Chris




[openstack-dev] Candidate proposals for TC (Technical Committee) positions are now open

2013-10-04 Thread Anita Kuno
Candidate proposals for the Technical Committee positions (11 positions) 
are now open and will remain open until 23:59 UTC October 10, 2013.


Candidates for the Technical Committee Positions:
Any Foundation individual member can propose his candidacy for an 
available, directly-elected TC seat. [0][1]


Propose your candidacy by sending an email to the 
openstack-dev@lists.openstack.org mailing-list, with the subject: TC 
candidacy. Please start your own thread so we have one thread per 
candidate. Since there will be many people voting for folks with whom 
they might not have worked, including a platform or statement to help 
voters make an informed choice is recommended, though not required.


Thierry and I will confirm candidates with an email to the candidate 
thread as well as create a link to the confirmed candidate's proposal 
email on the wikipage for this election. [2]


Given that my prior request to stem the flow of +1 emails went largely 
unnoticed, and some folks feel it necessary to offer a testimonial for 
the candidate of their choice: if you have submitted a candidate 
proposal and have not received a confirmation within 24 hours (or, if 
you are submitting within the remaining 24 hours, before the deadline), 
please PM me (anteaya) or Thierry (ttx) on 
IRC so we see your candidate proposal and acknowledge you. We don't want 
to miss anybody.


The election will be held from October 11 through to October 17, 2013. 
The electorate are the Foundation individual members that are also 
committers for one of the official programs' projects over the 
Grizzly-Havana timeframe (from 2012-09-27 to 2013-09-26, 23:59 PST), as 
well as the 2 non-code ATCs who were acknowledged by the TC.


Candidates ranking 1st to 6th will get one-year seats, and candidates 
ranking 7th to 11th will get 6-month seats.


Please see the wikipage for additional details about this election. [2]

If you have any questions please be sure to either voice them on the 
mailing list, email me, or contact 
Thierry or myself on IRC.


Thank you, and I look forward to reading your candidate proposals,
Anita Kuno (anteaya)


[0] https://wiki.openstack.org/wiki/Governance/Foundation/TechnicalCommittee

[1] Note: After this election, I am going to submit a proposal to the TC 
that the wording for the charter include her in this sentence, as in 
Any Foundation individual member can propose his/her candidacy ... but 
that is not the wording of the charter at present, though I do believe 
it is the spirit.


[2] https://wiki.openstack.org/wiki/TC_Elections_Fall_2013



[openstack-dev] TC candidacy

2013-10-04 Thread Monty Taylor
Hi all!

I would like to continue serving the project on the OpenStack TC.

Background
--

I've been with this bad boy since before day 1, and you can pretty much
blame me for trunk gating. You can also blame me for the bzr days - so
I'm not going to try to claim that I'm always right. :) I started and
until yesterday have been the PTL of the OpenStack Infrastructure team.
During my tenure there, we have grown what is quite possibly one of the
largest elastic build farms dedicated to a single project anywhere. More
importantly than large though, we've spent the effort to make as much of
our ops work as is possible completely open and able to receive changes
via code review in the exact same manner as the OpenStack projects
themselves.

The OpenStack Infrastructure system itself is a scalable
application that runs across two clouds. This means that I am, as part
of that team, a very large user of OpenStack, which I believe gives me
an excellent perspective on what users of OpenStack might want. (hint:
vendor differences in how I get a sane hostname on a system I'm spinning
up - not what I want)

I work a LOT on cross-project coordination, compatibility and
automation. I have commits in every single OpenStack repo. In addition
to infra, I have worked on pbr, hacking, openstack/requirements and
oslo.sphinx. I helped battle this cycle's setuptools/distribute re-merge
and ensuing upgrade nightmare. I landed patches in tox upstream to allow
us to skip the costly sdist step and use setup.py develop directly in
the virtualenv.

I'm also one of the only people who can have a conversation with ttx
about how our version numbers work and understand every nuance. I wrote
the code in pbr that encodes the design inside of ttx's brain.

Adjacent to OpenStack, I've managed to convince HP to fund me and a
group of merry troublemakers to work on OpenStack. One of the things
that has sprung from that is the TripleO project. I can't take credit
for any of the actual code there, but this is a TC candidacy, not a
developer candidacy, and I think a large portion of being on the TC is
being able to steward the success of something both with and without
directly coding on it yourself.

I'm also a member of the Python Software Foundation and have been
working hard this cycle to start to align and integrate better with what
upstream Python does. Despite our intent to be good Python people, it
turns out we do a bunch of things quite differently. Over time, I think
it would be better for those differences to decrease. I'm currently
working on writing my first PEP.

Platform


The following was said to me on IRC a couple of days ago. It was not
meant as a compliment, but I will take it as one, and I think it quite
explicitly sums up why I should be on the TC:

mordred: it is infact you who are thinking narrowly *only* considering
openstack's goals

I believe that OpenStack is One Project. I believe it is in our best
interest to be one. I believe that the more we work together across the
projects, the better OpenStack will be.

As a TC member, I will continue to strive to enhance the view that
OpenStack itself is an important thing and not just a loose
confederation of friendly projects.

I have an expansive view of the scope of OpenStack. I do not think that
'pure IaaS' as a limiting factor in the definition serves or will serve
any of our users. I think that instead of trying to come up with random
or theoretical labels and then keeping ourselves inside of the
pre-defined label, we should focus on writing software that solves
problems for users. Trove and Savanna are great examples of this. As a
user, I want to store my data in a database, and I want to run some
map-reduce work. I'm not interested in figuring out the best way to
administer a MySQL instance inside of a VM. So, as a user, a database
service helps me write scalable cloud-based applications, and having it
be part of OpenStack means I can write scalable cloud-based
applications that span multiple clouds.

As a TC member, I will continue to support a viewpoint that a consistent
set of features across clouds is important to a user, and the more
features there are that the user can count on to be in all of the
OpenStack clouds, the better the user experience will be.

I believe that the users of clouds are our users, not just the deployers
of clouds. In fact, if I have to choose between the desires of the users
of clouds and the desires of deployers of clouds, I will choose in favor
of the users of clouds.

As a TC member, I will strive to consider the needs of the end users in
discussions we have about coordinated choices and overall shape of
OpenStack.

Lastly, as a person who only considers OpenStack's goals and does not
have a direct affiliation with any of the individual projects, I believe
I'm in a good position to mediate issues between projects should they
arise. Thus far, I do not believe that the TC has had to act in that
capacity, but 

[openstack-dev] [TripleO] Undercloud Ceilometer

2013-10-04 Thread Ladislav Smola

Hello,

just a few words about the role of Ceilometer in the Undercloud and the work 
in progress.


Why we need Ceilometer in Undercloud:
---

In Tuskar-UI, we will display a number of statistics that show 
Undercloud metrics.
Later, we will also display alerts and notifications that come from 
Ceilometer.


I also suspect that Heat will use Ceilometer alarms in a similar way to 
how it uses them for auto-scaling in the Overcloud. Can anybody confirm?

What is planned in near future
---

The Hardware Agent capable of obtaining statistics:
https://blueprints.launchpad.net/ceilometer/+spec/monitoring-physical-devices
It uses an SNMP inspector to obtain the stats. I have tested that with 
the Devtest TripleO setup and it works.
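For reference, the snmpd side of such a setup can be as small as the fragment below (the community string, network and contact details are placeholders; a real deployment would want SNMPv3 credentials and tighter access control):

```
# /etc/snmp/snmpd.conf -- minimal read-only agent for central polling
agentAddress udp:161
rocommunity ceilometer 192.0.2.0/24   # read-only access from the pollers
sysLocation undercloud-baremetal
sysContact  admin@example.com
```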

The planned architecture is to have one Hardware Agent (which will be merged 
into the central agent code) placed on the Control Node (or basically 
anywhere). That agent will poll the SNMP daemons on the hardware in the 
Undercloud (baremetals, network devices). Any objections, or reasons why 
this might be a bad idea?


We will have to create a Ceilometer image element; an snmpd element is 
already there, but we should test it. Does anybody volunteer for this task? 
There will be a hard part: getting the configuration right (firewall, 
keystone, snmpd.conf) so that it is all set up in a clean and secure way. 
That would require a seasoned sysadmin to at least observe the work. Any 
volunteers here? :-)


The IPMI inspector for the Hardware Agent has just been started:
https://blueprints.launchpad.net/ceilometer/+spec/ipmi-inspector-for-monitoring-physical-devices
It seems it should query the Ironic API, which would provide the data 
samples. Any objections?

Any volunteers for implementing this on the Ironic side?

devananda and lifeless had the greatest concern about the scalability of a 
central agent. Ceilometer is not doing any scaling right now, but horizontal 
scaling of the central agent is planned for the future. So this is a very 
important task for us for larger deployments. Any feedback about scaling, 
or about changing the architecture for better scalability?


Thank you for any feedback.

Kind Regards,
Ladislav



Re: [openstack-dev] Nominating Fei Long Wang for glance core

2013-10-04 Thread Monty Taylor
+1

On 10/03/2013 12:28 AM, Iccha Sethi wrote:
 Hey,
 
 I would like to nominate Fei Long Wang(flwang) for glance core. I think Fei 
 has been an active reviewer/contributor to the glance community [1] and has 
 always been on top of reviews.
 
 Thanks for the good work Fei!
 
 Iccha
 
 [1] http://russellbryant.net/openstack-stats/glance-reviewers-30.txt
 
 
 



Re: [openstack-dev] TC candidacy

2013-10-04 Thread Anita Kuno

Confirmed.

On 10/04/2013 11:14 AM, Monty Taylor wrote:

Hi all!

I would like to continue serving the project on the OpenStack TC.


Re: [openstack-dev] Nominating Zhi Yan Liu for glance-core

2013-10-04 Thread Monty Taylor
+1

On 10/03/2013 12:25 AM, Iccha Sethi wrote:
 Hey,
 
 I would like to nominate Zhi Yan Liu(lzydev) for glance core. I think Zhi has 
 been an active reviewer/contributor to the glance community [1] and has 
 always been on top of reviews.
 
 Thanks for the good work Zhi!
 
 Iccha
 
 [1] http://russellbryant.net/openstack-stats/glance-reviewers-30.txt
 
 
 



Re: [openstack-dev] Blueprint for IPAM and Policy extensions in Neutron

2013-10-04 Thread Rudra Rugge
Hi All,

The link in my previous email was incorrect. Please use the following link instead:

https://blueprints.launchpad.net/neutron/+spec/ipam-policy-extensions-for-neutron

Thanks,
Rudra

On Oct 3, 2013, at 11:38 AM, Rudra Rugge rru...@juniper.net wrote:

Hi All,

A blueprint has been registered to add IPAM and Policy
extensions to Neutron. Please review the blueprint and
the attached specification.

https://blueprints.launchpad.net/neutron/+spec/juniper-contrail-ipam-policy-extensions-for-neutron

All comments are welcome.

Thanks,
Rudra



Re: [openstack-dev] What should be Neutron behavior with scoped token?

2013-10-04 Thread Ravi Chunduru
Does the described behavior qualify as a bug?

Thanks,
-Ravi.


On Thu, Oct 3, 2013 at 5:21 PM, Ravi Chunduru ravi...@gmail.com wrote:

 Hi,
   In my tests, I observed that when an admin of a tenant runs 'nova list'
 to list all the servers of the tenant, nova-api makes a call to
 quantum's get_ports with a filter set on device owner. This operation
 takes about 1m 30s in our setup (almost 100 VMs, i.e. 100 ports).

 When a user of a tenant runs the same command, the response is immediate.

 Going into the details, the only difference between those two operations is
 the 'role'.

 Looking into the code, I have the following questions:
 1) A scoped admin token returned all entries of a resource. Any reason it
 is not filtered per tenant?
 Comparing with Nova: it always honors the tenant from the scoped token and
 returns values specific to that tenant.

 2) In the test described above, the DB access should not take much time
 with or without a tenant-id in the filter. Why does the response time
 change between a tenant admin and a member user?

 Thanks,
 -Ravi.
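To make question 1 concrete, the difference can be sketched in plain Python (this is illustrative only, not the real quantum/neutron code; all names are hypothetical):

```python
# Sketch: an admin-scoped token skipping the tenant filter means the
# query walks every tenant's ports, while a member token stays scoped.

PORTS = [
    {'id': 'p1', 'tenant_id': 't1', 'device_owner': 'compute:nova'},
    {'id': 'p2', 'tenant_id': 't2', 'device_owner': 'compute:nova'},
]


def get_ports(context, filters=None):
    filters = dict(filters or {})
    if not context['is_admin']:
        # Member tokens are always restricted to their own tenant.
        filters['tenant_id'] = context['tenant_id']
    # Question 1 above: should an admin token *scoped to a tenant*
    # also get the tenant filter applied, as nova does?
    return [p for p in PORTS
            if all(p.get(k) == v for k, v in filters.items())]
```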







-- 
Ravi


[openstack-dev] TC candidacy

2013-10-04 Thread Zane Bitter

I would like to propose my candidacy for the OpenStack Technical Committee.

I have been involved with OpenStack since we started the Heat project in 
early 2012. I'm still a member of the Heat Core team and - according to 
the semi-official statistics from Bitergia 
(http://bitergia.com/public/reports/openstack/2013_04_grizzly/), at 
least - among the more prolific patch contributors to OpenStack.


Over the last year I have often worked closely with the TC, beginning 
with helping to shepherd Heat through the incubation process. That 
process involved developing a consensus on the scope of the OpenStack 
project as a whole, and new procedures and definitions for incubating 
projects in OpenStack. These changes paved the way for projects like 
Trove, Marconi and Savanna to be incubated. I hope and expect that more 
projects will continue to follow in these footsteps.


I remain a reasonably frequent, if irregular, attendee at TC meetings - 
occasionally as a proxy for the Heat PTL, but more often just because I 
feel I can contribute. At this stage of its evolution, I think the main 
responsibility of the TC is to grow the OpenStack project in a 
responsible, sustainable way, so I take particular interest in 
discussions around incubation and graduation of new projects. Many new 
projects also have potential integration points with Heat, so having 
folks from the Heat core team involved seems valuable.


I also think that the TC could do more to communicate its inner workings 
(which are public but, I suspect, not widely read). While most decisions 
eventually come down to a vote and the results are reported, the most 
important work of the committee is not in voting but in building 
consensus. I believe the community would benefit from more insight into 
that process, and to that end I have started blogging about important 
decisions of the TC - not only the outcomes, but the reasons behind them 
and the issues that were considered along the way:


http://www.zerobanana.com/archive/2013/09/25#savanna-incubation
http://www.zerobanana.com/archive/2013/09/04#icehouse-incubation
http://www.zerobanana.com/archive/2013/08/07#non-relational-trove

These posts appear on Planet OpenStack and are regularly featured in the 
Community Newsletter, so I like to think that this is helping to bring 
the workings of the TC in front of an audience who might not otherwise 
be aware of them.


If elected, I'd like to act as an informal point of contact for projects 
that are already in incubation or are considering it, to help explain 
the incubation process and the committee's expectations around it.


I consider myself fortunate that my employer permits me to spend 
substantially all of my time on OpenStack, and that my colleagues and I 
have a clear mandate to do what we consider best for the _entire_ 
OpenStack community, because we know that we succeed only when everyone 
succeeds.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC: adding on_shared_storage field to instance

2013-10-04 Thread Caitlin Bestler

On 10/4/2013 7:33 AM, Chris Friesen wrote:

On 10/04/2013 04:11 AM, Caitlin Bestler wrote:

 On Oct 3, 2013 1:45 PM, Chris Friesen chris.frie...@windriver.com wrote:
  
   On 10/03/2013 02:02 PM, Caitlin Bestler wrote:

    On October 3, 2013 12:44:50 PM Chris Friesen
    chris.frie...@windriver.com wrote:

   I was wondering if there is any interest in adding an
   on_shared_storage field to the Instance class.  This would be set
   once at instance creation time and we would then be able to avoid
   having the admin manually pass it in for the various API calls
   (evacuate/rebuild_instance/migration/etc.)

   *What* is on shared storage?
  
   The boot drive?
   A snapshot of the running VM?

 Meaning that this is not an attribute of the instance, it is an
 attribute of the Cinder drive, or more precisely from the Volume
 Driver responsible for that drive.

Booting an instance from a cinder volume is only one way of getting
shared storage.  (And yes, any instance booting from a cinder volume
could be considered to be on shared storage--but the existing code
doesn't use that knowledge.)

The compute node can mount a shared filesystem and store the instance
files on it, and all instances on that compute node would be on shared
storage.  The evacuate code currently requires the admin to specify
whether the instance files are shared or not--which means the admin
potentially needs to look up the instance, figure out what node it's on,
and check whether the files are shared.  Interestingly, when a compute
node comes back up it actually creates temporary files to see whether
instances are shared or not so that it can delete the ones that aren't
shared--it'd be way more efficient to just store that information once
at instance creation.
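That boot-time check — drop a temporary marker file and see whether a peer can observe it — might be sketched like this (a simulated illustration only; the function and parameter names are hypothetical, and the real Nova code uses its own helpers):

```python
import os
import uuid

def instance_on_shared_storage(instance_dir, peer_visible):
    """Probe whether instance_dir lives on shared storage.

    Drop a uniquely named marker file, then ask a peer compute node
    (abstracted here as the peer_visible callable) whether it can see
    it.  This mirrors the boot-time check described above; the marker
    is always cleaned up afterwards.
    """
    marker = os.path.join(instance_dir, ".shared_probe_%s" % uuid.uuid4().hex)
    open(marker, "w").close()
    try:
        return bool(peer_visible(marker))
    finally:
        os.remove(marker)
```

Storing the resulting flag once at instance creation, as proposed, would make this probe unnecessary on every restart.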

The existing host-evacuate command only works if all instances on a
given compute node are either shared or not shared.  If some of them are
local and some boot from cinder volumes then you have to evacuate them
one at a time until the remaining ones are all of the same type.

 Further the question can actually be complex. Is a thin local volume
 backed by a remote volume local? If so, at what hit rate for the local
 cache?

For the purposes of the evacuate command, this would be local storage
because the thin volume (containing all the instance-specific data)
would be lost if the compute node goes down.

Maybe on_shared_storage is too generic, and instance_shared_storage
would be more accurate.  I'm not hung up on the name, but I think it
would be good for the instance itself to track whether or not its rootfs
is persistent over compute node failure rather than forcing the admin to
remember it.

Chris

You've covered some reasons why there might be an instance attribute,
but you still need to deal with getting the information about the
underlying storage services from those storage services.

Don't make assumptions about what a storage service is doing.

Don't expect the storage services to export their characteristics
beyond the scope that they would be focused upon.




[openstack-dev] SAML

2013-10-04 Thread Adam Young
For Icehouse, Keystone should support SAML.  This is an attempt to pull 
together the various pieces necessary to make that happen.


The general approach is that Keystone will maintain a short-lived set 
of user records for users that have presented valid SAML assertions.  
The assertions will be processed through a mapping backend and stored 
in the identity backend.


Morgan Fainberg is going to be reworking the Memcached backend so that 
it uses dogpile, the same mechanism that we are using for caching.  
Basically, we will have one Key/Value Store backend, and then various 
drivers for mapping that to in-memory, memcached, Cassandra, or any 
others that come up.  I think we will continue to call this the 
Key/Value Store (KVS) backend.
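The one-backend-many-drivers idea could be sketched in pure Python like this (a stand-in illustration only — the real implementation would build on dogpile as described, and these class names are hypothetical):

```python
class KVSDriver(object):
    """Interface each driver (in-memory, memcached, Cassandra, ...) implements."""
    def get(self, key):
        raise NotImplementedError
    def set(self, key, value):
        raise NotImplementedError

class InMemoryDriver(KVSDriver):
    """Simplest driver: a plain dict held in process memory."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value

class KVSBackend(object):
    """Single Key/Value Store backend; the driver is chosen by configuration."""
    DRIVERS = {"memory": InMemoryDriver}

    def __init__(self, driver_name="memory"):
        self._driver = self.DRIVERS[driver_name]()
    def get(self, key):
        return self._driver.get(key)
    def set(self, key, value):
        self._driver.set(key, value)
```

A memcached or Cassandra driver would register itself in DRIVERS and the rest of Keystone would be unaware of the change.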


Henry Nash is working on integrating multiple LDAP servers into 
Keystone.  Each LDAP server backs a single domain.  Each one gets its 
own mapping from LDAP calls to Identity based on a config file.


For Federation,  we will want to use the KVS backend for identity. Thus, 
we need to be able to configure a domain or set of domains to store 
identity information in KVS.  This will follow the pattern of Henry 
Nash's LDAP work.


We need to keep user IDs globally unique.  In addition, we need to 
ensure that a user ID can be mapped to the appropriate identity 
backend.  This is slated to be discussed at the summit Federated ID session:

http://summit.openstack.org/cfp/details/28

The diagram at the bottom of the federation blueprint shows how they are 
linked together.
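As a purely hypothetical illustration of globally unique, backend-routable user IDs (this is not the agreed design — settling that is what the summit session is for):

```python
import hashlib

def public_user_id(domain_id, local_user_id):
    """Derive a globally unique, fixed-width public ID from the owning
    domain and the backend-local user ID."""
    raw = "%s:%s" % (domain_id, local_user_id)
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

class IdMapper(object):
    """Remember which (domain, local id) pair a public ID came from, so
    a user ID seen anywhere in Keystone can be routed back to the right
    identity backend (LDAP, KVS, SQL, ...)."""
    def __init__(self):
        self._table = {}

    def register(self, domain_id, local_user_id):
        pub = public_user_id(domain_id, local_user_id)
        self._table[pub] = (domain_id, local_user_id)
        return pub

    def resolve(self, public_id):
        return self._table[public_id]
```

Two backends that both contain a local user "u1" still yield distinct public IDs because the domain is part of the derivation.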


https://blueprints.launchpad.net/keystone/+spec/federation
https://blueprints.launchpad.net/keystone/+spec/mapping-distributed-admin
https://blueprints.launchpad.net/keystone/+spec/saml-id
https://blueprints.launchpad.net/keystone/+spec/dogpile-kvs-backends
https://blueprints.launchpad.net/keystone/+spec/multiple-datastores
https://blueprints.launchpad.net/keystone/+spec/abfab


We have a planned API freeze for Keystone in I2.  Grizzly 2 was in 
mid-January. The Grizzly Summit was about 3 weeks earlier than the Icehouse 
summit, so if we go by a similar schedule, we should plan on having 
until the end of January to get this work done. If we wait until the 
Summit to get started, we will miss Icehouse.












Re: [openstack-dev] Blueprint for IPAM and Policy extensions in Neutron

2013-10-04 Thread Nachi Ueno
Hi Rudra

Three comments from me

(1) The IPAM and Network Policy extensions look like independent extensions,
so the IPAM part and the Network Policy part should be divided into two blueprints.

(2) The term IPAM is too general a word. IMO we should use a more specific one.
How about SubnetGroup?

(3) Network Policy Resource
I would like to know more details of this api

I would like to know resource definition and
sample API request and response json.

(This is one example
https://wiki.openstack.org/wiki/Quantum/VPNaaS )

Especially, I'm interested in src-addresses, dst-addresses, action-list
properties.
Also, how can we express any port in your API?

Best
Nachi


2013/10/4 Rudra Rugge rru...@juniper.net:
 Hi All,

 The link in the email was incorrect. Please follow the following link:

 https://blueprints.launchpad.net/neutron/+spec/ipam-policy-extensions-for-neutron

 Thanks,
 Rudra

 On Oct 3, 2013, at 11:38 AM, Rudra Rugge rru...@juniper.net wrote:

 Hi All,

 A blueprint has been registered to add IPAM and Policy
 extensions to Neutron. Please review the blueprint and
 the attached specification.

 https://blueprints.launchpad.net/neutron/+spec/juniper-contrail-ipam-policy-extensions-for-neutron

 All comments are welcome.

 Thanks,
 Rudra


Re: [openstack-dev] [nova] RFC: adding on_shared_storage field to instance

2013-10-04 Thread Chris Friesen

On 10/04/2013 12:06 PM, Caitlin Bestler wrote:


You've covered some reasons why there might be an instance attribute,
 but you still need to deal with getting the information about the
underlying storage services from those storage services.

Don't make assumptions about what a storage service is doing.

Don't expect the storage services to export their characteristics
beyond the scope that they would be focused upon.


I don't think we need to make many assumptions.

A given compute service will be configured with a single location for 
instance storage.  That location will be either shared or local 
depending on how the compute node is set up at commissioning.  This 
shared/local value could be stored in the config file alongside the 
location, and the compute service would read it in at startup.  Any 
instance started up on that compute node would have its 
instance_shared_storage value set by the compute node.


Block storage is by definition shared, so any instance booting off a 
cinder volume would be considered to be on shared storage even if the 
compute node's instance files are normally not shared.


The one assumption here is that if an instance is booted up on shared 
storage, then that storage is accessible from any other compute node 
that the instance could be migrate/evacuate to.  For larger 
installations this could be enforced by using host aggregates to group 
together the hosts that share a given instance storage filesystem.
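Under those assumptions, the evacuate decision reduces to reading stored flags rather than asking the admin — roughly like this (field and function names are hypothetical, not the actual Nova schema):

```python
def can_evacuate_preserving_disk(instance):
    """An instance's root disk survives a compute-node failure only if it
    lives on shared storage: either the instance files sit on a shared
    filesystem, or the instance boots from a (shared-by-definition)
    Cinder volume."""
    return bool(instance.get("instance_shared_storage")
                or instance.get("boot_from_volume"))

def partition_for_evacuate(instances):
    """Split a failed host's instances so that host-evacuate can handle a
    mix: shared-storage ones are rebuilt reusing their disk, local ones
    need a rebuild from scratch."""
    shared = [i for i in instances if can_evacuate_preserving_disk(i)]
    local = [i for i in instances if not can_evacuate_preserving_disk(i)]
    return shared, local
```

This is exactly the per-instance knowledge the current host-evacuate command lacks when a host carries instances of mixed types.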


Chris




Re: [openstack-dev] [Glance] tasks/async workers sync up.

2013-10-04 Thread Nikhil Komawar

Hi all,
 
Please find below a brief summary of what went on in the meeting, for 
convenience's sake:
 
TODO:
-
1. nikhil: to request timings for a sync-up meeting twice a week outside of the 
weekly Glance meeting
2. flwang: to ensure our split patches are updated and ready for review
3. venkatesh and nikhil: to work on a POC for dynamic loading of the executors to 
load deployer-specific ones
4. venkatesh: to work on a separate patch for DB optimization
5. flwang: to modify the tasks list call to return a sparse list - one without input, 
result, message
6. rosmaita: to modify the docs to ensure they are okay with what the API returns
7. reviewers: reviews :)




Notes:

1. re-evaluate the executor interface design after working on stevedore
2. DB is okay with current model - some performance improvements can be done in 
the future
3. API to return sparse list (which must include 'status') while a show on 
tasks will give all details - Doc change
4. community is okay with soft-deletes - hard-deletes would be looked at later 
point in time
5. expires_at field goes in some glance.conf, default value to be 48 hours
6. more research on the glance-janitor script or some kind of scrubber which 
deletes tasks after current_time > expires_at
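The scrubber behaviour proposed in item 6 — hard-deleting tasks once the current time passes their expires_at — could look roughly like this (a hedged sketch; the task shape and function name are assumptions, not the Glance implementation):

```python
import datetime

def scrub_expired_tasks(tasks, now=None):
    """Drop tasks whose expires_at has passed; expires_at would be set at
    creation to created_at plus the configurable window from glance.conf
    (48 hours by default, per item 5 of the notes).

    Returns the surviving tasks and the count of deletions.
    """
    now = now or datetime.datetime.utcnow()
    kept = [t for t in tasks if t["expires_at"] > now]
    return kept, len(tasks) - len(kept)
```

A periodic janitor would call this against the task store and issue the actual deletes.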
 
Please let me know if you've any questions.
Thanks,
-Nikhil


-Original Message-
From: Nikhil Komawar nikhil.koma...@rackspace.com
Sent: Thursday, October 3, 2013 2:22pm
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Glance] tasks/async workers sync up.



Hi folks,
 
I was hoping to see if you all were free sometime tomorrow (Friday) Oct 4th at 
14:00 UTC to do a sync up on the patch we've had going on for async workers 
(full patch set here: https://review.openstack.org/#/c/46117/).
 
Venkatesh and I are working full time on this and trying to address some of the 
comments we've received. Zhi has suggested some important changes, and if we 
can reach a general consensus about the direction this implementation should 
take, it would make things easier going forward. The design currently does not 
have dynamic loading of the modules; beyond that, if there are major concerns, we 
would really appreciate some feedback from the community on them as well.
 
Thanks,
-Nikhil


Re: [openstack-dev] [TripleO] Undercloud Ceilometer

2013-10-04 Thread Chris Jones
Hi

On 4 October 2013 16:28, Ladislav Smola lsm...@redhat.com wrote:

 test it. Anybody volunteers for this task? There will be a hard part:
 doing the right configurations.
 (firewall, keystone, snmpd.conf) So it's all configured in a clean and a
 secured way. That would
 require a seasoned sysadmin to at least observe the thing. Any volunteers
 here? :-)


I'm not familiar at all with Ceilometer, but I'd be happy to discuss
how/where things like snmpd are going to be exposed, and look over the
resulting bits in tripleo :)

-- 
Cheers,

Chris


[openstack-dev] TC Candidacy

2013-10-04 Thread James E. Blair
Hi,

I'd like to announce my candidacy for the TC.

About Me


I am the PTL for the OpenStack Infrastructure Program, which I have been
helping to build for the past two years.

I am responsible for a significant portion of our project infrastructure
and developer workflow.  I set up gerrit, helped write git-review, and
moved all of the OpenStack projects from bzr to git.  All of that is to
say that I've given a lot of thought and action to helping scale
OpenStack to the number of projects and developers it has today.

I also wrote zuul, nodepool, and devstack-gate to make sure that we are
able to test all components of the project on every commit.  There are a
lot of moving parts in OpenStack and I strongly believe that at the end
of the day they need to work together as a cohesive whole.  A good deal
of my technical work is focused on achieving that.

Throughout my time working on OpenStack I have always put the needs of
the whole project first, above those of any individual contributor,
organization, or program.  I also believe in the values we have
established as a project: open source, design, development, and
community.  To that end, I have worked hard to ensure that the project
infrastructure is run just like an OpenStack project, using the same
tools and processes, and I think we've succeeded in creating one of the
most open operational project infrastructures ever.

My Platform
===

As a significant OpenStack user, I'm excited about the direction that
OpenStack is heading.  I'm glad that we're accepting new programs that
expand the scope of our project to make it more useful for everyone.  I
believe a major function of the Technical Committee is to curate and
shepherd new programs through the incubation process.  However, I
believe that it should be more involved than it has been.  We have been
very quick to promote out of incubation some exciting new projects that
may not have been fully integrated.  As a member of the TC, I support
our continued growth, and I want to make sure that the ties that hold
our collection of projects together are strong, and more than just a
marketing umbrella.

I have also seen a shift since the formation of the OpenStack
Foundation.  Our project is a technical meritocracy, but when the
Foundation's board of directors was established, some issues of
project-wide scope have been taken up by the board of directors while
the Technical Committee has been content to limit their involvement.
The Foundation board is extremely valuable, and I want the Technical
Committee to work closely with them on issues that concern them both.

Adding new projects is not the only purpose of the TC, which is charged
in the bylaws as being responsible for all technical matters relating to
the project.  The reformation of the Technical Committee with an
all-elected membership provides an opportunity to strengthen the
technical meritocracy of the OpenStack project by electing people who
will execute the full mission of the TC.  I would be happy to serve in
that capacity and would appreciate your vote.

Thanks,

Jim



Re: [openstack-dev] Blueprint for IPAM and Policy extensions in Neutron

2013-10-04 Thread Rudra Rugge
Hi Nachi,

Inline response

On 10/4/13 12:54 PM, Nachi Ueno na...@ntti3.com wrote:

Hi Rudra

inline responded

2013/10/4 Rudra Rugge rru...@juniper.net:
 Hi Nachi,

 Thanks for reviewing the BP. Please see inline:

 On 10/4/13 11:30 AM, Nachi Ueno na...@ntti3.com wrote:

Hi Rudra

Two comment from me

(1) IPAM and Network policy extension looks like independent extension.
so IPAM part and Network policy should be divided for two blueprints.

 [Rudra] I agree that these need to be split into two blueprints. I will
 create another BP.

Thanks


(2) The team IPAM is too general word. IMO we should use more specific
word.
How about SubnetGroup?

 [Rudra] IPAM holds more information.
 - All DHCP attributes for this IPAM subnet
 - DNS server configuration
 - In future address allocation schemes

Actually, a Neutron Subnet has DHCP, DNS, and IP allocation schemes.
If I understand your proposal correctly, IPAM is a group of subnets
which share common parameters.
Also, you can propose to extend existing subnet.

[Rudra] A Neutron subnet requires a network, as I understand it. IPAM info
should not have such a dependency. This is similar to the Amazon VPC model,
where all IPAM information can be stored even if a network is not created.
Association to networks can happen at a later time.

Rudra





(3) Network Policy Resource
I would like to know more details of this api

I would like to know resource definition and
sample API request and response json.

(This is one example
https://wiki.openstack.org/wiki/Quantum/VPNaaS )

Especially, I'm interested in src-addresses, dst-addresses, action-list
properties.
Also, how can we express any port in your API?

 [Rudra] Will add the details of the resources and APIs after separating
 the blueprint.

Thanks!

Best
Nachi

 Regards,
 Rudra


Best
Nachi


2013/10/4 Rudra Rugge rru...@juniper.net:
 Hi All,

 The link in the email was incorrect. Please follow the following link:


https://blueprints.launchpad.net/neutron/+spec/ipam-policy-extensions-f
or
-neutron

 Thanks,
 Rudra

 On Oct 3, 2013, at 11:38 AM, Rudra Rugge rru...@juniper.net wrote:

 Hi All,

 A blueprint has been registered to add IPAM and Policy
 extensions to Neutron. Please review the blueprint and
 the attached specification.


https://blueprints.launchpad.net/neutron/+spec/juniper-contrail-ipam-po
li
cy-extensions-for-neutron

 All comments are welcome.

 Thanks,
 Rudra


Re: [openstack-dev] TC Candidacy

2013-10-04 Thread Anita Kuno

Confirmed.

On 10/04/2013 06:46 PM, James E. Blair wrote:

Hi,

I'd like to announce my candidacy for the TC.

[...]


Re: [openstack-dev] Gerrit downtime for repository renames

2013-10-04 Thread James E. Blair
jebl...@openstack.org (James E. Blair) writes:

 Hi,

 On Saturday October 5th at 1600 UTC, Gerrit will be offline for a
 short time while we rename source code repositories.  To convert
 that time to your local timezone, see:

   http://www.timeanddate.com/worldclock/fixedtime.html?iso=20131005T16

 We will be renaming the following repositories:

   stackforge/python-savannaclient    ->  openstack/python-savannaclient
   stackforge/savanna                 ->  openstack/savanna
   stackforge/savanna-dashboard       ->  openstack/savanna-dashboard
   stackforge/savanna-extra           ->  openstack/savanna-extra
   stackforge/savanna-image-elements  ->  openstack/savanna-image-elements
   stackforge/python-tuskarclient     ->  openstack/python-tuskarclient
   stackforge/tuskar                  ->  openstack/tuskar
   stackforge/tuskar-ui               ->  openstack/tuskar-ui

 As usual, we will announce updates on Freenode in #openstack-dev and
 will be available in #openstack-infra to help with any issues.

 -Jim

Additionally we will rename:

  stackforge/fuel-ostf-tests  ->  stackforge/fuel-ostf

-Jim



Re: [openstack-dev] Blueprint for IPAM and Policy extensions in Neutron

2013-10-04 Thread Nachi Ueno
2013/10/4 Rudra Rugge rru...@juniper.net:
 Hi Nachi,

 Inline response

 On 10/4/13 12:54 PM, Nachi Ueno na...@ntti3.com wrote:

Hi Rudra

inline responded

2013/10/4 Rudra Rugge rru...@juniper.net:
 Hi Nachi,

 Thanks for reviewing the BP. Please see inline:

 On 10/4/13 11:30 AM, Nachi Ueno na...@ntti3.com wrote:

Hi Rudra

Two comment from me

(1) IPAM and Network policy extension looks like independent extension.
so IPAM part and Network policy should be divided for two blueprints.

 [Rudra] I agree that these need to be split into two blueprints. I will
 create another BP.

Thanks


(2) The team IPAM is too general word. IMO we should use more specific
word.
How about SubnetGroup?

 [Rudra] IPAM holds more information.
 - All DHCP attributes for this IPAM subnet
 - DNS server configuration
 - In future address allocation schemes

Actually, Neutron Subnet has dhcp, DNS, ip allocation schemes.
If I understand your proposal correct, IPAM is a group of subnets
 for of which shares common parameters.
Also, you can propose to extend existing subnet.

 [Rudra] Neutron subnet requires a network as I understand. IPAM info
 should not have such dependency. Similar to Amazon VPC model where all
 IPAM information can be stored even if a a network is not created.
 Association to networks can happen at a later time.

OK, I got it. However, IPAM is still too general a term.
Do you have any alternatives?

Best
Nachi

 Rudra





(3) Network Policy Resource
I would like to know more details of this api

I would like to know resource definition and
sample API request and response json.

(This is one example
https://wiki.openstack.org/wiki/Quantum/VPNaaS )

Especially, I'm interested in src-addresses, dst-addresses, action-list
properties.
Also, how can we express any port in your API?

 [Rudra] Will add the details of the resources and APIs after separating
 the blueprint.

Thanks!

Best
Nachi

 Regards,
 Rudra


Best
Nachi


2013/10/4 Rudra Rugge rru...@juniper.net:
 Hi All,

 The link in the email was incorrect. Please follow the following link:


https://blueprints.launchpad.net/neutron/+spec/ipam-policy-extensions-f
or
-neutron

 Thanks,
 Rudra

 On Oct 3, 2013, at 11:38 AM, Rudra Rugge rru...@juniper.net wrote:

 Hi All,

 A blueprint has been registered to add IPAM and Policy
 extensions to Neutron. Please review the blueprint and
 the attached specification.


https://blueprints.launchpad.net/neutron/+spec/juniper-contrail-ipam-po
li
cy-extensions-for-neutron

 All comments are welcome.

 Thanks,
 Rudra


Re: [openstack-dev] [Heat] question about stack updates, instance groups and wait conditions

2013-10-04 Thread Clint Byrum
Excerpts from Simon Pasquier's message of 2013-10-03 07:12:51 -0700:
 Hi Clint,
 
 Thanks for the reply! I'll update the bug you raised with more 
 information. In the meantime, I agree with you that cfn-hup is enough 
 for now.
 
 BTW, is there any bug or missing feature that would prevent me from 
 replacing cfn-hup by os-collect-config?
 

The only problem might be that currently os-collect-config can only watch
one path, but it was designed to watch multiple paths. That is just a bug,
and hopefully will get fixed soon.

Also cfn-init will not know how to read the config info that
os-collect-config produces, so if you are using cfn-init it is still
better to use cfn-hup.



Re: [openstack-dev] [savanna] neutron and private networks

2013-10-04 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2013-10-03 08:59:25 -0700:
 I've been trying to use heat more and ran into similar issues with its 
 metadata server bits not working on private namespaces too. Long term it may 
 need to be made netns aware as well.
 

Hi Kevin, could you please file a bug in Heat with any details you can
share about your experience. Thanks!

https://launchpad.net/heat/+filebug



Re: [openstack-dev] [TripleO] Undercloud Ceilometer

2013-10-04 Thread Clint Byrum
Excerpts from Ladislav Smola's message of 2013-10-04 08:28:22 -0700:
 Hello,
 
 just a few words about role of Ceilometer in the Undercloud and the work 
 in progress.
 
 Why we need Ceilometer in Undercloud:
 ---
 
 In Tuskar-UI, we will display number of statistics, that will show 
 Undercloud metrics.
 Later also number of alerts and notifications, that will come from 
 Ceilometer.
 
 But I do suspect, that the Heat will use the Ceilometer Alarms, similar 
 way it is using it for
 auto-scaling in Overcloud. Can anybody confirm?

I have not heard of anyone wanting to auto-scale baremetal for the
purpose of scaling out OpenStack itself.  There is certainly a use case
for it when we run out of compute resources and happen to have spare
hardware around. But unlike on a cloud where you have several
applications all contending for the same hardware, in the undercloud we
have only one application, so it seems less likely that auto-scaling
will be needed. We definitely need scaling, but I suspect it will not
be extremely elastic.

What will be needed, however, is metrics for the rolling updates feature
we plan to add to Heat. We want to make sure that a rolling update does
not adversely affect the service level of the running cloud. If we're
early in the process with our canary-based deploy and suddenly CPU load is
shooting up on all of the completed nodes, something, perhaps Ceilometer,
should be able to send a signal to Heat, and trigger a rollback.
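The signal described here could be as simple as comparing CPU load on the already-updated (canary) nodes against the pre-update baseline — a hedged sketch only, where the 20% tolerance and the function name are arbitrary assumptions:

```python
def should_roll_back(baseline_cpu, canary_cpu, tolerance=0.20):
    """Return True if mean CPU load on the updated (canary) nodes exceeds
    the pre-update baseline mean by more than `tolerance`, i.e. the
    rolling update appears to be degrading service and Heat should be
    signalled to roll back."""
    if not baseline_cpu or not canary_cpu:
        return False  # no data: don't trigger a rollback blindly
    baseline = sum(baseline_cpu) / float(len(baseline_cpu))
    canary = sum(canary_cpu) / float(len(canary_cpu))
    return canary > baseline * (1.0 + tolerance)
```

In practice Ceilometer's alarm evaluation would play the role of this comparison and post the result to a Heat webhook.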

 
 What is planned in near future
 ---
 
 The Hardware Agent capable of obtaining statistics:
 https://blueprints.launchpad.net/ceilometer/+spec/monitoring-physical-devices
 It uses SNMP inspector for obtaining the stats. I have tested that with 
 the Devtest tripleo setup
 and it works.
 
 The planned architecture is to have one Hardware Agent(will be merged to 
 central agent code)
 placed on Control Node (or basically anywhere). That agent will poll 
 SNMP daemons placed on
 hardware in the Undercloud(baremetals, network devices). Any objections 
 why this is a bad idea?
 
 We will have to create a Ceilometer Image element, snmpd element is 
 already there, but we should
 test it. Anybody volunteers for this task? There will be a hard part: 
 doing the right configurations.
 (firewall, keystone, snmpd.conf) So it's all configured in a clean and a 
 secured way. That would
 require a seasoned sysadmin to at least observe the thing. Any 
 volunteers here? :-)
 
 The IPMI inspector for Hardware agent just started:
 https://blueprints.launchpad.net/ceilometer/+spec/ipmi-inspector-for-monitoring-physical-devices
 Seems it should query the Ironic API, which would provide the data 
 samples. Any objections?
 Any volunteers for implementing this on Ironic side?
 
 devananda and lifeless had a greatest concern about the scalability of a 
 Central agent. The Ceilometer
 is not doing any scaling right now, but they are planning Horizontal 
 scaling of the central agent
 for the future. So this is a very important task for us, for larger 
 deployments. Any feedback about
 scaling? Or changing of architecture for better scalability?
 

I share their concerns. For < 100 nodes it is no big deal. But centralized
monitoring has a higher cost than distributed monitoring. I'd rather see
agents on the machines themselves do a bit more than respond to polling
so that load is distributed as much as possible and non-essential
network chatter is reduced.

I'm extremely interested in the novel approach that Assimilation
Monitoring [1] is taking to this problem, which is to have each node
monitor itself and two of its immediate neighbors on a switch and
some nodes monitor an additional node on a different switch. Failures
are reported to an API server which uses graph database queries to
determine at what level the failure occurred (single node, cascading,
or network level).
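The neighbour-monitoring idea can be sketched as a simple ring assignment (a toy illustration of the principle, not Assimilation Monitoring's actual algorithm):

```python
def neighbor_watch_plan(nodes):
    """Assign each node its two immediate ring neighbours to monitor, so
    per-node monitoring load stays O(1) no matter how large the cluster
    grows - the distributed style described above, as opposed to one
    central poller touching every node."""
    n = len(nodes)
    if n < 2:
        return {node: [] for node in nodes}
    return {
        nodes[i]: [nodes[(i - 1) % n], nodes[(i + 1) % n]]
        for i in range(n)
    }
```

With this scheme every node is watched by exactly two peers, and failures are only escalated to a central service for diagnosis.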

If Ceilometer could incorporate that type of light-weight, high-scale
monitoring ethos, rather than implementing something we know does not
scale well at the scale OpenStack needs to operate, I'd feel a lot
better about pushing it out as part of the standard deployment.

[1] http://assimmon.org/
