Re: [openstack-dev] [Solum] Contributing to Solum

2014-12-13 Thread Keyvan Mir Mohammad Sadeghi
Hi Adrian, Roshan, etc,

We tried to reach you via the IRC channel on Tuesday night at 2100 UTC, but
we guess that meeting got cancelled?

We'd like to start contributing ASAP, so, going through the
list of bugs, the one below looked interesting:

https://bugs.launchpad.net/solum/+bug/1308690

Though I'm not sure this is the right one. Would you approve it? If so,
are there any sources on the topic other than the bug page itself? And
about the conventions you mentioned in the Skype call: what is the starting
point? Do we just submit a pull request referencing the bug number?

Kind regards,

Keyvan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] CI report: 2014-12-5 - 2014-12-11

2014-12-13 Thread James Polley
Resending with correct subject tag. Never send email before coffee.

On Fri, Dec 12, 2014 at 9:33 AM, James Polley j...@jamezpolley.com wrote:

 In the week since the last email we've had no major CI failures. This
 makes it very easy for me to write my first CI report.

 There was a brief period where all the Ubuntu tests failed while an update
 was rolling out to various mirrors. DerekH worked around this quickly by
 dropping in a DNS hack, which remains in place. A long term fix for this
 problem probably involves setting up our own apt mirrors.

 check-tripleo-ironic-overcloud-precise-ha remains flaky, and hence
 non-voting.

 As always more details can be found here (although this week there's
 nothing to see)
 https://etherpad.openstack.org/p/tripleo-ci-breakages

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][infra] Ceph CI status update

2014-12-13 Thread Deepak Shetty
I think you completely misunderstood my question.
I am completely in agreement with _not_ putting CI status on the mailing list.

Let me rephrase:

As of now, I see 2 places where CI status is being tracked:

https://wiki.openstack.org/wiki/ThirdPartySystems (clicking on the link
tells you the status)
and
https://wiki.openstack.org/wiki/Cinder/third-party-ci-status (one of the
columns is a status column)

How are the two different? Do we need to update both?

thanx,
deepak

On Sat, Dec 13, 2014 at 1:32 AM, Anita Kuno ante...@anteaya.info wrote:

 On 12/12/2014 03:28 AM, Deepak Shetty wrote:
  On Thu, Dec 11, 2014 at 10:33 PM, Anita Kuno ante...@anteaya.info
 wrote:
 
  On 12/11/2014 09:36 AM, Jon Bernard wrote:
  Heya, quick Ceph CI status update.  Once the test_volume_boot_pattern
  was marked as skipped, only the revert_resize test was failing.  I have
  submitted a patch to nova for this [1], and that yields an all green
  ceph ci run [2].  So at the moment, and with my revert patch, we're in
  good shape.
 
  I will fix up that patch today so that it can be properly reviewed and
  hopefully merged.  From there I'll submit a patch to infra to move the
  job to the check queue as non-voting, and we can go from there.
 
  [1] https://review.openstack.org/#/c/139693/
  [2]
 
 http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html
 
  Cheers,
 
  Please add the name of your CI account to this table:
  https://wiki.openstack.org/wiki/ThirdPartySystems
 
  As outlined in the third party CI requirements:
  http://ci.openstack.org/third_party.html#requirements
 
  Please post system status updates to your individual CI wikipage that is
  linked to this table.
 
 
  How is posting status there different than here :
  https://wiki.openstack.org/wiki/Cinder/third-party-ci-status
 
  thanx,
  deepak
 
 
 
 
 There are over 100 CI accounts now and growing.

 Searching the email archives to evaluate the status of a CI is not
 something that infra will do; we will look on that wikipage or we will
 check the third-party-announce email list (which all third party CI
 systems should be subscribed to, as outlined in the third_party.html
 page linked above).

 If we do not find information where we have asked you to put it and where
 we expect it, we may disable your system until you have fulfilled the
 requirements as outlined in the third_party.html page linked above.

 Sprinkling status updates amongst the emails posted to -dev and
 expecting the infra team and other -devs to find them when needed is
 unsustainable and has been for some time, which is why we came up with
 the wikipage to aggregate them.

 Please direct all further questions about this matter to one of the two
 third-party meetings as linked above.

 Thank you,
 Anita.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-13 Thread Henry
Hi Morgan,

A good question about keystone.

In fact, keystone is naturally suited to multi-region deployment. It has 
only a REST service interface, and PKI-based tokens greatly reduce the central 
service workload. So, unlike other OpenStack services, it would not be set to 
cascade mode.

Best regards
Henry

Sent from my iPad

On 2014-12-13, at 3:12 PM, Morgan Fainberg morgan.fainb...@gmail.com wrote:

 
 
 On Dec 12, 2014, at 10:30, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 
 On Fri, Dec 12, 2014 at 6:50 AM, Russell Bryant rbry...@redhat.com wrote:
 On 12/11/2014 12:55 PM, Andrew Laski wrote:
  Cells can handle a single API on top of globally distributed DCs.  I
  have spoken with a group that is doing exactly that.  But it requires
  that the API is a trusted part of the OpenStack deployments in those
  distributed DCs.
 
 And the way the rest of the components fit into that scenario is far
 from clear to me.  Do you consider this more of a “if you can make it
 work, good for you”, or something we should aim to be more generally
 supported over time?  Personally, I see the “globally distributed
 OpenStack under a single API” case as much more complex, and worth
 considering out of scope for the short to medium term, at least.
 
 For me, this discussion boils down to ...
 
 1) Do we consider these use cases in scope at all?
 
 2) If we consider it in scope, is it enough of a priority to warrant a
 cross-OpenStack push in the near term to work on it?
 
 3) If yes to #2, how would we do it?  Cascading, or something built
 around cells?
 
 I haven't worried about #3 much, because I consider #2 or maybe even #1
 to be a show stopper here.
 
 Agreed
 
 I agree with Russell as well. I also am curious on how identity will work in 
 these cases. As it stands identity provides authoritative information only 
 for the deployment it runs. There is a lot of concern I have from a security 
 standpoint when I start needing to address what the central api can do on the 
 other providers. We have had this discussion a number of times in Keystone, 
 specifically when designing the keystone-to-keystone identity federation, and 
 we came to the conclusion that we needed to ensure that the keystone local to 
 a given cloud is the only source of authoritative authz information. While it 
 may, in some cases, accept authn from a source that is trusted, it still 
 controls the local set of roles and grants. 
 
 Second, we only guarantee that a tenant_id / project_id is unique within a 
 single deployment of keystone (e.g. shared/replicated backends such as a 
 percona cluster, which cannot be the case when crossing between differing 
 IaaS deployers/providers). If there is ever a tenant_id conflict (in theory 
 possible with LDAP assignment or an unlucky random UUID generation) between 
 installations, you end up potentially granting access that should not 
 exist to a given user. 
 
 With that in mind, how does Keystone fit into this conversation? What is 
 expected of identity? What would keystone need to actually support to make 
 this a reality?
 
 I ask because I've only seen information on nova, glance, cinder, and 
 ceilometer in the documentation. Based upon the above information I outlined, 
 I would be concerned with an assumption that identity would just work 
 without also being part of this conversation. 
 
 Thanks,
 Morgan 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-13 Thread Morgan Fainberg
On December 13, 2014 at 3:26:34 AM, Henry (henry4...@gmail.com) wrote:
Hi Morgan,

A good question about keystone.

In fact, keystone is naturally suited to multi-region deployment. It has 
only a REST service interface, and PKI-based tokens greatly reduce the central 
service workload. So, unlike other OpenStack services, it would not be set to 
cascade mode.


I agree that Keystone is suitable for multi-region in some cases, I am still 
concerned from a security standpoint. The cascade examples all assert a 
*global* tenant_id / project_id in a lot of comments/documentation. The answer 
you gave me doesn’t quite address this issue nor the issue of a disparate 
deployment having a wildly different role-set or security profile. A PKI token 
is not (as of today) possible to use with a Keystone (or OpenStack deployment) 
that it didn’t come from. This is because Keystone needs to control 
the AuthZ for its local deployment (same design as the keystone-to-keystone 
federation). 

So I have two direct questions:

* Is there something specific you expect to happen with the cascading that 
makes a project_id resolve to something globally unique, or am I misreading 
this as part of the design? 

* Does the cascade centralization just ask for Keystone tokens for each of the 
deployments or is there something else being done? Essentially how does one 
work with a Nova from cloud XXX and cloud YYY from an authorization standpoint?

You don’t need to answer these right away, but they are clarification points 
that need to be thought about as this design moves forward. There are a number 
of security / authorization questions I can expand on, but the above two are 
the really big ones to start with. As you scale up (or utilize deployments 
owned by different providers) it isn’t always possible to replicate the 
Keystone data around.

Cheers,
Morgan

Best regards
Henry

Sent from my iPad

On 2014-12-13, at 3:12 PM, Morgan Fainberg morgan.fainb...@gmail.com wrote:



On Dec 12, 2014, at 10:30, Joe Gordon joe.gord...@gmail.com wrote:



On Fri, Dec 12, 2014 at 6:50 AM, Russell Bryant rbry...@redhat.com wrote:
On 12/11/2014 12:55 PM, Andrew Laski wrote:
 Cells can handle a single API on top of globally distributed DCs.  I
 have spoken with a group that is doing exactly that.  But it requires
 that the API is a trusted part of the OpenStack deployments in those
 distributed DCs.

And the way the rest of the components fit into that scenario is far
from clear to me.  Do you consider this more of a “if you can make it
work, good for you”, or something we should aim to be more generally
supported over time?  Personally, I see the “globally distributed
OpenStack under a single API” case as much more complex, and worth
considering out of scope for the short to medium term, at least.

For me, this discussion boils down to ...

1) Do we consider these use cases in scope at all?

2) If we consider it in scope, is it enough of a priority to warrant a
cross-OpenStack push in the near term to work on it?

3) If yes to #2, how would we do it?  Cascading, or something built
around cells?

I haven't worried about #3 much, because I consider #2 or maybe even #1
to be a show stopper here.

Agreed

I agree with Russell as well. I also am curious on how identity will work in 
these cases. As it stands identity provides authoritative information only for 
the deployment it runs. There is a lot of concern I have from a security 
standpoint when I start needing to address what the central api can do on the 
other providers. We have had this discussion a number of times in Keystone, 
specifically when designing the keystone-to-keystone identity federation, and 
we came to the conclusion that we needed to ensure that the keystone local to a 
given cloud is the only source of authoritative authz information. While it 
may, in some cases, accept authn from a source that is trusted, it still 
controls the local set of roles and grants. 

Second, we only guarantee that a tenant_id / project_id is unique within a 
single deployment of keystone (e.g. shared/replicated backends such as a 
percona cluster, which cannot be the case when crossing between differing IaaS 
deployers/providers). If there is ever a tenant_id conflict (in theory possible 
with ldap assignment or an unlucky random uuid generation) between 
installations, you end up with potentially granting access that should not 
exist to a given user. 

With that in mind, how does Keystone fit into this conversation? What is 
expected of identity? What would keystone need to actually support to make this 
a reality?

I ask because I've only seen information on nova, glance, cinder, and 
ceilometer in the documentation. Based upon the above information I outlined, I 
would be concerned with an assumption that identity would just work without 
also being part of this conversation. 

Thanks,
Morgan 

[openstack-dev] [qa] Question about nova boot --min-count number

2014-12-13 Thread Danny Choi (dannchoi)
Hi,

According to the help text, “--min-count number” boots at least that number of 
servers (limited by quota):


--min-count number  Boot at least number servers (limited by

quota).


I used devstack to deploy OpenStack (version Kilo) in a multi-node setup:

1 Controller/Network + 2 Compute nodes


I updated the tenant demo default quotas “instances” and “cores” from ‘10’ and 
‘20’ to ‘100’ and ‘200’:


localadmin@qa4:~/devstack$ nova quota-show --tenant 
62fe9a8a2d58407d8aee860095f11550 --user eacb7822ccf545eab9398b332829b476

+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 100   |
| cores                       | 200   |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
| server_groups               | 10    |
| server_group_members        | 10    |
+-----------------------------+-------+

When I boot 50 VMs using “--min-count 50”, only 48 VMs come up.


localadmin@qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 
--nic net-id=5b464333-bad0-4fc1-a2f0-310c47b77a17 --min-count 50 vm-

There is no error in logs; and it happens consistently.

I also tried “--min-count 60” and again only 48 VMs came up.

In Horizon, under “Admin” > “System” > “Hypervisors” in the left pane, it shows 
both Compute hosts, each with 32 total VCPUs for a grand total of 64, but only 
48 used.

Is this normal behavior or is there any other setting to change in order to use 
all 64 VCPUs?
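For what it's worth, a cap of 48 is consistent with nova's default RAM filter rather than a VCPU limit, if each compute host has roughly 8 GB of RAM. Here is a back-of-the-envelope sketch; the host RAM figure and flavor values (m1.tiny: 1 vCPU, 512 MB) are assumptions for illustration, not taken from this report:

```python
def schedulable_instances(host_vcpus, host_ram_mb, flavor_vcpus=1,
                          flavor_ram_mb=512, cpu_ratio=16.0, ram_ratio=1.5):
    """Instances of one flavor that fit on a single host under nova's
    default CoreFilter (cpu_allocation_ratio=16.0) and RamFilter
    (ram_allocation_ratio=1.5)."""
    by_cpu = int(host_vcpus * cpu_ratio // flavor_vcpus)
    by_ram = int(host_ram_mb * ram_ratio // flavor_ram_mb)
    return min(by_cpu, by_ram)

# 32 vCPUs * 16.0 overcommit would allow 512 tiny VMs per host, so CPU is
# not the limit. Assuming ~8 GB RAM per host: 8192 * 1.5 // 512 = 24 VMs
# per host, i.e. 48 across two hosts, matching the observed cap.
per_host = schedulable_instances(host_vcpus=32, host_ram_mb=8192)
total_across_two_hosts = 2 * per_host  # 48
```

Checking free_ram_mb in "nova hypervisor-show" output (or the scheduler logs) would confirm whether RAM is actually the limiting resource here.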

Thanks,
Danny
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Contributing to Solum

2014-12-13 Thread Adrian Otto
Keyvan,

The meeting did happen at 2100 UTC. Are you sure you went to 
#openstack-meeting-alt on Freenode?

http://lists.openstack.org/pipermail/openstack-dev/2014-December/052009.html

I think bug 1308690 would be a great contribution. Please take ownership of 
that bug ticket, and begin!

Thanks,

Adrian

On Dec 13, 2014, at 12:04 AM, Keyvan Mir Mohammad Sadeghi 
keyvan.m.sade...@gmail.commailto:keyvan.m.sade...@gmail.com wrote:


Hi Adrian, Roshan, etc,

We tried to reach you via the IRC channel on Tuesday night at 2100 UTC, but we 
guess that meeting got cancelled?

We'd like to start contributing ASAP, so, going through the list 
of bugs, the one below looked interesting:

https://bugs.launchpad.net/solum/+bug/1308690

Though I'm not sure this is the right one. Would you approve it? If so, are 
there any sources on the topic other than the bug page itself? And about the 
conventions you mentioned in the Skype call: what is the starting point? Do we 
just submit a pull request referencing the bug number?

Kind regards,

Keyvan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Services are now split out and neutron is open for commits!

2014-12-13 Thread Armando M.
This was more of a brute force fix!

I didn't have time to go with finesse, and instead I went in with the
hammer :)

That said, we want to make sure that the upgrade path to Kilo is as
painless as possible, so we'll need to review the Release Notes [1] to
reflect the fact that we'll be providing a seamless migration to the new
adv services structure.

[1] https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Upgrade_Notes_6


Cheers,
Armando

On 12 December 2014 at 09:33, Kyle Mestery mest...@mestery.com wrote:

 This has merged now, FYI.

 On Fri, Dec 12, 2014 at 10:28 AM, Doug Wiegley do...@a10networks.com
 wrote:

  Hi all,

  Neutron grenade jobs have been failing since late afternoon Thursday,
 due to split fallout.  Armando has a fix, and it’s working its way through
 the gate:

  https://review.openstack.org/#/c/141256/

  Get your rechecks ready!

  Thanks,
 Doug


   From: Douglas Wiegley do...@a10networks.com
 Date: Wednesday, December 10, 2014 at 10:29 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron] Services are now split out and
 neutron is open for commits!

   Hi all,

  I’d like to echo the thanks to all involved, and thanks for the
 patience during this period of transition.

  And a logistical note: if you have any outstanding reviews against the
 now missing files/directories (db/{loadbalancer,firewall,vpn}, services/,
 or tests/unit/services), you must re-submit your review against the new
 repos.  Existing neutron reviews for service code will be summarily
 abandoned in the near future.

  Lbaas folks, hold off on re-submitting feature/lbaasv2 reviews.  I’ll
 have that branch merged in the morning, and ping in channel when it’s ready
 for submissions.

  Finally, if any tempest lovers want to take a crack at splitting the
 tempest runs into four, perhaps using salv’s reviews of splitting them in
 two as a guide, and then creating jenkins jobs, we need some help getting
 those going.  Please ping me directly (IRC: dougwig).

  Thanks,
 doug


   From: Kyle Mestery mest...@mestery.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Wednesday, December 10, 2014 at 4:10 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [neutron] Services are now split out and
 neutron is open for commits!

   Folks, just a heads up that we have completed splitting out the
 services (FWaaS, LBaaS, and VPNaaS) into separate repositores. [1] [2] [3].
 This was all done in accordance with the spec approved here [4]. Thanks to
 all involved, but a special thanks to Doug and Anita, as well as infra.
 Without all of their work and help, this wouldn't have been possible!

 Neutron and the services repositories are now open for merges again.
 We're going to be landing some major L3 agent refactoring across the 4
 repositories in the next four days, look for Carl to be leading that work
 with the L3 team.

  In the meantime, please report any issues you have in launchpad [5] as
 bugs, and find people in #openstack-neutron or send an email. We've
 verified things come up and all the tempest and API tests for basic neutron
 work fine.

 In the coming week, we'll be getting all the tests working for the
 services repositories. Medium term, we need to also move all the advanced
 services tempest tests out of tempest and into the respective repositories.
 We also need to beef these tests up considerably, so if you want to help
 out on a critical project for Neutron, please let me know.

 Thanks!
 Kyle

 [1] http://git.openstack.org/cgit/openstack/neutron-fwaas
 [2] http://git.openstack.org/cgit/openstack/neutron-lbaas
 [3] http://git.openstack.org/cgit/openstack/neutron-vpnaas
 [4]
 http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/kilo/services-split.rst
 [5] https://bugs.launchpad.net/neutron






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [oslo] [ha] potential issue with implicit async-compatible mysql drivers

2014-12-13 Thread Mike Bayer

 On Dec 12, 2014, at 1:16 PM, Mike Bayer mba...@redhat.com wrote:
 
 
 On Dec 12, 2014, at 9:27 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:
 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
 
 Reading the latest comments at
 https://github.com/PyMySQL/PyMySQL/issues/275, it seems to me that the
 issue is not to be solved in drivers themselves but instead in
 libraries that arrange connections (sqlalchemy/oslo.db), correct?
 
 Will the proposed connection reopening help?
 
 disagree, this is absolutely a driver bug.  I’ve re-read that last comment 
 and now I see that the developer is suggesting that this condition not be 
 flagged in any way, so I’ve responded.  The connection should absolutely blow 
 up and if it wants to refuse to be usable afterwards, that’s fine (it’s the 
 same as MySQLdb “commands out of sync”).  It just has to *not* emit any 
 further SQL as though nothing is wrong.
 
  It doesn’t matter much for PyMySQL anyway; I don’t know that PyMySQL is up to 
  par for openstack in any case (look at the entries in their changelog, 
  https://github.com/PyMySQL/PyMySQL/blob/master/CHANGELOG: “Several other bug 
  fixes”, “Many bug fixes”... really?  is this an iPhone app?)
 
 We really should be looking to get this fixed in MySQL-connector, which seems 
 to have a similar issue.   It’s just so difficult to get responses from 
 MySQL-connector that the PyMySQL thread is at least informative.

so I spent the rest of yesterday continuing to stare at that example case and 
also continued the thread on that list.

Where I think it’s at is that this is a huge issue in any one or all of:

1. a gevent-style “timeout” puts a monkeypatched socket in an entirely unknown 
state;

2. MySQL’s protocol doesn’t have any provision for matching an OK response to 
the request that it corresponds to;

3. the MySQL drivers we’re dealing with don’t have actual “async” APIs, which 
could then be easily tailored to work with eventlet/gevent safely (see 
https://github.com/zacharyvoase/gevent-psycopg2 and 
https://bitbucket.org/dvarrazzo/psycogreen for the PG examples of these, 
problem solved).

At the moment I’m not fully confident the drivers are going to feasibly be 
able to provide a complete fix here. MySQL sends a status message that is 
essentially “OK”, and there’s not really any way to tell that this “OK” is 
actually from a different statement.

What we need at the very basic level is that, if we call connection.rollback(), 
it either fails with an exception, or it succeeds.   Right now, the core of the 
test case is that we see connection.rollback() silently failing, which then 
causes the next statement (the INSERT) to also fail - then the connection 
rights itself and continues to be usable to complete the transaction.   There 
might be some other variants of this.

So in the interim I have added for SQLA 0.9.9 (which I can also make available 
as part of oslo.db.sqlalchemy.compat if we’d like) a session.invalidate() 
method that will just call connection.invalidate() on the current bound 
connection(s); this is then called within the block where we know that 
eventlet/gevent is in a “timeout” status.

Within the oslo.db.sqlalchemy.enginefacade system, we can potentially add 
direct awareness of eventlet.Timeout 
(http://eventlet.net/doc/modules/timeout.html) as a distinct error condition 
within a transactional block, and invalidate the known connection(s) when this 
is caught.   This would insulate us from this particular issue regardless of 
driver, with the key assumption that it is in fact only a “timeout” condition 
under which this issue actually occurs.
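The proposed behavior can be sketched without any real driver. The `Connection` and `Timeout` classes below are stand-ins (not the oslo.db, SQLAlchemy, or eventlet APIs); the point is only the invariant: once a green-thread timeout interrupts a transaction, the connection must refuse further SQL rather than silently pairing the wrong “OK” with the next statement.

```python
class StaleConnectionError(Exception):
    """Raised when SQL is attempted on an invalidated connection."""

class Connection:
    """Stand-in for a wrapped DBAPI connection."""
    def __init__(self):
        self.invalidated = False

    def execute(self, sql):
        if self.invalidated:
            # Fail loudly instead of emitting SQL on an unknown-state socket.
            raise StaleConnectionError(sql)
        return "OK"

    def invalidate(self):
        self.invalidated = True

class Timeout(Exception):
    """Stand-in for eventlet.Timeout."""

def run_in_transaction(conn, work):
    """If a timeout fires mid-transaction, the socket state is unknown:
    invalidate the connection so it cannot emit further SQL."""
    try:
        return work(conn)
    except Timeout:
        conn.invalidate()
        raise

conn = Connection()

def interrupted(c):
    c.execute("INSERT INTO t VALUES (1)")
    raise Timeout()  # simulate eventlet interrupting the greenthread

try:
    run_in_transaction(conn, interrupted)
except Timeout:
    pass  # caller sees the timeout; the connection is now invalidated
```

After this, any further call such as conn.execute("ROLLBACK") raises instead of silently failing, which is exactly the "either fails with an exception, or succeeds" property described above.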



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] OT: Utah Mid-cycle sprint photos

2014-12-13 Thread Paul Michali (pcm)
I put a link in the Etherpad, to some photos I took from the mid-cycle sprint 
in Utah. Here’s the direct link… http://media.michali.net/21/198/
That gallery doesn’t correctly scale portrait oriented photos initially, but 
you can select a size and it’ll resize it. I’ve got to get to that some day.


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][vpnaas] Working on unit tests...

2014-12-13 Thread Paul Michali (pcm)
For the new VPNaaS repo, I have created 
https://review.openstack.org/#/c/141532/ to move the tests from tests.skip and 
modify the imports. This has Brandon’s change to setup policy.json, and Ihar’s 
one-liner for moving get_admin_context() in one test (should we upstream his, 
and I rebase mine?).

Please look it over, as I’m not sure if I put the override_nvalues() calls in 
the right places or not.

It passes unit tests in Jenkins. The Tempest tests all fail. Is that expected? 
What’s the plan for functional and tempest tests with these other repos?

Thanks!


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] Working on unit tests...

2014-12-13 Thread Brandon Logan
Paul,
It looks like you put that method call in all the right places.  You
 would know if you didn't, because the unit tests would fail because of the
 policy.json.  

Not sure on the tempests tests.  I'm sure Doug and Kyle know more about
that, so hopefully they can chime in.

Thanks,
Brandon

On Sat, 2014-12-13 at 19:06 +, Paul Michali (pcm) wrote:
 For the new VPNaaS repo, I have
 created https://review.openstack.org/#/c/141532/ to move the tests
 from tests.skip and modify the imports. This has Brandon’s change to
 setup policy.json, and Ihar’s one-liner for moving get_admin_context()
 in one test (should we upstream his, and I rebase mine?).
 
 
 Please look it over, as I’m not sure if I put the override_nvalues()
 calls in the right places or not.
 
 
 It passes unit tests in Jenkins. The Tempest tests all fail. Is that
 expected? What’s the plan for functional and tempest tests with these
 other repos?
 
 
 Thanks!
 
 
 
 
 PCM (Paul Michali)
 
 
 MAIL …..…. p...@cisco.com
 IRC ……..… pc_m (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 
 
 
 
 
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] deadline for specs

2014-12-13 Thread Tan, Lin
Hi,

A quick question,
do we have a SpecProposalDeadline for Ironic? Is it 18th Dec, or another date?

Thanks

Best Regards,

Tan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] How to delete a VM which is in ERROR state?

2014-12-13 Thread Christopher Yeoh
If force delete doesn't work please do submit the bug report along with as
much of the relevant nova logs as you can. Even better if it's easily
repeatable with devstack.

Chris
On Sat, 13 Dec 2014 at 8:43 am, pcrews glee...@gmail.com wrote:

 On 12/09/2014 03:54 PM, Ken'ichi Ohmichi wrote:
  Hi,
 
  This case is always tested by Tempest on the gate.
 
  https://github.com/openstack/tempest/blob/master/tempest/
 api/compute/servers/test_delete_server.py#L152
 
  So I guess this problem wouldn't happen on the latest version at least.
 
  Thanks
  Ken'ichi Ohmichi
 
  ---
 
  2014-12-10 6:32 GMT+09:00 Joe Gordon joe.gord...@gmail.com:
 
 
  On Sat, Dec 6, 2014 at 5:08 PM, Danny Choi (dannchoi) 
 dannc...@cisco.com
  wrote:
 
  Hi,
 
  I have a VM which is in ERROR state.
 
 
  +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
  | ID                                   | Name                                         | Status | Task State | Power State | Networks |
  +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
  | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR  | -          | NOSTATE     |          |
  +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
 
 
  I tried both the CLI (“nova delete”) and Horizon (“terminate instance”).
  Both accepted the delete command without any error.
  However, the VM never got deleted.
 
  Is there a way to remove the VM?
 
 
  What version of nova are you using? This is definitely a serious bug,
 you
  should be able to delete an instance in error state. Can you file a bug
 that
  includes steps on how to reproduce the bug along with all relevant logs.
 
  bugs.launchpad.net/nova
 
 
 
  Thanks,
  Danny
 
 
 
 
 
 
 
 Hi,

 I've encountered this in my own testing and have found that it appears
 to be tied to libvirt.

 When I hit this, reset-state as the admin user reports success (and
 state is set), *but* things aren't really working as advertised and
 subsequent attempts to do anything with the errant vm's will send them
 right back into 'FLAIL' / can't delete / endless DELETING mode.

 restarting libvirt-bin on my machine fixes this - after restart, the
 deleting vm's are properly wiped without any further user input to
 nova/horizon and all seems right in the world.

 using:
 devstack
 ubuntu 14.04
 libvirtd (libvirt) 1.2.2

 triggered via:
 lots of random create/reboot/resize/delete requests of varying validity
 and sanity.

 Am in the process of cleaning up my test code so as not to hurt anyone's
 brain with the ugly and will file a bug once done, but thought this
 worth sharing.

 Thanks,
 Patrick


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] deadline for specs

2014-12-13 Thread Devananda van der Veen
Hi, Tan,

No, ironic is not having an early feature proposal freeze for this cycle.
Dec 18 is the kilo-1 milestone, and that is all.

Please see the release schedule here:

https://wiki.openstack.org/wiki/Kilo_Release_Schedule

That being said, the earlier you can propose a spec, the better your
chances for it landing in any given cycle.

Regards,
Devananda




On Sat, Dec 13, 2014, 10:10 PM Tan, Lin lin@intel.com wrote:

Hi,

A quick question,
do we have a SpecProposalDeadline for Ironic? Is it 18th Dec, or another date?

Thanks

Best Regards,

Tan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev