[openstack-dev] [cinder] [nova] Consistency groups?

2014-11-13 Thread Philipp Marek
Hi,

I'm working on the DRBD Cinder driver, and am looking at the Nova side, 
too. Is there any idea how Cinder's consistency groups should be used on 
the Nova nodes?

DRBD has easy support for consistency groups (a DRBD resource is 
a collection of DRBD volumes that share a single, serialized connection) 
and so can guarantee write consistency across multiple volumes. 

[ Which does make sense anyway; eg. with multiple iSCSI
  connections one could break down because of STP or
  other packet loss, and then the storage backend and/or
  snapshots/backups/etc. wouldn't be consistent anymore.]


What I'm missing now is a way to get the consistency group information 
to the Nova nodes. I can easily put such a piece of data into the 
transmitted transport information (along with the storage nodes' IP 
addresses etc.) and use it on the Nova side; but that also means that
on the Nova side there'll be several calls to establish the connection,
and several for tear down - and (to exactly adhere to the API contract)
I'd have to make sure that each individual volume is set up (and closed)
in exactly that order again.

That means quite a few unnecessary external calls, and so on.


Is there some idea, proposal, etc., that says that
   *within a consistency group*
all volumes *have* to be set up and shut down 
   *as a single logical operation*?
[ well, there is one now ;]


Because in that case all volume transport information can (optionally) be 
transmitted in a single data block, with several iSCSI/DRBD/whatever
volumes being set up in a single operation; and later calls (for the other 
volumes in the same group) can be simply ignored as long as they have the
same transport information block in them.
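As a purely hypothetical illustration of that "single data block" idea, the
grouped transport information handed to Nova could look something like the
Python structure below. None of these keys exist in Cinder's connection_info
today; they are invented for this sketch only.

# Hypothetical grouped transport block; the key names are made up.
connection_info = {
    "driver_volume_type": "drbd",
    "data": {
        "consistency_group_id": "cg-0001",
        # Transport details for *all* volumes in the group, so the Nova
        # side can bring them up -- and later tear them down -- in one
        # single operation.
        "volumes": [
            {"volume_id": "vol-a", "device": "/dev/drbd100",
             "hosts": ["192.0.2.10", "192.0.2.11"]},
            {"volume_id": "vol-b", "device": "/dev/drbd101",
             "hosts": ["192.0.2.10", "192.0.2.11"]},
        ],
    },
}

Later connect calls for the other volumes in the group would then be no-ops
as long as they carry the same block.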


Thank you for all pointers to existing proposals, ideas, opinions, etc.


Phil

-- 
: Ing. Philipp Marek
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com :

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Radomir Dopieralski
On 13/11/14 23:30, Martin Geisler wrote:

[...]

> While I agree that it's chaotic, I also think you make the problem worse
> than it really is. First, remember that the user who installs Horizon
> won't need to use the JavaScript based *developer* tools such as npm,
> bower, etc.
> 
> That is, I think Horizon developers will use these tools to produce a
> release -- a tarball -- and that tarball will be something you unpack on
> your webserver and then you're done. I base this on what I've seen in
> the project I've been working on. The release tarball you download here
> doesn't mention npm, bower, or any of the other tools:
> 
>   https://github.com/zerovm/swift-browser/releases
> 
> The tools were used to produce the tarball and were used to test it, but
> they're not part of the released product. Somewhat similar to how GCC
> isn't included in the tarball if you download a pre-compiled binary.

[...]

> Maybe a difference is that you don't (yet) install a web application
> like you install a system application. Instead you *deploy* it: you
> unpack files on a webserver, you configure permissions, you setup cache
> rules, you configure a database, etc.

[...]

I think I see where the misunderstanding is in this whole discussion. It
seems it revolves around the purpose and role of the distribution.

From the naive point of view, the role of a Linux distribution is to
just collect all the software from respective upstream developers and
put it in a single repository, so that it can be easily installed by the
users. That's not the case.

The role of a distribution is to provide a working ecosystem of
software, that is:
a) installed and configured in consistent way,
b) tested to work with the specific versions that it ships with,
c) audited for security,
d) maintained, including security patches,
e) documented, including external tutorials and the like,
f) supported, either by the community or by the companies that provide
support,
g) free of licensing issues and legal risks as much as possible,
h) managed with the common system management tools.

In order to do that, they can't just take a tarball and drop it in a
directory. They always produce their own builds, to make sure it's the
same thing that the source code specifies. They sometimes have to
rearrange configuration files, to make them fit the standards in their
system. They provide sane configuration defaults. They track the
security reports about all the installed components, and apply fixes,
often before the security issue is even announced.

Basically, a distribution adds a whole bunch of additional guarantees
for the software they ship. Those are often long-term guarantees, as
they will be supporting our software long after we have forgotten about
it already.

You say that "web development doesn't work like that". That may be true,
but that's mostly because if you develop a new web application in-house,
and deploy it on your own server, you don't really have such a large legal
risk, configuration complexity or support problem -- you only have to care
about that single application, because the packagers of the distribution
that you are using take care of all the rest of the software on your
server.

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Jastrzebski, Michal


> -Original Message-
> From: Joshua Harlow [mailto:harlo...@outlook.com]
> Sent: Thursday, November 13, 2014 10:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Heat] Using Job Queues for timeout ops
> 
> On Nov 13, 2014, at 10:59 AM, Clint Byrum  wrote:
> 
> > Excerpts from Zane Bitter's message of 2014-11-13 09:55:43 -0800:
> >> On 13/11/14 09:58, Clint Byrum wrote:
> >>> Excerpts from Zane Bitter's message of 2014-11-13 05:54:03 -0800:
>  On 13/11/14 03:29, Murugan, Visnusaran wrote:
> > Hi all,
> >
> > Convergence-POC distributes stack operations by sending resource
> > actions over RPC for any heat-engine to execute. Entire stack
> > lifecycle will be controlled by worker/observer notifications.
> > This distributed model has its own advantages and disadvantages.
> >
> > Any stack operation has a timeout and a single engine will be
> > responsible for it. If that engine goes down, timeout is lost
> > along with it. So a traditional way is for other engines to
> > recreate timeout from scratch. Also a missed resource action
> > notification will be detected only when stack operation timeout
> happens.
> >
> > To overcome this, we will need the following capability:
> >
> > 1.Resource timeout (can be used for retry)
> 
>  I don't believe this is strictly needed for phase 1 (essentially we
>  don't have it now, so nothing gets worse).
> 
> >>>
> >>> We do have a stack timeout, and it stands to reason that we won't
> >>> have a single box with a timeout greenthread after this, so a
> >>> strategy is needed.
> >>
> >> Right, that was 2, but I was talking specifically about the resource
> >> retry. I think we agree on both points.
> >>
>  For phase 2, yes, we'll want it. One thing we haven't discussed
>  much is that if we used Zaqar for this then the observer could
>  claim a message but not acknowledge it until it had processed it,
>  so we could have guaranteed delivery.
> 
> >>>
> >>> Frankly, if oslo.messaging doesn't support reliable delivery then we
> >>> need to add it.
> >>
> >> That is straight-up impossible with AMQP. Either you ack the message
> >> and risk losing it if the worker dies before processing is complete,
> >> or you don't ack the message until it's processed and you become a
> >> blocker for every other worker trying to pull jobs off the queue. It
> >> works fine when you have only one worker; otherwise not so much. This
> >> is the crux of the whole "why isn't Zaqar just Rabbit" debate.
> >>
> >
> > I'm not sure we have the same understanding of AMQP, so hopefully we
> > can clarify here. This stackoverflow answer echoes my understanding:
> >
> > http://stackoverflow.com/questions/17841843/rabbitmq-does-one-
> consumer
> > -block-the-other-consumers-of-the-same-queue
> >
> > Not ack'ing just means they might get retransmitted if we never ack.
> > It doesn't block other consumers. And as the link above quotes from
> > the AMQP spec, when there are multiple consumers, FIFO is not
> guaranteed.
> > Other consumers get other messages.
> >
> > So just add the ability for a consumer to read, work, ack to
> > oslo.messaging, and this is mostly handled via AMQP. Of course that
> > also likely means no zeromq for Heat without accepting that messages
> > may be lost if workers die.
> >
> > Basically we need to add something that is not "RPC" but instead
> > "jobqueue" that mimics this:
> >
> > http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo/messa
> > ging/rpc/dispatcher.py#n131
> >
> > I've always been suspicious of this bit of code, as it basically means
> > that if anything fails between that call, and the one below it, we
> > have lost contact, but as long as clients are written to re-send when
> > there is a lack of reply, there shouldn't be a problem. But, for a job
> > queue, there is no reply, and so the worker would dispatch, and then
> > acknowledge after the dispatched call had returned (including having
> > completed the step where new messages are added to the queue for any
> > newly-possible children).
> >
> > Just to be clear, I believe what Zaqar adds is the ability to peek at
> > a specific message ID and not affect it in the queue, which is
> > entirely different than ACK'ing the ones you've already received in your
> session.
> >
> >> Most stuff in OpenStack gets around this by doing synchronous calls
> >> across oslo.messaging, where there is an end-to-end ack. We don't
> >> want that here though. We'll probably have to make do with having
> >> ways to recover after a failure (kick off another update with the
> >> same data is always an option). The hard part is that if something
> >> dies we don't really want to wait until the stack timeout to start
> recovering.
> >>
> >
> > I fully agree. Josh's point about using a coordination service like
> > Zookeeper to maintain liveness is an interesting one here. If we just
> > make sure that all the workers that have claimed work off the queue are
> > alive, that should be sufficient to prevent a hanging stack situation
> > like you describe above.

Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Jastrzebski, Michal


> -Original Message-
> From: Clint Byrum [mailto:cl...@fewbar.com]
> Sent: Thursday, November 13, 2014 8:00 PM
> To: openstack-dev
> Subject: Re: [openstack-dev] [Heat] Using Job Queues for timeout ops
> 
> Excerpts from Zane Bitter's message of 2014-11-13 09:55:43 -0800:
> > On 13/11/14 09:58, Clint Byrum wrote:
> > > Excerpts from Zane Bitter's message of 2014-11-13 05:54:03 -0800:
> > >> On 13/11/14 03:29, Murugan, Visnusaran wrote:
> > >>> Hi all,
> > >>>
> > >>> Convergence-POC distributes stack operations by sending resource
> > >>> actions over RPC for any heat-engine to execute. Entire stack
> > >>> lifecycle will be controlled by worker/observer notifications.
> > >>> This distributed model has its own advantages and disadvantages.
> > >>>
> > >>> Any stack operation has a timeout and a single engine will be
> > >>> responsible for it. If that engine goes down, timeout is lost
> > >>> along with it. So a traditional way is for other engines to
> > >>> recreate timeout from scratch. Also a missed resource action
> > >>> notification will be detected only when stack operation timeout
> happens.
> > >>>
> > >>> To overcome this, we will need the following capability:
> > >>>
> > >>> 1.Resource timeout (can be used for retry)
> > >>
> > >> I don't believe this is strictly needed for phase 1 (essentially we
> > >> don't have it now, so nothing gets worse).
> > >>
> > >
> > > We do have a stack timeout, and it stands to reason that we won't
> > > have a single box with a timeout greenthread after this, so a
> > > strategy is needed.
> >
> > Right, that was 2, but I was talking specifically about the resource
> > retry. I think we agree on both points.
> >
> > >> For phase 2, yes, we'll want it. One thing we haven't discussed
> > >> much is that if we used Zaqar for this then the observer could
> > >> claim a message but not acknowledge it until it had processed it,
> > >> so we could have guaranteed delivery.
> > >>
> > >
> > > Frankly, if oslo.messaging doesn't support reliable delivery then we
> > > need to add it.
> >
> > That is straight-up impossible with AMQP. Either you ack the message
> > and risk losing it if the worker dies before processing is complete,
> > or you don't ack the message until it's processed and you become a
> > blocker for every other worker trying to pull jobs off the queue. It
> > works fine when you have only one worker; otherwise not so much. This
> > is the crux of the whole "why isn't Zaqar just Rabbit" debate.
> >
> 
> I'm not sure we have the same understanding of AMQP, so hopefully we can
> clarify here. This stackoverflow answer echoes my understanding:
> 
> http://stackoverflow.com/questions/17841843/rabbitmq-does-one-
> consumer-block-the-other-consumers-of-the-same-queue
> 
> Not ack'ing just means they might get retransmitted if we never ack. It
> doesn't block other consumers. And as the link above quotes from the
> AMQP spec, when there are multiple consumers, FIFO is not guaranteed.
> Other consumers get other messages.
> 
> So just add the ability for a consumer to read, work, ack to oslo.messaging,
> and this is mostly handled via AMQP. Of course that also likely means no
> zeromq for Heat without accepting that messages may be lost if workers die.
> 
> Basically we need to add something that is not "RPC" but instead "jobqueue"
> that mimics this:
> 
> http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo/messagin
> g/rpc/dispatcher.py#n131
> 
> I've always been suspicious of this bit of code, as it basically means that if
> anything fails between that call, and the one below it, we have lost contact,
> but as long as clients are written to re-send when there is a lack of reply,
> there shouldn't be a problem. But, for a job queue, there is no reply, and so
> the worker would dispatch, and then acknowledge after the dispatched call
> had returned (including having completed the step where new messages are
> added to the queue for any newly-possible children).
> 
> Just to be clear, I believe what Zaqar adds is the ability to peek at a 
> specific
> message ID and not affect it in the queue, which is entirely different than
> ACK'ing the ones you've already received in your session.
> 
> > Most stuff in OpenStack gets around this by doing synchronous calls
> > across oslo.messaging, where there is an end-to-end ack. We don't want
> > that here though. We'll probably have to make do with having ways to
> > recover after a failure (kick off another update with the same data is
> > always an option). The hard part is that if something dies we don't
> > really want to wait until the stack timeout to start recovering.
> >
> 
> I fully agree. Josh's point about using a coordination service like Zookeeper 
> to
> maintain liveness is an interesting one here. If we just make sure that all 
> the
> workers that have claimed work off the queue are alive, that should be
> sufficient to prevent a hanging stack situation like you describe above.
> 
> 
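For reference, the read/work/ack pattern described above looks roughly like
the kombu sketch below (the queue name is made up and this is not
oslo.messaging code; it only shows acking after the work is done).

from kombu import Connection


def do_work(payload):
    print("processing %s" % payload)


with Connection("amqp://guest:guest@localhost//") as conn:
    queue = conn.SimpleQueue("heat-jobs")        # hypothetical queue name
    message = queue.get(block=True, timeout=10)  # read
    do_work(message.payload)                     # work first...
    message.ack()                                # ...ack only once done
    queue.close()

If the worker dies before the ack, the broker redelivers the message to
another consumer; it does not block other consumers from receiving other
messages, which is the liveness property being discussed.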

[openstack-dev] [OpenStack-dev][CI Jenkins] ERROR: openstack The plugin token_endpoint could not be found

2014-11-13 Thread Rick Chen
Hi, 
Starting today, our third-party CI system is unable to start devstack. It was
working fine yesterday; I don't know what happened.
Looks like this was opened as a DevStack bug too:
https://launchpad.net/bugs/1392231
And discussed in this ML thread:
http://lists.openstack.org/pipermail/openstack-dev/2014-November/050350.html


2014-11-13 07:31:45.032 | ERROR: openstack The plugin token_endpoint could not 
be found
2014-11-13 07:31:45.053 | + endpoint_id=
2014-11-13 07:31:45.053 | + echo
2014-11-13 07:31:45.053 |
2014-11-13 07:31:45.053 | ++ get_or_create_project swifttenanttest1
2014-11-13 07:31:45.053 | ++ local os_cmd=openstack
2014-11-13 07:31:45.053 | ++ local domain=
2014-11-13 07:31:45.053 | ++ [[ ! -z '' ]]
2014-11-13 07:31:45.054 | +++ openstack project show swifttenanttest1 -f value 
-c id
2014-11-13 07:31:45.372 | +++ openstack project create swifttenanttest1 -f 
value -c id
2014-11-13 07:31:45.673 | ERROR: openstack The plugin token_endpoint could not 
be found
2014-11-13 07:31:45.693 | ++ local project_id=
2014-11-13 07:31:45.693 | ++ echo
2014-11-13 07:31:45.693 | + local swift_tenant_test1=
2014-11-13 07:31:45.693 | + die_if_not_set 606 swift_tenant_test1 'Failure 
creating swift_tenant_test1'
2014-11-13 07:31:45.693 | + local exitcode=0
2014-11-13 07:31:45.695 | [Call Trace]
2014-11-13 07:31:45.695 | ./stack.sh:1028:create_swift_accounts
2014-11-13 07:31:45.695 | /opt/stack/new/devstack/lib/swift:606:die_if_not_set
2014-11-13 07:31:45.695 | /opt/stack/new/devstack/functions-common:284:die
2014-11-13 07:31:45.697 | [ERROR] /opt/stack/new/devstack/functions-common:606 
Failure creating swift_tenant_test1
2014-11-13 07:31:46.698 | exit_trap: cleaning up child processes
2014-11-13 07:31:46.698 | Error on exit

What should I do next? Or does anyone have a solution?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Is this fix introducing another different bug to dhcp-agent?

2014-11-13 Thread Miguel Ángel Ajo
Wow Xu!, that was fast,  

Thank you very much.  


Miguel Ángel
ajo @ freenode.net


On Friday, 14 November 2014 at 04:01, Xu Han Peng wrote:

> I opened a new bug and submitted a fix for this problem since it was 
> introduced by my previous patch.
>  
> https://bugs.launchpad.net/neutron/+bug/1392564
> https://review.openstack.org/#/c/134432/
>  
> It will be great if you can have a look at the fix and comment. Thanks!
>  
> Xu Han
>  
> On 11/14/2014 05:54 AM, Ihar Hrachyshka wrote:
> > Robert, Miguel, do you plan to take care of the bug and the fix, or you
> > need help? RDO depends on the fix, also we should introduce the fix
> > before the next Juno release that includes the bad patch, so I would be
> > glad to step in if you don't have spare cycles.
> > /Ihar
> >
> > On 13/11/14 16:44, Robert Li (baoli) wrote:
> > > Nice catch. Since it’s already merged, a new bug may be in order.
> > > —Robert
> > >
> > > On 11/13/14, 10:25 AM, "Miguel Ángel Ajo" <majop...@redhat.com> wrote:
> > > I believe this fix to IPv6 dhcp spawn breaks isolated metadata when we
> > > have a subnet combination like this on a network:
> > > 1) IPv6 subnet, with DHCP enabled
> > > 2) IPv4 subnet, with isolated metadata enabled.
> > > https://review.openstack.org/#/c/123671/1/neutron/agent/dhcp_agent.py
> > > I haven’t been able to test yet, but wanted to share it before I forget.
> > >
> > > Miguel Ángel
> > > ajo @ freenode.net
>
> --
> Thanks,
> Xu Han
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  
>  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron mid-cycle announcement

2014-11-13 Thread Miguel Ángel Ajo
I was willing to go (the plane is ~850€ from Madrid), but it would put
too much stress on my family, as it's too close to the summit.
(I need to add at least 3 more days because of flights.)

I'd love to participate remotely, even with the timezone differences.
If you find a way to leave “homework" for those of us on CEST without
making your own work more difficult, I'd allocate those days to help
wherever it's needed.


Miguel Ángel
ajo @ freenode.net


On Friday, 14 November 2014 at 00:17, Kyle Mestery wrote:

> A severe typo hopefully didn't result in people booking week and a half trips 
> to Lehi!
>  
> The mid-cycle is as originally planned: December 8-10.
>  
> Thanks,
> Kyle
>  
> > On Nov 13, 2014, at 2:04 PM, Carl Baldwin <c...@ecbaldwin.net> wrote:
> >  
> > > On Thu, Nov 13, 2014 at 1:00 PM, Salvatore Orlando <sorla...@nicira.com> wrote:
> > > No worries,
> > >  
> > > you get one day off over the weekend. And you also get to choose if it's
> > > saturday or sunday.
> > >  
> >  
> >  
> > I didn't think it was going to be a whole day.
> >  
> > > Salvatore
> > >  
> > > > On 13 November 2014 20:05, Kevin Benton <blak...@gmail.com> wrote:
> > > >  
> > > > December 8-19? 11 day mid-cycle seems a little intense...
> >  
> > If you thought the summits fried your brain...
> >  
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >  
>  
>  
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  
>  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] HA cross project session summary and next steps

2014-11-13 Thread Miguel Ángel Ajo
Thank you for sharing, I missed that session.

Somewhat related to the health checks: https://review.openstack.org/#/c/97748/

This is a spec/functionality for oslo that I'm working on, to provide feedback
to the process manager that runs the daemons (init.d, pacemaker, systemd,
pacemaker+systemd, upstart).

The idea is that the daemons themselves could provide feedback about their
inner status, with a status code + status message, to allow, for example,
degraded operation.
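A minimal sketch of one way a daemon could push such status back to the
process manager is systemd's sd_notify protocol; the helper name and the
status format below are assumptions for illustration, not what the spec
defines.

import os
import socket


def notify_status(code, message):
    # Hypothetical helper: sends "STATUS=..." over systemd's NOTIFY_SOCKET.
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return  # not running under systemd
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # abstract socket namespace
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.sendto(("STATUS=[%d] %s" % (code, message)).encode("utf-8"), addr)
    finally:
        sock.close()


notify_status(1, "degraded: 1 of 2 rabbit connections down")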

Feedback/comments on the spec are appreciated.

Best regards,
Miguel Ángel



Miguel Ángel
ajo @ freenode.net


On Thursday, 13 November 2014 at 12:59, Angus Salkeld wrote:

> On Tue, Nov 11, 2014 at 12:13 PM, Angus Salkeld <asalk...@mirantis.com> wrote:
> > Hi all
> >  
> > The HA session was really well attended and I'd like to give some feedback 
> > from the session.
> >  
> > Firstly there is some really good content here: 
> > https://etherpad.openstack.org/p/kilo-crossproject-ha-integration
> >  
> > 1. We SHOULD provide better health checks for OCF resources 
> > (http://linux-ha.org/wiki/OCF_Resource_Agents).  
> > These should be fast and reliable. We should probably bike shed on some 
> > convention like "-manage healthcheck"
> > and then roll this out for each project.
> >  
> > 2. We should really move 
> > https://github.com/madkiss/openstack-resource-agents to stackforge or 
> > openstack if the author is agreeable to it (it's referred to in our 
> > official docs).
> >  
>  
> I have chatted to the author of this repo and he is happy for it to live 
> under stackforge or openstack. Or each OCF resource going into each of the 
> projects.
> Does anyone have any particular preference? I suspect stackforge will be the 
> path of least resistance.
>  
> -Angus
>   
> > 3. All services SHOULD support Active/Active configurations
> > (better scaling and it's always tested)
> >  
> > 4. We should be testing HA (there are a number of ideas on the etherpad 
> > about this)
> >  
> > 5. Many services do not recovery in the case of failure mid-task
> > This seems like a big problem to me (some leave the DB in a mess). 
> > Someone linked to an interesting article (
> > crash-only-software: http://lwn.net/Articles/191059/) 
> > (http://lwn.net/Articles/191059/) that suggests that we if we do this 
> > correctly we should not need the concept of clean shutdown.
> >  
> > (https://github.com/openstack/oslo-incubator/blob/master/openstack/common/service.py#L459-L471)
> >  I'd be interested in how people think this needs to be approached 
> > (just raise bugs for each?).
> >  
> > Regards
> > Angus
>  
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  
>  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] LTFS integration with OpenStack Swift for scenario like - Data Archival as a Service .

2014-11-13 Thread Sachin Goswami
Hi,
We would like to know your opinion about the following.

In OpenStack Swift, the XFS file system is integrated, which provides a
maximum file system size of 8 exbibytes minus one byte (2^63 - 1 bytes). We
are studying the use of LTFS integration with OpenStack Swift for a scenario
like Data Archival as a Service.

Was integration of LTFS with Swift considered before? If so, can you
please share your study output?
Will integration of LTFS with Swift fit into the existing Swift architecture?

We would like to know your opinion about this and the pros/cons.
LTFS Link: 
http://en.wikipedia.org/wiki/Linear_Tape_File_System  


Best Regards
Sachin Goswami


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][devstack] CI failed The plugin token_endpoint could not be found

2014-11-13 Thread Wan, Sam
Hi,

  It seems we need to use python-keystoneclient and python-openstackclient from
git.openstack.org, because the versions on pip don't work.
  But the latest update of stack.sh uses pip by default:

if use_library_from_git "python-openstackclient"; then
git_clone_by_name "python-openstackclient"
setup_dev_lib "python-openstackclient"
else
   pip_install python-openstackclient
fi


By looking at use_library_from_git in functions-common, you’ll see
function use_library_from_git {
local name=$1
local enabled=1
[[ ,${LIBS_FROM_GIT}, =~ ,${name}, ]] && enabled=0
return $enabled
}


LIBS_FROM_GIT, which gets its value from DEVSTACK_PROJECT_FROM_GIT
(devstack-gate/devstack-vm-gate.sh), is empty by default.
I think we should set a default value for DEVSTACK_PROJECT_FROM_GIT to fix this.

For now, we can work around this issue by setting DEVSTACK_PROJECT_FROM_GIT
in our CI:
export DEVSTACK_PROJECT_FROM_GIT=python-keystoneclient,python-openstackclient


This works for me. Hope it helps.

Thanks and regards
Sam

From: Patrick East [mailto:patrick.e...@purestorage.com]
Sent: Friday, November 14, 2014 4:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [infra][devstack] CI failed The plugin 
token_endpoint could not be found

I'm running into this issue as well on my CI. Any ideas on how to fix this?

My CI is behaving similarly to the official Jenkins with regard to using pip to
install the clients, and pip freeze shows the same versions on each.

Comparing
http://logs.openstack.org/70/124370/7/check/check-tempest-dsvm-full/d6a53b7/logs/devstacklog.txt.gz#_2014-11-13_18_48_32_860
and the same spot in
http://ec2-54-69-246-234.us-west-2.compute.amazonaws.com/purestorageci/MANUALLY_TRIGGERED_272/devstacklog.txt
they both fail the “use_library_from_git" check for keystoneclient and
openstackclient.

Any suggestions would be much appreciated!

-Patrick


On Wed, Nov 12, 2014 at 10:22 PM, Itsuro ODA <o...@valinux.co.jp> wrote:
Hi,

> I'm wondering why you are just hitting it now? Does your CI pull the
> latest python-keystoneclient and python-openstackclient from master?

Yes, it did before it began to fail, but now it does not, because of this change:
https://github.com/openstack-dev/devstack/commit/8f8e2d1fbfa4c51f6b68a6967e330cd478f979ee

Now python-*client are installed by pip install instead of git clone.

I think this change causes the problem. But I don't understand why some
CIs succeed and others (including mine) fail, or how to fix the problem.

Thanks.
Itsuro Oda

On Thu, 13 Nov 2014 00:36:41 -0500
Steve Martinelli <steve...@ca.ibm.com> wrote:

> About a month ago, we made changes to python-openstackclient that seem
> related to the error message you posted. Change is
> https://review.openstack.org/#/c/127655/3/setup.cfg
> I'm wondering why you are just hitting it now? Does your CI pull the
> latest python-keystoneclient and python-openstackclient from master?
>
> Thanks,
>
> _
> Steve Martinelli
> OpenStack Development - Keystone Core Member
> Phone: (905) 413-2851
> E-Mail: steve...@ca.ibm.com
>
>
>
> From:   Itsuro ODA <o...@valinux.co.jp>
> To: openstack-dev@lists.openstack.org
> Date:   11/12/2014 11:27 PM
> Subject:[openstack-dev] [infra][devstack] CI failed The plugin
> token_endpoint could not be found
>
>
>
> Hi,
>
> My third party CI becomes failed from about 21:00 12th UTC
> in execution of devstack.
>
> The error occurs at "openstack project create admin -f value -c id"
> ---
> ERROR: openstack The plugin token_endpoint could not be found
> ---
>
> I found some CIs have same problem.
>
> Does anyone give me a hint to solve this problem ?
>
> Thanks.
> --
> Itsuro ODA <o...@valinux.co.jp>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

--
Itsuro ODA <o...@valinux.co.jp>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
-Patrick
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Is this fix introducing another different bug to dhcp-agent?

2014-11-13 Thread Xu Han Peng
I opened a new bug and submitted a fix for this problem since it was 
introduced by my previous patch.


https://bugs.launchpad.net/neutron/+bug/1392564
https://review.openstack.org/#/c/134432/

It will be great if you can have a look at the fix and comment. Thanks!

Xu Han

On 11/14/2014 05:54 AM, Ihar Hrachyshka wrote:


Robert, Miguel,
do you plan to take care of the bug and the fix, or you need help? RDO
depends on the fix, also we should introduce the fix before the next
Juno release that includes the bad patch, so I would be glad to step
in if you don't have spare cycles.
/Ihar

On 13/11/14 16:44, Robert Li (baoli) wrote:

Nice catch. Since it’s already merged, a new bug may be in order.

—Robert

On 11/13/14, 10:25 AM, "Miguel Ángel Ajo" <majop...@redhat.com> wrote:

I believe this fix to IPv6 dhcp spawn breaks isolated metadata
when we have a subnet combination like this on a network:

1) IPv6 subnet, with DHCP enabled 2) IPv4 subnet, with isolated
metadata enabled.


https://review.openstack.org/#/c/123671/1/neutron/agent/dhcp_agent.py

  I haven’t been able to test yet, but wanted to share it before I
forget.




Miguel Ángel ajo @ freenode.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Thanks,
Xu Han

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Supported (linux) distributions

2014-11-13 Thread Tony Breeds
Hi All,
I'm looking for a description of which Linux distributions we as
developers expect to support OpenStack on. I haven't found anything that
summarises this.

So far the closest I've come is:
1) http://docs.openstack.org/index.html
   We have:
Installation Guide for Debian 7
Installation Guide for openSUSE 13.1 and SUSE Linux Enterprise Server 
11 SP3
Installation Guide for Red Hat Enterprise Linux 7, CentOS 7, and Fedora 
20
Installation Guide for Ubuntu 14.04

2) 
https://github.com/openstack-dev/devstack/blame/master/doc/source/overview.rst#L18-L33
   which defines our CI strategy as:
Ubuntu LTS + dev (14.04 and 14.10)
Fedora $current and $current-1 (20 and 19)
RHEL $current (7)

These 2 lists don't have 100% overlap, so perhaps the union would be a
reasonable starting place?

Why am I asking this?

I'm trying to solve a bug[1]. The bug is in a system-provided tool (qemu-img).
My belief is that we should add a workaround[2] to nova (and possibly cinder,
but I'd need to check) until *all* 'supported' distributions have the qemu-img
fix, at which point we could revert/remove the workaround. So I'm trying to
define the list of distributions I need to work with.
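Roughly, such a version-gated workaround could look like the sketch below;
the helper name and the exact version boundary are assumptions for
illustration, not the actual nova patch.

import re
import subprocess


def qemu_img_version():
    # Parse "qemu-img version X.Y.Z, ..." from the tool itself.
    out = subprocess.check_output(["qemu-img", "--version"]).decode("utf-8")
    match = re.search(r"version (\d+)\.(\d+)(?:\.(\d+))?", out)
    return tuple(int(x) for x in match.groups("0"))


def needs_workaround():
    # Keep the fallback path until every supported distribution ships a
    # fixed qemu-img, then delete this check and the workaround together.
    return qemu_img_version() < (2, 1, 0)  # assumed boundary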

I'm happy to add something to the wiki once I know what that something is.

Yours Tony.

[1] https://review.openstack.org/#/c/123957/
[2] I readily acknowledge that there is an open question as to what that
workaround is.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][kite] oslo.messaging changes for message security

2014-11-13 Thread Jamie Lennox
Hi all,

To implement kite we need the ability to sign and encrypt the message and the
message data. This needs to happen at a very low level in the oslo.messaging
stack. The existing message security review
(https://review.openstack.org/#/c/109806/) isn't going to be sufficient. It
allows us to sign/encrypt only the message data ignoring the information in the
context and not allowing us to sign the message as a whole. It would also
intercept and sign notifications which is not something that kite can do.

Mostly this is an issue of how the oslo.messaging library is constructed. The
choice of how data is serialized for transmission (including things like how
you arrange context and message data in the payload) is handled individually by
the driver layer rather than in a common higher location. All the drivers use
the same helper functions for this and so it isn't a problem in practice.

Essentially I need a stateful serializing/deserializing object (I need to store
keys and hold things like a connection to the kite server) that either extends
or replaces oslo.messaging._drivers.common.serialize_msg and deserialize_msg
and their exception counterparts.

There are a couple of ways I can see to do what I need:

1. Kite becomes a more integral part of oslo.messaging and the marshalling and
verification code becomes part of the existing RPC path. This is how it was
initially proposed, it does not provide a good story for future or alternative
implementations. Oslo.messaging would either have a dependency on kiteclient,
implement its own ways of talking to the server, or have some hack that imports
kiteclient if available.

2. Essentially I add a global object loaded from conf to the existing common
RPC file. Pro: The existing drivers continue to work as today, Con: global
state held by a library. However given the way oslo messaging works i'm not
really sure how much of a problem this is. We typically load transport from a
predefined location in the conf file and we're not really in a situation where
you might want to construct different transports with different parameters in
the same project.

3. I create a protocol object out of the RPC code that kite can subclass and
the protocol can be chosen by CONF when the transport/driver is created. This
still touches a lot of places as the protocol object would need to be passed to
all messages, consumers etc. It involves changing the interface of the drivers
to accept this new object and changes in each of the drivers to work with the
new protocol object rather than the existing helpers.

4. As the last option requires changing the driver interface anyway we try and
correct the driver interfaces completely. The driver send and receive functions
that currently accept a context and args parameters should only accept a
generic object/string consisting of already marshalled data. The code that
handles serializing and deserializing gets moved to a higher level and kite
would be pluggable there with the current RPC being default.
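To make options 3/4 a bit more concrete, the protocol object could look
something like the sketch below. All class and method names here are invented
for illustration, and the kite client calls are hypothetical.

class MessageProtocol(object):
    """Turns (ctxt, message) into an opaque payload for the driver, and back."""

    def serialize(self, ctxt, message):
        raise NotImplementedError

    def deserialize(self, payload):
        raise NotImplementedError


class KiteProtocol(MessageProtocol):
    """Stateful: holds keys and a connection to the kite server."""

    def __init__(self, kite_client, key_cache):
        self._kite = kite_client
        self._keys = key_cache

    def serialize(self, ctxt, message):
        payload = {"context": ctxt, "message": message}
        return self._kite.sign_and_encrypt(payload, self._keys)  # hypothetical call

    def deserialize(self, payload):
        payload = self._kite.verify_and_decrypt(payload, self._keys)  # hypothetical call
        return payload["context"], payload["message"]

With option 4, the drivers would only ever see the already-marshalled payload
returned by serialize().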

None of these options involve changing the public facing interfaces nor the
messages emitted on the wire (when kite is not used).

I've been playing a little with option 3 and I don't think it's worth it. There
is a lot of code change and additional object passing that I don't think
improves the library in general.

Before I go too far down the path with option 4 I'd like to hear the thoughts
of people more familiar with the library.

Is there a reason that the drivers currently handle marshalling rather than the
RPC layer?

I know there is ongoing talk about evolving the oslo.messaging library, I
unfortunately didn't make it to the sessions at summit. Has this problem been
raised? How would it affect those talks?

Is there explicit/implicit support for out of tree drivers that would disallow
changing these interfaces?

Does anyone have alternative ideas on how to organize the library for message
security?

Thanks for the help.


Jamie

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][nova] Doc build failure during backport

2014-11-13 Thread Fei Long Wang
Greetings,

Recently I have been working on fixing Nova evacuate bugs for RBD, and both
patches have been merged in Kilo[1,2]. But while backporting them to
Juno/Icehouse, one patch got a documentation build failure, see
http://logs.openstack.org/26/131626/3/check/gate-nova-docs/789d9bd/console.html
Therefore, I had to change the docstring format a little bit, but it seems
some reviewers have concerns about this. Personally I think it's OK, given
that we can even resolve conflicts by changing code during backporting.
So may I get some suggestions/comments about this situation? Thanks.

[1] https://review.openstack.org/131626
[2] https://review.openstack.org/131613

-- 
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] horizon failed due to django compressor

2014-11-13 Thread gong_ys2004
Hi, I installed devstack on my Fedora 20 system. The process seemed to go
well, because I saw the normal output from stack.sh:
Horizon is now available at http://192.168.88.225/
Keystone is serving at http://192.168.88.225:5000/v2.0/
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: admin
This is your host ip: 192.168.88.225
2014-11-14 00:03:05.159 | stack.sh completed in 249 seconds

but when I tried to access Horizon at http://192.168.88.225/
it always said the following:

FilterError at /
/bin/sh: django_pyscss.compressor.DjangoScssFilter: command not found
Request Method: GET
Request URL: http://localhost/
Django Version: 1.6.7
Exception Type: FilterError
Exception Value: /bin/sh: django_pyscss.compressor.DjangoScssFilter: command not found
Exception Location: /usr/lib/python2.7/site-packages/compressor/filters/base.py in input, line 173
Python Executable: /usr/bin/python
Python Version: 2.7.5
Python Path: ['/opt/stack/horizon/openstack_dashboard/wsgi/../..',
 '/usr/lib/python2.7/site-packages/XStatic_Spin-1.2.5.2-py2.7.egg',
 '/usr/lib/python2.7/site-packages/XStatic_Rickshaw-1.5.0.0-py2.7.egg',
 '/usr/lib/python2.7/site-packages/XStatic_QUnit-1.14.0.2-py2.7.egg',
 '/usr/lib/python2.7/site-packages/XStatic_JSEncrypt-2.0.0.2-py2.7.egg',
 '/usr/lib/python2.7/site-packages/XStatic_jquery_ui-1.11.0.1-py2.7.egg',
 
'/usr/lib/python2.7/site-packages/XStatic_JQuery.TableSorter-2.14.5.1-py2.7.egg',
 
'/usr/lib/python2.7/site-packages/XStatic_JQuery.quicksearch-2.0.3.1-py2.7.egg',
 '/usr/lib/python2.7/site-packages/XStatic_JQuery_Migrate-1.2.1.1-py2.7.egg',
 '/usr/lib/python2.7/site-packages/XStatic_jQuery-1.10.2.1-py2.7.egg',
 '/usr/lib/python2.7/site-packages/XStatic_Jasmine-1.3.1.1-py2.7.egg',
 '/usr/lib/python2.7/site-packages/XStatic_Font_Awesome-4.1.0.1-py2.7.egg',
 '/usr/lib/python2.7/site-packages/XStatic_Hogan-2.0.0.2-py2.7.egg',
 '/usr/lib/python2.7/site-packages/XStatic_D3-3.1.6.2-py2.7.egg',
 '/usr/lib/python2.7/site-packages/XStatic_Bootstrap_SCSS-3.2.0.0-py2.7.egg',
 
'/usr/lib/python2.7/site-packages/XStatic_Bootstrap_Datepicker-1.3.1.0-py2.7.egg',
 '/usr/lib/python2.7/site-packages/XStatic_Angular_Mock-1.2.1.1-py2.7.egg',
 '/usr/lib/python2.7/site-packages/XStatic_Angular_Cookies-1.2.1.1-py2.7.egg',
 '/usr/lib/python2.7/site-packages/XStatic_Angular-1.2.1.1-py2.7.egg',
 '/usr/lib/python2.7/site-packages/XStatic-1.0.1-py2.7.egg',
 '/usr/lib/python2.7/site-packages/python_swiftclient-2.3.1-py2.7.egg',
 '/usr/lib/python2.7/site-packages/python_novaclient-2.20.0-py2.7.egg',
 '/usr/lib/python2.7/site-packages/python_heatclient-0.2.12-py2.7.egg',
 '/usr/lib/python2.7/site-packages/python_glanceclient-0.14.2-py2.7.egg',
 '/usr/lib/python2.7/site-packages/python_cinderclient-1.1.1-py2.7.egg',
 '/usr/lib/python2.7/site-packages/pyScss-1.3.0.a1-py2.7-linux-x86_64.egg',
 '/usr/lib/python2.7/site-packages/django_pyscss-1.0.6-py2.7.egg',
 '/usr/lib/python2.7/site-packages/enum34-1.0.3-py2.7.egg',
 '/opt/stack/keystone',
 '/opt/stack/glance_store',
 '/opt/stack/glance',
 '/opt/stack/neutron',
 '/opt/stack/nova',
 '/opt/stack/horizon',
 '/usr/lib64/python27.zip',
 '/usr/lib64/python2.7',
 '/usr/lib64/python2.7/plat-linux2',
 '/usr/lib64/python2.7/lib-tk',
 '/usr/lib64/python2.7/lib-old',
 '/usr/lib64/python2.7/lib-dynload',
 '/usr/lib64/python2.7/site-packages',
 '/usr/lib64/python2.7/site-packages/gtk-2.0',
 '/usr/lib/python2.7/site-packages',
 '/opt/stack/horizon/openstack_dashboard']
I appreciate any help, thanks


Yong sheng gong
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Joshua Harlow
On Nov 13, 2014, at 4:08 PM, Clint Byrum  wrote:

> Excerpts from Joshua Harlow's message of 2014-11-13 14:01:14 -0800:
>> On Nov 13, 2014, at 7:10 AM, Clint Byrum  wrote:
>> 
>>> Excerpts from Joshua Harlow's message of 2014-11-13 00:45:07 -0800:
 A question;
 
 How is using something like celery in heat vs taskflow in heat (or at 
 least concept [1]) 'to many code change'.
 
 Both seem like change of similar levels ;-)
 
>>> 
>>> I've tried a few times to dive into refactoring some things to use
>>> TaskFlow at a shallow level, and have always gotten confused and
>>> frustrated.
>>> 
>>> The amount of lines that are changed probably is the same. But the
>>> massive shift in thinking is not an easy one to make. It may be worth some
>>> thinking on providing a shorter bridge to TaskFlow adoption, because I'm
>>> a huge fan of the idea and would _start_ something with it in a heartbeat,
>>> but refactoring things to use it feels really weird to me.
>> 
>> I wonder how I can make that better...
>> 
>> Were the concepts that new/different? Maybe I just have more of a 
>> functional programming background and the way taskflow gets you to create 
>> tasks that are later executed, order them ahead of time, and then *later* 
>> run them is still a foreign concept for folks that have not done things with 
>> non-procedural languages. What were the confusion points, how may I help 
>> address them? More docs maybe, more examples, something else?
> 
> My feeling is that it is hard to let go of the language constructs that
> _seem_ to solve the problems TaskFlow does, even though in fact they are
> the problem because they're using the stack for control-flow where we
> want that control-flow to yield to TaskFlow.
> 

U know u want to let go!

> I also kind of feel like the Twisted folks answered a similar question
> with inline callbacks and made things "easier" but more complex in
> doing so. If I had a good answer I would give it to you though. :)
> 
>> 
>> I would agree that the jobboard[0] concept is different than the other parts 
>> of taskflow, but it could be useful here:
>> 
>> Basically at its core its a application of zookeeper where 'jobs' are posted 
>> to a directory (using sequenced nodes in zookeeper, so that ordering is 
>> retained). Entities then acquire ephemeral locks on those 'jobs' (these 
>> locks will be released if the owner process disconnects, or fails...) and 
>> then work on the contents of that job (where contents can be pretty much 
>> arbitrary). This creates a highly available job queue (queue-like due to the 
>> node sequencing[1]), and it sounds pretty similar to what zaqar could 
>> provide in theory (except the zookeeper one is proven, battle-hardened, 
>> works and exists...). But we should of course continue being scared of 
>> zookeeper, because u know, who wants to use a tool where it would fit, haha 
>> (this is a joke).
>> 
> 
> So ordering is a distraction from the task at hand. But the locks that
> indicate liveness of the workers is very interesting to me. Since we
> don't actually have requirements of ordering on the front-end of the task
> (we do on the completion of certain tasks, but we can use a DB for that),
> I wonder if we can just get the same effect with a durable queue that uses
> a reliable messaging pattern where we don't ack until we're done. That
> would achieve the goal of liveness.
> 

Possibly, it depends on what the message broker is doing with the message when 
the message hasn't been acked. With zookeeper being used as a queue of jobs, 
the job actually has an owner (the thing with the ephemeral lock on the job) so 
the job won't get 'taken over' by someone else unless that ephemeral lock drops 
off (due to owner dying or disconnecting...); this is where I'm not sure what 
message brokers do (varies by message broker?).
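For anyone unfamiliar with the mechanism, the claim-by-ephemeral-lock idea
boils down to something like the bare kazoo sketch below (this is not
taskflow's jobboard API, just the underlying zookeeper pattern; paths and
names are made up).

import json

from kazoo.client import KazooClient
from kazoo.exceptions import NodeExistsError

client = KazooClient(hosts="127.0.0.1:2181")
client.start()
client.ensure_path("/jobs")
client.ensure_path("/job-locks")

# Producer: post a job as a sequenced node so ordering is retained.
client.create("/jobs/job-", json.dumps({"action": "stuff"}), sequence=True)

# Worker: claim a job with an ephemeral lock node. If the worker dies or
# disconnects, zookeeper drops the lock and the job becomes claimable again.
for name in sorted(client.get_children("/jobs")):
    try:
        client.create("/job-locks/%s" % name, "worker-1", ephemeral=True)
    except NodeExistsError:
        continue  # already claimed by someone else
    data, _stat = client.get("/jobs/%s" % name)
    print("working on %s" % json.loads(data))
    client.delete("/jobs/%s" % name)       # consume when done
    client.delete("/job-locks/%s" % name)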

An example little taskflow program that I made that u can also run (replace my 
zookeeper server with your own).

http://paste.ubuntu.com/8995861/

You can then run like:

$ python jb.py  'producer'

And for a worker (start many of these if u want),

$ python jb.py 'c1'

Then you can see the work being produced/consumed, and u can ctrl-c 'c1' and 
then another worker will take over the work...

Something like the following should be output (by workers):

$ python jb.py 'c2'
INFO:kazoo.client:Connecting to buildingbuild.corp.yahoo.com:2181
INFO:kazoo.client:Zookeeper connection established, state: CONNECTED
Waiting for jobs to appear...
Running {u'action': u'stuff', u'id': 1}
Waiting for jobs to appear...
Running {u'action': u'stuff', u'id': 3}

For producers:

$ python jb.py  'producer'
INFO:kazoo.client:Connecting to buildingbuild.corp.yahoo.com:2181
INFO:kazoo.client:Zookeeper connection established, state: CONNECTED
Posting work item {'action': 'stuff', 'id': 0}
Posting work item {'action': 'stuff', 'id': 1}
Posting work item {'action': 'stuff', 'id': 2}
Posting work item {'action': 'stuff', 'id': 3}
P

Re: [openstack-dev] [all] config options not correctly deprecated

2014-11-13 Thread Sean Dague
On 11/13/2014 06:56 PM, Clint Byrum wrote:
> Excerpts from Ben Nemec's message of 2014-11-13 15:20:47 -0800:
>> On 11/10/2014 05:00 AM, Daniel P. Berrange wrote:
>>> On Mon, Nov 10, 2014 at 09:45:02AM +, Derek Higgins wrote:
 Tl;dr oslo.config wasn't logging warnings about deprecated config
 options, do we need to support them for another cycle?
>>> AFAIK, there has not been any change in olso.config behaviour
>>> in the Juno release, as compared to previous releases. The
>>> oslo.config behaviour is that the generated sample config file
>>> contain all the deprecation information.
>>>
>>> The idea that olso.config issue log warnings is a decent RFE
>>> to make the use of deprecated config settings more visible.
>>> This is an enhancement though, not a bug.
>>>
 A set of patches to remove deprecated options in Nova was landed on
 Thursday[1], these were marked as deprecated during the juno dev cycle
 and got removed now that kilo has started.
>>> Yes, this is our standard practice - at the start of each release
>>> cycle, we delete anything that was marked as deprected in the
>>> previous release cycle. ie we give downstream users/apps 1 release
>>> cycle of grace to move to the new option names.
>>>
 Most of the deprecated config options are listed as deprecated in the
 documentation for nova.conf changes[2] linked to from the Nova upgrade
 section in the Juno release notes[3] (the deprecated cinder config
 options are not listed here along with the allowed_direct_url_schemes
 glance option).
>>> The sample  nova.conf generated by olso lists all the deprecations.
>>>
>>> For example, for cinder options it shows what the old config option
>>> name was.
>>>
>>>   [cinder]
>>>
>>>   #
>>>   # Options defined in nova.volume.cinder
>>>   #
>>>
>>>   # Info to match when looking for cinder in the service
>>>   # catalog. Format is: separated values of the form:
>>>   # :: (string value)
>>>   # Deprecated group/name - [DEFAULT]/cinder_catalog_info
>>>   #catalog_info=volume:cinder:publicURL
>>>
>>> Also note the deprecated name will not appear as an option in the
>>> sample config file at all, other than in this deprecation comment.
>>>
>>>
 My main worry is that there were no warnings about these options being
 deprecated in nova's logs (as a result they were still being used in
 tripleo), once I noticed tripleo's CI jobs were failing and discovered
 the reason I submitted 4 reverts to put back the deprecated options in
 nova[4] as I believe they should now be supported for another cycle
 (along with a fix to oslo.config to log warnings about their use). The 4
 patches have now been blocked as they go "against our deprecation policy".

 I believe the correct way to handle this is to support these options for
 another cycle so that other operators don't get hit when upgrading to
 kilo. While at that same time fix oslo.config to report the deprecated
 options in kilo.
 I have marked this mail with the [all] tag because there are other
 projects using the same "deprecated_name" (or "deprecated_group")
 parameter when adding config options, I think those projects also now
 need to support their deprecated options for another cycle.
>>> AFAIK, there's nothing different about Juno vs previous release cycles,
>>> so I don't see any reason to do anything different this time around.
>>> No matter what we do there is always a possibility that downstream
>>> apps / users will not notice and/or ignore the deprecation. We should
>>> certainly look at how to make deprecation more obvious, but I don't
>>> think we should change our policy just because an app missed the fact
>>> that these were deprecated.
>> So the difference to me is that this cycle we are aware that we're
>> creating a crappy experience for deployers.  In the past we didn't have
>> anything in the CI environment simulating a real deployment so these
>> sorts of issues went unnoticed.  IMHO telling deployers that they have
>> to troll the sample configs and try to figure out which deprecated opts
>> they're still using is not an acceptable answer.
>>
> I don't know if this is really fair, as all of the deprecated options do
> appear here:
>
> http://docs.openstack.org/juno/config-reference/content/nova-conf-changes-juno.html
>
> So the real bug is that in TripleO we're not paying attention to the
> appropriate stream of deprecations. Logs on running systems is a mighty
> big hammer when the documentation is being updated for us, and we're
> just not paying attention in the right place.
>
> BTW, where SHOULD continuous deployers pay attention for this stuff?
>
>> Now that we do know, I think we need to address the issue.  The first
>> step is to revert the deprecated removals - they're not hurting
>> anything, and if we wait another cycle we can fix oslo.config and then
>> remove them once deployers have had a reasonable chance to address the
>> deprec
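For reference, the deprecation mechanism under discussion is the
deprecated_name/deprecated_group parameters on oslo.config options. The
cinder option shown in the sample config earlier in the thread is registered
roughly like this (a sketch, not the exact nova source):

from oslo.config import cfg

CONF = cfg.CONF

cinder_opts = [
    # The old [DEFAULT]/cinder_catalog_info name keeps working for the
    # deprecation cycle; oslo.config maps it to [cinder]/catalog_info.
    cfg.StrOpt('catalog_info',
               default='volume:cinder:publicURL',
               deprecated_name='cinder_catalog_info',
               deprecated_group='DEFAULT',
               help='Info to match when looking for cinder in the '
                    'service catalog.'),
]

CONF.register_opts(cinder_opts, group='cinder')

The missing piece the thread asks for is a runtime warning when the
deprecated name is actually used, rather than only a comment in the
generated sample file.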

Re: [openstack-dev] opnfv proposal on DR capability enhancement on OpenStack Nova

2014-11-13 Thread Zhipeng Huang
Hi Keshava,

What we want to achieve is to enable Nova to provide the visibility of its
DR state during the whole DR procedure.

At the moment we wrote the first version of the proposal, we only considered
the VM states on both the production site and the standby site.

However, we are considering and working on a more expanded and detailed
version as we speak, and if you are interested you are welcome to join the
effort :)

On Fri, Nov 14, 2014 at 1:52 AM, A, Keshava  wrote:

> Zhipeng Huang,
>
> When multiple Datacenters are interconnected over WAN/Internet and the
> remote Datacenter goes down, do you expect the 'native VM status' to get
> changed accordingly?
> Is this the requirement? Is this requirement from an NFV Service VM (like
> a routing VM)?
> Then isn't it for the NFV routing (BGP/IGP) / MPLS signaling (LDP/RSVP)
> protocols to handle? Does OpenStack need to handle that?
>
> Please correct me if my understanding on this problem  is not correct.
>
> Thanks & regards,
> keshava
>
> -Original Message-
> From: Steve Gordon [mailto:sgor...@redhat.com]
> Sent: Wednesday, November 12, 2014 6:24 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova][DR][NFV] opnfv proposal on DR
> capability enhancement on OpenStack Nova
>
> - Original Message -
> > From: "Zhipeng Huang" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> >
> > Hi Team,
> >
> > I knew we didn't propose this in the design summit and it is kinda
> > rude in this way to jam a topic into the schedule. We were really
> > stretched thin during the summit and didn't make it to the Nova
> > discussion. Full apologies here :)
> >
> > What we want to discuss here is that we proposed a project in opnfv (
> > https://wiki.opnfv.org/collaborative_development_projects/rescuer),
> > which in fact is to enhance inter-DC DR capabilities in Nova. We hope
> > we could achieve this in the K cycle, since there is no "HUGE" changes
> > required to be done in Nova. We just propose to add certain DR status
> > in Nova so operators could see what DR state the OpenStack is
> > currently in, therefore when disaster occurs they won't cut off the
> wrong stuff.
> >
> > Sorry again if we kinda barge in here, and we sincerely hope the Nova
> > community could take a look at our proposal. Feel free to contact me
> > if anyone got any questions :)
> >
> > --
> > Zhipeng Huang
>
> Hi Zhipeng,
>
> I would just like to echo the comments from the opnfv-tech-discuss list
> (which I notice is still private?) in saying that there is very little
> detail on the wiki page describing what you actually intend to do. Given
> this, it's very hard to provide any meaningful feedback. A lot more detail
> is required, particularly if you intend to propose a specification based on
> this idea.
>
> Thanks,
>
> Steve
>
> [1] https://wiki.opnfv.org/collaborative_development_projects/rescuer
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Zhipeng Huang
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402
OpenStack, OpenDaylight, OpenCompute affcienado
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2014-11-13 14:01:14 -0800:
> On Nov 13, 2014, at 7:10 AM, Clint Byrum  wrote:
> 
> > Excerpts from Joshua Harlow's message of 2014-11-13 00:45:07 -0800:
> >> A question;
> >> 
> >> How is using something like celery in heat vs taskflow in heat (or at 
> >> least concept [1]) 'too many code change'.
> >> 
> >> Both seem like change of similar levels ;-)
> >> 
> > 
> > I've tried a few times to dive into refactoring some things to use
> > TaskFlow at a shallow level, and have always gotten confused and
> > frustrated.
> > 
> > The amount of lines that are changed probably is the same. But the
> > massive shift in thinking is not an easy one to make. It may be worth some
> > thinking on providing a shorter bridge to TaskFlow adoption, because I'm
> > a huge fan of the idea and would _start_ something with it in a heartbeat,
> > but refactoring things to use it feels really weird to me.
> 
> I wonder how I can make that better...
> 
> Were the concepts that new/different? Maybe I just have more of a functional 
> programming background and the way taskflow gets you to create tasks that are 
> later executed, order them ahead of time, and then *later* run them is still 
> a foreign concept for folks that have not done things with non-procedural 
> languages. What were the confusion points, how may I help address them? More 
> docs maybe, more examples, something else?

My feeling is that it is hard to let go of the language constructs that
_seem_ to solve the problems TaskFlow does, even though in fact they are
the problem because they're using the stack for control-flow where we
want that control-flow to yield to TaskFlow.

I also kind of feel like the Twisted folks answered a similar question
with inline callbacks and made things "easier" but more complex in
doing so. If I had a good answer I would give it to you though. :)

> 
> I would agree that the jobboard[0] concept is different than the other parts 
> of taskflow, but it could be useful here:
> 
> Basically at its core it's an application of zookeeper where 'jobs' are posted 
> to a directory (using sequenced nodes in zookeeper, so that ordering is 
> retained). Entities then acquire ephemeral locks on those 'jobs' (these locks 
> will be released if the owner process disconnects, or fails...) and then work 
> on the contents of that job (where contents can be pretty much arbitrary). 
> This creates a highly available job queue (queue-like due to the node 
> sequencing[1]), and it sounds pretty similar to what zaqar could provide in 
> theory (except the zookeeper one is proven, battle-hardened, works and 
> exists...). But we should of course continue being scared of zookeeper, 
> because u know, who wants to use a tool where it would fit, haha (this is a 
> joke).
> 

So ordering is a distraction from the task at hand. But the locks that
indicate liveness of the workers is very interesting to me. Since we
don't actually have requirements of ordering on the front-end of the task
(we do on the completion of certain tasks, but we can use a DB for that),
I wonder if we can just get the same effect with a durable queue that uses
a reliable messaging pattern where we don't ack until we're done. That
would achieve the goal of liveness.
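
As a rough sketch of that "don't ack until we're done" pattern (the broker URL,
queue name and work function below are made-up illustrations, not Heat or Zaqar
code), using kombu's SimpleQueue:

    from kombu import Connection

    def do_stack_timeout(payload):
        # Placeholder for the real work (e.g. marking a stack as timed out).
        print('handling %r' % (payload,))

    def run_worker(broker_url='amqp://guest:guest@localhost//'):
        with Connection(broker_url) as conn:
            queue = conn.SimpleQueue('stack-timeouts')
            try:
                while True:
                    message = queue.get(block=True)
                    try:
                        do_stack_timeout(message.payload)
                    except Exception:
                        # Worker failed mid-task: hand the job back so another
                        # engine can pick it up.
                        message.requeue()
                    else:
                        # Acknowledge only once the work has completed.
                        message.ack()
            finally:
                queue.close()

If the process disappears while holding an unacked message, the broker
redelivers it to another consumer, which is the liveness property I'm after.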

> [0] 
> https://github.com/openstack/taskflow/blob/master/taskflow/jobs/jobboard.py#L25
>  
> 
> [1] 
> http://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#Sequence+Nodes+--+Unique+Naming
> 
> > 
> >> What was your metric for determining the code change either would have 
> >> (out of curiosity)?
> >> 
> >> Perhaps u should look at [2], although I'm unclear on what the desired 
> >> functionality is here.
> >> 
> >> Do u want the single engine to transfer its work to another engine when it 
> >> 'goes down'? If so then the jobboard model + zookeper inherently does this.
> >> 
> >> Or maybe u want something else? I'm probably confused because u seem to be 
> >> asking for resource timeouts + recover from engine failure (which seems 
> >> like a liveness issue and not a resource timeout one), those 2 things seem 
> >> separable.
> >> 
> > 
> > I agree with you on this. It is definitely a liveness problem. The
> > resource timeout isn't something I've seen discussed before. We do have
> > a stack timeout, and we need to keep on honoring that, but we can do
> > that with a job that sleeps for the stack timeout if we have a liveness
> > guarantee that will resurrect the job (with the sleep shortened by the
> > time since stack-update-time) somewhere else if the original engine
> > can't complete the job.
> > 
> >> [1] http://docs.openstack.org/developer/taskflow/jobs.html
> >> 
> >> [2] 
> >> http://docs.openstack.org/developer/taskflow/examples.html#jobboard-producer-consumer-simple
> >> 
> >> On Nov 13, 2014, at 12:29 AM, Murugan, Visnusaran 
> >>  wrote:
> >> 
> >>> Hi all,
> >>> 
> >>> Convergence-POC distributes stack operations by sending resource actions 
> >>> over RPC for any heat-engine to execute.

Re: [openstack-dev] [oslo] oslo.messaging outcome from the summit

2014-11-13 Thread Joshua Harlow
Don't forget my executor, which isn't dependent on a larger set of changes for 
asyncio/trollius...
https://review.openstack.org/#/c/70914/
The above will/should just 'work', although I'm unsure what thread count should 
be by default (the number of green threads that is set at like 200 shouldn't be 
the same number used in that executor which uses real python/system threads). 
The neat thing about that executor is that it can also replace the eventlet 
one, since when eventlet is monkey patching the threading module (which it 
should be) then it should behave just as the existing eventlet one; which IMHO 
is pretty cool (and could be one way to completely remove the eventlet usage in 
oslo.messaging).
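
To make that concrete, here's a rough sketch of the idea (not the code under
review; the listener and dispatcher interfaces are simplified assumptions, not
the real oslo.messaging ones):

    from concurrent import futures  # stdlib on py3, 'futures' backport on py2

    class ThreadPoolMessageExecutor(object):
        # Dispatch each incoming message on a pool of real OS threads. If
        # eventlet has monkey-patched the threading module, these workers are
        # in fact greenthreads, so one executor covers both deployment modes.

        def __init__(self, listener, dispatcher, max_workers=64):
            self.listener = listener      # assumed: poll() returns a message or None
            self.dispatcher = dispatcher  # assumed: callable handling one message
            self.pool = futures.ThreadPoolExecutor(max_workers=max_workers)
            self._running = False

        def start(self):
            self._running = True
            while self._running:
                incoming = self.listener.poll()
                if incoming is not None:
                    self.pool.submit(self.dispatcher, incoming)

        def stop(self):
            self._running = False

        def wait(self):
            self.pool.shutdown(wait=True)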

As for the kombu discussions, maybe its time to jump on the #celery channel 
(where the kombu folks hang out) and start talking to them about how we can 
work better together to move some of our features into kombu (and also 
deprecate/remove some of the oslo.messaging features that now are in kombu). I 
believe https://launchpad.net/~asksol is the main guy there (and also the main 
maintainer of celery/kombu?). It'd be nice to have these cross-community talks 
and at least come up with some kind of game plan; hopefully one that benefits 
both communities...

-Josh
 From: Doug Hellmann 
 To: OpenStack Development Mailing List (not for usage questions) 
 
 Sent: Wednesday, November 12, 2014 12:22 PM
 Subject: [openstack-dev] [oslo] oslo.messaging outcome from the summit
   
The oslo.messaging session at the summit [1] resulted in some plans to evolve 
how oslo.messaging works, but probably not during this cycle.

First, we talked about what to do about the various drivers like ZeroMQ and the 
new AMQP 1.0 driver. We decided that rather than moving those out of the main 
tree and packaging them separately, we would keep them all in the main 
repository to encourage the driver authors to help out with the core library 
(oslo.messaging is a critical component of OpenStack, and we’ve lost several of 
our core reviewers for the library to other priorities recently).

There is a new set of contributors interested in maintaining the ZeroMQ driver, 
and they are going to work together to review each other’s patches. We will 
re-evaluate keeping ZeroMQ at the end of Kilo, based on how things go this 
cycle.

We also talked about the fact that the new version of Kombu includes some of 
the features we have implemented in our own driver, like heartbeats and 
connection management. Kombu does not include the calling patterns 
(cast/call/notifications) that we have in oslo.messaging, but we may be able to 
remove some code from our driver and consolidate the qpid and rabbit driver 
code to let Kombu do more of the work for us.

Python 3 support is coming slowly. There are a couple of patches up for review 
to provide a different sort of executor based on greenio and trollius. Adopting 
that would require some application-level changes to use co-routines, so it may 
not be an optimal solution even though it would get us off of eventlet. (During 
the Python 3 session later in the week we talked about the possibility of 
fixing eventlet’s monkey-patching to allow us to use the new eventlet under 
python 3.)

We also talked about the way the oslo.messaging API uses URLs to get some 
settings and configuration options for others. I thought I remembered this 
being a conscious decision to pass connection-specific parameters in the URL, 
and “global” parameters via configuration settings. It sounds like that split 
may not have been implemented as cleanly as originally intended, though. We 
identified documenting URL parameters as an issue for removing the 
configuration object, as well as backwards-compatibility. I don’t think we 
agreed on any specific changes to the API based on this part of the discussion, 
but please correct me if your recollection is different.

We also learned that there is a critical bug [2] related to heartbeats for 
RabbitMQ, and we have a few patches up to propose fixes in different ways (see 
the bottom of [1]). This highlighted again the fact that we have a shortage of 
reviewers for oslo.messaging.

Doug

[1] https://etherpad.openstack.org/p/kilo-oslo-oslo.messaging
[2] https://bugs.launchpad.net/nova/+bug/856764


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Richard Jones
On 14 November 2014 02:04, Thomas Goirand  wrote:

> On 11/13/2014 12:13 PM, Richard Jones wrote:
> > the npm stuff is all tool chain; tools
> > that I believe should be packaged as such by packagers.
>
> npm is already in Debian:
> https://packages.debian.org/sid/npm
>
> However, just like we can't use CPAN, "pear install", "pip install" and
> such when building or installing packages, we won't be able to use NPM.
> This means every single dependency that isn't in Debian will need to be
> packaged.
>

Just to be clearer, when I wrote "the npm stuff" I meant "npm and the tools
installed by it", so grunt, bower, karma, phantomjs, etc. Not the stuff
managed by bower, just the stuff installed by npm. Those npm-based things
should be packaged by the distros as tools, just like other programs the
distros package.


> > Horizon is an incredibly complex application. Just so we're all on the
> > same page, the components installed by bower for angboard are:
> >
> > angular
> >   Because writing an application the size of Horizon without it would be
> > madness :)
> > angular-route
> >   Provides structure to the application through URL routing.
> > angular-cookies
> >   Provides management of browser cookies in a way that integrates well
> > with angular.
> > angular-sanitize
> >   Allows direct embedding of HTML into angular templates, with
> sanitization.
> > json3
> >   Compatibility for older browsers so JSON works.
> > es5-shim
> >   Compatibility for older browsers so Javascript (ECMAScript 5) works.
> > angular-smart-table
> >   Table management (population, sorting, filtering, pagination, etc)
> > angular-local-storage
> >Browser local storage with cookie fallback, integrated with angular
> > mechanisms.
> > angular-bootstrap
> >Extensions to angular that leverage bootstrap (modal popups, tabbed
> > displays, ...)
> > font-awesome
> >Additional glyphs to use in the user interface (warning symbol, info
> > symbol, ...)
> > boot
> >Bootstrap for CSS styling (this is the dependency that brings in
> > jquery and requirejs)
> > underscore
> >Javascript utility library providing a ton of features Javascript
> > lacks but Python programmers expect.
> > ng-websocket
> >Angular-friendly interface to using websockets
> > angular-translate
> >Support for localization in angular using message catalogs generated
> > by gettext/transifex.
> > angular-mocks
> >Mocking support for unit testing angular code
> > angular-scenario
> >More support for angular unit tests
> >
> > Additionally, angboard vendors term.js because it was very poorly
> > packaged in the bower ecosystem. +1 for xstatic there I guess :)
> >
> > So those are the components we needed to create the prototype in a few
> > weeks. Not using them would have added months (or possibly years) to the
> > development time. Creating an application of the scale of Horizon
> > without leveraging all that existing work would be like developing
> > OpenStack while barring all use of Python 3rd-party packages.
>
> I have no problem with adding dependencies. That's how things work, for
> sure, I just want to make sure it doesn't become hell, with so many
> components inter-depending on 100s of them, which would become not
> manageable. If we define clear boundaries, then fine! The above seems
> reasonable anyway.
>
> Though did you list the dependencies of the above?
>

Again, just so we're clear, yes, the above is *all* the components
installed by bower, including dependencies (jquery and requirejs being the
*only* dependencies not directly installed).

As I mentioned, the dependency trees in bower are significantly simpler
than npm packages tend to be (most bower packages have zero or one
dependency). The "100s" of dependencies are in npm packages - but as Martin
Gleiser has pointed out, npm solves that problem with its local install and
node_modules directory structures.


 Richard
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] config options not correctly deprecated

2014-11-13 Thread Clint Byrum
Excerpts from Ben Nemec's message of 2014-11-13 15:20:47 -0800:
> On 11/10/2014 05:00 AM, Daniel P. Berrange wrote:
> > On Mon, Nov 10, 2014 at 09:45:02AM +, Derek Higgins wrote:
> >> Tl;dr oslo.config wasn't logging warnings about deprecated config
> >> options, do we need to support them for another cycle?
> > 
> > AFAIK, there has not been any change in olso.config behaviour
> > in the Juno release, as compared to previous releases. The
> > oslo.config behaviour is that the generated sample config file
> > contain all the deprecation information.
> > 
> > The idea that olso.config issue log warnings is a decent RFE
> > to make the use of deprecated config settings more visible.
> > This is an enhancement though, not a bug.
> > 
> >> A set of patches to remove deprecated options in Nova was landed on
> >> Thursday[1], these were marked as deprecated during the juno dev cycle
> >> and got removed now that kilo has started.
> > 
> > Yes, this is our standard practice - at the start of each release
> > cycle, we delete anything that was marked as deprected in the
> > previous release cycle. ie we give downstream users/apps 1 release
> > cycle of grace to move to the new option names.
> > 
> >> Most of the deprecated config options are listed as deprecated in the
> >> documentation for nova.conf changes[2] linked to from the Nova upgrade
> >> section in the Juno release notes[3] (the deprecated cinder config
> >> options are not listed here along with the allowed_direct_url_schemes
> >> glance option).
> > 
> > The sample  nova.conf generated by olso lists all the deprecations.
> > 
> > For example, for cinder options it shows what the old config option
> > name was.
> > 
> >   [cinder]
> > 
> >   #
> >   # Options defined in nova.volume.cinder
> >   #
> > 
> >   # Info to match when looking for cinder in the service
> >   # catalog. Format is: separated values of the form:
> >   # <service_type>:<service_name>:<endpoint_type> (string value)
> >   # Deprecated group/name - [DEFAULT]/cinder_catalog_info
> >   #catalog_info=volume:cinder:publicURL
> > 
> > Also note the deprecated name will not appear as an option in the
> > sample config file at all, other than in this deprecation comment.
> > 
> > 
> >> My main worry is that there were no warnings about these options being
> >> deprecated in nova's logs (as a result they were still being used in
> >> tripleo), once I noticed tripleo's CI jobs were failing and discovered
> >> the reason I submitted 4 reverts to put back the deprecated options in
> >> nova[4] as I believe they should now be supported for another cycle
> >> (along with a fix to oslo.config to log warnings about their use). The 4
> >> patches have now been blocked as they go "against our deprecation policy".
> >>
> >> I believe the correct way to handle this is to support these options for
> >> another cycle so that other operators don't get hit when upgrading to
> >> kilo. While at that same time fix oslo.config to report the deprecated
> >> options in kilo.
> > 
> >> I have marked this mail with the [all] tag because there are other
> >> projects using the same "deprecated_name" (or "deprecated_group")
> >> parameter when adding config options, I think those projects also now
> >> need to support their deprecated options for another cycle.
> > 
> > AFAIK, there's nothing different about Juno vs previous release cycles,
> > so I don't see any reason to do anything different this time around.
> > No matter what we do there is always a possibility that downstream
> > apps / users will not notice and/or ignore the deprecation. We should
> > certainly look at how to make deprecation more obvious, but I don't
> > think we should change our policy just because an app missed the fact
> > that these were deprecated.
> 
> So the difference to me is that this cycle we are aware that we're
> creating a crappy experience for deployers.  In the past we didn't have
> anything in the CI environment simulating a real deployment so these
> sorts of issues went unnoticed.  IMHO telling deployers that they have
> to troll the sample configs and try to figure out which deprecated opts
> they're still using is not an acceptable answer.
> 

I don't know if this is really fair, as all of the deprecated options do
appear here:

http://docs.openstack.org/juno/config-reference/content/nova-conf-changes-juno.html

So the real bug is that in TripleO we're not paying attention to the
appropriate stream of deprecations. Logs on running systems is a mighty
big hammer when the documentation is being updated for us, and we're
just not paying attention in the right place.

BTW, where SHOULD continuous deployers pay attention for this stuff?

> Now that we do know, I think we need to address the issue.  The first
> step is to revert the deprecated removals - they're not hurting
> anything, and if we wait another cycle we can fix oslo.config and then
> remove them once deployers have had a reasonable chance to address the
> deprecation.
> 

In this case, we can just f

[openstack-dev] [Neutron] Stale patches

2014-11-13 Thread Salvatore Orlando
There are a lot of neutron patches which, for different reasons, have not
been updated in a while.
In order to ensure reviewers focus on active patches, I have set a few
patches (about 75) as 'abandoned'.

No patch with an update in the past month, either patchset or review, has
been abandoned. Moreover, only a part of the patches not updated for over a
month have been abandoned. I took extra care in identifying which ones
could safely be abandoned, and which ones were instead still valuable;
nevertheless, if you find out I abandoned a change you're actively working
on, please restore it.

If you are the owner of one of these patches, you can use the 'restore
change' button in gerrit to resurrect the change. If you're not the owner
and wish to resume work on these patches, either contact any member of the
neutron-core team in IRC or push a new patch.

Salvatore
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] improving PyPi modules design & FHS (was: the future of angularjs development in Horizon)

2014-11-13 Thread Thomas Goirand
On 11/14/2014 06:40 AM, Donald Stufft wrote:
 Sure! That's how I do most of my Python modules these days. I don't just
 create them from scratch, I use my own "debpypi" script, which generates
 a template for packaging. But it can't be fully automated. I could
 almost do it in a fully automated manner for PEAR packages for PHP (see
 "debpear" in the Debian archive), but it's harder with Python and pip/PyPi.
>>>
>>> I would be interested to know what makes Python harder in this regard, I
>>> would like to fix it.
>>
>> The fact that the standard from PyPi is very fuzzy is one of the issues.
>> There's nothing in the format (for example in the DOAP.xml record) that
>> tells if a module supports Python3 for example. Then the short and long
>> descriptions aren't respected, often, you get some changelog entries
>> there. Then there's no real convention for the location of the sphinx
>> doc. There's also the fact that dependencies for Python have to be
>> written by hand on a Debian package. See for example, dependencies on
>> argparse, distribute, ordereddict, which I never put in a Debian package
>> as it's available in Python 2.7. Or the fact that there's no real unique
>> place where dependencies are written on a PyPi "package" (is it hidden
>> somewhere in setup.py, or is it explicitly written in
>> requirements.txt?). Etc. On the PHP world, everything is much cleaner,
>> in the package.xml, which is very easily parse-able.
> 
> (This is fairly off topic, so if you want to reply to this in private that’s
> fine):

Let's just change the subject line, so that those not interested in the
discussion can skip the topic entirely.

> Nothing that says if it supports py3:
> Yea, this is a problem, you can somewhat estimate it using the Python 3
> classifier though.

The issue is that this is a non-mandatory tag, and often it isn't set.

> Short and Long descriptions aren’t respected:
> I’m not sure what you mean by isn’t respected?

In my templating script, I grab what's supposed to be the short and long
description. But this leads to importing some RST-format long
descriptions that include unrelated things. In fact, I'm not even sure
there are such things as a proper long and short description, such that they
could just be included in debian/control without manual work.

> Have to write dependencies by hand:
> Not sure what you mean by not depending on argparse, distribute, 
> ordereddict,
> etc? argparse and ordereddict are often depended on because of Python 2.6,

Right. I think this is an issue in Debian: we should have had a
Provides: in python 2.7, so that it wouldn't have mattered. I just hope
this specific issue will just fade away as Python 2.6 gets older and
less used.

> setuptools/distribute should only be depended on if the project is 
> using
> entry points or something similar.

If only everyone was using PBR... :)

> No unique place where dependencies are written:
> If the project is using setuptools (or is usable from pip) then 
> dependencies
> should be inside of the install_requires field in the setup.py. I can send
> some code for getting this information. Sadly it’s not in a static form 
> yet
> so it requires executing the setup.py.

Blindly executing setup.py before I can inspect it would be an issue.
However, yes please, I'm curious on how to extract the information, so
please do send the code!
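
(For the record, and not Donald's code: one common trick is roughly the
following sketch, which stubs out setup() so that running setup.py from the
unpacked source directory only records its keyword arguments.)

    import distutils.core
    import setuptools

    captured = {}

    def _fake_setup(**kwargs):
        captured.update(kwargs)

    # Stub both entry points, since some setup.py files use distutils directly.
    setuptools.setup = _fake_setup
    distutils.core.setup = _fake_setup

    with open('setup.py') as f:
        code = compile(f.read(), 'setup.py', 'exec')
        exec(code, {'__name__': '__main__', '__file__': 'setup.py'})

    print(captured.get('install_requires', []))
    print(captured.get('extras_require', {}))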

>> No, that's for arch independent *things*. Like for example, javascript.
>> In Debian, these are going in /usr/share/javascript. Python code used to
>> live within /usr/share/pyshared too (but we stopped the symlink forest
>> during the Jessie cycle).
> 
> Why does the FHS webpage say differently?
> 
> From [1]:
> 
> The /usr/share hierarchy is for all read-only architecture independent 
> data files.

Which is exactly what I wrote. Oh, maybe it's the "data files" that
bothers you? Well, in some ways, javascript can be considered as data
files. But let's take another example. PHP, java and perl library files
are all stored into /usr/share as well (though surprisingly, ruby is in
/usr/lib... but maybe because it also integrates compiled-in .so files).

>>> I believe it also states that
>>> /usr/lib is for object files, libraries, and internal binaries.
>>
>> It's for arch dependent things.
> 
> Why does the FHS webpage say differently?
> 
> From [2]:
> 
> /usr/lib includes object files, libraries, and internal binaries that are 
> not
> intended to be executed directly by users or shell scripts.

That doesn't go against what I wrote. "object files, libraries,
and internal binaries" are all arch-dependent things if you know how to
read between the lines, especially if you know that /usr/share is for
"architecture independent" stuff.

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] The right (and the wrong) way of handling module imports in oslo-incubator

2014-11-13 Thread Amrith Kumar
At the suggestion of Doug Hellmann, and following a conversation with him and 
Flavio at Summit, I'm posing this question on the dev mailing list so someone 
from OSLO can communicate the answer to the entire community (rather than just 
in the private email exchange that we had).

Here's the situation. I'm using loopingcall.py as an example; the issue is not 
limited to this module, but it serves as an illustration.

An OSLO incubator module loopingcall depends on another OSLO incubator module 
timeutils.

timeutils has graduated [drum-roll] and is now part of oslo.utils.

There is also other project code that references timeutils.

So, to handle the graduation of timeutils, the changes I'll be making are:


1.  Remove timeutils from openstack-common.conf

2.  Make the project code reference oslo.utils

But what of loopingcall? Should I


a.  Update it and change the import(s) therein to point to oslo.utils, or

b.  Sync the oslo-incubator code for loopingcall, picking up all changes at 
least up to and including the change in oslo-incubator that handles the 
graduation of oslo.utils.

In speaking with Doug and Flavio (after I submitted copious amounts of code 
that did (a) above), I've come to learn that I chose the wrong answer. The 
correct answer is (b). This doesn't have to be part of the same commit, and 
what I've ended up doing is this ...


c.  Leave timeutils in /openstack/common and let oslo-incubator depend 
on it while migrating the project to use oslo.utils. In a subsequent commit, a 
sync from oslo-incubator can happen.
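
To make (c) concrete, a tiny sketch (the module paths here are assumptions, not
any particular project's code):

    # Project code switches to the graduated library...
    from oslo.utils import timeutils

    print(timeutils.utcnow())

    # ...while the copied incubator modules (e.g. openstack/common/loopingcall.py)
    # keep importing openstack.common.timeutils until the next oslo-incubator
    # sync picks up the upstream change.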

I'd like someone on OSLO to confirm this, and for other projects whose lead I 
followed, you may want to address these in the changes you have in flight or 
have already merged.

Thanks,

-amrith


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] config options not correctly deprecated

2014-11-13 Thread Ben Nemec
On 11/10/2014 05:00 AM, Daniel P. Berrange wrote:
> On Mon, Nov 10, 2014 at 09:45:02AM +, Derek Higgins wrote:
>> Tl;dr oslo.config wasn't logging warnings about deprecated config
>> options, do we need to support them for another cycle?
> 
> AFAIK, there has not been any change in olso.config behaviour
> in the Juno release, as compared to previous releases. The
> oslo.config behaviour is that the generated sample config file
> contain all the deprecation information.
> 
> The idea that olso.config issue log warnings is a decent RFE
> to make the use of deprecated config settings more visible.
> This is an enhancement though, not a bug.
> 
>> A set of patches to remove deprecated options in Nova was landed on
>> Thursday[1], these were marked as deprecated during the juno dev cycle
>> and got removed now that kilo has started.
> 
> Yes, this is our standard practice - at the start of each release
> cycle, we delete anything that was marked as deprected in the
> previous release cycle. ie we give downstream users/apps 1 release
> cycle of grace to move to the new option names.
> 
>> Most of the deprecated config options are listed as deprecated in the
>> documentation for nova.conf changes[2] linked to from the Nova upgrade
>> section in the Juno release notes[3] (the deprecated cinder config
>> options are not listed here along with the allowed_direct_url_schemes
>> glance option).
> 
> The sample  nova.conf generated by olso lists all the deprecations.
> 
> For example, for cinder options it shows what the old config option
> name was.
> 
>   [cinder]
> 
>   #
>   # Options defined in nova.volume.cinder
>   #
> 
>   # Info to match when looking for cinder in the service
>   # catalog. Format is: separated values of the form:
>   # <service_type>:<service_name>:<endpoint_type> (string value)
>   # Deprecated group/name - [DEFAULT]/cinder_catalog_info
>   #catalog_info=volume:cinder:publicURL
> 
> Also note the deprecated name will not appear as an option in the
> sample config file at all, other than in this deprecation comment.
> 
> 
>> My main worry is that there were no warnings about these options being
>> deprecated in nova's logs (as a result they were still being used in
>> tripleo), once I noticed tripleo's CI jobs were failing and discovered
>> the reason I submitted 4 reverts to put back the deprecated options in
>> nova[4] as I believe they should now be supported for another cycle
>> (along with a fix to oslo.config to log warnings about their use). The 4
>> patches have now been blocked as they go "against our deprecation policy".
>>
>> I believe the correct way to handle this is to support these options for
>> another cycle so that other operators don't get hit when upgrading to
>> kilo. While at that same time fix oslo.config to report the deprecated
>> options in kilo.
> 
>> I have marked this mail with the [all] tag because there are other
>> projects using the same "deprecated_name" (or "deprecated_group")
>> parameter when adding config options, I think those projects also now
>> need to support their deprecated options for another cycle.
> 
> AFAIK, there's nothing different about Juno vs previous release cycles,
> so I don't see any reason to do anything different this time around.
> No matter what we do there is always a possibility that downstream
> apps / users will not notice and/or ignore the deprecation. We should
> certainly look at how to make deprecation more obvious, but I don't
> think we should change our policy just because an app missed the fact
> that these were deprecated.

So the difference to me is that this cycle we are aware that we're
creating a crappy experience for deployers.  In the past we didn't have
anything in the CI environment simulating a real deployment so these
sorts of issues went unnoticed.  IMHO telling deployers that they have
to troll the sample configs and try to figure out which deprecated opts
they're still using is not an acceptable answer.

Now that we do know, I think we need to address the issue.  The first
step is to revert the deprecated removals - they're not hurting
anything, and if we wait another cycle we can fix oslo.config and then
remove them once deployers have had a reasonable chance to address the
deprecation.
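
For reference, the mechanism in question looks roughly like this on the code
side (a sketch only, not the actual nova option definitions; the nova.conf path
is an assumption). With deprecated_name/deprecated_group set, the old name
keeps working, but nothing is logged when it is used:

    from oslo.config import cfg

    cinder_opts = [
        cfg.StrOpt('catalog_info',
                   default='volume:cinder:publicURL',
                   deprecated_name='cinder_catalog_info',
                   deprecated_group='DEFAULT',
                   help='Info to match when looking for cinder in the '
                        'service catalog.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(cinder_opts, group='cinder')

    if __name__ == '__main__':
        # A config file that still sets [DEFAULT]/cinder_catalog_info is
        # honoured here, currently without any deprecation warning in the logs.
        CONF(['--config-file', 'nova.conf'])
        print(CONF.cinder.catalog_info)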

This is one of the big reasons we want to have a deployment program
upstream.  It surfaces these sorts of shortcomings in a way that
probably wouldn't have happened before.  I think it would be a shame if
we ignore that because "we've always done it that way."

/2 cents

> 
>> I've also opened a bug for this against tripleo, nova and oslo.conf[5]
>> and for the moment in tripleo-ci I have pinned the version of nova we
>> use to something from before the commits were merged while we get a
>> chance to correctly use the new name for each of the options that were
>> deprecated.
>>
>> On a side node, When trying to look up the deprecation policy I wasn't
>> able to find it so if anybody has a link handy can you point me at it.
> 
> I'm not aware of it being writte

Re: [openstack-dev] [neutron] Neutron mid-cycle announcement

2014-11-13 Thread Kyle Mestery
A severe typo hopefully didn't result in people booking week and a half trips 
to Lehi!

The mid-cycle is as originally planned: December 8-10.

Thanks,
Kyle

> On Nov 13, 2014, at 2:04 PM, Carl Baldwin  wrote:
> 
>> On Thu, Nov 13, 2014 at 1:00 PM, Salvatore Orlando  
>> wrote:
>> No worries,
>> 
>> you get one day off over the weekend. And you also get to choose if it's
>> saturday or sunday.
> 
> I didn't think it was going to be a whole day.
> 
>> Salvatore
>> 
>>> On 13 November 2014 20:05, Kevin Benton  wrote:
>>> 
>>> December 8-19? 11 day mid-cycle seems a little intense...
> 
> If you thought the summits fried your brain...
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] tracking approved specs via RSS

2014-11-13 Thread Doug Hellmann
We’ve added RSS feeds to most of the published specs on 
http://specs.openstack.org to make it easier to follow along with approved 
specs. Watching gerrit is still the best way to track proposals and reviews, 
but for the folks who are interested in the sausage but don’t want to know how 
it’s made, having an RSS feed is another way for them to keep up.
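
For anyone who wants to script this, a minimal sketch using feedparser (the
feed URL is whatever sits behind the "(RSS)" link for your project, passed in
on the command line):

    import sys

    import feedparser

    def show_approved_specs(feed_url):
        feed = feedparser.parse(feed_url)
        for entry in feed.entries:
            print('%s\n    %s' % (entry.title, entry.link))

    if __name__ == '__main__':
        # Pass the URL behind your project's "(RSS)" link on specs.openstack.org.
        show_approved_specs(sys.argv[1])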

If your project is listed on http://specs.openstack.org and does not have an 
“(RSS)” link next to the name and you would like one, contact me off list and 
I’ll walk you through the process.

Enjoy,
Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Donald Stufft

> On Nov 13, 2014, at 5:23 PM, Thomas Goirand  wrote:
> 
> On 11/14/2014 02:11 AM, Donald Stufft wrote:
>>> On Nov 13, 2014, at 12:38 PM, Thomas Goirand  wrote:
>>> On 11/13/2014 10:56 PM, Martin Geisler wrote:
 However, the whole JavaScript ecosystem seems to be centered around the
 idea of doing local installations. That means that you no longer need
 the package manager to install the software -- you only need a package
 manager to install the base system (NodeJs and npm for JavaScript).
>>> 
>>> Yeah... Just like for Java, PHP, Perl, Python, you-name-it...
>>> 
>>> In what way Javascript will be any different from all of these languages?
>> 
>> Node.js, and npm in particular tends to solve the dependency hell problem
>> by making it possible to install multiple versions of a particular thing
>> and use them all within the same process. As far as I know the OS tooling
>> doesn’t really handle SxS installs of the same thing in multiple versions
>> very well, I think the closest that you can do is multiple separate packages
>> with version numbers in the package name?
> 
> Yeah, and for a very good reason: having multiple version of the same
> thing is just really horrible, and should be avoided at all costs.

I don’t disagree with you that I don’t particularly like that situation, just
saying that node.js/npm *is* special in this regard because it’s entirely
possible that you can’t resolve things to a single version per dependency and
their tooling will just work for that.

> 
>>> Also, does your $language-specific-package-manager have enough checks so
>>> that there's no man in the middle attack possible? Is it secured enough?
>>> Can a replay attack be done on it? Does it support any kind of
>>> cryptography checks like yum or apt does? I'm almost sure that's not the
>>> case. pip is really horrible in this regard. I haven't checked, but I'm
>>> almost sure what we're proposing (eg: npm and such) have the same
>>> weakness. And here, I'm only scratching security concerns. There's other
>>> concerns, like how good is the dependency solver and such (remember: it
>>> took *years* for apt to be as good as it is right now, and it still has
>>> some defects).
>> 
>> As far as I’m aware npm supports TLS the same as pip does. That secures the
>> transport between the end users and the repository so you can be assured
>> that there is no man in the middle. Security wise npm (and pip) are about
>> ~95% (made up numbers, but you can get the gist) of the effectiveness of the
>> OS package managers.
> 
> I don't agree at all with this view. Using TLS is *far* from being
> enough IMO. But that's not the point. Using anything else than the
> distribution package manager is a hack (or unfinished work).

This is an argument that I don’t think either of us will convince the other of,
so I’ll just say agree to disagree.

> 
>>> On 11/14/2014 12:59 AM, Martin Geisler wrote:
 It seems to me that it should be possible translate the node module
 into system level packages in a mechanical fashion, assuming that
 you're willing to have a system package for each version of the node
 module
>>> 
>>> Sure! That's how I do most of my Python modules these days. I don't just
>>> create them from scratch, I use my own "debpypi" script, which generates
>>> a template for packaging. But it can't be fully automated. I could
>>> almost do it in a fully automated manner for PEAR packages for PHP (see
>>> "debpear" in the Debian archive), but it's harder with Python and pip/PyPi.
>> 
>> I would be interested to know what makes Python harder in this regard, I
>> would like to fix it.
> 
> The fact that the standard from PyPi is very fuzzy is one of the issues.
> There's nothing in the format (for example in the DOAP.xml record) that
> tells if a module supports Python3 for example. Then the short and long
> descriptions aren't respected, often, you get some changelog entries
> there. Then there's no real convention for the location of the sphinx
> doc. There's also the fact that dependencies for Python have to be
> written by hand on a Debian package. See for example, dependencies on
> argparse, distribute, ordereddict, which I never put in a Debian package
> as it's available in Python 2.7. Or the fact that there's no real unique
> place where dependencies are written on a PyPi "package" (is it hidden
> somewhere in setup.py, or is it explicitly written in
> requirements.txt?). Etc. On the PHP world, everything is much cleaner,
> in the package.xml, which is very easily parse-able.

(This is fairly off topic, so if you want to reply to this in private that’s
fine):

Nothing that says if it supports py3:
Yea, this is a problem, you can somewhat estimate it using the Python 3
classifier though.

Short and Long descriptions aren’t respected:
I’m not sure what you mean by isn’t respected?

No real convention for the location of the sphinx docs:
Ok, I’ll add this to the list of things that needs wo

Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Martin Geisler
Thomas Goirand  writes:

> On 11/13/2014 10:56 PM, Martin Geisler wrote:
>> Maybe a silly question, but why insist on this? Why would you insist on
>> installing a JavaScript based application using your package manager?
>> 
>> I'm a huge fan of package managers and typically refuse to install
>> anything globally if it doesn't come as a package.
>> 
>> However, the whole JavaScript ecosystem seems to be centered around the
>> idea of doing local installations. That means that you no longer need
>> the package manager to install the software -- you only need a package
>> manager to install the base system (NodeJs and npm for JavaScript).
>
> Yeah... Just like for Java, PHP, Perl, Python, you-name-it...
>
> In what way Javascript will be any different from all of these
> languages?

Let me again say that I'm fairly new to this modern JavaScript world. I
knew almost nothing about node, npm, bower, and grunt six months ago.

One answer may be that there isn't an expectation in the JavaScript
community that you'll be reusing system libraries the same way that
there is in at least the Python and Perl communities.

It's my impression that the JavaScript world is used to moving *very*
quickly. People release versions very rapidly and are happy to break
APIs. I think they're okay with it because of semver -- the idea that as
long as you increment the right digit in your version number, you don't
have to care (much) about the work you put on your users when you ask
them to upgrade.

I hope that's too harsh a description, but the way the node module
system is explicitly designed to allow a single running program to use
multiple versions of the *same* module hints that this chaotic situation
is both expected and considered normal.

When reading about the module system, I came across blog posts that
called it superior compared to, say, Python, exactly because of this
flexibility. Reasonable people are sure to disagree, but that seems to
be the current situation.

>> Notice that Python has been moving rapidly in the same direction for
>> years: you only need Python and pip to bootstrap yourself. After getting
>> used to virtualenv, I've mostly stopped installing Python modules
>> globally and that is how the JavaScript world expects you to work too.
>
> Fine for development. Not for deployments. Not for distributions. Or you
> just get a huge mess of every library installed 10 times, with 10
> different versions, and then a security issue needs to be fixed...

While I agree that it's chaotic, I also think you make the problem worse
than it really is. First, remember that the user who installs Horizon
won't need to use the JavaScript based *developer* tools such as npm,
bower, etc.

That is, I think Horizon developers will use these tools to produce a
release -- a tarball -- and that tarball will be something you unpack on
your webserver and then you're done. I base this on what I've seen in
the project I've been working. The release tarball you download here
don't mention npm, bower, or any of the other tools:

  https://github.com/zerovm/swift-browser/releases

The tools were used to produce the tarball and were used to test it, but
they're not part of the released product. Somewhat similar to how GCC
isn't included in the tarball if you download a pre-compiled binary.

>> So maybe the Horizon package should be an installer package like the
>> ones that download fonts or Adobe?
>
> This is a horrible design which will *never* make it to distributions.
> Please think again. What is it that makes Horizon so special? Answer:
> nothing. It's "just a web app", so it doesn't need any special care.
> It should be packaged, just like the rest of everything, with
> .deb/.rpm and so on.

Maybe a difference is that you don't (yet) install a web application
like you install a system application. Instead you *deploy* it: you
unpack files on a webserver, you configure permissions, you setup cache
rules, you configure a database, etc.

I find this to be quite different from, say, installing Emacs. Emacs is
something you install once on a system and this single installation can
be done in a "right way" so that it's useable for several users on the
system.

A web app is something a single user installs on a system (www-data or a
similar user) and then this user configures the system web server to
serve this web app.

I agree that it would be cool to have web apps be as robust and general
purpose as system apps. However, I think that day is a ways off.

>> That package would get the right version of node and which then runs
>> the npm and bower commands to download the rest plus (importantly and
>> much appreciated) puts the files in a sensible location and gives
>> them good permissions.
>
> Fine for your development environment. But that's it.
>
> Also, does your $language-specific-package-manager have enough checks
> so that there's no man in the middle attack possible? Is it secured
> enough? Can a replay attack be done on it? Does it support any kind of
> cryptography checks like yum or apt does?

Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Thomas Goirand
On 11/14/2014 02:11 AM, Donald Stufft wrote:
>> On Nov 13, 2014, at 12:38 PM, Thomas Goirand  wrote:
>> On 11/13/2014 10:56 PM, Martin Geisler wrote:
>>> However, the whole JavaScript ecosystem seems to be centered around the
>>> idea of doing local installations. That means that you no longer need
>>> the package manager to install the software -- you only need a package
>>> manager to install the base system (NodeJs and npm for JavaScript).
>>
>> Yeah... Just like for Java, PHP, Perl, Python, you-name-it...
>>
>> In what way Javascript will be any different from all of these languages?
> 
> Node.js, and npm in particular tends to solve the dependency hell problem
> by making it possible to install multiple versions of a particular thing
> and use them all within the same process. As far as I know the OS tooling
> doesn’t really handle SxS installs of the same thing in multiple versions
> very well, I think the closest that you can do is multiple separate packages
> with version numbers in the package name?

Yeah, and for a very good reason: having multiple version of the same
thing is just really horrible, and should be avoided at all costs.

>> Also, does your $language-specific-package-manager have enough checks so
>> that there's no man in the middle attack possible? Is it secured enough?
>> Can a replay attack be done on it? Does it support any kind of
>> cryptography checks like yum or apt does? I'm almost sure that's not the
>> case. pip is really horrible in this regard. I haven't checked, but I'm
>> almost sure what we're proposing (eg: npm and such) have the same
>> weakness. And here, I'm only scratching security concerns. There's other
>> concerns, like how good is the dependency solver and such (remember: it
>> took *years* for apt to be as good as it is right now, and it still has
>> some defects).
> 
> As far as I’m aware npm supports TLS the same as pip does. That secures the
> transport between the end users and the repository so you can be assured
> that there is no man in the middle. Security wise npm (and pip) are about
> ~95% (made up numbers, but you can get the gist) of the effectiveness of the
> OS package managers.

I don't agree at all with this view. Using TLS is *far* from being
enough IMO. But that's not the point. Using anything else than the
distribution package manager is a hack (or unfinished work).

>> On 11/14/2014 12:59 AM, Martin Geisler wrote:
>>> It seems to me that it should be possible translate the node module
>>> into system level packages in a mechanical fashion, assuming that
>>> you're willing to have a system package for each version of the node
>>> module
>>
>> Sure! That's how I do most of my Python modules these days. I don't just
>> create them from scratch, I use my own "debpypi" script, which generates
>> a template for packaging. But it can't be fully automated. I could
>> almost do it in a fully automated manner for PEAR packages for PHP (see
>> "debpear" in the Debian archive), but it's harder with Python and pip/PyPi.
> 
> I would be interested to know what makes Python harder in this regard, I
> would like to fix it.

The fact that the standard from PyPi is very fuzzy is one of the issues.
There's nothing in the format (for example in the DOAP.xml record) that
tells if a module supports Python3 for example. Then the short and long
descriptions aren't respected, often, you get some changelog entries
there. Then there's no real convention for the location of the sphinx
doc. There's also the fact that dependencies for Python have to be
written by hand on a Debian package. See for example, dependencies on
argparse, distribute, ordereddict, which I never put in a Debian package
as it's available in Python 2.7. Or the fact that there's no real unique
place where dependencies are written on a PyPi "package" (is it hidden
somewhere in setup.py, or is it explicitly written in
requirements.txt?). Etc. On the PHP world, everything is much cleaner,
in the package.xml, which is very easily parse-able.

>> On 11/14/2014 12:59 AM, Martin Geisler wrote:
>>> The guys behind npm has written a little about how that could work
>>> here:
>>>
>>> http://nodejs.org/api/modules.html#modules_addenda_package_manager_tips
>>
>> It's fun to read, but very naive. First thing that is very shocking is
>> that arch independent things get installed into /usr/lib, where they
>> belong in /usr/share. If that is what the NPM upstream produces, that's
>> scary: he doesn't even know how the FHS (Filesystem Hierarchy Standard)
>> works.
> 
> I may be wrong, but doesn’t the FHS state that /usr/share is for arch
> independent *data* that is read only?

No, that's for arch independent *things*. Like for example, javascript.
In Debian, these are going in /usr/share/javascript. Python code used to
live within /usr/share/pyshared too (but we stopped the symlink forest
during the Jessie cycle).

> I believe it also states that
> /usr/lib is for object files, libraries, and internal binaries.


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Joshua Harlow
On Nov 13, 2014, at 7:10 AM, Clint Byrum  wrote:

> Excerpts from Joshua Harlow's message of 2014-11-13 00:45:07 -0800:
>> A question;
>> 
>> How is using something like celery in heat vs taskflow in heat (or at least 
>> concept [1]) 'too many code change'.
>> 
>> Both seem like change of similar levels ;-)
>> 
> 
> I've tried a few times to dive into refactoring some things to use
> TaskFlow at a shallow level, and have always gotten confused and
> frustrated.
> 
> The amount of lines that are changed probably is the same. But the
> massive shift in thinking is not an easy one to make. It may be worth some
> thinking on providing a shorter bridge to TaskFlow adoption, because I'm
> a huge fan of the idea and would _start_ something with it in a heartbeat,
> but refactoring things to use it feels really weird to me.

I wonder how I can make that better...

Were the concepts that new/different? Maybe I just have more of a functional 
programming background and the way taskflow gets you to create tasks that are 
later executed, order them ahead of time, and then *later* run them is still a 
foreign concept for folks that have not done things with non-procedural 
languages. What were the confusion points, how may I help address them? More 
docs maybe, more examples, something else?

I would agree that the jobboard[0] concept is different than the other parts of 
taskflow, but it could be useful here:

Basically at its core it's an application of zookeeper where 'jobs' are posted to 
a directory (using sequenced nodes in zookeeper, so that ordering is retained). 
Entities then acquire ephemeral locks on those 'jobs' (these locks will be 
released if the owner process disconnects, or fails...) and then work on the 
contents of that job (where contents can be pretty much arbitrary). This 
creates a highly available job queue (queue-like due to the node 
sequencing[1]), and it sounds pretty similar to what zaqar could provide in 
theory (except the zookeeper one is proven, battle-hardened, works and 
exists...). But we should of course continue being scared of zookeeper, because 
u know, who wants to use a tool where it would fit, haha (this is a joke).

[0] 
https://github.com/openstack/taskflow/blob/master/taskflow/jobs/jobboard.py#L25 

[1] 
http://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#Sequence+Nodes+--+Unique+Naming
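
To make that concrete, here is a sketch of the core idea directly on top of
kazoo (not taskflow's actual jobboard code; the zookeeper address, paths and
payloads are made up):

    import json

    from kazoo.client import KazooClient
    from kazoo.exceptions import NodeExistsError

    client = KazooClient(hosts='127.0.0.1:2181')   # assumed local zookeeper
    client.start()
    client.ensure_path('/jobs')
    client.ensure_path('/claims')

    def post_job(payload):
        # Sequenced node keeps posting order, e.g. /jobs/job-0000000007
        return client.create('/jobs/job-',
                             json.dumps(payload).encode('utf-8'),
                             sequence=True)

    def claim_next_job():
        for name in sorted(client.get_children('/jobs')):
            try:
                # Ephemeral claim: released automatically if this process dies,
                # so the job becomes claimable by another engine.
                client.create('/claims/' + name, ephemeral=True)
            except NodeExistsError:
                continue    # someone else owns it
            data, _stat = client.get('/jobs/' + name)
            return name, json.loads(data.decode('utf-8'))
        return None, None

A finished job gets its /jobs node and its claim deleted; the real jobboard of
course adds a lot more care around ordering, ownership and failure detection.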

> 
>> What was your metric for determining the code change either would have (out 
>> of curiosity)?
>> 
>> Perhaps u should look at [2], although I'm unclear on what the desired 
>> functionality is here.
>> 
>> Do u want the single engine to transfer its work to another engine when it 
>> 'goes down'? If so then the jobboard model + zookeper inherently does this.
>> 
>> Or maybe u want something else? I'm probably confused because u seem to be 
>> asking for resource timeouts + recover from engine failure (which seems like 
>> a liveness issue and not a resource timeout one), those 2 things seem 
>> separable.
>> 
> 
> I agree with you on this. It is definitely a liveness problem. The
> resource timeout isn't something I've seen discussed before. We do have
> a stack timeout, and we need to keep on honoring that, but we can do
> that with a job that sleeps for the stack timeout if we have a liveness
> guarantee that will resurrect the job (with the sleep shortened by the
> time since stack-update-time) somewhere else if the original engine
> can't complete the job.
> 
>> [1] http://docs.openstack.org/developer/taskflow/jobs.html
>> 
>> [2] 
>> http://docs.openstack.org/developer/taskflow/examples.html#jobboard-producer-consumer-simple
>> 
>> On Nov 13, 2014, at 12:29 AM, Murugan, Visnusaran 
>>  wrote:
>> 
>>> Hi all,
>>> 
>>> Convergence-POC distributes stack operations by sending resource actions 
>>> over RPC for any heat-engine to execute. Entire stack lifecycle will be 
>>> controlled by worker/observer notifications. This distributed model has its 
>>> own advantages and disadvantages.
>>> 
>>> Any stack operation has a timeout and a single engine will be responsible 
>>> for it. If that engine goes down, timeout is lost along with it. So a 
>>> traditional way is for other engines to recreate timeout from scratch. Also 
>>> a missed resource action notification will be detected only when stack 
>>> operation timeout happens.
>>> 
>>> To overcome this, we will need the following capability:
>>> 1.   Resource timeout (can be used for retry)
>>> 2.   Recover from engine failure (loss of stack timeout, resource 
>>> action notification)
>>> 
>>> 
>>> Suggestion:
>>> 1.   Use task queue like celery to host timeouts for both stack and 
>>> resource.
>>> 2.   Poll database for engine failures and restart timers/ retrigger 
>>> resource retry (IMHO: This would be a traditional and weighs heavy)
>>> 3.   Migrate heat to use TaskFlow. (Too many code change)
>>> 
>>> I am not suggesting we use Task Flow. Using celery will have very minimum 
>>> code change. (de

Re: [openstack-dev] [neutron] Is this fix introducing another different bug to dhcp-agent?

2014-11-13 Thread Ihar Hrachyshka
Robert, Miguel,
do you plan to take care of the bug and the fix, or do you need help? RDO
depends on the fix, also we should introduce the fix before the next
Juno release that includes the bad patch, so I would be glad to step
in if you don't have spare cycles.
/Ihar

On 13/11/14 16:44, Robert Li (baoli) wrote:
> Nice catch. Since it’s already merged, a new bug may be in order.
> 
> —Robert
> 
> On 11/13/14, 10:25 AM, "Miguel Ángel Ajo" wrote:
> 
> I believe this fix to IPv6 dhcp spawn breaks isolated metadata
> when we have a subnet combination like this on a network:
> 
> 1) IPv6 subnet, with DHCP enabled 2) IPv4 subnet, with isolated
> metadata enabled.
> 
> 
> https://review.openstack.org/#/c/123671/1/neutron/agent/dhcp_agent.py
>
>  I haven’t been able to test yet, but wanted to share it before I
> forget.
> 
> 
> 
> 
> Miguel Ángel ajo @ freenode.net
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][libvirt] - 'nova reboot' causes console-log truncated

2014-11-13 Thread Surojit Pathak

Hi all,

[Issue observed]
If we issue 'nova reboot ', we only get the console output of the 
latest boot of the server. The console output of the previous boot of 
the same server vanishes due to truncation[1]. If we reboot from within 
the VM instance [ #sudo reboot ], or reboot the instance with 
'virsh reboot ', the behavior is different: console.log keeps growing, 
with the new output appended.
This loss of history makes some debugging scenarios difficult due to the 
lack of information available.


Please point me to any solution/blueprint for this issue, if already 
planned. Otherwise, please comment on my analysis and proposals as 
solution, below -


[Analysis]
Nova's libvirt driver on the compute node tries to do a graceful restart of 
the server instance by attempting a soft_reboot first. If soft_reboot 
fails, it attempts a hard_reboot. As part of soft_reboot, it brings down 
the instance by calling shutdown(), and then calls createWithFlags() to 
bring it up again. Because of this, the qemu-kvm process for the instance 
gets terminated and a new process is launched. In QEMU, the chardev file is 
opened with O_TRUNC, and thus we lose the previous content of the 
console.log file.
On the other hand, during 'virsh reboot ', the same qemu-kvm 
process continues, and libvirt actually does a 
qemuDomainSetFakeReboot(). Thus the same file keeps capturing the new 
console output.


[Proposals for solution]
1. Nova, driven by configuration, will back up the console file before 
recreating the domain in the reboot scenario, e.g. saving console.log as 
console.log.0. How many such backups to keep, the maximum size of the file, 
and whether 'logrotate' is used can all come to Nova as configuration 
parameters. (A minimal sketch follows proposal 2 below.)
Pros - As Nova's libvirt driver knowingly avoids libvirt's reboot() 
functionality, this problem is best addressed at the same layer.
Cons - Having Nova's libvirt layer build awareness of the console files 
is not clean from a modularity standpoint.


2. virDomainCreateWithFlags() will get a new flag value to indicate that 
the log should be appended instead of truncated when the FILE option is 
used. This config will be passed to QEMU while spawning the process.

- Changes will not be in OpenStack code, but in libvirt and QEMU.
Cons - We may have to do a similar implementation for all the 
drivers of libvirt.
Pros - This feature's use case also applies to 'virsh shutdown 
', followed by a 'virsh start '.
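
A minimal sketch of the backup from proposal 1 (hypothetical helper, not 
actual Nova code; the rotation policy, names and how the config reaches it 
are assumptions):

    import os

    def rotate_console_log(console_path, max_backups=3):
        # Called just before the domain is recreated on reboot, so the
        # previous boot's console output survives as console.log.1, .2, ...
        if not os.path.exists(console_path):
            return
        oldest = '%s.%d' % (console_path, max_backups)
        if os.path.exists(oldest):
            os.unlink(oldest)
        for i in range(max_backups - 1, 0, -1):
            src = '%s.%d' % (console_path, i)
            if os.path.exists(src):
                os.rename(src, '%s.%d' % (console_path, i + 1))
        # QEMU recreates the live log when the domain is started again.
        os.rename(console_path, '%s.1' % console_path)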


Regards,
Suro
Surojit Pathak

Refs -
[1] Snippet
# tail -f 
/opt/stack/data/nova/instances/cea9a3d9-f833-4ded-90b8-c85b7da3f758/console.log

...
[  OK  ] Started udev Coldplug all Devices.
[  OK  ] Started Create static device nodes in /dev.
 Starting udev Kernel Device Manager...
[   36.938075] EXT4-fs (vda1): re-mounted. Opts: (null)
[  OK  ] Started Remount Root and Kernel File Systems.
 Starting Load/Save Random Seed...
[  OK  ] Reached target Local File Systems (Pre).
 Starting Configure read-only root support...
[  OK  ] Started Load/Save Random Seed.
tail: 
/opt/stack/data/nova/instances/cea9a3d9-f833-4ded-90b8-c85b7da3f758/console.log: 
file truncated

[0.00] Initializing cgroup subsys cpuset
[0.00] Initializing cgroup subsys cpu
[0.00] Initializing cgroup subsys cpuacct
[0.00] Linux version 3.11.10-301.fc20.x86_64 
(mockbu...@bkernel01.phx2.fedoraproject.org) (gcc version 4.8.2 20131017 
(Red Hat 4.8.2-1) (GCC) ) #1 SMP Thu Dec 5 14:01:17 UTC 2013
[0.00] Command line: ro 
root=UUID=314b4a27-3885-49e8-9415-af098db4fd2a no_timer_check 
console=tty1 console=ttyS0,115200n8 
initrd=/boot/initramfs-3.11.10-301.fc20.x86_64.img 
BOOT_IMAGE=/boot/vmlinuz-3.11.10-301.fc20.x86_64


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Joshua Harlow
On Nov 13, 2014, at 10:59 AM, Clint Byrum  wrote:

> Excerpts from Zane Bitter's message of 2014-11-13 09:55:43 -0800:
>> On 13/11/14 09:58, Clint Byrum wrote:
>>> Excerpts from Zane Bitter's message of 2014-11-13 05:54:03 -0800:
>>>> On 13/11/14 03:29, Murugan, Visnusaran wrote:
>>>>> Hi all,
>>>>> 
>>>>> Convergence-POC distributes stack operations by sending resource actions 
>>>>> over RPC for any heat-engine to execute. Entire stack lifecycle will be 
>>>>> controlled by worker/observer notifications. This distributed model has 
>>>>> its own advantages and disadvantages.
>>>>> 
>>>>> Any stack operation has a timeout and a single engine will be
>>>>> responsible for it. If that engine goes down, timeout is lost along with
>>>>> it. So a traditional way is for other engines to recreate timeout from
>>>>> scratch. Also a missed resource action notification will be detected
>>>>> only when stack operation timeout happens.
>>>>> 
>>>>> To overcome this, we will need the following capability:
>>>>> 
>>>>> 1.Resource timeout (can be used for retry)
>>>> 
>>>> I don't believe this is strictly needed for phase 1 (essentially we
>>>> don't have it now, so nothing gets worse).
>>>> 
>>> 
>>> We do have a stack timeout, and it stands to reason that we won't have a
>>> single box with a timeout greenthread after this, so a strategy is
>>> needed.
>> 
>> Right, that was 2, but I was talking specifically about the resource 
>> retry. I think we agree on both points.
>> 
>>>> For phase 2, yes, we'll want it. One thing we haven't discussed much is
>>>> that if we used Zaqar for this then the observer could claim a message
>>>> but not acknowledge it until it had processed it, so we could have
>>>> guaranteed delivery.
>>>> 
>>> 
>>> Frankly, if oslo.messaging doesn't support reliable delivery then we
>>> need to add it.
>> 
>> That is straight-up impossible with AMQP. Either you ack the message and
>> risk losing it if the worker dies before processing is complete, or you 
>> don't ack the message until it's processed and you become a blocker for 
>> every other worker trying to pull jobs off the queue. It works fine when 
>> you have only one worker; otherwise not so much. This is the crux of the 
>> whole "why isn't Zaqar just Rabbit" debate.
>> 
> 
> I'm not sure we have the same understanding of AMQP, so hopefully we can
> clarify here. This stackoverflow answer echoes my understanding:
> 
> http://stackoverflow.com/questions/17841843/rabbitmq-does-one-consumer-block-the-other-consumers-of-the-same-queue
> 
> Not ack'ing just means they might get retransmitted if we never ack. It
> doesn't block other consumers. And as the link above quotes from the
> AMQP spec, when there are multiple consumers, FIFO is not guaranteed.
> Other consumers get other messages.
> 
> So just add the ability for a consumer to read, work, ack to
> oslo.messaging, and this is mostly handled via AMQP. Of course that
> also likely means no zeromq for Heat without accepting that messages
> may be lost if workers die.
> 
> Basically we need to add something that is not "RPC" but instead
> "jobqueue" that mimics this:
> 
> http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo/messaging/rpc/dispatcher.py#n131
> 
> I've always been suspicious of this bit of code, as it basically means
> that if anything fails between that call, and the one below it, we have
> lost contact, but as long as clients are written to re-send when there
> is a lack of reply, there shouldn't be a problem. But, for a job queue,
> there is no reply, and so the worker would dispatch, and then
> acknowledge after the dispatched call had returned (including having
> completed the step where new messages are added to the queue for any
> newly-possible children).
> 
> Just to be clear, I believe what Zaqar adds is the ability to peek at
> a specific message ID and not affect it in the queue, which is entirely
> different than ACK'ing the ones you've already received in your session.
> 
>> Most stuff in OpenStack gets around this by doing synchronous calls 
>> across oslo.messaging, where there is an end-to-end ack. We don't want 
>> that here though. We'll probably have to make do with having ways to 
>> recover after a failure (kick off another update with the same data is 
>> always an option). The hard part is that if something dies we don't 
>> really want to wait until the stack timeout to start recovering.
>> 
> 
> I fully agree. Josh's point about using a coordination service like
> Zookeeper to maintain liveness is an interesting one here. If we just
> make sure that all the workers that have claimed work off the queue are
> alive, that should be sufficient to prevent a hanging stack situation
> like you describe above.
> 
>>> Zaqar should have nothing to do with this and is, IMO, a
>>> poor choice at this stage, though I like the idea of using it in the
>>> future so that we can make Heat more of an outside-the-cloud app.
>> 
>> I'm inclined to agree that it 

Re: [openstack-dev] [infra][devstack] CI failed The plugin token_endpoint could not be found

2014-11-13 Thread Patrick East
I'm running into this issue as well on my CI. Any ideas on how to fix this?

My CI is behaving similarly to the official Jenkins with regard to using pip
to install the clients, and pip freeze shows the same versions on each.

Comparing
http://logs.openstack.org/70/124370/7/check/check-tempest-dsvm-full/d6a53b7/logs/devstacklog.txt.gz#_2014-11-13_18_48_32_860
and the same spot in
http://ec2-54-69-246-234.us-west-2.compute.amazonaws.com/purestorageci/MANUALLY_TRIGGERED_272/devstacklog.txt
they both fail the "use_library_from_git" check for keystoneclient and
openstackclient.
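
One angle that may be worth checking (not a confirmed fix, just a guess): the 
use_library_from_git check keys off devstack's LIBS_FROM_GIT setting, so a 
local.conf along these lines would force the two clients to be installed from 
git instead of pip:

    [[local|localrc]]
    LIBS_FROM_GIT=python-keystoneclient,python-openstackclient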

Any suggestions would be much appreciated!

-Patrick


On Wed, Nov 12, 2014 at 10:22 PM, Itsuro ODA  wrote:

> Hi,
>
> > I'm wondering why you are just hitting it now? Does your CI pull the
> > latest python-keystoneclient and python-openstackclient from master?
>
> It was Yes before it began to fail, but now it is No because of this change:
>
> https://github.com/openstack-dev/devstack/commit/8f8e2d1fbfa4c51f6b68a6967e330cd478f979ee
>
> Now python-*client are installed by pip install instead of git clone.
>
> I think this change causes the problem. But I don't understand
> why some CIs succeed and others fail (including mine), or how to fix
> the problem.
>
> Thanks.
> Itsuro Oda
>
> On Thu, 13 Nov 2014 00:36:41 -0500
> Steve Martinelli  wrote:
>
> > About a month ago, we made changes to python-openstackclient that seem
> > related to the error message you posted. Change is
> > https://review.openstack.org/#/c/127655/3/setup.cfg
> > I'm wondering why you are just hitting it now? Does your CI pull the
> > latest python-keystoneclient and python-openstackclient from master?
> >
> > Thanks,
> >
> > _
> > Steve Martinelli
> > OpenStack Development - Keystone Core Member
> > Phone: (905) 413-2851
> > E-Mail: steve...@ca.ibm.com
> >
> >
> >
> > From:   Itsuro ODA 
> > To: openstack-dev@lists.openstack.org
> > Date:   11/12/2014 11:27 PM
> > Subject:[openstack-dev] [infra][devstack] CI failed The plugin
> > token_endpoint could not be found
> >
> >
> >
> > Hi,
> >
> > My third-party CI has been failing since about 21:00 UTC on the 12th,
> > during execution of devstack.
> >
> > The error occurs at "openstack project create admin -f value -c id"
> > ---
> > ERROR: openstack The plugin token_endpoint could not be found
> > ---
> >
> > I found some CIs have the same problem.
> >
> > Can anyone give me a hint to solve this problem?
> >
> > Thanks.
> > --
> > Itsuro ODA 
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
>
> --
> Itsuro ODA 
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-Patrick
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Matthew Farina
I would like to take a moment to point out that developing system software
is different from developing web applications. The way systems software is
developed and often deployed is different from web applications.

Horizon as it sits today appears to be web application development by
systems software developers. This raises the barrier to entry for web
application developers.

The approach being proposed moves horizon into the realm of web application
technologies that web application people use today.

The debate as I'm reading it is about taking web application development
processes and turning them into systems development processes which are
often foreign to web application developers. How is this going to work out?
How will web app people be willing to get involved? Why should this be
treated the same?

Most of OpenStack is a systems problem. This piece is a little different.
To make it successful should it get some wiggle room to work well in the
space it's in?

Note, I'm not saying it should be insecure or anything like that. There are
just different approaches.


On Thu, Nov 13, 2014 at 1:11 PM, Donald Stufft  wrote:

>
> > On Nov 13, 2014, at 12:38 PM, Thomas Goirand  wrote:
> >
> > On 11/13/2014 10:56 PM, Martin Geisler wrote:
> >> Maybe a silly question, but why insist on this? Why would you insist on
> >> installing a JavaScript based application using your package manager?
> >>
> >> I'm a huge fan of package managers and typically refuse to install
> >> anything globally if it doesn't come as a package.
> >>
> >> However, the whole JavaScript ecosystem seems to be centered around the
> >> idea of doing local installations. That means that you no longer need
> >> the package manager to install the software -- you only need a package
> >> manager to install the base system (NodeJs and npm for JavaScript).
> >
> > Yeah... Just like for Java, PHP, Perl, Python, you-name-it...
> >
> > In what way Javascript will be any different from all of these languages?
>
> Node.js, and npm in particular tends to solve the dependency hell problem
> by making it possible to install multiple versions of a particular thing
> and use them all within the same process. As far as I know the OS tooling
> doesn’t really handle SxS installs of the same thing in multiple versions
> very well, I think the closest that you can do is multiple separate
> packages
> with version numbers in the package name?
>
> In other words it’s entirely possible that a particular set of npm packages
> can not be resolved to a single version per library.
>
> >
> >> Notice that Python has been moving rapidly in the same direction for
> >> years: you only need Python and pip to bootstrap yourself. After getting
> >> used to virtualenv, I've mostly stopped installing Python modules
> >> globally and that is how the JavaScript world expects you to work too.
> >
> > Fine for development. Not for deployments. Not for distributions. Or you
> > just get a huge mess of every library installed 10 times, with 10
> > different versions, and then a security issue needs to be fixed…
>
> Eh, I wouldn’t say it’s not fine for deployments. Generally I find that
> the less I tie the things where I care about versions to my OS the happier
> my life gets. It’s not fine for distributions wanting to offer it though,
> that is correct.
>
> >
> >> So maybe the Horizon package should be an installer package like the
> >> ones that download fonts or Adobe?
> >
> > This is a horrible design which will *never* make it to distributions.
> > Please think again. What is it that makes Horizon so special? Answer:
> > nothing. It's "just a web app", so it doesn't need any special care. It
> > should be packaged, just like the rest of everything, with .deb/.rpm and
> > so on.
> >
> >> That package would get the right version of node and which then runs the
> >> npm and bower commands to download the rest plus (importantly and much
> >> appreciated) puts the files in a sensible location and gives them good
> >> permissions.
> >
> > Fine for your development environment. But that's it.
> >
> > Also, does your $language-specific-package--manager has enough checks so
> > that there's no man in the middle attack possible? Is it secured enough?
> > Can a replay attack be done on it? Does it supports any kind of
> > cryptography checks like yum or apt does? I'm almost sure that's not the
> > case. pip is really horrible in this regard. I haven't checked, but I'm
> > almost sure what we're proposing (eg: npm and such) have the same
> > weakness. And here, I'm only scratching security concerns. There's other
> > concerns, like how good is the dependency solver and such (remember: it
> > took *years* for apt to be as good as it is right now, and it still has
> > some defects).
>
> As far as I’m aware npm supports TLS the same as pip does. That secures the
> transport between the end users and the repository so you can be assured
> that there is no man in the middle. Security wise npm (

Re: [openstack-dev] [api] APIImpact flag for specs

2014-11-13 Thread Everett Toews
On Nov 12, 2014, at 10:45 PM, Angus Salkeld <asalk...@mirantis.com> wrote:

On Sat, Nov 1, 2014 at 6:45 AM, Everett Toews <everett.to...@rackspace.com>
wrote:
Hi All,

Chris Yeoh started the use of an APIImpact flag in commit messages for specs in 
Nova. It adds a requirement for an APIImpact flag in the commit message for a 
proposed spec if it proposes changes to the REST API. This will make it much 
easier for people such as the API Working Group who want to review API changes 
across OpenStack to find and review proposed API changes.
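
As an illustration (hypothetical spec, made-up summary), a spec commit message 
carrying the flag would look something like:

    Add spec for filtering servers by status

    Proposes a new query parameter on the servers REST API.

    APIImpact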

For example, specifications with the APIImpact flag can be found with the 
following query:

https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+message:apiimpact,n,z

Chris also proposed a similar change to many other projects and I did the rest. 
Here’s the complete list if you’d like to review them.

Barbican: https://review.openstack.org/131617
Ceilometer: https://review.openstack.org/131618
Cinder: https://review.openstack.org/131620
Designate: https://review.openstack.org/131621
Glance: https://review.openstack.org/131622
Heat: https://review.openstack.org/132338
Ironic: https://review.openstack.org/132340
Keystone: https://review.openstack.org/132303
Neutron: https://review.openstack.org/131623
Nova: https://review.openstack.org/#/c/129757
Sahara: https://review.openstack.org/132341
Swift: https://review.openstack.org/132342
Trove: https://review.openstack.org/132346
Zaqar: https://review.openstack.org/132348

There are even more projects in stackforge that could use a similar change. If 
you know of a project in stackforge that would benefit from using an APIImapct 
flag in its specs, please propose the change and let us know here.


I seem to have missed this, I'll place my review comment here too.

I like the general idea of getting a more consistent/better API. But isn't 
reviewing every spec across all projects just going to introduce a new 
non-scalable bottleneck into our workflow (given the increasing move away from 
this approach: moving functional tests to projects, getting projects to do more 
of their own docs, etc.)? Wouldn't a better approach be to have an API liaison 
in each project that can keep track of new guidelines and catch potential 
problems?

I see a new section has been added here: 
https://wiki.openstack.org/wiki/CrossProjectLiaisons

Isn't that enough?

I replied in the review. We’ll continue the discussion there.

Everett

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron mid-cycle announcement

2014-11-13 Thread Carl Baldwin
On Thu, Nov 13, 2014 at 1:00 PM, Salvatore Orlando  wrote:
> No worries,
>
> you get one day off over the weekend. And you also get to choose if it's
> saturday or sunday.

I didn't think it was going to be a whole day.

> Salvatore
>
> On 13 November 2014 20:05, Kevin Benton  wrote:
>>
>> December 8-19? 11 day mid-cycle seems a little intense...

If you thought the summits fried your brain...

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron mid-cycle announcement

2014-11-13 Thread Salvatore Orlando
No worries,

you get one day off over the weekend. And you also get to choose if it's
saturday or sunday.

Salvatore

On 13 November 2014 20:05, Kevin Benton  wrote:

> December 8-19? 11 day mid-cycle seems a little intense...
>
> On Thu, Nov 13, 2014 at 11:01 AM, Kyle Mestery 
> wrote:
>
>> On Wed, Nov 12, 2014 at 7:16 PM, Kyle Mestery 
>> wrote:
>> > On Tue, Nov 11, 2014 at 7:04 AM, Kyle Mestery 
>> wrote:
>> >> Hi folks:
>> >>
>> >> Apologies for the delay in announcing the Neutron mid-cycle, but I was
>> >> confirming the details up until last night. I've captured the details
>> >> on an etherpad here [1]. The dates are December 8-10
>> >> (Monday-Wednesday), and it will be at the Adobe offices in Lehi, Utah,
>> >> USA.
>> >>
>> >> We're still collecting information on hotels which should be on the
>> >> etherpad later today.
>> >>
>> >> Thanks, looking forward to seeing everyone I missed in Paris!
>> >> Kyle
>> >>
>> > Folks, just an update, but the host organization is running into some
>> > trouble with the selected dates of Dec 8-10. We may need to shift the
>> > dates by a week to Dec. 15-17. If you've already booked travel, please
>> > ping me privately, otherwise hold off for another day until we sort
>> > this out. Apologies for the trouble here.
>> >
>> OK, we're keeping the mid-cycle at the same place and time: December
>> 8-19 at the Adobe offices in Lehi, Utah. We worked through the
>> scheduling kinks. For those who booked travel, you're safe. :)
>>
>> Thanks!
>> Kyle
>>
>> > Thanks,
>> > Kyle
>> >
>> >> [1] https://etherpad.openstack.org/p/neutron-kilo-midcycle
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Kevin Benton
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Meeting Tuesday November 11th at 19:00 UTC

2014-11-13 Thread Elizabeth K. Joseph
On Mon, Nov 10, 2014 at 8:38 AM, Elizabeth K. Joseph
 wrote:
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting on Tuesday November 11th, at 19:00 UTC in #openstack-meeting

Our meeting this week was pretty short due to so many folks traveling,
minutes and logs available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-11-11-19.00.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-11-11-19.00.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-11-11-19.00.log.html

It's my turn to take a vacation next week and I won't be around to
send out the reminder on Monday, so here's your reminder now: next
meeting is coming up on Tuesday, November 18th at 19:00 UTC.

And remember daylight savings time things happened recently, so
double-check the time if you're in an area which has such things :)

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()

2014-11-13 Thread Dan Smith
>> Unfortunately this model doesn't apply to Nova objects, which are 
>> persisted remotely. Unless I've missed something, SQLA doesn't run
>> on Nova Compute at all. Instead, when Nova Compute calls
>> object.save() this results in an RPC call to Nova Conductor, which
>> persists the object in the DB using SQLA. Compute wouldn't be able
>> to use common DB transactions without some hairy lifecycle
>> management in Conductor, so Compute apis need to be explicitly
>> aware of this.
> 
> So just a note to Dan, this is an example of where I keep hearing
> this about Nova objects.I’ve discussed this with Dan and if I
> understand him correctly, I think the idea is that a Nova Compute
> call can be organized such that the objects layer interacts with the
> database layer in a more coarse-grained fashion, if that is desired,
> so if you really need several things to happen in one DB transaction,
> you should organize the relevant objects code to work that way.

Instance.save() is a single thing. It implies that Instance.metadata,
Instance.system_metadata, Instance.info_cache, Instance.security_groups,
Instance.numa_topology, Instance.pci_requests, etc should all be written
to the database atomically (or fail). We don't do it atomically and in a
transaction right now, but only because db/api is broken into lots of
small pieces (all of which existed before objects).

If there was a good way to do this:

  with transaction:
save_instance_data()
save_instance_metadata()
save_instance_system_metadata()
...etc

Then we'd do that at the backend, achieve atomicity, and the client (the
compute node) wouldn't notice, or care, beyond the fact that it had
assumed that was happening all along. It sounds like Mike's facade will
provide us a nice way to clean up the db/api calls that the objects are
using to persist data in such a way that we can do the above safely like
we should have been doing all along.

Does that make sense?
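
A minimal sketch of that shape in plain SQLAlchemy (the save_* helpers and the 
in-memory engine are placeholders, not actual Nova db/api code):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    # In-memory engine just to keep the sketch self-contained; Nova would
    # use its configured database connection instead.
    engine = create_engine('sqlite://')
    Session = sessionmaker(bind=engine)

    def save_instance_data(session, values):
        pass  # placeholder for the real per-table persistence logic

    def save_instance_metadata(session, values):
        pass  # placeholder

    def instance_save_atomic(instance_values, metadata_values):
        session = Session()
        try:
            save_instance_data(session, instance_values)
            save_instance_metadata(session, metadata_values)
            # One transaction: both writes land, or neither does.
            session.commit()
        except Exception:
            session.rollback()
            raise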

> Still for me to get my head around is how often we are in fact
> organizing the bridge between objects / database such that we are
> using the database most effectively, and not breaking up a single
> logical operation into many individual transactions.   I know that
> Nova objects doesn’t mandate that this occur but I still want to
> learn if perhaps it tends to “encourage” that pattern to emerge -
> it’s hard for me to make that guess right now because I haven’t
> surveyed nova objects very much at all as I’ve been really trying to
> stick with getting database patterns sane to start with.

I don't agree that it encourages anything relating to how you interact
with the database, one way or the other. Almost all of our objects are
organized in the exact way that previously we had
dicts-of-dicts-of-dicts and an RPC call to flush things to the database.
We've changed very little of that access pattern.

I think we should push back to Matt to provide a description of why he
thinks that this is a problem.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron mid-cycle announcement

2014-11-13 Thread Kevin Benton
December 8-19? 11 day mid-cycle seems a little intense...

On Thu, Nov 13, 2014 at 11:01 AM, Kyle Mestery  wrote:

> On Wed, Nov 12, 2014 at 7:16 PM, Kyle Mestery  wrote:
> > On Tue, Nov 11, 2014 at 7:04 AM, Kyle Mestery 
> wrote:
> >> Hi folks:
> >>
> >> Apologies for the delay in announcing the Neutron mid-cycle, but I was
> >> confirming the details up until last night. I've captured the details
> >> on an etherpad here [1]. The dates are December 8-10
> >> (Monday-Wednesday), and it will be at the Adobe offices in Lehi, Utah,
> >> USA.
> >>
> >> We're still collecting information on hotels which should be on the
> >> etherpad later today.
> >>
> >> Thanks, looking forward to seeing everyone I missed in Paris!
> >> Kyle
> >>
> > Folks, just an update, but the host organization is running into some
> > trouble with the selected dates of Dec 8-10. We may need to shift the
> > dates by a week to Dec. 15-17. If you've already booked travel, please
> > ping me privately, otherwise hold off for another day until we sort
> > this out. Apologies for the trouble here.
> >
> OK, we're keeping the mid-cycle at the same place and time: December
> 8-19 at the Adobe offices in Lehi, Utah. We worked through the
> scheduling kinks. For those who booked travel, you're safe. :)
>
> Thanks!
> Kyle
>
> > Thanks,
> > Kyle
> >
> >> [1] https://etherpad.openstack.org/p/neutron-kilo-midcycle
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron mid-cycle announcement

2014-11-13 Thread Kyle Mestery
On Wed, Nov 12, 2014 at 7:16 PM, Kyle Mestery  wrote:
> On Tue, Nov 11, 2014 at 7:04 AM, Kyle Mestery  wrote:
>> Hi folks:
>>
>> Apologies for the delay in announcing the Neutron mid-cycle, but I was
>> confirming the details up until last night. I've captured the details
>> on an etherpad here [1]. The dates are December 8-10
>> (Monday-Wednesday), and it will be at the Adobe offices in Lehi, Utah,
>> USA.
>>
>> We're still collecting information on hotels which should be on the
>> etherpad later today.
>>
>> Thanks, looking forward to seeing everyone I missed in Paris!
>> Kyle
>>
> Folks, just an update, but the host organization is running into some
> trouble with the selected dates of Dec 8-10. We may need to shift the
> dates by a week to Dec. 15-17. If you've already booked travel, please
> ping me privately, otherwise hold off for another day until we sort
> this out. Apologies for the trouble here.
>
OK, we're keeping the mid-cycle at the same place and time: December
8-19 at the Adobe offices in Lehi, Utah. We worked through the
scheduling kinks. For those who booked travel, you're safe. :)

Thanks!
Kyle

> Thanks,
> Kyle
>
>> [1] https://etherpad.openstack.org/p/neutron-kilo-midcycle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-11-13 09:55:43 -0800:
> On 13/11/14 09:58, Clint Byrum wrote:
> > Excerpts from Zane Bitter's message of 2014-11-13 05:54:03 -0800:
> >> On 13/11/14 03:29, Murugan, Visnusaran wrote:
> >>> Hi all,
> >>>
> >>> Convergence-POC distributes stack operations by sending resource actions
> >>> over RPC for any heat-engine to execute. Entire stack lifecycle will be
> >>> controlled by worker/observer notifications. This distributed model has
> >>> its own advantages and disadvantages.
> >>>
> >>> Any stack operation has a timeout and a single engine will be
> >>> responsible for it. If that engine goes down, timeout is lost along with
> >>> it. So a traditional way is for other engines to recreate timeout from
> >>> scratch. Also a missed resource action notification will be detected
> >>> only when stack operation timeout happens.
> >>>
> >>> To overcome this, we will need the following capability:
> >>>
> >>> 1.Resource timeout (can be used for retry)
> >>
> >> I don't believe this is strictly needed for phase 1 (essentially we
> >> don't have it now, so nothing gets worse).
> >>
> >
> > We do have a stack timeout, and it stands to reason that we won't have a
> > single box with a timeout greenthread after this, so a strategy is
> > needed.
> 
> Right, that was 2, but I was talking specifically about the resource 
> retry. I think we agree on both points.
> 
> >> For phase 2, yes, we'll want it. One thing we haven't discussed much is
> >> that if we used Zaqar for this then the observer could claim a message
> >> but not acknowledge it until it had processed it, so we could have
> >> guaranteed delivery.
> >>
> >
> > Frankly, if oslo.messaging doesn't support reliable delivery then we
> > need to add it.
> 
> That is straight-up impossible with AMQP. Either you ack the message and 
> risk losing it if the worker dies before processing is complete, or you 
> don't ack the message until it's processed and you become a blocker for 
> every other worker trying to pull jobs off the queue. It works fine when 
> you have only one worker; otherwise not so much. This is the crux of the 
> whole "why isn't Zaqar just Rabbit" debate.
> 

I'm not sure we have the same understanding of AMQP, so hopefully we can
clarify here. This stackoverflow answer echoes my understanding:

http://stackoverflow.com/questions/17841843/rabbitmq-does-one-consumer-block-the-other-consumers-of-the-same-queue

Not ack'ing just means they might get retransmitted if we never ack. It
doesn't block other consumers. And as the link above quotes from the
AMQP spec, when there are multiple consumers, FIFO is not guaranteed.
Other consumers get other messages.

So just add the ability for a consumer to read, work, ack to
oslo.messaging, and this is mostly handled via AMQP. Of course that
also likely means no zeromq for Heat without accepting that messages
may be lost if workers die.

Basically we need to add something that is not "RPC" but instead
"jobqueue" that mimics this:

http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo/messaging/rpc/dispatcher.py#n131

I've always been suspicious of this bit of code, as it basically means
that if anything fails between that call, and the one below it, we have
lost contact, but as long as clients are written to re-send when there
is a lack of reply, there shouldn't be a problem. But, for a job queue,
there is no reply, and so the worker would dispatch, and then
acknowledge after the dispatched call had returned (including having
completed the step where new messages are added to the queue for any
newly-possible children).
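
As an illustration, the receive/work/ack pattern described above looks roughly 
like this with Kombu's simple queue interface (a sketch, not oslo.messaging 
code; the broker URL, queue name and handler are made up):

    from kombu import Connection

    def do_the_work(job):
        pass  # stand-in for dispatching the resource action

    with Connection('amqp://guest:guest@localhost//') as conn:
        queue = conn.SimpleQueue('heat-jobs')
        message = queue.get(block=True)
        try:
            do_the_work(message.payload)
            # Ack only after the work (and any follow-up enqueues) succeeded,
            # so a worker that dies mid-job leaves the message re-deliverable.
            message.ack()
        except Exception:
            # No ack: requeue so the broker hands it to another consumer.
            message.requeue()
        queue.close()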

Just to be clear, I believe what Zaqar adds is the ability to peek at
a specific message ID and not affect it in the queue, which is entirely
different than ACK'ing the ones you've already received in your session.

> Most stuff in OpenStack gets around this by doing synchronous calls 
> across oslo.messaging, where there is an end-to-end ack. We don't want 
> that here though. We'll probably have to make do with having ways to 
> recover after a failure (kick off another update with the same data is 
> always an option). The hard part is that if something dies we don't 
> really want to wait until the stack timeout to start recovering.
>

I fully agree. Josh's point about using a coordination service like
Zookeeper to maintain liveness is an interesting one here. If we just
make sure that all the workers that have claimed work off the queue are
alive, that should be sufficient to prevent a hanging stack situation
like you describe above.

> > Zaqar should have nothing to do with this and is, IMO, a
> > poor choice at this stage, though I like the idea of using it in the
> > future so that we can make Heat more of an outside-the-cloud app.
> 
> I'm inclined to agree that it would be hard to force operators to deploy 
> Zaqar in order to be able to deploy Heat, and that we should probably be 
> cautious for that reason.
> 
> That said, fr

Re: [openstack-dev] [Horizon] Separate horizon and openstack_dashboard

2014-11-13 Thread Radomir Dopieralski
On 30/10/14 13:13, Matthias Runge wrote:
> I tried that in [3], [4]. I renamed openstack_dashboard to
> openstack_horizon, rather than horizon, to be sure I really caught all
> imports etc., and to make sure it's clear what component is meant.
> During this process, the name horizon is a bit ambiguous.

Thank you for doing this, it helps a lot!

[snip]

> So, how do we proceed from here?
> - how do we block the gate

I have no idea, in the past we just -2-ed all patches temporarily?

> - how to create a new repo
> - how to set up ci for the new project?

That shouldn't be too complicated. Basically you send a patch to
https://github.com/openstack-infra/project-config and have it accepted
by the infra team. I suppose we will copy most of the current horizon
setup. It will help to have someone from infra helping us on that.

> - how to integrate new horizon_lib and horizon (or openstack-horizon) to
> devstack

I suppose we have to reach out to the devstack people for that.

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] security and swift multi-tenant fixes on stable branch

2014-11-13 Thread Jeremy Stanley
On 2014-11-13 18:28:14 +0100 (+0100), Ihar Hrachyshka wrote:
[...]
> I think those who maintain glance_store module in downstream
> distributions will cherry-pick the security fix into their
> packages, so there is nothing to do in terms of stable branches to
> handle the security issue.
[...]

As a counterargument, some Oslo libs have grown stable branches for
security backports and cut corresponding point releases on an
as-needed basis so as to avoid introducing new features in stable
server deployments.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()

2014-11-13 Thread Mike Bayer

> On Nov 13, 2014, at 5:47 AM, Matthew Booth  wrote:
> 
> Unfortunately this model doesn't apply to Nova objects, which are
> persisted remotely. Unless I've missed something, SQLA doesn't run on
> Nova Compute at all. Instead, when Nova Compute calls object.save() this
> results in an RPC call to Nova Conductor, which persists the object in
> the DB using SQLA. Compute wouldn't be able to use common DB
> transactions without some hairy lifecycle management in Conductor, so
> Compute apis need to be explicitly aware of this.


So just a note to Dan, this is an example of where I keep hearing this about 
Nova objects.I’ve discussed this with Dan and if I understand him 
correctly, I think the idea is that a Nova Compute call can be organized such 
that the objects layer interacts with the database layer in a more 
coarse-grained fashion, if that is desired, so if you really need several 
things to happen in one DB transaction, you should organize the relevant 
objects code to work that way.

Still for me to get my head around is how often we are in fact organizing the 
bridge between objects / database such that we are using the database most 
effectively, and not breaking up a single logical operation into many 
individual transactions.   I know that Nova objects doesn’t mandate that this 
occur but I still want to learn if perhaps it tends to “encourage” that pattern 
to emerge - it’s hard for me to make that guess right now because I haven’t 
surveyed nova objects very much at all as I’ve been really trying to stick with 
getting database patterns sane to start with.  



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()

2014-11-13 Thread Mike Bayer

> On Nov 13, 2014, at 3:52 AM, Nikola Đipanov  wrote:
> 
> On 11/13/2014 02:45 AM, Dan Smith wrote:
>>> I’m not sure if I’m seeing the second SELECT here either but I’m less
>>> familiar with what I’m looking at. compute_node_update() does the
>>> one SELECT as we said, then it doesn’t look like
>>> self._from_db_object() would emit any further SQL specific to that
>>> row.
>> 
>> I don't think you're missing anything. I don't see anything in that
>> object code, or the other db/sqlalchemy/api.py code that looks like a
>> second select. Perhaps he was referring to two *queries*, being the
>> initial select and the following update?
>> 
> 
> FWIW - I think an example Matt was giving me yesterday was block devices
> where we have:
> 
> @require_context
> def block_device_mapping_update(context, bdm_id, values, legacy=True):
>     _scrub_empty_str_values(values, ['volume_size'])
>     values = _from_legacy_values(values, legacy, allow_updates=True)
>     query = _block_device_mapping_get_query(context).filter_by(id=bdm_id)
>     query.update(values)
>     return query.first()
> 
> which gets called from object save()

OK well there, that is still a single UPDATE statement and then a SELECT.   
It’s using an aggregate UPDATE so there is no load up front required.   Unless 
_from_legacy_values() does something, that’s still just UPDATE + SELECT, just 
not in the usual order.  I’d suggest this method be swapped around to load the 
object first and then use the traditional flush process to persist it, as a 
regular flush is a lot more reliable. So I’d agree this method is awkward and 
should be fixed, but I’m not sure there’s a second SELECT there.
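
A sketch of that load-first variant in generic SQLAlchemy terms (session 
handling, the surrounding decorators and the legacy-values scrubbing are 
assumed; this is not the actual Nova db/api code):

    def block_device_mapping_update(session, bdm_id, values):
        # Load the row so the ORM tracks it, then mutate attributes and let
        # the normal unit-of-work flush emit the UPDATE. Still one SELECT
        # plus one UPDATE, but through the regular, more reliable flush path.
        bdm = session.query(BlockDeviceMapping).filter_by(id=bdm_id).one()
        for key, value in values.items():
            setattr(bdm, key, value)
        session.flush()
        return bdm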



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Nandavar, Divakar Padiyar
> Most stuff in OpenStack gets around this by doing synchronous calls across
> oslo.messaging, where there is an end-to-end ack. We don't want that here
> though. We'll probably have to make do with having ways to recover after a
> failure (kick off another update with the same data is always an option). The
> hard part is that if something dies we don't really want to wait until the
> stack timeout to start recovering.

We should be able to address this in convergence without having to wait for 
the stack timeout.  This scenario would be similar to initiating a stack update 
while another large stack update is still in progress.  We are looking into 
addressing this scenario.

Thanks,
Divakar

-Original Message-
From: Zane Bitter [mailto:zbit...@redhat.com] 
Sent: Thursday, November 13, 2014 11:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

On 13/11/14 09:58, Clint Byrum wrote:
> Excerpts from Zane Bitter's message of 2014-11-13 05:54:03 -0800:
>> On 13/11/14 03:29, Murugan, Visnusaran wrote:
>>> Hi all,
>>>
>>> Convergence-POC distributes stack operations by sending resource 
>>> actions over RPC for any heat-engine to execute. Entire stack 
>>> lifecycle will be controlled by worker/observer notifications. This 
>>> distributed model has its own advantages and disadvantages.
>>>
>>> Any stack operation has a timeout and a single engine will be 
>>> responsible for it. If that engine goes down, timeout is lost along 
>>> with it. So a traditional way is for other engines to recreate 
>>> timeout from scratch. Also a missed resource action notification 
>>> will be detected only when stack operation timeout happens.
>>>
>>> To overcome this, we will need the following capability:
>>>
>>> 1.Resource timeout (can be used for retry)
>>
>> I don't believe this is strictly needed for phase 1 (essentially we 
>> don't have it now, so nothing gets worse).
>>
>
> We do have a stack timeout, and it stands to reason that we won't have 
> a single box with a timeout greenthread after this, so a strategy is 
> needed.

Right, that was 2, but I was talking specifically about the resource retry. I 
think we agree on both points.

>> For phase 2, yes, we'll want it. One thing we haven't discussed much 
>> is that if we used Zaqar for this then the observer could claim a 
>> message but not acknowledge it until it had processed it, so we could 
>> have guaranteed delivery.
>>
>
> Frankly, if oslo.messaging doesn't support reliable delivery then we 
> need to add it.

That is straight-up impossible with AMQP. Either you ack the message and risk 
losing it if the worker dies before processing is complete, or you don't ack 
the message until it's processed and you become a blocker for every other 
worker trying to pull jobs off the queue. It works fine when you have only one 
worker; otherwise not so much. This is the crux of the whole "why isn't Zaqar 
just Rabbit" debate.

Most stuff in OpenStack gets around this by doing synchronous calls across 
oslo.messaging, where there is an end-to-end ack. We don't want that here 
though. We'll probably have to make do with having ways to recover after a 
failure (kick off another update with the same data is always an option). The 
hard part is that if something dies we don't really want to wait until the 
stack timeout to start recovering.



> Zaqar should have nothing to do with this and is, IMO, a poor choice 
> at this stage, though I like the idea of using it in the future so 
> that we can make Heat more of an outside-the-cloud app.

I'm inclined to agree that it would be hard to force operators to deploy Zaqar 
in order to be able to deploy Heat, and that we should probably be cautious for 
that reason.

That said, from a purely technical point of view it's not a poor choice at all 
- it has *exactly* the semantics we want (unlike AMQP), and at least to the 
extent that the operator wants to offer Zaqar to users anyway it completely 
eliminates a whole backend that they would otherwise have to deploy. It's a 
tragedy that all of OpenStack has not been designed to build upon itself in 
this way and it causes me physical pain to know that we're about to perpetuate 
it.

>>> 2.Recover from engine failure (loss of stack timeout, resource 
>>> action
>>> notification)
>>>
>>> Suggestion:
>>>
>>> 1.Use task queue like celery to host timeouts for both stack and resource.
>>
>> I believe Celery is more or less a non-starter as an OpenStack 
>> dependency because it uses Kombu directly to talk to the queue, vs.
>> oslo.messaging which is an abstraction layer over Kombu, Qpid, ZeroMQ 
>> and maybe others in the future. i.e. requiring Celery means that some 
>> users would be forced to install Rabbit for the first time.
>>
>> One option would be to fork Celery and replace Kombu with 
>> oslo.messaging as its abstraction layer. Good luck getting that 
>> maintained though, since Celery _invented_ Kombu to b

Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Donald Stufft

> On Nov 13, 2014, at 12:38 PM, Thomas Goirand  wrote:
> 
> On 11/13/2014 10:56 PM, Martin Geisler wrote:
>> Maybe a silly question, but why insist on this? Why would you insist on
>> installing a JavaScript based application using your package manager?
>> 
>> I'm a huge fan of package managers and typically refuse to install
>> anything globally if it doesn't come as a package.
>> 
>> However, the whole JavaScript ecosystem seems to be centered around the
>> idea of doing local installations. That means that you no longer need
>> the package manager to install the software -- you only need a package
>> manager to install the base system (NodeJs and npm for JavaScript).
> 
> Yeah... Just like for Java, PHP, Perl, Python, you-name-it...
> 
> In what way Javascript will be any different from all of these languages?

Node.js, and npm in particular tends to solve the dependency hell problem
by making it possible to install multiple versions of a particular thing
and use them all within the same process. As far as I know the OS tooling
doesn’t really handle SxS installs of the same thing in multiple versions
very well, I think the closest that you can do is multiple separate packages
with version numbers in the package name? 

In other words it’s entirely possible that a particular set of npm packages
can not be resolved to a single version per library.
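
For example (library names made up), npm's nested node_modules layout lets each 
library carry its own private copy of a dependency at a different version:

    myapp/
      node_modules/
        lib-a/
          node_modules/
            lodash/   (one version, private to lib-a)
        lib-b/
          node_modules/
            lodash/   (a different version, private to lib-b)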

> 
>> Notice that Python has been moving rapidly in the same direction for
>> years: you only need Python and pip to bootstrap yourself. After getting
>> used to virtualenv, I've mostly stopped installing Python modules
>> globally and that is how the JavaScript world expects you to work too.
> 
> Fine for development. Not for deployments. Not for distributions. Or you
> just get a huge mess of every library installed 10 times, with 10
> different versions, and then a security issue needs to be fixed…

Eh, I wouldn’t say it’s not fine for deployments. Generally I find that
the less I tie the things where I care about versions to my OS the happier
my life gets. It’s not fine for distributions wanting to offer it though,
that is correct.

> 
>> So maybe the Horizon package should be an installer package like the
>> ones that download fonts or Adobe?
> 
> This is a horrible design which will *never* make it to distributions.
> Please think again. What is it that makes Horizon so special? Answer:
> nothing. It's "just a web app", so it doesn't need any special care. It
> should be packaged, just like the rest of everything, with .deb/.rpm and
> so on.
> 
>> That package would get the right version of node and which then runs the
>> npm and bower commands to download the rest plus (importantly and much
>> appreciated) puts the files in a sensible location and gives them good
>> permissions.
> 
> Fine for your development environment. But that's it.
> 
> Also, does your $language-specific-package--manager has enough checks so
> that there's no man in the middle attack possible? Is it secured enough?
> Can a replay attack be done on it? Does it supports any kind of
> cryptography checks like yum or apt does? I'm almost sure that's not the
> case. pip is really horrible in this regard. I haven't checked, but I'm
> almost sure what we're proposing (eg: npm and such) have the same
> weakness. And here, I'm only scratching security concerns. There's other
> concerns, like how good is the dependency solver and such (remember: it
> took *years* for apt to be as good as it is right now, and it still has
> some defects).

As far as I’m aware npm supports TLS the same as pip does. That secures the
transport between the end users and the repository so you can be assured
that there is no man in the middle. Security-wise npm (and pip) are about
~95% as effective as the OS package managers (made-up number, but you get
the gist).

> 
> On 11/14/2014 12:59 AM, Martin Geisler wrote:
>> It seems to me that it should be possible to translate the node module
>> into system level packages in a mechanical fashion, assuming that
>> you're willing to have a system package for each version of the node
>> module
> 
> Sure! That's how I do most of my Python modules these days. I don't just
> create them from scratch, I use my own "debpypi" script, which generates
> a template for packaging. But it can't be fully automated. I could
> almost do it in a fully automated manner for PEAR packages for PHP (see
> "debpear" in the Debian archive), but it's harder with Python and pip/PyPi.

I would be interested to know what makes Python harder in this regard, I
would like to fix it.

> 
> Stuff like debian/copyright files have to be processed by hand, and each
> package is different (How to run unit tests? nose, testr, pytest? Does
> it support python3? Is there a sphinx doc? How good is upstream short
> and long description?). I guess it's going to be the same for Javascript
> packages: it will be possible to do automation for some parts, but
> manual work will always be needed.
> 
> On 11/1

Re: [openstack-dev] [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other features that can't be tested on current infra.

2014-11-13 Thread Daniel P. Berrange
On Thu, Nov 13, 2014 at 05:43:14PM +, Daniel P. Berrange wrote:
> On Thu, Nov 13, 2014 at 09:36:18AM -0800, Dan Smith wrote:
> > > That sounds like something worth exploring at least, I didn't know
> > > about that kernel build option until now :-) It sounds like it ought
> > > to be enough to let us test the NUMA topology handling, CPU pinning
> > > and probably huge pages too.
> > 
> > Okay. I've been vaguely referring to this as a potential test vector,
> > but only just now looked up the details. That's my bad :)
> > 
> > > The main gap I'd see is NUMA aware PCI device assignment since the
> > > PCI <-> NUMA node mapping data comes from the BIOS and it does not
> > > look like this is fakeable as is.
> > 
> > Yeah, although I'd expect that the data is parsed and returned by a
> > library or utility that may be a hook for fakeification. However, it may
> > very well be more trouble than it's worth.
> > 
> > I still feel like we should be able to test generic PCI in a similar way
> > (passing something like a USB controller through to the guest, etc).
> > However, I'm willing to believe that the intersection of PCI and NUMA is
> > a higher order complication :)
> 
> Oh I forgot to mention with PCI device assignment (as well as having a
> bunch of PCI devices available[1]), the key requirement is an IOMMU.
> AFAIK, neither Xen or KVM provide any IOMMU emulation, so I think we're
> out of luck for even basic PCI assignment testing inside VMs.

Ok, turns out that wasn't entirely accurate in general.

KVM *can* emulate an IOMMU, but it requires that the guest be booted
with the q35 machine type, instead of the ancient PIIX4 machine type,
and also QEMU must be launched with "-machine iommu=on". We can't do
this in Nova, so although it is theoretically possible, it is not
doable for us in reality :-(
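
For reference, putting those two requirements together gives an invocation 
along these lines (assembled from the options mentioned above; the exact 
property syntax varies between QEMU versions, and the memory/disk arguments 
are made up):

    qemu-system-x86_64 -machine q35,iommu=on -m 2048 -hda guest.img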

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] alpha version numbering discussion from summit

2014-11-13 Thread Doug Hellmann

On Nov 13, 2014, at 12:41 PM, Jeremy Stanley  wrote:

> On 2014-11-13 07:50:51 -0500 (-0500), Doug Hellmann wrote:
> [...]
>> I do remember a comment at some point, and I’m not sure it was in
>> this session, about using the per-project client libraries as
>> “internal only” libraries when the new SDK matures enough that we
>> can declare that the official external client library. That might
>> solve the problem, since we could pin the version of the client
>> libraries used, but it seems like a solution for the future rather
>> than for this cycle.
> [...]
> 
> Many of us have suggested this as a possible way out of the tangle
> in the past, though Monty was the one who raised it during that
> session. Basically the problem we have boils down to wanting to use
> these libraries as a stable internal communication mechanism within
> components of an OpenStack environment but also be able to support
> tenant users and application developers interacting with a broad
> variety of OpenStack releases through them, and that is a mostly
> unreconcilable difference. Having a user-facing SDK which talks to
> OpenStack APIs with broad version support, and a separate set of
> per-project communication libraries which can follow the integrated
> release cadence and maintain stable backport branches as needed,
> makes the problem much more tractable in the long term.

That makes sense. If we go that route, there is a good chance we will want to 
reconsider the decision to deprecate the apiclient and cliutils modules in the 
incubator, since there would still be good value in maintaining those as shared 
code for the internal client libraries.

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Zane Bitter

On 13/11/14 09:58, Clint Byrum wrote:

> Excerpts from Zane Bitter's message of 2014-11-13 05:54:03 -0800:

>> On 13/11/14 03:29, Murugan, Visnusaran wrote:
>>> Hi all,
>>>
>>> Convergence-POC distributes stack operations by sending resource actions
>>> over RPC for any heat-engine to execute. Entire stack lifecycle will be
>>> controlled by worker/observer notifications. This distributed model has
>>> its own advantages and disadvantages.
>>>
>>> Any stack operation has a timeout and a single engine will be
>>> responsible for it. If that engine goes down, timeout is lost along with
>>> it. So a traditional way is for other engines to recreate timeout from
>>> scratch. Also a missed resource action notification will be detected
>>> only when stack operation timeout happens.
>>>
>>> To overcome this, we will need the following capability:
>>>
>>> 1.Resource timeout (can be used for retry)
>>
>> I don't believe this is strictly needed for phase 1 (essentially we
>> don't have it now, so nothing gets worse).



We do have a stack timeout, and it stands to reason that we won't have a
single box with a timeout greenthread after this, so a strategy is
needed.


Right, that was 2, but I was talking specifically about the resource 
retry. I think we agree on both points.



For phase 2, yes, we'll want it. One thing we haven't discussed much is
that if we used Zaqar for this then the observer could claim a message
but not acknowledge it until it had processed it, so we could have
guaranteed delivery.



Frankly, if oslo.messaging doesn't support reliable delivery then we
need to add it.


That is straight-up impossible with AMQP. Either you ack the message and 
risk losing it if the worker dies before processing is complete, or you 
don't ack the message until it's processed and you become a blocker for 
every other worker trying to pull jobs off the queue. It works fine when 
you have only one worker; otherwise not so much. This is the crux of the 
whole "why isn't Zaqar just Rabbit" debate.


Most stuff in OpenStack gets around this by doing synchronous calls 
across oslo.messaging, where there is an end-to-end ack. We don't want 
that here though. We'll probably have to make do with having ways to 
recover after a failure (kicking off another update with the same data is 
always an option). The hard part is that if something dies we don't 
really want to wait until the stack timeout to start recovering.



Zaqar should have nothing to do with this and is, IMO, a
poor choice at this stage, though I like the idea of using it in the
future so that we can make Heat more of an outside-the-cloud app.


I'm inclined to agree that it would be hard to force operators to deploy 
Zaqar in order to be able to deploy Heat, and that we should probably be 
cautious for that reason.


That said, from a purely technical point of view it's not a poor choice 
at all - it has *exactly* the semantics we want (unlike AMQP), and at 
least to the extent that the operator wants to offer Zaqar to users 
anyway it completely eliminates a whole backend that they would 
otherwise have to deploy. It's a tragedy that all of OpenStack has not 
been designed to build upon itself in this way and it causes me physical 
pain to know that we're about to perpetuate it.



2.Recover from engine failure (loss of stack timeout, resource action
notification)

Suggestion:

1.Use task queue like celery to host timeouts for both stack and resource.


I believe Celery is more or less a non-starter as an OpenStack
dependency because it uses Kombu directly to talk to the queue, vs.
oslo.messaging which is an abstraction layer over Kombu, Qpid, ZeroMQ
and maybe others in the future. i.e. requiring Celery means that some
users would be forced to install Rabbit for the first time.

One option would be to fork Celery and replace Kombu with oslo.messaging
as its abstraction layer. Good luck getting that maintained though,
since Celery _invented_ Kombu to be its abstraction layer.



A slight side point here: Kombu supports Qpid and ZeroMQ. Oslo.messaging


You're right about Kombu supporting Qpid, it appears they added it. I 
don't see ZeroMQ on the list though:


http://kombu.readthedocs.org/en/latest/userguide/connections.html#transport-comparison


is more about having a unified API than a set of magic backends. It
actually boggles my mind why we didn't just use kombu (cue 20 reactions
with people saying it wasn't EXACTLY right), but I think we're committed


Well, we also have to take into account the fact that Qpid support was 
added only during the last 9 months, whereas oslo.messaging was 
implemented 3 years ago and time travel hasn't been invented yet (for 
any definition of 'yet').



to oslo.messaging now. Anyway, celery would need no such refactor, as
kombu would be able to access the same bus as everything else just fine.


Interesting, so that would make it easier to get Celery added to the 
global requirements, although we'd likely still have headaches to deal 
with around configuration.



2.Poll database for engine 

Re: [openstack-dev] opnfv proposal on DR capability enhancement on OpenStack Nova

2014-11-13 Thread A, Keshava
Zhipeng Huang,

When multiple datacenters are interconnected over WAN/Internet and the remote 
datacenter goes down, do you expect the 'native VM status' to be updated 
accordingly? Is that the requirement?
Is this requirement coming from an NFV service VM (such as a routing VM)?
If so, isn't it up to the NFV routing (BGP/IGP) / MPLS signaling (LDP/RSVP) 
protocols to handle this? Does OpenStack need to handle it?

Please correct me if my understanding of this problem is not correct.

Thanks & regards,
keshava

-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com] 
Sent: Wednesday, November 12, 2014 6:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][DR][NFV] opnfv proposal on DR capability 
enhancement on OpenStack Nova

- Original Message -
> From: "Zhipeng Huang" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> Hi Team,
> 
> I knew we didn't propose this in the design summit and it is kinda 
> rude in this way to jam a topic into the schedule. We were really 
> stretched thin during the summit and didn't make it to the Nova 
> discussion. Full apologies here :)
> 
> What we want to discuss here is that we proposed a project in opnfv ( 
> https://wiki.opnfv.org/collaborative_development_projects/rescuer), 
> which in fact is to enhance inter-DC DR capabilities in Nova. We hope 
> we could achieve this in the K cycle, since there are no "HUGE" changes 
> required to be done in Nova. We just propose to add certain DR status 
> in Nova so operators could see what DR state the OpenStack is 
> currently in, therefore when disaster occurs they won't cut off the wrong 
> stuff.
> 
> Sorry again if we kinda barge in here, and we sincerely hope the Nova 
> community could take a look at our proposal. Feel free to contact me 
> if anyone got any questions :)
> 
> --
> Zhipeng Huang

Hi Zhipeng,

I would just like to echo the comments from the opnfv-tech-discuss list (which 
I notice is still private?) in saying that there is very little detail on the 
wiki page describing what you actually intend to do. Given this, it's very hard 
to provide any meaningful feedback. A lot more detail is required, particularly 
if you intend to propose a specification based on this idea.

Thanks,

Steve

[1] https://wiki.opnfv.org/collaborative_development_projects/rescuer


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] Fwd: Re: [Openstack-stable-maint] Neutron backports for security group performance

2014-11-13 Thread Kevin Benton
Ok. Thanks again for doing that.

On Thu, Nov 13, 2014 at 5:06 AM, James Page  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> On 12/11/14 17:43, Kevin Benton wrote:
> > This is awesome. I seem to have misplaced my 540-node cluster. ;-)
> >
> > Is it possible for you to also patch in
> > https://review.openstack.org/#/c/132372/ ? In my rally testing of
> > port retrieval, this one probably made the most significant
> > improvement.
>
> Unfortunately not - our lab time on the infrastructure ended last week
> and I had to (reluctantly) give everything back to HP.
>
> That said, looking through all of the patches I applied to neutron, I
>  had that one in place as well - apologies for missing that
> information in my first email!.
>
> Regards
>
> James
>
> - --
> James Page
> Ubuntu and Debian Developer
> james.p...@ubuntu.com
> jamesp...@debian.org
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1
>
> iQIcBAEBCAAGBQJUZKy/AAoJEL/srsug59jDGa8QANJjKl8fyCmoE0FNZ0/xXnq0
> qYu8u0yYm1SPya09KQaSmMUkMACjgiemjEKD/lICQASd/ROPMMRoqmbfiogDzDLZ
> Si4U4CsYYy+EVnXQ3ozOopxbZHKNjjbTFBhNNvVeEQ1/sZpTHEdI6emwXlOuj6qP
> Z36RmJpr1rQDhvvccywytVI2a42MbUnT53yjI4AKIc5TQBdPOW6QIr89sNNZM+jp
> frNl40tCFo/SQU2TR3mmBXdXWYT5BAdNyAHBz/7TUNzSt5ZUXBSr/3lE2Vj69aZ6
> ioMBwreeW+hV2NXYjLCpCAOsam7lz3qZjOC5DtZj4OrIy+J8ts73uHvPe2y0Gxr/
> ANrbxPeRPp1uXAT4UPUqQZ4m2vYQVVwenc8cPQtzcXrJ9CF9ti8NrFnATtqdSf3a
> 2kWyKmJ1qd+6tValdImTFc/J7Vw/WPkTvoYXGAfszL6j0Ea6JGCvGCCvDOFZwG3o
> NWGBaIVCAErlypDaqxQGfiUtsGWIrFfy52ufJ+YEc0L/pIq9ZUlrHE17LkUz2gC2
> GTUbLYQ8+S+/b5suYzbthA+SHgc+Xzfzh+K+sCirEFzNaAhzJySvr7ssCRoKvs0d
> QDoLaSGdwNDKjW/Y7O/eGHD1bz6RVfMxvky+pa8GZBHIp/YhEuBSNU3CNNEAt6El
> /rWfIhMsjPtHlhHF245x
> =Bnsb
> -END PGP SIGNATURE-
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other features that can't be tested on current infra.

2014-11-13 Thread Daniel P. Berrange
On Thu, Nov 13, 2014 at 09:36:18AM -0800, Dan Smith wrote:
> > That sounds like something worth exploring at least, I didn't know
> > about that kernel build option until now :-) It sounds like it ought
> > to be enough to let us test the NUMA topology handling, CPU pinning
> > and probably huge pages too.
> 
> Okay. I've been vaguely referring to this as a potential test vector,
> but only just now looked up the details. That's my bad :)
> 
> > The main gap I'd see is NUMA aware PCI device assignment since the
> > PCI <-> NUMA node mapping data comes from the BIOS and it does not
> > look like this is fakeable as is.
> 
> Yeah, although I'd expect that the data is parsed and returned by a
> library or utility that may be a hook for fakeification. However, it may
> very well be more trouble than it's worth.
> 
> I still feel like we should be able to test generic PCI in a similar way
> (passing something like a USB controller through to the guest, etc).
> However, I'm willing to believe that the intersection of PCI and NUMA is
> a higher order complication :)

Oh I forgot to mention with PCI device assignment (as well as having a
bunch of PCI devices available[1]), the key requirement is an IOMMU.
AFAIK, neither Xen nor KVM provides any IOMMU emulation, so I think we're
out of luck for even basic PCI assignment testing inside VMs.

Regards,
Daniel

[1] Devices which provide function level reset or PM reset capabilities,
as bus level reset is too painful to deal with, requiring co-assignment
of all devices on the same bus to the same guest.
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] alpha version numbering discussion from summit

2014-11-13 Thread Jeremy Stanley
On 2014-11-13 07:50:51 -0500 (-0500), Doug Hellmann wrote:
[...]
> I do remember a comment at some point, and I’m not sure it was in
> this session, about using the per-project client libraries as
> “internal only” libraries when the new SDK matures enough that we
> can declare it the official external client library. That might
> solve the problem, since we could pin the version of the client
> libraries used, but it seems like a solution for the future rather
> than for this cycle.
[...]

Many of us have suggested this as a possible way out of the tangle
in the past, though Monty was the one who raised it during that
session. Basically the problem we have boils down to wanting to use
these libraries as a stable internal communication mechanism within
components of an OpenStack environment but also be able to support
tenant users and application developers interacting with a broad
variety of OpenStack releases through them, and that is a mostly
unreconcilable difference. Having a user-facing SDK which talks to
OpenStack APIs with broad version support, and a separate set of
per-project communication libraries which can follow the integrated
release cadence and maintain stable backport branches as needed,
makes the problem much more tractable in the long term.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Thomas Goirand
On 11/13/2014 10:56 PM, Martin Geisler wrote:
> Maybe a silly question, but why insist on this? Why would you insist on
> installing a JavaScript based application using your package manager?
> 
> I'm a huge fan of package managers and typically refuse to install
> anything globally if it doesn't come as a package.
> 
> However, the whole JavaScript ecosystem seems to be centered around the
> idea of doing local installations. That means that you no longer need
> the package manager to install the software -- you only need a package
> manager to install the base system (NodeJs and npm for JavaScript).

Yeah... Just like for Java, PHP, Perl, Python, you-name-it...

In what way will JavaScript be any different from all of these languages?

> Notice that Python has been moving rapidly in the same direction for
> years: you only need Python and pip to bootstrap yourself. After getting
> used to virtualenv, I've mostly stopped installing Python modules
> globally and that is how the JavaScript world expects you to work too.

Fine for development. Not for deployments. Not for distributions. Or you
just get a huge mess of every library installed 10 times, with 10
different versions, and then a security issue needs to be fixed...

> So maybe the Horizon package should be an installer package like the
> ones that download fonts or Adobe?

This is a horrible design which will *never* make it to distributions.
Please think again. What is it that makes Horizon so special? Answer:
nothing. It's "just a web app", so it doesn't need any special care. It
should be packaged, just like the rest of everything, with .deb/.rpm and
so on.

> That package would get the right version of node and which then runs the
> npm and bower commands to download the rest plus (importantly and much
> appreciated) puts the files in a sensible location and gives them good
> permissions.

Fine for your development environment. But that's it.

Also, does your $language-specific-package-manager have enough checks so
that no man-in-the-middle attack is possible? Is it secure enough?
Can a replay attack be done on it? Does it support any kind of
cryptographic checks like yum or apt do? I'm almost sure that's not the
case. pip is really horrible in this regard. I haven't checked, but I'm
almost sure that what we're proposing (eg: npm and such) has the same
weakness. And here, I'm only scratching the surface of the security
concerns. There are other concerns, like how good the dependency solver is
(remember: it took *years* for apt to be as good as it is right now, and it
still has some defects).

On 11/14/2014 12:59 AM, Martin Geisler wrote:
> It seems to me that it should be possible to translate the node module
> into system level packages in a mechanical fashion, assuming that
> you're willing to have a system package for each version of the node
> module

Sure! That's how I do most of my Python modules these days. I don't just
create them from scratch, I use my own "debpypi" script, which generates
a template for packaging. But it can't be fully automated. I could
almost do it in a fully automated manner for PEAR packages for PHP (see
"debpear" in the Debian archive), but it's harder with Python and pip/PyPi.

Stuff like debian/copyright files have to be processed by hand, and each
package is different (How to run unit tests? nose, testr, pytest? Does
it support python3? Is there a sphinx doc? How good is upstream short
and long description?). I guess it's going to be the same for Javascript
packages: it will be possible to do automation for some parts, but
manual work will always be needed.

On 11/14/2014 12:59 AM, Martin Geisler wrote:
> The guys behind npm have written a little about how that could work
> here:
>
> http://nodejs.org/api/modules.html#modules_addenda_package_manager_tips

It's fun to read, but very naive. The first thing that is very shocking is
that arch-independent things get installed into /usr/lib, where they
belong in /usr/share. If that is what the NPM upstream produces, that's
scary: he doesn't even know how the FHS (Filesystem Hierarchy Standard)
works.

> Has anyone written such wrapper packages? Not the xstatic system which
> seems to incur a porting effort -- but really a wrapper system that
> can translate any node module into a system package.

The xstatic packages are quite painless, from my viewpoint. What's
painful is linking an existing xstatic package with an already existing
libjs-* package that may have a completely different directory
structure. You can then end up with a forest of symlinks, but there's no
way around that. No wrapper can solve that problem either. And more
generally, a wrapper that writes a $distribution source package out of a
$language-specific package manager will never solve everything; it will only
reduce the amount of packaging work.

Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other features that can't be tested on current infra.

2014-11-13 Thread Dan Smith
> That sounds like something worth exploring at least, I didn't know
> about that kernel build option until now :-) It sounds like it ought
> to be enough to let us test the NUMA topology handling, CPU pinning
> and probably huge pages too.

Okay. I've been vaguely referring to this as a potential test vector,
but only just now looked up the details. That's my bad :)

> The main gap I'd see is NUMA aware PCI device assignment since the
> PCI <-> NUMA node mapping data comes from the BIOS and it does not
> look like this is fakeable as is.

Yeah, although I'd expect that the data is parsed and returned by a
library or utility that may be a hook for fakeification. However, it may
very well be more trouble than it's worth.

I still feel like we should be able to test generic PCI in a similar way
(passing something like a USB controller through to the guest, etc).
However, I'm willing to believe that the intersection of PCI and NUMA is
a higher order complication :)

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other features that can't be tested on current infra.

2014-11-13 Thread Daniel P. Berrange
On Thu, Nov 13, 2014 at 09:28:01AM -0800, Dan Smith wrote:
> > Yep, it is possible to run the tests inside VMs - the key is that when
> > you create the VMs you need to be able to give them NUMA topology. This
> > is possible if you're creating your VMs using virt-install, but not if
> > you're creating your VMs in a cloud.
> 
> I think we should explore this a bit more. AFAIK, we can simulate a NUMA
> system with CONFIG_NUMA_EMU=y and providing numa=fake=XXX to the guest
> kernel. From a quick check with some RAX folks, we should have enough
> control to arrange this. Since we can put a custom kernel (and
> parameters) into our GRUB configuration that pygrub should honor, I
> would think we could get a fake-NUMA guest running in at least one
> public cloud. Since HP's cloud runs KVM, I would assume we have control
> over our kernel and boot there as well.
> 
> Is there something I'm missing about why that's not doable?

That sounds like something worth exploring at least, I didn't know about
that kernel build option until now :-) It sounds like it ought to be enough
to let us test the NUMA topology handling, CPU pinning and probably huge
pages too. The main gap I'd see is NUMA aware PCI device assignment
since the PCI <-> NUMA node mapping data comes from the BIOS and it does
not look like this is fakeable as is.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Policy file not reloaded after changes

2014-11-13 Thread Nikhil Komawar
Forgot to mention the main part - this patch should enable the auto loading of 
policies.
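
For reference, the behaviour that sync should give us looks roughly like this 
(a minimal sketch against the oslo-incubator / oslo.policy style Enforcer API; 
names are from memory, so treat it as illustrative rather than exact):

  from oslo_config import cfg
  from oslo_policy import policy

  CONF = cfg.CONF
  enforcer = policy.Enforcer(CONF)

  def is_allowed(action, target, creds):
      # load_rules() only re-parses the policy file when its mtime has
      # changed since the last read, so edits are picked up without
      # restarting the glance services.
      enforcer.load_rules()
      return enforcer.enforce(action, target, creds)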

Thanks,
-Nikhil

From: Nikhil Komawar [nikhil.koma...@rackspace.com]
Sent: Thursday, November 13, 2014 11:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Policy file not reloaded after changes

Hi Ajaya,

We're making some progress on syncing the latest oslo-incubator code in Glance. 
It's a little trickier due to the property protection feature, so we've had 
some impedance. Please give your feedback at: 
https://review.openstack.org/#/c/127923/3

Please let me know if you've any concerns.

Thanks,
-Nikhil

From: Ajaya Agrawal [ajku@gmail.com]
Sent: Thursday, November 13, 2014 4:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [glance] Policy file not reloaded after changes

Hi All,

The policy file is not reloaded in glance after a change is made to it. You 
need to restart glance to load the new policy file. I think all other 
components reload the policy file after a change is made to it. Is it a bug or 
intended behavior?

Cheers,
Ajaya
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other features that can't be tested on current infra.

2014-11-13 Thread Dan Smith
> Yep, it is possible to run the tests inside VMs - the key is that when
> you create the VMs you need to be able to give them NUMA topology. This
> is possible if you're creating your VMs using virt-install, but not if
> you're creating your VMs in a cloud.

I think we should explore this a bit more. AFAIK, we can simulate a NUMA
system with CONFIG_NUMA_EMU=y and providing numa=fake=XXX to the guest
kernel. From a quick check with some RAX folks, we should have enough
control to arrange this. Since we can put a custom kernel (and
parameters) into our GRUB configuration that pygrub should honor, I
would think we could get a fake-NUMA guest running in at least one
public cloud. Since HP's cloud runs KVM, I would assume we have control
over our kernel and boot there as well.

Is there something I'm missing about why that's not doable?
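
For what it's worth, once a guest is booted with CONFIG_NUMA_EMU and 
numa=fake=N, something as small as this (plain Python, purely illustrative) 
should confirm the topology the tests would see:

  import glob

  # Each fake node appears as /sys/devices/system/node/node<N> in the guest.
  nodes = sorted(glob.glob("/sys/devices/system/node/node[0-9]*"))
  print("guest sees %d NUMA node(s)" % len(nodes))
  for node in nodes:
      with open("%s/cpulist" % node) as f:
          print("%s cpus: %s" % (node, f.read().strip()))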

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] security and swift multi-tenant fixes on stable branch

2014-11-13 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 13/11/14 18:17, stuart.mcla...@hp.com wrote:
> All,
> 
> The 0.1.9 version of glance_store, and glance's master branch both 
> contain some fixes for the Swift multi-tenant store.
> 
> This security related change hasn't merged to glance_store yet: 
> https://review.openstack.org/130200
> 
> I'd like to suggest that we try to merge this security fix and
> release it as glance_store '0.1.10'. Then make glance's
> juno/stable branch rely on glance_store '0.1.10' so that it picks
> up both the multi-tenant store and security fixes.

So you're forcing all stable branch users to upgrade their
glance_store module, with a version that includes featureful patches,
which is not nice.

I think those who maintain glance_store module in downstream
distributions will cherry-pick the security fix into their packages,
so there is nothing to do in terms of stable branches to handle the
security issue.

Objections?

> 
> The set of related glance stable branch patches would be: 
> https://review.openstack.org/134257 
> https://review.openstack.org/134286 
> https://review.openstack.org/134289/ (0.1.10 dependency -- also
> requires a global requirements change)
> 
> Does this seem ok?
> 
> -Stuart
> 
> ___ OpenStack-dev
> mailing list OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUZOouAAoJEC5aWaUY1u57aFMIAM2uhUPOLfBqNneKO89Kv3tU
uE5+JP3Oh7pSCwCgw+fgnxraG9jb5QjpV8rCHewvFpyWQKwsstmNjdMeryRIX1Hn
TZ42mSFUWkjDBJ/cvP2QyLXt2Il93xtqaAcLxo9enHUBR4F2lUCaZK0sm8jLkIFf
TYv9jaf5QwjIWD7VO51HibwoH4f2laJv4r8MbIuyQoUpMlKpeWzmETqm5NrIUCp+
Acvbxo0EaRgAhWRIfHmFtudVjeirjc6vG9yjxFwaObYODb3sridcnr5IOBwP8jrI
1WExsAPTMU6ut2j2pABxIc0PnYAcW1uzc8w4/oPMUp0rZsaQfveCH/mRA0QnqrQ=
=j14y
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] security and swift multi-tenant fixes on stable branch

2014-11-13 Thread stuart . mclaren

All,

The 0.1.9 version of glance_store, and glance's master branch both
contain some fixes for the Swift multi-tenant store.

This security related change hasn't merged to glance_store yet:
https://review.openstack.org/130200

I'd like to suggest that we try to merge this security fix and release
it as glance_store '0.1.10'. Then make glance's juno/stable branch
rely on glance_store '0.1.10' so that it picks up both the multi-tenant store
and security fixes.

The set of related glance stable branch patches would be:
https://review.openstack.org/134257
https://review.openstack.org/134286
https://review.openstack.org/134289/ (0.1.10 dependency -- also requires a 
global requirements change)

Does this seem ok?

-Stuart

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Martin Geisler
Matthias Runge  writes:

> On 13/11/14 15:56, Martin Geisler wrote:
>
>> Maybe a silly question, but why insist on this? Why would you insist on
>> installing a JavaScript based application using your package manager?
>> 
>> I'm a huge fan of package managers and typically refuse to install
>> anything globally if it doesn't come as a package.
>> 
>> However, the whole JavaScript ecosystem seems to be centered around the
>> idea of doing local installations. That means that you no longer need
>> the package manager to install the software -- you only need a package
>> manager to install the base system (NodeJs and npm for JavaScript).
> Yeah, I understand you.

Let me just add that this shift has been a very recent change for me.
With anything but Python and JavaScript, I use my system-level package
manager.

> But: doing local installs or installing things outside a package
> manager means that software is not maintained or properly updated
> any more. I'm a huge fan of not bundling stuff and re-using libraries
> from a central location. Copying foreign code to your own codebase is
> quite popular in JavaScript world. That doesn't mean, it's the right
> thing to do.

I agree that you don't want to copy third-party libraries into your
code. In some sense, that's not what the JavaScript world is doing, at
least not before install time.

What I mean is: the ease of use of local package managers has led to an
explosion in the number of tiny packages. So JS projects will no longer
copy dependencies into their own project (into their version control
system). They will instead depend on them using a package manager such
as npm or bower.


It seems to me that it should be possible to translate the node modules into
system-level packages in a mechanical fashion, assuming that you're
willing to have a system package for each version of the node module
(you'll need multiple system packages since it's very likely that you'll
end up using multiple different versions at the same time --
alternatively, you could let each system package install every published
or popular node module version).
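
As a thought experiment, the mechanical part could look something like the
hypothetical script below (not an existing tool; the name mapping and version
handling are hand-waved). It also shows what would still be manual work:
copyright, long description, tests and so on.

  import json

  def npm_to_control_stanza(package_json):
      """Emit a Debian-ish control stanza for one (name, version) pair."""
      with open(package_json) as f:
          meta = json.load(f)
      pkg = "node-%s-%s" % (meta["name"].lower(), meta["version"])
      # One system package per version, so depend on exact versioned names.
      deps = ", ".join(
          "node-%s-%s" % (name.lower(), spec.lstrip("^~>=<"))
          for name, spec in sorted(meta.get("dependencies", {}).items()))
      return ("Package: %s\nArchitecture: all\nDepends: %s\nDescription: %s\n"
              % (pkg, deps or "nodejs", meta.get("description", "TODO")))

  print(npm_to_control_stanza("package.json"))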

The guys behind npm have written a little about how that could work here:

  http://nodejs.org/api/modules.html#modules_addenda_package_manager_tips

Has anyone written such wrapper packages? Not the xstatic system which
seems to incur a porting effort -- but really a wrapper system that can
translate any node module into a system package.

-- 
Martin Geisler

http://google.com/+MartinGeisler


pgpvFpbyk_SbN.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Changing our weekly meeting format

2014-11-13 Thread Chris K
+1
I think the best use of our time is to discuss new features and functions
that may have an API or functional impact on Ironic or projects that depend
on Ironic.

Chris Krelle

On Thu, Nov 13, 2014 at 8:22 AM, Ghe Rivero  wrote:

> I agree that a lot of time is lost on the announcements and status
> reports, but mostly because IRC is a low-bandwidth communication channel
> (like waiting several minutes for a 3-line announcement to be written).
>
> I propose that any announcement and project status must be written in
> advance in an etherpad, and that during the IRC meeting we just have a slot for
> people to discuss anything that needs further explanation, only mentioning
> the topic but not the content.
>
> Ghe Rivero
> On Nov 13, 2014 5:08 PM, "Peeyush Gupta" 
> wrote:
>
>> +1
>>
>> I agree with Lucas. Sounds like a good idea. I guess if we could spare
>> more time for discussing new features and requirements rather than
>> asking for status, that would be helpful for everyone.
>>
>> On 11/13/2014 05:45 PM, Lucas Alvares Gomes wrote:
>> > This was discussed in the Contributor Meetup on Friday at the Summit
>> > but I think it's important to share on the mail list too so we can get
>> > more opinions/suggestions/comments about it.
>> >
>> > In the Ironic weekly meeting we dedicate a good time of the meeting to
>> > do some announcements, reporting bug status, CI status, oslo status,
>> > specific drivers status, etc... It's all good information, but I
>> > believe that the mail list would be a better place to report it and
>> > then we can free some time from our meeting to actually discuss
>> > things.
>> >
>> > Are you guys in favor of it?
>> >
>> > If so I'd like to propose a new format based on the discussions we had
>> > in Paris. For the people doing the status report on the meeting, they
>> > would start adding the status to an etherpad and then we would have a
>> > responsible person to get this information and send it to the mail
>> > list once a week.
>> >
>> > For the meeting itself we have a wiki page with an agenda[1] which
>> > everyone can edit to put the topic they want to discuss in the meeting
>> > there, I think that's fine and works. The only change about it would
>> > be that we may want freeze the agenda 2 days before the meeting so
>> > people can take a look at the topics that will be discussed and
>> > prepare for it; With that we can move forward quicker with the
>> > discussions because people will be familiar with the topics already.
>> >
>> > Let me know what you guys think.
>> >
>> > [1] https://wiki.openstack.org/wiki/Meetings/Ironic
>> >
>> > Lucas
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> --
>> Peeyush Gupta
>> gpeey...@linux.vnet.ibm.com
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Policy file not reloaded after changes

2014-11-13 Thread Nikhil Komawar
Hi Ajaya,

We're making some progress on syncing the latest oslo-incubator code in Glance. 
It's a little trickier due to the property protection feature, so we've had 
some impedance. Please give your feedback at: 
https://review.openstack.org/#/c/127923/3

Please let me know if you've any concerns.

Thanks,
-Nikhil

From: Ajaya Agrawal [ajku@gmail.com]
Sent: Thursday, November 13, 2014 4:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [glance] Policy file not reloaded after changes

Hi All,

The policy file is not reloaded in glance after a change is made to it. You 
need to restart glance to load the new policy file. I think all other 
components reload the policy file after a change is made to it. Is it a bug or 
intended behavior?

Cheers,
Ajaya
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Martin Geisler
Thomas Goirand  writes:

> Also, if the Horizon project starts using something like NPM (which
> again, is already available in Debian, so it has my preference), will we
> at least be able to control what version gets in, just like with pip?

Yes, npm is similar to pip in that you can specify the versions you want
to install. You can specify loose versions (like 1.2.x, if you're okay
with getting a random patch version) or you can specify the full
version.

In parallel with that, you can add a "shrinkwrap" file which lists the
versions to install recursively. This locks down the versions of
indirect dependencies too (one of your dependencies might otherwise
depend on a loose version number).

> Because that's a huge concern for me, and this has been very well and
> carefully addressed during the Juno cycle. I would very much appreciate
> if the same kind of care was taken again during the Kilo cycle, whatever
> path we take. How do I use npm by the way? Any pointer?

After installing it, you can try running 'npm install eslint'. That will
create a node_modules folder in your current working directory and
install ESLint inside it. It will also create a cache in ~/.npm.

The ESLint executable is now

  node_modules/.bin/eslint

You'll notice that npm creates

  node_modules/eslint/node_modules/

and installs the ESLint dependencies there. Try removing node_modules,
then install one of the dependencies first followed by ESLint:

  rm -r node_modules
  npm install object-assign eslint

This will put both object-assign and eslint at the top of node_modules
and object-assign is no longer in node_modules/eslint/node_modules/.

This works because require('object-assign') in NodeJS will search up the
directory tree until it finds the module. So the ESLint code can still
use object-assign.

You can run 'npm dedupe' to move modules up the tree and de-duplicate
the install somewhat.

This nested module system also works the other way: if you run 'npm
install bower' after installing ESLint, you end up with two versions of
object-assign -- check 'npm list object-assign' for a dependency graph.

Surprisingly and unlike, say, Python, executing

  require('object-assign')

can give you different modules depending on where the code lives that
executes the statement. This allows different parts of Bower to use
different versions of object-assign. This is seen as a feature in this
world... I fear that it can cause strange problems and bugs when data
travels from one part of the program to another.

So, the philosophy behind this is very different from what we're used to
with system-level package managers (focus on local installs) and even
from what we have in the Python world with pip (multiple versions
installed concurrently).

-- 
Martin Geisler

http://google.com/+MartinGeisler


pgp6u4TvkQL2G.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Jiri Tomasek

On 11/13/2014 04:04 PM, Thomas Goirand wrote:

On 11/13/2014 12:13 PM, Richard Jones wrote:

the npm stuff is all tool chain; tools
that I believe should be packaged as such by packagers.

npm is already in Debian:
https://packages.debian.org/sid/npm

However, just like we can't use CPAN, "pear install", "pip install" and
such when building or installing packages, we won't be able to use NPM.
This means every single dependency that isn't in Debian will need to be
packaged.


Horizon is an incredibly complex application. Just so we're all on the
same page, the components installed by bower for angboard are:

angular
   Because writing an application the size of Horizon without it would be
madness :)
angular-route
   Provides structure to the application through URL routing.
angular-cookies
   Provides management of browser cookies in a way that integrates well
with angular.
angular-sanitize
   Allows direct embedding of HTML into angular templates, with sanitization.
json3
   Compatibility for older browsers so JSON works.
es5-shim
   Compatibility for older browsers so Javascript (ECMAScript 5) works.
angular-smart-table
   Table management (population, sorting, filtering, pagination, etc)
angular-local-storage
Browser local storage with cookie fallback, integrated with angular
mechanisms.
angular-bootstrap
Extensions to angular that leverage bootstrap (modal popups, tabbed
displays, ...)
font-awesome
Additional glyphs to use in the user interface (warning symbol, info
symbol, ...)
boot
Bootstrap for CSS styling (this is the dependency that brings in
jquery and requirejs)
underscore
Javascript utility library providing a ton of features Javascript
lacks but Python programmers expect.
ng-websocket
Angular-friendly interface to using websockets
angular-translate
Support for localization in angular using message catalogs generated
by gettext/transifex.
angular-mocks
Mocking support for unit testing angular code
angular-scenario
More support for angular unit tests

Additionally, angboard vendors term.js because it was very poorly
packaged in the bower ecosystem. +1 for xstatic there I guess :)

So those are the components we needed to create the prototype in a few
weeks. Not using them would have added months (or possibly years) to the
development time. Creating an application of the scale of Horizon
without leveraging all that existing work would be like developing
OpenStack while barring all use of Python 3rd-party packages.

I have no problem with adding dependencies. That's how things work, for
sure; I just want to make sure it doesn't become hell, with so many
components inter-depending on hundreds of others, which would become
unmanageable. If we define clear boundaries, then fine! The above seems
reasonable anyway.

Though did you list the dependencies of the above?

Also, if the Horizon project starts using something like NPM (which
again, is already available in Debian, so it has my preference), will we
at least be able to control what version gets in, just like with pip?
Because that's a huge concern for me, and this has been very well and
carefully addressed during the Juno cycle. I would very much appreciate
if the same kind of care was taken again during the Kilo cycle, whatever
path we take. How do I use npm by the way? Any pointer?


NPM and Bower work in a similar way to pip: they maintain files similar 
to requirements.txt that list dependencies and their versions.
I think we should bring up a patch that introduces this toolset so we can 
discuss the real number of dependencies and the process.
It would also be nice to introduce something similar to 
global-requirements.txt in the OpenStack project, to make sure we have all 
deps in one place and some approval process for the versions used.


Here is an example of a random Angular application's package.json (used by 
NPM) and bower.json (used by Bower) files:

http://fpaste.org/150513/89599214/

I'll try to search for a good article that describes how this ecosystem 
works.




Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Changing our weekly meeting format

2014-11-13 Thread Ghe Rivero
I agree that a lot of time is lost on the announcements and status
reports, but mostly because IRC is a low-bandwidth communication channel
(like waiting several minutes for a 3-line announcement to be written).

I propose that any announcement and project status must be written in
advance in an etherpad, and that during the IRC meeting we just have a slot for
people to discuss anything that needs further explanation, only mentioning
the topic but not the content.

Ghe Rivero
On Nov 13, 2014 5:08 PM, "Peeyush Gupta" 
wrote:

> +1
>
> I agree with Lucas. Sounds like a good idea. I guess if we could spare
> more time for discussing new features and requirements rather than
> asking for status, that would be helpful for everyone.
>
> On 11/13/2014 05:45 PM, Lucas Alvares Gomes wrote:
> > This was discussed in the Contributor Meetup on Friday at the Summit
> > but I think it's important to share on the mail list too so we can get
> > more opinions/suggestions/comments about it.
> >
> > In the Ironic weekly meeting we dedicate a good time of the meeting to
> > do some announcements, reporting bug status, CI status, oslo status,
> > specific drivers status, etc... It's all good information, but I
> > believe that the mail list would be a better place to report it and
> > then we can free some time from our meeting to actually discuss
> > things.
> >
> > Are you guys in favor of it?
> >
> > If so I'd like to propose a new format based on the discussions we had
> > in Paris. For the people doing the status report on the meeting, they
> > would start adding the status to an etherpad and then we would have a
> > responsible person to get this information and send it to the mail
> > list once a week.
> >
> > For the meeting itself we have a wiki page with an agenda[1] which
> > everyone can edit to put the topic they want to discuss in the meeting
> > there, I think that's fine and works. The only change about it would
> > be that we may want freeze the agenda 2 days before the meeting so
> > people can take a look at the topics that will be discussed and
> > prepare for it; With that we can move forward quicker with the
> > discussions because people will be familiar with the topics already.
> >
> > Let me know what you guys think.
> >
> > [1] https://wiki.openstack.org/wiki/Meetings/Ironic
> >
> > Lucas
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> --
> Peeyush Gupta
> gpeey...@linux.vnet.ibm.com
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] - the setup of a DHCP sub-group

2014-11-13 Thread Don Kehn
If this shows up twice sorry for the repeat:

Armando, Carl:
During the Summit, Armando and I had a very quick conversation concerning a
blueprint that I submitted,
https://blueprints.launchpad.net/neutron/+spec/dhcp-cpnr-integration, and
Armando mentioned the possibility of getting together a sub-group tasked
with DHCP concerns in Neutron. I have talked with the Infoblox folks (see
https://blueprints.launchpad.net/neutron/+spec/neutron-ipam), and everyone
seems to be in agreement that there is synergy, especially concerning the
development of a relay and potentially looking into how DHCP is handled. In
addition, during the Friday meetup session on DHCP that I gave, there seemed
to be some general interest from some of the operators as well.

So what would be the formal process to start a sub-group and get this
underway?

DeKehn
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Rodrigo Duarte
Thanks Steve.

On Thu, Nov 13, 2014 at 12:50 PM, Steve Martinelli 
wrote:

> Looking at http://specs.openstack.org/openstack/oslo-specs/ and
> http://specs.openstack.org/openstack/keystone-specs/ should give you all the
> info you need. The specs are hosted at
> https://github.com/openstack/keystone-specs; there's a template spec too.
>
> Thanks,
>
> _
> Steve Martinelli
> OpenStack Development - Keystone Core Member
> Phone: (905) 413-2851
> E-Mail: steve...@ca.ibm.com
>
>
>
> From:Rodrigo Duarte 
> To:"OpenStack Development Mailing List (not for usage questions)"
> 
> Date:11/13/2014 10:13 AM
> Subject:Re: [openstack-dev] [oslo] kilo graduation plans
> --
>
>
>
> Hi Doug,
>
> I'm going to write the spec regarding the policy graduation, it will be
> placed in the keystone-specs repository. I was wondering if someone has
> examples of such specs so we can cover all necessary points.
>
> On Thu, Nov 13, 2014 at 10:34 AM, Doug Hellmann <*d...@doughellmann.com*
> > wrote:
>
> On Nov 13, 2014, at 8:31 AM, Dmitry Tantsur <*dtant...@redhat.com*
> > wrote:
>
> > On 11/13/2014 01:54 PM, Doug Hellmann wrote:
> >>
> >> On Nov 13, 2014, at 3:52 AM, Dmitry Tantsur <*dtant...@redhat.com*
> > wrote:
> >>
> >>> On 11/12/2014 08:06 PM, Doug Hellmann wrote:
>  During our “Graduation Schedule” summit session we worked through the
> list of modules remaining in the incubator. Our notes are in the
> etherpad [1], but as part of the “Write it Down” theme for Oslo this cycle
> I am also posting a summary of the outcome here on the mailing list for
> wider distribution. Let me know if you remembered the outcome for any of
> these modules differently than what I have written below.
> 
>  Doug
> 
> 
> 
>  Deleted or deprecated modules:
> 
>  funcutils.py - This was present only for python 2.6 support, but it
> is no longer used in the applications. We are keeping it in the stable/juno
> branch of the incubator, and removing it from master (
> *https://review.openstack.org/130092*
> )
> 
>  hooks.py - This is not being used anywhere, so we are removing it. (
> *https://review.openstack.org/#/c/125781/*
> )
> 
>  quota.py - A new quota management system is being created (
> *https://etherpad.openstack.org/p/kilo-oslo-common-quota-library*
> ) and
> should replace this, so we will keep it in the incubator for now but
> deprecate it.
> 
>  crypto/utils.py - We agreed to mark this as deprecated and encourage
> the use of Barbican or cryptography.py (
> *https://review.openstack.org/134020*
> )
> 
>  cache/ - Morgan is going to be working on a new oslo.cache library as
> a front-end for dogpile, so this is also deprecated (
> *https://review.openstack.org/134021*
> )
> 
>  apiclient/ - With the SDK project picking up steam, we felt it was
> safe to deprecate this code as well (*https://review.openstack.org/134024*
> ).
> 
>  xmlutils.py - This module was used to provide a security fix for some
> XML modules that have since been updated directly. It was removed. (
> *https://review.openstack.org/#/c/125021/*
> )
> 
> 
> 
>  Graduating:
> 
>  oslo.context:
>  - Dims is driving this
>  -
> *https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context*
> 
>  - includes:
> context.py
> 
>  oslo.service:
>  - Sachi is driving this
>  -
> *https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service*
> 
>  - includes:
> eventlet_backdoor.py
> loopingcall.py
> periodic_task.py
> >>> By te the way, right now I'm looking into updating this code to be able to
> run tasks on a thread pool, not only in one thread (quite a problem for
> Ironic). Does it somehow interfere with the graduation? Any deadlines or
> something?
> >>
> >> Feature development on code declared ready for graduation is basically
> frozen until the new library is created. You should plan on doing that work
> in the new oslo.service repository, which should be showing up soon. And
> feature you describe sounds like something for which we would want a
> spec written, so please consider filing one when you have some of the
> details worked out.
> > Sure, right now I'm experimenting in Ironic tree to figure out how it
> really works. There's a single oslo-specs repo for the whole oslo, right?
>
> Yes, that’s right openstack/oslo-specs. Having a branch somewhere as a
> reference would be great for the spec reviewers, so that seems like a good
> way to start.

Re: [openstack-dev] [Ironic] Changing our weekly meeting format

2014-11-13 Thread Peeyush Gupta
+1

I agree with Lucas. Sounds like a good idea. I guess if we could spare
more time for discussing new features and requirements rather than
asking for status, that would be helpful for everyone.

On 11/13/2014 05:45 PM, Lucas Alvares Gomes wrote:
> This was discussed in the Contributor Meetup on Friday at the Summit
> but I think it's important to share on the mail list too so we can get
> more opinions/suggestions/comments about it.
>
> In the Ironic weekly meeting we dedicate a good time of the meeting to
> do some announcements, reporting bug status, CI status, oslo status,
> specific drivers status, etc... It's all good information, but I
> believe that the mail list would be a better place to report it and
> then we can free some time from our meeting to actually discuss
> things.
>
> Are you guys in favor of it?
>
> If so I'd like to propose a new format based on the discussions we had
> in Paris. For the people doing the status report on the meeting, they
> would start adding the status to an etherpad and then we would have a
> responsible person to get this information and send it to the mail
> list once a week.
>
> For the meeting itself we have a wiki page with an agenda[1] which
> everyone can edit to put the topic they want to discuss in the meeting
> there, I think that's fine and works. The only change about it would
> be that we may want freeze the agenda 2 days before the meeting so
> people can take a look at the topics that will be discussed and
> prepare for it; With that we can move forward quicker with the
> discussions because people will be familiar with the topics already.
>
> Let me know what you guys think.
>
> [1] https://wiki.openstack.org/wiki/Meetings/Ironic
>
> Lucas
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
Peeyush Gupta
gpeey...@linux.vnet.ibm.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] mixing vif drivers e1000 and virtio

2014-11-13 Thread Daniel P. Berrange
On Thu, Nov 13, 2014 at 07:37:33AM -0800, Srini Sundararajan wrote:
> Hi,
> When I create an instance with more than one VIF, how can I pick and
> choose/configure which driver (e1000/virtio) is assigned?

The VIF driver is customizable at a per-image level using the
hw_vif_model metadata parameter in Glance. There is no facility
for changing this per NIC.
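
For example (a sketch using python-glanceclient; the exact client call may 
differ between client versions, and GLANCE_ENDPOINT, AUTH_TOKEN and IMAGE_ID 
are placeholders):

  from glanceclient import Client

  glance = Client('2', endpoint=GLANCE_ENDPOINT, token=AUTH_TOKEN)

  # Every NIC of instances booted from this image will then use e1000.
  glance.images.update(IMAGE_ID, hw_vif_model='e1000')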

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Steve Martinelli
Looking at http://specs.openstack.org/openstack/oslo-specs/ and 
http://specs.openstack.org/openstack/keystone-specs/ should give you all the 
info you need. The specs are hosted at 
https://github.com/openstack/keystone-specs; there's a template spec too.

Thanks,

_
Steve Martinelli
OpenStack Development - Keystone Core Member
Phone: (905) 413-2851
E-Mail: steve...@ca.ibm.com



From:   Rodrigo Duarte 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   11/13/2014 10:13 AM
Subject:Re: [openstack-dev] [oslo] kilo graduation plans



Hi Doug,

I'm going to write the spec regarding the policy graduation, it will be 
placed in the keystone-specs repository. I was wondering if someone has 
examples of such specs so we can cover all necessary points.

On Thu, Nov 13, 2014 at 10:34 AM, Doug Hellmann  
wrote:

On Nov 13, 2014, at 8:31 AM, Dmitry Tantsur  wrote:

> On 11/13/2014 01:54 PM, Doug Hellmann wrote:
>>
>> On Nov 13, 2014, at 3:52 AM, Dmitry Tantsur  
wrote:
>>
>>> On 11/12/2014 08:06 PM, Doug Hellmann wrote:
 During our “Graduation Schedule” summit session we worked through the 
list of modules remaining in the incubator. Our notes are in the 
etherpad [1], but as part of the “Write it Down” theme for Oslo this cycle 
I am also posting a summary of the outcome here on the mailing list for 
wider distribution. Let me know if you remembered the outcome for any of 
these modules differently than what I have written below.

 Doug



 Deleted or deprecated modules:

 funcutils.py - This was present only for python 2.6 support, but it 
is no longer used in the applications. We are keeping it in the 
stable/juno branch of the incubator, and removing it from master (
https://review.openstack.org/130092)

 hooks.py - This is not being used anywhere, so we are removing it. (
https://review.openstack.org/#/c/125781/)

 quota.py - A new quota management system is being created (
https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and 
should replace this, so we will keep it in the incubator for now but 
deprecate it.

 crypto/utils.py - We agreed to mark this as deprecated and encourage 
the use of Barbican or cryptography.py (
https://review.openstack.org/134020)

 cache/ - Morgan is going to be working on a new oslo.cache library as 
a front-end for dogpile, so this is also deprecated (
https://review.openstack.org/134021)

 apiclient/ - With the SDK project picking up steam, we felt it was 
safe to deprecate this code as well (https://review.openstack.org/134024).

 xmlutils.py - This module was used to provide a security fix for some 
XML modules that have since been updated directly. It was removed. (
https://review.openstack.org/#/c/125021/)



 Graduating:

 oslo.context:
 - Dims is driving this
 - 
https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context

 - includes:
context.py

 oslo.service:
 - Sachi is driving this
 - 
https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service

 - includes:
eventlet_backdoor.py
loopingcall.py
periodic_task.py
>>> By the way, right now I'm looking into updating this code to be able to 
run tasks on a thread pool, not only in one thread (quite a problem for 
Ironic). Does it somehow interfere with the graduation? Any deadlines or 
something?
>>
>> Feature development on code declared ready for graduation is basically 
frozen until the new library is created. You should plan on doing that 
work in the new oslo.service repository, which should be showing up soon. 
And the feature you describe sounds like something for which we would want 
a spec written, so please consider filing one when you have some of the 
details worked out.
> Sure, right now I'm experimenting in Ironic tree to figure out how it 
really works. There's a single oslo-specs repo for the whole oslo, right?

Yes, that's right openstack/oslo-specs. Having a branch somewhere as a 
reference would be great for the spec reviewers, so that seems like a good 
way to start.

Doug

>
>>
>>>
request_utils.py
service.py
sslutils.py
systemd.py
threadgroup.py

 oslo.utils:
 - We need to look into how to preserve the git history as we import 
these modules.
 - includes:
fileutils.py
versionutils.py



 Remaining untouched:

 scheduler/ - Gantt probably makes this code obsolete, but it isn't 
clear whether Gantt has enough traction yet so we will hold onto these in 
the incubator for at least another cycle.

 report/ - There's interest in creating an oslo.reports library 
containing this code, but we haven?t had time to coordinate with Solly 
about doing that.



 Other work:

 We will continue the 

Re: [openstack-dev] [neutron] Is this fix introducing another different bug to dhcp-agent?

2014-11-13 Thread Robert Li (baoli)
Nice catch. Since it’s already merged, a new bug may be in order.

—Robert

On 11/13/14, 10:25 AM, "Miguel Ángel Ajo" <majop...@redhat.com> wrote:

I believe this fix to IPv6 dhcp spawn breaks isolated metadata when we have a 
subnet combination like this on a network:

1) IPv6 subnet, with DHCP enabled
2) IPv4 subnet, with isolated metadata enabled.


https://review.openstack.org/#/c/123671/1/neutron/agent/dhcp_agent.py

I haven’t been able to test yet, but wanted to share it before I forget.




Miguel Ángel
ajo @ freenode.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Ryan Brown
On 11/13/2014 09:58 AM, Clint Byrum wrote:
> Excerpts from Zane Bitter's message of 2014-11-13 05:54:03 -0800:
>> On 13/11/14 03:29, Murugan, Visnusaran wrote:

> [snip]

>>> 3.Migrate heat to use TaskFlow. (Too many code change)
>>
>> If it's just handling timed triggers (maybe this is closer to #2) and 
>> not migrating the whole code base, then I don't see why it would be a 
>> big change (or even a change at all - it's basically new functionality). 
>> I'm not sure if TaskFlow has something like this already. If not we 
>> could also look at what Mistral is doing with timed tasks and see if we 
>> could spin some of it out into an Oslo library.
>>
> 
> I feel like it boils down to something running periodically checking for
> scheduled tasks that are due to run but have not run yet. I wonder if we
> can actually look at Ironic for how they do this, because Ironic polls
> power state of machines constantly, and uses a hash ring to make sure
> only one conductor is polling any one machine at a time. If we broke
> stacks up into a hash ring like that for the purpose of singleton tasks
> like timeout checking, that might work out nicely.

+1

Using a hash ring is a great way to shard tasks. I think the most
sensible way to add this would be to make timeout polling a
responsibility of the Observer instead of the engine.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VMware networking support

2014-11-13 Thread Armando M.
I chimed in on another thread, but I am reinstating my point just in case.

On 13 November 2014 04:38, Gary Kotton  wrote:

>  Hi,
> A few months back we started to work on an umbrella spec for VMware
> networking support (https://review.openstack.org/#/c/105369). There are a
> number of different proposals for a number of different use cases. In
> addition to providing one another with an update of our progress we need to
> discuss the following challenges:
>
>- At the summit there was talk about splitting out vendor code from
>the neutron code base. The aforementioned specs are not being approved
>until we have decided what we as a community want/need. We need to
>understand how we can continue our efforts and not be blocked or hindered
>by this debate.
>
The proposal of allowing vendor plugins to be in full control of their own
destiny will be submitted as any other blueprint and will be discussed as
any other community effort. In my opinion, there is no need to be blocked
waiting to see whether the proposal goes anywhere. Specs, code and CI being
submitted will have minimal impact irrespective of any decision reached.

So my suggestion is to keep your code current with trunk, and do your 3rd
Party CI infrastructure homework, so that when we are ready to push the
trigger there will be no further delay.

>
>- CI updates – in order to provide a new plugin we are required to
>provide CI (yes, this is written in stone and in some cases marble)
>- Additional support may be required in the following:
>   - Nova – for example Neutron may be exposing extensions or
>   functionality that requires Nova integrations
>   - Devstack – In order to get CI up and running we need devstack
>   support
>
> As a step forwards I would like to suggest that we meeting at
> #openstack-vmware channel on Tuesday at 15:00 UTC. Is this ok with everyone?
> Thanks
> Gary
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] mixing vif drivers e1000 and virtio

2014-11-13 Thread Srini Sundararajan
Hi,
When I create an instance with more than one vif, how can I pick and
choose/configure which driver (e1000/virtio) I can assign?
Many thanks
Sri
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About VMWare network bp

2014-11-13 Thread Armando M.
Hi there,

My answers inline. CC the dev list too.

On 13 November 2014 01:24, Gary Kotton  wrote:

>  Hi,
> At the moment the BP is blocked by the design on splitting  out the vendor
> plugins.
>
We have implemented the NSXv plugin based on stable/icehouse and plan to
> start to push this upstream soon. So at the moment I think that we are all
> blocked. The NSXv plugin is a holistic one. The IBM and HP ones are drivers
> that hook into ML2. I am not sure if these will reside in the same or
> different projects.
>

I don't think this statement is entirely accurate, please let's not spread
FUD; it is true that splitting out of the vendor plugins has been proposed
at the summit, but nothing has actually been finalized yet. As a matter of
fact, the proposal will be going through the same review process as any
other community effort in the form of a blueprint specification.

The likely outcome of that can be:

- the proposal gets momentum and it gets ultimately approved
- the proposal does not get any traction and it's ultimately deferred
- the proposal gets attention, but it's shot down for lack of agreement

Regardless of the outcome, we can always find a place for the code being
contributed, therefore I would suggest to proceed and make progress on any
pending effort you may have, on the blueprint spec, the actual code, and
the 3rd party CI infrastructure.

If you have made progress on all of three, that's even better!

Hope this help
Armando



>   From: Feng Xi BJ Yan 
> Date: Thursday, November 13, 2014 at 9:46 AM
> To: Gary Kotton , "arma...@gmail.com" <
> arma...@gmail.com>
> Cc: Zhu ZZ Zhu , "d...@us.ibm.com" 
> Subject: About VMWare network bp
>
>   Hi, Gary and Armando,
> Long time no see.
> Our work on the VMWare network bp has been blocked for a long time. Shall we continue?
> Please let me know if you guys have any plans on this. Maybe we could
> resume our weekly talk firstly.
>
> Best Regard:)
> Bruce Yan
>
> Yan, Fengxi (闫凤喜)
> Openstack Platform Team
> IBM China Systems & Technology Lab, Beijing
> E-Mail: yanfen...@cn.ibm.com
> Tel: 86-10-82451418  Notes: Feng Xi FX Yan/China/IBM
> Address: 3BW239, Ring Building. No.28 Building, ZhongGuanCun Software
> Park,No.8
> DongBeiWang West Road, ShangDi, Haidian District, Beijing, P.R.China
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Matthias Runge
On 13/11/14 15:56, Martin Geisler wrote:

> Maybe a silly question, but why insist on this? Why would you insist on
> installing a JavaScript based application using your package manager?
> 
> I'm a huge fan of package managers and typically refuse to install
> anything globally if it doesn't come as a package.
> 
> However, the whole JavaScript ecosystem seems to be centered around the
> idea of doing local installations. That means that you no longer need
> the package manager to install the software -- you only need a package
> manager to install the base system (NodeJs and npm for JavaScript).
Yeah, I understand you.

But doing local installs, or installing things outside a package manager,
means that software is not maintained or properly updated any more.
I'm a huge fan of not bundling stuff and of re-using libraries from a
central location. Copying foreign code into your own codebase is quite
popular in the JavaScript world. That doesn't mean it's the right thing to do.

Having a package manager pulling updates for your system (rather than
various applications trying to update themselves) is a big feature.

Just try to keep your Windows system up to date: how many different
update tools do you need to use? Are you sure you really got them all?
Look at the node.js CVEs listed in another mail in this thread. They were
all due to copying foreign code into their code base.

Matthias

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Is this fix introducing another different bug to dhcp-agent?

2014-11-13 Thread Miguel Ángel Ajo
I believe this fix to IPv6 dhcp spawn breaks isolated metadata when we have a 
subnet combination like this on a network:

1) IPv6 subnet, with DHCP enabled
2) IPv4 subnet, with isolated metadata enabled.


https://review.openstack.org/#/c/123671/1/neutron/agent/dhcp_agent.py  

I haven’t been able to test yet, but wanted to share it before I forget.




Miguel Ángel
ajo @ freenode.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Undead DB objects: ProviderFirewallRule and InstanceGroupPolicy?

2014-11-13 Thread Matthew Booth
There are 3 db apis relating to ProviderFirewallRule:
provider_fw_rule_create, provider_fw_rule_get_all, and
provider_fw_rule_destroy. Of these, only provider_fw_rule_get_all seems
to be used. i.e. It seems they can be queried, but not created.

InstanceGroupPolicy doesn't seem to be used anywhere at all.
_validate_instance_group_policy() in compute manager seems to be doing
something else.

Are these undead relics in need of a final stake through the heart, or
is something else going on here?

Thanks,

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Morgan Fainberg


> On Nov 12, 2014, at 14:22, Doug Hellmann  wrote:
> 
> 
>> On Nov 12, 2014, at 4:40 PM, Adam Young  wrote:
>> 
>>> On 11/12/2014 02:06 PM, Doug Hellmann wrote:
>>> During our “Graduation Schedule” summit session we worked through the list 
>>> of modules remaining in the incubator. Our notes are in the etherpad 
>>> [1], but as part of the "Write it Down” theme for Oslo this cycle I am also 
>>> posting a summary of the outcome here on the mailing list for wider 
>>> distribution. Let me know if you remembered the outcome for any of these 
>>> modules differently than what I have written below.
>>> 
>>> Doug
>>> 
>>> 
>>> 
>>> Deleted or deprecated modules:
>>> 
>>> funcutils.py - This was present only for python 2.6 support, but it is no 
>>> longer used in the applications. We are keeping it in the stable/juno 
>>> branch of the incubator, and removing it from master 
>>> (https://review.openstack.org/130092)
>>> 
>>> hooks.py - This is not being used anywhere, so we are removing it. 
>>> (https://review.openstack.org/#/c/125781/)
>>> 
>>> quota.py - A new quota management system is being created 
>>> (https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and 
>>> should replace this, so we will keep it in the incubator for now but 
>>> deprecate it.
>>> 
>>> crypto/utils.py - We agreed to mark this as deprecated and encourage the 
>>> use of Barbican or cryptography.py (https://review.openstack.org/134020)
>>> 
>>> cache/ - Morgan is going to be working on a new oslo.cache library as a 
>>> front-end for dogpile, so this is also deprecated 
>>> (https://review.openstack.org/134021)
>>> 
>>> apiclient/ - With the SDK project picking up steam, we felt it was safe to 
>>> deprecate this code as well (https://review.openstack.org/134024).
>>> 
>>> xmlutils.py - This module was used to provide a security fix for some XML 
>>> modules that have since been updated directly. It was removed. 
>>> (https://review.openstack.org/#/c/125021/)
>>> 
>>> 
>>> 
>>> Graduating:
>>> 
>>> oslo.context:
>>> - Dims is driving this
>>> - 
>>> https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
>>> - includes:
>>>context.py
>>> 
>>> oslo.service:
>>> - Sachi is driving this
>>> - 
>>> https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
>>> - includes:
>>>eventlet_backdoor.py
>>>loopingcall.py
>>>periodic_task.py
>>>request_utils.py
>>>service.py
>>>sslutils.py
>>>systemd.py
>>>threadgroup.py
>>> 
>>> oslo.utils:
>>> - We need to look into how to preserve the git history as we import these 
>>> modules.
>>> - includes:
>>>fileutils.py
>>>versionutils.py
>> You missed oslo.policy.  Graduating, and moving under the AAA program.
> 
> I sure did. I thought we’d held a separate session on policy and I was going 
> to write it up separately, but now I’m not finding a link to a separate 
> etherpad. I must have been mixing that discussion up with one of the other 
> sessions.
> 
> The Keystone team did agree to adopt the policy module and create a library 
> from it. I have Morgan and Adam down as volunteering to drive that process. 
> Since we’re changing owners, I’m not sure where we want to put the 
> spec/blueprint to track the work. Maybe under the keystone program, since 
> you’re doing the work?
> 
Yeah, putting it in keystone-specs makes the most sense, I think, of the locations 
we have today. 

--Morgan 

>> 
>> 
>>> 
>>> 
>>> Remaining untouched:
>>> 
>>> scheduler/ - Gantt probably makes this code obsolete, but it isn’t clear 
>>> whether Gantt has enough traction yet so we will hold onto these in the 
>>> incubator for at least another cycle.
>>> 
>>> report/ - There’s interest in creating an oslo.reports library containing 
>>> this code, but we haven’t had time to coordinate with Solly about doing 
>>> that.
>>> 
>>> 
>>> 
>>> Other work:
>>> 
>>> We will continue the work on oslo.concurrency and oslo.log that we started 
>>> during Juno.
>>> 
>>> [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Thomas Goirand
On 11/13/2014 08:32 AM, Richard Jones wrote:
> I note that the Debian JS guidelines* only recommend that libraries
> *should* be minified (though I'm unsure why they even recommend that).

I'm not sure why. Though what *must* be done is that source packages
should at no point include a minified version. Minification should be
done either at build time or at runtime. There are already some issues
within the current XStatic packages that I had to deal with (e.g. removing
these minified versions so they could be uploaded to Debian).

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2014-11-13 00:45:07 -0800:
> A question;
> 
> How is using something like celery in heat vs taskflow in heat (or at least 
> concept [1]) 'to many code change'.
> 
> Both seem like change of similar levels ;-)
> 

I've tried a few times to dive into refactoring some things to use
TaskFlow at a shallow level, and have always gotten confused and
frustrated.

The number of lines changed is probably about the same. But the
massive shift in thinking is not an easy one to make. It may be worth some
thought on providing a shorter bridge to TaskFlow adoption, because I'm
a huge fan of the idea and would _start_ something with it in a heartbeat,
but refactoring things to use it feels really weird to me.

> What was your metric for determining the code change either would have (out 
> of curiosity)?
> 
> Perhaps u should look at [2], although I'm unclear on what the desired 
> functionality is here.
> 
> Do u want the single engine to transfer its work to another engine when it 
> 'goes down'? If so then the jobboard model + zookeper inherently does this.
> 
> Or maybe u want something else? I'm probably confused because u seem to be 
> asking for resource timeouts + recover from engine failure (which seems like 
> a liveness issue and not a resource timeout one), those 2 things seem 
> separable.
> 

I agree with you on this. It is definitely a liveness problem. The
resource timeout isn't something I've seen discussed before. We do have
a stack timeout, and we need to keep on honoring that, but we can do
that with a job that sleeps for the stack timeout if we have a liveness
guarantee that will resurrect the job (with the sleep shortened by the
time since stack-update-time) somewhere else if the original engine
can't complete the job.
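
To make that arithmetic concrete, here is a rough sketch (pure Python; the
attribute names and the 60-minute figure are made up for illustration, not
Heat's real model) of how a resurrected job could shorten its sleep by the
time already elapsed:

# Hedged sketch: re-arm a stack timeout after the job is picked up by
# another engine. Names here are illustrative, not Heat's actual schema.
from datetime import datetime, timedelta

def remaining_timeout(updated_at, timeout_mins, now=None):
    """Seconds left before the stack operation times out (never negative)."""
    now = now or datetime.utcnow()
    deadline = updated_at + timedelta(minutes=timeout_mins)
    return max((deadline - now).total_seconds(), 0.0)

# An operation that started 40 minutes ago with a 60-minute timeout only
# needs to sleep roughly 20 more minutes on the engine that resumed it.
started = datetime.utcnow() - timedelta(minutes=40)
print(remaining_timeout(started, 60))

The point is just that the resumed job sleeps for the remainder, not for the
full stack timeout again.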

> [1] http://docs.openstack.org/developer/taskflow/jobs.html
> 
> [2] 
> http://docs.openstack.org/developer/taskflow/examples.html#jobboard-producer-consumer-simple
> 
> On Nov 13, 2014, at 12:29 AM, Murugan, Visnusaran  
> wrote:
> 
> > Hi all,
> >  
> > Convergence-POC distributes stack operations by sending resource actions 
> > over RPC for any heat-engine to execute. Entire stack lifecycle will be 
> > controlled by worker/observer notifications. This distributed model has its 
> > own advantages and disadvantages.
> >  
> > Any stack operation has a timeout and a single engine will be responsible 
> > for it. If that engine goes down, timeout is lost along with it. So a 
> > traditional way is for other engines to recreate timeout from scratch. Also 
> > a missed resource action notification will be detected only when stack 
> > operation timeout happens.
> >  
> > To overcome this, we will need the following capability:
> > 1.   Resource timeout (can be used for retry)
> > 2.   Recover from engine failure (loss of stack timeout, resource 
> > action notification)
> >  
> >  
> > Suggestion:
> > 1.   Use task queue like celery to host timeouts for both stack and 
> > resource.
> > 2.   Poll database for engine failures and restart timers/ retrigger 
> > resource retry (IMHO: This would be a traditional and weighs heavy)
> > 3.   Migrate heat to use TaskFlow. (Too many code change)
> >  
> > I am not suggesting we use Task Flow. Using celery will have very minimum 
> > code change. (decorate appropriate functions)
> >  
> >  
> > Your thoughts.
> >  
> > -Vishnu
> > IRC: ckmvishnu
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Thomas Goirand
On 11/13/2014 08:05 PM, Radomir Dopieralski wrote:
> On 11/11/14 08:02, Richard Jones wrote:
> 
> [...]
> 
>> There were some discussions around tooling. We're using xstatic to
>> manage 3rd party components, but there's a lot missing from that
>> environment. I hesitate to add supporting xstatic components on to the
>> already large pile of work we have to do, so would recommend we switch
>> to managing those components with bower instead. For reference the list
>> of 3rd party components I used in angboard* (which is really only a
>> teensy fraction of the total application we'd end up with, so this
>> components list is probably reduced):
> 
> [...]
> 
>> Just looking at PyPI, it looks like only a few of those are in xstatic,
>> and those are out of date.
> 
> There is a very good reason why we only have a few external JavaScript
> libraries, and why they are in those versions.
> 
> You see, we are not developing Horizon for our own enjoyment, or to
> install it at our own webserver and be done with it. What we write has
> to be then packaged for different Linux distributions by the packagers.
> Those packagers have very little wiggle room with respect to how they
> can package it all, and what they can include.
> 
> In particular, libraries should get packaged separately, so that they
> can upgrade them and apply security patches and so on. Before we used
> xstatic, they have to go through the sources of Horizon file by file,
> and replace all of our bundled files with symlinks to what is provided
> in their distribution. Obviously that was laborious and introduced bugs
> when the versions of libraries didn't match.
> 
> So now we have the xstatic system. That means, that the libraries are
> explicitly listed, with their minimum and maximum version numbers, and
> it's easy to make a "dummy" xstatic package that just points at some
> other location of the static files. This simplifies the work of the
> packagers.
> 
> But the real advantage of using the xstatic packages is that in order to
> add them to Horizon, you need to add them to the global-requirements
> list, which is being watched and approved by the packagers themselves.
> That means, that when you try to introduce a new library, or a version
> of an old library, that is for some reason problematic for any of the
> distributions (due to licensing issues, due to them needing to remain at
> an older version, etc.), they get to veto it and you have a chance of
> resolving the problem early, not dropping it at the last moment on the
> packagers.
> 
> Going back to the versions of the xstatic packages that we use, they are
> so old for a reason. Those are the newest versions that are available
> with reasonable effort in the distributions for which we make Horizon.
> 
> If you want to replace this system with anything else, please keep in
> contact with the packagers to make sure that the resulting process makes
> sense and is acceptable for them.

Thanks a lot for all you wrote above. I 100% agree with it, and you
wrote it better than I would have. Also, I'd like to thank you for the
work we did together during the Juno cycle. Interactions and
communication on IRC were great. I just hope this continues for Kilo, along
the lines of what you wrote above.

Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC election by the numbers

2014-11-13 Thread Thierry Carrez
Zane Bitter wrote:
> On 01/11/14 16:31, Eoghan Glynn wrote:
>>   1. *make a minor concession to proportionality* - while keeping the
>>  focus on consensus, e.g. by adopting the proportional Condorcet
>>  variant.
> 
> It would be interesting to see the analysis again, but in the past this
> proved to not make much difference.

For the record, I just ran the ballots in CIVS "proportional mode" and
obtained the same set of winners:

http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_88cae988dff29be6

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] kilo graduation plans

2014-11-13 Thread Rodrigo Duarte
Hi Doug,

I'm going to write the spec regarding the policy graduation; it will be
placed in the keystone-specs repository. I was wondering if someone has
examples of such specs so we can cover all the necessary points.

On Thu, Nov 13, 2014 at 10:34 AM, Doug Hellmann 
wrote:

>
> On Nov 13, 2014, at 8:31 AM, Dmitry Tantsur  wrote:
>
> > On 11/13/2014 01:54 PM, Doug Hellmann wrote:
> >>
> >> On Nov 13, 2014, at 3:52 AM, Dmitry Tantsur 
> wrote:
> >>
> >>> On 11/12/2014 08:06 PM, Doug Hellmann wrote:
>  During our “Graduation Schedule” summit session we worked through the
> list of modules remaining in the incubator. Our notes are in the
> etherpad [1], but as part of the "Write it Down” theme for Oslo this cycle
> I am also posting a summary of the outcome here on the mailing list for
> wider distribution. Let me know if you remembered the outcome for any of
> these modules differently than what I have written below.
> 
>  Doug
> 
> 
> 
>  Deleted or deprecated modules:
> 
>  funcutils.py - This was present only for python 2.6 support, but it
> is no longer used in the applications. We are keeping it in the stable/juno
> branch of the incubator, and removing it from master (
> https://review.openstack.org/130092)
> 
>  hooks.py - This is not being used anywhere, so we are removing it. (
> https://review.openstack.org/#/c/125781/)
> 
>  quota.py - A new quota management system is being created (
> https://etherpad.openstack.org/p/kilo-oslo-common-quota-library) and
> should replace this, so we will keep it in the incubator for now but
> deprecate it.
> 
>  crypto/utils.py - We agreed to mark this as deprecated and encourage
> the use of Barbican or cryptography.py (
> https://review.openstack.org/134020)
> 
>  cache/ - Morgan is going to be working on a new oslo.cache library as
> a front-end for dogpile, so this is also deprecated (
> https://review.openstack.org/134021)
> 
>  apiclient/ - With the SDK project picking up steam, we felt it was
> safe to deprecate this code as well (https://review.openstack.org/134024).
> 
>  xmlutils.py - This module was used to provide a security fix for some
> XML modules that have since been updated directly. It was removed. (
> https://review.openstack.org/#/c/125021/)
> 
> 
> 
>  Graduating:
> 
>  oslo.context:
>  - Dims is driving this
>  -
> https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
>  - includes:
> context.py
> 
>  oslo.service:
>  - Sachi is driving this
>  -
> https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-service
>  - includes:
> eventlet_backdoor.py
> loopingcall.py
> periodic_task.py
> >>> By the way, right now I'm looking into updating this code to be able to
> run tasks on a thread pool, not only in one thread (quite a problem for
> Ironic). Does it somehow interfere with the graduation? Any deadlines or
> something?
> >>
> >> Feature development on code declared ready for graduation is basically
> frozen until the new library is created. You should plan on doing that work
> in the new oslo.service repository, which should be showing up soon. And
> the feature you describe sounds like something for which we would want a
> spec written, so please consider filing one when you have some of the
> details worked out.
> > Sure, right now I'm experimenting in Ironic tree to figure out how it
> really works. There's a single oslo-specs repo for the whole oslo, right?
>
> Yes, that’s right openstack/oslo-specs. Having a branch somewhere as a
> reference would be great for the spec reviewers, so that seems like a good
> way to start.
>
> Doug
>
> >
> >>
> >>>
> request_utils.py
> service.py
> sslutils.py
> systemd.py
> threadgroup.py
> 
>  oslo.utils:
>  - We need to look into how to preserve the git history as we import
> these modules.
>  - includes:
> fileutils.py
> versionutils.py
> 
> 
> 
>  Remaining untouched:
> 
>  scheduler/ - Gantt probably makes this code obsolete, but it isn’t
> clear whether Gantt has enough traction yet so we will hold onto these in
> the incubator for at least another cycle.
> 
>  report/ - There’s interest in creating an oslo.reports library
> containing this code, but we haven’t had time to coordinate with Solly
> about doing that.
> 
> 
> 
>  Other work:
> 
>  We will continue the work on oslo.concurrency and oslo.log that we
> started during Juno.
> 
>  [1] https://etherpad.openstack.org/p/kilo-oslo-library-proposals
>  ___
>  OpenStack-dev mailing list
>  OpenStack-dev@lists.openstack.org
>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> >>>
> >>>
> >>> __

Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Thomas Goirand
On 11/13/2014 12:13 PM, Richard Jones wrote:
> the npm stuff is all tool chain; tools
> that I believe should be packaged as such by packagers.

npm is already in Debian:
https://packages.debian.org/sid/npm

However, just like we can't use CPAN, "pear install", "pip install" and
such when building or installing packages, we won't be able to use NPM.
This means every single dependency that isn't in Debian will need to be
packaged.

> Horizon is an incredibly complex application. Just so we're all on the
> same page, the components installed by bower for angboard are:
> 
> angular
>   Because writing an application the size of Horizon without it would be
> madness :)
> angular-route
>   Provides structure to the application through URL routing.
> angular-cookies
>   Provides management of browser cookies in a way that integrates well
> with angular.
> angular-sanitize
>   Allows direct embedding of HTML into angular templates, with sanitization.
> json3
>   Compatibility for older browsers so JSON works.
> es5-shim
>   Compatibility for older browsers so Javascript (ECMAScript 5) works.
> angular-smart-table
>   Table management (population, sorting, filtering, pagination, etc)
> angular-local-storage
>Browser local storage with cookie fallback, integrated with angular
> mechanisms.
> angular-bootstrap
>Extensions to angular that leverage bootstrap (modal popups, tabbed
> displays, ...)
> font-awesome
>Additional glyphs to use in the user interface (warning symbol, info
> symbol, ...)
> boot
>Bootstrap for CSS styling (this is the dependency that brings in
> jquery and requirejs)
> underscore
>Javascript utility library providing a ton of features Javascript
> lacks but Python programmers expect.
> ng-websocket
>Angular-friendly interface to using websockets
> angular-translate
>Support for localization in angular using message catalogs generated
> by gettext/transifex.
> angular-mocks
>Mocking support for unit testing angular code
> angular-scenario
>More support for angular unit tests
> 
> Additionally, angboard vendors term.js because it was very poorly
> packaged in the bower ecosystem. +1 for xstatic there I guess :)
> 
> So those are the components we needed to create the prototype in a few
> weeks. Not using them would have added months (or possibly years) to the
> development time. Creating an application of the scale of Horizon
> without leveraging all that existing work would be like developing
> OpenStack while barring all use of Python 3rd-party packages.

I have no problem with adding dependencies. That's how things work, for
sure, I just want to make sure it doesn't become hell, with so many
components inter-depending on 100s of them, which would become not
manageable. If we define clear boundaries, then fine! The above seems
reasonable anyway.

Though did you list the dependencies of the above?

Also, if the Horizon project starts using something like NPM (which
again, is already available in Debian, so it has my preference), will we
at least be able to control what version gets in, just like with pip?
Because that's a huge concern for me, and this has been very well and
carefully addressed during the Juno cycle. I would very much appreciate
if the same kind of care was taken again during the Kilo cycle, whatever
path we take. How do I use npm by the way? Any pointer?

Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Jastrzebski, Michal
By observer I mean a process which will actually notify about stack timeouts. 
Maybe it was a poor choice of words. Anyway, something will need to check which 
stacks have timed out, and that's a new single point of failure.

> -Original Message-
> From: Zane Bitter [mailto:zbit...@redhat.com]
> Sent: Thursday, November 13, 2014 3:49 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Heat] Using Job Queues for timeout ops
> 
> On 13/11/14 09:31, Jastrzebski, Michal wrote:
> > Guys, I don't think we want to get into this cluster management mud.
> > You say let's make observer...and what if observer dies? Do we do
> > observer to observer? And then there is split brain. I'm observer, I've lost
> connection to worker. Should I restart a worker?
> > Maybe I'm one who lost connection to the rest of the world? Should I
> > resume task and risk duplicate workload?
> 
> I think you're misinterpreting what we mean by "observer". See
> https://wiki.openstack.org/wiki/Heat/ConvergenceDesign
> 
> - ZB
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] python-troveclient keystone v3 support breaking the world

2014-11-13 Thread Sean Dague
On 11/13/2014 07:14 AM, Ihar Hrachyshka wrote:
> On 12/11/14 15:17, Sean Dague wrote:
> 
>> 1) just delete the trove exercise so we can move forward - 
>> https://review.openstack.org/#/c/133930 - that will need to be 
>> backported as well.
> 
> The patch is merged. Do we still need to backport it bearing in mind
> that client revert [1] was merged? I guess no, but better check.
> 
> Also, since trove client is back in shape, should we revert your
> devstack patch?
> 
> [1]: https://review.openstack.org/#/c/133958/

Honestly, devstack exercises are deprecated. I'd rather just keep it
out. They tend to be things that projects write once, then rot, and I
end up deleting or disabling them 6 months later.

Service testing should be in tempest, where we're in an environment
that's a bit more controlled, and has lots better debug information when
things go wrong.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-11-13 05:54:03 -0800:
> On 13/11/14 03:29, Murugan, Visnusaran wrote:
> > Hi all,
> >
> > Convergence-POC distributes stack operations by sending resource actions
> > over RPC for any heat-engine to execute. Entire stack lifecycle will be
> > controlled by worker/observer notifications. This distributed model has
> > its own advantages and disadvantages.
> >
> > Any stack operation has a timeout and a single engine will be
> > responsible for it. If that engine goes down, timeout is lost along with
> > it. So a traditional way is for other engines to recreate timeout from
> > scratch. Also a missed resource action notification will be detected
> > only when stack operation timeout happens.
> >
> > To overcome this, we will need the following capability:
> >
> > 1.Resource timeout (can be used for retry)
> 
> I don't believe this is strictly needed for phase 1 (essentially we 
> don't have it now, so nothing gets worse).
> 

We do have a stack timeout, and it stands to reason that we won't have a
single box with a timeout greenthread after this, so a strategy is
needed.

> For phase 2, yes, we'll want it. One thing we haven't discussed much is 
> that if we used Zaqar for this then the observer could claim a message 
> but not acknowledge it until it had processed it, so we could have 
> guaranteed delivery.
>

Frankly, if oslo.messaging doesn't support reliable delivery then we
need to add it. Zaqar should have nothing to do with this and is, IMO, a
poor choice at this stage, though I like the idea of using it in the
future so that we can make Heat more of an outside-the-cloud app.

> > 2.Recover from engine failure (loss of stack timeout, resource action
> > notification)
> >
> > Suggestion:
> >
> > 1.Use task queue like celery to host timeouts for both stack and resource.
> 
> I believe Celery is more or less a non-starter as an OpenStack 
> dependency because it uses Kombu directly to talk to the queue, vs. 
> oslo.messaging which is an abstraction layer over Kombu, Qpid, ZeroMQ 
> and maybe others in the future. i.e. requiring Celery means that some 
> users would be forced to install Rabbit for the first time.
>
> One option would be to fork Celery and replace Kombu with oslo.messaging 
> as its abstraction layer. Good luck getting that maintained though, 
> since Celery _invented_ Kombu to be its abstraction layer.
> 

A slight side point here: Kombu supports Qpid and ZeroMQ. Oslo.messaging
is more about having a unified API than a set of magic backends. It
actually boggles my mind why we didn't just use kombu (cue 20 reactions
with people saying it wasn't EXACTLY right), but I think we're committed
to oslo.messaging now. Anyway, celery would need no such refactor, as
kombu would be able to access the same bus as everything else just fine.
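
For what it's worth, a rough sketch of what the "decorate appropriate
functions" suggestion (option 1 quoted above) could look like with Celery;
the broker URL, task body and helper names are placeholders rather than Heat
code, and this says nothing about the Kombu vs. oslo.messaging question:

# Hedged sketch only: hosting a stack timeout as a Celery task.
from celery import Celery

app = Celery('heat_timeouts', broker='amqp://guest@localhost//')

@app.task
def stack_timeout(stack_id):
    # A real engine would mark the stack FAILED here if the operation
    # has not already completed; this placeholder just reports it.
    print('stack %s timed out' % stack_id)

def schedule_timeout(stack_id, timeout_secs):
    # countdown= asks the broker to deliver the task after the timeout,
    # so whichever worker (engine) is alive at that point runs it.
    return stack_timeout.apply_async(args=(stack_id,), countdown=timeout_secs)

The deliver-after-countdown part is what lets the timeout survive the death
of the engine that scheduled it, assuming the broker itself is durable.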

> > 2.Poll database for engine failures and restart timers/ retrigger
> > resource retry (IMHO: This would be a traditional and weighs heavy)
> >
> > 3.Migrate heat to use TaskFlow. (Too many code change)
> 
> If it's just handling timed triggers (maybe this is closer to #2) and 
> not migrating the whole code base, then I don't see why it would be a 
> big change (or even a change at all - it's basically new functionality). 
> I'm not sure if TaskFlow has something like this already. If not we 
> could also look at what Mistral is doing with timed tasks and see if we 
> could spin some of it out into an Oslo library.
> 

I feel like it boils down to something running periodically checking for
scheduled tasks that are due to run but have not run yet. I wonder if we
can actually look at Ironic for how they do this, because Ironic polls
power state of machines constantly, and uses a hash ring to make sure
only one conductor is polling any one machine at a time. If we broke
stacks up into a hash ring like that for the purpose of singleton tasks
like timeout checking, that might work out nicely.
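
Roughly, that idea in a self-contained consistent-hash sketch (plain Python,
not Ironic's actual hash ring code; the engine and stack names are invented):
each engine claims the stacks whose hash lands on it, so exactly one live
engine checks any given stack's timeout.

# Hedged sketch of sharding singleton tasks (e.g. timeout checks) across
# engines with a consistent hash ring.
import bisect
import hashlib

class HashRing(object):
    def __init__(self, engines, replicas=64):
        # Several virtual points per engine spread the load more evenly.
        self._ring = sorted(
            (self._hash('%s-%d' % (engine, i)), engine)
            for engine in engines for i in range(replicas))
        self._keys = [key for key, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode('utf-8')).hexdigest(), 16)

    def owner(self, stack_id):
        # First ring point at or after the stack's hash, wrapping around.
        idx = bisect.bisect(self._keys, self._hash(stack_id)) % len(self._keys)
        return self._ring[idx][1]

ring = HashRing(['engine-1', 'engine-2', 'engine-3'])
# Each engine only polls timeouts for the stacks it owns.
my_stacks = [s for s in ('stack-a', 'stack-b', 'stack-c')
             if ring.owner(s) == 'engine-2']
print(my_stacks)

When the membership changes (an engine dies or joins), only the stacks hashed
near its points move to a new owner, which is what keeps rebalancing cheap.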

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] the future of angularjs development in Horizon

2014-11-13 Thread Martin Geisler
Radomir Dopieralski  writes:

> On 11/11/14 08:02, Richard Jones wrote:
>
> [...]
>
>> There were some discussions around tooling. We're using xstatic to
>> manage 3rd party components, but there's a lot missing from that
>> environment. I hesitate to add supporting xstatic components on to
>> the already large pile of work we have to do, so would recommend we
>> switch to managing those components with bower instead. For reference
>> the list of 3rd party components I used in angboard* (which is really
>> only a teensy fraction of the total application we'd end up with, so
>> this components list is probably reduced):
>
> [...]
>
>> Just looking at PyPI, it looks like only a few of those are in xstatic,
>> and those are out of date.
>
> There is a very good reason why we only have a few external JavaScript
> libraries, and why they are in those versions.
>
> You see, we are not developing Horizon for our own enjoyment, or to
> install it at our own webserver and be done with it. What we write has
> to be then packaged for different Linux distributions by the
> packagers. [...]

Maybe a silly question, but why insist on this? Why would you insist on
installing a JavaScript based application using your package manager?

I'm a huge fan of package managers and typically refuse to install
anything globally if it doesn't come as a package.

However, the whole JavaScript ecosystem seems to be centered around the
idea of doing local installations. That means that you no longer need
the package manager to install the software -- you only need a package
manager to install the base system (NodeJs and npm for JavaScript).

Notice that Python has been moving rapidly in the same direction for
years: you only need Python and pip to bootstrap yourself. After getting
used to virtualenv, I've mostly stopped installing Python modules
globally and that is how the JavaScript world expects you to work too.
(Come to think of it, the same applies to some extent to Haskell and
Emacs where there also exist nice package managers that'll pull in and
manage dependencies for you.)

So maybe the Horizon package should be an installer package like the
ones that download fonts or Adobe?

That package would get the right version of node, then run the
npm and bower commands to download the rest, plus (importantly and much
appreciated) put the files in a sensible location and give them good
permissions.

-- 
Martin Geisler

http://google.com/+MartinGeisler


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VMware networking support

2014-11-13 Thread Romil Gupta
Fine for me :)

On Thu, Nov 13, 2014 at 6:08 PM, Gary Kotton  wrote:

>  Hi,
> A few months back we started to work on a umbrella spec for Vmware
> networking support (https://review.openstack.org/#/c/105369). There are a
> number of different proposals for a number of different use cases. In
> addition to providing one another with an update of our progress we need to
> discuss the following challenges:
>
>- At the summit there was talk about splitting out vendor code from
>the neutron code base. The aforementioned specs are not being approved
>until we have decided what we as a community want/need. We need to
>understand how we can continue our efforts and not be blocked or hindered
>by this debate.
>- CI updates – in order to provide a new plugin we are required to
>provide CI (yes, this is written in stone and in some cases marble)
>- Additional support may be required in the following:
>   - Nova – for example Neutron may be exposing extensions or
>   functionality that requires Nova integrations
>   - Devstack – In order to get CI up and running we need devstack
>   support
>
> As a step forwards I would like to suggest that we meeting at
> #openstack-vmware channel on Tuesday at 15:00 UTC. Is this ok with everyone?
> Thanks
> Gary
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
*Regards,*

*Romil *
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Zane Bitter

On 13/11/14 09:31, Jastrzebski, Michal wrote:

Guys, I don't think we want to get into this cluster management mud. You say 
let's
make observer...and what if observer dies? Do we do observer to observer? And 
then
there is split brain. I'm observer, I've lost connection to worker. Should I 
restart a worker?
Maybe I'm one who lost connection to the rest of the world? Should I resume 
task and risk
duplicate workload?


I think you're misinterpreting what we mean by "observer". See 
https://wiki.openstack.org/wiki/Heat/ConvergenceDesign


- ZB

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Current development focus

2014-11-13 Thread Mike Scherbakov
Fuelers,
among the bugs we have to work on, please go over the 5.1.1 milestone first.
The plan was to declare a code freeze today, but we still have a
number of bugs open there. We need your help here.

Also, at
https://review.openstack.org/#/c/130717/
we have the release notes draft. It also needs attention.

Thanks,

On Wed, Nov 12, 2014 at 12:17 PM, Mike Scherbakov 
wrote:

> Folks,
> as we all getting hurry with features landing before Feature Freeze
> deadline, we destabilize master. Right after FF, we must be focused on
> stability, and bug squashing.
> Now we are approaching Soft Code Freeze [1], which is planned for Nov
> 13th. Master is still not very stable, and we are getting intermittent
> build failures.
>
> Let's focus on bug squashing, and first of all on critical bugs which
> are known to be causing BVT test failures. Please postpone, if possible,
> other action items, such as research for new features, until we are
> known to be in good shape with release candidates.
>
> [1] https://wiki.openstack.org/wiki/Fuel/6.0_Release_Schedule
>
> Thanks,
> --
> Mike Scherbakov
> #mihgen
>
>


-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()

2014-11-13 Thread Dan Smith
> Can we guarantee that the lifetime of a context object in conductor is
> a single rpc call, and that the object cannot be referenced from any
> other thread?

Yes, without a doubt.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Consistency, efficiency, and safety of NovaObject.save()

2014-11-13 Thread Matthew Booth

On 13/11/14 14:26, Dan Smith wrote:
>> On 12/11/14 19:39, Mike Bayer wrote:
>>> lets keep in mind my everyone-likes-it-so-far proposal for
>>> reader() and writer(): https://review.openstack.org/#/c/125181/
>>> (this is where it’s going to go as nobody has -1’ed it, so in
>>> absence of any “no way!” votes I have to assume this is what
>>> we’re going with).
>> 
>> Dan,
>> 
>> Note that this model, as I understand it, would conflict with
>> storing context in NovaObject.
> 
> Why do you think that? As you pointed out, the above model is
> purely SQLA code, which is run by an object, long after the context
> has been resolved, the call has been remoted, etc.

Can we guarantee that the lifetime of a context object in conductor is
a single rpc call, and that the object cannot be referenced from any
other thread? Seems safer just to pass it around.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >