Re: [openstack-dev] [Neutron][IPv6] Three SLAAC and DHCPv6 related blueprints

2013-12-19 Thread Ian Wells
Xuhan, check the other thread - would 1500UTC suit?


On 19 December 2013 01:09, Xuhan Peng pengxu...@gmail.com wrote:

 Shixiong and guys,

 The sub-team meeting is too early for the China IBM folks to join, although we
 would very much like to participate in the discussion. Any chance to rotate
 the time so we can comment?

 Thanks, Xuhan


 On Thursday, December 19, 2013, Shixiong Shang wrote:

 Hi, Ian:

 I agree with you on the point that the way we implement it should be app
 agnostic. In addition, it should cover both the CLI and Dashboard, so the
 system behavior is consistent for end users.

 The keywords are just one of many ways to implement the concept. It is
 based on the reality that dnsmasq is the only driver available to the
 community today. At the end of the day, the input from the customer should be
 translated to one of those mode keywords. It doesn't imply the same
 constants have to be used as part of the CLI or Dashboard.

 Randy and I had a lengthy discussion/debate about this topic today. We
 have a straw-man proposal and will share it with the team tomorrow.

 That being said, what concerns me the most at this moment is that we are not
 on the same page. I hope we can reach consensus during tomorrow's sub-team
 meeting. If you cannot make it, then please set up a separate meeting to
 invite key stakeholders so we have a chance to sort it out.

 Shixiong




 On Dec 18, 2013, at 8:25 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 On 18 December 2013 14:10, Shixiong Shang 
 sparkofwisdom.cl...@gmail.comwrote:

 Hi, Ian:

 I wouldn't say the intent here is to replace the dnsmasq-mode-keyword BP.
 Instead, I was trying to leverage and enhance those definitions so that when
 dnsmasq is launched, it knows which mode it should run in.

 That being said, I see the value of your points and I also had lengthy
 discussion with Randy regarding this. We did realize that the keyword
 itself may not be sufficient to properly configure dnsmasq.


 I think the point is that the attribute on whatever object (subnet or
 router) that defines the behaviour should define the behaviour, in
 precisely the terms you're talking about, and then we should find the
 dnsmasq options to suit.  Talking to Sean, he's good with this too, so
 we're all working to the same ends and it's just a matter of getting code
 in.
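To make the mode-keyword idea concrete, here is a minimal, hypothetical sketch of translating a subnet's IPv6 mode attribute into dnsmasq arguments. The mode names and the helper are illustrative assumptions, not an agreed Neutron API; the ra-only / ra-stateless / stateful range forms are real dnsmasq features.

```python
# Hypothetical sketch: subnet IPv6 mode attribute -> dnsmasq arguments.
DNSMASQ_OPTS = {
    # SLAAC only: dnsmasq sends RAs, no DHCPv6 service at all
    'slaac': ['--enable-ra',
              '--dhcp-range=::,constructor:%(iface)s,ra-only'],
    # SLAAC for addresses, stateless DHCPv6 for other options (DNS, etc.)
    'dhcpv6-stateless': ['--enable-ra',
                         '--dhcp-range=::,constructor:%(iface)s,ra-stateless'],
    # stateful DHCPv6: dnsmasq assigns addresses from the range itself
    'dhcpv6-stateful': ['--enable-ra',
                        '--dhcp-range=%(range)s,64'],
}

def dnsmasq_args(mode, iface='tap0', addr_range='2001:db8::10,2001:db8::ff'):
    """Expand the option templates for the chosen mode."""
    params = {'iface': iface, 'range': addr_range}
    return [opt % params for opt in DNSMASQ_OPTS[mode]]
```

Whatever constants the CLI or Dashboard expose, they would ultimately resolve to one of these mode entries, which is the point made above.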


 Let us discuss that on Thursday’s IRC meeting.


 Not sure if I'll be available or not this Thursday, unfortunately.  I'll
 try to attend but I can't make promises.

 Randy and I had a quick glance over your document. Much of it parallels
 the work we did on our POC last summer, and it is now being addressed across
 multiple BPs being implemented by ourselves or by Sean Collins and the IBM
 team. I will take a closer look and provide my comments.


 That's great.  I'm not wedded to the details in there, I'm actually more
 interested that we've covered everything.

 If you have blueprint references, add them as comments - the
 ipv6-feature-parity BP could do with work and if we get the links together
 in one place we can update it.
 --
 Ian.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Blueprint Bind dnsmasq in qrouter- namespace

2013-12-19 Thread Ian Wells
Per the discussions this evening, we did identify a reason why you might
need a dhcp namespace for v6 - because networks don't actually have to have
routers.  It's clear you need an agent in the router namespace for RAs and
another one in the DHCP namespace for when the network's not connected to a
router, though.

We've not pinned down all the API details yet, but the plan is to implement
an RA agent first, responding on the subnets that the router is attached to
(which is very close to what Randy and Shixiong have already done).
-- 
Ian.


On 19 December 2013 14:01, Randy Tuttle randy.m.tut...@gmail.com wrote:

 First, dnsmasq is not being moved. Instead, it's a different instance
 for the attached subnet in the qrouter namespace. If it's not in the
 qrouter namespace, the default gateway (the local router interface) will be
 the interface of the qdhcp namespace. That will create a blackhole for
 traffic from the VM. As you know, routing tables and NAT all live in the
 qrouter namespace. So we want the RA to advertise the local interface in
 the qrouter namespace as the default gateway.
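As a rough illustration of the point above (hypothetical names only, not the actual agent code): running dnsmasq inside the qrouter- namespace on the qr- interface means the RA's link-local source address, and hence the default gateway the VM installs, is the router's own.

```python
# Illustrative sketch: build the command to run dnsmasq as an RA sender
# inside the router's network namespace. Names are hypothetical.
import subprocess

def ra_dnsmasq_cmd(router_id, qr_iface):
    """Command to send RAs from the qrouter- namespace's qr- interface."""
    return ['ip', 'netns', 'exec', 'qrouter-%s' % router_id,
            'dnsmasq', '--enable-ra',
            '--interface=%s' % qr_iface,
            '--dhcp-range=::,constructor:%s,ra-only' % qr_iface]

def launch(router_id, qr_iface):
    # Split out from command construction so the command can be inspected
    # without root privileges; actually starting dnsmasq requires root.
    subprocess.check_call(ra_dnsmasq_cmd(router_id, qr_iface))
```

Were the same dnsmasq started in the qdhcp- namespace instead, the advertised gateway would be the DHCP port's link-local address, producing the blackhole described above.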

 Randy

 Sent from my iPhone

 On Dec 19, 2013, at 4:05 AM, Xuhan Peng pengxu...@gmail.com wrote:

 I am reading through the blueprint created by Randy to bind dnsmasq into
 qrouter- namespace:


 https://blueprints.launchpad.net/neutron/+spec/dnsmasq-bind-into-qrouter-namespace

 I don't think I can follow the reason why we need to change the namespace
 which contains the dnsmasq process, and the device it listens on, from qdhcp- to
 qrouter-. Why does the original namespace design conflict with the Router
 Advertisements sent by dnsmasq for SLAAC?

 From the attached POC result link, the reason is stated as:

 Even if the dnsmasq process could send Router Advertisement, the default
 gateway would bind to its own link-local address in the qdhcp- namespace.
 As a result, traffic leaving tenant network will be drawn to DHCP
 interface, instead of gateway port on router. That is not desirable! 

 Can Randy or Shixiong explain this more? Thanks!

 Xuhan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][nova] oslo common.service vs. screen and devstack

2013-12-19 Thread Sean Dague
So a few people had been reporting recently that unstack no longer stops
nova processes, which I only got around to looking at today. It turns
out the new common.service stack from oslo takes SIGHUP and treats it as
a restart. Which isn't wrong, but is new, and is incompatible with
screen (the way we use it). Because of the way we use screen's -X commands,
the resulting -X quit sends SIGHUP to the child processes.

So the question is, are we definitely in a state now where nova services
can and do want to support SIGHUP as restart?

If so, is there interest in being able to disable that behavior at start
time, so we can continue with a screen based approach as well?

If not, we'll need to figure out another way to approach the shutdown in
devstack. Which is fine, just work that wasn't expected.
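A minimal sketch of the behavior in question, with a hypothetical opt-out flag of the kind suggested above (this is not the oslo implementation):

```python
# Sketch only: SIGHUP-as-restart with an opt-out, so screen-based tooling
# whose "quit" delivers SIGHUP to children can still stop the process.
import signal

class Service(object):
    def __init__(self, restart_on_sighup=True):
        # Hypothetical flag: when False, SIGHUP keeps its default behavior
        # (terminate), which is what a screen-based unstack expects.
        self.restarts = 0
        handler = self._restart if restart_on_sighup else signal.SIG_DFL
        signal.signal(signal.SIGHUP, handler)

    def _restart(self, signo, frame):
        # In a real service this would re-read config and respawn workers
        # rather than exiting; here we just count the restart request.
        self.restarts += 1
```

With such a flag, devstack could disable restart-on-SIGHUP at start time and keep the screen-based approach working.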

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] additional core review criteria - recent Jenkins pass - otherwise you break the gates

2013-12-19 Thread Sean Dague
https://review.openstack.org/#/c/51793/ is a good instance of a behavior
I've seen a lot recently, where someone approves a patch that last
ran CI a month ago (the last Jenkins pass on this patch was Nov 19th).

If you come across a patch like that, as a core reviewer, please
run "recheck no bug" to make sure it actually passes.

This patch's unit tests don't. And what happens then is it becomes a
wrecking ball.

It fails and gets pulled to the side, causing a reset for anything
behind it. In this case, with unit tests failing, that means a 20 - 30
minute delay for everything behind it. If anything in front of it fails,
zuul puts it back into rotation, because the change in front of it that
failed *might have been the problem*. Then it fails again and resets the
queue behind it. Another 20 - 30 minute delay.

If there are lots of other races in the gate, and it's a long queue, a
change like this could add *hours* of gate delay.
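A back-of-envelope illustration of the cost, using the 20 - 30 minute reset figure above (the retry count is made up):

```python
# Each reset caused by the stale patch replays jobs for everything behind it.
# 25 minutes is the midpoint of the 20 - 30 minute range quoted above.
def gate_delay_minutes(resets, reset_cost=25):
    return resets * reset_cost

# A stale patch retried 5 times on a flaky day:
print(gate_delay_minutes(5))  # 125 minutes of added delay
```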

So please look for recent passes before +Aing anything.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-12-19 Thread Ian Wells
On 19 December 2013 15:15, John Garbutt j...@johngarbutt.com wrote:

  Note, I don't see the person who boots the server ever seeing the
 pci-flavor, only understanding the server flavor.
   [IrenaB] I am not sure that elaborating PCI device request into server
 flavor is the right approach for the PCI pass-through network case. vNIC by
 its nature is something dynamic that can be plugged or unplugged after VM
 boot. server flavor is  quite static.

 I was really just meaning the server flavor specify the type of NIC to
 attach.

 The existing port specs, etc, define how many nics, and you can hot
 plug as normal, just the VIF plugger code is told by the server flavor
 if it is able to PCI passthrough, and which devices it can pick from.
 The idea being combined with the neturon network-id you know what to
 plug.

 The more I talk about this approach the more I hate it :(


The thinking we had here is that nova would provide a VIF or a physical NIC
for each attachment.  Precisely what goes on here is a bit up for grabs,
but I would think:

- Nova specifies the type at port-update, making it obvious to Neutron that it's
getting a virtual interface or a passthrough NIC (and probably the type of that
NIC, and likely also the path, so that Neutron can distinguish between NICs if
it needs to know the specific attachment port).
- Neutron does its magic on the network if it has any to do, like faffing(*)
with switches.
- Neutron selects the VIF/NIC plugging type that Nova should use and, in the
case that the NIC is a VF and it wants to set an encap, returns that encap
back to Nova.
- Nova plugs it in and sets it up (in libvirt, this is generally in the XML;
XenAPI and others are up for grabs).
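To make the exchange concrete, a toy sketch of what the port-update negotiation could look like. The field names ('capabilities', 'encap') and the specific VIF type strings are illustrative assumptions, not an agreed API:

```python
# Toy sketch of the nova/neutron negotiation outlined above.
def port_update(port, requested):
    # 'requested' is what nova sends at port-update: what it is able to
    # plug for this attachment (field names are hypothetical).
    if 'pci-vf' in requested.get('capabilities', []):
        # Neutron configures the fabric as needed, then tells nova how to
        # plug the VF and returns the encap it chose.
        port['binding:vif_type'] = 'hw_veb'
        port['encap'] = {'vlan': 100}  # hypothetical value neutron picked
    else:
        port['binding:vif_type'] = 'ovs'
    return port
```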

  We might also want a nic-flavor that tells neutron information it
 requires, but lets get to that later...
  [IrenaB] nic flavor is definitely something that we need in order to
 choose if  high performance (PCI pass-through) or virtio (i.e. OVS) nic
 will be created.

 Well, I think its the right way go. Rather than overloading the server
 flavor with hints about which PCI devices you could use.


The issue here is the additional attach. Since passthrough that isn't
NICs (like crypto cards) would almost certainly be specified in the
flavor, if you did the same for NICs then you would have a preallocated
pool of NICs from which to draw. The flavor is also all you need to know
for billing, and the flavor lets you schedule. If you have it on the list
of NICs, you have to work out how many physical NICs you need before you
schedule (admittedly not hard, but not in keeping), and if you then did a
subsequent attach it could fail because you have no more NICs on the
machine you scheduled to - and at this point you're kind of stuck.

Also with the former, if you've run out of NICs, the already-extant resize
call would allow you to pick a flavor with more NICs, and you can then
reschedule the VM to wherever resources are available to fulfil
the new request.

One question here is whether Neutron should become a provider of billed
resources (specifically passthrough NICs) in the same way as Cinder is of
volumes - something we'd not discussed to date; we've largely worked on the
assumption that NICs are like any other passthrough resource, just one
where, once it's allocated out, Neutron can work magic with it.
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party-testing] Reminder: Meeting tomorrow

2013-12-19 Thread Salvatore Orlando
Hi,

I'm sorry I could not make it to the meeting.
However, I can clearly see the progress being made from gerrit!

One thing which might be worth mentioning is that some of the new jobs are
already voting.
However, in some cases the logs are not accessible, and in other
cases the job seems to be a work in progress. For instance, I've seen some
jobs just launching devstack; nevertheless the job still votes.

I think publicly available logs and execution of the tempest smoke suite, or
a reasonable subset of it, should be a required condition for voting.
Is this something which was also discussed at today's meeting?

Salvatore


On 19 December 2013 15:12, Kyle Mestery mest...@siliconloons.com wrote:

 Apologies folks, I meant 2200 UTC Thursday. We'll still do the
 meeting today.

 On Dec 18, 2013, at 4:40 PM, Don Kehn dek...@gmail.com wrote:

  Wouldn't 2200 UTC be in about 20 mins?
 
 
  On Wed, Dec 18, 2013 at 3:32 PM, Itsuro ODA o...@valinux.co.jp wrote:
  Hi,
 
  It seems the meeting was not held on 2200 UTC on Wednesday (today).
 
  Do you mean 2200 UTC on Thursday ?
 
  Thanks.
 
  On Thu, 12 Dec 2013 11:43:03 -0600
  Kyle Mestery mest...@siliconloons.com wrote:
 
   Hi everyone:
  
   We had a meeting around Neutron Third-Party testing today on IRC.
   The logs are available here [1]. We plan to host another meeting
   next week, and it will be at 2200 UTC on Wednesday in the
   #openstack-meeting-alt channel on IRC. Please attend and update
   the etherpad [2] with any items relevant to you before then.
  
   Thanks again!
   Kyle
  
   [1]
 http://eavesdrop.openstack.org/meetings/networking_third_party_testing/2013/
   [2] https://etherpad.openstack.org/p/multi-node-neutron-tempest
 
  On Wed, 18 Dec 2013 15:10:46 -0600
  Kyle Mestery mest...@siliconloons.com wrote:
 
   Just a reminder, we'll be meeting at 2200 UTC on
 #openstack-meeting-alt.
   We'll be looking at this etherpad [1] again, and continuing
 discussions from
   last week.
  
   Thanks!
   Kyle
  
   [1] https://etherpad.openstack.org/p/multi-node-neutron-tempest
  
 
  --
  Itsuro ODA o...@valinux.co.jp
 
 
 
 
 
  --
  
  Don Kehn
  303-442-0060

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

2013-12-19 Thread Kieran Spear
Another +1 for separate repos. Horizon's core focus should remain on
the integrated projects, but we also need to prepare better for when
incubated projects graduate, so their inclusion is less disruptive.

I also like Gabriel's suggestion of a non-gating CI job to catch any
integration issues between TuskarUI and Horizon.


Kieran


On 20 December 2013 03:29, Lyle, David david.l...@hp.com wrote:
 So after a lot of consideration, my opinion is the two code bases should stay 
 in separate repos under the Horizon Program, for a few reasons:
 -Adding a large chunk of code for an incubated project is likely going to 
 cause the Horizon delivery some grief due to dependencies and packaging 
 issues at the distro level.
 -The code in Tuskar-UI is currently in a large state of flux/rework.  The 
 Tuskar-UI code needs to be able to move quickly and at times drastically; 
 this could be detrimental to the stability of Horizon.  Conversely, the 
 stability needs of Horizon can be detrimental to the speed at which Tuskar-UI 
 can change.
 -Horizon Core can review changes in the Tuskar-UI code base and provide 
 feedback without the code needing to be integrated in Horizon proper.  
 Obviously, with an eye to the code bases merging in the long run.

 As far as core group organization, I think the current Tuskar-UI core should 
 maintain their +2 for only Tuskar-UI.  Individuals who make significant 
 review contributions to Horizon will certainly be considered for Horizon core 
 in time.  I agree with Gabriel's suggestion of adding Horizon Core to 
 tuskar-UI core.  The idea being that Horizon core is initially looking for 
 compatibility with Horizon and working toward a deeper 
 understanding of the Tuskar-UI code base.  This will help ensure the 
 integration process goes as smoothly as possible when Tuskar/TripleO comes 
 out of incubation.

 I look forward to being able to merge the two code bases, but I don't think 
 the time is right yet, and Horizon should stick to integrating into the 
 OpenStack Dashboard only code that is out of incubation.  We've made 
 exceptions in the past, and they tend to have unfortunate consequences.

 -David


 -Original Message-
 From: Jiri Tomasek [mailto:jtoma...@redhat.com]
 Sent: Thursday, December 19, 2013 4:40 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

 On 12/19/2013 08:58 AM, Matthias Runge wrote:
  On 12/18/2013 10:33 PM, Gabriel Hurley wrote:
 
  Adding developers to Horizon Core just for the purpose of reviewing
  an incubated umbrella project is not the right way to do things at
  all.  If my proposal of two separate groups having the +2 power in
  Gerrit isn't technically feasible then a new group should be created
  for management of umbrella projects.
  Yes, I totally agree.
 
  Having two separate projects with separate cores should be possible
  under the umbrella of a program.
 
  Tuskar differs somewhat from other projects to be included in horizon,
  because other projects contributed a view on their specific feature.
  Tuskar provides an additional dashboard and is talking with several apis
  below. It's a something like a separate dashboard to be merged here.
 
  When having both under the horizon program umbrella, my concern is,
 that
  both projects wouldn't be coupled so tight, as I would like it.
 
  Esp. I'd love to see an automatic merge of horizon commits to a
  (combined) tuskar and horizon repository, thus making sure, tuskar will
  work in a fresh (updated) horizon environment.

 Please correct me if I am wrong, but I think this is not an issue.
 Currently Tuskar-UI is run from a Horizon fork. In the local Horizon fork we
 create a symlink to the local tuskar-ui clone, and to run Horizon with
 Tuskar-UI we simply start the Horizon server. This means that Tuskar-UI runs
 on the latest version of Horizon (if you pull regularly, of course).
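For what it's worth, the symlink setup described above can be sketched as follows; the directory layout (a 'tuskar_ui' package inside each checkout) is an assumption about the local development tree:

```python
# Sketch of the symlink-based dev setup: Horizon serves the live Tuskar-UI
# code via a symlink into the Horizon checkout.
import os

def link_tuskar_ui(horizon_dir, tuskar_ui_dir):
    """Symlink the tuskar_ui package into a local Horizon checkout."""
    src = os.path.join(tuskar_ui_dir, 'tuskar_ui')
    dst = os.path.join(horizon_dir, 'tuskar_ui')
    if not os.path.islink(dst):
        os.symlink(src, dst)  # idempotent: skip if the link already exists
    return dst
```

Because the link points at the live clone, starting the Horizon dev server always runs Tuskar-UI against whatever Horizon revision is checked out.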

 
  Matthias
 

 Jirka



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] additional core review criteria - recent Jenkins pass - otherwise you break the gates

2013-12-19 Thread Julien Danjou
On Thu, Dec 19 2013, Sean Dague wrote:

 So please look for recent passes before +Aing anything.

What about making that automatic?

Same question for patchsets that sit there for a month, finally get
approved, and fail right away because they cannot be merged. It would be
cool to notify the submitter as soon as the patch is detected to be
non-mergeable.

-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] additional core review criteria - recent Jenkins pass - otherwise you break the gates

2013-12-19 Thread Sean Dague
Jim and I have been talking about both of these ideas for months. We aren't 
lacking clever solutions to make this better; what we're lacking is 
implementors. Volunteers welcomed.

Until such time, this is a completely solvable problem: it takes an extra 
5 seconds before approving a patch to merge. I expect a core reviewer to look 
back through the history of comments to make sure they aren't ignoring other 
people's feedback.

It doesn't seem unreasonable to expect that core reviewers also understand 
that there is a lot of responsibility with that power, and what the impact on 
others is when you +A things into the gate.

John Griffith john.griff...@solidfire.com wrote:
On Thu, Dec 19, 2013 at 5:41 PM, Sean Dague s...@dague.net wrote:
 https://review.openstack.org/#/c/51793/

Just curious, what about the possibility of automating this?  In other
words, run through idle patches any time the gate volume gets below a
certain threshold.  If you come across a patch that hasn't been
visited/modified in over, say, a week, and it doesn't have any failures
(-1's, -2's, etc.), then go ahead and run a check on it.  The volume will
likely prevent it from keeping all of them updated, but it might make a
significant dent.

Might work, might not.  But if the simple action described above
causes *hours* of gate delay, it may very well be worth
attempting to automate, IMO.

John


-- 
Sean Dague 
http://dague.net 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the FloatingIPChecker control point

2013-12-19 Thread Salvatore Orlando
Before starting this post, I confess I did not read this whole thread with the
required level of attention, so I apologise for any repetition.
I just wanted to point out that floating IPs in neutron are created
asynchronously when using the l3 agent, and I think this is clear to
everybody.
So when the create-floating-IP call returns, this does not mean the
floating IP has actually been wired, i.e. the IP configured on the qg-
interface and the SNAT/DNAT rules added.

Unfortunately, neutron lacks a concept of operational status for a floating
IP, which would tell clients, including nova (which acts as a client of the
neutron API here), when a floating IP is ready to be used. I started work in
this direction, but it has been suspended for a week now. If anybody wants to
take over and deems this a reasonable thing to do, that would be great.

I think neutron tests checking connectivity might return more meaningful
failure data if they gathered the status of the various components
which might impact connectivity.
These are:
- The floating IP
- The router internal interface
- The VIF port
- The DHCP agent

Collecting info about the latter is very important but a bit trickier. I
discussed with Sean and Maru that it would be great, as a starter, to grep
the console log to check whether the instance obtained an IP.
Other things to consider would be:
- adding an operational status to a subnet, which would express whether the
DHCP agent is in sync with that subnet (this information won't make sense
for subnets with dhcp disabled)
- working on a 'debug' administrative API which could return, for instance,
for each DHCP agent the list of configured networks and leases.

Regarding timeouts, I think it's fair for tempest to define a timeout and
ask that everything from VM boot to Floating IP wiring completes within
that timeout.
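A sketch of the kind of wait loop tempest could use once such an operational status exists. The 'status' field is the proposed attribute, not something neutron exposed at the time, and the client method name is an assumption:

```python
# Sketch only: poll a floating IP until it reports an ACTIVE operational
# status, within the single overall timeout argued for above.
import time

def wait_for_floating_ip_active(client, fip_id, timeout=60, interval=1):
    """Poll until the floating IP reports ACTIVE, or raise on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        fip = client.show_floatingip(fip_id)['floatingip']
        if fip.get('status') == 'ACTIVE':
            return fip
        time.sleep(interval)
    raise RuntimeError('floating IP %s not ACTIVE within %ss'
                       % (fip_id, timeout))
```

The same single-deadline pattern covers everything from VM boot to floating IP wiring, rather than a separate timeout per component.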

Regards,
Salvatore


On 19 December 2013 16:15, Frittoli, Andrea (Cloud Services) 
fritt...@hp.com wrote:

 My 2 cents:

 In the test the floating IP is created via neutron API and later checked
 via
 nova API.

 So the test is relying here (or trying to verify?) the network cache
 refresh
 mechanism in nova.
 This is something that we should test, but in a test dedicated to this.

 The primary objective of test_network_basic_ops is to verify the network
 plumbing and end-to-end connectivity, so it should be decoupled from things
 like network cache refresh.

 If the floating IP is associated via neutron API, only the neutron API will
 report the associated in a timely manner.
 Else if the floating IP is created via the nova API, this will update the
 network cache automatically, not relying on the cache refresh mechanism, so
 both neutron and nova API will report the associated in a timely manner
 (this did not work some weeks ago, so it something tempest tests should
 catch).

 andrea

 -Original Message-
 From: Brent Eagles [mailto:beag...@redhat.com]
 Sent: 19 December 2013 14:53
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the
 FloatingIPChecker control point

 Hi,

 Yair Fried wrote:
  I would also like to point out that, since Brent used
  compute.build_timeout as the timeout value ***It takes more time to
  update FLIP in nova DB, than for a VM to build***
 
  Yair

 Agreed. I think that's an extremely important highlight of this discussion.
 Propagation of the floating IP is definitely bugged. In the small sample of
 logs (2) that I checked, the floating IP assignment propagated in around 10
 seconds for test_network_basic_ops, but in the cross-tenant connectivity
 test it took somewhere around 1 minute for the first assignment and
 something over 3 (otherwise known as simply-too-long-to-find-out). Even if
 querying once a second were excessive - which I do not feel strongly
 enough about to say is anything other than a *possible* contributing factor
 - it should not take that long.

 Cheers,

 Brent



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Blueprint Bind dnsmasq in qrouter- namespace

2013-12-19 Thread Shixiong Shang
Hi, Ian:

The use case brought up by the Comcast team during today's ipv6 sub-team meeting 
actually proved the point I made here, instead of arguing against it. In case I 
didn't explain it clearly in my previous email, here it is.

I was questioning the design with two namespaces, and I believe we can use a 
SINGLE namespace as the common container to host both services, i.e. DHCP and 
ROUTING. If your use case needs a DHCP instance but not ROUTING, then just 
launch dnsmasq in THE namespace with a qr- interface; if your use case needs a 
default GW, then add a qg- interface in THE namespace. Whether it is called 
qdhcp or qrouter, I don't care. It is just a label.

People follow the routine to use it simply because this is what OpenStack 
offers. But my question is, why? And why do we NOT design the system in the way 
that the qg- and qr- interfaces collocate in the same namespace?

Because we intentionally separate the services, the system has become clumsy 
and less efficient. As you can see in the IPv6 cases, we are forced to deal 
with two namespaces now. It just doesn't make any sense.

Shixiong






On Dec 19, 2013, at 7:27 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 Per the discussions this evening, we did identify a reason why you might need 
 a dhcp namespace for v6 - because networks don't actually have to have 
 routers.  It's clear you need an agent in the router namespace for RAs and 
 another one in the DHCP namespace for when the network's not connected to a 
 router, though.  
 
 We've not pinned down all the API details yet, but the plan is to implement 
 an RA agent first, responding to subnets that router is attached to (which is 
 very close to what Randy and Shixiong have already done).
 -- 
 Ian.
 
 
 On 19 December 2013 14:01, Randy Tuttle randy.m.tut...@gmail.com wrote:
 First, dnsmasq is not being moved. Instead, it's a different instance for 
 the attached subnet in the qrouter namespace. If it's not in the qrouter 
 namespace, the default gateway (the local router interface) will be the 
 interface of qdhcp namespace interface. That will cause blackhole for traffic 
 from VM. As you know, routing tables and NAT all occur in qrouter namespace. 
 So we want the RA to contain the local interface as default gateway in 
 qrouter namespace
 
 Randy
 
 Sent from my iPhone
 
 On Dec 19, 2013, at 4:05 AM, Xuhan Peng pengxu...@gmail.com wrote:
 
 I am reading through the blueprint created by Randy to bind dnsmasq into 
 qrouter- namespace:
 
 https://blueprints.launchpad.net/neutron/+spec/dnsmasq-bind-into-qrouter-namespace
 
 I don't think I can follow the reason that we need to change the namespace 
 which contains dnsmasq process and the device it listens to from qdhcp- to 
 qrouter-. Why the original namespace design conflicts with the Router 
 Advertisement sending from dnsmasq for SLAAC?
 
 From the attached POC result link, the reason is stated as:
 
 Even if the dnsmasq process could send Router Advertisement, the default 
 gateway would bind to its own link-local address in the qdhcp- namespace. As 
 a result, traffic leaving tenant network will be drawn to DHCP interface, 
 instead of gateway port on router. That is not desirable! 
 
 Can Randy or Shixiong explain this more? Thanks!
 
 Xuhan 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Blueprint Bind dnsmasq in qrouter- namespace

2013-12-19 Thread Randy Tuttle
Shixiong,

I know you must have a typo in the 3rd paragraph. I think maybe you meant to
include the ns- interface in that list. So why not have qg-, qr-, and ns-
interfaces in the same namespace? Am I right?

Randy


On Thu, Dec 19, 2013 at 8:31 PM, Shixiong Shang 
sparkofwisdom.cl...@gmail.com wrote:

 Hi, Ian:

 The use case brought by Comcast team today during the ipv6 sub-team
 meeting actually proved the point I made here, instead of against it. If I
 didn’t explain it clearly in my previous email, here it is.

 I was questioning the design with two namespaces, and I believe we can use
 a SINGLE namespace as the common container to host two services, i.e. DHCP
 and ROUTING. If your use case needs a DHCP instance but not ROUTING, then
 just launch dnsmasq in THE namespace with a qr- interface; if your use case
 needs a default GW, then add a qg- interface in THE namespace. Whether it is
 called qdhcp or qrouter, I don't care. It is just a label.

 People follow the routine simply because this is what OpenStack offers. But
 my question is, why? Why did we NOT design the system so that the qg- and
 qr- interfaces collocate in the same namespace?

 Because we intentionally separated the services, the system has become
 clumsy and less efficient. As you can see in the IPv6 case, we are forced to
 deal with two namespaces. It just doesn't make any sense.

 Shixiong






 On Dec 19, 2013, at 7:27 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 Per the discussions this evening, we did identify a reason why you might
 need a dhcp namespace for v6 - because networks don't actually have to have
 routers.  It's clear you need an agent in the router namespace for RAs and
 another one in the DHCP namespace for when the network's not connected to a
 router, though.

 We've not pinned down all the API details yet, but the plan is to
 implement an RA agent first, responding to subnets that the router is
 attached to (which is very close to what Randy and Shixiong have already
 done).
 --
 Ian.


 On 19 December 2013 14:01, Randy Tuttle randy.m.tut...@gmail.com wrote:

 First, dnsmasq is not being moved. Instead, it's a different instance
 for the attached subnet in the qrouter namespace. If it's not in the
 qrouter namespace, the default gateway (the local router interface) will be
 an interface in the qdhcp namespace. That would create a black hole for
 traffic from the VM. As you know, routing tables and NAT all happen in the
 qrouter namespace, so we want the RA to advertise the local interface in
 the qrouter namespace as the default gateway.

 Randy

 Sent from my iPhone
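Randy's description above can be sketched as a dnsmasq configuration fragment. This is a hedged illustration only — the interface name, the config path, and the exact invocation are assumptions, not what the blueprint prescribes:

```conf
# Hypothetical: dnsmasq run inside the qrouter- namespace, e.g.
#   ip netns exec qrouter-<uuid> dnsmasq --conf-file=/etc/neutron/ra.conf
interface=qr-1234abcd       # assumed name of the router-side interface
enable-ra                   # emit Router Advertisements for SLAAC
# Advertise the prefix of the attached subnet, RA-only (no stateful DHCPv6).
dhcp-range=::,constructor:qr-1234abcd,ra-only
```

Because dnsmasq binds inside the qrouter namespace here, the RA source is the router's own link-local address, so VMs install the qr- interface — not a qdhcp port — as their default gateway.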

 On Dec 19, 2013, at 4:05 AM, Xuhan Peng pengxu...@gmail.com wrote:

 I am reading through the blueprint created by Randy to bind dnsmasq into
 qrouter- namespace:


 https://blueprints.launchpad.net/neutron/+spec/dnsmasq-bind-into-qrouter-namespace

 I don't think I can follow the reason that we need to change the
 namespace which contains the dnsmasq process, and the device it listens to,
 from qdhcp- to qrouter-. Why does the original namespace design conflict
 with the Router Advertisements sent from dnsmasq for SLAAC?

 From the attached POC result link, the reason is stated as:

 Even if the dnsmasq process could send Router Advertisement, the default
 gateway would bind to its own link-local address in the qdhcp- namespace.
 As a result, traffic leaving tenant network will be drawn to DHCP
 interface, instead of gateway port on router. That is not desirable! 

 Can Randy or Shixiong explain this more? Thanks!

 Xuhan



Re: [openstack-dev] [Nova][VMware] Deploy from vCenter template

2013-12-19 Thread Zhu Zhu
Hi Arnaud, 

It's really good to know that your team is proposing the vCenter driver with
OVA + Glance datastore backend support. Thanks for sharing the information. OVA
would be a good choice that will benefit users by removing the flat-image
limitation of the current driver.

But in my opinion, it need not conflict with deploying from a template. From an
end user's perspective, if there is already a set of templates within vCenter,
it's good to have OpenStack deploy a VM from them directly. The user can touch
an empty image in Glance with only the metadata pointing to the template name,
and boot a VM from it. Alternatively, the user can generate an *.ova with a
vmdk stream file, place it in a certain datastore, and deploy a VM from an
image location pointing to that datastore. These are two different usage
scenarios, per my understanding.

And to go further, if there were some mechanism for OpenStack to sync existing
VM templates into Glance images, it would make this function even more useful.




Best Regards
Zarric(Zhu Zhu)

From: Arnaud Legendre
Date: 2013-12-18 01:58
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][VMware] Deploy from vCenter template
Hi Qing Xin,



We are planning to address the vCenter template issue by leveraging the OVF/OVA
capabilities.
Kiran's implementation is tied to a specific VC and requires adding Glance
properties that are not generic.
For existing templates, the workflow will be:
. generate an *.ova tarball (containing the *.ovf descriptor + *.vmdk 
stream-optimized) out of the template,
. register the *.ova as a Glance image location (using the VMware Glance driver 
see bp [1]) or simply upload the *.ova to another Glance store,
. The VMware driver in Nova will be able to consume the *.ova (either through 
the location or by downloading the content to the cache):  see bp [2]. However, 
for Icehouse, we are not planning to actually consume the *.ovf descriptor 
(work scheduled for the J/K release).


[1]  
https://blueprints.launchpad.net/glance/+spec/vmware-datastore-storage-backend
[2] https://blueprints.launchpad.net/nova/+spec/vmware-driver-ova-support 
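The first step of the workflow above — generating an *.ova tarball out of a descriptor plus a stream-optimized *.vmdk — can be sketched as below. File names are made up for illustration; the only real constraint leaned on is that OVF packaging expects the *.ovf descriptor to be the first member of the (uncompressed) tar archive:

```python
import tarfile

def make_ova(ova_path, ovf_path, vmdk_path):
    """Package an OVF descriptor and a disk image into an OVA tarball.

    The OVF packaging convention expects the *.ovf descriptor to be the
    first member of the archive, so it is added before the disk.
    """
    with tarfile.open(ova_path, "w") as tar:  # an OVA is an uncompressed tar
        tar.add(ovf_path, arcname="template.ovf")
        tar.add(vmdk_path, arcname="template-disk1.vmdk")

# usage sketch (paths are hypothetical):
# make_ova("template.ova", "template.ovf", "template-disk1.vmdk")
```

The resulting *.ova can then be registered as a Glance image location or uploaded to another Glance store, as described in the workflow.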


If you have questions about [1], please send me an email. For [2], please reach 
vuil.




Thanks,
Arnaud





From: Shawn Hartsock harts...@acm.org
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Monday, December 16, 2013 9:37:34 AM
Subject: Re: [openstack-dev] [Nova][VMware] Deploy from vCenter template



IIRC someone who shows up at 
https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Meetings is planning on 
working on that again for Icehouse-3 but there's some new debate on the best 
way to implement the desired effect. The goal of that change would be to avoid 
streaming the disk image out of vCenter for the purpose of then streaming the 
same image back into the same vCenter. That's really inefficient.


So there's a Nova level change that could happen (that's the patch you saw) and 
there's a Glance level change that could happen, and there's a combination of 
both approaches that could happen.


If you want to discuss it informally with the group that's looking into the 
problem I could probably make sure you end up talking to the right people on 
#openstack-vmware or if you pop into the weekly team meeting on IRC you could 
mention it during open discussion time.




On Mon, Dec 16, 2013 at 3:27 AM, Qing Xin Meng mengq...@cn.ibm.com wrote:

I saw a commit for Deploying from VMware vCenter template and found it's 
abandoned.
https://review.openstack.org/#/c/34903 

Does anyone know of a plan to support deployment from a VMware vCenter template?


Thanks!



Best Regards








-- 

# Shawn.Hartsock - twitter: @hartsock - plus.google.com/+ShawnHartsock 




Re: [openstack-dev] [Openstack] Today - Doc Bug Day

2013-12-19 Thread Tom Fifield
Reminder: Doc Bug Day is today.

On 13/12/13 08:16, Tom Fifield wrote:
 Reminder: Friday next week - Doc Bug Day.
 
 Current number of doc bugs: 505
 
 On 22/11/13 11:27, Tom Fifield wrote:
 All,

 This month, docs reaches 500 bugs, making it the 2nd-largest project by
 bug count in all of OpenStack. Yes, it beats Cinder, Horizon, Swift,
 Keystone and Glance, and will soon surpass Neutron.

 In order to start the new year in a slightly better state, we have
 arranged a bug squash day:


 Friday, December 20th


 https://wiki.openstack.org/wiki/Documentation/BugDay


 Join us in #openstack-doc whenever you get to your computer, and let's
 beat the bugs :)


 For those who are unfamiliar:
 Bug days are a day-long event where all the OpenStack community focuses
 exclusively on a task around bugs corresponding to the bug day topic.
 With so many community members available around the same task, these
 days are a great way to start joining the OpenStack community.


 Regards,


 Tom

 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 
 
 




Re: [openstack-dev] [Neutron][IPv6] Blueprint Bind dnsmasq in qrouter- namespace

2013-12-19 Thread Shixiong Shang
Hi, Randy: 

Thanks a bunch for pointing it out! Yup, you are absolutely right. What I 
wanted to say is: why not put the qg-, qr-, and ns- interfaces in a single 
namespace?

I typed it on my small keyboard on iPhone. Sorry for the confusion. :(

Shixiong




On Dec 19, 2013, at 8:44 PM, Randy Tuttle randy.m.tut...@gmail.com wrote:

 Shixiong,
 
 I know you must have a typo in the 3rd paragraph. I think maybe you mean to 
 include the ns- interface in that list. So why not have qg- qr- and ns- 
 interfaces in the same namespace. Am I right?
 
 Randy
 
 
 On Thu, Dec 19, 2013 at 8:31 PM, Shixiong Shang 
 sparkofwisdom.cl...@gmail.com wrote:
 Hi, Ian:
 
 The use case brought by Comcast team today during the ipv6 sub-team meeting 
 actually proved the point I made here, instead of against it. If I didn’t 
 explain it clearly in my previous email, here it is.
 
 I was questioning the design with two namespaces and I believe we can use a 
 SINGLE namespace as the common container to host two services, i.e. DHCP and 
 ROUTING. If your use case needs DHCP instance, but not ROUTING, then just 
 launch dnsmasq in THE namespace with qr- interface; If your use case needs 
 default GW, then add qg- interface in THE namespace. Whether it is called 
 qdhcp or qrouter, I don’t care. It is just a label. 
 
 People follow the routine to use it, simply because this is what OpenStack 
 offers. But my question is, why? And why NOT we design the system in the way 
 that qg- and qr- interface collocate in the same namespace?
 
 It is because we intentionally separate the service, now the system become 
 clumsy and less efficient. As you can see in IPv6 cases, we are forced to 
 deal with two namespaces now. It just doesn’t make any sense.
 
 Shixiong
 
 
 
 
 
 
 On Dec 19, 2013, at 7:27 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:
 
 Per the discussions this evening, we did identify a reason why you might 
 need a dhcp namespace for v6 - because networks don't actually have to have 
 routers.  It's clear you need an agent in the router namespace for RAs and 
 another one in the DHCP namespace for when the network's not connected to a 
 router, though.  
 
 We've not pinned down all the API details yet, but the plan is to implement 
 an RA agent first, responding to subnets that router is attached to (which 
 is very close to what Randy and Shixiong have already done).
 -- 
 Ian.
 
 
 On 19 December 2013 14:01, Randy Tuttle randy.m.tut...@gmail.com wrote:
 First, dnsmasq is not being moved. Instead, it's a different instance for 
 the attached subnet in the qrouter namespace. If it's not in the qrouter 
 namespace, the default gateway (the local router interface) will be the 
 interface of qdhcp namespace interface. That will cause blackhole for 
 traffic from VM. As you know, routing tables and NAT all occur in qrouter 
 namespace. So we want the RA to contain the local interface as default 
 gateway in qrouter namespace
 
 Randy
 
 Sent from my iPhone
 
 On Dec 19, 2013, at 4:05 AM, Xuhan Peng pengxu...@gmail.com wrote:
 
 I am reading through the blueprint created by Randy to bind dnsmasq into 
 qrouter- namespace:
 
 https://blueprints.launchpad.net/neutron/+spec/dnsmasq-bind-into-qrouter-namespace
 
 I don't think I can follow the reason that we need to change the namespace 
 which contains dnsmasq process and the device it listens to from qdhcp- to 
 qrouter-. Why the original namespace design conflicts with the Router 
 Advertisement sending from dnsmasq for SLAAC?
 
 From the attached POC result link, the reason is stated as:
 
 Even if the dnsmasq process could send Router Advertisement, the default 
 gateway would bind to its own link-local address in the qdhcp- namespace. 
 As a result, traffic leaving tenant network will be drawn to DHCP 
 interface, instead of gateway port on router. That is not desirable! 
 
 Can Randy or Shixiong explain this more? Thanks!
 
 Xuhan 


Re: [openstack-dev] [Nova] Default ephemeral filesystem

2013-12-19 Thread Andrew Woodward
From some testing I did a couple of months ago, we decided to move to XFS to
avoid the issue.

I was poking around after my file system inadvertently filled, and found that
in ext3/4 all of the inodes in the file system have to be zeroed prior to mkfs
completing (unless the kernel is above 2.6.37, in which case the inode table is
lazily initialized in the background after the first mount). Initializing the
inode table with the current default bytes-per-inode ratio of 16KiB, at 256B
per inode, results in 16GiB of inodes per 1TiB of volume.



There appear to be two viable options for reducing the format time:
 1. Increase the bytes-per-inode value during mkfs, at the cost of the total
 number of files the file system can store. Setting bytes-per-inode to 256KiB,
 so that only 1GiB of inode tables per TiB is initialized, would result in 4Mi
 inodes per TiB of disk (down from 64Mi).
 2. Format all non-OS partitions as XFS, which does a lot less upfront
 allocation. By my observation, it appears to initialize around 700MiB per
 TiB.
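The arithmetic behind these figures checks out; a quick back-of-the-envelope calculation using the numbers quoted above (16KiB bytes-per-inode default, 256B inode size, 1TiB volume):

```python
# Back-of-the-envelope check of the inode-table sizes quoted above.
TIB = 1024 ** 4          # volume size: 1 TiB in bytes
INODE_SIZE = 256         # bytes per on-disk inode (mkfs.ext4 default)

def inode_table_bytes(bytes_per_inode, volume_bytes=TIB):
    """Return (inode count, total bytes of inode tables) for a volume."""
    inodes = volume_bytes // bytes_per_inode
    return inodes, inodes * INODE_SIZE

# Default ratio: one inode per 16 KiB -> 64Mi inodes, 16 GiB of tables.
inodes, table = inode_table_bytes(16 * 1024)
print(inodes // 2**20, "Mi inodes,", table // 2**30, "GiB of tables")  # 64, 16

# Option 1: one inode per 256 KiB -> 4Mi inodes, 1 GiB of tables.
inodes, table = inode_table_bytes(256 * 1024)
print(inodes // 2**20, "Mi inodes,", table // 2**30, "GiB of tables")  # 4, 1
```

This is why formatting a 1TB ext3 volume does gigabytes of writes up front, while ext4 with lazy initialization defers nearly all of it.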


The performance increase you saw is likely due to the deferred updates that
will still occur on the first mount of the device (but in the background).
However, this won't occur for RHEL 4/5/6, as those kernels won't do it, so they
will still sit there and require the pre-allocation. You can shrink the inode
count by making the inodes larger, as I initially tested, which could be OK
since most images are many MiB and usually many GiB.

Just my feedback on what you were seeing; moving to ext4 is preferred over ext3.


On Thu, Dec 19, 2013 at 12:30 PM, Sean Dague s...@dague.net wrote:

 On 12/19/2013 03:21 PM, Robert Collins wrote:
  The default ephemeral filesystem in Nova is ext3 (for Linux). However
  ext3 is IMNSHO a pretty poor choice given ext4's existence. I can
  totally accept that other fs's like xfs might be contentious - but is
  there any reason not to make ext4 the default?
 
  I'm not aware of any distro that doesn't have ext4 support - even RHEL
  defaults to ext4 in RHEL5.
 
  The reason I'm raising this is that making a 1TB ext3 ephemeral volume
  does (way) over 5GB of writes due to zeroing all the inode tables, but
  an ext4 one does less than 1% of the IO - 14m vs 7seconds in my brief
  testing. (We were investigating why baremetal deploys were slow :)).
 
  -Rob

 Seems like a fine change to me. I assume that's all just historical
 artifact.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net






-- 
If google has done it, Google did it right!


[openstack-dev] [keystone] role of Domain in VPC definition

2013-12-19 Thread Ravi Chunduru
Hi,
  We had some internal discussions on the role of Domains and VPCs. I would
like to expand on that and understand the community's thinking on Keystone
domains and VPCs.

Is VPC equivalent to Keystone Domain?

If so, as a public cloud provider - I create a Keystone domain and give it
to an organization which wants a virtual private cloud.

Now the question is: if that organization wants department-wise allocation of
resources, it becomes difficult to model with existing v3 Keystone constructs.

Currently, it looks like each department of an organization cannot have its
own resource management within the organization's VPC (LDAP-based user
management, network management, dedicated computes, etc.). For us, an
OpenStack project does not match the requirements of a department of an
organization.

I hope you guessed what we want: a Domain must have VPCs, and a VPC must have
projects.

I would like to know how the community sees the VPC model in OpenStack.

Thanks,
-Ravi.


Re: [openstack-dev] [Neutron][IPv6] Blueprint Bind dnsmasq in qrouter- namespace

2013-12-19 Thread Ian Wells
Interesting.  So you're suggesting we provision a single namespace (per
network, rather than subnet?) proactively, and use it for both routing and
DHCP.  Not unreasonable.  Also workable for v4, I think?
-- 
Ian.


On 20 December 2013 02:31, Shixiong Shang sparkofwisdom.cl...@gmail.comwrote:

 Hi, Ian:

 The use case brought by Comcast team today during the ipv6 sub-team
 meeting actually proved the point I made here, instead of against it. If I
 didn’t explain it clearly in my previous email, here it is.

 I was questioning the design with two namespaces and I believe we can use
 a SINGLE namespace as the common container to host two services, i.e. DHCP
 and ROUTING. If your use case needs DHCP instance, but not ROUTING, then
 just launch dnsmasq in THE namespace with qr- interface; If your use case
 needs default GW, then add qg- interface in THE namespace. Whether it is
 called qdhcp or qrouter, I don’t care. It is just a label.

 People follow the routine to use it, simply because this is what OpenStack
 offers. But my question is, why? And why NOT we design the system in the
 way that qg- and qr- interface collocate in the same namespace?

 It is because we intentionally separate the service, now the system become
 clumsy and less efficient. As you can see in IPv6 cases, we are forced to
 deal with two namespaces now. It just doesn’t make any sense.

 Shixiong






 On Dec 19, 2013, at 7:27 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 Per the discussions this evening, we did identify a reason why you might
 need a dhcp namespace for v6 - because networks don't actually have to have
 routers.  It's clear you need an agent in the router namespace for RAs and
 another one in the DHCP namespace for when the network's not connected to a
 router, though.

 We've not pinned down all the API details yet, but the plan is to
 implement an RA agent first, responding to subnets that router is attached
 to (which is very close to what Randy and Shixiong have already done).
 --
 Ian.


 On 19 December 2013 14:01, Randy Tuttle randy.m.tut...@gmail.com wrote:

 First, dnsmasq is not being moved. Instead, it's a different instance
 for the attached subnet in the qrouter namespace. If it's not in the
 qrouter namespace, the default gateway (the local router interface) will be
 the interface of qdhcp namespace interface. That will cause blackhole for
 traffic from VM. As you know, routing tables and NAT all occur in qrouter
 namespace. So we want the RA to contain the local interface as default
 gateway in qrouter namespace

 Randy

 Sent from my iPhone

 On Dec 19, 2013, at 4:05 AM, Xuhan Peng pengxu...@gmail.com wrote:

 I am reading through the blueprint created by Randy to bind dnsmasq into
 qrouter- namespace:


 https://blueprints.launchpad.net/neutron/+spec/dnsmasq-bind-into-qrouter-namespace

 I don't think I can follow the reason that we need to change the
 namespace which contains dnsmasq process and the device it listens to from
 qdhcp- to qrouter-. Why the original namespace design conflicts with the
 Router Advertisement sending from dnsmasq for SLAAC?

 From the attached POC result link, the reason is stated as:

 Even if the dnsmasq process could send Router Advertisement, the default
 gateway would bind to its own link-local address in the qdhcp- namespace.
 As a result, traffic leaving tenant network will be drawn to DHCP
 interface, instead of gateway port on router. That is not desirable! 

 Can Randy or Shixiong explain this more? Thanks!

 Xuhan



Re: [openstack-dev] [trove] datastore migration issues

2013-12-19 Thread Tim Simpson
I second Rob and Greg- we need to not allow the instance table to have nulls 
for the datastore version ID. I can't imagine that as Trove grows and evolves, 
that edge case is something we'll always remember to code and test for, so 
let's cauterize things now by no longer allowing it at all.

The fact that the migration scripts can't, to my knowledge, accept parameters 
for what the dummy datastore name and version should be isn't great, but I 
think it would be acceptable enough to make the provided default values 
sensible and ask operators who don't like it to manually update the database.

- Tim
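The back-fill migration being discussed (option 2 below, with Tim's configurable default) could look roughly like this sketch. sqlite3 stands in for Trove's real database, and the column name `datastore_version_id` is from the bug discussion while the table layout and default value are simplified assumptions:

```python
import sqlite3

# Hypothetical stand-in for the Trove migration step under discussion:
# back-fill NULL datastore_version_id values with a configurable default.
DEFAULT_VERSION_ID = "mysql-default"   # made-up placeholder value

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id TEXT, datastore_version_id TEXT)")
conn.executemany("INSERT INTO instances VALUES (?, ?)",
                 [("a", None), ("b", "percona-5.5"), ("c", None)])

# The migration: only touch rows the pre-datastore code left unset.
conn.execute("UPDATE instances SET datastore_version_id = ? "
             "WHERE datastore_version_id IS NULL",
             (DEFAULT_VERSION_ID,))

rows = conn.execute(
    "SELECT id, datastore_version_id FROM instances").fetchall()
print(rows)  # the old NULL rows now carry the default; 'b' is untouched
```

Operators who disagree with the chosen default can then correct the dummy value with an equivalent UPDATE of their own, which is the manual step Tim refers to above.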




From: Robert Myers [myer0...@gmail.com]
Sent: Thursday, December 19, 2013 9:59 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] datastore migration issues

I think that we need to be good citizens and at least add dummy data. Because
it is impossible to know who all is using this, the list you have is probably
incomplete. Trove has been available for quite some time, and all these users
will not be listening on this thread. Basically, any time you have a database
migration that adds a required field, you *have* to alter the existing rows.
If we don't, we're basically telling everyone who upgrades that we, the
'Database as a Service' team, don't care about data integrity in our own
product. :)

Robert


On Thu, Dec 19, 2013 at 9:25 AM, Greg Hill greg.h...@rackspace.com wrote:
We did consider doing that, but decided it wasn't really any different from
the other options, as it requires the deployer to know to alter that data.
That would require the fewest code changes, though. It was also my
understanding that MySQL variants (Percona and MariaDB) are a possibility as
well, which is what brought on the objection to just defaulting in code. Also,
we can't derive the version being used, so we *could* fill it with a dummy
version and assume MySQL, but I don't feel like that solves the problem or the
objections to the earlier solutions. And then we would also have bogus data in
the database.

Since there's no perfect solution, I'm really just hoping to gather consensus 
among people who are running existing trove installations and have yet to 
upgrade to the newer code about what would be easiest for them.  My 
understanding is that list is basically HP and Rackspace, and maybe Ebay?, but 
the hope was that bringing the issue up on the list might confirm or refute 
that assumption and drive the conversation to a suitable workaround for those 
affected, which hopefully isn't that many organizations at this point.

The options are basically:

1. Put the onus on the deployer to correct existing records in the database.
2. Have the migration script put dummy data in the database which you have to 
correct.
3. Put the onus on the deployer to fill out values in the config value

Greg

On Dec 18, 2013, at 8:46 PM, Robert Myers myer0...@gmail.com wrote:


There is the database migration for datastores. We should add a function to
back-fill the existing data, either with dummy data or by setting it to
'mysql', as that was the only possibility before datastores.

On Dec 18, 2013 3:23 PM, Greg Hill greg.h...@rackspace.com wrote:
I've been working on fixing a bug related to migrating existing installations 
to the new datastore code:

https://bugs.launchpad.net/trove/+bug/1259642

The basic gist is that existing instances won't have any data in the 
datastore_version_id field in the database unless we somehow populate that data 
during migration, and not having that data populated breaks a lot of things 
(including the ability to list instances or delete or resize old instances).  
It's impossible to populate that data in an automatic, generic way, since it's 
highly vendor-dependent on what database and version they currently support, 
and there's not enough data in the older schema to populate the new tables 
automatically.

So far, we've come up with some non-optimal solutions:

1. The first iteration was to assume 'mysql' as the database manager on 
instances without a datastore set.
2. The next iteration was to make the default value be configurable in 
trove.conf, but default to 'mysql' if it wasn't set.
3. It was then proposed that we could just use the 'default_datastore' value 
from the config, which may or may not be set by the operator.

My problem with any of these approaches beyond the first is that requiring 
people to populate config values in order to successfully migrate to the newer 
code is really no different than requiring them to populate the new database 
tables with appropriate data and updating the existing instances with the 
appropriate values.  Either way, it's now highly dependent on people deploying 
the upgrade to know about this change and react accordingly.

Does anyone have a better solution that we aren't 

Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

2013-12-19 Thread Gao, Fengqian
Hi, Devananda,
Thanks for your reply.
I totally agree with your thought that nova-compute could gather such
information locally. And I think IPMI, using an in-band connection, might be
more suitable for collecting these sensor data.
At the beginning, I thought it would be better to put all the IPMI parts
together in Ironic, since part of the code is already there. Now it looks like
I was overthinking it. :)

In all, power and temperature could be collected via IPMI locally in
nova-compute.


Best wishes
--fengqian
From: Devananda van der Veen [mailto:devananda@gmail.com]
Sent: Friday, December 20, 2013 3:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

On Wed, Dec 18, 2013 at 7:16 PM, Gao, Fengqian fengqian@intel.com wrote:
Hi, Devananda,
I agree with you that new features should be towards Ironic.
As you asked why use Ironic instead of lm-sensors, actually I just want to use 
IPMI instead of lm-sensors. I think it is reasonable to put the IPMI part into 
Ironic and we already did:).

To get the sensors' information, I think IPMI is much more powerful than
lm-sensors.
Firstly, IPMI is flexible. Generally speaking, it provides two kinds of
connections, in-band and out-of-band.
An out-of-band connection allows us to get sensors' status even without an OS
and CPU.
An in-band connection is quite similar to lm-sensors; it needs the OS kernel
to get sensor data.
Secondly, IPMI can gather more sensor information than lm-sensors, and it is
easy to use. From my own experience, IPMI can get all the sensor information
that lm-sensors could get, such as temperature/voltage/fan. Besides that, IPMI
can get power data and some OEM-specific sensor data.
Thirdly, I think IPMI is a common spec for most OEMs, and most servers are
integrated with an IPMI interface.

As you said, nova-compute is already supplying information to the scheduler,
and power/temperature should be gathered locally. IPMI can be used locally via
the in-band connection, and there are plenty of open source libraries, such as
OpenIPMI and FreeIPMI, which provide the interfaces to the OS, just like
lm-sensors.
So I prefer IPMI to lm-sensors. Please leave your comments if you disagree. :)
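As an illustration of the in-band path, nova-compute could shell out to ipmitool and parse its pipe-delimited sensor rows. The sample output below is fabricated for the sketch (real `ipmitool sensor` output varies by platform), and `parse_sensors` is a hypothetical helper, not existing Nova or Ironic code:

```python
# Parse 'ipmitool sensor'-style output (pipe-delimited columns:
# name | reading | unit | status | ...). The SAMPLE text is fabricated;
# in nova-compute the text would come from something like
# subprocess.check_output(["ipmitool", "sensor"]).
SAMPLE = """\
Ambient Temp     | 22.000     | degrees C  | ok
CPU1 Temp        | 41.000     | degrees C  | ok
System Level     | 154.000    | Watts      | ok
"""

def parse_sensors(text):
    """Map sensor name -> (numeric reading, unit), skipping blank readings."""
    readings = {}
    for line in text.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 3 and fields[1] not in ("", "na"):
            readings[fields[0]] = (float(fields[1]), fields[2])
    return readings

sensors = parse_sensors(SAMPLE)
print(sensors["System Level"])   # power reading in Watts
```

This is the kind of data nova-compute could report to the scheduler alongside its existing host metrics.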


I see nothing wrong with nova-compute gathering such information locally. 
Whether you use lm-sensors or in-band IPMI is an implementation detail of how 
nova-compute would gather the information.

However, I don't see how this has anything to do with Ironic or the 
nova-baremetal driver. These would gather information remotely (using 
out-of-band IPMI) for hardware controlled and deployed by these services. In 
most cases, nova-compute is not deployed by nova-compute (exception: if you're 
running TripleO).

Hope that helps,
-Deva


Re: [openstack-dev] [neutron] [third-party-testing] Reminder: Meeting tomorrow

2013-12-19 Thread Shiv Haris
Salvatore,

Agreed that a visible job-log should be a requirement, otherwise who knows what 
was actually run.

-Shiv


From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Thursday, December 19, 2013 4:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [third-party-testing] Reminder: Meeting 
tomorrow

Hi,

I'm sorry I could not make it to the meeting.
However, I can clearly see the progress being made from Gerrit!

One thing which might be worth mentioning is that some of the new jobs are 
already voting.
However, in some cases the logs are not accessible, and in other cases the
job seems to be a work in progress. For instance, I've seen some just
launching devstack; nevertheless the job still votes.

I think publicly available logs and execution of the tempest smoke suite or a 
reasonable subset of it should be a required condition for voting.
Is this something which was also discussed at today's meeting?

Salvatore

On 19 December 2013 15:12, Kyle Mestery mest...@siliconloons.com wrote:
Apologies folks, I meant 2200 UTC Thursday. We'll still do the
meeting today.

On Dec 18, 2013, at 4:40 PM, Don Kehn dek...@gmail.com wrote:

 Wouldn't 2200 UTC be in about 20 mins?


 On Wed, Dec 18, 2013 at 3:32 PM, Itsuro ODA o...@valinux.co.jp wrote:
 Hi,

 It seems the meeting was not held on 2200 UTC on Wednesday (today).

 Do you mean 2200 UTC on Thursday ?

 Thanks.

 On Thu, 12 Dec 2013 11:43:03 -0600
 Kyle Mestery mest...@siliconloons.com wrote:

  Hi everyone:
 
  We had a meeting around Neutron Third-Party testing today on IRC.
  The logs are available here [1]. We plan to host another meeting
  next week, and it will be at 2200 UTC on Wednesday in the
  #openstack-meeting-alt channel on IRC. Please attend and update
  the etherpad [2] with any items relevant to you before then.
 
  Thanks again!
  Kyle
 
  [1] 
  http://eavesdrop.openstack.org/meetings/networking_third_party_testing/2013/
  [2] https://etherpad.openstack.org/p/multi-node-neutron-tempest

 On Wed, 18 Dec 2013 15:10:46 -0600
 Kyle Mestery mest...@siliconloons.com wrote:

  Just a reminder, we'll be meeting at 2200 UTC on #openstack-meeting-alt.
  We'll be looking at this etherpad [1] again, and continuing discussions from
  last week.
 
  Thanks!
  Kyle
 
  [1] https://etherpad.openstack.org/p/multi-node-neutron-tempest
 

 --
 Itsuro ODA o...@valinux.co.jp





 --
 
 Don Kehn
 303-442-0060





Re: [openstack-dev] [Neutron][IPv6] Meeting time - change to 1300 UTC or 1500 UTC?

2013-12-19 Thread Xuhan Peng
1500 UTC is 11 PM in China, not ideal, but I am OK with that :-)


On Fri, Dec 20, 2013 at 8:20 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 I'm easy.


 On 20 December 2013 00:47, Randy Tuttle randy.m.tut...@gmail.com wrote:

 Any of those times suit me.

 Sent from my iPhone

 On Dec 19, 2013, at 5:12 PM, Collins, Sean 
 sean_colli...@cable.comcast.com wrote:

  Thoughts? I know we have people who are not able to attend at our
  current time.
 
  --
  Sean M. Collins








[openstack-dev] [Nova]I want to add a hypervisor driver for Nova in icehouse

2013-12-19 Thread wu jiang
Hi all,

I'm planning to contribute a new hypervisor driver for Huawei FusionCompute in
Icehouse, and I've already submitted a blueprint and a wiki page to OpenStack.

BP:
https://blueprints.launchpad.net/nova/+spec/driver-for-huawei-fusioncompute
Wiki:  https://wiki.openstack.org/wiki/FusionCompute

I'm not sure whether this is enough, so if you're interested, please take a
look. Any help will be truly appreciated.
Thanks.


Regards,
wingwj


Re: [openstack-dev] [Neutron][IPv6] Three SLAAC and DHCPv6 related blueprints

2013-12-19 Thread Xuhan Peng
Ian, thanks for asking!  I replied in the other thread. It works for me!


On Fri, Dec 20, 2013 at 8:23 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 Xuhan, check the other thread - would 1500UTC suit?


 On 19 December 2013 01:09, Xuhan Peng pengxu...@gmail.com wrote:

 Shixiong and guys,

 The sub-team meeting is too early for the China IBM folks to join, although
 we would very much like to participate in the discussion. Any chance to
 rotate the time so we can comment?

 Thanks, Xuhan


 On Thursday, December 19, 2013, Shixiong Shang wrote:

 Hi, Ian:

 I agree with you on the point that the way we implement it should be app
 agnostic. In addition, it should cover both CLI and Dashboard, so the
 system behavior should be consistent to end users.

 The keyword is just one of many ways to implement the concept. It is based
 on the reality that dnsmasq is the only driver available to the community
 today. At the end of the day, the input from the customer has to be
 translated into one of those mode keywords. That doesn't imply the same
 constants have to be used as part of the CLI or Dashboard.

 Randy and I had a lengthy discussion about this topic today. We have a
 straw-man proposal and will share it with the team tomorrow.

 That being said, what concerns me the most at this moment is that we are
 not on the same page. I hope we can reach consensus during tomorrow's
 sub-team meeting. If you cannot make it, then please set up a separate
 meeting and invite the key stakeholders so we have a chance to sort it out.

 Shixiong




 On Dec 18, 2013, at 8:25 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 On 18 December 2013 14:10, Shixiong Shang sparkofwisdom.cl...@gmail.com wrote:

 Hi, Ian:

 I won’t say the intent here is to replace dnsmasq-mode-keyword BP.
 Instead, I was trying to leverage and enhance those definitions so when
 dnsmasq is launched, it knows which mode it should run in.

 That being said, I see the value of your points and I also had lengthy
 discussion with Randy regarding this. We did realize that the keyword
 itself may not be sufficient to properly configure dnsmasq.


 I think the point is that the attribute on whatever object (subnet or
 router) that defines the behaviour should define the behaviour, in
 precisely the terms you're talking about, and then we should find the
 dnsmasq options to suit.  Talking to Sean, he's good with this too, so
 we're all working to the same ends and it's just a matter of getting code
 in.
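To sketch what "find the dnsmasq options to suit" could look like: the mapping below translates an IPv6 addressing behaviour (however the subnet/router attribute ends up being named - the mode names here are placeholders, not an agreed API) onto dnsmasq's IPv6 `--dhcp-range` keywords (`ra-only`, `ra-stateless`, `static`).

```python
# Placeholder behaviour keywords mapped onto dnsmasq's IPv6
# --dhcp-range modes; the real attribute names are still under
# discussion, only the dnsmasq keywords are real.
DNSMASQ_IPV6_OPTS = {
    # Router advertises the prefix; hosts self-configure (no DHCPv6).
    "slaac": "--dhcp-range={prefix},ra-only",
    # RA for the default route, stateful DHCPv6 for addresses.
    "dhcpv6-stateful": "--dhcp-range={prefix},static",
    # SLAAC addresses plus stateless DHCPv6 for options (DNS, etc.).
    "dhcpv6-stateless": "--dhcp-range={prefix},ra-stateless",
}


def dnsmasq_arg(mode, prefix):
    """Translate a behaviour keyword into a dnsmasq CLI argument."""
    try:
        template = DNSMASQ_IPV6_OPTS[mode]
    except KeyError:
        raise ValueError("unsupported IPv6 mode: %s" % mode)
    return template.format(prefix=prefix)
```

Whatever constants the CLI or Dashboard expose, they would funnel through one table like this when dnsmasq is launched.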


 Let us discuss that on Thursday’s IRC meeting.


 Not sure if I'll be available or not this Thursday, unfortunately.  I'll
 try to attend but I can't make promises.

 Randy and I had a quick glance over your document. Much of it parallels the
 work we did on our POC last summer, and it is now being addressed across
 multiple BPs being implemented by ourselves or in Sean Collins and the IBM
 team's work. I will take a closer look and provide my comments.


 That's great.  I'm not wedded to the details in there; I'm actually more
 interested in making sure we've covered everything.

 If you have blueprint references, add them as comments - the
 ipv6-feature-parity BP could do with work and if we get the links together
 in one place we can update it.
 --
 Ian.










Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

2013-12-19 Thread Alan Kavanagh
Cheers Gao

It definitely makes sense to collect additional metrics such as power and 
temperature, and make them available for whatever selective decisions you want 
to take. However, I am just wondering whether you could realistically feed 
those metrics in as variables for scheduling; this is the main part I feel is 
questionable. I assume you would then use temperature and/or power to gauge 
whether you want to schedule another VM on a given node once a given 
temperature threshold is reached. Is this the main case you are thinking of, Gao?

Alan
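To make the threshold question concrete, here is a standalone sketch shaped like nova-scheduler's filter/weigher pattern. The classes are self-contained (no Nova imports), `host_state` is any object with a `metrics` dict, and the metric name `inlet_temp_c` is invented for illustration:

```python
class TemperatureFilter(object):
    """Reject hosts whose inlet temperature exceeds a threshold."""

    def __init__(self, max_inlet_temp_c=35.0):
        self.max_inlet_temp_c = max_inlet_temp_c

    def host_passes(self, host_state):
        temp = host_state.metrics.get("inlet_temp_c")
        if temp is None:
            return True  # no reading: don't exclude the host
        return temp <= self.max_inlet_temp_c


class TemperatureWeigher(object):
    """Prefer cooler hosts among those that passed filtering."""

    def weigh(self, host_state):
        # Lower temperature -> higher weight.
        return -host_state.metrics.get("inlet_temp_c", 0.0)
```

This mirrors the two options Gao mentions later in the thread: a hard threshold (filter) or a soft preference (weigher).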

From: Gao, Fengqian [mailto:fengqian@intel.com]
Sent: December-18-13 10:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Hi, Alan,
I think it is better for nova-scheduler if we gather more information. In 
today's data centers, power and temperature are very important factors to 
consider. CPU/memory utilization alone is not enough to describe a node's 
status; power and inlet temperature should be taken into account.

Best Wishes

--fengqian

From: Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
Sent: Thursday, December 19, 2013 2:14 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Hi Gao

Why do you see it as important to have these two additional metrics, power and 
temperature, for Nova to base scheduling on?

Alan

From: Gao, Fengqian [mailto:fengqian@intel.com]
Sent: December-18-13 1:00 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

Hi, all,
I am planning to extend bp 
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling with 
power and temperature. In other words, power and temperature can be collected 
and used by nova-scheduler just like CPU utilization.
I have a question here. As you know, IPMI is used to get power and temperature, 
and the baremetal driver implements the IPMI functions in Nova. But the 
baremetal driver is being split out of Nova, so if I want to change something 
in the IPMI code, which part should I choose now: Nova or Ironic?


Best wishes

--fengqian
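For the collection side of the question, one hedged sketch: `ipmitool sensor` prints pipe-separated readings whether run in-band (through `/dev/ipmi0`) or out-of-band against a BMC over the network (the Ironic-style path, e.g. `ipmitool -I lanplus -H <bmc> ... sensor`). The parser below targets that general line shape; exact sensor names, units, and columns vary by vendor.

```python
def parse_ipmitool_sensors(output, wanted=("degrees C", "Watts")):
    """Parse `ipmitool sensor` output into {name: (value, unit)}.

    Each line looks roughly like:
      Inlet Temp | 24.000 | degrees C | ok | ...
    Readings of 'na' (sensor not available) are skipped, as are
    units we are not interested in (fans, voltages, ...).
    """
    readings = {}
    for line in output.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) < 3 or fields[1] == "na":
            continue
        name, value, unit = fields[0], fields[1], fields[2]
        if unit in wanted:
            readings[name] = (float(value), unit)
    return readings
```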



Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

2013-12-19 Thread Gao, Fengqian
Yes, Alan, you got it.
By providing power/temperature to the scheduler and setting a threshold or 
different weights, the scheduler can boot the VM on the most suitable node.

Thanks

--fengqian



Re: [openstack-dev] [Climate] Next two weekly meetings cancelled ?

2013-12-19 Thread Nikolay Starodubtsev
I'm on holidays till 9th January, and I don't think I'll have internet access
all the time. P.S. I'd prefer Friday evenings as the new meeting time if that
is available - it is the only day when I don't have my BJJ classes. Or we can
move to 1900-2000 UTC, which looks fine for me, or to an early European morning.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1


2013/12/19 Sylvain Bauza sylvain.ba...@bull.net

  On 19/12/2013 13:57, Dina Belova wrote:

 I have Christmas holidays till 12th January... So I don't really know if I
 will be available on 6th Jan.


 Oh, OK. Who else is still on vacation at that time?
 We can do our next meeting on 12th Jan, but I'm concerned about the
 delivery of Climate 0.1, which would be one week after.

 -Sylvain





Re: [openstack-dev] [swift] Blocking issue with ring rebalancing

2013-12-19 Thread Yuriy Taraday
On Thu, Dec 19, 2013 at 8:22 PM, Nikolay Markov nmar...@mirantis.com wrote:

 I created a bug on launchpad regarding this:
 https://bugs.launchpad.net/swift/+bug/1262166

 Could anybody please participate in discussion on how to overcome it?


Hello.

I know you've decided to dig a bit deeper on IRC, but I've taken a pass over
the rebalance code anyway.
Please take a look at https://review.openstack.org/63315. It won't magically
turn 8 minutes into 2, but it might shorten them to 6.

-- 

Kind regards, Yuriy.
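As a general technique for chasing a slowdown like this, it helps to profile the rebalance call before optimizing, so the minutes can be attributed to specific functions. A small helper around the standard library profiler (not Swift-specific code):

```python
import cProfile
import io
import pstats


def profile_top(func, *args, **kwargs):
    """Run func under cProfile; return (result, hottest-functions report).

    Useful for confirming where a slow operation (e.g. a ring
    rebalance) actually spends its time before touching the code.
    """
    prof = cProfile.Profile()
    result = prof.runcall(func, *args, **kwargs)
    buf = io.StringIO()
    pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(10)
    return result, buf.getvalue()
```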


[openstack-dev] Add no HPET option for resolving windows vm time drifting

2013-12-19 Thread 한승진
Hi all,

When we tested a Windows guest OS, we hit a time drift issue: the clock of the
Windows VM gradually lost a few seconds per minute, especially when the CPUs
were working hard. To keep the hypervisor and the Windows VM in sync, we need
to add another option to the libvirt template.

I submitted a blueprint to OpenStack Icehouse:

https://blueprints.launchpad.net/nova/+spec/add-no-hpet-option-into-guest-clock

If you're interested, please take a look and leave comments.

Regards,
John
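For reference, the `<clock>` element of a libvirt domain XML that such an option would need to emit can be sketched with ElementTree. The exact offset and timer set Nova should generate is what the blueprint decides, so this is illustrative only:

```python
import xml.etree.ElementTree as ET


def clock_element(disable_hpet=True):
    """Build a libvirt <clock> element for a Windows guest.

    Pinning the RTC to localtime with a catchup tick policy and
    marking HPET absent is a common mitigation for Windows guest
    clock drift under load.
    """
    clock = ET.Element("clock", offset="localtime")
    ET.SubElement(clock, "timer", name="rtc", tickpolicy="catchup")
    if disable_hpet:
        ET.SubElement(clock, "timer", name="hpet", present="no")
    return clock


print(ET.tostring(clock_element(), encoding="unicode"))
```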


[openstack-dev] [Nova][VMware] Deploy from vCenter template

2013-12-19 Thread Qing Xin Meng



I don't think these patches are conflicting.
I will continue to work on this patch.

https://review.openstack.org/#/c/34903/


Thanks!


Best Regards


---

Hi Arnaud,

It's really good to know that your team is proposing the vCenter driver with
OVA + Glance datastore backend support. Thanks for sharing the information.
OVA would be a good choice; it will free users from the flat-image limitation
of the current driver.

But in my opinion, it may not conflict with deploying from a template. From
an end-user perspective, if there is already a set of templates within
vCenter, it's useful to have OpenStack deploy a VM from them directly. The
user can create an empty image in Glance with only the metadata pointing to
the template name, and boot a VM from it. Alternatively, they can generate an
*.ova with a stream-optimized VMDK, place it in a certain datastore, and
deploy a VM from an image location pointing to that datastore. These are two
different usage scenarios, per my understanding.

To go further, if there were some mechanism for OpenStack to sync existing VM
templates into Glance images, it would make this function even more useful.




Best Regards
Zarric(Zhu Zhu)

From: Arnaud Legendre
Date: 2013-12-18 01:58
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][VMware] Deploy from vCenter template
Hi Qing Xin,



We are planning to address the vCenter template issue by leveraging the
OVF/OVA capabilities.
Kiran's implementation is tied to a specific vCenter and requires adding
Glance properties that are not generic.
For existing templates, the workflow will be:
. generate an *.ova tarball (containing the *.ovf descriptor + *.vmdk
stream-optimized) out of the template,
. register the *.ova as a Glance image location (using the VMware Glance
driver see bp [1]) or simply upload the *.ova to another Glance store,
. The VMware driver in Nova will be able to consume the *.ova (either
through the location or by downloading the content to the cache):  see bp
[2]. However, for Icehouse, we are not planning to actually consume the
*.ovf descriptor (work scheduled for the J/K release).


[1]
https://blueprints.launchpad.net/glance/+spec/vmware-datastore-storage-backend

[2] https://blueprints.launchpad.net/nova/+spec/vmware-driver-ova-support


If you have questions about [1], please send me an email. For [2], please
reach vuil.
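The first step of the workflow above - packing the `*.ovf` descriptor plus the stream-optimized `*.vmdk` into an `*.ova` - is mechanically simple, since an OVA is just an uncompressed tar archive with the descriptor conventionally stored first. A sketch (file names are illustrative):

```python
import os
import tarfile


def build_ova(ovf_path, vmdk_path, ova_path):
    """Pack an OVF descriptor and a stream-optimized VMDK into an OVA.

    The OVF descriptor is added first so consumers can read the
    metadata without scanning the whole archive.
    """
    with tarfile.open(ova_path, "w") as tar:  # plain tar, no compression
        tar.add(ovf_path, arcname=os.path.basename(ovf_path))
        tar.add(vmdk_path, arcname=os.path.basename(vmdk_path))
    return ova_path
```

The resulting file is what would then be registered as a Glance image location or uploaded to another Glance store.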




