I disagree with moving this logic to tempest/services/*. The idea of these
modules is to assemble requests and return responses. Testing and verification
should be wrapped around them: either in a base class or in the tests, depending
on the situation...
--
Kind Regards
Valeriy Ponomaryov
On Thu, Mar 13, 2014 at 6:55
On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo majop...@redhat.comwrote:
I'm not familiar with unix domain sockets at a low level, but I wonder
if authentication could be achieved just with permissions (only users in
the neutron or rootwrap group accessing this service).
It can be
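A sketch of that permissions-only approach (names and layout are my assumptions, not code from any neutron patch): bind the socket, chmod it so only the owning user and group can connect, and optionally double-check the peer with SO_PEERCRED as a second layer.

```python
import os
import socket
import struct

def make_restricted_socket(path, mode=0o660):
    """Bind a unix domain socket and restrict who may connect through
    filesystem permissions: on Linux, connect() needs write access to
    the socket file, so mode 0660 limits it to the owner and group
    (e.g. a hypothetical "neutron" or "rootwrap" group)."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.bind(path)
    os.chmod(path, mode)
    sock.listen(1)
    return sock

def peer_credentials(conn):
    """Return (pid, uid, gid) of the connected peer via SO_PEERCRED,
    an extra check on top of the file permissions (Linux only)."""
    raw = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                          struct.calcsize('3i'))
    return struct.unpack('3i', raw)
```

The service would then compare the peer's gid against the allowed group before accepting commands.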
Hi Chris,
Thank you for picking it up,
-Original Message-
From: Christopher Yeoh [mailto:cbky...@gmail.com]
Sent: Thursday, March 13, 2014 1:56 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [qa][tempest] Where to do response body validation
The new tempest
Hi folks,
I have been working on enhancing the base performance of Neutron using ML2 OVS
during my free cycles for the past few weeks. The current bottleneck in
Neutron is, as we all know, the part where a neutron agent requests
security_group_rules_for_devices from the server in order to update its
On Tue, Mar 11, 2014 at 12:58 AM, Carl Baldwin c...@ecbaldwin.net wrote:
All,
I was writing down a summary of all of this and decided to just do it
on an etherpad. Will you help me capture the big picture there? I'd
like to come up with some actions this week to try to address at least
Regarding (1) [Single VM], the following use cases can supplement it:
1. Protection Group: Define the set of instances to be protected.
2. Protection Policy: Define the policy for the protection group, such as sync
period, sync priority, advanced features, etc.
3. Recovery Plan: Define the
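The three objects above could be modeled roughly like this (field names are my assumptions for illustration, not an agreed API):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProtectionGroup:
    """The set of instances to be protected."""
    instance_ids: List[str] = field(default_factory=list)

@dataclass
class ProtectionPolicy:
    """Policy for a protection group: sync period, priority, etc."""
    group: ProtectionGroup
    sync_period_s: int = 300
    sync_priority: int = 0

@dataclass
class RecoveryPlan:
    """Ordered recovery steps applied under a given policy."""
    policy: ProtectionPolicy
    steps: List[str] = field(default_factory=list)
```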
Ok, awesome. Btw, when I was writing tests for Data Flow I didn’t make
assumptions about order of tasks. You can take a look at how it’s achieved.
Renat Akhmerov
@ Mirantis Inc.
On 13 Mar 2014, at 02:04, Manas Kelshikar ma...@stackstorm.com wrote:
Works ok if I directly run nosetests from
Hi There,
Basically, we just wanted to be sure that the pre-populated information is
accurate and if there is nothing else to add you can close the bug.
Thanks,
Edgar
From: Mohammad Banikazemi m...@us.ibm.com
Date: Wednesday, March 12, 2014 8:53 PM
To: OpenStack List
Yuri, could you elaborate on your idea in detail? I'm lost at some
points with your unix domain / token authentication.
Where does the token come from?
Who starts rootwrap the first time?
If you could write a full interaction sequence, on the etherpad, from
rootwrap daemon start, to a simple
On 13 Mar 2014, at 10:40, W Chan m4d.co...@gmail.com wrote:
I can write a method in base test to start local executor. I will do that as
a separate bp.
Ok.
After the engine is made standalone, the API will communicate to the engine
and the engine to the executor via the oslo.messaging
Hello, all
Our translators start translation and I18n compliance testing on the string
freeze date.
During the translation and testing, we may report bugs.
Some bugs are about incorrect and incomprehensible messages.
Some bugs are about user-facing messages that are not marked with _().
All of these bugs might
Hi Vincent,
I feel your blueprint is interesting, too. Its objective seems similar to
the existing one, and some new APIs also look similar to existing ones. For
instance, 'RestoreFromSnapshot' looks like 'rebuild', and
'SpawnFromSnapshot' looks like 'spawn'. Does it benefit a lot, if we define
So no need to ask the same…”
Renat Akhmerov
@ Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Joshua,
Thanks for your interest and feedback.
I believe you were able to deliver your message already, we definitely hear
you. So no need to say the same stuff again and again ;) As I promised before, we
are now evaluating what’s been going on with TaskFlow for the last couple of
months and
Hi,
I have written a wiki about usb controller and usb-passthrough in
https://wiki.openstack.org/wiki/Nova/proposal_about_usb_passthrough.
Hope I can get your good advices.
Thanks,
Jeremy Liu
-Original Message-
From: Liuji (Jeremy) [mailto:jeremy@huawei.com]
Sent: Thursday,
On 12/03/14 19:19 -0700, Mark Washenberger wrote:
Hi folks,
I'd like to nominate Arnaud Legendre to join Glance Core. Over the past cycle
his reviews have been consistently high quality and I feel confident in his
ability to assess the design of new features and the overall direction for
Hello!
Recently I've discovered (and it was really surprising for me) that
horizon package isn't published on PyPi (see
http://paste.openstack.org/show/73348/). The reason why I needed to
install horizon this way is that it is desirable for muranodashboard
unittests to have horizon in the test
On 03/12/2014 06:34 PM, Mike Spreitzer wrote:
Has anybody exercised the case of OS::Heat::AutoScalingGroup scaling a
nested stack that includes a OS::Neutron::PoolMember? Should I expect
this to work?
This sort of thing works fine for us. It needs some patches that missed
Havana, though.
On Thu, Mar 13, 2014 at 01:10:06PM +0400, Timur Sufiev wrote:
Recently I've discovered (and it was really surprising for me) that
horizon package isn't published on PyPi (see
http://paste.openstack.org/show/73348/). The reason why I needed to
install horizon this way is that it is desirable
+1. Nice work Arnaud!
On Thu, Mar 13, 2014 at 5:09 PM, Flavio Percoco fla...@redhat.com wrote:
On 12/03/14 19:19 -0700, Mark Washenberger wrote:
Hi folks,
I'd like to nominate Arnaud Legendre to join Glance Core. Over the past
cycle
his reviews have been consistently high quality and I
So we already have pretty high requirements: it's basically a 16G
workstation as a minimum.
Specifically to test the full story:
- a seed VM
- an undercloud VM (bm deploy infra)
- 1 overcloud control VM
- 2 overcloud hypervisor VMs
5 VMs with 2+G RAM each.
To test the overcloud alone
On 12/03/14 18:28, Matt Riedemann wrote:
On 2/25/2014 6:36 AM, Matthew Booth wrote:
I'm new to Nova. After some frustration with the review process,
specifically in the VMware driver, I decided to try to visualise how the
review process is working across Nova. To that end, I've created 2
Has anybody exercised the case of OS::Heat::AutoScalingGroup scaling a nested
stack that includes a OS::Neutron::PoolMember? Should I expect this to work?
Hi Mike,
Yes I tested it and it works. I'm trying to build an example for heat-templates
putting it all together. I'm mostly struggling
On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo majop...@redhat.comwrote:
Yuri, could you elaborate on your idea in detail? I'm lost at some
points with your unix domain / token authentication.
Where does the token come from?
Who starts rootwrap the first time?
If you could write a
You may have noticed that gate race fails on neutron jobs seem to have
gone up a lot in the last couple of days. You aren't just imagining it...
The current top gate race is https://bugs.launchpad.net/bugs/1248757
That has a massive uptick in the last 2 days, which is pretty
concerning, as there
Hi all,
we would like to use the replication mechanism in swift to replicate the data
in two swift instances deployed in different clouds with different keystones
and administrative domains.
Is this possible with the current replication facilities, or must they
stay in the same cloud sharing the
On 13 March 2014 10:09, Matthew Booth mbo...@redhat.com wrote:
On 12/03/14 18:28, Matt Riedemann wrote:
On 2/25/2014 6:36 AM, Matthew Booth wrote:
I'm new to Nova. After some frustration with the review process,
specifically in the VMware driver, I decided to try to visualise how the
review
On 13/03/14 09:28, Akihiro Motoki wrote:
+1
In my understanding, String Freeze is a SOFT freeze, as Daisy describes.
Applying the string freeze to incorrect or incomprehensible messages is
not good from a UX point of view,
and shipping the release with such strings will make the situation
worse and
Hi,
Right now you can select running either the v1 or v2 registries but not
both at the same time. I'd like to ask for an FFE for this functionality
(Erno's patch in progress here: https://review.openstack.org/#/c/79957/)
With v1 on the road to deprecation I think this may help migrating from
a
Hello devs,
I wanted the update the analysis performed by Salvatore Orlando few
weeks ago [1]
I used the following query for Logstash [2] to detect the failures of
the last 48 hours.
There were 77 failures (40% of the total).
I classified them and obtained the following:
21% due to infra
@Sriram Thanks for the pointers! Though I'm afraid that students don't have
a Connections option. Maybe after submitting the proof of enrollment?
Cheers,
2014-03-12 22:07 GMT-03:00 Andronidis Anastasios andronat_...@hotmail.com:
Ok, thank you very much!
Anastasis
On 13 Mar 2014, at 1:58
On 03/13/2014 08:00 AM, stuart.mcla...@hp.com wrote:
Hi,
Right now you can select running either the v1 or v2 registries but not
both at the same time. I'd like to ask for an FFE for this functionality
(Erno's patch in progress here: https://review.openstack.org/#/c/79957/)
With v1 on the
FYI, here's the log for the OpenStack GSoC meeting we just wrapped up
http://paste.openstack.org/show/73389/
On Tue, Mar 11, 2014 at 1:29 PM, Davanum Srinivas dava...@gmail.com wrote:
Hi,
Mentors:
* Please click on My Dashboard then Connect with organizations and
request a connection as a
Those use cases are very important for enterprise scenarios, but
there's an important piece missing in the current OpenStack APIs: support for
application-consistent backups via Volume Shadow Copy (or other solutions) at
the instance level, including differential / incremental
On 13/03/14 09:32 -0400, Sean Dague wrote:
On 03/13/2014 08:00 AM, stuart.mcla...@hp.com wrote:
Hi,
Right now you can select running either the v1 or v2 registries but not
both at the same time. I'd like to ask for an FFE for this functionality
(Erno's patch in progress here:
Here is the latest marked fail -
http://logs.openstack.org/28/79628/4/check/check-tempest-dsvm-neutron/11f8293/
So,
looking at this a little bit, you can see from the n-cpu log that
it is getting failures when talking to neutron. Specifically,
@bnemec: I don't think that's been considered. I'm actually one of the
upstream maintainers for noVNC. The only concern that I'd have with OpenStack
adopting noVNC (there are other maintainers, as well as the author, so I'd have
to check with them as well) is that there are a few other
Howdy!
I have some tech questions I'd love some pointers on from people who've
succeeded in setting up CI for Neutron based on the upstream devstack-gate.
Here are the parts where I'm blocked now:
1. I need to enable an ML2 mech driver. How can I do this? I have been
trying to create a localrc
On 12 March 2014 17:35, Tim Bell tim.b...@cern.ch wrote:
And if the same mistake is made for a cinder volume or a trove database?
Deferred deletion for cinder has been proposed, and there have been
few objections to it... nobody has put forward code yet, but anybody
is welcome to do so.
Hi, about OpenStack and VSS. Does anyone have experience with the qemu
project's implementation of VSS support? They appear to have a within-guest
agent, qemu-ga, that perhaps can work as a VSS requestor. Does it also work
with KVM? Does qemu-ga work with libvirt (can VSS quiesce be triggered
I think a bigger question is why we are using a git version of something
outside of OpenStack.
Where is a noVNC release we can point to and use?
In Juno I'd really be pro removing all the devstack references to git
repos not on git.openstack.org, because these kinds of failures have
real impact.
On 03/12/2014 12:14 PM, Sylvain Bauza wrote:
Hi Russell,
Thanks for replying,
2014-03-12 16:46 GMT+01:00 Russell Bryant rbry...@redhat.com
mailto:rbry...@redhat.com:
The biggest concern seemed to be that we weren't sure whether Climate
makes sense as an independent project or
The easiest/quickest thing to do for icehouse would probably be to run the
initial sync in parallel like the dhcp-agent does, for this exact reason.
See: https://review.openstack.org/#/c/28914/ which did this for the
dhcp-agent.
Best,
Aaron
On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo
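The idea in that dhcp-agent patch, sketched here with the stdlib rather than the eventlet green threads the agents actually use (the helper name is mine):

```python
from concurrent.futures import ThreadPoolExecutor

def sync_routers_parallel(routers, process_router, max_workers=8):
    """Run the per-router processing concurrently instead of serially,
    so one slow router does not stall the whole initial sync."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # list() forces completion and re-raises any worker exception.
        return list(pool.map(process_router, routers))
```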
On 10 Mar 2014, at 22:54, David Kranz dkr...@redhat.com wrote:
There are a number of patches up for review that make various changes to use
six apis instead of Python 2 constructs. While I understand the desire to
get a head start on getting Tempest to run in Python 3, I'm not sure it makes
Hi,
I am a late comer in this discussion.
Can someone please point me to the design proposal documentations in
addition to the object model ?
Thanks,
Prashanth
oh and in my haste I forgot to say: thank you extremely much to everybody
who's been giving me pointers on IRC and especially to Jay for the blog
walkthrough!
On 13 March 2014 15:30, Luke Gorrie l...@tail-f.com wrote:
Howdy!
I have some tech questions I'd love some pointers on from people
Well, for a non-interactive view of things, you can use the
openstack.common.report functionality. It's currently integrated into Nova,
and I believe that the other projects are working to get it integrated as well.
To use it, you just send a SIGUSR1 to any Nova process, and a report of the
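The same signal-triggered pattern can be reproduced with the stdlib for any service; a simplified sketch (the real openstack.common.report adds configuration, package, and greenthread sections):

```python
import signal
import sys
import traceback

def install_report_handler(stream=sys.stderr):
    """Dump the stack of every thread when SIGUSR1 arrives."""
    def dump_report(signum, frame):
        for thread_id, stack in sys._current_frames().items():
            stream.write("Thread %s:\n" % thread_id)
            traceback.print_stack(stack, file=stream)
    signal.signal(signal.SIGUSR1, dump_report)
    return dump_report
```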
Hi folks,
I can't make it to the QA meeting today, so I wanted to summarize the issue
that we have with the pep8 and tempest gate. An example of the issue can be
found here:
https://review.openstack.org/#/c/79256/
This is the object model proposal:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion
From: Prashanth Hari [hvpr...@gmail.com]
Sent: Thursday, March 13, 2014 9:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Marco,
The replication *inside* Swift is not intended to move data between two
different Swift instances -- it's an internal data repair and rebalance
mechanism.
However, there is a different mechanism, called container-to-container
synchronization that might be what you are looking for. It
On 2014-03-13 09:44, Sean Dague wrote:
I think a bigger question is why are we using a git version of
something
outside of OpenStack.
Where is a noVNC release we can point to and use?
In Juno I'd really be pro removing all the devstack references to git
repos not on git.openstack.org, because
[A] The current keystone LDAP community driver returns all users that
exist in LDAP via the API call v3/users, instead of returning just users
that have role grants (similar processing is true for groups). This could
potentially be a very large number of users. We have seen large companies
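A sketch of the filtering being discussed (a hypothetical helper; keystone exposes no such filtered call today), done over the assignment list rather than in the LDAP query itself:

```python
def users_with_role_grants(all_users, assignments):
    """Reduce the full backend user list to users that actually have a
    role assignment, instead of returning every user LDAP knows about."""
    granted = {a['user_id'] for a in assignments}
    return [u for u in all_users if u['id'] in granted]
```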
Aaron,
I thought the l3-agent already did this if doing a full sync?
_sync_routers_task() -> _process_routers() -> spawn_n(self.process_router, ri)
So each router gets processed in a greenthread.
It seems like the other calls - sudo/rootwrap, /sbin/ip, etc. - are now the
limiting factor, at least on
Bruce,
Nice list of use cases; thank you for sharing. One thought
Bruce Montague bruce_monta...@symantec.com wrote on 13/03/2014 04:34:59
PM:
* (2) [Core tenant/project infrastructure VMs]
Twenty VMs power the core infrastructure of a group using a
private cloud (OpenStack in their
Hi Prabhakar,
I'm not sure the functionality is split between 'policy' and 'server' as
cleanly as you describe.
The 'policy' directory contains the Policy Engine. At its core, the policy
engine has a generic Datalog implementation that could feasibly be used by
other OS components. (I don't
Hi all, Akihiro, David,
This is regarding the review for - https://review.openstack.org/#/c/76653/
Akihiro - Thanks for the review as always and as I mentioned in the review
comment
I completely agree with you. This is a small featurette.
However, this is small in that it adds to a choicefield
I agree.
Solly - in addition to potentially 'adopting' noVNC - or as a parallel
train of thought ...
As we started working on storyboard in infra, we've started using the
bower tool for html/javascript packaging - and we have some ability to
cache the output of that pretty easily. Would you
On Thu, Mar 13, 2014 at 2:51 AM, Robert Collins
robe...@robertcollins.net wrote:
So we already have pretty high requirements - its basically a 16G
workstation as minimum.
Specifically to test the full story:
- a seed VM
- an undercloud VM (bm deploy infra)
- 1 overcloud control VM
- 2
Thanks Donagh,
I will take a look at the container-to-container synchronization to understand
if it fits my scenario.
Cheers,
Marco
On Thu, Mar 13, 2014 at 03:28:03PM +, McCabe, Donagh wrote:
Marco,
The replication *inside* Swift is not intended to move data between two
different
Because of where we are in the freeze, I think this should wait
until Juno opens to fix. Icehouse will only be compatible with
SQLA 0.8, which I think is fine. I expect the rest of the issues
can be addressed during Juno 1.
Agreed. I think we
Hello openstackers,
You can find MagnetoDB team weekly meeting notes below
Meeting summary
1. *General project status overview* (isviridov,
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-03-13-13.01.log.html#l-14,
13:02:15)
2. *MagnetoDB API Draft status*
-- edited the subject
I'm resending this question.
The issue is described in the email thread. In brief, I need to load new
extensions and it seems the mechanism driver does not support that. In order
to do that I was thinking of having a new ml2 plugin based on the existing
Ml2Plugin and adding my
Hi all,
Here is the draft for MagnetoDB API:
https://wiki.openstack.org/wiki/MagnetoDB/api
Your comments and propositions are welcome. And welcome to discuss this
draft and any other KeyValue aaS -related subjects in our IRC channel:
#magnetodb. Please note, MagnetoDB team mostly in UTC+2.
Best
@Monty: having a packaging system sounds like a good idea. Send us a pull
request on github.com/kanaka/noVNC.
Best Regards,
Solly Ross
- Original Message -
From: Monty Taylor mord...@inaugust.com
To: Sean Dague s...@dague.net, OpenStack Development Mailing List (not for
usage
You may be interested by this project as well :
https://github.com/stackforge/swiftsync
you would need to replicate your keystone in both way via mysql replication
or something like this (and have endpoint url changed as well obviously
there).
Chmouel
On Thu, Mar 13, 2014 at 5:25 PM, Marco
On 03/13/2014 12:42 PM, Dan Smith wrote:
Because of where we are in the freeze, I think this should wait
until Juno opens to fix. Icehouse will only be compatible with
SQLA 0.8, which I think is fine. I expect the rest of the issues
can be addressed during Juno 1.
Agreed. I think we have
Hello Folks,
When zeromq is used as the rpc-backend, the nova-rpc-zmq-receiver service
needs to run on every node.
zmq-receiver receives messages on tcp://*:9501 with socket type PULL and,
based on the topic name (which is extracted from the received data), it
forwards data to the respective local services, over
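That forwarding step can be sketched as a plain topic dispatch (the envelope layout and names here are assumptions for illustration, not the actual oslo zmq wire format):

```python
def dispatch(envelope, local_services):
    """Route a (topic, payload) envelope pulled off the PULL socket to
    the local service registered for the topic's base name, e.g.
    'scheduler.host1' -> the local 'scheduler' consumer."""
    topic, payload = envelope
    base = topic.split('.', 1)[0]
    handler = local_services.get(base)
    if handler is None:
        raise LookupError("no local service for topic %r" % topic)
    return handler(payload)
```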
On Thu, Mar 13, 2014 at 7:50 AM, Joe Hakim Rahme
joe.hakim.ra...@enovance.com wrote:
On 10 Mar 2014, at 22:54, David Kranz dkr...@redhat.com wrote:
There are a number of patches up for review that make various changes to
use six apis instead of Python 2 constructs. While I understand the
Hi Chmouel,
Using this approach, do I need to have the same users in both keystones?
Is there any way to map user A from cloud X to user B in cloud Y?
Our clouds have different users, and replicating the keystone could cause
some problems, not only technical ones.
Cheers,
Marco
On Thu, Mar 13, 2014
Hi Anna,
On Thu, Mar 13, 2014 at 8:36 AM, Anna A Sortland annas...@us.ibm.comwrote:
[A] The current keystone LDAP community driver returns all users that
exist in LDAP via the API call v3/users, instead of returning just users
that have role grants (similar processing is true for groups).
On 03/13/2014 12:31 PM, Thomas Goirand wrote:
On 03/12/2014 07:07 PM, Sean Dague wrote:
Because of where we are in the freeze, I think this should wait until
Juno opens to fix. Icehouse will only be compatible with SQLA 0.8, which
I think is fine. I expect the rest of the issues can be
The Oslo team is working hard to move code from the incubator into
libraries, and that work will speed up during Juno. As part of the
planning, we have been developing our deprecation policy for code in the
oslo-incubator repository. We recognize that it may take some projects
longer than others
Hi Chris,
That's great to hear. I'm looking forward to installing icehouse and testing
that out. :)
Thanks,
Kevin
From: Chris Armstrong [chris.armstr...@rackspace.com]
Sent: Wednesday, March 12, 2014 1:29 PM
To: Fox, Kevin M; OpenStack Development Mailing List
On the transport variable, the problem I see isn't with passing the
variable to the engine and executor. It's passing the transport into the
API layer. The API layer is a pecan app and I currently don't see a way
where the transport variable can be passed to it directly. I'm looking at
Therve told me he actually tested this and it works. Now if I could only
configure DevStack to install a working Neutron...
Regards,
Mike
From: Fox, Kevin M kevin@pnnl.gov
To: Chris Armstrong chris.armstr...@rackspace.com, OpenStack
Development Mailing List (not for usage
Funny this topic came up. I was just looking into some of this yesterday.
Here's some links that I came up with:
*
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sub-sect-qemu-ga-freeze-thaw.html
- Describes how
Thanks for the feedback.
Will create a design on these lines and send across for review
On Wed, Mar 12, 2014 at 3:53 PM, Tim Hinrichs thinri...@vmware.com wrote:
Hi Rajdeep,
This is a great problem to work on because it confronts one of the
assumptions we're making in Congress: that cloud
I've been playing with an icehouse build grabbed from fedorapeople. My
hypervisor platform is libvirt-xen, which I understand may be
deprecated for icehouse(?) but I'm stuck with it for now, and I'm
using VLAN networking. It almost works, but I have a problem with
networking. In havana, the VIF
Will do!
On Mar 13, 2014 10:13 AM, Solly Ross sr...@redhat.com wrote:
@Monty: having a packaging system sounds like a good idea. Send us a pull
request on github.com/kanaka/noVNC.
Best Regards,
Solly Ross
- Original Message -
From: Monty Taylor mord...@inaugust.com
To:
Hi all,
I would like to fix the direction of this thread, because it is going in the
wrong direction.
To sum up:
1) Yes, restoring already-deleted resources could be useful.
2) The current approach with soft deletion is broken by design and we should
get rid of it.
More about why I think it is broken:
On 03/12/2014 04:54 PM, Matt Riedemann wrote:
On 3/12/2014 6:32 PM, Dan Smith wrote:
I'm confused as to why we arrived at the decision to revert the commits
since Jay's patch was accepted. I'd like some details about this
decision, and what new steps we need to take to get this back in for
Thanks everyone who have joined Savanna meeting.
Here are the logs from the meeting:
Minutes:
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-03-13-18.04.html
Log:
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-03-13-18.04.log.html
It was decided to not keep
On Thu, 2014-03-13 at 23:13 +0400, Sergey Lukjanov wrote:
Thanks everyone who have joined Savanna meeting.
You mean Sahara? :P
-jay
Excerpts from Jay Pipes's message of 2014-03-12 10:58:36 -0700:
On Wed, 2014-03-12 at 17:35 +, Tim Bell wrote:
And if the same mistake is done for a cinder volume or a trove database ?
Snapshots and backups?
and bears, oh my!
+1, whether it is large data on a volume or saved state in
Excerpts from Tim Bell's message of 2014-03-12 11:02:25 -0700:
If you want to archive images per se, on deletion just export the image to a
'backup tape' (for example) and store enough of the metadata
on that 'tape' to re-insert it if this is really desired, and then delete it
from the
Hi stackers,
As a result of discussion:
[openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion
(step by step)
http://osdir.com/ml/openstack-dev/2014-03/msg00947.html
I understood that there should be another proposal about how we should
implement Restorable and Delayed Deletion
Hi All,
Just wanted to let everyone know that the Fuel Project met its 4.1
milestone on Monday, March 7th. This latest version includes (among other
things):
* Support for the OpenStack Havana 2013.2.2
https://wiki.openstack.org/wiki/ReleaseNotes/2013.2.2 release.
* Ability to stop a
The restore use case is for sure inconsistently implemented and used. I
think I agree with Boris that we treat it as separate and just move on with
cleaning up soft delete. I imagine most deployments don't like having most
of the rows in their table be useless and make db access slow? That being
On 03/13/2014 03:04 PM, Josh Durgin wrote:
These reverts are still confusing me. The use of glance's v2 api
is very limited and easy to protect from errors.
These patches use the v2 glance api for exactly one call - to get
image locations. This has been available and used by other
features
Hi Mark,
The existing v3/users API will still exist and will show all users. So you
will still be able to grant role to a user who does not have one now.
I wonder if it makes sense to add a new API that would show only users
that have role grants.
So we would have:
v3/users - list all users
Hey everyone,
Now that the thread has had enough time for people to reply it appears that the
majority of people that vocalized their opinion are in favor of a mini-summit,
preferably to occur in Atlanta days before the Openstack summit. There are
concerns however, most notably the concern
On 03/13/2014 03:24 PM, Jay Pipes wrote:
On Thu, 2014-03-13 at 23:13 +0400, Sergey Lukjanov wrote:
Thanks everyone who have joined Savanna meeting.
You mean Sahara? :P
-jay
sergey now has to put some bitcoins in the jar...
I disagree with the new dependency graph here; I don't think it's reasonable
to continue to have the Ephemeral RBD patch behind both glance v2 support and
image-multiple-location. Given the time that this has been in flight, we
should not be holding up features that do exist for features that don't.
On 3/12/2014 7:29 PM, Arnaud Legendre wrote:
Hi Matt,
I totally agree with you and actually we have been discussing this a lot
internally the last few weeks.
. As a top priority, the driver MUST integrate with oslo.vmware. This will be
achieved through this chain of patches [1]. We want
On Thu, 2014-03-13 at 20:06 +, Jorge Miramontes wrote:
Now that the thread has had enough time for people to reply it appears
that the majority of people that vocalized their opinion are in favor
of a mini-summit, preferably to occur in Atlanta days before the
Openstack summit. There are
Heh, we should have a fathomless jar for it :(
On Thu, Mar 13, 2014 at 11:30 PM, Matthew Farrellee m...@redhat.com wrote:
On 03/13/2014 03:24 PM, Jay Pipes wrote:
On Thu, 2014-03-13 at 23:13 +0400, Sergey Lukjanov wrote:
Thanks everyone who have joined Savanna meeting.
You mean Sahara? :P
On 03/13/2014 10:50 AM, Joe Hakim Rahme wrote:
On 10 Mar 2014, at 22:54, David Kranz dkr...@redhat.com wrote:
There are a number of patches up for review that make various changes to use six apis
instead of Python 2 constructs. While I understand the desire to get a head start on getting
On 03/13/2014 12:48 PM, Russell Bryant wrote:
On 03/13/2014 03:04 PM, Josh Durgin wrote:
These reverts are still confusing me. The use of glance's v2 api
is very limited and easy to protect from errors.
These patches use the v2 glance api for exactly one call - to get
image locations. This has
On Thu, Feb 27, 2014 at 3:45 AM, yongli he yongli...@intel.com wrote:
refer to :
https://wiki.openstack.org/wiki/Translations
Now some exceptions use _() and some do not. The wiki suggests not doing
that, but I'm not sure.
What's the correct way?
F.Y.I
What To Translate
At present the
On 03/13/2014 04:29 PM, David Kranz wrote:
On 03/13/2014 10:50 AM, Joe Hakim Rahme wrote:
On 10 Mar 2014, at 22:54, David Kranz dkr...@redhat.com wrote:
There are a number of patches up for review that make various changes
to use six apis instead of Python 2 constructs. While I understand