On 25/11/14 20:16 +, Nikhil Komawar wrote:
Hi all,
Please consider this email as a nomination for Erno and Alex (CC) for adding
them to the list of Glance core reviewers. Over the last cycle, both of them
have been doing good work with reviews, participating in the project
discussions as
On Tue, Nov 25, 2014 at 1:39 AM, Evgeniy L e...@mirantis.com wrote:
On Tue, Nov 25, 2014 at 10:40 AM, Andrew Woodward xar...@gmail.com wrote:
On Mon, Nov 24, 2014 at 4:40 AM, Evgeniy L e...@mirantis.com wrote:
Hi Andrew,
Comments inline.
Also could you please provide a link on
Hi,
I traced the WSME code and found a place [0] where it tries to get arguments
from the request body based on the mimetype. So it looks like WSME supports only
json, xml and “application/x-www-form-urlencoded”.
So my question is: Can we fix WSME to also support “text/plain” mimetype? I
think
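The dispatch being discussed could be sketched roughly like this (a minimal sketch in plain Python, not the actual WSME code; the function name and the text/plain branch are assumptions illustrating the proposal):

```python
import json
from urllib.parse import parse_qs

# Minimal sketch of mimetype-based argument parsing, in the spirit of
# what WSME does internally. Illustrative only: parse_args and the
# text/plain branch (the proposed addition) are not real WSME code.
def parse_args(mimetype, body):
    if mimetype == 'application/json':
        return json.loads(body)
    if mimetype == 'application/x-www-form-urlencoded':
        # parse_qs returns lists of values; take the first of each
        return {k: v[0] for k, v in parse_qs(body).items()}
    if mimetype == 'text/plain':
        # The proposed extension: pass the raw body through as one argument
        return {'body': body}
    raise ValueError('Unsupported mimetype: %s' % mimetype)
```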
+1 and +1
Cindy and Thai, thank you and welcome!
2014-11-25 2:09 GMT+03:00 David Lyle dkly...@gmail.com:
I am pleased to nominate Thai Tran and Cindy Lu to horizon-core.
Both Thai and Cindy have been contributing significant numbers of high
quality reviews during Juno and Kilo cycles. They
Hi All,
I am facing some issues with Jenkins on two of my reviews. Jenkins is failing
either on gate-tempest-dsvm-neutron-src-python-neutronclient-icehouse or
gate-tempest-dsvm-neutron-src-python-neutronclient-icehouse but I do not see
any of my code changes making them fail.
So if you can
On Fri, Oct 31 2014, Flavio Percoco wrote:
Fully agree!
The more I think about it, the more I'm convinced we should keep py26
in oslo until EOL Juno. It'll take time, it may be painful but it'll
be simpler to explain and more importantly it'll be simpler to do.
Keeping this simple will
For small installs we still have to consider the option of combining roles
and placing Zabbix on controllers. Fuel disk allocation logic
should be smart and allocate a separate disk for it where possible.
On Wednesday, November 26, 2014, Stanislaw Bogatkin sbogat...@mirantis.com
wrote:
Hi all,
As I understand, we just need to monitor one node - Fuel master. For
slave nodes we already have a solution - zabbix.
So, in that case why do we need some complicated stuff like monasca? Let's use
something small, like monit or sensu.
On Mon, Nov 24, 2014 at 10:36 PM, Fox, Kevin M
Hi Deepak,
Docs are present in any project already, according to example with manila -
https://github.com/openstack/manila/tree/master/doc/source
It is used for docs on http://docs.openstack.org/ , and everyone is able
to contribute to it.
See docs built on basis of files from manila repo:
Hi Kevin,
Oh. Yes. That could be the problem.
Thanks for pointing that out.
Regards,
Vineet Menon
On 26 November 2014 at 02:02, Chen CH Ji jiche...@cn.ibm.com wrote:
are you using libvirt? It's not implemented;
guess your bug is talking about other hypervisors?
The message was printed
Hi,
When testing high load scenarios, e.g. issuing 100 volume attachments, we are
running into timeout problems between nova, cinder and the centralized storage
backend.
Has anybody experienced similar problems?
/Tobi
https://blueprints.launchpad.net/nova/+spec/volume-status-polling
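The kind of polling that blueprint describes can be sketched roughly as follows (a hedged sketch only; wait_for_status, the status names, and the defaults are assumptions, not Nova or Cinder code):

```python
import time

# Rough sketch of status polling with an explicit timeout, in the spirit
# of the volume-status-polling blueprint. get_status stands in for a real
# Cinder API call; names and defaults here are illustrative assumptions.
def wait_for_status(get_status, wanted, timeout=300, interval=2):
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status == wanted:
            return status
        if status == 'error':
            raise RuntimeError('resource went into error state')
        time.sleep(interval)
    raise RuntimeError('did not reach %r within %ss' % (wanted, timeout))
```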
In which order are machines terminated during a scale down action in an
auto scaling group
For example, instances 1 and 2 were deployed in a stack. Instances 3 and 4
were created as a result of load.
When the load is reduced and the instances are scaled back down, which
ones will be removed? And in
Hello,
Any pointer (document and/or code pointer) related to how the different
overridden methods are getting called when a custom resource is getting
deployed in the heat stack?
Basically just tried to annotate the h-eng log on a simple,
very-first-attempt 'hello world' resource. Noticed the
I'm working on a Zabbix implementation which includes HA support.
Zabbix server should be deployed on all controllers in HA mode.
But zabbix-server will stay and the user will be able to assign this role
where he wants?
If so there will be no limitations on the roles allocation strategy that the
user can use
Maish,
by default they are deleted in the same order they were created, FIFO
style.
Best regards,
Pavlo Shchelokovskyy.
On Wed, Nov 26, 2014 at 12:24 PM, Maish Saidel-Keesing
maishsk+openst...@maishsk.com wrote:
In which order are machines terminated during a scale down action in an
auto
Can we put it as a work item for diagnostic snapshot improvements, so we
won't forget about this in 6.1?
On Tuesday, November 25, 2014, Dmitry Pyzhov dpyz...@mirantis.com wrote:
Thank you all for your feedback. Request postponed to the next release. We
will compare available solutions.
On
Vladimir,
+1 on using additional prefixes.
Please do not use Orchestrator though, especially with /Astute. Astute
is not an orchestrator since we moved all orchestration logic to Nailgun,
and that happened a long time ago already. Let's call it a task executor
instead, or Nailgun's workers.
So,
Pradip,
https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L473
Basically, it calls handle_create that might return some data, yields, and
then keeps calling check_create_complete with that data returned by
handle_create, yielding control in-between, until
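That call pattern can be sketched in a few lines (illustrative only; this mimics the shape of Heat's create task, not the actual resource.py code, and FakeResource is a made-up class):

```python
# Minimal sketch of the create flow described above: handle_create returns
# a token, then check_create_complete(token) is polled until it reports
# done, with control yielded between checks. Names mirror Heat's pattern
# but the class here is illustrative only.
class FakeResource(object):
    def __init__(self):
        self.checks = 0

    def handle_create(self):
        return {'server_id': '1234'}   # data passed to the checker

    def check_create_complete(self, token):
        self.checks += 1
        return self.checks >= 3        # pretend we are "ready" after a few polls

    def create(self):
        token = self.handle_create()
        yield                          # give up control after the initial call
        while not self.check_create_complete(token):
            yield                      # and between every status check

res = FakeResource()
steps = sum(1 for _ in res.create())   # drive the task to completion
```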
Hi Valeriy,
I know about the docs, but this was a proposal to provide a small doc which
is patch-specific, as that helps reviewers and other doc writers.
I have many times seen people asking on IRC or the list how to test a
patch, or saying "I did this with your patch but it didn't work"; such iterations
I agree, this was supposed to be small.
P.
On 11/26/2014 11:03 AM, Stanislaw Bogatkin wrote:
Hi all,
As I understand, we just need to monitor one node - Fuel master.
For slave nodes we already have a solution - zabbix.
So, in that case why do we need some complicated stuff like monasca?
On 11/26/2014 10:46 AM, Julien Danjou wrote:
On Fri, Oct 31 2014, Flavio Percoco wrote:
Fully agree!
The more I think about it, the more I'm convinced we should keep py26
in oslo until EOL Juno. It'll take time, it may be painful but it'll
be simpler to explain and more importantly it'll
At the mid-cycle, there was some discussion around using our weekly meeting
time to find 5 old reviews and assign people to shepherd those reviews -
either marking them as abandoned if there hasn't been any action, or
rounding up reviewers or people to help make changes if required to drive
it
hi there,
while working on the TripleO cinder-ha spec meant to provide HA for
Cinder via Ceph [1], we wondered how to (if at all) test this in CI, so
we're looking for some feedback
first of all, shall we make Cinder/Ceph the default for our (currently
non-voting) HA job?
On Mon, 2014-11-24 at 13:19 -0500, Jay Pipes wrote:
I think pointing out that the default failure
message for testtools.TestCase.assertEqual() uses the terms
reference
(expected) and actual is a reason why reviewers *should* ask patch
submitters to use (expected, actual) ordering.
Is
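A toy illustration of the point (a hand-rolled assert_equal, not the real testtools code; only the labeling of the first argument as the expected value mirrors it):

```python
# Toy illustration of why argument order matters: the failure message
# labels the first argument as the expected (reference) value, as
# testtools.TestCase.assertEqual does. Not the real testtools code.
def assert_equal(expected, actual):
    if expected != actual:
        raise AssertionError(
            '%r != %r: expected %r, got %r' % (expected, actual, expected, actual))

# Swapping the arguments produces a misleading message:
try:
    assert_equal('actual-value', 'expected-value')   # reversed on purpose
except AssertionError as exc:
    message = str(exc)
```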
Thanks Pavlo.
One particular thing I did not comprehend is:
Suppose my resource code is something like:
class HelloWorld(resource.Resource):
def __init__(self, controller, deserializer, serializer=None):
LOG.info("[pradipm]: Inside HelloWorld ctor")
Mike,
from DevOps point of view it doesn't really matter when we do
branching. This is the process we need to perform anyway and this
partial branching doesn't change too much for us.
Although there might be several technical questions like:
1) When do we create the /6.1 mirror?
2) Should we create
Hi, all!
Recently I started working on the "nailgun-api log is too verbose" bug
(https://bugs.launchpad.net/fuel/+bug/1393148) and collected your
feedback about the PoC (https://review.openstack.org/#/c/137053/) as follows:
1) We cannot always delete or cut messages received from nailgun-agent
because of
Evgeniy,
Thanks a lot!
On Mon, Nov 24, 2014 at 5:15 PM, Evgeniy L e...@mirantis.com wrote:
Hi Dmitry,
Our current validation implementation is based on jsonschema,
we will figure out how to hack/configure it to provide more human
readable message
Thanks,
On Mon, Nov 24, 2014 at 2:34 PM,
The current behavior is not flexible for customers; I see that we have a
blueprint that wants to enhance this behavior.
https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources
https://wiki.openstack.org/wiki/Heat/AutoScaling
In Use Case section, we have the following:
Hi,
On Wed, Nov 26, 2014 at 12:48 AM, Georgy Okrokvertskhov
gokrokvertsk...@mirantis.com wrote:
Hi,
In Murano we did a couple of projects related to networking orchestration. As NFV
Can you tell us more about those projects? Does it include
multi-datacenter use cases?
is a quite broad term I can
Hi together,
is there a way to use Environment variables in the local.conf
post-config section?
On my system (stable/juno devstack) not the content of the variable, but
the variable name itself is being inserted into the config file.
So e.g.
[[post-config|$NOVA_CONF]]
[DEFAULT]
So then in the end, there will be 3 monitoring systems to learn, configure, and
debug? Monasca for cloud users, zabbix for most of the physical systems, and
sensu or monit for the small stuff?
Seems very complicated.
If not just monasca, why not the zabbix that's already being deployed?
Thanks,
Kevin
Hi all!
As our state machine and discovery discussion proceeds, I'd like to ask
your opinion on whether we need an IntrospectionInterface
(DiscoveryInterface?). Current proposal [1] suggests adding a method for
initiating a discovery to the ManagementInterface. IMO it's not 100%
correct, because:
On 11/25/2014 03:28 AM, Flavio Percoco wrote:
On 24/11/14 08:56 -0500, Sean Dague wrote:
Having XML payloads was never a universal part of OpenStack services.
During the Icehouse release the TC declared that being an OpenStack
service requires having a JSON REST API. Projects could do what
I envision it in the following way:
1. stable/6.0 is created for fuel-main along with a stable branch for
other repos under consideration, like fuel-web
2. In stable/6.0 of fuel-main, config.mk should be changed to refer to
stable/6.0 for one of the repos, like fuel-web. master
Hi Gary, All,
This is with reference to blueprint - L3 router Service Type Framework and
corresponding development at github repo.
I noticed that the patch was abandoned due to inactivity. Wanted to know
if there is a specific reason for which the development was put on hold?
I am working
On 26/11/2014 14:50, Jay Lau wrote:
The current behavior is not flexible for customers; I see that we have
a blueprint that wants to enhance this behavior.
https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources
https://wiki.openstack.org/wiki/Heat/AutoScaling
In Use Case section,
On Wed, Nov 26 2014, Andreas Jaeger wrote:
The libraries have 2.6 support enabled as discussed - but if indeed some
are missing, please send patches,
So to recap, it seems to me the plan is to keep all Oslo lib with
Python 2.6 so we don't have any transient dependency problem with stable
in
On 11/26/2014 02:48 PM, Julien Danjou wrote:
On Wed, Nov 26 2014, Andreas Jaeger wrote:
The libraries have 2.6 support enabled as discussed - but if indeed some
are missing, please send patches,
So to recap, it seems to me the plan is to keep all Oslo lib with
Python 2.6 so we don't have
On 11/26/2014 06:20 AM, Nicolas Trangez wrote:
On Mon, 2014-11-24 at 13:19 -0500, Jay Pipes wrote:
I think pointing out that the default failure
message for testtools.TestCase.assertEqual() uses the terms
reference
(expected) and actual is a reason why reviewers *should* ask patch
submitters to
Hi,
It should work with your current local.conf. You may be facing this bug :
https://bugs.launchpad.net/devstack/+bug/1386413
Jordan
- Original Message -
From: Andreas Scheuring scheu...@linux.vnet.ibm.com
To: openstack-dev openstack-dev@lists.openstack.org
Sent: Wednesday, 26
Thanks Pavlo.
Is there any reason why FIFO was chosen?
Maish
On 26/11/2014 12:30, Pavlo Shchelokovskyy wrote:
Maish,
by default they are deleted in the same order they were created,
FIFO style.
Best regards,
Pavlo Shchelokovskyy.
On Wed, Nov 26, 2014 at 12:24 PM, Maish Saidel-Keesing
On Wed, 2014-11-26 at 08:54 -0500, Jay Pipes wrote:
On 11/26/2014 06:20 AM, Nicolas Trangez wrote:
On Mon, 2014-11-24 at 13:19 -0500, Jay Pipes wrote:
I think pointing out that the default failure
message for testtools.TestCase.assertEqual() uses the terms
reference
(expected) and
Hi,
Some days ago, a bunch of Nova specs were approved for Kilo. Among them
was https://blueprints.launchpad.net/nova/+spec/use-libvirt-storage-pools
Now, while I do recognize the wisdom of using storage pools, I do see a
couple of possible problems with this, especially in the light of my
On Wed, Nov 26, 2014 at 08:54:35AM -0500, Jay Pipes wrote:
It's not about an equality condition.
It's about the message that is produced by testtools.TestCase.assertEqual(),
and the helpfulness of that message when the order of the arguments is
reversed.
This is especially true with large
On 11/26/2014 09:28 AM, Nicolas Trangez wrote:
On Wed, 2014-11-26 at 08:54 -0500, Jay Pipes wrote:
On 11/26/2014 06:20 AM, Nicolas Trangez wrote:
On Mon, 2014-11-24 at 13:19 -0500, Jay Pipes wrote:
I think pointing out that the default failure
message for testtools.TestCase.assertEqual() uses
That's it. Thanks!
--
Andreas
(irc: scheuran)
On Wed, 2014-11-26 at 15:10 +0100, jordan pittier wrote:
Hi,
It should work with your current local.conf. You may be facing this bug :
https://bugs.launchpad.net/devstack/+bug/1386413
Jordan
- Original Message -
From: Andreas
I tend to agree with Morgan. There are resources and there are users.
And there is something in the middle that says which users can access
which resources. It might be an ACL, a RBAC role, or a set of ABAC
attributes, or something else (such as a MAC policy). So to my mind this
middle bit, whilst
+1
- Original Message -
From: Marc Koderer m...@koderer.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Wednesday, November 26, 2014 7:58:06 AM
Subject: Re: [openstack-dev] [QA][Tempest] Proposing Ghanshyam Mann for Tempest
So all of the current core team members have voted unanimously in favor of
adding Ghanshyam to the team.
Welcome to the team Ghanshyam.
-Matt Treinish
On Wed, Nov 26, 2014 at 09:57:10AM -0500, Attila Fazekas wrote:
+1
- Original Message -
From: Marc Koderer m...@koderer.com
To:
Hi,
do you think we could backport this bug to the devstack stable/juno
release?
https://review.openstack.org/#/c/131334/
This bug prevents people from using the local.conf from icehouse when they
make use of variables for defining configuration values.
An example. The local.conf looks like
- Original Message -
From: Vineet Menon mvineetme...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Wednesday, 26 November, 2014 5:14:09 AM
Subject: Re: [openstack-dev] [nova] nova host-update gives error 'Virt
On 26/11/14 09:13, Maish Saidel-Keesing wrote:
Thanks Pavlo.
Is there any reason why FIFO was chosen?
I believe that this was the original termination policy on AWS, and that
was the reason we chose it. It was used on AWS because if you deleted an
instance that was just created you would be
Hi Everyone,
I figured I'd send a quick announcement that we won't be having a meeting this
week. The next meeting will be next week, Dec. 4th at 17:00 UTC.
Thanks,
-Matt Treinish
___
OpenStack-dev
On Nov 26, 2014, at 3:49 AM, Renat Akhmerov rakhme...@mirantis.com wrote:
Hi,
I traced the WSME code and found a place [0] where it tries to get arguments
from the request body based on the mimetype. So it looks like WSME supports
only json, xml and “application/x-www-form-urlencoded”.
Folks,
Maybe I understand some things wrong, but Zabbix is a different story.
We deploy Zabbix per cluster, so it doesn't monitor *all* slaves
or the master node. It monitors only one cluster.
Therefore I see no reason to choose Zabbix over monit. I mean, it
shouldn't be We MUST use Zabbix
We want to monitor Fuel master node while Zabbix is only on slave nodes
and not on master. The monitoring service is supposed to be installed on
Fuel master host (not inside a Docker container) and provide basic info
about free disk space, etc.
P.
On 11/26/2014 02:58 PM, Jay Pipes wrote:
On
Hi all,
Is there a way to simulate thousands or millions of compute nodes? Maybe we
could have many fake nova-compute services on one physical machine. This way,
other nova components would be under pressure from thousands of compute
services, and this could help us find more problems from large-scale
I forgot to bring the topic up last week, but this week we have a
holiday in the US that conflicts with the weekly meeting, so I have
cancelled it.
-Ben
Hi,
you can still add your own service plugin, as a mixin of
L3RouterPlugin (have a look at brocade's code).
AFAIU the service framework would manage the coexistence of several
implementations of a single service plugin.
This is currently not prioritized by neutron. This kind of work might
restart in
Hi,
I tried to package suds-jurko. I was first happy to see that there was
some progress to make things work with Python 3. Unfortunately, the
reality is that suds-jurko has many issues with Python 3. For example,
it has many:
except Exception, e:
as well as many:
raise Exception, 'Duplicate
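For reference, the Python 3 compatible spellings of those two constructs, which is the usual porting fix, look like this (the exception message here is just a placeholder, not taken from suds-jurko):

```python
# Python 3 compatible forms of the two py2-only constructs quoted above;
# these also work on Python 2.6+.
try:
    raise Exception('Duplicate element')   # was: raise Exception, 'Duplicate element'
except Exception as e:                     # was: except Exception, e:
    caught = str(e)
```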
On 11/26/2014 10:22 AM, Przemyslaw Kaminski wrote:
We want to monitor Fuel master node while Zabbix is only on slave nodes
and not on master. The monitoring service is supposed to be installed on
Fuel master host (not inside a Docker container) and provide basic info
about free disk space, etc.
Hi,
I would do both to compare. monit and Sensu have their own advantages.
--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser
On Wed, Nov 26, 2014 at 4:22 PM, Przemyslaw Kaminski pkamin...@mirantis.com
wrote:
We want to monitor Fuel master node while Zabbix is only on slave nodes
and
On 11/25/2014 11:54 AM, Solly Ross wrote:
I can't comment on other projects, but Nova definitely needs the soft
delete in the main nova database. Perhaps not for every table, but
there is definitely code in the code base which uses it right now.
Search for read_deleted=True if you're curious.
On 11/26/2014 02:20 PM, Dmitry Tantsur wrote:
Hi all!
As our state machine and discovery discussion proceeds, I'd like to ask
your opinion on whether we need an IntrospectionInterface
(DiscoveryInterface?). Current proposal [1] suggests adding a method for
initiating a discovery to the
As for me - zabbix is overkill for one node. Zabbix Server + Agent +
Frontend + DB + HTTP server, and all of it for one node? Why not use
something that was developed for monitoring one node, doesn't have many
deps and works out of the box? Not necessarily Monit, but something similar.
On Wed, Nov
On Nov 26, 2014, at 10:34 AM, Thomas Goirand z...@debian.org wrote:
Hi,
I tried to package suds-jurko. I was first happy to see that there was
some progress to make things work with Python 3. Unfortunately, the
reality is that suds-jurko has many issues with Python 3. For example,
it
On 11/26/2014 07:54 AM, Jay Pipes wrote:
On 11/26/2014 06:20 AM, Nicolas Trangez wrote:
On Mon, 2014-11-24 at 13:19 -0500, Jay Pipes wrote:
I think pointing out that the default failure
message for testtools.TestCase.assertEqual() uses the terms
reference
(expected) and actual is a reason
Jay,
Fuel uses a watchdog service for containers to restart them in case of issues.
We have the same problem with containers when the disk is full.
--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser
On Wed, Nov 26, 2014 at 4:39 PM, Jay Pipes jaypi...@gmail.com wrote:
On 11/26/2014 10:22
Monit is easy and is used to control states of Compute nodes. We can adopt
it for master node.
--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser
On Wed, Nov 26, 2014 at 4:46 PM, Stanislaw Bogatkin sbogat...@mirantis.com
wrote:
As for me - zabbix is overkill for one node. Zabbix
On 11/25/2014 10:58 PM, Ian Wienand wrote:
Hi,
My change [1] to enable a consistent tracing mechanism for the many
scripts diskimage-builder runs during its build seems to have hit a
stalemate.
I hope we can agree that the current situation is not good. When
trying to develop with
I think that 6 am for US west works much better than 3 am for Saratov.
So, I'm ok with keeping current time and add 1400 UTC.
18:00UTC: Moscow (9pm) China(2am) US West(10am)/US East (1pm)
14:00UTC: Moscow (5pm) China(10pm) US (W 6am / E 9am)
I think it's the best option to make all of
On 11/26/2014 07:48 AM, Julien Danjou wrote:
On Wed, Nov 26 2014, Andreas Jaeger wrote:
The libraries have 2.6 support enabled as discussed - but if
indeed some are missing, please send patches,
So to recap, it seems to me the plan is to keep
On 26/11/14 09:33, Louis Taylor wrote:
On Wed, Nov 26, 2014 at 08:54:35AM -0500, Jay Pipes wrote:
It's not about an equality condition.
It's about the message that is produced by testtools.TestCase.assertEqual(),
and the helpfulness of that message when the order of the arguments is
reversed.
Hi
Is there a Rally scenario in the works where we create N networks and associate
N VMs with each network?
This would be a decent stress test of neutron.
Is there any such scale scenario in the works?
I see a scenario for N networks and subnet creation, and a separate one for N
VM bootups.
I am looking for
Hi Boris
Looks like this would require changes in key portions of Rally infra. Need some
more time getting the hang of Rally by committing a few scenarios before I make
infra changes
Ajay
From: Boris Pavlovic bo...@pavlovic.memailto:bo...@pavlovic.me
Reply-To: OpenStack Development Mailing
Looks like the same bug is affecting oslo libraries and clients (likely anyone
with that similarly named icehouse job):
https://bugs.launchpad.net/tempest/+bug/1395368 (likely neutron's real
issue), see bug for a potential resolution review that seems to be going
through the tubes.
ER query
Hi folks,
We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.
Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20141127T18
--
Sincerely yours,
Sergey Lukjanov
On 11/26/2014 11:54 AM, Sergii Golovatiuk wrote:
Jay,
Fuel uses watchdog service for container to restart it in case of
issues. We have the same problem with containers when disk is full
I see. I guess I don't quite understand why Zabbix isn't just used for
everything -- after all, the
On 11/26/2014 01:10 PM, Sergey Lukjanov wrote:
Hi folks,
We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.
Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings
+1. In the ODL case you would just want a completely separate L3 plugin.
On Wed, Nov 26, 2014 at 7:29 AM, Mathieu Rohon mathieu.ro...@gmail.com
wrote:
Hi,
you can still add your own service plugin, as a mixin of
L3RouterPlugin (have a look at brocade's code).
AFAIU service framework would
Hi Mathieu,
Can you tell us more about those projects? Does it include
multi-datacenter use cases?
Most of this work was done as custom projects for customers. I have to
ask them for permission to share details.
We do not support multi-datacenter placement officially, but this feature
was
- Original Message -
From: Deepak Shetty dpkshe...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Hi stackers,
I was having this thought which I believe applies to all projects of
openstack (hence All in the subject
Hello,
Thanks to those who I met personally at the Summit for your feedback on the
project.
For those that don’t know what Cue is, we’re building a Message Broker
Provisioning service for Openstack. More info can be found here:
https://wiki.openstack.org/wiki/Cue
Since the summit, we’re
On 11/26/2014 09:52 AM, David Chadwick wrote:
I tend to agree with Morgan. There are resources and there are users.
And there is something in the middle that says which users can access
which resources. It might be an ACL, a RBAC role, or a set of ABAC
attributes, or something else (such as a
There is already an out-of-tree L3 plugin, and as part of the plugin
decomposition work, I'm planning to use this as the base for the new
ODL driver in Kilo. Before you file specs and BPs, we should talk a
bit more.
Thanks,
Kyle
[1] https://github.com/dave-tucker/odl-neutron-drivers
On Wed, Nov
We talked about this in the meeting this week, but just for the record I
should be able to make it too.
-Ben
On 11/21/2014 11:07 AM, Doug Hellmann wrote:
We have a bit of a backlog in the Oslo review queue. Before we add a bunch of
new reviews for Kilo work, I’d like to see if we can clear
On 11/14/2014 08:38 AM, Doug Hellmann wrote:
On Nov 13, 2014, at 8:47 PM, Jamie Lennox jamielen...@redhat.com wrote:
Hi all,
To implement kite we need the ability to sign and encrypt the message and the
message data. This needs to happen at a very low level in the oslo.messaging
stack.
Hi!
Some days ago, a bunch of Nova specs were approved for Kilo. Among them was
https://blueprints.launchpad.net/nova/+spec/use-libvirt-storage-pools
Now, while I do recognize the wisdom of using storage pools, I do see a
couple of possible problems with this, especially in the light of my
Thanks for the note, it sounds like we could cancel the meeting this week
because of it... Anybody except Russian team folks planning to attend the
meeting this week?
On Wed, Nov 26, 2014 at 9:44 PM, Matthew Farrellee m...@redhat.com wrote:
On 11/26/2014 01:10 PM, Sergey Lukjanov wrote:
Hi
On 11/25/2014 09:34 PM, Mike Bayer wrote:
On Nov 25, 2014, at 8:15 PM, Ahmed RAHAL ara...@iweb.com wrote:
Hi,
Le 2014-11-24 17:20, Michael Still a écrit :
Heya,
This is a new database, so its our big chance to get this right. So,
ideas welcome...
Some initial proposals:
- we do what we
Hi,
my experience is that soft delete is important to keep a record of deleted
instances and their characteristics.
In fact in my organization we are obliged to keep these records for several
months.
However, it would be nice if after a few months we were able to purge the
DB with a nova tool.
In the
On 11/20/2014 08:12 AM, Sandy Walsh wrote:
Hey y'all,
To avoid cross-posting, please inform your -infra / -operations buddies about
this post.
We've just started thinking about where notification schema files should live
and how they should be deployed. Kind of a tricky problem. We could
Precisely. Why is the RDBMS the thing that is used for archival/audit
logging? Why not a NoSQL store or a centralized log facility? All that would
be needed would be for us to standardize on the format of the archival
record, standardize on the things to provide with the archival record
On 11/26/2014 03:39 PM, Belmiro Moreira wrote:
Hi,
my experience is that soft delete is important to keep a record of
deleted instances and their characteristics.
In fact in my organization we are obliged to keep these records for
several months.
However, it would be nice if after a few months we
Hi,
Sahara is broken in stable/juno now by new alembic release (unit tests).
The patch is already done and approved for master [0] and I've already
backported it to the stable/juno branch [1].
[0] https://review.openstack.org/#/c/137035/
[1] https://review.openstack.org/#/c/137469/
P.S. If
Mike Bayer wrote:
Precisely. Why is the RDBMS the thing that is used for archival/audit logging?
Why not a NoSQL store or a centralized log facility? All that would be needed
would be for us to standardize on the format of the archival record,
standardize on the things to provide with the
On Nov 20, 2014, at 4:06 PM, Eoghan Glynn
egl...@redhat.commailto:egl...@redhat.com wrote:
How about allowing the caller to specify what level of detail
they require via the Accept header?
▶ GET /prefix/resource_name
Accept: application/json; detail=concise
The Accept request-header field can
Hi All-
Daviey was an original member of the stable-maint team and one of the
driving forces behind the creation of the team and branches back in the
early days. He was removed from the team later on during a pruning of
inactive members. Recently, he has begun focusing on the stable branches
"detail=concise" is not a media type and, looking at the grammar in the RFC, it
wouldn’t be valid.
I think the grammar would allow for application/json; detail=concise. See the
last line in the definition of the media-range nonterminal in the grammar
(copied below for convenience):
Accept
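To make the point concrete, a media-range with a parameter splits cleanly into a media type plus a parameter map (a hand-rolled toy parser for illustration, not a full implementation of the RFC grammar):

```python
# Toy parser showing that a media-range may carry parameters, as the
# grammar quoted above allows: "application/json; detail=concise" splits
# into a media type plus a parameter map. Illustration only.
def parse_media_range(value):
    parts = [p.strip() for p in value.split(';')]
    media_type, params = parts[0], {}
    for p in parts[1:]:
        if '=' in p:
            k, _, v = p.partition('=')
            params[k.strip()] = v.strip()
    return media_type, params
```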