On 04 Apr 2014, at 07:33, Kirill Izotov enyk...@stackstorm.com wrote:
Then, we can make the task executor interface public and allow clients to
provide their own task executors. It will then be possible for Mistral
to implement its own task executor, or several, and share the
executors between
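The pluggable-executor idea above can be sketched roughly like this; all class and method names here are hypothetical illustrations, not Mistral's actual API:

```python
import abc


class TaskExecutor(abc.ABC):
    """Hypothetical public interface a client-supplied executor would implement."""

    @abc.abstractmethod
    def run(self, task):
        """Execute a callable task and return its result."""


class LocalTaskExecutor(TaskExecutor):
    """Trivial in-process executor, for illustration only."""

    def run(self, task):
        return task()


# A client could plug in its own executor and share it across workflows:
executor = LocalTaskExecutor()
result = executor.run(lambda: 40 + 2)
print(result)  # -> 42
```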
Dmitri, nice work, will research them carefully early next week. I would ask
other folks to do the same (especially Nikolay).
Renat Akhmerov
@ Mirantis Inc.
On 03 Apr 2014, at 06:22, Dmitri Zimine d...@stackstorm.com wrote:
Two more workflows drafted - cloud cron, and lifecycle, version 1.
Clint Byrum cl...@fewbar.com wrote on 04/03/2014 07:01:16 PM:
... The whole question raises many more
questions, and I wonder if there's just something you haven't told us
about this use case. :-P
Yes, I seem to have made a muddle of things by starting in one corner of a
design space. Let
+1
On 04/03/2014 01:02 PM, Robert Collins wrote:
Getting back in the swing of things...
Hi,
like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be
Hi Steve,
your indexing idea sounds interesting, but I am not sure it would work
reliably. The kind of matching based on names of parameters and outputs and
internal get_attr uses has very strong assumptions and I think there is a
not so low risk of false positives. What if the templates includes
Glad to see this; I will be glad to contribute to it if the project moves
forward.
On Apr 4, 2014, at 10:01, Cazzolato, Sergio J sergio.j.cazzol...@intel.com
wrote:
Glad to see that; I'll certainly participate in this session.
Thanks
-Original Message-
From: Jay Pipes
Hello everyone,
Last but not least, Swift just published its first Icehouse release
candidate. You can find the tarball for 1.13.1-rc1 at:
https://launchpad.net/swift/icehouse/1.13.1-rc1
Unless release-critical issues are found that warrant a release
candidate respin, this RC1 will be formally
Hi all,
2014-04-03 18:47 GMT+02:00 Meghal Gosalia meg...@yahoo-inc.com:
Hello folks,
Here is the bug [1] which is currently not allowing a host to be part of
two availability zones.
This bug was targeted for havana.
The fix in the bug was made because it was assumed
that openstack
Resending it with the correct Cinder prefix in the subject.
Thanks,
Deepak
On Thu, Apr 3, 2014 at 7:44 PM, Deepak Shetty dpkshe...@gmail.com wrote:
Hi,
I am looking to unmount the glusterfs shares that are mounted as part
of the gluster driver, when c-vol is being restarted or Ctrl-C'ed (as in
Hi Salvatore,
On 03/04/2014 14:56, Salvatore Orlando wrote:
Hi Simon,
snip
I hope stricter criteria will be enforced for Juno; I personally think
every CI should run at least the smoketest suite for L2/L3 services (eg:
load balancer scenario will stay optional).
I had a little thinking
Hi,
I have a question regarding the ring building process in a swift cluster.
Many sources online suggest building the rings using ring-builder and scp
the generated ring files to all the nodes in the cluster.
What I'm trying to understand is if the scp step is just to simplify
things, or is it
Hello,
We had quite a lengthy discussion on this review :
https://review.openstack.org/#/c/65113/
about a patch that seb has sent to add ceph support to devstack.
The main issue seems to revolve around the fact that in devstack we
support only packages that are in the distros and not having
Hi all,
I've logged a bug in trove. I'm a little unsure if this is a bug or
feature. Please have a look at the bug @
https://bugs.launchpad.net/trove/+bug/1302376 and suggest if it is valid.
Thanks,
Shweta | Consultant Engineering
GlobalLogic
www.globallogic.com
Good day Shweta, it's definitely a bug; thanks for filing the
bug report.
Best regards,
Denis Makogon
Hello everyone,
This is regarding implementation of blueprint
https://blueprints.launchpad.net/tempest/+spec/testcases-expansion-icehouse.
As mentioned in the etherpads for this blueprint, please add your name if you
are working on any of the items mentioned in the list.
Otherwise efforts will
On 03/04/14 23:20, Jay Pipes wrote:
On Thu, 2014-04-03 at 14:41 -0500, Kevin L. Mitchell wrote:
On Thu, 2014-04-03 at 19:16 +, Cazzolato, Sergio J wrote:
Jay, thanks for taking ownership on this idea, we are really
interested to contribute to this, so what do you think are the next
steps
On 04/04/2014 07:37 AM, Sean Dague wrote:
An interesting conversation has cropped up over the last few days in -qa
and -infra which I want to bring to the wider OpenStack community. When
discussing the use of Tempest as part of the Defcore validation we came
to an interesting question:
Why does
Hello everyone,
Sahara published its first Icehouse release candidate today. The list of
bugs fixed since feature freeze and the RC1 tarball are available at:
https://launchpad.net/sahara/icehouse/icehouse-rc1
Unless release-critical issues are found that warrant a release
candidate respin,
On 03/04/14 17:53 +, Kurt Griffiths wrote:
[snip]
If elected, my priorities during Juno will include:
1. Operational Maturity: Marconi is already production-ready, but we still
have work to do to get to world-class reliability, monitoring, logging,
and efficiency.
2. Documentation: During
No problem.
Filed here: https://bugs.launchpad.net/heat/+bug/1302578 for continued
discussion.
-M
Kind Regards,
Michael D. Elder
STSM | Master Inventor
mdel...@us.ibm.com | linkedin.com/in/mdelder
Success is not delivering a feature; success is learning
Hello Vladimir,
I would prefer an agent-less node, meaning the agent is only used
under the ramdisk OS to collect hw info, to do firmware updates and to
install nodes etc. In this sense, the agent running as root is fine. Once
the node is installed, the agent should be out of the picture.
On Thu, Apr 3, 2014 at 5:42 PM, Zane Bitter zbit...@redhat.com wrote:
On 03/04/14 08:48, Doug Hellmann wrote:
On Wed, Apr 2, 2014 at 9:55 PM, Zane Bitter zbit...@redhat.com wrote:
We have an issue in Heat where the sample config generator from Oslo is
currently broken (see bug #1288586).
Opened in Launchpad: https://bugs.launchpad.net/heat/+bug/1302624
I still have concerns though about the design approach of creating a new
project for every stack and new users for every resource.
If I provision 1000 patterns a day with an average of 10 resources per
pattern, you're looking
On Apr 4, 2014, at 10:41 AM, Chuck Thier cth...@gmail.com wrote:
Howdy,
Now that swift has aligned with the other projects to use requests in
python-swiftclient, we have lost a couple of features.
1. Requests doesn't support expect: 100-continue. This is very useful for
services
Whilst looking at something unrelated in HostManager, I noticed that
HostManager.service_states appears to be unused, and decided to remove
it. This seems to have a number of implications:
1. capabilities in HostManager.get_all_host_states will always be None.
2. capabilities passed to
On Fri, Apr 4, 2014 at 9:44 AM, Donald Stufft don...@stufft.io wrote:
requests should work fine if you use the eventlet monkey patch on the
socket module prior to importing requests.
That's what I had hoped as well (and is what swift-bench did already), but
it performs the same if I monkey patch
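The import ordering Donald describes matters because a module that does `from socket import ...` at import time (as many libraries do internally) captures a reference before the patch lands. A minimal stdlib illustration of that principle, no eventlet required:

```python
# Simulate eventlet.monkey_patch(): names bound *before* the patch keep
# pointing at the original object; lookups made *after* see the patch.
from socket import socket as early_binding  # binding taken before "patching"
import socket

original = socket.socket
socket.socket = lambda *a, **kw: "green"    # stand-in for eventlet's green socket

assert socket.socket() == "green"           # late attribute lookup: patched
assert early_binding is original            # early from-import: unpatched

socket.socket = original                    # restore the real socket class
```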
Hi all,
I think it's important for our developers to publish an official Release
Note, as other core OpenStack projects do at the end of the Icehouse
development cycle; it contains the new features added and upgrade issues to
be noted by users. Anyone like to volunteer to help accomplish
+1
From: Ling Gao [mailto:ling...@us.ibm.com]
Sent: Friday, April 04, 2014 10:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic][Agent] Ironic-python-agent
Hello Vladimir,
On Fri, Apr 4, 2014 at 3:43 AM, Deepak Shetty dpkshe...@gmail.com wrote:
Shiva,
Can you tell what exactly you are trying to change in /opt/stack/ ?
My guess is that you might be running into stack.sh re-pulling the sources,
hence overriding your changes? Try with OFFLINE=True in localrc (create a
+1
The agent is a tool Ironic is using to take the place of a
hypervisor to discover and prepare nodes to receive workloads. For
hardware, this includes more work -- such as firmware flashing, bios
configuration, and disk imaging -- all of which must be done in an
OOB manner.
Hi Joshua,
Quotas will not be expanded during the scenario; they will be updated
*prior to* the scenario, with the requested values as the context of this
scenario. If values are too low, the scenario will continue to fail.
This update does not allow benchmarking of quota-update modification time.
Hi Simon,
You are absolutely right in your train of thought: unless the
third-party CI monitors and vets all the potential changes it cares
about there's always a chance something might break. This is why I
think it's important that each Neutron third party CI should not only
test Neutron
Hi All,
I was wondering if the time has come to document what exactly are we
doing with tripleo-heat-templates and merge.py[1], figure out what needs
to happen to move away and raise the necessary blueprints on Heat and
TripleO side.
(merge.py is a script we use to build the final TripleO Heat
Elections are underway and will remain open for you to cast your vote
until at least 1300 utc April 11, 2014.
We are having elections for Nova, Neutron, Cinder, Ceilometer, Heat and
TripleO.
If you are a Foundation individual member and had a commit in one of the
program's projects[0] over the
On Fri, Apr 4, 2014 at 5:19 AM, Vladimir Kozhukalov
vkozhuka...@mirantis.com wrote:
On the other hand, it is easy to imagine a situation when you want to run
agent on every node of your cluster after installing OS. It could be useful
to keep hardware info consistent (for example, many
On Fri, Apr 4, 2014 at 10:51 AM, Kurt Griffiths
kurt.griffi...@rackspace.com wrote:
It appears the current version of oslo.cache is going to bring in quite
a few oslo libraries that we would not want keystone client to depend on
[1]. Moving the middleware to a separate library would solve
Bruno,
Btw, great idea to add benchmark scenarios for quotas as well!
Best regards,
Boris Pavlovic
Excerpts from Stan Lagun's message of 2014-04-04 02:54:05 -0700:
Hi Steve, Thomas
I'm glad the discussion is so constructive!
If we add type interfaces to HOT this may do the job.
Applications in AppCatalog need to be portable across OpenStack clouds.
Thus if we use some globally-unique
On April 4, 2014 at 9:12:56 AM, Devananda van der Veen
(devananda@gmail.com) wrote:
Ironic's responsibility ends where the host OS begins. Ironic is a bare metal
provisioning service, not a configuration management service.
+1
// jim
There are lots of configuration management agents already out there (chef?
puppet? salt? ansible? ... the list is pretty long these days...) which you
can bake into the images that you deploy with Ironic, but I'd like to be
clear that, in my opinion, Ironic's responsibility ends where the host
Excerpts from Vladimir Kozhukalov's message of 2014-04-04 05:19:41 -0700:
Hello, everyone,
I'd like to involve more people to express their opinions about the way how
we are going to run Ironic-python-agent. I mean should we run it with root
privileges or not.
From the very beginning
It seems that this discussion has been split into 2 threads.
Lucas,
That's because I added a subject when I responded. :-)
Ling Gao
From: Lucas Alvares Gomes lucasago...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date:
Ironic's responsibility ends where the host OS begins. Ironic is a bare
metal provisioning service, not a configuration management service.
I agree with the above, but just to clarify I would say that Ironic
shouldn't *interact* with the host OS once it has booted. Obviously it can
still perform
On Fri, Apr 4, 2014 at 9:05 PM, Clint Byrum cl...@fewbar.com wrote:
IMO that is not really true and trying to stick all these databases into
one SQL database interface is not a use case I'm interested in
pursuing.
Indeed. Any SQL database is a useless interface. What I was trying to say
is
On 04/04/2014 05:57 PM, Dolph Mathews wrote:
tl;dr:
$ python clean_po.py PROJECT/locale/
$ git commit
The comments on bug 1299349 are already quite long, so apparently this
got lost. To save everyone some time, the fix is as easy as above. So
what's clean_po.py?
Devananda van der
I am fine with taking the approach of the user passing multiple availability
zones AZ1,AZ2 if he wants the VM to be in the intersection of AZ1 and AZ2.
It will be cleaner.
But a similar approach should also be used while setting the
default_scheduling_zone,
since we will not be able to add a host to
I think I have worked out the performance issues with eventlet and Requests
with most of it being that swiftclient needs to make use of
requests.session to re-use connections, and there are likely other areas
there that we can make improvements.
Now on to expect: 100-continue support, has anyone
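The connection-reuse fix described above boils down to routing all calls through a single `requests.Session`; a minimal sketch of the idea (the URL and function name are illustrative, not swiftclient's actual code):

```python
import requests

# One Session per client: its urllib3 connection pool keeps sockets to the
# same host open, so repeated requests skip the TCP/TLS handshake.
session = requests.Session()


def fetch(url):
    # Every call through the shared Session reuses pooled connections,
    # instead of opening a fresh one per request as bare requests.get() does.
    return session.get(url, timeout=30)
```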
On 19/02/14 02:48, Clint Byrum wrote:
Since picking up Heat and trying to think about how to express clusters
of things, I've been troubled by how poorly the CFN language supports
using lists. There has always been the Fn::Select function for
dereferencing arrays and maps, and recently we added
I found https://github.com/kennethreitz/requests/issues/713
Lukasa (https://github.com/Lukasa) commented a month ago
(https://github.com/kennethreitz/requests/issues/713#issuecomment-35594520):
There's been no progress on this, and it's not high on the list of priorities
for any of the core
(easier to insert my questions at top of discussion as they are more general)
How would test deprecations work in a branchless Tempest? Right now, there is
the discussion on removing the XML tests from Tempest, yet they are still valid
for Havana and Icehouse. If they get removed, will they
On 04/04/14 13:58, Clint Byrum wrote:
We could keep roughly the same structure: a separate template for each
OpenStack service (compute, block storage, object storage, ironic, nova
baremetal). We would then use Heat environments to treat each of these
templates as a custom resource (e.g.
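Under that approach, the Heat environment file might look something like this; the resource type names and file names below are illustrative only, not an agreed TripleO layout:

```yaml
# overcloud-env.yaml -- map each per-service template to a custom resource type
resource_registry:
  OS::TripleO::Compute: compute.yaml
  OS::TripleO::BlockStorage: block-storage.yaml
  OS::TripleO::ObjectStorage: object-storage.yaml
```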
I wonder if there is a way to do the following. I have a user A with admin
role in tenant A, and I want to create a VM in/for tenant B as user A.
Obviously, I can use A's admin privilege to add itself to tenant B, but I
want to avoid that.
Based on the policy.json file, it seems doable: