Hello,
Is there any documentation available that can be followed to start writing
a new service from scratch?
Thanks,
Pradip
Rich,
Yes, I am adding this ability to the keystone client library and then to osc.
Henry
On 17 Mar 2015, at 20:17, Rich Megginson rmegg...@redhat.com wrote:
On 03/17/2015 01:26 PM, Henry Nash
wrote:
Hi
What does "Flavor sizes" include? Memory, CPU count? Is there
wide enough interest in other measures of performance or compatibility, like:
- virtualization type: none (hardware/metal), xen, kvm, hyperv
- cpu speed, cache or some form of performance index
- volume types: SATA, SSD, iSCSI, and a
=
KVM Forum 2015: Call For Participation
August 19-21, 2015 - Sheraton Seattle - Seattle, WA
(All submissions must be received before midnight May 1, 2015)
=
KVM is an
Hi folks,
before you say «romcheg, go away and never come back again!», please read the
story that caused me to propose this and the proposed solution. Perhaps it
makes you reconsider :)
As you know, for various reasons, among which are being able to set up
everything online and bringing
Hello all,
I am trying to debug the L3-agent code with PyCharm, but the debugger
doesn't stop on my breakpoints.
I have enabled PyCharm's gevent-compatible debugging, but that doesn't solve
the issue.
(I am able to debug neutron server correctly)
Anyone might know what is the problem?
Thanks
Gal.
On Wed, Mar 18, 2015 at 08:33:26AM +0100, Thomas Herve wrote:
Interesting bug. I think I agree with you that there isn't a good solution
currently for instances that have a mix of shared and not-shared storage.
I'm curious what Daniel meant by saying that marking the disk shareable is
On 17 March 2015 at 22:02, Davis, Amos (PaaS-Core) amos.steven.da...@hp.com
wrote:
Ceph/Cinder:
LVM or other?
SCSI-backed?
Any others?
I'm wondering why any of the above matter to an application. The entire
point of cinder is to abstract those details from the application. I'd be
very
Hello OpenStackers!
The nomination deadline has passed, and Sumit Naiksatam is the
uncontested PTL of OpenStack GBP!
Congratulations Sumit and all the very best!
Regards
Malini
From: Bhandaru, Malini K
Sent: Wednesday, March 11, 2015 2:18 AM
To: OpenStack Development Mailing List
I think that both ways of doing this should be supported.
Decorated private methods make sense if the different microversions
have nicely interchangeable bits of functionality but not if one of
the private methods would have to be a no-op. A method which just
passes is noise. Additionally there
[Joe]: For reliability purposes, I suggest that the Keystone client should
provide a fail-safe design: a primary Keystone server, a second Keystone server
(or even a third Keystone server). If the primary Keystone server is out of
service, then the Keystone client will try the second Keystone
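A client-side failover along these lines could be sketched as below. This is illustrative only: the endpoint URLs and the `attempt` callable are hypothetical stand-ins, not part of the real keystoneclient API.

```python
# Illustrative sketch of client-side failover across Keystone endpoints.
# The URLs and the attempt() callable are hypothetical placeholders,
# not keystoneclient API.

ENDPOINTS = [
    "https://keystone-primary.example.com:5000/v3",
    "https://keystone-secondary.example.com:5000/v3",
]

def with_failover(endpoints, attempt):
    """Call attempt(endpoint) for each endpoint in order and return the
    first successful result; re-raise the last error if all of them fail."""
    last_error = None
    for endpoint in endpoints:
        try:
            return attempt(endpoint)
        except ConnectionError as exc:  # substitute the real client's errors
            last_error = exc
    raise last_error
```

A real implementation would also need to decide when to fail back to the primary, and which exception types actually count as "server out of service".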
In my opinion you have got into this situation because your federation
trust model is essentially misguided. As I have been arguing since the
inception of federated Keystone, you should have rules for trusted IdPs
(already done), trusted attributes (not done), and then one set of
mapping rules
On Tue, Mar 17, 2015 at 01:33:26PM -0700, Joe Gordon wrote:
On Thu, Jun 19, 2014 at 1:38 AM, Daniel P. Berrange berra...@redhat.com
wrote:
On Wed, Jun 18, 2014 at 11:09:33PM -0700, Rafi Khardalian wrote:
I am concerned about how block migration functions when Cinder volumes
are
Hi Li,
I am using a fresh devstack with Horizon deployed as part of devstack.
I am running Sahara separately from the command line from the git
sources (master branch).
I use a little script to register the Sahara endpoint so that Horizon
sees it.
The only change I had to make was to
Copying my response on the review below:
Yes that completely makes sense Sean. In our original proposal we wanted
to allow the user to initiate a subnet-create without providing a CIDR,
and have an 'ipv6_pd_enabled' flag which could be set in the API call to
tell Neutron that this particular
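As a sketch of what such a request might look like: note the `ipv6_pd_enabled` flag is the proposal under discussion, not a settled API, and `NETWORK_ID` is a placeholder.

```python
# Hypothetical request body for the proposed CIDR-less subnet-create using
# the 'ipv6_pd_enabled' flag from this thread; not a finalized Neutron API.
import json

subnet_request = {
    "subnet": {
        "network_id": "NETWORK_ID",   # placeholder
        "ip_version": 6,
        "ipv6_pd_enabled": True,      # ask Neutron to obtain a prefix via PD
        # no "cidr" key: the prefix would come from prefix delegation
    }
}
payload = json.dumps(subnet_request)
```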
Thank you!
On Wed, Mar 18, 2015 at 8:28 AM, Sergey Lukjanov slukja...@mirantis.com
wrote:
The PTL candidacy proposal time frame ended and we have only one candidate.
So, Serg Melikyan, my congratulations!
Results documented in
Hi all,
I recently posted this comment in the review for
https://review.openstack.org/#/c/158697/,
and wanted to post it here so that people can respond. Basically, I have
concerns that I raised during the spec submission process
(https://review.openstack.org/#/c/93054/).
I'm still not totally
Hello everyone,
I am having an issue using the RPCClient of the oslo.messaging package
delivered through the stable/icehouse release of devstack (v 1.4.1).
With this simple script:
import sys
from oslo.config import cfg
from oslo import
On 03/17/2015 09:13 AM, Zane Bitter wrote:
On 16/03/15 16:38, Ben Nemec wrote:
On 03/13/2015 05:53 AM, Jan Provaznik wrote:
On 03/10/2015 05:53 PM, James Slagle wrote:
On Mon, Mar 9, 2015 at 4:35 PM, Jan Provazník jprov...@redhat.com wrote:
Hi,
it would make sense to have a library for the
If you are not seeing the horizon panels for Sahara, I believe you are
seeing https://bugs.launchpad.net/horizon/+bug/1429987
The fix for that was merged on March 9
https://review.openstack.org/#/c/162736/
There are several bugs and fixes around the switch of the endpoint type
from
I haven't tested yet (and someone should) that it does all JUST WORK,
but that's easy: put an environment marker in a requirements.txt file
like so:
argparse; python_version < '3'
I think the last time this came up the feature wasn't available in pip
yet, and so using separate files was
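For illustration, a minimal evaluator for this kind of marker might look like the sketch below. It handles only the simple `python_version` comparison form; pip's real marker implementation is far more complete.

```python
# Sketch only: evaluate a tiny subset of environment markers of the form
# "pkg; python_version < '3'" as used in requirements.txt files.
import re
import sys

def marker_applies(marker, python_version=None):
    """Evaluate a minimal python_version comparison marker."""
    if python_version is None:
        python_version = "%d.%d" % sys.version_info[:2]
    m = re.match(r"python_version\s*(<=|>=|==|<|>)\s*['\"]([\d.]+)['\"]",
                 marker.strip())
    if not m:
        raise ValueError("unsupported marker: %r" % marker)
    op, target = m.groups()
    key = lambda v: tuple(int(p) for p in v.split("."))
    a, b = key(python_version), key(target)
    return {"<": a < b, "<=": a <= b, "==": a == b, ">=": a >= b, ">": a > b}[op]

def filter_requirements(lines, python_version=None):
    """Keep requirement names whose marker (if any) applies."""
    kept = []
    for line in lines:
        if ";" in line:
            name, marker = line.split(";", 1)
            if not marker_applies(marker, python_version):
                continue
            line = name
        kept.append(line.strip())
    return kept
```

So `argparse` stays in the install set on Python 2.7 (where it is not in the stdlib) and is dropped on Python 3.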
On Wed, Mar 18, 2015 at 06:45:59AM PDT, John Davidge (jodavidg) wrote:
In the IPv6 meeting yesterday you mentioned doing this
with an extension rather than modifying the core API. Could you go into
some detail about how you see this extension looking?
The easiest is to evaluate the REST API
On 03/16/2015 03:55 PM, aburluka wrote:
Hello Nova!
I'd like to ask the community to help me with some unclear things. I'm
currently working on adding persistent storage support into a parallels
driver.
I'm trying to start VM.
nova boot test-vm --flavor m1.medium --image centos-vm-32 --nic
Hi,
I think I've found some bugs around host aggregates in the documentation,
curious what people think.
The docs at
http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html#host-aggregates
contain this:
nova aggregate-create name availability-zone
I assume that you considered a situation when we have a common repository
with RPMs for Fuel master and for nodes.
There are some plans (unfortunately I do not know details, so maybe someone
from OSCI could tell more) to split those repositories. How this workflow
will work with those separated
What you're proposing quickly becomes an authorization question. "What
capabilities can this service provide?" is a far less useful question than
"What capabilities is the user authorized to consume?" More generally, why
would you advertise any capability that the user is going to receive a
4xx/5xx
Yesterday the TC approved the python-openstackclient project as an
official OpenStack project. The governance change also included the
previously discussed move of openstack/cliff from the Oslo team
over to the OSC team. I've updated gerrit to add
python-openstackclient-core to cliff-core and
On 2015-03-18 13:41:03 +0800 (+0800), Lily.Sing wrote:
I follow the account setup steps here
http://docs.openstack.org/infra/manual/developers.html#account-setup and
it says the username for review.openstack.org should be the same as
on Launchpad.
Well, it says "The first time you sign into ..."
I responded in the gdoc. Here’s a copy.
One of my goals for delegation is to avoid asking people to write policy
statements specific to any particular domain-specific solver. People ought to
encode policy however they like, and the system ought to figure out how best to
enforce that policy
On 03/18/2015 12:23 PM, Mathieu Gagné wrote:
On 2015-03-17 3:22 PM, Emilien Macchi wrote:
A first question that comes in my mind is: should we continue to manage
every Puppet module in a different Launchpad project? Or should we
migrate all modules to a single project.
I prefer multiple
On 2015-03-17 3:22 PM, Emilien Macchi wrote:
A first question that comes in my mind is: should we continue to manage
every Puppet module in a different Launchpad project? Or should we
migrate all modules to a single project.
I prefer multiple Launchpad projects due to the fact that each
- Original Message -
From: Chris Friesen chris.frie...@windriver.com
To: openstack-dev@lists.openstack.org
Hi,
I think I've found some bugs around host aggregates in the documentation,
curious what people think.
The docs at
On 03/18/2015 09:35 AM, Steve Gordon wrote:
- Original Message -
From: Chris Friesen chris.frie...@windriver.com
I think I've found some bugs around host aggregates in the documentation,
curious what people think.
Agree on both counts, can you file a bug against openstack-manuals and
I think we can create a mapping which restricts which IdP it is applicable to.
When playing around with K2K, I've experimented with multiple IdPs. I basically
chained the IdPs in shibboleth2.xml like this
<MetadataProvider type="Chaining">
<MetadataProvider type="XML"
On 2015-03-18 12:26 PM, Emilien Macchi wrote:
The challenge is with release management at scale. I have a bunch of
tools which I use to create new series, milestones and release them. So
it's not that big of a deal.
Are you willing to share it?
Sure. I'll make it a priority to publish it
Hello
o) custom constraint class
What did you mean by the “custom” constraint class?
Did you mean we specify a “meta model” to specify constraints? And then each
“Policy” specifying a constraint ( ) will lead to generation of the constraint
in that meta-model.
Then the solver-scheduler
Hi all,
This week I tried to discover why this job
gate-tempest-dsvm-neutron-src-python-saharaclient-juno
http://logs.openstack.org/88/155588/6/check/gate-tempest-dsvm-neutron-src-python-saharaclient-juno/7f29e63/
fails
and what's the difference from the same jobs in other clients. The results
are
Ruby,
The Custom constraint class was something Yathiraj mentioned a while back. But
yes the idea is that MemoryCapacityConstraint would be a special case of what
we can express in the custom constraints.
Tim
On Mar 18, 2015, at 10:05 AM,
On Wed, Mar 18, 2015 at 3:09 AM, Daniel P. Berrange berra...@redhat.com
wrote:
On Wed, Mar 18, 2015 at 08:33:26AM +0100, Thomas Herve wrote:
Interesting bug. I think I agree with you that there isn't a good
solution
currently for instances that have a mix of shared and not-shared
I'm not sure of any particular benefit to trying to run cinder volumes over
swift, and I'm a little confused by the aim - you'd do better to use
something closer to purpose designed for the job if you want software fault
tolerant block storage - Ceph and DRBD are the two open-source options I
know
+1
-Original Message-
From: Ben Swartzlander [mailto:b...@swartzlander.org]
Sent: Wednesday, March 18, 2015 3:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Manila] Nominate Igor Malinovskiy for core team
Igor (u_glide on IRC) joined the
On Mar 18, 2015, at 4:21 PM, Jeremy Stanley fu...@yuggoth.org wrote:
On 2015-03-19 09:15:36 +1300 (+1300), Robert Collins wrote:
[...]
A second but also mandatory change is to synchronise on the final
pre-release tag definitions in PEP-440, IIRC that was just 'rc' ->
'c'.
[...]
Igor (u_glide on IRC) joined the Manila team back in December and has
done a consistent amount of reviews and contributed significant new core
features in the last 2-3 months. I would like to nominate him to join
the Manila core reviewer team.
-Ben Swartzlander
Manila PTL
Congrats Steve!
On Wed, Mar 18, 2015 at 12:51 AM, Daneyon Hansen (danehans)
daneh...@cisco.com wrote:
Congratulations Steve!
Regards,
Daneyon Hansen
Software Engineer
Email: daneh...@cisco.com
Phone: 303-718-0400
http://about.me/daneyon_hansen
From: Angus Salkeld
Interesting bug. I think I agree with you that there isn't a good solution
currently for instances that have a mix of shared and not-shared storage.
I'm curious what Daniel meant by saying that marking the disk shareable is
not
as reliable as we would want.
I think this is the bug I
The Nova API attaches security groups to servers. The Neutron API attaches
security groups to ports. A server can of course have multiple ports. Up
through Icehouse at least the Horizon GUI only exposes the ability to map
security groups to servers (I haven't looked beyond Icehouse).
Are both
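The contrast between the two views can be sketched with a toy model. This is not OpenStack code, just an illustration of the difference: Nova attaches the same group set to the whole server (so every port gets it), while Neutron lets each port carry its own set.

```python
# Toy model (not OpenStack code): contrasting the Nova-style view, where
# security groups attach to a server, with the Neutron-style view, where
# they attach to individual ports.

def server_level_assign(ports, security_groups):
    """Nova-style: every port on the server ends up with the same groups."""
    return {port: list(security_groups) for port in ports}

def port_level_assign(assignments):
    """Neutron-style: each port carries its own independent group list."""
    return {port: list(groups) for port, groups in assignments.items()}
```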
On Wed, Mar 18, 2015 at 12:25 PM, Adam Lawson alaw...@aqorn.com wrote:
The aim is cloud storage that isn't affected by a host failure and major
players who deploy hyper-scaling clouds architect them to prevent that from
happening. To me that's cloud 101. Physical machine goes down, data
Thanks all for the help.
I got help on IRC, and I have now solved my issues:
1. The reason the Data Processing panel does not work after I update
Horizon is that I missed a step required for Horizon to work.
The correct step to update horizon is:
a. git pull origin master (update
Here's a possibly relevant use case for this discussion:
1) Running Icehouse OpenStack
2) Keystone reports v3.0 auth capabilities
3) If you actually use the v3.0 auth, then any nova call that gets passed
through to cinder fails due to the code in Icehouse being unable to parse
the 3.0 service
[Joe]: For reliability purposes, I suggest that the Keystone client should
provide a fail-safe design: a primary Keystone server, a second Keystone server
(or even a third Keystone server). If the primary Keystone server is out of
service, then the Keystone client will try the second Keystone
Hi folks,
We'll be having the Sahara team meeting in #openstack-meeting-alt channel.
Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20150319T18
--
Sincerely yours,
Sergey Lukjanov
Sahara
Hi everyone,
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, March 19th at 22:00 UTC in the #openstack-meeting
channel.
The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to
I've added the following bug importance guidelines for documentation bugs
in the public Fuel wiki [0]:
* Critical = following the instructions from documentation can cause
outage or data loss
* High = documentation includes information that is not true, or
instructions that yield the advertised
On 03/18/2015 08:59 PM, joehuang wrote:
[Joe]: For reliability purposes, I suggest that the Keystone client
should provide a fail-safe design: a primary Keystone server, a second
Keystone server (or even a third Keystone server). If the primary
Keystone server is out of service, then the
BP is at
https://blueprints.launchpad.net/keystone/+spec/keystone-ha-multisite ,
spec will come later :)
On Thu, Mar 19, 2015 at 11:21 AM, Adam Young ayo...@redhat.com wrote:
On 03/18/2015 08:59 PM, joehuang wrote:
[Joe]: For reliability purpose, I suggest that the keystone client
should
Hi,
I suggest you use pdb or ipdb to debug Neutron.
If you prefer an IDE, Komodo can do remote debugging of Neutron, in my experience.
Hope this helps,
Damon
2015-03-19 6:06 GMT+08:00 Daniel Comnea comnea.d...@gmail.com:
Gal,
while I don't have an answer to your question, can you please share how you
The deadline is almost here. First off I want to thank all driver
maintainers who are reporting successfully with their driver CI in
Cinder reviews. For many of you, I know you discovered how useful the
CI is, in just the bugs it has caught or revealed. OpenStack users
that use your solution will
On 18 March 2015 at 03:33, Duncan Thomas duncan.tho...@gmail.com wrote:
On 17 March 2015 at 22:02, Davis, Amos (PaaS-Core)
amos.steven.da...@hp.com wrote:
Ceph/Cinder:
LVM or other?
SCSI-backed?
Any others?
I'm wondering why any of the above matter to an application.
The Neutron
Hi everyone,
Got some questions for whether certain use cases have been addressed and if
so, where things are at. A few things I find particularly interesting:
- Automatic Nova evacuation for VM's using shared storage
- Using Swift as a back-end for Cinder
I know we discussed Nova
On Wed, Mar 18, 2015 at 10:59:19AM -0700, Joe Gordon wrote:
On Wed, Mar 18, 2015 at 3:09 AM, Daniel P. Berrange berra...@redhat.com
wrote:
On Wed, Mar 18, 2015 at 08:33:26AM +0100, Thomas Herve wrote:
Interesting bug. I think I agree with you that there isn't a good
solution
The aim is cloud storage that isn't affected by a host failure and major
players who deploy hyper-scaling clouds architect them to prevent that from
happening. To me that's cloud 101. Physical machine goes down, data
disappears, VM's using it fail and folks scratch their head and ask this
was in
Excerpts from Adam Lawson's message of 2015-03-18 11:25:37 -0700:
The aim is cloud storage that isn't affected by a host failure and major
players who deploy hyper-scaling clouds architect them to prevent that from
happening. To me that's cloud 101. Physical machine goes down, data
disappears,
Excerpts from Robert Collins's message of 2015-03-19 09:15:36 +1300:
On 18 March 2015 at 03:03, Doug Hellmann d...@doughellmann.com wrote:
Now that we have good processes in place for the other Oslo libraries, I
want to bring pbr into the fold so to speak and start putting it through
the
On 19 March 2015 at 10:51, Doug Hellmann d...@doughellmann.com wrote:
Excerpts from Robert Collins's message of 2015-03-19 09:15:36 +1300:
I wonder if it had to do with Oslo's alpha releases? Since we're no
longer doing that, do we still care? Are we still actually broken?
Yes, we do and
Hi everyone,
As we continue working on Fuel 6.1 release, let me share some updates. I
do realize I should have shared this earlier; I apologize for the delay
in this communication. So, here we go:
1. We are officially at Feature Freeze for 6.1.
According to our Release Schedule for 6.1 [1]
On 18 March 2015 at 03:03, Doug Hellmann d...@doughellmann.com wrote:
Now that we have good processes in place for the other Oslo libraries, I
want to bring pbr into the fold so to speak and start putting it through
the same reviews and release procedures. We also have some open bugs
that I'd
On 2015-03-19 09:15:36 +1300 (+1300), Robert Collins wrote:
[...]
A second but also mandatory change is to synchronise on the final
pre-release tag definitions in PEP-440, IIRC that was just 'rc' ->
'c'.
[...]
Mmmwaffles. It was for a time, then by popular demand it got
switched back to rc
On 18 March 2015 at 02:33, Monty Taylor mord...@inaugust.com wrote:
If so, an option would be to have pbr recognize the version-specific
input files as implying a particular rule, and adding that environment
marker to the dependencies list automatically until we can migrate to a
single
On 19 March 2015 at 09:21, Jeremy Stanley fu...@yuggoth.org wrote:
On 2015-03-19 09:15:36 +1300 (+1300), Robert Collins wrote:
[...]
A second but also mandatory change is to synchronise on the final
pre-release tag definitions in PEP-440, IIRC that was just 'rc' ->
'c'.
[...]
Mmmwaffles. It
Roman,
I like this proposal very much, thanks for the idea and for putting
together a straightforward process.
I assume you meant: If a requirement that previously was only in Fuel
Requirements is merged to Global Requirements, it should be removed
from *Fuel* Requirements.
Sebastian,
We have
review.openstack.org uses Launchpad for login; you can change your
settings here:
https://login.launchpad.net/
On 2015-03-18 13:41, Lily.Sing wrote:
Hi all,
I follow the account setup steps here
http://docs.openstack.org/infra/manual/developers.html#account-setup
and it says the username for