I have built a 15 node openstack cluster and now I have 5 nodes for ceph
storage. They are all HP DL360p G8 servers, each of which has 10 HDD trays.
Now ceph's minimum requirement is to have 3 monitor nodes. Since I have 5 ceph
nodes, it would be a little tight if I give 3 nodes to monitors and 2 to OSDs.
I was
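For what it's worth, a common way out of that bind (not stated in this thread; hostnames and device paths below are made up) is to colocate the three monitors with OSDs instead of dedicating whole nodes to them, so all 5 nodes serve storage:

```shell
# Assumes ceph-deploy 2.x and passwordless SSH to all nodes (hypothetical names).
# Three of the five nodes run monitor daemons:
ceph-deploy new ceph1 ceph2 ceph3
ceph-deploy mon create-initial
# Every node, including the monitor hosts, also carries OSDs:
for host in ceph1 ceph2 ceph3 ceph4 ceph5; do
  for dev in /dev/sdb /dev/sdc; do
    ceph-deploy osd create --data "$dev" "$host"
  done
done
```

Monitors are light enough that sharing a host with OSDs is generally fine at this scale.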
Installed masakari 4.0.0 on queens. Hostmonitor, instancemonitor, and
processmonitor all running on compute nodes. API and engine running on
controller nodes. I've tried using the masakari client to list/add segments,
but each of those commands does nothing and returns:
("'NoneType' object has no
I would recommend using availability zones for this.
Torin Woltjer
Grand Dial Communications - A ZK Tech Inc. Company
616.776.1066 ext. 2006
www.granddial.com
From: Satish Patel
Sent: 7/1/18 9:56 AM
To: openstack
Subject: [Openstack] flavor metadata
Have a look at Designate: https://wiki.openstack.org/wiki/Designate
It has support for powerDNS, and sounds like what you're looking for.
From:
Now I am confused: what is the best option, and can you give me an example
of how I should use them?
~S
On Mon, Jul 2, 2018 at 8:48 AM, Torin Woltjer
wrote:
> I would recommend using availability zones for this.
>
> Torin Woltjer
>
> Grand Dial Communications - A ZK Tech Inc. Company
>
> 616.776.1066
Hi,
I have recently finished installing a minimal OpenStack Queens environment
for a school project, and was asked whether it is possible to deploy an
additional compute node on bare metal, aka without an underlying operating
system, in order to eliminate the operating system overhead and thus to
Yes, I am looking at it, but the documentation is a little confusing too.
On Mon, Jul 2, 2018 at 8:52 AM, Torin Woltjer
wrote:
> Have a look at Designate: https://wiki.openstack.org/wiki/Designate
> It has support for powerDNS, and sounds like what you're looking for.
>
> Torin Woltjer
>
> Grand Dial
The purpose of availability zones is to segregate your servers so that
downtime of one group of servers doesn't affect another group, e.g.
servers in different buildings or hooked up to different power lines.
Important detail: a server can be in at most one availability zone.
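As a concrete sketch (hypothetical names), a nova availability zone is created through a host aggregate:

```shell
# An aggregate created with --zone shows up as an availability zone:
openstack aggregate create --zone rack1-az rack1
openstack aggregate add host rack1 compute1
# Users can then target the zone at boot time:
openstack server create --availability-zone rack1-az \
  --flavor m1.small --image cirros --network private vm1
```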
Bernd has this right.
Host aggregates (sometimes called "haggs") are the right solution to this
problem. You can set up a flavor to run only on a certain hagg.
This works well (in production, at scale).
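A minimal sketch of pinning a flavor to an aggregate (names are made up; assumes the AggregateInstanceExtraSpecsFilter is enabled in nova's scheduler):

```shell
# Create an aggregate, tag it, and add the hosts that belong to it:
openstack aggregate create ssd-hosts
openstack aggregate set --property ssd=true ssd-hosts
openstack aggregate add host ssd-hosts compute1
# A matching extra spec pins the flavor to hosts in that aggregate:
openstack flavor set --property aggregate_instance_extra_specs:ssd=true m1.ssd
```

Instances booted with m1.ssd will then only land on hosts in ssd-hosts.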
On Sun, Jul 1, 2018 at 7:52 AM, Satish Patel wrote:
> Folks,
>
> Recently we built openstack
They are probably thinking of VMware ESXi, which is both an operating
system kernel, named vmkernel, and a hypervisor.
OpenStack is not a hypervisor. It *uses* hypervisors to manage virtual
machines. Furthermore, OpenStack is written in Python, so that, as a
minimum, your "baremetal" would have
Installing it with tox instead of pip seems to have precisely the same effect.
Is there a config file for the masakari client that I am not aware of? Nothing
seems to be provided with it, and documentation is nonexistent.
On 07/02/2018 09:45 AM, Houssam ElBouanani wrote:
Hi,
I have recently finished installing a minimal OpenStack Queens
environment for a school project, and was asked whether it is possible
to deploy an additional compute node on bare metal, aka without an
underlying operating system, in order
Are you using OOO? Or what? HA mode?
Remo
> On Jul 2, 2018, at 7:32 AM, Satish Patel wrote:
>
> Yes, I am looking at it, but the documentation is a little confusing too.
>
> On Mon, Jul 2, 2018 at 8:52 AM, Torin Woltjer
> <torin.wolt...@granddial.com> wrote:
>> Have a look at Designate:
Running the command with the -d debug option provides this python traceback:
Traceback (most recent call last):
  File "/usr/local/bin/masakari", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/dist-packages/masakariclient/shell.py", line 189, in main
On Mon, Jul 2, 2018 at 7:43 AM, Torin Woltjer
wrote:
> Installed masakari 4.0.0 on queens. Hostmonitor, instancemonitor, and
> processmonitor all running on compute nodes. API and engine running on
> controller nodes. I've tried using the masakari client to list/add segments,
> any of those
I am using the OpenStack-Ansible deployment method in HA mode.
On Mon, Jul 2, 2018 at 11:08 AM, Remo Mattei wrote:
> Are you using OOO? Or what? HA mode?
>
> Remo
>
>
> On Jul 2, 2018, at 7:32 AM, Satish Patel wrote:
>
> Yes, I am looking at it, but the documentation is a little confusing too.
>
> On
hi,
I created a LP team "tap-as-a-service-drivers",
whose initial members are the same as the existing tap-as-a-service-core
group on gerrit.
I made the team the Maintainer and Driver of the tap-as-a-service project.
This way, someone in the team can take it over even if I disappeared
suddenly. :-)
In case you did not get the reminder on Friday afternoon ;)
--
Kind regards,
Melvin Hillsman
mrhills...@gmail.com
mobile: (832) 264-2646
On Fri, Jun 29th, 2018 at 12:59 PM, Melvin Hillsman
wrote:
>
> Hi everyone,
>
>
> Please be sure to join us - if not getting ready for firecrackers -
Hi,
Going inline.
From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: Friday, June 29, 2018 4:25 AM
In-lined comments / questions below,
Greg.
From: "Csatari, Gergely (Nokia - HU/Budapest)"
<gergely.csat...@nokia.com>
Date: Thursday, June 28, 2018 at 3:35 AM
Hi,
I’ve added
Tony, do you mean the script I am using to create the image?
2018-07-02 8:11 GMT+02:00 Tony Breeds :
> On Mon, Jul 02, 2018 at 07:55:13AM +0200, Ignazio Cassano wrote:
> > Hi Tony,
> > applying the patch reported here (https://review.openstack.org/#/c/561740/)
> > the issue is solved..
> >
Oh, sorry, that's not what I meant. In my opinion, we could filter the flavors
in the flavor list, e.g. with the CLI: openstack flavor list --property key:value.
-- Original --
From: "Sahid Orentino Ferdjaoui";
Date: Monday, July 2, 2018, 3:20 PM
To: "OpenStack Developmen";
Subject: Re:
Hey Mistralites!
Here is your monthly recap of what's what in the Mistral community. It is
arriving a day late, as the 1st was a Sunday. When that happens I'll just aim
to send it as close to the 1st as I can, either slightly early or slightly
late.
# General News
Vitalii Solodilov joined the
On Mon, Jul 02, 2018 at 11:08:51AM +0800, Rambo wrote:
> Hi,all
>
> I have an idea. Currently we can't filter flavors by property. Can we
> achieve that? If we did, we could filter flavors by the property's key
> and value. What do you think
On Mon, Jul 02, 2018 at 07:55:13AM +0200, Ignazio Cassano wrote:
> Hi Tony,
> applying the patch reported here (https://review.openstack.org/#/c/561740/)
> the issue is solved..
> The above patch was related to another issue (distutils), but it also solves
> the cleanup error.
> In any case, I could
Hi,
It seems that the current request_specs record does not get removed even
when the related instance is gone, which leads to a continuously growing
request_specs table. How is that so?
Is it because the delete process could fail and we would have to recover the
request_spec if we had deleted it?
How
Hey all,
I'll be out for the rest of the week after today. I don't anticipate
anything coming up but Renat Akhmerov is standing in as PTL while I'm out.
See you all on Monday next week.
Cheers,
Dougal
On Mon, 2018-06-18 at 17:23 +, Waines, Greg wrote:
> Hey ... a couple of NEWBY questions for the Barbican Team.
>
> I just setup a devstack with Barbican @ stable/queens .
>
> Ran through the “Verify operation” commands (
> https://docs.openstack.org/barbican/latest/install/verify.html )
On Thu, 2018-06-28 at 17:32 -0400, Zane Bitter wrote:
> On 28/06/18 15:00, Douglas Mendizabal wrote:
> > Replying inline.
>
> [snip]
> > IIRC, using URIs instead of UUIDs was a federation pre-optimization
> > done many years ago when Barbican was brand new and we knew we
> > wanted
> > federation
On Thu, 28 Jun 2018, Fox, Kevin M wrote:
I think if OpenStack wants to gain back some of the steam it had before, it
needs to adjust to the new world it is living in. This means:
* Consider abolishing the project walls. They are driving bad architecture (not
intentionally but as a side effect
Hello Everyone,
We have 23 responses so far on the PTG survey for openstack operators to
let the ops meetups team and the openstack foundation folk know preferences
for the upcoming PTG in Denver. Perhaps some of you that intended to
respond were, like me, sweltering in a heatwave and didn't
On 07/02/2018 03:12 PM, Fox, Kevin M wrote:
I think a lot of the pushback around not adding more common/required services
is the extra load it puts on ops, though. Hence these:
* Consider abolishing the project walls.
* simplify the architecture for ops
IMO, those need to change to break
On 06/27/2018 07:23 PM, Zane Bitter wrote:
On 27/06/18 07:55, Jay Pipes wrote:
Above, I was saying that the scope of the *OpenStack* community is
already too broad (IMHO). An example of projects that have made the
*OpenStack* community too broad are purpose-built telco applications
like
On Thu, Jun 28, 2018 at 8:04 PM, Lars Kellogg-Stedman
wrote:
> What is required to successfully run the rspec tests?
On the odd chance that it might be useful to someone else, here's the
Docker image I'm using to successfully run the rspec tests for
puppet-keystone:
Tuesday+Wednesday positive: gives time on Monday for the API SIG (I
personally would like to be there) and the Ask-me-anything/goal help
room
Tuesday+Wednesday negative: less time for Luigi (if he is at PTG) to
do QA things (but QA will also be there on Thursday)
Tuesday+Wednesday negative: the
On 28/06/18 15:09, Fox, Kevin M wrote:
I'll weigh in a bit with my operator hat on, as recent experience pertains to
the current conversation.
Kubernetes has largely succeeded in common distribution tools where OpenStack
has not been able to.
kubeadm was created as a way to centralize
On 06/28/2018 02:09 PM, Fox, Kevin M wrote:
> I'll weigh in a bit with my operator hat on, as recent experience pertains
> to the current conversation.
>
> Kubernetes has largely succeeded in common distribution tools where OpenStack
> has not been able to.
> kubeadm was created as a way
I think Keystone is one of the exceptions currently, as it is the
quintessential common service in all of OpenStack; since the rule was made, all
things auth belong to Keystone and the other projects don't waver from it. The
same cannot be said of, say, Barbican. Steps have been made recently
On Mon, Jul 02, 2018 at 08:13:39AM +0200, Ignazio Cassano wrote:
> Tony, do you mean the script I am using to create the image ?
Yup, it'd be good to try and reproduce this outside your environment as
that'll make fixing the underlying bug quicker.
Yours Tony.
Thanks, I may have missed that one.
On Mon, Jul 2, 2018 at 10:29 PM Matt Riedemann wrote:
> On 7/2/2018 2:47 AM, Zhenyu Zheng wrote:
> > It seems that the current request_specs record did not got removed even
> > when the related instance is gone, which lead to a continuously growing
> >
hi,
- networking-midonet uses autodoc in their docs;
build-openstack-sphinx-docs runs it.
- build-openstack-sphinx-docs doesn't use tox-siblings, thus the job
uses released versions of dependencies, e.g. neutron, neutron-XXXaas,
os-vif, etc.
- released versions of dependencies and networking-midonet
Hi Tony, I sent log file and script yesterday. I hope you received them.
Ignazio
On Tue, Jul 3, 2018 at 02:49, Tony Breeds wrote:
> On Mon, Jul 02, 2018 at 08:13:39AM +0200, Ignazio Cassano wrote:
> > Tony, do you mean the script I am using to create the image ?
>
> Yup, it'd be good to try and
On Tue, Jul 03, 2018 at 06:12:21AM +0200, Ignazio Cassano wrote:
> Hi Tony, I sent log file and script yesterday. I hope you received them.
Sorry I can't find them in any of my inboxes :(
Yours Tony.
Hi Saharans,
as previously discussed, we are scheduled for Monday and Tuesday at the PTG
in Denver. I would like to hear from folks who are planning to be there
which days work best for you. The options are Monday and Tuesday, or Tuesday
and Wednesday.
Keep in mind that I can't guarantee a switch,
On 7/2/2018 2:47 AM, Zhenyu Zheng wrote:
It seems that the current request_specs record does not get removed even
when the related instance is gone, which leads to a continuously growing
request_specs table. How is that so?
Is it because the delete process could fail and we would have to recover
On 7/2/2018 2:43 AM, 李杰 wrote:
Oh, sorry, that's not what I meant. In my opinion, we could filter the flavors
in the flavor list, e.g. with the CLI: openstack flavor list --property key:value.
There is no support for natively filtering flavors by extra specs in the
compute REST API so that would have to be added
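Until such server-side filtering exists, it can be worked around client-side. A jq sketch (the hw:cpu_policy key is just an example, and the exact shape of the Properties field varies across client versions; the sample data here stands in for real CLI output):

```shell
# In a real cloud this file would come from:
#   openstack flavor list --long -f json > flavors.json
# Sample data standing in for that output:
cat > flavors.json <<'EOF'
[{"Name":"pinned","Properties":{"hw:cpu_policy":"dedicated"}},
 {"Name":"plain","Properties":{}}]
EOF
# Filter locally by an extra-spec key/value:
jq -r '.[] | select((.Properties // {})["hw:cpu_policy"] == "dedicated") | .Name' flavors.json
# -> pinned
```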