hecker code, but I would love to see it re-using oslo-config-validator,
as it would be the single source of truth for validating upgrades
before the upgrade happens (vs. having to do multiple steps).
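For illustration, the kind of single-step check I have in mind looks like this (the config file path and namespace are placeholders for whichever service is being validated):

```shell
# Sketch: validate a service's config file against its registered
# options before upgrading, in one step.
oslo-config-validator \
    --config-file /etc/nova/nova.conf \
    --namespace nova.conf
```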
If I am completely out of my league here, tell me.
Just my 2 cents.
Jean-Philippe Evrard (evrardjp)
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Thank you, I will try it next week (since today is Friday) and update this
thread if it fixes my issues. We are indeed using the latest RDO Pike, so
ovsdbapp 0.4.3.1.
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.
> Le 28 s
r, I don’t think it’s the max open files limit, since the number of open files
is nowhere close to what I’ve set it to.
Ideas?
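For reference, here is how I compared the descriptor count with the limit (a sketch; it uses the current shell's own PID as a stand-in, the real service name would go in the `pgrep` call):

```shell
# Compare the file descriptors a process actually holds with its
# effective limit. Substitute e.g. `pid=$(pgrep -f neutron-server | head -n1)`
# for the real case.
pid=$$
ls "/proc/$pid/fd" | wc -l               # descriptors currently in use
grep 'open files' "/proc/$pid/limits"    # effective soft/hard limits
```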
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.
> On 26 Sept 2018 at 15:16, Jean-Philippe Méthot
> wrote:
>
constantly under load.
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.
> On 26 Sept 2018 at 11:48, Simon Leinen wrote:
>
> Jean-Philippe Méthot writes:
>> This particular message makes it sound as if openvswitch is getting
almost instantly
though. I’ve done some research about that particular message, but it didn’t
give me anything I can use to fix it.
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.
> Le 25 sept. 2018 à 19:37, Erik McCormick a éc
and thus minimize the impact
of such an attack, or whatever it was.
Best regards,
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.
receive metadata?
We currently use Pike on centos 7.
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.
setting up the right
processes. I'd be happy to discuss that with you to have a real/more
complete understanding of what you mean there.
Jean-Philippe Evrard (evrardjp)
s are named compute1, compute2, ...
Maybe a cleanup of your environment and redeploy would help you?
I am not sure I have enough information to answer you there.
Best regards,
Jean-Philippe Evrard (evrardjp)
and it will apply on all your nodes.
If you want to be more surgical, you'd have to give more details about your
OpenStack-Ansible version and what you're trying to achieve.
Regards,
Jean-Philippe Evrard (evrardjp)
, like a regular ansible ini inventory.
Hope it helps.
Jean-Philippe Evrard (evrardjp)
[1]:
https://docs.openstack.org/openstack-ansible/latest/reference/inventory/inventory.html
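For reference, a minimal static inventory in that format could look like this (hostnames and addresses are made up):

```ini
[compute]
compute1 ansible_host=192.0.2.11
compute2 ansible_host=192.0.2.12

[compute:vars]
ansible_user=root
```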
On Thursday, August 02, 2018 11:23 CEST, nico...@lrasc.fr wrote:
> Hi Openstack community !
>
> I have a
not that hard to register! [1]). The conversations will be easier to
follow though.
You can still contact us on the mailing lists too.
Regards,
Jean-Philippe Evrard (evrardjp)
[0]: https://freenode.net/news/spambot-attack
[1]: https://freenode.net/kb/answer/registration
. In such a configuration, would I
still need to have only one cinder-volume service running at a time? Also, the
backend is a Dell compellent SAN, if that makes a difference.
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc
their block device.
I know this issue doesn’t happen on Ceph, so I’ve been wondering, is this a
limitation of Openstack or the SAN driver? Also, is there actually a way to
reach even active-passive high availability with this current storage solution?
Jean-Philippe Méthot
Openstack system
Please don't hesitate to edit this etherpad, adding your new special
interest group, or simply joining an existing one if you have spare
cycles!
Thank you!
Jean-Philippe Evrard (evrardjp)
[1]: https://etherpad.openstack.org/p/osa-liaisons
> Right, you can set the stable-branch-type field to 'tagless' (see
> http://git.openstack.org/cgit/openstack/releases/tree/README.rst#n462) and
> then set the branch location field to the SHA you want to use.
Exactly what I thought.
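For the record, my understanding is that the deliverable file would then carry something along these lines (the branch name and SHA are placeholders):

```yaml
stable-branch-type: tagless
branches:
  - name: stable/queens
    location: 0123456789abcdef0123456789abcdef01234567
```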
> If you would be ready to branch all of the roles at one
flows.
Don't hesitate to join us on our IRC channel for more detailed
questions, on freenode #openstack-ansible.
If you want to continue by email, don't hesitate to put
[openstack-ansible] in the email title :)
Best regards,
Jean-Philippe Evrard (evrardjp
lease
file, similar to it.
What I would like to have, from this email, is:
1. Raise awareness to all the involved parties;
2. Confirmation we can go ahead, from a governance standpoint;
3. Confirmation we can still benefit from this automatic branch
tooling.
Thank you in advance.
Jean-Phili
> On 11 May 2018 at 08:36, Matt Riedemann <mriede...@gmail.com> wrote:
>
> On 5/10/2018 6:30 PM, Jean-Philippe Méthot wrote:
>> 1.I was talking about the region-name parameter underneath
>> keystone_authtoken. That is in the pike doc you linked, but I am unawa
>>
>>> I currently operate a multi-region cloud split between 2 geographic
>>> locations. I have updated it to Pike not too long ago, but I've been
>>> running into a peculiar issue. Ever since the Pike release, Nova now
>>> asks Keystone if a new project exists in Keystone before configuring
the code for the
Nova check)
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.
that was in-use
previously. That was my main concern and now it does make the process of fixing
this simpler.
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.
> On 25 Apr 2018 at 00:22, Sean McGinnis <sean.mcgin...@gmx.com> wrote:
in the SAN has changed?
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.
to merge. It makes the reduction of
technical debt easier.
Thank you very much for your understanding.
Best regards,
Jean-Philippe (evrardjp)
[1]:
http://zuul.openstack.org/builds.html?pipeline=periodic&project=openstack%2Fopenstack-ansible
or migration. Probably
better to wait before I start converting my multi-disk instances to
virtio-scsi. If I am not mistaken, this should also be an issue in Pike and
master, right?
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.
> Le 26 janv. 2
it be as easy as changing
the drive order in the database?
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.
> On 26 Jan 2018 at 13:06, Logan V. <lo...@protiumit.com> wrote:
>
> https://bugs.launchpad.net/nova/+bug/17
Hi,
Lately, we’ve been converting our VMs block devices (cinder block devices) to
use the virtio-scsi driver instead of virtio-blk by modifying the database.
This works great, however, we’ve run into an issue with an instance that has
more than one drive. Essentially, the root device has
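(As an aside, for new instances the usual way to get virtio-scsi without touching the database is through image properties, something like the following; the image name is a placeholder:)

```shell
# Set the disk bus to SCSI with a virtio-scsi controller on the image,
# so instances booted from it pick up the driver automatically.
openstack image set \
    --property hw_disk_bus=scsi \
    --property hw_scsi_model=virtio-scsi \
    my-image
```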
this in the openstack database? What
parameters would I need to change? Is there an easier way that is less likely
to break everything?
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.
> Hey Jean-Philippe,
>
> No, after I disastrously split-brained/partitioned my rabbitmq and galera
> clusters by allowing LXC to start the containers up without the dnsmasq
> process to address their eth0 interfaces (due to what _may_ be a
> template/Xenial bug), I've spent the l
Hello David,
Did you solve your issue?
Did you check whether it depends on the default container interface's MTU?
Best regards,
JP
On 6 December 2017 at 18:45, David Young <dav...@funkypenguin.co.nz> wrote:
> So..
>
> On 07/12/2017 03:12, Jean-Philippe Evrard wrote:
ld work out of
> the box.
>
> Agreed, that seems to be the case currently with 1500, I’d expect it to be
> true with the updated value
>
> 6) If your instance is reaching its router with no MTU issue, you may
> still have issues with the northbound traffic. Check how you configure
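(A quick way to check the effective path MTU from an instance is a don't-fragment ping, lowering the payload size until it passes; the gateway address below is a placeholder, and 1472 bytes of payload plus 28 bytes of headers add up to a 1500-byte frame:)

```shell
# Probe the path MTU: -M do forbids fragmentation, so the ping fails
# if any hop's MTU is below payload + 28 bytes.
ping -c 3 -M do -s 1472 198.51.100.1
```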
Hello,
OpenStack-Ansible Mitaka deploys Linux Bridge by default.
This would still happen, but as said, it's not too big of a deal either.
Best regards,
Jean-Philippe (evrardjp)
On 6 November 2017 at 08:10, Kevin Benton <ke...@benton.pub> wrote:
> The Neutron OVS agent should not cause i
critical security updates for things
like kernel?
Just my 2 cents, it's probably good to have other opinions out there.
Best regards,
Jean-Philippe Evrard (evrardjp)
On 3 November 2017 at 13:19, haad <haa...@gmail.com> wrote:
> Hi,
>
> I have one additional question. What is
has been some
> discussion around "skip-level" upgrades.
>
> Chris
>
oject Update" session.
I am looking forward to meeting all of you!
Best regards,
Jean-Philippe Evrard (evrardjp)
PS: We have an etherpad listing all these activities and more details
about the ops session here:
https://etherpad.openstack.org/p/osa-sydney-summi
vise you to follow the standard OpenStack-Ansible path, which is
upgrading from Mitaka to Newton, then do your ubuntu upgrade to
xenial.
From Newton onwards, we have code taking care of the rolling part of
the upgrade, which should help you get to Pike, after
configuration to reduce the
impact of a DDoS on the whole compute node and thus prevent it from going
down? I understand that increasing the size of the conntrack table is one
option, but what else is there?
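(For reference, the conntrack tuning I was referring to is along these lines; the values are illustrative only and should be sized to the node's RAM:)

```shell
# Raise the connection-tracking table ceiling.
sysctl -w net.netfilter.nf_conntrack_max=1048576
# The hash table size is conventionally nf_conntrack_max / 4 and is set
# through the module parameter rather than sysctl.
echo 262144 > /sys/module/nf_conntrack/parameters/hashsize
```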
Best regards,
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
the information you gave me, I think the user could be the cause.
Our connection plugin doesn't seem to
work with the sudo trick you did.
May I suggest you file a bug listing what you did in practice,
explaining this issue and what you'd expect?
In the meantime, could you try running as root on your destination
node? And maybe later on your deploy node too, to see the results?
Best regards,
Jean-Philippe Evrard (evrardjp)
in which you can regularly attend
OpenStack-Ansible meetings
Please give your irc nick too, that would help.
Thank you in advance.
Best regards,
Jean-Philippe Evrard (evrardjp)
[1] https://etherpad.openstack.org/p/osa-meetings-planification
this?
--
Jean-Philippe Méthot
Cloud system administrator
PlanetHoster inc.
www.planethoster.net
Hello,
I forgot to do a “reply all” this morning. Here was the gist:
Don’t hesitate to give us the logs you can find in
/openstack/logs/dc2-controller-01_repo_container-7ce807b6/repo (particularly
the repo_venv_builder.log). It could help us debug requirements issues.
Best regards,
JP
, but buffers/cache consumes 98GB of ram.
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
On 3/23/17, 11:01 AM, "Jean-Philippe Methot" <jp.met...@planethoster.info>
wrote:
Hi,
Lately, on my production
openstack actually manage ram
allocation? Will it ever take back the unused ram of a guest process?
Can I force it to take back that ram?
--
Jean-Philippe Méthot
Openstack system administrator
PlanetHoster inc.
www.planethoster.net
Hello,
This sounds like an interesting issue that we should fix as soon as possible.
May I ask you for more details on the channel?
It’s still brainstorming, but I think that if you clear facts, destroy/upgrade
one controller node, then make sure this controller node has a repo under
16.04, it
. It completely
depends on your environment/requirements.
Hope it helps.
Best regards,
Jean-Philippe Evrard (evrardjp)
,
Jean-Philippe Evrard (evrardjp)
From: David Moreau Simard <d...@redhat.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-...@lists.openstack.org>
Date: Friday, 11 November 2016 at 12:40
To: "OpenStack Development Mailing Li