Hi Saverio,
I think only the API versions supported by some of the endpoints are
discoverable, as described here:
https://wiki.openstack.org/wiki/VersionDiscovery
curl https://x.x.x.x:9292/image
curl https://x.x.x.x:8774/compute
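As a sketch (address and port are placeholders), querying the unversioned root
of an endpoint should return a JSON document listing the supported versions:

```shell
# Unversioned endpoint; the response lists each API version with its
# status (CURRENT, SUPPORTED, DEPRECATED) and links.
curl -s http://x.x.x.x:8774/ | python -m json.tool
```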
Cheers,
George
On Tue, Aug 7, 2018 at 9:30 AM, Saverio Proto
ances.
>
> In summary: DHCP requests are being sent, but are never received.
>
> *Torin Woltjer*
>
> *Grand Dial Communications - A ZK Tech Inc. Company*
>
> *616.776.1066 ext. 2006*
> www.granddial.com
www.granddial.com
>
> --
> *From*: George Mihaiescu
> *Sent*: 7/5/18 10:38 AM
> *To*: torin.wolt...@granddial.com
> *Subject*: Re: [Openstack] Recovering from full outage
> Did you restart the neutron-dhcp-agent and reboot the VMs?
>
> On T
, however I cannot ping any
> of the instances floating IPs or the neutron router. And when logging into an
> instance with the console, there is no IP address on any interface.
>
> Torin Woltjer
>
> Grand Dial Communications - A ZK Tech Inc. Company
>
> 616.776.106
True" and "vif_plugging_timeout = 300" and run
another large test, just to confirm.
We usually run these large tests after a version upgrade to test the APIs
under load.
On Thu, May 17, 2018 at 11:42 AM, Matt Riedemann <mriede...@gmail.com>
wrote:
> On 5/17/2018 9:
We use "vif_plugging_is_fatal = False" and "vif_plugging_timeout = 0" as
well as "no-ping" in the dnsmasq-neutron.conf, and large rally tests of 500
instances complete with no issues.
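A sketch of where those settings live (paths assume a stock install):

```ini
# /etc/nova/nova.conf on the compute nodes: don't fail or block the
# boot waiting for Neutron's vif-plugged event.
[DEFAULT]
vif_plugging_is_fatal = False
vif_plugging_timeout = 0
```

The "no-ping" line goes into dnsmasq-neutron.conf, which the DHCP agent picks up
via the dnsmasq_config_file option in its config; it stops dnsmasq from
ping-probing an address before leasing it.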
These are some good blogposts about Neutron performance:
Cloudbase provides an evaluation image for Windows:
https://cloudbase.it/windows-cloud-images/
On Fri, May 11, 2018 at 12:30 PM, Remo Mattei wrote:
> Hello guys, I have a need now to get a Windows VM into the OpenStack
> deployment. Can anyone suggest the best way to do
Hi Blair,
We had a few cases of compute nodes hanging with the last log in syslog
being related to "rdmsr", and requiring hard reboots:
kvm [29216]: vcpu0 unhandled rdmsr: 0x345
The workloads are probably similar to yours (SGE workers doing genomics)
with CPU mode host-passthrough, on top of
I totally agree with Jay, this is the best, cheapest and most scalable way to
build a cloud environment with OpenStack.
We use local storage as the primary root disk source, which lets us make good
use of the slots available in each compute node (6), and coupled with the
RAID 10 gives good I/O
+1 for option 3
> On Jun 1, 2017, at 11:06, Alexandra Settle wrote:
>
> Hi everyone,
>
> I haven’t had any feedback regarding moving the Operations Guide to the
> OpenStack wiki. I’m not taking silence as compliance. I would really like to
> hear people’s opinions on
r a lightning talk, and enables us to fit one more
> in.
>
> Best wishes,
> Stig
>
>
> > On 27 Apr 2017, at 20:29, George Mihaiescu <lmihaie...@gmail.com> wrote:
> >
> > Hi Stig, will it be 10-minute sessions like in Barcelona?
> >
> > Thanks,
Hi Stig, will it be 10-minute sessions like in Barcelona?
Thanks,
George
> On Apr 26, 2017, at 03:31, Stig Telfer wrote:
>
> Hi All -
>
> We have planned a session of lightning talks at the Boston summit to discuss
> topics specific for OpenStack and research
Hi Massimo,
You can upload the images twice, in both qcow2 and raw format, then create
a host aggregate for your "local-disk" compute nodes and set its metadata
to match the property you'll set on your qcow2 images.
When somebody starts a qcow2 version of the image, it will be scheduled
on
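As a sketch (names are made up, and it assumes the
AggregateImagePropertiesIsolation scheduler filter is enabled so image
properties are matched against aggregate metadata):

```shell
# Group the compute nodes that boot instances from local disk:
openstack aggregate create local-disk
openstack aggregate add host local-disk compute-01
openstack aggregate set --property disk_type=local local-disk

# Tag the qcow2 copy of the image with the matching property:
openstack image set --property disk_type=local my-image-qcow2
```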
Hi Evan,
I believe the scientific working group will have at least a meeting focused
on this subject (
https://etherpad.openstack.org/p/BOS-UC-brainstorming-scientific-wg)
Contact me off-list if you want to chat about protected data, as I'm the
architect for a fairly large environment that deals
Check if the flavour you chose has a large enough root disk.
> On Jan 20, 2017, at 08:25, Jorge Luiz Correa wrote:
>
> Hi, I need some help with images that I can't boot. I've here some images
> like Cirros, Ubuntu, Fedora, CentOS etc. All downloaded as indicated here:
>
>
Can you not update the flavour in dashboard?
> On Dec 15, 2016, at 09:34, William Josefsson
> wrote:
>
>> On Thu, Dec 15, 2016 at 9:40 PM, Mikhail Medvedev
>> wrote:
>>
>> I could not figure out how to set swap on existing flavor fast enough,
Try changing the following in nova.conf and restart the nova-scheduler:
scheduler_host_subset_size = 10
scheduler_max_attempts = 10
Cheers,
George
On Wed, Nov 30, 2016 at 9:56 AM, Massimo Sgaravatto <
massimo.sgarava...@gmail.com> wrote:
> Hi all
>
> I have a problem with scheduling in our
Same need here: I want to know who changed a security group and what change was
done. The POST logged by the API alone is not enough to properly audit the
operation.
> On Nov 16, 2016, at 19:51, Kris G. Lindgren wrote:
>
> I need to do a deeper dive on audit logging.
>
Hi Jonathan,
The openvswitch-agent is out of sync on compute 4, try restarting it.
> On Nov 8, 2016, at 17:43, Jonathan Proulx wrote:
>
>
> I have an odd issue that seems to just be affecting one private
> network for one tenant, though I saw a similar thing on a
the console.
You could create a special role that has console access and change the
policy file to reference that role for the "compute:get_vnc_console", for
example.
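For example, a sketch using old-style policy.json and a hypothetical role
named console-user:

```json
{
    "compute:get_vnc_console": "role:console-user"
}
```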
I don't think you can do it on a per-flavor basis.
Cheers,
George
On Thu, Oct 27, 2016 at 10:24 AM, Blair Bethwaite <blair.bethwa.
Hi Blair,
Did you try playing with Nova's policy file and limit the scope for
"compute_extension:console_output": "" ?
Cheers,
George
On Thu, Oct 27, 2016 at 10:08 AM, Blair Bethwaite wrote:
> On 27 October 2016 at 16:02, Jonathan D. Proulx
Hi Ian,
The Neutron DHCP server only serves IPs to the MACs defined in its host file
(/var/lib/neutron/dhcp/UUID/host).
You can create a port for the physical server if you know the MAC address,
and this makes it work; check the help for the "neutron port-create" command:
neutron help port-create
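A sketch with placeholder values (network name, MAC, and IP are made up):

```shell
# Pre-create a Neutron port with the physical server's MAC so the
# DHCP agent's dnsmasq gets a host entry for it:
neutron port-create physical-net \
    --mac-address 00:11:22:33:44:55 \
    --fixed-ip ip_address=10.0.0.50
```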
You can define the "osapi_volume_listen=" in cinder.conf and specify just
the desired IP address.
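A sketch, with a placeholder address:

```ini
# /etc/cinder/cinder.conf -- bind the volume API only to this address
[DEFAULT]
osapi_volume_listen = 10.0.0.11
```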
On Mon, Feb 1, 2016 at 3:50 AM, Ludwig Tirazona
wrote:
> My problem actually is that I have three controller nodes that are also my
> HAProxy nodes as well, and I want just
Is it just the dashboard that's slow?
How about "nova list" or "neutron port-list" from your dashboard node, as
well as from outside your environment?
Depending on how your endpoints are configured in Keystone (IP or
name) and how DNS resolution is set up in your environment, there might be
delays
Couldn't you achieve the same goal with egress security rules?
Without SNAT enabled, those instances wouldn't be able to reach the
Internet at all, so no package updates, etc.
On 18 May 2015 05:13, Simone Spinelli simone.spine...@gmail.com wrote:
Hi all,
by default neutron routers have source
I don't think there is anything specific to SLES, other than making sure
the interface scripts are set correctly, the interfaces are not renamed at
boot, and there are drivers for the NIC (e.g. virtio).
You could log in on the SLES VM via the console, assign it the IP it was
supposed to get from DHCP and
You have to set two variables in the nova.conf file apparently
(send_arp_for_ha and send_arp_for_ha_count):
https://github.com/openstack/nova/blob/3658e1015a2f7cb7c321cc1a0adfda37757fd80b/nova/network/linux_net.py#L764
On Fri, Mar 13, 2015 at 2:00 PM, Georgios Dimitrakakis gior...@acmac.uoc.gr
Well, what value do you have for send_arp_for_ha_count ?
On 13 Mar 2015 17:11, Georgios Dimitrakakis gior...@acmac.uoc.gr wrote:
So according to that, it sends an ARP every time a floating IP is assigned to
a VM. Am I right?
If that is correct unfortunately it's not working :-(
Any ideas how to
Based on the code, it sends the gratuitous ARPs only when the floating IP
is assigned to the VM, and it's sent as many times as the user decided by
configuring the send_arp_for_ha_count parameter in nova.conf
So, to be safe, I would add the following in nova.conf and restart nova
services.
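The snippet got cut off above; based on the two parameters named earlier, the
nova.conf addition would presumably look something like this (the count value
here is just an example):

```ini
[DEFAULT]
send_arp_for_ha = True
send_arp_for_ha_count = 3
```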
You could just attach that volume to an instance and scp the data from
there.
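A sketch with made-up names (instance, volume ID, device, and destination host
are all placeholders):

```shell
# Attach the 50GB volume to a running instance:
nova volume-attach my-instance 3f2b9c1e-0000-0000-0000-000000000000 /dev/vdb

# Then, inside the instance, mount it and copy the data out:
#   mount /dev/vdb /mnt
#   scp -r /mnt/ user@my-host:/backup/
```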
On 16 Dec 2014 11:35, varun bhatnagar varun292...@gmail.com wrote:
Hi,
I have created a cinder volume of 50GB. I want to download this volume to
my host machine (may be by using scp). Can anyone please tell me how
Make sure the time is in sync on all your compute nodes and controller.
On 4 Dec 2014 02:05, Guillermo Alvarado guillermoalvarad...@gmail.com
wrote:
I got this log when the computes goes XXX
*var/log/nova/nova-compute.log*
2014-12-04 00:43:41.921 32947 DEBUG nova.openstack.common.lockutils
Depending on your overcommit ratio, the scheduler can schedule instances
using more virtual memory than the available physical memory on the host,
700 MB in your case.
On 27 Nov 2014 05:36, mad Engineer themadengin...@gmail.com wrote:
hi all i have set
reserved_host_memory_mb in nova.conf of
of RAM is 1 and that is working. However
instances are still getting created with available free memory
reserved_host_memory_mb
On Thu, Nov 27, 2014 at 4:33 PM, George Mihaiescu lmihaie...@gmail.com
wrote:
Depending on your overcommit ratio, the scheduler can schedule instances
using more
You have to install the neutron-openvswitch-agent on the compute nodes as
well.
On 26 Nov 2014 06:57, Uwe Sauter uwe.sauter...@gmail.com wrote:
Hi all,
I'm trying to setup Juno on five hosts (CentOS 7), following along the
three-node setup guide as close as possible. My setup differs in that
Hi Uwe,
Enable debug, restart the service and you should get more info.
George
On Wed, Nov 26, 2014 at 11:28 AM, Uwe Sauter uwe.sauter...@gmail.com
wrote:
Hi all,
does anyone know why neutron-openvswitch-agent.service keeps crashing?
Nov 26 17:25:40 os483601.localnet systemd[1]: Starting
Make sure the nova.conf you edited is actually being used; if you have a
nova.conf in your root directory it will be used before the one in
/etc/nova/.
You could start the db sync with strace and see what config file is
actually used just to make sure. Also, if the nova.conf has wrong
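A sketch of that strace check (the grep pattern is just an assumption):

```shell
# Trace file opens during the db sync and watch which nova.conf
# actually gets read:
strace -f -e trace=open,openat nova-manage db sync 2>&1 | grep nova.conf
```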
Run 'brctl show' to see the linux bridge that holds the veth pair.
On 20 Nov 2014 05:14, Robert van Leeuwen robert.vanleeu...@spilgames.com
wrote:
I still don’t get it.
I can’t see the veth pair configured when calling
#ip link list
Do other methods exist to configure veth pairs?
You
The problem is pretty clear to me:
*useradd: UID 96 is not unique.*
Your distribution is probably using a hard-coded userid which is already
used on your system.
On Thu, Nov 13, 2014 at 10:11 AM, varun bhatnagar varun292...@gmail.com
wrote:
Hi,
Am I the only one who got this error?
Any
To see where the problem lies you could do an iperf test between two
instances of the same tenant running on the same compute node.
The traffic would still pass through openvswitch, but not across the Vxlan
tunnel.
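A sketch of that test (the addresses are placeholders):

```shell
# On the first instance, start an iperf server:
iperf -s

# On the second instance (same tenant, same compute node), run the
# client against the first one's fixed IP:
iperf -c 10.0.0.5 -t 30
```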
On Oct 15, 2014 5:29 PM, Sławek Kapłoński sla...@kaplonski.pl wrote:
Hello,
I
) help to exceed
the 4k subnet limitation? In such a scenario, will dhcp + routing be
distributed or is a second network node just something like a
hot-standby?
Thanks!
--
Andreas
(irc: scheuran)
On Thu, 2014-09-18 at 09:47 -0400, George Mihaiescu wrote:
The VLAN ID is only locally significant
serve more than 4096 Neutron subnets, but you would hit other limits
by then.
George
From: BYEONG-GI KIM [mailto:kimbyeon...@gmail.com]
Sent: Wednesday, September 17, 2014 10:41 PM
To: George Mihaiescu; openstack@lists.openstack.org
Subject: Re
The internal VLAN ID is indeed limited to 4096, but this internal tag number is
used to isolate different Neutron subnets, not tenants.
A tenant could create 10 neutron networks each with its own subnet and then
start 10 instances each attached to a separate net/subnet. If these instances
The first error is probably triggered because the MySQL user you created for
Cinder doesn't have write permissions on the cinder db.
The second error is because you passed the logs argument (string) to the
cinder-manage db sync command which can only accept an integer (for example
3, if
Hi Daniel,
It's recommended to separate the external traffic reaching the Dashboard from
the management, so the Dashboard server(s) should have at least two NICs
(public and management).
The installation guide covers only one of the multitudes of possible deployment
scenarios, and in this
Hi,
I would like to know if there is a way to apply security rules to the VIP
assigned to a LB in the Haproxy implementation.
I noticed that this functionality doesn't exist in Horizon and I couldn't find
a way to do this with the neutron client.
Basically, I would like to enable access
Hi Ross,
You also have to give the router you created an interface on the admin-net.
If everything is set up correctly you can then run neutron router-list to see
the ID of the router you created, and then neutron router-port-list ROUTER_ID
and you should see that your router has two
Thanks for the slides Kyle.
Can you please provide some details about the proxy ARP mechanism (mentioned on
slide 21) that intercepts the ARP request and answers using a pre-populated
neighbour entry?
Where does the proxy ARP learn from, and how long does it cache the entries?
Does it update when