For what it's worth, we're running in a configuration similar to the
one in the attached diagram using VlanManager. When we moved the
nova-network service off of the machine with nova-api, we needed to
add an additional prerouting rule on the network server that prevented
the traffic from being NATted, so that instances could reach the metadata service.
We've got a system composed of 336 compute nodes, a head node with
everything else except for the network and volume pieces, 12 volume
servers, and 2 network servers.
We're using MySQL. We've deployed using VlanManager. We deploy using a
custom node imaging system here for base builds, and bcfg2 for configuration management.
We had to preconfigure the VLAN tags and set all network ports for
nova-compute nodes to trunk them in advance on our switching gear
(BNT and Juniper both, but I've also needed to do it on Cisco gear).
I think that is a pretty common requirement for managed switches.
-nld
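For anyone setting this up fresh, the nova side and the switch side have to
agree on the VLAN range. A minimal sketch of the relevant nova.conf settings
(interface name, starting tag, and network size here are made-up examples,
not our actual values):

  network_manager=nova.network.manager.VlanManager
  vlan_interface=eth1    # the trunked NIC on each compute/network host
  vlan_start=100         # first project vlan tag
  network_size=256

The switch ports facing those hosts then need to be configured, by hand and
ahead of time, to trunk the whole tag range starting at vlan_start.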
When we were having rabbitmq problems, we would use rabbitmqctl
list_queues to see what the queue depth for each nova service was.
While this doesn't show what the throughput is, it does let you know
when things start to get backed up.
-nld
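For reference, something along these lines is enough to spot a backlog
(queue names vary per deployment; this assumes rabbitmq is local to where
you run it):

  rabbitmqctl list_queues name messages consumers
  watch -n 5 'rabbitmqctl list_queues name messages'   # poll to watch the trend

A queue whose message count keeps climbing usually means the service that
consumes it is down or overloaded.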
I suspect that the original poster was looking for instance access
(mediated in some way) to IB gear. When we were trying to figure out
how to best use our IB gear inside of openstack, we decided that it
was too risky to try exposing IB at the verbs layer to instances
directly, since the security
This looks more or less right. We have been running a setup like you
are describing here for quite a while, and we've found it to be stable
(and easier to set up than a lot of the other network options, IMO).
When debugging this sort of setup, trunking setup problems on the
switch are often the culprit.
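One quick way to check the trunking from the host side is to watch for tagged
frames on the trunked NIC; the interface name and tag below are examples:

  tcpdump -e -n -i eth1 vlan        # any 802.1q tagged traffic at all?
  tcpdump -e -n -i eth1 vlan 100    # traffic for one specific project vlan

If nothing tagged ever shows up while an instance on that vlan is trying to
DHCP, the switch port almost certainly isn't trunking the tag.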
Hello all. We've recently upgraded our cactus system to more recent
code. In the process of doing this, we've started logging whenever we
get tracebacks out of any of the openstack components we are running.
Some of these are clearly bugs, while others correspond to normal
operational conditions
Ghe, while you're right that these two workloads are different, deployers
need developers to use a representative environment during development, or
the code doesn't work when it hits real deployments. We've now been bitten
during our initial deployment of cactus, our upgrade to diablo, and our
We needed to set up something similar when we split out the
nova-network service to a different host than nova-api in cactus, so
that instances could get to the metadata service. It was pretty simple
to make quagga work, but then we needed to add a rule to bypass NAT.
Since this was just for the
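The rule itself is nothing fancy. A sketch of the idea, with made-up addresses
(10.1.0.0/16 standing in for the fixed range, 172.16.0.10 for the host running
nova-api):

  # on the nova-network server: exempt instance -> api-host traffic from NAT
  iptables -t nat -I PREROUTING -s 10.1.0.0/16 -d 172.16.0.10/32 -j ACCEPT

An ACCEPT target in the nat table just means "do not NAT this connection"; the
packets are still routed normally.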
As far as I know, the current volume service doesn't support
connecting the same volume to multiple instances at the same time, so
neither of these can work directly through nova apis.
-nld
On Tue, Apr 24, 2012 at 4:44 AM, Daniel Martinez danie...@gmail.com wrote:
Hello everyone.
My setup is
I'm not sure that it would be particularly easy to make nova-volume
support clustered filesystems; the current model only supports
attaching a volume to a single instance at a time. Aside from that, it
shouldn't be too hard to use fc as the data path instead of iscsi.
We're looking at using iSER
This sounds like it might be working properly. In VLAN mode, all
instances are connected to one of the project vlans. The .1 address
(gateway, dhcp, etc) exists on an interface on the nova-network node
(or one of them, in the case that you are running multiple). This
interface is bridged to a
a private ip address to the vm launched on the compute node. However,
I still cannot ping this ip address from the network (controller) node.
I am running the nova-network service only on the controller.
Thanks, -vj
From: Narayan Desai narayan.de...@gmail.com
To: Vijay vija...@yahoo.com
Cc
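When checking whether that side of things is healthy, the bridge on the
nova-network node usually tells the story. Bridge and vlan interface names
below are examples; yours are whatever nova created:

  brctl show br100          # the project bridge, with its vlan interface attached
  ip addr show br100        # should carry the project network's .1 gateway address
  ip -d link show vlan100   # the 802.1q interface the bridge sits on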
We're definitely interested in this sort of thing. So much so that
we've already hacked support into nova-volume to run directly on top
of an illumos box with zfs. ;)
We've only gotten the basics working, and we haven't done any serious
torture testing of it yet. Our real goal is to get things
How integrated is the network target support for zfs on freebsd? One
of the most compelling features (IMHO) of ZFS on illumos is the whole
comstar stack. On the zfs linux port at least, there are just
integration hooks out to the standard linux methods (kernel-nfs, etc)
for nfs, iscsi, etc.
I'm
I vaguely recall Vish mentioning a bug in dnsmasq that had a somewhat
similar problem. (it had to do with lease renewal problems on ip
aliases or something like that).
This issue was particularly pronounced with windows VMs, apparently.
-nld
On Thu, Jun 14, 2012 at 6:02 PM, Christian Parpart
On Thu, Jun 21, 2012 at 11:16 AM, Rick Jones rick.jon...@hp.com wrote:
On 06/20/2012 08:09 PM, Huang Zhiteng wrote:
By 'network scaling', do you mean the aggregated throughput
(bandwidth, packets/sec) of the entire cloud (or part of it)? I think
picking 'netperf' as a micro-benchmark is just
On Thu, Jun 21, 2012 at 4:21 PM, Rick Jones rick.jon...@hp.com wrote:
TSO and GRO can cover a multitude of path-length sins :)
Along with a 64 MB TCP window ;)
That is one of the reasons netperf does more than just bulk transfer :)
When I was/am measuring scaling of an SMP node I would use
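For the archives, the kind of thing being discussed here, as plain netperf
invocations (the target address is an example):

  netperf -H 10.0.0.20 -l 30 -t TCP_STREAM          # bulk throughput, where TSO/GRO and big windows help
  netperf -H 10.0.0.20 -l 30 -t TCP_RR -- -r 1,1    # small request/response round trips

Running several of these in parallel against different targets gets closer to
measuring aggregate scaling than any single stream does.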
On Sat, Jun 30, 2012 at 3:06 AM, Christian Parpart tra...@gmail.com wrote:
Hm, Pacemaker/Corosync *inside* the VM will add the Service-IP to the local
ethernet interface, and thus the outside OpenStack components do not know about it.
Using a dedicated floating IP pool for service IPs might
On Fri, Jul 6, 2012 at 9:51 AM, John Paul Walters jwalt...@isi.edu wrote:
Does something like the first Monday of the month at 4:00pm EDT (UTC-4) work?
I'm just throwing out that time as something that seems to broadly work on
my end, but I'd welcome any input from others.
That generally
On Fri, Jul 6, 2012 at 11:52 AM, Stefano Maffulli stef...@openstack.org wrote:
On 07/06/2012 07:51 AM, John Paul Walters wrote:
One of the outputs of the design summit was that folks are
interested in participating in a monthly (or so) telecon to express
feature requests, best practices, etc.
I also vote for option 1, but the migration path really needs to be
solid and well documented.
-nld
On Wed, Jul 11, 2012 at 10:52 AM, Andrew Clay Shafer
a...@parvuscaptus.com wrote:
One vote for option 1.
Remove Volumes
On Wed, Jul 11, 2012 at 1:49 PM, Adam Gandelman ad...@canonical.com wrote:
I feel the same. I think documented and tested migration paths
On Thu, Jul 12, 2012 at 2:38 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
Agreed, I'm a developer, so I'm clearly biased towards what is easier for
developers. It will be a significant effort to have to maintain the
nova-volume code, so I want to be sure it is necessary. End users
On Thu, Jul 12, 2012 at 4:36 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
Upgrading has been painful and we are striving to improve this process
as much as possible.
I think that this needs to be a core value of the developer community,
if Openstack is going to become pervasive.
I
We're running into what looks like a linux bridging bug, which causes
both substantial (20-40%) packet loss, and DNS to fail about that same
fraction of the time. We're running essex on precise, with dedicated
nova-network servers and VLANManager. On either of our nova-network
servers, we see the
I suspect that you need the right solaris (more likely illumos) bits
to get guest side support for virtio. We tried a while ago and the
default openindiana at the time didn't work.
-nld
On Tue, Jul 17, 2012 at 7:43 PM, Joshua j...@root.bz wrote:
I have tried with both KVM and qemu. Solaris
On Wed, Jul 18, 2012 at 7:38 PM, Michael March mma...@gmail.com wrote:
I don't follow Solaris that closely but I vaguely remember the Joyent folks
ported all of KVM to Solaris, right? Or am I just missing the whole point
here?
They did, and it is a fairly impressive piece of work. Their focus
On Fri, Jul 20, 2012 at 4:38 AM, Eoghan Glynn egl...@redhat.com wrote:
Hi Narayan,
I had the idea previously of applying a weighting function to the
resource usage being allocated from the quota, as opposed to simply
counting raw instances.
The notion I had in mind was more related to
Just for the record, we found the issue. There was some filtering
being applied in the bridge code which randomly (?) dropped some DNS
requests. Setting:
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
completely resolved the problem.
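For anyone hitting the same thing, the settings can be applied on the fly and
then persisted (file layout is distro-dependent; this assumes a stock precise
box):

  sysctl -w net.bridge.bridge-nf-call-arptables=0
  sysctl -w net.bridge.bridge-nf-call-iptables=0
  sysctl -w net.bridge.bridge-nf-call-ip6tables=0

Add the same three lines to /etc/sysctl.conf (or a file under /etc/sysctl.d/)
to survive reboots; note the keys only exist once the bridge module is loaded.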
On Sat, Jul 21, 2012 at 6:47 AM, Xu (Simon) Chen xche...@gmail.com wrote:
Narayan,
If you do net.bridge.bridge-nf-call-iptables = 0 on the network controller,
does floating IP still work? For each tenant/network, a subnet is created,
and the nova-network has a .1 gateway configured on the
On Thu, Aug 2, 2012 at 8:42 AM, Christoph Kluenter c...@iphh.net wrote:
On Thu, Aug 02 2012 at 09:24:55 -0400, Ravi Jagannathan wrote:
It should hop on to the next subnet block if available (assuming that in
the LAN it's a private address scheme).
We only use routable IPs. That's why we have
We've managed to get things working by hardwiring the filtering
scheduler to route instances to particular hosts that are running
nova-compute with different virtualization layers (in our case, KVM and
LXC for GPUs).
-nld
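For anyone who wants to avoid carrying a scheduler patch, one approximation
(not exactly what we did, and assuming a release that ships
ImagePropertiesFilter) is to tag images and let the stock filter steer them;
<image-id> is whatever glance reports for the image:

  glance image-update --property hypervisor_type=lxc <image-id>
  # nova.conf on the scheduler host, for example:
  #   scheduler_default_filters=ImagePropertiesFilter,ComputeFilter,RamFilter,CoreFilter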
On Wed, Aug 22, 2012 at 12:34 PM, Michael J Fork mjf...@us.ibm.com wrote:
On Wed, Aug 29, 2012 at 12:19 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:
Perhaps we should also have a CHANGELOG file to explain the major
features/changes...
Perhaps a 'MIGRATION' file as well that explains how to migrate from
version N-1?
I think that this would be a great start.
In
Sure, we've been running in that sort of configuration since bexar.
The only tricky part is that you need to make sure that you run
nova-api-metadata on each nova-network server, and you need to make
sure that floating IPs can get to the appropriate fixed addresses (ie
if a fixed address is not
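A quick sanity check on each nova-network host covers both halves of that
(adjust the process name if your packaging differs):

  pgrep -fl nova-api-metadata                  # the metadata service should be running locally
  iptables -t nat -S | grep 169.254.169.254    # and the DNAT rule should send metadata traffic to it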
On Thu, Sep 27, 2012 at 2:20 PM, Nandavar, Divakar Padiyar (STSD)
divakar.padiyar-nanda...@hp.com wrote:
From the information available in the blueprint for
multi-process-api-service I see that implementation has been completed and
would be available as part of Folsom release
I've finally finished my writeup describing the experiments that we
performed using Openstack to drive a wide area 100 gigabit network.
I've included all of the details for configuration and tuning, as well
as speculation why we're seeing such good numbers.
tl;dr: you can push a whole lot of
We're using IB (QDR connectX and connectX2) on our system. It turns
out that the drivers included in version 3.2 of the linux kernel are
fine. I've built a ppa for updated management tools though; all of
those bits are ancient in precise. The ppa is here:
http://launchpad.net/~narayan-desai
We have the same basic problems. We have 4 different types of nodes
integrated into our system. They all have different ratios of cpu to
memory, and we have some specialized hardware on one class of nodes.
We ended up setting up a series of chassis specific instance
definitions. We then use the
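Concretely, that just means one flavor per chassis type, along the lines of
(made-up name and sizes for a hypothetical 16-core/64GB chassis):

  # nova flavor-create <name> <id> <ram MB> <disk GB> <vcpus>
  nova flavor-create chassisA.16c64g 100 65536 100 16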
Make sure that the metadata server has a route back to the VM. Traffic
hitting that NAT rule ensures that data is flowing properly in one
direction, but you need to make sure bits can flow back to establish a
tcp connection. We had this problem running multiple nova-network
servers.
-nld
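In practice that usually amounts to a static route on the metadata/api host;
a sketch with placeholder addresses (10.1.0.0/16 for the fixed range,
172.16.0.5 for the nova-network server's management address):

  ip route add 10.1.0.0/16 via 172.16.0.5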
And if a nova reboot fails, you can always fall back to issuing virsh
commands on the node behind nova's back.
-nld
On Mon, Apr 8, 2013 at 8:28 PM, Blair Bethwaite
blair.bethwa...@gmail.com wrote:
Dave,
Have you tried rebooting it (via OpenStack dashboard/CLI/API)? Obviously
you'll lose
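For completeness, the virsh fallback mentioned above looks roughly like this
on the compute node hosting the instance (the domain name is an example;
virsh list shows the real one):

  virsh list --all                   # find the libvirt domain, e.g. instance-0000001a
  virsh destroy instance-0000001a    # hard power-off, the virtual equivalent of pulling the plug
  virsh start instance-0000001a

Nova's view of the instance state may lag afterwards, since this happens
behind its back.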
This will depend on whether the VMs are in the same tenant network or
not. Assuming they are on the same L2 and L3 network, then the packets
will transit either the linux bridge, or openvswitch, depending on how
you have things configured. Note that network filtering rules will be
processed on
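A few commands on the compute node make it obvious which path is in use and
what filtering applies (chain names vary by release, so the grep is
deliberately loose):

  brctl show                     # linux bridge case: instance tap/vnet devices hang off a bridge
  ovs-vsctl show                 # open vswitch case
  iptables -S | grep -i nova     # security-group / filtering chains set up for the instances
  ebtables -L                    # L2 filtering, if libvirt nwfilter rules are in play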
+1.
We're going to be running a bunch of parallel deployments of openstack for
the purpose of experimentation in system design. It would be nice to be
able to share glance and keystone between instances.
-nld
On Wed, May 15, 2013 at 1:46 PM, John Paul Walters jwalt...@isi.edu wrote:
Hi,