Re: [Openstack] Networking issues in Essex

2012-07-12 Thread Michael Chapman
Thanks for the tip; unfortunately the interfaces are already up.

 - Michael

On Thu, Jul 12, 2012 at 10:15 PM, Jonathan Proulx  wrote:

>
> I've only deployed openstack for the first time a couple weeks ago,
> but FWIW...
>
> I had similar symptoms on my Essex test deployment (on Ubuntu 12.04)
> turned out my problem was that while the br100 bridge was up and
> configured, the underlying eth1 physical interface was down, so the bits
> went nowhere.  'ifconfig eth1 up' fixed it all, followed of course by
> fixing /etc/network/interfaces as well so this happens on its own
> in future.
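>
> For reference, a minimal /etc/network/interfaces stanza for that
> (assuming eth1 carries no IP of its own and only needs to be brought up
> for the bridge) might look like:
>
> auto eth1
> iface eth1 inet manual
>     up ip link set eth1 up
>     down ip link set eth1 down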
>
> -Jon
>
> On Thu, Jul 12, 2012 at 02:56:57PM +1000, Michael Chapman wrote:
> :Hi all, I'm hoping I could get some assistance figuring out my networking
> :problems with a small Essex test cluster. I have a small Diablo cluster
> :running without any problems but have hit a wall in deploying Essex.
> :
> :I can launch VMs without issue and access them from the compute host, but
> :from there I can't access anything except the host, DNS services, and
> :other VMs.
> :
> :I have separate machines running keystone, glance, postgresql, rabbit-mq
> :and nova-api. They're all on the .os domain with 172.22.1.X IPs
> :
> :I have one machine running nova-compute, nova-network and nova-api, with a
> :public address 192.43.239.175 and also an IP on the 172.22.1.X subnet in
> :the .os domain. It has the following nova.conf:
> :
> :--dhcpbridge_flagfile=/etc/nova/nova.conf
> :--dhcpbridge=/usr/bin/nova-dhcpbridge
> :--logdir=/var/log/nova
> :--state_path=/var/lib/nova
> :--lock_path=/var/lock/nova
> :--force_dhcp_release
> :--iscsi_helper=tgtadm
> :--libvirt_use_virtio_for_bridges
> :--connection_type=libvirt
> :--root_helper=sudo nova-rootwrap
> :--verbose
> :--ec2_private_dns_show_ip
> :
> :--network_manager=nova.network.manager.FlatDHCPManager
> :--rabbit_host=os-amqp.os
> :--sql_connection=postgresql://[user]:[password]@os-sql.os/nova
> :--image_service=nova.image.glance.GlanceImageService
> :--glance_api_servers=os-glance.os:9292
> :--auth_strategy=keystone
> :--scheduler_driver=nova.scheduler.simple.SimpleScheduler
> :--keystone_ec2_url=http://os-key.os:5000/v2.0/ec2tokens
> :
> :--api_paste_config=/etc/nova/api-paste.ini
> :
> :--my_ip=192.43.239.175
> :--flat_interface=eth0
> :--public_interface=eth1
> :--multi_host=True
> :--routing_source_ip=192.43.239.175
> :--network_host=192.43.239.175
> :
> :--dmz_cidr=$my_ip
> :
> :--ec2_host=192.43.239.175
> :--ec2_dmz_host=192.43.239.175
> :
> :I believe I'm seeing a NAT issue of some sort - my VMs cannot ping
> :external IPs, though DNS seems to work.
> :ubuntu@monday:~$ ping www.google.com
> :PING www.l.google.com (74.125.237.148) 56(84) bytes of data.
> :
> :
> :When I do a tcpdump on the compute host, things seem fairly normal, even
> :though nothing is getting back to the VM:
> :
> :root@ncios1:~# tcpdump icmp -i br100
> :tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> :listening on br100, link-type EN10MB (Ethernet), capture size 65535 bytes
> :14:35:28.046416 IP 10.0.0.8 > syd01s13-in-f20.1e100.net: ICMP echo request,
> :id 5002, seq 9, length 64
> :14:35:28.051477 IP syd01s13-in-f20.1e100.net > 10.0.0.8: ICMP echo reply,
> :id 5002, seq 9, length 64
> :14:35:29.054505 IP 10.0.0.8 > syd01s13-in-f20.1e100.net: ICMP echo request,
> :id 5002, seq 10, length 64
> :14:35:29.059556 IP syd01s13-in-f20.1e100.net > 10.0.0.8: ICMP echo reply,
> :id 5002, seq 10, length 64
> :
> :I've pored over the iptables nat rules and can't see anything amiss apart
> :from the masquerades that are automatically added: (I've cut out some
> :empty chains for brevity)
> :
> :root@ncios1:~# iptables -L -t nat -v
> :Chain PREROUTING (policy ACCEPT 22 packets, 2153 bytes)
> : pkts bytes target                    prot opt in   out  source    destination
> :   22  2153 nova-network-PREROUTING  all  --  any  any  anywhere  anywhere
> :   22  2153 nova-compute-PREROUTING  all  --  any  any  anywhere  anywhere
> :   22  2153 nova-api-PREROUTING      all  --  any  any  anywhere  anywhere
> :
> :Chain INPUT (policy ACCEPT 12 packets, 1573 bytes)
> : pkts bytes target                    prot opt in   out  source    destination
> :
> :Chain OUTPUT (policy ACCEPT 31 packets, 2021 bytes)
> : pkts bytes target                    prot opt in   out  source    destination
> :   31  2021 nova-network-OUTPUT      all  --  any  any  anywhere  anywhere
> :   31  2021 nova-compute-OUTPUT      all  --  any  any  anywhere  anywhere
> :   31  2021 nova-api-OUTPUT          all  --

[Openstack] Networking issues in Essex

2012-07-11 Thread Michael Chapman
             10.0.0.0/8           nri5.nci.org.au
    0     0 ACCEPT  all  --  any  any  10.0.0.0/8   nri5.nci.org.au
  160       ACCEPT  all  --  any  any  10.0.0.0/8   10.0.0.0/8   ! ctstate DNAT

Chain nova-network-PREROUTING (1 references)
 pkts bytes target  prot opt in   out  source    destination
    0     0 DNAT    tcp  --  any  any  anywhere  169.254.169.254  tcp dpt:http to:192.43.239.175:8775

Chain nova-network-snat (1 references)
 pkts bytes target                    prot opt in   out  source      destination
   30  1961 nova-network-float-snat  all  --  any  any  anywhere    anywhere
    0     0 SNAT                     all  --  any  any  10.0.0.0/8  anywhere   to:192.43.239.175

Chain nova-postrouting-bottom (1 references)
 pkts bytes target             prot opt in   out  source    destination
   30  1961 nova-network-snat  all  --  any  any  anywhere  anywhere
   30  1961 nova-compute-snat  all  --  any  any  anywhere  anywhere
   30  1961 nova-api-snat      all  --  any  any  anywhere  anywhere

and the ACCEPT icmp rule for the security group seems to be there in the
filter table as well, though it's not being triggered for some reason:

Chain nova-compute-inst-6 (1 references)
 pkts bytes target                    prot opt in   out  source       destination
    0     0 DROP                      all  --  any  any  anywhere     anywhere     state INVALID
   39  6545 ACCEPT                    all  --  any  any  anywhere     anywhere     state RELATED,ESTABLISHED
  160       nova-compute-provider     all  --  any  any  anywhere     anywhere
    0     0 ACCEPT                    udp  --  any  any  10.0.0.3     anywhere     udp spt:bootps dpt:bootpc
  160       ACCEPT                    all  --  any  any  10.0.0.0/24  anywhere
    0     0 ACCEPT                    icmp --  any  any  anywhere     anywhere
    0     0 ACCEPT                    tcp  --  any  any  anywhere     anywhere     tcp dpt:ssh
    0     0 nova-compute-sg-fallback  all  --  any  any  anywhere     anywhere

I've tried setting the routing source IP to both the private 172.22.1.X
address and the public one, but it doesn't seem to change anything. I also
tried leaving that option out entirely, and the network_host flag as well,
with much the same result.

Any help would be much appreciated.



-- 
Michael Chapman
*Cloud Computing Services*
ANU Supercomputer Facility
Room 318, Leonard Huxley Building (#56), Mills Road
The Australian National University
Canberra ACT 0200 Australia
Tel: *+61 2 6125 7106*
Web: http://nci.org.au
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Greatest deployment?

2012-05-29 Thread Michael Chapman
Matt,

> LXC is not a good alternative for several obvious reasons.  So think on all
> of that.

Could you expand on why you believe LXC is not a good alternative? As an
HPC provider we're currently weighing up options to get the most we can out
of our Openstack deployment performance-wise. In particular we have quite a
bit of IB, a fairly large Lustre deployment and some GPUs, and are
seriously considering going down the LXC route to try to avoid wasting all
of that by putting a hypervisor on top.
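
For concreteness, the LXC route we're considering is just nova's libvirt
LXC driver rather than a separate container stack, i.e. something along the
lines of the following in nova.conf:

--connection_type=libvirt
--libvirt_type=lxc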

 - Michael Chapman

On Fri, May 25, 2012 at 1:34 AM, Matt Joyce wrote:

> We did some considerable HPC testing when I worked over at NASA Ames with
> the Nebula project.  So I think we may have been the first to try out
> openstack in an HPC capacity.
>
> If you can find Piyush Mehrotra from the NAS division at Ames, ( I'll
> leave it to you to look him up ) he has comprehensive OpenStack tests from
> the Bexar days.  He'd probably be willing to share some of that data if
> there was interest ( assuming he hasn't already ).
>
> Several points of interest I think worth mentioning are:
>
> I think fundamentally many of the folks who are used to doing HPC work
> dislike working with hypervisors in general.  The memory management and
> general i/o latency is something they find to be a bit intolerable.
> OpenNebula, and OpenStack rely on the same sets of open source
> hypervisors.  In fact, I believe OpenStack supports more.  What they do
> fundamentally is operate as an orchestration layer on top of the hypervisor
> layer of the stack.  So in terms of performance you should not see much
> difference between the two at all.  That being said, that's ignoring the
> possibility of scheduler customisation and the sort.
>
> We ultimately, much like Amazon HPC, ended up handing over VMs to customers
> that consumed all the resources on a system, thus negating the benefit of
> VMs by a large amount.  One primary reason for this is that pinning the 10 gig
> drivers, or infiniband if you have it, to a single VM allows for direct
> pass through and no hypervisor latency.  We were seeing a maximum
> throughput on our 10 gigs of about 8-9 gbit with virtio / jumbo frames via
> kvm, while hardware was slightly above 10.  Several vendors in the area I
> have spoken with are engaged in efforts to tie in physical layer
> provisioning with OpenStack orchestration to bypass the hypervisor
> entirely.  LXC is not a good alternative for several obvious reasons.  So
> think on all of that.
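>
> For reference, that kind of direct passthrough under libvirt/KVM is
> typically a PCI hostdev entry in the domain XML along these lines (the
> domain/bus/slot/function values are host-specific):
>
> <hostdev mode='subsystem' type='pci' managed='yes'>
>   <source>
>     <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
>   </source>
> </hostdev>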
>
> GPUs are highly specialised.  Depending on your workloads you may not
> benefit from them.  Again you have the hardware pinning issue in VMs.
>
> As far as Disk I/O is concerned, large datasets need large disk volumes.
> Large non immutable disk volumes.  So swift / lafs go right out the
> window.  nova-volume has some limitations ( or it did at the time ): euca
> tools couldn't handle 1 TB volumes and the API maxed out around 2.  So we
> had users raiding their volumes and asking how to target them to nodes to
> increase I/O.  This was suboptimal.  Lustre or Gluster would be better
> options here.  We chose Gluster because we've used Lustre before, and
> anyone who has knows it's a pain.
>
> As for node targeting, users cared about specific families of CPUs.  Many
> people optimised by CPU and wanted to target Westmeres or Nehalems.  We had
> no means to do that at the time.
>
> Scheduling full instances is somewhat easier so long as all the nodes in
> your zone are full instance use only.
>
> Matt Joyce
> Now at Cloudscaling
>
>
>
>
> On Thu, May 24, 2012 at 5:49 AM, John Paul Walters wrote:
>
>> Hi,
>>
>> On May 24, 2012, at 5:45 AM, Thierry Carrez wrote:
>>
>> >
>> >
>> >> OpenNebula has also this advantage, for me, that it's designed also to
>> >> provide scientific cloud and it's used by few research centres and even
>> >> supercomputing centres. How about Openstack? Anyone tried deploy it in
>> >> supercomputing environment? Maybe huge cluster or GPU cluster or any
>> >> other scientific group is using Openstack? Is anyone using Openstack in
>> >> scentific environement or Openstack's purpose is to create commercial
>> >> only cloud (business - large and small companies)?
>> >
>> > OpenStack is being used in a number of research clouds, including NeCTAR
>> > (Australia's national research cloud). There is huge interest around
>> > bridging the gap there, with companies like Nimbis or Bull being
>> involved.
>> >
>> > Hopefully people with more information than I have will comment on this
>> > thread.
>> >

Re: [Openstack] about ApiError: ApiError: Address quota exceeded. You cannot allocate any more addresses

2011-07-27 Thread Michael Chapman
Hi,

I think you need to specify the project name in your command, e.g.:

nova-manage project quota *admin* floating_ips 100

To give the admin project 100 floating IPs.
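
You can then confirm it took effect with something like:

nova-manage project quota admin

which should list the current quotas for that project (replace admin with
your actual project name).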

 - Michael

2011/7/28 tianyi wang 

>  Hi, all:
>
> I want to allocate an IP for my project's instances, but I get: ApiError:
> ApiError: Address quota exceeded. You cannot allocate any more addresses.
>
> When I run: nova-manage project quota list
>
> I get this result:
> # nova-manage project quota list
>  metadata_items: 128
>  gigabytes: 1000
>  floating_ips: 10
>  instances: 10
>  volumes: 10
>  cores: 20
>
> But when I run: nova-manage project quota floating_ips 100
>
> nothing changes; it's still  floating_ips: 10. I'm using the Cactus release of nova.
>
> How to resolve this problem?
>
> thanks
>
> alex
>
>
>
>
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
Michael Chapman
*Data Services*
ANU Supercomputer Facility
Room 318, Leonard Huxley Building (#56), Mills Road
The Australian National University
Canberra ACT 0200 Australia
Tel: *+61 2 6125 7106*
Web: http://nci.org.au
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp