Re: [Openstack] Fwd: [Quantum] Query regarding floating IP configuration

2013-04-19 Thread Simon Pasquier

Have a look at this page:
http://docs.openstack.org/folsom/openstack-network/admin/content/connectivity.html

Simon

On 18/04/2013 21:00, Anil Vishnoi wrote:

Re-sending it, with the hope of response :-)

-- Forwarded message --
From: Anil Vishnoi vishnoia...@gmail.com
Date: Thu, Apr 18, 2013 at 1:59 AM
Subject: [Openstack][Quantum] Query regarding floating IP configuration
To: openstack@lists.launchpad.net




Hi All,

I am trying to set up OpenStack in my lab, where I plan to run the 
controller+network node on one physical machine and two compute nodes. 
The controller/network machine has 2 NICs: one connected to the 
external network (Internet), and a second on the private network.


The OS Network Administrator Guide says: "The node running quantum-l3-agent 
should not have an IP address manually configured on the NIC connected 
to the external network. Rather, you must have a range of IP addresses 
from the external network that can be used by OpenStack Networking for 
routers that uplink to the external network." So my confusion is: if 
I want to send any REST API call to my controller/network node from the 
external network, I obviously need a public IP address. But the instruction 
I quoted says we should not have a manually configured IP address on that NIC.


Does it mean we can't create a floating IP pool in this kind of setup? 
Or do we need 3 NICs: 1 for the private network, 1 for floating IP pool 
creation, and 1 for external access to the machine?


OR is it that we can assign the public IP address to br-ex, and 
remove it from the physical NIC? Please let me know if my query is not clear.
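
(For reference, the last option is the typical Folsom setup: attach the 
external NIC to br-ex and move the public address onto the bridge. A rough 
sketch, assuming eth0 is the external NIC and using placeholder addresses:)

ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth0
ip addr del 203.0.113.10/24 dev eth0    # placeholder public address
ip addr add 203.0.113.10/24 dev br-ex
ip link set br-ex up
ip route add default via 203.0.113.1    # placeholder gateway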

--
Thanks
Anil



--
Thanks
Anil




Re: [Openstack] FlatDHCP networking problem

2013-04-19 Thread Javier Alvarez

Looking at /var/log/syslog I have found some more info:

Apr 19 12:36:26  dnsmasq-dhcp[2204]: read 
/var/lib/nova/networks/nova-br100.conf
Apr 19 12:36:34  kernel: [176191.796184] device vnet0 entered 
promiscuous mode
Apr 19 12:36:34  kernel: [176191.874774] br100: port 2(vnet0) entering 
forwarding state
Apr 19 12:36:34  kernel: [176191.874796] br100: port 2(vnet0) entering 
forwarding state
Apr 19 12:36:36  kernel: [176194.057668] kvm: 11182: cpu0 unhandled 
rdmsr: 0xc0010001
Apr 19 12:36:37  ntpd[3497]: Listen normally on 36 vnet0 
fe80::fc16:3eff:fe24:1cab UDP 123

Apr 19 12:36:37  ntpd[3497]: peers refreshed
Apr 19 12:36:38  dnsmasq-dhcp[2204]: DHCPDISCOVER(br100) 
fa:16:3e:24:1c:ab no address available
Apr 19 12:36:41  dnsmasq-dhcp[2204]: DHCPDISCOVER(br100) 
fa:16:3e:24:1c:ab no address available
Apr 19 12:36:44  dnsmasq-dhcp[2204]: DHCPDISCOVER(br100) 
fa:16:3e:24:1c:ab no address available


The thing is that /var/lib/nova/networks/nova-br100.conf is empty. Could 
that be the reason dnsmasq-dhcp has no addresses available?
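
(For comparison, that file is the dhcp-hostsfile that nova-dhcpbridge is 
supposed to populate, one MAC,hostname,IP entry per allocated fixed IP; 
dnsmasq reads it via --dhcp-hostsfile. A healthy file would look roughly 
like this, with made-up values:)

fa:16:3e:24:1c:ab,host-192-168-100-3.novalocal,192.168.100.3

(An empty file would indeed leave dnsmasq with no lease to hand out for 
that MAC, hence "no address available".)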


Thanks,

Javier

On 18/04/13 17:38, Javier Alvarez wrote:

Hello all,

Here is my situation:

I am trying to install Essex on a small cluster (3 nodes) running 
Debian. There is a front-end node that has a public IP, and there 
are 2 compute nodes on a LAN. I cannot run nova-network on the 
front-end node because it overwrites the iptables rules there and some 
other services start to misbehave, so I am trying a multi-host 
solution with nova-network running on each compute node.


The nova.conf I'm using in both compute nodes is the following:

[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
root_helper=sudo nova-rootwrap
auth_strategy=keystone
iscsi_helper=tgtadm
sql_connection=mysql://nova-common:password@172.16.8.1/nova
connection_type=libvirt
libvirt_type=kvm
my_ip=172.16.8.22
rabbit_host=172.16.8.1
glance_host=172.16.8.1
image_service=nova.image.glance.GlanceImageService
network_manager=nova.network.manager.FlatDHCPManager
fixed_range=192.168.100.0/24
flat_interface=eth1
public_interface=eth0
flat_network_bridge=br100
flat_network_dhcp_start=192.168.100.2
network_size=256
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
multi_host=True
send_arp_for_ha=true

I have created a network with:

nova-manage network create private --fixed_range_v4=192.168.100.0/24 
--multi_host=T --bridge_interface=br100


And I have set up eth1 with no IP and running in promisc mode. When I 
launch an instance, ifconfig outputs the following:



br100 Link encap:Ethernet  HWaddr 68:b5:99:c2:7b:a7
  inet addr:192.168.100.3  Bcast:192.168.100.255 
Mask:255.255.255.0

  inet6 addr: fe80::7033:eeff:fe29:81ae/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:0 (0.0 B)  TX bytes:90 (90.0 B)

eth0  Link encap:Ethernet  HWaddr 68:b5:99:c2:7b:a6
  inet addr:172.16.8.22  Bcast:172.16.8.255 Mask:255.255.255.0
  inet6 addr: fe80::6ab5:99ff:fec2:7ba6/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:4432580 errors:0 dropped:0 overruns:0 frame:0
  TX packets:4484811 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:457880509 (436.6 MiB)  TX bytes:398588034 (380.1 MiB)
  Memory:fe86-fe88

eth1  Link encap:Ethernet  HWaddr 68:b5:99:c2:7b:a7
  UP BROADCAST PROMISC MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
  Memory:fe8e-fe90

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:52577 errors:0 dropped:0 overruns:0 frame:0
  TX packets:52577 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:2737820 (2.6 MiB)  TX bytes:2737820 (2.6 MiB)

vnet0 Link encap:Ethernet  HWaddr fe:16:3e:2d:40:3b
  inet6 addr: fe80::fc16:3eff:fe2d:403b/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:500
  RX bytes:0 (0.0 B)  TX bytes:370 (370.0 B)

And brctl show:

bridge name bridge id   STP enabled interfaces
br100   8000.68b599c27ba7   no  eth1
vnet0

Which looks fine to me. However, the VM log shows that it is unable to 
get an IP through DHCP 

Re: [Openstack] Local storage and Xen with Libxl

2013-04-19 Thread Daniel P. Berrange
On Fri, Apr 19, 2013 at 01:43:23PM +0300, Cristian Tomoiaga wrote:
 As for the compute part, I may need to work with libvirt, but I want to
 avoid that if possible. Libxl was meant for stacks, right? Again, this may
 not be acceptable and I would like to know.

Nova already has two drivers which support Xen, one using XenAPI and
the other using libvirt. Libvirt itself will either use the legacy
XenD/XenStore APIs, or on new enough Xen will use libxl.

libxl is a pretty low level interface, not really targeted for direct
application usage, but rather for building management APIs like libvirt
or XCP. IMHO it would not really be appropriate for OpenStack to directly
use libxl. Given that Nova already has two virt drivers which can work
with Xen, I also don't really think there's a need to add a 3rd using
libxl.

 Regarding KVM, I did not use it until now. I don't like the fact that
 security issues pop up more often than I would like (I may be wrong?).
 There are other reasons, but they are not important to my decision.

Having worked with both Xen & KVM for 8 years now, I don't see that
either of them is really winning in terms of security issues in the
hypervisor or userspace. Both of them have had their fair share of
vulnerabilities. In terms of the device model, they both share use
of the QEMU codebase, so many vulnerabilities detected with KVM will
also apply to Xen and vice versa. So I don't think your assertion
that KVM suffers more issues is really accurate.

 Should I go with Libxl or stick to libvirt? Should I start to work on
 local storage, or has someone already started whom I should contact?

As far as Nova virt drivers for Xen are concerned, you should either
use the XenAPI driver, or the libvirt driver.
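
(For the libvirt route that is just configuration; a sketch of the relevant
Grizzly-era nova.conf lines, not a complete config:)

[DEFAULT]
compute_driver=libvirt.LibvirtDriver
libvirt_type=xen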

Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|



Re: [Openstack] Local storage and Xen with Libxl

2013-04-19 Thread Cristian Tomoiaga
Got it, thank you! I'll use libvirt then.
Regarding security with KVM and Xen, I've been reading too much, probably
from unverified sources.
I'm also planning on using Ceph, and that seems to work better with KVM for now
(again, from reading the Ceph mailing list). I will test everything in
one or two weeks. For now I only want to get some input from the community.
There is no clear winner between Xen and KVM indeed, and I'm only trying to
figure out what's best for my needs.


Re: [Openstack] FlatDHCP networking problem

2013-04-19 Thread Javier Alvarez

If anyone is interested, the problem was this bug:

https://lists.launchpad.net/openstack/msg14988.html

Javier


Re: [Openstack] Grizzly Dashboad problem...

2013-04-19 Thread Heinonen, Johanna (NSN - FI/Espoo)
Hi,

Is there any solution available to this problem?

BR
Johanna


From: openstack-bounces+johanna.heinonen=nsn@lists.launchpad.net 
[mailto:openstack-bounces+johanna.heinonen=nsn@lists.launchpad.net] On 
Behalf Of ext Martinx - ジェームズ
Sent: Wednesday, April 10, 2013 10:00 PM
To: Ritesh Nanda
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Grizzly Dashboad problem...

Okay, I'll double check it... Tks!

On 10 April 2013 13:59, Ritesh Nanda riteshnand...@gmail.com wrote:
Most probably your nova-* services are having some problem. Check whether nova 
is working properly.

On Wed, Apr 10, 2013 at 8:47 PM, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:
Here is the full apache error log after login into the Dashboard:

http://paste.openstack.org/show/35722/

What can I do?

Tks,
Thiago

On 10 April 2013 12:04, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:
Guys,

 I just installed Grizzly from UCA.

 When I try to access the Dashboard, I'm getting:

Internal Server Error

The server encountered an internal error or misconfiguration and was unable to 
complete your request.

Please contact the server administrator, webmaster@localhost and inform them of 
the time the error occurred, and anything you might have done that may have 
caused the error.

More information about this error may be available in the server error log.



 On the apache error.log:

UncompressableFileError: 'horizon/js/horizon.js' isn't accessible via 
COMPRESS_URL ('/static/') and can't be compressed, referer: 
http://10.32.14.232/horizon

The file /etc/openstack-dashboard/local_settings.py contains:

COMPRESS_OFFLINE = False

What am I doing wrong?
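
(A workaround that reportedly helps with this packaging issue, offered as an
assumption rather than a verified fix: regenerate and pre-compress the static
assets. Paths assume the Ubuntu openstack-dashboard package layout:)

cd /usr/share/openstack-dashboard
python manage.py collectstatic --noinput   # gather static files under /static/
python manage.py compress --force          # pre-compress horizon's js/css
service apache2 restart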

I've been without the Dashboard for weeks now. I thought this problem was 
solved with the stable release, but apparently it isn't stable yet...

 I appreciate any help.

Thanks!
Thiago





--

 With Regards

 Ritesh Nanda







[Openstack] Short introduction to running swift with quotas

2013-04-19 Thread Heiko Krämer
Hi Guys,

I've written a short guide to enable the quotas in Swift (1.8.0).

http://honeybutcher.de/2013/04/account-quotas-swift-1-8-0/


I hope it's helpful.
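
(For a quick taste of what the post covers: with the account_quotas middleware
enabled in the proxy pipeline, a reseller admin sets a byte quota on an account
via the X-Account-Meta-Quota-Bytes header. A sketch with placeholder token,
endpoint, and account:)

curl -i -X POST \
  -H "X-Auth-Token: $ADMIN_TOKEN" \
  -H "X-Account-Meta-Quota-Bytes: 10737418240" \
  http://127.0.0.1:8080/v1/AUTH_test    # 10 GB quota on a placeholder account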

Greetings
Heiko



Re: [Openstack] ANNOUNCE: Ultimate OpenStack Grizzly Guide, with super easy Quantum!

2013-04-19 Thread Paras pradhan
Any idea why I could not hit http://169.254.169.254/2009-04-04/instance-id ?
Here is what I am seeing in cirros .

--
Sending discover...
Sending select for 192.168.122.98...
Lease of 192.168.122.98 obtained, lease time 120
deleting routers
route: SIOCDELRT: No such process
route: SIOCADDRT: No such process
adding dns 192.168.122.1
adding dns 8.8.8.8
cirros-ds 'net' up at 4.62
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 4.79. request failed
failed 2/20: up 6.97. request failed
failed 3/20: up 9.03. request failed
failed 4/20: up 11.08. request fa

..
--

Thanks
Paras.


On Thu, Apr 11, 2013 at 7:22 AM, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 Guys!

  I just updated the *Ultimate OpenStack Grizzly
 Guide* https://gist.github.com/tmartinx/d36536b7b62a48f859c2 !

  You guys will note that this environment works with *echo 0 >
 /proc/sys/net/ipv4/ip_forward*, on *both* the controller *AND* the compute
 nodes! Take a look! I didn't touch the /etc/sysctl.conf file and it is
 working!

  I'll ask for the help of this community to finish my guide.

  On my `TODO list' I have: enable Metadata, Spice and Ceilometer.
 Volunteers?!

 Best!
 Thiago

 On 20 March 2013 19:51, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 Hi!

  I'm working with Grizzly G3+RC1 on top of Ubuntu 12.04.2 and here is the
 guide I wrote:

  Ultimate OpenStack Grizzly
 Guide https://gist.github.com/tmartinx/d36536b7b62a48f859c2

  It covers:

  * Ubuntu 12.04.2
  * Basic Ubuntu setup
  * KVM
  * OpenvSwitch
  * Name Resolution for OpenStack components;
  * LVM for Instances
  * Keystone
  * Glance
  * Quantum - Single Flat, Super Green!!
  * Nova
  * Cinder / tgt
  * Dashboard

  It is still a draft but, every time I deploy Ubuntu and Grizzly, I
 follow this little guide...

  I would like some help to improve this guide... If I'm doing something
 wrong, tell me! Please!

  Probably I'm doing something wrong, I don't know yet, but I'm seeing
 some errors in the logs, already reported here on this list. For
 example: nova-novncproxy conflicts with novnc (no VNC console for now), and
 dhcp-agent.log / auth.log point to some problems with `sudo' or the
 `rootwrap' subsystem when dealing with metadata (so it isn't working)...

  But in general, it works great!!

 Best!
 Thiago





Re: [Openstack] Local storage and Xen with Libxl

2013-04-19 Thread Jim Fehlig
Absolutely agreed, we do not want a libxl nova virt driver :).

FYI, I have not tried the libvirt libxl driver on Xen compute nodes -
all of my nodes are running the legacy xend toolstack and thus using the
legacy libvirt xen driver.  (I plan to switch these nodes to the new
toolstack in the Xen 4.3 timeframe.)  That said, the libxl driver should
work on a Xen compute node running the libxl stack.  I still haven't
finished the migration patch for the libvirt libxl driver, so migration
between libxl Xen compute nodes is not possible.

Regards,
Jim




Re: [Openstack] Multinode setup?

2013-04-19 Thread Dmitry Makovey
played with --availability-zone, so after specifying:

# cinder  create --availability-zone nova:foo.bar.com 10


I get:

# cinder show c1e4bcc1-c8aa-4bc6-93a8-88e362028f9a
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |           nova:foo.bar.com           |
|      created_at     |      2013-04-19T17:06:40.00          |
| display_description |                 None                 |
|     display_name    |                 None                 |
|          id         | c1e4bcc1-c8aa-4bc6-93a8-88e362028f9a |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|     snapshot_id     |                 None                 |
|        status       |                error                 |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

I can create volumes just fine without --availability-zone, however they are 
always created on the primary cinder node, which runs cinder-api, cinder-scheduler, 
and cinder-volume, and not on the secondary, which runs cinder-api and cinder-volume. 

I have added to /etc/cinder/cinder.conf:

iscsi_ip_prefix= 1.1.1.2

and 

iscsi_ip_prefix= 1.1.1.3


on both hosts, but I get nothing. Creation with an availability zone specified 
fails every time. 

from /var/log/cinder/scheduler.log on primary node I get:

2013-04-19 11:06:40 13525 ERROR cinder.openstack.common.rpc.amqp [-] Exception during message handling
2013-04-19 11:06:40 13525 TRACE cinder.openstack.common.rpc.amqp Traceback (most recent call last):
2013-04-19 11:06:40 13525 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 276, in _process_data
2013-04-19 11:06:40 13525 TRACE cinder.openstack.common.rpc.amqp     rval = self.proxy.dispatch(ctxt, version, method, **args)
2013-04-19 11:06:40 13525 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 145, in dispatch
2013-04-19 11:06:40 13525 TRACE cinder.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
2013-04-19 11:06:40 13525 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/scheduler/manager.py", line 98, in _schedule
2013-04-19 11:06:40 13525 TRACE cinder.openstack.common.rpc.amqp     db.volume_update(context, volume_id, {'status': 'error'})
2013-04-19 11:06:40 13525 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2013-04-19 11:06:40 13525 TRACE cinder.openstack.common.rpc.amqp     self.gen.next()
2013-04-19 11:06:40 13525 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/scheduler/manager.py", line 94, in _schedule
2013-04-19 11:06:40 13525 TRACE cinder.openstack.common.rpc.amqp     return driver_method(*args, **kwargs)
2013-04-19 11:06:40 13525 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/scheduler/simple.py", line 59, in schedule_create_volume
2013-04-19 11:06:40 13525 TRACE cinder.openstack.common.rpc.amqp     raise exception.WillNotSchedule(host=host)
2013-04-19 11:06:40 13525 TRACE cinder.openstack.common.rpc.amqp WillNotSchedule: Host foo.bar.com is not up or doesn't exist.

Does that mean I have to run Qpid on the secondary as well?
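
(As it turns out further down the thread, the MQ only needs to run once; the
secondary just has to point at the primary's broker and database. A sketch of
the relevant lines in the secondary's /etc/cinder/cinder.conf, with placeholder
hostname and credentials:)

[DEFAULT]
sql_connection = mysql://cinder:password@primary.bar.com/cinder
rpc_backend = cinder.openstack.common.rpc.impl_qpid
qpid_hostname = primary.bar.com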




From: Dmitry Makovey dmako...@yahoo.com
To: dmescherya...@mirantis.com
Cc: openstack@lists.launchpad.net
Sent: Thursday, April 18, 2013 10:37 PM
Subject: Re: [Openstack] Multinode setup?

thanks for the pointer. cinder indeed has an --availability-zone switch. I'll 
try to play with that one



Re: [Openstack] Multinode setup?

2013-04-19 Thread Dmitry Mescheryakov
Did you try running
nova-manage service list
?

It should show service status relative to the node on which you run that
command.
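
(For reference, the output looks roughly like the sketch below; a smiley under
State means the service has recently checked in, while XXX means it is
considered down. Hostnames here are placeholders:)

Binary           Host          Zone   Status     State   Updated_At
nova-compute     node-01       nova   enabled    :-)     2013-04-19 18:00:02
nova-network     node-01       nova   enabled    :-)     2013-04-19 18:00:05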




Re: [Openstack] Multinode setup?

2013-04-19 Thread Dmitry Makovey
# cinder-manage host list

host                  zone
primary.bar.com       nova
foo.bar.com           nova

however,
# nova-manage service list
only shows nova services on the primary node (since that's the only place that 
has them installed)





Re: [Openstack] Local storage and Xen with Libxl

2013-04-19 Thread Jim Fehlig
Cristian Tomoiaga wrote:
 Hi Jim,

 Thank you! I'll check libvirt in more detail to make sure nothing I
 need is missing. 
 With xend it should work. I'm planning ahead and want to deploy on
 Libxl, but for the sake of argument I will probably use both KVM
 (Daniel is to blame here :) ) and Xen with libxl while I test out
 everything. It's a good thing to see interest in libvirt. For some
 reason I thought that libvirt would move more slowly with new features
 (granted, libxl has changed from 4.1 to 4.2). I'm also bugged by
 this: https://wiki.openstack.org/wiki/LibvirtAPI

Nothing to be alarmed about IMO.  That simply provides info about some
of the many ongoing improvements and enhancements to the nova libvirt
driver, which is the most widely used driver btw, including in all the
CI gating.

Regards,
Jim




[Openstack] authorization failed with swift:swift?

2013-04-19 Thread Nan Zhu
Hi, all 

I'm a newbie to OpenStack and Swift.

I installed the Swift components according to the official document, and I can 
start the service daemons. My question is:

when I try to validate my installation with 

swift -V 2.0 -A http://192.168.2.2:5000/v2.0 -U swift:swift -K swift stat

the system always tells me "Unauthorized. Check username, password and 
tenant name/id".

I didn't change the username and password setup from the document (they are 
swift/swift in proxy-server.conf); 192.168.2.2 is my server's address.

What's wrong here? Is the admin name and password not swift:swift?
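
(One thing worth checking, sketched here with placeholder values: with keystone
auth the -U argument is tenant:user, so both the tenant and the user must exist
in keystone. The 2013-era keystone CLI can confirm what is actually there,
given the service token and admin endpoint:)

export SERVICE_TOKEN=ADMIN                              # placeholder admin token
export SERVICE_ENDPOINT=http://192.168.2.2:35357/v2.0   # keystone admin API
keystone tenant-list    # is there really a tenant named 'swift'?
keystone user-list      # is there really a user 'swift'?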

Best, 

-- 
Nan Zhu
School of Computer Science,
McGill University





Re: [Openstack] Local storage and Xen with Libxl

2013-04-19 Thread Bob Ball
It is true that most of the gating jobs are running on KVM but the smokestack 
tests also run on Xen (actually XenServer with the XenAPI driver), so there is 
CI testing for Xen and we'll be improving that through Havana as well.

Bob



Re: [Openstack] Multinode setup?

2013-04-19 Thread Daniels Cai
hi Dmitry,
Cinder services are not shown by nova-manage service list.
The MQ only needs to be installed once.

You can do the following to check whether the multi-node setup works
(a query sketch follows the list):

1. Check the cinder database in MySQL; there should be a table named services
which records all the available cinder services.
If not, please check your cinder config file, make sure the service record is
created, and do step 2.

2. tail -f /var/log/cinder/cinder-volume.log on each of your cinder hosts,
and then create as many empty cinder volumes as you can.
A log entry will be generated when a cinder-volume works.
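
(A minimal sketch of step 1, assuming the default database name and a cinder
DB user:)

mysql -u cinder -p cinder \
  -e 'SELECT host, `binary`, updated_at, disabled FROM services;'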



Sent from my iPhone


Re: [Openstack] ANNOUNCE: Ultimate OpenStack Grizzly Guide, with super easy Quantum!

2013-04-19 Thread Daniels Cai
Hi Paras,
The log says your DHCP works fine while the metadata service is not reachable.
Check the following steps (a config sketch follows the list):

1. Make sure the nova API enables the metadata service.

2. A virtual router should be created for your subnet, and this router should
be bound to an l3 agent.

3. On the l3 agent host, the metadata proxy service should be working fine.
The metadata agent config file should contain the nova API host and keystone
auth info.

4. The OVS bridge br-ex is needed on your l3 agent server even if you don't
need floating IPs.
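
(A sketch of the pieces from steps 1 and 3, with placeholder addresses and
shared secret; the option names are the Grizzly-era quantum metadata agent
ones, so treat them as assumptions for other releases:)

# /etc/quantum/metadata_agent.ini on the network node (step 3)
[DEFAULT]
auth_url = http://192.168.122.1:35357/v2.0
admin_tenant_name = service
admin_user = quantum
admin_password = password
nova_metadata_ip = 192.168.122.1
nova_metadata_port = 8775
metadata_proxy_shared_secret = secret

# /etc/nova/nova.conf on the controller (step 1)
enabled_apis = ec2,osapi_compute,metadata
service_quantum_metadata_proxy = true
quantum_metadata_proxy_shared_secret = secret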

Daniels Cai

http://dnscai.com



Re: [Openstack] [openstack-community] Q: Red Hat OpenStack Cloud Infrastructure Partner n

2013-04-19 Thread Frans Thamura
yes,

what is the positioning of companies inside openstack.org, like Dell and HP?

frans

Frans Thamura
Meruvian
On Apr 20, 2013 12:23 AM, Dave Neary dne...@redhat.com wrote:

 Hi Frans,

 I'm not sure I understand what you mean with your question. Red Hat's
 OpenStack Cloud Infrastructure Partner program is a pathway for vendors
 to certify their solutions (hardware and software) with Red Hat's
 supported OpenStack distribution.

 Thanks,
 Dave.

 On 04/18/2013 07:30 PM, Frans Thamura wrote:
  hi there
 
  I just read the press release regarding the Red Hat OpenStack Cloud
  Infrastructure Partner program,
 
  which Intel and Cisco are the first to join.
 
  What is the difference between a company joining the OpenStack Foundation
  and an implementor like Red Hat?
 
  Is there a different edition of Red Hat's OpenStack vs.
  OpenStack.org's OpenStack?
 
  And how can we work on this?
 
  Frans
 

 --
 Dave Neary - Community Action and Impact
 Open Source and Standards, Red Hat - http://community.redhat.com
 Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13



[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_quantum_trunk #45

2013-04-19 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk

General Information
  Build result:   FAILURE
  Build URL:      https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/45/
  Project:        precise_havana_quantum_trunk
  Date of build:  Fri, 19 Apr 2013 21:01:36 -0400
  Build duration: 1 min 57 sec
  Build cause:    Started by an SCM change
  Built on:       pkg-builder

Health Report
  Build stability: All recent builds failed. Score: 0

Changes
  Changed DHCPV6_PORT from 467 to 547, the correct port for DHCPv6. (by jjmb)
    edit: quantum/agent/linux/dhcp.py

Console Output
  [...truncated 3021 lines...]
  Finished at 20130419-2103
  Build needed 00:00:44, 15152k disc space
  ERROR:root:Error occurred during package creation/build: Command '['sbuild',
  '-d', 'precise-havana', '-n', '-A',
  'quantum_2013.2+git201304192101~precise-0ubuntu1.dsc']' returned non-zero
  exit status 2
  [...]
  subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana',
  '-n', '-A', 'quantum_2013.2+git201304192101~precise-0ubuntu1.dsc']' returned
  non-zero exit status 2
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
  Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_quantum_trunk #46

2013-04-19 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk

General Information
  Build result:   FAILURE
  Build URL:      https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/46/
  Project:        precise_havana_quantum_trunk
  Date of build:  Fri, 19 Apr 2013 22:31:37 -0400
  Build duration: 1 min 56 sec
  Build cause:    Started by an SCM change
  Built on:       pkg-builder

Health Report
  Build stability: All recent builds failed. Score: 0

Changes
  Imported Translations from Transifex (by Jenkins)
    edit: quantum/locale/ja/LC_MESSAGES/quantum.po
    add:  quantum/locale/ka_GE/LC_MESSAGES/quantum.po
    edit: quantum/locale/quantum.pot

Console Output
  [...truncated 3030 lines...]
  Finished at 20130419-2233
  Build needed 00:00:44, 15560k disc space
  ERROR:root:Error occurred during package creation/build: Command '['sbuild',
  '-d', 'precise-havana', '-n', '-A',
  'quantum_2013.2+git201304192231~precise-0ubuntu1.dsc']' returned non-zero
  exit status 2
  [...]
  subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana',
  '-n', '-A', 'quantum_2013.2+git201304192231~precise-0ubuntu1.dsc']' returned
  non-zero exit status 2
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
  Sending email for trigger: Failure