Re: [Openstack] error while creating instance on nfs share

2013-07-09 Thread Chathura M. Sarathchandra Magurawalage
I am using Folsom release.

I have followed the instructions on that page.


On 9 July 2013 10:57, JuanFra Rodriguez Cardoso 
juanfra.rodriguez.card...@gmail.com wrote:

 Diablo release? I'd recommend you use the latest release for live
 migrations:


 http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-live-migrations.html
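
 For reference, the shared-storage part of that guide mostly comes down to mounting the same NFS export at /var/lib/nova/instances on every compute node, plus a couple of nova.conf settings. A rough sketch (the nfs-server host and option values are illustrative; verify them against the guide for your release):

 ```ini
 # /etc/fstab on each compute node (nfs-server is a placeholder):
 #   nfs-server:/var/lib/nova/instances  /var/lib/nova/instances  nfs  defaults  0  0

 # /etc/nova/nova.conf -- every compute node must see the same instances path:
 state_path = /var/lib/nova
 instances_path = /var/lib/nova/instances

 # Folsom-era default libvirt migration flags (verify for your release):
 live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER
 ```

 Libvirtd also needs to accept remote connections (listen_tcp enabled in /etc/libvirt/libvirtd.conf), and the nova user's UID/GID must match across nodes so the shared files are readable everywhere.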

 Regards,
 ---
 JuanFra


 2013/7/8 Chathura M. Sarathchandra Magurawalage 77.chath...@gmail.com

 Hello all,

 I followed the OpenStack instructions for enabling VM migration
 (http://docs.openstack.org/diablo/openstack-compute/admin/content/configuring-live-migrations.html).

 But I cannot get it working: when I create an instance, I get the
 following error on the compute node.

 2013-07-08 17:01:34 9329 ERROR nova.openstack.common.rpc.amqp [-]
 Exception during message handling
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp Traceback
 (most recent call last):
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line
 276, in _process_data
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp rval =
 self.proxy.dispatch(ctxt, version, method, **args)
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py,
 line 145, in dispatch
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp return
 getattr(proxyobj, method)(ctxt, **kwargs)
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/exception.py, line 117, in wrapped
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
 temp_level, payload)
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/contextlib.py, line 24, in __exit__
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
 self.gen.next()
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/exception.py, line 92, in wrapped
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp return
 f(*args, **kw)
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 176, in
 decorated_function
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp pass
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/contextlib.py, line 24, in __exit__
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
 self.gen.next()
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 162, in
 decorated_function
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp return
 function(self, context, *args, **kwargs)
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 197, in
 decorated_function
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
 kwargs['instance']['uuid'], e, sys.exc_info())
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/contextlib.py, line 24, in __exit__
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
 self.gen.next()
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 191, in
 decorated_function
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp return
 function(self, context, *args, **kwargs)
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 839, in
 run_instance
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
 do_run_instance()
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/utils.py, line 803, in inner
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp retval
 = f(*args, **kwargs)
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 838, in
 do_run_instance
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
 admin_password, is_first_time, instance)
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 529, in
 _run_instance
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
 self._set_instance_error_state(context, instance['uuid'])
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/contextlib.py, line 24, in __exit__
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
 self.gen.next()
 2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
 /usr/lib/python2.7/dist-packages/nova

[Openstack] error while creating instance on nfs share

2013-07-08 Thread Chathura M. Sarathchandra Magurawalage
Hello all,

I followed the OpenStack instructions for enabling VM migration
(http://docs.openstack.org/diablo/openstack-compute/admin/content/configuring-live-migrations.html).

But I cannot get it working: when I create an instance, I get the
following error on the compute node.

2013-07-08 17:01:34 9329 ERROR nova.openstack.common.rpc.amqp [-] Exception
during message handling
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp Traceback
(most recent call last):
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line
276, in _process_data
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp rval =
self.proxy.dispatch(ctxt, version, method, **args)
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py,
line 145, in dispatch
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp return
getattr(proxyobj, method)(ctxt, **kwargs)
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/exception.py, line 117, in wrapped
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
temp_level, payload)
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/contextlib.py, line 24, in __exit__
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
self.gen.next()
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/exception.py, line 92, in wrapped
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp return
f(*args, **kw)
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 176, in
decorated_function
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp pass
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/contextlib.py, line 24, in __exit__
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
self.gen.next()
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 162, in
decorated_function
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp return
function(self, context, *args, **kwargs)
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 197, in
decorated_function
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
kwargs['instance']['uuid'], e, sys.exc_info())
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/contextlib.py, line 24, in __exit__
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
self.gen.next()
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 191, in
decorated_function
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp return
function(self, context, *args, **kwargs)
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 839, in
run_instance
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
do_run_instance()
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/utils.py, line 803, in inner
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp retval =
f(*args, **kwargs)
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 838, in
do_run_instance
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
admin_password, is_first_time, instance)
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 529, in
_run_instance
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
self._set_instance_error_state(context, instance['uuid'])
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/contextlib.py, line 24, in __exit__
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
self.gen.next()
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 517, in
_run_instance
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
is_first_time, request_spec, filter_properties)
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 503, in
_run_instance
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp
injected_files, admin_password)
2013-07-08 17:01:34 9329 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, 

[Openstack] wget: can't connect to remote host (169.254.169.254): No route to host, but can ping and ssh

2013-05-24 Thread Chathura M. Sarathchandra Magurawalage
Hello everyone,

I am having a problem with my metadata service. My VM's boot log is as
follows. As you can see, it cannot find a route to the metadata service
(169.254.169.254). But I can ssh and ping the VMs from the network node,
even though, as far as I know, the metadata must reach the virtual
machines before you can ping or ssh into them.

I cannot ping or ssh into the VMs from the controller or compute node.

Can anyone help?
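
A "no route to host" for 169.254.169.254 usually means the DNAT rule that the Quantum L3 agent installs for the metadata address is missing or points at the wrong host. A sketch of the Folsom-era settings involved (option names quoted from memory; treat them as assumptions to verify against your release's config files):

```ini
# /etc/quantum/l3_agent.ini on the network node (illustrative):
# the L3 agent DNATs 169.254.169.254:80 to this host/port
metadata_ip = 192.168.2.225   ; placeholder: node running the nova metadata API
metadata_port = 8775

# /etc/nova/nova.conf on the controller: the metadata API must be enabled
enabled_apis = ec2,osapi_compute,metadata
```

On the network node you can confirm the rule is present with `iptables -t nat -L -n | grep 169.254`.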


[0.018022] Initializing cgroup subsys cpuacct
[0.020014] Initializing cgroup subsys memory
[0.021322] Initializing cgroup subsys devices
[0.022629] Initializing cgroup subsys freezer
[0.024007] Initializing cgroup subsys net_cls
[0.025310] Initializing cgroup subsys blkio
[0.026587] Initializing cgroup subsys perf_event
[0.028143] mce: CPU supports 10 MCE banks
[0.029686] SMP alternatives: switching to UP code
[0.132200] Freeing SMP alternatives: 24k freed
[0.133648] ACPI: Core revision 20110413
[0.135949] ftrace: allocating 26075 entries in 103 pages
[0.144172] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[0.145806] CPU0: Intel(R) Xeon(R) CPU   X5355  @ 2.66GHz stepping 07
[0.148008] Performance Events: unsupported p6 CPU model 15 no PMU
driver, software events only.
[0.148008] Brought up 1 CPUs
[0.148012] Total of 1 processors activated (5320.05 BogoMIPS).
[0.150609] devtmpfs: initialized
[0.154178] print_constraints: dummy:
[0.155400] Time: 13:02:05  Date: 05/24/13
[0.156087] NET: Registered protocol family 16
[0.157512] ACPI: bus type pci registered
[0.158819] PCI: Using configuration type 1 for base access
[0.160810] bio: create slab bio-0 at 0
[0.165652] ACPI: Interpreter enabled
[0.166763] ACPI: (supports S0 S3 S4 S5)
[0.168414] ACPI: Using IOAPIC for interrupt routing
[0.173485] ACPI: No dock devices found.
[0.174646] HEST: Table not found.
[0.175698] PCI: Ignoring host bridge windows from ACPI; if
necessary, use pci=use_crs and report a bug
[0.176031] ACPI: PCI Root Bridge [PCI0] (domain  [bus 00-ff])
[0.183300] pci :00:01.3: quirk: [io  0xb000-0xb03f] claimed by
PIIX4 ACPI
[0.184026] pci :00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
[0.201264]  pci:00: Unable to request _OSC control (_OSC
support mask: 0x1e)
[0.207189] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[0.209145] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[0.211331] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[0.213126] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[0.215291] ACPI: PCI Interrupt Link [LNKS] (IRQs 9) *0
[0.217204] vgaarb: device added:
PCI::00:02.0,decodes=io+mem,owns=io+mem,locks=none
[0.220016] vgaarb: loaded
[0.220931] vgaarb: bridge control possible :00:02.0
[0.222716] SCSI subsystem initialized
[0.224363] usbcore: registered new interface driver usbfs
[0.225873] usbcore: registered new interface driver hub
[0.227381] usbcore: registered new device driver usb
[0.228226] PCI: Using ACPI for IRQ routing
[0.229889] NetLabel: Initializing
[0.232021] NetLabel:  domain hash size = 128
[0.233289] NetLabel:  protocols = UNLABELED CIPSOv4
[0.234683] NetLabel:  unlabeled traffic allowed by default
[0.236101] HPET: 3 timers in total, 0 timers will be used for per-cpu timer
[0.237936] hpet0: at MMIO 0xfed0, IRQs 2, 8, 0
[0.240017] hpet0: 3 comparators, 64-bit 100.00 MHz counter
[0.248162] Switching to clocksource kvm-clock
[0.250575] Switched to NOHz mode on CPU #0
[0.260176] AppArmor: AppArmor Filesystem Enabled
[0.261587] pnp: PnP ACPI init
[0.262592] ACPI: bus type pnp registered
[0.264716] pnp: PnP ACPI: found 8 devices
[0.265919] ACPI: ACPI bus type pnp unregistered
[0.273657] NET: Registered protocol family 2
[0.275023] IP route cache hash table entries: 4096 (order: 3, 32768 bytes)
[0.277174] TCP established hash table entries: 16384 (order: 6,
262144 bytes)
[0.279627] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
[0.281735] TCP: Hash tables configured (established 16384 bind 16384)
[0.283454] TCP reno registered
[0.284478] UDP hash table entries: 256 (order: 1, 8192 bytes)
[0.286046] UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
[0.287800] NET: Registered protocol family 1
[0.289098] pci :00:00.0: Limiting direct PCI/PCI transfers
[0.290692] pci :00:01.0: PIIX3: Enabling Passive Release
[0.292282] pci :00:01.0: Activating ISA DMA hang workarounds
[0.294366] audit: initializing netlink socket (disabled)
[0.295853] type=2000 audit(1369400527.292:1): initialized
[0.315902] Trying to unpack rootfs image as initramfs...
[0.332248] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[0.340281] VFS: Disk quotas dquot_6.5.2
[0.341541] Dquot-cache 

Re: [Openstack] wget: can't connect to remote host (169.254.169.254): No route to host, but can ping and ssh

2013-05-24 Thread Chathura M. Sarathchandra Magurawalage
Now I have spawned an Ubuntu VM, and I cannot ssh into it because the
security group rules and public keys have not been injected, but I can
still ping it from the network node.

On 24 May 2013 14:28, Chathura M. Sarathchandra Magurawalage 
77.chath...@gmail.com wrote:

 Hello everyone,

 I am having a problem with my metadata service. My VM's boot log is as
 follows. As you can see, it cannot find a route to the metadata service
 (169.254.169.254). But I can ssh and ping the VMs from the network node,
 even though, as far as I know, the metadata must reach the virtual
 machines before you can ping or ssh into them.

 I cannot ping or ssh into the VMs from the controller or compute node.

 Can anyone help?


 [0.018022] Initializing cgroup subsys cpuacct
 [0.020014] Initializing cgroup subsys memory
 [0.021322] Initializing cgroup subsys devices
 [0.022629] Initializing cgroup subsys freezer
 [0.024007] Initializing cgroup subsys net_cls
 [0.025310] Initializing cgroup subsys blkio
 [0.026587] Initializing cgroup subsys perf_event
 [0.028143] mce: CPU supports 10 MCE banks
 [0.029686] SMP alternatives: switching to UP code
 [0.132200] Freeing SMP alternatives: 24k freed
 [0.133648] ACPI: Core revision 20110413
 [0.135949] ftrace: allocating 26075 entries in 103 pages
 [0.144172] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
 [0.145806] CPU0: Intel(R) Xeon(R) CPU   X5355  @ 2.66GHz stepping 
 07
 [0.148008] Performance Events: unsupported p6 CPU model 15 no PMU driver, 
 software events only.
 [0.148008] Brought up 1 CPUs
 [0.148012] Total of 1 processors activated (5320.05 BogoMIPS).
 [0.150609] devtmpfs: initialized
 [0.154178] print_constraints: dummy:
 [0.155400] Time: 13:02:05  Date: 05/24/13
 [0.156087] NET: Registered protocol family 16
 [0.157512] ACPI: bus type pci registered
 [0.158819] PCI: Using configuration type 1 for base access
 [0.160810] bio: create slab bio-0 at 0
 [0.165652] ACPI: Interpreter enabled
 [0.166763] ACPI: (supports S0 S3 S4 S5)
 [0.168414] ACPI: Using IOAPIC for interrupt routing
 [0.173485] ACPI: No dock devices found.
 [0.174646] HEST: Table not found.
 [0.175698] PCI: Ignoring host bridge windows from ACPI; if necessary, use 
 pci=use_crs and report a bug
 [0.176031] ACPI: PCI Root Bridge [PCI0] (domain  [bus 00-ff])
 [0.183300] pci :00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 
 ACPI
 [0.184026] pci :00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 
 SMB
 [0.201264]  pci:00: Unable to request _OSC control (_OSC support 
 mask: 0x1e)
 [0.207189] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
 [0.209145] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
 [0.211331] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
 [0.213126] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
 [0.215291] ACPI: PCI Interrupt Link [LNKS] (IRQs 9) *0
 [0.217204] vgaarb: device added: 
 PCI::00:02.0,decodes=io+mem,owns=io+mem,locks=none
 [0.220016] vgaarb: loaded
 [0.220931] vgaarb: bridge control possible :00:02.0
 [0.222716] SCSI subsystem initialized
 [0.224363] usbcore: registered new interface driver usbfs
 [0.225873] usbcore: registered new interface driver hub
 [0.227381] usbcore: registered new device driver usb
 [0.228226] PCI: Using ACPI for IRQ routing
 [0.229889] NetLabel: Initializing
 [0.232021] NetLabel:  domain hash size = 128
 [0.233289] NetLabel:  protocols = UNLABELED CIPSOv4
 [0.234683] NetLabel:  unlabeled traffic allowed by default
 [0.236101] HPET: 3 timers in total, 0 timers will be used for per-cpu 
 timer
 [0.237936] hpet0: at MMIO 0xfed0, IRQs 2, 8, 0
 [0.240017] hpet0: 3 comparators, 64-bit 100.00 MHz counter
 [0.248162] Switching to clocksource kvm-clock
 [0.250575] Switched to NOHz mode on CPU #0
 [0.260176] AppArmor: AppArmor Filesystem Enabled
 [0.261587] pnp: PnP ACPI init
 [0.262592] ACPI: bus type pnp registered
 [0.264716] pnp: PnP ACPI: found 8 devices
 [0.265919] ACPI: ACPI bus type pnp unregistered
 [0.273657] NET: Registered protocol family 2
 [0.275023] IP route cache hash table entries: 4096 (order: 3, 32768 bytes)
 [0.277174] TCP established hash table entries: 16384 (order: 6, 262144 
 bytes)
 [0.279627] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
 [0.281735] TCP: Hash tables configured (established 16384 bind 16384)
 [0.283454] TCP reno registered
 [0.284478] UDP hash table entries: 256 (order: 1, 8192 bytes)
 [0.286046] UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
 [0.287800] NET: Registered protocol family 1
 [0.289098] pci :00:00.0: Limiting direct PCI/PCI transfers
 [0.290692] pci :00:01.0: PIIX3: Enabling Passive Release
 [0.292282] pci :00:01.0

[Openstack] not able to create VMs with member user but only with admin

2013-05-19 Thread Chathura M. Sarathchandra Magurawalage
I have just done a fresh multi-node OpenStack installation. I cannot
create VMs as a Member user. The VMs end up in the ERROR state with the
following error.

u'message': u'ProcessExecutionError', u'code': 500, u'created':
u'2013-05-19T10:56:58Z'

Any help would be appreciated.
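
The ProcessExecutionError in the API response is only a summary; the failed command and its stderr are recorded in the compute node's log. A small illustrative helper (the function name is mine) to pull the surrounding traceback out of a log file:

```shell
# Print 15 lines of context after each ProcessExecutionError in a nova log,
# so the actual failed command and its stderr become visible.
show_process_exec_error() {
    # $1: path to the log file, e.g. /var/log/nova/nova-compute.log
    grep -A 15 "ProcessExecutionError" "$1"
}

# Typical use on the compute node:
#   show_process_exec_error /var/log/nova/nova-compute.log
```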
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Allocating dynamic IP to the VMs

2013-05-12 Thread Chathura M. Sarathchandra Magurawalage
Any ideas, anyone?

On 11 May 2013 10:37, Chathura M. Sarathchandra Magurawalage 
77.chath...@gmail.com wrote:

 Hello Sylvain,

 I am sorry I got caught up with another project in the past few weeks,
 hence my late reply. Please bear with me.

 The floating IP is bound to the qg-550803ee-ce interface.

 root@controller:~# ip a
 1: lo: LOOPBACK,UP,LOWER_UP mtu 16436 qdisc noqueue state UNKNOWN
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
 2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc mq state UP qlen
 1000
 link/ether d4:ae:52:bb:aa:20 brd ff:ff:ff:ff:ff:ff
 inet 192.168.2.225/24 brd 192.168.2.255 scope global eth0
 inet6 fe80::d6ae:52ff:febb:aa20/64 scope link
valid_lft forever preferred_lft forever
 3: eth1: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc mq state UP qlen
 1000
 link/ether d4:ae:52:bb:aa:21 brd ff:ff:ff:ff:ff:ff
 inet 10.10.10.1/24 brd 10.10.10.255 scope global eth1
 inet6 fe80::d6ae:52ff:febb:aa21/64 scope link
valid_lft forever preferred_lft forever
  31: br-int: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN
 link/ether e6:fe:c6:5e:73:47 brd ff:ff:ff:ff:ff:ff
 32: br-ex: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue state
 UNKNOWN
 link/ether f6:26:0f:0d:32:45 brd ff:ff:ff:ff:ff:ff
 inet6 fe80::f426:fff:fe0d:3245/64 scope link
valid_lft forever preferred_lft forever
 33: tapd648cfe0-f6: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc
 noqueue state UNKNOWN
 link/ether fa:16:3e:9a:2c:29 brd ff:ff:ff:ff:ff:ff
 inet 10.5.5.2/24 brd 10.5.5.255 scope global tapd648cfe0-f6
 inet6 fe80::f816:3eff:fe9a:2c29/64 scope link
valid_lft forever preferred_lft forever
 34: br-tun: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN
 link/ether 5e:2b:f9:ca:c6:40 brd ff:ff:ff:ff:ff:ff
 35: qr-9e818b07-92: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc
 noqueue state UNKNOWN
 link/ether fa:16:3e:01:48:1d brd ff:ff:ff:ff:ff:ff
 inet 10.5.5.1/24 brd 10.5.5.255 scope global qr-9e818b07-92
 inet6 fe80::f816:3eff:fe01:481d/64 scope link
valid_lft forever preferred_lft forever
 36: qg-550803ee-ce: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc
 noqueue state UNKNOWN
 link/ether fa:16:3e:fc:87:1c brd ff:ff:ff:ff:ff:ff
 inet 192.168.2.151/24 brd 192.168.2.255 scope global qg-550803ee-ce
 inet 192.168.2.152/32 brd 192.168.2.152 scope global qg-550803ee-ce
 inet6 fe80::f816:3eff:fefc:871c/64 scope link
valid_lft forever preferred_lft forever

 But I cannot ping the floating IP.

 root@controller:~# quantum net-list -- --router:external True
 +--------------------------------------+---------+--------------------------------------+
 | id                                   | name    | subnets                              |
 +--------------------------------------+---------+--------------------------------------+
 | a83c3409-6c79-4bb7-9557-010f3b56024f | ext_net | fb3439f4-2afa-4cdc-86a6-aaee2ce1a3a3 |
 +--------------------------------------+---------+--------------------------------------+

 root@controller:~# quantum floatingip-list
 +--------------------------------------+------------------+---------------------+--------------------------------------+
 | id                                   | fixed_ip_address | floating_ip_address | port_id                              |
 +--------------------------------------+------------------+---------------------+--------------------------------------+
 | 46311a66-5793-43af-be74-612580e505ca | 10.5.5.3         | 192.168.2.152       | 529f6c56-8037-489a-a120-b97675d1745f |
 +--------------------------------------+------------------+---------------------+--------------------------------------+

 Any idea?


 On 25 March 2013 16:09, Sylvain Bauza sylvain.ba...@digimind.com wrote:

  On 25/03/2013 16:00, Chathura M. Sarathchandra Magurawalage wrote:


 I cannot see anything going through the qg- interface.


 Your iptables rules seem correct.
 Could you please run 'ip a' and make sure the floating IP is bound to the qg- interface?

 Are you sure that floating IPs are working properly on your setup?

 -Sylvain





Re: [Openstack] Allocating dynamic IP to the VMs

2013-05-11 Thread Chathura M. Sarathchandra Magurawalage
Hello Sylvain,

I am sorry I got caught up with another project in the past few weeks,
hence my late reply. Please bear with me.

The floating IP is bound to the qg-550803ee-ce interface.

root@controller:~# ip a
1: lo: LOOPBACK,UP,LOWER_UP mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc mq state UP qlen
1000
link/ether d4:ae:52:bb:aa:20 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.225/24 brd 192.168.2.255 scope global eth0
inet6 fe80::d6ae:52ff:febb:aa20/64 scope link
   valid_lft forever preferred_lft forever
3: eth1: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc mq state UP qlen
1000
link/ether d4:ae:52:bb:aa:21 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.1/24 brd 10.10.10.255 scope global eth1
inet6 fe80::d6ae:52ff:febb:aa21/64 scope link
   valid_lft forever preferred_lft forever
 31: br-int: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN
link/ether e6:fe:c6:5e:73:47 brd ff:ff:ff:ff:ff:ff
32: br-ex: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue state
UNKNOWN
link/ether f6:26:0f:0d:32:45 brd ff:ff:ff:ff:ff:ff
inet6 fe80::f426:fff:fe0d:3245/64 scope link
   valid_lft forever preferred_lft forever
33: tapd648cfe0-f6: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc
noqueue state UNKNOWN
link/ether fa:16:3e:9a:2c:29 brd ff:ff:ff:ff:ff:ff
inet 10.5.5.2/24 brd 10.5.5.255 scope global tapd648cfe0-f6
inet6 fe80::f816:3eff:fe9a:2c29/64 scope link
   valid_lft forever preferred_lft forever
34: br-tun: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN
link/ether 5e:2b:f9:ca:c6:40 brd ff:ff:ff:ff:ff:ff
35: qr-9e818b07-92: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc
noqueue state UNKNOWN
link/ether fa:16:3e:01:48:1d brd ff:ff:ff:ff:ff:ff
inet 10.5.5.1/24 brd 10.5.5.255 scope global qr-9e818b07-92
inet6 fe80::f816:3eff:fe01:481d/64 scope link
   valid_lft forever preferred_lft forever
36: qg-550803ee-ce: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc
noqueue state UNKNOWN
link/ether fa:16:3e:fc:87:1c brd ff:ff:ff:ff:ff:ff
inet 192.168.2.151/24 brd 192.168.2.255 scope global qg-550803ee-ce
inet 192.168.2.152/32 brd 192.168.2.152 scope global qg-550803ee-ce
inet6 fe80::f816:3eff:fefc:871c/64 scope link
   valid_lft forever preferred_lft forever

But I cannot ping the floating IP.

root@controller:~# quantum net-list -- --router:external True
+--------------------------------------+---------+--------------------------------------+
| id                                   | name    | subnets                              |
+--------------------------------------+---------+--------------------------------------+
| a83c3409-6c79-4bb7-9557-010f3b56024f | ext_net | fb3439f4-2afa-4cdc-86a6-aaee2ce1a3a3 |
+--------------------------------------+---------+--------------------------------------+

root@controller:~# quantum floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 46311a66-5793-43af-be74-612580e505ca | 10.5.5.3         | 192.168.2.152       | 529f6c56-8037-489a-a120-b97675d1745f |
+--------------------------------------+------------------+---------------------+--------------------------------------+

Any idea?


On 25 March 2013 16:09, Sylvain Bauza sylvain.ba...@digimind.com wrote:

  On 25/03/2013 16:00, Chathura M. Sarathchandra Magurawalage wrote:


 I cannot see anything going through the qg- interface.


 Your iptables rules seem correct.
 Could you please run 'ip a' and make sure the floating IP is bound to the qg- interface?

 Are you sure that floating IPs are working properly on your setup?

 -Sylvain



[Openstack] novnc not working (No such RPC function validate_console_port) - fix

2013-03-26 Thread Chathura M. Sarathchandra Magurawalage
Hello,

I get a "Failed to connect to server (code: 1006)" error message on the
dashboard when I try to view the VNC console.

I get the following error on the controller, in
/var/log/nova/nova-consoleauth.log:

2013-03-26 16:55:49 16471 ERROR nova.openstack.common.rpc.amqp [-]
Exception during message handling
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp Traceback
(most recent call last):
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line
276, in _process_data
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp rval =
self.proxy.dispatch(ctxt, version, method, **args)
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py,
line 145, in dispatch
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp return
getattr(proxyobj, method)(ctxt, **kwargs)
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/consoleauth/manager.py, line 107,
in check_token
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp if
self._validate_token(context, token):
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/consoleauth/manager.py, line 99, in
_validate_token
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp
token['console_type'])
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/compute/rpcapi.py, line 267, in
validate_console_port
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp None,
instance))
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/proxy.py, line
80, in call
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp return
rpc.call(context, self._get_topic(topic), msg, timeout)
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/__init__.py,
line 108, in call
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp     return _get_impl().call(cfg.CONF, context, topic, msg, timeout)
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", line 718, in call
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp     rpc_amqp.get_connection_pool(conf, Connection))
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 369, in call
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp     rv = list(rv)
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 337, in __iter__
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp     raise result
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp RemoteError: Remote error: AttributeError No such RPC function 'validate_console_port'
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 276, in _process_data\n    rval = self.proxy.dispatch(ctxt, version, method, **args)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", line 148, in dispatch\n    raise AttributeError("No such RPC function \'%s\'" % method)\n', u"AttributeError: No such RPC function 'validate_console_port'\n"].
2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp
2013-03-26 16:55:49 16471 ERROR nova.openstack.common.rpc.common [-] Returning exception Remote error: AttributeError No such RPC function 'validate_console_port'
[u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 276, in _process_data\n    rval = self.proxy.dispatch(ctxt, version, method, **args)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", line 148, in dispatch\n    raise AttributeError("No such RPC function \'%s\'" % method)\n', u"AttributeError: No such RPC function 'validate_console_port'\n"]. to caller
2013-03-26 16:55:49 16471 ERROR nova.openstack.common.rpc.common [-] ['Traceback (most recent call last):\n', '  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 276, in _process_data\n    rval = self.proxy.dispatch(ctxt, version, method, **args)\n', '  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", line 145, in dispatch\n    return getattr(proxyobj, method)(ctxt, **kwargs)\n', '  File "/usr/lib/python2.7/dist-packages/nova/consoleauth/manager.py", line 107, in check_token\n    if 
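This RemoteError usually means the service answering the RPC call (here nova-consoleauth) is running an older RPC API than the caller expects, since validate_console_port only exists from Folsom onward; comparing the installed nova package versions on every node (e.g. with dpkg -l | grep nova) is a reasonable first check. As a small, purely illustrative helper (not part of nova), the missing method name can be pulled out of such a log line like this:

```shell
# Hypothetical helper: extract the missing RPC method name from a nova log line.
line="2013-03-26 16:55:49 16471 TRACE nova.openstack.common.rpc.amqp RemoteError: Remote error: AttributeError No such RPC function 'validate_console_port'"
method=$(printf '%s\n' "$line" | sed -n "s/.*No such RPC function '\([^']*\)'.*/\1/p")
echo "$method"   # prints: validate_console_port
```

Grepping all node logs for that method name quickly shows which service is answering with the stale API.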

Re: [Openstack] Allocating dynamic IP to the VMs

2013-03-25 Thread Chathura M. Sarathchandra Magurawalage
Thanks Sylvain,

I will check and get back to you on this.

I have got one question on this. Does quantum directly request leases from
the gateway of the physical network before reserving them to allocate to
VMs?


On 25 March 2013 10:53, Sylvain Bauza sylvain.ba...@digimind.com wrote:

  The basic troubleshooting steps for L3 mapping are:
 1. make sure your DNAT/SNAT entries have been populated correctly (using
 'iptables -t nat -L -n')
 2. monitor your qg-XXX interface to make sure SNAT is working properly
 (using 'tcpdump -i qg-XXX -nn') and check that you actually see *two*
 TCP requests with the same id (the first one with the private IP, the
 second one with the public IP)
 3. make sure you activated ip_forward in /etc/sysctl.conf and either
 reboot or 'sysctl -w' the value (and restart quantum-l3-agent in this case)


 If these 3 steps are OK, then you have a gateway issue, not related to
 Quantum.
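The three checks above can be sketched as commands; they must run as root on the network node, and qg-XXXX below is a placeholder for the router's real gateway port name (visible with 'ip link'), so they are only defined here rather than executed:

```shell
# Sketch of the three L3 checks; run as root on the network node.
# qg-XXXX is a placeholder for the actual qg- port name (see 'ip link').
l3_checks() {
    iptables -t nat -L -n            # 1. DNAT/SNAT entries populated?
    tcpdump -i qg-XXXX -nn tcp       # 2. same-id requests: private IP, then public IP
    sysctl net.ipv4.ip_forward       # 3. should print: net.ipv4.ip_forward = 1
}
```

Invoke l3_checks interactively and stop tcpdump with Ctrl-C once a flow has been captured.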

 -Sylvain

 Le 24/03/2013 15:49, Chathura M. Sarathchandra Magurawalage a écrit :

 Thanks Sylvain,

   I have tried this, but it does not seem to work. I can allocate the
  floating IP to the VM, but it is not accessible from the physical network;
  I cannot ping it from the controller or any other physical node.

  Any idea?

 On 19 March 2013 16:14, Sylvain Bauza sylvain.ba...@digimind.com wrote:

   As per
  http://docs.openstack.org/folsom/openstack-network/admin/content/demo_logical_network_config.html
  but slightly modified as per CLI help:

  quantum net-create ext_net --tenant-id $TENANT_ID --router:external=True
  quantum subnet-create --ip_version 4 \
      --allocation-pool start=192.168.2.151,end=192.168.2.240 \
      --gateway 192.168.2.253 id_of_ext_net 192.168.2.0/24 -- --enable_dhcp=False


  It will create the ext_net subnet with a preallocated IP range. For each VM,
  allocate a floating IP from this pool and then associate it with the
  internal port.

  Hope it helps,
 -Sylvain

 Le 19/03/2013 13:44, Chathura M. Sarathchandra Magurawalage a écrit :

 Thanks.

   It's 192.168.2.0/24.

   Free IP range: 192.168.2.151 - 192.168.2.240

   GW/DHCP server: 192.168.2.253


 On 19 March 2013 08:28, Sylvain Bauza sylvain.ba...@digimind.com wrote:

   In that case, please refer to my previous e-mail: use floating IPs
  bound to the same physical network.
  It's up to you to know which IP pools are available inside your
  network. Once you get one, create an external Quantum subnet defined with
  this IP range.

  Sorry, I have the feeling I'm explaining this again and again. If you still
  don't catch the point, could you please tell me your physical net/CIDR,
  your free IP range and your gateway, and I'll work out the command for you.

 -Sylvain

 Le 18/03/2013 18:02, Chathura M. Sarathchandra Magurawalage a écrit :

  Thanks Sylvain,

   There must be a way of doing this without having to touch the default
  gateway of my physical network? Even if I have to, I do not want to change
  anything on the physical gateway. All I need is a way to let the VMs get a
  dynamic IP from the physical network. How can I do this? For example, this
  can be done in VirtualBox using a bridged adapter, which maps the VM into
  the physical network.

 On 18 March 2013 16:05, Sylvain Bauza sylvain.ba...@digimind.comwrote:

   Could you please tell me your physical network CIDR?
  Anyway, what you need does not require having a floating IP pool inside
  the same network; you can also play with static routing: if your physical
  host does have a default gw, you can create a static route from this gw to
  the VM network gateway. And on the VM network gateway, do the same...

 -Sylvain

 Le 18/03/2013 16:53, Chathura M. Sarathchandra Magurawalage a écrit :

   Hey Sylvain,

  Basically what I need is to have the VMs mapped to my physical
 network so that my physical hosts can directly access the VMs. How can I do
 this?

  Thanks.


 On 18 March 2013 15:50, Sylvain Bauza sylvain.ba...@digimind.comwrote:

  Hi,

  I don't quite understand your use case. If you have a 192.168.1.0/24
  network for management, you can also assign an external network with
  Quantum based on the same subnet (ie. 192.168.1.0/24).
  When creating a floating IP pool, Quantum requires at least 3 things:
   - the CIDR
   - the beginning and ending IPs
   - the external gateway

  So, based on what I previously said, you only need to create a
  192.168.1.0/24 network in Quantum with .1-.100 (for example) as the range,
  .254 being the external gateway.

 Thanks,
 -Sylvain

 Le 18/03/2013 16:29, Chathura M. Sarathchandra Magurawalage a écrit :

  anyone?

 On 17 March 2013 21:33, Chathura M. Sarathchandra Magurawalage 
 77.chath...@gmail.com wrote:

  After reading a little bit more, I think I have found what I need: it is
  a provider network that I need for the VMs, so that they can get access
  to the other resources in my main network (such as other physical hosts
  that are connected to the same network).

   My question is, is it possible to do this alongside the use case
  that I have

Re: [Openstack] Allocating dynamic IP to the VMs

2013-03-25 Thread Chathura M. Sarathchandra Magurawalage
Thanks.

root@controller:~# iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target     prot opt source       destination
quantum-l3-agent-PREROUTING  all  --  0.0.0.0/0  0.0.0.0/0
nova-api-PREROUTING  all  --  0.0.0.0/0  0.0.0.0/0

Chain INPUT (policy ACCEPT)
target     prot opt source       destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source       destination
quantum-l3-agent-OUTPUT  all  --  0.0.0.0/0  0.0.0.0/0
nova-api-OUTPUT  all  --  0.0.0.0/0  0.0.0.0/0

Chain POSTROUTING (policy ACCEPT)
target     prot opt source       destination
quantum-l3-agent-POSTROUTING  all  --  0.0.0.0/0  0.0.0.0/0
nova-api-POSTROUTING  all  --  0.0.0.0/0  0.0.0.0/0
quantum-postrouting-bottom  all  --  0.0.0.0/0  0.0.0.0/0
nova-postrouting-bottom  all  --  0.0.0.0/0  0.0.0.0/0

Chain nova-api-OUTPUT (1 references)
target     prot opt source       destination

Chain nova-api-POSTROUTING (1 references)
target     prot opt source       destination

Chain nova-api-PREROUTING (1 references)
target     prot opt source       destination

Chain nova-api-float-snat (1 references)
target     prot opt source       destination

Chain nova-api-snat (1 references)
target     prot opt source       destination
nova-api-float-snat  all  --  0.0.0.0/0  0.0.0.0/0

Chain nova-postrouting-bottom (1 references)
target     prot opt source       destination
nova-api-snat  all  --  0.0.0.0/0  0.0.0.0/0

Chain quantum-l3-agent-OUTPUT (1 references)
target     prot opt source       destination
DNAT       all  --  0.0.0.0/0    192.168.2.152    to:10.5.5.3

Chain quantum-l3-agent-POSTROUTING (1 references)
target     prot opt source       destination
ACCEPT     all  --  0.0.0.0/0    0.0.0.0/0        ! ctstate DNAT
ACCEPT     all  --  10.5.5.0/24  192.168.2.225

Chain quantum-l3-agent-PREROUTING (1 references)
target     prot opt source       destination
DNAT       tcp  --  0.0.0.0/0    169.254.169.254  tcp dpt:80 to:192.168.2.225:8775
DNAT       all  --  0.0.0.0/0    192.168.2.152    to:10.5.5.3

Chain quantum-l3-agent-float-snat (1 references)
target     prot opt source       destination
SNAT       all  --  10.5.5.3     0.0.0.0/0        to:192.168.2.152

Chain quantum-l3-agent-snat (1 references)
target     prot opt source       destination
quantum-l3-agent-float-snat  all  --  0.0.0.0/0  0.0.0.0/0
SNAT       all  --  10.5.5.0/24  0.0.0.0/0        to:192.168.2.151

Chain quantum-postrouting-bottom (1 references)
target     prot opt source       destination
quantum-l3-agent-snat  all  --  0.0.0.0/0  0.0.0.0/0

I cannot see anything going through the qg- interface.

I have activated net.ipv4.ip_forward in /etc/sysctl.conf.
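Note that an entry in /etc/sysctl.conf only takes effect at boot (or on 'sysctl -p'), so it is worth confirming that forwarding is actually live right now:

```shell
# Is IP forwarding active at this moment? 1 = yes, 0 = no.
cat /proc/sys/net/ipv4/ip_forward

# To enable it immediately without rebooting (as root), one would run:
#   sysctl -w net.ipv4.ip_forward=1
# and keep it persistent with this line in /etc/sysctl.conf:
#   net.ipv4.ip_forward = 1
```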

On 25 March 2013 13:03, Sylvain Bauza sylvain.ba...@digimind.com wrote:

 Le 25/03/2013 12:49, Chathura M. Sarathchandra Magurawalage a écrit :


 I have got one question on this. Does quantum directly request leases
 from the gateway of the physical network before reserving them to allocate
 to VMs?



  Nope, not at all. It's up to the administrator to make sure the IP ranges
 used by OpenStack are not pooled by any other DHCP server. There is (to my
 knowledge) no way to sync up between quantum-l3-agent and other DHCP
 servers.

 Actually, contrary to fixed-IP networks in Quantum, floating IP networks
 are not DHCP-managed. Allocation simply picks the next available IP address
 from the Quantum MySQL database and injects it directly into iptables;
 that's it.

 -Sylvain
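Concretely, that injection is the DNAT/SNAT rule pair visible in the iptables dump earlier in this thread. As a sketch reusing those addresses (192.168.2.152 floating, 10.5.5.3 fixed) — it needs root on the network node, so it is only defined here, not executed:

```shell
# Sketch: the NAT rule pair quantum-l3-agent injects for one floating IP
# (addresses taken from the iptables dump earlier in the thread).
inject_floating_ip() {
    iptables -t nat -A quantum-l3-agent-PREROUTING \
        -d 192.168.2.152/32 -j DNAT --to-destination 10.5.5.3
    iptables -t nat -A quantum-l3-agent-float-snat \
        -s 10.5.5.3/32 -j SNAT --to-source 192.168.2.152
}
```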

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Allocating dynamic IP to the VMs

2013-03-24 Thread Chathura M. Sarathchandra Magurawalage
Thanks Sylvain,

I have tried this, but does not seem to work. I can allocate the floating
ip to the VM but it is not accessible from the physical network. I can not
ping to it from the controller or any other physical nodes.

Any idea?

Re: [Openstack] Allocating dynamic IP to the VMs

2013-03-19 Thread Chathura M. Sarathchandra Magurawalage
Thanks.

It's 192.168.2.0/24.

Free IP range: 192.168.2.151 - 192.168.2.240

GW/DHCP server: 192.168.2.253




Re: [Openstack] Allocating dynamic IP to the VMs

2013-03-18 Thread Chathura M. Sarathchandra Magurawalage
anyone?



Re: [Openstack] Allocating dynamic IP to the VMs

2013-03-18 Thread Chathura M. Sarathchandra Magurawalage
Hey Sylvain,

Basically what I need is to have the VMs mapped to my physical network so
that my physical hosts can directly access the VMs. How can I do this?

Thanks.




Re: [Openstack] Allocating dynamic IP to the VMs

2013-03-18 Thread Chathura M. Sarathchandra Magurawalage
Thanks Sylvain,

There must be a way of doing this without having to touch the default
gateway of my physical network? Even if I have to, I do not want to change
anything on the physical gateway. All I need is a way to let the VMs get a
dynamic IP from the physical network. How can I do this? For example, this
can be done in VirtualBox using a bridged adapter, which maps the VM into
the physical network.



Re: [Openstack] Allocating dynamic IP to the VMs

2013-03-17 Thread Chathura M. Sarathchandra Magurawalage
After reading a little bit more, I think I have found what I need: it is a
provider network that I need for the VMs, so that they can get access to the
other resources in my main network (such as other physical hosts that are
connected to the same network).

My question is, is it possible to do this alongside the use case that I
have followed (provider router with private networks)?

If so, how can I do this?

Thanks.


On 16 March 2013 01:46, Chathura M. Sarathchandra Magurawalage 
77.chath...@gmail.com wrote:

 Hello,

 I want to know how I can allocate a dynamic IP to a VM from the same
 network as the OpenStack hosts (the controller/network-node/compute-node
 network, i.e. the management network). For example, in VirtualBox you can
 give your VM an IP from the host's network using a bridged adapter. How can
 I do this in OpenStack?

 From what I understand, floating IPs are used when you have a public IP
 (which is static) to be allocated to VMs.

 My OpenStack installation architecture:
 http://docs.openstack.org/folsom/basic-install/content/basic-install_architecture.html

 Quantum use case:
 http://docs.openstack.org/trunk/openstack-network/admin/content/use_cases_single_router.html

More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ssh from VM to VM

2013-03-16 Thread Chathura M. Sarathchandra Magurawalage
Thanks for your reply.

I have inserted PasswordAuthentication yes into the SSH server config file.
All VMs have the same metadata, including the SSH public key of the
controller, so I can't see why only the CirrOS VMs accept it.

It still does not work.



On 16 March 2013 06:24, Aaron Rosen aro...@nicira.com wrote:

 I suspect that host 10.5.5.6 has ssh configured with PasswordAuthentication
 set to no, and that you don't have the public key of the host you are on in
 the authorized_keys file on 10.5.5.6.

 Aaron
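In other words, the source VM's public key has to end up in the target VM's authorized_keys. A minimal sketch of doing that by hand (the key string is a placeholder, and the Ubuntu-style paths are assumptions):

```shell
# On the source VM: show the public key (run 'ssh-keygen -t rsa' first if absent).
cat ~/.ssh/id_rsa.pub 2>/dev/null || echo "no key yet - run: ssh-keygen -t rsa"

# On the target VM (10.5.5.6): append that key with the permissions sshd expects.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "ssh-rsa AAAAB3...placeholder ubuntu@master" >> ~/.ssh/authorized_keys  # placeholder key
chmod 600 ~/.ssh/authorized_keys
```

With the real key in place, the 'Permission denied (publickey)' in the debug output below should turn into a successful publickey authentication.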

 On Fri, Mar 15, 2013 at 7:26 PM, Chathura M. Sarathchandra Magurawalage 
 77.chath...@gmail.com wrote:

 Hello,

 I can't ssh from Ubuntu cloud VM to other VM. I get following

 ubuntu@master:~$ ssh cirros@10.5.5.6 -v
 OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
 debug1: Reading configuration data /etc/ssh/ssh_config
 debug1: /etc/ssh/ssh_config line 19: Applying options for *
 debug1: Connecting to 10.5.5.6 [10.5.5.6] port 22.
 debug1: Connection established.
 debug1: identity file /home/ubuntu/.ssh/id_rsa type -1
 debug1: identity file /home/ubuntu/.ssh/id_rsa-cert type -1
 debug1: identity file /home/ubuntu/.ssh/id_dsa type -1
 debug1: identity file /home/ubuntu/.ssh/id_dsa-cert type -1
 debug1: identity file /home/ubuntu/.ssh/id_ecdsa type -1
 debug1: identity file /home/ubuntu/.ssh/id_ecdsa-cert type -1
 debug1: Remote protocol version 2.0, remote software version
 OpenSSH_5.9p1 Debian-5ubuntu1
 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
 debug1: Enabling compatibility mode for protocol 2.0
 debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
 debug1: SSH2_MSG_KEXINIT sent
 debug1: SSH2_MSG_KEXINIT received
 debug1: kex: server->client aes128-ctr hmac-md5 none
 debug1: kex: client->server aes128-ctr hmac-md5 none
 debug1: sending SSH2_MSG_KEX_ECDH_INIT
 debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
 debug1: Server host key: ECDSA
 7b:8f:6a:ee:ba:e5:0a:c5:04:01:ca:bd:e5:38:69:55
 debug1: Host '10.5.5.6' is known and matches the ECDSA host key.
 debug1: Found key in /home/ubuntu/.ssh/known_hosts:4
 debug1: ssh_ecdsa_verify: signature correct
 debug1: SSH2_MSG_NEWKEYS sent
 debug1: expecting SSH2_MSG_NEWKEYS
 debug1: SSH2_MSG_NEWKEYS received
 debug1: Roaming not allowed by server
 debug1: SSH2_MSG_SERVICE_REQUEST sent
 debug1: SSH2_MSG_SERVICE_ACCEPT received
 debug1: Authentications that can continue: publickey
 debug1: Next authentication method: publickey
 debug1: Trying private key: /home/ubuntu/.ssh/id_rsa
 debug1: Trying private key: /home/ubuntu/.ssh/id_dsa
 debug1: Trying private key: /home/ubuntu/.ssh/id_ecdsa
 debug1: No more authentication methods to try.
 Permission denied (publickey).

 But I can ssh from to my Cirros VMs. Also I can ssh from Ubuntu VM to
 Cirros VM.

 Any Idea?

 Thanks.






Re: [Openstack] ssh from VM to VM

2013-03-16 Thread Chathura M. Sarathchandra Magurawalage
I solved the issue by copying the RSA public key of the first VM to the
second VM. I thought I did not have to do this.

Thanks.

On 16 March 2013 12:34, Pranav pps.pra...@gmail.com wrote:

 I think you need not exchange key pairs for Cirros image.
 Regards,
 Pranav


 On Sat, Mar 16, 2013 at 4:32 PM, Chathura M. Sarathchandra Magurawalage 
 77.chath...@gmail.com wrote:

 Thanks for your reply.

 I have added PasswordAuthentication yes to the ssh config file. All
 VMs have the same metadata, including the ssh public key of the controller,
 so I can't see why only the Cirros VMs accept it.

 Still does not work.



 On 16 March 2013 06:24, Aaron Rosen aro...@nicira.com wrote:

 I suspect that that host 10.5.5.6 has ssh configured for
 PasswordAuthentication set to no and you don't have your public key of the
 host you are on, in the authorized_key file of 10.5.5.6.

 Aaron

  On Fri, Mar 15, 2013 at 7:26 PM, Chathura M. Sarathchandra
 Magurawalage 77.chath...@gmail.com wrote:

 Hello,

 I can't ssh from the Ubuntu cloud VM to another VM. I get the following:

 ubuntu@master:~$ ssh cirros@10.5.5.6 -v
 OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
 debug1: Reading configuration data /etc/ssh/ssh_config
 debug1: /etc/ssh/ssh_config line 19: Applying options for *
 debug1: Connecting to 10.5.5.6 [10.5.5.6] port 22.
 debug1: Connection established.
 debug1: identity file /home/ubuntu/.ssh/id_rsa type -1
 debug1: identity file /home/ubuntu/.ssh/id_rsa-cert type -1
 debug1: identity file /home/ubuntu/.ssh/id_dsa type -1
 debug1: identity file /home/ubuntu/.ssh/id_dsa-cert type -1
 debug1: identity file /home/ubuntu/.ssh/id_ecdsa type -1
 debug1: identity file /home/ubuntu/.ssh/id_ecdsa-cert type -1
 debug1: Remote protocol version 2.0, remote software version
 OpenSSH_5.9p1 Debian-5ubuntu1
 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
 debug1: Enabling compatibility mode for protocol 2.0
 debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
 debug1: SSH2_MSG_KEXINIT sent
 debug1: SSH2_MSG_KEXINIT received
 debug1: kex: server-client aes128-ctr hmac-md5 none
 debug1: kex: client-server aes128-ctr hmac-md5 none
 debug1: sending SSH2_MSG_KEX_ECDH_INIT
 debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
 debug1: Server host key: ECDSA
 7b:8f:6a:ee:ba:e5:0a:c5:04:01:ca:bd:e5:38:69:55
 debug1: Host '10.5.5.6' is known and matches the ECDSA host key.
 debug1: Found key in /home/ubuntu/.ssh/known_hosts:4
 debug1: ssh_ecdsa_verify: signature correct
 debug1: SSH2_MSG_NEWKEYS sent
 debug1: expecting SSH2_MSG_NEWKEYS
 debug1: SSH2_MSG_NEWKEYS received
 debug1: Roaming not allowed by server
 debug1: SSH2_MSG_SERVICE_REQUEST sent
 debug1: SSH2_MSG_SERVICE_ACCEPT received
 debug1: Authentications that can continue: publickey
 debug1: Next authentication method: publickey
 debug1: Trying private key: /home/ubuntu/.ssh/id_rsa
 debug1: Trying private key: /home/ubuntu/.ssh/id_dsa
 debug1: Trying private key: /home/ubuntu/.ssh/id_ecdsa
 debug1: No more authentication methods to try.
 Permission denied (publickey).

 But I can ssh to my Cirros VMs. Also, I can ssh from the Ubuntu VM to a
 Cirros VM.

 Any Idea?

 Thanks.










[Openstack] Allocating dynamic IP to the VMs

2013-03-15 Thread Chathura M. Sarathchandra Magurawalage
Hello,

I want to know how I can allocate a dynamic IP to a VM from the same
network as the OpenStack hosts (the controller/network-node/compute-node
management network). For example, in VirtualBox you can give your VM an IP
from the host's network using a bridged adapter. How can I do this in
OpenStack?

From what I understand, floating IPs are used when you have a public
(static) IP to be allocated to VMs.

My openstack installation architecture:
http://docs.openstack.org/folsom/basic-install/content/basic-install_architecture.html

Quantum use case:
http://docs.openstack.org/trunk/openstack-network/admin/content/use_cases_single_router.html
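[Editorial note] One common answer (not given in this thread) is a flat provider network, so VMs draw addresses from the hosts' own subnet. A hedged Folsom-era quantum sketch, where physnet1, the bridge mapping, and the allocation pool are assumptions that must match your OVS plugin config:

```shell
# Create a shared flat network mapped to the physical interface's bridge
# (requires a bridge_mappings entry for physnet1 in the OVS plugin
# config), then a subnet carved out of the hosts' 192.168.2.0/24.
quantum net-create hostnet --shared \
    --provider:network_type flat --provider:physical_network physnet1
quantum subnet-create hostnet 192.168.2.0/24 --name hostnet-subnet \
    --allocation-pool start=192.168.2.100,end=192.168.2.120 \
    --gateway 192.168.2.253
```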


[Openstack] ssh from VM to VM

2013-03-15 Thread Chathura M. Sarathchandra Magurawalage
Hello,

I can't ssh from the Ubuntu cloud VM to another VM. I get the following:

ubuntu@master:~$ ssh cirros@10.5.5.6 -v
OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 10.5.5.6 [10.5.5.6] port 22.
debug1: Connection established.
debug1: identity file /home/ubuntu/.ssh/id_rsa type -1
debug1: identity file /home/ubuntu/.ssh/id_rsa-cert type -1
debug1: identity file /home/ubuntu/.ssh/id_dsa type -1
debug1: identity file /home/ubuntu/.ssh/id_dsa-cert type -1
debug1: identity file /home/ubuntu/.ssh/id_ecdsa type -1
debug1: identity file /home/ubuntu/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1
Debian-5ubuntu1
debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server-client aes128-ctr hmac-md5 none
debug1: kex: client-server aes128-ctr hmac-md5 none
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ECDSA
7b:8f:6a:ee:ba:e5:0a:c5:04:01:ca:bd:e5:38:69:55
debug1: Host '10.5.5.6' is known and matches the ECDSA host key.
debug1: Found key in /home/ubuntu/.ssh/known_hosts:4
debug1: ssh_ecdsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /home/ubuntu/.ssh/id_rsa
debug1: Trying private key: /home/ubuntu/.ssh/id_dsa
debug1: Trying private key: /home/ubuntu/.ssh/id_ecdsa
debug1: No more authentication methods to try.
Permission denied (publickey).

But I can ssh to my Cirros VMs. Also, I can ssh from the Ubuntu VM to a
Cirros VM.

Any Idea?

Thanks.


Re: [Openstack] Cant ping private or floating IP

2013-02-20 Thread Chathura M. Sarathchandra Magurawalage
Thanks.

I would be more concerned about the SIOCDELRT error above. Do you try to
 manually remove a network route at bootup ? Seems like the 'route del' is
 failing because the route is not already existing.

 I am not doing anything like that, as far as I am aware.


 As already said, you absolutely need VNC support for investigating. Could
 you please fix your VNC setup which is incorrect ?


But VNC works fine. It's just that the VM hangs during boot and never
reaches the login prompt, so I can't log into it. :(

On 20 February 2013 13:46, Sylvain Bauza sylvain.ba...@digimind.com wrote:

  Le 20/02/2013 14:04, Chathura M. Sarathchandra Magurawalage a écrit :

  There are apparently two instances running on the compute node, but nova
 sees only one. Probably when I deleted an instance earlier it was not
 removed properly.

  root@controller:~# nova list

 +--+++---+
 | ID   | Name   | Status | Networks
|

 +--+++---+
 | 42e18cd5-de6f-4181-b238-320fe37ef6f1 | master | ACTIVE |
 demo-net=10.5.5.3 |

 +--+++---+


  virsh -c qemu+ssh://root@computenode/system list
 root@computenode's password:
  IdName   State
 
  14instance-002c  running
  18instance-001e  running



 You should have looked at 'sudo virsh list --all', plus
 /etc/libvirt/qemu/*.xml, to check how many instances were defined.
 I also suspect that for some reason (probably nova-compute being down) the
 clean-up of instance-002c didn't work. Anyway, this is fixed as you
 mention.
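A hedged sketch of cleaning up such a leftover domain on the compute node (instance-002c is the stale instance from the listing above):

```shell
# List all domains libvirt knows about, running or not:
virsh list --all

# Stop and deregister the instance nova no longer tracks:
virsh destroy instance-002c     # forcibly power it off if running
virsh undefine instance-002c    # remove its libvirt definition

# Its XML definition should now be gone:
ls /etc/libvirt/qemu/
```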


  Then I deleted all instances and created a new one. But I still can't
 ping or ssh to the new VM.

  <interface type='bridge'>
    <mac address='fa:16:3e:a2:6e:02'/>
    <source bridge='qbrff8933bf-ba'/>
    <model type='virtio'/>
    <filterref filter='nova-instance-instance-0035-fa163ea26e02'>
      <parameter name='DHCPSERVER' value='10.5.5.2'/>
      <parameter name='IP' value='10.5.5.3'/>
      <parameter name='PROJMASK' value='255.255.255.0'/>
      <parameter name='PROJNET' value='10.5.5.0'/>
    </filterref>
    <address type='pci' domain='0x' bus='0x00' slot='0x03' function='0x0'/>
  </interface>

 Starting network...
  udhcpc (v1.18.5) started
 Sending discover...
 Sending select for 10.5.5.3...
 Lease of 10.5.5.3 obtained, lease time 120
 deleting routers
  route: SIOCDELRT: No such process
 adding dns 8.8.8.8



 The DHCP reply is correctly received by the instance from the network node
 to the compute node. This is not a network issue (at least for IP
 assignation).
 I would be more concerned about the SIOCDELRT error above. Do you try to
 manually remove a network route at bootup ? Seems like the 'route del' is
 failing because the route is not already existing.


 As already said, you absolutely need VNC support for investigating. Could
 you please fix your VNC setup which is incorrect ?


    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>


 Try in nova-compute.conf:
 vncserver_proxyclient_address=<compute node mgmt IP>
 vncserver_listen=<compute node mgmt IP>
 and in nova.conf:
 novncproxy_base_url=http://<controller node mgmt IP>:6080/vnc_auto.html

 and restart nova-compute.
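To verify the fix, a couple of hedged checks on the compute node (instance-0035 is the instance from this thread; the grep pattern is illustrative):

```shell
# After restarting nova-compute and rebooting the instance, confirm
# which display KVM's VNC server uses and that it listens on the
# management IP rather than 127.0.0.1.
virsh vncdisplay instance-0035   # e.g. ":0" means TCP port 5900
netstat -lnt | grep ':59'        # KVM VNC ports start at 5900
```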



   On 20 February 2013 11:57, Sylvain Bauza sylvain.ba...@digimind.comwrote:

  Could you please paste :
  - /etc/libvirt/qemu/your_instance_id.xml
  - ip a show vnet0
  - brctl show

 Sounds like your virtual device is not created. Could you please launch a
 new VM and paste /var/log/nova/nova-compute.log ?

 Thanks,
 -Sylvain





Re: [Openstack] Cant ping private or floating IP

2013-02-18 Thread Chathura M. Sarathchandra Magurawalage
I have only got one NIC, but two virtual interfaces for two different
networks. The network node is on the same physical machine too.


On 18 February 2013 13:15, Guilherme Russi luisguilherme...@gmail.comwrote:

 How did you install your controller node? I mean, mine has 2 NICs and I
 installed the network node on the same physical machine.


 2013/2/18 Chathura M. Sarathchandra Magurawalage 77.chath...@gmail.com

 Hello Guilherme,

 No, I am still having the problem :(


 On 18 February 2013 13:01, Guilherme Russi luisguilherme...@gmail.comwrote:

 Hello Chathura,

  Have you succeeded with your network? I'm having problems with mine too.

 Thanks.

 Guilherme.


 2013/2/17 Chathura M. Sarathchandra Magurawalage 77.chath...@gmail.com

 Hope you had a good night's sleep :)

 Yes sure I will be on irc. my nickname is chathura77

 Thanks

 On 17 February 2013 13:15, Jean-Baptiste RANSY 
 jean-baptiste.ra...@alyseo.com wrote:

  ping

 Are you on IRC ?

 JB



 On 02/17/2013 04:07 AM, Jean-Baptiste RANSY wrote:

 Add Cirros Image to Glance :)

 Username: cirros
 Password: cubswin:)


 http://docs.openstack.org/trunk/openstack-compute/install/apt/content/uploading-to-glance.html

 To reach your VM, it's a bit dirty but you can:
 - put your computer in the same subnet as your controller (192.168.2.0/24)
 - then add a static route to the subnet of your VM (ip route add
 10.5.5.0/24 gw 192.168.2.151)
 (192.168.2.151 is the quantum gateway)
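The two steps above can be sketched as shell commands, using the addresses from this thread:

```shell
# From a machine on the controller's subnet (192.168.2.0/24), route the
# VM subnet through the quantum gateway, then test reachability.
sudo ip route add 10.5.5.0/24 via 192.168.2.151
ping -c 3 10.5.5.3

# Remove the route again when finished:
sudo ip route del 10.5.5.0/24 via 192.168.2.151
```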

 I'm going to sleep, we will continue tomorrow.

 JB

 PS : You also should get some sleep :)


 On 02/17/2013 03:53 AM, Chathura M. Sarathchandra Magurawalage wrote:

  oh that's weird.

   I still get this error. Couldn't this be because I cannot ping the VM
 in the first place? As far as I know, metadata takes care of ssh keys, but
 what if you can't reach the VM at all?

  no instance data found in start-local

 ci-info: lo: 1 127.0.0.1   255.0.0.0   .

 ci-info: eth0  : 1 10.5.5.3255.255.255.0   fa:16:3e:a7:28:25

 ci-info: route-0: 0.0.0.0 10.5.5.10.0.0.0 eth0   
 UG

 ci-info: route-1: 10.5.5.00.0.0.0 255.255.255.0   eth0   U

 cloud-init start running: Sun, 17 Feb 2013 02:45:35 +. up 3.51 seconds

 2013-02-17 02:48:25,840 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed 
 [50/120s]: url error [timed out]

 2013-02-17 02:49:16,893 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed 
 [101/120s]: url error [timed out]

 2013-02-17 02:49:34,912 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed 
 [119/120s]: url error [timed out]

 2013-02-17 02:49:35,913 - DataSourceEc2.py[CRITICAL]: giving up on md 
 after 120 seconds



 no instance data found in start

 Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd

  * Starting AppArmor profiles   [80G
 [74G[ OK ]



  On 17 February 2013 02:41, Jean-Baptiste RANSY 
 jean-baptiste.ra...@alyseo.com wrote:

   For me, it's normal that you are not able to curl 169.254.169.254
 from your compute and controller nodes: same thing on my side, but my VMs
 get their metadata.

 Try to launch an instance.

 JB



 On 02/17/2013 03:35 AM, Chathura M. Sarathchandra Magurawalage wrote:

  root@computernode:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

  root@controller:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...


  root@athena:~# iptables -L -n -v
 Chain INPUT (policy ACCEPT 59009 packets, 22M bytes)
  pkts bytes target prot opt in out source
 destination
 59493   22M quantum-l3-agent-INPUT  all  --  *  *   0.0.0.0/0
 0.0.0.0/0
 59493   22M nova-api-INPUT  all  --  *  *   0.0.0.0/0
  0.0.0.0/0
   484 73533 ACCEPT 47   --  *  *   0.0.0.0/0
 0.0.0.0/0

  Chain FORWARD (policy ACCEPT 707 packets, 47819 bytes)
  pkts bytes target prot opt in out source
 destination
   707 47819 quantum-filter-top  all  --  *  *   0.0.0.0/0
  0.0.0.0/0
   707 47819 quantum-l3-agent-FORWARD  all  --  *  *
 0.0.0.0/00.0.0.0/0
   707 47819 nova-filter-top  all  --  *  *   0.0.0.0/0
  0.0.0.0/0
   707 47819 nova-api-FORWARD  all  --  *  *   0.0.0.0/0
0.0.0.0/0

  Chain OUTPUT (policy ACCEPT 56022 packets, 22M bytes)
  pkts bytes target prot opt in out source
 destination
 56022   22M quantum-filter-top  all  --  *  *   0.0.0.0/0
  0.0.0.0/0
 56022   22M quantum-l3-agent-OUTPUT  all  --  *  *
 0.0.0.0/00.0.0.0/0
 56022   22M nova-filter-top  all  --  *  *   0.0.0.0/0
  0.0.0.0/0
 56022   22M nova-api-OUTPUT  all  --  *  *   0.0.0.0/0
  0.0.0.0/0

  Chain nova-api-FORWARD (1 references)
  pkts bytes target prot opt in out

Re: [Openstack] Cant ping private or floating IP

2013-02-18 Thread Chathura M. Sarathchandra Magurawalage
Yes definitely I will post it here for future reference for anybody.

On 18 February 2013 13:28, Guilherme Russi luisguilherme...@gmail.comwrote:

 Got it. I have one virtual interface too, for the management and VM
 configuration networks. If you find anything, let me know, please.

 Thanks.



 2013/2/18 Chathura M. Sarathchandra Magurawalage 77.chath...@gmail.com

 I have only got one NIC, but two virtual interfaces for two different
 networks. The network node is on the same physical machine too.


 On 18 February 2013 13:15, Guilherme Russi luisguilherme...@gmail.comwrote:

 How did you install your controller node? I mean, mine has 2 NICs and
 I installed the network node on the same physical machine.


 2013/2/18 Chathura M. Sarathchandra Magurawalage 77.chath...@gmail.com

 Hello Guilherme,

 No, I am still having the problem :(


 On 18 February 2013 13:01, Guilherme Russi 
 luisguilherme...@gmail.comwrote:

 Hello Chathura,

  Have you succeeded with your network? I'm having problems with mine too.

 Thanks.

 Guilherme.


 2013/2/17 Chathura M. Sarathchandra Magurawalage 
 77.chath...@gmail.com

 Hope you had a good night sleep :)

 Yes sure I will be on irc. my nickname is chathura77

 Thanks

 On 17 February 2013 13:15, Jean-Baptiste RANSY 
 jean-baptiste.ra...@alyseo.com wrote:

  ping

 Are you on IRC ?

 JB



 On 02/17/2013 04:07 AM, Jean-Baptiste RANSY wrote:

 Add Cirros Image to Glance :)

 Username: cirros
 Password: cubswin:)


 http://docs.openstack.org/trunk/openstack-compute/install/apt/content/uploading-to-glance.html

 To reach your VM, it's a bit dirty but you can:
 - put your computer in the same subnet as your controller (192.168.2.0/24)
 - then add a static route to the subnet of your VM (ip route add
 10.5.5.0/24 gw 192.168.2.151)
 (192.168.2.151 is the quantum gateway)

 I'm going to sleep, we will continue tomorrow.

 JB

 PS : You also should get some sleep :)


 On 02/17/2013 03:53 AM, Chathura M. Sarathchandra Magurawalage wrote:

  oh that's weird.

   I still get this error. Couldn't this be because I cannot ping the
 VM in the first place? As far as I know, metadata takes care of ssh keys,
 but what if you can't reach the VM at all?

  no instance data found in start-local

 ci-info: lo: 1 127.0.0.1   255.0.0.0   .

 ci-info: eth0  : 1 10.5.5.3255.255.255.0   fa:16:3e:a7:28:25

 ci-info: route-0: 0.0.0.0 10.5.5.10.0.0.0 eth0  
  UG

 ci-info: route-1: 10.5.5.00.0.0.0 255.255.255.0   eth0  
  U

 cloud-init start running: Sun, 17 Feb 2013 02:45:35 +. up 3.51 
 seconds

 2013-02-17 02:48:25,840 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed 
 [50/120s]: url error [timed out]

 2013-02-17 02:49:16,893 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed 
 [101/120s]: url error [timed out]

 2013-02-17 02:49:34,912 - util.py[WARNING]: 
 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed 
 [119/120s]: url error [timed out]

 2013-02-17 02:49:35,913 - DataSourceEc2.py[CRITICAL]: giving up on md 
 after 120 seconds



 no instance data found in start

 Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd

  * Starting AppArmor profiles   [80G
 [74G[ OK ]



  On 17 February 2013 02:41, Jean-Baptiste RANSY 
 jean-baptiste.ra...@alyseo.com wrote:

   For me, it's normal that you are not able to curl 169.254.169.254
 from your compute and controller nodes: same thing on my side, but my VMs
 get their metadata.

 Try to launch an instance.

 JB



 On 02/17/2013 03:35 AM, Chathura M. Sarathchandra Magurawalage
 wrote:

  root@computernode:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

  root@controller:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...


  root@athena:~# iptables -L -n -v
 Chain INPUT (policy ACCEPT 59009 packets, 22M bytes)
  pkts bytes target prot opt in out source
 destination
 59493   22M quantum-l3-agent-INPUT  all  --  *  *
 0.0.0.0/00.0.0.0/0
 59493   22M nova-api-INPUT  all  --  *  *   0.0.0.0/0
0.0.0.0/0
   484 73533 ACCEPT 47   --  *  *   0.0.0.0/0
  0.0.0.0/0

  Chain FORWARD (policy ACCEPT 707 packets, 47819 bytes)
  pkts bytes target prot opt in out source
 destination
   707 47819 quantum-filter-top  all  --  *  *   0.0.0.0/0
 0.0.0.0/0
   707 47819 quantum-l3-agent-FORWARD  all  --  *  *
 0.0.0.0/00.0.0.0/0
   707 47819 nova-filter-top  all  --  *  *   0.0.0.0/0
0.0.0.0/0
   707 47819 nova-api-FORWARD  all  --  *  *   0.0.0.0/0
  0.0.0.0/0

  Chain OUTPUT (policy ACCEPT 56022 packets, 22M bytes)
  pkts bytes target prot opt in out source
 destination
 56022   22M quantum-filter-top  all

Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Chathura M. Sarathchandra Magurawalage
Hello guys,

The problem still exists. Any ideas?

Thanks

On 15 February 2013 14:37, Sylvain Bauza sylvain.ba...@digimind.com wrote:

 The metadata API allows fetching SSH credentials (the pubkey, I mean) when
 booting. If a VM is unable to reach the metadata service, then it won't be
 able to get its public key, so you won't be able to connect, unless you
 specifically go through password authentication (provided password auth is
 enabled in /etc/ssh/sshd_config, which is not the case with the Ubuntu
 cloud archive). There is also a side effect: the boot process is longer,
 as the instance waits for the curl timeout (60 sec.) before finishing
 booting up.

 Re: Quantum, the metadata API is actually DNAT'd from Network node to the
 Nova-api node (here 172.16.0.1 as internal management IP) :
 Chain quantum-l3-agent-PREROUTING (1 references)

 target prot opt source   destination
 DNAT   tcp  --  0.0.0.0/0169.254.169.254  tcp dpt:80
 to:172.16.0.1:8775
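The DNAT described above can be inspected directly; a hedged pair of checks (chain name as shown above; 8775 is nova's metadata port):

```shell
# On the network node: show the NAT rule redirecting 169.254.169.254:80
# to the nova-api metadata service.
sudo iptables -t nat -L quantum-l3-agent-PREROUTING -n -v

# On the node running nova-api: confirm the metadata port is listening.
netstat -lnt | grep 8775
```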


 Anyway, the first step is to :
 1. grab the console.log
 2. access thru VNC to the desired instance

 Troubleshooting will be easier once that done.

 -Sylvain



 Le 15/02/2013 14:24, Chathura M. Sarathchandra Magurawalage a écrit :

 Hello Guys,

 Not sure if this is the right port but these are the results:

 *Compute node:*


 root@computenode:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775          0.0.0.0:*               LISTEN

 *Controller: *

 root@controller:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775          0.0.0.0:*               LISTEN

 *Additionally, I can't curl 169.254.169.254 from the compute node. I am not
 sure if this is related to not being able to ping the VM.*


 curl -v http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

 Thanks for your help


 -------------------------------------------------
 Chathura Madhusanka Sarathchandra Magurawalage.
 1NW.2.1, Desk 2
 School of Computer Science and Electronic Engineering
 University Of Essex
 United Kingdom.

 Email: csar...@essex.ac.uk
        chathura.sarathchan...@gmail.com
        77.chath...@gmail.com



 On 15 February 2013 11:03, Anil Vishnoi vishnoia...@gmail.com wrote:

 If you are using an Ubuntu cloud image, then the only way to log in is
 to ssh with the public key. For that you have to create an ssh key
 pair and download the private key. You can create this key pair using
 Horizon or the CLI.


 On Fri, Feb 15, 2013 at 4:27 PM, Sylvain Bauza
 sylvain.ba...@digimind.com

 wrote:


 Le 15/02/2013 11:42, Chathura M. Sarathchandra Magurawalage a
 écrit :


 How can I log into the VM from VNC? What are the credentials?


 You have multiple ways to get VNC access. The easiest one is
 thru Horizon. Other can be looking at the KVM command-line for
 the desired instance (on the compute node) and check the vnc
 port in use (assuming KVM as hypervisor).
 This is basic knowledge of Nova.



 nova-api-metadata is running fine in the compute node.


 Make sure the metadata port is available via telnet or netstat;
 nova-api can be running without listening on the metadata port.







 -- Thanks  Regards
 --Anil Kumar Vishnoi






Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Chathura M. Sarathchandra Magurawalage
Hello Jean,

Thanks for your reply.

I followed the instructions in
http://docs.openstack.org/folsom/basic-install/content/basic-install_network.html.
And my Controller and the Network-node is installed in the same physical
node.

I am using Folsom but without Network namespaces.

But the page you have provided states: "If you run both L3 + DHCP services
on the same node, you should enable namespaces to avoid conflicts with
routes."

Yet currently quantum-dhcp-agent and quantum-l3-agent are running on the
same node?

Additionally, the control node serves as a DHCP server for the local network
(I don't know if that would make any difference).

Any idea what the problem could be?


On 16 February 2013 16:21, Jean-Baptiste RANSY 
jean-baptiste.ra...@alyseo.com wrote:

  Hello Chathura,

 Are you using Folsom with Network Namespaces ?

 If yes, have a look here :
 http://docs.openstack.org/folsom/openstack-network/admin/content/ch_limitations.html


 Regards,

 Jean-Baptsite RANSY



 On 02/16/2013 05:01 PM, Chathura M. Sarathchandra Magurawalage wrote:

 Hello guys,

  The problem still exists. Any ideas?

  Thanks

   On 15 February 2013 14:37, Sylvain Bauza sylvain.ba...@digimind.comwrote:

 Metadata API allows to fetch SSH credentials when booting (pubkey I mean).
 If a VM is unable to reach metadata service, then it won't be able to get
 its public key, so you won't be able to connect, unless you specifically go
 thru a Password authentication (provided password auth is enabled in
 /etc/ssh/sshd_config, which is not the case with Ubuntu cloud archive).
 There is also a side effect, the boot process is longer as the instance
 is waiting for the curl timeout (60sec.) to finish booting up.

 Re: Quantum, the metadata API is actually DNAT'd from Network node to the
 Nova-api node (here 172.16.0.1 as internal management IP) :
 Chain quantum-l3-agent-PREROUTING (1 references)

 target prot opt source   destination
  DNAT   tcp  --  0.0.0.0/0169.254.169.254  tcp
 dpt:80 to:172.16.0.1:8775


 Anyway, the first step is to :
 1. grab the console.log
 2. access thru VNC to the desired instance

 Troubleshooting will be easier once that done.

 -Sylvain



 Le 15/02/2013 14:24, Chathura M. Sarathchandra Magurawalage a écrit :

  Hello Guys,

 Not sure if this is the right port but these are the results:

  *Compute node:*


 root@computenode:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775          0.0.0.0:*               LISTEN

 *Controller: *

 root@controller:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775          0.0.0.0:*               LISTEN

 *Additionally, I can't curl 169.254.169.254 from the compute node. I am
 not sure if this is related to not being able to ping the VM.*


 curl -v http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

 Thanks for your help



 -
 Chathura Madhusanka Sarathchandra Magurawalage.
 1NW.2.1, Desk 2
 School of Computer Science and Electronic Engineering
 University Of Essex
 United Kingdom.

  Email: csar...@essex.ac.uk
         chathura.sarathchan...@gmail.com
         77.chath...@gmail.com



 On 15 February 2013 11:03, Anil Vishnoi vishnoia...@gmail.com wrote:

 If you are using an Ubuntu cloud image, then the only way to log in is
 to ssh with the public key. For that you have to create an ssh key
 pair and download the private key. You can create this key pair using
 Horizon or the CLI.
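The key-pair steps can be sketched with the Folsom-era nova CLI (key and instance names are illustrative; the flag was spelled --key_name in this release):

```shell
# Create a keypair; nova prints the private key, which must be saved.
nova keypair-add mykey > mykey.pem
chmod 600 mykey.pem

# Boot an instance with the keypair, then ssh in with the private key.
nova boot --image cirros --flavor m1.tiny --key_name mykey testvm
ssh -i mykey.pem cirros@<instance-ip>   # <instance-ip> from `nova list`
```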


 On Fri, Feb 15, 2013 at 4:27 PM, Sylvain Bauza
  sylvain.ba...@digimind.com
 wrote:


 Le 15/02/2013 11:42, Chathura M. Sarathchandra Magurawalage a
 écrit :


 How can I log into the VM from VNC? What are the credentials?


 You have multiple ways to get VNC access. The easiest one is
 thru Horizon. Other can be looking at the KVM command-line for
 the desired instance (on the compute node) and check the vnc
 port in use (assuming KVM as hypervisor).
 This is basic knowledge of Nova.



 nova-api-metadata is running fine in the compute node.


 Make sure the metadata port is available via telnet or netstat;
 nova-api can be running without listening on the metadata port.





Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Chathura M. Sarathchandra Magurawalage
, remote_ip=10.0.0.3}
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port qg-6f8374cb-cb
Interface qg-6f8374cb-cb
type: internal
Port br0
Interface br0
Bridge br-int
Port br-int
Interface br-int
type: internal
Port tapf71b5b86-5c
tag: 1
Interface tapf71b5b86-5c
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port qr-4d088f3a-78
tag: 1
Interface qr-4d088f3a-78
type: internal
ovs_version: 1.4.0+build0


*Compute node:*

*root@cronus:~# ip link show*
1: lo: LOOPBACK,UP,LOWER_UP mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc mq state UP qlen
1000
link/ether d4:ae:52:bb:a1:9d brd ff:ff:ff:ff:ff:ff
3: eth1: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN qlen 1000
link/ether d4:ae:52:bb:a1:9e brd ff:ff:ff:ff:ff:ff
4: eth0.2@eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue
state UP
link/ether d4:ae:52:bb:a1:9d brd ff:ff:ff:ff:ff:ff
5: br-int: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN
link/ether ae:9b:43:09:af:40 brd ff:ff:ff:ff:ff:ff
9: qbr256f5ed2-43: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue
state UP
link/ether c6:c0:df:64:c6:99 brd ff:ff:ff:ff:ff:ff
10: qvo256f5ed2-43: BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP mtu 1500
qdisc pfifo_fast state UP qlen 1000
link/ether 76:25:8b:fd:90:3b brd ff:ff:ff:ff:ff:ff
11: qvb256f5ed2-43: BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP mtu 1500
qdisc pfifo_fast master qbr256f5ed2-43 state UP qlen 1000
link/ether c6:c0:df:64:c6:99 brd ff:ff:ff:ff:ff:ff
13: br-tun: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN
link/ether be:8c:30:78:35:48 brd ff:ff:ff:ff:ff:ff
15: vnet0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast
master qbr256f5ed2-43 state UNKNOWN qlen 500
link/ether fe:16:3e:57:ec:ff brd ff:ff:ff:ff:ff:ff

*root@cronus:~# ip route show*
default via 192.168.2.253 dev eth0.2  metric 100
10.10.10.0/24 dev eth0  proto kernel  scope link  src 10.10.10.12
192.168.2.0/24 dev eth0.2  proto kernel  scope link  src 192.168.2.234

*root@cronus:~# ovs-vsctl show*
d85bc334-6d64-4a13-b851-d56b18ff1549
Bridge br-int
Port qvo0e743b01-89
tag: 4095
Interface qvo0e743b01-89
Port qvo256f5ed2-43
tag: 1
Interface qvo256f5ed2-43
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
Port qvoee3d4131-2a
tag: 4095
Interface qvoee3d4131-2a
Port qvocbc816bd-3d
tag: 4095
Interface qvocbc816bd-3d
Bridge br-tun
Port br-tun
Interface br-tun
type: internal
Port gre-2
Interface gre-2
type: gre
options: {in_key=flow, out_key=flow, remote_ip=10.10.10.1}
Port gre-1
Interface gre-1
type: gre
options: {in_key=flow, out_key=flow, remote_ip=10.0.0.3}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port gre-3
Interface gre-3
type: gre
options: {in_key=flow, out_key=flow, remote_ip=127.0.0.1}
ovs_version: 1.4.0+build0


Thanks I appreciate your help.

On 16 February 2013 16:49, Jean-Baptiste RANSY 
jean-baptiste.ra...@alyseo.com wrote:

  Please provide the files listed below:

 Controller Node :
 /etc/nova/nova.conf
 /etc/nova/api-paste.ini
 /etc/quantum/l3_agent.ini
 /etc/quantum/quantum.conf
 /etc/quantum/dhcp_agent.ini
 /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
 /etc/quantum/api-paste.ini
 /var/log/nova/*.log
 /var/log/quantum/*.log

 Compute Node :
 /etc/nova/nova.conf
 /etc/nova/nova-compute.conf
 /etc/nova/api-paste.ini
 /etc/quantum/quantum.conf
 /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
 /var/log/nova/*.log
 /var/log/quantum/*.log

 Plus, complete output of the following commands :

 Controller Node :
 $ keystone endpoint-list
 $ ip link show
 $ ip route show
 $ ip netns show
 $ ovs-vsctl show

 Compute Node :
 $ ip link show
 $ ip route show
 $ ovs-vsctl show
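As a convenience, the outputs requested above can be gathered in one pass with a small script like the following (a sketch; run it on each node, and note that commands absent on a given node, e.g. ovs-vsctl, are simply reported rather than aborting the run):

```shell
# Collect the requested diagnostics on the current node.
# Each section is delimited so the results are easy to paste into a reply.
for cmd in "ip link show" "ip route show" "ip netns show" "ovs-vsctl show"; do
    echo "===== $cmd ====="
    $cmd 2>&1 || true   # keep going even if the command is missing/fails
done
```

The same loop can be extended with `keystone endpoint-list` on the controller node.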

 Regards,

 Jean-Baptiste RANSY



 On 02/16/2013 05:32 PM, Chathura M. Sarathchandra Magurawalage wrote:

 Hello Jean,

  Thanks for your reply.

  I followed the instructions in
 http://docs.openstack.org/folsom/basic-install/content/basic-install_network.html.
 My controller and network node are installed on the same physical node.

  I am using Folsom

Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Chathura M. Sarathchandra Magurawalage
 in out source
destination

Chain quantum-l3-agent-POSTROUTING (1 references)
 pkts bytes target prot opt in out source destination
 3726  247K ACCEPT all  --  !qg-6f8374cb-cb !qg-6f8374cb-cb  0.0.0.0/0  0.0.0.0/0  ! ctstate DNAT
    0     0 ACCEPT all  --  *  *  10.5.5.0/24  192.168.2.225

Chain quantum-l3-agent-PREROUTING (1 references)
 pkts bytes target prot opt in out source destination
    0     0 DNAT   tcp  --  *  *  0.0.0.0/0  169.254.169.254  tcp dpt:80 to:192.168.2.225:8775

Chain quantum-l3-agent-float-snat (1 references)
 pkts bytes target prot opt in out source destination

Chain quantum-l3-agent-snat (1 references)
 pkts bytes target prot opt in out source destination
    0     0 quantum-l3-agent-float-snat  all  --  *  *  0.0.0.0/0  0.0.0.0/0
    0     0 SNAT   all  --  *  *  10.5.5.0/24  0.0.0.0/0  to:192.168.2.151

Chain quantum-postrouting-bottom (1 references)
 pkts bytes target prot opt in out source destination
    0     0 quantum-l3-agent-snat  all  --  *  *  0.0.0.0/0  0.0.0.0/0

thanks.


On 17 February 2013 02:25, Jean-Baptiste RANSY 
jean-baptiste.ra...@alyseo.com wrote:

  Controller node :
 # iptables -L -n -v
 # iptables -L -n -v -t nat



 On 02/17/2013 03:18 AM, Chathura M. Sarathchandra Magurawalage wrote:

You should be able to curl 169.254.169.254 from the compute node, which I can't
at the moment.

  I have got the bridge set up in the l3_agent.ini



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cant ping private or floating IP

2013-02-16 Thread Chathura M. Sarathchandra Magurawalage
Oh, that's weird.

I still get this error. Couldn't this be because I cannot ping the VM in the
first place? As far as I know, the metadata service takes care of SSH keys, but
what if you can't reach the VM in the first place?

no instance data found in start-local

ci-info: lo: 1 127.0.0.1   255.0.0.0   .

ci-info: eth0  : 1 10.5.5.3255.255.255.0   fa:16:3e:a7:28:25

ci-info: route-0: 0.0.0.0 10.5.5.10.0.0.0 eth0   UG

ci-info: route-1: 10.5.5.00.0.0.0 255.255.255.0   eth0   U

cloud-init start running: Sun, 17 Feb 2013 02:45:35 +. up 3.51 seconds

2013-02-17 02:48:25,840 - util.py[WARNING]:
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
[50/120s]: url error [timed out]

2013-02-17 02:49:16,893 - util.py[WARNING]:
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
[101/120s]: url error [timed out]

2013-02-17 02:49:34,912 - util.py[WARNING]:
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
[119/120s]: url error [timed out]

2013-02-17 02:49:35,913 - DataSourceEc2.py[CRITICAL]: giving up on md
after 120 seconds



no instance data found in start

Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd

 * Starting AppArmor profiles                                        [ OK ]



On 17 February 2013 02:41, Jean-Baptiste RANSY 
jean-baptiste.ra...@alyseo.com wrote:

  For me, it's normal that you are not able to curl 169.254.169.254 from
 your compute and controller nodes: same thing on my side, but my VMs get
 their metadata.

 Try to launch an instance.

 JB



 On 02/17/2013 03:35 AM, Chathura M. Sarathchandra Magurawalage wrote:

  root@computernode:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

  root@controller:~# curl -v  http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...


  root@athena:~# iptables -L -n -v
 Chain INPUT (policy ACCEPT 59009 packets, 22M bytes)
  pkts bytes target prot opt in out source destination
 59493   22M quantum-l3-agent-INPUT  all  --  *  *  0.0.0.0/0  0.0.0.0/0
 59493   22M nova-api-INPUT  all  --  *  *  0.0.0.0/0  0.0.0.0/0
   484 73533 ACCEPT  47   --  *  *  0.0.0.0/0  0.0.0.0/0

 Chain FORWARD (policy ACCEPT 707 packets, 47819 bytes)
  pkts bytes target prot opt in out source destination
   707 47819 quantum-filter-top  all  --  *  *  0.0.0.0/0  0.0.0.0/0
   707 47819 quantum-l3-agent-FORWARD  all  --  *  *  0.0.0.0/0  0.0.0.0/0
   707 47819 nova-filter-top  all  --  *  *  0.0.0.0/0  0.0.0.0/0
   707 47819 nova-api-FORWARD  all  --  *  *  0.0.0.0/0  0.0.0.0/0

 Chain OUTPUT (policy ACCEPT 56022 packets, 22M bytes)
  pkts bytes target prot opt in out source destination
 56022   22M quantum-filter-top  all  --  *  *  0.0.0.0/0  0.0.0.0/0
 56022   22M quantum-l3-agent-OUTPUT  all  --  *  *  0.0.0.0/0  0.0.0.0/0
 56022   22M nova-filter-top  all  --  *  *  0.0.0.0/0  0.0.0.0/0
 56022   22M nova-api-OUTPUT  all  --  *  *  0.0.0.0/0  0.0.0.0/0

 Chain nova-api-FORWARD (1 references)
  pkts bytes target prot opt in out source destination

 Chain nova-api-INPUT (1 references)
  pkts bytes target prot opt in out source destination
     0     0 ACCEPT  tcp  --  *  *  0.0.0.0/0  192.168.2.225  tcp dpt:8775

 Chain nova-api-OUTPUT (1 references)
  pkts bytes target prot opt in out source destination

 Chain nova-api-local (1 references)
  pkts bytes target prot opt in out source destination

 Chain nova-filter-top (2 references)
  pkts bytes target prot opt in out source destination
 56729   22M nova-api-local  all  --  *  *  0.0.0.0/0  0.0.0.0/0

 Chain quantum-filter-top (2 references)
  pkts bytes target prot opt in out source destination
 56729   22M quantum-l3-agent-local  all  --  *  *  0.0.0.0/0  0.0.0.0/0

 Chain quantum-l3-agent-FORWARD (1 references)
  pkts bytes target prot opt in out source destination

 Chain quantum-l3-agent-INPUT (1 references)
  pkts bytes target prot opt in out source destination
     0     0 ACCEPT  tcp  --  *  *  0.0.0.0/0  192.168.2.225  tcp dpt:8775

 Chain quantum-l3-agent-OUTPUT (1 references)
  pkts bytes target prot opt in out source destination

 Chain quantum-l3-agent-local (1 references)
  pkts bytes target prot opt in out source destination

  root@athena:~# iptables -L -n -v -t nat
 Chain PREROUTING (policy ACCEPT 3212 packets, 347K bytes)
  pkts bytes target prot opt in out source destination
  3212  347K quantum-l3-agent-PREROUTING  all  --  *  *  0.0.0.0/0  0.0.0.0/0
  3212  347K nova-api-PREROUTING  all

Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Chathura M. Sarathchandra Magurawalage
Hello Anil,

I cannot SSH into the VM, so I can't run ifconfig from it.

I am using Quantum with quantum-plugin-openvswitch-agent,
quantum-dhcp-agent, and quantum-l3-agent, as described in the guide.

Thanks.

-
Chathura Madhusanka Sarathchandra Magurawalage.
1NW.2.1, Desk 2
School of Computer Science and Electronic Engineering
University Of Essex
United Kingdom.

Email: csar...@essex.ac.uk
  chathura.sarathchan...@gmail.com
  77.chath...@gmail.com


On 15 February 2013 07:34, Anil Vishnoi vishnoia...@gmail.com wrote:

 Did your VM get an IP address? Can you paste the output of ifconfig from
 your VM? Are you using nova-network or Quantum? If Quantum, which plugin
 are you using?


  On Fri, Feb 15, 2013 at 4:28 AM, Chathura M. Sarathchandra Magurawalage 
 77.chath...@gmail.com wrote:

  Hello,

 I followed the folsom basic install instructions in
 http://docs.openstack.org/folsom/basic-install/content/basic-install_operate.html

 But now I am not able to ping either the private or the floating ip of
 the instances.

 Can someone please help?

 Instance log:

 [0.00] Initializing cgroup subsys cpuset
 [0.00] Initializing cgroup subsys cpu
 [0.00] Linux version 3.2.0-37-virtual (buildd@allspice) (gcc version 
 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #58-Ubuntu SMP Thu Jan 24 15:48:03 
 UTC 2013 (Ubuntu 3.2.0-37.58-virtual 3.2.35)
 [0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-37-virtual 
 root=LABEL=cloudimg-rootfs ro console=ttyS0
 [0.00] KERNEL supported cpus:
 [0.00]   Intel GenuineIntel
 [0.00]   AMD AuthenticAMD
 [0.00]   Centaur CentaurHauls
 [0.00] BIOS-provided physical RAM map:
 [0.00]  BIOS-e820:  - 0009bc00 (usable)
 [0.00]  BIOS-e820: 0009bc00 - 000a (reserved)
 [0.00]  BIOS-e820: 000f - 0010 (reserved)
 [0.00]  BIOS-e820: 0010 - 7fffd000 (usable)
 [0.00]  BIOS-e820: 7fffd000 - 8000 (reserved)
 [0.00]  BIOS-e820: feffc000 - ff00 (reserved)
 [0.00]  BIOS-e820: fffc - 0001 (reserved)
 [0.00] NX (Execute Disable) protection: active
 [0.00] DMI 2.4 present.
 [0.00] No AGP bridge found
 [0.00] last_pfn = 0x7fffd max_arch_pfn = 0x4
 [0.00] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
 [0.00] found SMP MP-table at [880fdae0] fdae0
 [0.00] init_memory_mapping: -7fffd000
 [0.00] RAMDISK: 3776c000 - 37bae000
 [0.00] ACPI: RSDP 000fd980 00014 (v00 BOCHS )
 [0.00] ACPI: RSDT 7fffd7b0 00034 (v01 BOCHS  BXPCRSDT 
 0001 BXPC 0001)
 [0.00] ACPI: FACP 7f80 00074 (v01 BOCHS  BXPCFACP 
 0001 BXPC 0001)
 [0.00] ACPI: DSDT 7fffd9b0 02589 (v01   BXPC   BXDSDT 
 0001 INTL 20100528)
 [0.00] ACPI: FACS 7f40 00040
 [0.00] ACPI: SSDT 7fffd910 0009E (v01 BOCHS  BXPCSSDT 
 0001 BXPC 0001)
 [0.00] ACPI: APIC 7fffd830 00072 (v01 BOCHS  BXPCAPIC 
 0001 BXPC 0001)
 [0.00] ACPI: HPET 7fffd7f0 00038 (v01 BOCHS  BXPCHPET 
 0001 BXPC 0001)
 [0.00] No NUMA configuration found
 [0.00] Faking a node at -7fffd000
 [0.00] Initmem setup node 0 -7fffd000
 [0.00]   NODE_DATA [7fff8000 - 7fffcfff]
 [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00
 [0.00] kvm-clock: cpu 0, msr 0:1cf7681, boot clock
 [0.00] Zone PFN ranges:
 [0.00]   DMA  0x0010 - 0x1000
 [0.00]   DMA320x1000 - 0x0010
 [0.00]   Normal   empty
 [0.00] Movable zone start PFN for each node
 [0.00] early_node_map[2] active PFN ranges
 [0.00] 0: 0x0010 - 0x009b
 [0.00] 0: 0x0100 - 0x0007fffd
 [0.00] ACPI: PM-Timer IO Port: 0xb008
 [0.00] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
 [0.00] ACPI: IOAPIC (id[0x01] address[0xfec0] gsi_base[0])
 [0.00] IOAPIC[0]: apic_id 1, version 17, address 0xfec0, GSI 0-23
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
 [0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
 [0.00] Using ACPI (MADT) for SMP configuration information
 [0.00] ACPI: HPET id: 0x8086a201 base

Re: [Openstack] Cant ping private or floating IP

2013-02-15 Thread Chathura M. Sarathchandra Magurawalage
Thanks for your reply.

First of all, I do not see the following rule in my iptables:

target prot opt source   destination
DNAT   tcp  --  0.0.0.0/0    169.254.169.254  tcp dpt:80 to:x.x.x.x:8775

Please find the console log at the beginning of the post.

Since I am using an Ubuntu cloud image, I am not able to log in to it through
the VNC console. I can't even ping it; this is the main problem.

Any help will be greatly appreciated.



On 15 February 2013 14:37, Sylvain Bauza sylvain.ba...@digimind.com wrote:

 The metadata API allows an instance to fetch its SSH credentials (the public
 key) at boot. If a VM is unable to reach the metadata service, it won't get
 its public key, so you won't be able to connect, unless you specifically go
 through password authentication (provided password auth is enabled in
 /etc/ssh/sshd_config, which is not the case with Ubuntu cloud images).
 There is also a side effect: the boot process is longer, as the instance
 waits for the curl timeout (60 sec.) before finishing booting up.

 Re: Quantum, the metadata API is actually DNAT'd from Network node to the
 Nova-api node (here 172.16.0.1 as internal management IP) :
 Chain quantum-l3-agent-PREROUTING (1 references)

 target prot opt source   destination
 DNAT   tcp  --  0.0.0.0/0169.254.169.254  tcp dpt:80
 to:172.16.0.1:8775
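As a sketch of how that rule is formed (the metadata host and port below are assumptions taken from the rule quoted above; on a real deployment they come from the nova/l3-agent configuration), the DNAT rule the l3-agent should have installed can be reconstructed and compared against what `iptables -t nat -L` shows:

```shell
# Reconstruct the metadata DNAT rule from its two inputs: the internal
# management IP of the nova-api node and the metadata port (assumed values).
metadata_host=172.16.0.1
metadata_port=8775
rule="-d 169.254.169.254/32 -p tcp --dport 80 -j DNAT --to-destination ${metadata_host}:${metadata_port}"
# The l3-agent installs it in its PREROUTING chain; printed here, not executed:
echo "iptables -t nat -A quantum-l3-agent-PREROUTING $rule"
```

If the printed rule is missing from the nat table on the network node, the l3-agent's metadata settings are the first thing to check.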


 Anyway, the first step is to:
 1. grab the console.log
 2. access the desired instance through VNC

 Troubleshooting will be easier once that is done.

 -Sylvain



 On 15/02/2013 14:24, Chathura M. Sarathchandra Magurawalage wrote:

 Hello Guys,

 Not sure if this is the right port but these are the results:

 *Compute node:*


 root@computenode:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN

 *Controller: *


 root@controller:~# netstat -an | grep 8775
 tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN

 *Additionally, I can't curl 169.254.169.254 from the compute node. I am not
 sure if this is related to not being able to ping the VM.*


 curl -v http://169.254.169.254
 * About to connect() to 169.254.169.254 port 80 (#0)
 *   Trying 169.254.169.254...

 Thanks for your help



  On 15 February 2013 11:03, Anil Vishnoi vishnoia...@gmail.com wrote:

 If you are using an Ubuntu cloud image, then the only way to log in is
 via SSH with the public key injected at boot. For that you have to
 create an SSH key pair and download the private key. You can create
 this key pair using Horizon or the CLI.
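A minimal sketch of that workflow with the nova CLI (key and instance names are placeholders, and the flag spelling may vary between client versions):

```shell
# CLI outline (not executed here; requires a configured nova client):
#   nova keypair-add mykey > mykey.pem
#   nova boot --image <image-id> --flavor m1.tiny --key_name mykey test-vm
#   ssh -i mykey.pem ubuntu@<instance-ip>
#
# ssh refuses a private key file that is readable by others, so the
# downloaded key must be restricted before use:
umask 077
touch mykey.pem          # stand-in for the downloaded private key
chmod 600 mykey.pem
ls -l mykey.pem          # should show -rw------- permissions
rm -f mykey.pem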


  On Fri, Feb 15, 2013 at 4:27 PM, Sylvain Bauza sylvain.ba...@digimind.com wrote:


 On 15/02/2013 11:42, Chathura M. Sarathchandra Magurawalage wrote:


 How can I log into the VM from VNC? What are the credentials?


 You have multiple ways to get VNC access. The easiest one is
 through Horizon. Another is to look at the KVM command line for
 the desired instance (on the compute node) and check the VNC
 port in use (assuming KVM as the hypervisor).
 This is basic knowledge of Nova.
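A sketch of that second route (the qemu command line below is made up for illustration; on a real compute node it would come from `ps aux | grep qemu`):

```shell
# Extract the VNC display number from a KVM/qemu command line and turn it
# into the TCP port to point a VNC client at.
sample='qemu-system-x86_64 -name instance-0000002a ... -vnc 127.0.0.1:1'
display=$(echo "$sample" | sed -n 's/.*-vnc [0-9.]*:\([0-9][0-9]*\).*/\1/p')
# VNC display N listens on TCP port 5900 + N.
echo $((5900 + display))    # prints 5901 for this sample
```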



  nova-api-metadata is running fine on the compute node.


 Make sure the metadata port is available using telnet or netstat;
 nova-api can be running without listening on the metadata port.








 -- Thanks & Regards
 Anil Kumar Vishnoi






[Openstack] Cant ping private or floating IP

2013-02-14 Thread Chathura M. Sarathchandra Magurawalage
Hello,

I followed the folsom basic install instructions in
http://docs.openstack.org/folsom/basic-install/content/basic-install_operate.html

But now I am not able to ping either the private or the floating ip of the
instances.

Can someone please help?

Instance log:

[0.00] Initializing cgroup subsys cpuset
[0.00] Initializing cgroup subsys cpu
[0.00] Linux version 3.2.0-37-virtual (buildd@allspice) (gcc
version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #58-Ubuntu SMP Thu Jan
24 15:48:03 UTC 2013 (Ubuntu 3.2.0-37.58-virtual 3.2.35)
[0.00] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-37-virtual
root=LABEL=cloudimg-rootfs ro console=ttyS0
[0.00] KERNEL supported cpus:
[0.00]   Intel GenuineIntel
[0.00]   AMD AuthenticAMD
[0.00]   Centaur CentaurHauls
[0.00] BIOS-provided physical RAM map:
[0.00]  BIOS-e820:  - 0009bc00 (usable)
[0.00]  BIOS-e820: 0009bc00 - 000a (reserved)
[0.00]  BIOS-e820: 000f - 0010 (reserved)
[0.00]  BIOS-e820: 0010 - 7fffd000 (usable)
[0.00]  BIOS-e820: 7fffd000 - 8000 (reserved)
[0.00]  BIOS-e820: feffc000 - ff00 (reserved)
[0.00]  BIOS-e820: fffc - 0001 (reserved)
[0.00] NX (Execute Disable) protection: active
[0.00] DMI 2.4 present.
[0.00] No AGP bridge found
[0.00] last_pfn = 0x7fffd max_arch_pfn = 0x4
[0.00] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
[0.00] found SMP MP-table at [880fdae0] fdae0
[0.00] init_memory_mapping: -7fffd000
[0.00] RAMDISK: 3776c000 - 37bae000
[0.00] ACPI: RSDP 000fd980 00014 (v00 BOCHS )
[0.00] ACPI: RSDT 7fffd7b0 00034 (v01 BOCHS  BXPCRSDT
0001 BXPC 0001)
[0.00] ACPI: FACP 7f80 00074 (v01 BOCHS  BXPCFACP
0001 BXPC 0001)
[0.00] ACPI: DSDT 7fffd9b0 02589 (v01   BXPC   BXDSDT
0001 INTL 20100528)
[0.00] ACPI: FACS 7f40 00040
[0.00] ACPI: SSDT 7fffd910 0009E (v01 BOCHS  BXPCSSDT
0001 BXPC 0001)
[0.00] ACPI: APIC 7fffd830 00072 (v01 BOCHS  BXPCAPIC
0001 BXPC 0001)
[0.00] ACPI: HPET 7fffd7f0 00038 (v01 BOCHS  BXPCHPET
0001 BXPC 0001)
[0.00] No NUMA configuration found
[0.00] Faking a node at -7fffd000
[0.00] Initmem setup node 0 -7fffd000
[0.00]   NODE_DATA [7fff8000 - 7fffcfff]
[0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00
[0.00] kvm-clock: cpu 0, msr 0:1cf7681, boot clock
[0.00] Zone PFN ranges:
[0.00]   DMA  0x0010 - 0x1000
[0.00]   DMA320x1000 - 0x0010
[0.00]   Normal   empty
[0.00] Movable zone start PFN for each node
[0.00] early_node_map[2] active PFN ranges
[0.00] 0: 0x0010 - 0x009b
[0.00] 0: 0x0100 - 0x0007fffd
[0.00] ACPI: PM-Timer IO Port: 0xb008
[0.00] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[0.00] ACPI: IOAPIC (id[0x01] address[0xfec0] gsi_base[0])
[0.00] IOAPIC[0]: apic_id 1, version 17, address 0xfec0, GSI 0-23
[0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[0.00] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[0.00] Using ACPI (MADT) for SMP configuration information
[0.00] ACPI: HPET id: 0x8086a201 base: 0xfed0
[0.00] SMP: Allowing 1 CPUs, 0 hotplug CPUs
[0.00] PM: Registered nosave memory: 0009b000 - 0009c000
[0.00] PM: Registered nosave memory: 0009c000 - 000a
[0.00] PM: Registered nosave memory: 000a - 000f
[0.00] PM: Registered nosave memory: 000f - 0010
[0.00] Allocating PCI resources starting at 8000 (gap:
8000:7effc000)
[0.00] Booting paravirtualized kernel on KVM
[0.00] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64
nr_cpu_ids:1 nr_node_ids:1
[0.00] PERCPU: Embedded 28 pages/cpu @88007fc0 s82880
r8192 d23616 u2097152
[0.00] kvm-clock: cpu 0, msr 0:7fc13681, primary cpu clock
[0.00] KVM setup async PF for cpu 0
[0.00] kvm-stealtime: cpu 0, msr 7fc0dd40
[0.00] Built 1 zonelists in Node order, mobility grouping on.
Total pages: 515971
[0.00] Policy zone: DMA32
[0.00] Kernel command line:

[Openstack] Undo changes made by Quantum script

2013-01-28 Thread Chathura M. Sarathchandra Magurawalage
Hello,

I am new to OpenStack and I have accidentally inserted the wrong IP addresses
into the Quantum script (
https://raw.github.com/EmilienM/openstack-folsom-guide/master/scripts/quantum-networking.sh)
from the OpenStack Folsom guide (
http://docs.openstack.org/folsom/basic-install/content/basic-install_network.html).
How can I undo the changes that were made by the script?
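A rough sketch of the teardown, mirroring the script in reverse (the resource and bridge names below are assumptions based on the Folsom guide; the commands are printed rather than executed so they can be reviewed and adjusted first):

```shell
# Print the candidate undo commands; review, adjust names, then run by hand.
for c in \
  "quantum router-gateway-clear provider-router" \
  "quantum router-delete provider-router" \
  "quantum net-delete net_proj_one" \
  "ovs-vsctl del-br br-ex"; do
    echo "$c"
done
```

Listing first with `quantum net-list` and `quantum router-list` is safer than guessing names.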

Thanks,
Chathura