Hi Om Prakash,

   Based on the shared Nova VM instance console log, the VM has sent DHCP
discover requests, but we need to confirm whether the DHCP agent (q-dhcp)
is sending the DHCP offer back to the VM properly. To verify this, please
capture packets on the VM's tap interface (using tcpdump), and also please
confirm that the OpenStack q-dhcp service is up and running without any
issues. A sample of a healthy q-dhcp agent status, taken from the OpenStack
CLI, is shown below, followed by a sketch of the packet capture.



stack@osc-pike-ubuntu16:~$ openstack network agent list
+--------------------------------------+----------------+-------------------+-------------------+-------+-------+------------------------------+
| ID                                   | Agent Type     | Host              | Availability Zone | Alive | State | Binary                       |
+--------------------------------------+----------------+-------------------+-------------------+-------+-------+------------------------------+
| 1aa5f2c0-b6ea-4390-81fc-8e1f12d5a89e | Metadata agent | osc-pike-ubuntu16 | None              | :-)   | UP    | neutron-metadata-agent       |
| 870fd06d-de30-4590-b33a-665556a410c2 | ODL L2         | osc-pike-ubuntu16 | None              | :-)   | UP    | neutron-odlagent-portbinding |
| 8a5b4424-1e91-4932-9e6e-5a351817f70d | DHCP agent     | osc-pike-ubuntu16 | nova              | :-)   | UP    | neutron-dhcp-agent           |
+--------------------------------------+----------------+-------------------+-------------------+-------+-------+------------------------------+
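
For the packet capture, something along these lines should work. Note that
<vm-name>, <network-id> and the tap device name below are placeholders (I am
assuming the usual ML2/OVS naming, where the tap device is "tap" followed by
the first 11 characters of the Neutron port ID), so please substitute the
actual values from your setup:

# On the compute node: find the tap device that backs the VM's port
$ openstack port list --server <vm-name>
$ ip link | grep tap

# Capture DHCP traffic (UDP ports 67/68) on that tap interface
$ sudo tcpdump -ni <tap-device> -vvv port 67 or port 68

# On the node running the DHCP agent: check the qdhcp namespace and dnsmasq
$ ip netns list | grep qdhcp
$ sudo ip netns exec qdhcp-<network-id> ip addr
$ ps aux | grep dnsmasq

If the DHCP offers show up on the tap interface but the VM still gets no
lease, the problem is most likely on the path between the DHCP namespace and
the VM (for example the OVS/NSH flows programmed by ODL); if no offer is seen
at all, please look at the q-dhcp agent and dnsmasq logs first.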

Thanks & Regards,
Karthikeyan.



On Tue, Jul 3, 2018 at 8:34 PM, PRAKASH, OM <[email protected]> wrote:

> Thank you Karthikeyan for the help.
>
>
>
> I need one more piece of help. I am trying to test SFC. I have 3 VMs: one is
> the ODL controller, the 2nd one is the OpenStack controller, and the 3rd one
> is the OpenStack compute node.
>
>
>
> After setting up and running the devstack build, I had the configuration below:
>
>
>
> 1)      VM1: ODL Oxygen and OVS 2.6.1 with NSH
>
> 2)      VM2: Devstack controller (Queens) with OVS 2.8
>
> 3)      VM3: Devstack compute (Queens) with OVS 2.8
>
>
>
> Did a sanity test: created a VM with port P1 and static IP ip_address=20.0.0.3,
> and it worked perfectly.
>
>
>
> Now for SFC, I cleaned up everything on all three VMs.
>
>  Then
>
> 1)      ran unstack.sh on VM2 and VM3 and installed OVS 2.6 with the NSH
> header
>
> 2)      then ran stack.sh on VM2/VM3
>
>
>
> And after this I have this setup:
>
>
>
> 1)      VM1: ODL Oxygen and OVS 2.6.1 with NSH
>
> 2)      VM2: Devstack controller (Queens) with OVS 2.6.1 with NSH
>
> 3)      VM3: Devstack compute (Queens) with OVS 2.6.1 with NSH
>
>
>
> Now I do the same sanity test, creating a VM with port P1 and static IP
> ip_address=20.0.0.3, but the VM does not get an IP address assigned.
>
>
>
> Can you please help me with how to debug this issue?
>
>
>
> info: initramfs: up at 8.10
>
> NOCHANGE: partition 1 is size 64260. it cannot be grown
>
> info: initramfs loading root from /dev/vda1
>
> info: /etc/init.d/rc.sysinit: up at 11.12
>
> info: container: none
>
> Starting logging: OK
>
> modprobe: module virtio_blk not found in modules.dep
>
> modprobe: module virtio_net not found in modules.dep
>
> WARN: /etc/rc3.d/S10-load-modules failed
>
> Initializing random number generator... done.
>
> Starting acpid: OK
>
> cirros-ds 'local' up at 17.24
>
> [   17.447905] hrtimer: interrupt took 13508421 ns
>
> no results found for mode=local. up 18.47. searched: nocloud configdrive
> ec2
>
> Starting network...
>
> udhcpc (v1.20.1) started
>
> Sending discover...
>
> Sending discover...
>
> Sending discover...
>
> Usage: /sbin/cirros-dhcpc <up|down>
>
> No lease, failing
>
> WARN: /etc/rc3.d/S40-network failed
>
> cirros-ds 'net' up at 201.48
>
> checking http://169.254.169.254/2009-04-04/instance-id
>
> failed 1/20: up 202.25. request failed
>
> failed 2/20: up 205.19. request failed
>
> failed 3/20: up 207.62. request failed
>
> failed 4/20: up 210.38. request failed
>
> failed 5/20: up 212.79. request failed
>
> failed 6/20: up 215.34. request failed
>
> failed 7/20: up 218.00. request failed
>
> failed 8/20: up 220.35. request failed
>
> failed 9/20: up 223.06. request failed
>
> failed 10/20: up 225.41. request failed
>
> failed 11/20: up 228.07. request failed
>
> failed 12/20: up 230.45. request failed
>
> failed 13/20: up 233.17. request failed
>
> failed 14/20: up 235.68. request failed
>
> failed 15/20: up 238.23. request failed
>
> failed 16/20: up 240.92. request failed
>
> failed 17/20: up 243.27. request failed
>
> failed 18/20: up 245.98. request failed
>
> failed 19/20: up 248.32. request failed
>
> failed 20/20: up 251.03. request failed
>
> failed to read iid from metadata. tried 20
>
> no results found for mode=net. up 253.44. searched: nocloud configdrive ec2
>
> failed to get instance-id of datasource
>
> Top of dropbear init script
>
> Starting dropbear sshd: failed to get instance-id of datasource
>
> WARN: generating key of type ecdsa failed!
>
> OK
>
> === system information ===
>
> Platform: OpenStack Foundation OpenStack Nova
>
> Container: none
>
> Arch: x86_64
>
> CPU(s): 1 @ 2793.539 MHz
>
> Cores/Sockets/Threads: 1/1/1
>
> Virt-type: AMD-V
>
> RAM Size: 49MB
>
> Disks:
>
> NAME MAJ:MIN     SIZE LABEL         MOUNTPOINT
>
> vda  253:0   41126400
>
> vda1 253:1   32901120 cirros-rootfs /
>
> === sshd host keys ===
>
> -----BEGIN SSH HOST KEY KEYS-----
>
> ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgnlmTxi/Mn8UXIxMnaUwY9DERS3QMp9xChjEeN
> iMehaZUAtCH0ki++waKv6fipE3BPCvPbyD1iICJhwJhGMIo8iJMe+NID5KVQ5rBNz/
> vt1g3NRJoiknD6ZuImNZviEFzXyndgDf8jjb5AaU9MYqOXZpta5PF5NGzutcznz66XCWiH0=
> root@cirros
>
> ssh-dss AAAAB3NzaC1kc3MAAACBAIXCxwR4nYn4s/z6p3SEeQ+
> OL71EjgXoD1zhP19oRcBRC2qaPMlXUjLinyFfJ3FHfHf7lzmwzBXycPwi38e
> Y3oZNgfpRTzgy7k89FY5eeykpPE+HX52A3jYIisQvUHxbbI/tNGLbCN/
> 8DJUbBpZslXgrVbYrTnvmabJpFWbVMxXNAAAAFQD3MJ6DOaeA4b63VVA4xKg
> nxpt4TQAAAIAjcD+0TaYSxD1eBD1Vz5kvjxX0DJdm/HwH/3RowXMCvu73t9E/
> iJFlcB4mD7lynKR5ZnmoKGkcdXpntBCwBZHSQQSFCpEdB/
> Cs7bcDHZTNhxH4mTexOy4Rt9IxWVyWyBEPQhCsJzY+165xAwjnCU2B4VsJZGYX9+/
> gqxl1MQ8UZwAAAIB5NvR26FM74sghPtR7MRWjGUannkfiToNW8mM5x1/
> MIPAvHekhm7Wu9W5t2Ts6jsOAI7WsldRTJsMV6XxU2Yj8sNq9hNU11hVbhw2f3TJC/
> 4EH67ivWrwfMZRjBPKqXtW2OJz0kqWKBygmwq7VhVX634nLbY60Mbr12ze7B5zyfA==
> root@cirros
>
> -----END SSH HOST KEY KEYS-----
>
> === network info ===
>
> if-info: lo,up,127.0.0.1,8,::1
>
> if-info: eth0,up,,8,fe80::f816:3eff:feaa:f28e
>
> === datasource: None None ===
>
> === cirros: current=0.3.5 uptime=259.57 ===
>
> route: fscanf
>
> === pinging gateway failed, debugging connection ===
>
> ############ debug start ##############
>
> ### /etc/init.d/sshd start
>
> Top of dropbear init script
>
> Starting dropbear sshd: failed to get instance-id of datasource
>
> WARN: generating key of type ecdsa failed!
>
> FAIL
>
> route: fscanf
>
> ### ifconfig -a
>
> eth0      Link encap:Ethernet  HWaddr FA:16:3E:AA:F2:8E
>
>           inet6 addr: fe80::f816:3eff:feaa:f28e/64 Scope:Link
>
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>
>           RX packets:13 errors:0 dropped:6 overruns:0 frame:0
>
>           TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
>
>           collisions:0 txqueuelen:1000
>
>           RX bytes:1224 (1.1 KiB)  TX bytes:1132 (1.1 KiB)
>
>
>
> lo        Link encap:Local Loopback
>
>           inet addr:127.0.0.1  Mask:255.0.0.0
>
>           inet6 addr: ::1/128 Scope:Host
>
>           UP LOOPBACK RUNNING  MTU:16436  Metric:1
>
>           RX packets:12 errors:0 dropped:0 overruns:0 frame:0
>
>           TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
>
>           collisions:0 txqueuelen:0
>
>           RX bytes:1020 (1020.0 B)  TX bytes:1020 (1020.0 B)
>
>
>
> ### route -n
>
> Kernel IP routing table
>
> Destination     Gateway         Genmask         Flags Metric Ref    Use
> Iface
>
> route: fscanf
>
> ### cat /etc/resolv.conf
>
> cat: can't open '/etc/resolv.conf': No such file or directory
>
> ### gateway not found
>
> /sbin/cirros-status: line 1: can't open /etc/resolv.conf: no such file
>
> ### pinging nameservers
>
> ### uname -a
>
> Linux cirros 3.2.0-80-virtual #116-Ubuntu SMP Mon Mar 23 17:28:52 UTC 2015
> x86_64 GNU/Linux
>
> ### lsmod
>
> Module                  Size  Used by    Not tainted
>
> nls_iso8859_1          12713  0
>
> nls_cp437              16991  0
>
> vfat                   17585  0
>
> fat                    61512  1 vfat
>
> isofs                  40259  0
>
> ip_tables              27473  0
>
> x_tables               29891  1 ip_tables
>
> pcnet32                42119  0
>
> 8139cp                 27360  0
>
> ne2k_pci               13691  0
>
> 8390                   18856  1 ne2k_pci
>
> e1000                 108589  0
>
> acpiphp                24231  0
>
> ### dmesg | tail
>
> [   14.782570] acpiphp: Slot [30] registered
>
> [   14.783018] acpiphp: Slot [31] registered
>
> [   14.993447] e1000: Intel(R) PRO/1000 Network Driver - version
> 7.3.21-k8-NAPI
>
> [   14.993589] e1000: Copyright (c) 1999-2006 Intel Corporation.
>
> [   15.078199] ne2k-pci.c:v1.03 9/22/2003 D. Becker/P. Gortmaker
>
> [   15.167340] 8139cp: 8139cp: 10/100 PCI Ethernet driver v1.3 (Mar 22,
> 2004)
>
> [   15.234572] pcnet32: pcnet32.c:v1.35 21.Apr.2008
> [email protected]
>
> [   15.364192] ip_tables: (C) 2000-2006 Netfilter Core Team
>
> [   17.447905] hrtimer: interrupt took 13508421 ns
>
> [   30.290671] eth0: no IPv6 routers present
>
> ### tail -n 25 /var/log/messages
>
> Jul  3 15:47:02 cirros kern.info kernel: [   11.247234] EXT3-fs (vda1):
> using internal journal
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.748572] acpiphp: ACPI Hot
> Plug PCI Controller Driver version: 0.5
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.768152] acpiphp: Slot [2]
> registered
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.769498] acpiphp: Slot [3]
> registered
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.769848] acpiphp: Slot [4]
> registered
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.770218] acpiphp: Slot [5]
> registered
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.771520] acpiphp: Slot [6]
> registered
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.771917] acpiphp: Slot [7]
> registered
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.772305] acpiphp: Slot [8]
> registered
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.772920] acpiphp: Slot [9]
> registered
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.773358] acpiphp: Slot
> [10] registered
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.773824] acpiphp: Slot
> [11] registered
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.774208] acpiphp: Slot
> [12] registered
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.779128] acpiphp: Slot
> [22] registered
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.779565] acpiphp: Slot
> [23] registered
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.779922] acpiphp: Slot
> [24] registered
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.993447] e1000: Intel(R)
> PRO/1000 Network Driver - version 7.3.21-k8-NAPI
>
> Jul  3 15:47:03 cirros kern.info kernel: [   14.993589] e1000: Copyright
> (c) 1999-2006 Intel Corporation.
>
> Jul  3 15:47:03 cirros kern.info kernel: [   15.078199] ne2k-pci.c:v1.03
> 9/22/2003 D. Becker/P. Gortmaker
>
> Jul  3 15:47:04 cirros kern.info kernel: [   15.167340] 8139cp: 8139cp:
> 10/100 PCI Ethernet driver v1.3 (Mar 22, 2004)
>
> Jul  3 15:47:04 cirros kern.info kernel: [   15.234572] pcnet32:
> pcnet32.c:v1.35 21.Apr.2008 [email protected]
>
> Jul  3 15:47:04 cirros kern.info kernel: [   15.364192] ip_tables: (C)
> 2000-2006 Netfilter Core Team
>
> Jul  3 15:47:06 cirros kern.warn kernel: [   17.447905] hrtimer: interrupt
> took 13508421 ns
>
> Jul  3 15:47:19 cirros kern.debug kernel: [   30.290671] eth0: no IPv6
> routers present
>
> Jul  3 15:51:04 cirros authpriv.info dropbear[306]: Running in background
>
> ############ debug end   ##############
>
>   ____               ____  ____
>
> / __/ __ ____ ____ / __ \/ __/
>
> / /__ / // __// __// /_/ /\ \
>
> \___//_//_/  /_/   \____/___/
>
>    http://cirros-cloud.net
>
>
>
>
>
> login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
>
> cirros login:
>
>
>
> Thanks
>
> Om Prakash
>
>
>
>
>
_______________________________________________
sfc-dev mailing list
[email protected]
https://lists.opendaylight.org/mailman/listinfo/sfc-dev
