No, they don't. On each T5240 we have 5 LDoms running and they seem to
drop at random. Even when one LDom drops, a second one on the same vsw is
still up and running. It's very strange.
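
Next time one drops I'm planning to snoop the physical port behind the
vsw from the control domain, to see whether the stuck guest's traffic is
reaching the wire at all. Roughly like this (nxge1 is just the port behind
primary-vsw1 in my config below; the addresses are placeholders):

    # on the control domain: watch for the stuck guest's traffic
    snoop -d nxge1 host <stuck-ldom-ip>

    # on the stuck LDom's console: generate traffic and check its ARP cache
    ping <default-gateway-ip>
    arp -a

If snoop shows the outbound packets but no replies, the break is upstream
of the vsw; if nothing shows up at all, it's somewhere between vnet and vsw.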

The other thing I've found is that all vnets on the LDom drop at the  
same time (vnet0, vnet1 and vnet2).
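
Since all three go at once it doesn't look like a single vsw or physical
port. Next outage I'll also watch the packet counters on each vnet from
inside the guest, something like this (10-second interval; an untested sketch):

    netstat -I vnet0 10
    netstat -I vnet1 10
    netstat -I vnet2 10

If the input counters freeze on all three interfaces at the same moment,
that would point at the LDC/hypervisor side rather than the network itself.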

The control domain has never lost its connection, but it doesn't
share access with any of the vsw devices (it has its own NIC).
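
One thing I might try is plumbing the vsw devices in the control domain
so I can test the guest-to-vsw path directly during an outage (with LDoms
1.0.x the vsw shows up as a plumbable network device; the address here is
made up, so adjust for the real subnet):

    # in the control domain
    ifconfig vsw0 plumb
    ifconfig vsw0 inet 192.168.10.250 netmask 255.255.255.0 up

    # then, while a guest is unreachable
    ping <stuck-ldom-ip>

If the guest answers across the vsw but not from outside, the break is
between the vsw and nxge1 rather than inside the guest.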

S.

On 17-Jun-08, at 12:11 PM, Steve Goldthorpe wrote:

> Do all your LDoms on the same host drop off the
> network at the same time, ditto for the Control Domain
> (on the vsw devices)?
>
> -Steve
>
> --- Scott Adair <scott at adair.cc> wrote:
>
>> Hi
>>
>> I'm seeing some strange behavior with networking inside my LDom
>> environment, and was hoping that somebody has experienced the same
>> issue, and hopefully has an idea how to fix it :-)
>>
>> First, here is some background information. The system is a T5240
>> running Solaris 10u5 and LDoms 1.0.3. There are 5 LDoms configured;
>> each of them uses all three virtual switches, and each switch is
>> connected to a single physical network port.
>>
>> The primary domain has 8 VCPUs and 8GB of RAM allocated to it; nothing
>> else is running in it. Each LDom is configured with 24 VCPUs, 3 MAUs,
>> at least 22GB of RAM and three virtual NICs.
>>
>> Every now and then an LDom will drop off our network for no apparent
>> reason. The LDom is still running, I am able to connect to the console,
>> and the vnet0 interface is still plumbed, but I cannot ping the LDom
>> from the outside or from the primary domain. Generally if I leave the
>> system alone for a period of time (say 10-15 minutes) everything comes
>> back to life.
>>
>> Nothing of relevance shows up in the logs of either the LDom or the
>> primary domain, aside from the inability to connect to our NIS or NFS
>> servers.
>>
>> Below is a listing of the configuration. If anybody needs more
>> information, please let me know. Any ideas would be helpful!
>>
>> Scott
>>
>>
>>
>> torsun01sx:/root# ldm list-bindings -e
>> NAME             STATE    FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
>> primary          active   -n-cv   SP      8     8G       2.8%  20h 36m
>>
>> MAC
>>     00:14:4f:e8:a9:b8
>>
>> VCPU
>>     VID    PID    UTIL STRAND
>>     0      0      1.7%   100%
>>     1      1      1.5%   100%
>>     2      2      0.3%   100%
>>     3      3      0.1%   100%
>>     4      4      1.8%   100%
>>     5      5      0.3%   100%
>>     6      6      0.1%   100%
>>     7      7      0.2%   100%
>>
>> MAU
>>     ID     CPUSET
>>     0      (0, 1, 2, 3, 4, 5, 6, 7)
>>
>> MEMORY
>>     RA               PA               SIZE
>>     0xe000000        0xe000000        8G
>>
>> IO
>>     DEVICE           PSEUDONYM        OPTIONS
>>     pci@400          pci_0
>>     pci@500          pci_1
>>
>> VCC
>>     NAME             PORT-RANGE
>>     primary-vcc0     5001-5100
>>         CLIENT                      PORT   LDC
>>         torld-soln02@primary-vcc0   5001   17
>>         torld-soln01@primary-vcc0   5002   26
>>         torld-qa02@primary-vcc0     5003   33
>>         torld-qa01@primary-vcc0     5004   40
>>         torld-soleng01@primary-vcc0 5005   46
>>
>> VSW
>>     NAME             MAC               NET-DEV   DEVICE     MODE
>>     primary-vsw1     00:14:4f:f9:ef:b6 nxge1     switch@0
>>         PEER                        MAC                LDC
>>         vnet0@torld-soln02          00:14:4f:fb:ae:3c  11
>>         vnet2@torld-soln01          00:14:4f:f9:08:7e  21
>>         vnet1@torld-qa02            00:14:4f:fb:7d:0e  28
>>         vnet2@torld-qa01            00:14:4f:fb:0d:8a  36
>>         vnet1@torld-soleng01        00:14:4f:f9:64:9b  42
>>     NAME             MAC               NET-DEV   DEVICE     MODE
>>     primary-vsw2     00:14:4f:fa:a2:aa nxge2     switch@1
>>         PEER                        MAC                LDC
>>         vnet1@torld-soln02          00:14:4f:fb:c2:74  12
>>         vnet0@torld-soln01          00:14:4f:f8:44:52  18
>>         vnet2@torld-qa02            00:14:4f:f9:8f:26  29
>>         vnet0@torld-qa01            00:14:4f:f9:7c:be  34
>>         vnet2@torld-soleng01        00:14:4f:fa:28:34  43
>>     NAME             MAC               NET-DEV   DEVICE     MODE
>>     primary-vsw3     00:14:4f:fb:22:8b nxge3     switch@2
>>         PEER                        MAC                LDC
>>         vnet2@torld-soln02          00:14:4f:fa:3c:e3  14
>>         vnet1@torld-soln01          00:14:4f:fb:66:d4  19
>>         vnet0@torld-qa02            00:14:4f:f9:e9:26  27
>>         vnet1@torld-qa01            00:14:4f:fa:a6:e1  35
>>         vnet0@torld-soleng01        00:14:4f:fb:05:47  41
>>
>> VDS
>>     NAME             VOLUME                OPTIONS   DEVICE
>>     primary-vds0     torld-soln02_dsk01              /data/ldom/torld-soln02/dsk01.img
>>                      torld-qa01_dsk01                /data/ldom/torld-qa01/dsk01.img
>>                      torld-qa01_dsk02                /data/ldom/torld-qa01/dsk02.img
>>                      torld-qa02_dsk01                /data/ldom/torld-qa02/dsk01.img
>>                      torld-qa02_dsk02                /data/ldom/torld-qa02/dsk02.img
>>                      torld-soln01_dsk01              /data/ldom/torld-soln01/dsk01.img
>>                      torld-soleng01_dsk01            /data/ldom/torld-soleng01/dsk01.img
>>         CLIENT                              VOLUME                LDC
>>         torld-soln02_dsk01@torld-soln02     torld-soln02_dsk01    15
>>         torld-soln01_dsk01@torld-soln01     torld-soln01_dsk01    24
>>         torld-qa02_dsk01@torld-qa02         torld-qa02_dsk01      30
>>         torld-qa02_dsk02@torld-qa02         torld-qa02_dsk02      31
>>         torld-qa01_dsk01@torld-qa01         torld-qa01_dsk01      37
>>         torld-qa01_dsk02@torld-qa01         torld-qa01_dsk02      38
>>         torld-soleng01_dsk01@torld-soleng01 torld-soleng01_dsk01  44
>>
>> VLDC
>>     NAME
>>     primary-vldc3
>>         CLIENT                      DESC               LDC
>>         SP                          spds               20
>>         SP                          sunvts             6
>>         SP                          sunmc              7
>>         SP                          explorer           8
>>         SP                          led                9
>>         SP                          flashupdate        10
>>         SP                          system-management
> === message truncated ===
>
> _______________________________________________
> ldoms-discuss mailing list
> ldoms-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/ldoms-discuss

