OK, I put Solaris 10 U4 on.

It installs fine.

On first boot it also gets:
WARNING: ds@0: send_msg: ldc_write failed (131)

but I haven't seen it since.
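For anyone following the thread, the diagnostics Liam asked for below can be gathered in the control domain with something like the following sketch (paths and pool names are the ones from this thread; mdb needs root):

```shell
# Look for recent LDC / Domain Services warnings logged by the control domain
grep -i 'ldc' /var/adm/messages | tail -20

# Dump LDC channel state from the running kernel (root required)
echo ::ldcinfo | mdb -k

# Confirm the ZFS volumes backing the guest disks exist
zfs list
ls -l /dev/zvol/dsk/pool/ldom/

# Full domain configuration, as requested
ldm ls -l
```

These commands only make sense on a Solaris control domain with the LDoms Manager installed, so run them there rather than in the guest.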


Liam Merwick wrote:
> 
> Looking at the output below, all seems OK; there is no smoking gun :-(
> I'm at a loss to explain what is happening.
> 
> Are there any warnings/notices in /var/adm/messages in the control domain?
> What does "echo ::ldcinfo | mdb -k" on the control domain give?
> 
> As a matter of interest, which of cluster1 and cluster2 is the guest 
> being booted? I assume /dev/zvol/dsk/pool/ldom/cluster* exist.
> Could you send on the 'zfs list' output?
> 
> Sorry, but I'm clutching at straws at this stage.
> 
> -- Liam
> 
> 
> jpd wrote:
>> Liam Merwick wrote:
>>> John-Paul Drawneek wrote:
>>>> OK, using b85 for the control LDom I get more issues with SXDE 1/08.
>>>>
>>>> I get this:
>>>> SunOS Release 5.11 Version snv_79a 64-bit
>>>> Copyright 1983-2007 Sun Microsystems, Inc.  All rights reserved.
>>>> Use is subject to license terms.
>>>> WARNING: ds@0: send_msg: ldc_write failed (131)
>>>> whoami: no domain name
>>>> WARNING: Cannot mount /etc/dfs/sharetab
>>>> Configuring /dev
>>>> WARNING: i_ldc_get_tx_tail: (0x1) cannot read qptrs
>>>>
>>>> WARNING: ldc_read: (0x1) unable to read queue ptrs
>>>> WARNING: i_ldc_rxq_reconf: (0x1) cannot get state
>>>> Using RPC Bootparams for network configuration information.
>>>> Attempting to configure interface vnet2...
>>>>
>>>> Which is more than before with b83.
>>>>
>>>> So from the looks of it, SXDE 1/08 is broken on LDoms; b85 works fine.
>>>>
>>>> Anybody got it to work?
>>>>
>>> I netbooted SXDE 1/08 just now to see if it worked for me and had no 
>>> problems.
>>
>> joy
>>
>>> I'm running a development build of snv_85 on my control domain so it
>>> should be pretty close to your setup.
>>>
>>> Your problem seems to stem from the Domain Services module (ds) not
>>> being able to communicate via LDC. What version of SUNWldm and
>>> firmware are you running? We'd also need to see the 'ldm ls -l'
>>> output before we could hazard a guess at what is going wrong.
>>>
>>> -- Liam
>>
>> Sun-Fire-T1000 System Firmware 6.6.1  2008/02/11 15:54
>>
>> Host flash versions:
>>     OBP 4.28.1 2008/02/11 13:04
>>     Hypervisor 1.6.1 2008/02/11 12:15
>>     POST 4.28.1 2008/02/11 13:29
>>
>> Control domain SXCR b85
>>
>> ldm -V
>>
>> Logical Domain Manager (v 1.0.2)
>>          Hypervisor control protocol v 1.1
>>          Using Hypervisor MD v 1.1
>>
>> System PROM:
>>          Hypervisor      v. 1.6.1        @(#)Hypervisor 1.6.1 2008/02/11 12:15
>>
>>          OpenBoot        v. 4.28.1       @(#)OBP 4.28.1 2008/02/11 13:04
>>
>> -bash-3.2$ ldm ls -l
>> NAME             STATE    FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
>> primary          active   -n-cv   SP      4     4G        42%  1m
>>
>> SOFTSTATE
>> Solaris running
>>
>> VCPU
>>      VID    PID    UTIL STRAND
>>      0      0       72%   100%
>>      1      1       30%   100%
>>      2      2       36%   100%
>>      3      3       30%   100%
>>
>> MAU
>>      CPUSET
>>      (0, 1, 2, 3)
>>
>> MEMORY
>>      RA               PA               SIZE
>>      0x8000000        0x8000000        4G
>>
>> VARIABLES
>>      boot-device=disk2 disk1 net
>>      nvramrc=devalias disk1 /pci@7c0/pci@0/pci@8/scsi@2/disk@0,0:a
>> devalias disk2 /pci@7c0/pci@0/pci@8/scsi@2/disk@0,0:b
>>
>>      use-nvramrc?=true
>>
>> IO
>>      DEVICE           PSEUDONYM        OPTIONS
>>      pci@780          bus_a
>>      pci@7c0          bus_b
>>
>> VDS
>>      NAME             VOLUME         OPTIONS          DEVICE
>>      primary-vds0     cluster1                        /dev/zvol/dsk/pool/ldom/cluster1
>>                       cluster2                        /dev/zvol/dsk/pool/ldom/cluster2
>>
>> VCC
>>      NAME             PORT-RANGE
>>      primary-vcc0     5000-5031
>>
>> VSW
>>      NAME             MAC               NET-DEV   DEVICE     MODE
>>      primary-vsw0     00:14:4f:f9:47:32 bge0      switch@0   prog,promisc
>>      primary-vsw1     00:14:4f:f9:f2:eb bge1      switch@1   prog,promisc
>>      primary-vsw2     00:14:4f:f8:00:52 bge2      switch@2   prog,promisc
>>      primary-vsw3     00:14:4f:fb:61:00 bge3      switch@3   prog,promisc
>>
>> VCONS
>>      NAME             SERVICE                     PORT
>>      SP
>>
>> ------------------------------------------------------------------------------
>>  
>>
>> NAME             STATE    FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
>> cluster1         inactive -----           4     1984M
>>
>> MAU
>>      COUNT
>>      1
>>
>> VARIABLES
>>      auto-boot?=false
>>
>> NETWORK
>>      NAME             SERVICE                     DEVICE     MAC
>>      vnet0            primary-vsw0                network@0  00:14:4f:f9:56:76
>>      vnet1            primary-vsw2                network@1  00:14:4f:f9:3e:ca
>>      vnet2            primary-vsw2                network@2  00:14:4f:fb:45:dd
>>
>> DISK
>>      NAME             VOLUME                      TOUT DEVICE  SERVER
>>      vdisk            cluster1@primary-vds0            disk@0
>>
>> ------------------------------------------------------------------------------
>>  
>>
>> NAME             STATE    FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
>> cluster2         inactive -----           4     1984M
>>
>> MAU
>>      COUNT
>>      1
>>
>> VARIABLES
>>      auto-boot?=false
>>
>> NETWORK
>>      NAME             SERVICE                     DEVICE     MAC
>>      vnet0            primary-vsw0                network@0  00:14:4f:f9:9d:2f
>>      vnet1            primary-vsw2                network@1  00:14:4f:fb:28:b4
>>      vnet2            primary-vsw2                network@2  00:14:4f:f8:b3:ea
>>
>> DISK
>>      NAME             VOLUME                      TOUT DEVICE  SERVER
>>      vdisk            cluster2@primary-vds0            disk@0
>>
>> _______________________________________________
>> ldoms-discuss mailing list
>> ldoms-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/ldoms-discuss
> 