- "prtconf -v" does not even show a device node for the 1st disk, all
it shows is the 2nd disk:
            disk, instance #1
                Driver properties:
                    name='ddi-no-autodetach' type=int items=1 dev=none
                        value=00000001
                    name='Nblocks' type=int64 items=1 dev=(99,15)
                        value=0000000000000000
                    name='Size' type=int64 items=1 dev=(99,15)
                        value=0000000000000000
                    name='Nblocks' type=int64 items=1 dev=(99,14)
                        value=0000000000000000
                    name='Size' type=int64 items=1 dev=(99,14)
                        value=0000000000000000
                    name='Nblocks' type=int64 items=1 dev=(99,13)
                        value=0000000000000000
                    name='Size' type=int64 items=1 dev=(99,13)
                        value=0000000000000000
                    name='Nblocks' type=int64 items=1 dev=(99,12)
                        value=0000000000000000
                    name='Size' type=int64 items=1 dev=(99,12)
                        value=0000000000000000
                    name='Nblocks' type=int64 items=1 dev=(99,11)
                        value=00000000001000e0
                    name='Size' type=int64 items=1 dev=(99,11)
                        value=000000002001c000
                    name='Nblocks' type=int64 items=1 dev=(99,10)
                        value=0000000000cffaf8
                    name='Size' type=int64 items=1 dev=(99,10)
                        value=000000019ff5f000
                    name='Nblocks' type=int64 items=1 dev=(99,9)
                        value=0000000000082f50
                    name='Size' type=int64 items=1 dev=(99,9)
                        value=00000000105ea000
                    name='Nblocks' type=int64 items=1 dev=(99,8)
                        value=0000000000b7cac8
                    name='Size' type=int64 items=1 dev=(99,8)
                        value=000000016f959000

                Device Minor Nodes:
                    dev=(99,8)
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:a
                            spectype=blk type=minor
                            dev_link=/dev/dsk/c0d1s0
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:a,raw
                            spectype=chr type=minor
                            dev_link=/dev/rdsk/c0d1s0
                    dev=(99,9)
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:b
                            spectype=blk type=minor
                            dev_link=/dev/dsk/c0d1s1
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:b,raw
                            spectype=chr type=minor
                            dev_link=/dev/rdsk/c0d1s1
                    dev=(99,10)
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:c
                            spectype=blk type=minor
                            dev_link=/dev/dsk/c0d1s2
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:c,raw
                            spectype=chr type=minor
                            dev_link=/dev/rdsk/c0d1s2
                    dev=(99,11)
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:d
                            spectype=blk type=minor
                            dev_link=/dev/dsk/c0d1s3
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:d,raw
                            spectype=chr type=minor
                            dev_link=/dev/rdsk/c0d1s3
                    dev=(99,12)
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:e
                            spectype=blk type=minor
                            dev_link=/dev/dsk/c0d1s4
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:e,raw
                            spectype=chr type=minor
                            dev_link=/dev/rdsk/c0d1s4
                    dev=(99,13)
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:f
                            spectype=blk type=minor
                            dev_link=/dev/dsk/c0d1s5
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:f,raw
                            spectype=chr type=minor
                            dev_link=/dev/rdsk/c0d1s5
                    dev=(99,14)
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:g
                            spectype=blk type=minor
                            dev_link=/dev/dsk/c0d1s6
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:g,raw
                            spectype=chr type=minor
                            dev_link=/dev/rdsk/c0d1s6
                    dev=(99,15)
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:h
                            spectype=blk type=minor
                            dev_link=/dev/dsk/c0d1s7
                        dev_path=/virtual-devices@100/channel-devices@200/disk@1:h,raw
                            spectype=chr type=minor
                            dev_link=/dev/rdsk/c0d1s7
            disk (driver not attached)

- prtconf w/o any args shows the following:
# prtconf
System Configuration:  Sun Microsystems  sun4v
Memory size: 4096 Megabytes
System Peripherals (Software Nodes):
.
.
    virtual-devices, instance #0
        ncp (driver not attached)
        console, instance #0
        channel-devices, instance #0
            network, instance #0
            network, instance #1
            disk, instance #1
            disk (driver not attached)
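
Given the "driver not attached" state above and Alex's question about log
messages, here is a rough sketch of the checks I plan to run next (this
assumes the virtual disk client driver is named vdc and the standard log
locations; adjust as needed):

# modinfo | grep vdc            (is the vdc driver loaded in the guest?)
# devfsadm -i vdc               (ask devfsadm to attach vdc and build its nodes)
# grep vdc /var/adm/messages    (vdc messages on the guest domain)
# grep vds /var/adm/messages    (vds messages on primary and on alternate)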

Regards,
Misha.

On Jan 12, 2008 11:54 AM, Alexandre Chartre <Alexandre.Chartre at sun.com> 
wrote:
>
>   So it sounds like the vdc driver hasn't been attached for the
> first disk. Can you check the output of "prtconf"? It should
> show two disks but only one with a driver attached.
>
>   Are there any vdc messages in /var/adm/messages on the guest
> domain or any vds messages in /var/adm/messages on the service
> domains?
>
> alex.
>
>
> Misha Chawla Shanker wrote:
> > Hi Alex,
> >
> > The OS in the guest was installed on "c0d1" as even at install time,
> > that was the only disk visible to the installer.
> >
> > The outputs that you requested:
> >>   # devfsadm -Cv - no output; it just runs and succeeds.
> > # devfsadm -C -v
> > # echo $?
> > 0
> >
> >>   # ls -l /dev/dsk
> > # ls -l /dev/dsk/
> > total 16
> > lrwxrwxrwx   1 root     root          62 Jan 11 15:12 c0d1s0 -> ../../devices/virtual-devices@100/channel-devices@200/disk@1:a
> > lrwxrwxrwx   1 root     root          62 Jan 11 15:12 c0d1s1 -> ../../devices/virtual-devices@100/channel-devices@200/disk@1:b
> > lrwxrwxrwx   1 root     root          62 Jan 11 15:12 c0d1s2 -> ../../devices/virtual-devices@100/channel-devices@200/disk@1:c
> > lrwxrwxrwx   1 root     root          62 Jan 11 15:12 c0d1s3 -> ../../devices/virtual-devices@100/channel-devices@200/disk@1:d
> > lrwxrwxrwx   1 root     root          62 Jan 11 15:12 c0d1s4 -> ../../devices/virtual-devices@100/channel-devices@200/disk@1:e
> > lrwxrwxrwx   1 root     root          62 Jan 11 15:12 c0d1s5 -> ../../devices/virtual-devices@100/channel-devices@200/disk@1:f
> > lrwxrwxrwx   1 root     root          62 Jan 11 15:12 c0d1s6 -> ../../devices/virtual-devices@100/channel-devices@200/disk@1:g
> > lrwxrwxrwx   1 root     root          62 Jan 11 15:12 c0d1s7 -> ../../devices/virtual-devices@100/channel-devices@200/disk@1:h
> >
> >>   # prtvtoc /dev/rdsk/c0d0s0 - not possible, as no such device node
> >> exists in the system.
> >
> >>   # prtvtoc /dev/rdsk/c0d1s0
> > # prtvtoc /dev/rdsk/c0d1s0
> > * /dev/rdsk/c0d1s0 partition map
> > *
> > * Dimensions:
> > *     512 bytes/sector
> > *     600 sectors/track
> > *       1 tracks/cylinder
> > *     600 sectors/cylinder
> > *   22719 cylinders
> > *   22717 accessible cylinders
> > *
> > * Flags:
> > *   1: unmountable
> > *  10: read-only
> > *
> > *                          First     Sector    Last
> > * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
> >        0      2    00    1048800  12045000  13093799   /
> >        1      7    00   13093800    536400  13630199   /var
> >        2      5    00          0  13630200  13630199
> >        3      3    01          0   1048800   1048799
> >
> >
> > Let me know if you need anything else from the guest or i/o domains.
> >
> > Regards,
> > Misha.
> >
> > On Jan 12, 2008 1:06 AM, Alexandre Chartre <Alexandre.Chartre at sun.com> 
> > wrote:
> >>   Misha did have 2 I/O domains and the way she exported one file (with
> >> the same name) from each I/O domain is correct. An I/O domain is a
> >> domain which has direct access to the hardware and that's the case here
> >> for primary and alternate. Then each I/O domain is used to export one
> >> file to the same guest domain. The guest domain will eventually use
> >> these two virtual disks to create a mirror with each side of the mirror
> >> managed by a different I/O domain.
> >>
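For what it's worth, once both virtual disks do show up, the mirroring step
described above would presumably be the usual SVM root-mirror procedure,
roughly as follows (a sketch only: the metadevice names d10/d11/d12 are made
up, and it assumes a small slice such as s7 is set aside on each disk for the
state database replicas):

# metadb -a -f -c 3 c0d0s7 c0d1s7   (state database replicas on both disks)
# metainit -f d11 1 1 c0d1s0        (submirror on the existing root slice)
# metainit d12 1 1 c0d0s0           (submirror on the other disk)
# metainit d10 -m d11               (one-way mirror for /)
# metaroot d10                      (update vfstab and /etc/system, then reboot)
# metattach d10 d12                 (attach the second side after the reboot)
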
> >>   Misha, what is strange to me is that the OBP seems to find the two
> >> disks but not Solaris. Can you provide the output of the following
> >> commands from the guest domain:
> >>
> >>   # devfsadm -Cv
> >>   # ls -l /dev/dsk
> >>   # prtvtoc /dev/rdsk/c0d0s0
> >>   # prtvtoc /dev/rdsk/c0d1s0
> >>
> >>   How was the system installed? On which disk?
> >>
> >>   Rgds,
> >>
> >> alex.
> >>
> >>
> >>
> >> Pallab Bhattacharya wrote:
> >>> Misha Chawla Shanker wrote:
> >>>> If both the files (being used as boot devices) reside on storage
> >>>> visible from the primary domain, how will mirroring the boot
> >>>>
> >>> The word "visible" is critical here - with respect to the ldm command,
> >>> you may have encountered it already when you assign some non-existent
> >>> device to a service - the bind will fail .
> >>>> disk in the guest help in case of a primary domain crash?
> >>>>
> >>> Sorry, IMHO, this is not the way to solve the mirroring issue.
> >>>> Also, your earlier comment about the domain "alternate" not being an
> >>>> i/o domain is confusing. The domain "alternate" has a PCI bus leaf
> >>>> assigned to it and has direct connectivity to physical disk devices
> >>>>
> >>> If you indeed have disks and a pcie-dev (which was not visible from the
> >>> output below), then please use the io-dev, not the file.
> >>> -regards
> >>> -pallab
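
(For reference, exporting a physical device instead of a file would follow
the same pattern as the file export further below; a sketch only, where both
the disk path c1t1d0s2 and the volume name are made-up placeholders:

# ldm add-vdsdev /dev/dsk/c1t1d0s2 volname@alternate-vds0
)
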
> >>>> via the HBA attached to that PCI bus. The file is carved on top of
> >>>> these physical disks with volumes as an abstraction in between.
> >>>>
> >>>> regards,
> >>>> Misha.
> >>>>
> >>>> On Jan 11, 2008 5:38 PM, Pallab Bhattacharya
> >>>> <Pallab.Bhattacharya at sun.com> wrote:
> >>>>
> >>>>> Misha Chawla Shanker wrote:
> >>>>>
> >>>>>
> >>>>>> Yes it is a single file.
> >>>>>> And I believe you meant "assign that to the guest" instead of "assign
> >>>>>> that to the alternate" below?
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>> No, here is what I would do from the primary:
> >>>>>
> >>>>> # cp /fsmnt1/ldom1_boot.img /fsmnt1/ldom2_boot.img
> >>>>>
> >>>>> Then add the device to alternate such that the listing below now
> >>>>> shows (please see the DEVICE column):
> >>>>>
> >>>>> # /opt/SUNWldm/bin/ldm list-services alternate
> >>>>> VDS
> >>>>>    NAME             VOLUME         OPTIONS          DEVICE
> >>>>>    alternate-vds0   vdisk1                          /fsmnt1/ldom2_boot.img
> >>>>>
> >>>>>
> >>>>> The file /fsmnt1/ldom1_boot.img physically present on the domain named
> >>>>> "alternate" is not used at all.
> >>>>>
> >>>>> -regards
> >>>>> -pallab
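
(To make sure I follow: the concrete steps to get the DEVICE column above to
show the copy would presumably be something like the following, run from the
control domain. This is a rough sketch; it assumes the copy ends up visible at
the same path from the alternate domain, and ldm may also insist that ldom1 be
stopped and unbound, and the vdisk2 assignment dropped and re-added, before
the backend of alternate-vds0 can be swapped:

primary # cp /fsmnt1/ldom1_boot.img /fsmnt1/ldom2_boot.img
primary # ldm stop ldom1
primary # ldm unbind ldom1
primary # ldm rm-vdsdev vdisk1@alternate-vds0
primary # ldm add-vdsdev /fsmnt1/ldom2_boot.img vdisk1@alternate-vds0
primary # ldm bind ldom1
primary # ldm start ldom1
)
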
> >>>>>
> >>>>>
> >>>>>
> >>>>>> Regards,
> >>>>>> Misha.
> >>>>>>
> >>>>>> On Jan 11, 2008 5:31 PM, Pallab Bhattacharya
> >>>>>> <Pallab.Bhattacharya at sun.com> wrote:
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>> So
> >>>>>>>
> >>>>>>> /fsmnt1/ldom1_boot.img
> >>>>>>>
> >>>>>>> is a single file as seen from the primary?
> >>>>>>>
> >>>>>>> Can you please copy the file to a different name and assign
> >>>>>>> that to the alternate?
> >>>>>>>
> >>>>>>> -regards
> >>>>>>> -pallab
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> Misha Chawla Shanker wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>> To all Alternate I/O domain Gurus,
> >>>>>>>>
> >>>>>>>> I have an alternate i/o domain setup on one of my T2000 boxes.
> >>>>>>>> I am stuck at the point where only one boot disk of the guest
> >>>>>>>> shows up inside the guest and not the other.
> >>>>>>>>
> >>>>>>>> From the control domain:
> >>>>>>>> # ldm list
> >>>>>>>> NAME             STATE    FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
> >>>>>>>> primary          active   -n-cv   SP      4     4G       0.8%  1d 3h 24m
> >>>>>>>> alternate        active   -n--v   5000    4     4G       0.6%  22h 31m
> >>>>>>>> ldom1            active   -n---   5001    4     4G       0.5%  23m
> >>>>>>>>
> >>>>>>>> guest domain bindings:
> >>>>>>>> # ldm list-bindings ldom1 | grep vds0
> >>>>>>>>   vdisk1           vdisk1@primary-vds0              disk@0     primary
> >>>>>>>>   vdisk2           vdisk1@alternate-vds0            disk@1     alternate
> >>>>>>>>
> >>>>>>>> # /opt/SUNWldm/bin/ldm list-services primary
> >>>>>>>> VDS
> >>>>>>>> NAME             VOLUME         OPTIONS          DEVICE
> >>>>>>>>   primary-vds0     vdisk1                          /fsmnt1/ldom1_boot.img
> >>>>>>>>
> >>>>>>>> # /opt/SUNWldm/bin/ldm list-services alternate
> >>>>>>>> VDS
> >>>>>>>>   NAME             VOLUME         OPTIONS          DEVICE
> >>>>>>>>   alternate-vds0   vdisk1                          /fsmnt1/ldom1_boot.img
> >>>>>>>>
> >>>>>>>> Inside the guest I see only one disk, c0d1:
> >>>>>>>> # format
> >>>>>>>> Searching for disks...done
> >>>>>>>> AVAILABLE DISK SELECTIONS:
> >>>>>>>>      0. c0d1 <SUNVDSK cyl 22717 alt 2 hd 1 sec 600>
> >>>>>>>>         /virtual-devices@100/channel-devices@200/disk@1
> >>>>>>>> Specify disk (enter its number)
> >>>>>>>>
> >>>>>>>> Both the files are accessible from both the i/o domains:
> >>>>>>>> primary # ls -l /fsmnt1/ldom1_boot.img
> >>>>>>>> -rw------T   1 root     root     6979321856 Jan 10 17:06 /fsmnt1/ldom1_boot.img
> >>>>>>>>
> >>>>>>>> alternate # ls -l /fsmnt1/ldom1_boot.img
> >>>>>>>> -rw------T   1 root     root     6979321856 Jan 10 17:11 /fsmnt1/ldom1_boot.img
> >>>>>>>>
> >>>>>>>> Also, from the ok prompt of the guest, I can see 2 disks attached to it:
> >>>>>>>> a) /virtual-devices@100/channel-devices@200/disk@1 <-- this one is
> >>>>>>>> visible inside the guest as c0d1
> >>>>>>>> b) /virtual-devices@100/channel-devices@200/disk@0 <-- this one is not
> >>>>>>>> visible; it should have shown up as "c0d0"
> >>>>>>>>
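(For reference, the check from the ok prompt was presumably something like:

ok show-disks

which lists the disk device nodes the guest's OBP can see.)
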
> >>>>>>>> "c0d1" visible inside the guest is the backed by the file exported
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>> >from the alternate i/o domain.
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>> but the other disk backed by the file exported from the primary i/o
> >>>>>>>> domain is not visible.
> >>>>>>>>
> >>>>>>>> Any idea what may be going wrong here?
> >>>>>>>>
> >>>>>>>> Thanks,
> >>>>>>>> Misha.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>> --
> >>>>>>> Pallab Bhattacharya
> >>>>>>> Performance & Architecture Engineering
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>> --
> >>>>>
> >>>>> Pallab Bhattacharya
> >>>>> Performance & Architecture Engineering
> >>>>>
> >>>>>
> >>>>>
> >>>>
> >>>
> >>> --
> >>> Pallab Bhattacharya
> >>> Performance & Architecture Engineering
> >>>
> >>>
>
