> Hi Leo,
> 
> Leo Liu wrote, On 16/06/2009 09:39:
> > ENV:
> > SPARC-Enterprise-T5220 boxes
> > sunt5220b is the source node with the source domain ldom2; ldom2 uses a flat file as its vdisk (exported as a full disk), which is located on a shared disk.
> > sunt5220a is the target node; the target domain also uses a flat file as its vdisk, which is likewise located on a shared disk.
> > 
>

Hi Liam,

Thanks for your quick reply. I am not sure I fully understood your point: do you mean I 
should use /ldom2/bootdisk from /dev/vx/dsk/ldomdg2/ldomvol2 on both the source 
and target domains? That is, not only the same name but also the same storage (here, 
the bootdisk file)? In that case, after the migration I would need to bring the 
bootdisk file online on the target node before I can access the target ldom?
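
In other words, would the target end up doing something like the following after the 
migration? This is just a sketch of how I read your suggestion, assuming the shared 
disk group really is ldomdg2 and that ldom2_boot on sunt5220a's primary-vds0 already 
points at /ldom2/bootdisk:

  # on sunt5220a, once ldomdg2 is no longer imported on sunt5220b
  vxdg import ldomdg2
  mount -F vxfs /dev/vx/dsk/ldomdg2/ldomvol2 /ldom2
  # confirm the virtual disk service now resolves to the shared boot image
  ldm list-bindings primary

Since an ordinary VxFS file system can only be mounted on one host at a time, I assume 
the disk group has to be deported from sunt5220b (or managed by CVM) before it can be 
imported on sunt5220a?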

Thanks and Regards,
-Leo


 
> Are both domains backed by the same file though? It doesn't look
> like it below. The same disk backend needs to be added as a vdsdev
> on both the source & target systems - it's not sufficient to have
> just the same name on the target for a different backend.
> The disk contents are not migrated across.
> 
> - Liam
> 
> 
> > 1.  Create ldom2 on sunt5220b, using a flat file /ldom2/bootdisk as its vdisk, which is located on a vxfs file-system mounted from /dev/vx/dsk/ldom2/ldomvol2;
> > 
> > 2.  On target node sunt5220a, mount a vxfs file-system on /ldom2 and create a flat file on it;
> > [root@sunt5220a config]#>mount -F vxfs /dev/vx/dsk/ldomdg3/ldomvol3 /ldom2/
> > [root@sunt5220a config]#>mkfile 17G /ldom2/bootdisk
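
Looking at this again, I suspect this is exactly what Liam means: the mkfile above 
creates a brand-new, all-zero 17G file on a different volume (ldomdg3), so even though 
the path matches, it is not the same backend that ldom2 was installed on. One rough way 
to check whether the two hosts really share one image (my own sketch, not something 
from the docs):

  # compare the first 1 MB of the backend file on each host
  [root@sunt5220b /]#>dd if=/ldom2/bootdisk bs=1024k count=1 2>/dev/null | digest -a md5
  [root@sunt5220a /]#>dd if=/ldom2/bootdisk bs=1024k count=1 2>/dev/null | digest -a md5

If sunt5220a reports the checksum of a zero-filled block while sunt5220b does not, the 
two files are clearly different disks.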
> > 
> > 3.  Add /ldom2/bootdisk to the VDS of the primary domain, with the same configuration as on the source node sunt5220b;
> > [root@sunt5220b config]#>ldm list-bindings
> > VDS
> >     NAME             VOLUME         OPTIONS          MPGROUP        DEVICE
> >     primary-vds0     ldom2_boot                                     /ldom2/bootdisk
> > 
> > [root@sunt5220a config]#>ldm list-bindings
> > VDS
> >     NAME             VOLUME         OPTIONS          MPGROUP        DEVICE
> >     primary-vds0     ldom2_boot                                     /ldom2/bootdisk
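
For reference, I believe the entries above come from something like the following on 
each host (the exact commands are my assumption):

  ldm add-vdsdev /ldom2/bootdisk ldom2_boot@primary-vds0
  ldm add-vdisk bootdisk ldom2_boot@primary-vds0 ldom2

The catch, as Liam says, seems to be that the DEVICE column only records the path 
string, so the two outputs look identical even though /ldom2 is mounted from a 
different VxVM volume on each node.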
> > 7.  Migrate ldom2 from source node sunt5220b to target node sunt5220a; the migration was successful without any error.
> > [root@sunt5220b config]#>ldm migrate-domain ldom2 sunt5220a
> > Target Password:
> > 
> > 8.  On the target node, we can see that ldom2 is there with the 'n' (normal) flag
> > [root@sunt5220a /]#>ldm list
> > NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
> > primary          active     -n-cv-  SP      48    14G      0.1%  3h 49m
> > ldom2            active     -n----  5000    8     1G       0.1%  21m
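
It might also be worth checking which backend the migrated guest's vdisk is actually 
bound to on this host, e.g.:

  [root@sunt5220a /]#>ldm list-bindings ldom2
  [root@sunt5220a /]#>ldm list-bindings primary

The DISK section of the first command shows the volume the vdisk is bound to 
(ldom2_boot@primary-vds0), and the VDS section of the second shows the device path 
that volume maps to on sunt5220a.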
> > Everything looks fine here EXCEPT:
> > 1.  We failed to log in to ldom2; the login hangs here
> > [root@sunt5220a /]#>telnet localhost 5000
> > Trying 127.0.0.1...
> > Connected to localhost.
> > Escape character is '^]'.
> > 
> > Connecting to console "ldom2" in group "ldom2" ....
> > Press ~? for control options ..
> > 
> > 2.  We stopped the ldom on the target node and then restarted it; the ldom failed to boot
> > [root@sunt5220a /]#>telnet localhost 5000
> > Trying 127.0.0.1...
> > Connected to localhost.
> > Escape character is '^]'.
> > 
> > Connecting to console "ldom2" in group "ldom2" ....
> > Press ~? for control options ..
> > 
> > {0} ok boot
> > Bad magic number in disk label
> > ERROR: /virtual-devices@100/channel-devices@200/disk@0: Can't open disk label package
> > 
> > ERROR: boot-read fail
> > Boot device: ldom2_boot  File and args:
> > Bad magic number in disk label
> > ERROR: /virtual-devices@100/channel-devices@200/disk@0: Can't open disk label package
> > 
> > ERROR: boot-read fail
> > 
> > Can't open boot device
> > 
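As far as I understand, "Bad magic number in disk label" is what OBP prints when the 
first sector of the boot disk contains no valid label at all, which would fit an 
untouched mkfile image. A quick sanity check on the target (again just a sketch):

  [root@sunt5220a /]#>dd if=/ldom2/bootdisk bs=512 count=1 2>/dev/null | od -c | head

If that first block is all zeros, the guest is booting from the empty 17G file created 
in step 2 rather than from the installed image on sunt5220b.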
> > 
> > Could anybody please comment on this? Thanks so much.
> 
> 
> _______________________________________________
> ldoms-discuss mailing list
> ldoms-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/ldoms-discuss
