I have a T5120 machine with 4 internal disks. I want to run 4 Guest Domains plus 
the Control Domain. My sticking point is the disks. 

I wanted to install the OS in a ZFS pool consisting of two striped mirrors and 
then create zvols from that pool to be used as boot devices by my ldoms. 

I noticed that this is not possible; the OS complains as follows:

# zpool status
  NAME          STATE     READ WRITE CKSUM
  rpool         ONLINE       0     0     0
    mirror      ONLINE       0     0     0
      c1t0d0s0  ONLINE       0     0     0
      c1t1d0s0  ONLINE       0     0     0

# zpool add rpool mirror c1t2d0 c1t3d0
cannot add to 'rpool': root pool can not have multiple vdevs or separate logs
# 

Anyway, I then created a new pool called ldomspool and put those two disks in 
that pool. 
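
Roughly, that was something like the following (I'm assuming a mirror here, 
since that is the layout I tried to add to rpool):

# zpool create ldomspool mirror c1t2d0 c1t3d0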

At this point I will create two zvols in each pool, so that in the end I have 
2 guest domains booting from rpool and 2 from ldomspool. 
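
Per guest domain, the sketch I have in mind is roughly this (the size, the 
volume/domain names like vol3 and ldom3, and the primary-vds0 service name are 
just placeholders for illustration):

# zfs create -V 20g ldomspool/ldom3
# ldm add-vdsdev /dev/zvol/dsk/ldomspool/ldom3 vol3@primary-vds0
# ldm add-vdisk vdisk3 vol3@primary-vds0 ldom3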

Question #1: Is this how ldoms are commonly deployed? I earlier tried 
file-based images for booting, and the control domain ran out of memory and 
started paging. I also tried SVM-based boot devices, but the performance was 
terrible and installing the OS in those guest domains took forever. I just want 
to know whether I'm following a weird approach or whether I'm fine. 
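
For reference, the file-based attempt looked roughly like this (the path and 
size are just examples):

# mkfile 20g /ldoms/ldom1.img
# ldm add-vdsdev /ldoms/ldom1.img vol1@primary-vds0
# ldm add-vdisk vdisk1 vol1@primary-vds0 ldom1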


Question #2: When one uses ZFS, one needs more memory in the machine than 
usual, as ZFS loves memory. In my control domain I'm assigning 5GB of RAM just 
to keep ZFS happy. Now, if I install the OS in the guest domains with a ZFS 
root, I imagine I will again need extra memory there. What are you doing in 
this case? Are you doing ZFS-based boot in the guest domains, or do you go for 
a regular UFS-based boot and save the extra memory? The problem is that if you 
go with UFS you lose all the cool ZFS features. 
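
One thing I'm considering for the control domain, purely as a sketch, is 
capping the ZFS ARC via /etc/system so it doesn't grab memory I'd rather hand 
to the guests (the 1GB value below is only an example, not something I've 
tuned for my load):

* /etc/system: limit the ZFS ARC to 1GB (0x40000000 bytes)
set zfs:zfs_arc_max = 0x40000000

A reboot is needed for this to take effect.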

Thanks and Regards
Luis
-- 
This message posted from opensolaris.org