Among all the other thoughts that were brought up... it depends on what
performance you want.

For example, I have some test Linux systems that each live on a single
3390-9.  All the software and data are on that one volume.  Just
something easy to play with.  However, it is limited: the volume can
only be on a single RAID array, and any RAID array can end up
bottlenecking on I/O if enough I/O is driven at it, while your other
RAID arrays (I have 8 arrays) may sit idle.

From a performance side, Oracle seems to like to manage its own disks,
with each disk on a separate RAID array.  From a management side,
however, putting the disks in an LVM volume group and giving that to
Oracle saves on people costs.  So, how busy is the Oracle instance?  If
you are not pounding it, then use LVM and save the people costs.
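As a rough sketch of that LVM approach: assuming three DASD volumes,
ideally on different RAID arrays, already formatted and partitioned
(the device names /dev/dasdb1 etc. and the volume group name "oravg"
are hypothetical), it could look something like:

```shell
# Pool three DASD volumes into one volume group for Oracle.
pvcreate /dev/dasdb1 /dev/dasdc1 /dev/dasdd1
vgcreate oravg /dev/dasdb1 /dev/dasdc1 /dev/dasdd1

# Stripe the logical volume across all three physical volumes
# so I/O is spread over the RAID arrays, then make a filesystem.
lvcreate -n oradata -i 3 -I 64 -l 100%FREE oravg
mkfs.ext3 /dev/oravg/oradata
```

Striping (-i 3) is optional; a plain linear volume is simpler to grow
later, but striping spreads the load across the arrays.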

That type of consideration also applies to any application that has
large pools of disk space: Oracle, DB2, Samba, NFS, an FTP server, etc.

RAID arrays really eliminated most of the DASD performance tuning we
used to do at the volume level.  But you may still need to do it at the
RAID array level.

You may want to try this experiment:

Have Linux format three volumes: two volumes on the same RAID array,
and the third on a different RAID array.  When Linux formats all three
volumes at the same time, each shows a "completion graph".  The single
volume is formatted much faster than the other two.  I know, twice as
many I/Os should take longer, plus you are doing head seeks to boot.
But it shows there is a real difference in performance between an array
that is heavily used and one that is lightly used.
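The experiment can be run along these lines (device names are
hypothetical; here dasdb and dasdc would sit on the same RAID array and
dasdd on a different one):

```shell
# Format three volumes in parallel; dasdfmt shows a progress bar
# for each device, so the relative speeds are easy to compare.
# dasdb and dasdc share a RAID array; dasdd is on another array.
dasdfmt -b 4096 -y -f /dev/dasdb &
dasdfmt -b 4096 -y -f /dev/dasdc &
dasdfmt -b 4096 -y -f /dev/dasdd &
wait
```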

Tom Duerbusch
THD Consulting

>>> Offer Baruch <offerbar...@gmail.com> 1/29/2009 6:18 AM >>>
Hi all,

 

We just bought some DASD storage specifically for z/VM and for the
z/Linux underneath.

This is the time to decide which model to use (3/9/27…).  Originally I
thought "the bigger the better".

We are using DIRMAINT to manage DASD space, and now I am not sure what
the best approach is.

Here are my thoughts about large models:

1.       I am concerned about fragmentation. Using model-3, I could
just play around with full disks without worrying about fragmentation
(meaning: adding storage to a Linux guest adds a full model-3).

2.       I am concerned about IOSQ time. I know z/VM supports static
PAV, but that is just not comfortable… We don't have HyperPAV yet…

3.       Using big minidisks will make cloning difficult (I must have
the same big gaps available for cloning).

Here are my thoughts about model-3:

1.       Many, many device addresses…

Am I missing something? The more I think about it, the more I believe
that model-3 is the correct answer…

 

Can you please help me out here?

 

Thanks,

Offer Baruch.


----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
