Re: striping, etc.

1997-06-21 Thread Hamish Moffatt
On Fri, Jun 20, 1997 at 12:12:14PM -0500, Nathan E Norman wrote:
 On that linux-raid list I told you about, someone was discussing IDE
 performance.  It seems that in their testing, which may or may not have
 been very accurate, putting IDE disks on the same or separate
 controllers made very little difference in performance.  I
 suspect this has more to do with the crappiness of IDE than anything to
 do with the md algorithms.

In fact, depending on your hardware, the performance might even be
worse. I have a Shuttle HOT-553, which is a Triton HX motherboard.
I have a Western Digital 1.6 GB on the primary, and a Quantum 3.2 GB
which was on the secondary. Linux would initially enable DMA
for the Quantum, then disable it after a timeout when it mounted
the partitions. This doesn't happen now that I've moved the Quantum
onto the primary with the WD (although it still happens to my
CD-ROM drive, which is on the secondary). I don't know if there's
a performance difference, but I couldn't be any worse off.

So in summary, DMA does not seem to work for drives on the secondary
controller, while it works fine on the primary.
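If you want to poke at this yourself, hdparm can show and toggle the DMA
flag per drive (the device names here are examples; adjust for your setup):

```shell
# Show whether DMA is currently enabled for a drive:
hdparm -d /dev/hdc
# Try enabling it:
hdparm -d1 /dev/hdc
# Fall back to PIO if the drive misbehaves:
hdparm -d0 /dev/hdc
```

Watch the kernel log while testing; timeouts like the ones described above
show up there before DMA gets turned back off.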


Hamish
-- 
Hamish Moffatt, StudIEAust  [EMAIL PROTECTED]
Student, computer science & computer systems engineering. 3rd year, RMIT.
http://hamish.home.ml.org/ (PGP key here) CPOM: [  ] 48%
The opposite of a profound truth may well be another profound truth.  --Bohr


--
TO UNSUBSCRIBE FROM THIS MAILING LIST: e-mail the word unsubscribe to
[EMAIL PROTECTED] . 
Trouble?  e-mail to [EMAIL PROTECTED] .


Re: striping, etc.

1997-06-20 Thread Brian White
 They will go on a machine with 3 200m ide drives, which will be a poor-man's
 server.  My current thinking is to mount / on the first controller, and
 use the other pair as /usr on the second interface.  /usr will be NFS
 exported.  Or would I be better off putting the two /usr drives on
 separate controllers?

I'd think it was better to mount them across separate controllers.  With
separate control and data lines, the kernel can issue two simultaneous
requests and get data from both at the same time.  My understanding with
IDE (and EIDE) is that a single controller can only access a single
drive at a time and must wait for that request to finish before issuing
another.

SCSI is more sophisticated in that it allows a request to be issued
and then the bus to idle (for more requests or other data) until the
drive finishes processing the request and can blast back the data.

This is why SCSI is much better than EIDE when dealing with more than one
drive.  (At least, this is my understanding...  Somebody please correct
me if I'm wrong.)
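This claim is easy to sanity-check: read from both drives at once and see
whether the elapsed time roughly doubles. A rough sketch with dd (device
names are examples; repeat runs can be skewed by the buffer cache, so take
the numbers with a grain of salt):

```shell
# Time a sequential read from each drive alone, then from both at once.
# If the drives share one IDE controller, the combined run should take
# roughly the sum of the individual runs; on separate controllers
# (or on SCSI) the two reads overlap.
time dd if=/dev/hda of=/dev/null bs=64k count=1024
time dd if=/dev/hdb of=/dev/null bs=64k count=1024
time sh -c 'dd if=/dev/hda of=/dev/null bs=64k count=1024 &
            dd if=/dev/hdb of=/dev/null bs=64k count=1024 &
            wait'
```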

  Brian
 ( [EMAIL PROTECTED] )

---
Debian GNU/Linux!  Search it at  http://insite.verisim.com/search/debian/simple





Re: striping, etc.

1997-06-20 Thread Rick Hawkins

  They will go on a machine with 3 200m ide drives, which will be a poor-man's
  server.  My current thinking is to mount / on the first controller, and
  use the other pair as /usr on the second interface.  /usr will be NFS
  exported.  Or would I be better off putting the two /usr drives on
  separate controllers?
 
 I'd think it was better to mount them across separate controllers.  With
 seperate control and data lines, the kernel can issue two simultaneous
 requests and get data from both at the same time.  My understanding with
 IDE (and EIDE) is that a single controller can only access a single
 drive at a time and must wait for that request to finish before issuing
 another.

yes; that's the hitch with IDE.  On the other hand, we don't have spare
SCSIs lying around :)

The reason I'm hesitating to put them on separate controllers is that /
is also on the first controller.  Everything that gets NFS exported will
come off /usr, and my concern is that massive hits to the portion that
was slaved could leave / inaccessible to the host.


 SCSI is a more sophisticated in that it allows a request to be issued
 and then the bus to idle (for more requests or other data) until the
 drive finishes processing the request and can blast back the data.

 This is why SCSI is much better than EIDE when dealing with more than one
 drive.  (At least, this is my understanding...  Somebody please correct
 me if I'm wrong.)

yes; exactly.  I just wish we had SCSIs.  Of course, if this whole thing
works, we may be able to get one . . .

rick




Re: striping, etc.

1997-06-20 Thread Nathan E Norman

On Fri, 20 Jun 1997, Rick Hawkins wrote:

:
:  They will go on a machine with 3 200m ide drives, which will be a poor-man's
:  server.  My current thinking is to mount / on the first controller, and
:  use the other pair as /usr on the second interface.  /usr will be NFS
:  exported.  Or would I be better off putting the two /usr drives on
:  separate controllers?
: 
: I'd think it was better to mount them across separate controllers.  With
: separate control and data lines, the kernel can issue two simultaneous
: requests and get data from both at the same time.  My understanding with
: IDE (and EIDE) is that a single controller can only access a single
: drive at a time and must wait for that request to finish before issuing
: another.
:
:yes; that's the hitch with ide.  On the other hand, we don't have spare
:scsis lying around :)
:
:The reason i'm hesitating to put them on separate controllers is that /
:is also on the first controller.  Everything that gets nfs exported will
:come off /usr, and my concern is that massive hits to the portion that
:was slaved could leave / inaccessible to the host.
:
:
: SCSI is more sophisticated in that it allows a request to be issued
: and then the bus to idle (for more requests or other data) until the
: drive finishes processing the request and can blast back the data.
:
: This is why SCSI is much better than EIDE when dealing with more than one
: drive.  (At least, this is my understanding...  Somebody please correct
: me if I'm wrong.)
:
:yes; exactly.  I just wish we had SCSIs.  Of course, if this whole thing
:works, we may be able to get one . . .
:

On that linux-raid list I told you about, someone was discussing IDE
performance.  It seems that in their testing, which may or may not have
been very accurate, putting IDE disks on the same or separate
controllers made very little difference in performance.  I
suspect this has more to do with the crappiness of IDE than anything to
do with the md algorithms.

Given your concerns about / being accessible, I believe the best choice
would be to put both drives on the secondary controller.  After all,
this is a proof-of-concept type install, right?  I suppose you could try
creating a linear device and a raid0 device and run some ad hoc tests to
see if there's a difference ... but I think you'll find that IDE is
holding you back, not the md stuff.

Good luck!  May the gods grant you many gigs of SCSI disk :)

--
  Nathan Norman:Hostmaster CFNI:[EMAIL PROTECTED]
finger [EMAIL PROTECTED] for PGP public key and other stuff
Key fingerprint = CE 03 10 AF 32 81 18 58  9D 32 C2 AB 93 6D C4 72
--





Re: striping, etc.

1997-06-20 Thread Debian user mail

On Fri, 20 Jun 1997, Rick Hawkins wrote:

   They will go on a machine with 3 200m ide drives, which will be a poor-man's
   server.  My current thinking is to mount / on the first controller, and
   use the other pair as /usr on the second interface.  /usr will be NFS
   exported.  Or would I be better off putting the two /usr drives on
   separate controllers?

  I'd think it was better to mount them across separate controllers.  With
  separate control and data lines, the kernel can issue two simultaneous
  requests and get data from both at the same time.  My understanding with
  IDE (and EIDE) is that a single controller can only access a single
  drive at a time and must wait for that request to finish before issuing
  another.
 
 The reason i'm hesitating to put them on separate controllers is that /
 is also on the first controller.  Everything that gets nfs exported will
 come off /usr, and my concern is that massive hits to the portion that
 was slaved could leave / inaccessible to the host.

Assuming you don't plan to add an ISA IDE/EIDE controller (I've seen them,
and I think someone is running Debian with one (giving them three or even
four IDE controllers)), I would suggest running both disks on the second
controller and using linear, although that's merely a guess, not a result
of actual testing.

I had a 2x3.1GB RAID0 md array, with both disks as slaves.  I had 2x120M
swap areas, one on each of the masters.  If mirror was running, it was
trying to access the (slave) array while swapping to the (master) swap
area(s).  Horrible performance!

Don't think about putting / on md ... without a non-md partition, you can't
read the kernel, and without the kernel you can't load the md stuff.  _DO_
compile md into the kernel; it'll be much easier to use than if it is
modularized.

One bad note to put forth: Concurrent with a local storm and power failure
(though I don't think it was related), my Linux host choked.  Upon reboot,
fsck on the md array failed (some sort of internal error).  With my
limited knowledge of filesystems, I couldn't fix it and was forced to
rebuild my data.  As a result, I chose to remove the md array and
downgrade my disk usage (I had a mirror on it, so I just had to give up
breathing room and Debian 1.1, only weeks before the release of 1.3 so I
wasn't too bummed).  In doing so, I rearranged partitions so that / and
/usr were on different controllers, swap was on the same disk as /, and
got much better performance than before.  Very well worth it, but too many
changes (and too many other space commitments) to possibly restore the md
array and see how performance was after the repartitioning.  Sorry I can't
verify that speed.

And, make sure your drives DON'T ever spin down (I used hdparm -S 0, I
think, to stop that behavior).
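A sketch of the spin-down fix (device names are examples; -S 0 disables
the drive's standby timeout):

```shell
# Disable the standby (spin-down) timeout on each md member disk so the
# array never stalls waiting for a drive to spin back up:
hdparm -S 0 /dev/hda
hdparm -S 0 /dev/hdb
# -C reports the drive's current power state (active/idle vs standby):
hdparm -C /dev/hda
```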

HTH,

Pete

--
Peter J. Templin, Jr.   Client Services Analyst
Computer & Communication Services   tel: (717) 524-1590
Bucknell University [EMAIL PROTECTED]





Re: striping, etc.

1997-06-20 Thread Dima
Rick Hawkins wrote:
 
 wow, that was fast :)
 
 I've downloaded it, and read the docs.  I compiled the kernel with
 support for these devices.  
 
 They will go on a machine with 3 200m ide drives, which will be a poor-man's
 server.  My current thinking is to mount / on the first controller, and
 use the other pair as /usr on the second interface.  /usr will be NFS
 exported.  Or would I be better off putting the two /usr drives on
 separate controllers?
 
 also, would I be better off combining all three?  and finally, will
 linear or raid0 give me better performance on ide drives (I know the
 answer is raid0 on scsi).

Put RAIDed disks on one controller and / and swap on another.  Also,
IIRC, if the disks on one controller support different modes, the
controller will use the slowest mode for both.  So, if your disks aren't
identical you might want to run a few tests on them (hdparm) first.
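For the suggested tests, hdparm's built-in timing flags are enough
(device names are examples):

```shell
# -T times cached reads (a memory/overhead baseline);
# -t times buffered sequential reads from the drive itself.
hdparm -tT /dev/hda
hdparm -tT /dev/hdb
```

If the two drives report very different read rates, that's a hint they
negotiate different modes and may drag each other down on a shared cable.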

Dimitri




Re: striping, etc.

1997-06-20 Thread Rob Browning
Dima [EMAIL PROTECTED] writes:

 Put  RAIDed disks on one controller and / and swap on another.

Well, as I understand it, if you're not using hardware raid,
specifically, if you're using IDE, then having the RAIDed disks on the
same IDE controller mostly defeats the purpose of RAID (at least RAID
for acceleration) because IDE can only issue commands to one drive on
a given controller at a time.

If you're using SCSI, then I'm not sure if it makes a lot of
difference if the RAIDed drives are on the same controller or
different ones.  Probably depends on the quality of the controllers.

This is all assuming *software* raid (i.e. md).
-- 
Rob




striping, etc.

1997-06-19 Thread Rick Hawkins

I've noticed that Linux supports volumes across physical devices.
However, I haven't figured out which command to use to set this up.  I
would like to mount a pair of hard drives on the second controller
jointly as /usr.  This volume will also be served out by NFS.  Could
someone give me a hint as to what the command I'm looking for is, and
which HOWTO or FAQ to look at?

rick




Re: striping, etc.

1997-06-19 Thread Nathan E Norman
You want mdtools (the package) and md device support in the kernel,
either compiled in or as a module.  Either works well.  I have two RAID0
partitions spanned across two 4 GB SCSI drives.  Works great.

The md commands actually have useful man pages.  If you need more info,
feel free to email.
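From memory of the old Software-RAID mini-HOWTO, the workflow looks
roughly like this; the exact syntax varied between mdtools releases, and
the device names, chunk size, and /etc/mdtab line are illustrative only,
so check the man pages for your version:

```shell
# /etc/mdtab entry describing a RAID0 (striped) device built from two
# partitions (raid0 personality, 8 KB chunk size):
#   /dev/md0  raid0,8k,0,nil  /dev/sdb1 /dev/sdc1

mdadd /dev/md0 /dev/sdb1 /dev/sdc1   # register the member partitions
mdrun -p0 /dev/md0                   # start md0 with the RAID0 personality
mke2fs /dev/md0                      # create an ext2 filesystem on the array
mount /dev/md0 /usr
```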

--
  Nathan Norman:Hostmaster CFNI:[EMAIL PROTECTED]
finger [EMAIL PROTECTED] for PGP public key and other stuff
Key fingerprint = CE 03 10 AF 32 81 18 58  9D 32 C2 AB 93 6D C4 72
--

On Thu, 19 Jun 1997, Rick Hawkins wrote:

:
:I've noticed that linux supports volumes across physical devices.
:However, I haven't figured out which command to use to set this up.  I
:would like to mount a pair of hard drives on the second controller
:jointly as /usr.  This volume will also be served out by nfs.  Could
:someone give me a hint as to what the command i'm looking for is?  and
:which how-to or faq to look at?
:
:rick
:




Re: striping, etc.

1997-06-19 Thread Rick Hawkins

wow, that was fast :)

I've downloaded it, and read the docs.  I compiled the kernel with
support for these devices.  

They will go on a machine with 3 200m ide drives, which will be a poor-man's
server.  My current thinking is to mount / on the first controller, and
use the other pair as /usr on the second interface.  /usr will be NFS
exported.  Or would I be better off putting the two /usr drives on
separate controllers?

Also, would I be better off combining all three?  And finally, will
linear or raid0 give me better performance on IDE drives (I know the
answer is raid0 on SCSI)?

rick





Re: striping, etc.

1997-06-19 Thread Nathan E Norman

On Thu, 19 Jun 1997, Rick Hawkins wrote:

:
:wow, that was fast :)
:
:I've downloaded it, and read the docs.  I compiled the kernel with
:support for these devices.  
:
:They will go on a machine with 3 200m ide drives, which will be a poor-man's
:server.  My current thinking is to mount / on the first controller, and
:use the other pair as /usr on the second interface.  /usr will be NFS
:exported.  Or would I be better off putting the two /usr drives on
:separate controllers?
:
:also, would I be better off combining all three?  and finally, will
:linear or raid0 give me better performance on ide drives (I know the
:answer is raid0 on scsi).
:
:rick
:

My feeling is that you'd do better with separate controllers - however,
I've never run the md devices on IDE (I have an aversion to IDE).  I
suspect that RAID0 would be faster in any case, since it divides access
between the drives at all times.

There is a mailing list for Linux-raid users.  Send mail to
[EMAIL PROTECTED] with a body of "subscribe linux-raid".  It's
pretty low volume, and you can find people who have done what you're
trying to do.

Sorry I can't provide more specific answers, but I avoid IDE like the
plague.  I'm in the lucky position of having lots of SCSI drives to play
with ... I know that's not the norm.

Good luck! Linux raid is cool!
--
  Nathan Norman:Hostmaster CFNI:[EMAIL PROTECTED]
finger [EMAIL PROTECTED] for PGP public key and other stuff
Key fingerprint = CE 03 10 AF 32 81 18 58  9D 32 C2 AB 93 6D C4 72
--


