A fix(?) is found...

First, sorry for the horrendous English in the first post. Never try to write a technical email right before a meeting; you may become a poster child for the deterioration of our schooling system (a quote from the day I sent this: "Jeez, and they actually graduated you...").

Now to the meat: I found out how to add the necessary boot parameters from ide.txt and kernel-parameters.txt in the 2.3.40 documentation. This, along with a truly excellent web page on lilo, http://www.lasg.ac.cn/tutorials/linux/Linux_Sys_Adm/lsg04.htm#E68E20, helped me formulate the lilo.conf entry below:

# End LILO global Section #
image = /boot/bzImage
      root = /dev/hda3
      append = "hdc=4962,255,63 hde=4962,255,63 hdg=4962,255,63 hdi=4962,255,63 hdk=4962,255,63 hdm=4962,255,63 hdo=4962,255,63"
      label = l

On reboot, though, I found a limitation in lilo I hadn't known about: only the first three options were taken, with the fourth being clobbered. A switch from the "append" lilo option to the "literal" version got the fourth entry to work, but no more. My best guess is that the string was too long to fit in the space lilo had reserved for such things. Strike two!

Detected 451029178 Hz processor.
ide_setup: hdc=4962,255,63
ide_setup: hde=4962,255,63
ide_setup: hdg=4962,255,63
ide_setup: hdi=4 -- BAD OPTION
Console: colour VGA+ 80x25
Calibrating delay loop... 448.92 BogoMIPS
.
.
.
hda: Maxtor 92720U8, 25965MB w/2048kB Cache, CHS=3310/255/63, UDMA(33)
hdc: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=4962/255/63, UDMA(33)
hde: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=4962/255/63, UDMA(33)
hdg: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=4962/255/63, UDMA(33)
hdi: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
hdk: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
hdm: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
hdo: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)

I finally got to poking around in the kernel code and devised a fix for this. I added an option to the IDE section's parameter parser that allows a single parameter to apply to multiple drives. It takes the form below in my new lilo.conf:

# End LILO global Section #
image = /boot/bzImage
      root = /dev/hda3
      append = "hdcegikmo=4962,255,63"
      label = l
#

While the best solution is admittedly an update to the code that resolves the geometries, this hack does fix (a relative term) the problem, and it allows for additional "niche" boot parameter settings in the future should you use a bundle of identical drives.

A diff of the changes I made to 2.3.40's drivers/block/ide.c is listed below. Some of the code is obviously poor: note the arbitrary 200-byte buffers for the string variables; I couldn't get malloc to work that early in the boot process. I welcome feedback and am willing to clean this up for formal distribution. At any rate, I would sure like to see something similar (or something that provides similar functionality) in future versions of the kernel.
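Before the diff, here is the same expansion logic as a self-contained userspace sketch, in case the patch itself is hard to follow (note that I diffed the modified file against the original, so my additions show up as "-" lines below). The function and variable names here are made up for illustration only; the real change lives inside ide_setup() in drivers/block/ide.c:

/*
 * Standalone illustration of the multi-drive expansion hack.
 * Userspace scaffolding for demonstration only; the printing stub
 * below stands in for the kernel's real parameter parser.
 */
#include <stdio.h>
#include <string.h>

static void ide_setup(const char *s)
{
        printf("ide_setup: %s\n", s);       /* stub for the real parser */
}

/* Expand "hdceg=4962,255,63" into one ide_setup("hdX=...") per letter. */
static void expand_multi_drive(const char *s)
{
        char split_option[200];              /* fixed-size buffer, as in the patch */
        const char *eq = strchr(s, '=');
        size_t i, ndrives;

        if (eq == NULL || eq - s <= 3)       /* plain "hdc=..." - nothing to split */
                return;
        if (strncmp(s + 3, "lun", 3) == 0)   /* "hdxlun=" is a real single-drive option */
                return;
        ndrives = (size_t)(eq - s) - 2;      /* drive letters between "hd" and '=' */
        for (i = 0; i < ndrives; i++) {
                snprintf(split_option, sizeof(split_option),
                         "hd%c=%s", s[2 + i], eq + 1);
                ide_setup(split_option);
        }
}

int main(void)
{
        expand_multi_drive("hdcegikmo=4962,255,63");
        return 0;
}

Compiled and run, it prints the seven "ide_setup: hdX=4962,255,63" lines that would otherwise each need their own hdX= entry on the kernel command line.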
-Z

--- ide.c       Wed Jan 26 09:40:47 2000
+++ ide.c.orig  Wed Jan 26 06:29:47 2000
@@ -2744,10 +2744,8 @@
        ide_hwif_t *hwif;
        ide_drive_t *drive;
        unsigned int hw, unit;
-       size_t s_length, dl_length;
        const char max_drive = 'a' + ((MAX_HWIFS * MAX_DRIVES) - 1);
        const char max_hwif = '0' + (MAX_HWIFS - 1);
-       char drive_list[200], split_option[200];

        printk("ide_setup: %s", s);

@@ -2779,26 +2777,6 @@
                "serialize", "autotune", "noautotune", "slow", "swapdata", "bswap",
                "flash", "remap", "noremap", "scsi", NULL};

-       /*
-        * Check for options that apply to multiple drives: "hdabc...xyz="
-        */
-       if ( s[3] != '=' && ! (s[3] == 'l' && s[4] == 'u' && s[5] == 'n') ) {
-               s_length = strlen(s);
-               for (i=2; ( s[i] != '=' && i < s_length - 1 ); i++ ) {
-                       drive_list[i-2] = s[i];
-               }
-               if ( i != s_length - 1 ) {
-                       dl_length = i - 2;
-                       strcpy(split_option, "hd");
-                       printk("\n");
-                       for (i=0; i < dl_length; i++ ) {
-                               split_option[2] = drive_list[i];
-                               strcpy(split_option + 3, s + dl_length + 2);
-                               ide_setup(split_option);
-                       }
-               }
-               goto done;
-       }
        unit = s[2] - 'a';
        hw = unit / MAX_DRIVES;
        unit = unit % MAX_DRIVES;

At 2:02 PM -0600 1/25/0, Zach Coombes, AMD, Austin, TX wrote:
>Some threads never die... I'm continuing one here from October...
>
>Thanx for the great summary of how to get a system up and running, but I
>have a question about the setup below.
>
>I'm trying to get some 40GB Maxtors up using either Promise Ultra33 or
>Ultra66 boards (33's in the logs below). With a test compile of the new
>2.3.40 beta kernel, the drives are finally recognized as 40GB (mondo thanx
>to Andries Brouwer, Andre Hedrick, et al. ... Mama always said I was a
>suck-up), but the way the kernel does it is causing fdisk to fail. Fdisk
>can only manage disks with 16-bit cylinder numbers. I tried the new
>version of the GNU tool "parted" with similar results.
>
>What was the boot parameter you used to get the LBA addressing to use 255
>heads in the drive geometry on the 32GB drives?
>
>>from dmesg
>
>hda: Maxtor 92720U8, 25965MB w/2048kB Cache, CHS=3310/255/63, UDMA(33)
>hdc: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
>hde: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
>hdg: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
>hdi: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
>hdk: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
>hdm: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
>hdo: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
>
>/root/parted-1.0.7# fdisk /dev/hde
>
>The number of cylinders for this disk is set to 13870.
>There is nothing wrong with that, but this is larger than 1024,
>and could in certain setups cause problems with:
>1) software that runs at boot time (e.g., LILO)
>2) booting and partitioning software from other OSs
>   (e.g., DOS FDISK, OS/2 FDISK)
>
>Command (m for help):
>
>Thanx,
>
>-Z
>
>-------------------------------------------------------------------------------
>                               _______
>Zach Coombes                   \____ |     "Computers are useless.
> AMD Senior Hick Engineer        _ | |      They can only give you answers."
> email: [EMAIL PROTECTED]      / |_ | |                   -Pablo Picasso
>                               |__/  \|

>>In case someone else here wants to build a larger IDE software raid5 in
>>the near future, here is what works very well for me right now:
>>
>>- single processor P3
>>- 4 or more IBM IDE drives (I use 4)
>>- linux 2.2.13pre15 (but probably better: 2.2.13final)
>>- the raid 2.2.11 patch (just press enter a few times...)
>>- if you use >32GB drives, a patch for that or the UnifiedIDE patch
>>- if you use >32GB drives, the hd[e,g,i,k etc.]=4560,255,63 boot parameter
>>  (or a future UnifiedIDE)
>>- another small patch to get the promise66 going, or the UnifiedIDE patch
>>- "hdparm -d1 -X66 <dev>" in some startup script is needed without the
>>  UnifiedIDE patch. Later, "hdparm -d1 -X66 -k1 -K1 -W1 <dev>" can be used.
>>
>>- use UDMA66 cables (80 wires); expensive, but they should improve signal
>>  quality
>>- but it is probably better to operate the array in UDMA33 mode
>>- you have fewer physical problems with cable length if you use one drive
>>  per controller (masters only, no slaves). This may also have advantages
>>  if a drive fails. In my experience, however, reliability & data
>>  integrity don't suffer if you use master+slave (as long as no drive
>>  fails).
>>
>>- you probably need to use mknod to create /dev/hdi etc.
>>
>>- stress-test the machine for >=24h (this is also a very good idea for
>>  SCSI arrays)
>>
>>For me, this configuration (currently without UnifiedIDE: 4 IBM 37GB
>>drives on 2 Promise controllers, one non-raid drive on the onboard
>>controller) survived an intensive 40h stress test without problems (high
>>bandwidth random data with read-back validation), plus 30h of normal
>>operation so far. I don't expect it to give me future problems.
>>
>>You don't want to mix IDE+SMP currently, since kernels up to 2.2.13pre14
>>have an SMP issue in the IDE driver, and pre15 is unproven right now.
>>
>>You need very solid hardware. I had to replace the mainboard+cpu in a
>>different raid server because the old ones couldn't cope with the stress
>>(they preferred to deliver bit errors).
>>
>>You usually cannot use all drive bays; you need enough space between the
>>drives (or they will run very, very hot...).
>>
>>Performance:
>>
>>- overkill read bandwidth, very good write bandwidth. But bandwidth is
>>  more or less irrelevant these days.
>>- very good (average) seek performance: a factor of (number_of_drives +
>>  X), with X due to better locality. This makes your database happy.
>>- the performance of IDE master/slave configurations is somewhat lower,
>>  but not by much. I don't see performance reasons not to use
>>  master/slave. It's more a matter of cable length and potential problems
>>  due to failing drives.
>>
>>kernel patches:
>>
>>Special ones are needed if UnifiedIDE is not used. Available on request.
>>
>>stress tester:
>>
>>If there is interest, I can clean it up a little and release it,
>>in order to enable you to happily crash your IDE+SMP machines too...
>>
>>--
>>the online community service for gamers & friends - http://www.rivalnet.com
>>* supports over 50 PC games in multiplayer mode
>>* send & receive files up to 500 MB at a time
>>* newsgroups, mail, chat & more
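And for anyone who doesn't want to wait for that stress tester to be released: the read-back validation idea described above can be sketched in a few lines of C. This is only my illustration of the technique, not the quoted poster's actual tool (which surely does more, e.g. high-bandwidth random access). Point it at a scratch file, or a device you can afford to overwrite.

/*
 * Sketch of read-back validation: write pseudo-random blocks seeded
 * by block number, then read them back and compare. A mismatch means
 * the hardware or driver corrupted data somewhere along the way.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE (64 * 1024)
#define NUM_BLOCKS 1024                 /* 64 MB of test data */

static unsigned char wbuf[BLOCK_SIZE], rbuf[BLOCK_SIZE];

/* Deterministic pattern per block, so it can be regenerated to verify. */
static void fill_block(unsigned char *buf, long blkno)
{
        size_t i;

        srand((unsigned int)blkno);
        for (i = 0; i < BLOCK_SIZE; i++)
                buf[i] = (unsigned char)rand();
}

int main(int argc, char **argv)
{
        FILE *f;
        long b;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <scratch-file>\n", argv[0]);
                return 1;
        }
        f = fopen(argv[1], "w+b");
        if (f == NULL) {
                perror("fopen");
                return 1;
        }

        for (b = 0; b < NUM_BLOCKS; b++) {      /* write phase */
                fill_block(wbuf, b);
                if (fwrite(wbuf, 1, BLOCK_SIZE, f) != BLOCK_SIZE) {
                        perror("fwrite");
                        return 1;
                }
        }
        fflush(f);
        rewind(f);

        for (b = 0; b < NUM_BLOCKS; b++) {      /* read-back validation */
                fill_block(wbuf, b);
                if (fread(rbuf, 1, BLOCK_SIZE, f) != BLOCK_SIZE) {
                        perror("fread");
                        return 1;
                }
                if (memcmp(wbuf, rbuf, BLOCK_SIZE) != 0) {
                        fprintf(stderr, "mismatch in block %ld\n", b);
                        return 1;
                }
        }
        printf("all %d blocks verified OK\n", NUM_BLOCKS);
        fclose(f);
        return 0;
}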
Re: some recommendations for IDE raid (using 37GB drives)
Zach Coombes, AMD, Austin, TX Tue, 1 Feb 2000 21:59:41 -0800
