Re: where are patches located?

2000-08-11 Thread Christoph Kukulies

On Thu, Aug 10, 2000 at 05:07:42PM +0200, Karl-Heinz Herrmann wrote:
 Hi,
 
 
 On 10-Aug-00 Christoph Kukulies wrote:
  If the patch is not clean (i.e. it produces rejects), you probably had a
  kernel patched with the old-style md-raid. The patch is probably against
  a clean kernel source.
 
  Indeed, I had some rejects.  Uh.
 
 Then that is probably the problem. If you have your kernel running, what
 does "cat /proc/mdstat" say? There is a small difference between the
 output of the old "md-style" RAID and that of the newer code (which goes
 with the new raidtools-0.90).

# cat /proc/mdstat
Personalities : [1 linear] [2 raid0] [3 raid1] [4 raid5]
read_ahead not set
md0 : inactive
md1 : inactive
md2 : inactive
md3 : inactive


 
  
  Either get a clean kernel or a patch against your kernel version (could
  be difficult).
  
  Getting a clean kernel would be difficult since I'm already using a
  kernel with SMP patches.
 
 
 Hopefully a completely clean kernel is not necessary -- just a kernel
 without the md-raid patches.
 
 Have a look at:
 http://people.redhat.com/mingo/
 
 There are RAID patches *and* SMP patches there, which I hope are compatible.
 
 
 I've no idea where to find old md-style patches, but if you get one
 matching your version you could reverse-patch your kernel and apply the new
 one. 
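 
 Roughly (just a sketch -- the patch file names here are invented, use
 whatever matches your kernel version):
 
 # cd /usr/src/linux
 # patch -p1 -R --dry-run < old-md-raid.patch   (check the reverse applies)
 # patch -p1 -R < old-md-raid.patch             (back out the old md patch)
 # patch -p1 < new-raid.patch                   (then apply the new one)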
 
 Or have a look at what actually collides in your reject files.
 
 Hope this helps,
 
 
 
 K.-H.
 

Thanks


-- 
Chris Christoph P. U. Kukulies [EMAIL PROTECTED]



Re: where are patches located?

2000-08-11 Thread Karl-Heinz Herrmann


On 11-Aug-00 Christoph Kukulies wrote:
# cat /proc/mdstat
 Personalities : [1 linear] [2 raid0] [3 raid1] [4 raid5]
 read_ahead not set
 md0 : inactive
 md1 : inactive
 md2 : inactive
 md3 : inactive

Yes -- that's old-style md-raid.
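
With the new-style code (raidtools-0.90), /proc/mdstat looks more like this
instead (device names and block counts here are just an example):

# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 sdb1[1] sda1[0] 8956032 blocks [2/2] [UU]
unused devices: <none>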

So you have a kernel which is already patched with raid-code and you try to
apply a new patch which is against a clean kernel. 

Would there be a patch against an old-style md-patched kernel somewhere?
Since your distribution (wasn't it RedHat?) is delivering it like that,
maybe they should do that?

But your best strategy is probably to grab a clean kernel and get the smp
patches you wanted and the new raid patches and apply them yourself.
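
Something along these lines (a sketch -- the patch file names are invented,
pick the ones that match 2.2.16):

# cd /usr/src
# tar xzf linux-2.2.16.tar.gz            (clean tree from kernel.org)
# cd linux
# patch -p1 < ../raid-for-2.2.16.patch   (Ingo's RAID patch)
# patch -p1 < ../smp-for-2.2.16.patch    (the SMP patch you want)

and then configure and build as usual.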


K.-H.





E-Mail: Karl-Heinz Herrmann [EMAIL PROTECTED] 
http://www.kfa-juelich.de/icg/icg7/FestFluGre/transport/khh/general.html
Sent: 11-Aug-00, 10:50:27




Re: where are patches located?

2000-08-11 Thread Christoph Kukulies

On Fri, Aug 11, 2000 at 10:53:11AM +0200, Karl-Heinz Herrmann wrote:
 
 On 11-Aug-00 Christoph Kukulies wrote:
 # cat /proc/mdstat
  Personalities : [1 linear] [2 raid0] [3 raid1] [4 raid5]
  read_ahead not set
  md0 : inactive
  md1 : inactive
  md2 : inactive
  md3 : inactive
 
 Yes -- that's old-style md-raid.
 
 So you have a kernel which is already patched with raid-code and you try to

patched with old style raid code?

 apply a new patch which is against a clean kernel. 
 
 Would there be a patch against an oldstyle md-patched kernel somewhere? 
 Since your distribution (Wasn't it RedHat?) is delivering it like that

Yes, RH 6.1

 maybe they should do that?
 
 But your best strategy is probably to grab a clean kernel and get the smp
 patches you wanted and the new raid patches and apply them yourself.

Sigh. :-)

 
 
 K.-H.
 
 

-- 
Chris Christoph P. U. Kukulies [EMAIL PROTECTED]



Re: where are patches located?

2000-08-11 Thread Karl-Heinz Herrmann


On 11-Aug-00 Christoph Kukulies wrote:
 So you have a kernel which is already patched with raid-code and you try
 to
 
 patched with old style raid code?

Yes -- that's the problem.

SuSE is just the same -- they distribute kernels which include a lot of
patches, and you can hardly apply a new one on top.
I started with a fresh kernel from kernel.org.

K.-H.




E-Mail: Karl-Heinz Herrmann [EMAIL PROTECTED] 
http://www.kfa-juelich.de/icg/icg7/FestFluGre/transport/khh/general.html
Sent: 11-Aug-00, 13:10:46




Re: where are patches located?

2000-08-11 Thread Eric Z. Ayers

Christoph Kukulies writes:
...
   Would there be a patch against an oldstyle md-patched kernel somewhere? 
   Since your distribution (Wasn't it RedHat?) is delivering it like that
  
  Yes, RH 6.1
  
   maybe they should do that?
   
   But your best strategy is probably to grab a clean kernel and get the smp
   patches you wanted and the new raid patches and apply them yourself.
  
  Sigh. :-)

Maybe it will cheer you up to hear that all you have to do is get the
kernel source RPM (SRPM) from your vendor and do an "rpm -i" on it to get
the tar file and all the patches they used.  Then you can take each patch
and (usually!) apply it to a fresh kernel source that you've downloaded,
or at least go and find the latest version of each patch, the RAID patch
being one example.
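
For example (the package file name is only illustrative -- use whatever
kernel source RPM your release ships):

# rpm -i kernel-2.2.xx-yy.src.rpm
# ls /usr/src/redhat/SOURCES     (the original tar file plus all the patches)
# ls /usr/src/redhat/SPECS       (the spec file shows the order they apply in)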


-Eric.





RE: Loss of a SCSI device and RAID

2000-08-11 Thread Eric Z. Ayers

Gregory Leblanc writes:
...
   I know that the Linux kernel auto-detects the SCSI devices on boots
   and assigns them
   
   /dev/sda to the first one
   /dev/sdb to the second one ...
   
   and so on.
  
  Yep.  Lots of planning done there.  :-)
  
   Doesn't this put a kink in your plans if you remove a disk physically
   and then restart the system?  I mean, what if the failure on the disk
   is something like smoke coming out of the drive bay and the next time
   you reboot the kernel doesn't even see the device?
  
  If you're using just the SCSI drives, yes, it screws everything up.  
  
   Is there a way to hard code /dev/sda to Target ID N and /dev/sdb to
   Target ID M so that in case N fails, your old /dev/sdb doesn't show up
   as /dev/sda when you reboot?
  
  Sort of.  There are some "devfs" patches that make the /dev filesystem MUCH
  cleaner, and they keep disks at the same location, even when other disks are
  removed.  It does break a few things, though.  I don't think it currently
  works with RAID, at least not on 2.2.x.
  
   The setup I'm envisioning is a 2.2.16 kernel with the latest patches,
   a single SCSI bus with 2 hard drives in a RAID 1 configuration.  If it
   makes a difference, the system will NOT boot from these disks.
  
  Well, with persistent superblocks, you don't have anything to worry about.
  The kernel will just detect your RAID sets, and configure them.  Then, since
  /etc/fstab is pointed at /dev/mdX rather than /dev/sdX, you don't have to
  worry about SCSI drives changing.  HTH,


Thanks for the response.  Unfortunately, I don't think this setup
would help my situation very much, because this is a shared SCSI
bus configuration.  Autodetection screws everything up, because when the
second machine is booted, it shouldn't have "permission" to access the
disks.

Is there a way for me to take advantage of the persistent superblock
without kernel auto detection?  Basically, I don't want to start the
RAID device until I say, "GO" (after we are sure the peer system is no
longer accessing the devices.)
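
What I'm after is something like this (just a sketch -- device names and
the mount point are invented, and it assumes the array is described in
/etc/raidtab): keep the partitions typed as plain Linux (83) rather than
"Linux raid autodetect" (fd) so the kernel skips them at boot, and then:

# raidstart /dev/md0     (assemble md0 from /etc/raidtab when we say "GO")
# mount /dev/md0 /shared
  ... use the array ...
# umount /shared
# raidstop /dev/md0      (release it again before the peer takes over)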

-Eric.



How do I tell the kernel I want RAID-5?

2000-08-11 Thread Ryan Daly

This is going to sound pretty stupid, but here goes anyway...

I got 2.2.16 and the latest patch from kernel.org, applied it and started to
rebuild.

The question is, where do I tell the kernel to use RAID-5?

I can't see it in the 'make menuconfig' stuff anywhere...  Am I missing
something here?  (Hopefully...)

-rd



Re: How do I tell the kernel I want RAID-5?

2000-08-11 Thread Nick Kay

At 09:12 11/08/00 -0400, you wrote:
This is going to sound pretty stupid, but here goes anyway...

I got 2.2.16 and the latest patch from kernel.org, applied it and started to
rebuild.

The question is, where do I tell the kernel to use RAID-5?

I can't see it in the 'make menuconfig' stuff anywhere...  Am I missing
something here?  (Hopefully...)

-rd



It should be in the section "Block Devices" - make sure that
"Multiple devices driver support" is set. The RAID options
are below that.
Also check the first few lines of the Makefile for EXTRAVERSION= -RAID
to ensure the patch was applied OK.
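
In other words, after "make menuconfig" the .config should end up with
something like this (symbol names from memory, so double-check them in the
menus):

CONFIG_BLK_DEV_MD=y
CONFIG_MD_RAID5=y          # or =m to build RAID-5 as a module

and the top of the Makefile should show the patched version string:

# head -4 Makefile
VERSION = 2
PATCHLEVEL = 2
SUBLEVEL = 16
EXTRAVERSION = -RAID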

hih
nick@nexnix





lilo issue

2000-08-11 Thread Nick Kay

Hi all,
I have my raid-1 up and running over two 9gig scsi
drives under slackware 7.1 with kernel 2.2.16+raid-2.2.6-AO
from Ingo and lilo-21.5.
Disconnecting the second drive (ID 1) and rebooting
works fine; however, pulling ID0 causes problems: lilo comes
up without any kernel labels and, if left alone, does a floppy seek
and returns to the "boot:" prompt. Manually entering a kernel label
will get the system up. After installing a fresh, empty drive as ID0
and running my rebuild script, the RAID system appears to be fine,
but the behaviour of lilo is still broken. Rerunning lilo on the new
config makes no difference - there are still no kernel labels and the
system has to be booted manually.
Any ideas or pointers on where to look?

BTW, lilo will not build on this system - the assembly fails
for temp2. I built it on a RedHat 6.2 box and tarred it over. The
assemblers have different checksums on the two systems.
Perhaps that is a clue?

tia
nick@nexnix



Re: How do I tell the kernel I want RAID-5?

2000-08-11 Thread Ryan Daly

Yeah, it's in there.  I was blowing past that before...  My mistake.

Thanks for the help.

--

On Aug 11 at 14:32, Nick Kay (nick) wrote:
  Subject: Re: How do I tell the kernel I want RAID-5?
 
  At 09:12 11/08/00 -0400, you wrote:
  This is going to sound pretty stupid, but here goes anyway...
  
  I got 2.2.16 and the latest patch from kernel.org, applied it and started
to
  rebuild.
  
  The question is, where do I tell the kernel to use RAID-5?
  
  I can't see it in the 'make menuconfig' stuff anywhere...  Am I missing
  something here?  (Hopefully...)
  
  -rd
  
 
 
  It should be in the section "Block Devices" - make sure that
  "Multiple devices driver support" is set. Then RAID options
  are below that.
  Also check the first few lines of Makefile for EXTRAVERSION= -RAID
  to ensure the patch was applied OK
 
  hih
  nick@nexnix
 
  
 
 
End of excerpt from Nick Kay



--
Ryan Daly
Unix Administrator/Network Engineer
Concurrent Technologies Corporation (v) 814.269.6889
100 CTC Drive   (f) 814.269.6870
Johnstown, PA US 15904-1935

91 3E E1 09 16 D1 5A 67 1A CA 16 C7 E0 C1 74 72
ftp.ctc.com:/pub/PGP-keys/daly.asc



RE: lilo issue

2000-08-11 Thread Gregory Leblanc

 -Original Message-
 From: Nick Kay [mailto:[EMAIL PROTECTED]]
 Sent: Friday, August 11, 2000 6:51 AM
 To: [EMAIL PROTECTED]
 Subject: lilo issue
 
 Hi all,
   I have my raid-1 up and running over two 9gig scsi
 drives under slackware 7.1 with kernel 2.2.16+raid-2.2.6-AO
 from Ingo and lilo-21.5.
 Disconnecting the second drive (ID 1) and rebooting
 works fine; however, pulling ID0 causes problems: lilo comes
 up without any kernel labels and, if left alone, does a floppy seek
 and returns to the "boot:" prompt. Manually entering a kernel label
 will get the system up. After installing a fresh, empty drive as ID0
 and running my rebuild script, the RAID system appears to be fine,
 but the behaviour of lilo is still broken. Rerunning lilo on the new
 config makes no difference - there are still no kernel labels and the
 system has to be booted manually.
   Any ideas or pointers on where to look?

Can you show us your lilo.conf?  Do you have a default label set?  Does
lilo-21.5 include RH's boot from RAID1 patch, or another boot from RAID1
patch?  
Greg



RE: lilo issue

2000-08-11 Thread Nick Kay



Can you show us your lilo.conf?  Do you have a default label set?  Does
lilo-21.5 include RH's boot from RAID1 patch, or another boot from RAID1
patch?  

No I don't have the default label set - I tend to like having the
option of alternate kernels as a rescue mechanism. I guess I 
don't have much choice in the matter this time though.

Thanks for the response

nick@nexnix


   Greg





RE: lilo issue

2000-08-11 Thread Gregory Leblanc

 -Original Message-
 From: Nick Kay [mailto:[EMAIL PROTECTED]]
 Sent: Friday, August 11, 2000 9:27 AM
 To: Gregory Leblanc
 Cc: [EMAIL PROTECTED]
 Subject: RE: lilo issue
 
 Can you show us your lilo.conf?  Do you have a default label 
 set?  Does
 lilo-21.5 include RH's boot from RAID1 patch, or another 
 boot from RAID1
 patch?  
 
 No I don't have the default label set - I tend to like having the
 option of alternate kernels as a rescue mechanism. I guess I 
 don't have much choice in the matter this time though.

Unfortunately, in order not to break things, the default label must be set
to something, although I'm not sure what happens if you set it to something
invalid.  You can configure lilo so that it waits forever, regardless of
whether or not you have a default label specified.  The only thing the
default label does in that configuration is to specify the kernel to boot
if you simply press enter.
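
For instance (the labels, kernel paths and boot device here are only an
example):

# no "timeout=" line, so lilo waits at the boot: prompt forever
prompt
# "default" only picks what boots if you simply press enter
default=linux
boot=/dev/sda
image=/boot/vmlinuz
        label=linux
        root=/dev/md0
        read-only
image=/boot/vmlinuz.old
        label=rescue
        root=/dev/md0
        read-only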
Later,
Greg



Re: Degrading disk read performance under 2.2.16

2000-08-11 Thread Corin Hartland-Swann


Hi Andre,

On Fri, 11 Aug 2000, Andre Hedrick wrote:
 On Fri, 11 Aug 2000, Corin Hartland-Swann wrote:
  When I try hdparm -m -c -d1 -a, I get the following output:
  
  /dev/hdc:
   setting using_dma to 1 (on)
   HDIO_SET_DMA failed: Operation not permitted
   multcount= 16 (on)
   I/O support  =  1 (32-bit)
   using_dma=  0 (off)
   readahead=  8 (on)

 Sheesh, you have to at least turn it on in the kernel at compile time to
 attempt DMA.

Sorry, my fault entirely. This is the first time I have used kernel
2.4; I wasn't used to the menuconfig layout and missed the DMA options
(they weren't enabled by default).
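
(For anyone else who trips over the same thing -- these names are from
memory, so treat them as approximate: the relevant bits live under
"ATA/IDE/MFM/RLL support" in menuconfig, and the .config should end up with
something like

CONFIG_BLK_DEV_IDEDMA_PCI=y
CONFIG_IDEDMA_PCI_AUTO=y
CONFIG_BLK_DEV_PIIX=y

after which "hdparm -d1 /dev/hdc" succeeds instead of returning "Operation
not permitted".)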

The revised comparison between 2.2.15 and 2.4.0-test5 is as follows:

== 2.2.15 ==

 Dir    Size   BlkSz  Thr#  Read (CPU%)     Write (CPU%)    Seeks (CPU%)
------- ------ ------ ----  --------------  --------------  --------------
/mnt/    256    4096    1   27.1371 10.3%   26.7979 23.0%   146.187 0.95%
/mnt/    256    4096    2   27.1219 10.7%   26.6606 23.2%   142.233 0.60%
/mnt/    256    4096    4   26.9915 10.6%   26.4289 22.9%   142.789 0.50%
/mnt/    256    4096   16   26.4320 10.5%   26.1310 23.0%   147.424 0.52%
/mnt/    256    4096   32   25.3407 10.1%   25.6822 22.7%   150.750 0.57%

== 2.4.0-test5 ==

 Dir    Size   BlkSz  Thr#  Read (CPU%)     Write (CPU%)    Seeks (CPU%)
------- ------ ------ ----  --------------  --------------  --------------
/mnt/    256    8192    1   23.4496 9.70%   24.1711 20.6%   139.941 0.88%
/mnt/    256    8192    2   16.9398 7.53%   24.0482 20.3%   136.706 0.69%
/mnt/    256    8192    4   15.0166 6.82%   23.7892 20.2%   139.922 0.69%
/mnt/    256    8192   16   13.5901 6.38%   23.2326 19.4%   147.956 0.70%
/mnt/    256    8192   32   13.3228 6.36%   22.8210 19.0%   151.544 0.73%

So we're still seeing a drop in performance with one thread, and still
seeing the same severe degradation that 2.2.16 exhibits.

 Maybe using the chipset tuning code to get it programmed correctly
 would get you to the average 22 MB/sec that the PIIX and drive combo will
 do.

Sorry, I don't understand this. Could you explain it to me? Will this be
a specific option in the kernel config?

Thanks,

Corin

PS Sorry about the DMA oversight.

/+-\
| Corin Hartland-Swann   | Direct: +44 (0) 20 7544 4676|
| Commerce Internet Ltd  | Mobile: +44 (0) 79 5854 0027|
| 22 Cavendish Buildings |Tel: +44 (0) 20 7491 2000|
| Gilbert Street |Fax: +44 (0) 20 7491 2010|
| Mayfair|Web: http://www.commerce.uk.net/ |
| London W1K 5HJ | E-Mail: [EMAIL PROTECTED]|
\+-/

