I had the same problem, and found a conflicting declaration for the
function in question in include/linux/blkdev.h. However, after fixing
that, I ran into more and more errors, so I have given up for now.
Perhaps not all of the files that need patching are being patched?
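
For what it's worth, the conflict I found was the md_make_request()
prototype: the header still declares the old three-argument form while
the patched md.c and ll_rw_blk.c use the two-argument form (see the
grep output further down). A minimal sketch of the kind of edit I made,
assuming your include/linux/blkdev.h still carries the old declaration
(exact contents may differ in your tree):

    /* include/linux/blkdev.h (sketch): the old, conflicting declaration
     * looked roughly like this, matching the pre-patch md.c:
     *
     *     extern int md_make_request(int minor, int rw, struct buffer_head *bh);
     *
     * md.c in this patch defines the two-argument form, so the
     * declaration presumably has to become: */
    struct buffer_head;
    extern int md_make_request(struct buffer_head *bh, int rw);

Whether that is the intended fix, or whether blkdev.h simply missed a
hunk of the patch, I can't say.
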
On Thu, 5 Nov 1998, John Lellis wrote:
> > -----Original Message-----
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED]] On Behalf Of MOLNAR Ingo
> > Sent: Thursday, November 05, 1998 2:59 AM
> > To: [EMAIL PROTECTED]
> > Cc: Erik Troan; Stephen C. Tweedie
> > Subject: RELEASE: RAID-0,1,4,5 and LVM version 0.90, 1998.11.05
> >
> >
> >
> > this is an alpha release of the latest Linux RAID0145 drivers, against
> > kernel 2.1.125 and 2.0.35. (a 2.1.127 port will follow as soon as
> > 2.1.127 gets out.) This package also contains a prototype (kernel-space
> > and MD based) LVM implementation.
> >
> > WARNING: we are still not out of alpha status; some of the features
> > are not widely tested. It should be mostly OK, but a backup never
> > hurts ...
> >
> > you can find raid0145-19981105-2.1.125.gz, raid0145-19981105-2.0.35.gz
> > and raidtools-19981105-0.90.tar.gz in the usual alpha directory:
> >
> > http://linux.kernel.org/pub/linux/daemons/raid/alpha
> >
> > new RAID features/fixes in this release:
> > ========================================
> >
> > = 'raid=noautodetect' boot time option added
> >
> > = /proc/sys/dev/md/speed-limit to runtime-configure reconstruction
> > speed.
> >
> > = initrd and reboot fixes by Luca Berra <[EMAIL PROTECTED]>
> >
> > = RAID5.HOWTO by Jakob Ostergaard ([EMAIL PROTECTED])
> >
> > = the 'negative counter' bugfix
> >
> > = and various smaller or bigger things i forgot ...
> >
> > experimental LVM support:
> > =========================
> >
> > the biggest change is the rewrite to get LVM into MD. This meant the
> > rewrite of various MD pieces, e.g. md_arrays[] is gone and mddev is now
> > runtime allocated, along with minor device numbers. This enables us to
> > utilize the rather scarce minor device number space efficiently, which
> > is a must for LVM.
> >
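(To illustrate the kind of change being described, and not the actual
driver code: every name below is made up for the example. Going from a
fixed md_arrays[]-style table to runtime allocation keyed by minor
number looks roughly like this:

    #include <stdlib.h>

    #define MAX_MD_MINORS 256               /* illustrative limit only */

    struct mddev_sketch {
            int minor;
            /* ... per-array state would live here ... */
    };

    /* a sparse table of pointers instead of a fixed array of structs:
     * a minor number only costs memory once an array is actually set up */
    static struct mddev_sketch *mddevs[MAX_MD_MINORS];

    static struct mddev_sketch *alloc_mddev(int minor)
    {
            if (minor < 0 || minor >= MAX_MD_MINORS || mddevs[minor])
                    return NULL;    /* out of range or already in use */
            mddevs[minor] = calloc(1, sizeof(*mddevs[minor]));
            if (mddevs[minor])
                    mddevs[minor]->minor = minor;
            return mddevs[minor];
    }

the point being that a minor number, and the memory behind it, is only
consumed on demand, which is what makes a crowded minor device number
space workable.)
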
> > the LVM is already tightly integrated with the MD stuff. The RAID
> > superblock layer still has to be cleaned up a bit more to serve as a
> > generic 'storage container identification layer', independently
> > starting up RAID and LVM (or both, or stacked) devices. But we are
> > almost there.
> >
> > the LVM implementation lacks proper user-space support, but people who
> > are interested and want to comment on the design are welcome to take a
> > look at lvm_p.h, lvm.h and lvm.c. In raidtools there is an 'mkpv'
> > utility, which prepares partitions to be added to the LVM:
> >
> > ./mkpv -f /dev/sdc6
> > /dev/sdc6's size: 51892 KB.
> > /dev/sdc6's rounded size: 51000 KB.
> > creating VG ...
> > creating LV 1 ...
> > initializing block groups on /dev/sdc6.
> >
> > [root@hell raidtools]# ./mkpv -f /dev/sdc7
> > /dev/sdc7's size: 51892 KB.
> > /dev/sdc7's rounded size: 51000 KB.
> > creating VG ...
> > creating LV 1 ...
> > initializing block groups on /dev/sdc7.
> > [root@hell raidtools]#
> >
> > a sample raidtab entry to utilize the above PVs:
> >
> > #
> > # Create an LVM Volume Group out of two Physical Volumes:
> > #
> >
> > raiddev /dev/md0
> >     raid-level              lvm #-volume-group
> >     nr-raid-disks           2
> >     persistent-superblock   1
> >     chunk-size              16
> >     device                  /dev/sdc7
> >     raid-disk               0
> >     device                  /dev/sdc6
> >     raid-disk               1
> >
> > and after 'mkraid -f /dev/md0', the VG will show up in /proc/mdstat:
> >
> > [root@hell /root]# cat /proc/mdstat
> > Personalities : [linear] [raid0] [raid1] [raid5] [lvm]
> > read_ahead 128 sectors
> > md0 : active lvm sdc6[1] sdc7[0] 0 blocks <LV1 1/20000 blocks used>
> > unused devices: <none>
> > [root@hell /root]#
> >
> > currently 'mkpv' creates a single hardcoded 80M LV, which is mapped to
> > /dev/md9. /dev/md9 can then be used to create a filesystem.
> >
> > [root@hell /root]# mke2fs -b 4096 /dev/md9
> > [root@hell /root]# df /mnt
> > Filesystem         1024-blocks  Used Available Capacity Mounted on
> > /dev/md9                 50140    52     47500       0% /mnt
> > [root@hell /root]#
> > [root@hell /root]# cat /proc/mdstat
> > Personalities : [linear] [raid0] [raid1] [raid5] [lvm]
> > read_ahead 128 sectors
> > md0 : active lvm sdc6[1] sdc7[0] 0 blocks <LV1 427/20000 blocks used>
> > unused devices: <none>
> > [root@hell /root]#
> >
> > this Logical Volume can be stopped/started in the normal raidtools
> > fashion, and can be autostarted/root-mounted as well.
> >
> > the kernel side of the LVM support code is mostly finished; one major
> > component still lacking at the moment is proper integration with the
> > buffer cache. User-space needs some serious coding and properly
> > thought-out utilities. This release of the LVM code is meant to give
> > people an opportunity to comment on the design, before i build too many
> > things around it :) The physical layout will almost certainly change,
> > and compatibility with this alpha layout will not be maintained.
> >
> > this LVM implementation differs very much from 'typical' LVM
> > implementations (AIX, HP-UX, Veritas): it's a 'block-level LVM' (i'm
> > not sure whether this term exists at all), with an allocation
> > granularity (LVM blocksize) of 4K. This design is pretty 'daring' but
> > enables us to do advanced block device features like
> > filesystem-independent migration, resizing, defragmentation, on-demand
> > storage management, software-based badblock handling, snapshotting and
> > multiversioning. But i first want to finalize (and discuss) the core
> > design (which should already provide all the 'legacy' LVM operations
> > like spanning a filesystem over arbitrary devices, and basic storage
> > management) before adding 'applications' to the core level.
> >
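As a rough sketch of what a 4K-granularity 'block-level LVM' implies
(purely illustrative; none of these names come from lvm.c), each 4K
logical block gets its own entry in a remapping table, which is what
makes per-block migration, resizing and bad-block replacement possible:

    #include <stdint.h>

    #define LVM_BLOCK_SIZE 4096     /* the 4K granularity described above */

    /* one entry per 4K logical block: which physical volume currently
     * holds the data, and at which 4K block on that PV */
    struct lv_block_map_entry {
            uint16_t pv_index;
            uint32_t pv_block;
    };

    /* look up the current physical location of a logical block; because
     * every block is mapped independently, moving data (migration,
     * defragmentation, replacing a bad block) only means rewriting one
     * table entry */
    static struct lv_block_map_entry
    lv_remap(const struct lv_block_map_entry *map, uint32_t logical_block)
    {
            return map[logical_block];
    }

The obvious cost of mapping at this fine a granularity is the size of
the map itself (one entry per 4K block).
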
> > the LVM implementation does not impact overall RAID stability; people
> > using RAID should just disable LVM in the kernel config.
> >
> > enjoy. Reports, comments, flames, feature-requests welcome. Let me
> > know if i have missed/forgotten some patch sent to me.
> >
> > -- mingo
> >
>
> I applied the raid0145-19981105-2.0.35 patches to a virgin copy of
> 2.0.35 and they all succeeded. But when I try to make the kernel, I
> get compile errors. For example, with only striping configured I get
> the following:
>
> make[2]: Entering directory `/root/raid/linux/drivers/block'
> make all_targets
> make[3]: Entering directory `/root/raid/linux/drivers/block'
> gcc -D__KERNEL__ -I/root/raid/linux/include -Wall -Wstrict-prototypes
> -O2 -fomit-frame-pointer -fno-strength-reduce -pipe -m486
> -malign-loops=2 -malign-jumps=2 -malign-functions=2 -DCPU=686 -c -o
> ll_rw_blk.o ll_rw_blk.c
> ll_rw_blk.c: In function `ll_rw_block':
> ll_rw_blk.c:552: warning: passing arg 1 of `md_make_request' makes
> integer from pointer without a cast
> ll_rw_blk.c:552: too few arguments to function `md_make_request'
> make[3]: *** [ll_rw_blk.o] Error 1
> make[3]: Leaving directory `/root/raid/linux/drivers/block'
> make[2]: *** [first_rule] Error 2
> make[2]: Leaving directory `/root/raid/linux/drivers/block'
> make[1]: *** [sub_dirs] Error 2
> make[1]: Leaving directory `/root/raid/linux/drivers'
> make: *** [linuxsubdirs] Error 2
>
> The number of arguments to md_make_request changed from three to two,
> but the definition in md.c and the call in ll_rw_blk.c do seem to agree:
>
> igate[p4]:/root/raid/linux/drivers/block# grep md_make_request *
> ll_rw_blk.c: md_make_request(bh[i], rw);
> ll_rw_blk.c.orig: md_make_request(MINOR(bh[i]->b_dev), rw, bh[i]);
> md.c:int md_make_request (struct buffer_head * bh, int rw)
> md.c.orig:int md_make_request (int minor, int rw, struct buffer_head * bh)
> grep: paride: Is a directory
>
> Ideas, anyone?
>
> --
>
> John C. Lellis          E-Mail: [EMAIL PROTECTED]
> Consultant               Phone : (713) 313-5068
>                           FAX   : (713) 313-5193
> Aspen Technology, Inc.
> Advanced Control and Optimization Division
> Software Development
> 9896 Bissonnet
> Houston, TX 77036
>