Re: [PATCH] missing close in mdassemble
On Wed, Sep 13, 2006 at 04:57:43PM +0200, Luca Berra wrote:
>attached, please apply
>without this mdassemble cannot activate stacked arrays, i wonder how i
>managed to miss it :(

Another patch, which obsoletes the previous one: this will make
mdassemble, if run a second time, try to make arrays read-write.
Useful if one starts arrays readonly as described in README.initramfs,
after resume fails.

L.

-- 
Luca Berra -- [EMAIL PROTECTED]
Communication Media Services S.r.l.
 /"\
 \ /     ASCII RIBBON CAMPAIGN
  X        AGAINST HTML MAIL
 / \

--- mdadm-2.5.3/mdassemble.c.close	2006-06-26 07:11:00.0 +0200
+++ mdadm-2.5.3/mdassemble.c	2006-09-13 17:23:15.0 +0200
@@ -91,13 +91,14 @@
 			rv |= 1;
 			continue;
 		}
-		if (ioctl(mdfd, GET_ARRAY_INFO, &array) >= 0)
-			/* already assembled, skip */
-			continue;
-		rv |= Assemble(array_list->st, array_list->devname, mdfd,
-			       array_list,
-			       NULL, NULL,
+		if (ioctl(mdfd, GET_ARRAY_INFO, &array) < 0) {
+			rv |= Assemble(array_list->st, array_list->devname, mdfd,
+				       array_list, NULL, NULL,
 			       readonly, runstop, NULL, NULL, verbose, force);
+		} else {
+			rv |= Manage_ro(array_list->devname, mdfd, -1); /* make it readwrite */
+		}
+		close(mdfd);
 	}
 	return rv;
 }
--- mdadm-2.5.3/Makefile.close	2006-06-20 02:01:17.0 +0200
+++ mdadm-2.5.3/Makefile	2006-09-13 17:54:36.0 +0200
@@ -76,7 +76,7 @@
 STATICSRC = pwgr.c
 STATICOBJS = pwgr.o
 
-ASSEMBLE_SRCS := mdassemble.c Assemble.c config.c dlink.c util.c super0.c super1.c sha1.c
+ASSEMBLE_SRCS := mdassemble.c Assemble.c Manage.c config.c dlink.c util.c super0.c super1.c sha1.c
 ASSEMBLE_FLAGS:= $(CFLAGS) -DMDASSEMBLE
 ifdef MDASSEMBLE_AUTO
 ASSEMBLE_SRCS += mdopen.c mdstat.c
--- mdadm-2.5.3/Manage.c.close	2006-06-26 04:26:07.0 +0200
+++ mdadm-2.5.3/Manage.c	2006-09-13 17:25:31.0 +0200
@@ -72,6 +72,8 @@
 	return 0;
 }
 
+#ifndef MDASSEMBLE
+
 int Manage_runstop(char *devname, int fd, int runstop, int quiet)
 {
 	/* Run or stop the array. array must already be configured
@@ -393,3 +395,5 @@
 
 	return 0;
 }
+
+#endif /* MDASSEMBLE */
--- mdadm-2.5.3/util.c.close	2006-09-13 17:29:19.0 +0200
+++ mdadm-2.5.3/util.c	2006-09-13 18:08:56.0 +0200
@@ -189,6 +189,7 @@
 	}
 }
 
+#ifndef MDASSEMBLE
 int check_ext2(int fd, char *name)
 {
 	/*
@@ -286,6 +287,7 @@
 	fprintf(stderr, Name ": assuming 'no'\n");
 	return 0;
 }
+#endif /* MDASSEMBLE */
 
 char *map_num(mapping_t *map, int num)
 {
@@ -307,7 +309,6 @@
 	return UnSet;
 }
 
-
 int is_standard(char *dev, int *nump)
 {
 	/* tests if dev is a standard md dev name.
@@ -482,6 +483,7 @@
 	return csum;
 }
 
+#ifndef MDASSEMBLE
 char *human_size(long long bytes)
 {
 	static char buf[30];
@@ -534,7 +536,9 @@
 		);
 	return buf;
 }
+#endif /* MDASSEMBLE */
 
+#if !defined(MDASSEMBLE) || defined(MDASSEMBLE) && defined(MDASSEMBLE_AUTO)
 int get_mdp_major(void)
 {
 	static int mdp_major = -1;
@@ -618,6 +622,7 @@
 	if (strncmp(name, "/dev/.tmp.md", 12)==0)
 		unlink(name);
 }
+#endif /* !defined(MDASSEMBLE) || defined(MDASSEMBLE) && defined(MDASSEMBLE_AUTO) */
 
 int dev_open(char *dev, int flags)
 {
--- mdadm-2.5.3/mdassemble.8.close	2006-08-07 03:33:56.0 +0200
+++ mdadm-2.5.3/mdassemble.8	2006-09-13 18:25:41.0 +0200
@@ -25,6 +25,13 @@
 .B mdassemble
 has the same effect as invoking
 .B mdadm --assemble --scan.
+.PP
+Invoking
+.B mdassemble
+a second time will make all defined arrays readwrite, this is useful if
+using the
+.B start_ro
+module parameter.
 .SH OPTIONS
@@ -54,6 +61,5 @@
 .PP
 .BR mdadm (8),
 .BR mdadm.conf (5),
-.BR md (4).
-.PP
+.BR md (4),
 .BR diet (1).
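The patch above reduces to one decision per listed array: if GET_ARRAY_INFO fails, the array is not assembled yet, so Assemble() runs (read-only if requested); if it succeeds, this is a second invocation and the array is switched to read-write via the Manage_ro(devname, mdfd, -1) branch. A minimal userspace model of that decision (everything except the ioctl convention is invented for illustration):

```c
#include <assert.h>

/* Stand-in for the patched mdassemble loop body.  `get_array_info_rc`
 * models the return value of ioctl(mdfd, GET_ARRAY_INFO, &array):
 * negative means "not assembled yet", zero or positive means the array
 * already exists and should be made read-write. */
enum md_action { MD_ASSEMBLE, MD_MAKE_RW };

enum md_action next_action(int get_array_info_rc)
{
	/* First run: assemble.  Second run: make read-write. */
	return get_array_info_rc < 0 ? MD_ASSEMBLE : MD_MAKE_RW;
}
```

Either way, the new close(mdfd) runs after the branch, which is the descriptor leak the subject line refers to.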
Re: RAID5 producing fake partition table on single drive
On Fri, Sep 15, 2006 at 05:51:12PM +1000, Lem wrote:
>On Thu, 2006-09-14 at 18:42 -0400, Bill Davidsen wrote:
>>Lem wrote:
>>>On Mon, 2006-09-04 at 13:55 -0400, Bill Davidsen wrote:
>>>>May I belatedly say that this is sort-of a kernel issue, since
>>>>/proc/partitions reflects invalid data? Perhaps a boot option like
>>>>nopart=sda,sdb or similar would be in order?
>>
>>My suggestion was to Neil or other kernel maintainers. If they agree
>>that this is worth fixing, the option could be added in the kernel. It
>>isn't there now, I was soliciting responses on whether this was
>>desirable.
>
>My mistake, sorry. It sounds like a nice idea, and would work well in
>cases where the RAID devices are always assigned the same device names
>(sda, sdb, sdc etc), which I'd expect to be the case quite frequently.

that is the issue, quite frequently != always

>Unfortunately I see no way to avoid data in the partition table
>location, which looks like a partition table, from being used. Perhaps
>an alternative would be to convert an array with non-partition-based
>devices to partition-based devices, though I remember Neil saying this
>would involve relocating all of the data on the entire array (perhaps
>could be done through some funky resync option?).

sorry, i do not agree: ms-dos partitions are a bad idea, and one i would
really love to leave behind.

what i'd do is move the partition detection code to userspace, where it
belongs, together with lvm, md, dmraid, multipath and evms.

so what userspace would do is: check if any whole disk is one of the
above mentioned types, or if it is partitionable. i believe the order
would be something like:

	dmraid or multipath
	evms (*)
	md
	lvm
	partition table (partx or kpartx)
	md
	lvm

(*) evms should handle all cases by itself

after each check, the device list for the next check should be
recalculated, removing devices handled and adding new devices just
created. this is too much to be done in kernel space, but it can be done
easily in initramfs or an init script.

just say Y to CONFIG_PARTITION_ADVANCED and N to all other
CONFIG_*_PARTITION options, and code something in userspace.

L.

P.S. the OP can simply use partx to remove partition tables from the
components of the md array just after assembling.

L.

-- 
Luca Berra -- [EMAIL PROTECTED]
Communication Media Services S.r.l.
 /"\
 \ /     ASCII RIBBON CAMPAIGN
  X        AGAINST HTML MAIL
 / \
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
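The staged scan described above (run one detector at a time, drop the devices it claims, add the devices it creates) can be modelled with a small worklist. The toy "md" stage below, which consumes sda and sdb and produces md0, and the fixed-size list are invented for illustration:

```c
#include <assert.h>
#include <string.h>

#define MAX_DEVS 16

struct devlist {
	const char *dev[MAX_DEVS];
	int n;
};

/* A stage returns nonzero if it claims `dev`; it may append devices it
 * creates (e.g. a newly assembled array) to `out`. */
typedef int (*stage_fn)(const char *dev, struct devlist *out);

/* Rebuild the device list after one stage: claimed devices disappear,
 * created devices join the list seen by the next stage. */
void run_stage(struct devlist *l, stage_fn stage)
{
	struct devlist next = { {0}, 0 };
	for (int i = 0; i < l->n; i++)
		if (!stage(l->dev[i], &next))
			next.dev[next.n++] = l->dev[i]; /* pass through */
	*l = next;
}

/* Toy md stage: sda and sdb are components of md0. */
int toy_md_stage(const char *dev, struct devlist *out)
{
	if (strcmp(dev, "sda") == 0) {
		out->dev[out->n++] = "md0"; /* array device appears */
		return 1;
	}
	if (strcmp(dev, "sdb") == 0)
		return 1;                   /* component consumed */
	return 0;
}
```

A real initramfs script would run this loop once per stage in the order given (dmraid/multipath, evms, md, lvm, partition table, md, lvm), recomputing the list between stages.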
Re: RAID5 Problem - $1000 reward for help
On Sep 15, 2006, at 2:08, Reza Naima wrote:
>Linux version 2.6.12-1.1381_FC3

Not much help, but newer kernels are more aggressive about not failing a
second disk in a raid-5. (I noticed because the change came in just
around when my old raid-5 did the same as yours, but before I upgraded.)

 - ask

-- 
http://www.askbjoernhansen.com/
Re: [PATCH 09/19] dmaengine: reduce backend address permutations
Hi,

On Mon, 11 Sep 2006 16:18:23 -0700 Dan Williams [EMAIL PROTECTED] wrote:
>From: Dan Williams [EMAIL PROTECTED]
>
>Change the backend dma driver API to accept a 'union dmaengine_addr'.
>The intent is to be able to support a wide range of frontend address
>type permutations without needing an equal number of function type
>permutations on the backend.

Please do the cleanup of existing code before you add new functionality.
Earlier patches in this series added code that you're modifying here. If
you modify the existing code first, it's less churn for everyone to
review.

Thanks,

Olof
Re: [PATCH 16/19] dmaengine: Driver for the Intel IOP 32x, 33x, and 13xx RAID engines
Hi,

On Mon, 11 Sep 2006 16:19:00 -0700 Dan Williams [EMAIL PROTECTED] wrote:
>From: Dan Williams [EMAIL PROTECTED]
>
>This is a driver for the iop DMA/AAU/ADMA units which are capable of
>pq_xor, pq_update, pq_zero_sum, xor, dual_xor, xor_zero_sum, fill,
>copy+crc, and copy operations.

You implement a bunch of different functions here. I agree with Jeff's
feedback related to the lack of scalability in the way the API is going
right now.

Another example of this is that the driver is doing its own self-test of
the functions. This means that every backend driver will need to
duplicate this code. Wouldn't it be easier for everyone if the common
infrastructure did a test call at the time of registration of a function
instead, and returned failure if it doesn't pass?

> drivers/dma/Kconfig                 |   27 +
> drivers/dma/Makefile                |    1
> drivers/dma/iop-adma.c              | 1501 +++
> include/asm-arm/hardware/iop_adma.h |   98 ++

ioatdma.h is currently under drivers/dma/. If the contents of this
header are strictly device-related, please add them under drivers/dma
as well.

-Olof
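The registration-time self-test suggested here can be sketched generically: the common infrastructure exercises an operation once when a backend registers and rejects backends whose result is wrong. The names and function-pointer shape below are invented, not the dmaengine API:

```c
#include <assert.h>
#include <string.h>

/* Invented signature for a backend's copy operation: returns 0 on
 * success, nonzero on refusal. */
typedef int (*memcpy_op)(void *dst, const void *src, size_t len);

/* Core-side registration: run the op once on known data before
 * accepting the backend, instead of each driver self-testing. */
int register_backend(memcpy_op op)
{
	char src[8] = "abcdefg", dst[8] = {0};

	if (op == NULL || op(dst, src, sizeof src) != 0)
		return -1;               /* backend refused the call */
	if (memcmp(dst, src, sizeof src) != 0)
		return -1;               /* self-test failed: reject */
	return 0;                        /* accepted */
}

/* A working backend... */
int good_copy(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
	return 0;
}

/* ...and a broken one that claims success but copies nothing. */
int bad_copy(void *dst, const void *src, size_t len)
{
	(void)dst; (void)src; (void)len;
	return 0;
}
```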
Re: [PATCH 08/19] dmaengine: enable multiple clients and operations
On Mon, 11 Sep 2006 19:44:16 -0400 Jeff Garzik [EMAIL PROTECTED] wrote:
>Dan Williams wrote:
>>@@ -759,8 +755,10 @@ #endif
>> 	device->common.device_memcpy_buf_to_buf = ioat_dma_memcpy_buf_to_buf;
>> 	device->common.device_memcpy_buf_to_pg = ioat_dma_memcpy_buf_to_pg;
>> 	device->common.device_memcpy_pg_to_pg = ioat_dma_memcpy_pg_to_pg;
>>-	device->common.device_memcpy_complete = ioat_dma_is_complete;
>>-	device->common.device_memcpy_issue_pending = ioat_dma_memcpy_issue_pending;
>>+	device->common.device_operation_complete = ioat_dma_is_complete;
>>+	device->common.device_xor_pgs_to_pg = dma_async_xor_pgs_to_pg_err;
>>+	device->common.device_issue_pending = ioat_dma_memcpy_issue_pending;
>>+	device->common.capabilities = DMA_MEMCPY;
>
>Are we really going to add a set of hooks for each DMA engine whizbang
>feature?  That will get ugly when DMA engines support memcpy, xor,
>crc32, sha1, aes, and a dozen other transforms.
>
>Yes, it will be unmaintainable.  We need some sort of multiplexing with
>per-function registrations.

Here's a first cut at it, just very quick. It could be improved further,
but it shows that we could exorcise most of the hardcoded things pretty
easily.

Dan, would this fit with your added XOR stuff as well? If so, would you
mind rebasing on top of something like this (with your further cleanups
going in before added function, please. :-)

(Build tested only, since I lack Intel hardware).

It would be nice if we could move the type specification to only be
needed in the channel allocation. I don't know how well that fits the
model for some of the hardware platforms though, since a single channel
might be shared for different types of functions. Maybe we need a
different level of abstraction there instead, i.e. divorce the hardware
channel and software channel model and have several software channels
map onto a hardware one.


Clean up the DMA API a bit, allowing each engine to register an array of
supported functions instead of allocating static names for each possible
function.

Signed-off-by: Olof Johansson [EMAIL PROTECTED]

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 1527804..282ce85 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -80,7 +80,7 @@ static ssize_t show_memcpy_count(struct
 	int i;
 
 	for_each_possible_cpu(i)
-		count += per_cpu_ptr(chan->local, i)->memcpy_count;
+		count += per_cpu_ptr(chan->local, i)->count;
 
 	return sprintf(buf, "%lu\n", count);
 }
@@ -105,7 +105,7 @@ static ssize_t show_in_use(struct class_
 }
 
 static struct class_device_attribute dma_class_attrs[] = {
-	__ATTR(memcpy_count, S_IRUGO, show_memcpy_count, NULL),
+	__ATTR(count, S_IRUGO, show_memcpy_count, NULL),
 	__ATTR(bytes_transferred, S_IRUGO, show_bytes_transferred, NULL),
 	__ATTR(in_use, S_IRUGO, show_in_use, NULL),
 	__ATTR_NULL
@@ -402,11 +402,11 @@ subsys_initcall(dma_bus_init);
 EXPORT_SYMBOL(dma_async_client_register);
 EXPORT_SYMBOL(dma_async_client_unregister);
 EXPORT_SYMBOL(dma_async_client_chan_request);
-EXPORT_SYMBOL(dma_async_memcpy_buf_to_buf);
-EXPORT_SYMBOL(dma_async_memcpy_buf_to_pg);
-EXPORT_SYMBOL(dma_async_memcpy_pg_to_pg);
-EXPORT_SYMBOL(dma_async_memcpy_complete);
-EXPORT_SYMBOL(dma_async_memcpy_issue_pending);
+EXPORT_SYMBOL(dma_async_buf_to_buf);
+EXPORT_SYMBOL(dma_async_buf_to_pg);
+EXPORT_SYMBOL(dma_async_pg_to_pg);
+EXPORT_SYMBOL(dma_async_complete);
+EXPORT_SYMBOL(dma_async_issue_pending);
 EXPORT_SYMBOL(dma_async_device_register);
 EXPORT_SYMBOL(dma_async_device_unregister);
 EXPORT_SYMBOL(dma_chan_cleanup);
diff --git a/drivers/dma/ioatdma.c b/drivers/dma/ioatdma.c
index dbd4d6c..6cbed42 100644
--- a/drivers/dma/ioatdma.c
+++ b/drivers/dma/ioatdma.c
@@ -40,6 +40,7 @@
 #define to_ioat_device(dev) container_of(dev, struct ioat_device, common)
 #define to_ioat_desc(lh) container_of(lh, struct ioat_desc_sw, node)
 
+
 /* internal functions */
 static int __devinit ioat_probe(struct pci_dev *pdev, const struct pci_device_id *ent);
 static void __devexit ioat_remove(struct pci_dev *pdev);
@@ -681,6 +682,14 @@ out:
 	return err;
 }
 
+struct dma_function ioat_memcpy_functions = {
+	.buf_to_buf = ioat_dma_memcpy_buf_to_buf,
+	.buf_to_pg = ioat_dma_memcpy_buf_to_pg,
+	.pg_to_pg = ioat_dma_memcpy_pg_to_pg,
+	.complete = ioat_dma_is_complete,
+	.issue_pending = ioat_dma_memcpy_issue_pending,
+};
+
 static int __devinit ioat_probe(struct pci_dev *pdev,
 				const struct pci_device_id *ent)
 {
@@ -756,11 +765,8 @@ static int __devinit ioat_probe(struct p
 	device->common.device_alloc_chan_resources = ioat_dma_alloc_chan_resources;
 	device->common.device_free_chan_resources = ioat_dma_free_chan_resources;
-	device->common.device_memcpy_buf_to_buf = ioat_dma_memcpy_buf_to_buf;
-	device->common.device_memcpy_buf_to_pg = ioat_dma_memcpy_buf_to_pg;
-
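The gist of the per-function registration above is replacing one named hook per operation with a table of optional function blocks indexed by operation type, so a new operation means a new table slot rather than new struct members everywhere. A stripped-down model (types and names invented, not the real kernel structs):

```c
#include <assert.h>
#include <stddef.h>

enum dma_function_type { DMA_MEMCPY, DMA_XOR, DMA_FUNC_MAX };

/* One block of hooks per operation type; `issue` stands in for the
 * real per-operation entry points. */
struct dma_function {
	int (*issue)(void);
};

/* A device advertises the types it supports by filling table slots;
 * NULL means "not provided". */
struct dma_device {
	const struct dma_function *funcs[DMA_FUNC_MAX];
};

int memcpy_issue(void) { return 0; }

const struct dma_function ioat_like_funcs = { .issue = memcpy_issue };

/* Clients look up a capability instead of calling a named hook. */
const struct dma_function *
get_funcs(const struct dma_device *dev, enum dma_function_type t)
{
	return dev->funcs[t];
}
```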
Re: libata hotplug and md raid?
Hello all,

On 9/15/06, Greg KH [EMAIL PROTECTED] wrote:
>On Thu, Sep 14, 2006 at 02:24:45PM +0200, Leon Woestenberg wrote:
>>On 9/13/06, Tejun Heo [EMAIL PROTECTED] wrote:
>>>Ric Wheeler wrote:
>>>>Leon Woestenberg wrote:
>>>>>In short, I use ext3 over /dev/md0 over 4 SATA drives /dev/sd[a-d],
>>>>>each driven by libata ahci. I unplug then replug the drive that is
>>>>>rebuilding in RAID-5.
>>...
>>So the question remains: how will hotplug and md work together?
>...
>>How does md and hotplug work together for current hotplug devices?
>
>The answer to both of these questions is, not very well. Me and Kay
>have been talking with Neil Brown about this and he agrees that it
>needs to be fixed up. That md device needs to have proper lifetime
>rules and go away properly.
>
>Hopefully it gets fixed soon.

I will try to catch any kernel work on this so that I can pick it up for
testing.

For the moment, I'll try to make this work as best as possible using
udev rules and userspace (mdadm). I suppose I can act on both unplugs
and plugs, both before and after the event; is that true?

Regards,

Leon Woestenberg.

-- 
Leon
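Acting on plug events from userspace could look roughly like the rule below. The match keys are standard udev syntax, but the device names and the assumption that a bare mdadm --add is the right recovery action here are illustrative guesses, not a tested recipe:

```
# hypothetical /etc/udev/rules.d/65-md-hotplug.rules
# when a disk that belongs to /dev/md0 reappears, try to re-add it
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-d]", \
    RUN+="/sbin/mdadm /dev/md0 --add /dev/%k"
```

A matching ACTION=="remove" rule could run mdadm --fail/--remove on the departing component, subject to the lifetime problems Greg describes above.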
[PATCH] dmaengine: clean up and abstract function types (was Re: [PATCH 08/19] dmaengine: enable multiple clients and operations)
On Fri, 15 Sep 2006 11:38:17 -0500 Olof Johansson [EMAIL PROTECTED] wrote:
>On Mon, 11 Sep 2006 19:44:16 -0400 Jeff Garzik [EMAIL PROTECTED] wrote:
>>Are we really going to add a set of hooks for each DMA engine whizbang
>>feature?  That will get ugly when DMA engines support memcpy, xor,
>>crc32, sha1, aes, and a dozen other transforms.
>>
>>Yes, it will be unmaintainable.  We need some sort of multiplexing
>>with per-function registrations.
>
>Here's a first cut at it, just very quick. It could be improved further,
>but it shows that we could exorcise most of the hardcoded things pretty
>easily.

Ok, that was obviously a naive and not so nice first attempt, but I
figured it was worth it to show how it can be done. This is a little
more proper: specify at client registration time which function the
client will use, and make the channel use it. This way most of the
error checking per call can be removed too.

Chris/Dan: Please consider picking this up as a base for the added
functionality and cleanups.


Clean up dmaengine a bit. Make the client registration specify which
channel functions (type) the client will use. Also, make devices
register which functions they will provide.

Also exorcise most of the memcpy-specific references from the generic
dma engine code. There's still some left in the iov stuff.

Signed-off-by: Olof Johansson [EMAIL PROTECTED]

Index: linux-2.6/drivers/dma/dmaengine.c
===================================================================
--- linux-2.6.orig/drivers/dma/dmaengine.c
+++ linux-2.6/drivers/dma/dmaengine.c
@@ -73,14 +73,14 @@ static LIST_HEAD(dma_client_list);
 
 /* --- sysfs implementation --- */
 
-static ssize_t show_memcpy_count(struct class_device *cd, char *buf)
+static ssize_t show_count(struct class_device *cd, char *buf)
 {
 	struct dma_chan *chan = container_of(cd, struct dma_chan, class_dev);
 	unsigned long count = 0;
 	int i;
 
 	for_each_possible_cpu(i)
-		count += per_cpu_ptr(chan->local, i)->memcpy_count;
+		count += per_cpu_ptr(chan->local, i)->count;
 
 	return sprintf(buf, "%lu\n", count);
 }
@@ -105,7 +105,7 @@ static ssize_t show_in_use(struct class_
 }
 
 static struct class_device_attribute dma_class_attrs[] = {
-	__ATTR(memcpy_count, S_IRUGO, show_memcpy_count, NULL),
+	__ATTR(count, S_IRUGO, show_count, NULL),
 	__ATTR(bytes_transferred, S_IRUGO, show_bytes_transferred, NULL),
 	__ATTR(in_use, S_IRUGO, show_in_use, NULL),
 	__ATTR_NULL
@@ -142,6 +142,10 @@ static struct dma_chan *dma_client_chan_
 	/* Find a channel, any DMA engine will do */
 	list_for_each_entry(device, &dma_device_list, global_node) {
+		/* Skip devices that don't provide the right function */
+		if (!device->funcs[client->type])
+			continue;
+
 		list_for_each_entry(chan, &device->channels, device_node) {
 			if (chan->client)
 				continue;
@@ -241,7 +245,8 @@ static void dma_chans_rebalance(void)
  * dma_async_client_register - allocate and register a dma_client
  * @event_callback: callback for notification of channel addition/removal
  */
-struct dma_client *dma_async_client_register(dma_event_callback event_callback)
+struct dma_client *dma_async_client_register(enum dma_function_type type,
+					     dma_event_callback event_callback)
 {
 	struct dma_client *client;
@@ -254,6 +259,7 @@ struct dma_client *dma_async_client_regi
 	client->chans_desired = 0;
 	client->chan_count = 0;
 	client->event_callback = event_callback;
+	client->type = type;
 
 	mutex_lock(&dma_list_mutex);
 	list_add_tail(&client->global_node, &dma_client_list);
@@ -402,11 +408,11 @@ subsys_initcall(dma_bus_init);
 EXPORT_SYMBOL(dma_async_client_register);
 EXPORT_SYMBOL(dma_async_client_unregister);
 EXPORT_SYMBOL(dma_async_client_chan_request);
-EXPORT_SYMBOL(dma_async_memcpy_buf_to_buf);
-EXPORT_SYMBOL(dma_async_memcpy_buf_to_pg);
-EXPORT_SYMBOL(dma_async_memcpy_pg_to_pg);
-EXPORT_SYMBOL(dma_async_memcpy_complete);
-EXPORT_SYMBOL(dma_async_memcpy_issue_pending);
+EXPORT_SYMBOL(dma_async_buf_to_buf);
+EXPORT_SYMBOL(dma_async_buf_to_pg);
+EXPORT_SYMBOL(dma_async_pg_to_pg);
+EXPORT_SYMBOL(dma_async_complete);
+EXPORT_SYMBOL(dma_async_issue_pending);
 EXPORT_SYMBOL(dma_async_device_register);
 EXPORT_SYMBOL(dma_async_device_unregister);
 EXPORT_SYMBOL(dma_chan_cleanup);
Index: linux-2.6/drivers/dma/ioatdma.c
===================================================================
--- linux-2.6.orig/drivers/dma/ioatdma.c
+++ linux-2.6/drivers/dma/ioatdma.c
@@ -681,6 +682,14 @@ out:
 	return err;
 }
 
+struct dma_function ioat_memcpy_functions = {
+	.buf_to_buf = ioat_dma_memcpy_buf_to_buf,
+	.buf_to_pg = ioat_dma_memcpy_buf_to_pg,
+	.pg_to_pg = ioat_dma_memcpy_pg_to_pg,
+	.complete = ioat_dma_is_complete,
+	.issue_pending
[PATCH] [v2] dmaengine: clean up and abstract function types (was Re: [PATCH 08/19] dmaengine: enable multiple clients and operations)
[Bad day, forgot a quilt refresh.]

Clean up dmaengine a bit. Make the client registration specify which
channel functions (type) the client will use. Also, make devices
register which functions they will provide.

Also exorcise most of the memcpy-specific references from the generic
dma engine code. There's still some left in the iov stuff.

Signed-off-by: Olof Johansson [EMAIL PROTECTED]

Index: linux-2.6/drivers/dma/dmaengine.c
===================================================================
--- linux-2.6.orig/drivers/dma/dmaengine.c
+++ linux-2.6/drivers/dma/dmaengine.c
@@ -73,14 +73,14 @@ static LIST_HEAD(dma_client_list);
 
 /* --- sysfs implementation --- */
 
-static ssize_t show_memcpy_count(struct class_device *cd, char *buf)
+static ssize_t show_count(struct class_device *cd, char *buf)
 {
 	struct dma_chan *chan = container_of(cd, struct dma_chan, class_dev);
 	unsigned long count = 0;
 	int i;
 
 	for_each_possible_cpu(i)
-		count += per_cpu_ptr(chan->local, i)->memcpy_count;
+		count += per_cpu_ptr(chan->local, i)->count;
 
 	return sprintf(buf, "%lu\n", count);
 }
@@ -105,7 +105,7 @@ static ssize_t show_in_use(struct class_
 }
 
 static struct class_device_attribute dma_class_attrs[] = {
-	__ATTR(memcpy_count, S_IRUGO, show_memcpy_count, NULL),
+	__ATTR(count, S_IRUGO, show_count, NULL),
 	__ATTR(bytes_transferred, S_IRUGO, show_bytes_transferred, NULL),
 	__ATTR(in_use, S_IRUGO, show_in_use, NULL),
 	__ATTR_NULL
@@ -142,6 +142,10 @@ static struct dma_chan *dma_client_chan_
 	/* Find a channel, any DMA engine will do */
 	list_for_each_entry(device, &dma_device_list, global_node) {
+		/* Skip devices that don't provide the right function */
+		if (!device->funcs[client->type])
+			continue;
+
 		list_for_each_entry(chan, &device->channels, device_node) {
 			if (chan->client)
 				continue;
@@ -241,7 +245,8 @@ static void dma_chans_rebalance(void)
  * dma_async_client_register - allocate and register a dma_client
 * @event_callback: callback for notification of channel addition/removal
  */
-struct dma_client *dma_async_client_register(dma_event_callback event_callback)
+struct dma_client *dma_async_client_register(enum dma_function_type type,
+					     dma_event_callback event_callback)
 {
 	struct dma_client *client;
@@ -254,6 +259,7 @@ struct dma_client *dma_async_client_regi
 	client->chans_desired = 0;
 	client->chan_count = 0;
 	client->event_callback = event_callback;
+	client->type = type;
 
 	mutex_lock(&dma_list_mutex);
 	list_add_tail(&client->global_node, &dma_client_list);
@@ -402,11 +408,11 @@ subsys_initcall(dma_bus_init);
 EXPORT_SYMBOL(dma_async_client_register);
 EXPORT_SYMBOL(dma_async_client_unregister);
 EXPORT_SYMBOL(dma_async_client_chan_request);
-EXPORT_SYMBOL(dma_async_memcpy_buf_to_buf);
-EXPORT_SYMBOL(dma_async_memcpy_buf_to_pg);
-EXPORT_SYMBOL(dma_async_memcpy_pg_to_pg);
-EXPORT_SYMBOL(dma_async_memcpy_complete);
-EXPORT_SYMBOL(dma_async_memcpy_issue_pending);
+EXPORT_SYMBOL(dma_async_buf_to_buf);
+EXPORT_SYMBOL(dma_async_buf_to_pg);
+EXPORT_SYMBOL(dma_async_pg_to_pg);
+EXPORT_SYMBOL(dma_async_complete);
+EXPORT_SYMBOL(dma_async_issue_pending);
 EXPORT_SYMBOL(dma_async_device_register);
 EXPORT_SYMBOL(dma_async_device_unregister);
 EXPORT_SYMBOL(dma_chan_cleanup);
Index: linux-2.6/drivers/dma/ioatdma.c
===================================================================
--- linux-2.6.orig/drivers/dma/ioatdma.c
+++ linux-2.6/drivers/dma/ioatdma.c
@@ -40,6 +40,7 @@
 #define to_ioat_device(dev) container_of(dev, struct ioat_device, common)
 #define to_ioat_desc(lh) container_of(lh, struct ioat_desc_sw, node)
 
+
 /* internal functions */
 static int __devinit ioat_probe(struct pci_dev *pdev, const struct pci_device_id *ent);
 static void __devexit ioat_remove(struct pci_dev *pdev);
@@ -681,6 +682,14 @@ out:
 	return err;
 }
 
+struct dma_function ioat_memcpy_functions = {
+	.buf_to_buf = ioat_dma_memcpy_buf_to_buf,
+	.buf_to_pg = ioat_dma_memcpy_buf_to_pg,
+	.pg_to_pg = ioat_dma_memcpy_pg_to_pg,
+	.complete = ioat_dma_is_complete,
+	.issue_pending = ioat_dma_memcpy_issue_pending,
+};
+
 static int __devinit ioat_probe(struct pci_dev *pdev,
 				const struct pci_device_id *ent)
 {
@@ -756,11 +765,8 @@ static int __devinit ioat_probe(struct p
 	device->common.device_alloc_chan_resources = ioat_dma_alloc_chan_resources;
 	device->common.device_free_chan_resources = ioat_dma_free_chan_resources;
-	device->common.device_memcpy_buf_to_buf = ioat_dma_memcpy_buf_to_buf;
-	device->common.device_memcpy_buf_to_pg = ioat_dma_memcpy_buf_to_pg;
-
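On the allocation side, the patch makes a client registered for one function type receive channels only from devices that provide it (the new check in the channel-allocation loop). The same selection logic, modelled outside the kernel with invented structures:

```c
#include <assert.h>
#include <stddef.h>

enum dma_function_type { DMA_MEMCPY, DMA_XOR, DMA_FUNC_MAX };

/* Simplified stand-in for a dma_device: a capability flag per type
 * models the device->funcs[] table from the patch. */
struct dma_device {
	const char *name;
	int has_func[DMA_FUNC_MAX];
};

/* Pick the first device that supports the client's registered type,
 * skipping devices that don't (returns NULL if no engine qualifies). */
const struct dma_device *
pick_device(const struct dma_device *devs, size_t ndev,
	    enum dma_function_type client_type)
{
	for (size_t i = 0; i < ndev; i++)
		if (devs[i].has_func[client_type])
			return &devs[i];
	return NULL;
}
```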
Re: 2.6.18-rc5-mm1 - bd_claim_by_disk oops
John> I got the following on 2.6.18-rc5-mm1 when trying to lvextend a
John> test logical volume that I had just created. This came about
John> because I have been trying to expand some LVs on my system, which
John> are based on a VG on top of an MD mirror pair. It's an SMP box
John> too, if that means anything.
John>
John> device-mapper: table: 253:3: linear: dm-linear: Device lookup failed
John> device-mapper: ioctl: error adding target to table
John> device-mapper: table: 253:3: linear: dm-linear: Device lookup failed
John> device-mapper: ioctl: error adding target to table
John> device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
John> device-mapper: ioctl: error adding target to table
John>
John> The error I got was:
John>
John> # lvextend -v -L +1g /dev/data_vg/home_lv
John>     Finding volume group data_vg
John>     Archiving volume group "data_vg" metadata (seqno 16).
John>     Extending logical volume home_lv to 52.00 GB
John>     Creating volume group backup "/etc/lvm/backup/data_vg" (seqno 17).
John>     Found volume group "data_vg"
John>     Found volume group "data_vg"
John>     Loading data_vg-home_lv table
John>   device-mapper: reload ioctl failed: Invalid argument
John>   Failed to suspend home_lv

I've found a solution to this problem of NOT being able to use
'lvextend' on some LVM2 Logical Volumes (LVs). Basically, I had to apply
the following patch to 2.6.18-rc6-mm2 to get it to work properly. I
don't know why this wasn't reported here to the kernel people.

Thanks,
John

--

If a matching bd_holder is found in bd_holder_list, add_bd_holder()
completes its job by just incrementing the reference count. In this
case, it should be considered a success, but it used to return 'fail'
to let the caller free the temporary bd_holder. Fixed it to return
success and free the given object by itself.

Also, if either one of the symlinkings fails, the bd_holder should not
be added to the list, so that it can be discarded later. Otherwise, the
caller will free a bd_holder which is in the list.

This patch is necessary only for -mm (later than 2.6.18-rc1-mm1).

 fs/block_dev.c |   11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

Signed-off-by: Jun'ichi Nomura [EMAIL PROTECTED]

diff -urp linux-2.6.18-rc5-mm1.orig/fs/block_dev.c linux-2.6.18-rc5-mm1/fs/block_dev.c
--- linux-2.6.18-rc5-mm1.orig/fs/block_dev.c	2006-09-11 19:33:35.0 -0400
+++ linux-2.6.18-rc5-mm1/fs/block_dev.c	2006-09-11 19:21:46.0 -0400
@@ -655,8 +655,8 @@ static void free_bd_holder(struct bd_hol
  * If there is no matching entry with @bo in @bdev->bd_holder_list,
  * add @bo to the list, create symlinks.
  *
- * Returns 0 if @bo was added to the list.
- * Returns -ve if @bo wasn't used by any reason and should be freed.
+ * Returns 0 if symlinks are created or already there.
+ * Returns -ve if something fails and @bo can be freed.
  */
 static int add_bd_holder(struct block_device *bdev, struct bd_holder *bo)
 {
@@ -669,7 +669,9 @@ static int add_bd_holder(struct block_de
 	list_for_each_entry(tmp, &bdev->bd_holder_list, list) {
 		if (tmp->sdir == bo->sdir) {
 			tmp->count++;
-			return -EEXIST;
+			/* We've already done what we need to do here. */
+			free_bd_holder(bo);
+			return 0;
 		}
 	}
 
@@ -682,7 +684,8 @@ static int add_bd_holder(struct block_de
 		if (ret)
 			del_symlink(bo->sdir, bo->sdev);
 	}
-	list_add_tail(&bo->list, &bdev->bd_holder_list);
+	if (ret == 0)
+		list_add_tail(&bo->list, &bdev->bd_holder_list);
 
 	return ret;
 }
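Restated outside the kernel, the fixed add_bd_holder() behaves as follows: a duplicate registration bumps the existing holder's count and frees the caller's temporary (returning success), while a fresh holder is linked into the list only when symlink creation succeeded. A userspace model under those assumptions, with the symlink outcome reduced to a flag:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified holder: `id` stands in for the sysfs-directory match the
 * real code does on bo->sdir. */
struct bd_holder {
	int id;
	int count;
	struct bd_holder *next;
};

int add_bd_holder(struct bd_holder **list, struct bd_holder *bo,
		  int symlink_ok)
{
	struct bd_holder *t;

	for (t = *list; t; t = t->next)
		if (t->id == bo->id) {
			t->count++;
			free(bo);   /* already done what we need to do */
			return 0;   /* duplicate is a success now */
		}

	if (!symlink_ok)
		return -1;          /* caller frees bo; never listed */

	bo->count = 1;
	bo->next = *list;
	*list = bo;                 /* listed only on full success */
	return 0;
}
```

The two asserts in the original bug report correspond to the two hunks: a duplicate no longer returns -EEXIST, and a failed symlink no longer leaves a soon-to-be-freed entry on the list.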