Hi, all.
I am trying to test block-throttle on dm-thin devices. I find that throttling
works on the dm-thin device itself, but it does not work for the data device
of the dm-thin pool.
The following is my test case:
#!/bin/sh
dmsetup create pool --table '0 41943040 thin-pool /dev/vdb /dev/vda \
On Mon, Jan 09, 2017 at 08:54:02AM -0600, Eric Sandeen wrote:
> On 1/9/17 8:22 AM, Zdenek Kabelac wrote:
> > But could anyone from XFS specify - why umount is causing some
> > 'more' damage than no umount at all ?
>
> Please reread this thread... it /started/ with problems
> /caused by unmount/ for Christoph.
On Mon, Jan 09, 2017 at 04:11:08PM +0100, Zdenek Kabelac wrote:
>
> You have the case where the application will write to a 'different' filesystem,
> while in other cases the user will be able to continue to use their filesystem
> and cause irreparable filesystem damage (whatever you want to believe
> is fai
Please see below for some bugs I spotted.
The surest way to find bugs is to post patches to a public list :-/
On 01/09/2017 10:20 AM, Andy Grover wrote:
+int dm_release(struct inode *inode, struct file *filp)
+ if (priv->md) {
should be "if (priv && priv->md)"
+static unsigned int
All,
Comments on this patch would be very much appreciated.
Thank you.
Best regards.
On 12/15/16 15:51, Damien Le Moal wrote:
> The dm-zoned device mapper provides transparent write access to zoned
> block devices (ZBC and ZAC compliant devices). dm-zoned hides from the
> device user (a file sys
We will need access to struct file in a following commit in an ioctl
handler. Since lookup_ioctl() wants all ioctl handler functions to have
the same signature, change all signatures to take struct file.
Signed-off-by: Andy Grover
---
drivers/md/dm-ioctl.c | 38 +++---
Hi all,
This patchset allows events for multiple DM devices to be monitored by
a single thread, instead of requiring one thread per device. This is
made possible by a new ioctl to create an association between an open
file descriptor to /dev/mapper/control and a DM device. A program can
open and a
Instead of requiring a thread for each device to sleep in ioctl(DEV_WAIT)
to receive dm events, allow a single thread to:
1) Open /dev/mapper/control multiple times
2) Associate each of these open file descriptors with different DM devices
3) poll() on all of these
4) When an event occurs, use TABLE_STATUS
Allows TABLE_STATUS to be used without specifying the dm device in the
ioctl, if it has previously been associated with a device.
Signed-off-by: Andy Grover
---
drivers/md/dm-ioctl.c | 25 -
1 file changed, 16 insertions(+), 9 deletions(-)
diff --git a/drivers/md/dm-ioct
On Mon, Jan 09, 2017 at 03:22:00PM +0100, Zdenek Kabelac wrote:
> lvm2 will initiate lazy umount of ALL thin devices from a thin-pool
> when it gets to about 95% fullness (so it's a bit sooner than 100%,
> with still some 5% 'free space').
Yes, and we want this not to be done. Not for XFS and not for
On 9.1.2017 at 15:54, Eric Sandeen wrote:
On 1/9/17 8:22 AM, Zdenek Kabelac wrote:
But could anyone from XFS specify - why umount is causing some
'more' damage than no umount at all ?
Please reread this thread... it /started/ with problems
/caused by unmount/ for Christoph.
It's not t
On 1/9/17 8:22 AM, Zdenek Kabelac wrote:
> But could anyone from XFS specify - why umount is causing some
> 'more' damage than no umount at all ?
Please reread this thread... it /started/ with problems
/caused by unmount/ for Christoph.
It's not that unmount damages the filesystem per se; it
On 9.1.2017 at 14:39, Christoph Hellwig wrote:
On Fri, Jan 06, 2017 at 09:46:00AM +1100, Dave Chinner wrote:
And my 2c worth on the "lvm unmounting filesystems on error" - stop
it, now. It's the wrong thing to do, and it makes it impossible for
filesystems to handle the error and recover grac
On Fri, Jan 06, 2017 at 09:46:00AM +1100, Dave Chinner wrote:
> And my 2c worth on the "lvm unmounting filesystems on error" - stop
> it, now. It's the wrong thing to do, and it makes it impossible for
> filesystems to handle the error and recover gracefully when
> possible.
It's causing way more