On 05/03/2018 11:18, Zdenek Kabelac wrote:
Yes - it has been updated/improved/fixed - and I've already given you a
link where you can configure the behavior of XFS when, e.g., the device
reports ENOSPC to the filesystem.
Sure - I already studied it months ago during my testing. I simply was
under
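For reference, recent kernels expose those XFS knobs under sysfs; a minimal
sketch, assuming the filesystem sits on dm-3 (the device name is a
placeholder):

    # how many times XFS retries metadata writes that fail with ENOSPC
    # (-1 = retry forever, the default; 0 = fail immediately)
    cat /sys/fs/xfs/dm-3/error/metadata/ENOSPC/max_retries
    echo 0 > /sys/fs/xfs/dm-3/error/metadata/ENOSPC/max_retries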
On 04-03-2018 21:53 Zdenek Kabelac wrote:
On the other hand, all common filesystems in Linux were always written
to work on a device where the space is simply always there. So all
core algorithms simply never accounted for something like
'thin-provisioning' - this is almost 'fine' since
On 3.3.2018 at 19:32, Xen wrote:
Zdenek Kabelac wrote on 28-02-2018 22:43:
It still depends - there is always some sort of 'race' - unless you
are willing to 'give up' too early to always be sure, considering
there are technologies that may write many GB/s...
That's why I think it is
Zdenek Kabelac wrote on 28-02-2018 22:43:
It still depends - there is always some sort of 'race' - unless you
are willing to 'give up' too early to always be sure, considering
there are technologies that may write many GB/s...
That's why I think it is only possible for snapshots.
You can
Gionatan Danti wrote on 28-02-2018 20:07:
To recap (Zdenek, correct me if I am wrong): the main problem is
that, on a full pool, async writes will more-or-less silently fail
(with errors shown in dmesg, but nothing more).
Yes I know you were writing about that in the later emails.
Another
I did not rewrite this entire message, please excuse the parts where I
am a little more "on the attack".
Zdenek Kabelac wrote on 28-02-2018 10:26:
I'll probably repeat myself again, but thin provisioning can't be
responsible for all kernel failures. There is no way the DM team can fix
all the
On 01/03/2018 17:00, Zdenek Kabelac wrote:
A metadata snapshot 'just consumes' thin-pool metadata space;
at any time there can be only 1 snapshot - so before the next usage
you have to drop the existing one.
So IMHO it should have no other effects unless you hit some bugs...
Mmm... does it mean
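A sketch of that metadata-snapshot cycle with dmsetup; the pool device name
is hypothetical (lvm2 typically exposes the pool as vg-pool-tpool):

    # reserve the single metadata snapshot on the thin-pool target
    dmsetup message /dev/mapper/vg-pool-tpool 0 reserve_metadata_snap
    # ... inspect it with the thin-provisioning tools ...
    # drop it before the next usage
    dmsetup message /dev/mapper/vg-pool-tpool 0 release_metadata_snap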
On 01/03/2018 12:23, Zdenek Kabelac wrote:
In general - for extX it's remount read-only upon error - which works
for journaled metadata - if you want the same protection for 'data' you
need to switch to the rather expensive data journaling mode.
For XFS there is now similar logic where a write error
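For ext4 the equivalent knobs are well known; a minimal sketch (device and
mountpoint are placeholders):

    # remount read-only on metadata errors, and journal data too (expensive)
    mount -o errors=remount-ro,data=journal /dev/vg/thinlv /mnt
    # or store the error behavior persistently in the superblock
    tune2fs -e remount-ro /dev/vg/thinlv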
On 1.3.2018 at 10:43, Gianluca Cecchi wrote:
On Thu, Mar 1, 2018 at 9:31 AM, Zdenek Kabelac wrote:
Also note - we are going to integrate VDO support - which will be a 2nd
way of thin-provisioning with a different set of features -
On Thu, Mar 1, 2018 at 9:31 AM, Zdenek Kabelac wrote:
> Also note - we are going to integrate VDO support - which will be a 2nd
> way of thin-provisioning with a different set of features - missing
> snapshots, but having compression & deduplication
>
> Regards
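For context, the standalone VDO manager of that era created such volumes
roughly as below; names and sizes are hypothetical and the exact syntax may
differ per version:

    # deduplicating/compressing volume, logical size overcommitted over /dev/sdb
    vdo create --name=vdo0 --device=/dev/sdb --vdoLogicalSize=10T
    mkfs.xfs -K /dev/mapper/vdo0    # -K skips discards at mkfs time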
On 01/03/2018 09:31, Zdenek Kabelac wrote:
If the tool wanted to write 1 sector to a 256K chunk that needed
provisioning, and provisioning was not possible - after reboot - you will
still see the 'old' content.
In the case of a filesystem that does not stop upon the 1st failing write,
you then can see
On 1.3.2018 at 08:14, Gionatan Danti wrote:
On 28-02-2018 22:43 Zdenek Kabelac wrote:
By default - a full pool starts to 'error' all 'writes' in 60 seconds.
Based on what I remember, and what you wrote below, I think "all writes" in
the context above means "writes to unallocated
On 28-02-2018 22:43 Zdenek Kabelac wrote:
By default - a full pool starts to 'error' all 'writes' in 60 seconds.
Based on what I remember, and what you wrote below, I think "all writes"
in the context above means "writes to unallocated areas", right? Because
even a full pool can write to
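That 60-second window is the dm-thin 'no_space_timeout' module parameter;
a sketch for inspecting and changing it (120 is just an example value):

    # seconds a full pool queues writes before erroring them (0 = queue forever)
    cat /sys/module/dm_thin_pool/parameters/no_space_timeout
    echo "options dm_thin_pool no_space_timeout=120" \
        > /etc/modprobe.d/dm_thin_pool.conf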
On 28.2.2018 at 20:07, Gionatan Danti wrote:
Hi all,
On 28-02-2018 10:26 Zdenek Kabelac wrote:
Overprovisioning at the DEVICE level simply IS NOT equivalent to a full
filesystem, as you would like to see it all the time here, and it has
already been explained to you many times that filesystems are
Hi all,
On 28-02-2018 10:26 Zdenek Kabelac wrote:
Overprovisioning at the DEVICE level simply IS NOT equivalent to a full
filesystem, as you would like to see it all the time here, and it has
already been explained to you many times that filesystems are simply not
there yet - fixes are ongoing but it
On 27.2.2018 at 19:39, Xen wrote:
Zdenek Kabelac wrote on 24-04-2017 23:59:
I'm just curious - what do you think will happen when you have
root_LV as a thin LV and the thin pool runs out of space - so 'root_LV'
is replaced with the 'error' target.
Why do you suppose Root LV is on thin?
Why
Zdenek Kabelac wrote on 24-04-2017 23:59:
I'm just curious - what do you think will happen when you have
root_LV as a thin LV and the thin pool runs out of space - so 'root_LV'
is replaced with the 'error' target.
Why do you suppose Root LV is on thin?
Why not just stick to the common scenario
On 16/05/2017 12:54, Zdenek Kabelac wrote:
Hi
Somehow I think you've rather made a mistake during your test (or you
have a buggy kernel). Can you take a full log of your test showing all
options are properly applied, i.e. a dmesg log + /proc/self/mountinfo
report showing all options used for
On 16.5.2017 at 09:53, Gionatan Danti wrote:
On 15/05/2017 17:33, Zdenek Kabelac wrote:
> Ever tested this:
> mount -o errors=remount-ro,data=journal ?
Yes, I tested it - same behavior: a full thinpool does *not* immediately put
the filesystem in a read-only state, even when using
On 15/05/2017 17:33, Zdenek Kabelac wrote:
> Ever tested this:
> mount -o errors=remount-ro,data=journal ?
Yes, I tested it - same behavior: a full thinpool does *not* immediately
put the filesystem in a read-only state, even when using sync/fsync and
"errorwhenfull=y".
So, it seems EXT4
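A sketch of that kind of test, with a deliberately tiny pool; all names and
sizes are placeholders:

    lvcreate -L 100M -T vg/pool --errorwhenfull y
    lvcreate -V 1G -T vg/pool -n thinlv          # overprovisioned thin LV
    mkfs.ext4 /dev/vg/thinlv
    mount -o errors=remount-ro,data=journal /dev/vg/thinlv /mnt
    # direct, synced writes so the page cache cannot mask the errors
    dd if=/dev/zero of=/mnt/fill bs=1M oflag=direct conv=fsync
    # did the filesystem actually go read-only?
    grep /mnt /proc/self/mountinfo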
On 15/05/2017 14:50, Zdenek Kabelac wrote:
> Hi
I still think you are mixing apples & oranges together and you are
expecting the answer '42' :)
'42' would be the optimal answer :p
There is simply NO simple answer. Every case has its pros & cons.
There are simply cases where XFS beats Ext4 and there
On Fri, May 12, 2017 at 03:02:58PM +0200, Gionatan Danti wrote:
> On 02/05/2017 13:00, Gionatan Danti wrote:
> >>Anyway, I think (and maybe I am wrong...) that the better solution is to
> >>fail *all* writes to a full pool, even the ones directed to allocated
> >>space. This will effectively
On 02/05/2017 13:00, Gionatan Danti wrote:
On 26/04/2017 18:37, Gionatan Danti wrote:
True, but the case exists that, even on a full pool, an application with
multiple outstanding writes will have some of them completed/committed
while others get an I/O error, as writes to already allocated space
On 26/04/2017 18:37, Gionatan Danti wrote:
True, but the case exists that, even on a full pool, an application with
multiple outstanding writes will have some of them completed/committed
while others get an I/O error, as writes to already allocated space are
permitted while writes to non-allocated
On 26-04-2017 16:33 Zdenek Kabelac wrote:
But you get a correct 'write' error - so from the application's POV - you
get a failing transaction update/write - so the app knows 'data' were lost
and should not proceed with the next transaction - so it's in line with
'no data is lost' and the filesystem is not
On 26.4.2017 at 15:37, Gionatan Danti wrote:
On 26/04/2017 13:23, Zdenek Kabelac wrote:
You need to use 'direct' write mode - otherwise you are just witnessing
issues related to 'page-cache' flushing.
Every update of a file means an update of the journal - so you surely can
lose some data
On 26/04/2017 13:23, Zdenek Kabelac wrote:
You need to use 'direct' write mode - otherwise you are just witnessing
issues related to 'page-cache' flushing.
Every update of a file means an update of the journal - so you surely can
lose some data in-flight - but every good piece of software needs to flush
On 26.4.2017 at 10:10, Gionatan Danti wrote:
I'm not sure this is sufficient. In my testing, ext4 will *not* remount-ro on
any error, rather only on erroneous metadata updates. For example, on a
thinpool with "--errorwhenfull y", trying to overcommit data with a simple "dd
if=/dev/zero
On 26-04-2017 09:42 Zdenek Kabelac wrote:
At this moment it's not possible.
I do have some plans/ideas for how to work around this in user-space but
it's non-trivial - especially on the recovery path.
It would be possible to 'reroute' thin to dm-delay and then point the
write path to error and leave the read path
On 26.4.2017 at 09:26, Gionatan Danti wrote:
On 24-04-2017 23:59 Zdenek Kabelac wrote:
If you set '--errorwhenfull y' - it should instantly fail.
It's my understanding that "--errorwhenfull y" will instantly fail writes
which imply new allocation requests, but writes to
On 24-04-2017 23:59 Zdenek Kabelac wrote:
If you set '--errorwhenfull y' - it should instantly fail.
It's my understanding that "--errorwhenfull y" will instantly fail
writes which imply new allocation requests, but writes to
already-allocated space will be completed.
It is
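For reference, a sketch of setting and checking that flag on an existing
pool (vg/pool is a placeholder; the report field name may vary with the
lvm2 version):

    # error immediately instead of queueing writes for no_space_timeout seconds
    lvchange --errorwhenfull y vg/pool
    # shows 'error' or 'queue'
    lvs -o name,lv_when_full vg/pool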
On 24.4.2017 at 15:49, Gionatan Danti wrote:
On 22/04/2017 23:22, Zdenek Kabelac wrote:
ATM there is even a bug for 169 & 170 - dmeventd should generate a message
at 80, 85, 90, 95 and 100% - but it does it only once - will be fixed soon...
Mmm... quite a bug, considering how important is
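Those thresholds belong to dmeventd's pool monitoring, configured in
lvm.conf; a minimal sketch with example numbers:

    # /etc/lvm/lvm.conf
    activation {
        # autoextend the pool once usage crosses this percentage...
        thin_pool_autoextend_threshold = 80
        # ...growing it by this much each time
        thin_pool_autoextend_percent = 20
    }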
On 23.4.2017 at 07:29, Xen wrote:
Zdenek Kabelac wrote on 22-04-2017 23:17:
That is awesome, that means an errors=remount-ro mount will cause a remount,
right?
Well 'remount-ro' will fail but you will not be able to read anything
from the volume either.
Well that is still preferable to
Zdenek Kabelac wrote on 22-04-2017 23:17:
That is awesome, that means an errors=remount-ro mount will cause a
remount, right?
Well 'remount-ro' will fail but you will not be able to read anything
from the volume either.
Well that is still preferable to anything else.
It is preferable to a
On 22.4.2017 at 18:32, Xen wrote:
Gionatan Danti wrote on 22-04-2017 9:14:
On 14-04-2017 10:24 Zdenek Kabelac wrote:
However there are many different solutions for different problems -
and with the current script execution - the user may build his own
solution - i.e. call
'dmsetup remove
On 14-04-2017 10:24 Zdenek Kabelac wrote:
However there are many different solutions for different problems -
and with the current script execution - the user may build his own
solution - i.e. call
'dmsetup remove -f' for running thin volumes - so all instances get an
'error' device when the pool is
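A sketch of such a user-built reaction, e.g. run from a monitoring script
when the pool fills up; the volume names are hypothetical:

    # 'dmsetup remove --force' replaces the live table with the error target,
    # so every further read/write on the thin volume fails fast
    for dev in vg-thin1 vg-thin2; do
        dmsetup remove -f "$dev"
    done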
Zdenek Kabelac wrote on 18-04-2017 12:17:
Already got lost in lots of posts.
But there is a tool, 'thin_ls', which can be used for detailed info
about the space used by every single thin volume.
It's not supported directly by the 'lvm2' command (so not yet presented
in a shiny cool way via 'lvs -a') -
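A sketch of driving thin_ls by hand via the metadata snapshot; device names
follow the usual lvm2 conventions but are placeholders, and the exact
report fields come from 'thin_ls --help' on your version:

    dmsetup message /dev/mapper/vg-pool-tpool 0 reserve_metadata_snap
    # per-thin-device mapping stats, read from the pool's metadata LV
    thin_ls --metadata-snap /dev/mapper/vg-pool_tmeta
    dmsetup message /dev/mapper/vg-pool-tpool 0 release_metadata_snap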
On Tue, 18 Apr 2017, Gionatan Danti wrote:
Any thoughts on the original question? For snapshots with a relatively big
CoW table, from a stability standpoint, how do you feel about classical vs
thin-pool snapshots?
Classic snapshots are rock solid. There is no risk to the origin
volume. If the
Stuart D. Gathman wrote on 13-04-2017 19:32:
On Thu, 13 Apr 2017, Xen wrote:
Stuart Gathman wrote on 13-04-2017 17:29:
IMO, the friendliest thing to do is to freeze the pool in read-only
mode just before running out of metadata.
It's not about metadata but about physical extents.
On 14-04-2017 11:37 Zdenek Kabelac wrote:
The problem is not with 'stopping' access - but with gaining the access
back.
So in this case - you need to run 'fsck' - and this fsck usually needs
more space - and the complexity starts with - where to get this space.
In the 'most trivial' case
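In the trivial case that extra space comes from growing the pool before
running fsck; a minimal sketch (sizes and names are placeholders):

    # give the pool data and metadata some headroom first
    lvextend -L +1G vg/pool
    lvextend --poolmetadatasize +256M vg/pool
    # only then repair the filesystem on the thin volume
    fsck.ext4 -f /dev/vg/thinlv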
On 14.4.2017 at 11:07, Gionatan Danti wrote:
On 14-04-2017 10:24 Zdenek Kabelac wrote:
But it's currently impossible to expect that you will fill the thin-pool
to full capacity and everything will continue to run smoothly - this is
not going to happen.
Even with EXT4 and
On 14-04-2017 10:24 Zdenek Kabelac wrote:
But it's currently impossible to expect that you will fill the thin-pool
to full capacity and everything will continue to run smoothly - this is
not going to happen.
Even with EXT4 and errors=remount-ro?
However there are many different solutions
On 13-04-2017 16:33 Zdenek Kabelac wrote:
Hello
Let's just repeat it.
A full thin-pool is NOT in any way comparable to a full filesystem.
A full filesystem ALWAYS has room for its metadata - it's not pretending
it's bigger - it has 'finite' space and expects this space to just BE
there.
Now when
On 13-04-2017 14:59 Stuart Gathman wrote:
Using a classic snapshot for backup does not normally involve activating
a large CoW. I generally create a smallish snapshot (a few gigs) that
will not fill up during the backup process. If for some reason a
snapshot were to fill up before
On 13-04-2017 14:41 Xen wrote:
See, you only compared multiple non-thin with a single-thin.
So my question is:
did you consider multiple thin volumes?
Hi, the multiple-thin-volume solution, while being very flexible, is not
well understood by libvirt and virt-manager. So I need to
On Thu, 13 Apr 2017, Xen wrote:
Stuart Gathman wrote on 13-04-2017 17:29:
IMO, the friendliest thing to do is to freeze the pool in read-only mode
just before running out of metadata.
It's not about metadata but about physical extents.
In the thin pool.
Ok. My understanding is that
On Thu, 13 Apr 2017, Xen wrote:
Stuart Gathman wrote on 13-04-2017 17:29:
understand and recover. A sysadmin could have a plain LV for the
system volume, so that logs and stuff would still be kept, and admin
logins work normally. There is no panic, as the data is there read-only.
Stuart Gathman wrote on 13-04-2017 17:29:
IMO, the friendliest thing to do is to freeze the pool in read-only
mode just before running out of metadata.
It's not about metadata but about physical extents.
In the thin pool.
While still involving application-level data loss (the data it
On 04/13/2017 10:33 AM, Zdenek Kabelac wrote:
> Now when you have a thin-pool - it causes quite a lot of trouble across
> a number of layers. These are solvable and being fixed.
>
> But as rule #1 still applies - do not run your thin-pool out of
> space - it will not always heal easily without
Zdenek Kabelac wrote on 13-04-2017 16:33:
Hello
Let's just repeat it.
A full thin-pool is NOT in any way comparable to a full filesystem.
A full filesystem ALWAYS has room for its metadata - it's not pretending
it's bigger - it has 'finite' space and expects this space to just BE
there.
Now when
On 13.4.2017 at 15:52, Xen wrote:
Stuart Gathman wrote on 13-04-2017 14:59:
If you are going to keep snapshots around indefinitely, the thinpools
are probably the way to go. (What happens when you fill those up?
Hopefully it "freezes" the pool rather than losing everything.)
My
Using a classic snapshot for backup does not normally involve activating
a large CoW. I generally create a smallish snapshot (a few gigs) that
will not fill up during the backup process. If for some reason a
snapshot were to fill up before backup completion, reads from the
snapshot get I/O
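A sketch of that backup pattern with a classic snapshot; names and sizes
are placeholders:

    # small CoW area - only blocks changed during the backup consume it
    lvcreate -s -L 5G -n data_snap vg/data
    mount -o ro /dev/vg/data_snap /mnt/snap
    rsync -a /mnt/snap/ /backup/data/
    umount /mnt/snap
    lvremove -y vg/data_snap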
Gionatan Danti wrote on 13-04-2017 12:20:
Hi,
anyone with other thoughts on the matter?
I wondered why a single thin LV does work for you in terms of not
wasting space or being able to make more efficient use of "volumes" or
client volumes or whatever.
But a multitude of thin volumes
On 06/04/2017 16:31, Gionatan Danti wrote:
Hi all,
I'm seeking some advice for a new virtualization system (KVM) on top of
LVM. The goal is to take agentless backups via LVM snapshots.
In short: what do you suggest for snapshotting a quite big (8+ TB) volume?
Classic LVM (with old snapshot behavior) or
Hi
Agentless snapshots of the VM server might be an issue with applications
running in the VM guest OS.
Especially as there are no VSS-like features on Linux.
Perhaps someone can introduce a udev listener that can be used?
On 6 Apr 2017 16:32, "Gionatan Danti" wrote:
> Hi
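The closest stock-Linux equivalent is freezing the filesystem around the
snapshot (fsfreeze from util-linux; qemu-guest-agent's fs-freeze does the
same inside guests). A minimal sketch with placeholder paths:

    fsfreeze -f /srv/vmdata          # quiesce: flush and block new writes
    lvcreate -s -L 5G -n vmdata_snap vg/vmdata
    fsfreeze -u /srv/vmdata          # thaw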
On 07-04-2017 15:50 L A Walsh wrote:
Gionatan Danti wrote:
I'm more concerned about lengthy snapshot activation due to a big,
linear CoW table that must be read completely...
---
What is 'big'? Are you just worried about the IO time?
If that's the case, much will depend on your HW.
On 07-04-2017 10:19 Mark Mielke wrote:
I found classic LVM snapshots to suffer terrible performance. I
switched to BTRFS as a result, until LVM thin pools became a real
thing, and I happily switched back.
So you are now on lvmthin? Can I ask on what pool/volume/filesystem
size?
I
On Thu, Apr 6, 2017 at 10:31 AM, Gionatan Danti wrote:
> I'm seeking some advice for a new virtualization system (KVM) on top of
> LVM. The goal is to take agentless backups via LVM snapshots.
>
> In short: what you suggest to snapshot a quite big (8+ TB) volume? Classic