On 29/04/2016 10:44, Marek Podmaka wrote:
Hello Xen,
Now I'm not sure what your use-case for thin pools is.
I don't see it as very useful if the presented space is smaller than the
available physical space. In that case I can just use plain LVM with
PV/VG/LV. For snapshots you don't care much, as if
On 18-05-2016 15:47 Gionatan Danti wrote:
One question: I did some test (on another machine), deliberately
killing/stopping the lvmetad service/socket. When the pool was almost
full, the following entry was logged in /var/log/messages
WARNING: Failed to connect to lvmetad. Falling back
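On lvm2 versions that still ship lvmetad, the fallback (and its warning) can be avoided by disabling the daemon entirely, so the tools always scan disks directly. A minimal sketch, assuming a systemd distribution such as the CentOS 7 box discussed in this thread; unit names may differ:

```shell
# 1. In /etc/lvm/lvm.conf, global section, turn the daemon off:
#        use_lvmetad = 0
# 2. Then stop and disable the daemon and its socket so nothing restarts it:
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
systemctl disable lvm2-lvmetad.service lvm2-lvmetad.socket
```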
On 17/05/2016 15:48, Zdenek Kabelac wrote:
Yes - in general - you've witnessed a general tool failure,
and dmeventd is not 'smart' enough to recognize the reason for the failure.
Normally this 'error' should not happen.
And while I'd even say there could have been a 'shortcut'
without even reading VG
Hi all,
using thin provisioning in production machines (using it mostly for its
fast snapshot support, rather than for thin provision / storage
overcommit by itself), I wonder what to do if a critical metadata
corruption, such as the loss of the superblock, should happen.
Filesystems generally
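For the superblock-loss scenario above, the usual recovery path goes through lvconvert --repair, which drives thin_repair from the device-mapper-persistent-data package. A hedged sketch, with made-up VG/pool names:

```shell
# The pool must be inactive while its metadata is repaired.
lvchange -an vg/thinpool

# Automatic route: lvm reads the damaged _tmeta, writes a repaired copy
# to the pmspare LV and swaps it in (old metadata is kept aside).
lvconvert --repair vg/thinpool

# Low-level route, if you need to drive the tools by hand:
#   thin_check  /dev/mapper/vg-thinpool_tmeta
#   thin_repair -i /dev/mapper/vg-thinpool_tmeta -o /dev/mapper/vg-spare
```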
Hi list,
I had an unexpected filesystem unmount on a machine where I am using
thin provisioning.
It is a CentOS 7.2 box (kernel 3.10.0-327.3.1.el7,
lvm2-2.02.130-5.el7_2.1), with the current volumes situation:
# lvs -a
LV VG Attr LSize Pool Origin
> On 18/11/2016 04:43, Marian Csontos wrote:
Hi, the warning was added only recently - commit c0912af3 added 2016-01-22:
https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=c0912af3104cb72ea275d90b8b1d68a25a9ca48a
Ok, this explains why I only recently saw this warning
Were the partitions
On 20/03/2017 12:01, Zdenek Kabelac wrote:
As said - please try with HEAD - and report back if you still see a
problem.
There were a couple of issues fixed along this path.
Ok, I tried now with tools and library from git:
LVM version: 2.02.169(2)-git (2016-11-30)
Library version:
On 20/03/2017 10:51, Zdenek Kabelac wrote:
Please check upstream behavior (git HEAD)
It will still take a while before final release so do not use it
regularly yet (as few things still may change).
I will surely try with git head and report back here.
Not sure for which other comment you
Hi all,
any comments on the report below?
Thanks.
On 09/03/2017 16:33, Gionatan Danti wrote:
On 09/03/2017 12:53, Zdenek Kabelac wrote:
Hmm - it would be interesting to see your 'metadata' - 128M of metadata
for 512G should still be quite a good fit when you are not using
snapshots
On 07-04-2017 10:19 Mark Mielke wrote:
I found classic LVM snapshots to suffer terrible performance. I
switched to BTRFS as a result, until LVM thin pools became a real
thing, and I happily switched back.
So you are now on lvmthin? Can I ask on what pool/volume/filesystem
size?
I
On 06/04/2017 16:31, Gionatan Danti wrote:
Hi all,
I'm seeking some advice for a new virtualization system (KVM) on top of
LVM. The goal is to take agentless backups via LVM snapshots.
In short: what do you suggest for snapshotting a quite big (8+ TB) volume?
Classic LVM (with old snapshot behavior
On 13-04-2017 14:41 Xen wrote:
See, you only compared multiple non-thin with a single-thin.
So my question is:
did you consider multiple thin volumes?
Hi, the multiple-thin-volume solution, while being very flexible, is not
well understood by libvirt and virt-manager. So I need to
On 13-04-2017 16:33 Zdenek Kabelac wrote:
Hello
Just let's repeat.
Full thin-pool is NOT in any way comparable to full filesystem.
A full filesystem has ALWAYS room for its metadata - it's not pretending
it's bigger - it has 'finite' space and expects this space to just BE
there.
Now when
On 14-04-2017 10:24 Zdenek Kabelac wrote:
But it's currently impossible to expect you will fill the thin-pool to
full capacity and everything will continue to run smoothly - this is
not going to happen.
Even with EXT4 and errors=remount-ro?
However there are many different solutions
On 14-04-2017 11:37 Zdenek Kabelac wrote:
The problem is not with 'stopping' access - but with gaining the access
back.
So in this case - you need to run 'fsck' - and this fsck usually needs
more space - and the complexity starts with - where to get this space.
In the 'most trivial' case
On 13-04-2017 14:59 Stuart Gathman wrote:
Using a classic snapshot for backup does not normally involve
activating
a large CoW. I generally create a smallish snapshot (a few gigs),
that
will not fill up during the backup process. If for some reason, a
snapshot were to fill up before
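The classic-snapshot backup cycle described above can be sketched roughly as follows (volume names and sizes are examples):

```shell
# A smallish CoW snapshot: only blocks rewritten on the origin during
# the backup consume snapshot space.
lvcreate --snapshot --size 5G --name lv_data_snap vg/lv_data

# Mount the frozen image read-only and copy it off.
mount -o ro /dev/vg/lv_data_snap /mnt/snap
rsync -a /mnt/snap/ /backup/lv_data/

# Tear down: the CoW space is released with the snapshot.
umount /mnt/snap
lvremove -y vg/lv_data_snap
```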
On 14-04-2017 10:24 Zdenek Kabelac wrote:
However there are many different solutions for different problems -
and with current script execution - user may build his own solution -
i.e. call
'dmsetup remove -f' for running thin volumes - so all instances get
'error' device when pool is
On 07-04-2017 15:50 L A Walsh wrote:
Gionatan Danti wrote:
I'm more concerned about lengthy snapshot activation due to a big,
linear CoW table that must be read completely...
---
What is 'big'? Are you just worried about the IO time?
If that's the case, much will depend on your HW
On 24-04-2017 23:59 Zdenek Kabelac wrote:
If you set '--errorwhenfull y' - it should instantly fail.
It's my understanding that "--errorwhenfull y" will instantly fail
writes which imply new allocation requests, but writes to
already-allocated space will be completed.
It is
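That per-pool behavior can be switched and inspected like this (pool name is an example):

```shell
# Fail writes needing new allocation immediately when the pool is full,
# instead of queueing them for the default 60-second timeout:
lvchange --errorwhenfull y vg/thinpool

# Verify the setting: lv_when_full reports 'error' or 'queue'.
lvs -o lv_name,lv_when_full vg/thinpool
```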
On 26-04-2017 09:42 Zdenek Kabelac wrote:
At this moment it's not possible.
I do have some plans/ideas for how to work around this in user-space but
it's non-trivial - especially on the recovery path.
It would be possible to 'reroute' thin to dm-delay and then write path
to error and read path leave
On 26/04/2017 13:23, Zdenek Kabelac wrote:
You need to use 'direct' write mode - otherwise you are just witnessing
issues related to 'page-cache' flushing.
Every update of a file means an update of the journal - so you surely can lose
some data in-flight - but every good piece of software needs to flush
On 26-04-2017 16:33 Zdenek Kabelac wrote:
But you get correct 'write' error - so from application POV - you get
failing
transaction update/write - so app knows 'data' were lost and should
not proceed with next transaction - so it's in line with 'no data is
lost' and filesystem is not
On 15/05/2017 14:50, Zdenek Kabelac wrote:
Hi
I still think you are mixing apples & oranges together and you're expecting
the answer '42' :)
'42' would be the optimal answer :p
There is simply NO simple answer. Every case has its pros & cons.
There are simply cases where XFS beats Ext4 and there
On 16/05/2017 12:54, Zdenek Kabelac wrote:
Hi
Somehow I think you've made a mistake during your test (or you
have a buggy kernel). Can you take a full log of your test showing all
options are
properly applied
i.e. dmesg log + /proc/self/mountinfo report showing all options used
for
On 02/05/2017 13:00, Gionatan Danti wrote:
On 26/04/2017 18:37, Gionatan Danti wrote:
True, but the case exists that, even on a full pool, an application with
multiple outstanding writes will have some of them completed/committed
while others get I/O errors, as writes to already allocated space
On 15/05/2017 17:33, Zdenek Kabelac wrote:
> Ever tested this:
mount -o errors=remount-ro,data=journal ?
Yes, I tested it - same behavior: a full thinpool does *not* immediately
put the filesystem in a read-only state, even when using sync/fsync and
"errorwhenfull=y".
So, it seems EXT4
On 26/04/2017 18:37, Gionatan Danti wrote:
True, but the case exists that, even on a full pool, an application with
multiple outstanding writes will have some of them completed/committed
while others get I/O errors, as writes to already allocated space are
permitted while writes to non-allocated
Hi Zdenek,
thanks for pointing me to thin_delta - very useful utility. Maybe I can
code around it...
As an additional question, is direct lvm2 support for send/receive
planned, or not?
Thanks.
On 05/06/2017 10:44, Zdenek Kabelac wrote:
On 5.6.2017 at 10:11 Gionatan Danti wrote:
Hi
On 14/09/2017 11:37, Zdenek Kabelac wrote:
Sorry my typo here - is NOT ;)
Zdenek
Hi Zdenek,
as the only variable is the LVM volume type (fat/thick vs thin), why is the
thin volume slower than the thick one?
I mean: all other things being equal, what is holding back the thin volume?
Il 15-09-2017 04:06 Brassow Jonathan ha scritto:
We probably won’t be able to provide any highly refined scripts that
users can just plug in for the behavior they want, since they are
often so highly specific to each customer. However, I think it will
be useful to try to create better tools so
Hi all,
being a heavy snapshot user, I wonder if (and how) to silence the
following warning:
"WARNING: Sum of all thin volume sizes (2.00 GiB) exceeds the size of
thin pool cl_gdanti-lenovo/thinpool (1.00 GiB)!"
I fully understand the reasoning for this warning, but snapshots are
almost
On 18-09-2017 20:55 David Teigland wrote:
It's definitely an irritation, and I described a configurable
alternative
here that has not yet been implemented:
https://bugzilla.redhat.com/show_bug.cgi?id=1465974
Is this the sort of topic where we should start making use of
On 18/09/2017 23:10, matthew patton wrote:
If the warnings are not being emitted to STDERR then that needs to be fixed
right off the bat.
The lines with WARNINGs are written to STDERR, at least on recent LVM
versions.
'lvs -q blah' should squash any warnings.
'lvcreate' frankly shouldn't
On 18/09/2017 22:08, Zdenek Kabelac wrote:
We can possibly print the WARNING message only for the 1st thin LV which
causes overprovisioning of the thin-pool.
This should be ok, but it would not cover all use cases. For example, a
thin snapshot taken, and removed, for backup purposes would
On 17-09-2017 09:10 Xen wrote:
Xen wrote on 17-09-2017 8:31:
But if it's not active I don't see why it should be critical or why
you should reserve space for it to be honest...
Xen, I really think that the combination of hard-threshold obtained by
setting
On 19-09-2017 17:30 Zdenek Kabelac wrote:
The main purpose of the Warning is to really warn the user that there is NO
auto-extension of the thin-pool configured (so no threshold is set up) - so
the thin-pool is not set up in the 'preferred' way (so when the user counts
on the fact that the thin-pool can grow and
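The 'preferred' setup alluded to here - a monitored pool that grows itself - is configured in lvm.conf; a sketch with example thresholds:

```shell
# Excerpt for /etc/lvm/lvm.conf (activation section). With monitoring on,
# dmeventd extends the pool by 20% whenever usage crosses 70%.
activation {
    thin_pool_autoextend_threshold = 70
    thin_pool_autoextend_percent = 20
}
```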
Hi list,
as by the subject: is it possible to reserve space for specific thin
logical volumes?
This can be useful to "protect" critical volumes from having their space
"eaten" by other, potentially misconfigured, thin volumes.
Another, somewhat more convoluted, use case is to prevent
On 08-09-2017 12:35 Gionatan Danti wrote:
Hi list,
as by the subject: is it possible to reserve space for specific thin
logical volumes?
This can be useful to "protect" critical volumes from having their
space "eaten" by other, potentially misconfigured, thin volumes
On 12-09-2017 00:56 matthew patton wrote:
with the obvious caveat that in ZFS the block layer and the file
layers are VERY tightly coupled. LVM and the block layer see
eye-to-eye but ext4 et al. have absolutely (almost?) no clue what's
going on beneath it and thus LVM is making (false)
On 13-09-2017 00:16 Zdenek Kabelac wrote:
On 12.9.2017 at 23:36 Gionatan Danti wrote:
On 12-09-2017 21:44 matthew patton wrote:
Again, please don't speak about things you don't know.
I am *not* interested in thin provisioning itself at all; on the other
side, I find CoW
There could be a simple answer and a complex one :)
I'd start with the simple one - already presented here -
when you write to an INDIVIDUAL thin volume target - the respective dm thin
target DOES manipulate a single btree set - it does NOT care that there
are some other snapshots and never influences them -
On 13-09-2017 10:15 Zdenek Kabelac wrote:
Ohh this is a pretty major constraint ;)
Sure :p
Sorry for not explicitly stating that before.
But as pointed out multiple times - with scripting around various
fullness moments of thin-pool - several different actions can be
programmed around,
On 13-09-2017 01:22 matthew patton wrote:
Step-by-step example:
> - create a 40 GB thin volume and subtract its size from the thin
pool (USED 40 GB, FREE 60 GB, REFER 0 GB);
> - overwrite the entire volume (USED 40 GB, FREE 60 GB, REFER 40 GB);
> - snapshot the volume (USED 40 GB, FREE
Hi Jonathan,
On 13-09-2017 01:25 Brassow Jonathan wrote:
Hi,
I’m the manager of the LVM/DM team here at Red Hat. Let me thank
those of you who have taken the time to share how we might improve LVM
thin-provisioning. We really do appreciate it and your ideas are
welcome.
I see merit in
On 13-09-2017 01:31 Zdenek Kabelac wrote:
It's not just about 'complexity' in the framework.
You would lose all the speed as well.
You would significantly raise-up memory requirement.
There is very good reason complex tools like 'thin_ls' are kept in
user-space outside of kernel - with
On 13-09-2017 10:44 Zdenek Kabelac wrote:
Forcible removal (with some reasonable locking - so that i.e. 2 processes
are not playing with the same device :) 'dmsetup remove --force' replaces the
existing device with an 'error' target (with built-in noflush)
Anyway - if you see a reproducible problem
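The forcible-removal escape hatch mentioned above looks like this (device name is an example; note device-mapper names join VG and LV with a dash):

```shell
# Replace the live thin LV mapping with the 'error' target: every further
# I/O fails immediately instead of hanging against a full pool.
# --force implies noflush, so queued writes are dropped, not flushed.
dmsetup remove --force vg-thinvol
```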
On 11-09-2017 12:35 Zdenek Kabelac wrote:
The first question here is - why do you want to use thin-provisioning ?
Because classic LVM snapshot behavior (slow write speed and linear
performance decrease as snapshot count increases) makes them useful for
nightly backups only.
On the
On 12/09/2017 13:01, Zdenek Kabelac wrote:
There is a very good reason why thinLV is fast - when you work with a thinLV -
you work only with the data-set for a single thin LV.
So you write to the thinLV and either you modify an existing exclusively owned
chunk
or you duplicate and provision a new one. Single
On 12/09/2017 14:03, Zdenek Kabelac wrote:
> # lvs -a
  LV              VG Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  [lvol0_pmspare] vg ewi---      2,00m
  lvol1           vg Vwi-a-tz-- 20,00m pool         40,00
  pool            vg twi-aotz-- 10,00m
On 12-09-2017 21:44 matthew patton wrote:
Wrong: Qemu/KVM *does* honor write barriers, unless you use
"cache=unsafe".
seems the default is now 'unsafe'.
http://libvirt.org/formatdomain.html#elementsDisks
The default is cache=none
Only if the barrier frame gets passed along by KVM
On 12/09/2017 13:46, Zdenek Kabelac wrote:
What's wrong with BTRFS
Either you want fs & block layer tied together - that's the btrfs/zfs
approach
BTRFS really has a ton of performance problems - please, don't recommend
it for anything IO intensive (such as virtual machines and databases).
On 13/11/2017 16:20, Zdenek Kabelac wrote:
Are you talking about RH bug 1388632?
https://bugzilla.redhat.com/show_bug.cgi?id=1388632
Unfortunately I can only view the google-cached version of the bugzilla
page, since the bug is restricted to internal view only.
that could be a similar issue
On 19-10-2017 13:45 Alexander 'Leo' Bergolth wrote:
On 10/17/2017 03:45 PM, Alexander 'Leo' Bergolth wrote:
I just tested lv activation with a degraded raid1 thin pool.
Unfortunately it looks like activation mode=degraded only works for
plain raid1 lvs. If you add a thin pool, lvm won't
On 30-10-2017 18:04 David Teigland wrote:
On Mon, Oct 30, 2017 at 02:06:45PM +0800, Eric Ren wrote:
Hi all,
Sometimes, I see the following message in the VG metadata backups
under
/etc/lvm/archive:
"""
contents = "Text Format Volume Group"
version = 1
description = "Created *before*
On 24-06-2018 21:18 Ryan Launchbury wrote:
In testing, forcibly removing the cache, via editing the LVM config
file has caused extensive XFS filesystem corruption, even when backing
up the metadata first and restoring after the cache device is missing.
Any advice on how to safely uncache
On 25-06-2018 19:20 Ryan Launchbury wrote:
Hi Gionatan,
The system with the issue is with writeback cache mode enabled.
Best regards,
Ryan
Ah, I was under the impression that it was a writethrough cache.
Sorry for the noise.
--
Danti Gionatan
Technical Support
Assyoma S.r.l. -
Hi list,
I wonder if a method exists to have a >16 GB thin metadata volume.
When using a 64 KB chunksize, a maximum of ~16 TB can be addressed in a
single thin pool. The obvious solution is to increase the chunk size, as
128 KB chunks are good for over 30 TB, and so on. However, increasing
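Those figures can be sanity-checked with shell arithmetic. The ~2^28 addressable-chunk ceiling used below is an assumption inferred from the 64 KiB -> ~16 TiB figure in this thread, not an official constant:

```shell
# Max pool size = chunk size x assumed max chunk count (2^28).
max_chunks=$((1 << 28))
for chunk_kib in 64 128 256; do
    tib=$((chunk_kib * max_chunks / 1024 / 1024 / 1024))
    echo "chunk ${chunk_kib}KiB -> pool limit ~${tib} TiB"
done
```

Consistent with the thread: 64 KiB chunks give 16 TiB, and 128 KiB chunks give 32 TiB ("good for over 30 TB").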
On 22-06-2018 22:13 Zdenek Kabelac wrote:
Addressing is internally limited to use a lower amount of bits.
Usage of memory resources, efficiency.
ATM we do not recommend using a cache with more than 1,000,000 chunks
for better efficiency reasons, although on bigger machines a bigger
amount of
On 22-06-2018 22:07 Zdenek Kabelac wrote:
When the cache experiences a write error - it will become invalidated and
will need to be dropped - but this is not automated ATM - so admin
work is needed to handle this task.
So, if a writethrough cache experiences write errors but the
On 20-06-2018 12:15 Zdenek Kabelac wrote:
Hi
Aren't there any kernel write errors in your 'dmesg'?
An LV becomes fragile if the devices associated with the cache are having HW
issues (disk read/write errors)
Zdenek
Is that true even when using a writethrough cache mode?
Hi all,
is there any method to preallocate space for a specific thin volume,
other than writing to it? If not, is it on the todo list?
I know a similar question was raised in the past, but it was in the
different context of reserved space - i.e. to be "sure" about a volume
not filling up.
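Absent a dedicated command, the "writing to it" workaround is usually a one-off direct-I/O fill, which forces the pool to allocate every chunk up front (names are examples; this destroys any data on the LV):

```shell
# Touch every chunk once so allocation happens now, not at first use.
# Direct I/O avoids the page cache; fsync makes completion explicit.
dd if=/dev/zero of=/dev/vg/thinvol bs=1M oflag=direct conv=fsync
```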
On 07-02-2018 21:37 Xen wrote:
This is the reason for the problems but LVM released bad products all
the same with the solution being to not use them for very long, or
rather, to upgrade.
Yes Ubuntu runs a long time behind and Debian also.
As a user, I can't help that, upgrading LVM just
On 15-07-2018 21:47 Zdenek Kabelac wrote:
Hi Zdenek,
Hi
Try to open BZ like i.e. this one:
https://bugzilla.redhat.com/show_bug.cgi?id=1532071
this is quite scary, especially considering no updates on the ticket in
recent months. How did the OP solve the issue?
Add all possible
Hi all,
On 28-02-2018 10:26 Zdenek Kabelac wrote:
Overprovisioning at the DEVICE level simply IS NOT equivalent to a full
filesystem like you would like to see all the time here, and you've
already been told many times that filesystems are simply not
ready there - fixes are ongoing but it
On 28-02-2018 22:43 Zdenek Kabelac wrote:
By default - a full pool starts to 'error' all 'writes' in 60 seconds.
Based on what I remember, and what you wrote below, I think "all writes"
in the context above means "writes to unallocated areas", right? Because
even a full pool can write to
On 01/03/2018 09:31, Zdenek Kabelac wrote:
If the tool wanted to write 1 sector to a 256K chunk that needed
provisioning,
and provisioning was not possible - after reboot - you will still see
the 'old' content.
In the case of a filesystem that does not stop upon the 1st failing write,
you then can see
On 01/03/2018 12:23, Zdenek Kabelac wrote:
In general - for extX it's remount read-only upon error - which works
for journaled metadata - if you want the same protection for 'data' you
need to switch to the rather expensive data journaling mode.
For XFS there is now similar logic where a write error
On 01/03/2018 17:00, Zdenek Kabelac wrote:
metadata snapshot 'just consumes' thin-pool metadata space,
at any time there can be only 1 snapshot - so before next usage
you have to drop the existing one.
So IMHO it should have no other effects unless you hit some bugs...
Mmm... does it mean
On 27/03/2018 10:30, Zdenek Kabelac wrote:
Hi
Well, just at first look - 116MB of metadata for 7.21TB is a *VERY*
small size. I'm not sure what the data 'chunk-size' is - but you will
need to extend the pool's metadata considerably sooner or later - I'd
suggest at least 2-4GB for this data
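For sizing questions like this, the device-mapper-persistent-data package ships an estimator; a sketch with numbers approximating the thread's 7.21 TB pool (chunk size and thin count are assumed values):

```shell
# Estimate metadata for ~7.4 TiB of data, 64 KiB chunks, up to 1000
# thin volumes/snapshots, result reported in GiB:
thin_metadata_size --block-size=64k --pool-size=7400g --max-thins=1000 --unit=g
```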
On 27/03/2018 12:18, Zdenek Kabelac wrote:
The tool for size estimation gives a 'rough' first guess/first choice
number.
The metadata usage is based on real-world data manipulation - so while
it's relatively easy to 'cap' a single thin LV's metadata usage - once
there is a lot of sharing
On 27/03/2018 12:39, Zdenek Kabelac wrote:
Hi
I forgot to mention there is a "thin_ls" tool (it comes with the
device-mapper-persistent-data package, with thin_check) - for those who
want to know the precise amount of allocation and what amount of blocks is
owned exclusively by a single thinLV and
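The thin_ls flow needs the pool's (single) metadata snapshot while the pool stays live; a hedged sketch with hypothetical device names, fields picked from thin_ls's output options:

```shell
# Reserve the one metadata snapshot the pool supports...
dmsetup message vg-thinpool-tpool 0 reserve_metadata_snap

# ...read per-thin usage from it, including exclusively owned blocks...
thin_ls --metadata-snap -o DEV,MAPPED_BLOCKS,EXCLUSIVE_BLOCKS,SHARED_BLOCKS \
    /dev/mapper/vg-thinpool_tmeta

# ...and always release it afterwards.
dmsetup message vg-thinpool-tpool 0 release_metadata_snap
```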
On 27/03/2018 12:58, Gionatan Danti wrote:
Mmm no, I am caring for the couple MBs themselves. I was concerned about
the possibility to get a full metadata device by writing far less data
than expected. But I now get the point.
Sorry, I really meant "I am NOT caring for the coupl
On 04-03-2018 21:53 Zdenek Kabelac wrote:
On the other hand, all common filesystems in Linux were always written
to work on a device where the space is simply always there. So all
core algorithms simply never counted on something like
'thin-provisioning' - this is almost 'fine' since
On 05/03/2018 11:18, Zdenek Kabelac wrote:
Yes - it has been updated/improved/fixed - and I've already given you a
link where you can configure the behavior of XFS when i.e. device
reports ENOSPC to the filesystem.
Sure - I already studied it months ago during my testing. I simply was
under
On 19/10/2018 12:58, Zdenek Kabelac wrote:
Hi
Writecache simply doesn't care about caching your reads at all.
Your RAM with its page caching mechanism keeps read data as long as
there is free RAM for this - the less RAM goes to the page cache - the
fewer read operations remain cached.
Hi, does it
On 19-10-2018 15:08 Zdenek Kabelac wrote:
Hi
It's rather that different workloads take benefit from different
caching approaches.
If your system is heavy on writes - dm-writecache is what you want;
if you mostly read - dm-cache will win.
That's why there is dmstats to also help
On 19/10/2018 11:12, Zdenek Kabelac wrote:
And final note - there is upcoming support for accelerating writes with
new dm-writecache target.
Hi, shouldn't it already be possible with the current dm-cache and
writeback caching?
Thanks.
On 15-12-2018 18:59 Giuseppe Vacanti wrote:
- pvscan does not report /dev/sdc
- pvdisplay does not seem to know about this PV
pvdisplay /dev/sdc
Failed to find physical volume "/dev/sdc"
Can you show the output of "lsblk" and "pvscan -vvv" ?
Thanks.
Hi list,
in BZ 1643651 I read:
"In 7.6 (and 8.0), lvm began using a new i/o layer (bcache)
to read and write data blocks."
Last time I checked, bcache was a completely different caching layer,
unrelated from LVM. The above quote, instead, implies that bcache is now
actively used by LVM.
Am
On 30/11/2018 10:52, Zdenek Kabelac wrote:
Hi
The name of the i/o layer bcache is only internal to lvm2 code, for caching
reads from disks during disk processing - the name comes from the usage
of a bTree and caching - thus the name bcache.
It's not a dm target - so nothing you could use for LVs.
And
Hi all,
doing some tests on a 4-bay, entry-level NAS/SAN system, I discovered
it is entirely based on lvm thin volumes.
On configuring what it calls "thick volumes" it creates a new thin
logical volume and pre-allocates all space inside the new volume.
What surprised me is the speed at
On 03-06-2019 15:23 Joe Thornber wrote:
On Fri, May 31, 2019 at 03:13:41PM +0200, Gionatan Danti wrote:
- does standard lvmthin support something similar? If not, how do you
see a
zero coalesce/compression/trim/whatever feature?
There isn't such a feature as yet.
Ok, so the NAS
On 13-06-2019 18:05 Ilia Zykov wrote:
Hello.
Tell me please, how can I get the maximum address used by a virtual
disk
(disk created with -V VirtualSize). I have several large virtual disks,
but they use only a small part at the beginning of the disk. For
example:
# lvs
LV VG
On 13-05-2019 10:26 Zdenek Kabelac wrote:
Hi
There is no technical problem enabling caching of a cached volume (aka
converting the cache_cdata LV into another 'cached' volume).
And as long as there are not errors anywhere - it works.
Difficulty comes with solving error cases - and that's the main
On 23-08-2019 14:47 Zdenek Kabelac wrote:
Ok - a serious disk error might eventually lead to irreparable metadata
content - since if you lose some root b-tree node sequence it might be
really hard
to get something sensible (that's the reason why the metadata should be
located
on some
Hi all,
I have a (virtual) block device which does not expose any io scheduler:
[root@localhost block]# cat /sys/block/zd0/queue/scheduler
none
I created an lvm volume on top of that block device with:
[root@localhost ~]# pvcreate /dev/zd0
Physical volume "/dev/zd0" successfully created.
On 31-07-2019 12:16 Zdenek Kabelac wrote:
That it appears to work on a system with a single disk really doesn't
make it 'clearly working' - we are providing a solution for heavily loaded
servers based on thousands of disks as well, so there is not much wish
to 'hack in' an occasionally-working fix.
On 09/12/19 11:26, Daniel Janzon wrote:
Exactly. The md driver executes on a single core, but with a bunch of RAID5s
I can distribute the load over many cores. That's also why I cannot join the
bunch of RAID5's with a RAID0 (as someone suggested) because then again
all data is pulled through a
On 08-12-2019 00:14 Stuart D. Gathman wrote:
On Sat, 7 Dec 2019, John Stoffel wrote:
The biggest harm to performance here is really the RAID5, and if you
can instead move to RAID 10 (mirror then stripe across mirrors) then
you should see a performance boost.
Yeah, That's what I do.
On 23/10/19 12:46, Zdenek Kabelac wrote:
Just a few 'comments' - it's not really comparable - the efficiency of
thin-pool metadata outperforms old snapshots in a BIG way (there is no
point talking about snapshots that take just a couple of MiB)
Yes, this matches my experience.
There is also BIG
On 23/10/19 15:05, Zdenek Kabelac wrote:
Yep - we are recommending disabling zeroing as soon as chunksize >512K.
But for 'security' reasons the option is up to users, to select what
fits their needs in the best way - there is no 'one solution fits them
all' in this case.
Sure, but again: if
On 23/10/19 14:59, Zdenek Kabelac wrote:
On 23. 10. 19 at 13:08 Gionatan Danti wrote:
Talking about thin snapshot, an obvious performance optimization which
seems to not be implemented is to skip reading source data when
overwriting in larger-than-chunksize blocks.
Hi
Hi,
On 22-10-2019 18:15 Stuart D. Gathman wrote:
"Old" snapshots are exactly as efficient as thin when there is exactly
one. They only get inefficient with multiple snapshots. On the other
hand, thin volumes are as inefficient as an old LV with one snapshot.
An old LV is as efficient,
On 23-10-2019 17:37 Zdenek Kabelac wrote:
Hi
If you use a 1MiB chunksize for the thin-pool and you use 'dd' with a
proper bs size
and you write 'aligned' on a 1MiB boundary (be sure you use directIO,
so you are not a victim of some page cache flushing...) - there should
not be any useless read.
On 23-10-2019 00:53 Stuart D. Gathman wrote:
If you can find all the leaf nodes belonging to the root (in my btree
database they are marked with the root id and can be found by
sequential
scan of the volume), then reconstructing the btree data is
straightforward - even in place.
I
On 2020-02-22 12:58 Eric Toombs wrote:
So, is there a sort of "dumber" way of making these snapshots, maybe by
changing the allocation algorithm or something?
Hi, I think that total snapshot creation time is dominated by LVM
flushing its (meta)data to the physical disks. Two things to
On 2020-02-15 21:19 Zdenek Kabelac wrote:
IMHO ZFS is 'somewhat' slow to play with...
and I've no idea how ZFS can resolve all correctness issues in
kernel...
Zdenek
Oh, it surely does *not* solve all correctness issues. Rather, having
much simpler constraints (and use cases), it
On 2020-02-15 21:49 Chris Murphy wrote:
Are you referring to this known problem?
https://btrfs.wiki.kernel.org/index.php/Gotchas#Block-level_copies_of_devices
Yes.
By default the snapshot LV isn't active, so the problem doesn't
happen. I've taken many LVM thinp snapshots of Btrfs file
On 2020-02-14 21:40 David Teigland wrote:
You're right, filters are difficult to understand and use correctly.
The
complexity and confusion in the code is no better. With the removal of
lvmetad in 2.03 versions (e.g. RHEL8) there's no difference between
filter
and global_filter, so
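For completeness, a filter excerpt of the kind under discussion (device patterns are examples; in 2.03, with lvmetad gone, filter and global_filter act the same):

```shell
# Excerpt for /etc/lvm/lvm.conf, devices section.
devices {
    # First matching pattern wins: accept the two SSDs, reject the rest.
    filter = [ "a|^/dev/sda$|", "a|^/dev/sdb$|", "r|.*|" ]
}
```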