On Tue, February 20, 2018 4:33 pm, Momchil Ivanov wrote:
> On Tue, February 20, 2018 3:42 pm, Marian Marinov wrote:
>> Your proposal is not bad, but it is many times more complex and will,
>> unfortunately, seriously hit the client's write performance.
>> Think about it: instead of writing straight to disk, a write of the
>> same size would happen first :(
>
> At first glance that is exactly what happens when you select
>
> LV Zero new blocks     yes
>
> see for example [1] and in particular [2]. I suppose it needs a closer
> look to verify that it happens everywhere it should.
>
> 1:
> https://elixir.bootlin.com/linux/latest/source/drivers/md/dm-thin.c#L1248
> 2:
> https://elixir.bootlin.com/linux/latest/source/drivers/md/dm-thin.c#L1310
>
> Regards,
> Momchil
>

Hmm, that link shows me different code when I open it on my laptop; strange.
I am referring to the following piece of code from drivers/md/dm-thin.c:

/*
 * A partial copy also needs to zero the uncopied region.
 */
static void schedule_copy(struct thin_c *tc, dm_block_t virt_block,
                          struct dm_dev *origin, dm_block_t data_origin,
                          dm_block_t data_dest,
                          struct dm_bio_prison_cell *cell, struct bio *bio,
                          sector_t len)
{
        int r;
        struct pool *pool = tc->pool;
        struct dm_thin_new_mapping *m = get_next_mapping(pool);

        m->tc = tc;
        m->virt_begin = virt_block;
        m->virt_end = virt_block + 1u;
        m->data_block = data_dest;
        m->cell = cell;

        /*
         * quiesce action + copy action + an extra reference held for the
         * duration of this function (we may need to inc later for a
         * partial zero).
         */
        atomic_set(&m->prepare_actions, 3);

        if (!dm_deferred_set_add_work(pool->shared_read_ds, &m->list))
                complete_mapping_preparation(m); /* already quiesced */

        /*
         * IO to pool_dev remaps to the pool target's data_dev.
         *
         * If the whole block of data is being overwritten, we can issue the
         * bio immediately. Otherwise we use kcopyd to clone the data first.
         */
        if (io_overwrites_block(pool, bio))
                remap_and_issue_overwrite(tc, bio, data_dest, m);
        else {
                struct dm_io_region from, to;

                from.bdev = origin->bdev;
                from.sector = data_origin * pool->sectors_per_block;
                from.count = len;

                to.bdev = tc->pool_dev->bdev;
                to.sector = data_dest * pool->sectors_per_block;
                to.count = len;

                r = dm_kcopyd_copy(pool->copier, &from, 1, &to,
                                   0, copy_complete, m);
                if (r < 0) {
                        DMERR_LIMIT("dm_kcopyd_copy() failed");
                        copy_complete(1, 1, m);

                        /*
                         * We allow the zero to be issued, to simplify the
                         * error path.  Otherwise we'd need to start
                         * worrying about decrementing the prepare_actions
                         * counter.
                         */
                }

                /*
                 * Do we need to zero a tail region?
                 */
                if (len < pool->sectors_per_block && pool->pf.zero_new_blocks) {
                        atomic_inc(&m->prepare_actions);
                        ll_zero(tc, m,
                                data_dest * pool->sectors_per_block + len,
                                (data_dest + 1) * pool->sectors_per_block);
                }
        }

        complete_mapping_preparation(m); /* drop our ref */
}
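Whether a given pool actually zeroes new blocks can also be checked at the
device-mapper level: when zeroing is disabled, the thin-pool target line
reported by dmsetup carries the skip_block_zeroing feature flag. A sketch,
assuming a hypothetical pool device named vg0-pool-tpool (requires root;
the numeric columns below are illustrative, not real output):

```shell
# Table format: start length thin-pool <metadata_dev> <data_dev> \
#   <data_block_size> <low_water_mark> <#features> [features...]

# With zeroing enabled (-Z y), no skip_block_zeroing feature is listed,
# e.g.:  0 2097152 thin-pool 253:2 253:3 128 0 0
dmsetup table vg0-pool-tpool

# With zeroing disabled (-Z n), the target advertises the feature,
# e.g.:  0 2097152 thin-pool 253:2 253:3 128 0 1 skip_block_zeroing
dmsetup table vg0-pool-tpool
```

This is the runtime counterpart of the pool->pf.zero_new_blocks check in
schedule_copy() above.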

and the corresponding documentation from lvmthin(7):

Zeroing
When a thin pool provisions a new data block for a thin LV, the new block
is first overwritten with zeros. The zeroing mode is indicated by the "z"
attribute displayed by lvs. The option -Z (or --zero) can be added to
commands to specify the zeroing mode.

Command to set the zeroing mode when creating a thin pool LV:
lvconvert --type thin-pool -Z{y|n}
--poolmetadata VG/ThinMetaLV VG/ThinDataLV
Command to change the zeroing mode of an existing thin pool LV:
lvchange -Z{y|n} VG/ThinPoolLV

If zeroing mode is changed from "n" to "y", previously provisioned blocks
are not zeroed.

Provisioning of large zeroed chunks impacts performance.

lvm.conf(5) thin_pool_zero
controls the default zeroing mode used when creating a thin pool.
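Putting the man-page commands together, a short sketch of setting and
inspecting the zeroing mode, assuming a hypothetical volume group vg0 and
pool name "pool" (requires root):

```shell
# Create a thin pool with zeroing enabled (names are hypothetical):
lvcreate --type thin-pool -L 10G -Z y -n pool vg0

# Disable zeroing on the existing pool:
lvchange -Z n vg0/pool

# Inspect the current mode; "zero" is an lvs reporting field, and the
# mode is also reflected by the "z" bit in lv_attr:
lvs -o name,zero,lv_attr vg0/pool
```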

_______________________________________________
Lug-bg mailing list
Lug-bg@linux-bulgaria.org
http://linux-bulgaria.org/mailman/listinfo/lug-bg
