Hello Sergei!

On Fri, 29 Sept 2023 at 23:14, Sergei Golubchik <s...@mariadb.org> wrote:

> But I suggest to try to swap the implementation. Make
>
>    online_alter_hton->savepoint_offset=sizeof(void*);
>
> and in savepoint_set create a list or an array of all offsets and store
> it in the savepoint. Something like this:
>
>   struct oasv {
>     uint savepoints;
>     my_off_t log_prev_pos[1];
>   };
>
>   int online_alter_savepoint_set(handlerton *hton, THD *thd, oasv **sv)
>   {
>     uint count= 0, i=0;
>     for (auto &cache: cache_list)
>       count++;
>     DBUG_ASSERT(count);
>
>     *sv= (oasv *) malloc(sizeof(oasv) + sizeof(my_off_t)*(count - 1));
>     (*sv)->savepoints= count;
>
>     for (auto &cache: cache_list)
>       (*sv)->log_prev_pos[i++]= cache.get_byte_position();
>
>     return 0;
>   }
>
> and so on. You just need to make sure you always use push_back and not
> push_front for adding caches to the cache_list.
>
> instead of a structure with a counter you can use an array with
> MY_OFF_T_UNDEF (not 0!) at the end.
>

Just checked: I really can make it zero, yes!
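
E.g. a minimal sketch of that variant, assuming these caches never report
position 0, so 0 can serve as the terminator (names taken from your sketch,
not from the actual patch):

    int online_alter_savepoint_set(handlerton *hton, THD *thd, my_off_t **sv)
    {
      uint count= 0, i= 0;
      for (auto &cache: cache_list)
        count++;
      DBUG_ASSERT(count);

      /* one allocation, with an extra slot for the terminating 0 */
      *sv= (my_off_t *) malloc(sizeof(my_off_t) * (count + 1));
      if (!*sv)
        return 1;

      for (auto &cache: cache_list)
        (*sv)[i++]= cache.get_byte_position();
      (*sv)[i]= 0;          /* terminator instead of a separate counter */
      return 0;
    }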

Nice idea, exposing a flexible array member to reduce the number of
allocations.

I'll also remark that my implementation can be allocation-optimized in a
similar way.

Overall, I don't like the savepoint_offset design. Even InnoDB only uses it
to store a local counter, which I also had in an early implementation (but
then I found that the pointer itself can identify it). Only the binlog
happens to use it as intended. Also, if a plugin is uninstalled, its offset
memory will never be reused. And if a savepoint needs a dynamically sized
allocation, as in our case, it will not be deallocated automatically.
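
For context, the way I understand the mechanism (simplified, from memory of
sql/handler.cc, so the exact code differs): every engine permanently reserves
its declared number of bytes in one flat per-savepoint block, and only ever
gets a pointer into its slice. Roughly:

    /* at handlerton registration: the engine's slice is carved out of every
       future savepoint block and stays reserved for good
       (declared_savepoint_offset stands for whatever the plugin declared) */
    hton->savepoint_offset= savepoint_alloc_size;
    savepoint_alloc_size+= declared_savepoint_offset;

    /* on SAVEPOINT: one block of savepoint_alloc_size bytes is allocated,
       and each engine is handed a pointer into its own slice */
    error= hton->savepoint_set(hton, thd,
                               (uchar *) sv + hton->savepoint_offset);

Whatever the engine allocates behind the pointer it stores in that slice is
invisible to the server, hence never freed on its behalf.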

So in your case, we'd also need to deallocate all the skipped savepoints
somehow. They should probably be linked into a list, but then it's not worth
transposing the implementation.
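
To illustrate, roughly what I mean by linking them (a hypothetical sketch
with made-up names, not code from the patch; savepoint_set would append each
new savepoint to the chain):

    struct oasv
    {
      oasv *newer;                /* savepoints set after this one */
      uint savepoints;
      my_off_t log_prev_pos[1];   /* flexible array member */
    };

    int online_alter_savepoint_rollback(handlerton *hton, THD *thd, oasv **sv)
    {
      /* the server never mentions the skipped savepoints again,
         so free them while they are still reachable */
      for (oasv *skipped= (*sv)->newer, *next; skipped; skipped= next)
      {
        next= skipped->newer;
        free(skipped);
      }
      (*sv)->newer= NULL;
      /* ... then truncate every cache back to (*sv)->log_prev_pos[i] ... */
      return 0;
    }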

-- 
Yours truly,
Nikita Malyavin
_______________________________________________
developers mailing list -- developers@lists.mariadb.org
To unsubscribe send an email to developers-le...@lists.mariadb.org
