Hi

2018-04-27 21:40 GMT+02:00 Corey Huinker <corey.huin...@gmail.com>:

> As of v11, DO blocks can do transactions. I think this will meet your
> needs.
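>
> Untested sketch of what that could look like for your batch delete (the
> VACUUM would still have to run outside the block):
>
> DO $$
> BEGIN
>   LOOP
>     -- delete one batch; FOUND goes false once nothing was left to delete
>     DELETE FROM big_table
>      WHERE id IN (SELECT id FROM big_table WHERE bad = true LIMIT 1000);
>     EXIT WHEN NOT FOUND;
>     -- v11 allows COMMIT here, as long as the DO block itself is not
>     -- running inside an explicit transaction block
>     COMMIT;
>   END LOOP;
> END
> $$;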
>
> A loop that starts at point X in the code and terminates at point Y has to
> know how to jump back in the file (if there even is a file!) to point X and
> re-interpret commands as it makes its way back through the "file" toward
> point Y again... a place it might not reach, or it might reach some other
> loop termination first. \commands can be hidden inside psql variables,
> files can be conditionally included based on \if statements, and those
> files might have loop starters/terminators in them. And those commands
> *are* processed.
>
> That, or you'd have to capture the code by somehow parsing ahead to the
> next \end-until (processing all inner loops as you go, and the files they
> include, etc), but that means that variables that were expanded the first
> time are *not* expanded on subsequent iterations, and that makes it hard
> to set an exit-condition variable. It would also seriously alter what psql
> is when inside that loop.
>
> I once did a presentation on ways to (ab)use psql, and one thing I did was
> recursion via include files. Adapting your loop as literally as possible,
> it would look like this:
>
> loop_file.sql:
>
> BEGIN;
> WITH deleted_rows AS (DELETE FROM big_table
>                       WHERE id IN (SELECT id FROM big_table
>                                    WHERE bad = true LIMIT 1000)
>                       RETURNING 1)
> SELECT (COUNT(*) > 0) AS deleted_some_rows FROM deleted_rows
> \gset
> COMMIT;
> VACUUM big_table;
> \if :deleted_some_rows
>    \include loop_file.sql
> \endif
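>
> To set it in motion you'd just run \i loop_file.sql once and let it
> recurse from there.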
>
>
> What you don't see here is that you're using your psql process's available
> open file handles as a stack, and when you hit that limit psql will fail.
> If you remove that limit, then you get a bit further before psql segfaults
> on you. I think I got ~2700 files deep before that happened. Your stackage
> may vary.
>
> I'm not saying this is a good solution, quite the contrary. I think the
> sane solution is right around the corner in Version 11.
>
> Now if we just had a way of passing parameters into DO blocks...
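>
> (Today one way to fake that is to smuggle the value in through a custom
> setting; the names myapp.batch_size and batch_size below are just made up
> for the example:)
>
> \set batch_size 1000
> SELECT set_config('myapp.batch_size', :'batch_size', false);
> DO $$
> DECLARE
>   -- read the value back inside the block
>   n int := current_setting('myapp.batch_size')::int;
> BEGIN
>   RAISE NOTICE 'deleting in batches of %', n;
> END
> $$;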
>

I hope that by then there will be schema (temporary) variables:

create temp variable foo int default 10;

do $$
begin
  for i in 1..foo loop
    raise notice '%', i;
  end loop;
end;
$$;



>
>
> On Tue, Apr 24, 2018 at 3:59 AM Pierre Ducroquet <
> pierre.ducroq...@people-doc.com> wrote:
>
>> Hi
>>
>> When running database migrations with .sql files on a live database, it's
>> not uncommon to have to run a migration in a loop to prevent a big lock
>> on a table. For instance, if one wants to delete some old data from a big
>> table, one would write:
>>
>> DELETE FROM big_table WHERE id IN (SELECT id FROM big_table
>>                                    WHERE bad = true LIMIT 1000);
>> VACUUM big_table;
>>
>> Right now, doing this is quite cumbersome. We either have to write a
>> script in another language, or run psql in a shell loop and wait for the
>> migration to stop altering rows.
>>
>> The attached **proof of concept** patch (I insist, it's a 15-minute hack
>> sprint with no previous knowledge of the psql code) implements an
>> 'until-0' loop in psql.
>> The previous migration could then simply be written as:
>>
>> \until-0
>> BEGIN;
>> DELETE FROM big_table WHERE id IN (SELECT id FROM big_table
>>                                    WHERE bad = true LIMIT 1000);
>> COMMIT;
>> VACUUM big_table;
>> \end-until
>>
>> And psql will execute it until no rows are affected by the inner
>> queries.
>>
>> I am willing to write a proper patch for this (I hope the tell/seek
>> approach is an acceptable implementation…), but I would prefer to get
>> some feedback first.
>>
>> Thanks
>>
>>  Pierre
>
>
