Hi,

On 2024-04-07 00:19:35 +0200, Daniel Gustafsson wrote:
> > On 6 Apr 2024, at 23:44, Andres Freund <and...@anarazel.de> wrote:
> 
> > It might be useful to print a few lines, but the whole log files can be
> > several megabytes worth of output.
> 
> The non-context aware fix would be to just print the last 1024 (or something)
> bytes from the logfile:

That'd be better, yes. Though I'd mainly log the path to the logfile; that's
probably at least as helpful for actually investigating the issue.

> diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
> index 54e1008ae5..53d4751ffc 100644
> --- a/src/test/perl/PostgreSQL/Test/Cluster.pm
> +++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
> @@ -951,8 +951,8 @@ sub start
>  
>       if ($ret != 0)
>       {
> -             print "# pg_ctl start failed; logfile:\n";
> -             print PostgreSQL::Test::Utils::slurp_file($self->logfile);
> +             print "# pg_ctl start failed; logfile excerpt:\n";
> +             print substr PostgreSQL::Test::Utils::slurp_file($self->logfile), -1024;
>  
>               # pg_ctl could have timed out, so check to see if there's a pid file;
>               # otherwise our END block will fail to shut down the new postmaster.

That's probably unnecessary optimization, but it seems a tad silly to read an
entire, potentially sizable, file just to use the last 1kB. I'm not sure the
way slurp_file() uses seek supports negative offsets; the docs read to me like
those may only be supported with SEEK_END.
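For illustration, one way to avoid the negative-offset question entirely is to
stat the file and seek to a computed non-negative offset with SEEK_SET. This is
a hypothetical helper, not the actual slurp_file(); the name tail_of_file and
its interface are made up for the sketch:

```perl
use strict;
use warnings;
use Fcntl qw(SEEK_SET);

# Hypothetical helper: return at most $limit bytes from the end of
# $filename, handling files shorter than $limit. Sidesteps negative
# offsets by seeking forward from the start instead of back from the end.
sub tail_of_file
{
	my ($filename, $limit) = @_;

	open(my $fh, '<', $filename)
	  or die "could not open \"$filename\": $!";

	my $size = -s $fh;
	my $offset = $size > $limit ? $size - $limit : 0;
	seek($fh, $offset, SEEK_SET)
	  or die "could not seek in \"$filename\": $!";

	local $/;    # slurp the remainder of the file
	my $data = <$fh>;
	close($fh);
	return $data;
}
```

With a negative offset, perlfunc documents seek($fh, -1024, SEEK_END) as the
supported spelling, so either approach would work; the SEEK_SET variant just
never hands a negative offset to seek at all.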

Greetings,

Andres Freund
