On Wed, Feb 3, 2021 at 3:24 PM Bharath Rupireddy <bharath.rupireddyforpostg...@gmail.com> wrote: > > On Wed, Feb 3, 2021 at 1:49 PM vignesh C <vignes...@gmail.com> wrote: > > > > On Wed, Feb 3, 2021 at 1:00 PM Tom Lane <t...@sss.pgh.pa.us> wrote: > > > > > > vignesh C <vignes...@gmail.com> writes: > > > > On Mon, Feb 1, 2021 at 11:04 AM Bharath Rupireddy > > > > <bharath.rupireddyforpostg...@gmail.com> wrote: > > > >> Are these superuser and permission checks enough from a security > > > >> standpoint that we don't expose some sensitive information to the > > > >> user? > > > > > > > This will just print the backtrace of the current backend. Users > > > > cannot get password information from this. > > > > > > Really? > > > > > > A backtrace normally exposes the text of the current query, for > > > instance, which could contain very sensitive data (passwords in ALTER > > > USER, customer credit card numbers in ordinary data, etc etc). We > > > don't allow the postmaster log to be seen by any but very privileged > > > users; it's not sane to think that this data is any less > > > security-critical than the postmaster log. > > > > > > This point is entirely separate from the question of whether > > > triggering stack traces at inopportune moments could cause system > > > malfunctions, but that question is also not to be ignored. > > > > > > TBH, I'm leaning to the position that this should be superuser > > > only. I do NOT agree with the idea that ordinary users should > > > be able to trigger it, even against backends theoretically > > > belonging to their own userid. (Do I need to point out that > > > some levels of the call stack might be from security-definer > > > functions with more privilege than the session's nominal user?) > > > > > > > I had seen that the log that will be logged will be something like: > > postgres: test postgres [local] > > idle(ProcessClientReadInterrupt+0x3a) [0x9500ec] > > postgres: test postgres [local] idle(secure_read+0x183) [0x787f43] > > postgres: test postgres [local] idle() [0x7919de] > > postgres: test postgres [local] idle(pq_getbyte+0x32) [0x791a8e] > > postgres: test postgres [local] idle() [0x94fc16] > > postgres: test postgres [local] idle() [0x950099] > > postgres: test postgres [local] idle(PostgresMain+0x6d5) [0x954bd5] > > postgres: test postgres [local] idle() [0x898a09] > > postgres: test postgres [local] idle() [0x89838f] > > postgres: test postgres [local] idle() [0x894953] > > postgres: test postgres [local] idle(PostmasterMain+0x116b) > > [0x89422a] > > postgres: test postgres [local] idle() [0x79725b] > > /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f6e68d75555] > > postgres: test postgres [local] idle() [0x484249] > > I was not sure if we would be able to get any secure information from > > this. I did not notice the function arguments being printed. I felt > > the function name, offset and the return address will be logged. I > > might be missing something here. > > Thoughts? > > First of all, we need to see if the output of pg_print_backtrace shows > up function parameter addresses or only function start addresses along > with line and file information when attached to gdb. In either case, > IMO, it will be easy for experienced hackers(I'm not one though) to > calculate and fetch the query string or other function parameters or > the variables inside the functions from the stack by looking at the > code (which is available openly, of course). 
> > Say, if a backend is in a long running scan or insert operation, then > pg_print_backtrace is issued from another session, the > exec_simple_query function shows up query_string. Below is captured > from attached gdb though, I'm not sure whether the logged backtrace > will have function address or the function parameters addresses, I > think we can check that by having a long running query which > frequently checks interrupts and issue pg_print_backtrace from another > session to that backend. Now, attach gdb to the backend in which the > query is running, then take bt, see if the logged backtrace and the > gdb bt have the same or closer addresses. > > #13 0x00005644f4320729 in exec_simple_query ( > query_string=0x5644f6771bf0 "select pg_backend_pid();") at postgres.c:1240 > #14 0x00005644f4324ff4 in PostgresMain (argc=1, argv=0x7ffd819bd5e0, > dbname=0x5644f679d2b8 "postgres", username=0x5644f679d298 "bharath") > at postgres.c:4394 > #15 0x00005644f4256f9d in BackendRun (port=0x5644f67935c0) at > postmaster.c:4484 > #16 0x00005644f4256856 in BackendStartup (port=0x5644f67935c0) at > postmaster.c:4206 > #17 0x00005644f4252a11 in ServerLoop () at postmaster.c:1730 > #18 0x00005644f42521aa in PostmasterMain (argc=3, argv=0x5644f676b1f0) > at postmaster.c:1402 > #19 0x00005644f4148789 in main (argc=3, argv=0x5644f676b1f0) at main.c:209 > > As suggested by Tom, I'm okay if this function is callable only by the > superusers. In that case, the superusers can fetch the backtrace and > send it for further analysis in case of any hangs or issues. > > Others may have better thoughts.
I would like to clarify a bit to avoid confusion here: currently, when there is a long-running query or a hang in the server, one of our customer support members goes for a screen share with the customer. If gdb is not installed, we ask the customer to install gdb, attach it to the backend process, and run the backtrace command. We then ask the customer to share the output with the customer support team, and later the development team checks whether it is a real issue or just a long-running query and provides a workaround or explains what needs to be done next. This feature removes most of those steps.

Whenever there is an issue and the user/customer is not sure whether it is a hang or a long-running query, the user can execute pg_print_backtrace; after it is executed, the backtrace will be logged to the log file, something like below:

postgres: test postgres [local] idle(ProcessClientReadInterrupt+0x3a) [0x9500ec]
postgres: test postgres [local] idle(secure_read+0x183) [0x787f43]
postgres: test postgres [local] idle() [0x7919de]
postgres: test postgres [local] idle(pq_getbyte+0x32) [0x791a8e]
postgres: test postgres [local] idle() [0x94fc16]
postgres: test postgres [local] idle() [0x950099]
postgres: test postgres [local] idle(PostgresMain+0x6d5) [0x954bd5]
postgres: test postgres [local] idle() [0x898a09]
postgres: test postgres [local] idle() [0x89838f]
postgres: test postgres [local] idle() [0x894953]
postgres: test postgres [local] idle(PostmasterMain+0x116b) [0x89422a]
postgres: test postgres [local] idle() [0x79725b]
/lib64/libc.so.6(__libc_start_main+0xf5) [0x7f6e68d75555]
postgres: test postgres [local] idle() [0x484249]

The above log contents (not the complete log file) will be shared by the customer/user with the support team, along with the query, configuration, statistics, etc. I have mentioned in the documentation a few steps for getting the file/line from the logged backtrace using gdb/addr2line. That is done in the developer environment, not in the actual customer environment which holds the sensitive data. We do not attach gdb to the running process; developers use the same binary that was released to the customer (outside the customer environment) to resolve the file and line numbers, so a backtrace including file and line information can be reconstructed. I feel users cannot get sensitive information from this. That information helps the developer analyze the problem and suggest the next action the customer needs to take.

I think there was a slight misunderstanding about where gdb is executed: it is not run in the customer environment but in the developer environment. From the customer environment we only get the stack trace log lines shown above.

I have changed it so that this feature is supported only for superusers.

Thoughts?

Regards,
Vignesh
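P.S. To illustrate the offline resolution step described above, here is a rough sketch (not part of the patch). The pid is hypothetical, the frame address is copied from the sample log above, and real addresses differ per build; it also assumes addr2line from binutils is installed and a non-PIE postgres binary (for a PIE build the raw addresses would first need to be adjusted by the process's load offset).

# On the customer system, a superuser asks the stuck backend (say pid 12345)
# to log its backtrace:
psql -d postgres -c "SELECT pg_print_backtrace(12345);"

# On the developer system, resolve a frame address taken from the logged
# backtrace against the same postgres binary that was shipped to the
# customer; no running process is involved:
addr2line -e ./postgres -f 0x954bd5
# prints something along the lines of:
#   PostgresMain
#   postgres.c:<line>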
From 65b6105e17ac64f2d1773b3d8680b88531e79717 Mon Sep 17 00:00:00 2001 From: vignesh <vignes...@gmail.com> Date: Wed, 5 May 2021 20:31:38 +0530 Subject: [PATCH v7] Print backtrace of specified postgres process. The idea here is to implement and expose a pg_print_backtrace function. Internally, the connected backend sends a SIGUSR1 signal, with PROCSIG_PRINT_BACKTRACE set, to the postgres backend whose pid matches the specified process id. Once the target process receives this signal it prints its backtrace to the log file based on the logging configuration; if logging is disabled, the backtrace is printed to the console where the postmaster was started. --- doc/src/sgml/func.sgml | 77 +++++++++++++++++++ src/backend/postmaster/autovacuum.c | 7 ++ src/backend/postmaster/interrupt.c | 8 ++ src/backend/storage/ipc/procsignal.c | 18 +++++ src/backend/storage/ipc/signalfuncs.c | 46 +++++++++++ src/backend/tcop/postgres.c | 9 +++ src/backend/utils/error/elog.c | 20 ++++- src/backend/utils/init/globals.c | 1 + src/include/catalog/pg_proc.dat | 5 ++ src/include/miscadmin.h | 2 + src/include/storage/procsignal.h | 2 + src/include/utils/elog.h | 2 + .../t/002_print_backtrace_validation.pl | 73 ++++++++++++++++++ 13 files changed, 266 insertions(+), 4 deletions(-) create mode 100644 src/test/modules/test_misc/t/002_print_backtrace_validation.pl diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml index 5ae8abff0c..c196e19335 100644 --- a/doc/src/sgml/func.sgml +++ b/doc/src/sgml/func.sgml @@ -24978,6 +24978,33 @@ SELECT collation for ('foo' COLLATE "de_DE"); </para></entry> </row> + <row> + <entry role="func_table_entry"><para role="func_signature"> + <indexterm> + <primary>pg_print_backtrace</primary> + </indexterm> + <function>pg_print_backtrace</function> ( <parameter>pid</parameter> <type>integer</type> ) + <returnvalue>boolean</returnvalue> + </para> + <para> + Prints the backtrace of the backend process with the specified process ID. + The backtrace will be logged at <literal>LOG</literal> message level. It + will appear in the server log based on the log configuration set + (see <xref linkend="runtime-config-logging"/> for more information), + but will not be sent to the client regardless of + <xref linkend="guc-client-min-messages"/>. This + helps in identifying where exactly the backend process is currently + executing, which is useful for reporting hangs and + helps developers in diagnosing problems. This feature is + not supported for the postmaster, logger and statistics collector + processes. It is only available if the server was built on a platform + with backtrace capturing capability. If not available, the function + returns false after emitting the warning "WARNING: backtrace + generation is not supported by this installation". Only superusers can + request to log the backtrace of a backend process. + </para></entry> + </row> + <row> <entry role="func_table_entry"><para role="func_signature"> <indexterm> @@ -25108,6 +25135,56 @@ LOG: Grand total: 1651920 bytes in 201 blocks; 622360 free (88 chunks); 1029560 because it may generate a large number of log messages. </para> + <para> + <function>pg_print_backtrace</function> can be used to print the backtrace of + a backend process.
For example: +<programlisting> +postgres=# select pg_print_backtrace(pg_backend_pid()); + pg_print_backtrace +-------------------- + t +(1 row) + +The backtrace will be logged to the log file if logging is enabled; if logging +is disabled, the backtrace will be logged to the console where the postmaster was +started. For example: +2021-01-27 11:33:50.247 IST [111735] LOG: current backtrace: + postgres: postgresdba postgres [local] SELECT(set_backtrace+0x38) [0xae06c5] + postgres: postgresdba postgres [local] SELECT(ProcessInterrupts+0x788) [0x950c34] + postgres: postgresdba postgres [local] SELECT() [0x761e89] + postgres: postgresdba postgres [local] SELECT() [0x71bbda] + postgres: postgresdba postgres [local] SELECT() [0x71e380] + postgres: postgresdba postgres [local] SELECT(standard_ExecutorRun+0x1d6) [0x71c1fe] + postgres: postgresdba postgres [local] SELECT(ExecutorRun+0x55) [0x71c026] + postgres: postgresdba postgres [local] SELECT() [0x953fc5] + postgres: postgresdba postgres [local] SELECT(PortalRun+0x262) [0x953c7e] + postgres: postgresdba postgres [local] SELECT() [0x94db78] + postgres: postgresdba postgres [local] SELECT(PostgresMain+0x7d7) [0x951e72] + postgres: postgresdba postgres [local] SELECT() [0x896b2f] + postgres: postgresdba postgres [local] SELECT() [0x8964b5] + postgres: postgresdba postgres [local] SELECT() [0x892a79] + postgres: postgresdba postgres [local] SELECT(PostmasterMain+0x116b) [0x892350] + postgres: postgresdba postgres [local] SELECT() [0x795f72] + /lib64/libc.so.6(__libc_start_main+0xf5) [0x7f2107bbd505] + postgres: postgresdba postgres [local] SELECT() [0x4842a9] + +</programlisting> + You can get the file name and line number by using gdb or addr2line on + Linux platforms; as a prerequisite, users must ensure gdb/addr2line is + already installed: +<programlisting> +1) Using "info line *address" from gdb on the postgres executable. For example: +gdb ./postgres +(gdb) info line *0x71c25d +Line 378 of "execMain.c" starts at address 0x71c25d <literal><</literal>standard_ExecutorRun+470<literal>></literal> and ends at 0x71c263 <literal><</literal>standard_ExecutorRun+476<literal>></literal>.
+OR +2) Using "addr2line -e postgres address", For example: +addr2line -e ./postgres 0x71c25d +/home/postgresdba/src/backend/executor/execMain.c:378 +</programlisting> +</para> + + </sect2> <sect2 id="functions-admin-backup"> diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c index d516df0ac5..739180b8d9 100644 --- a/src/backend/postmaster/autovacuum.c +++ b/src/backend/postmaster/autovacuum.c @@ -837,6 +837,13 @@ HandleAutoVacLauncherInterrupts(void) if (ProcSignalBarrierPending) ProcessProcSignalBarrier(); + /* Process printing backtrace */ + if (PrintBacktracePending) + { + PrintBacktracePending = false; + set_backtrace(NULL, 0); + } + /* Process sinval catchup interrupts that happened while sleeping */ ProcessCatchupInterrupt(); } diff --git a/src/backend/postmaster/interrupt.c b/src/backend/postmaster/interrupt.c index dd9136a942..59aff6ca02 100644 --- a/src/backend/postmaster/interrupt.c +++ b/src/backend/postmaster/interrupt.c @@ -21,6 +21,7 @@ #include "storage/ipc.h" #include "storage/latch.h" #include "storage/procsignal.h" +#include "tcop/tcopprot.h" #include "utils/guc.h" volatile sig_atomic_t ConfigReloadPending = false; @@ -41,6 +42,13 @@ HandleMainLoopInterrupts(void) ProcessConfigFile(PGC_SIGHUP); } + /* Process printing backtrace */ + if (PrintBacktracePending) + { + PrintBacktracePending = false; + set_backtrace(NULL, 0); + } + if (ShutdownRequestPending) proc_exit(0); } diff --git a/src/backend/storage/ipc/procsignal.c b/src/backend/storage/ipc/procsignal.c index eac6895141..7cea2a42c5 100644 --- a/src/backend/storage/ipc/procsignal.c +++ b/src/backend/storage/ipc/procsignal.c @@ -441,6 +441,21 @@ HandleProcSignalBarrierInterrupt(void) /* latch will be set by procsignal_sigusr1_handler */ } +/* + * Handle receipt of an interrupt indicating a print backtrace. + * + * Note: this is called within a signal handler! All we can do is set + * a flag that will cause the next CHECK_FOR_INTERRUPTS to invoke + * set_backtrace function which will log the backtrace. + */ +static void +HandlePrintBacktraceInterrupt(void) +{ + InterruptPending = true; + PrintBacktracePending = true; + /* latch will be set by procsignal_sigusr1_handler */ +} + /* * Perform global barrier related interrupt checking. * @@ -679,6 +694,9 @@ procsignal_sigusr1_handler(SIGNAL_ARGS) if (CheckProcSignal(PROCSIG_RECOVERY_CONFLICT_BUFFERPIN)) RecoveryConflictInterrupt(PROCSIG_RECOVERY_CONFLICT_BUFFERPIN); + if (CheckProcSignal(PROCSIG_PRINT_BACKTRACE)) + HandlePrintBacktraceInterrupt(); + SetLatch(MyLatch); errno = save_errno; diff --git a/src/backend/storage/ipc/signalfuncs.c b/src/backend/storage/ipc/signalfuncs.c index 0337b00226..9af41731e3 100644 --- a/src/backend/storage/ipc/signalfuncs.c +++ b/src/backend/storage/ipc/signalfuncs.c @@ -23,6 +23,7 @@ #include "storage/pmsignal.h" #include "storage/proc.h" #include "storage/procarray.h" +#include "tcop/tcopprot.h" #include "utils/acl.h" #include "utils/builtins.h" @@ -331,3 +332,48 @@ pg_rotate_logfile_v2(PG_FUNCTION_ARGS) SendPostmasterSignal(PMSIGNAL_ROTATE_LOGFILE); PG_RETURN_BOOL(true); } + +/* + * pg_print_backtrace - print backtrace of backend process. + * + * Only superusers can print backtrace. + */ +Datum +pg_print_backtrace(PG_FUNCTION_ARGS) +{ +#ifdef HAVE_BACKTRACE_SYMBOLS + int pid = PG_GETARG_INT32(0); + PGPROC *proc; + + /* Only superusers can print back trace. 
*/ + if (!superuser()) + ereport(ERROR, + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE), + errmsg("must be a superuser to print backtrace"))); + + /* BackendPidGetProc returns NULL if the pid isn't valid. */ + proc = BackendPidGetProc(pid); + if (proc == NULL) + { + ereport(WARNING, + (errmsg("PID %d is not a PostgreSQL server process", pid))); + PG_RETURN_BOOL(false); + } + + /* + * Send SIGUSR1 to postgres backend whose pid matches pid by + * setting PROCSIG_PRINT_BACKTRACE, the backend process will print + * the backtrace once the signal is received. + */ + if (!SendProcSignal(pid, PROCSIG_PRINT_BACKTRACE, InvalidBackendId)) + PG_RETURN_BOOL(true); + else + ereport(WARNING, + (errmsg("could not send signal to process %d: %m", pid))); /* return false below */ +#else + ereport(WARNING, + (errmsg("backtrace generation is not supported by this installation"))); +#endif + + PG_RETURN_BOOL(false); +} diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index 2d6d145ecc..d3d0de28ab 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -3356,6 +3356,15 @@ ProcessInterrupts(void) if (LogMemoryContextPending) ProcessLogMemoryContextInterrupt(); + + /* Process printing backtrace */ + if (PrintBacktracePending) + { + PrintBacktracePending = false; + ereport(LOG, + (errmsg("logging backtrace of PID %d", MyProcPid))); + set_backtrace(NULL, 0); + } } diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c index 65019989cf..45687eb16b 100644 --- a/src/backend/utils/error/elog.c +++ b/src/backend/utils/error/elog.c @@ -172,7 +172,6 @@ static char formatted_log_time[FORMATTED_TS_LEN]; static const char *err_gettext(const char *str) pg_attribute_format_arg(1); -static pg_noinline void set_backtrace(ErrorData *edata, int num_skip); static void set_errdata_field(MemoryContextData *cxt, char **ptr, const char *str); static void write_console(const char *line, int len); static void setup_formatted_log_time(void); @@ -949,9 +948,10 @@ errbacktrace(void) * Compute backtrace data and add it to the supplied ErrorData. num_skip * specifies how many inner frames to skip. Use this to avoid showing the * internal backtrace support functions in the backtrace. This requires that - * this and related functions are not inlined. + * this and related functions are not inlined. If edata pointer is valid + * backtrace information will set in edata. */ -static void +void set_backtrace(ErrorData *edata, int num_skip) { StringInfoData errtrace; @@ -978,7 +978,19 @@ set_backtrace(ErrorData *edata, int num_skip) "backtrace generation is not supported by this installation"); #endif - edata->backtrace = errtrace.data; + if (edata) + edata->backtrace = errtrace.data; + else + { + /* + * LOG_SERVER_ONLY is used intentionally to make sure this information + * is not sent to client based on client_min_messages. We don't want + * to mess up a different session as pg_print_backtrace will be + * sending SIGNAL to a different backend. 
+ */ + elog(LOG_SERVER_ONLY, "current backtrace:%s", errtrace.data); + pfree(errtrace.data); + } } /* diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c index 381d9e548d..bd869ebf8c 100644 --- a/src/backend/utils/init/globals.c +++ b/src/backend/utils/init/globals.c @@ -39,6 +39,7 @@ volatile sig_atomic_t LogMemoryContextPending = false; volatile uint32 InterruptHoldoffCount = 0; volatile uint32 QueryCancelHoldoffCount = 0; volatile uint32 CritSectionCount = 0; +volatile sig_atomic_t PrintBacktracePending = false; int MyProcPid; pg_time_t MyStartTime; diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat index 91f0ea2212..251a7f35e1 100644 --- a/src/include/catalog/pg_proc.dat +++ b/src/include/catalog/pg_proc.dat @@ -11611,4 +11611,9 @@ proname => 'brin_minmax_multi_summary_send', provolatile => 's', prorettype => 'bytea', proargtypes => 'pg_brin_minmax_multi_summary', prosrc => 'brin_minmax_multi_summary_send' }, +# function to get the backtrace of server process +{ oid => '6105', descr => 'print backtrace of process', + proname => 'pg_print_backtrace', provolatile => 'v', prorettype => 'bool', + proargtypes => 'int4', prosrc => 'pg_print_backtrace' }, + ] diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h index 95202d37af..34b0909848 100644 --- a/src/include/miscadmin.h +++ b/src/include/miscadmin.h @@ -94,6 +94,8 @@ extern PGDLLIMPORT volatile uint32 InterruptHoldoffCount; extern PGDLLIMPORT volatile uint32 QueryCancelHoldoffCount; extern PGDLLIMPORT volatile uint32 CritSectionCount; +extern PGDLLIMPORT volatile sig_atomic_t PrintBacktracePending; + /* in tcop/postgres.c */ extern void ProcessInterrupts(void); diff --git a/src/include/storage/procsignal.h b/src/include/storage/procsignal.h index eec186be2e..089b15993b 100644 --- a/src/include/storage/procsignal.h +++ b/src/include/storage/procsignal.h @@ -44,6 +44,8 @@ typedef enum PROCSIG_RECOVERY_CONFLICT_BUFFERPIN, PROCSIG_RECOVERY_CONFLICT_STARTUP_DEADLOCK, + PROCSIG_PRINT_BACKTRACE, /* ask backend to print the current backtrace */ + NUM_PROCSIGNALS /* Must be last! */ } ProcSignalReason; diff --git a/src/include/utils/elog.h b/src/include/utils/elog.h index f53607e12e..c63d25716a 100644 --- a/src/include/utils/elog.h +++ b/src/include/utils/elog.h @@ -453,4 +453,6 @@ extern void set_syslog_parameters(const char *ident, int facility); */ extern void write_stderr(const char *fmt,...) pg_attribute_printf(1, 2); +extern pg_noinline void set_backtrace(ErrorData *edata, int num_skip); + #endif /* ELOG_H */ diff --git a/src/test/modules/test_misc/t/002_print_backtrace_validation.pl b/src/test/modules/test_misc/t/002_print_backtrace_validation.pl new file mode 100644 index 0000000000..00b2cae14e --- /dev/null +++ b/src/test/modules/test_misc/t/002_print_backtrace_validation.pl @@ -0,0 +1,73 @@ +use strict; +use warnings; + +use PostgresNode; +use TestLib; +use Test::More tests => 2; +use Time::HiRes qw(usleep); + +# Set up node with logging collector +my $node = get_new_node('primary'); +$node->init(); +$node->append_conf( + 'postgresql.conf', qq( +logging_collector = on +lc_messages = 'C' +)); + +$node->start(); + +# Verify that log output gets to the file +$node->psql('postgres', 'select pg_print_backtrace(pg_backend_pid())'); + +# might need to retry if logging collector process is slow... +my $max_attempts = 180 * 10; + +my $current_logfiles; +for (my $attempts = 0; $attempts < $max_attempts; $attempts++) +{ + eval { + $current_logfiles = slurp_file($node->data_dir . 
'/current_logfiles'); + }; + last unless $@; + usleep(100_000); +} +die $@ if $@; + +note "current_logfiles = $current_logfiles"; + +like( + $current_logfiles, + qr|^stderr log/postgresql-.*log$|, + 'current_logfiles is sane'); + +my $lfname = $current_logfiles; +$lfname =~ s/^stderr //; +chomp $lfname; + +my $first_logfile; +my $bt_occurence_count; + +# Verify that the backtraces of the processes are logged into logfile. +for (my $attempts = 0; $attempts < $max_attempts; $attempts++) +{ + $first_logfile = $node->data_dir . '/' . $lfname; + chomp $first_logfile; + print "file is $first_logfile"; + open my $fh, '<', $first_logfile + or die "Could not open '$first_logfile' $!"; + while (my $line = <$fh>) + { + chomp $line; + if ($line =~ m/current backtrace/) + { + $bt_occurence_count++; + } + } + last if $bt_occurence_count == 1; + usleep(100_000); +} + +is($bt_occurence_count, 1, 'found expected backtrace in the log file'); + +$node->stop(); -- 2.25.1