On Wed, Nov 25, 2020 at 11:01:37AM +0900, 江川潔 wrote:
> Hi,
>
> WAL recovery failed with a wrong log record size. Could you please
> advise me what is wrong in the setting? Any suggestions would be highly
> appreciated.
> 2020-11-25 10:12:23.569 JST [7792] FATAL: archive file
> "0001
Hi,
WAL recovery failed with a wrong log record size. Could you please
advise me what is wrong in the setting? Any suggestions would be highly
appreciated.
Thanks,
Kiyoshi
postgres.conf:
wal_level = replica
archive_mode = on
archive_command = 'copy "%p" "D:\\BKUP\\pg_archivedir\\PostgreSQL_D
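The archive_command line above is cut off in the archive, so the lines below are not a reconstruction of it; they are only the generic copy-based pattern shown in the PostgreSQL documentation for Windows, with the D:\BKUP\pg_archivedir path from the excerpt standing in for the archive directory. Recovery then needs a matching restore_command (in postgresql.conf on v12 and later, in recovery.conf on older releases) so the server can fetch archived segments back:

# illustrative pattern only, not the poster's actual (truncated) command
archive_command = 'copy "%p" "D:\\BKUP\\pg_archivedir\\%f"'
# on the recovering server, the mirror image of the above:
restore_command = 'copy "D:\\BKUP\\pg_archivedir\\%f" "%p"'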
Many thanks for the clarification, David. I wish there were a way to do this
without touching my input array, but based on your feedback I can at least
avoid the string_to_array call in the query. I tried it and it worked, but I
still have to permute the C array. Here is what I have working so far:
c
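The code that followed is cut off in the archive, so what follows is only a rough sketch of the approach this message describes (building a PostgreSQL array literal such as {0,8,18} from a C string array and binding it as a single text parameter), not the original code. The table and column names come from the query quoted later in the thread; the connection string, the example filter values, and the assumption that the values need no quoting inside the array literal are all made up for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libpq-fe.h>

/* Join a C string array into a PostgreSQL array literal, e.g. {0,8,18}.
 * Assumes the values contain no commas, braces, quotes or backslashes,
 * which would otherwise need escaping. */
static char *make_array_literal(const char *const *vals, size_t n)
{
    size_t len = 3;                      /* '{' + '}' + terminating NUL */
    for (size_t i = 0; i < n; i++)
        len += strlen(vals[i]) + 1;      /* value plus separating comma */
    char *buf = malloc(len);
    if (buf == NULL)
        return NULL;
    strcpy(buf, "{");
    for (size_t i = 0; i < n; i++) {
        if (i > 0)
            strcat(buf, ",");
        strcat(buf, vals[i]);
    }
    strcat(buf, "}");
    return buf;
}

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* connection details assumed */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* The runtime filter list; only the array literal changes with the
     * number of elements, the SQL text and prepared statement do not. */
    const char *const filters[] = { "0", "8", "18" };
    char *arr = make_array_literal(filters, 3);

    const char *query =
        "select codec_id,fs_name,pt from codec_defs "
        "where pt = ANY($1::text[])";    /* text[] assumed; adjust if pt is numeric */
    PGresult *prep = PQprepare(conn, "codecs", query, 1, NULL);
    if (PQresultStatus(prep) != PGRES_COMMAND_OK)
        fprintf(stderr, "prepare failed: %s", PQerrorMessage(conn));
    PQclear(prep);

    const char *params[1] = { arr };
    PGresult *res = PQexecPrepared(conn, "codecs", 1, params, NULL, NULL, 0);
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        for (int i = 0; i < PQntuples(res); i++)
            printf("%s %s %s\n", PQgetvalue(res, i, 0),
                   PQgetvalue(res, i, 1), PQgetvalue(res, i, 2));
    else
        fprintf(stderr, "%s", PQerrorMessage(conn));

    PQclear(res);
    free(arr);
    PQfinish(conn);
    return 0;
}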
The convention here is to inline or bottom-post, not top-post.
On Tue, Nov 24, 2020 at 3:47 PM Dave Greeko wrote:
> I would really like to just pass an array of filters of type (const char*
> const*) to PQexecPrepared's paramValues[] parameter instead of making it
> some sort of csv string.
> //
I am sorry, I used a different query in my last reply, and yes, you are correct,
Tom. Using $1 worked and the backend indeed prepared the statement
successfully, but this will force me to do some work on the input array that
contains the dynamic elements to comply with the string_to_array delimiter.
Hey, all,
As part of configuring the max_worker_processes parameter, I would like to know
how many worker processes are currently active, but I'm unsure of the best way
to do this.
pg_stat_activity has a column backend_type which normally takes one of the
following values:
autovacuum launcher
autov
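For a live count, one option is simply to group pg_stat_activity by that column. A rough query sketch follows; the exact backend_type labels differ across versions, and parallel workers, logical replication workers, and extension-registered workers each report their own label:

-- how many processes of each backend type are currently running
SELECT backend_type, count(*)
FROM pg_stat_activity
GROUP BY backend_type
ORDER BY count(*) DESC;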
Dave Greeko writes:
> I tried both and I am getting a syntax error.
> char *query="select codec_id,fs_name,pt from codec_defs where pt =
> ANY(string_to_array(?, ','))";
> OR
> char *query="select codec_id,fs_name,pt from codec_defs where pt =
> ANY(?::text)";
> PGresult *res=PQprepare(conn,"cod
Hi David,
I tried both and I am getting a syntax error.
char *query="select codec_id,fs_name,pt from codec_defs where pt =
ANY(string_to_array(?, ','))";
OR
char *query="select codec_id,fs_name,pt from codec_defs where pt =
ANY(?::text)";
PGresult *res=PQprepare(conn,"codecs",query,1,NULL);
Da
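As the replies point out, libpq prepared statements use $1, $2, ... placeholders rather than JDBC-style ?, which is what triggers the syntax error above. A minimal corrected sketch, keeping the server-side string_to_array split from the quoted query; the lookup_codecs helper name and the comma-separated example value are made up for illustration:

#include <libpq-fe.h>

/* Same query as above, but with a $1 placeholder instead of '?';
 * the caller passes the filters as one comma-separated string. */
PGresult *lookup_codecs(PGconn *conn, const char *csv_filters /* e.g. "0,8,18" */)
{
    const char *query =
        "select codec_id,fs_name,pt from codec_defs "
        "where pt = ANY(string_to_array($1, ','))";
    PGresult *prep = PQprepare(conn, "codecs", query, 1, NULL);
    if (PQresultStatus(prep) != PGRES_COMMAND_OK) {
        PQclear(prep);
        return NULL;
    }
    PQclear(prep);

    const char *values[1] = { csv_filters };
    return PQexecPrepared(conn, "codecs", 1, values, NULL, NULL, 0);
}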
On Tue, Nov 24, 2020 at 12:14 PM Dave Greeko wrote:
> Dear All,
> I am having a hard time figuring out how to prepare and execute a Prepared
> Statement with an "IN" operator in the WHERE clause using libpq. The total
> number of elements that will be passed to the IN operator is dynamic and
> varies at runtime.
Dear All,
I am having a hard time figuring out how to prepare and execute a Prepared
Statement with an "IN" operator in the WHERE clause using libpq. The total
number of elements that will be passed to the IN operator is dynamic and varies
at runtime. Here is an example query:
select payload_id,ptime,frequency
On 11/24/20 8:36 AM, David Gauthier wrote:
Hi:
11.3 on linux
I've come up with a plan to archive data from my main DB which involves
creating other DBs on the same server. But even though there will be zero
activity on the archive DBs in terms of insert/update/delete, and almost
no activity
Hello,
I wrote a description of the psycopg3 adaptation system and the main
differences compared to psycopg2: available at
https://www.psycopg.org/articles/2020/11/24/psycopg3-adaptation/
Initial API docs are available at
https://www.psycopg.org/psycopg3/docs/adaptation.html
Feedback is welcome.
On 11/24/20 7:33 AM, David Gauthier wrote:
Ok, thanks.
I was also planning on manually running vacuum, reindex, and analyze on
the main DB after removing the archived data from it.
Does that sound necessary and reasonable?
Sounds reasonable.
On Tue, Nov 24, 2020 at 10:15
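For reference, the manual maintenance pass being discussed boils down to something like the following, run against whichever tables lost rows; big_table is just a placeholder name:

-- plain VACUUM marks the freed space for reuse and ANALYZE refreshes statistics
VACUUM (VERBOSE, ANALYZE) big_table;
-- rebuild indexes if they ended up badly bloated
REINDEX TABLE big_table;
-- VACUUM FULL big_table;  -- only if the space must be returned to the OS;
--                            it rewrites the table under an exclusive lock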
On Tue, Nov 24, 2020 at 10:33:46AM -0500, David Gauthier wrote:
> Ok, thanks.
>
> I was also planning on manually running vacuum, reindex, and analyze on the
> main DB after removing the archived data from it. Does that sound
> necessary and reasonable?
This blog entry summar
Ok, thanks.
I was also planning on manually running vacuum, reindex, and analyze on the
main DB after removing the archived data from it. Does that sound
necessary and reasonable?
On Tue, Nov 24, 2020 at 10:15 AM Adrian Klaver wrote:
> On 11/24/20 6:36 AM, David Gauthier wrote:
On 11/24/20 6:36 AM, David Gauthier wrote:
Hi:
11.3 on linux
I've come up with a plan to archive data from my main DB which involves
creating other DBs on the same server. But even though there will be
zero activity on the archive DBs in terms of insert/update/delete, and
almost no activity
On Tue, Nov 24, 2020 at 7:36 AM David Gauthier wrote:
> Hi:
>
> 11.3 on linux
>
> I've come up with a plan to archive data from my main DB which involves
> creating other DBs on the same server. But even though there will be zero
> activity on the archive DBs in terms of insert/update/delete, an
Hi:
11.3 on linux
I've come up with a plan to archive data from my main DB which involves
creating other DBs on the same server. But even though there will be zero
activity on the archive DBs in terms of insert/update/delete, and almost no
activity in terms of select, I'm still worried that the