Hi all,
I was playing around with the stored procedure support in v11 and found
that pure OUT parameters are not supported. Is there any reason we only
support INOUT but not OUT parameters?
psql (11.0 (Homebrew petere/postgresql))
dev=# CREATE PROCEDURE test_sp(a OUT int) LANGUAGE plpgsql AS $$
d
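For what it's worth, v11 does accept INOUT, which can stand in for OUT if you pass NULL at the call site; a minimal sketch (procedure name and body are illustrative):

```sql
CREATE PROCEDURE test_sp_inout(a INOUT int) LANGUAGE plpgsql AS $$
BEGIN
  a := 1;  -- the "output" value
END;
$$;

CALL test_sp_inout(NULL);  -- result column a contains 1
```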
When one gets a "No space left on device" error and there is plenty of free
space, it is sometimes caused by a lack of inodes.
Try running the command:
df --inodes
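A quick check (the mount point `/` is an assumption; use the volume that holds your data directory):

```shell
# Compare free inodes with free blocks on the suspect filesystem.
df -i /   # inode usage; "df --inodes" on GNU coreutils
df -h /   # block usage, for comparison
```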
On Mon, Dec 10, 2018 at 4:56 PM Ron wrote:
> Hi,
>
> v9.6.6
>
>
> 2018-12-07 06:21:55.504 EST 10.140.181.89(35868) CDS CDSLBXW 13748 SELECT
> PostgreSQL JDBC Driver 53100 ERROR: could not write to tuplestore temporary
> file: No space left on device
On Tue, Dec 11, 2018 at 3:58 PM Ron wrote:
> On 12/10/2018 06:24 PM, Thomas Munro wrote:
> > On Tue, Dec 11, 2018 at 2:56 AM Ron wrote:
> >> 2018-12-07 06:21:55.504 EST 10.140.181.89(35868) CDS CDSLBXW 13748 SELECT
> >> PostgreSQL JDBC Driver 53100 ERROR: could not write to tuplestore
> >> temporary file: No space left on device
On 12/10/2018 06:24 PM, Thomas Munro wrote:
On Tue, Dec 11, 2018 at 2:56 AM Ron wrote:
2018-12-07 06:21:55.504 EST 10.140.181.89(35868) CDS CDSLBXW 13748 SELECT
PostgreSQL JDBC Driver 53100 ERROR: could not write to tuplestore temporary
file: No space left on device
I see this in the pg_log file, but #1 can't figure out what "tuplestore" is
I extracted the queries from pg_log and sent them to the customer team. To
fill up 96GB of disk space seems like the customer selected a huge date range.
On 12/10/2018 06:07 PM, Rene Romero Benavides wrote:
Yes, pgsql_tmp, you ought to be looking for a sudden and drastic jump in
space utilization around the time of the error message.
On Mon, Dec 10, 2018 at 02:24:43PM -0500, Jeff Janes wrote:
> What does this mean? Is the standby "caught up" when it replays the LSN
> which was current on the master as-of the time that the standby initiated
> this connection? Or is it caught up when the master receives at least one
> notification
Thomas Carroll writes:
> On Monday, December 10, 2018, 5:50:22 PM EST, Tom Lane
> wrote:
>> Also, as mentioned upthread, it'd be interesting to see if there's
>> a memory context dump showing up in your server log.
> There are many memory context dumps, and they are long (well over 100 lines
On Tue, Dec 11, 2018 at 2:56 AM Ron wrote:
> 2018-12-07 06:21:55.504 EST 10.140.181.89(35868) CDS CDSLBXW 13748 SELECT
> PostgreSQL JDBC Driver 53100 ERROR: could not write to tuplestore temporary
> file: No space left on device
>
> I see this in the pg_log file, but #1 can't figure out what "tuplestore" is
Yes, pgsql_tmp, you ought to be looking for a sudden and drastic jump in
space utilization around the time of the error message. You're not
concerned with the current space utilization, but with the one around that
time, because, it probably got freed right after the error was raised.
How many time
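One way to catch the spike described above is to sample the temp directory on a schedule; a sketch (the path is an assumption for a 9.6 Debian-style layout; adjust to your data directory):

```shell
# Sample the size of pgsql_tmp; run from cron, e.g. once a minute:
#   * * * * * /usr/local/bin/sample_pgsql_tmp.sh >> /var/log/pgsql_tmp_size.log
PGTMP="${PGTMP:-/var/lib/postgresql/9.6/main/base/pgsql_tmp}"
printf '%s %sK\n' "$(date '+%F %T')" "$(du -sk "$PGTMP" 2>/dev/null | cut -f1)"
```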
On Monday, December 10, 2018, 5:50:22 PM EST, Tom Lane
wrote:
> Is the error message spelling really exactly "Cannot allocate memory"?
Yes - but I think the message is forwarded from Linux. Here is an example:
2018-12-08 00:00:00.070 EST,,,32506,,5bc71a25.7efa,25643,,2018-10-17 07:16:53
Which file system (specifically, which directory)? Is it
data/base/pgsql_tmp? There's 96GB free, which is 74% of the volume.
On 12/10/2018 04:50 PM, Rene Romero Benavides wrote:
Maybe the temp space got released right after the failure?
https://grokbase.com/t/postgresql/pgsql-general/02ag7k8gcr/tuplestore-write-failed
Maybe the temp space got released right after the failure?
https://grokbase.com/t/postgresql/pgsql-general/02ag7k8gcr/tuplestore-write-failed
do you have space usage charts for that partition? doesn't it show a spike
during that time?
On Mon, Dec 10, 2018 at 3:54 PM Ron wrote:
> There's certainly a problem with the application, but the error is in the
> pg_log, not the application log.
Thomas Carroll writes:
>On Monday, December 10, 2018, 3:45:33 PM EST, Tom Lane
> wrote:
>> I imagine you checked this already, but ... what is temp_buffers set to?
> Thanks for your reply! temp_buffers is the default 8MB, and I should have
> included that in my first email.
Hm. Well,
On Monday, December 10, 2018, 3:45:33 PM EST, Tom Lane
wrote:
Thomas Carroll writes:
> Postgres version: 10.5. work_mem setting: 4MB, shared_buffers setting: 800
> MB, connections typically around 30-40.
I imagine you checked this already, but ... what is temp_buffers set to?
That constrains the maximum memory used for temporary-table buffers in each
process.
Laurent FAILLIE writes:
> I'm trying to save a database from a dying disk on a Gentoo/Linux box.
> Unfortunately, when I'm issuing a
> $ pg_dumpall --clean
> I got
> pg_dump: [archiver (db)] query failed: ERROR: array
> size exceeds the maximum allowed (268435455)
> pg_dum
There's certainly a problem with the application, but the error is in the
pg_log, not the application log.
On 12/10/2018 03:21 PM, Rene Romero Benavides wrote:
What if this error message pertains to something happening on the
application side?
On Mon, Dec 10, 2018 at 9:56 AM Ron wrote:
On 12/10/2018 03:15 PM, Laurent FAILLIE wrote:
Hello,
I'm trying to save a database from a dying disk on a Gentoo/Linux box.
[snip]
This box is running
pg_dump (PostgreSQL) 10.3
but I was on way to upgrade to 10.6 when I discovered this issue.
Since 10.3 and 10.6 are the same primary version
What if this error message pertains to something happening on the
application side?
On Mon, Dec 10, 2018 at 9:56 AM Ron wrote:
> Hi,
>
> v9.6.6
>
>
> 2018-12-07 06:21:55.504 EST 10.140.181.89(35868) CDS CDSLBXW 13748 SELECT
> PostgreSQL JDBC Driver 53100 ERROR: could not write to tuplestore temporary
> file: No space left on device
Hello,
I'm trying to save a database from a dying disk on a Gentoo/Linux box.
Unfortunately, when I'm issuing a
$ pg_dumpall --clean
I got
pg_dump: [archiver (db)] query failed: ERROR: array size
exceeds the maximum allowed (268435455)
pg_dump: [archiver (db)]
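As an aside, the number in that error works out to a round binary limit; a quick check of the arithmetic (reading it as a per-array element cap is my interpretation, not stated in the thread):

```python
# The limit reported by pg_dump: 268435455 array elements.
REPORTED_LIMIT = 268435455

# It is exactly 2**28 - 1 (0x0FFFFFFF), i.e. a 28-bit element count.
assert REPORTED_LIMIT == 2**28 - 1 == 0x0FFFFFFF
print(f"{REPORTED_LIMIT:#x}")  # 0xfffffff
```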
Thomas Carroll writes:
> Postgres version: 10.5. work_mem setting: 4MB, shared_buffers setting: 800
> MB, connections typically around 30-40.
I imagine you checked this already, but ... what is temp_buffers set to?
That constrains the maximum memory used for temporary-table buffers in
each process.
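The setting can be checked and raised per session; a sketch (the 32MB value is illustrative, and the change only takes effect if made before the session first touches a temporary table):

```sql
SHOW temp_buffers;          -- 8MB by default
SET temp_buffers = '32MB';  -- must run before the first temp-table access
```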
Thomas Carroll wrote:
> We have a situation where a long-persistent Postgres connection consumes more
> and more memory. If ignored, we eventually get “Cannot allocate memory”
> errors in the Postgres log. If still ignored, the box will eventually crash.
> This takes about 3 weeks to happen. It
We have a situation where a long-persistent Postgres connection consumes more
and more memory. If ignored, we eventually get “Cannot allocate memory” errors
in the Postgres log. If still ignored, the box will eventually crash. This
takes about 3 weeks to happen. It issues a call to a single
I'm not sure if this is a documentation issue, or something else.
The description of the pg_stat_replication.state column gives:
* catchup: This WAL sender's connected standby is catching up with the
primary.
* streaming: This WAL sender is streaming changes after its connected
standby server has caught up with the primary.
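The state can be watched from the primary; a sketch (column names as of PostgreSQL 10):

```sql
SELECT application_name, client_addr, state,
       sent_lsn, replay_lsn
FROM pg_stat_replication;
```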
Tom Lane wrote:
> Kevin Brannen writes:
> > So, can you (or anyone) help me understand what "alter type drop
> > attribute" is meant to do? I don't see "attribute" on the page you
> > reference.
> IIRC, that drops a column from a composite type; it's more or less a variant
> spelling of ALTE
Olga Vingurt writes:
> The only question left is how we got into corrupted data state.
> In the event logs (PostgreSQL is running on Windows Server) we found an error
> which looks relevant:
> ERROR: could not truncate file "base/12373/17254" to 19 blocks: Permission
> denied
> CONTEXT: automatic
The data indeed wasn't consistent on the source system, and the foreign key
index was corrupted.
After manually cleaning the irrelevant records and running REINDEX on the
table, pg_dump and pg_restore worked as expected.
The only question left is how we got into corrupted data state.
In the event logs (Po
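For reference, the repair described above amounts to something like the following sketch (table and column names are hypothetical):

```sql
-- Remove child rows whose parent is missing, then rebuild the indexes.
DELETE FROM child c
WHERE NOT EXISTS (SELECT 1 FROM parent p WHERE p.id = c.parent_id);
REINDEX TABLE child;
```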
Hi,
v9.6.6
2018-12-07 06:21:55.504 EST 10.140.181.89(35868) CDS CDSLBXW 13748 SELECT
PostgreSQL JDBC Driver 53100 ERROR: could not write to tuplestore temporary
file: No space left on device
I see this in the pg_log file, but #1 can't figure out what "tuplestore" is
(Google doesn't help),
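For context: a tuplestore is the executor's on-disk row buffer; when a result set exceeds the memory budget it spills to files under pgsql_tmp. Cumulative temp-file usage per database can be checked with a query like this sketch (stats are since the last reset):

```sql
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
ORDER BY temp_bytes DESC;

-- Optionally cap per-process temp usage so a runaway query errors out early:
-- SET temp_file_limit = '50GB';
```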
Ian Barwick writes:
> On 12/10/2018 11:50 AM, Michael Paquier wrote:
>> Perhaps. Even with that, writing an extension for 9.2 would require
>> copy-pasting a lot of code from pg_controldata.c and taking as well the
>> code which has been introduced for the various control functions.
>> Dirty, sti
Hi,
On Mon, Dec 10, 2018 at 7:21 AM, Andreas Kretschmer (<
andr...@a-kretschmer.de>) wrote:
>
> On 12/10/2018 at 11:15 AM, Olga Vingurt wrote:
> > After playing with the dump and importing schema first and data next
> > without the triggers we indeed see that data is missing in the table
> > i.e.
David G. Johnston wrote:
> > and what happens to this data after completion of the transaction ?
> >
>
> Your choice. See “on conflict” clause of create temp table command.
You mean the ON COMMIT clause.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
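For reference, the clause being discussed (the table definition is illustrative):

```sql
CREATE TEMP TABLE scratch (id int)
ON COMMIT DELETE ROWS;  -- alternatives: PRESERVE ROWS (the default) or DROP
```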
On 12/10/2018 at 11:15 AM, Olga Vingurt wrote:
After playing with the dump and importing schema first and data next
without the triggers we indeed see that data is missing in the table
i.e. dump is not consistent.
We don't stop the application that uses the database during the dump, but
according to
Hi,
We are using PostgreSQL 9.5.10 and pg_dump/pg_restore to export and import
the database.
We encountered an issue (which is not easily reproducible) when running
pg_restore:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3624; 2606 37504 F