Re: Presentation tools used ?

2023-10-22 Thread Maciek Sakrejda
On Sat, Oct 21, 2023, 22:57 Achilleas Mantzios
<a.mantz...@cloud.gatewaynet.com> wrote:

> What tools would you suggest? What's your setup?


I've used reveal.js before and I was pretty happy with it:
https://revealjs.com/


Re: log_statement vs log_min_duration_statement

2023-09-26 Thread Maciek Sakrejda
On Tue, Sep 26, 2023, 12:34 Atul Kumar  wrote:

> What am I missing here to understand, because as per my understanding
> log_statement and log_min_duration_statement are correlated, postgres
> should log according to log_statement parameter.
>

The two settings are independent. One logs *all* statements in the
specified category, and the other *all* statements that execute longer than
the specified threshold. I don't believe there is a built-in way of getting
the behavior you're looking for.
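
For illustration, a minimal sketch of the two settings side by side (the
values here are arbitrary examples, not recommendations):

    -- log every DDL statement, regardless of how long it runs
    ALTER SYSTEM SET log_statement = 'ddl';
    -- additionally log any statement that runs longer than 500ms
    ALTER SYSTEM SET log_min_duration_statement = '500ms';
    SELECT pg_reload_conf();

Each setting applies independently: a fast DDL statement is logged because
of the first, a slow SELECT because of the second, and neither narrows the
other.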


Re: update from 13 to 16

2023-09-16 Thread Maciek Sakrejda
On Fri, Sep 15, 2023, 15:02 Adrian Klaver  wrote:

> On 9/15/23 9:55 AM, Martin Mueller wrote:
>
>> I am thinking of upgrading from version 13 to version 16.
>>
>> I think I can do this by leaving the data alone and just replacing the
>> software.  My data are extremely simple and consist of several hundred
>
> No you can't.
>
> You will need to either use pg_upgrade:
>
> https://www.postgresql.org/docs/16/pgupgrade.html
>

To elaborate on this, if you want to avoid copying the data, you may want
to take a look at pg_upgrade in link mode.
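
A rough sketch of what that invocation could look like (the paths here are
hypothetical; adjust them to your installation):

    pg_upgrade \
      --old-bindir /usr/lib/postgresql/13/bin \
      --new-bindir /usr/lib/postgresql/16/bin \
      --old-datadir /var/lib/postgresql/13/main \
      --new-datadir /var/lib/postgresql/16/main \
      --link

With --link, pg_upgrade hard-links the data files into the new cluster
instead of copying them, so it is fast and needs little extra disk
space--but once the new cluster has been started, the old one is no longer
safe to use.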



Re: PSQL = Yes ... JDBC = no ??

2023-09-03 Thread Maciek Sakrejda
On Sun, Sep 3, 2023, 16:25 Amn Ojee Uw  wrote:

> Are you saying that JDBC cannot handle or process \gexec, since it is
> PG-15 exclusive?
>

JDBC cannot handle or process \gexec since it is _psql_ exclusive. It's a
command interpreted and evaluated by that client specifically, not by the
Postgres server.
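
For example, this is pure psql (a sketch; the generated statements are
arbitrary):

    -- psql runs the query, then executes each result row as SQL itself;
    -- the server never sees \gexec
    SELECT format('CREATE TABLE t%s (id int)', g)
    FROM generate_series(1, 3) g \gexec

A JDBC client would instead run the generating query normally and then
execute each returned string as a separate statement.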



Re: Postgres SQL

2023-07-20 Thread Maciek Sakrejda
On Wed, Jul 19, 2023, 22:40 Anthony Apollis wrote:

> Hi
>
> Which list can I post SQL-related errors to?
>

This is a good place to start. If you're not already familiar with it, you
may also want to review
https://wiki.postgresql.org/wiki/Guide_to_reporting_problems to make it
easier to get help.

Thanks,
Maciek


Re: Nu-B here

2023-07-19 Thread Maciek Sakrejda
On Wed, Jul 19, 2023, 17:36 Amn Ojee Uw  wrote:

> After this command 'sudo -u postgres psql'
> I get this message :
> *could not change directory to "/home/my_account": Permission denied*
> *psql (12.15 (Debian 12.15-1.pgdg120+1))*
> *Type "help" for help.*
>
> I tried using help, but to no avail.
> What am I doing wrong?
>
It looks like you're hitting a non-fatal error at some point during psql
startup when you're running as the "postgres" user. It seems to be a
permissions issue. Do you still get the error if you run

sudo -u postgres psql --no-psqlrc

?

You may also want to create a Postgres user (possibly a superuser) matching
your operating system user, if you have not already done so, to avoid
having to deal with sudo when connecting.
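
Something like this, run once as the postgres superuser (a sketch;
substitute your account name for my_account):

    CREATE ROLE my_account LOGIN SUPERUSER;
    CREATE DATABASE my_account OWNER my_account;

After that, a plain "psql" from your own account should connect without
sudo, since psql defaults both the role and the database to your operating
system user name.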

Thanks,
Maciek


Re: pg_stats.avg_width

2023-05-19 Thread Maciek Sakrejda
Thanks, that makes sense. It was going to be my third guess, but it
seemed pretty wide for a TOAST pointer. Reviewing what goes in there,
though, it's reasonable: four 4-byte fields (raw size, external size,
value OID, and TOAST table OID) plus two header bytes add up to 18.

I assume this means that for unTOASTed but compressed data, it counts
the compressed size.

Would a doc patch clarifying this (and possibly linking to the
relevant TOAST docs [1]) be welcome? The current wording is pretty
vague. Something like

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 7c09ab3000..2814ac8007 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -7466,7 +7466,9 @@ SCRAM-SHA-256$iteration count:
        <structfield>stawidth</structfield> <type>int4</type>
       </para>
       <para>
-       The average stored width, in bytes, of nonnull entries
+       The average stored width, in bytes, of nonnull entries. For compressed
+       entries, counts the compressed size; for TOASTed data, the size of the
+       TOAST pointer (see TOAST).
       </para></entry>
      </row>

diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index bb1a418450..62184fe32b 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -3680,7 +3680,9 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
        <structfield>avg_width</structfield> <type>int4</type>
       </para>
       <para>
-       Average width in bytes of column's entries
+       Average width in bytes of column's entries. For compressed entries,
+       counts the compressed size; for TOASTed data, the size of the TOAST
+       pointer (see TOAST).
       </para></entry>
      </row>

(not sure what the right markup for the TOAST reference is here).

Thanks,
Maciek

[1]: https://www.postgresql.org/docs/current/storage-toast.html




pg_stats.avg_width

2023-05-17 Thread Maciek Sakrejda
Hello,

The pg_stats.avg_width field is documented [1] as "Average width in
bytes of column's entries" but it's not defined exactly what "entries"
means here with respect to STORAGE (the underlying pg_statistic
documentation doesn't clarify this either). I thought initially this
was the "logical" size of the values, but I ran an experiment that
suggests this interpretation is not right:

maciek=# create table foo(a text);
CREATE TABLE
maciek=# insert into foo(a) select string_agg(floor((random() *
10)::numeric)::text, '') from generate_series(1,1000000) g;
INSERT 0 1
maciek=# analyze foo;
ANALYZE
maciek=# select avg_width from pg_stats where tablename = 'foo' and
attname = 'a';
 avg_width
-----------
        18
(1 row)
maciek=# select length(a) from foo;
 length
---------
 1000000
(1 row)
maciek=# select reltoastrelid::regclass from pg_class where relname = 'foo';
       reltoastrelid
---------------------------
 pg_toast.pg_toast_6454708
(1 row)
maciek=# select sum(length(chunk_data)) from pg_toast.pg_toast_6454708;
  sum
--------
 724257
(1 row)

So the avg_width here appears to correspond to neither the logical
size nor the compressed toasted size. Am I missing something? Postgres
14.7 in case that matters.

Thanks,
Maciek

[1]: https://www.postgresql.org/docs/current/view-pg-stats.html




Re: Using CTID system column as a "temporary" primary key

2023-03-28 Thread Maciek Sakrejda
Note that VACUUM FULL and CLUSTER can update ctids. I don't believe regular
VACUUM can, so you should be safe from autovacuum interfering in this
scheme, but the ctid column documentation [1] states "A primary key should
be used to identify logical rows," so this is not exactly intended usage.
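
Even without a rewrite, any UPDATE gives a row a new ctid, since Postgres
creates a new row version. A quick sketch:

    CREATE TABLE t (x int);
    INSERT INTO t VALUES (1);
    SELECT ctid FROM t;   -- e.g. (0,1)
    UPDATE t SET x = 2;
    SELECT ctid FROM t;   -- now e.g. (0,2)

So a ctid is only stable for a row that is never updated and never moved by
a table rewrite.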

Thanks,
Maciek

[1]: https://www.postgresql.org/docs/current/ddl-system-columns.html


Re: Is there a way to know write statistics on an individual index

2022-12-08 Thread Maciek Sakrejda
On Thu, Dec 8, 2022 at 2:53 AM David Rowley  wrote:
>
> On Thu, 8 Dec 2022 at 19:59, higherone  wrote:
> > I know there's a pg_stats_user_indexes that gives statistics about the 
> > index scans/fetches, but I don't find any information about how many 
> > writes(insert/update/delete) happen on this individual index.
> > Is there some tool that could achieve this?
>
> Is this a partial index? If not, those should just increase at the
> same rate that's mentioned in pg_stat_user_tables for the table that
> the index belongs to.

Well, not deletes, right? And HOT updates won't affect indexes either,
so they should be subtracted from n_tup_upd.
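
As a rough approximation (a sketch, with my_table standing in for the real
table name), something like this estimates the number of index-tuple
insertions each non-partial index on a table has received:

    SELECT relname,
           n_tup_ins + (n_tup_upd - n_tup_hot_upd) AS approx_index_writes
    FROM pg_stat_user_tables
    WHERE relname = 'my_table';

Inserts and non-HOT updates each add an entry to every such index, while
deletes and HOT updates leave the indexes untouched until later cleanup.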




Re: PostgreSQL 14: pg_dump / pg_restore error: could not write to the communication channel: Broken pipe

2021-10-15 Thread Maciek Sakrejda
Anything interesting in the Postgres server log when this happens?
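
If it's not obvious where the server is logging, this may help (a sketch;
pg_current_logfile() requires the logging collector and returns NULL
otherwise):

    SELECT pg_current_logfile();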

On Fri, Oct 15, 2021, 05:21 Nick Renders  wrote:

> Hello,
>
> I have been trying to import a Postgres 11 database into Postgres 14,
> but the pg_restore command exits with the following message:
>
> pg_restore: error: could not write to the communication channel: Broken pipe
>
> The command I sent looks like this:
>
> /Library/PostgreSQL/14/bin/pg_restore -h localhost -p 48100 -U postgres
> -w -C -d template1 -j 24 -Fd /Volumes/Migration/dbname --verbose
>
>
> It seems that the multiple jobs parameter is the cause. If I specify "-j
> 1", the command works without problems. If I specify "-j 2" or higher, I
> get the above error after a few seconds.
>
> Postgres is running on a Mac Pro 12-core machine, so it has plenty of
> resources at its disposal. The config file is a copy of the Postgres 11
> configuration, which has no problem with multiple jobs.
>
>
> Furthermore, the pg_dump command seems to have the same issue as well.
> The following command:
>
> /Library/PostgreSQL/14/bin/pg_dump -h localhost -p 48100 -U postgres -w
> ServicePGR_UTF8 -j 24 -Fd -f /Volumes/Migration/dbname --verbose
>
> will stop prematurely with the following error:
>
> pg_dump: error: could not write to the communication channel: Broken pipe
>
>
> Does this sound familiar to anyone? Is it an issue with the new Postgres
> 14 release, or is there something else that might be causing this?
>
> Best regards,
>
> Nick Renders


GENERATED STORED columns and table rewrites?

2020-04-15 Thread Maciek Sakrejda
Hello,

Does adding a GENERATED STORED column need to rewrite the table (like
adding a column with a DEFAULT did before Postgres 11)? Neither the ALTER
TABLE docs [1] nor the generated column docs [2] discuss this. The former
has a very nice
Tip regarding DEFAULT columns--maybe we should clarify GENERATED STORED
behavior either way? It seems like a very similar issue. Experimenting with
this, \timing suggests that there is work proportional to the size of the
table, but I'd like to understand the behavior better (and other users
probably would, too).
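
One way to check directly (a sketch; the table and expression are made up)
is to watch the table's relfilenode, which changes whenever ALTER TABLE
rewrites the table:

    CREATE TABLE gtest (a int);
    INSERT INTO gtest SELECT g FROM generate_series(1, 100000) g;
    SELECT relfilenode FROM pg_class WHERE relname = 'gtest';
    ALTER TABLE gtest ADD COLUMN b int GENERATED ALWAYS AS (a * 2) STORED;
    SELECT relfilenode FROM pg_class WHERE relname = 'gtest';
    -- a changed relfilenode means the ADD COLUMN rewrote the table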

Thanks,
Maciek

[1]:
https://www.postgresql.org/docs/current/ddl-alter.html#DDL-ALTER-ADDING-A-COLUMN
[2]: https://www.postgresql.org/docs/current/ddl-generated-columns.html


Re: EXPLAIN BUFFERS and I/O timing accounting questions

2019-10-24 Thread Maciek Sakrejda
On Thu, Oct 24, 2019 at 2:25 PM Andres Freund  wrote:
> Note that the buffer access stats do *not* count the number of distinct
> buffers accessed, but that they are purely the number of buffer
> accesses.

You mean, even within a single node? That is, if a node hits a block ten
times, that counts as ten blocks hit? And if it reads a block and then
needs it three more times, that's one read plus three hits?

> Do you have an example?

Sure, here's the "abridged" plan:

[{ "Plan": {
  "Node Type": "Aggregate",
  "Plan Rows": 1,
  "Plan Width": 8,
  "Total Cost": 26761745.14,
  "Actual Rows": 1,
  "I/O Read Time": 234129.299,
  "I/O Write Time": 0,
  "Local Hit Blocks": 0,
  "Local Read Blocks": 0,
  "Local Dirtied Blocks": 0,
  "Local Written Blocks": 0,
  "Shared Hit Blocks": 4847762,
  "Shared Read Blocks": 1626312,
  "Shared Dirtied Blocks": 541014,
  "Shared Written Blocks": 0,
  "Temp Read Blocks": 0,
  "Temp Written Blocks": 4786,
  "Plans": [
{
  "Node Type": "ModifyTable",
  "Operation": "Delete",
  "Parent Relationship": "InitPlan",
  "Plan Rows": 13943446,
  "Plan Width": 6,
  "Total Cost": 25774594.63,
  "Actual Rows": 2178416,
  "I/O Read Time": 234129.299,
  "I/O Write Time": 0,
  "Local Hit Blocks": 0,
  "Local Read Blocks": 0,
  "Local Dirtied Blocks": 0,
  "Local Written Blocks": 0,
  "Shared Hit Blocks": 4847762,
  "Shared Read Blocks": 1626312,
  "Shared Dirtied Blocks": 541014,
  "Shared Written Blocks": 0,
  "Temp Read Blocks": 0,
  "Temp Written Blocks": 0,
  "Plans": ""
},
{
  "Node Type": "ModifyTable",
  "Operation": "Delete",
  "Parent Relationship": "InitPlan",
  "Plan Rows": 63897788,
  "Plan Width": 38,
  "Total Cost": 315448.53,
  "Actual Rows": 0,
  "I/O Read Time": 30529.231,
  "I/O Write Time": 0,
  "Local Hit Blocks": 0,
  "Local Read Blocks": 0,
  "Local Dirtied Blocks": 0,
  "Local Written Blocks": 0,
  "Shared Hit Blocks": 12964205,
  "Shared Read Blocks": 83260,
  "Shared Dirtied Blocks": 48256,
  "Shared Written Blocks": 0,
  "Temp Read Blocks": 4788,
  "Temp Written Blocks": 0,
  "Plans": ""
},
{
  "Node Type": "ModifyTable",
  "Operation": "Delete",
  "Parent Relationship": "InitPlan",
  "Plan Rows": 45657680,
  "Plan Width": 38,
  "Total Cost": 357974.43,
  "Actual Rows": 0,
  "I/O Read Time": 24260.512,
  "I/O Write Time": 0,
  "Local Hit Blocks": 0,
  "Local Read Blocks": 0,
  "Local Dirtied Blocks": 0,
  "Local Written Blocks": 0,
  "Shared Hit Blocks": 10521264,
  "Shared Read Blocks": 64450,
  "Shared Dirtied Blocks": 36822,
  "Shared Written Blocks": 0,
  "Temp Read Blocks": 4788,
  "Temp Written Blocks": 1,
  "Plans": ""
},
{
  "Node Type": "CTE Scan",
  "Parent Relationship": "Outer",
  "Plan Rows": 13943446,
  "Plan Width": 8,
  "Total Cost": 278868.92,
  "Actual Rows": 2178416,
  "I/O Read Time": 234129.299,
  "I/O Write Time": 0,
  "Local Hit Blocks": 0,
  "Local Read Blocks": 0,
  "Local Dirtied Blocks": 0,
  "Local Written Blocks": 0,
  "Shared Hit Blocks": 4847762,
  "Shared Read Blocks": 1626312,
  "Shared Dirtied Blocks": 541014,
  "Shared Written Blocks": 0,
  "Temp Read Blocks": 0,
  "Temp Written Blocks": 4786
}
  ]
}}]

Let me know if I removed anything I shouldn't have and I can follow up with
extra info.

>  I assume what's going on is that the cost of
> the CTE is actually attributed (in equal parts or something like that)
> to all places using the CTE. Do the numbers add up if you just exclude
> the CTE?

Not really--it looks like the full Shared Blocks Hit cost in the root is
the same as the CTE by itself. This is playing around with the plan in a
node console:

> p[0].Plan['Shared Hit Blocks']
4847762
> p[0].Plan.Plans.map(p => p['Node Type'])
[ 'ModifyTable', 'ModifyTable', 'ModifyTable', 'CTE Scan' ]
> p[0].Plan.Plans.map(p => p['Shared Hit Blocks'])
[ 4847762, 12964205, 10521264, 4847762 ]

> IIRC one can get multiple plans when there's a DO ALSO rule. There might
> be other ways to get there too.

Thanks, good to know.


Re: jsonb_set() strictness considered harmful to data

2019-10-23 Thread Maciek Sakrejda
On Wed, Oct 23, 2019 at 12:01 PM Stuart McGraw  wrote:
> Why the inconsistency between the array
> type and json type?  Are there any cases other than json where the entire
> compound value is set to NULL as a result of one of its components being
> NULL?

That's a great point. It does look like hstore's delete / minus
operator behaves like that, though:

=# select 'a=>1,b=>2'::hstore - null;
 ?column?
----------

(1 row)
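
And jsonb_set() itself shows the same pattern, since it is declared STRICT
(a minimal sketch):

    =# select jsonb_set('{"a": 1}'::jsonb, '{a}', NULL);
     jsonb_set
    -----------
     
    (1 row)

A NULL anywhere in the arguments nulls out the whole result rather than
just the targeted element, which is the hazard the subject line refers to.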




Re: EXPLAIN BUFFERS and I/O timing accounting questions

2019-10-23 Thread Maciek Sakrejda
Also, I noticed that in this plan, the root (again, an Aggregate) has 0
Temp Read Blocks, but two of its children (two of the ModifyTable nodes)
have non-zero Temp Read Blocks. Again, this contradicts the documentation,
as these costs are stated to be cumulative. Any ideas?

Thanks,
Maciek


EXPLAIN BUFFERS and I/O timing accounting questions

2019-10-22 Thread Maciek Sakrejda
Hello,

I ran across an EXPLAIN plan and had some questions about some of its
details. The BUFFERS docs say

> The number of blocks shown for an upper-level node includes those used by
> all its child nodes.

I initially assumed this would be cumulative, but I realized it's probably
not because some of the blocks affected by each child will actually
overlap. But this particular plan has a Shared Hit Blocks value at the root
(an Aggregate) that is smaller than those of some of its children (three
ModifyTables and a CTE Scan). This seems to contradict the documentation
(since even if children fully overlap in their buffer usage, the parent
should still have a cost equal to that of the costliest child)--any idea
what's up? I can send the whole plan (attached? inline? it's ~15kb) if that
helps.

I also noticed the I/O Read Time (from track_io_timing) of two children in
this plan is equal to the I/O Read Time in the root. Is I/O time
potentially fully parallelized across children? There are no parallel
workers according to the plan, so I'm surprised at this and would like to
understand better.

Also, a tangential question: why is the top-level structure of a JSON plan
an array? I've only ever seen one root node with a Plan key there.

Thanks,
Maciek