Re: Finding error in long input file

2024-07-10 Thread Hans Schou
On Wed, Jul 10, 2024 at 2:59 PM David G. Johnston <david.g.johns...@gmail.com> wrote:

>
> And what are the first few lines of the file? Use text, not screenshots.
>

Yes, the lines with 'INSERT':

grep -ni 'INSERT INTO' scripts/insert-addrs.sql


-- 
π•³π–†π–“π–˜ π•Ύπ–ˆπ–π–”π–š
☏ ➁➁ βž…βžƒ βž‡β“ͺ ➁β“ͺ


Re: Finding error in long input file

2024-07-10 Thread Hans Schou
If the file has the line breaks you show, you can turn it into multiple
'INSERT INTO' statements instead.

Search for lines starting with an opening parenthesis '(' and replace it
with the correct INSERT, and the trailing comma with a semicolon:
  cat i.sql | sed -e 's/^(/INSERT INTO foo VALUES(/' -e 's/,$/;/'
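
For example, with a hypothetical i.sql where each row of one big
multi-row INSERT is on its own line:

  (1,'foo'),
  (2,'bar'),

the sed command above turns it into:

  INSERT INTO foo VALUES(1,'foo');
  INSERT INTO foo VALUES(2,'bar');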

Does the file come from mysqldump? Then try option --extended-insert=FALSE

On Wed, Jul 10, 2024 at 2:53 PM Rich Shepard  wrote:

> On Tue, 9 Jul 2024, Craig McIlwee wrote:
>
> > The input file is 488 lines (presumably, since Rich said the file should
> > insert 488 rows). It seems like too much of a coincidence that the last
> > character of the last line is really the error. My guess is that there is
> > an unmatched character, perhaps a parenthesis, that is throwing off the
> > parser because it doesn't expect the statement to terminate yet. Maybe
> > that unmatched char really is on the last line, but '85250 Red House Rd'
> > doesn't seem like the issue. I don't know anything about the joe editor,
> but I'd hope that any decent editor with syntax highlighting would make it
> apparent where things went awry.
>
> Craig, et al.,
>
> I use emacs for scripts and coding, joe's only for small jobs.
>
> I added a line to the file so the bottom line is now 489. The attached
> image shows that line is the only one terminated with a semicolon rather
> than a comma.
>
> psql would tell me if there was no closing parenthesis on a line, if the
> terminating comma was missing, or other similar error, and would tell me
> the number of the line or following line. Having the error marked at the
> end of the file does not tell _me_ just where the error actually is.
>
> Partial screenshot attached.
>
> Thanks all,
>
> Rich



-- 
π•³π–†π–“π–˜ π•Ύπ–ˆπ–π–”π–š
☏ ➁➁ βž…βžƒ βž‡β“ͺ ➁β“ͺ


Oracle Linux 9 Detected RPMs with RSA/SHA1 signature

2024-06-12 Thread Hans Schou
Hi

On my test server I have Oracle Linux 8.10 installed.
Here I have installed PostgreSQL 16.1 from the postgresql.org repository.

Upgrade to Oracle Linux 9:
When doing a »leapp preupgrade --oraclelinux« I get the message below.

I want to have postgresql.org as my repo for PostgreSQL and Oracle Linux
for the rest. But it fails due to this SHA-1 signature issue.

As Oracle Linux 8 has had PostgreSQL 16.1 in its repo since April 2024, I
could just disable the pg repo and use the ol repo. But is this the
recommended way to do it?


Output from /var/log/leapp/leapp-report.txt

Risk Factor: high (inhibitor)
Title: Detected RPMs with RSA/SHA1 signature
Summary: Digital signatures using SHA-1 hash algorithm are no longer
considered secure and are not allowed to be used on OL 9 systems by
default. This causes issues when using DNF/RPM to handle packages with
RSA/SHA1 signatures as the signature cannot be checked with the default
cryptographic policy. Any such packages cannot be installed, removed, or
replaced unless the signature check is disabled in dnf/rpm or SHA-1 is
enabled using non-default crypto-policies. For more information see the
following documents:
  - Major changes in OL 9:
https://docs.oracle.com/en/operating-systems/oracle-linux/9/relnotes9.4/ol9-NewFeaturesandChanges.html
  - Security Considerations in adopting OL 9:
https://docs.oracle.com/en/operating-systems/oracle-linux/9/security/security-ImplementingAdditionalSecurityFeaturesandBestPractices.html#system-crypto-policies
 The list of problematic packages:
- libpq5 (DSA/SHA1, Fri 15 Sep 2023 12:11:13 PM CEST, Key ID
1f16d2e1442df0f8)
- postgresql16 (DSA/SHA1, Mon 20 Nov 2023 10:56:22 AM CET, Key ID
1f16d2e1442df0f8)
- pgdg-redhat-repo (DSA/SHA1, Thu 14 Sep 2023 02:41:37 PM CEST, Key ID
1f16d2e1442df0f8)
- postgresql16-libs (DSA/SHA1, Mon 20 Nov 2023 10:56:22 AM CET, Key ID
1f16d2e1442df0f8)
- postgresql16-contrib (DSA/SHA1, Mon 20 Nov 2023 10:56:23 AM CET, Key
ID 1f16d2e1442df0f8)
- postgresql16-server (DSA/SHA1, Mon 20 Nov 2023 10:56:22 AM CET, Key
ID 1f16d2e1442df0f8)
Related links:
- Major changes in OL 9:
https://docs.oracle.com/en/operating-systems/oracle-linux/9/relnotes9.4/ol9-NewFeaturesandChanges.html
- Security Considerations in adopting OL 9:
https://docs.oracle.com/en/operating-systems/oracle-linux/9/security/security-ImplementingAdditionalSecurityFeaturesandBestPractices.html#system-crypto-policies
Remediation: [hint] It is recommended that you contact your package vendor
and ask them for new builds signed with supported signatures and install
the new packages before the upgrade. If this is not possible you may
instead remove the incompatible packages.
Key: f16f40f49c2329a2691c0801b94d31b6b3d4f876
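
To check for yourself which installed packages carry SHA-1 signatures, a
sketch using rpm's query-format tags (adjust the grep as needed):

  rpm -qa --qf '%{NAME}-%{VERSION} %{SIGPGP:pgpsig} %{SIGGPG:pgpsig}\n' | grep -i sha1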

-- 
π•³π–†π–“π–˜ π•Ύπ–ˆπ–π–”π–š
☏ ➁➁ βž…βžƒ βž‡β“ͺ ➁β“ͺ


Re: utf8 vs UTF-8

2024-05-17 Thread Hans Schou
> test3   | troels   | UTF8 | libc| en_US.utf8 | en_US.utf8

It is wrong, but I guess it's working?

How did you create test3?
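
For what it's worth, on glibc systems both spellings usually refer to the
same locale. A sketch to check the installed spelling and to create a
database with an explicit one (test4 is a hypothetical name):

  locale -a | grep -i '^en_US'
  createdb --template=template0 --lc-collate='en_US.UTF-8' --lc-ctype='en_US.UTF-8' test4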

On Fri, May 17, 2024 at 2:44 PM Troels Arvin  wrote:

> Hello,
>
> In a Postgres installation, I have databases where the locale is
> slightly different. Which one is correct? Excerpt from "psql --list":
>
>   test1   | loc_test | UTF8 | libc| en_US.UTF-8 |
> en_US.UTF-8
>   test3   | troels   | UTF8 | libc| en_US.utf8 |
> en_US.utf8
>
> OS is Ubuntu.
>
> --
> Kind regards,
> Troels Arvin
>
>
>
>

-- 
π•³π–†π–“π–˜ π•Ύπ–ˆπ–π–”π–š
☏ ➁➁ βž…βžƒ βž‡β“ͺ ➁β“ͺ


Re: Password forgotten

2024-04-23 Thread Hans Schou
Have you tried

   sudo -u postgres psql
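
If that works (it uses local "peer" authentication), the old password
cannot be read back, but you can set a new one. A sketch, with
'new-password' as a placeholder:

   sudo -u postgres psql -c "ALTER USER postgres PASSWORD 'new-password';"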


On Tue, Apr 23, 2024 at 2:14 PM Arbol One  wrote:

> Hello.
> In my Debian box, after entering this command to psql-16, *psql -h
> localhost -U postgres*
>
> psql-16 asks for the password, which I have forgotten.
> So I get this message:
>
>
>
> *psql: error: connection to server at "localhost" (::1), port 5432 failed:
> FATAL:  password authentication failed for user "postgres" connection to
> server at "localhost" (::1), port 5432 failed: FATAL:  password
> authentication failed for user "postgres" *
>
> Is there a way I can retrieve this master password?
>
> Thanks in advance.
>
>
> --
> *ArbolOne.ca* Using Fire Fox and Thunderbird. ArbolOne is composed of
> students and volunteers dedicated to providing free services to charitable
> organizations. ArbolOne on Java Development is in progress [ í ]
>


-- 
π•³π–†π–“π–˜ π•Ύπ–ˆπ–π–”π–š
☏ ➁➁ βž…βžƒ βž‡β“ͺ ➁β“ͺ


Re: Feature request: pg_get_tabledef(text)

2024-03-31 Thread Hans Schou
On Wed, Nov 22, 2023 at 5:09 PM Laurenz Albe  wrote:

>
> One of the problems is what should be included.
> Indexes?  Policies?  Constraints?
>

An upper limit could be all objects except data: all the objects which
would be deleted by a 'DROP TABLE'.

Maybe including 'CASCADE'?

> No unsurmountable questions, but someone would have to come up with a
> clear design and implement it.
>

I gave it a try.
I'm not that skilled in plpgsql, so there is probably room for improvement.

https://github.com/chlordk/pg_get_tabledef

For your convenience here is a copy/paste of the function.

CREATE OR REPLACE FUNCTION pg_get_tabledef(TEXT)
RETURNS TEXT
LANGUAGE plpgsql
AS $_$
-- pg_get_tabledef ( text ) → text
-- Reconstructs the underlying CREATE command for a table and objects
-- related to a table.
-- (This is a decompiled reconstruction, not the original text of the command.)
DECLARE
    R TEXT;         -- Return result
    R_c TEXT;       -- Comments result, shown after the table definition
    rec RECORD;
    tmp_text TEXT;
    v_oid OID;      -- Table object id
    v_schema TEXT;  -- Schema
    v_table TEXT;   -- Table name
    rxrelname TEXT;
BEGIN
    rxrelname := '^(' || $1 || ')$';
    -- Get oid and schema
    SELECT c.oid, n.nspname, c.relname
      INTO v_oid, v_schema, v_table
      FROM pg_catalog.pg_class c
      LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
     WHERE c.relname OPERATOR(pg_catalog.~) rxrelname COLLATE pg_catalog.default
       AND pg_catalog.pg_table_is_visible(c.oid);
    -- If the table was not found, exit
    IF NOT FOUND THEN
        -- RAISE EXCEPTION 'Table % not found', $1;
        RETURN '-- Table not found: ''' || $1 || '''';
    END IF;
    -- Table comment first, column comments second; initialize variable R_c
    SELECT obj_description(v_oid) INTO tmp_text;
    IF LENGTH(tmp_text) > 0 THEN
        R_c := 'COMMENT ON TABLE ' || v_schema || '."' || v_table || '" IS ''' || tmp_text || ''';' || E'\n';
    ELSE
        R_c := '';
    END IF;
    R := 'CREATE TABLE ' || v_schema || '."' || v_table || '" (';
    -- Get columns
    FOR rec IN
        SELECT
            a.attname,
            pg_catalog.format_type(a.atttypid, a.atttypmod),
            (SELECT pg_catalog.pg_get_expr(d.adbin, d.adrelid, true)
               FROM pg_catalog.pg_attrdef d
              WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef),
            a.attnotnull,
            (SELECT c.collname
               FROM pg_catalog.pg_collation c, pg_catalog.pg_type t
              WHERE c.oid = a.attcollation AND t.oid = a.atttypid
                AND a.attcollation <> t.typcollation) AS attcollation,
            a.attidentity,
            a.attgenerated,
            a.attnum
        FROM pg_catalog.pg_attribute a
        WHERE a.attrelid = v_oid AND a.attnum > 0 AND NOT a.attisdropped
        ORDER BY a.attnum
    LOOP
        --RAISE NOTICE '% % %', rec.attnum, rec.attname, rec.format_type;
        IF rec.attnum > 1 THEN
            R := R || ','; -- comma between column definitions (none after the last)
        END IF;
        R := R || E'\n' || '"' || rec.attname || '" ' || rec.format_type;
        IF rec.attnotnull THEN
            R := R || ' NOT NULL';
        END IF;
        -- Comment on column
        SELECT col_description(v_oid, rec.attnum) INTO tmp_text;
        IF LENGTH(tmp_text) > 0 THEN
            R_c := R_c || 'COMMENT ON COLUMN ' || v_schema || '."' || v_table || '"."' || rec.attname || '" IS ''' || tmp_text || ''';' || E'\n';
        END IF;
    END LOOP; -- Columns
    -- Finalize table
    R := R || E'\n' || ');' || E'\n';
    -- Add COMMENTs
    IF LENGTH(R_c) > 0 THEN
        R := R || R_c;
    END IF;
    -- Indexes
    FOR rec IN
        SELECT pg_catalog.pg_get_indexdef(i.indexrelid, 0, true) AS indexdef
          FROM pg_catalog.pg_class c, pg_catalog.pg_class c2, pg_catalog.pg_index i
          LEFT JOIN pg_catalog.pg_constraint con
            ON (conrelid = i.indrelid AND conindid = i.indexrelid AND contype IN ('p','u','x'))
         WHERE c.oid = v_oid AND c.oid = i.indrelid AND i.indexrelid = c2.oid
         ORDER BY i.indisprimary DESC, c2.relname
    LOOP
        R := R || rec.indexdef || ';' || E'\n';
    END LOOP; -- Indexes
    RETURN R;
END;
$_$;
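
A quick usage sketch, assuming a table bar exists in database foo:

  psql -d foo -c "SELECT pg_get_tabledef('bar');"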



-- 
π•³π–†π–“π–˜ π•Ύπ–ˆπ–π–”π–š
☏ ➁➁ βž…βžƒ βž‡β“ͺ ➁β“ͺ


Re: Training and Certification

2023-12-06 Thread Hans Schou
On Wed, Dec 6, 2023 at 3:39 PM roger popa  wrote:

> Can you tell me if there exists an official PostgreSQL certification
> issued by postgresql.org?
>

No.


> Or others (like PearsonVue)?
>

EnterpriseDB has certifications. I think you can get some of them for free.
https://www.enterprisedb.com/accounts/register/biganimal


-- 
π•³π–†π–“π–˜ π•Ύπ–ˆπ–π–”π–š
☏ ➁➁ βž…βžƒ βž‡β“ͺ ➁β“ͺ


Feature request: pg_get_tabledef(text)

2023-11-22 Thread Hans Schou
Hi

Similar to pg_get_viewdef() and pg_get_functiondef(), it would be useful to
have a pg_get_tabledef() giving a full description of how a table is
defined.

Currently the table definition can be extracted with the command:

  pg_dump -d foo --schema-only --table=bar | egrep '^[^-]'

The psql command '\d bar' gives some of the same information, but not in a
format that can be used to create the table.

Extra:
With the pg_get_tabledef() function in place, it would be very close to
possible to implement pg_dump() within the system itself. So instead of
running:
  pg_dump -d foo
one could just run:
  psql -d foo -c 'SELECT pg_dump()'

The function could also be called from within a programming language like
Java/PHP.

pg_dump has a lot of options, some of which could become parameters of the
pg_dump() function. When using a cloud or other web service, this would be
an easy way to make an extra backup.

-- 
π•³π–†π–“π–˜ π•Ύπ–ˆπ–π–”π–š
☏ ➁➁ βž…βžƒ βž‡β“ͺ ➁β“ͺ


Re: backing up the data from a single table?

2019-09-13 Thread Hans Schou
On Fri, Sep 13, 2019 at 4:14 PM stan  wrote:

> Is there a way to "export" a single table, that can be easily re
> "imported"?
>

Export:

pg_dump --table=foo > foo.sql

Import:

cat foo.sql | psql
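
With explicit database names (sourcedb and targetdb are placeholders), a
sketch:

  pg_dump -d sourcedb --table=foo > foo.sql
  psql -d targetdb -f foo.sql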


Re: Memory settings

2019-06-29 Thread Hans Schou
Try running postgresqltuner.pl as suggested on
https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server and also
look at the other info there.
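
A sketch of how it can be invoked (the exact options are assumptions;
check the script's --help):

  perl postgresqltuner.pl --host=localhost --user=postgres --password=secret --database=template1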

After running for a few days with live data, run this cache_hit_ratio.sql
by Melvin Davidson:
SELECT pg_stat_database.datname,
       pg_stat_database.blks_read,
       pg_stat_database.blks_hit,
       round((pg_stat_database.blks_hit::double precision
              / (pg_stat_database.blks_read
                 + pg_stat_database.blks_hit
                 + 1)::double precision * 100::double precision)::numeric,
             2) AS cachehitratio
  FROM pg_stat_database
 WHERE pg_stat_database.datname !~ '^(template(0|1)|postgres)$'::text
 ORDER BY round((pg_stat_database.blks_hit::double precision
                 / (pg_stat_database.blks_read
                    + pg_stat_database.blks_hit
                    + 1)::double precision * 100::double precision)::numeric,
                2) DESC;

The real question is: Is your system slow?


On Sun, Jun 30, 2019 at 5:14 AM Daulat Ram  wrote:

> Hi team,
>
>
>
> Can you please suggest what will be  the suitable memory settings for
> Postgresql11 if we have 80gb RAM, 16 CPU's and OS Linux.
>
>
>
> If we set 25 % of total RAM then shared_buffers value will be 20GB. Will
> it be useful or we can set it any random value like 8g or 12gb.
>
>
>
> According to https://pgtune.leopard.in.ua/#/
>
> below are the suggested memory  values for 80gb RAM and 16 CPU.  I assume
> the values preferred for effective_cache_size = 60GB and shared_buffers =
> 20GB are too large.
>
>
>
> max_connections = 500
>
> shared_buffers = 20GB
>
> effective_cache_size = 60GB
>
> maintenance_work_mem = 2GB
>
> checkpoint_completion_target = 0.7
>
> wal_buffers = 16MB
>
> default_statistics_target = 100
>
> random_page_cost = 1.1
>
> effective_io_concurrency = 300
>
> work_mem = 6553kB
>
> min_wal_size = 1GB
>
> max_wal_size = 2GB
>
> max_worker_processes = 16
>
> max_parallel_workers_per_gather = 8
>
> max_parallel_workers = 16
>
>
>
> Please give your suggestions.
>
>
>
> Regards,
>
> Daulat
>
>
>


Re: software or hardware RAID?

2019-03-23 Thread Hans Schou
On Sun, Mar 17, 2019 at 11:54 PM Rory Campbell-Lange <r...@campbell-lange.net> wrote:

> We aren't sure whether to use software MDRaid or a MegaRAID card.
>

Never go with hardware RAID. I once had a breakdown on a hardware RAID
controller, and as it was special and not off-the-shelf, I could not move
the disks to another controller. I think it was a capacitor, maybe the
capacitor plague. The only thing I could do was restore to the day before,
and the customer lost one day's work.

From then on, I have only used software RAID.


Re: Installing PostgreSQL on Oracle Solaris

2019-03-23 Thread Hans Schou
On Sat, Mar 23, 2019 at 3:48 PM Igor Korot  wrote:

>
> You mean even running as "sudo"?
>

igor@solaris:/usr$ sudo bunzip2 < postgresql-9.6.1-S11.i386-64.tar.bz2 | tar xpf -

No, with that pipeline you are not running tar under sudo; the sudo applies
only to bunzip2, not to the tar after the pipe.

I would at least suggest:
  cd /usr/local/src
  sudo tar --bzip2 -xf /usr/postgresql-9.6.1-S11.i386-64.tar.bz2

depending a bit on which tar you have.

otherwise:
  cd /usr/local/src
  sudo tar xf <( bzcat /usr/postgresql-9.6.1-S11.i386-64.tar.bz2 )


Re: WSL (windows subsystem on linux) users will need to turn fsync off as of 11.2

2019-02-16 Thread Hans Schou
On Fri, Feb 15, 2019 at 1:34 AM Bruce Klein  wrote:

>
> If you are running Postgres inside Microsoft WSL
>

https://docs.microsoft.com/en-us/windows/wsl/faq
Who is WSL for?
This is primarily a tool for developers ...
---

One problem with WSL is that the I/O performance is not good, and that
might never be solved. So using WSL in production is not what it was meant
for.

WSL is called a "compatibility layer". When running WSL there is no Linux
kernel, even though "uname" says so. It is like WINE, where one can run
Windows binaries on Linux although there is no Windows OS.
https://en.wikipedia.org/wiki/Compatibility_layer

That said, WSL is a great tool for developers. Better than Cygwin.

./hans


Re: What is the tuplestore?

2018-12-10 Thread Hans Schou
When you get a "No space left on device" error even though there is plenty
of free space, it is sometimes caused by a lack of inodes.

Try running the command:
  df --inodes
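
For example, pointed at the cluster's data directory (the path is an
assumption; use your own PGDATA):

  df --inodes /var/lib/pgsql/9.6/data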


On Mon, Dec 10, 2018 at 4:56 PM Ron  wrote:

> Hi,
>
> v9.6.6
>
>
> 2018-12-07 06:21:55.504 EST 10.140.181.89(35868) CDS CDSLBXW 13748 SELECT
> PostgreSQL JDBC Driver 53100 ERROR:  could not write to tuplestore
> temporary
> file: No space left on device
>
> I see this in the pg_log file, but #1 can't figure out what "tuplestore"
> is
> (Google doesn't help), and #2 there's lots of space on all my file
> systems.
> data/base, where pgsql_tmp lives, has 96GB free.)
>
> Thanks
>
> --
> Angular momentum makes the world go 'round.
>
>


Re: Swap on postgres master server

2018-10-16 Thread Hans Schou
Are you sure the swap is actively used? Maybe it was just used during a
backup or something.

Look at swap-in/swap-out (si/so); both should be '0':
$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0     12  89344  46608 586384    0    0    12     8   30   86  0  0 99  0  0

If you want to see the amount of RAM used by each program including its
children, run this:
ps -A --sort -rss -o comm,pmem | awk '
  NR == 1 { print; next }  # keep the header line
  { a[$1] += $2 }          # sum %MEM per command name
  END {
    for (i in a) {
      printf "%-15s\t%s\n", i, a[i];
    }
  }
'
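
To see which processes actually hold swap, you can also sum the VmSwap
lines from /proc (a Linux-only sketch):

for p in /proc/[0-9]*; do
  # print swapped kB and process name for anything with VmSwap > 0
  awk '/^Name:/ {n=$2} /^VmSwap:/ && $2 > 0 {print $2, "kB", n}' "$p/status" 2>/dev/null
done | sort -rn | head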


On Tue, Oct 16, 2018 at 11:04 AM Nicola Contu  wrote:

> Hello,
> we are running Postgres 10.5 with master slave replication.
>
> These are our custom params
>
> archive_command = 'pgbackrest --stanza=cmdprod archive-push %p' # command
> to use to archive a logfile segment
> archive_mode = on # enables archiving; off, on, or always
> checkpoint_completion_target = 0.7 # checkpoint target duration, 0.0 - 1.0
> default_text_search_config = 'pg_catalog.english' #
> datestyle = 'iso, mdy' #
> effective_cache_size = 120GB #
> hot_standby = on# "on" allows queries during
> recovery
> lc_messages = 'en_US.UTF-8'  # locale for system
> error message
> lc_monetary = 'en_US.UTF-8'  # locale for monetary
> formatting
> lc_numeric = 'en_US.UTF-8'  # locale for number
> formatting
> lc_time = 'en_US.UTF-8'  # locale for time
> formatting
> listen_addresses = '*' # defaults to 'localhost', '*' = all
> log_autovacuum_min_duration = 1000ms# -1 disables, 0 logs all
> actions and
> log_checkpoints = on #
> log_line_prefix = '%t [%r] [%p]: [%l-1] db=%d,user=%u ' #
> log_lock_waits = on  # log lock waits >=
> deadlock_timeout
> log_min_duration_statement = 1000ms# -1 is disabled, 0 logs all
> statements
> log_statement = 'ddl'  # none, ddl, mod, all
> log_temp_files = 1024kB  # log temporary files equal
> or larger
> maintenance_work_mem = 2GB #
> max_connections = 220 #
> max_parallel_workers_per_gather = 8# taken from
> max_worker_processes
> max_wal_size = 2GB #
> min_wal_size = 1GB #
> pg_stat_statements.max = 1 #
> pg_stat_statements.track = all #
> port = 5432# port number which Postgres listen
> shared_buffers = 10GB #
> shared_preload_libraries = 'pg_stat_statements'  # (change
> requires restart)
> synchronous_standby_names = '1 ( "usnyh2" )' # comment out during upgrade
> track_activity_query_size = 16384# (change requires restart)
> track_io_timing = on #
> wal_buffers = 16MB #
> wal_keep_segments = 100 #
> wal_level = replica#  minimal, replica, or logical
> work_mem = 600MB #
>
> This server is on Centos 7 and the strange thing is that we see a lot of
> swap usage :
>
> [root@usnyh-cmd1 ~]# free -m
>               total        used        free      shared  buff/cache   available
> Mem:         257652        7555        5559       12804      244536      236036
> Swap:         16383        7326        9057
>
> 7GB used.
>
> But can't see it from any of the commands like top etc.
> I am sure it is postgres because it is the only service running on that
> machine.
>
> Is there anything we can do?
> On the sync slave, the usage is just 400MB.
>
> Any trick?
>
> Thanks a lot,
> Nicola
>


Re: Shared buffers increased but cache hit ratio is still 85%

2018-07-18 Thread Hans Schou
On Wed, Jul 18, 2018 at 10:44 AM Andreas Kretschmer  wrote:

>
> pg_stat_reset()
>

Thanks, I guess we can see the result in a few days.

BTW, strange function: it only resets the current database's statistics,
and it can't take a database as parameter.
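
A sketch of a workaround, connecting to each database in turn (the
database names are the ones from my cache-hit output):

  for db in acme777web acmelog acme777domain acme777booking; do
    psql -d "$db" -c 'SELECT pg_stat_reset();'
  done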


Shared buffers increased but cache hit ratio is still 85%

2018-07-18 Thread Hans Schou
Hi

I have this system with some databases and I have run the
cache_hit_ratio.sql script on it. It showed that the database
acme777booking had a ratio of 85%. I then changed shared_buffers from
0.5GB to 4GB, as the server has 16GB of physical RAM. After 6 days of
running I checked the ratio again and it is still 85%.
Am I doing something wrong, or should some history be cleared?


cache_hit_ratio.sql
     datname     | blks_read  |   blks_hit   | cachehitratio
-----------------+------------+--------------+---------------
 acme777web      |   50225009 |   3157586919 |         98.43
 acmelog         |     462198 |     14332508 |         96.88
 acme777domain   | 7540616252 | 119574349075 |         94.07
 acme777booking  |  337915568 |   1902310783 |         84.92
(4 rows)

pg_runtime.sql
           pg_start            |        runtime
-------------------------------+------------------------
 2018-07-12 06:49:48.654252+02 | 6 days 02:44:09.488978
(1 row)

get_version_num.sql
 Version text |  Num
--------------+-------
 9.1.9        | 90109
(1 row)

SELECT pg_stat_database.datname,
       pg_stat_database.blks_read,
       pg_stat_database.blks_hit,
       round((pg_stat_database.blks_hit::double precision
              / (pg_stat_database.blks_read
                 + pg_stat_database.blks_hit
                 + 1)::double precision * 100::double precision)::numeric,
             2) AS cachehitratio
  FROM pg_stat_database
 WHERE pg_stat_database.datname !~ '^(template(0|1)|postgres)$'::text
 ORDER BY round((pg_stat_database.blks_hit::double precision
                 / (pg_stat_database.blks_read
                    + pg_stat_database.blks_hit
                    + 1)::double precision * 100::double precision)::numeric,
                2) DESC;

OS: Red Hat Enterprise Linux Server release 6.4 (Santiago)


Re: Load data from a csv file without using COPY

2018-06-19 Thread Hans Schou
On Tue, Jun 19, 2018 at 10:17 PM Ravi Krishna  wrote:

> In order to test a real life scenario (and use it for benchmarking) I want
> to load large number of data from csv files.
> The requirement is that the load should happen like an application writing
> to the database ( that is, no COPY command).


Once you have parsed the data, it is fairly easy to use the PostgreSQL
"COPY FROM stdin" format, with a tabulator as separator. For a simple
table (t1) it could look like:

COPY t1 (f1,f2) FROM stdin;
3	Joe
7	Jane
\.

This data can be piped directly to psql, and it will be fast.

Note: NULL should be '\N', see manual:
https://www.postgresql.org/docs/current/static/sql-copy.html

It is the same kind of data you get with pg_dump.
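
A sketch of feeding a simple CSV file this way (assuming table t1 exists
in database mydb and data.csv contains no embedded commas or quotes):

  { echo 'COPY t1 (f1,f2) FROM stdin;'
    tr ',' '\t' < data.csv
    printf '\\.\n'
  } | psql -d mydb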

./hans


Grant to a group defined in Windows AD

2018-04-10 Thread Hans Schou
Hi

Can I get some advice on how to handle groups?

In my Windows AD (Active Directory) I have two groups named:
  readers
  writers

In Postgresql I have these databases:
  d1
  d2

The "writers" should have NOCREATEDB etc but ALL PRIVILEGES to d1 and d2.

The "readers" should have SELECT to all tables in d1 and no access at all
to d2.

It seems like I can use either a group ROLE or a SCHEMA to accomplish my
goal. Which one will be the best/simplest to administer for me and my
colleagues? Or is there another approach we can use?

Here is my draft of a daily-run PowerShell script which will get all the
users in the group "readers" from the Windows AD and create them in
PostgreSQL:

$LdapCon = "LDAP://CN=readers,OU=specimen,DC=example,DC=org"
Write-Host "-- Get group members from: $($LdapCon)"
$Group = [ADSI]$LdapCon
$Group.Member | ForEach-Object {
$Searcher = [adsisearcher]"(distinguishedname=$_)"
$u = $($searcher.FindOne().Properties.samaccountname).ToLower()
Write-Host "CREATE ROLE `"$u`";"
Write-Host " ALTER ROLE `"$u`" WITH LOGIN;"
Write-Host " GRANT SELECT ... readers ...;"
}

And then I pipe the output to psql.exe.
The output looks like:
  CREATE ROLE "joe";
   ALTER ROLE "joe" WITH LOGIN;
   GRANT SELECT ... readers ...;
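
For comparison, a sketch of the group-role approach for "readers" in d1
(the exact GRANTs are assumptions):

psql -d d1 <<'SQL'
CREATE ROLE readers NOLOGIN;
GRANT CONNECT ON DATABASE d1 TO readers;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readers;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readers;
GRANT readers TO "joe";
SQL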

PS: To get a list of your own groups use Powershell:

([ADSISEARCHER]"samaccountname=$($env:USERNAME)").Findone().Properties.memberof


Re: wiki Disk Usage, table size: ERROR: could not open relation with OID 0

2018-03-15 Thread Hans Schou
On Thu, Mar 15, 2018 at 12:44 AM, Tom Lane  wrote:

>
> The query does fail on < 9.2, because on rows with no reltoastrelid
>

That's fine. I will live with that until the upgrade.


> But hey, it's a wiki;
> if you feel more ambitious, edit away.
>

I tried but it said:
"The site you are trying to log in to (the postgresql wiki) requires a
cool-off period between account creation and logging in. Please try again
later, or contact the postgresql.org webmasters if you have an urgent need
to log in."


Re: wiki Disk Usage, table size: ERROR: could not open relation with OID 0

2018-03-15 Thread Hans Schou
On Thu, Mar 15, 2018 at 12:32 AM, Michael Paquier  wrote:


> > I'm running version 9.1.9 so it should be working according to the
> > wiki.
>
> You should update and upgrade.  9.1 has fallen out of community support
>

I will recommend that to the database owner. Thanks


wiki Disk Usage, table size: ERROR: could not open relation with OID 0

2018-03-14 Thread Hans Schou
Hi

I got the message
  ERROR: could not open relation with OID 0
when running the "General Table Size Information" query from
https://wiki.postgresql.org/wiki/Disk_Usage

This patch makes the query run, but it then skips tables that have no
TOAST relation:
@@ -12,5 +12,6 @@
   FROM pg_class c
   LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
   WHERE relkind = 'r'
+  AND reltoastrelid!=0
   ) a
 ) a;

But I guess it was supposed to give the size of all tables.
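
An alternative that keeps all tables listed would be to make the TOAST
size call NULL-safe instead of filtering rows; pg_total_relation_size()
is strict, so it returns NULL for a NULL argument (a sketch, untested on
9.1), replacing the toast_bytes line with:

  , pg_total_relation_size(NULLIF(reltoastrelid, 0)) AS toast_bytes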

I'm running version 9.1.9 so it should be working according to the wiki.

The original statement:

SELECT *, pg_size_pretty(total_bytes) AS total
, pg_size_pretty(index_bytes) AS INDEX
, pg_size_pretty(toast_bytes) AS toast
, pg_size_pretty(table_bytes) AS TABLE
  FROM (
  SELECT *, total_bytes-index_bytes-COALESCE(toast_bytes,0) AS
table_bytes FROM (
  SELECT c.oid,nspname AS table_schema, relname AS TABLE_NAME
  , c.reltuples AS row_estimate
  , pg_total_relation_size(c.oid) AS total_bytes
  , pg_indexes_size(c.oid) AS index_bytes
  , pg_total_relation_size(reltoastrelid) AS toast_bytes
  FROM pg_class c
  LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE relkind = 'r'
  ) a) a;


Any help much appreciated.

./best regards


Re: Deadlock with one table - PostgreSQL is doing it right

2017-12-22 Thread Hans Schou
2017-12-21 21:50 GMT+01:00 Rakesh Kumar :

> whenever SQLERROR EXIT ROLLBACK
>

Thanks. You are absolutely right.
After starting with:
   WHENEVER SQLERROR EXIT ROLLBACK
the process getting the deadlock will exit to the command prompt (with
%ERRORLEVEL% = 0).

So what I actually found out was that Oracle has a strange combination of
default values:
1. AUTOCOMMIT = 0
2. Don't exit/rollback on deadlock


Re: Deadlock with one table - PostgreSQL is doing it right

2017-12-21 Thread Hans Schou
2017-12-21 12:41 GMT+01:00 Rakesh Kumar :

> Could it be that the tool you are using in Oracle is doing commit while
> exiting out due to Deadlock, because there is no explicit rollback.
>

The tool I'm using is "sqlplus". By default you are always in a
transaction, and auto-commit only occurs on exit.
Please note that Oracle leaves the table with a half-done transaction,
i.e. only one row is updated.


Re: Deadlock with one table - PostgreSQL is doing it right

2017-12-21 Thread Hans Schou
2017-12-21 17:46 GMT+01:00 Jeremy Finzel :

> It's hard to follow how the 2 videos relate, because you don't run the
> same SQL both places.  You first update where i = 2 in Postgres and i = 1
> in Oracle.
>

Well, OK. I made a new one for PostgreSQL: https://youtu.be/En8EFv90yCc
Now with the same background color.


Deadlock with one table - PostgreSQL is doing it right

2017-12-21 Thread Hans Schou
Hi

FYI - if it has any interest

During my preparation for describing what happens when two processes
update the same row in a table, I found that PostgreSQL is doing it right
and Oracle is doing it wrong.

The situation is a process which gets a deadlock but, because it is a
script, sends a commit anyway. This is bad behavior by humans, but that's
how they are.

After both processes commit, the table should be:
 i |  n
---+---
 1 | 11
 2 | 21
In Oracle it is:
 i |  n
---+---
 1 | 11
 2 | 22

PostgreSQL: https://youtu.be/rH-inFRMcvQ
Oracle: https://youtu.be/l2IGoaWql64
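
The table used in both sessions can be set up like this (a sketch; the
initial values 10 and 20 are inferred from the expected result above):

  CREATE TABLE t (i int PRIMARY KEY, n int);
  INSERT INTO t VALUES (1, 10), (2, 20);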

PostgreSQL:
A
select * from t;
begin;
update t set n=n+1 where i=2;

B
begin;
update t set n=n+1 where i=1;
update t set n=n+1 where i=2;

A
update t set n=n+1 where i=1;

B
commit;

A
commit;

best regards
hans