Re: Tablespace column value null on select * from pg_tables

2019-07-28 Thread Adrian Klaver

On 7/28/19 1:24 PM, Peter J. Holzer wrote:

On 2019-07-16 19:20:24 +, Alex Williams wrote:

1. postgres=# ALTER DATABASE mydatabase SET TABLESPACE pg_default;

ERROR:  some relations of database "mydatabase" are already in tablespace 
"pg_default"

HINT:  You must move them back to the database's default tablespace before 
using this command.


What's the reason for this error? Wouldn't it be simpler to leave
relations alone which are already in the correct tablespace?


~/src/backend/commands/dbcommands.c

/*
 * Check for existence of files in the target directory, i.e., objects of
 * this database that are already in the target tablespace.  We can't
 * allow the move in such a case, because we would need to change those
 * relations' pg_class.reltablespace entries to zero, and we don't have
 * access to the DB's pg_class to do so.
 */
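
A quick way to see which relations trigger this, as a sketch only (run inside the affected
database; 'pg_default' stands in for the target tablespace from the example above). These
are the same rows for which pg_tables shows a non-NULL tablespace:

  -- relations whose pg_class.reltablespace is set explicitly to the target tablespace
  SELECT c.relname, t.spcname
  FROM pg_class c
  JOIN pg_tablespace t ON t.oid = c.reltablespace
  WHERE t.spcname = 'pg_default';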

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Tablespace column value null on select * from pg_tables

2019-07-28 Thread Tom Lane
"Peter J. Holzer"  writes:
> On 2019-07-16 19:20:24 +, Alex Williams wrote:
>> 1. postgres=# ALTER DATABASE mydatabase SET TABLESPACE pg_default;
>> ERROR:  some relations of database "mydatabase" are already in tablespace 
>> "pg_default"
>> HINT:  You must move them back to the database's default tablespace before 
>> using this command.

> What's the reason for this error? Wouldn't it be simpler to leave
> relations alone which are already in the correct tablespace?

It'd be simpler, it'd also be wrong.

The difficulty here is that ALTER DATABASE executes from a different
database, so it can't examine or change pg_class.reltablespace, and
therefore can't fix values that it would make inconsistent.

So imagine that we have a database in tablespace X, containing one
table T1 in tablespace Y.  All the other entries in that database's
reltablespace column are zero signifying "it's in the database's default
tablespace".  The one reference to tablespace X is in the shared catalog
pg_database, where we can see and change it during ALTER DATABASE.

Now let's do ALTER DATABASE ... SET TABLESPACE Y, and suppose that the
error check is not there.  We move everything in X into Y, and we update
pg_database.dattablespace, but we don't change the reltablespace column.
Things are not actually broken at this point; T1's reltablespace says
Y and indeed that's where it is.

Now let's do ALTER DATABASE ... SET TABLESPACE Z.  From outside the
database, all we can see is that all its files are physically in Y,
so we move them all to Z.  Including T1.  But its reltablespace entry
still says Y, so now things *are* broken.

(Another small issue here is that table relfilenodes are only
guaranteed unique per tablespace per database, so that there is
a hazard of OID collision between T1 and some other table being
moved into tablespace Y.  We could fix that, but only if we
can change pg_class.relfilenode...)

So this can't be fixed without having a way to access and change
catalogs from a different database, which is a large can of worms
we don't want to open for such a small feature.
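
A sketch of the workaround the HINT points at (the name "mydb_ts" below is only a
stand-in for the database's current default tablespace):

  -- inside mydatabase: move the offending relations back to the database's
  -- default tablespace first
  ALTER TABLE ALL IN TABLESPACE pg_default SET TABLESPACE mydb_ts;
  ALTER INDEX ALL IN TABLESPACE pg_default SET TABLESPACE mydb_ts;

  -- then, connected to a different database (e.g. postgres), retry the move
  ALTER DATABASE mydatabase SET TABLESPACE pg_default;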

regards, tom lane




Re: Tablespace column value null on select * from pg_tables

2019-07-28 Thread Peter J. Holzer
On 2019-07-16 19:20:24 +, Alex Williams wrote:
> 1. postgres=# ALTER DATABASE mydatabase SET TABLESPACE pg_default;
> 
> ERROR:  some relations of database "mydatabase" are already in tablespace 
> "pg_default"
> 
> HINT:  You must move them back to the database's default tablespace before 
> using this command.

What's the reason for this error? Wouldn't it be simpler to leave
relations alone which are already in the correct tablespace?

hp

-- 
   _  | Peter J. Holzer    | we build much bigger, better disasters now
|_|_) |                    | because we have much more sophisticated
| |   | h...@hjp.at        | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson




Re: Why does backend send buffer size hardcoded at 8KB?

2019-07-28 Thread Peter J. Holzer
On 2019-07-27 19:10:22 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > On 2019-07-27 18:34:50 -0400, Tom Lane wrote:
> >> Yeah.  The existing commentary about that is basically justifying 8K
> >> as being large enough to avoid performance issues; if somebody can
> >> show that that's not true, I wouldn't have any hesitation about
> >> kicking it up.
> 
> > You think that unnecessary fragmentation, which I did show, isn't good
> > enough? That does have a cost on the network level, even if it possibly
> > doesn't show up that much in timing.
> 
> I think it is worth doing some testing, rather than just blindly changing
> buffer size, because we don't know how much we'd have to change it to
> have any useful effect.

I did a little test with nttcp between two of our servers (1 Gbit to
different switches, switches connected by 10 Gbit). The difference
between a 1024 byte buffer and a 1460 byte buffer is small but
measurable. Anything larger doesn't make a difference. So increasing the
buffer beyond 8 kB probably doesn't improve performance on a 1 Gbit LAN.

I didn't test 10 Gbit LAN or WAN - those might be different.

hp

-- 
   _  | Peter J. Holzer    | we build much bigger, better disasters now
|_|_) |                    | because we have much more sophisticated
| |   | h...@hjp.at        | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson




Re: logging "raise" to file

2019-07-28 Thread wambacher
Thanks Georg,

regards
walter

> the redirection at the end writes your "raise notice" output (which psql
> prints on stderr) to your log file:
>
> psql (+your connection string) -f /path/to/file.sql > /path/to/log/xxx.log 2>&1

-- 
My projects:

Admin Boundaries of the World 
Missing Boundaries

Emergency Map 
Postal Code Map (Germany only) 
Fools (QA for zipcodes in Germany) 
Postcode Boundaries of Germany 
OSM Software Watchlist



Re: logging "raise" to file

2019-07-28 Thread Georg H.

Hi,

On 28.07.2019 at 12:32, wambac...@posteo.de wrote:


Hi,

is there a way to log output from "raise ..." to a local file? \o file 
does not work (for me).


regards
walter



the redirection at the end writes your "raise notice" output (which psql 
prints on stderr) to your log file:

psql (+your connection string) -f /path/to/file.sql > /path/to/log/xxx.log 2>&1


see

https://dba.stackexchange.com/questions/107199/how-to-log-custom-messages-from-inside-a-postgresql-transaction
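
A minimal way to try it out (default client_min_messages assumed); psql prints the
resulting NOTICE on stderr, which is what the 2>&1 above captures:

  SET client_min_messages = notice;   -- the default, shown only for clarity
  DO $$ BEGIN RAISE NOTICE 'hello from plpgsql'; END $$;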

regards

Georg



Re: logging "raise" to file

2019-07-28 Thread Guillaume Lelarge
Hi,

On Sun, 28 Jul 2019 at 12:32,  wrote:

> Hi,
>
> is there a way to log output from "raise ..." to a local file? \o file
> does not work (for me).
>

No, RAISE messages can only go to log files. You would need to call a
function that could write to a file (though without calling RAISE).
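
One possible shape for such a function, purely as a sketch (it assumes superuser rights
or the pg_execute_server_program role, and an illustrative server-side path that the
postgres OS user can write to):

  CREATE OR REPLACE FUNCTION log_to_file(msg text) RETURNS void
  LANGUAGE plpgsql AS $$
  BEGIN
    -- append one timestamped line to a server-side file
    EXECUTE format('COPY (SELECT %L) TO PROGRAM %L',
                   clock_timestamp() || ' ' || msg,
                   'cat >> /tmp/plpgsql_custom.log');
  END;
  $$;

  -- usage:
  -- SELECT log_to_file('something happened');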


-- 
Guillaume.


logging "raise" to file

2019-07-28 Thread wambacher
Hi,

is there a way to log output from "raise ..." to a local file? \o file
does not work (for me).

regards
walter

-- 
My projects:

Admin Boundaries of the World 
Missing Boundaries

Emergency Map 
Postal Code Map (Germany only) 
Fools (QA for zipcodes in Germany) 
Postcode Boundaries of Germany 
OSM Software Watchlist



RE: Hardware for writing/updating 12,000,000 rows per hour

2019-07-28 Thread farjad.farid
HI Arya,

Probably the easiest solution is to use cloud computing with a dedicated network.

Good luck.




From: Neil 
Sent: 2019 July 28 01:33
To: Arya F 
Cc: pgsql-general@lists.postgresql.org; farjad.farid 
; Alvaro Herrera ; 
Tom Lane ; Ron 
Subject: Re: Hardware for writing/updating 12,000,000 rows per hour


On Jul 27, 2019, at 11:55 AM, Arya F <arya6...@gmail.com> wrote:

On Sat, Jul 27, 2019 at 11:49 AM farjad.farid 
<farjad.fa...@checknetworks.com> wrote:
With this kind of design requirement it is worth considering hardware "failure 
& recovery". Even SSDs can and do fail.

It is not just a matter of speed. RAID disks of some kind, depending on 
the budget, are worth the effort.
-Original Message-
From: Alvaro Herrera <alvhe...@2ndquadrant.com>
Sent: 2019 July 26 22:39
To: Arya F <arya6...@gmail.com>
Cc: Tom Lane <t...@sss.pgh.pa.us>; Ron <ronljohnso...@gmail.com>; 
pgsql-general@lists.postgresql.org
Subject: Re: Hardware for writing/updating 12,000,000 rows per hour

On 2019-Jul-26, Arya F wrote:

> I think I can modify my application to do a batch update. Right now
> the server has an HDD and it really can't handle a lot of updates and
> inserts per second. Would changing to a regular SSD be able to easily
> do 3000 updates per second?

That's a pretty hard question in isolation -- you need to consider how many 
indexes there are to update, whether the updated columns are indexed or not, 
what the datatypes are, how much locality of access you'll have ... I'm 
probably missing some other important factors.  (Of course, you'll have to tune 
various PG server settings to find your sweet spot.)
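
Purely as an illustration of the kind of settings meant here (the values are
placeholders for experimentation, not recommendations):

  ALTER SYSTEM SET checkpoint_timeout = '15min';
  ALTER SYSTEM SET max_wal_size = '8GB';
  ALTER SYSTEM SET checkpoint_completion_target = 0.9;
  SELECT pg_reload_conf();  -- these three can be picked up without a restart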

I suggest that you should be measuring instead of trying to guess.  A reasonably 
cheap way is to rent a machine somewhere with the type of hardware you think 
you'll need, and run your workload there for long enough, making sure to 
carefully observe important metrics such as table size, accumulated bloat, 
checkpoint regime, overall I/O activity, and so on.
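
A few examples of queries that can be used to watch those metrics during such a run
(a sketch using the standard statistics views):

  -- checkpoint activity over the test window
  SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint, buffers_backend
  FROM pg_stat_bgwriter;

  -- overall database growth
  SELECT pg_size_pretty(pg_database_size(current_database()));

  -- accumulated dead tuples (bloat pressure) per table
  SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
  FROM pg_stat_user_tables
  ORDER BY n_dead_tup DESC LIMIT 10;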

--
Álvaro Herrera
https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Hi Farjad
I was thinking of having physical or logical replication. Or is having RAID a 
must if I don’t want to lose data?

Arya,

Answering the question of what hardware would work better requires a lot of 
thought. I would suggest that you hire someone who can do the calculations to 
get you in the ballpark and provide other pros and cons.  Then test with 
rented or cloud equipment like Álvaro suggested.  The following are some basic 
back-of-the-envelope calculations to show you the thought process; the numbers 
presented here are probably way off and need to be based on your hardware and 
research.

The first thing you need to understand is how much data you are going to be 
moving around.  You mentioned ’12,000,000 writes/updates per hour’ and the size 
of the tuples getting updated at about 2k.  That is about 3,333 per second (as 
someone else already noted). That translates to about 3,333 * 2000 = 6.6 MB/sec 
of basic data movement if they are all inserts, not including index updates, 
replication, vacuum, etc.  If they are all updates then you can double that.  
So let's say half of them are updates; that means basic data movement is 2x for 
the updates and 1x for the inserts, which changes it to 9.9 MB/sec.  Let's say 
the index keys total 200 bytes; then we have 3,330 * 200 * 1.5 (half updates, 
half inserts) = 0.99 MB/sec.  If you have an existing system all of these things 
can be measured.
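
Spelled out as a quick sanity check (a sketch; small differences from the figures
above are just rounding):

  SELECT 12000000 / 3600.0        AS rows_per_sec,        -- ~3333
         3333 * 2000 / 1e6        AS insert_mb_per_sec,   -- ~6.6
         3333 * 2000 * 1.5 / 1e6  AS mixed_mb_per_sec,    -- ~9.9
         3333 * 200 * 1.5 / 1e6   AS index_mb_per_sec;    -- ~1.0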

This brings the total to 10.9 MB/sec sustained operation minimum, not including 
WAL, OS, Vacuum processing, etc., and provided the data is being processed 
evenly over time, which it never is.  This will start to tax a standard HDD, 
since it has to handle the OS, the WAL log (about the same I/O as the database), 
and the database itself, which probably puts the sustained usage above 22 MB/sec 
against a single drive handling a little over 40 MB/sec (and most drives do not 
perform at their rating). Because data is not processed evenly over time, you 
also need to account for peaks in the processing and multiply this data rate by 
a factor (which is impossible to predict without knowing your data access 
pattern).  So, given that your HDD is not handling the load, we already suspect 
that there is more going on than you have described and that a single hard 
drive is nowhere near adequate.

Now let's think about architecture.

Best practice would say to have the OS on one drive, the WAL log on another 
drive, and the database on a third drive.  The reason is that you can probably 
get 40+ MB/sec on each HDD.  Of course, server-grade hardware with 15,000 RPM 
HDDs would perform better.  If you only have one drive, then the OS, the WAL 
log, and the database are competing for the 

Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-28 Thread Luca Ferrari
On Fri, Jul 26, 2019 at 9:21 PM PegoraroF10  wrote:
> So, is it better to configure autovacuum properly for these tables or should
> I run vacuum periodically?
> Obviously I'll check our script too.
>

My guess would be that either you have disabled autovacuum on such
tables (I don't know if that is possible for system tables, but for
regular tables it is) or your script is running too frequently to let
autovacuum proceed on the system tables. I would bet on the latter.
It also seems to me that you are creating and deleting a lot of stuff,
which bloats the catalog. Could it be that your script is keeping a
transaction open (something like a try in a loop)?
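
Two quick checks along those lines (a sketch using the standard statistics views):

  -- is autovacuum keeping up with the catalog churn?
  SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
  FROM pg_stat_sys_tables
  WHERE relname IN ('pg_class', 'pg_attribute', 'pg_type', 'pg_depend')
  ORDER BY n_dead_tup DESC;

  -- any long-open transactions that would keep autovacuum from reclaiming rows?
  SELECT pid, state, xact_start, now() - xact_start AS xact_age
  FROM pg_stat_activity
  WHERE xact_start IS NOT NULL
  ORDER BY xact_start;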