pg_dump with compressible and non-compressible tables

2018-05-05 Thread Ron

Hi,

v9.6

We've got big databases where some of the tables are highly compressible, 
but some have many bytea fields containing PDF files.


When the dump format is custom, directory or tar, how feasible would a 
"--no-blob-compression" option be (where pg_dump tells the zlib library to 
simply Store tables with bytea columns, while compressing all other tables 
at the specified -Z level)?


Thanks

--
Angular momentum makes the world go 'round.



Re: Enhancement to psql command, feedback.

2018-05-09 Thread Ron

On 05/09/2018 02:59 AM, John McKown wrote:
I just wanted to throw this out to the users before I made a complete fool 
of myself by formally requesting it. But I would like what I hope would be 
a minor change (enhancement) to the psql command. If you look on this 
page, https://wiki.postgresql.org/wiki/Shared_Database_Hosting ,

you will see a number of examples which look like:

psql -U postgres template1 -f - << EOT

REVOKE ALL ON DATABASE template1 FROM public;
REVOKE ALL ON SCHEMA public FROM public;
GRANT ALL ON SCHEMA public TO postgres;
CREATE LANGUAGE plpgsql;

EOT

To me this looks similar to a UNIX shell script.


Because it *is* a Unix shell script.  The "<< EOT" is part of a heredoc, 
which is designed to keep everything in one place instead of needing a 
second file for the SQL commands.


https://en.wikipedia.org/wiki/Here_document

(The concept is as old as computing.  Anyone who's worked on mainframes or 
proprietary minicomputers from DEC will instantly recognize it.)


--
Angular momentum makes the world go 'round.


Re: Rationale for aversion to the central database?

2018-04-27 Thread Ron



On 04/27/2018 05:52 PM, g...@luxsci.net wrote:


On April 24, 2018 07:27:59 am PDT, "Sam Gendler" wrote:
On Sun, Apr 8, 2018 at 15:37 g...@luxsci.net wrote:

On April 8, 2018 02:40:46 pm PDT, "Guyren Howe" wrote:

One advantage to using logic and functions in  the db is that you can
fix things immediately without having to make new application builds.
That in itself is a huge advantage, IMO.

I doubt most of us would consider this any kind of advantage outside of 
the momentary temptation to do it when an app is completely broken and 
needs to be up in a hurry. Application changes, whether in the dB or in 
application logic, need to be tested, and they need to be revision 
controlled and released in a manner that can be easily rolled back in an 
automated manner. The fact that putting logic in the database can 
effectively allow developers to make unreleased changes to production apps 
is specifically one of the problems that I am trying to avoid when I keep 
most logic in the app instead of the dB. It’s a whole lot harder to make 
arbitrary manual changes to code in the app, whether interpreted or 
compiled, if it is running inside a container that cannot be updated. Even 
if you go in with a shell and update an interpreted file, the next time 
that container is launched the change will be lost, which is usually 
sufficient motivation to keep devs from doing that kind of thing.

I'll put some things in the db, either for performance or because I want 
that logic to be built into the data and not be part of the application, 
but I choose those contexts carefully and I write them in as portable a 
manner as possible. And for those who say migrations don’t happen, I’ve 
certainly been through a few, usually as part of an acquisition or the 
like, but sometimes simply because another dB server better meets our 
needs after a time. And migrating stored procs can be really difficult. 
Such code usually has less complete unit and integration tests, which 
makes validating those changes more difficult, too.

But the biggest reason is that databases often have to scale up rather 
than out, so keeping as much logic in the application code allows my 
scaling requirements for the dB server to be as minimal as possible. Sure, 
there are workloads where pushing raw data across the wire will be more 
work than processing it in the dB, and in those cases, I may do that, but 
I consider it premature optimization to just assume that is necessary 
without hard evidence from production examples to suggest otherwise.

Finally, there’s the consistency argument. I want to find all of the logic 
in one place. Either entirely in the source code or entirely in the dB. 
Having to trace things from the code to the dB and back again can make it 
a whole lot harder to see, at a glance, what is happening in the code. 
Having logic in the dB also means it can be difficult or impossible to 
have two releases talking to the same schema version at the same time - so 
canary builds and rolling deployments can be difficult. Of course, schema 
changes can cause this problem, regardless of whether there are stored 
procs, but the more of your logic that lives in the db, the more likely it 
is that your releases will conflict over the db. So I’m more likely to be 
able to do a rolling release if I keep the db as a dumb data store and 
keep logic in the application code.


===

I could have worded that better but I think that we're coming at it from 
different directions. You think of your application as the "master" 
operator. I think of a PG db as the "master", not a slave. I believe that 
we shouldn't _have_ to use an external application for the database to be 
useful and coherent.  I like to think of external applications as 
subservient to the db and not the other way around. Yeah, I know, probably 
not a popular viewpoint.


Sorry, I don't really understand why it would be so hard to migrate, say, 
pl/pgsql functions. You can maybe expect to write some likely convoluted 
application code, though. :) Reusable functions in the db that are solid 
also mean that developers don't have to reinvent the wheel in whatever 
language, and debugging also becomes simpler.


And it's not like the developers don't rewrite the code every time they 
migrate to the Latest and Greatest Language...



--
Angular momentum makes the world go 'round.


Re: pg_dump with compressible and non-compressible tables

2018-05-05 Thread Ron

On 05/05/2018 12:13 PM, Adrian Klaver wrote:

On 05/05/2018 07:14 AM, Ron wrote:

Hi,

v9.6

We've got big databases where some of the tables are highly compressible, 
but some have many bytea fields containing PDF files.


Can you see a demonstrable difference?


Very much so.  The ASCII hex representations of the PDF files are 
compressible, but take a *long* time to compress. Uncompressed backups are 
50% faster.




When the dump format is custom, directory or tar, how feasible would a 
"--no-blob-compression" option be (where pg_dump tells the zlib library to 
simply Store tables with bytea columns, while compressing all other tables 
at the specified -Z level)?


In pg_dump blob refers to large objects:

https://www.postgresql.org/docs/10/static/app-pgdump.html
"
-b
--blobs

    Include large objects in the dump. This is the default behavior except 
when --schema, --table, or --schema-only is specified. The -b switch is 
therefore only useful to add large objects to dumps where a specific 
schema or table has been requested. Note that blobs are considered data 
and therefore will be included when --data-only is used, but not when 
--schema-only is.

"

These are different critters than bytea.


Ok.  I need the data in my backups anyway, so excluding them is 100% 
contrary to what I need.


--
Angular momentum makes the world go 'round.


Re: [GENERAL] Postgre compatible version with RHEL 7.5

2018-05-23 Thread Ron



On 05/23/2018 08:13 AM, Adrian Klaver wrote:

On 05/23/2018 03:59 AM, Deepti Sharma S wrote:

Hi David,

“9.6.6 is compatible but not supported”: what does this mean?

For details see:

https://www.postgresql.org/support/versioning/

Basically it is supported by the community, but keeping up to date with 
the latest minor release (9.6.9) is strongly advised. The bigger issue is 
that the repo is going to be at the latest release.


Not supported because only 9.6.9 is supported?

--
Angular momentum makes the world go 'round.



Re: computing z-scores

2018-05-24 Thread Ron

On 05/24/2018 10:15 AM, Martin Mueller wrote:

You construct a z-score for a set of values by subtracting the average from the 
value and dividing the result by the standard deviation. I know how to do this 
in a two-step procedure. First, I compute the average and standard deviation. 
In a second run I use the formula and apply it to each value.

Is there a way of doing this in a single-step procedure or can you chain the 
two parts together in one query?  This goes beyond my SQL competence.


What about this?

SELECT value,
       (value - AVG(value) OVER ()) / stddev(value) OVER () AS zvalue
FROM sometable
WHERE some conditions;
-- the window-function form (OVER ()) computes the average and standard
-- deviation over the whole filtered set in a single pass, so no second
-- query (and no GROUP BY) is needed

--
Angular momentum makes the world go 'round.



Re: posgresql.log

2018-05-21 Thread Ron

On 05/21/2018 04:40 PM, Bartosz Dmytrak wrote:


Hi Gurus,

Looking into my postgresql.log on one of my test servers I found scary entry:

--2018-05-19 05:28:21-- http://207.148.79.161/post0514/post

Connecting to 207.148.79.161:80... connected.

HTTP request sent, awaiting response... 200 OK

Length: 1606648 (1.5M) [application/octet-stream]

Saving to: ‘/var/lib/postgresql/10/main/postgresq1’

0K .. .. .. .. ..  3% 71.0K 21s

    50K .. .. .. .. ..  6% 106K 17s

   100K .. .. .. .. ..  9% 213K 13s

   150K .. .. .. .. .. 12% 213K 11s


[snip]

1500K .. .. .. .. .. 98% 11.8M 0s

  1550K ..  100% 12.5M=2.6s

2018-05-19 05:28:25 (598 KB/s) - ‘/var/lib/postgresql/10/main/postgresq1’ 
saved [1606648/1606648]


The downloaded file is not postgresql but postgresq1 (with a digit one).

It was a pure pg installation without any contrib modules, addons, etc., 
installed on an Ubuntu box by the apt manager using repos:


http://apt.postgresql.org/pub/repos/apt xenial-pgdg/main

http://apt.postgresql.org/pub/repos/apt xenial-pgdg

I have never seen such an entry on any of my other servers…

Could you be so kind and explain to me what it is? I am afraid my postgres 
has been hacked.




This looks like what happens when the Adobe Flash Player package downloads 
the closed-source binary installer.  Thus, I wouldn't be surprised if the 
repository package is downloading the installation binaries from 
http://207.148.79.161/post0514/post.


--
Angular momentum makes the world go 'round.


Re: Can you make a simple view non-updatable?

2018-06-08 Thread Ron



On 06/08/2018 04:17 AM, Ryan Murphy wrote:


maybe it is time to overhaul the security concept.


I could see how I could revoke permissions from, say, all users that 
aren't superusers to INSERT or UPDATE certain views.  However, if possible 
it would be nice to get an error message about the VIEW not being 
updatable, rather than a user access error, which could be misleading.


When I try to insert into a non-updatable VIEW, I get this message:

ERROR:  cannot insert into view "test_view"
DETAIL:  Views containing GROUP BY are not automatically updatable.
HINT:  To enable inserting into the view, provide an INSTEAD OF INSERT 
trigger or an unconditional ON INSERT DO INSTEAD rule.


It would be great to see something like this when trying to insert into a 
simple VIEW that I had made non-updatable:


ERROR:  cannot insert into view "test_view2"
DETAIL:  This view has manually been made non-updatable.


Something like CREATE READ ONLY VIEW test_view2 AS SELECT 
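
In the meantime, a rough workaround is an INSTEAD OF trigger whose only job 
is to raise a custom error.  This is just a sketch, assuming a simple view 
named test_view2; it is not the wished-for CREATE READ ONLY VIEW syntax:

CREATE OR REPLACE FUNCTION reject_view_dml() RETURNS trigger AS $$
BEGIN
    -- any INSERT/UPDATE/DELETE routed here fails with a clear message
    RAISE EXCEPTION 'view "%" has been made read-only', TG_TABLE_NAME;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_view2_read_only
    INSTEAD OF INSERT OR UPDATE OR DELETE ON test_view2
    FOR EACH ROW EXECUTE PROCEDURE reject_view_dml();

Attaching the INSTEAD OF trigger overrides the view's automatic 
updatability, so users see the custom message rather than a permissions error.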

--
Angular momentum makes the world go 'round.


Re: Code of Conduct plan

2018-06-08 Thread Ron




On 06/08/2018 12:09 AM, Gavin Flower wrote:

On 08/06/18 16:55, Ron wrote:

On 06/07/2018 04:55 AM, Gavin Flower wrote:
[snip]
The Americans often seem to act as though most people lived in the USA, 
therefore we should all be bound by what they think is correct!


"You" are wearing a tee-shirt (or hoodie), blue jeans and Nikes, while 
eating a fast food hamburger, drinking a Coke, listening to rock and 
roll, emailing us over the Internet from your Mac, thinking all Men are 
created equal, and feeling glad that NZ isn't an English colony.


That kind of cultural dominance makes one think the US truly is exceptional.

Only two of those things you said about me are currently true, and some 
are never true.


That's why I put "you" in quotes.



Perhaps accusing someone of being a Mac user should be banned by the CoC?


--
Angular momentum makes the world go 'round.



Re: What does Natvie Posgres mean?

2018-06-12 Thread Ron
This, to me, is the true meaning of "native PostgreSQL" (as opposed to 
"stock PostgreSQL", which is uncustomized code).  However, if the job-wanted 
post was written by an HR flunky, it could mean anything.



On 06/12/2018 01:11 PM, Benjamin Scherrey wrote:
In my experience it refers to *development directly via SQL against the 
Postgres server* rather than via an ORM like Django or the like. A 
remarkably high percentage of applications backed by Postgres have been 
written by developers that have never actually seen or written SQL code 
directly. It's all generated (often quite naively) by the object 
relational mapper. Requesting "native" developers means that they want you 
to understand how the DB actually behaves and to be able to generate 
optimal SQL code and proper DDLs that fit the application domain correctly.


  - - Ben Scherrey

On Wed, Jun 13, 2018, 12:59 AM David G. Johnston wrote:


On Tuesday, June 12, 2018, bto...@computer.org wrote:


When someone, e.g., as appeared in a recent and some older
pgsql-jobs messages, says "Native Postgres", what do you suppose
that means?

Does it mean something different than just "PostgreSQL"?


Likely it means the open source product built directly from the source
code published here (or packages derived there-from).  As opposed to
say AWS RDS or EnterpriseDB or various other forks of the product
available in the wild.

David J.



--
Angular momentum makes the world go 'round.


Re: Code of Conduct plan

2018-06-07 Thread Ron

On 06/07/2018 04:55 AM, Gavin Flower wrote:
[snip]
The Americans often seem to act as though most people lived in the USA, 
therefore we should all be bound by what they think is correct!


"You" are wearing a tee-shirt (or hoodie), blue jeans and Nikes, while 
eating a fast food hamburger, drinking a Coke, listening to rock and roll, 
emailing us over the Internet from your Mac, thinking all Men are created 
equal, and feeling glad that NZ isn't an English colony.


That kind of cultural dominance makes one think the US truly is exceptional.

--
Angular momentum makes the world go 'round.



Re: Code of Conduct plan

2018-06-03 Thread Ron

On 06/03/2018 04:54 PM, Berend Tober wrote:

Tom Lane wrote:

Two years ago, there was considerable discussion about creating a
Code of Conduct for the Postgres community...

We are now asking for a final round of community comments...


I really like that this was included: "Any allegations that prove not to 
be substantiated...will be viewed as a serious community offense and a 
violation of this Code of Conduct."


Good attempt to prevent the CoC being used as vindictive weaponry.


But a futile attempt: "A lie can travel half way around the world while the 
truth is putting on its shoes."



--
Angular momentum makes the world go 'round.



Re: How to get postmaster shut down time in postgres?

2018-06-04 Thread Ron

On 06/04/2018 08:44 AM, pavan95 wrote:

Hi Adrian/Melvin,

Thanks for your prompt replies. Yeah, I'm aware of that way.

But my requirement is to get the server shutdown time whenever that event
occurs and insert into a table dynamically!! Is it possible?


You want to trap the shutdown action and write to an audit table just before 
it happens?


--
Angular momentum makes the world go 'round.



Re: Pgagent is not reading pgpass file either in Windows or Linux.

2018-06-04 Thread Ron
I've noticed that .pgpass is case sensitive, so I'm not surprised that it 
also doesn't treat 127.0.0.1 and localhost as equivalent.


On 06/04/2018 05:31 PM, nageswara Bandla wrote:

I have figured out the issue with pgAgent both in Windows and Linux.

PgAgent seems to ignore pgpass.conf/.pgpass whenever it has a 127.0.0.1 
entry (127.0.0.1:5432:*:postgres:postgres), and throws an error:


*DEBUG: Creating DB connection: user=postgres port=5432 hostaddr=127.0.0.1 
dbname=linuxpostgresdb*


*WARNING: Couldn't create the primary connection (attempt 1): fe_sendauth: 
no password supplied*



The solution could be to update .pgpass to have 
(localhost:5432:*:postgres:postgres), and then pgagent works fine without 
issues.



I think pgagent is not in line with libpq.dll when passing the host address 
parameter. I have raised this concern on the pgagent GitHub, noting exactly 
where they need to change the code in order for pgagent to be in line with 
the psql program.


https://github.com/postgres/pgagent/issues/14


On Fri, Jun 1, 2018 at 9:43 AM, nageswara Bandla wrote:




On Thu, May 31, 2018 at 5:16 PM, George Neuner wrote:

On Thu, 31 May 2018 15:40:21 -0500, nageswara Bandla wrote:

>On Thu, May 31, 2018 at 12:57 PM, George Neuner wrote:
>
>> It just occurred to me that you said PGPASSFILE was set to
>>
>>     %APPDATA%/postgresql/pgpass.conf
>>
>>
>> The problem may be that when LocalSystem expands %APPDATA%, it is
>> finding its own directory, which might be any of:
>>
>>    C:\Windows\ServiceProfiles\LocalService\appdata
>>    C:\Windows\System32\config\systemprofile\AppData
>>    C:\Windows\SysWOW64\config\systemprofile\AppData
>>
>> depending on your Windows version, policies (if any), and
whether the
>> executable is 32 or 64 bit.
>>
>>
>> I wouldn't try messing with any of these directories. Instead try
>> setting PGPASSFILE to the full path to your file.
>>
>>
>I have tried all of them, pgagent is not recognizing any of the above
>locations. In fact, I have tried both options
>
> #1. By defining PGPASSFILE to the above locations one after the
other.
> #2. By copying pgpass.conf to all the three locations by creating
>Roaming/postgresql directories.
>
>And also I have defined PGPASSFILE=C:\pgpass.conf; I think, this
should be
>accessible to any system account. This also not working.


One more stupid question and then I'm out of ideas ...


Have you rebooted after changing the environment variable?

Global environment changes normally don't take effect until the user
logs out/in again.  LocalSystem is not an interactive user - you have
to restart the system to let it see environment changes.  PITA.


Yes, I did. But no luck.. I guess we have to live with this problem
for pgagent running as a Local System account.
We need to run the pgagent service as a "Logon user account" and provide
user logon credentials for running the pgagent service.

In the Linux case, pgagent is not even reading .pgpass itself. The issue
here is that the logs (debug level) are no help. They don't have
much information about which password file it is trying to read.




George






--
Angular momentum makes the world go 'round.


Re: Insert UUID GEN 4 Value

2018-05-31 Thread Ron

On 05/31/2018 07:39 PM, tango ward wrote:
On Thu, May 31, 2018 at 12:32 PM, tango ward wrote:



On Thu, May 31, 2018 at 12:18 PM, David G. Johnston wrote:

On Wednesday, May 30, 2018, tango ward wrote:

Okay I will try it.


When I tried it, I am getting an error: Invalid input syntax
for UUID: uuid_generate_v4(),


Avoid references to "it" and just show the code you tried to run.

David J.



I'm testing this code:

curr_pgsql.execute('''CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; ''')
curr_pgsql.execute('''
    INSERT INTO enrollmentinfo (
created, modified,
secure_id,
relationship,
tuition_bill,
cashier_name,
cashier_phone_number
)
    VALUES (current_timestamp, current_timestamp,
    uuid_generate_v4(), '', '', '',)
''')




Hi, this is the query that I am trying to run. I'll try your suggestions 
when I arrive at the office.


Honestly, I'd try it in psql first.  That will eliminate all possible 
problems due to quoting errors, etc.
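
For comparison, a corrected statement to paste into psql might look like the 
sketch below (assuming the seven enrollmentinfo columns listed above, with 
secure_id of type uuid):

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

INSERT INTO enrollmentinfo (
    created, modified, secure_id,
    relationship, tuition_bill, cashier_name, cashier_phone_number
)
VALUES (current_timestamp, current_timestamp, uuid_generate_v4(),
        '', '', '', '');   -- seven values for seven columns, no trailing comma

The VALUES list as posted has only six values plus a trailing comma, which 
psql will also complain about, so running it there makes the problem obvious.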


--
Angular momentum makes the world go 'round.


Re: Question on disk contention

2018-05-31 Thread Ron

On 05/31/2018 08:52 AM, Melvin Davidson wrote:



On Thu, May 31, 2018 at 1:13 AM, Charles Clavadetscher wrote:


Hi Melvin

As an answer to a previous post you wrote:

"Also, your main problem is that when you have two exact same queries
executing at the same time, they will cause contention in
the disk, and neither one will make much progress."

Could you elaborate a little more on the meaning of "contention in the
disk"?
What is it that happens?

Thank you and have a good day.
Regards
Charles



>Could you elaborate a little more on the meaning of "contention in the disk"?
>What is it that happens?

To simplify, you have two users/jobs, both wanting the exact same 
information. So the system instructs the disk to get
that information from the disk, which causes the disk head to "seek" to 
the position of the first eligible row and
continues positioning to other eligible rows. Now the job is not 
exclusive, so the system temporarily switches to the
other job, which causes the disk to go back to the first row and work from 
there. The switching back and forth continues,
so that instead of one job finishing quickly, they both have to take turns 
waiting for needed information. That takes a lot longer.

Try this: select a table that has a lot of rows, ideally 1M+. Then start a 
query with a WHERE clause and see how long
it takes. Then submit the same query from 5 separate connections 
simultaneously and see how long that takes.


Why isn't the OS caching the disk blocks, and why isn't Postgres using the 
cached data?
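
One way to see whether those repeated reads are actually being served from 
Postgres's buffer cache is pg_stat_database; a minimal sketch (it covers 
shared_buffers only, not the OS page cache):

SELECT datname,
       blks_hit,
       blks_read,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_database
ORDER BY blks_read DESC;

A table that fits in cache should show the hit percentage climbing toward 
100 after the first query, which is hard to square with a pure disk-seek 
explanation.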



--
Angular momentum makes the world go 'round.


Re: Code of Conduct plan

2018-06-04 Thread Ron
If there's been so much Bad Behavior that's so Weakened the Community, then 
someone's done an excellent job of hiding that Bad Behavior.


On 06/04/2018 09:57 AM, Evan Macbeth wrote:
I just want to chime in and thank all those who worked on this Code of 
Conduct. It's well thought out, and I'm personally very glad to see it. I 
think this just makes our community and its work stronger. I strongly 
support it being put into effect.


Evan Macbeth

On Sun, Jun 3, 2018 at 2:29 PM, Tom Lane wrote:


Two years ago, there was considerable discussion about creating a
Code of Conduct for the Postgres community, as a result of which
the core team announced a plan to create an exploration committee
to draft a CoC [1].  That process has taken far longer than expected,
but the committee has not been idle.  They worked through many comments
and many drafts to produce a version that seems acceptable in the view
of the core team.  This final(?) draft can be found at

https://wiki.postgresql.org/wiki/Code_of_Conduct


We are now asking for a final round of community comments.
Please send any public comments to the pgsql-general list (only).
If you wish to make a private comment, you may send it to
c...@postgresql.org .

The initial membership of the CoC committee will be announced separately,
but shortly.

Unless there are substantial objections, or nontrivial changes as a result
of this round of comments, we anticipate making the CoC official as of
July 1 2018.

                        regards, tom lane

[1]
https://www.postgresql.org/message-id/56a8516b.8000...@agliodbs.com





--
Evan Macbeth - Director of Support - Crunchy Data
+1 443-421-0343 - evan.macb...@crunchydata.com 



--
Angular momentum makes the world go 'round.


Re: CSVQL? CSV SQL? tab-separated table I/O? RENAME COLUMN

2018-05-02 Thread Ron

On 05/02/2018 04:49 PM, David G. Johnston wrote:
[snip]


- the microsoft patented CSV would be required for implementation. it
handles special data with commas and double-quotes in them


If true, this seems like a show-stopper to anything PostgreSQL would implement.


If MSFT really holds a patent on the CSV format, then PostgreSQL is already 
in a world of hurt.


--
Angular momentum makes the world go 'round.


Re: PostgreSQL Volume Question

2018-06-19 Thread Ron

On 06/15/2018 11:26 AM, Data Ace wrote:


Well, I think my question strayed somewhat from my intention because of my 
poor understanding and questioning :(


Actually, I have 1TB of data and hardware specs sufficient to handle this 
amount of data, but the problem is that it needs too many join operations 
and the analysis process is going too slow right now.


I've searched and found that graph model nicely fits for network data like 
social data in query performance.




If your data is hierarchical, then storing it in a network database is 
perfectly reasonable.  I'm not sure, though, that there are many network 
databases for Linux.  Raima is the only one I can think of.


Should I change my DB (I mean my DB for analysis)? or do I need some other 
solutions or any extension?



Thanks



--
Angular momentum makes the world go 'round.


Re: Settings for fast restores

2018-08-01 Thread Ron

On 08/01/2018 09:11 AM, Vick Khera wrote:
On Wed, Aug 1, 2018 at 2:03 AM, Ron wrote:


Hi,

http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html
<http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html>

shared_buffers = 1/2 of what you'd usually set
maintenance_work_mem = 1GB-2GB
wal_level = minimal
full_page_writes = off
wal_buffers = 64MB
checkpoint_segments = 256 or higher
max_wal_senders = 0
wal_keep_segments = 0

How many of these 4 year old setting recommendations are still valid
for 9.6?


They all look still valid to me. I personally also set fsync=off since I 
can always start over if the machine crashes and corrupts the data.


Right.  I didn't mention these, because they seem version-agnostic:

fsync = off
synchronous_commit = off
archive_mode = off
autovacuum = off
all activity logging settings disabled

--
Angular momentum makes the world go 'round.


Re: Design of a database table

2018-07-30 Thread Ron

On 07/30/2018 09:37 AM, hmidi slim wrote:
I'm trying to design a database table. First of all there are two 
alternatives:

1-) Divide the table into two tables and make a join.
2-) Design a single table.

1st alternative:
Create table data_periods(
id serial primary key not null,
period daterange,
project_id integer
)

create table data_periods_info(
id serial primary key not null,
data_periods_id integer,
data_sub_periods daterange,
stock1 integer,
stock2 integer,
CONSTRAINT data_periods_allotment_id_fkey FOREIGN KEY (data_periods_id)
    REFERENCES data_periods (id) MATCH SIMPLE
    ON UPDATE NO ACTION
    ON DELETE NO ACTION
)


Are you absolutely 100% positive that there will NEVER be more than two 
stock numbers?  (People say "yes" to this kind of question all the time and 
then discover that they need more stock numbers when the business changes.)
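
If the answer is anything less than an emphatic yes, a third shape worth 
considering (a sketch with hypothetical names) moves the stock figures into 
a child table, so a future stock3 becomes an INSERT instead of an ALTER TABLE:

create table data_sub_period_stocks(
    id serial primary key not null,
    data_periods_info_id integer REFERENCES data_periods_info (id),
    stock_no integer not null,     -- 1, 2, 3, ... replaces the stock1/stock2 columns
    stock_value integer not null
)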


--
Angular momentum makes the world go 'round.



Settings for fast restores

2018-08-01 Thread Ron

Hi,

http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html

shared_buffers = 1/2 of what you'd usually set
maintenance_work_mem = 1GB-2GB
wal_level = minimal
full_page_writes = off
wal_buffers = 64MB
checkpoint_segments = 256 or higher
max_wal_senders = 0
wal_keep_segments = 0

How many of these 4 year old setting recommendations are still valid for 9.6?

Thanks

--
Angular momentum makes the world go 'round.



ALTER TABLE .. SET STATISTICS

2018-08-05 Thread Ron



v9.6.9

For columns of type bytea which store image data (PDFs, JPGs, etc) would it 
speed up the ANALYZE process to SET STATISTICS = 0?


That way, default_statistics_target could be cranked higher -- giving better 
statistics for needed columns -- without polluting pg_statistic with 
unneeded data?
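
For reference, the per-column knob looks like the sketch below (hypothetical 
table and column names):

ALTER TABLE images ALTER COLUMN pdf_data SET STATISTICS 0;   -- no stats collected for this column
ALTER TABLE images ALTER COLUMN doc_id   SET STATISTICS 500; -- and a higher target where it helps

Setting the target to 0 makes ANALYZE skip the column entirely, which is 
usually harmless for bytea columns that never appear in WHERE clauses or joins.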


Thanks

--
Angular momentum makes the world go 'round.



Order in which tables are dumped

2018-07-25 Thread Ron

Hi,

v8.4 if it matters.

It looked like the tables were being backed up in alphanumeric order, but 
now I see that table "docformat" is being dumped *after* "doc_image".


Are there some other rules besides alphabetical sorting?

--
Angular momentum makes the world go 'round.


Re: Order in which tables are dumped

2018-07-25 Thread Ron

On 07/25/2018 10:28 AM, Tom Lane wrote:

Ron  writes:

It looked like the tables were being backed up in alphanumeric order, but
now I see that table "docformat" is being dumped *after* "doc_image".

Looks like standard C-locale (ASCII) sort order to me ...


I hate spreadsheets.  And gnu sort, and unicode...  :(

--
Angular momentum makes the world go 'round.



Re: Order in which tables are dumped

2018-07-25 Thread Ron

On 07/25/2018 10:43 AM, Vick Khera wrote:
On Wed, Jul 25, 2018 at 11:15 AM, Ron wrote:


Hi,

v8.4 if it matters.

It looked like the tables were being backed up in alphanumeric order,
but now I see that table "docformat" is being dumped *after* "doc_image".

Are there some other rules besides alphabetical sorting?


Is there some concern about the order? Lower case f comes after _ in ascii.


I'm tracking the progress of a very long backup, and the spreadsheet I'm using 
(and gnu sort, too, and IIRC Postgres' own ORDER BY) sorts based on the 
current locale (en_US), whereas pg_dump sorts based on the C locale.  Thus, 
things aren't as I expected.


The only time it could possibly matter is on restore when there are 
references for foreign keys, but on a restore those are all done after the 
data is restored.
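
A quick way to see the two orderings side by side, sketched here on the 
assumption that the database's default collation is en_US:

SELECT name
FROM (VALUES ('doc_image'), ('docformat')) AS t(name)
ORDER BY name;                -- en_US: docformat first (punctuation is ignored at the primary level)

SELECT name
FROM (VALUES ('doc_image'), ('docformat')) AS t(name)
ORDER BY name COLLATE "C";    -- C/ASCII: doc_image first ('_' is 0x5F, which sorts before 'f' at 0x66)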


--
Angular momentum makes the world go 'round.


Re: pg_basebackup failed to read a file

2018-08-14 Thread Ron




On 08/14/2018 11:14 AM, Tom Lane wrote:

Mike Cardwell  writes:

pg_basebackup: could not get write-ahead log end position from server:
ERROR:  could not open file "./postgresql.conf~": Permission denied
Now, I know what this error means. There was a root owned file at
"/var/lib/pgsql/10/data/postgresql.conf~" which contained an old
version of our postgres config and was not readable by the postgres
user. I'll delete this file and try again. However, in the mean time: I
feel like it would be useful for pg_basebackup to check that it has
read access to all of the existing files in the source directory at the
start, before it begins it's copy.

That seems like a pretty expensive thing to do, if there are lots of
files ... and you'd still end up failing, so it's not moving the ball
very far.


Why is checking a bunch of file permissions anywhere close to being as 
expensive as transferring 1.5TB over a WAN link?


--
Angular momentum makes the world go 'round.



Re: Copying data from a CSV file into a table dynamically

2018-08-14 Thread Ron

On 08/14/2018 08:38 AM, pavan95 wrote:

Hi Adrian,

I tried to use
*"COPY postgres_log1 FROM '/tmp/abc/xyz/postgresql-`date --date="0 days ago"
+%Y-%m-%d`_*.csv' WITH csv;"*

But it resulted in an error. How to issue such that it is understandable by
psql?

And I am completely unaware of Python & psycopg2. Anything which suits my 
requirement is enough!!


Why not:
cat /tmp/abc/xyz/postgresql-`date --date="0 days ago" +%Y-%m-%d`_*.csv | \
   psql YOURDB -c "COPY postgres_log1 FROM STDIN WITH csv;"

--
Angular momentum makes the world go 'round.



Re: [External] Multiple COPY on the same table

2018-08-20 Thread Ron

Maybe he just has a large file that needs to be loaded into a table...

On 08/20/2018 11:47 AM, Vijaykumar Jain wrote:

Hey Ravi,

What is the goal you are trying to achieve here.
To make pgdump/restore faster?
To make replication faster?
To make backup faster ?

Also, no matter how small you split the files, if the network is your 
bottleneck then I am not sure you can attain n times the benefit by simply 
sending the files in parallel, but maybe some benefit.
But then for parallel processing you also need to ensure your server has 
the relevant resources, or else it will just be a lot of context 
switching, I guess?

pg_dump has an option to dump in parallel.
pg_basebackup is single-threaded, I read, but pgbackrest allows better 
parallel processing in backups.
There is also logical replication where you can selectively replicate your 
tables to avoid bandwidth issues.
I might have said a lot and nothing may be relevant, but you need to let 
us know the goal you want to achieve :)


Regards,
Vijay

*From:* Ravi Krishna 
*Sent:* Monday, August 20, 2018 8:24:35 PM
*To:* pgsql-general@lists.postgresql.org
*Subject:* [External] Multiple COPY on the same table
Can I split a large file into multiple files and then run COPY using each 
file?  The table does not contain any serial or sequence column which may 
need serialization. Let us say I split a large file into 4 files.  Will the 
performance improve by close to 4x?

ps: Pls ignore my previous post which was without a subject (due to mistake)


--
Angular momentum makes the world go 'round.


pg_dump order of operation

2018-08-25 Thread Ron

Hi,

In v8.4, I noticed that the tables seemed to be dumped in alphabetical 
order.  Not so much, though, in a multithreaded 9.6 dump of an 8.4 database; 
there's no pattern that I can discern.


In what order does the 9.6 pg_dump dump tables?

Thanks

--
Angular momentum makes the world go 'round.



Re: pg_dump order of operation

2018-08-26 Thread Ron

On 08/26/2018 02:44 PM, Tom Lane wrote:

Ron  writes:

On 08/26/2018 01:42 PM, Tom Lane wrote:

Perhaps I don't understand *your* question.  What concrete problem are you
having?

I want to track the progress of pg_dump so as to estimate completion time.

Well, if you don't use --jobs then you should get more or less the same
behavior as in 8.4.  If you do use that, then it's hardly surprising that
things are more complicated.


I'm not demanding that it be simple, but just asking what the pattern is.

--
Angular momentum makes the world go 'round.



Re: pg_dump order of operation

2018-08-26 Thread Ron

On 08/26/2018 10:24 AM, Tom Lane wrote:

Ron  writes:

In v8.4, I noticed that the tables seemed to be dumped in alphabetical
order.  Not so much, though, in a multithreaded 9.6 dump of an 8.4 database;
there's no pattern that I can discern.
In what order does the 9.6 pg_dump dump tables?

I don't believe the ordering rules have changed materially since 8.4;
it's intended to be by object kind, and within that by name, except
where dependencies force doing something else.


I thought that didn't matter, since FK and PK constraints were added (in the 
required order) after all data was loaded.



But what are you thinking constitutes the "dump order" in a parallel dump?


I don't understand your question.


--
Angular momentum makes the world go 'round.



Re: pg_dump order of operation

2018-08-26 Thread Ron



On 08/26/2018 01:42 PM, Tom Lane wrote:

Ron  writes:

On 08/26/2018 10:24 AM, Tom Lane wrote:

Ron  writes:

In what order does the 9.6 pg_dump dump tables?

I don't believe the ordering rules have changed materially since 8.4;
it's intended to be by object kind, and within that by name, except
where dependencies force doing something else.

I thought that didn't matter, since FK and PK constraints were added (in the
required order) after all data was loaded.

But what are you thinking constitutes the "dump order" in a parallel dump?

I don't understand your question.

Perhaps I don't understand *your* question.  What concrete problem are you
having?


I want to track the progress of pg_dump so as to estimate completion time.

--
Angular momentum makes the world go 'round.



Improving pg_dump performance

2018-07-23 Thread Ron

Hi,

We've got an old (v8.4.17, thus no parallel backups) 2.9TB database that 
needs to be migrated to a new data center and then restored to v9.6.9.


The database has many large tables full of bytea columns containing pdf 
images, and so the dump file is going to be more than 2x larger than the 
existing data/base...



The command is:
$ pg_dump -v -Z0 -Fc $DB --file=${TARGET}/${DATE}_${DB}.dump 2> 
${DATE}_${DB}.log


Using -Z0 because pdf files are already compressed.

Because of an intricate web of FK constraints and partitioned tables, the 
customer doesn't trust a set of "partitioned" backups using --table= and 
regular expressions (the names of those big tables all have the year in 
them), and so I am stuck with a single-threaded backup.


Are there any config file elements that I can tweak (extra points for not 
having to restart postgres) to make it run faster, or deeper knowledge of 
how pg_restore works so that I could convince them to let me do the 
partitioned backups?


Lastly, is there any way to not make the backups so large (maybe by using 
the --binary-upgrade option, even though the man page says, "in-place 
upgrades only")?


--
Angular momentum makes the world go 'round.



Re: Improving pg_dump performance

2018-07-23 Thread Ron

On 07/23/2018 08:46 AM, Stephen Frost wrote:

Greetings,

* Ron (ronljohnso...@gmail.com) wrote:

An interesting idea.  To clarify: it's possible to parallel backup a running
8.4 cluster remotely from a 9.6 system?

Yes, you can do a parallel backup, but you won't be able to get a
consistent snapshot.  You'll need to pause all changes to the database
while the pg_dump processes connect and start their transactions to
have the backup be consistent.


I can do that!!!

--
Angular momentum makes the world go 'round.



Re: Improving pg_dump performance

2018-07-23 Thread Ron




On 07/23/2018 08:56 AM, Adrian Klaver wrote:

On 07/23/2018 06:47 AM, Ron wrote:

On 07/23/2018 08:46 AM, Stephen Frost wrote:

Greetings,

* Ron (ronljohnso...@gmail.com) wrote:
An interesting idea.  To clarify: it's possible to parallel backup a 
running

8.4 cluster remotely from a 9.6 system?

Yes, you can do a parallel backup, but you won't be able to get a
consistent snapshot.  You'll need to pause all changes to the database
while the pg_dump processes connect and start their transactions to
have the backup be consistent.


I can do that!!!


Assuming you can get this setup, have you tested some subset of your data 
on 9.6.9?:


1) Going from 8.4 --> 9.6 is jumping 7 major versions of Postgres. Do you 
know that the data/code will work in 9.6.9?


2) Does the transfer have time built in for fixing problems on the 9.6.9 end?

3) If the answer 2) is no, then is there a plan to deal with changes in 
the 8.4 database while working on the 9.6.9 database?


Yes, we've migrated CAT and Staging databases, and the application has been 
tested.


And this is a test conversion of the prod databases...

--
Angular momentum makes the world go 'round.



Re: Improving pg_dump performance

2018-07-23 Thread Ron

On 07/23/2018 09:11 AM, Andres Freund wrote:

Hi,

On 2018-07-23 02:23:45 -0500, Ron wrote:

We've got an old (v8.4.17, thus no parallel backups) 2.9TB database that
needs to be migrated to a new data center and then restored to v9.6.9.

Have you considered using pg_upgrade instead?


Yes, but:
1. can't find 9.6 packages on the RHEL 5.10 server.
2. The window is only 8 hours, and the data also has to be moved to a new DC 
in that window.


--
Angular momentum makes the world go 'round.



Re: Improving pg_dump performance

2018-07-23 Thread Ron

On 07/23/2018 02:32 AM, Andreas Kretschmer wrote:



Am 23.07.2018 um 09:23 schrieb Ron:

Hi,

We've got an old (v8.4.17, thus no parallel backups) 2.9TB database that 
needs to be migrated to a new data center and then restored to v9.6.9. 


you can use the pg_dump from the newer version (9.6) to dump the old 
database, over the net. In this way you can also use parallel backups 
(directory format)


That DC circuit is too slow, and also used by lots of other production data.

--
Angular momentum makes the world go 'round.



Re: Improving pg_dump performance

2018-07-23 Thread Ron

On 07/23/2018 08:27 AM, Andreas Kretschmer wrote:


Am 23.07.2018 um 15:06 schrieb Ron:

On 07/23/2018 02:32 AM, Andreas Kretschmer wrote:



Am 23.07.2018 um 09:23 schrieb Ron:

Hi,

We've got an old (v8.4.17, thus no parallel backups) 2.9TB database 
that needs to be migrated to a new data center and then restored to 
v9.6.9. 


you can use the pg_dump from the newer version (9.6) to dump the old 
database, over the net. In this way you can also use parallel backups 
(directory format)


That DC circuit is too slow, and also used by lots of other production data.



install the 9.6 also on the old server, 


Are there 9.6 packages for RHEL 5.10?


or use another server in the same DC.


An interesting idea.  To clarify: it's possible to parallel backup a running 
8.4 cluster remotely from a 9.6 system?


--
Angular momentum makes the world go 'round.



Speccing a remote backup server

2018-07-24 Thread Ron

Hi,

v9.6 backing up v8.4

Where does the gzip run (where the database lives, or the remote server 
where the pg_dump runs from)?  I ask this because I need to know how beefy 
to make the backup server.  (It'll just store backups for a version upgrade.)


--
Angular momentum makes the world go 'round.



Re: Barman versus pgBackRest

2018-09-04 Thread Ron

On 09/04/2018 10:24 AM, Joshua D. Drake wrote:

On 09/04/2018 07:52 AM, Ron wrote:

On 09/04/2018 09:24 AM, Joshua D. Drake wrote:

On 09/04/2018 07:14 AM, Ron wrote:


That was about barman, in the barman group.  This is asking about 
pgbackrest...  :)


So: does pgbackrest have this ability which barman does not have? The 
"--db-include" option seems to indicate that you can restore a single 
db, but doesn't indicate whether or not you can rename it.


https://pgbackrest.org/configuration.html#section-restore/option-db-include



Which implies that you can't do it?


You can restore a single database and then issue a simple ALTER DATABASE 
command to change the DB name.




(Postgres backup/restore capabilities are quite limited, which is 
disappointing.)


Not sure I agree with that. If you want to restore and then rename a DB, 
rename it.


ALTER DATABASE foo RENAME TO bar;


But restoring an old "foo" overwrites the existing "foo".  On SQL Server 
databases, we occasionally need to restore an old foo backup as "foo_old" 
alongside production foo.



--
Angular momentum makes the world go 'round.



Re: Max number of WAL files in pg_xlog directory for Postgres 9.2 version

2018-09-05 Thread Ron

On 09/05/2018 12:39 PM, Raghavendra Rao J S V wrote:

Hi All,

We are using a Postgres 9.2 version database.

Please let me know: what is the max number of WAL files in the pg_xlog directory?

What is the formula? I am seeing different formulas. Could you provide the 
one which decides the max number of WAL files in the pg_xlog directory for a 
Postgres 9.2 database, please?


If you're doing WAL file replication, and -- for whatever reason -- the 
standby system isn't applying them fast enough, there could be tens of 
thousands of files in pg_xlog.  (It would drain pretty quickly when you 
resolve the problem, though.)
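
For reference, under normal conditions (no archiving or replication backlog) 
the 9.2 documentation estimates the ceiling as roughly

    (2 + checkpoint_completion_target) * checkpoint_segments + 1

or checkpoint_segments + wal_keep_segments + 1, whichever is larger; a 
stalled archive_command or a lagging standby is what pushes pg_xlog far past 
that estimate.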



--
Angular momentum makes the world go 'round.


Re: PostgreSQL intenal scheduler?

2018-09-05 Thread Ron



Maybe https://github.com/chanks/que is what you need.

On 09/05/2018 02:35 PM, Thiemo Kellner wrote:
I have seen pg_cron but it is not what I am looking for. It schedules 
tasks only by time. I am looking for a FIFO queue. pg_cron doesn't prevent 
simultaneous runs either, I believe.


Quoting Thomas Kellerer :


There is no built-in scheduler, but there is an extension that supplies that

https://github.com/citusdata/pg_cron







--
Angular momentum makes the world go 'round.



pgbackrest when data/base is symlinked to another volume

2018-09-07 Thread Ron

Hi,

Will pgbackrest properly backup and restore the cluster if data/base, 
data/pg_xlog and data/pg_log are symlinks?


PGDATA=/var/lib/pgsql/9.6/data
$PGDATA/base -> /Database/9.6/base
$PGDATA/pg_log -> /Database/9.6/pg_log
$PGDATA/pg_xlog -> /Database/9.6/pg_xlog

(I'm not just defining data_directory because the DBAs are used to looking 
in $PGDATA and seeing all the relevant files.)


Thanks
--
Angular momentum makes the world go 'round.



Volume partitioning (was Re: pgbackrest when data/base is symlinked to another volume)

2018-09-08 Thread Ron

On 09/08/2018 03:07 PM, David Steele wrote:

On 9/7/18 8:47 PM, Ron wrote:

On 09/07/2018 05:22 PM, David Steele wrote:


On 9/6/18 11:21 PM, Ron wrote:


Will pgbackrest properly backup and restore the cluster if data/base, 
data/pg_xlog and data/pg_log are symlinks?


PGDATA=/var/lib/pgsql/9.6/data
$PGDATA/base -> /Database/9.6/base
$PGDATA/pg_log -> /Database/9.6/pg_log
$PGDATA/pg_xlog -> /Database/9.6/pg_xlog


Yes, this will work.  Note that restore does not recreate symlinks by 
default so you'll need to specify --link-all to enable symlink creation.


See 
https://pgbackrest.org/configuration.html#section-restore/option-link-all 
for details.


Using symlinks in this way will make management of your clusters more 
difficult, mostly because systems need more provisioning before restores 
can be performed.  In general I'd recommend against it unless there are 
performance considerations.


Now that I'm thinking more about what you wrote... "data" isn't on its 
own partition.  data/*base* has its own partition.


What's the recommended method for putting *base/* on a partition 
different from data/?  Or is that not recommended?


All the user data goes in base so there's really no need to separate it 
out of data.  Typically pg_wal and tablespaces are relocated onto 
different devices for performance (or to get more space).  If the 
partitions are on the same device then there's no performance benefit, 
just admin hassle.




Googled "postgresql disk partitioning" and "postgresql volume partitioning" 
without much success.


Is the best practice volume partitioning:
/Database/9.6/data
/Database/9.6/data/pg_log
/Database/9.6/data/pg_xlog

where /var/lib/pgsql/9.6 (on RHEL6) is a symlink to /Database/9.6/data and 
PGDATA=/Database/9.6/data


*or*

/Database/9.6/data/base
/Database/9.6/data/pg_log
/Database/9.6/data/pg_xlog

where PGDATA=/var/lib/pgsql/9.6/data and base, pg_log and px_xlog are 
symlinks to the partitions?


Thanks

--
Angular momentum makes the world go 'round.


Re: Barman versus pgBackRest

2018-09-04 Thread Ron

On 09/04/2018 10:51 AM, David Steele wrote:
[snip]

This will work, but I don't think it's what Ron is getting at.

To be clear, it is not possible to restore a database into an *existing*
cluster using pgBackRest selective restore.  This is a limitation of
PostgreSQL file-level backups.

To do what Ron wants you would need to restore it to a new cluster, then
use pg_dump to logically dump and restore it to whatever cluster you
want it in.  This still saves time since there is less to restore but is
obviously not ideal.


That's exactly what I'm referring to.

Presumably I could restore it to a new cluster on the same VM via initdb on 
a different port and PGDATA directory?


--
Angular momentum makes the world go 'round.



Re: Barman versus pgBackRest

2018-09-04 Thread Ron

On 09/04/2018 09:24 AM, Joshua D. Drake wrote:

On 09/04/2018 07:14 AM, Ron wrote:


That was about barman, in the barman group.  This is asking about 
pgbackrest...  :)


So: does pgbackrest have this ability which barman does not have? The 
"--db-include" option seems to indicate that you can restore a single db, 
but doesn't indicate whether or not you can rename it.


https://pgbackrest.org/configuration.html#section-restore/option-db-include



Which implies that you can't do it?

(Postgres backup/restore capabilities are quite limited, which is disappointing.)

--
Angular momentum makes the world go 'round.



Re: Barman versus pgBackRest

2018-09-04 Thread Ron


That was about barman, in the barman group.  This is asking about 
pgbackrest...  :)


So: does pgbackrest have this ability which barman does not have? The 
"--db-include" option seems to indicate that you can restore a single db, 
but doesn't indicate whether or not you can rename it.



On 09/04/2018 08:44 AM, Thomas Poty wrote:

Your problem looks like this one ;-)
https://groups.google.com/forum/#!topic/pgbarman/kXcEpSLhw8w

answer may help

Physical backup/restore operates on a whole cluster...

On Tue, Sep 4, 2018 at 14:47, Ron wrote:


On 09/04/2018 07:14 AM, Thomas Poty wrote:

> Do you just change the IP address of the "restore target"?
Do you expect a typical restore command?


I'm investigating barman and pgBackRest to replace our existing
NetBackup system, so I don't know what you mean by "typical restore
command".

Here are our typical use cases:

1. If my barman backup server has full backups, diffs and WALs for
database server MAIN_PG_SERVER, which hosts databases D1, D2 and D3,
how much work is it to do a PITR restore of D2 to a *different* Pg server?

2. Can I restore an older copy of database D2 to MAIN_PG_SERVER,
*giving it a new name* (so that now there would be databases D1, D2,
D3 *and D2_OLD*)?  That's pretty trivial on SQL Server, and something
I've had to do before so the operations staff can research a problem.

Thanks

-- 
Angular momentum makes the world go 'round.




--
Angular momentum makes the world go 'round.


Re: pgbackrest when data/base is symlinked to another volume

2018-09-07 Thread Ron

On 09/07/2018 05:22 PM, David Steele wrote:

Hi Ron,

On 9/6/18 11:21 PM, Ron wrote:


Will pgbackrest properly backup and restore the cluster if data/base, 
data/pg_xlog and data/pg_log are symlinks?


PGDATA=/var/lib/pgsql/9.6/data
$PGDATA/base -> /Database/9.6/base
$PGDATA/pg_log -> /Database/9.6/pg_log
$PGDATA/pg_xlog -> /Database/9.6/pg_xlog


Yes, this will work.  Note that restore does not recreate symlinks by 
default so you'll need to specify --link-all to enable symlink creation.


See 
https://pgbackrest.org/configuration.html#section-restore/option-link-all 
for details.


Using symlinks in this way will make management of your clusters more 
difficult, mostly because systems need more provisioning before restores 
can be performed.  In general I'd recommend against it unless there are 
performance considerations.


Also, you might consider using log_directory to relocate log files rather 
than a symlink.  This will exclude log files from your backup which is 
usually preferable -- primary logs restored to a standby are out of 
context and can cause confusion.


Thanks for the tips.  I'll probably implement that on our new systems.

--
Angular momentum makes the world go 'round.



Re: Barman versus pgBackRest

2018-09-04 Thread Ron

On 09/04/2018 07:14 AM, Thomas Poty wrote:

> Do you just change the IP address of the "restore target"?
Do you expect a typical restore command?


I'm investigating barman and pgBackRest to replace our existing NetBackup 
system, so I don't know what you mean by "typical restore command".


Here are our typical use cases:

1. If my barman backup server has full backups, diffs and WALs for database 
server MAIN_PG_SERVER, which hosts databases D1, D2 and D3, how much work is 
it to do a PITR restore of D2 to a *different* Pg server?


2. Can I restore an older copy of database D2 to MAIN_PG_SERVER, *giving it 
a new name* (so that now there would be databases D1, D2, D3 *and D2_OLD*)? 
That's pretty trivial on SQL Server, and something I've had to do before so 
the operations staff can research a problem.


Thanks

--
Angular momentum makes the world go 'round.


Re: Barman versus pgBackRest

2018-09-04 Thread Ron

On 03/09/2018 08:56 AM, David Steele wrote:
[snip]

About pgBarman, I like :
- be able restore on a remote server from the backup server

This is a good feature, and one that has been requested for pgBackRest. You
can do this fairly trivially with ssh, however, so it generally hasn't
been a big deal for people.  Is there a particular reason you need this
feature?


(Sorry to dredge up this old thread.)

Do you just change the IP address of the "restore target"?


- use replication slots for backingup wal on the backup server.

Another good feature.  We have not added it yet because pgBackRest was
originally written for very high-volume clusters (100K+ WAL per day) and
our parallel async feature answers that need much better.  We recommend
a replicated standby for more up-to-date data.


Every N minutes you copy the WAL files to the backup server?


--
Angular momentum makes the world go 'round.



Re: PG8.3->10 migration data differences

2018-09-11 Thread Ron

Then fix your field-based data comparing mechanism.
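
One way to do that, sketched here on the assumption that the comparison is 
driven by SQL: compare the values as timestamps rather than as text, or 
normalize the text form with to_char so both servers render the same number 
of digits:

SELECT to_char(v.entry_timestamp, 'YYYY-MM-DD HH24:MI:SS.US') AS entry_txt
FROM t_vis v;
-- both servers render 2015-08-28 21:25:07.700000 here, so a text-level
-- diff no longer trips over the missing trailing zero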

On 09/11/2018 03:41 AM, Csaba Ragasits wrote:

Hello,

We would like to migrate from version 8.3 to 10. We have hundreds of 
databases with different structures. For that reason we're working on an 
automatic data-comparison process.


I've found the following storage settings:
- pg83: Date/time type storage:   64-bit integers
- pg10: Date/time type storage:   64-bit integers

When I run the following select from psql (Select '09/10/18 
07:10:25.110'::timestamp;)

The results are same:
-pg83: 2018-09-10 07:10:25.11
-pg10: 2018-09-10 07:10:25.11

When I select it from a table (Select v.entry_timestamp from t_vis v):
The results are different:
- pg83: 2015-08-28 21:25:07.70
- pg10: 2015-08-28 21:25:07.7

The field type:
entry_timestamp TIMESTAMP WITHOUT TIME ZONE NOT NULL,

The difference is always the trailing 0 values. The time values are the same, 
but our field-based data-comparison mechanism marks it as an error every time.


thx,
Csaba


--
Angular momentum makes the world go 'round.



Re: Return select statement with sql case statement

2018-07-04 Thread Ron

On 07/04/2018 10:32 AM, hmidi slim wrote:
Actually, I need to use CASE because, based on numberOfPremiumDays, 
there are different types of treatment:

select numberOfPremiumDays
            case  when numberOfPremiumDays = date_part('day', 
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then

                select product_id,
                premium_price,
    period_price
                from product
                where occupation_type_id = 1
                group by product_id, occupation_type_id
   else
    select product_id,
                classic_price,
    period_price
                from product1
                where occupation_type_id = 1
                group by product_id, occupation_type_id



Then try:
select product_id,
case when numberOfPremiumDays = date_part('day', ('2018-11-05'::timestamp) - 
('2018-11-01'::timestamp)) then

       premium_price
   else
   period_price
   end as the_price
from product
where occupation_type_id = 1
order by product_id, occupation_type_id


--
Angular momentum makes the world go 'round.



Re: Return select statement with sql case statement

2018-07-04 Thread Ron




On 07/04/2018 05:08 PM, Adrian Klaver wrote:

On 07/04/2018 03:03 PM, Ron wrote:

On 07/04/2018 10:32 AM, hmidi slim wrote:
Actually, I need the use of case because based on the 
numberOfPremiumDays there are different type of treatment:

select numberOfPremiumDays
            case  when numberOfPremiumDays = date_part('day', 
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then

                select product_id,
                premium_price,
    period_price
                from product
                where occupation_type_id = 1
                group by product_id, occupation_type_id
   else
    select product_id,
                classic_price,
    period_price
                from product1
                where occupation_type_id = 1
                group by product_id, occupation_type_id



Then try:
select product_id,
case when numberOfPremiumDays = date_part('day', 
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then

        premium_price
    else
    period_price
    end as the_price
from product
where occupation_type_id = 1
order by product_id, occupation_type_id


The issue with the above is that table changes from product to product1 in 
the OP's desired behavior so the price switch alone will not work:(


Ah, didn't notice that.  Then... dynamic sql constructed by the programming 
language executing the query?
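
Alternatively, if product and product1 share the same column shape, the two 
branches can be folded into one statement without dynamic SQL.  A sketch, 
where :days stands for the numberOfPremiumDays value supplied by the application:

SELECT product_id, premium_price AS price, period_price
FROM product
WHERE occupation_type_id = 1
  AND :days = date_part('day', ('2018-11-05'::timestamp) - ('2018-11-01'::timestamp))
UNION ALL
SELECT product_id, classic_price, period_price
FROM product1
WHERE occupation_type_id = 1
  AND :days <> date_part('day', ('2018-11-05'::timestamp) - ('2018-11-01'::timestamp));

Only one branch's WHERE clause can be true for a given :days, so the result 
is the same as picking the table dynamically.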



--
Angular momentum makes the world go 'round.



Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Ron

On 07/11/2018 03:21 PM, Christopher Browne wrote:

I have built one that I call Mahout
(https://github.com/cbbrowne/mahout) which has the merit of involving
just two shell scripts, one of which is an auditing tool (pgcmp).

It implements a "little language" to indicate dependencies between the
SQL scripts that implement the DDL changes.

The notable thing that I have seen "downthread" that it does not
attempt to implement is "rollback scripts."  I find that "necessity"
to be a ruby-on-rails dogma that does not seem to fit what I see
people doing.


Where I work, the requirement to have rollback scripts is part of the ITIL 
requirement for Changes to have a backout procedure.


--
Angular momentum makes the world go 'round.



Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Ron

On 07/11/2018 03:39 PM, Ravi Krishna wrote:

Where I work, the requirement to have rollback scripts is part of the ITIL 
requirement for Changes to have a backout procedure.


Liquibase provides that ability, but IMO rollback for an RDBMS is always a bit 
tricky.  Certain DDL operations can take a long time if they involve
a table rewrite.  PG is actually better than others.


Yeah, that's true.  Sometimes I just dump the whole table, and reload if a 
rollback is necessary.
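For a single-table change, that approach looks roughly like this (a sketch; names are placeholders):

-- before the risky change
CREATE TABLE mytable_backup AS TABLE mytable;

-- if the change has to be backed out
BEGIN;
TRUNCATE mytable;
INSERT INTO mytable SELECT * FROM mytable_backup;
COMMIT;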




--
Angular momentum makes the world go 'round.



Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Ron

On 07/11/2018 04:10 PM, Christopher Browne wrote:
[snip]

ITIL surely does NOT specify the use of database rollback scripts as
THE SPECIFIED MECHANISM for a backout procedure.

In practice, we tend to take database snapshots using filesystem
tools, as that represents a backout procedure that will work regardless
of the complexity of an upgrade.

It is quite possible for an upgrade script to not be reversible.

After all, not all matrices are invertible; there are a surprisingly large
number of preconditions that are required for that in linear algebra.

And in databases, not all upgrades may be reversed via rollback scripts.


Does "rollback script" truly mean undoing what you just did in a 
transaction-like manner?


--
Angular momentum makes the world go 'round.



Re: Return select statement with sql case statement

2018-07-04 Thread Ron

On 07/04/2018 07:48 AM, hmidi slim wrote:

Hi,
I need to use conditional expression in my query, So I want to make a 
query like this:

select numberOfPremiumDays
            case  when numberOfPremiumDays = date_part('day', 
('2018-11-05'::timestamp) - ('2018-11-01'::timestamp)) then

                select product_id,
                price
                from product
                where occupation_type_id = 1
                group by product_id, occupation_type_id

However, in the documentation I found that the return of case was a value, 
not like in my case I want to return a select statement.

How can I use a conditional expression in a sql query?
Best Regards.


The CASE clause is used to return one of many choices.  Based on this 
example, you need to do this:


select numberOfPremiumDays,
       product_id,
       price
from product
where occupation_type_id = 1
  and numberOfPremiumDays = date_part('day', ('2018-11-05'::timestamp) - ('2018-11-01'::timestamp))
group by product_id, occupation_type_id



--
Angular momentum makes the world go 'round.



Re: Postgresql Split Brain: Which one is latest

2018-04-10 Thread Ron
You need to find out when the split happened, and whether each new master 
has records added since then.


On 04/10/2018 11:47 AM, Vikas Sharma wrote:
Thanks Adrian and Edison, I also think so. At the moment I have 2 masters, 
as soon as slave is promoted to master it starts its own timeline and 
application might have added data to either of them or both, only way to 
find out correct master now is the instance with max count of data in 
tables which could incur data loss as well. Correct me if wrong please?


Thanks and Regards
Vikas

On Tue, Apr 10, 2018, 17:29 Adrian Klaver wrote:


On 04/10/2018 08:04 AM, Vikas Sharma wrote:
> Hi Adrian,
>
> This can be a good example: Application server e.g. tomcat having two
> entries to connect to databases, one for master and 2nd for Slave
> (ideally used when slave becomes master). If application is not able to
> connect to first, it will try to connect to 2nd.

So the application server had a way of seeing the new master(old slave),
in spite of the network glitch, that the original master database did not?

If so and it was distributing data between the two masters on an unknown
schedule, then as Edison pointed out in another post, you really have a
split brain issue. Each master would have its own view of the data and
latest update would really only be relevant for that master.

>
> Regards
> Vikas
>
> On 10 April 2018 at 15:26, Adrian Klaver wrote:
>
>     On 04/10/2018 06:50 AM, Vikas Sharma wrote:
>
>         Hi,
>
>         We have postgresql 9.5 with streaming replication(Master-slave)
>         and automatic failover. Due to network glitch we are in
>         master-master situation for quite some time. Please, could you
>         advise best way to confirm which node is latest in terms of
>         updates to the postgres databases.
>
>
>     It might help to know how the two masters received data when they
>     where operating independently.
>
>
>         Regards
>         Vikas Sharma
>
>
>
>     --
>     Adrian Klaver
> adrian.kla...@aklaver.com 
>
>
>


--
Adrian Klaver
adrian.kla...@aklaver.com 



--
Angular momentum makes the world go 'round.


Re: pg_basebackup restore a single table

2018-04-11 Thread Ron



On 04/11/2018 10:21 AM, Andreas Kretschmer wrote:



Am 11.04.2018 um 15:53 schrieb camarillo:

Can I do a restore of a single table or single base using the archive
generated for the basebackup without having to delete the filesystem
(/var/lib/pgsql/9.5/*)?.


No, but you can use a spare machine to restore the whole database 
(point-in-time-recovery) and restore the table from that using pg_dump or 
copy.


Not everyone has a spare machine and a few TB lying around...

--
Angular momentum makes the world go 'round.



Re: how to securely delete the storage freed when a table is dropped?

2018-04-13 Thread Ron

After you drop a table, aren't the associated files dropped?

On 04/13/2018 02:29 PM, Ozz Nixon wrote:

There are free utilities that do government-level wipes. The process would be: 
drop the table, shrink the old tablespace, then (if Linux-based) dd-fill the 
drive and run a 5x or 8x wipe to make sure the drive does not have 
readable imprints on the platters.

Now what Jonathan mentions - sounds like he wants to do the same to the 
physical table. Never dabbling into PSQL’s storage and optimization algorithms, 
I would first assume, a script to do a row by row update table set 
field1…fieldx, different data patterns, existing field value length and field 
max length. Run the script at least 5 to 8 times, then drop the table .. the 
problem will be, does PSQL use a new page as you do this, then you are just 
playing with yourself. Let alone, how does PSQL handle indexes - new pages, or 
overwrite the existing page? And is any NPI (Non-Public-Info) data in the index 
itself?

* So any PSQL core-engine guys reading?

O.


On Apr 13, 2018, at 3:03 PM, Ron <ronljohnso...@gmail.com> wrote:



On 04/13/2018 12:48 PM, Jonathan Morgan wrote:

For a system with information stored in a PostgreSQL 9.5 database, in which data stored 
in a table that is deleted must be securely deleted (like shred does to files), and where 
the system is persistent even though any particular table likely won't be (so can't just 
shred the disks at "completion"), I'm trying to figure out my options for 
securely deleting the underlying data files when a table is dropped.

As background, I'm not a DBA, but I am an experienced implementor in many 
languages, contexts, and databases. I've looked online and haven't been able to 
find a way to ask PostgreSQL to do the equivalent of shredding its underlying 
files before releasing them to the OS when a table is DROPped. Is there a 
built-in way to ask PostgreSQL to do this? (I might just not have searched for 
the right thing - my apologies if I missed something)

A partial answer we're looking at is shredding the underlying data files for a 
given relation and its indexes manually before dropping the tables, but this 
isn't so elegant, and I'm not sure it is getting all the information from the 
tables that we need to delete.

We also are looking at strategies for shredding free space on our data disk - either 
running a utility to do that, or periodically replicating the data volume, swapping in 
the results of the copy, then shredding the entire volume that was the source so its 
"free" space is securely overwritten in the process.

Are we missing something? Are there other options we haven't found? If we have 
to clean up manually, are there other places we need to go to shred data than 
the relation files for a given table, and all its related indexes, in the 
database's folder? Any help or advice will be greatly appreciated.

I'd write a program that fills all free space on disk with a specific pattern.  
You're probably using a logging filesystem, so that'll be far from perfect, 
though.

--
Angular momentum makes the world go 'round.



--
Angular momentum makes the world go 'round.



Re: pg_dump to a remote server

2018-04-17 Thread Ron

On 04/16/2018 11:07 PM, Adrian Klaver wrote:

On 04/16/2018 06:43 PM, Ron wrote:



On 04/16/2018 07:18 PM, Adrian Klaver wrote:

On 04/16/2018 04:58 PM, Ron wrote:
We're upgrading from v8.4 to 9.6 on a new VM in a different DC.  The 
dump file will be more than 1TB, and there's not enough disk space on 
the current system for the dump file.


Thus, how can I send the pg_dump file directly to the new server while 
the pg_dump command is running?  NFS is one method, but are there 
others (netcat, rsync)?  Since it's within the same company, encryption 
is not required.


Maybe?:

pg_dump -d test -U postgres -Fc | ssh aklaver@arkansas 'cat > test_cat.out'


That looks promising.  I could then "pg_restore -jX".


More promising would be the suggestion from Michael Nolan:

https://www.postgresql.org/message-id/CAOzAqu%2BVpOfzBHwcqptSzm3PkeZAjkqqc0XqB%2BA-jBNioU6x%2Bg%40mail.gmail.com 



"Can you run pg_dump on the new server, connecting remotely to the current 
one?"


It eliminates two programs(ssh and cat) and a pipe.


Is that supported?
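It is; a newer pg_dump can dump from an older server, so something along these lines should work (a sketch; host and database names are placeholders, and pg_hba.conf on the old server has to allow the connection):

pg_dump -h oldserver -p 5432 -U postgres -Fc -f /local/disk/backups/olddb.dump olddb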

--
Angular momentum makes the world go 'round.



Re: Postgresql database encryption

2018-04-20 Thread Ron



On 04/20/2018 06:11 PM, Stephen Frost wrote:

Greetings,

* Ron (ronljohnso...@gmail.com) wrote:

On 04/20/2018 03:55 PM, Vick Khera wrote:

On Fri, Apr 20, 2018 at 11:24 AM, Vikas Sharma <shavi...@gmail.com
For anyone to offer a proper solution, you need to say what purpose your
encryption will serve. Does the data need to be encrypted at rest? Does it
need to be encrypted in memory? Does it need to be encrypted at the
database level or at the application level? Do you need to be able to
query the data? There are all sorts of scenarios and use cases, and you
need to be more specific.

For me, using whole-disk encryption solved my need, which was to ensure
that the data on disk cannot be read once removed from the server.

Someone really needs to explain that to me. My company-issued laptop has
WDE, and that's great for when the machine is shut down and I'm carrying it
from place to place, but when it's running, all the data is transparently
decrypted for every process that wants to read the data, including malware,
industrial spies,

Thus, unless you move your DB server on a regular basis, I can't see the
usefulness of WDE on a static machine.

The typical concern (aka, attack vector) isn't around moving the DB
server on a regular basis or about someone breaking into your data
center and stealing your drives, it's making sure that disposal of
equipment doesn't result in valuable data being retained on the
drives when they leave the data center for replacement or disposal.


That makes some sense, but years of added CPU overhead to mitigate a problem 
that could be solved by writing zeros to the disk as a step in the decomm 
process seems more than a bit wasteful.



--
Angular momentum makes the world go 'round.



Re: Postgresql database encryption

2018-04-20 Thread Ron

On 04/20/2018 03:55 PM, Vick Khera wrote:
On Fri, Apr 20, 2018 at 11:24 AM, Vikas Sharma wrote:


Hello Guys,

Could someone throw light on the postgresql instance wide or database
wide encryption please? Is this possible in postgresql and been in use
in production?.


For anyone to offer a proper solution, you need to say what purpose your 
encryption will serve. Does the data need to be encrypted at rest? Does it 
need to be encrypted in memory? Does it need to be encrypted at the 
database level or at the application level? Do you need to be able to 
query the data? There are all sorts of scenarios and use cases, and you 
need to be more specific.


For me, using whole-disk encryption solved my need, which was to ensure 
that the data on disk cannot be read once removed from the server.


Someone really needs to explain that to me. My company-issued laptop has 
WDE, and that's great for when the machine is shut down and I'm carrying it 
from place to place, but when it's running, all the data is transparently 
decrypted for every process that wants to read the data, including malware, 
industrial spies,


Thus, unless you move your DB server on a regular basis, I can't see the 
usefulness of WDE on a static machine.


For certain fields in one table, I use application level encryption so 
only the application itself can see the original data. Anyone else 
querying that table sees the encrypted blob, and it was not searchable.


--
Angular momentum makes the world go 'round.


Re: Postgresql database encryption

2018-04-20 Thread Ron

On 04/20/2018 10:24 AM, Vikas Sharma wrote:

Hello Guys,

Could someone throw light on the postgresql instance wide or database wide 
encryption please? Is this possible in postgresql and been in use in 
production?.


What about encrypted backups?

--
Angular momentum makes the world go 'round.



Re: pg_dump to a remote server

2018-04-16 Thread Ron

On 04/16/2018 07:47 PM, Gao Jack wrote:

-Original Message-
From: Ron <ronljohnso...@gmail.com>
Sent: Tuesday, April 17, 2018 7:59 AM
To: pgsql-general <pgsql-gene...@postgresql.org>
Subject: pg_dump to a remote server

We're upgrading from v8.4 to 9.6 on a new VM in a different DC.  The dump
file will be more than 1TB, and there's not enough disk space on the current
system for the dump file.

Thus, how can I send the pg_dump file directly to the new server while the
pg_dump command is running?  NFS is one method, but are there others
(netcat, rsync)?  Since it's within the same company, encryption is not
required.

Or would it be better to install both 8.4 and 9.6 on the new server (can I
even install 8.4 on RHEL 6.9?), rsync the live database across and then set
up log shipping, and when it's time to cut over, do an in-place pg_upgrade?

(Because this is a batch system, we can apply the data input files to bring
the new database up to "equality" with the 8.4 production system.)

Thanks

--
Angular momentum makes the world go 'round.

Hi

https://www.postgresql.org/docs/current/static/backup-dump.html#BACKUP-DUMP-RESTORE

...
...

The ability of pg_dump and psql to write to or read from pipes makes it 
possible to dump a database directly from one server to another, for example:

pg_dump -h host1 dbname | psql -h host2 dbname


But that assumes --format=plain which will send a whole lot of uncompressed 
text across the wire.


--
Angular momentum makes the world go 'round.



A couple of pg_dump questions

2018-04-19 Thread Ron


$ pg_dump --host=farawaysrvr -Fc $REMOTEDB > /local/disk/backups/$REMOTEDB.dump

Is the data compressed on the remote server (thus minimizing traffic on the 
wire), or locally?  (I'd test this myself, but the company has really strict 
firewall rules in place.)


$ pg_dump --host=farawaysrvr -Fc -j4 $REMOTEDB > 
/local/disk/backups/$REMOTEDB.dump


Will parallel backups work if pg_dump is v9.6 and the remote system is v8.4?

Thanks

--
Angular momentum makes the world go 'round.



Re: Must re-connect to see tables

2018-03-27 Thread Ron
If it worked in 9.3.22 and now fails in 9.3.22, then I'd look to see 
whether something has changed in data.sql.


On 03/27/2018 06:22 AM, Blake McBride wrote:

Hi,

I have been using PostgreSQL for many years but all of a sudden a db load 
script I've been using no longer works.  What id does (essentially) is:


psql postgres postgres
drop database test;
create database test;
\c test
\i data.sql
\dt
\q


I tested this with a small data.sql and it worked fine. However, when I 
use a real (much larger) data.sql, \dt replies "No relations"!  I found 
that if I stick an extra '\c test' after the '\i data.sql', it works.


The data.sql is a PostgreSQL dump from another database. When the '\i 
data.sql' is done, the prompt still shows 'test#' so it's not like I've 
changed databases.


I only have one schema, 'public'.

Can't figure out why a simple example works but a large file requires an 
extra \c.


Sure appreciate any help.

Blake McBride



--
Angular momentum makes the world go 'round.



Re: [EXTERNAL]Re: pg_dump -Fd -j2 on standby in 9.6.6

2018-03-28 Thread Ron

On 03/28/2018 03:05 AM, Andreas Kretschmer wrote:
[snip]

> This e-mail message, including any attachments,

this is a public mailing list ...


The intended recipient is the public mailing list, no?

--
Angular momentum makes the world go 'round.



Re: using pg_basebackup for point in time recovery

2018-06-21 Thread Ron

On 06/21/2018 12:27 AM, Michael Paquier wrote:
[snip]

Attached is a patch which includes your suggestion.  What do you think?
As that's an improvement, only HEAD would get that clarification.


You've *got* to be kidding.

Fixing an ambiguously or poorly worded bit of *documentation* should 
obviously be pushed to all affected versions.


--
Angular momentum makes the world go 'round.


Re: COPY from a remote machine in Datastage

2018-10-05 Thread Ron

On 10/05/2018 09:18 AM, Ravi Krishna wrote:


Hello, if you need to use COPY command from remote machine and you use 
some libpq bindings (aka ruby pg gem for example), you can use functions 
associated with COPY command 
(https://www.postgresql.org/docs/10/static/libpq-copy.html). They should 
be provided by bindings of postgres library you're using.


If you can share more info, at least how do you access postgres (via Ruby 
gem for example or from client's console), I can try to be more descriptive.


We are not writing any Ruby/Python code.  We are using Datastage. Datastage 
has built-in features for most of this stuff. For example, to bulk load data 
from csv files
into Oracle/DB2 etc., it has a BULK loader feature. However, DS has no 
support for PG directly and we are piggybacking on ODBC, where there is no 
BULK loader.


The only recourse for us is to type in SQL as DS allows user code.


Can you install the postgres client software (psql) on the client machine 
and then have Datastage spawn "psql -c 'COPY ...'"?
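For example, roughly (a sketch; host, database, table and file names are placeholders):

psql -h pghost -d targetdb -c "\copy target_table FROM '/data/extract.csv' WITH (FORMAT csv)"

\copy reads the file on the client side and streams it over the connection, so nothing has to be staged on the database server itself.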


--
Angular momentum makes the world go 'round.


Re: something weird happened - can select by column value although column value exist

2018-10-11 Thread Ron

On 10/11/2018 03:17 PM, Dmitry O Litvintsev wrote:

Hi,

Today the following happened:

Found this error in my production log:

< 2018-10-11 13:31:52.587 CDT >ERROR:  insert or update on table "file" violates foreign 
key constraint "$1"
< 2018-10-11 13:31:52.587 CDT >DETAIL:  Key (volume)=(155303) is not present in table 
"volume".
< 2018-10-11 13:31:52.587 CDT >STATEMENT:
 INSERT INTO file 
(sanity_crc,uid,pnfs_id,crc,deleted,cache_mod_time,drive,volume,sanity_size,cache_status,gid,location_cookie,cache_location,original_library,bfid,pnfs_path,size)
 VALUES (
 4002229874,0,'',256655919,'u','2018-10-11 13:31:52','',(SELECT 
id FROM volume where label='A'),65536,'B',0,'C','D','E','F','',197243) 
RETURNING *

file table references volume table on file.volume = volume.id and file.volume 
is FK to volume.id. I doctored the query for privacy replacing string values 
with 'A', 'B'. ...

(Queries similar to the one quoted above are executed by an application and run 
thousands of times every day, and have been for years.)

So, the problem:

SELECT id FROM volume where label='A';
id

  155303
(1 row)

BUT:

select * from volume where id = 155303;
  ...
(0 rows)

?!

id is a sequence:

id| integer | not null default 
nextval(('volume_seq'::text)::regclass)


This entry id = 155303 has existed for some time and has a lot of existing file 
entries holding
FK reference to volume id = 155303

I "fixed" the issue just by:

update volume set id  = 155303 where label='A';

BUT It did not work  right away. Meaning I did this once:

update volume set id  = 155303 where label='A';

no effect.

I did it again, I also did it;

update volume set id = (select id from volume where  label='A');

and then again

update volume set id  = 155303 where label='A';

eventually it worked. Now,


select count(*) from volume where label='A';
  count
---
  1
(1 row)


What is this? Version 9.3.9, running on Linux RH6.


Index corruption?  Maybe rebuild the FK.
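e.g., something along these lines (a sketch; the constraint name "$1" is taken from the error message above):

REINDEX TABLE volume;
ALTER TABLE file DROP CONSTRAINT "$1";
ALTER TABLE file ADD CONSTRAINT file_volume_fkey
    FOREIGN KEY (volume) REFERENCES volume(id);

Re-adding the constraint revalidates every existing row, so it will also tell you whether any other orphaned references are lurking.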


--
Angular momentum makes the world go 'round.



pg_restore to new database wants to wipe out the old database?

2018-10-30 Thread Ron

Hi,

v9.6.9

Why is pg_restore trying to drop my production database, when I (think I) am 
telling it to create the new database "Molson"?


$ cd /backup
$ pg_dump -d proddb -j 8 -Fd --no-synchronized-snapshots -Z0 -v -f proddb 2> 
proddb_pgdump.log


$ mv proddb Molson
$ pg_restore -vcC --if-exists --jobs=8 -d postgres Molson
pg_restore: connecting to database for restore
pg_restore: dropping DATABASE proddb
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 145485; 1262 16385 DATABASE 
proddb TAP
pg_restore: [archiver (db)] could not execute query: ERROR:  database 
"proddb" is being accessed by other users

DETAIL:  There are 3 other sessions using the database.
    Command was: DROP DATABASE IF EXISTS "proddb";

pg_restore: processing item 145482 ENCODING ENCODING
pg_restore: processing item 145483 STDSTRINGS STDSTRINGS
pg_restore: processing item 145484 SEARCHPATH SEARCHPATH
pg_restore: processing item 145485 DATABASE proddb
pg_restore: creating DATABASE "proddb"
pg_restore: [archiver (db)] could not execute query: ERROR:  database 
"proddb" already exists
    Command was: CREATE DATABASE "proddb" WITH TEMPLATE = template0 
ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';

Thanks
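(For the record, one way to get the dump into a database named Molson without touching proddb, as a sketch, is to skip -c/-C, create the target database yourself, and point -d at it:

createdb Molson
pg_restore -v --jobs=8 -d Molson Molson

With -C, pg_restore creates (and with -c drops) the database under the name recorded in the dump, i.e. proddb, regardless of what -d points at.)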


--
Angular momentum makes the world go 'round.


Re: Trouble Upgrading Postgres

2018-11-03 Thread Ron

On 11/03/2018 02:57 PM, Charles Martin wrote:
I'd be grateful for some help. I am trying to move a large database from 
PostgreSQL 9.6 on Centos 6 to a different server using PostgreSQL 11 on 
Centos 7. I can't do a pg_dump because it always fails on the largest table.


What error message?


--
Angular momentum makes the world go 'round.


Re: backend crash on DELETE, reproducible locally

2018-11-03 Thread Ron

On 11/03/2018 02:19 PM, obo...@email.cz wrote:

Hello,

we reached exactly the same problem after upgrading to PostgreSQL 11 - the 
server crashed on a DELETE statement with a trigger. We also observed an 
AFTER DELETE trigger receiving NULL values in OLD. Now I see the problem 
seems to be solved (theoretically). Unfortunately, we are not able to 
build the server with the patch, so we cannot confirm that. However, when 
we just copied the database (within the same server), the same DELETE 
executed on the copy with no problems.


I would like to ask, however: could the same problem arise from an UPDATE 
statement (also on a table with an AFTER trigger), or would that be 
another bug (technical details below)?
As the bug causes our production servers segfault several times a day, 
we'd like to solve the problem as soon as possible. Do I understand it 
correctly that if we dump and restore the database, the bug should not 
occur (until the next ALTER TABLE ADD COLUMN - which we'll avoid until the 
patch is released)?


You can dump that single table, truncate the table (presuming no FKs) and 
then restore it.
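Roughly (a sketch; names are placeholders):

pg_dump -Fc -t public.mytable mydb > mytable.dump
psql -d mydb -c 'TRUNCATE public.mytable;'
pg_restore -a -d mydb mytable.dump

(-a restores data only, so the existing table definition is reused.)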


--
Angular momentum makes the world go 'round.


Truncated queries in the pg_log file

2018-11-01 Thread Ron

Hi,

v8.4  (we're migrating to 9.6 tomorrow night, but the work must still 
proceed today.)


SQL statements are being logged (which is good), but only the first 400 or so 
bytes are recorded.  What knob do I tweak to record the whole query?


Here are the relevant postgresql.conf items which are not commented out:

log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%F.log'
log_rotation_age = 1d
silent_mode = off
client_min_messages = notice
log_min_messages = notice
log_min_error_statement = notice
log_min_duration_statement = 5000
log_checkpoints = on
log_line_prefix = '%m %d %r %u %p %x %i'
log_lock_waits = on
log_statement = 'mod'
log_temp_files = 1kB
track_activity_query_size = 10240

Thanks

--
Angular momentum makes the world go 'round.



Re: Truncation of UNLOGGED tables upon restart.

2018-11-01 Thread Ron

On 11/01/2018 03:58 PM, David G. Johnston wrote:
On Thu, Nov 1, 2018 at 1:49 PM Ravi Krishna wrote:


Per documentation unlogged tables are not crash safe and PG will
truncate it when it restarts after a crash. Does this apply to even
read only unlogged tables.

For example:

On Monday I load data into unlogged tables.

Then from Tue onwards the table is only read by application.

On Fri morning PG crashes.  When it restarts will it truncate all
unlogged tables even though they remained
unchanged.  I hope it does not.


"an unlogged table is automatically truncated after a crash or unclean 
shutdown." - there are no exceptions noted.


There is no such thing as a "read only" table in PostgreSQL.  All tables 
are read/write no matter that frequency of either event.  There is nothing 
inherently special about "no writes for 4 days" and "no writes for 10 
seconds" that would allow for a distinction to be made.  There could be 
write in progress on the table just as it crashes Friday.


Too bad you can't say ALTER TABLESPACE foo SET READONLY = true;

That would effectively give you read-only tables.  I've done that before on 
the legacy RDBMS I managed.


--
Angular momentum makes the world go 'round.


Re: why select count(*) consumes wal logs

2018-11-06 Thread Ron

On 11/06/2018 11:12 AM, Michael Nolan wrote:
On Tue, Nov 6, 2018 at 11:08 AM Ravi Krishna wrote:


PG 10.5

I loaded 133 million rows to a wide table (more than 100 cols) via COPY.


It's always a good idea after doing a large scale data load to do a vacuum 
analyze on the table (or the entire database.)




I understand the need to ANALYZE (populate the histograms needed by the 
dynamic optimizer), but why VACUUM (which is recommended after updates and 
deletes)?


Thanks

--
Angular momentum makes the world go 'round.


Re: Fwd: Log file

2018-11-06 Thread Ron

On 11/06/2018 12:06 PM, Igor Korot wrote:
[snip]

Ok.
I guess I will have to write such function.


Cron and the relevant log_* config variables should solve your problems.

--
Angular momentum makes the world go 'round.



Re: Move cluster to new host, upgraded version

2018-11-11 Thread Ron

On 11/11/2018 02:51 PM, Rich Shepard wrote:

On Sun, 11 Nov 2018, Adrian Klaver wrote:

pg_dumpall is going to need to run against a Postgres server not just a 
data directory.


Adrian,

  Of course. Yet it's the data directory that's written to the .sql file.


Unless your db is small, do a parallel dump.  Even then, do a "-Fc" backup 
instead.  That's been the recommended method for many years.
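i.e., something like (a sketch):

pg_dump -Fd -j 4 -f /backups/mydb.dir mydb    # directory format, 4 parallel jobs
pg_dump -Fc -f /backups/mydb.dump mydb        # custom format, single compressed file

Either format can later be restored selectively, and in parallel, with pg_restore.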



--
Angular momentum makes the world go 'round.



Re: Move cluster to new host, upgraded version

2018-11-12 Thread Ron

On 11/12/2018 07:55 AM, Rich Shepard wrote:

On Sun, 11 Nov 2018, Ron wrote:


Unless your db is small, do a parallel dump. Even then, do a "-Fc" backup
instead. That's been the recommended method for many years.


Ron,

  I've several databases, none 'large.' When I've used pg_dumpall in the
past it's always with the clean option (-c) and the output directed to a
specified filename with the -f option. I don't find an -F option on the man
page.


Hmm.  Apparently -F is only an option to pg_dump.  (I use multiple pg_dump 
statements + "pg_dumpall --globals-only".)
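Roughly along these lines (a sketch; database names are placeholders):

pg_dumpall --globals-only -f globals.sql
for db in app1 app2 app3; do
    pg_dump -Fc -f "${db}.dump" "$db"
done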


--
Angular momentum makes the world go 'round.



Re: WTF with hash index?

2018-11-13 Thread Ron

On 11/13/2018 12:07 PM, Andreas Kretschmer wrote:



Am 13.11.2018 um 17:42 schrieb Олег Самойлов:
insert into gender (gender) select case when random<0.50 then 'female' 
when random<0.99 then 'male' else 'other' end from (select random() as 
random, generate_series(1,:table_size)) as subselect;


is that really your intended data distribution? 99% male?


select case when random<0.50 then 'female'
when random<0.99 then 'male'
    else 'other' end
from (select random() as random, generate_series(1,:table_size)) as subselect;

Shouldn't that make 49% male?

--
Angular momentum makes the world go 'round.



Re: Running pg_upgrade Version 11

2018-11-06 Thread Ron

On 11/06/2018 06:30 PM, rob stone wrote:


On Tue, 2018-11-06 at 15:17 +0900, Michael Paquier wrote:

On Tue, Nov 06, 2018 at 04:27:35PM +1100, rob stone wrote:

Logged in as user postgres and postgres owns the files created by
initdb, so is this a permissions problem or am I having a brain
fade?

Having 0600 as umask for those files is normal.  Don't you have more
logs about the error?  You should not see this error, except if
data_v10
is not a data folder initialized correctly, so perhaps you messed up
something in your environment?
--
Michael



Problem caused by my eyesight.
A colleague pointed out the typo in the argument to the -d parameter.
Working as intended.


That's why I like to line up my statements

/usr/lib/postgresql/11/bin/pg_upgrade \
-b /usr/lib/postgresql/10/bin \
-B /usr/lib/postgresql/11/bin \
-d /home/postgres/testing/data_v10 \
-D /home/postgres/testing/data_v11


--
Angular momentum makes the world go 'round.



CREATE OR REPLACE FUNCTION statement just sitting there

2018-11-05 Thread Ron
I've got a very puzzling problem on 9.6.6 systems we just migrated from 
8.4.  (The same problem happened on 9.6.9, but rolled it back so as to make 
prod have the same version as our Staging systems.)


We've got a giant script full of DROP TRIGGER IF EXISTS and CREATE TABLE and 
DROP TABLE and CREATE OR REPLACE FUNCTION statements.


Its purpose is to drop old parts of partitioned tables and add new tables.

It *ALWAYS worked* just fine on our big, ancient, production 8.4 databases 
(otherwise I'd have heard the screams of user rage), and on our 9.6.6 
staging environment.  However, on one or more of our big (and schema-identical) 
prod databases (each on a different server), it is finicky and 
tends to just "sit" at a random one of the CREATE OR REPLACE FUNCTION 
statements.


The "list all blocking queries" I run doesn't show that anything is blocking 
it (though it blocks everything else), and neither top(1) nor iotop(1) show 
any activity.


If it matters, this script is fed to the databases via the JDBC driver, and 
it works fine when I run it via psql.  (I'd gladly run the scripts manually, 
but these are child databases, and a parent db must be updated at the same 
time by a canned application.)


Where in Postgres can I look to see why it's just sitting there?

Thanks

--
Angular momentum makes the world go 'round.


Re: CREATE OR REPLACE FUNCTION statement just sitting there

2018-11-05 Thread Ron

On 11/05/2018 08:30 PM, Rob Sargent wrote:


On 11/5/18 7:05 PM, Ron wrote:

I've got a very puzzling problem on 9.6.6 systems we just migrated from 
8.4.  (The same problem happened on 9.6.9, but rolled it back so as to 
make prod have the same version as our Staging systems.)


We've got a giant script full of DROP TRIGGER IF EXISTS and CREATE TABLE 
and DROP TABLE and CREATE OR REPLACE FUNCTION statements.


It's purpose is to drop old parts of partitioned tables and add new tables.

It *ALWAYS worked* just fine on our big, ancient, production 8.4 
databases (otherwise I'd have heard the screams of user rage), and on our 
9.6.6 staging environment.  However, one or more of our big (and 
schema-identical) prod databases (which are each on a different server) 
it is finicky and tends to just "sit" at a random one of the CREATE OR 
REPLACE FUNCTION statements.


The "list all blocking queries" I run doesn't show that anything is 
blocking it (though it blocks everything else), and neither top(1) nor 
iotop(1) show any activity.


If it matters, this script is fed to the databases via the JDBC driver, 
and it works fine when I run it via psql.  (I'd gladly run the scripts 
manually, but these are child databases, and a parent db must be updated 
at the same time by a canned application.)


Where in Postgres can I look to see why it's just sitting there?

Thanks

--
Angular momentum makes the world go 'round.



select * from pg_stat_activity;

might shed some light?



That (plus pg_locks)  is the heart of the "list all blocking queries" 
statement I copied from https://wiki.postgresql.org/wiki/Lock_Monitoring.


--
Angular momentum makes the world go 'round.


Re: Idle query that's not "<IDLE>"?

2018-11-06 Thread Ron
Right.  But when does the query text become "<IDLE>"?  Or has that become 
obsolete? (We recently migrated from 8.4.)


On 11/06/2018 02:53 PM, Hellmuth Vargas wrote:

Hi
In the documentation describes the data in this field:

https://www.postgresql.org/docs/10/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW 

"Text of this backend's most recent query. If |state| is |active| this 
field shows the currently executing query. In all other states, it shows 
the last query that was executed. "



On Tue., Nov. 6, 2018 at 15:46, Ron (ronljohnso...@gmail.com) wrote:


Hi,

v9.6.6

Why do these idle queries (pids 8357, 11260 and 11355) "remember" the
queries they ran instead of having the text "<IDLE>"?

postgres=# select pid,
   xact_start as txn_start,
   to_char(EXTRACT(epoch FROM now() - query_start),
'999,999.') as query_age_secs,
   state,
   cast (query as char(20))
from pg_stat_activity
where datname <> 'postgres'
order by query_start;
;

 pid  |           txn_start           | query_age_secs | state  |        query
------+-------------------------------+----------------+--------+----------------------
26538 | 2018-11-06 14:40:55.053842-05 |     3,451.9853 | active | SELECT  to_char(b.pr
27497 | 2018-11-06 14:59:26.946854-05 |     2,340.5871 | active | SELECT  to_char(b.pr
29110 | 2018-11-06 14:59:50.479934-05 |     2,317.1725 | active | SELECT  to_char(b.pr
 8357 |                               |     1,324.1356 | idle   | SELECT  CAST(c.ecid
 9016 | 2018-11-06 15:34:51.535476-05 |       215.8391 | active | SELECT  to_char(b.pr
 9810 | 2018-11-06 15:35:00.483292-05 |       206.7676 | active | SELECT  to_char(b.pr
11260 |                               |       190.0814 | idle   | WITH configs AS (SEL
12800 | 2018-11-06 15:35:49.540631-05 |       157.9880 | active | SELECT  to_char(b.pr
11355 |                               |        42.9772 | idle   | SELECT CASE WHEN typ
22618 | 2018-11-06 15:38:02.317146-05 |        25.3219 | active | SELECT  to_char(b.pr
23176 | 2018-11-06 15:38:12.90985-05  |        14.7325 | active | SELECT  to_char(b.pr
23566 | 2018-11-06 15:38:28.802919-05 |          .6116 | active | select tms.TRIGGER.T
23588 | 2018-11-06 15:38:29.207373-05 |          .2089 | active | select cds.IMAGE_RPS
23590 | 2018-11-06 15:38:29.233724-05 |          .1814 | active | select tms.TRIGGER.T
23584 | 2018-11-06 15:38:29.046393-05 |          .0442 | active | select tms.MARK_SENS
23595 | 2018-11-06 15:38:29.403969-05 |          .0001 | active | select JOB_STEP.JOB_
(16 rows)


-- 
Angular momentum makes the world go 'round.




--
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate



--
Angular momentum makes the world go 'round.


Re: Idle query that's not "<IDLE>"?

2018-11-06 Thread Ron

On 11/06/2018 03:04 PM, David G. Johnston wrote:

On Tue, Nov 6, 2018 at 1:59 PM Ron  wrote:

Right.  But when does the query text become "<IDLE>"?  Or has that become 
obsolete? (We recently migrated from 8.4.)

That behavior changed sometime around 9.0; since it always shows the
last query executed it logically follows that it will never show the
placeholder "<IDLE>" (I suppose it might do so upon initial connect if
no queries have been sent yet...not sure what it says then or even if
it is possible)


So... obsolete.  Thanks.

--
Angular momentum makes the world go 'round.



Idle query that's not "<IDLE>"?

2018-11-06 Thread Ron

Hi,

v9.6.6

Why do these idle queries (pids 8357, 11260 and 11355) "remember" the 
queries they ran instead of having the text "<IDLE>"?


postgres=# select pid,
   xact_start as txn_start,
   to_char(EXTRACT(epoch FROM now() - query_start), '999,999.') 
as query_age_secs,

   state,
   cast (query as char(20))
from pg_stat_activity
where datname <> 'postgres'
order by query_start;
;

 pid  |           txn_start           | query_age_secs | state  |        query
------+-------------------------------+----------------+--------+----------------------
26538 | 2018-11-06 14:40:55.053842-05 |     3,451.9853 | active | SELECT  to_char(b.pr
27497 | 2018-11-06 14:59:26.946854-05 |     2,340.5871 | active | SELECT  to_char(b.pr
29110 | 2018-11-06 14:59:50.479934-05 |     2,317.1725 | active | SELECT  to_char(b.pr
 8357 |                               |     1,324.1356 | idle   | SELECT  CAST(c.ecid
 9016 | 2018-11-06 15:34:51.535476-05 |       215.8391 | active | SELECT  to_char(b.pr
 9810 | 2018-11-06 15:35:00.483292-05 |       206.7676 | active | SELECT  to_char(b.pr
11260 |                               |       190.0814 | idle   | WITH configs AS (SEL
12800 | 2018-11-06 15:35:49.540631-05 |       157.9880 | active | SELECT  to_char(b.pr
11355 |                               |        42.9772 | idle   | SELECT CASE WHEN typ
22618 | 2018-11-06 15:38:02.317146-05 |        25.3219 | active | SELECT  to_char(b.pr
23176 | 2018-11-06 15:38:12.90985-05  |        14.7325 | active | SELECT  to_char(b.pr
23566 | 2018-11-06 15:38:28.802919-05 |          .6116 | active | select tms.TRIGGER.T
23588 | 2018-11-06 15:38:29.207373-05 |          .2089 | active | select cds.IMAGE_RPS
23590 | 2018-11-06 15:38:29.233724-05 |          .1814 | active | select tms.TRIGGER.T
23584 | 2018-11-06 15:38:29.046393-05 |          .0442 | active | select tms.MARK_SENS
23595 | 2018-11-06 15:38:29.403969-05 |          .0001 | active | select JOB_STEP.JOB_
(16 rows)


--
Angular momentum makes the world go 'round.


Re: CREATE OR REPLACE FUNCTION statement just sitting there

2018-11-06 Thread Ron

On 11/06/2018 05:05 AM, Laurenz Albe wrote:

Ron wrote:

However, one or more of our big (and schema-identical) prod databases (which 
are each on a different server)
it is finicky and tends to just "sit" at a random one of the CREATE OR REPLACE 
FUNCTION statements.

The "list all blocking queries" I run doesn't show that anything is blocking it 
(though it blocks
everything else), and neither top(1) nor iotop(1) show any activity.

If it matters, this script is fed to the databases via the JDBC driver, and it 
works fine when I run it via psql.
(I'd gladly run the scripts manually, but these are child databases, and a 
parent db must be updated
at the same time by a canned application.)

Where in Postgres can I look to see why it's just sitting there?

select * from pg_stat_activity;
might shed some light?
  
That (plus pg_locks)  is the heart of the "list all blocking queries" statement I copied

from https://wiki.postgresql.org/wiki/Lock_Monitoring.

If there is nothing with "granted" set to FALSE in "pg_locks", you are not 
blocked by
a database lock.

What is the "state" of the hanging database session in "pg_stat_activity"?

If it is "idle" or "idle in transaction", then the lock must be in your Java 
process.


Good question.  I'll look at that the next time we try it.

--
Angular momentum makes the world go 'round.



Re: CREATE OR REPLACE FUNCTION statement just sitting there

2018-11-06 Thread Ron

On 11/06/2018 05:34 AM, Alvaro Herrera wrote:

On 2018-Nov-05, Ron wrote:


That (plus pg_locks)  is the heart of the "list all blocking queries"
statement I copied from https://wiki.postgresql.org/wiki/Lock_Monitoring.

On that page there's a note about 9.6.  Did you see the referenced
commit
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=52f5d578d6c29bf254e93c69043b817d4047ca67
?  Maybe see about using the "pg_blocking_pids(int) returns int[]"
function instead.


I did see it, but the https://wiki.postgresql.org/wiki/Lock_Monitoring query 
seems to work (seeing that it regularly shows locks).


Is this query from https://stackoverflow.com/a/43363536/1543618 adequate to 
the task?


select pid, usename, pg_blocking_pids(pid) as blocked_by, query as blocked_query
from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0;




--
Angular momentum makes the world go 'round.


Re: PgAgent on Windows

2018-11-06 Thread Ron

On 11/06/2018 07:36 AM, Marcio Meneguzzi wrote:

Hello,

I'm trying to use pgAgent on Windows 10 and Windows Server 2012 R2.
My version of PostgreSQL is 9.5.1.14.
I installed and configured pgAgent with success, but when I try to run a job, 
the status is Failed with the message below:
"Couldn't create the primary connection (attempt 1): fe_sendauth: no 
password supplied"

My %APPDATA%\postgresql\pg_pass.conf is:
localhost:5490:*:postgres:my_pass

06/11/18 11:36:31   



What do you mean by "run a job"?  Can you interactively log in?


--
Angular momentum makes the world go 'round.


Re: pg_dump out of memory for large table with LOB

2018-11-14 Thread Ron

On 11/14/2018 11:14 AM, Jean-Marc Lessard wrote:

Adrien Nayrat wrote:
> With 17 million LO, it could eat lot of memory ;)
Yes it does.

I did several tests and here are my observations.

First memory settings are:
shared_buffers = 3GB
work_mem = 32Mb
maintenance_work_mem = 1GB
effective_cache_size = 9MB
bytea_output = 'escape'


Why escape instead of hex?


--
Angular momentum makes the world go 'round.


Re: Trouble Upgrading Postgres

2018-11-04 Thread Ron

Not enough swap space?

On 11/04/2018 04:55 PM, Charles Martin wrote:

Yep, you called it:

Nov  2 20:30:45 localhost kernel: Out of memory: Kill process 30438 
(postmaster) score 709 or sacrifice child
Nov  2 20:30:45 localhost kernel: Killed process 30438, UID 26, 
(postmaster) total-vm:3068900kB, anon-rss:1695392kB, file-rss:1074692kB


So it's running out of memory when trying to dump this table. The "old" 
server has 4GB of ram, the "new" server 20GB.



On Sun, Nov 4, 2018 at 3:13 PM Adrian Klaver wrote:


On 11/4/18 8:38 AM, Charles Martin wrote:
>
> Adtrian said:
>>> pg_dump: Error message from server: server closed the connection
>>> unexpectedly
>
>  >Is this error the client reporting?
>  >Is this the same that is showing up in the server log?
>
> Yes, that's the client message, i.e. what appeared in the terminal
> window that gave the command. The server log shows:
>
> 2018-11-02 20:30:46 EDT [20405]: [4-1] user=,db= LOG: server process
> (PID 30438) was terminated by signal 9: Killed
>
> 2018-11-02 20:30:46 EDT [20405]: [5-1] user=,db= DETAIL: Failed process
> was running: COPY public.docfile (docfile_pkey, docfileoriginalname,
> ordernumber, versionnum, docfilecontents, docfilepath, d$
>
> 2018-11-02 20:30:46 EDT [20405]: [6-1] user=,db= LOG: terminating any
> other active server processes
>
> 2018-11-02 20:30:46 EDT [20415]: [10-1] user=,db= WARNING:  terminating
> connection because of crash of another server process
>
> 2018-11-02 20:30:46 EDT [20415]: [11-1] user=,db= DETAIL:  The
> postmaster has commanded this server process to roll back the current
> transaction and exit, because another server process exited abnor$
>
>
>
>>So where is the server located relative to the pg_dump client?
>>On the same machine?
>>If so is it a virtual machine e.g AWS?
>>Across a local or remote network?
>
>
>   I gave the command in a terminal session after SSHing to the server
> from the same network. It is not a virtual machine.
>
>
> Lsaurenz said:
>
>
>>You probably have a corrupted database.
>>You should get that fixed first, then you can upgrade.
>>Maybe you should hire a professional for that.
>
>
> I suspect this is is correct, both that there is corruption in the
table
> and that I need a professional to help. If someone here is available,
> I'm interested.

Given that this involves your largest table I would confirm that the
signal 9 kill was not coming from the system OOM killer. Take a look at
the system logs to see what they show over the same time period.

>
>
> Andreas said:
>
>
>  >which exact minor version please?
>
>
> PostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
> 20120313 (Red Hat 4.4.7-23), 64-bit
>
>


-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Angular momentum makes the world go 'round.


Re: How to maintain the csv log files in pg_log directory only for past 30 days

2018-09-28 Thread Ron

On 09/28/2018 12:03 AM, Raghavendra Rao J S V wrote:


Hi All,

Log files are generated in *csv* format in the *pg_log* directory of our 
PostgreSQL installation. Every day we get one log file. We would like to 
keep only the last 30 days. Which setting do we need to modify in 
“postgresql.conf” in order to recycle the log files after 30 days?


Does it have to be in postgresql.conf?  A cron job which runs a few minutes 
after midnight works just fine.


Compresses yesterday's log file and deletes files older than 30 days:
#!/bin/bash
# Compress yesterday's log file and delete logs older than 30 days.
DIR=/var/lib/pgsql/data/pg_log
cd "$DIR" || exit 1
# Yesterday's date in YYYY-MM-DD form, matching log_filename = 'postgresql-%F.log'
PREVDT=$(date -d "-1 day" +"%F")
bzip2 -9 postgresql-${PREVDT}.log
# Only files last modified more than 30 days ago; -delete avoids calling rm on an empty list
find "$DIR" -name 'postgresql-*log*' -mtime +30 -print -delete
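A matching crontab entry might look like this (the script path is just an example):

5 0 * * * /var/lib/pgsql/scripts/clean_pg_log.sh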



--
Angular momentum makes the world go 'round.


Re: Please let me know which configuration setting we need to modify to speedup the pg_dump backup.

2018-10-10 Thread Ron

On 10/10/2018 09:32 PM, Raghavendra Rao J S V wrote:

Hi All,

pg_dump is taking more time. Please let me know which configuration 
setting we need to modify to speed up the pg_dump backup. We are using 
version 9.2 on a CentOS box.


Is it taking "more time" because your database is bigger?


--
Angular momentum makes the world go 'round.


Re: dat names generated by pg_dump

2018-09-02 Thread Ron

On 09/02/2018 09:26 PM, Tom Lane wrote:

Ron  writes:

I can associate these dat names with their source tables through a bunch of
bash and vim manual operations, but I was wondering if there's any automated
method (maybe some SQL query of some catalog table; pg_class didn't seem to
have the relevant data) of making the association.

Those numbers are the "dump object IDs" generated by pg_dump.  They don't
have any significance on the server side, and typically would vary from
one pg_dump run to another.  You have to look at the dump TOC (table of
contents) to figure out what corresponds to what.  For example,

$ pg_dump -Fd -f dumpd regression
$ ls -1 dumpd
6143.dat.gz
6144.dat.gz
6145.dat.gz
...
blob_3001.dat.gz
blobs.toc
toc.dat
$ pg_restore -l dumpd
;
; Archive created at 2018-09-02 22:14:48 EDT
...
6573; 2613 119655 BLOB - 119655 postgres


Thanks. That's exactly what I needed.

--
Angular momentum makes the world go 'round.



dat names generated by pg_dump

2018-09-02 Thread Ron

Hi,

I can associate these dat names with their source tables through a bunch of 
bash and vim manual operations, but I was wondering if there's any automated 
method (maybe some SQL query of some catalog table; pg_class didn't seem to 
have the relevant data) of making the association.


If relevant, the source database is v8.4, but the backup was done by 9.6 on 
a separate server.


$ ls -1 CDSLBXW/*dat | head
CDSLBXW/8412.dat
CDSLBXW/8414.dat
CDSLBXW/8416.dat
CDSLBXW/8418.dat
CDSLBXW/8420.dat
CDSLBXW/8422.dat
CDSLBXW/8423.dat
CDSLBXW/8425.dat
CDSLBXW/8427.dat
CDSLBXW/8428.dat

Thanks

--
Angular momentum makes the world go 'round.



Re: dat names generated by pg_dump

2018-09-02 Thread Ron

On 09/02/2018 08:41 PM, Adrian Klaver wrote:

On 09/02/2018 05:40 PM, Ron wrote:

Hi,

I can associate these dat names with their source tables through a bunch 
of bash and vim manual operations, but I was wondering if there's any 
automated method (maybe some SQL query of some catalog table; pg_class 
didn't seem to have the relevant data) of making the association.


Some background would be helpful:

1) What is producing the *.dat files?


pg_dump, as described in the Subject.



2) What is their structure?


They're pg_dump files.



3) What do the numbers refer to?


That's what I'm asking the list.





If relevant, the source database is v8.4, but the backup was done by 9.6 
on a separate server.


$ ls -1 CDSLBXW/*dat | head
CDSLBXW/8412.dat
CDSLBXW/8414.dat
CDSLBXW/8416.dat
CDSLBXW/8418.dat
CDSLBXW/8420.dat
CDSLBXW/8422.dat
CDSLBXW/8423.dat
CDSLBXW/8425.dat
CDSLBXW/8427.dat
CDSLBXW/8428.dat

Thanks






--
Angular momentum makes the world go 'round.



Re: Implementing standard SQL's DOMAIN constraint [RESOLVED]

2019-01-02 Thread Ron

On 1/2/19 12:05 PM, Rich Shepard wrote:

On Wed, 2 Jan 2019, David G. Johnston wrote:


You add the create domain command once before any objects that make use of
it.


David,

  This is the answer I sought: postgres supports the create domain command.
I did not see this in your first response.


Note that a CHECK constraint with 50 items is a *Very Bad Idea*, since 
changing such a constraint is very painful.  Use a FK constraint instead.
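i.e., something like this instead of a 50-item CHECK (a sketch with made-up names):

CREATE TABLE valid_code (
    code text PRIMARY KEY
);
-- the ~50 allowed values live here as rows, so adding one later is an INSERT, not an ALTER
ALTER TABLE main_table
    ADD CONSTRAINT main_table_code_fkey
    FOREIGN KEY (code) REFERENCES valid_code (code);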


--
Angular momentum makes the world go 'round.


Re: Dropping and creating a trigger

2019-01-05 Thread Ron

On 1/5/19 3:59 AM, Mitar wrote:

Hi!

I am seeing such errors in logs:

ERROR:  trigger "myapp_assignments" for relation "assignments" already exists
STATEMENT:
BEGIN TRANSACTION;
DROP TRIGGER IF EXISTS "myapp_assignments" ON "assignments";
CREATE TRIGGER "myapp_assignments"
AFTER INSERT OR UPDATE OR DELETE ON "assignments"
FOR EACH ROW EXECUTE PROCEDURE "tblobs_myapp"();
COMMIT;

How is this possible? If I am inside a transaction, this should work, no?


I'd think it should.  Have you run the commands manually, one at a time, 
from psql, and checked the table after the DROP TRIGGER, to verify that the 
trigger actually gets dropped?
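e.g., after the DROP TRIGGER, something like this (a sketch) should return no row for that trigger name:

SELECT tgname FROM pg_trigger
WHERE tgrelid = 'assignments'::regclass
  AND NOT tgisinternal;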


--
Angular momentum makes the world go 'round.



  1   2   3   4   5   6   7   8   9   10   >