Re: pg_dump out of memory for large table with LOB

2018-11-11 Thread Adrien Nayrat
Hello,

On 11/10/18 12:49 AM, Jean-Marc Lessard wrote:
> The dumped table is 0.5TB, 17 million rows and LOB uses about 99% of the
> space.

If I understand, you have 17 million Large Objects? I do not recall exactly, and maybe I am wrong, but it seems pg_dump has to allocate memory for each
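pg_dump tracks each large object as a separate archive entry, which is where the per-object memory goes. A quick way to check how many large objects the database actually holds (a sketch; pg_largeobject_metadata has one row per large object):

    -- one row per large object in the current database
    SELECT count(*) FROM pg_largeobject_metadata;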

Re: Fwd: Log file

2018-11-11 Thread Peter J. Holzer
On 2018-10-29 14:56:06 -0400, Tom Lane wrote:
> Igor Korot writes:
> > Or will I have to change the owner/group manually every time I
> > access the file?
>
> You can set up the log files as readable by the OS group of the server
> (see log_file_mode), and then grant membership in that group
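A minimal sketch of the setup Tom describes, assuming the server runs as OS user postgres and the reading user's name is illustrative:

    # postgresql.conf: make log files group-readable
    logging_collector = on
    log_file_mode = 0640

    # shell, as root: add the reading user (name assumed) to the server's group
    usermod -aG postgres igor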

query patterns for multipass aggregating

2018-11-11 Thread Rob Nikander
Hi,

I’ve run into this pattern a few times, and I usually get a little confused. I’m wondering if there are some common solutions or techniques.

Simplified example: I have tables `items`, `colors`, and `images`. Items have many colors, and many images. I want a query to list items, with
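A minimal schema for the example (column names taken from later replies in the thread; types are assumptions):

    CREATE TABLE items  (id serial PRIMARY KEY, name text);
    CREATE TABLE colors (item_id int REFERENCES items(id), color_name text);
    CREATE TABLE images (item_id int REFERENCES items(id), image_file text);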

Re: query patterns for multipass aggregating

2018-11-11 Thread Andrew Gierth
> "Rob" == Rob Nikander writes: Rob> I want a query to list items, with their colors and images. Each Rob> result row is an item, so the colors and images must be aggregated Rob> into arrays or json. Rob> If there were only one other table, it’s a simple join and group… Rob>

Re: query patterns for multipass aggregating

2018-11-11 Thread Andrew Gierth
> "Andrew" == Andrew Gierth writes: Andrew> Unfortunately, the planner isn't smart enough yet to know that Andrew> these two are equivalent, oops, I edited the second one before posting in a way that made them not be equivalent: adding a "group by x0.item_id" in both subqueries in method

Re: Move cluster to new host, upgraded version

2018-11-11 Thread Ron
On 11/11/2018 02:51 PM, Rich Shepard wrote:
> On Sun, 11 Nov 2018, Adrian Klaver wrote:
> > pg_dumpall is going to need to run against a Postgres server, not just
> > a data directory.
>
> Adrian,
>
> Of course. Yet it's the data directory that's written to the .sql file.

Unless your db is small, do a

pg9.6 when is a promoted cluster ready to accept "rewind" request?

2018-11-11 Thread magodo
Dear supporters,

I'm writing some scripts to implement manual failover. I have two clusters (let's say p1 and p2), where one is the primary (e.g. p1) and the other is a standby (e.g. p2). The way to do manual failover is straightforward, like the following:

1. promote p2
2. wait for `pg_isready` on p2
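A sketch of that sequence with real commands (data directory and host names are assumptions; pg_rewind additionally requires wal_log_hints=on or data checksums on the old primary, and a cleanly shut down target):

    # on p2: promote the standby
    pg_ctl promote -D /var/lib/pgsql/9.6/data

    # wait until the promoted node accepts connections
    until pg_isready -h p2; do sleep 1; done

    # on p1, after a clean shutdown: rewind it to follow p2
    pg_rewind -D /var/lib/pgsql/9.6/data \
              --source-server='host=p2 port=5432 user=postgres'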

Re: Move cluster to new host, upgraded version

2018-11-11 Thread Rich Shepard
On Sun, 11 Nov 2018, Adrian Klaver wrote:
> Not sure if you have any extensions or not, but the part I often skip is
> installing extensions in the new cluster before running the dump restore.

Thanks, Adrian. No extensions here.

Regards,

Rich

Re: Copy data from DB2 (Linux) to PG

2018-11-11 Thread Ravi Krishna
> Haven't tried it myself, but you may be able to connect the DB2 database
> to your PostgreSQL cluster using this FDW module:
> https://github.com/wolfgangbrandl/db2_fdw
> Looks like db2_fdw is DB2 LUW only though, so you might be out of luck
> if your DB2 is on IBM i (or z ;-)

As the thread
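Setting up any FDW follows the same standard SQL; a sketch for db2_fdw, with all option names and values here being assumptions rather than taken from its documentation, so check the module's README:

    CREATE EXTENSION db2_fdw;
    CREATE SERVER db2_src FOREIGN DATA WRAPPER db2_fdw
      OPTIONS (dbserver 'SAMPLE');                   -- option name assumed
    CREATE USER MAPPING FOR current_user SERVER db2_src
      OPTIONS (user 'db2inst1', password 'secret');  -- values assumed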

Re: Move cluster to new host, upgraded version

2018-11-11 Thread Adrian Klaver
On 11/11/18 12:18 PM, Rich Shepard wrote:
> On Sun, 11 Nov 2018, Andrew Gierth wrote:
> > The most reliable and preferred procedure is to use the _new_ version's
> > pg_dumpall, for example by allowing access to the old host from the new
> > one (possibly using an ssh port forward), or (on OSes that make it

Re: Move cluster to new host, upgraded version

2018-11-11 Thread Rich Shepard
On Sun, 11 Nov 2018, Andrew Gierth wrote:
> The most reliable and preferred procedure is to use the _new_ version's
> pg_dumpall, for example by allowing access to the old host from the new
> one (possibly using an ssh port forward), or (on OSes that make it easy
> to do package installs of multiple

Re: Move cluster to new host, upgraded version

2018-11-11 Thread Rich Shepard
On Sun, 11 Nov 2018, Adrian Klaver wrote:
> pg_dumpall is going to need to run against a Postgres server, not just a
> data directory.

Adrian,

Of course. Yet it's the data directory that's written to the .sql file.

> If both your old and new machines are on the same network, why not just point

Re: Move cluster to new host, upgraded version

2018-11-11 Thread Rich Shepard
On Sun, 11 Nov 2018, Rich Shepard wrote:
> Haven't tried to run an application on one host using data on another
> host. I'll look at which tool will do that.

Looks like the pg_dumpall '-h' option will act on the other host's data directory.

Regards,

Rich

Re: Move cluster to new host, upgraded version

2018-11-11 Thread Adrian Klaver
On 11/11/18 12:51 PM, Rich Shepard wrote:
> On Sun, 11 Nov 2018, Adrian Klaver wrote:
> > pg_dumpall is going to need to run against a Postgres server, not just
> > a data directory.
>
> Adrian,
>
> Of course. Yet it's the data directory that's written to the .sql file.

In order for pg_dumpall to access

Re: Move cluster to new host, upgraded version

2018-11-11 Thread Adrian Klaver
On 11/11/18 12:53 PM, Rich Shepard wrote:
> On Sun, 11 Nov 2018, Rich Shepard wrote:
> > Haven't tried to run an application on one host using data on another
> > host. I'll look at which tool will do that.
>
> Looks like the pg_dumpall '-h' option will act on the other host's data
> directory.

No it
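The point being corrected: -h names a host where a Postgres server is listening, not a data directory. The usual cross-host procedure, run from the new machine with its 11.1 binaries (host name assumed):

    pg_dumpall -h oldhost -U postgres -f cluster.sql
    psql -U postgres -f cluster.sql postgres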

Re: query patterns for multipass aggregating

2018-11-11 Thread Rob Nikander
> On Nov 11, 2018, at 12:54 PM, Andrew Gierth wrote:
> …

Thank you, that is very helpful. Could the CTEs theoretically be optimized in a future version of PG, to work like the subqueries? I like to use them to give names to intermediate results, but I’ll stay away from them for now.

Re: Copy data from DB2 (Linux) to PG

2018-11-11 Thread Florian Bachmann
On 01.11.2018 18:27, Ravi Krishna wrote:
> I have a project to develop a script/tool to copy data from DB2 to PG.
> The approach I am thinking of is:
> 1. Export data from DB2 into a text file, with, say, a pipe as delimiter.
> 2. Load the data from the text file into PG using the COPY command.
> In order to make it
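Step 2 on the PG side is plain COPY; a sketch, with table and file names assumed:

    -- server-side load of the pipe-delimited export
    COPY items FROM '/tmp/items.txt'
      WITH (FORMAT text, DELIMITER '|', NULL '');

From psql, \copy runs the same load client-side, so the export file does not have to live on the database server.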

Move cluster to new host, upgraded version

2018-11-11 Thread Rich Shepard
My current desktop server/workstation is running version 10.5. I'm configuring a replacement desktop and have installed version 11.1 on it. To copy all databases from the 10.5 version to the 11.1 version I assume that I should do a pg_dumpall on the current host and read in that file on the

Re: query patterns for multipass aggregating

2018-11-11 Thread Ondřej Bouda
On 11.11.2018 at 17:20, Rob Nikander wrote:
> I have tables `items`, `colors`, and `images`. Items have many colors, and
> many images.
>
> I want a query to list items, with their colors and images. Each result
> row is an item, so the colors and images must be aggregated into arrays
> or json.

Re: query patterns for multipass aggregating

2018-11-11 Thread Andrew Gierth
> "Ondřej" == Ondřej Bouda writes: Ondřej> What about subqueries? Ondřej> SELECT Ondřej> items.*, Ondřej> (SELECT array_agg(color_name) FROM colors WHERE item_id = Ondřej> items.id) AS color_names, Ondřej> (SELECT array_agg(image_file) FROM images WHERE item_id = Ondřej>

Re: Move cluster to new host, upgraded version

2018-11-11 Thread Andrew Gierth
> "Rich" == Rich Shepard writes: Rich> My current desktop server/workstation is running version 10.5. Rich> I'm configuring a replacement desktop and have installed version Rich> 11.1 on it. To copy all databases from the 10.5 version to the Rich> 11.1 version I assume that I should do a

Re: Move cluster to new host, upgraded version

2018-11-11 Thread Adrian Klaver
On 11/11/18 11:21 AM, Rich Shepard wrote:
> My current desktop server/workstation is running version 10.5. I'm
> configuring a replacement desktop and have installed version 11.1 on it.
> To copy all databases from the 10.5 version to the 11.1 version I assume
> that I should do a pg_dumpall on the