Re: [GENERAL] Improve PostGIS performance with 62 million rows?

2017-01-05 Thread Israel Brewster
On Jan 5, 2017, at 1:38 PM, Rémi Cura wrote: > > Hey, > 1 sec seems really good in this case, > and I'm assuming you tuned Postgres so the main index fits into RAM (work_mem > and all other stuff). > > You could avoid a CTE by merging the two CTEs. > > WITH pts AS ( >

Re: [GENERAL] Improve PostGIS performance with 62 million rows?

2017-01-05 Thread Israel Brewster
Ah, yes indeed. Upping the segment length to 1,000 brings the execution time down to 642 ms, and further upping it to 10,000 brings the execution time down again to 442.104 ms. I'll have to play around with it and see where the minimum is. Would that be likely to vary depending on initial path

Re: [GENERAL] Improve PostGIS performance with 62 million rows?

2017-01-05 Thread Rémi Cura
Hey, 1 sec seems really good in this case, and I'm assuming you tuned Postgres so the main index fits into RAM (work_mem and all other stuff). You could avoid a CTE by merging the two CTEs. WITH pts AS ( SELECT (pt).geom, (pt).path[1] as vert FROM ST_DumpPoints( ST_Segmentize(
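For context, a rough sketch of what the merged query might look like where the preview cuts off, assuming the 62M-row table is named "data" with columns "location" (geometry) and "elevation" (integer), and with :route and :dist standing in for the flight-path geometry and search radius; all of these names are illustrative, not taken from the thread:

    WITH pts AS (
        SELECT (dp).geom AS geom, (dp).path[1] AS vert
        FROM ST_DumpPoints(ST_Segmentize(:route, 100)) AS dp
    ),
    segments AS (
        SELECT ST_MakeLine(a.geom, b.geom) AS seg
        FROM pts a
        JOIN pts b ON b.vert = a.vert + 1
    )
    SELECT max(d.elevation)
    FROM data d
    JOIN segments s ON ST_DWithin(d.location, s.seg, :dist);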

Re: [GENERAL] Improve PostGIS performance with 62 million rows?

2017-01-05 Thread Paul Ramsey
Varying the segment length upwards might have a salutary effect for a while, as the efficiency improvement of fewer inner loops battles with the inefficiency of having more points selected by the index filter. Worth an experiment. P On Thu, Jan 5, 2017 at 1:00 PM, Israel Brewster

[GENERAL] FATAL: requested WAL segment has already been removed

2017-01-05 Thread Patrick B
Hi, I have this scenario: master01 -> slave01 -> slave02 -> slave03 -> slave04. As you can see, slave03 replicates from slave02 and slave04 from slave03. I'm promoting slave03 to a master, and trying to make slave04 able to connect to its new master. As I'm using PostgreSQL
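On PostgreSQL 9.x, repointing slave04 at the newly promoted slave03 usually comes down to its recovery.conf; a minimal sketch follows (hostnames and user are illustrative), with recovery_target_timeline set so the standby can follow the new timeline created by the promotion. The subject-line error generally means the new primary has already recycled the WAL the standby needs, which is where wal_keep_segments or WAL archiving come in.

    # recovery.conf on slave04
    standby_mode = 'on'
    primary_conninfo = 'host=slave03 port=5432 user=replicator'
    recovery_target_timeline = 'latest'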

Re: [GENERAL] psql error (encoding related?)

2017-01-05 Thread Tom Lane
Torsten Förtsch writes: > This hex string decodes to something sensible: > $ perl -le 'print pack "H*", shift' > 246c69626469722f757466385f616e645f69736f383835395f31 > $libdir/utf8_and_iso8859_1 > Maybe it rings a bell. Hah, that's pretty suggestive. So

Re: [GENERAL] psql error (encoding related?)

2017-01-05 Thread Torsten Förtsch
This hex string decodes to something sensible: $ perl -le 'print pack "H*", shift' 246c69626469722f757466385f616e645f69736f383835395f31 $libdir/utf8_and_iso8859_1 Maybe it rings a bell. On Thu, Jan 5, 2017 at 7:57 PM, BRUSSER Michael wrote: > I see this with
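One way to see where that decoded string lives in the catalogs, if it helps narrow things down (a guess at a useful follow-up, not something from the thread): encoding-conversion functions are C functions whose shared-library path is stored in pg_proc.probin, so

    SELECT oid, proname, probin
    FROM pg_proc
    WHERE probin LIKE '%utf8_and_iso8859_1%';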

Re: [GENERAL] Improve PostGIS performance with 62 million rows?

2017-01-05 Thread Israel Brewster
> On Jan 5, 2017, at 10:38 AM, Paul Ramsey wrote: > > Yes, you did. You want a query that spits out a tupleset of geometries (one > for each wee segment), and then you can join that set to your main table > using st_dwithin() as the join clause. > So start by

Re: [GENERAL] Improve PostGIS performance with 62 million rows?

2017-01-05 Thread Paul Ramsey
Yes, you did. You want a query that spits out a tupleset of geometries (one for each wee segment), and then you can join that set to your main table using st_dwithin() as the join clause. So start by ditching the main table and just working on a query that generates a pile of wee segments. On

Re: [GENERAL] Improve PostGIS performance with 62 million rows?

2017-01-05 Thread Israel Brewster
On Jan 5, 2017, at 8:50 AM, Paul Ramsey wrote: > > The index filters using bounding boxes. A long, diagonal route will have a > large bounding box, relative to the area you actually care about (within a > narrow strip of the route). Use ST_Segmentize() to add points

Re: [GENERAL] Row value expression much faster than equivalent OR clauses

2017-01-05 Thread Merlin Moncure
On Wed, Jan 4, 2017 at 8:49 AM, Kevin Grittner wrote: > On Wed, Dec 28, 2016 at 10:58 AM, Steven Grimm wrote: > >> WHERE ((e.timeStamp > '2016-12-19T20:34:22.315Z') >> OR (e.timeStamp = '2016-12-19T20:34:22.315Z' >> AND
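For reference, the row-value form the subject line refers to would look roughly like this, with "events", "e.id" and ":last_id" assumed since the preview cuts off before the tie-breaker column is named; the point is that a single row comparison can be satisfied by one range scan on a (timeStamp, id) index instead of the OR'd conditions:

    SELECT *
    FROM events e
    WHERE (e.timeStamp, e.id) > ('2016-12-19T20:34:22.315Z', :last_id)
    ORDER BY e.timeStamp, e.id
    LIMIT 100;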

Re: [GENERAL] COPY: row is too big

2017-01-05 Thread Rob Sargent
On 01/05/2017 11:46 AM, Adrian Klaver wrote: On 01/05/2017 08:31 AM, Rob Sargent wrote: On 01/05/2017 05:44 AM, vod vos wrote: I finally figured it out as follows: 1. modified the corresponding data types of the columns to match the csv file 2. if null values existed, defined the data type to

[GENERAL] psql error (encoding related?)

2017-01-05 Thread BRUSSER Michael
I see this with PostgreSQL 9.4.7 and some 8.x versions running on Linux Red Hat. Older versions "supposedly" do not exhibit this behavior, but I didn't check. $ psql Password: psql: FATAL: could not access file "\x246c69626469722f757466385f616e645f69736f383835395f31": No such file or directory

Re: [GENERAL] COPY: row is too big

2017-01-05 Thread Adrian Klaver
On 01/05/2017 08:31 AM, Rob Sargent wrote: On 01/05/2017 05:44 AM, vod vos wrote: I finally figured it out as follows: 1. modified the corresponding data types of the columns to match the csv file 2. if null values existed, defined the data type as varchar. The null values cause problems too. So

Re: [GENERAL] Improve PostGIS performance with 62 million rows?

2017-01-05 Thread Paul Ramsey
The index filters using bounding boxes. A long, diagonal route will have a large bounding box, relative to the area you actually care about (within a narrow strip of the route). Use ST_Segmentize() to add points to your route, ST_DumpPoints() to dump those out as points, and ST_MakeLine() to generate
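A sketch of that segment-generating step, with :route standing in for the flight path and 100 as an illustrative maximum segment length in the geometry's units:

    WITH pts AS (
        SELECT (dp).path[1] AS vert, (dp).geom
        FROM ST_DumpPoints(ST_Segmentize(:route, 100)) AS dp
    ),
    pairs AS (
        SELECT geom, lead(geom) OVER (ORDER BY vert) AS next_geom
        FROM pts
    )
    SELECT ST_MakeLine(geom, next_geom) AS seg
    FROM pairs
    WHERE next_geom IS NOT NULL;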

[GENERAL] Improve PostGIS performance with 62 million rows?

2017-01-05 Thread Israel Brewster
I have a database (PostgreSQL 9.6.1) containing 62,702,675 rows of latitude (numeric), longitude (numeric), elevation (integer) data, along with a PostGIS (2.3.0) geometry column (location), running on a CentOS 6.8 box with 64GB RAM and a RAID10 SSD data drive. I'm trying to get the maximum
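The preview cuts off, but from the replies in this thread the goal is the highest elevation within some distance of a long route; the straightforward form of that query is roughly the following (names assumed, distance in the data's units), and it is slow because the index filter works against the route's single, enormous bounding box:

    SELECT max(elevation)
    FROM data
    WHERE ST_DWithin(location, :route, :dist);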

Re: [GENERAL] Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-05 Thread Tom Lane
Job writes: > Could you please help me? There's advice here on how to ask this type of question with enough detail to get answers: https://wiki.postgresql.org/wiki/Slow_Query_Questions regards, tom lane

Re: [GENERAL] Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-05 Thread Rob Sargent
On 01/05/2017 10:18 AM, Job wrote: Hello guys, a very strange thing: after upgrading from 8.4.22 to 9.6.1 I noticed, under heavy benchmarks, a real slowdown of PostgreSQL 9.6.1, with the machine really "without breath". After going back to PostgreSQL 8.4.22 everything works fine again.

[GENERAL] Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-05 Thread Job
Hello guys, a very strange thing: after upgrading from 8.4.22 to 9.6.1 I noticed, under heavy benchmarks, a real slowdown of PostgreSQL 9.6.1, with the machine really "without breath". After going back to PostgreSQL 8.4.22 everything works fine again. After three days of investigation, I come

Re: [GENERAL] COPY: row is too big

2017-01-05 Thread Rob Sargent
On 01/05/2017 05:44 AM, vod vos wrote: I finally figured it out as follows: 1. modified the corresponding data types of the columns to match the csv file 2. if null values existed, defined the data type as varchar. The null values cause problems too. So 1100 columns work well now. This problem

[GENERAL] requested timeline doesn't contain minimum recovery point

2017-01-05 Thread Tom DalPozzo
Hi, there is something happening in my replication that is not clear to me. I think I'm missing something. I have two servers, red and blue. Red is the primary, blue is a standby, async replication. Now: 1) cleanly stop red; 2) promote blue; 3) insert tuples on blue; 4) from the red side, pg_rewind from blue into red's data dir; 5
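Step 4 as described would look roughly like this (paths and connection string are illustrative); after the rewind, red is normally restarted as a standby of blue with recovery_target_timeline = 'latest' so it can follow the timeline created when blue was promoted:

    pg_rewind --target-pgdata=/var/lib/pgsql/9.6/data \
              --source-server='host=blue port=5432 user=postgres dbname=postgres'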

Re: [GENERAL] COPY: row is too big

2017-01-05 Thread Adrian Klaver
On 01/05/2017 04:44 AM, vod vos wrote: I finally figured it out as follows: 1. modified the corresponding data types of the columns to match the csv file 2. if null values existed, defined the data type as varchar. The null values cause problems too. Did you change the NULLs to something else? As

Re: [GENERAL] COPY: row is too big

2017-01-05 Thread Pavel Stehule
2017-01-05 13:44 GMT+01:00 vod vos : > I finally figured it out as follows: > > 1. modified the corresponding data types of the columns to match the csv file > > 2. if null values existed, defined the data type as varchar. The null > values cause problems too. > int, float, double can

Re: [GENERAL] vacuum of empty table slows down as database table count grows

2017-01-05 Thread marcin kowalski
Well, unfortunately I am not seeing much difference. I shaved maybe a second off the worst-case run. I guess I should just split the db into smaller ones, since the stats temp files are now per-db. Are there any other things I could try? 2017-01-05 8:18 GMT+01:00 marcin kowalski : >
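If the earlier suggestion in this thread was about those statistics temp files, the usual knob is stats_temp_directory in postgresql.conf, pointed at a tmpfs mount so the per-database stats files stop touching disk; this is a guess at the context, and the path is illustrative:

    # postgresql.conf
    stats_temp_directory = '/run/postgresql/stats_tmp'   # a tmpfs mount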

Re: [GENERAL] COPY: row is too big

2017-01-05 Thread vod vos
I finally figured it out as follows: 1. modified the corresponding data types of the columns to match the csv file 2. if null values existed, defined the data type as varchar. The null values cause problems too. So 1100 columns work well now. This problem cost me three days. I have lots of
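A sketch of the kind of fix described, with a made-up handful of the 1100 columns (all names illustrative): declare column types that match the csv, and use varchar for the columns where nulls or empty fields appear.

    -- Illustrative only: a few of the 1100 columns, typed to match the csv.
    CREATE TABLE wide_data (
        id      integer,
        v1      smallint,
        v2      real,
        note1   varchar,   -- column that may contain nulls/empty fields
        note2   varchar
        -- ... remaining columns ...
    );

    COPY wide_data FROM '/path/to/data.csv' WITH (FORMAT csv, HEADER true);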