Re: [GENERAL] four template0 databases after vacuum

2016-02-10 Thread Kazuaki Fujikura
Thank you for your comments.

>Can you explain what your replication set up is?

Streaming replication:
master -> slave1 (async)
master -> slave2 (async)

>So are you doing the below on the master, the slaves or all?
=> master

Our current plan at this moment is:
- Upgrade from 9.1.6 to 9.1.19 at the end of March, at our scheduled maintenance
- Export and re-import pg_dump files (to eliminate the effect of template0
and xid, which are all reset at import; roughly as sketched below)
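(A rough sketch of that dump/reload step, with placeholder database names;
per-database dumps do not include globals, so roles and tablespaces come
from pg_dumpall -g:)

   pg_dumpall -g > globals.sql               # roles and tablespaces
   pg_dump -Fc -f somedb.dump somedb         # repeat for each database
   # then, against the freshly initdb'ed 9.1.19 cluster:
   psql -d postgres -f globals.sql
   pg_restore -C -d postgres somedb.dump     # -C recreates the database first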

We are still afraid that this will be too late (we still have the 4 strange
template0 entries). So if you have any ideas/suggestions we can try before
the scheduled maintenance, that would be much appreciated.
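(For reference, the per-database xid horizon can be watched in the meantime
with a trivial query:)

   SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;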


Best regards,
Kazuaki Fujikura


2016-02-09 1:51 GMT+09:00 Adrian Klaver:

> On 02/08/2016 04:16 AM, Kazuaki Fujikura wrote:
>
>> Thank you for your comments.
>>
>> First, I think I need to tell you our database situation
>>
>> - 3 physical databases (installed in different servers. 1master, 2 slave
>> servers.)
>>
>
> For future reference, the above are generally called database clusters or
> instances, to distinguish them from the databases created inside them (what
> you call logical databases). Thanks for explaining; it helps clear up some
> confusion on my part.
>
>
> Can you explain what your replication set up is?
>
>> - more than 1100 logical databases on each server
>>
>>
>> [Karsten and Melvin]
>> It shows 0 records in template0 with the query you provided.
>>
>> ===
>>  schema | table | index | idx_scan | idx_tup_read | idx_tup_fetch | type | pg_get_indexdef | statusi | size_in_bytes | size
>> --------+-------+-------+----------+--------------+---------------+------+-----------------+---------+---------------+------
>> (0 rows)
>> ===
>>
>>
>>
> So are you doing the below on the master, the slaves or all?
>
>
>
>>
>> [Adrian]
>>
>>  >> [problem history/background]
>>  >>
>>  >> Jan 10th, 2016:
>>  >> The first problem was an autovacuum issue.
>>  >> - autovacuum could not finish successfully.
>>  >> - I set autovacuum_freeze_max_age to 200 million.
>>  >> - autovacuum immediately finished against any database whose age was
>>  >> over 200 million.
>>  >
>>  >Which was?
>>
>> For any logical database (we have more than 1100 databases) whose age
>> (relfrozenxid) is more than 200 million, autovacuum runs repeatedly (the
>> autovacuum process starts and stops over and over with no actual vacuum
>> processing).
>>
>>
>>  >> - so, autovacuum did not go next database
>>  >
>>  >Which was?
>>
>> - I saw autovacuum stop at template0 because it cannot run a vacuum
>> freeze against it
>> - I thought it was because the age of template0 exceeds the autovacuum
>> trigger parameter, which is 200 million
>> - So, I wanted to reduce the age of template0 (I don't know why it
>> increases, though)
>>
>>
>>  >Not finish on what?
>>
>> I could finish a vacuum manually,
>> but autovacuum never finished.
>>
>>
>>  >> This time, the age did not reduce with manual vacuum.
>>  >Age of what?
>>
>> age(relfrozenxid) of template0.
>>
>>
>>  >This I do not get: how did the xid count on pg_database climb so quickly,
>>  >and to such an extent that it needs a VACUUM FULL?
>>  >Is there a script that is creating and dropping databases rapidly?
>>
>> We have more than 1100 databases and create a new database every day,
>> whenever a new customer comes.
>> The number of transactions is more than ten million in total across the
>> 1100+ databases.
>>
>>
>>
>>  >What are you trying to do with the above?
>>  >I do not think it is a coincidence that the first time the above was run
>>  >in this sequence, shortly after 4 template0 databases appear.
>>
>>
>> I ran VACUUM FULL because I could not change the value of relfrozenxid
>> in pg_database with VACUUM / VACUUM FREEZE.
>> Except for the template0 database, I can change relfrozenxid if I run
>> VACUUM FULL on pg_database.
>>
>>
>>
>>  >Can you show?:
>>  >
>>  >SELECT * from pg_database;
>>  >
>>  >If you do not want to show the whole cluster, then at least the
>>  >databases involved in this discussion.
>>
>> ===
>>
>> postgres=# SELECT oid, ctid, * FROM pg_database WHERE datname = 'template0';
>>   oid  |  ctid   |  datname  | datdba | encoding | datcollate | datctype | datistemplate | datallowconn | datconnlimit | datlastsysoid | datfrozenxid | dattablespace | datacl
>> -------+---------+-----------+--------+----------+------------+----------+---------------+--------------+--------------+---------------+--------------+---------------+-------------------------------------
>>  12772 | (36,25) | template0 |     10 |        6 | C          | C        | t             | f            |           -1 |         12772 |   2412920847 |          1663 | {=c/postgres,postgres=CTc/postgres}
>>  12772 | (36,26) | template0 |     10 |        6 | C          | C        | t             | f            |           -1 |

Re: [GENERAL] PostgreSQL vs Firebird SQL

2016-02-10 Thread Andy Colson

On 2/9/2016 10:10 PM, ioan ghip wrote:

I have a Firebird SQL database running on one of my servers which has
about 50k inserts, about 100k updates and about 30k deletes every day.
There are about 4 million records in 24 tables. I have a bunch of stored
procedures, triggers, events and views that I'm using.
Firebird works fairly well, but from time to time the database gets
corrupted and I couldn't figure out yet (after many years of running)
what's the reason. When this happens I run "gfix -mend -full -ignore",
backup and restore the db and everything is fine until next problem in a
week, or a month.

I never used PostgreSQL. Yesterday I installed it on my development
machine and after few tests I saw that it's fairly easy to use.

Does anyone have experience with both, Firebird and PostgreSQL? Is
PostgreSQL way better performing than Firebird? Is it worth the effort
moving away from Firebird? Would I gain stability and increased performance?

Thanks.




One of our Windows apps runs on a client/server setup in the office, and
then on a laptop for remote use.  We use Firebird (FB) for both.  It's a
quick, simple install, runs in 8 MB of RAM, and has zero maintenance.


The only times I've seen corruption were from anti-virus software scanning
the db, and from HD/RAID problems.


FB is a nice little db.  That said, I can wholeheartedly recommend PG! 
It could still run on a laptop, might require a bit more maintenance, 
but on a dedicated server, it would be able to grow and use all the 
resources available.


If you have HD/RAID problems, then you won't gain stability.  Upgrading
between major versions is also more difficult.


That said, yes, you'd gain stability and performance, and not only that, 
a huge amount of functionality.  A Huge Amount!


FB has replace() for string ops, oh, and substring().  Bah.  That's
nothing compared to PG's string functions.  It's like that Aladdin song,
'A Whole New World'!


You know how, in FB, when one person does a large delete on a table, the
next person that runs a select performs the vacuum on it?  It's the person
running the select who pays the time for that huge delete.  In PG, there is
a background vacuum task (autovacuum), so users don't pay that price.
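(For the curious, all of that background task's knobs are visible with a
quick query, e.g.:)

   SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';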


Respect for FB, but my heart belongs to PG.

-Andy


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL vs Firebird SQL

2016-02-10 Thread Edson Richter

On 10/02/2016 13:32, Andy Colson wrote:

> [Andy Colson's message quoted in full; trimmed here - see above]



+1

Also, when running an office server, be it small or huge, you can have a
replicated server, so it is virtually impossible to lose data.


Synchronous and asynchronous replication are really easy to implement in
PG, and make it a strong (in terms of losslessness) database server, even
compared with Oracle, MS SQL or IBM Db2. You can have three database
servers, one writable and two for queries; that makes reporting faster,
for example! (A minimal sketch of the settings follows.)
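(A minimal sketch of the 9.x-era streaming replication settings; the host
and user names are placeholders:)

   # primary, postgresql.conf:
   wal_level = hot_standby
   max_wal_senders = 3

   # standby, postgresql.conf:
   hot_standby = on

   # standby, recovery.conf:
   standby_mode = 'on'
   primary_conninfo = 'host=primary.example.com port=5432 user=replicator'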


By using Barman you can have online incremental backups to a third server,
which is invaluable for online transactions and operations. You may never
lose data again, except if your database server crashes (hardware failure)
and your replicated server crashes at the same time (also hardware
failure); even then, you would lose at most the last 15 MB of changes (the
amount of data transferred to the backup server on each incremental step).
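(A sketch of the basic Barman workflow, assuming a server named "pg" has
been configured in barman.conf:)

   barman check pg          # verify connections and WAL archiving are OK
   barman backup pg         # take a base backup
   barman list-backup pg    # see what is available to restore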


So if your concern is safety: keep your servers geographically separated
or, at minimum, on different electrical and network installations,
preferably in different buildings, using good hardware (with twin disks,
power lines, and network interfaces, all doubled). Personally, I like Dell
R420 servers for database servers; they are really reliable in my setup.


Finally, you can have an embedded database running together with your app,
and as for the aforementioned additional maintenance, you can schedule it
or even trigger it from inside your app.


You will see it is possible to have 99.999% database uptime with no
hassle, running for years without additional DBA interference.


Also, the tooling to help plan indices and test query performance is
really good, and pgAdmin III has been solid (it has some flaws, but
nothing that really interferes with its usage).



Regards,

Edson Richter



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL vs Firebird SQL

2016-02-10 Thread David G. Johnston
On Wed, Feb 10, 2016 at 10:08 AM, John R Pierce wrote:

> On 2/10/2016 8:51 AM, Pierre Chevalier Géologue wrote:
>
> PPS: how should I behave on this list: should I systematically "reply to
> all", or just "reply" to the list?  I'm used to a number of mailing lists
> where a simple "reply" automatically replies to the list, and the rule
> obliges you to **only** use "reply".
>
>
>
> Thunderbird offers me both 'reply' and 'reply list' buttons for the
> messages on this list.
>
> most of the lists I'm on, a simple reply IS a reply list, as they wish to
> encourage discussions rather than private responses.   I personally dislike
> 'reply all' as I'm ON the list so why send me another CC ?
>
>
It lets me distinguish between the list threads that I haven't responded to
and those that I have.

From observation, pretty much everyone here uses reply-to-all... I suspect
for a similar reason.

David J.


Re: [GENERAL] PostgreSQL vs Firebird SQL

2016-02-10 Thread John R Pierce

On 2/10/2016 8:51 AM, Pierre Chevalier Géologue wrote:
PPS: how should I behave on this list: should I systematically "reply 
to all", or just "reply" to the list?  I'm used to a number of mailing 
lists where a simple "reply" automatically replies to the list, and 
the rule obliges you to *only* use "reply". 



Thunderbird offers me both 'reply' and 'reply list' buttons for the 
messages on this list.


On most of the lists I'm on, a simple reply IS a reply-to-list, as they
wish to encourage discussion rather than private responses.  I personally
dislike 'reply all' as I'm ON the list, so why send me another CC?




--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] PostgreSQL vs Firebird SQL

2016-02-10 Thread ioan ghip
> Is this running as an embedded engine or a standalone server?  One thing
> about Firebird is that since it is embeddable, in that mode other
> application bugs could corrupt the database.  In the other case, I would
> expect you may want to run hardware diagnostics to rule out hardware
> problems going forward.  If you find hardware problems fix them first, then
> look further.
>
>
I'm running the standalone server (Super Classic version) on an EC2
instance. My software is on another machine in the same VPC.



>
>


Re: [GENERAL] PostgreSQL vs Firebird SQL

2016-02-10 Thread Pierre Chevalier Géologue

Hello,

Le 10/02/2016 08:43, Chris Travers a écrit :

I have never seen database corruption on PostgreSQL that was not a
result of either:
1. Use cases WAY out of the ordinary (and then only years ago and I
reported a bug on this and it was very quickly fixed)
2. Hardware problems
3. Heat management problems (sticking a db server in a hot closet, and
then only indexes were corrupted).

I do think on decent hardware you will have no trouble. In other words,
outside of horrible abuse, PostgreSQL does very well.


Yes, I've seen worse than horrible abuse... and I can firmly confirm, in my
humble experience, that PostgreSQL behaves very reliably.
Let me tell you about one of the worst use cases I've encountered. I had
set up a server (it was actually my desktop machine that I abandoned on
site, with the name "server" quickly written on it, running Debian Stable
GNU/Linux and a PostgreSQL cluster) at one of my clients' sites, in West
Africa. A few months later, a colleague called me on the phone, complaining
that the server would not restart. He had forgotten to mention a few
details:

- the power was going down about once or twice per hour, for five to ten
minutes each time;
- the UPS battery was dead; it provided about half a second of power during
failures;
- the floor was carefully swept daily, but without sprinkling water on it
first, so all the laterite dust was floating in the air.

During this phone call, a power failure happened. He told me "I must 
restart the server, please hold on", and then I heard a strange noise, 
like someone repeatedly banging with a hammer. When he resumed our 
telephonic conversation, a few seconds later, I asked him about that 
noise; he said "oh, yes, I need to hammer on the power button of the 
server; otherwise it won't start"...


Some time later, I did a mission on site. The situation of the poor little
server was absolutely horrible: it was covered with red laterite dust, the
inside of the machine was all red and dusty, and the grilles in front of
the fans had totally rusted within a few months; people were literally
hammering on it, as the power button was stuck with fine dust; the
temperature was around 30°C to 45°C, depending on the season, with little
or no air conditioning, and humidity often close to 100% (you know, when
you see condensation happening indoors; indoor rain is an interesting
phenomenon).
I carefully dismantled the machine, cleaned it thoroughly, we moved it 
in a slightly cooler area (the boss's office), trying to find a power 
outlet which would be stable enough...


So the context was, I think, way beyond what I would call horrible abuse;
but despite these conditions, PostgreSQL did very well: there was never any
data loss, and it was used daily for mission-critical operations.




So there are my $0.02


And these were my small 0.02€ ;)

À+
Pierre

PS: sorry Chris, I didn't pay attention while replying: I replied to you 
only, instead of the list => corrected, sorry for the double entry in 
your mailbox.


PPS: how should I behave on this list: should I systematically "reply to 
all", or just "reply" to the list?  I'm used to a number of mailing 
lists where a simple "reply" automatically replies to the list, and the 
rule obliges you to *only* use "reply".





On Wed, Feb 10, 2016 at 5:10 AM, ioan ghip wrote:

I have a Firebird SQL database running on one of my servers which
has about 50k inserts, about 100k updates and about 30k deletes
every day. There are about 4 million records in 24 tables. I have a
bunch of stored procedures, triggers, events and views that I'm using.
Firebird works fairly well, but from time to time the database gets
corrupted and I couldn't figure out yet (after many years of
running) what's the reason. When this happens I run "gfix -mend
-full -ignore", backup and restore the db and everything is fine
until next problem in a week, or a month.


Is this running as an embedded engine or a standalone server?  One thing
about Firebird is that since it is embeddable, in that mode other
application bugs could corrupt the database.  In the other case, I would
expect you may want to run hardware diagnostics to rule out hardware
problems going forward.  If you find hardware problems fix them first,
then look further.

But the low-hanging things to look at here are moving from embedded mode
to a standalone server, if applicable, and checking out your hardware.  If
those turn out not to be the problem, then I would recommend moving.

I never used PostgreSQL. Yesterday I installed it on my development
machine and after few tests I saw that it's fairly easy to use.

Does anyone have experience with both, Firebird and PostgreSQL? Is
PostgreSQL way better performing than Firebird? Is it worth the
effort moving away from Firebird? Would I gain stability and
increased 

Re: [GENERAL] PostgreSQL vs Firebird SQL

2016-02-10 Thread Adrian Klaver

On 02/10/2016 09:24 AM, David G. Johnston wrote:

On Wed, Feb 10, 2016 at 10:08 AM, John R Pierce wrote:

On 2/10/2016 8:51 AM, Pierre Chevalier Géologue wrote:

PPS: how should I behave on this list: should I systematically
"reply to all", or just "reply" to the list?  I'm used to a number
of mailing lists where a simple "reply" automatically replies to
the list, and the rule obliges you to **only** use "reply".



Thunderbird offers me both 'reply' and 'reply list' buttons for the
messages on this list.

most of the lists I'm on, a simple reply IS a reply list, as they
wish to encourage discussions rather than private responses.   I
personally dislike 'reply all' as I'm ON the list so why send me
another CC ?


Go here:

https://mail.postgresql.org/mj/mj_wwwusr/domain=postgresql.org?func=lists-long-full=pgsql-general

And in settings check:

eliminatecc
Select this if you do not want two copies when someone sends a 
message both to you and to the list.






It lets me distinguish between the list threads that I haven't responded
to and those that I have.

 From observation pretty much everyone here uses reply-to-all...I
suspect for a similar reason.


Also, if the mail server is down or overloaded (a less frequent occurrence
these days), you can carry on a conversation while the server catches up.




David J.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_multixact issues

2016-02-10 Thread Kiriakos Georgiou
Hello,

Our pg_multixact directory keeps growing.  I did a "vacuum freeze" which
didn't help.  I also did a "vacuum full" which didn't help either.
We had this condition with 9.3.4 as well.  When I upgraded our cluster to
9.4.5 (via a plain SQL dump and load), the issue was resolved as expected,
but now it's happening again.  Luckily it has no ill effect other than
consuming 4 GB of space for an otherwise 1 GB database.
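(For reference, the oldest multixact each database still references, which
is what holds back truncation of pg_multixact, can be seen with:)

   SELECT datname, datminmxid FROM pg_database;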

Can you offer any hints as to how I can cure this?

thanks,
Kiriakos Georgiou


pg_controldata output:

pg_control version number:942
Catalog version number:   201409291
Database system identifier:   6211781659140720513
Database cluster state:   in production
pg_control last modified: Wed Feb 10 13:45:02 2016
Latest checkpoint location:   D/FB5FE630
Prior checkpoint location:D/FB5FE558
Latest checkpoint's REDO location:D/FB5FE5F8
Latest checkpoint's REDO WAL file:0001000D00FB
Latest checkpoint's TimeLineID:   1
Latest checkpoint's PrevTimeLineID:   1
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID:  0/3556219
Latest checkpoint's NextOID:  2227252
Latest checkpoint's NextMultiXactId:  2316566
Latest checkpoint's NextMultiOffset:  823062151
Latest checkpoint's oldestXID:668
Latest checkpoint's oldestXID's DB:   1
Latest checkpoint's oldestActiveXID:  3556219
Latest checkpoint's oldestMultiXid:   1
Latest checkpoint's oldestMulti's DB: 1
Time of latest checkpoint:Wed Feb 10 13:45:02 2016
Fake LSN counter for unlogged rels:   0/1
Minimum recovery ending location: 0/0
Min recovery ending loc's timeline:   0
Backup start location:0/0
Backup end location:  0/0
End-of-backup record required:no
Current wal_level setting:hot_standby
Current wal_log_hints setting:off
Current max_connections setting:  100
Current max_worker_processes setting: 8
Current max_prepared_xacts setting:   0
Current max_locks_per_xact setting:   1024
Maximum data alignment:   8
Database block size:  8192
Blocks per segment of large relation: 131072
WAL block size:   8192
Bytes per WAL segment:16777216
Maximum length of identifiers:64
Maximum columns in an index:  32
Maximum size of a TOAST chunk:1996
Size of a large-object chunk: 2048
Date/time type storage:   64-bit integers
Float4 argument passing:  by value
Float8 argument passing:  by value
Data page checksum version:   0

the offsets directory:

-rw---   1 postgres dba   262144 Nov  3 15:22 0000
-rw---   1 postgres dba   262144 Nov  5 12:45 0001
-rw---   1 postgres dba   262144 Nov  9 14:25 0002
-rw---   1 postgres dba   262144 Nov 13 10:10 0003
-rw---   1 postgres dba   262144 Nov 16 15:40 0004
-rw---   1 postgres dba   262144 Nov 20 09:55 0005
-rw---   1 postgres dba   262144 Dec  1 08:00 0006
-rw---   1 postgres dba   262144 Dec  9 11:50 0007
-rw---   1 postgres dba   262144 Dec 16 08:14 0008
-rw---   1 postgres dba   262144 Dec 21 09:40 0009
-rw---   1 postgres dba   262144 Dec 31 09:55 000A
-rw---   1 postgres dba   262144 Jan  4 21:17 000B
-rw---   1 postgres dba   262144 Jan  6 10:50 000C
-rw---   1 postgres dba   262144 Jan  7 18:20 000D
-rw---   1 postgres dba   262144 Jan 13 13:55 000E
-rw---   1 postgres dba   262144 Jan 15 11:55 000F
-rw---   1 postgres dba   262144 Jan 22 07:50 0010
-rw---   1 postgres dba   262144 Jan 26 16:35 0011
-rw---   1 postgres dba   262144 Jan 29 10:16 0012
-rw---   1 postgres dba   262144 Feb  3 13:17 0013
-rw---   1 postgres dba   262144 Feb  3 16:13 0014
-rw---   1 postgres dba   262144 Feb  4 08:24 0015
-rw---   1 postgres dba   262144 Feb  5 13:20 0016
-rw---   1 postgres dba   262144 Feb  8 11:26 0017
-rw---   1 postgres dba   262144 Feb  8 11:46 0018
-rw---   1 postgres dba   262144 Feb  8 12:25 0019
-rw---   1 postgres dba   262144 Feb  8 13:19 001A
-rw---   1 postgres dba   262144 Feb  8 14:23 001B
-rw---   1 postgres dba   262144 Feb  8 15:32 001C
-rw---   1 postgres dba   262144 Feb  8 17:01 001D
-rw---   1 postgres dba   262144 Feb  8 19:19 001E
-rw---   1 postgres dba   262144 Feb  8 22:11 001F
-rw---   1 postgres dba   262144 Feb  9 01:44 0020
-rw---   1 postgres dba   262144 Feb  9 05:57 0021
-rw---   1 postgres dba   262144 Feb  9 10:45 0022
-rw---   1 postgres dba98304 Feb 10 13:35 0023

the members directory has 15723 files:
ls -l|wc -l
   15723

Re: [GENERAL] PostgreSQL vs Firebird SQL

2016-02-10 Thread Karsten Hilbert
On Wed, Feb 10, 2016 at 01:14:42PM +0200, Achilleas Mantzios wrote:

> >>About checksums in our office master DB that's a fine idea, too bad that 
> >>pg_upgrade doesn't cope with them
> >I am sure you have considered "failing over" the master to an
> >in-office slave which has got checksums turned on ?
> 
> Is that possible with standard streaming replication? As far as I am
> concerned the (master/hot standby) images have to be identical (no initdb
> involved). I guess you mean some sort of external (logical?) replication
> mechanism? We are trying to avoid initdb and restore for the obvious
> reasons.

I had been thinking of a one-time, purpose-driven use of Slony.

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] four template0 databases after vacuum

2016-02-10 Thread Adrian Klaver

On 02/10/2016 07:19 AM, Kazuaki Fujikura wrote:

Thank you for your comments.

 >Can you explain what your replication set up is?

Streaming replication:
master -> slave1 (async)
master -> slave2 (async)

 >So are you doing the below on the master, the slaves or all?
=> master

Our current plan at this moment is:
- Upgrade from 9.1.6 to 9.1.19 at the end of March, at our scheduled maintenance
- Export and re-import pg_dump files (to eliminate the effect of template0
and xid, which are all reset at import)

We are still afraid that this will be too late (we still have the 4 strange
template0 entries). So if you have any ideas/suggestions we can try before
the scheduled maintenance, that would be much appreciated.


On which of the database clusters are you seeing the 4 template0 databases?

From a previous post:

"We have more than 1100 databases and create new database every day
whenever new customer comes. .."

How are the databases created?
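(For example, whether each one is cloned from template0 along these lines
is relevant here; "customer_db" is just an illustrative name:)

   CREATE DATABASE customer_db TEMPLATE template0;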




Best regards,
Kazuaki Fujikura




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Test CMake build

2016-02-10 Thread Yury Zhuravlev

Hello all.
Please test building Postgres using CMake, if you are interested, of course.
Not everything is ready yet, but most of the work is done. Build
instructions, like the repository itself, are on GitHub:

https://github.com/stalkerg/postgres_cmake

Compiling it will be enough (running the tests is even better). I need
feedback so that I can create issues on GitHub. A typical build is sketched
below. NetBSD, OpenBSD and Solaris are of particular interest.
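(A typical out-of-source CMake build, assuming default options:)

   git clone https://github.com/stalkerg/postgres_cmake.git
   cd postgres_cmake && mkdir build && cd build
   cmake ..     # e.g. add -DCMAKE_INSTALL_PREFIX=$HOME/pgsql
   make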


Thanks!
--
Yury Zhuravlev


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Test CMake build

2016-02-10 Thread Andy Colson

On 2/10/2016 2:45 PM, Andy Colson wrote:

On 2/10/2016 12:09 PM, Yury Zhuravlev wrote:

Hello all.
Please test build Postgres using cmake. If you are of course interested.
Still not everything is ready but most of the work. Assembly
instructions as does the repository is on github:
https://github.com/stalkerg/postgres_cmake

The compilation will be enough (tests even better). I need feedbacks so
that create issues on github.
Very interesting NetBSD, OpenBSD, Solaris.
Thanks!

Slackware64, 14.1


-- The C compiler identification is GNU 4.9.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works


This might be important:

-- Looking for include file pwd.h
-- Looking for include file pwd.h - found
-- Found Readline: /usr/include
-- Looking for include files stdio.h, readline.h
-- Looking for include files stdio.h, readline.h - not found
-- Looking for include files stdio.h, history.h
-- Looking for include files stdio.h, history.h - not found
-- Looking for include files stdio.h, readline/history.h
-- Looking for include files stdio.h, readline/history.h - found
-- Looking for include files stdio.h, readline/readline.h
-- Looking for include files stdio.h, readline/readline.h - found
-- Looking for include files stdio.h, editline/history.h
-- Looking for include files stdio.h, editline/history.h - not found
-- Looking for include files stdio.h, editline/readline.h
-- Looking for include files stdio.h, editline/readline.h - not found

> -- Check size of long long int - failed
> CMake Error at CMakeLists.txt:262 (message):
>Cannot find a working 64-bit integer type.
>



the end of CMakeFiles/CMakeError.log shows:

Determining size of long long int failed with the following output:
Change Dir: /home/andy/projects/postgres_cmake/build/CMakeFiles/CMakeTmp

Run Build Command:/usr/bin/gmake "cmTryCompileExec301475258/fast"
/usr/bin/gmake -f CMakeFiles/cmTryCompileExec301475258.dir/build.make 
CMakeFiles/cmTryCompileExec301475258.dir/build
gmake[1]: Entering directory 
`/home/andy/projects/postgres_cmake/build/CMakeFiles/CMakeTmp'
/usr/bin/cmake -E cmake_progress_report 
/home/andy/projects/postgres_cmake/build/CMakeFiles/CMakeTmp/CMakeFiles 1
Building C object 
CMakeFiles/cmTryCompileExec301475258.dir/HAVE_LONG_LONG_INT_64.c.o
/usr/bin/cc   -o 
CMakeFiles/cmTryCompileExec301475258.dir/HAVE_LONG_LONG_INT_64.c.o   -c 
/home/andy/projects/postgres_cmake/build/CMakeFiles/CheckTypeSize/HAVE_LONG_LONG_INT_64.c

Linking C executable cmTryCompileExec301475258
/usr/bin/cmake -E cmake_link_script 
CMakeFiles/cmTryCompileExec301475258.dir/link.txt --verbose=1
/usr/bin/cc 
CMakeFiles/cmTryCompileExec301475258.dir/HAVE_LONG_LONG_INT_64.c.o  -o 
cmTryCompileExec301475258 -rdynamic -lreadline
/usr/lib64/gcc/x86_64-slackware-linux/4.9.2/../../../../lib64/libreadline.so: 
undefined reference to `tputs'
/usr/lib64/gcc/x86_64-slackware-linux/4.9.2/../../../../lib64/libreadline.so: 
undefined reference to `tgoto'
/usr/lib64/gcc/x86_64-slackware-linux/4.9.2/../../../../lib64/libreadline.so: 
undefined reference to `tgetflag'
/usr/lib64/gcc/x86_64-slackware-linux/4.9.2/../../../../lib64/libreadline.so: 
undefined reference to `UP'
/usr/lib64/gcc/x86_64-slackware-linux/4.9.2/../../../../lib64/libreadline.so: 
undefined reference to `tgetent'
/usr/lib64/gcc/x86_64-slackware-linux/4.9.2/../../../../lib64/libreadline.so: 
undefined reference to `tgetnum'
/usr/lib64/gcc/x86_64-slackware-linux/4.9.2/../../../../lib64/libreadline.so: 
undefined reference to `PC'
/usr/lib64/gcc/x86_64-slackware-linux/4.9.2/../../../../lib64/libreadline.so: 
undefined reference to `tgetstr'
/usr/lib64/gcc/x86_64-slackware-linux/4.9.2/../../../../lib64/libreadline.so: 
undefined reference to `BC'

collect2: error: ld returned 1 exit status
gmake[1]: *** [cmTryCompileExec301475258] Error 1
gmake[1]: Leaving directory 
`/home/andy/projects/postgres_cmake/build/CMakeFiles/CMakeTmp'

gmake: *** [cmTryCompileExec301475258/fast] Error 2






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Input data for column is not compatible with data type

2016-02-10 Thread drum.lu...@gmail.com
Hi all

I'm having this error message:

Input data for column `%1$s` is not compatible with data type


But I was unable to find the table which is having this issue.

Would it be possible for someone to help me with that?
Thanks

Complete error link:
https://bitbucket.org/snippets/lucaspossamai/7r9yX

Query link:
https://bitbucket.org/snippets/lucaspossamai/rbpk4/edit/


Re: [GENERAL] Input data for column is not compatible with data type

2016-02-10 Thread drum.lu...@gmail.com
Actually, I do know which table it is...

It seems to be ja_notes.

But I can't see any difference.



Lucas Possamai

- kinghost.co.nz

- DigitalOcean 

On 11 February 2016 at 09:11, David G. Johnston wrote:

> On Wed, Feb 10, 2016 at 12:56 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> Hi all
>>
>> I'm having this error message:
>>
>> Input data for column `%1$s` is not compatible with data type
>>
>>
>> But I was unable to find the table which is having this issue
>>
>> Would be possible someone help me with that?
>> Thanks
>>
>> Complete error link:
>> https://bitbucket.org/snippets/lucaspossamai/7r9yX
>>
>> Query link:
>> https://bitbucket.org/snippets/lucaspossamai/rbpk4/edit/
>>
>
> ​For those who come later please, if at all possible, write self-contained
> emails.
>
> David J.​
>
>
>


Re: [GENERAL] Test CMake build

2016-02-10 Thread Andy Colson

On 2/10/2016 2:50 PM, Andy Colson wrote:

On 2/10/2016 2:45 PM, Andy Colson wrote:

On 2/10/2016 12:09 PM, Yury Zhuravlev wrote:

Hello all.
Please test build Postgres using cmake. If you are of course interested.
Still not everything is ready but most of the work. Assembly
instructions as does the repository is on github:
https://github.com/stalkerg/postgres_cmake

The compilation will be enough (tests even better). I need feedbacks so
that create issues on github.
Very interesting NetBSD, OpenBSD, Solaris.
Thanks!



Slackware64, 14.1





/usr/lib64/gcc/x86_64-slackware-linux/4.9.2/../../../../lib64/libreadline.so:
undefined reference to `tputs'



tputs is in ncurses?

I did not see a:

-- Looking for curses

And it didn't try to link with it:

/usr/bin/cc 
CMakeFiles/cmTryCompileExec301475258.dir/HAVE_LONG_LONG_INT_64.c.o  -o 
cmTryCompileExec301475258 -rdynamic -lreadline
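(An untested sketch of a possible workaround, making the configure-time
checks link curses alongside readline, since this libreadline.so is not
linked against ncurses itself:)

   find_package(Curses)
   set(CMAKE_REQUIRED_LIBRARIES ${CMAKE_REQUIRED_LIBRARIES} ${CURSES_LIBRARIES})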


-Andy



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Input data for column is not compatible with data type

2016-02-10 Thread Adrian Klaver

On 02/10/2016 11:56 AM, drum.lu...@gmail.com wrote:

Hi all

I'm having this error message:

Input data for column `%1$s` is not compatible with data type


The above, from looking at the link below, looks like part of the query.

If that is so, I am guessing the backticks are causing the problem.

If not, what is the actual error message, and what is creating it?
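(If the backticks are indeed the issue: PostgreSQL quotes identifiers with
double quotes, not MySQL-style backticks. Illustrative names:)

   SELECT "some column" FROM some_table;   -- valid in PostgreSQL
   SELECT `some column` FROM some_table;   -- syntax error in PostgreSQL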




But I was unable to find the table which is having this issue

Would be possible someone help me with that?
Thanks

Complete error link:
https://bitbucket.org/snippets/lucaspossamai/7r9yX

Query link:
https://bitbucket.org/snippets/lucaspossamai/rbpk4/edit/



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Input data for column is not compatible with data type

2016-02-10 Thread David G. Johnston
On Wed, Feb 10, 2016 at 12:56 PM, drum.lu...@gmail.com  wrote:

> Hi all
>
> I'm having this error message:
>
> Input data for column `%1$s` is not compatible with data type
>
>
> But I was unable to find the table which is having this issue
>
> Would be possible someone help me with that?
> Thanks
>
> Complete error link:
> https://bitbucket.org/snippets/lucaspossamai/7r9yX
>
> Query link:
> https://bitbucket.org/snippets/lucaspossamai/rbpk4/edit/
>

​For those who come later please, if at all possible, write self-contained
emails.

David J.​


Re: [GENERAL] Test CMake build

2016-02-10 Thread Alban Hertroys

> On 10 Feb 2016, at 19:09, Yury Zhuravlev  wrote:
> 
> Hello all.
> Please test build Postgres using cmake. If you are of course interested.
> Still not everything is ready but most of the work. Assembly instructions as 
> does the repository is on github:
> https://github.com/stalkerg/postgres_cmake
> 
> The compilation will be enough (tests even better). I need feedbacks so that 
> create issues on github.
> Very interesting NetBSD, OpenBSD, Solaris. 

I was under the impression that the FreeBSD port already uses cmake?

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Test CMake build

2016-02-10 Thread Andy Colson

On 2/10/2016 12:09 PM, Yury Zhuravlev wrote:

Hello all.
Please test build Postgres using cmake. If you are of course interested.
Still not everything is ready but most of the work. Assembly
instructions as does the repository is on github:
https://github.com/stalkerg/postgres_cmake

The compilation will be enough (tests even better). I need feedbacks so
that create issues on github.
Very interesting NetBSD, OpenBSD, Solaris.
Thanks!

Slackware64, 14.1


-- The C compiler identification is GNU 4.9.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check if the system is big endian
-- Searching 16 bit integer
-- Looking for sys/types.h
-- Looking for sys/types.h - found
etc, etc,
-- Performing Test AC_FUNC_ACCEPT
-- Performing Test AC_FUNC_ACCEPT - Failed
-- Performing Test AC_FUNC_ACCEPT
-- Performing Test AC_FUNC_ACCEPT - Failed
...A TON of the above...
-- Performing Test AC_FUNC_ACCEPT
-- Performing Test AC_FUNC_ACCEPT - Failed
ERRORcould not determine argument types
-- Looking for accept function args - found , , ,  *
-- Check alignment of double
-- Check alignment of double - 8
-- Check alignment of int
-- Check alignment of int - 4
-- Check alignment of long
-- Check alignment of long - 8
-- Check alignment of long long int
-- Check alignment of long long int - 8
-- Check alignment of short
-- Check alignment of short - 2
-- Check size of int64
-- Check size of int64 - failed
-- Check size of uint64
-- Check size of uint64 - failed
-- Check size of int8
-- Check size of int8 - failed
-- Check size of void *
-- Check size of void * - failed
-- Check size of long int
-- Check size of long int - failed
-- Check size of long
-- Check size of long - failed
-- Check size of size_t
-- Check size of size_t - failed
-- Check size of locale_t
-- Check size of locale_t - failed
-- Check size of long long int
-- Check size of long long int - failed
CMake Error at CMakeLists.txt:262 (message):
  Cannot find a working 64-bit integer type.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Input data for column is not compatible with data type

2016-02-10 Thread Adrian Klaver

On 02/10/2016 12:11 PM, David G. Johnston wrote:

On Wed, Feb 10, 2016 at 12:56 PM, drum.lu...@gmail.com wrote:

Hi all

I'm having this error message:

Input data for column `%1$s` is not compatible with data type


But I was unable to find the table which is having this issue

Would be possible someone help me with that?
Thanks

Complete error link:
https://bitbucket.org/snippets/lucaspossamai/7r9yX

Query link:
https://bitbucket.org/snippets/lucaspossamai/rbpk4/edit/


​For those who come later please, if at all possible, write
self-contained emails.


In that vein, the second link is non-functional :(



David J.​




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Catalog bloat (again)

2016-02-10 Thread Ivan Voras
As a follow-up, here's a portion of the nightly vacuum's logs; I just want
to confirm that my conclusions are right:


INFO:  vacuuming "pg_catalog.pg_attribute"

INFO:  scanned index "pg_attribute_relid_attnam_index" to remove 3014172
row versions
DETAIL:  CPU 0.20s/1.08u sec elapsed 3.72 sec.
INFO:  scanned index "pg_attribute_relid_attnum_index" to remove 3014172
row versions
DETAIL:  CPU 0.14s/0.89u sec elapsed 1.70 sec.

INFO:  "pg_attribute": removed 3014172 row versions in 52768 pages
DETAIL:  CPU 0.31s/0.30u sec elapsed 1.15 sec.

INFO:  index "pg_attribute_relid_attnam_index" now contains 19578 row
versions in 45817 pages
DETAIL:  3013689 index row versions were removed.
45668 index pages have been deleted, 34116 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.

INFO:  index "pg_attribute_relid_attnum_index" now contains 19578 row
versions in 32554 pages
DETAIL:  3010630 index row versions were removed.
32462 index pages have been deleted, 24239 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.

INFO:  "pg_attribute": found 2278389 removable, 17319 nonremovable row
versions in 52856 out of 57409 pages
DETAIL:  298 dead row versions cannot be removed yet.
There were 641330 unused item pointers.
0 pages are entirely empty.
CPU 1.44s/2.88u sec elapsed 10.55 sec.

INFO:  "pg_attribute": stopping truncate due to conflicting lock request
INFO:  analyzing "pg_catalog.pg_attribute"
INFO:  "pg_attribute": scanned 3 of 57409 pages, containing 10301 live
rows and 0 dead rows; 10301 rows in sample, 24472 estimated total rows


So,

   - About 3 million rows churned in the table and its two indexes (i.e.
   dead rows which vacuum found and marked) - that's per day since this is a
   nightly operation.
   - After the vacuum, the first index is left with 19578 rows in 45817
   pages. That's a lot of empty pages, which should be reused the next day,
   together with free space in partially filled pages, right?
   - Since pages are 8 KiB, 46,000 pages is about 360 MiB of space; that's
   how large just one of the pg_attribute table's indexes is after the
   vacuum. Altogether, it's more than 1 GB.
   - The "stopping truncate..." message is not really significant: it would
   have shortened the data files from the end if there were empty pages at
   the end, which in this case there aren't a significant number of. The
   truncation can probably never happen on system tables like these, since
   they are always in use...?

The real question is: why is the total size (i.e. the number of pages)
growing at an alarming rate? On one of the db's, we're seeing almost
doubling in size each week. Is the internal fragmentation of the data files
so significant?

Ok, a couple more questions:

   1. How come "0 pages are entirely empty" if there are 17319 rows spread
   around in 52856 pages?
   2. What are "unused item pointers"?


(I agree with your previous suggestions, will see if they can be
implemented).
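(For reference, a quick way to cross-check those sizes directly:)

   SELECT relname,
          pg_size_pretty(pg_total_relation_size(oid)) AS total_size,
          reltuples::bigint AS est_live_rows
   FROM pg_class
   WHERE relname IN ('pg_attribute', 'pg_class');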



On 28 January 2016 at 00:13, Bill Moran  wrote:

> On Wed, 27 Jan 2016 23:54:37 +0100
> Ivan Voras  wrote:
> >
> > I've done my Googling, and it looks like this is a fairly common problem.
> > In my case, there's a collection of hundreds of databases (10 GB+) with
> > apps which are pretty much designed (a long time ago) with heavy use of
> > temp tables - so a non-trivial system.
> >
> > The databases are vacuumed (not-full) daily, from cron
>
> Vacuuming once a day is seldom often enough, except on very quiet
> databases.
>
> > (autovacuum was
> > turned off some time ago for performance reasons), and still their size
> > increases unexpectedly. By using some of the queries floating around on
> the
> > wiki and stackoverflow[*], I've discovered that the bloat is not, as was
> > assumed, in the user tables, but in the system tables, mostly in
> > pg_attributes and pg_class.
>
> The size increase isn't really unexpected. If you're only vacuuming once
> per day, it's very easy for activity to cause active tables to bloat quite
> a bit.
>
> > This is becoming a serious problem, as I've seen instances of these
> tables
> > grow to 6 GB+ (on a 15 GB total database), while still effectively
> > containing on the order of 10.000 records or so. This is quite abnormal.
> >
> > For blocking reasons, we'd like to avoid vacuum fulls on these tables (as
> > it seems like touching them will lock up everything else).
>
> It will. But to get them back down to a reasonable size, you're going to
> have to do a VACUUM FULL at least _once_. If you retune things correctly,
> you shouldn't need any more FULLs after that 1 time.
>
> > So, question #1: WTF? How could this happen, on a regularly vacuumed
> > system? Shouldn't the space be reused, at least after a VACUUM? The issue
> > here is not the absolute existence of the bloat space, it's that it's
> > constantly growing for *system* tables.
>
> With a lot of activity, once a day probably isn't 

Re: [GENERAL] Input data for column is not compatible with data type

2016-02-10 Thread drum.lu...@gmail.com
Yep...

Thank you for your reply.

The problem is a code problem, not in the DB.

Thank you.



Lucas Possamai

- kinghost.co.nz

- DigitalOcean 

On 11 February 2016 at 10:27, Adrian Klaver 
wrote:

> On 02/10/2016 12:11 PM, David G. Johnston wrote:
>
>> On Wed, Feb 10, 2016 at 12:56 PM, drum.lu...@gmail.com
>>  > >wrote:
>>
>> Hi all
>>
>> I'm having this error message:
>>
>> Input data for column `%1$s` is not compatible with data type
>>
>>
>> But I was unable to find the table which is having this issue
>>
>> Would be possible someone help me with that?
>> Thanks
>>
>> Complete error link:
>> https://bitbucket.org/snippets/lucaspossamai/7r9yX
>>
>> Query link:
>> https://bitbucket.org/snippets/lucaspossamai/rbpk4/edit/
>>
>>
>> ​For those who come later please, if at all possible, write
>> self-contained emails.
>>
>
> In that vein the second link is non-functional:(
>
>
>> David J.​
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] pg_multixact issues

2016-02-10 Thread Thomas Munro
On Thu, Feb 11, 2016 at 7:52 AM, Kiriakos Georgiou wrote:
> Hello,
>
> Our pg_multixact directory keeps growing.  I did a "vacuum freeze” which
> didn’t help.  I also did a "vacuum full” which didn’t help either.
> We had this condition with 9.3.4 as well.  When I upgraded our cluster to
> 9.4.5 (via plain sql dump and load) as expected the issue was resolved but
> now it’s happening again.  Luckily it has no ill effect other than consuming
> 4G of space for an otherwise 1G database.
>
> Can you offer any hints as to how I can cure this?
>
> thanks,
> Kiriakos Georgiou
>
>
> [pg_controldata output and pg_multixact directory listings quoted in full;
> trimmed - see the original message above]

Re: [GENERAL] pg_multixact issues

2016-02-10 Thread Alvaro Herrera
Thomas Munro wrote:

> 4.  You could look into whether all those multixacts with many member
> are really expected.  (Large numbers of concurrent FK checks or
> explicit share locks on the same rows perhaps?  A good proportion of
> this happened on one day this week I see.)

I think maybe this patch
https://www.postgresql.org/message-id/20150620043650.GX133018%40postgresql.org
should help with this.  I expect to come back to this and get it pushed
to 9.3 and 9.4 sometime ...

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Transaction ID not logged if no explicit transaction used

2016-02-10 Thread Martín Marqués
Hi,

I've been fooling around in a test environment where I wanted to run
some PITR tests using recovery_target_xid.

So I started setting up postgresql.conf with log_statement='ddl' (and
'mod' also) and the %x placeholder in log_line_prefix:
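(The prefix itself did not survive in this message; reconstructed from the
log lines below, it was along these lines:)

   log_statement = 'ddl'
   log_line_prefix = '<%t [%p]: [%l] xid=%x db=%d,user=%u,app=%a,client=%h>'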

The odd result was that I always got zero as the xid.


So after some chats (and a power outage at home :( ) I tried setting
log_statement to 'none' and log_min_duration_statement = 0 with the same
log_line_prefix:

I ran these commands to test:

data=# create table test_xid (id int);
CREATE TABLE
data=# begin;
BEGIN
data=# drop table test_xid;
DROP TABLE
data=# end;
COMMIT

Which give these logs:

<2016-02-10 17:41:19 EST [5729]: [1] xid=0
db=data,user=postgres,app=psql,client=[local]>LOG:  duration: 17.242 ms
 statement: create table test_xid (id int);
<2016-02-10 17:41:21 EST [5729]: [2] xid=0
db=data,user=postgres,app=psql,client=[local]>LOG:  duration: 0.055 ms
statement: begin;
<2016-02-10 17:41:32 EST [5729]: [3] xid=31063
db=data,user=postgres,app=psql,client=[local]>LOG:  duration: 6.858 ms
statement: drop table test_xid;
<2016-02-10 17:41:34 EST [5729]: [4] xid=0
db=data,user=postgres,app=psql,client=[local]>LOG:  duration: 2.540 ms
statement: end;

It's clear that if the command isn't executed in an explicit
transaction, you don't get the xid in the logs. Very annoying!

Worse, I guess most people would want the xid of a DROP TABLE to be
logged, as well as of other DDL and/or DML with log_statement
configured accordingly. I suppose this is not so simple to achieve.

So, my question is: Is this a bug, or a feature? I recall being able to
log xids on DDLs but can't find the correct settings now.

-- 
Martín Marquéshttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_multixact issues

2016-02-10 Thread Thomas Munro
On Thu, Feb 11, 2016 at 11:05 AM, Alvaro Herrera wrote:
> Thomas Munro wrote:
>
>> 4.  You could look into whether all those multixacts with many member
>> are really expected.  (Large numbers of concurrent FK checks or
>> explicit share locks on the same rows perhaps?  A good proportion of
>> this happened on one day this week I see.)
>
> I think maybe this patch
> https://www.postgresql.org/message-id/20150620043650.GX133018%40postgresql.org
> should help with this.  I expect to come back to this and get it pushed
> to 9.3 and 9.4 sometime ...

Ah, so the OP might get some improvement today by moving to 9.5, which
has the optimization already.

BTW in my message above I said vacuum_multixact_freeze_table_age when
I meant to say autovacuum_multixact_freeze_max_age (and its default is
400 million).
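(For reference, that knob lives in postgresql.conf:)

   autovacuum_multixact_freeze_max_age = 400000000   # the default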

-- 
Thomas Munro
http://www.enterprisedb.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Transaction ID not logged if no explicit transaction used

2016-02-10 Thread Tom Lane
Martín Marqués writes:
> [ log_line_prefix %x frequently reports zero ]

> <2016-02-10 17:41:19 EST [5729]: [1] xid=0
> db=data,user=postgres,app=psql,client=[local]>LOG:  duration: 17.242 ms
>  statement: create table test_xid (id int);
> <2016-02-10 17:41:21 EST [5729]: [2] xid=0
> db=data,user=postgres,app=psql,client=[local]>LOG:  duration: 0.055 ms
> statement: begin;
> <2016-02-10 17:41:32 EST [5729]: [3] xid=31063
> db=data,user=postgres,app=psql,client=[local]>LOG:  duration: 6.858 ms
> statement: drop table test_xid;
> <2016-02-10 17:41:34 EST [5729]: [4] xid=0
> db=data,user=postgres,app=psql,client=[local]>LOG:  duration: 2.540 ms
> statement: end;

> It's clear that if the command isn't executed in an explicit
> transaction, you don't get the xid in the logs. Very annoying!

Think you're outta luck on that.  If we logged the duration before
commit, it would be entirely misleading for short commands, because
the time needed to commit wouldn't be included.  So we log it after,
when there's no longer any active transaction.

We could maybe fix this by redefining %x as "the current or most recent
xid", so that it'd still be valid for messages issued post-commit.
But I'm afraid that would add about as many bad behaviors as it would
remove.  In your example above, that would result in a pretty misleading
xid attached to the "begin" statement, since at that point we have
started a new transaction but not assigned it any xid.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Optimize Query

2016-02-10 Thread John R Pierce

On 2/10/2016 6:38 PM, drum.lu...@gmail.com wrote:

Sorry but I was unable to see the *n_quote_status = 0*


I'm unable to see this variable anywhere in your two original queries, 
the SQL one, and the other ?? abbreviated thing, nor did you give any 
table definitions, so I'm not even sure what you mean by n_quote_status





--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] ERROR: missing FROM-clause entry for table

2016-02-10 Thread bigkev
Yes, c.start_time is a timestamp.
My goal is to list events (calls) which are scheduled on a fortnightly basis.

The query works with hard-coded values, but I need to generate results for
days beginning at the start_time and ending with the end_time, e.g.:

2016-02-06
2016-02-07
2016-02-08 | Some account name | other info
2016-02-09
2016-02-10

2016-02-22 | Some account name | other info
2016-02-23
2016-02-24
etc etc

All dates within the first generate_series are listed, 1 row for each day.
The second generate_series is to show events (calls) which begin on
c.start_time and end at c.end_time.
Records would be displayed at each 14 days from the start_time.
In the above example, this would be 2016-02-08, and end_time would be a date
in the future.







Re: [GENERAL] ERROR: missing FROM-clause entry for table

2016-02-10 Thread Roxanne Reid-Bennett

On 2/9/2016 11:11 PM, bigkev wrote:

> I am receiving this error for the query pasted below.
> Is the LEFT JOIN on the table not enough?
> What needs to happen here?
> I am guessing something to do with derived tables.
>
> http://pastie.org/10715876



Your error is in the reference to c.start_time and c.end_time.  During the 
parse, the system doesn't know about "c" yet.
And swapping fortnight and "c" won't help - you can't reference 
c.start_time in the "from" portion of the join.


So - substituting static values for c.start_time, c.end_time:

select *
  from generate_series('2016-01-22', '2017-12-31', '1 day'::interval) g(day)
  left join generate_series('2015-01-25', '2016-07-01', '2 weeks'::interval) f(fortnight)
    on g.day = f.fortnight


generates results... but I'm not sure it is giving you what you want.
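
(On 9.3 or later, a LATERAL join is the usual way to keep a reference to
c's columns in the FROM list - a sketch, assuming a call_schedule table
aliased c with timestamp columns start_time and end_time:

select c.*, f.fortnight
  from call_schedule c
  cross join lateral generate_series(c.start_time, c.end_time,
                                     '2 weeks'::interval) as f(fortnight);

Each row of c then gets its own series, instead of one global series.)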

Exactly what are you trying to achieve with the fortnight construct?
BTW - assuming call_schedule.start_time is a timestamp... do your 
start/end times cross day boundaries?  The test g.day between start/end 
will never be true otherwise - you are dealing with "midnight" values 
for time.
E.g. '2016-01-23' does not fall between '2016-01-23 08:30:01' and 
'2016-01-23 10:45:01'.


Roxanne

--
[At other schools] I think the most common fault in general is to teach 
students how to pass exams instead of teaching them the science.
Donald Knuth





Re: [GENERAL] PostgreSQL vs Firebird SQL

2016-02-10 Thread Karsten Hilbert
On Wed, Feb 10, 2016 at 10:34:53AM +0200, Achilleas Mantzios wrote:

PG on tankers:

> About checksums in our office master DB that's a fine idea, too bad that 
> pg_upgrade doesn't cope with them

I am sure you have considered "failing over" the master to an
in-office slave which has checksums turned on?

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] PostgreSQL vs Firebird SQL

2016-02-10 Thread Achilleas Mantzios

On 10/02/2016 12:40, Karsten Hilbert wrote:

> On Wed, Feb 10, 2016 at 10:34:53AM +0200, Achilleas Mantzios wrote:
>
> PG on tankers:
>
> > About checksums in our office master DB that's a fine idea, too bad that
> > pg_upgrade doesn't cope with them
>
> I am sure you have considered "failing over" the master to an
> in-office slave which has checksums turned on?


Is that possible with standard streaming replication? As far as I know, the
(master/hot standby) images have to be identical (no initdb involved). I
guess you mean some sort of external (logical?) replication mechanism? We
are trying to avoid an initdb-and-restore for the obvious reasons.
But anyway, we have had streaming replication to a hot standby (non-checksum
server) + WAL archiving for some years now. For 10+ years we survived
(surprisingly!!) without those; we are better than ever now. BTW, the
checksum feature would definitely make sense on our vessels, where the
vibrations and harsh conditions tend to affect hardware badly. Unfortunately,
migrating from 8.3 is a huge project, which we can't keep postponing forever
and will have to deal with some day.




Karsten



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: [GENERAL] PostgreSQL vs Firebird SQL

2016-02-10 Thread Achilleas Mantzios

On 10/02/2016 06:10, ioan ghip wrote:
> I have a Firebird SQL database running on one of my servers which has about
> 50k inserts, about 100k updates and about 30k deletes every day. There are
> about 4 million records in 24 tables. I have a bunch of stored procedures,
> triggers, events and views that I'm using.
> Firebird works fairly well, but from time to time the database gets
> corrupted and I couldn't figure out yet (after many years of running) what
> the reason is. When this happens I run "gfix -mend -full -ignore", back up
> and restore the db, and everything is fine until the next problem in a week
> or a month.
>
> I never used PostgreSQL. Yesterday I installed it on my development machine
> and after a few tests I saw that it's fairly easy to use.
>
> Does anyone have experience with both Firebird and PostgreSQL? Is PostgreSQL
> way better performing than Firebird? Is it worth the effort moving away from
> Firebird? Would I gain stability and increased performance?
>
> Thanks.



Hello,

We have been running over 100 PostgreSQL servers (8.3) on remote tanker
vessels in harsh conditions, under heavy vibration from both weather and
machinery, on commodity PC workstations, for years, and only one of them
(hardware) was damaged beyond repair (not PgSQL's fault).
In other cases, with databases corrupted due to heavily damaged disks, we
managed to recover and rescue all of the data except a few rows which could
be re-generated anyway.

PostgreSQL *is* a reliable DB.

About checksums in our office master DB that's a fine idea, too bad that 
pg_upgrade doesn't cope with them
(and upgrading without pg_upgrade is out of the question)

--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: [GENERAL] ERROR: missing FROM-clause entry for table

2016-02-10 Thread Roxanne Reid-Bennett

On 2/10/2016 1:33 AM, bigkev wrote:

> Yes, c.start_time is a timestamp.

It is really late where I am, so I apologize if I'm being dense...

> My goal is to list events (calls) which are scheduled on a fortnightly basis.



> The query works with hard coded values, but I need to generate results for
> days beginning at the start_time and ending with the end_time, e.g.:
>
> 2016-02-06
> 2016-02-07
> 2016-02-08 | Some account name | other info
> 2016-02-09
> 2016-02-10
>
> 2016-02-22 | Some account name | other info
> 2016-02-23
> 2016-02-24
> etc etc

Is this exactly the format of the data you are looking for?
If there was an event on 02/10, would it show or not? (It doesn't fit in a 
fortnight schedule from 02/08, but could still be scheduled to run 
fortnightly, e.g. 02/10, 02/24...)

> All dates within the first generate_series are listed, 1 row for each day.
> The second generate_series is to show events (calls) which begin on
> c.start_time and end at c.end_time.
> Records would be displayed at each 14 days from the start_time.
I am confused about "which" start time.  Your description implies there 
is just one start time that bounds your entire query.  You have a column 
"start_time" in the Call Event table which is presumably the start of an 
event.  However, each row in the table potentially has a totally 
different value.  So what do you mean by "the" start time?  There isn't 
a single "14 days from the start time" if you are using c.start_time and 
calculating 14 days into the future for each row in the Call Event 
table.  Each event would then have its own fortnight period, and your 
query results would not be bounded by time at all.



> In the above example, this would be 2016-02-08 and end_time would be a date
> in the future.
So for "the start time" do you mean the first (minimum) start time in 
the Call Event table?
Or are you trying to gather all the call events that are scheduled 14 
days apart, listed in order by day?
Is there anything that ties the call events together as being in a set 
other than they take place exactly 14 days apart?
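
If it is the latter, date arithmetic may be all you need: subtracting two
dates yields a whole number of days, so "every 14 days from each event's
start" is a modulo test.  A sketch, assuming call_schedule (aliased c) has
timestamp columns start_time and end_time, with an illustrative date range:

select g.day::date, c.*
  from generate_series('2016-02-06'::date, '2016-03-01'::date,
                       '1 day'::interval) as g(day)
  left join call_schedule c
    on  g.day::date between c.start_time::date and c.end_time::date
    and (g.day::date - c.start_time::date) % 14 = 0
 order by g.day;

That lists one row per day, with the event columns filled in only on days
that are an exact multiple of 14 days from that event's start.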


Roxanne

--
[At other schools] I think the most common fault in general is to teach 
students how to pass exams instead of teaching them the science.
Donald Knuth





Re: [GENERAL] Optimize Query

2016-02-10 Thread drum.lu...@gmail.com
>
>
> FYI, it is always helpful (and polite) to state the PostgreSQL VERSION and
> O/S for archive documentation purposes!
>

That's right. My mistake... I'm using PostgreSQL 9.2.


>
> Note that various postgresql.conf options, system memory & hardware also
> play a factor here, in addition to current table statistics.
>
> That being said, try the following:
>
> WITH jobs AS
> (
>  SELECT id,
> customerid,
> status_label_id
>   FROM ja_jobs
>  WHERE NOT templated
>AND NOT deleted
>AND clientid = 6239
>AND time_job >= 1438351200
>AND time_job <= 144340
> )
> SELECT concat(client.company,
>' ',
>client.name_first,
>' ', client.name_last) AS customer,
>sum(COALESCE(bill_item.unit_price, billable.unit_price, 0) *
> bill_item.quantity) AS revenue,
>sum(bill_item.quantity) AS quantity,
>sum(COALESCE(bill_item.unit_cost, billable.unit_cost, 0) *
> bill_item.quantity) AS cost
>   FROM ja_clients    AS account
>   JOIN ja_customers  AS client ON client.clientid = account.id
>   JOIN jobs          AS job    ON job.customerid = client.id
>   JOIN ja_notes  AS note   ON note.jobid = job.id
>   JOIN dm.bill_items AS bill_item ON
> bill_item.bill_item_id=note.bill_item_id
>   LEFT JOIN dm.bills AS bill ON bill.bill_id=bill_item.bill_id
>   LEFT JOIN dm.invoices AS invoice ON invoice.invoice_id=bill.invoice_id
>OR
> invoice.invoice_id=bill_item.invoice_id
>   LEFT JOIN dm.billables AS billable ON
> billable.billable_id=note.billable_id
>   LEFT JOIN dm.labors AS labs ON labs.billable_id = billable.billable_id
>   JOIN ja_mobiusers AS "user" ON "user".id = note.mobiuserid
>   JOIN ja_status AS status ON status.id = job.status_label_id
>   JOIN ja_role AS ROLE ON ROLE.id="user".user_type
>  WHERE note.note_type::text = ANY (ARRAY[ ('time'::CHARACTER
> VARYING)::text, ('part'::CHARACTER VARYING)::text ])
>AND bill_item.for_invoicing = TRUE
>  GROUP BY customer
>  ORDER BY revenue DESC;
>

Thank you, Melvin.
Sorry, but I was unable to see the *n_quote_status = 0*.

Did you use it?


Re: [GENERAL] Transaction ID not logged if no explicit transaction used

2016-02-10 Thread Adrian Klaver

On 02/10/2016 02:58 PM, Martín Marqués wrote:

> Hi,
>
> I've been fooling around on a test environment where I wanted to run
> some PITR tests using recovery_target_xid.
>
> So I started setting up postgresql.conf with log_statement='ddl' (and
> 'mod' also) and the %x placeholder in log_line_prefix:
>
> The odd result was that I always got a zero as the xid.
>
> So after some chats (and a power outage at home :( ) I tried setting
> log_statement to 'none' and log_min_duration_statement = 0 with the same
> log_line_prefix:
>
> I ran these commands to test:
>
> data=# create table test_xid (id int);
> CREATE TABLE
> data=# begin;
> BEGIN
> data=# drop table test_xid;
> DROP TABLE
> data=# end;
> COMMIT
>
> Which gives these logs:
>
> <2016-02-10 17:41:19 EST [5729]: [1] xid=0
> db=data,user=postgres,app=psql,client=[local]>LOG:  duration: 17.242 ms
>   statement: create table test_xid (id int);
> <2016-02-10 17:41:21 EST [5729]: [2] xid=0
> db=data,user=postgres,app=psql,client=[local]>LOG:  duration: 0.055 ms
> statement: begin;
> <2016-02-10 17:41:32 EST [5729]: [3] xid=31063
> db=data,user=postgres,app=psql,client=[local]>LOG:  duration: 6.858 ms
> statement: drop table test_xid;
> <2016-02-10 17:41:34 EST [5729]: [4] xid=0
> db=data,user=postgres,app=psql,client=[local]>LOG:  duration: 2.540 ms
> statement: end;
>
> It's clear that if the command isn't executed in an explicit
> transaction, you don't get the xid in the logs. Very annoying!
>
> Worse, I guess most people would want the xid of a DROP TABLE to be
> logged, as well as other DDLs and/or DMLs with log_statement
> configured accordingly. I suppose this is not so simple to achieve.
>
> So, my question is: Is this a bug, or a feature? I recall being able to
> log xids on DDLs but can't find the correct settings now.


Maybe?:

%v  Virtual transaction ID (backendID/localXID)




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Optimize Query

2016-02-10 Thread drum.lu...@gmail.com
Hi all,

I've got a slow query and I'm trying to make it faster.

*New Query:*

SELECT concat(client.company, ' ', client.name_first, ' ',
> client.name_last) AS customer,
>sum(COALESCE(bill_item.unit_price, billable.unit_price, 0) *
> bill_item.quantity) AS revenue,
>sum(bill_item.quantity) AS quantity,
>sum(COALESCE(bill_item.unit_cost, billable.unit_cost, 0) *
> bill_item.quantity) AS cost
>FROM ja_clients AS account
>JOIN ja_customers AS client ON client.clientid = account.id
>JOIN ja_jobs AS job ON client.id=job.customerid
>JOIN ja_notes AS note ON note.jobid = job.id
>JOIN dm.bill_items AS bill_item ON
> bill_item.bill_item_id=note.bill_item_id
>LEFT JOIN dm.bills AS bill ON bill.bill_id=bill_item.bill_id
>LEFT JOIN dm.invoices AS invoice ON invoice.invoice_id=bill.invoice_id
>  OR invoice.invoice_id=bill_item.invoice_id
>LEFT JOIN dm.billables AS billable ON
> billable.billable_id=note.billable_id
>LEFT JOIN dm.labors AS labs ON labs.billable_id = billable.billable_id
>JOIN ja_mobiusers AS "user" ON "user".id = note.mobiuserid
>JOIN ja_status AS status ON status.id = job.status_label_id
>JOIN ja_role AS ROLE ON ROLE.id="user".user_type
>WHERE note.note_type::text = ANY (ARRAY[ ('time'::CHARACTER
> VARYING)::text,
> ('part'::CHARACTER
> VARYING)::text ])
>  AND NOT job.templated
>  AND NOT job.deleted
>  AND job.clientid = 6239
>  AND job.time_job >= 1438351200
>  AND job.time_job <= 144340
>  AND bill_item.for_invoicing = TRUE
> GROUP BY customer
> ORDER BY revenue DESC;


*The original query has:*


SELECT $cols $ec , sum(revenue) as revenue, $cost_cols
> FROM (".note_detail_report_view(). ") AS i
> LEFT JOIN (
> SELECT $join_col , SUM(cost) AS cost, SUM(quantity) AS quantity
> FROM (".note_detail_report_view(). ") AS note_detail_report_view
> $whereClause AND *n_quote_status = 0*
> GROUP BY $join_col
> ) AS a
> ON $joiner
> $whereClause AND invoice = true $limit_inv
> GROUP BY $group_by $ec, a.cost , a.quantity
> ORDER BY $order_by


I just need the "a" case. "i" and "a" look very similar, except "a" has an
additional filter: *n_quote_status = 0*

How can I re-write that using the "a" case?

Thanks


Re: [GENERAL] Optimize Query

2016-02-10 Thread Melvin Davidson
On Wed, Feb 10, 2016 at 8:25 PM, drum.lu...@gmail.com 
wrote:

> Hi all,
>
> I've got a slow query and I'm trying to make it faster.
>
> *New Query:*
>
> SELECT concat(client.company, ' ', client.name_first, ' ',
>> client.name_last) AS customer,
>>sum(COALESCE(bill_item.unit_price, billable.unit_price, 0) *
>> bill_item.quantity) AS revenue,
>>sum(bill_item.quantity) AS quantity,
>>sum(COALESCE(bill_item.unit_cost, billable.unit_cost, 0) *
>> bill_item.quantity) AS cost
>>FROM ja_clients AS account
>>JOIN ja_customers AS client ON client.clientid = account.id
>>JOIN ja_jobs AS job ON client.id=job.customerid
>>JOIN ja_notes AS note ON note.jobid = job.id
>>JOIN dm.bill_items AS bill_item ON
>> bill_item.bill_item_id=note.bill_item_id
>>LEFT JOIN dm.bills AS bill ON bill.bill_id=bill_item.bill_id
>>LEFT JOIN dm.invoices AS invoice ON invoice.invoice_id=bill.invoice_id
>>  OR invoice.invoice_id=bill_item.invoice_id
>>LEFT JOIN dm.billables AS billable ON
>> billable.billable_id=note.billable_id
>>LEFT JOIN dm.labors AS labs ON labs.billable_id = billable.billable_id
>>JOIN ja_mobiusers AS "user" ON "user".id = note.mobiuserid
>>JOIN ja_status AS status ON status.id = job.status_label_id
>>JOIN ja_role AS ROLE ON ROLE.id="user".user_type
>>WHERE note.note_type::text = ANY (ARRAY[ ('time'::CHARACTER
>> VARYING)::text,
>> ('part'::CHARACTER
>> VARYING)::text ])
>>  AND NOT job.templated
>>  AND NOT job.deleted
>>  AND job.clientid = 6239
>>  AND job.time_job >= 1438351200
>>  AND job.time_job <= 144340
>>  AND bill_item.for_invoicing = TRUE
>> GROUP BY customer
>> ORDER BY revenue DESC;
>
>
> *The original query has:*
>
> SELECT $cols $ec , sum(revenue) as revenue, $cost_cols
>> FROM (".note_detail_report_view(). ") AS i
>> LEFT JOIN (
>> SELECT $join_col , SUM(cost) AS cost, SUM(quantity) AS quantity
>> FROM (".note_detail_report_view(). ") AS note_detail_report_view
>> $whereClause AND *n_quote_status = 0*
>> GROUP BY $join_col
>> ) AS a
>> ON $joiner
>> $whereClause AND invoice = true $limit_inv
>> GROUP BY $group_by $ec, a.cost , a.quantity
>> ORDER BY $order_by
>
>
> I just need the "a" case. "i" and "a" look very similar, except "a" has an
> additional filter: *n_quote_status = 0*
>
> How can I re-write that using the "a" case?
>
> Thanks
>

FYI, it is always helpful (and polite) to state the PostgreSQL VERSION and
O/S for archive documentation purposes!

Note that various postgresql.conf options, system memory & hardware also
play a factor here, in addition to current table statistics.

That being said, try the following:

WITH jobs AS
(
 SELECT id,
customerid,
status_label_id
  FROM ja_jobs
 WHERE NOT templated
   AND NOT deleted
   AND clientid = 6239
   AND time_job >= 1438351200
   AND time_job <= 144340
)
SELECT concat(client.company,
   ' ',
   client.name_first,
   ' ', client.name_last) AS customer,
   sum(COALESCE(bill_item.unit_price, billable.unit_price, 0) *
bill_item.quantity) AS revenue,
   sum(bill_item.quantity) AS quantity,
   sum(COALESCE(bill_item.unit_cost, billable.unit_cost, 0) *
bill_item.quantity) AS cost
  FROM ja_clients    AS account
  JOIN ja_customers  AS client ON client.clientid = account.id
  JOIN jobs          AS job    ON job.customerid = client.id
  JOIN ja_notes  AS note   ON note.jobid = job.id
  JOIN dm.bill_items AS bill_item ON
bill_item.bill_item_id=note.bill_item_id
  LEFT JOIN dm.bills AS bill ON bill.bill_id=bill_item.bill_id
  LEFT JOIN dm.invoices AS invoice ON invoice.invoice_id=bill.invoice_id
   OR
invoice.invoice_id=bill_item.invoice_id
  LEFT JOIN dm.billables AS billable ON
billable.billable_id=note.billable_id
  LEFT JOIN dm.labors AS labs ON labs.billable_id = billable.billable_id
  JOIN ja_mobiusers AS "user" ON "user".id = note.mobiuserid
  JOIN ja_status AS status ON status.id = job.status_label_id
  JOIN ja_role AS ROLE ON ROLE.id="user".user_type
 WHERE note.note_type::text = ANY (ARRAY[ ('time'::CHARACTER
VARYING)::text, ('part'::CHARACTER VARYING)::text ])
   AND bill_item.for_invoicing = TRUE
 GROUP BY customer
 ORDER BY revenue DESC;
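
For what it's worth, part of why this rewrite can help on 9.2 (the version
mentioned elsewhere in the thread): a WITH query there acts as an
optimization fence and is materialized once, so the restrictive ja_jobs
filters are guaranteed to run before the big join. Prefixing either version
with

EXPLAIN (ANALYZE, BUFFERS)

would show whether the ja_jobs scan actually shrinks.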

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.