Re: [GENERAL] Installing PostgreSQL 10 on Mac OSX Undefined Symbol _heap_modify_tuple_by_cols

2017-10-25 Thread Ben Madin
to clarify, I commented them out, hence the # - it wasn't that I removed
lines that were already commented out :)

On 25 October 2017 at 22:21, Ben Madin <b...@ausvet.com.au> wrote:

> G'day Tom,
>
> Thanks for the feedback. I couldn't find anywhere that suggested that I
> had set the CPPFLAGS or CFLAGS environment variables, so I removed the
> following lines from my profile:
>
> #export USE_PGXS=1
> #export PG_LIB_DIR="/usr/local/pgsql/lib"
> #export PG_CONFIG="/usr/local/pgsql/bin/pg_config"
> #export PGDATA="/usr/local/pgsql/data"
>
> and tried again - worked a treat!
>
> I haven't tried to work out which of these lines caused the problem, but
> hopefully if anyone else has a similar problem they might benefit!
>
> cheers
>
> Ben
>
>
> On 24 October 2017 at 02:43, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
>> Ben Madin <b...@ausvet.com.au> writes:
>> > we are quite excited about the parallelisation enhancements, and keen to
>> > try, but trying to build (using the same configure as we have used for
>> 9.6)
>> > is giving some warnings and errors.
>>
>> Something's definitely messed up there:
>>
>> > gcc -Wall -Wmissing-prototypes -Wpointer-arith
>> > -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
>> > -Wformat-security -fno-strict-aliasing -fwrapv
>> > -Wno-unused-command-line-argument -O2 -arch x86_64  -DREFINT_VERBOSE
>> -I.
>> > -I./ -I/usr/local/pgsql965/include/server
>> > -I/usr/local/pgsql965/include/internal
>> > -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOS
>> X.platform/Developer/SDKs/MacOSX10.13.sdk/usr/include/libxml2
>> > -I/usr/local/include  -c -o autoinc.o autoinc.c
>>
>> Looking at this example of a v10 build log on macOS:
>> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?n
>> m=longfin=2017-10-23%2018%3A15%3A34=make
>>
>> the compile command for autoinc is
>>
>> ccache gcc -Wall -Wmissing-prototypes -Wpointer-arith
>> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
>> -Wformat-security -fno-strict-aliasing -fwrapv
>> -Wno-unused-command-line-argument -g -O2 -fno-common
>> -Wno-deprecated-declarations -Werror  -DREFINT_VERBOSE -I. -I.
>> -I../../src/include  -I/Applications/Xcode.app/Cont
>> ents/Developer/Platforms/MacOSX.platform/Developer/SDKs/
>> MacOSX10.13.sdk/usr/include/libxml2  -I/usr/local/ssl/include  -c -o
>> autoinc.o autoinc.c
>>
>> Some of the discrepancies (e.g. -Werror on the buildfarm machine) are
>> explainable as different configuration choices, but the references to
>> /usr/local/pgsql965 in your build sure look like trouble.
>>
>> > Is this looking for an existing environment variable (which
>> > seems unlikely for a build process) or is something else unusual?
>>
>> I believe the configure script *does* pay attention to environment
>> variables, particularly CPPFLAGS and CFLAGS.  Most likely you had
>> version-specific values in those when you ran configure, and they
>> got absorbed into src/Makefile.global.
>>
>> regards, tom lane
>>
>


Re: [GENERAL] Installing PostgreSQL 10 on Mac OSX Undefined Symbol _heap_modify_tuple_by_cols

2017-10-25 Thread Ben Madin
G'day Tom,

Thanks for the feedback. I couldn't find anywhere that suggested that I had
set the CPPFLAGS or CFLAGS environment variables, so I removed the
following lines from my profile:

#export USE_PGXS=1
#export PG_LIB_DIR="/usr/local/pgsql/lib"
#export PG_CONFIG="/usr/local/pgsql/bin/pg_config"
#export PGDATA="/usr/local/pgsql/data"

and tried again - worked a treat!

I haven't tried to work out which of these lines caused the problem, but
hopefully if anyone else has a similar problem they might benefit!

cheers

Ben


On 24 October 2017 at 02:43, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Ben Madin <b...@ausvet.com.au> writes:
> > we are quite excited about the parallelisation enhancements, and keen to
> > try, but trying to build (using the same configure as we have used for
> 9.6)
> > is giving some warnings and errors.
>
> Something's definitely messed up there:
>
> > gcc -Wall -Wmissing-prototypes -Wpointer-arith
> > -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
> > -Wformat-security -fno-strict-aliasing -fwrapv
> > -Wno-unused-command-line-argument -O2 -arch x86_64  -DREFINT_VERBOSE -I.
> > -I./ -I/usr/local/pgsql965/include/server
> > -I/usr/local/pgsql965/include/internal
> > -I/Applications/Xcode.app/Contents/Developer/Platforms/
> MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk/usr/include/libxml2
> > -I/usr/local/include  -c -o autoinc.o autoinc.c
>
> Looking at this example of a v10 build log on macOS:
> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?
> nm=longfin=2017-10-23%2018%3A15%3A34=make
>
> the compile command for autoinc is
>
> ccache gcc -Wall -Wmissing-prototypes -Wpointer-arith
> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
> -Wformat-security -fno-strict-aliasing -fwrapv 
> -Wno-unused-command-line-argument
> -g -O2 -fno-common -Wno-deprecated-declarations -Werror  -DREFINT_VERBOSE
> -I. -I. -I../../src/include  -I/Applications/Xcode.app/
> Contents/Developer/Platforms/MacOSX.platform/Developer/
> SDKs/MacOSX10.13.sdk/usr/include/libxml2  -I/usr/local/ssl/include  -c -o
> autoinc.o autoinc.c
>
> Some of the discrepancies (e.g. -Werror on the buildfarm machine) are
> explainable as different configuration choices, but the references to
> /usr/local/pgsql965 in your build sure look like trouble.
>
> > Is this looking for an existing environment variable (which
> > seems unlikely for a build process) or is something else unusual?
>
> I believe the configure script *does* pay attention to environment
> variables, particularly CPPFLAGS and CFLAGS.  Most likely you had
> version-specific values in those when you ran configure, and they
> got absorbed into src/Makefile.global.
>
> regards, tom lane
>



-- 

Dr Ben Madin
Managing Director



m : +61 448 887 220

e : b...@ausvet.com.au

5 Shuffrey Street, Fremantle
Western Australia

on the web: www.ausvet.com.au




[GENERAL] Installing PostgreSQL 10 on Mac OSX Undefined Symbol _heap_modify_tuple_by_cols

2017-10-22 Thread Ben Madin
G'day,

we are quite excited about the parallelisation enhancements, and keen to
try, but trying to build (using the same configure as we have used for 9.6)
is giving some warnings and errors.

The detail is below, but the oddity I'm really wondering about is the
reference in the command to /usr/local/pgsql965/... this was my current 9.6
install (which went smoothly) - but why is it being referenced in the make
for 10...? Is this looking for an existing environment variable (which
seems unlikely for a build process) or is something else unusual?

I am still on Sierra (Darwin orion.local 16.7.0 Darwin Kernel Version
16.7.0: Thu Jun 15 17:36:27 PDT 2017; root:xnu-3789.70.16~2/RELEASE_X86_64
x86_64)

and using this configure:

./configure --prefix=/usr/local/pgsql-10 --with-extra-version=BM
--with-python --with-openssl --with-bonjour --with-uuid=e2fs --with-libxml
--with-libxslt
PYTHON=/Library/Frameworks/Python.framework/Versions/3.6/bin/python3

and the build failure ends with:

/Applications/Xcode.app/Contents/Developer/usr/bin/make -C
../../../contrib/spi
gcc -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv
-Wno-unused-command-line-argument -O2 -arch x86_64  -DREFINT_VERBOSE -I.
-I./ -I/usr/local/pgsql965/include/server
-I/usr/local/pgsql965/include/internal
-I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk/usr/include/libxml2
-I/usr/local/include  -c -o autoinc.o autoinc.c
autoinc.c:116:14: warning: implicit declaration of function
'heap_modify_tuple_by_cols' is invalid in C99
[-Wimplicit-function-declaration]
rettuple = heap_modify_tuple_by_cols(rettuple, tupdesc,
   ^
autoinc.c:116:12: warning: incompatible integer to pointer conversion
assigning to 'HeapTuple' (aka 'struct HeapTupleData *') from 'int'
[-Wint-conversion]
rettuple = heap_modify_tuple_by_cols(rettuple, tupdesc,
 ^ 
2 warnings generated.
gcc -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv
-Wno-unused-command-line-argument -O2 -arch x86_64
-L/usr/local/pgsql965/lib
-L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk/usr/lib
-L/usr/local/lib -Wl,-dead_strip_dylibs  -arch x86_64
-L/usr/local/pgsql965/lib/pgxs/src/makefiles/../../src/port -lpgport
-bundle -bundle_loader /usr/local/pgsql965/bin/postgres -o autoinc.so
autoinc.o
ld: warning: directory not found for option
'-L/usr/local/pgsql965/lib/pgxs/src/makefiles/../../src/port'
Undefined symbols for architecture x86_64:
  "_heap_modify_tuple_by_cols", referenced from:
  _autoinc in autoinc.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see
invocation)
make[3]: *** [autoinc.so] Error 1
make[2]: *** [submake-contrib-spi] Error 2
make[1]: *** [all-test/regress-recurse] Error 2
make: *** [all-src-recurse] Error 2


cheers

Ben

-- 

Ben Madin



m : +61 448 887 220
w : +61 8 7200 7220

e : b...@ausvet.com.au

5 Shuffrey Street, Fremantle
Western Australia

on the web: www.ausvet.com.au




Re: [GENERAL] Isolation of schema renames

2017-08-09 Thread Ben Leslie
On 10 August 2017 at 12:14, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Ben Leslie <be...@benno.id.au> writes:


Thanks for the quick response, I appreciate it.

> I'm wondering if I can/should expect schema renames to be isolated.

Nope, you should not.

That's fine. I think I can achieve what I want using an alternative
approach.


> This is not an especially easy thing to fix, because to have the system
> behave as you wish it did, your second transaction would have to be
> ignoring already-committed DDL changes, and it is very easy to show
> examples where that would be fatal.  For example, consider
>

This is a bit confusing to me, as it seems that for other similar DDL
changes it does behave as I expect, and it's just "ALTER SCHEMA" that is a
bit of an outlier.


> Now, in practice this specific example doesn't go through anyway, because
> after its first insert, S2's transaction is holding a lock on t that
> precludes the ALTER TABLE steps.


Right, this was I guess my expectation, that ALTER SCHEMA would block in a
similar way to how ALTER TABLE blocks.

  But I only wrote it this way for
> pedagogic purposes.  If S2 had done some things but not touched t2 yet,
> the concurrent ALTERs would succeed, and then S2 has no choice but to
> respect the results of that DDL.
>

If I have:

S1                                          S2
begin;                                      begin;

                                            select * from test.test;
                                            (takes AccessShareLock on test.test)

alter table test.test rename to test_old;
   <--- blocks waiting for AccessExclusiveLock on test.test

                                            select * from test.test;
                                            (returns same data as first select)


By contrast:

S1                                          S2
begin;                                      begin;

                                            select * from test.test;
                                            (takes AccessShareLock on test.test)

alter schema test rename to test_old;
   <---- does not block
commit;

                                            select * from test.test;
                                            (fails)

I'm not saying anything should change, but from a naive point of view, I
feel like scenario one above is "doing the right thing" and S2 is getting
repeatable reads as it would expect, and the second scenario seems to be
"doing the wrong thing" because it is no longer getting a repeatable read.

From my point of view it seems like some DDL changes are isolated, but
schema changes are not isolated. [Maybe I'm not using 'isolated' in a
formally correct way here, in this specific case I just mean that S2
doesn't see the changes done in S1, not that both S1 and S2 can make
progress].

I don't know the PostgreSQL internals, but I'm guessing that there must not
be locks on schemas as there are on tables, and/or taking locks on schemas would be
too much overhead to avoid the case shown above (which is fine, I just
wanted to verify this is intended behaviour and make sure I wasn't doing
something too stupid).
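If the blocking behaviour of the first scenario above would be acceptable, one way to approximate it for the schema swap is to take the table lock explicitly before the renames. This is only a sketch using the table names from this thread (it is not something proposed in the thread itself), and it covers only the one table; every table the readers touch would need the same treatment:

BEGIN;
-- Waits for concurrent readers of test.test (AccessShareLock holders) to
-- finish, so no in-flight transaction sees a half-renamed state; new readers
-- queue behind this lock request until the swap commits.
LOCK TABLE test.test IN ACCESS EXCLUSIVE MODE;
ALTER SCHEMA test RENAME TO test_old;
ALTER SCHEMA test_new RENAME TO test;
COMMIT;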

Cheers,

Ben


[GENERAL] Isolation of schema renames

2017-08-09 Thread Ben Leslie
I'm wondering if I can/should expect schema renames to be isolated.

For example, I have two schemas "test" and "test_new". Each with a "test"
table (with same columns, but different data).

In one transaction I'm renaming the schemas test => test_old, test_new =>
test. I.e.:

BEGIN;
ALTER SCHEMA test RENAME TO test_old;
ALTER SCHEMA test_new RENAME TO test;
COMMIT;

In another transaction I do:

BEGIN;
SELECT * FROM test.test;

SELECT * FROM test.test;
COMMIT;

My expectation is that both "SELECT" would return the same data (or that
the transaction performing the rename is blocked until the select
transaction is complete).

In testing this on 9.5.7 the behaviour I see is that the second "SELECT"
returns different data (regardless of the isolation level chosen).

Is it possible to perform this rename in any way?

If not (which I suspect is the case) is this documented somewhere, I
couldn't find it (but that is probably me not looking hard enough / in the
right place).

Thanks,

Ben


Re: [GENERAL] Upgrading postgresql minor version

2017-05-10 Thread Ron Ben
Not possible 
https://www.postgresql.org/download/linux/debian/
 
To upgrade I do: apt-get install postgresql-9.3
There is no way to "roll back" from here.
I cannot choose which version to install; it installs the latest version packaged for Debian. Currently it's 9.3.16.
 
The documentation says that upgrading a minor version is just "replacing the executables", so there has to be a way to save them on the side for rollback.
If not, the documentation is insufficient. We all know that sometimes even the smallest change can cause trouble.

On May 10, 2017 18:02, Andreas Joseph Krogh wrote:

On Wednesday 10 May 2017 at 16:55:50, Ron Ben <ronb...@walla.co.il> wrote:




 
I think you misunderstood me.

pg_dump dumps the data: the tables, functions and the data saved in them.

I have daily backups for this so I'm not worried.

What I'm worried about are the executable files. These files are what is actually being updated when you update the version.

I want to be able to roll them back in case of upgrade failure without rolling back the data.

Basically what I want is to save on the side all the files that will be changed by the update and, in case of failure, just to replace them back.

Is it doable? How?





 

What about just re-installing the previous version the same way you installed the "patch-release"-version?


--
Andreas Joseph Krogh



 




Re: [GENERAL] Upgrading postgresql minor version

2017-05-10 Thread Ron Ben

I think you misunderstood me.
pg_dump dumps the data: the tables, functions and the data saved in them.
I have daily backups for this so I'm not worried.
 
What I'm worried about are the executable files. These files are what is actually being updated when you update the version.
I want to be able to roll them back in case of upgrade failure without rolling back the data.
 
 
Basically what I want is to save on the side all the files that will be changed by the update and, in case of failure, just to replace them back.
Is it doable? How?
On May 10, 2017 16:27, Adrian Klaver wrote:

On 05/10/2017 06:08 AM, Ron Ben wrote:
> I'm about to upgrade my postgresql to the latest 9.3 version
> On my test server everything works.
> However I want to save a backup of my production server before the
> upgrade...
> I'm not sure how I do that.

https://www.postgresql.org/docs/9.3/static/app-pg-dumpall.html

> It says that only system files are changed during upgrade... which
> folders exactly do I need to save on the side for roll back?

I would say that would depend on the nature of the release changes.

> I know I can take a snapshot of my server before the update but I'm
> afraid that rolling back to this snapshot will also roll back the
> data itself?
> Since after the update the users continue to work I don't want to lose
> their data.

Well if the upgrade takes then you don't need to rollback. If it did not
then the users did not get any work done or any data they entered would be
considered suspect. If you are concerned then take another pg_dumpall
before rolling back.

> How do I do that?

--
Adrian Klaver
adrian.kla...@aklaver.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] Upgrading postgresql minor version

2017-05-10 Thread Ron Ben
I'm about to upgrade my postgresql to the latest 9.3 version
 
On my test server everything works.
However I want to save a backup of my production server before the upgrade...
I'm not sure how I do that.
 
It says that only system files are changed during upgrade... which folders exactly do I need to save on the side for rollback?
I know I can take a snapshot of my server before the update, but I'm afraid that rolling back to this snapshot will also roll back the data itself.
Since after the update the users continue to work, I don't want to lose their data.
 
How do I do that?


Re: [GENERAL] How to upgrade PostgreSQL minor version?

2017-04-26 Thread Ron Ben


Hi :)
I do have a test environment and I will test everything there first, but I have had cases where everything worked on test but on production I had difficulties. It's rare but can happen.
Upgrading PostgreSQL isn't like upgrading a package that affects a few functions.
Since the documentation says just to replace the executables, I'm lost...
I noticed that 9.3.5 and 9.3.9 require specific tests. Furthermore, it says specifically not to perform the tests for 9.3.9 with versions prior to 9.3.5 as it might not work: "The lingering effects of a previously-fixed bug in pg_upgrade could also cause such a failure, in installations that had used pg_upgrade versions between 9.3.0 and 9.3.4."  So this means I have to upgrade to 9.3.5 and only then to 9.3.16.
How do I perform the upgrade in steps to specific versions?
 
I'm running 9.3.3 on the production server and on the test server.
The operating system is Ubuntu Server 12.
 
I'm still not sure that I can upgrade directly from 9.3.3 to 9.3.16 - the documentation always says "if you are upgrading from a version earlier than 9.3.X, check 9.3.X-1". It assumes that everyone upgrades as soon as a version is released, which is not always the case. And as mentioned earlier, 9.3.5 and 9.3.9 can create problems as they require specific checks.
 
 
I'd appreciate any insight.
 
 
 
 


On April 26, 2017 13:26, Sameer Kumar wrote:



On Wed, Apr 26, 2017 at 4:41 PM Ron Ben <ronb...@walla.co.il> wrote:


I'm running PostgreSQL 9.3.3 and I want to upgrade it to the latest 9.3 version.
The documentation does not specify what needs to be done other than "just install the executables".
This is weird as, for example, the 9.3.5 release notes request running a specific query to check for leftover pg_multixact files.
 
I saw this question:
https://serverfault.com/questions/563667/update-9-3-x-minor-verison-of-postgres-on-ubuntu-12-04
I'm not sure how this will handle the 9.3.5 check issue.
 
 


 
What is the version of PostgreSQL that you are running?
 
Though there is a general procedure (which is as simple as stop the server, upgrade the binaries, and start), there will also be some peculiar releases carrying a bug-fix. While the bug-fix covers you in the future, there might have been incidents where you were already bitten by the bug being fixed (worse if you have not yet identified it). To cover such cases, you would need to follow some custom procedure. Again, mostly the custom steps are not for patch application but more of a mitigation against the risks of the bug or, in some cases, the bug-fix.
 
Are you running with a standby or just stand-alone setup?
 
 


Is it really just running one command?
What if there are errors and problems? How do I revert back?
Sadly there is not enough information regarding minor version updates...
 
I'm also running Ubuntu Server 12, which will be updated as well later this quarter.


 
Do you have a test environment? If I were you I would have a non-prod that mimics my production setup exactly and I would apply the patch there first.  
 


-- 
Best Regards,
Sameer Kumar | Senior Solution Architect
ASHNIK PTE. LTD.
101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533
T: +65 6438 3504 | www.ashnik.com
Skype: sameer.ashnik |   M: +65 8110 0350

[GENERAL] How to upgrade PostgreSQL minor version?

2017-04-26 Thread Ron Ben
I'm running PostgreSQL 9.3.3 and I want to upgrade it to the latest 9.3 version.
The documentation does not specify what needs to be done other than "just install the executables".
This is weird as, for example, the 9.3.5 release notes request running a specific query to check for leftover pg_multixact files.
 
I saw this question:
https://serverfault.com/questions/563667/update-9-3-x-minor-verison-of-postgres-on-ubuntu-12-04
I'm not sure how this will handle the 9.3.5 check issue.
 
Is it really just running one command?
What if there are errors and problems? How do I revert back?
Sadly there is not enough information regarding minor version updates...
 
I'm also running Ubuntu Server 12, which will be updated as well later this quarter.

Re: [GENERAL] Not sure this should be asked here but...

2017-04-23 Thread Ron Ben
A simple open source forum system could be enough, something like phpBB.
Example: warez-bb.org
 
Installing such a system is about an hour of work.
 
In my point of view something like Stack Overflow is the best, but I'm not sure if it's open source.

On April 22, 2017 22:15, David G. Johnston wrote:

On Sat, Apr 22, 2017 at 12:04 PM, Ron Ben <ronb...@walla.co.il> wrote:
> Why use a mailing list rather than a forum?
> A forum gives much more flexibility, is easier to read and respond to, allows searching for other similar issues, and topics can be tagged...

Was this intended to display irony re: "allows to search for other similar issues"?

https://www.postgresql.org/message-id/CY4PR13MB1751ED37A959456DA0DE115599310%40CY4PR13MB1751.namprd13.prod.outlook.com

"browser interface to forums please?"

David J.



[GENERAL] Not sure this should be asked here but...

2017-04-22 Thread Ron Ben
Why use a mailing list rather than a forum?
A forum gives much more flexibility, is easier to read and respond to, allows searching for other similar issues, and topics can be tagged...
 
 

Re: [GENERAL] Unable to upload backups

2017-04-20 Thread Ron Ben
OK. I think I found a bug in PostgreSQL (9.3).
 
When I do:

 
CREATE ROLE ronb  SUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION;
 
Everything works. I can create the schemas and upload the backup correctly.
 
But if I do:

CREATE ROLE "ronb" LOGIN  NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;GRANT users TO "ronb";
 

CREATE ROLE users   SUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION;
 
 
It doesn't work. The ronb user still can't create schemas, even though the users role gives ronb permission to do so.
The GRANT here is worthless.
 
 
Also, PostgreSQL doesn't show the GRANTs of a role in the same order as they were given (in PgAdmin). It sorts them alphabetically, which is highly confusing!
If, for example, you run "alter table x add column", you know that the new column is added last (if you refresh the table you will see it last).
But if you add another GRANT statement to a user it won't be last... you have no way of knowing the correct order of GRANTs.


On April 19, 2017 17:26, Adrian Klaver wrote:

On 04/19/2017 07:16 AM, Ron Ben wrote:
> Here :)

Thanks.

See my previous response. Basically we need more information before this can be solved.

> I think I may have found the problem.
>
> The role defined as:
>
> CREATE ROLE "ronb" LOGIN
>   NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
> GRANT users TO "ronb";
> GRANT users2 TO "ronb";
>
> users is a group role:
>
> CREATE ROLE users
>   SUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION;
>
> users2 is a group role:
> CREATE ROLE users2
>   NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
> GRANT reports TO users2 ;
>
> I think PostgreSQL doesn't know how to handle these conflicting commands.
> What does PostgreSQL do when such a conflict appears? Does it take the last
> known GRANT command?
>
> Sadly, when there is more than one role it's impossible to know which
> role came first. PostgreSQL shows them alphabetically rather than by date,
> so in case of overlapping instructions it's impossible to know which one
> was first.
>
> [snip - the earlier 17:01 reply is quoted in full in the next message below]
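For what it's worth, the behaviour described above matches the CREATE ROLE documentation Adrian quoted elsewhere in this thread: special role attributes such as SUPERUSER are not inherited through membership, which would explain why the GRANT appears to do nothing here, and why this is documented behaviour rather than a bug. A hedged sketch of two ways around it, using the role and database names from this thread:

-- Option 1: explicitly become the privileged group role for the session
-- (special attributes such as SUPERUSER only take effect after SET ROLE).
SET ROLE users;
-- ... run the restore / CREATE SCHEMA statements here ...
RESET ROLE;

-- Option 2: grant the ordinary, inheritable privilege that creating a schema
-- in this database requires.
GRANT CREATE ON DATABASE foldertest TO ronb;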

Re: [GENERAL] Unable to upload backups

2017-04-19 Thread Ron Ben


Here :)
I think I may have found the problem.
 
The role defined as:
 
CREATE ROLE "ronb" LOGIN  NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;GRANT users TO "ronb";
GRANT users2 TO "ronb";
 
users is a group role:
 
CREATE ROLE users   SUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION;
 
 users2 is a group role:
CREATE ROLE users2   NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
GRANT reports TO users2 ;
 
 
I think PostgreSQL doesn't know how to handle these conflicting commands.
What does PostgreSQL do when such a conflict appears? Does it take the last known GRANT command?

Sadly, when there is more than one role it's impossible to know which role came first. PostgreSQL shows them alphabetically rather than by date, so in case of overlapping instructions it's impossible to know which one was first.

On April 19, 2017 17:01, Adrian Klaver wrote:

On 04/19/2017 06:49 AM, Ron Ben wrote:

Is it possible to get your email program to left justify text on sending? I can figure out the right justified text, it just takes me longer.

> I think I may have found the problem.
>
> The role defined as:
>
> CREATE ROLE "ronb" LOGIN
>   NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
> GRANT users TO "ronb";
> GRANT users2 TO "ronb";
>
> users is a group role:
>
> CREATE ROLE users
>   SUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION;
>
> users2 is a group role:
> CREATE ROLE users2
>   NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
> GRANT reports TO users2 ;

That may or may not be the problem. See:

https://www.postgresql.org/docs/9.6/static/sql-createrole.html

"The INHERIT attribute governs inheritance of grantable privileges (that is, access privileges for database objects and role memberships). It does not apply to the special role attributes set by CREATE ROLE and ALTER ROLE. For example, being a member of a role with CREATEDB privilege does not immediately grant the ability to create databases, even if INHERIT is set; it would be necessary to become that role via SET ROLE before creating a database."

What you show above is part of the answer. The other parts are the actual privileges on the objects. Also the command that created the dump file that you are trying to restore. Permissions/privileges issues can be complex and solving them requires a complete set of information.

> [snip - remainder of quoted text, and the earlier 16:40 reply quoted in full in the next message below]

Re: [GENERAL] Unable to upload backups

2017-04-19 Thread Ron Ben


I think I may have found the problem.
 
The role defined as:
 
CREATE ROLE "ronb" LOGIN  NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;GRANT users TO "ronb";
GRANT users2 TO "ronb";
 
users is a group role:
 
CREATE ROLE users   SUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION;
 
 users2 is a group role:
CREATE ROLE users2   NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
GRANT reports TO users2 ;
 
 
I think PostgreSQL doesn't know how to handle these conflicting commands.
What does PostgreSQL do when such a conflict appears? Does it take the last known GRANT command?

Sadly, when there is more than one role it's impossible to know which role came first. PostgreSQL shows them alphabetically rather than by date, so in case of overlapping instructions it's impossible to know which one was first.

On April 19, 2017 16:40, Adrian Klaver wrote:

On 04/19/2017 03:56 AM, Ron Ben wrote:
> Hi,
> I'm using PostgreSQL 9.3.2
> I'm running the command:
>
> psql -h testserver -U ronb -f backup.sql  -q -d foldertest  2>error.txt >output.txt

What was the command that created backup.sql?

> This should generate my database in foldertest
>
> However this doesn't work. It's unable to create schemas
>
> in the error.txt i see "permission denied for database foldertest".

What user is the foldertest owner?

In psql \l will tell you this.

> I know this is not an access permission issue because there is a public
> schema which is builtin and it does create the tables/data in there.

Because the public schema is by default open to all:

https://www.postgresql.org/docs/9.6/static/ddl-schemas.html

"A user can also be allowed to create objects in someone else's schema. To allow that, the CREATE privilege on the schema needs to be granted. Note that by default, everyone has CREATE and USAGE privileges on the schema public. This allows all users that are able to connect to a given database to create objects in its public schema. ..."

> It just can't create new schemas.

In psql do \dn+, that will show schema owners and who else has privileges.

For what the different privileges are and how they are represented in the above output see:

https://www.postgresql.org/docs/9.6/static/sql-grant.html

> The interesting thing is that if I do:
>
> psql -h testserver -U postgres -f backup.sql  -q -d foldertest  2>error.txt >output.txt
>
> Everything works. It creates all schemas and generates the database correctly.

Because the postgres user is a superuser and can do anything.

> I don't see any difference in the hba.conf between the postgres and ronb users.

That is not the issue. pg_hba determines who can connect; what you are seeing is the Postgres privilege system determining what a user can do once they are connected. If it had been a pg_hba rejection you would have seen something like:

aklaver@tito:~> psql -d production -U guest -h localhost
psql: FATAL:  no pg_hba.conf entry for host "::1", user "guest", database "production", SSL on
FATAL:  no pg_hba.conf entry for host "::1", user "guest", database "production", SSL off

To get an overview of what users there are in your database cluster, in psql do \du

> What can be the problem?

--
Adrian Klaver
adrian.kla...@aklaver.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] Unable to upload backups

2017-04-19 Thread Ron Ben
Hi,
I'm using PostgreSQL 9.3.2
I'm running the command:
 

psql -h testserver -U ronb -f backup.sql  -q -d foldertest  2>error.txt >output.txt
This should generate my database in foldertest
However this doesn't work. It's unable to create schemas.
In error.txt I see "permission denied for database foldertest".
I know this is not an access permission issue because there is a public schema, which is built in, and it does create the tables/data in there.
It just can't create new schemas.
 
The interesting thing is that if I do:
psql -h testserver -U postgres -f backup.sql  -q -d foldertest  2>error.txt >output.txt
 
Everything works. It creates all schemas and generates the database correctly.
I don't see any difference in the hba.conf between the postgres and ronb users.
What can be the problem?


Re: [GENERAL] Nice to have features: Percentage function

2017-04-17 Thread Ron Ben
Hi,
I know I can solve my issue locally, but I think that percentage manipulation is commonly used by many users, and while it's true that everyone can create their own solution locally, it would be nice if PostgreSQL had built-in functions for it.
 
Percentage manipulation is core functionality.
It would be nice to see it listed here:
https://www.postgresql.org/docs/9.5/static/functions-math.html

On April 17, 2017 0:47, Adrian Klaver wrote:

On 04/16/2017 02:33 PM, Michael Nolan wrote:
> I also have some pre-defined percentage functions, they check the
> denominator and return null if it is zero, to avoid 'divide by zero'
> errors.

Are they available somewhere?

My previous examples were more proof of concept than complete.

> --
> Mike Nolan

--
Adrian Klaver
adrian.kla...@aklaver.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] Nice to have features: Percentage function

2017-04-15 Thread Ron Ben
Hi,
I'm always finding myself writing many variations of functions to calculate percentages.
I think it would be nice if PostgreSQL had built-in functions for that.
I think the major functionality is something like the 3 options here:
https://percentagecalculator.net/
 
It may help to keep code simple and clean, and it seems like something simple to implement.
 
If you think it's a good idea it would be nice if someone could implement this.
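In the meantime, a local helper is easy to define. A minimal sketch (the function name is made up; it returns NULL on a zero denominator, along the lines Michael Nolan described elsewhere in this thread):

-- "part is what percent of whole?", returning NULL instead of dividing by zero.
CREATE OR REPLACE FUNCTION pct_of(part numeric, whole numeric)
RETURNS numeric
LANGUAGE sql IMMUTABLE
AS $$
    SELECT CASE WHEN whole = 0 THEN NULL
                ELSE 100.0 * part / whole
           END;
$$;

SELECT pct_of(25, 200);   -- 12.5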

Re: [GENERAL] Request to add feature to the Position function

2017-03-29 Thread Ron Ben
I never expected it to be implemented now. I understand that there are policies and priorities.
I tried to find a feature request channel but there is none...
I don't know how the PostgreSQL team decides what is on the "to do list" and what is not.
This is a feature which I think people will find useful, and it doesn't seem like a lot of work to implement.

On March 28, 2017 19:42, David G. Johnston wrote:

On Tue, Mar 28, 2017 at 7:09 AM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
> On 03/28/2017 12:29 AM, Ron Ben wrote:
>> Here is a reference to the stackoverflow question:
>>
>> http://stackoverflow.com/questions/42974822/find-sub-string-position-from-the-end-of-string-in-postgresql
>>
>> it carries another example and the solution, however I still think that
>> this functionality should be available in a built-in postgresql function.
>
> This would be a new feature and Postgres only introduces new features in new major version releases. The next major release is 10 and it is in the latter stages of development. Not sure if there is a feature freeze on yet or not. In any case it will not be released for production until fall sometime (best guess, I have nothing to do with the release schedule).

Not technically frozen, but if the implementing patch isn't already in the commit-fest system it isn't getting into 10. All feature freeze means at this point is that anything currently in the commit-fest system that doesn't get committed by that time is going to be punted to 11.

> If it did not make it into that release you are looking another year out. So if this is something you need soon, then I would say the solution would be to create your own function using the suggestions provided. If you really want this as a built-in, regardless of when it is done I would suggest either filing a bug:
>
> https://www.postgresql.org/account/login/?next=/account/submitbug/

This is a poor recommendation given that its absence is not a bug.

> or
>
> post to the -hackers list:
>
> https://www.postgresql.org/list/pgsql-hackers/

Maybe not ideal, but we don't have any kind of "feature requests" list so it's either -hackers or -general. I'd even go so far as to say -general is better because you get more eyes on the request for people to give it support and, apparently in this case, finding people to do the leg work of writing a patch.

David J.




[GENERAL] Request to add feature to the Position function

2017-03-27 Thread Ron Ben

Hi,
position(substring in string)
as listed here:
https://www.postgresql.org/docs/9.1/static/functions-string.html
locates sub string in a string.
 
It doesn't support locating the substring from the back.
 
For example:
 
position('om' in 'Tomomas')
gives 2
 
But if I want to locate the first occurrence from the back of the string, it's impossible.
 
My suggestion is to create a function 
position(substring in string,order)
where order can be: begin, end
 
and it will find the string according to this parameter.
This is pretty easy to implement and should be a part of the PostgreSQL tools.
 
Similar functionality exists in the trim function, where the user can specify a leading or trailing parameter.
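Until something like that exists, the last occurrence can already be located with the existing functions. A sketch using reverse() (the arithmetic converts the match position in the reversed string back to a position counted from the front):

-- Start position of the last 'om' in 'Tomomas' (returns 4).
SELECT length('Tomomas') - length('om')
       - position(reverse('om') in reverse('Tomomas')) + 2 AS last_position;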


Re: [GENERAL] Google Cloud Platform, snapshots and WAL

2017-03-20 Thread Ben Chobot

> On Mar 20, 2017, at 6:31 AM, Moreno Andreo  wrote:
> 
> Hi everyone,
> 
>I have my PostgreSQL 9.5 server running on a VM instance on Google Compute 
> Engine (Google Cloud Platform) on Debian Jessie (8.3), and I have another 
> dedicated VM instance that, every night at 3.00, takes a snapshot of the 
> whole disk, without stopping the PG instance itself.
> Snapshots are stored and kept by Google in an incremental way, and we keep 
> the last 2 weeks of history.
> The question is: Keeping all two weeks worth of pg_xlog files, I don't think 
> I still need a periodic pg_basebackup to perform PITR, do I?

You need a base backup to apply your wals to. So long as you have one from 
after the start of your wal stream, you should be good for PITR. That said, 
replaying 2 weeks of wal files can take a long time. For that reason alone, it 
might well make sense to have more than a single basebackup snapshot.

Also, I cannot stress enough how important it is to actually test your recovery 
strategy. Few things are worse than assuming you can recover only to find out 
when you need to that you cannot. Do not let
https://about.gitlab.com/2017/02/10/postmortem-of-database-outage-of-january-31/
happen to you.

Re: [GENERAL] Clustered index to preserve data locality in a multitenant application?

2016-08-31 Thread Ben Chobot
On Aug 31, 2016, at 2:55 PM, Nicolas Grilly  wrote:
> 
> It looks like Instagram has been using pg_reorg (the ancestor of pg_repack) 
> to keep all likes from the same user contiguous on disk, in order to minimize 
> disk seeks.
> 
> http://instagram-engineering.tumblr.com/post/40781627982/handling-growth-with-postgres-5-tips-from
>  
> 
> 
> This is very similar to what I'm trying to achieve.
> 
> The article is 3 years old. I'd be curious to know if they still do that.

If what they did 3 years ago is similar to what you are trying to do today, who 
cares what they are doing today? (Besides using pg_repack instead of pg_reorg, 
of course.)

For what it's worth, we use pg_repack on a regular basis and it works exactly 
as advertised.

Re: [GENERAL] Using a VIEW as a temporary mechanism for renaming a table

2016-06-14 Thread Ben Buckman
Update on this –
Thanks for the feedback.
We just completed this process successfully. One thing we noticed was,
after creating the view and having the code start to read from it instead
of the table, we saw a significant postgres memory spike. It lasted until
we dropped the view and renamed the table a few minutes later. I don't know
exactly why – was it caching the view in memory? – or whether it would have
persisted if we hadn't dropped the view, but FYI if you want to do the same.

Overall I think this was a successful process and I'd do it again if we had
the same need to rename a table.


On Wed, Jun 8, 2016 at 3:55 PM, Berend Tober <bto...@computer.org> wrote:

> Ben Buckman wrote:
>
>> Hello,
>> I would like to rename a table with ~35k rows (on pgsql 9.4), let's say
>> from `oldthings` to `newthings`.
>> Our application is actively reading from and writing to this table, and
>> the code will break if the table name suddenly changes at runtime. So I
>> can't simply run an `ALTER TABLE oldthings RENAME TO newthings`, unless
>> we take downtime, which we'd prefer not to do. (I'd also prefer to avoid
>> a data migration from one table to another, which would require
>> dual-writes or some other way to handle data written during the
>> transition.)
>>
>> It seems that a reasonable approach to do this without downtime, would
>> be to use a temporary VIEW. We can `CREATE VIEW newthings AS SELECT *
>> FROM oldthings;`. Views in pg9.4 that are backed by a single table
>> support writes. So my plan is like this:
>>
>> 1. Create the view, essentially as an alias to the table.
>> 2. In the code, change all references from the old name to the new name.
>> The code would "think" it's using a renamed table, but would really be
>> using a view.
>>(At this point, I expect that all basic CRUD operations on the view
>> should behave as if they were on the table, and that the added
>> performance impact would be negligible.)
>> 3. In a transaction, drop the view and rename the table, so `newthings`
>> is now the original table and `oldthings` no longer exists. (In my
>> testing, this operation took <10ms.)
>>(When this is done, the view will have only existed and been used by
>> the application for a few minutes.)
>>
>> What are people's thoughts on this approach? Is there a flaw or
>> potential danger that I should be aware of? Is there a simpler approach
>> I should consider instead?
>>
>
> I would totally do it this way ... and after creating the view, I'd
> probably leave it as the normal interface. In fact, I've adopted a practice
> of utilizing views as the user interface generally and not exposing the
> actual tables at all.
>
> As you may realize, but I'll point out for completeness, that for more
> complicated situations (i.e, when the view is not just representing a
> single table as your current case), if the view represents a multi-table
> join, you can use triggers to intercept DML on the view and implement logic
> to interact with the multiple underlying tables for inserts and updates.
>
> Additionally, if you don't want to modify the application, consider
> creating the view, using the same original table name but in a separate
> schema and setting the search_path so the the view is found before the
> table. Then you can rename the table, simultaneously redefining the view to
> point the the new table.
>
>
> -- B
>
>
>
>


-- 


[image: Shyp]
*Ben Buckman / Platform Engineering*
M. 415.471.4180
www.shyp.com
Shipping made easy <https://www.shyp.com/>


Re: [GENERAL] Using a VIEW as a temporary mechanism for renaming a table

2016-06-08 Thread Ben Buckman
Oh yeah, the table structure will change after this is all done, but not in
the middle of it. The view would only last a few minutes and maintain the
exact same schema.

Thanks for the tip re: deadlocks, I'll keep that in mind!

Ben

On Wed, Jun 8, 2016 at 1:24 PM, Andy Colson <a...@squeakycode.net> wrote:

> On 6/8/2016 12:57 PM, Ben Buckman wrote:
>
>> Hello,
>> I would like to rename a table with ~35k rows (on pgsql 9.4), let's say
>> from `oldthings` to `newthings`.
>> Our application is actively reading from and writing to this table, and
>> the code will break if the table name suddenly changes at runtime. So I
>> can't simply run an `ALTER TABLE oldthings RENAME TO newthings`, unless
>> we take downtime, which we'd prefer not to do. (I'd also prefer to avoid
>> a data migration from one table to another, which would require
>> dual-writes or some other way to handle data written during the
>> transition.)
>>
>> It seems that a reasonable approach to do this without downtime, would
>> be to use a temporary VIEW. We can `CREATE VIEW newthings AS SELECT *
>> FROM oldthings;`. Views in pg9.4 that are backed by a single table
>> support writes. So my plan is like this:
>>
>> 1. Create the view, essentially as an alias to the table.
>> 2. In the code, change all references from the old name to the new name.
>> The code would "think" it's using a renamed table, but would really be
>> using a view.
>>   (At this point, I expect that all basic CRUD operations on the view
>> should behave as if they were on the table, and that the added
>> performance impact would be negligible.)
>> 3. In a transaction, drop the view and rename the table, so `newthings`
>> is now the original table and `oldthings` no longer exists. (In my
>> testing, this operation took <10ms.)
>>   (When this is done, the view will have only existed and been used by
>> the application for a few minutes.)
>>
>> What are people's thoughts on this approach? Is there a flaw or
>> potential danger that I should be aware of? Is there a simpler approach
>> I should consider instead?
>>
>> Thank you
>>
>>
>> --
>>
>
>
> Oh, one other minor comment.  I usually have a temp schema staging area
> with exact table structures but new data, and when everything is ready I
> run:
>
> start transaction;
>
> drop table public.tableA;
> alter table tmp.tableA set schema public;
>
> ... same for 100 more tables ...
> commit;
>
> 99% of the time it works great, but every once and a while I get a
> deadlock error.  I just re-run it real quick and it works fine.
>
> when you do your drop view, rename table, if you happen to get a deadlock,
> I wouldnt worry too much.  Just re-run it.  Also, I'm still on 9.3 so maybe
> its not as much of a problem anymore.
>
> -Andy
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 


[image: Shyp]
*Ben Buckman / Platform Engineering*
M. 415.471.4180
www.shyp.com
Shipping made easy <https://www.shyp.com/>


Re: [GENERAL] Using a VIEW as a temporary mechanism for renaming a table

2016-06-08 Thread Ben Buckman
Thanks Andy.

My understanding, and please correct me if I'm wrong, is that the view will
effectively inherit the table's constraints, because writes to the view
that can't be written to the table will fail on the table. Re: "will the
data be good data," what risks should I be considering?

In terms of rollout, we would 1) create the view, 2) deploy code that uses
the new [view] name, 3) drop the view and rename the table. Deployments are
"rolling" so there would be no downtime. The app and users shouldn't
notice/care if they're hitting the table or the view.

Thank you


[GENERAL] Using a VIEW as a temporary mechanism for renaming a table

2016-06-08 Thread Ben Buckman
Hello,
I would like to rename a table with ~35k rows (on pgsql 9.4), let's say
from `oldthings` to `newthings`.
Our application is actively reading from and writing to this table, and the
code will break if the table name suddenly changes at runtime. So I can't
simply run an `ALTER TABLE oldthings RENAME TO newthings`, unless we take
downtime, which we'd prefer not to do. (I'd also prefer to avoid a data
migration from one table to another, which would require dual-writes or
some other way to handle data written during the transition.)

It seems that a reasonable approach to do this without downtime, would be
to use a temporary VIEW. We can `CREATE VIEW newthings AS SELECT * FROM
oldthings;`. Views in pg9.4 that are backed by a single table support
writes. So my plan is like this:

1. Create the view, essentially as an alias to the table.
2. In the code, change all references from the old name to the new name.
The code would "think" it's using a renamed table, but would really be
using a view.
  (At this point, I expect that all basic CRUD operations on the view
should behave as if they were on the table, and that the added performance
impact would be negligible.)
3. In a transaction, drop the view and rename the table, so `newthings` is
now the original table and `oldthings` no longer exists. (In my testing,
this operation took <10ms.)
  (When this is done, the view will have only existed and been used by the
application for a few minutes.)
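Spelled out as SQL, steps 1 and 3 of the plan above look like this (a sketch using the example table names):

-- Step 1: alias the table under its future name.
CREATE VIEW newthings AS SELECT * FROM oldthings;

-- (deploy the code that now reads and writes "newthings")

-- Step 3: swap the alias for the real rename in a single transaction.
BEGIN;
DROP VIEW newthings;
ALTER TABLE oldthings RENAME TO newthings;
COMMIT;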

What are people's thoughts on this approach? Is there a flaw or potential
danger that I should be aware of? Is there a simpler approach I should
consider instead?

Thank you


-- 


[image: Shyp]
*Ben Buckman / Platform Engineering*
www.shyp.com
Shipping made easy <https://www.shyp.com/>


Re: [GENERAL] CTE and function

2016-02-25 Thread Ben Primrose
This is my understanding, hopefully someone will chime in if I'm off.  Using
EXECUTE, the SQL is executed in a separate context from the current
statement.  So it's checking for a table with the name you pass, not aliases
within the current statement.

Giving the function another parameter to allow an expression lets this run.
That said, running user-given SQL is very insecure, and this should never be
run in a live database.  I changed the formatting to make it easier (for me)
to read.  Note the two table aliases ("as tab") where the table is passed
in.

CREATE OR REPLACE FUNCTION gini_coefficient(IN table_name text
                                           ,IN table_expression text
                                           ,IN column_name text
                                           ,OUT gini_coefficient DOUBLE PRECISION
                                           ) AS
$$
BEGIN
EXECUTE format('SELECT ((N+1.0)/(N-1.0)) - ((PiXi_sum/(N*(N-1)*u))*2.0) AS gini_coefficient
                  FROM (SELECT SUM(PiXi) AS PiXi_sum
                             , COUNT(*) AS N
                             , (SELECT AVG(%s) FROM %s as tab) AS u
                          FROM (SELECT row_number() OVER() * col AS PiXi
                                  FROM (SELECT %s FROM %s as tab ORDER BY %s DESC) t1
                               ) t2
                       ) t3;'
               , column_name
               , COALESCE (table_name
                          ,'(' || table_expression || ')'
                          )
               , column_name
               , COALESCE (table_name
                          ,'(' || table_expression || ')'
                          )
               , column_name
               )
   INTO gini_coefficient;
END;
$$
LANGUAGE plpgsql;


SELECT gini_coefficient(null
                       ,'SELECT unnest(ARRAY[1,2,3,4]) AS col'
                       ,'col'
                       );


A better solution is (using your original definition for gini_coefficient):

create view tab as SELECT unnest(ARRAY[1,2,3,4]) AS col;
SELECT gini_coefficient('tab','col');

Ben

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Gerhard Wiesinger
Sent: Thursday, February 25, 2016 5:32 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] CTE and function

Hello,

I'm trying to convert a select after a CTE into a function for generic use.
The CTE is normally a complex query but I want to capsulate then the
calculation of the Gini coefficient it into a function:
Based on:
http://www.heckler.com.br/blog/2010/06/15/gini-coeficient-having-fun-in-both-sql-and-python/
Details at: https://en.wikipedia.org/wiki/Gini_coefficient

= OK

WITH tab AS (
   SELECT unnest(ARRAY[1,2,3,4]) AS col
)
SELECT
((N+1.0)/(N-1.0)) - ((PiXi_sum/(N*(N-1)*u))*2.0) AS gini_coefficient
FROM
   ( SELECT
   SUM(PiXi) AS PiXi_sum,
   COUNT(*) AS N,
   (SELECT AVG(col) FROM tab) AS u
 FROM
   ( SELECT
   row_number() OVER() * col AS PiXi
 FROM
   (SELECT col FROM tab ORDER BY col DESC) t1
   ) t2
   ) t3
;


= OK: Create function

CREATE OR REPLACE FUNCTION gini_coefficient(IN table_name text, IN column_name text, OUT gini_coefficient DOUBLE PRECISION) AS $$
BEGIN
EXECUTE format('
SELECT
((N+1.0)/(N-1.0)) - ((PiXi_sum/(N*(N-1)*u))*2.0) AS gini_coefficient
FROM
   ( SELECT
   SUM(PiXi) AS PiXi_sum,
   COUNT(*) AS N,
   (SELECT AVG(%s) FROM %s) AS u
 FROM
   ( SELECT
   row_number() OVER() * col AS PiXi
 FROM
   (SELECT %s FROM %s ORDER BY %s DESC) t1
   ) t2
   ) t3
;
', column_name, table_name, column_name, 

Re: [GENERAL] Is PRIMARY KEY the same as UNIQUE NOT NULL?

2016-02-07 Thread Ben Leslie
On 8 February 2016 at 08:04, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Geoff Winkless <pgsqlad...@geoff.dj> writes:
>> On 31 January 2016 at 19:53, David G. Johnston
>> <david.g.johns...@gmail.com> wrote:
>>> A PRIMARY KEY enforces a UNIQUE, NOT NULL constraint and additionally allows
>> [snip]
>
>> I would just remove the whole paragraph. A primary key does what it
>> does, a unique constraint does what it does. I'm not really sure why
>> you need to link them.
>
> I think it is useful to compare them; the only problem is claiming that
> they're equivalent.
>
> I've applied some doc updates based on this discussion.
>
> http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=c477e84fe2471cb675234fce75cd6bb4bc2cf481
>
> regards, tom lane

Thanks Tom,

With the documentation expressed this way I don't think I would have made
the error I did originally; from my point of view it is a welcome
improvement.

Cheers,

Ben


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Is PRIMARY KEY the same as UNIQUE NOT NULL?

2016-01-30 Thread Ben Leslie
Apologies in advance if this is not the best list for this. Appreciate
a redirection if there is something more appropriate.

The documentation mentions that:

"Technically, PRIMARY KEY is merely a combination of UNIQUE and NOT NULL"

I wanted to clarify if that was, technically, true. I had a table
where I had used "UNIQUE NOT NULL" rather than primary key (not
recommended by the docs, probably not best practice, but I thought, at
the time, if they are equivalent, it should work).

I then needed to use the "Group by can guess some missing columns"
feature described here:

https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.1#SQL_and_PL.2FPgSQL_features

Unfortunately this feature does not seem to work when using a column
that is merely "UNIQUE NOT NULL", however it does seem to work when I
change the columns to "PRIMARY KEY".

My questions are:

1. Is this intended behaviour?
2. Does this mean, technically, that PRIMARY KEY is not merely a
combination of UNIQUE and NOT NULL?

The documentation does clarify with:

"identifying a set of columns as primary key also provides metadata
about the design of the schema, as a primary key implies that other
tables can rely on this set of columns as a unique identifier for
rows."

I had assumed that the "metadata" was schema information that may be
used by external tools or the users of the database, however it seems
that this meta-data is also used at some point in the query engine, so
my assumption that "meta-data" was only for external use seems
incorrect. Are there any other cases that take advantage of PRIMARY
KEY? (Maybe NATURAL joins?).

Thanks,

Ben


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] RAM of Postgres Server

2016-01-07 Thread Ben Chobot
On Jan 7, 2016, at 10:32 PM, Sachin Srivastava  wrote:
> 
> 
> Dear John,
> 
> We are looking at more like 500-600 connections simultaneously in 1 day and I 
> want to say we get 1 to 12000 connections a day per db.


Unless you have 300 cores to service those 500-600 simultaneous connections, 
you are really going to want to put your database behind a connection pooler 
such as pgBouncer. A connection pooler is a proxy that decreases the 
concurrency on the database, letting the database see only a few of the 
connections that clients want to make, and thereby increasing overall query 
throughput.

If it works for your application, pgBouncer has a wonderful mode called 
transaction pooling, which automatically rotates clients into an open database 
slot on transaction boundaries. 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extract giving wrong week of year

2015-09-28 Thread Ben Primrose
Thanks Tom!  I need to read in more detail in the future; I'd hit the
function definition and scanned to find the keyword to get WOY.
I'll switch my code to ISOYEAR per the doc.
Ben

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Friday, September 25, 2015 5:01 PM
To: Ben Primrose
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Extract giving wrong week of year

Ben Primrose <bprimr...@tracelink.com> writes:
> I’m seeing some odd behavior from the extract function.
> Any idea why extract would think January 3rd of next year is the 53rd
> week of the year?

See the fine manual ...
http://www.postgresql.org/docs/9.3/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT

  week

The number of the ISO 8601 week-numbering week of the year. By
definition, ISO weeks start on Mondays and the first week of a year
contains January 4 of that year. In other words, the first Thursday of
a year is in week 1 of that year.

In the ISO week-numbering system, it is possible for early-January
dates to be part of the 52nd or 53rd week of the previous year, and
for late-December dates to be part of the first week of the next
year. For example, 2005-01-01 is part of the 53rd week of year 2004,
and 2006-01-01 is part of the 52nd week of year 2005, while 2012-12-31
is part of the first week of 2013. It's recommended to use the isoyear
field together with week to get consistent results.


Depending on what you want to define a "week" as, you can probably get what
you want from some other EXTRACT() property. "doy" div 7 might help, for
example.
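
A quick illustration for the date in question; pairing isoyear with week
keeps the two consistent:

select extract(isoyear from timestamp '2016-01-03 00:01:00') as iso_year,
       extract(week    from timestamp '2016-01-03 00:01:00') as iso_week;

 iso_year | iso_week
----------+----------
     2015 |       53
(1 row)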

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Extract giving wrong week of year

2015-09-25 Thread Ben Primrose
All,

I’m seeing some odd behavior from the extract function.
Any idea why extract would think January 3rd of next year is the 53rd week
of the year?

Thanks

Ben



[bprimrose@primrose partitioning]$ psql

Null display is "¤".

Line style is unicode.

Output format is wrapped.

Timing is on.

psql (9.3.9)

Type "help" for help.



bprimrose@home=> select extract (week from '2016-01-03
00:01:00'::timestamp);

date_part

───

53

(1 row)



Time: 1.128 ms

bprimrose@home=> select version();


version

─

PostgreSQL 9.3.9 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.9.2
20150212 (Red Hat 4.9.2-6), 64-bit

(1 row)


Re: [GENERAL] pgpass (in)flexibility

2015-09-15 Thread Ben Chobot
On Sep 15, 2015, at 12:27 AM, Jim Nasby <jim.na...@bluetreble.com> wrote:
> 
> On 9/15/15 12:48 AM, Ben Chobot wrote:
>> We're in a situation where we would like to take advantage of the pgpass 
>> hostname field to determine which password gets used. For example:
>> 
>> psql -h prod-server -d foo # should use the prod password
>> psql -h beta-server -d foo # should use the beta password
>> 
>> This would *seem* to be simple, just put "prod-server" or "beta-server" into 
>> the hostname field of .pgpass. But if somebody uses the FQDN of those hosts, 
>> then the line does not match. If somebody uses the IP address of those 
>> hosts, again, no match. It seems that the hostname must match the hostname 
>> *exactly* - or match any host ("*"), which does not work for our use case.
>> 
>> This seems to make the hostname field unnecessarily inflexible. Has anybody 
>> else experienced - and hopefully overcome - this pain? Maybe I'm just going 
>> about it all wrong.
> 
> I don't know of a way around that, but you might be better off using SSL 
> certs to authenticate. I believe there's even something similar to 
> ssh-keychain that would allow you not to store the passphrase on-disk (though 
> you would have to enter it manually on reboot).

Does that solve the "different passwords for different servers" problem, or 
just the "password on disk" problem?



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pgpass (in)flexibility

2015-09-14 Thread Ben Chobot
We're in a situation where we would like to take advantage of the pgpass 
hostname field to determine which password gets used. For example:

psql -h prod-server -d foo # should use the prod password
psql -h beta-server -d foo # should use the beta password

This would *seem* to be simple, just put "prod-server" or "beta-server" into 
the hostname field of .pgpass. But if somebody uses the FQDN of those hosts, 
then the line does not match. If somebody uses the IP address of those hosts, 
again, no match. It seems that the hostname must match the hostname *exactly* - 
or match any host ("*"), which does not work for our use case.

This seems to make the hostname field unnecessarily inflexible. Has anybody 
else experienced - and hopefully overcome - this pain? Maybe I'm just going 
about it all wrong.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] in defensive of zone_reclaim_mode on linux

2015-09-06 Thread Ben Chobot

> On Sep 6, 2015, at 4:07 AM, Andres Freund  wrote:
> 
> To me that sounds like the negative impact of transparent hugepages
> being mitigated to some degree by zone reclaim mode (which'll avoid some
> cross-node transfers).

FWIW:

$ cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]



[GENERAL] in defensive of zone_reclaim_mode on linux

2015-09-04 Thread Ben Chobot
Over the last several months, I've seen a lot of grumbling about how 
zone_reclaim_mode eats babies, kicks puppies, and basically how you should just 
turn it off and live happily ever after. I thought I should add a 
counterexample, because that advice has not proven very good for us.

Some facts about us:
- postgres 9.3.9
- ubuntu trusty kernels (3.13.0-29-generic #53~precise1-Ubuntu) 
- everything in AWS, on 32-core HVM instances with 60GB of ram
- 6GB shared buffers
- mostly simple queries

Generally, this has worked out pretty well for us. However, we've recently 
added a bunch more load, which, because we're sharded and each shard has its 
own user, means we've added more concurrently active users. ("A bunch" = ~300.) 
We are big pgBouncer users, but because we also use transaction pooling, 
pgBouncer can only do so much to reuse existing connections. (SET ROLE isn't an 
option.)

The end result is that recently, we've been running a dumb number of backends 
(between 600 and 1k) - which are *usually* mostly idle, but there are frequent 
spikes of activity when dozens of them wake up at onces. Even worse, those 
spikes tend to also come with connection churn, as pgBouncer tears down 
existing idle connections to build up new backends for different users.

So our load would hover under 10 most of the time, then spike to over 100 for a 
minute or two. Connections would get refused, the system would freeze up... and 
then everything would go back to normal. The solution? Turning on 
zone_reclaim_mode.

It appears that connection churn is far more manageable to Linux with 
zone_reclaim_mode enabled. I suspect that our dearth of large, complex queries 
helps us out as well. Regardless, our systems no longer desperately seek free 
memory when many idle backends wake up while others are getting torn down and 
and replaced. Babies and puppies rejoice. 

Our situation might not apply to you. But if it does, give zone_reclaim_mode a 
chance. It's not (always) as bad as others have made it out to be.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Examples of projects that use Postgres as API server

2014-06-28 Thread Ben Ellis
Hi all,

I've been using Postgres for a small project and I've been very impressed
by its flexibility in defining new types and functions.  I very much like
having the ability to define a clean relational model and then a set of
functions that act as the API to the data stored in my model.

Does anyone know of a project or projects that use Postgres in a fashion
like this?  I'd love to read the schemas of a project like that.

It's entirely possible that I'm just nuts, but it seems like a very good
idea to me-- or at least a happy medium between 'the database is just
tables' and 'cram everything into the database!'

Curiously,

Ben


Re: [GENERAL] Any way to insert rows with ID used in another column

2014-04-05 Thread Ben Hoyt
Thanks for the info, Francisco and Alban -- that looks useful.

Can you see a good way in the INSERT to combine VALUES with that nextval()
subquery, given that some columns are distinct for each row and some are the
same or programmatically generated? For instance, there's a folder column
that's different for each inserted row, so typically I'd specify that
directly in the multiple VALUES rows.
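
Something like this sketch is what I'm picturing (the prefix column and the
exact names here are made up, so treat it as illustrative only):

INSERT INTO images (id, folder, filename)
SELECT v.id,
       v.folder,
       v.prefix || v.id::text || '.jpg'   -- filename built from the new id
FROM (SELECT nextval('images_id_seq') AS id, folder, prefix
      FROM (VALUES ('folder_a', 'foo'),
                   ('folder_b', 'bar')) AS t(folder, prefix)) AS v
RETURNING id, filename;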

-Ben


On Thu, Apr 3, 2014 at 11:17 AM, Francisco Olarte fola...@peoplecall.comwrote:

 ( Forgot to hit reply all, so probably someone will get this twice, sorry
 ).

 Hi:

 On Thu, Apr 3, 2014 at 4:41 PM, Ben Hoyt benh...@gmail.com wrote:
 .
  Is there some way to do something like this:
  INSERT INTO images (filename) VALUES
  ('foo' || image_id_about_to_used_for_this_row::text || '.jpg')
  ('bar' || image_id_about_to_used_for_this_row::text || '.jpg')
  I tried using currval() to see if that'd work, but it gave an error, I
 guess
  because I was using it multiple times per session.

 You normally need to call nextval before currval. Anyway, subqueries
 are your friend:

 psql (9.3.2)
 Type help for help.

 postgres=# create table files ( id serial primary key, file varchar);
 CREATE TABLE
 postgres=# \d+ files
                                       Table "public.files"
  Column |       Type        |                      Modifiers                      | Storage  | Stats target | Description
 --------+-------------------+------------------------------------------------------+----------+--------------+-------------
  id     | integer           | not null default nextval('files_id_seq'::regclass)  | plain    |              |
  file   | character varying |                                                      | extended |              |
 Indexes:
     "files_pkey" PRIMARY KEY, btree (id)
 Has OIDs: no

 postgres=# select nextval('files_id_seq'::regclass) as id  from
 generate_series(1,3);
  id
 ----
   1
   2
   3
 (3 rows)
 postgres=# select newid as id, 'image_'||cast(newid as text)||'.jpg'
 as file from ( select nextval('files_id_seq'::regclass) as newid  from
 generate_series(1,3)) as newids;
  id |    file
 ----+-------------
   4 | image_4.jpg
   5 | image_5.jpg
   6 | image_6.jpg
 (3 rows)

 postgres=# insert into files (id, file) select newid as id,
 'image_'||cast(newid as text)||'.jpg' as file from ( select
 nextval('files_id_seq'::regclass) as newid  from generate_series(1,3))
 as newids returning *;
  id |    file
 ----+-------------
   7 | image_7.jpg
   8 | image_8.jpg
   9 | image_9.jpg
 (3 rows)

 INSERT 0 3
 postgres=# select * from files;
  id |    file
 ----+-------------
   7 | image_7.jpg
   8 | image_8.jpg
   9 | image_9.jpg
 (3 rows)



 Francisco Olarte.


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] Any way to insert rows with ID used in another column

2014-04-03 Thread Ben Hoyt
Hi folks,

We have a table images in our db with id (serial primary key) and filename
columns, where the filename is a unique text column that looks something
like pool-1234.jpg.

The catch is that the 1234 in the filename is the image ID. We want the
filename to include the image ID because it's a nice way of making it
unique and gives the benefit of being able to easily map from the filename
back to the ID for debugging and the like.

Currently I insert new image rows in multiple steps:

1) begin transaction
2) insert a whole bunch of image rows in one multiple-row INSERT, using a
temporary random filename
3) use the RETURNING clause on the above insert to get a mapping between
the database IDs and filenames just inserted
4) loop through all images just inserted, and for each image, execute
UPDATE to set the filename to the real filename which includes the new
image ID
5) commit

This works, but it's pretty cumbersome, and requires N UPDATE statements
which is also slow.

Is there some way to do something like this:

INSERT INTO images (filename) VALUES
('foo' || image_id_about_to_used_for_this_row::text || '.jpg')
('bar' || image_id_about_to_used_for_this_row::text || '.jpg')

I tried using currval() to see if that'd work, but it gave an error, I
guess because I was using it multiple times per session.

Thanks,
Ben


Re: [GENERAL] SSD Drives

2014-04-03 Thread Ben Chobot

On Apr 3, 2014, at 12:47 PM, John R Pierce pie...@hogranch.com wrote:

 On 4/3/2014 9:26 AM, Joe Van Dyk wrote:
 Related, anyone have any thoughts on using postgresql on Amazon's EC2 SSDs?  
 Been looking at 
 http://aws.amazon.com/about-aws/whats-new/2013/12/19/announcing-the-next-generation-of-amazon-ec2-high-i/o-instance
 
 
 if your data isn't very important, by all means, keep it on someone elses 
 virtualized infrastructure with no performance or reliability guarantees.

Well that’s not quite fair. AWS guarantees performance for those instances 
(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/i2-instances.html#i2-instances-diskperf).
 They also guarantee their instances will fail sooner or later, with or without 
warning (at which point you will lose all your data unless you’ve been putting 
copies onto a different system).

[GENERAL] Getting sequence-generated IDs from multiple row insert

2014-03-31 Thread Ben Hoyt
Hi folks,

I've just run into a subtle but fairly serious race condition while using
web.py's SQL library to insert multiple rows into the database and return
their IDs (a serial primary key column). Specifically I'm using the
multiple_insert() function that web.py defines here:

https://github.com/webpy/webpy/blob/master/web/db.py#L793

This function runs a query like this:

INSERT INTO table (col1, col2) VALUES (col1_1, col2_1), (col1_2,
col2_2), ...; SELECT currval('table_id_seq');

Using the output of the currval(), web.py tries to build a list of the most
recent IDs by creating a range from currval - num_rows_inserted + 1
through currval. In Python:

out = range(out-len(values)+1, out+1)

This *looks* nice, and must have seemed fine to the developers who
implemented it, but I've just hit a case where two sessions each doing a
multiple insert don't use sequential IDs. For example, the range code above
for the first insert gave 2117552...2117829, and the second insert gave
2117625...2117818. These ranges obviously overlap, which is a nasty bug
waiting to happen. Thankfully it caused an IntegrityError further down in
my code so I didn't screw things up.

First of all, I presume this is expected, and is how the sequence with a
multi-row insert is supposed to work? In other words, the sequence
guarantees the IDs will be unique, but with multi-row insert, they won't
necessarily be consecutive? If so, it's a fairly serious bug in web.py's
multiple_insert(), which probably shouldn't return anything due to this
issue.

Second, what's the right thing to do here? The first thing I found was
PostgreSQL's RETURNING clause, but somewhat frustratingly for this use
case, even that's not guaranteed to return the results in the order you
specified. I need the IDs in insertion order so I can do further
processing. Tom Lane and others in this thread indicate that this is not a
guarantee of the RETURNING clause, for future optimization reasons and due
to how SQL handles sets:

http://postgresql.1045698.n5.nabble.com/PATCH-Enforce-that-INSERT-RETURNING-preserves-the-order-of-multi-rows-td5728579.html

So currently I've changed my code to use RETURNING and then I'm ordering
the results based on a secondary column that I know the order of. This
works, but seems clunky, so I'm wondering if there's a nicer way.
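
The clunky version at least stays in SQL: wrap the INSERT in a CTE and order
the RETURNING output by the column whose values I supplied. Table and column
names below are made up:

WITH inserted AS (
    INSERT INTO items (batch_position, payload)
    VALUES (1, 'first'), (2, 'second'), (3, 'third')
    RETURNING id, batch_position
)
SELECT id
FROM inserted
ORDER BY batch_position;  -- order comes from my own column, not from RETURNING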

Thanks,
Ben


Re: [GENERAL] Optimizing tables for known queries?

2014-02-10 Thread Ben Chobot

On Feb 9, 2014, at 2:48 PM, John Anderson son...@gmail.com wrote:

 What I'm wondering is if there is a more denormalized view of this type of 
 data that would make those types of queries quicker?

That sounds like a materialized view? 
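
For example, something roughly like this (9.3+; the names are invented):

-- a denormalized rollup, precomputed and stored
CREATE MATERIALIZED VIEW user_event_counts AS
SELECT user_id, count(*) AS event_count
FROM events
GROUP BY user_id;

-- re-run the defining query and replace the stored contents
REFRESH MATERIALIZED VIEW user_event_counts;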



Re: [GENERAL] PG replication across DataCenters

2013-12-09 Thread Ben Chobot
On Dec 9, 2013, at 8:09 AM, Thomas Harold wrote:

 On 11/22/2013 5:57 AM, Albe Laurenz wrote:
 Kaushal Shriyan wrote:
 I have read on the web that Postgresql DB supports replication
 across data centers. Any real life usecase examples if it has been
 implemented by anyone.
 
 Well, we replicate a 1 TB database between two locations. It is a
 fairly active OLTP application, but certainly not pushing the limits
 of what PostgreSQL can do in transactions per second.
 
 
 Something that section 25 in the pgsql documentation is not clear about for 
 hot-standby with WAL log shipping using the built-in streaming:
 
 Can you choose which databases / tables on the master server get streamed to 
 the hot-standby read-only server at the remote site?  If not, I suspect we'll 
 have to go with either Slony or Bucardo.

No, with the built-in binary replication, it's all or nothing, and the slaves 
have to have the exact same schema as the master (no adding or removing 
indices, for example.)

Out of curiosity what did you find unclear about 
http://www.postgresql.org/docs/9.3/static/different-replication-solutions.html?

Re: [GENERAL] 9.1.9 - 9.1.10 causing corruption

2013-11-24 Thread Ben Chobot
For posterity, it appears my issues were 
https://wiki.postgresql.org/wiki/Nov2013ReplicationIssue. 

On Nov 4, 2013, at 3:48 PM, Ben Chobot wrote:

 Anybody? I've tried this again on another streaming replication server, and 
 again had pg_toast errors until I re-basebackup'd it. Does it make sense to 
 try disabling streaming replication for the backup and just use wal replay 
 from the archive? My slave will be a bit behind, but I'll have drained the 
 clients from it anyway, so that's not a big deal for a temporary thing.
 
 On Nov 1, 2013, at 1:44 PM, Ben Chobot wrote:
 
 I've got a bunch of independent database clusters, each with a couple of 
 streaming replication slaves. I'm starting to upgrade them to 9.1.10, but on 
 3 of the 3 I've tried so far, this has somehow resulted in data corruption. 
 I'm hoping it was the upgrade itself that caused the corruption, instead of 
 9.1.10 just exposing some latent corruption, for two reasons:
 1. after doing a basebackup from the master, the same queries that started 
 causing problems on the new 9.1.10 node no longer cause problems
 2. the second 9.1.9 slave of each cluster doesn't have issues that the 
 newly-upgraded 9.1.10 node did have with the same query. 
 
 Interestingly, this seems to only be affecting my replication slaves. We 
 snapshot all our clusters at a filesystem level and restore those snapshots 
 into standalone, non-replicating test environments, and those test 
 environments were all upgraded to 9.1.10 without issue. 
 
 All my servers are running a mostly-stock config of 9.1.9, which has been 
 modified like so:
 
 max_connections = 1000 # we don't really go this high, fwiw
 shared_buffers = 6000MB # 60GB machines
 wal_level = hot_standby
 hot_standby = on
 effective_cache_size = 34278MB
 log_min_duration_statement = 50
 log_duration = on
  
 They store their data on XFS, with the options noatime,allocsize=1m.
 
 As for the errors, on two nodes, a primary key lookup resulted in multiple 
 rows. Interestingly, this was on the same index for both nodes, which I 
 would find unlikely if it was just random corruption or hardware-related, 
 given that the systems have the same schema but independent data and usage. 
 Using pg_reorg on that table on the master node fixed the issue on the node 
 I tried it on, but exposed another problem:
 
 ERROR: unexpected chunk number 3 (expected 1) for toast value 67728890 in 
 pg_toast_2619
 
 The pg_toast error occurred on another node too. Sadly these were 
 production systems and I didn't know if the pg_toast table was the same for 
 all three nodes, and it's quite possible that all three nodes would have 
 exhibited the same set of problems if I'd let them run production queries 
 long enough.
 
 Anyway, naturally this is making me nervous to do more upgrades, so I'm 
 hoping I can figure out if it's the upgrade that's killing me, or if 9.1.10 
 is exposing underlying problems, or what. Unfortunately I have no idea what 
 to look for.
 
 These systems are ubuntu precise, and I did recall this from memory:
 
 http://www.postgresql.org/message-id/21697.1343141...@sss.pgh.pa.us
 
 ... but it appears that the fix for that has already been applied to my 
 postgresql-common package. Looking over the postgres logs before the 
 upgrade, I don't see anything abnormal.
 
 Does this behavior sound familiar to anybody else out there, or does anybody 
 have suggestions of what to look for or try?
 



Re: [GENERAL] 9.1.9 - 9.1.10 causing corruption

2013-11-04 Thread Ben Chobot
Anybody? I've tried this again on another streaming replication server, and 
again had pg_toast errors until I re-basebackup'd it. Does it make sense to try 
disabling streaming replication for the backup and just use wal replay from the 
archive? My slave will be a bit behind, but I'll have drained the clients from 
it anyway, so that's not a big deal for a temporary thing.

On Nov 1, 2013, at 1:44 PM, Ben Chobot wrote:

 I've got a bunch of independent database clusters, each with a couple of 
 streaming replication slaves. I'm starting to upgrade them to 9.1.10, but on 
 3 of the 3 I've tried so far, this has somehow resulted in data corruption. 
 I'm hoping it was the upgrade itself that caused the corruption, instead of 
 9.1.10 just exposing some latent corruption, for two reasons:
 1. after doing a basebackup from the master, the same queries that started 
 causing problems on the new 9.1.10 node no longer cause problems
 2. the second 9.1.9 slave of each cluster doesn't have issues that the 
 newly-upgraded 9.1.10 node did have with the same query. 
 
 Interestingly, this seems to only be affecting my replication slaves. We 
 snapshot all our clusters at a filesystem level and restore those snapshots 
 into standalone, non-replicating test environments, and those test 
 environments were all upgraded to 9.1.10 without issue. 
 
 All my servers are running a mostly-stock config of 9.1.9, which has been 
 modified like so:
 
 max_connections = 1000 # we don't really go this high, fwiw
 shared_buffers = 6000MB # 60GB machines
 wal_level = hot_standby
 hot_standby = on
 effective_cache_size = 34278MB
 log_min_duration_statement = 50
 log_duration = on
  
 They store their data on XFS, with the options noatime,allocsize=1m.
 
 As for the errors, on two nodes, a primary key lookup resulted in multiple 
 rows. Interestingly, this was on the same index for both nodes, which I would 
 find unlikely if it was just random corruption or hardware-related, given 
 that the systems have the same schema but independent data and usage. Using 
 pg_reorg on that table on the master node fixed the issue on the node I tried 
 it on, but exposed another problem:
 
 ERROR: unexpected chunk number 3 (expected 1) for toast value 67728890 in 
 pg_toast_2619
 
 The pg_toast error occurred on another node too. Sadly these were 
 production systems and I didn't know if the pg_toast table was the same for 
 all three nodes, and it's quite possible that all three nodes would have 
 exhibited the same set of problems if I'd let them run production queries 
 long enough.
 
 Anyway, naturally this is making me nervous to do more upgrades, so I'm 
 hoping I can figure out if it's the upgrade that's killing me, or if 9.1.10 
 is exposing underlying problems, or what. Unfortunately I have no idea what 
 to look for.
 
 These systems are ubuntu precise, and I did recall this from memory:
 
 http://www.postgresql.org/message-id/21697.1343141...@sss.pgh.pa.us
 
 ... but it appears that the fix for that has already been applied to my 
 postgresql-common package. Looking over the postgres logs before the upgrade, 
 I don't see anything abnormal.
 
 Does this behavior sound familiar to anybody else out there, or does anybody 
 have suggestions of what to look for or try?



[GENERAL] 9.1.9 - 9.1.10 causing corruption

2013-11-01 Thread Ben Chobot
I've got a bunch of independent database clusters, each with a couple of 
streaming replication slaves. I'm starting to upgrade them to 9.1.10, but on 3 
of the 3 I've tried so far, this has somehow resulted in data corruption. I'm 
hoping it was the upgrade itself that caused the corruption, instead of 9.1.10 
just exposing some latent corruption, for two reasons:
1. after doing a basebackup from the master, the same queries that started 
causing problems on the new 9.1.10 node no longer cause problems
2. the second 9.1.9 slave of each cluster doesn't have issues that the 
newly-upgraded 9.1.10 node did have with the same query. 

Interestingly, this seems to only be affecting my replication slaves. We 
snapshot all our clusters at a filesystem level and restore those snapshots 
into standalone, non-replicating test environments, and those test environments 
were all upgraded to 9.1.10 without issue. 

All my servers are running a mostly-stock config of 9.1.9, which has been 
modified like so:

max_connections = 1000 # we don't really go this high, fwiw
shared_buffers = 6000MB # 60GB machines
wal_level = hot_standby
hot_standby = on
effective_cache_size = 34278MB
log_min_duration_statement = 50
log_duration = on
 
They store their data on XFS, with the options noatime,allocsize=1m.

As for the errors, on two nodes, a primary key lookup resulted in multiple 
rows. Interestingly, this was on the same index for both nodes, which I would 
find unlikely if it was just random corruption or hardware-related, given that 
the systems have the same schema but independent data and usage. Using pg_reorg 
on that table on the master node fixed the issue on the node I tried it on, but 
exposed another problem:

ERROR: unexpected chunk number 3 (expected 1) for toast value 67728890 in 
pg_toast_2619

The pg_toast error occurred on another node too. Sadly these were 
production systems and I didn't know if the pg_toast table was the same for all 
three nodes, and it's quite possible that all three nodes would have exhibited 
the same set of problems if I'd let them run production queries long enough.

Anyway, naturally this is making me nervous to do more upgrades, so I'm hoping 
I can figure out if it's the upgrade that's killing me, or if 9.1.10 is 
exposing underlying problems, or what. Unfortunately I have no idea what to 
look for.

These systems are ubuntu precise, and I did recall this from memory:

http://www.postgresql.org/message-id/21697.1343141...@sss.pgh.pa.us

... but it appears that the fix for that has already been applied to my 
postgresql-common package. Looking over the postgres logs before the upgrade, I 
don't see anything abnormal.

Does this behavior sound familiar to anybody else out there, or does anybody 
have suggestions of what to look for or try?

[GENERAL] pg_dump resulting in excessive memory use by postmaster process

2013-10-21 Thread Ben
We've been using pg_dump and pg_restore for many years now and it has always
worked well for us. However, we are currently undertaking a major db
architecture change to partition our tenant data into separate postgres schemas
instead of storing all data in the public schema. When attempting to perform
a pg_dump with the --schema-only option using our new db architecture it
causes the postmaster process to consume massive amounts of memory until it
is killed off (6GB+ used by a single postmaster process).  Here are the
details:

 

PG version: 9.2.5

Total number of postgres schemas: 248   

Total number of relations across all schemas: 53,154

 

If I perform a --schema-only dump on a DB where there are only 11 schemas it
succeeds with a dump file that is only 7.5mb in size. All of our tenant
schemas have the same exact relations so things should scale linearly when
more tenant schemas exist. 

 

I should also mention that when performing these dumps there is absolutely
no other DB activity occurring. Do you have any ideas why the excessive
memory growth on the postmaster service is occurring when doing a pg_dump? 

 



Re: [GENERAL] pg_dump resulting in excessive memory use by postmaster process

2013-10-21 Thread Ben
When you say self-contained test case, what is it exactly that you're
looking for? A script that builds out a DB with hundreds of
schemas/relations, a pg_basebackup or something else?
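
For instance, a generator along these lines reproduces the general shape of our
schema (the names and counts are arbitrary; roughly 250 schemas x 200 tables):

DO $$
DECLARE
    s int;
    t int;
BEGIN
    FOR s IN 1..250 LOOP
        EXECUTE format('CREATE SCHEMA tenant_%s', s);
        FOR t IN 1..200 LOOP
            EXECUTE format('CREATE TABLE tenant_%s.t%s (id serial PRIMARY KEY, payload text)',
                           s, t);
        END LOOP;
    END LOOP;
END;
$$;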

Thanks!

Ben

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Stephen Frost
Sent: Monday, October 21, 2013 10:37 AM
To: Ben
Cc: 'PG-General Mailing List'
Subject: Re: [GENERAL] pg_dump resulting in excessive memory use by
postmaster process

* Ben (ben.fy...@champsoftware.com) wrote:
 is killed off (6GB+ used by a single postmaster process).  Here are 
 the
[...]
 Total number of relations across all schemas: 53,154
[...]
 I should also mention that when performing these dumps there is 
 absolutely no other DB activity occurring. Do you have any ideas why 
 the excessive memory growth on the postmaster service is occurring when
doing a pg_dump?

At first blush, I'm suspicious of the system catalog cache which is going to
want to get every bit of attribute for every relation involved and it looks
like you've got quite a few here.

It seems like something which we could/should improve upon, but I'm not sure
who/what/when/where/how it'll get better.

Still, would it be possible for you to provide a self-contained test case
which shows this, so we can look at what's happening exactly and verify
where the fault lies?

Thanks,

Stephen



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] async streaming and recovery_target_timeline=latest

2013-07-29 Thread Ben Chobot
On Jul 28, 2013, at 5:29 PM, Amit Langote wrote:

 I think, the WAL recycling on standby names the recycled segments with
 the latest timelineID (in this case it's 0x10) which creates WALs that
 there shouldn't have been like 0010146A0001 instead of
 000F146A0001. This patch recently applied to 9.1.9 (but
 not in any stable release so far) solves this problem as far as I can
 see. Try and see if you can patch it:
 
 http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=424cc31a3785bd01108e6f4b20941c6442d3d2d0

Thanks for the link Amit. That makes for 2 patches we're eagerly awaiting in 
9.1.10.

Re: [GENERAL] async streaming and recovery_target_timeline=latest

2013-07-28 Thread Ben Chobot
Anybody?

On Jul 3, 2013, at 3:23 PM, Ben Chobot wrote:

 We have an async streaming setup using 9.1.9 and 3 nodes - let's call them A, 
 B, and C. A is the master, B and C are slaves. Today, A crashed, so we made B 
 be the master and told C to follow along with the switch by changing the 
  primary_conninfo in its recovery.conf, making sure the history file had made 
 it to the WAL archive, then restarting it. That's worked very well for us in 
 the past, but not so much today. When C came back online, it started 
 complaining about missing WALs:
 
 2013-07-03T21:22:42.441347+00:00 pgdb41-vpc postgres[25779]: [18184-1] 
 db=,user= LOG:  shutting down
 2013-07-03T21:22:42.457728+00:00 pgdb41-vpc postgres[25779]: [18185-1] 
 db=,user= LOG:  database system is shut down
 2013-07-03T21:22:46.852845+00:00 pgdb41-vpc postgres[28942]: [1-1] db=,user= 
 LOG:  database system was shut down in recovery at 2013-07-03 21:22:42 UTC
 2013-07-03T21:22:46.866127+00:00 pgdb41-vpc postgres[28947]: [1-1] 
 db=[unknown],user=[unknown] LOG:  incomplete startup packet
 2013-07-03T21:22:47.368871+00:00 pgdb41-vpc postgres[28942]: [2-1] db=,user= 
 LOG:  restored log file 0010.history from archive
 2013-07-03T21:22:47.413588+00:00 pgdb41-vpc postgres[28956]: [1-1] 
 db=postgres,user=postgres FATAL:  the database system is starting up
 2013-07-03T21:22:47.767182+00:00 pgdb41-vpc postgres[28942]: [3-1] db=,user= 
 LOG:  restored log file 0010.history from archive
 2013-07-03T21:22:47.767289+00:00 pgdb41-vpc postgres[28942]: [4-1] db=,user= 
 LOG:  entering standby mode
 2013-07-03T21:22:47.930394+00:00 pgdb41-vpc postgres[28978]: [1-1] 
 db=postgres,user=postgres FATAL:  the database system is starting up
 2013-07-03T21:22:48.410056+00:00 pgdb41-vpc postgres[28942]: [5-1] db=,user= 
 LOG:  redo starts at 1469/A2604868
 2013-07-03T21:22:48.445921+00:00 pgdb41-vpc postgres[28986]: [1-1] 
 db=postgres,user=postgres FATAL:  the database system is starting up
 2013-07-03T21:22:48.962090+00:00 pgdb41-vpc postgres[28994]: [1-1] 
 db=postgres,user=postgres FATAL:  the database system is starting up
 2013-07-03T21:22:49.477279+00:00 pgdb41-vpc postgres[29020]: [1-1] 
 db=postgres,user=postgres FATAL:  the database system is starting up
 2013-07-03T21:22:49.993021+00:00 pgdb41-vpc postgres[29027]: [1-1] 
 db=postgres,user=postgres FATAL:  the database system is starting up
 2013-07-03T21:22:50.508848+00:00 pgdb41-vpc postgres[29034]: [1-1] 
 db=postgres,user=postgres FATAL:  the database system is starting up
 2013-07-03T21:23:30.651775+00:00 pgdb41-vpc postgres[28942]: [6-1] db=,user= 
 LOG:  consistent recovery state reached at 146A/14FFFA8
 2013-07-03T21:23:30.651805+00:00 pgdb41-vpc postgres[28942]: [7-1] db=,user= 
 LOG:  invalid magic number  in log file 5226, segment 1, offset 5242880
 2013-07-03T21:23:30.653214+00:00 pgdb41-vpc postgres[28917]: [1-1] db=,user= 
 LOG:  database system is ready to accept read only connections
 2013-07-03T21:23:31.123588+00:00 pgdb41-vpc postgres[29754]: [2-1] db=,user= 
 LOG:  streaming replication successfully connected to primary
 2013-07-03T21:23:31.123647+00:00 pgdb41-vpc postgres[29754]: [3-1] db=,user= 
 FATAL:  could not receive data from WAL stream: FATAL:  requested WAL segment 
 0010146A0001 has already been removed
 
 At this point, my understanding of postgres must be wrong, because it appears 
 to me that the slave is looking for WAL 146A/01 because that's where it 
 reached consistent state. However, that was in the previous timeline - we 
 didn't get to the 10 history timeline till 146A/0C:
 
 # cat 0010.history 
 15000F146A000Cno recovery target specified
 
 
 Shouldn't postgres know to be looking for 000F146A0001, not 
 0010146A0001? I'm trying to see what part of our process we 
 have wrong to have ended up in this state but I'm missing it.
 
 
 For what it's worth the new master (node B) certainly seems to have all the 
 WAL files you might expect. Here's some snippets of an ls -l, but all the 
 files are there in between the snippets.
 
 -rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F1469009F
 -rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F146900A0
 -rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F146900A1
 -rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F146900A2
 -rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F146900A3
 -rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F146900A4
 -rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F146900A5
 -rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F146900A6
 .
 .
 .
 -rw--- 1 postgres postgres 16777216 Jul  3 21:15 000F146A
 -rw--- 1 postgres postgres 16777216 Jul  3 21:15 000F146A0001 
 - the consistent state seems to be found here
 -rw--- 1 postgres postgres

[GENERAL] async streaming and recovery_target_timeline=latest

2013-07-03 Thread Ben Chobot
We have an async streaming setup using 9.1.9 and 3 nodes - let's call them A, 
B, and C. A is the master, B and C are slaves. Today, A crashed, so we made B 
be the master and told C to follow along with the switch by changing the 
primary_conninfo in its recovery.conf, making sure the history file had made 
it to the WAL archive, then restarting it. That's worked very well for us in 
the past, but not so much today. When C came back online, it started 
complaining about missing WALs:

2013-07-03T21:22:42.441347+00:00 pgdb41-vpc postgres[25779]: [18184-1] 
db=,user= LOG:  shutting down
2013-07-03T21:22:42.457728+00:00 pgdb41-vpc postgres[25779]: [18185-1] 
db=,user= LOG:  database system is shut down
2013-07-03T21:22:46.852845+00:00 pgdb41-vpc postgres[28942]: [1-1] db=,user= 
LOG:  database system was shut down in recovery at 2013-07-03 21:22:42 UTC
2013-07-03T21:22:46.866127+00:00 pgdb41-vpc postgres[28947]: [1-1] 
db=[unknown],user=[unknown] LOG:  incomplete startup packet
2013-07-03T21:22:47.368871+00:00 pgdb41-vpc postgres[28942]: [2-1] db=,user= 
LOG:  restored log file 0010.history from archive
2013-07-03T21:22:47.413588+00:00 pgdb41-vpc postgres[28956]: [1-1] 
db=postgres,user=postgres FATAL:  the database system is starting up
2013-07-03T21:22:47.767182+00:00 pgdb41-vpc postgres[28942]: [3-1] db=,user= 
LOG:  restored log file 0010.history from archive
2013-07-03T21:22:47.767289+00:00 pgdb41-vpc postgres[28942]: [4-1] db=,user= 
LOG:  entering standby mode
2013-07-03T21:22:47.930394+00:00 pgdb41-vpc postgres[28978]: [1-1] 
db=postgres,user=postgres FATAL:  the database system is starting up
2013-07-03T21:22:48.410056+00:00 pgdb41-vpc postgres[28942]: [5-1] db=,user= 
LOG:  redo starts at 1469/A2604868
2013-07-03T21:22:48.445921+00:00 pgdb41-vpc postgres[28986]: [1-1] 
db=postgres,user=postgres FATAL:  the database system is starting up
2013-07-03T21:22:48.962090+00:00 pgdb41-vpc postgres[28994]: [1-1] 
db=postgres,user=postgres FATAL:  the database system is starting up
2013-07-03T21:22:49.477279+00:00 pgdb41-vpc postgres[29020]: [1-1] 
db=postgres,user=postgres FATAL:  the database system is starting up
2013-07-03T21:22:49.993021+00:00 pgdb41-vpc postgres[29027]: [1-1] 
db=postgres,user=postgres FATAL:  the database system is starting up
2013-07-03T21:22:50.508848+00:00 pgdb41-vpc postgres[29034]: [1-1] 
db=postgres,user=postgres FATAL:  the database system is starting up
2013-07-03T21:23:30.651775+00:00 pgdb41-vpc postgres[28942]: [6-1] db=,user= 
LOG:  consistent recovery state reached at 146A/14FFFA8
2013-07-03T21:23:30.651805+00:00 pgdb41-vpc postgres[28942]: [7-1] db=,user= 
LOG:  invalid magic number  in log file 5226, segment 1, offset 5242880
2013-07-03T21:23:30.653214+00:00 pgdb41-vpc postgres[28917]: [1-1] db=,user= 
LOG:  database system is ready to accept read only connections
2013-07-03T21:23:31.123588+00:00 pgdb41-vpc postgres[29754]: [2-1] db=,user= 
LOG:  streaming replication successfully connected to primary
2013-07-03T21:23:31.123647+00:00 pgdb41-vpc postgres[29754]: [3-1] db=,user= 
FATAL:  could not receive data from WAL stream: FATAL:  requested WAL segment 
0010146A0001 has already been removed

At this point, my understanding of postgres must be wrong, because it appears 
to me that the slave is looking for WAL 146A/01 because that's where it reached 
consistent state. However, that was in the previous timeline - we didn't get to 
the 10 history timeline till 146A/0C:

# cat 0010.history 
15  000F146A000Cno recovery target specified


Shouldn't postgres know to be looking for 000F146A0001, not 
0010146A0001? I'm trying to see what part of our process we have 
wrong to have ended up in this state but I'm missing it.


For what it's worth the new master (node B) certainly seems to have all the WAL 
files you might expect. Here's some snippets of an ls -l, but all the files are 
there in between the snippets.

-rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F1469009F
-rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F146900A0
-rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F146900A1
-rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F146900A2
-rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F146900A3
-rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F146900A4
-rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F146900A5
-rw--- 1 postgres postgres 16777216 Jul  3 21:13 000F146900A6
.
.
.
-rw--- 1 postgres postgres 16777216 Jul  3 21:15 000F146A
-rw--- 1 postgres postgres 16777216 Jul  3 21:15 000F146A0001 
- the consistent state seems to be found here
-rw--- 1 postgres postgres 16777216 Jul  3 21:15 000F146A0002
-rw--- 1 postgres postgres 16777216 Jul  3 21:15 000F146A0003
-rw--- 1 

Re: [GENERAL] 2 postgresql server on the same station : conflict?

2013-05-28 Thread Ben Chobot
On May 28, 2013, at 2:54 AM, image wrote:

 Hello,
 
 On the same station, I have 2 postgresql servers: one for my postgis db
 (v9.1) and another installed with OpenERP 7 (9.2). Unfortunately, I noticed
 I'm obliged to stop the service for my postgresql postgis (9.1) in order to use
 openerp7 (postgres 9.2). I'm surprised about that. Is there a way to
 use these two servers at the same time?

Sure - configure them to use different ports. In postgresql.conf, see the 
port parameter.

Note that if performance is an issue, these two clusters are going to compete 
for resources when running simultaneously. 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Basic question on recovery and disk snapshotting

2013-04-30 Thread Ben Chobot
On Apr 27, 2013, at 10:40 AM, Yang Zhang wrote:

 My question really boils down to: if we're interested in using COW
 snapshotting (a common feature of modern filesystems and hosting
 environments), would we necessarily need to ensure the data and
 pg_xlog are on the same snapshotted volume?  If not, how should we be
 taking the snapshots - should we be using pg_start_backup() and then
 taking the snapshot of one before the other?  (What order?)  What if
 we have tablespaces, do we take snapshots of those, followed by the
 cluster directory, followed by pg_xlog?

We do this, using xfs to take advantage of being able to freeze the filesystem. 
(Because we're also using software raid.) The process looks like:

1. pg_start_backup()
2. xfs_freeze both the data and xlog filesystems.
3. snapshot all volumes.
4. unfreeze
5. stop backup
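
In rough terms (the backup label and mount points are just placeholders; steps
2-4 are run from the shell, shown here as comments):

SELECT pg_start_backup('fs-snapshot');   -- 1. mark the start of a base backup
-- 2. xfs_freeze -f /pgdata ; xfs_freeze -f /pgxlog
-- 3. snapshot all volumes
-- 4. xfs_freeze -u /pgxlog ; xfs_freeze -u /pgdata
SELECT pg_stop_backup();                 -- 5. finish the backup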



Re: [GENERAL] Hosting PG on AWS in 2013

2013-04-08 Thread Ben Chobot
On Apr 8, 2013, at 2:15 AM, Vincent Veyron wrote:

 Could someone explain to me the point of using an AWS instance in the
 case of the OP, whose site is apparently very busy, versus renting a
 bare metal server in a datacenter?

Well, at least in my experience, you don't go to AWS because the databases 
there are awesome. You go to AWS because you have highly cyclical load 
patterns, can't predict your future capacity needs, tend to have very large 
batch jobs, etc. So then you have most of your servers living in AWS, and if 
you need low latencies to your database (which most people do) then it often 
makes sense to try to make your database live in AWS as well, instead of 
putting it a VPN hop away.

I wouldn't claim that AWS is the best place to run a database, but for running 
a service, of which a database is just one part, you could do a lot worse if 
you do it right.

Re: [GENERAL] Hosting PG on AWS in 2013

2013-04-07 Thread Ben Chobot

On Apr 6, 2013, at 6:51 PM, David Boreham wrote:

 First I need to say that I'm asking this question on behalf of a friend, 
 who asked me what I thought on the subject -- I host all the databases 
 important to me and my livelihood, on physical machines I own outright. That 
 said, I'm curious as to the current thinking on a) whether it is wise, and b) 
 if so how to deploy, PG servers on AWS. As I recall, a couple years ago it 
 just wasn't a wise plan because Amazon's I/O performance and reliability 
 wasn't acceptable. Perhaps that's no longer the case..

Tomas gave you a pretty good run-down, but I should just emphasize that you need 
to view AWS instances as disposable, if only because that's how Amazon views 
them. You have multiple AZs in every region; use them for replication, 
because it's only a matter of time before your master DB goes offline (or the 
entire AZ it's in does). So script up your failover and have it ready to run, 
because you will need to do it. Also, copy data to another region and have a DR 
plan to fail over to it, because history shows AZ aren't always as independent 
as Amazon intends. 

Of course, these are things you should do regardless of if you're in AWS or 
not, but AWS makes it more necessary. (Which arguably pushes you to have a more 
resilient service.)

Also, if you go the route of CC-sized instances, you don't need to bother with 
EBS optimization, because the CC instances have 10Gb network links already. 

Also, if you go the ephemeral instance route, be aware that an instance 
stop/start (not reboot) means you lose your data. There are still too many 
times where we've found an instance needs to be restarted, so you need to be 
really, really ok with your failover if you want those local SSDs. I would say 
synchronous replication would be mandatory. 


Overall I won't say that you can get amazing DB performance inside AWS, but you 
can certainly get reasonable performance with enough PIOPs volumes and memory, 
and while the on-demand cost is absurd compared to what you can build with bare 
metal, the reserved-instance cost is more reasonable (even if not cheap). 

[GENERAL] speeding up ALTER ... SET NOT NULL

2013-03-11 Thread Ben Chobot
I'm in an unfortunate position of needing to add a unique, not null index to a 
very large table with heavy churn. Without having much impact, I can add a NULL 
column that reads default values from a sequence for new rows, and then do 
batch updates over time to fill in the old values, but then I hit a big wall 
of locking when updating that column to be NOT NULL. 

If I were to make a partial index on my new, sequence-fed column where value 
is null (concurrently, of course), is postgres smart enough to use that index 
when checking existing values for the ALTER command? Or even any index (I'll 
have to make a unique one eventually anyway).
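
For reference, the sequence I have in mind looks roughly like this (all names
are placeholders):

CREATE SEQUENCE big_table_new_id_seq;
ALTER TABLE big_table ADD COLUMN new_id bigint;            -- instant: no default, all NULLs
ALTER TABLE big_table ALTER COLUMN new_id
    SET DEFAULT nextval('big_table_new_id_seq');           -- only affects new rows
CREATE INDEX CONCURRENTLY big_table_new_id_null_idx
    ON big_table (new_id) WHERE new_id IS NULL;            -- the partial index in question
-- ...backfill existing rows in batches...
ALTER TABLE big_table ALTER COLUMN new_id SET NOT NULL;    -- this is where I hit the locking wall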

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Splitting Postgres into Separate Clusters?

2013-03-11 Thread Ben Chobot
On Mar 9, 2013, at 11:54 AM, Paul Jungwirth wrote:

 Hello,
 
 I'm running a specialized search engine that indexes a few tens of millions 
 of web pages, keeping everything in Postgres, and one problem I'm starting to 
 see is poor cache hit rates. My database has two or three tables just for the 
 text of the scraped pages, with one row every time a page is scraped and a 
 `text` column for the HTML. These tables are almost-but-not-quite write-only. 
 They are only read by one job, which uses them to create other objects in the 
 system. I'd like the rest of the database to be in-memory all the time, but I 
 don't really care if I have to read these tables from disk. To keep my 
 problem tables from dragging down performance on the rest of the system, I'm 
 thinking of splitting them out. I suppose I could save the HTML outside of 
 Postgres entirely, but I'm wondering if a nicer solution would be to keep 
 these tables in a separate cluster (i.e. 
 /var/lib/postgresql/9.1/something_else -- I hope this is the right 
 terminology). Then I could tune that cluster differently from the main 
 cluster, or even put it on a different machine. And I could still use dblink 
 to query both clusters conveniently (I think; this isn't really that 
 important). Does this seem like a worthwhile approach? Is there a better way 
 to deal with a few out-sized tables?

Well, what problem exactly are you trying to solve? Having large tables itself 
isn't a problem, but it often tends to imply other things that might be 
problematic:

- large vacuum or analyze times. (Probably fixable in your case with per-table 
autovacuum thresholds.)
- large disk usage. (Probably fixable in your case with tablespaces.)
- slow selects or updates. (Probably fixable in your case with partitioning.)
- blowing out your buffer cache with useless dirty pages - this is where you 
might want to look into separate servers.
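
The first two are cheap to try, for example (table and path names invented):

-- vacuum/analyze the big table more aggressively than the cluster default
ALTER TABLE scraped_pages
    SET (autovacuum_vacuum_scale_factor = 0.02,
         autovacuum_analyze_scale_factor = 0.02);

-- park it on its own storage without splitting the cluster
CREATE TABLESPACE bulk_html LOCATION '/mnt/bigdisk/pg_bulk';
ALTER TABLE scraped_pages SET TABLESPACE bulk_html;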

[GENERAL] bug, bad memory, or bad disk?

2013-02-14 Thread Ben Chobot
We have a Postgres server (PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, 
compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit) which does 
streaming replication to some slaves, and has another set of slaves reading the 
wal archive for wal-based replication. We had a bit of fun yesterday where, 
suddenly, the master started spewing errors like:

2013-02-13T23:13:18.042875+00:00 pgdb18-vpc postgres[20555]: [76-1]  ERROR:  
invalid memory alloc request size 1968078400
2013-02-13T23:13:18.956173+00:00 pgdb18-vpc postgres[23880]: [58-1]  ERROR:  
invalid page header in block 2948 of relation 
pg_tblspc/16435/PG_9.1_201105231/188417/56951641
2013-02-13T23:13:19.025971+00:00 pgdb18-vpc postgres[25027]: [36-1]  ERROR:  
could not open file pg_tblspc/16435/PG_9.1_201105231/188417/58206627.1 
(target block 3936767042): No such file or directory
2013-02-13T23:13:19.847422+00:00 pgdb18-vpc postgres[28333]: [8-1]  ERROR:  
could not open file pg_tblspc/16435/PG_9.1_201105231/188417/58206627.1 
(target block 3936767042): No such file or directory
2013-02-13T23:13:19.913595+00:00 pgdb18-vpc postgres[28894]: [8-1]  ERROR:  
could not open file pg_tblspc/16435/PG_9.1_201105231/188417/58206627.1 
(target block 3936767042): No such file or directory
2013-02-13T23:13:20.043527+00:00 pgdb18-vpc postgres[20917]: [72-1]  ERROR:  
invalid memory alloc request size 1968078400
2013-02-13T23:13:21.548259+00:00 pgdb18-vpc postgres[23318]: [54-1]  ERROR:  
could not open file pg_tblspc/16435/PG_9.1_201105231/188417/58206627.1 
(target block 3936767042): No such file or directory
2013-02-13T23:13:28.405529+00:00 pgdb18-vpc postgres[28055]: [12-1]  ERROR:  
invalid page header in block 38887 of relation 
pg_tblspc/16435/PG_9.1_201105231/188417/58206627
2013-02-13T23:13:29.199447+00:00 pgdb18-vpc postgres[25513]: [46-1]  ERROR:  
invalid page header in block 2368 of relation 
pg_tblspc/16435/PG_9.1_201105231/188417/60418945

There didn't seem to be much correlation to which files were affected, and this 
was a critical server, so once we realized a simple reindex wasn't going to 
solve things, we shut it down and brought up a slave as the new master db.

While that seemed to fix these issues, we soon noticed problems with missing 
clog files. The missing clogs were outside the range of the existing clogs, so 
we tried using dummy clog files. It didn't help, and running pg_check we found 
that one block of one table was definitely corrupt. Worse, that corruption had 
spread to all our replicas.

I know this is a little sparse on details, but my questions are:

1. What kind of fault should I be looking to fix? Because it spread to all the 
replicas, both those that stream and those that replicate by replaying wals in 
the wal archive, I assume it's not a storage issue. (My understanding is that 
streaming replicas stream their changes from memory, not from wals.) So that 
leaves bad memory on the master, or a bug in postgres. Or a flawed 
assumption... :)

2. Is it possible that the corruption that was on the master got replicated to 
the slaves when I tried to cleanly shut down the master before bringing up a 
new slave as the new master and switching the other slaves over to replicating 
from that?




Re: [GENERAL] Restoring a database dump from 9.0 to 9.2

2013-02-08 Thread Ben Madin
If Tom's suggestion doesn't work, can you do your text dump by schema,
or for a subset of tables, and see if you can isolate the problem
table (using the -n or -t options)?

Have you changed the locale / language settings between db versions?
If you find a quoting problem in a very large table, you can run it
through sed to quote the offending bits.

If you can dump from 9.0 and restore into 9.1, can you dump from 9.1
and try restoring it into 9.2?

Cheers

Ben



-- 

Ben Madin

t: +61 8 6102 5535
m: +61 448 887 220

Sent from my iPhone, hence the speling...

On 09/02/2013, at 4:46, Jay McGaffigan hooligan...@gmail.com wrote:

 Hi,
  I've been trying to restore a fairly sizeable database dump from my 
 production server onto my dev box.
 Recently upgraded to 9.2.2 and wanted to try it out.

 So I grabbed a text dump of the database and tried the createdb dbname; psql 
 < dmpfile way of restoring that's always worked for me before upgrading my 
 dev box, and I'm getting errors on import.  Some of my columns have 'rich 
 text' (carriage returns, XML and other markup) in it and I suspect they are 
 causing the issues (basically the errors I'm seeing seem to imply that the 
 text formatting is getting out of wack I'm suspecting due to carriage returns 
 embedded in the dump file).This causes lots of errors as the processing 
 of the file is now out of sync.

 I had been able to load this same file under PSql 9.1.

 If I get a binary dump file, which I need to restore with something like 
 pg_restore, it runs for over 12 hrs, locks up my mac and uses all system 
 memory (I've 16G RAM on my system).

 This db is like 30G in size.

 Anyone have any debugging advice?  I'm thinking if I can use the text based 
 dump that is created with proper escaping then things might work.  But so far 
 reading documentation I haven't really figured out if this is a viable path.

 If this is not a good way to do it I'm open for any and all suggestions.

 Thanks!
 Jay


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: invalid input syntax for integer:

2013-02-06 Thread Ben Madin
Thanks Tom,

On 2013-02-06, at 13:42 , Tom Lane t...@sss.pgh.pa.us wrote:

 The only part of this query that looks like it could possibly produce
 that error is the res8.resultvalue-to-int cast:

 LEFT JOIN results res8 ON res8.reportid = rep.id AND res8.resulttypeid = 108 
 AND res8.del = false
 LEFT JOIN resultlookup rlu8 ON rlu8.resulttypesid = 108 AND rlu8.id = 
 res8.resultvalue::int

 ^
 Presumably, there are some empty strings in results.resultvalue, and if
 the query happens to try to compare one of them to rlu8.id, kaboom.


Yes - this would be the case if it tried to match it against the resultvalue 
only - some of the values in the table are NULL, but not for this resulttypeid.

So my understanding, working left to right, was that the res8 table rows should
be limited to those rows which have a resulttypeid = 108. These all have
numeric values, viz.:

select distinct resultvalue from results where resulttypeid  = 108 order by 
resultvalue; 
 resultvalue 
-
   932.0
   933.0
   934.0
   935.0
   936.0
   937.0
   938.0
   939.0
   940.0
  3224.0
(10 rows)

and it should then be only these rows that are joined to the resultlookup 
table… but it seems that the rlu8.id = res8.resultvalue is being done first.

Can I prevent that? Using a subquery, or some other approach?

 The way that the error comes and goes depending on seemingly-irrelevant
 changes isn't too surprising.  Probably what's happening is that the
 query plan changes around so that that test occurs earlier or later
 relative to other join clauses.

That might just be it - the query explain is different for the same query on 
each machine. 

Just to confuse the issue, if I take the resultlookup table out completely, I 
still get the same error. So maybe it isn't that join at all that is raising 
the error.

If I take the results table out… it works (the commented code below being the
change).

SELECT rep.id, --res.reportid, 
round(st_distance_sphere('010120BB108716D9CEF7A36240643BDF4F8DA741C0', 
post.the_point)/1000) as dist 
FROM reports rep 
LEFT JOIN users u ON rep.link = u.id 
LEFT JOIN postcodes post ON u.postcode::integer = post.postcode  
LEFT JOIN species spe ON rep.species::text like spe.speccode::text AND 
spe.synonym = 0  
--LEFT JOIN results res ON rep.id = res.reportid  AND res.resulttypeid = 108 
AND res.del is false
WHERE rep.del IS false AND rep.projectid = 51
AND round(st_distance_sphere( 
'010120BB108716D9CEF7A36240643BDF4F8DA741C0', post.the_point)/1000) < 150
AND spe.id = 9465;

I'm really not sure what to do here.

cheers

Ben




-- 

Ben Madin

m : +61 448 887 220
e : b...@ausvet.com.au


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: invalid input syntax for integer:

2013-02-06 Thread Ben Madin
Thank you to all for your help on this problem. I've summarised the resolution 
in the hope that it might help someone else.

With all the advice I have gone forward and discovered that the issue related
to a postcode anomaly. A client had provided a new postbox postcode (the
application normally prevents this for postboxes because we can't locate
properties, but because it was new - and our database didn't have a record of
it - this check had been bypassed). This meant there was no geometry associated
with the postcode when it was joined to the postcodes table (which has
varchars for postcodes, because in Australia some postcodes begin with 0, which
needs to be printed to allow automatic sorting) during the distance-checking
function (which looked like this in pl/pgsql):

round(st_distance_sphere( '$$ || pccentre || $$', post.the_point)/1000)

If a geometry is NULL, the st_distance_sphere postgis function returned NULL. 

NULL/1000 = NULL

round(NULL) = NULL

AND NULL < 150 = NULL

so the predicate probably looks like:

AND round(NULL/1000) < 150

AND NULL, so no row returned.

This can't be used in a comparison, so to get around this (thanks Tom) :

coalesce(round(st_distance_sphere( '$$ || pccentre || $$',
post.the_point)/1000),0) < $$ || quote_literal(distance);

which works - problem no longer being seen.
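
The same propagation is easy to reproduce in isolation (a minimal sketch with
placeholder values, not the production query):

SELECT NULL::numeric / 1000;                            -- NULL
SELECT round(NULL::numeric / 1000);                     -- NULL
SELECT round(NULL::numeric / 1000) < 150;               -- NULL, so the row is dropped
SELECT coalesce(round(NULL::numeric / 1000), 0) < 150;  -- true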

My final thought relates to the message:

ERROR:  invalid input syntax for integer: ''

The '' suggests (I don't think I was the only one who thought this) that we 
were looking for a string comparison. I guess the NULL value is in there 
between the quotes.

cheers

Ben








On 2013-02-07, at 00:01 , Tom Lane t...@sss.pgh.pa.us wrote:

 Ben Madin b...@ausvet.com.au writes:
 On 2013-02-06, at 13:42 , Tom Lane t...@sss.pgh.pa.us wrote:
 The only part of this query that looks like it could possibly produce
 that error is the res8.resultvalue-to-int cast:
 
 LEFT JOIN results res8 ON res8.reportid = rep.id AND res8.resulttypeid = 
 108 AND res8.del = false
 LEFT JOIN resultlookup rlu8 ON rlu8.resulttypesid = 108 AND rlu8.id = 
 res8.resultvalue::int
 
 ^
 Presumably, there are some empty strings in results.resultvalue, and if
 the query happens to try to compare one of them to rlu8.id, kaboom.
 
 Yes - this would be the case if it tried to match it against the resultvalue 
 only - some of the values in the table are NULL, but not for this 
 resulttypeid.
 
 NULLs are not the problem (casting a NULL to anything is still a NULL).
 The problem you've got is with empty strings, which are not at all the
 same thing, even if Oracle can't tell the difference.
 
 So my understanding, working left to right was that the res.8 table rows 
 should be limited to those rows which have a resulttypeid = 108.
 
 Please recall the section in the fine manual where it points out that
 WHERE clauses are not evaluated left-to-right.  In the case at hand
 I think the planner may be able to rearrange the join order, such that
 the rlu8 join is done first.  Now, having said that, I'm not real sure
 why the res8.resulttypeid = 108 clause couldn't be applied at scan level
 not join level.  But you really need to be looking at EXPLAIN output
 rather than theorizing about what order the clauses will be checked in.
 
 I'm really not sure what to do here.
 
 You need to make sure the join clause is safe to evaluate for any data
 present in the table.  The first question I'd ask is why isn't
 resultvalue of a numeric type to start with --- this whole problem
 smells of crummy schema design.  Or at least, why can't you use NULL
 for the offending values instead of empty strings.  If you really can't
 fix the data representation, you need to complicate the join clause to
 make it not try to convert non-integral strings to ints.  One possible
 solution is nullif(res8.resultvalue, '')::int, if empty strings are
 the only hazard.  If they're not, you could do something with a CASE
 expression using a regex test on the string...
 
   regards, tom lane


-- 

Ben Madin

t : +61 8 6102 5535
m : +61 448 887 220
e : b...@ausvet.com.au

AusVet Animal Health Services
P.O. Box 5467
Broome   WA   6725
Australia

AusVet's website:  http://www.ausvet.com.au

This transmission is for the intended addressee only and is confidential 
information. If you have received this transmission in error, please delete it 
and notify the sender. The contents of this email are the opinion of the writer 
only and are not endorsed by AusVet Animal Health Services unless expressly 
stated otherwise. Although AusVet uses virus scanning software we do not accept 
liability for viruses or similar in any attachments. Thanks for reading.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] ERROR: invalid input syntax for integer:

2013-02-05 Thread Ben Madin
G'day,

I hope to be shown to be an idiot, but we are receiving the message 

ERROR:  invalid input syntax for integer: 

when using a pl/pgsql function with some quite complex queries that seem to be
working on a developer machine using postgresql 9.1.6, but not on the
production machine using 9.1.7.

The source of our confusion stems from the fact that the offending line
(being the join predicate that, if removed, allows the query to work) is
comparing two values in two tables:

...
FROM reports rep
LEFT JOIN results res 
ON res.reportid = rep.id    <== this line is causing the error to be
returned
AND res.resulttypeid = 108 
AND res.del = false
…

I have included the full query executed by the function at the bottom of the 
email.

 In the first it is an integer primary key, in the second a not null integer, 
as shown below:

                              Table "data.reports"
  Column   |  Type   |                      Modifiers
-----------+---------+-------------------------------------------------------
 id        | integer | not null default nextval('reports_id_seq'::regclass)
 projectid | integer |
…
Indexes:
    "reports_pkey" PRIMARY KEY, btree (id)



                              Table "data.results"
  Column  |  Type   |                      Modifiers
----------+---------+-------------------------------------------------------
 id       | integer | not null default nextval('results_id_seq'::regclass)
 reportid | integer | not null
…

Indexes:
    "results_pkey" PRIMARY KEY, btree (id)
    "results_del_btree" btree (del)
    "results_idx_reportid" btree (reportid)


My questions then are :

Given that the join is between two integer columns, how could it be an invalid 
syntax for one of them?

Given the query is working on one machine (using a copy of the database 
downloaded and imported from the second machine last night) running 9.1.6, is 
there any reason it wouldn't work on the original machine - have there been any 
changes in casting that I didn't notice between 9.1.6 and 9.1.7?

cheers

Ben

The full query is :

{{{
SELECT rep.id, res8.reportid, 
round(st_distance_sphere('010120BB108716D9CEF7A36240643BDF4F8DA741C0', 
post.the_point::geometry)/1000) as dist 
FROM reports rep 
LEFT JOIN users u ON rep.link = u.id 
LEFT JOIN postcodes post ON u.postcode::integer = post.postcode  
LEFT JOIN species spe ON rep.species::text like spe.speccode::text AND 
spe.synonym = 0  
LEFT JOIN results res8 ON res8.reportid = rep.id AND res8.resulttypeid = 108 
AND res8.del = false
LEFT JOIN resultlookup rlu8 ON rlu8.resulttypesid = 108 AND rlu8.id = 
res8.resultvalue::int
WHERE rep.del IS false AND rep.projectid = 51 
AND round(st_distance_sphere( 
'010120BB108716D9CEF7A36240643BDF4F8DA741C0', post.the_point)/1000) <
'150' AND spe.id = '9465' AND rlu8.id = '935';
}}}





-- 

Ben Madin

m : +61 448 887 220
e : b...@ausvet.com.au


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] ERROR: invalid input syntax for integer: - more confusion

2013-02-05 Thread Ben Madin
G'day again,

inconceivably, on a completely different issue, I've run into the above error 
again - this time on both machines, one running 9.1.6, and the other running 
9.1.7.

This time, I have a table with locations, some states (level = 1) and some 
shires (level = 2). level is defined as an integer type (no Modifiers or 
indexes)

The distribution of these values is best shown by :

SELECT level, count(*) FROM locations GROUP BY level ORDER BY level;
 level | count
-------+-------
     1 |    18
     2 |   876
(2 rows)

If I run this query :

SELECT l.id, l.location, l.abbrev, l.locationcode
FROM locations l
WHERE l.level = 2
ORDER BY split_part(locationcode, '.', 1)::int, split_part(locationcode, '.', 
2)::int;

I get many hundreds of results in the correct order. If I change the level to 1:

SELECT l.id, l.location, l.abbrev, l.locationcode
FROM locations l
WHERE l.level = 1
ORDER BY split_part(locationcode, '.', 1)::int, split_part(locationcode, '.', 
2)::int;

I get:

ERROR:  invalid input syntax for integer: 

even more confusing, if I take away the ORDER BY clause, it works.

Do I have some corruption somewhere?
 
I have done a dump / reload, any other suggestions?

cheers

Ben



-- 

Ben Madin

m : +61 448 887 220
e : b...@ausvet.com.au



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: invalid input syntax for integer:

2013-02-05 Thread Ben Madin
Thanks Adrian,

On 2013-02-06, at 12:40 , Adrian Klaver adrian.kla...@gmail.com wrote:

 I am not seeing anything obvious.
 Is there a chance the import to the second machine did not go well?

Actually, these queries work on the machine that the import was done to - but 
not the original. However, not all is well on the development machine, I've 
sent a further email.

 Have you looked at the values for id and reportid to see if they look alright?

Yes, those I've scanned seem OK - I've tried some tests looking for nulls etc. 
(there are several million reports, and about 30 times as many results, hence 
I'm not posting a reproducible example - yet!)

cheers

Ben


-- 

Ben Madin

m : +61 448 887 220
e : b...@ausvet.com.au



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: invalid input syntax for integer: - more confusion

2013-02-05 Thread Ben Madin
G'day Adrian,

On 2013-02-06, at 12:44 , Adrian Klaver adrian.kla...@gmail.com wrote:

 This one I could see if the split_part yielded an empty string.

You are right on this one - the higher level components don't have a split part 
2. Thank you.

 
 What type is locationcode and could you provide an example?


Varchar - an example would be 6 for a level 1 field, and 6.34 for level 2.
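
A minimal sketch of the failure and the nullif() workaround suggested elsewhere
in these threads, using those example values:

SELECT split_part('6.34', '.', 2)::int;           -- 34
SELECT split_part('6', '.', 2);                    -- '' (no second part)
SELECT split_part('6', '.', 2)::int;               -- ERROR: invalid input syntax for integer
SELECT nullif(split_part('6', '.', 2), '')::int;   -- NULL, which sorts without error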

cheers

Ben



-- 

Ben Madin

t : +61 8 6102 5535
m : +61 448 887 220
e : b...@ausvet.com.au

AusVet Animal Health Services
P.O. Box 5467
Broome   WA   6725
Australia

AusVet's website:  http://www.ausvet.com.au

This transmission is for the intended addressee only and is confidential 
information. If you have received this transmission in error, please delete it 
and notify the sender. The contents of this email are the opinion of the writer 
only and are not endorsed by AusVet Animal Health Services unless expressly 
stated otherwise. Although AusVet uses virus scanning software we do not accept 
liability for viruses or similar in any attachments. Thanks for reading.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: invalid input syntax for integer:

2013-02-05 Thread Ben Madin
Thanks Adrian,

On 2013-02-06, at 12:52 , Adrian Klaver adrian.kla...@gmail.com wrote:

 On 02/05/2013 08:24 PM, Ben Madin wrote:
 The full query is :
 
 {{{
 SELECT rep.id, res8.reportid, 
 round(st_distance_sphere('010120BB108716D9CEF7A36240643BDF4F8DA741C0',
  post.the_point::geometry)/1000) as dist
 FROM reports rep
 LEFT JOIN users u ON rep.link = u.id
 LEFT JOIN postcodes post ON u.postcode::integer = post.postcode
 LEFT JOIN species spe ON rep.species::text like spe.speccode::text AND 
 spe.synonym = 0
 LEFT JOIN results res8 ON res8.reportid = rep.id AND res8.resulttypeid = 108 
 AND res8.del = false
 LEFT JOIN resultlookup rlu8 ON rlu8.resulttypesid = 108 AND rlu8.id = 
 res8.resultvalue::int
 WHERE rep.del IS false AND rep.projectid = 51
 AND round(st_distance_sphere( 
 '010120BB108716D9CEF7A36240643BDF4F8DA741C0', post.the_point)/1000) 
  '150' AND spe.id = '9465' AND rlu8.id = '935';
 }}}
 
 
 
 Follow up questions:
 
 1) Where is this query being run from?

It is meant to be being executed in a pl/pgsql function as part of a loop - the 
rep.id is then used to return the corresponding rows. This function is working 
on the dev machine. The query I have appended is produced in the function as 
below. When I throw the query as above at the psql command line, it works on
the dev machine (but not on the production box). The final part of the
function looks like :

{{{
RAISE NOTICE 'The final query is : %', querystring;

FOR repid, dist IN EXECUTE querystring LOOP
RETURN QUERY SELECT reportid, surname, city, state, postcode, telephone, 
species, breed, status, dist FROM data_view WHERE reportid = repid;
END LOOP;
RETURN;
}}}

 2) Why are the integers at the end of the query quoted?

I have quote_literal(speciesid) etc, even though it is an int parameter to the
query. I realise it isn't needed, but it was working on one machine. FWIW, I
have tried it without all of the quotes (manually removed), but it doesn't make
any difference to the result.

cheers

Ben


-- 

Ben Madin

t : +61 8 6102 5535
m : +61 448 887 220
e : b...@ausvet.com.au

AusVet Animal Health Services
P.O. Box 5467
Broome   WA   6725
Australia

AusVet's website:  http://www.ausvet.com.au

This transmission is for the intended addressee only and is confidential 
information. If you have received this transmission in error, please delete it 
and notify the sender. The contents of this email are the opinion of the writer 
only and are not endorsed by AusVet Animal Health Services unless expressly 
stated otherwise. Although AusVet uses virus scanning software we do not accept 
liability for viruses or similar in any attachments. Thanks for reading.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: invalid input syntax for integer:

2013-02-05 Thread Ben Madin
Adrian,

On 2013-02-06, at 13:33 , Adrian Klaver adrian.kla...@gmail.com wrote:

 Dim bulb moment.
 
 What happens if you run a simplified version of the query?
 
 One that just LEFT JOINS reports to results ON reportid=rep.id.


A fair question - it only makes it more confusing :

{{{
SELECT rep.id, res8.reportid
FROM reports rep 
LEFT JOIN results res8 ON res8.reportid = rep.id AND res8.resulttypeid = 108 
AND res8.del = false
LEFT JOIN resultlookup rlu8 ON rlu8.resulttypesid = 108 AND rlu8.id = 
res8.resultvalue::int
WHERE rep.del IS false 
AND rep.projectid = 51 
AND rlu8.id = '935';
}}}

works perfectly well - so does:

{{{
SELECT rep.id,  
round(st_distance_sphere('010120BB108716D9CEF7A36240643BDF4F8DA741C0', 
post.the_point::geometry)/1000) as dist 
FROM reports rep 
LEFT JOIN users u ON rep.link = u.id 
LEFT JOIN postcodes post ON u.postcode::integer = post.postcode  
LEFT JOIN species spe ON rep.species::text like spe.speccode::text AND 
spe.synonym = 0  
WHERE rep.del IS false 
AND rep.projectid = 51 
AND round(st_distance_sphere( 
'010120BB108716D9CEF7A36240643BDF4F8DA741C0', post.the_point)/1000) <
'150'
AND spe.id = '9465';
}}}

but the combination only works on the older db…

{{{
SELECT rep.id, res.reportid, 
round(st_distance_sphere('010120BB108716D9CEF7A36240643BDF4F8DA741C0', 
post.the_point::geometry)/1000) as dist 
FROM reports rep 
LEFT JOIN users u ON rep.link = u.id 
LEFT JOIN postcodes post ON u.postcode::integer = post.postcode  
LEFT JOIN species spe ON rep.species::text like spe.speccode::text AND 
spe.synonym = 0  
LEFT JOIN results res ON res.reportid = rep.id AND res.resulttypeid = 108 AND 
res.del = false
LEFT JOIN resultlookup rlu ON rlu.resulttypesid = 108 AND rlu.id = 
res.resultvalue::int
WHERE rep.del IS false 
AND rep.projectid = 51 
AND round(st_distance_sphere( 
'010120BB108716D9CEF7A36240643BDF4F8DA741C0', post.the_point)/1000) <
'150'
AND spe.id = '9465' 
AND rlu.id = '935';
ERROR:  invalid input syntax for integer: 
}}}

cheers

Ben


-- 

Ben Madin

t : +61 8 6102 5535
m : +61 448 887 220
e : b...@ausvet.com.au

AusVet Animal Health Services
P.O. Box 5467
Broome   WA   6725
Australia

AusVet's website:  http://www.ausvet.com.au

This transmission is for the intended addressee only and is confidential 
information. If you have received this transmission in error, please delete it 
and notify the sender. The contents of this email are the opinion of the writer 
only and are not endorsed by AusVet Animal Health Services unless expressly 
stated otherwise. Although AusVet uses virus scanning software we do not accept 
liability for viruses or similar in any attachments. Thanks for reading.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Determine if an index is a B-tree, GIST, or something else?

2013-01-17 Thread Ben Chobot

On Jan 17, 2013, at 10:03 AM, Paul Jungwirth wrote:

 Is there any way to determine, by querying pg_index and other pg_* tables, 
 whether an index was created as `USING something`? I've already got a big 
 query joining pg_class, pg_index, etc. to pull out various attributes about 
 the indexes in my database, and I'd like to include whether it's a GIST 
 index, a B-Tree, or whatever was in the USING clause when the index was 
 created.
 
 I see that I can join with the pg_opclass table, but that has 100+ rows, and 
 I'm not sure how to tell what is a B-Tree and what isn't. Any advice?

pg_indexes (not pg_index) seems to have the data you're looking for, unless I 
misunderstood the question.
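
For example, two possible approaches (sketches only; adjust to your own catalog query):

-- pg_indexes exposes the full CREATE INDEX statement, including the USING clause
SELECT schemaname, indexname, indexdef
FROM pg_indexes
WHERE indexdef ILIKE '%USING gist%';

-- or join pg_class to pg_am to get the access method name directly
SELECT c.relname AS index_name, am.amname AS access_method
FROM pg_class c
JOIN pg_am am ON am.oid = c.relam
WHERE c.relkind = 'i';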

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to store clickmap points?

2013-01-09 Thread Ben Chobot
On Jan 8, 2013, at 2:12 AM, aasat wrote:

 Hi, 
 
 I want to store clickmap points (X, Y and hits value) for website
 
 I currently have table like this
 
 CREATE TABLE clickmap (
  page_id integer,
  date date,
  x smallint,
  y smallint,
  hits integer
 )
 
 But this generated about 1M rows per day.
 
 Does Postgres have a better method to store this data? I also have the
 possibility to update the hits value for a point

Instead of storing x/y, have you considered referencing a region of pixels? The 
bigger the region, the larger your possible savings.
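
A hypothetical sketch of that idea, bucketing clicks into 10x10 pixel regions
(table and column names are taken from the post above; the bucket size is an
assumption):

CREATE TABLE clickmap_regions (
  page_id  integer,
  date     date,
  region_x smallint,   -- x / 10
  region_y smallint,   -- y / 10
  hits     integer
);

INSERT INTO clickmap_regions (page_id, date, region_x, region_y, hits)
SELECT page_id, date, x / 10, y / 10, sum(hits)
FROM clickmap
GROUP BY page_id, date, x / 10, y / 10;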

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] large database

2012-12-11 Thread Ben Chobot
On Dec 11, 2012, at 2:25 AM, Chris Angelico wrote:

 On Tue, Dec 11, 2012 at 7:26 AM, Mihai Popa mi...@lattica.com wrote:
 Second, where should I deploy it? The cloud or a dedicated box?
 
 Forget cloud. For similar money, you can get dedicated hosting with
 much more reliable performance. We've been looking at places to deploy
 a new service, and to that end, we booked a few cloud instances and
 started playing. Bang for buck, even the lower-end dedicated servers
 (eg about $35/month) majorly outdo Amazon cloud instances.

So, speaking as somebody that's spent the last several months dealing with 
postgres scaling in the Amazon cloud, I don't think it's so clear cut.

Yes, AWS instances are way more expensive compared to what you can buy in 
physical hardware. (Paying for a reserved instance drops that cost a lot, but 
even then you're still paying a hefty premium.) And yes, the biggest AWS 
instance you can get today is mediocre at best compared to real DB hardware. 
Yes, smaller instances sizes are crammed together and can suffer from noisy 
neighbors. (This isn't a problem with larger instances - which of course cost 
more.) And while you still have to worry about hardware failures, natural 
disasters, and other forms of calamities... at least with AWS solving those 
problems is easy and as simple as throwing more money at the problem. When 
you're relatively small, getting your servers colocated in nearby-but-separate
datacenters is a hassle, and having enough hardware on hand to manage failures
is annoying. Dealing with networking between your data centers is annoying.

So does AWS cost a lot more and give you a lower ceiling? No doubt. But it 
arguably also gives you a much more stable floor to stand on, and it has a lot 
of other benefits that don't apply to dbs.

Re: [GENERAL] PSA: XFS and Linux Cache Poisoning

2012-11-12 Thread Ben Chobot
On Nov 12, 2012, at 7:37 AM, Shaun Thomas wrote:

 Hey everyone,
 
 We recently got bit by this, and I wanted to make sure it was known to the 
 general community.
 
 In new(er) Linux kernels, including late versions of the 2.6 tree, XFS has 
 introduced dynamic speculative preallocation. What does this do? It was added 
 to prevent filesystem fragmentation by preallocating a large chunk of memory 
 to files so extensions to those files can go on the same allocation. The 
 dynamic part just means it adjusts the size of this preallocation based on 
 internal heuristics.
 
 Unfortunately, they also changed the logic in how this extra space is 
 tracked. At least in previous kernels, this space would eventually be 
 deallocated. Now, it survives as long as there are any in-memory references 
 to a file, such as in a busy PG database. The filesystem itself sees this 
 space as used and will be reported as such with tools such as df or du.
 
 How do you check if this is affecting you?
 
 du -sm --apparent-size /your/pg/dir; du -sm /your/pg/dir
 
 If you're using XFS, and there is a large difference in these numbers, you've 
 been bitten by the speculative preallocation system.
 
 But where does it go while allocated? Why, to your OS system cache, of 
  course. Systems with several GB of RAM may experience extreme phantom
  database bloat because of the dynamic aspect of the preallocation system.
  So there are actually two problems:
 
 1. Data files are reported as larger than their actual size and have extra 
 space around just in case. Since PG has a maximum file size of 1GB, this is 
 basically pointless.
 2. Blocks that could be used for inode caching to improve query performance 
 are reserved instead for caching empty segments for XFS.
 
 The first can theoretically exhaust the free space on a file system. We were 
 seeing 45GB(!) of bloat on one of our databases caused directly by this. The 
 second, due to the new and improved PG planner, can result in terrible query 
 performance and high system load since the OS cache does not match 
 assumptions.
 
 So how is this fixed? Luckily, the dynamic allocator can be disabled by 
 choosing an allocation size. Add allocsize to your mount options. We used a 
 size of 1m (for 1 megabyte) to retain some of the defragmentation benefits, 
 while still blocking the dynamic allocator. The minimum size is 64k, so some 
 experimentation is probably warranted.
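
For illustration, a hypothetical /etc/fstab line with the option added (the
device, mount point and the other options here are placeholders):

/dev/sdb1  /var/lib/pgsql  xfs  defaults,noatime,allocsize=1m  0 0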
 
 This mount option *is not compatible* with the remount mount option, so 
 you'll need to completely shut everything down and unmount the filesystem to 
 apply.
 
 We spent days trying to track down the reason our systems were reporting a 
 load of 20-30 after a recent OS upgrade. I figured it was only fair to share 
 this to save others the same effort.
 
 Good luck!


Oh hey, I've been wondering for a while why our master dbs seem to be using so 
much more space than their slaves. This appears to be the reason. Thanks for 
the work in tracking it down!



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Streaming replication failed to start scenarios

2012-10-23 Thread Ben Chobot
On Oct 22, 2012, at 6:57 AM, chinnaobi wrote:

 Hi Laurenz Albe,
 
 I have tested using cygwin rsync in windows 2008 R2, just after restart the
 server.
 
  for 10 GB it took nearly 5 minutes to sync,
  for 50 GB it took nearly 30 minutes -- too long, though there were no big
  changes.
 
 My requirement is something less than 5 minutes. I am doing high
 availability. In my setup there is WAL archiving enabled as well.
 
  Is there any way I can speed up configuring the dead primary server as a
  standby??

If it's a planned switch, and not a crash, take a look at this thread:

http://archives.postgresql.org/pgsql-general/2012-08/msg00083.php

If it's a crash, then the only way to bring your crashed node back up as a 
slave to your new master is with another full copy from that master node.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PostgreSQL Magazine

2012-10-17 Thread Fathi Ben Nasr
Hello,

Is there a PostgreSQL magazine like the one Oracle sends to whoever
requests it?

If the answer is yes: how do I subscribe to it?

Else

I know this could cost a lot to print such magazines, but maybe a pdf
version could be affordable.

The main idea behind this is to show there is a big active community
behind PostgreSQL, commercial derivatives and success stories of
companies, not necessarily Fortune 500, using PostgreSQL or products,
like OpenERP, that use it as their backend db server.

Other content, like a function/script of the month, tutorials, etc. could
enrich the magazine.
End;




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL Magazine

2012-10-17 Thread Fathi Ben Nasr
Thank You.


On Wed, 2012-10-17 at 14:44 +0200, damien clochard wrote:
 On 17/10/2012 at 12:35, Fathi Ben Nasr wrote:
  Hello,
 
 
 Hi
 
  Is there a PostgreSQL magazine like the one sent by Oracle to whom
  requests it ?
  
 
 PostgreSQL Magazine started two years ago. The second issue will be
 released in a few weeks.
 
 So far the magazine is available for free online (download the PDF or
 read it with the flash reader). You can also buy your own paper version
 (http://pgmag.org/01/buy). You can also get one for free in some
 PostgreSQL conferences thanks to our sponsors.
 
  If answer is yes: how to sbscribe to it ?
  
 
  You can subscribe to our newsletter and twitter account to be informed
  when a new issue is released.
 
 https://twitter.com/intent/follow?screen_name=pg_mag
 http://pgmag.us2.list-manage.com/subscribe?u=d23db8f49246eb6e74c6ca21aid=f1bf0dbe7d
 
  I know this could cost a lot to print such magazines, but maybe a pdf
  version could be affordable.
  
  The main idea behind this is to show there a big active community
  behind PostgreSQL, commercial derivatives and success stories of
  companies, not necesserly fortune 500, using PostgreSQL or products,
  like OpenERP, that use it as their backend db server.
  
  Other contents, like function/script of the month, tutorials, etc could
  enrich the content of the magazine.
 
  This is what we intend to do! Please note that the magazine itself works
  like an open source project. You can join us and participate! There are
 many ways to get involved : writing articles, editing, proof-reading,
 layout design, translating, etc.
 
 If you want to contribute, you can join the project mailing list :
 https://groups.google.com/group/pgmag/
 
 Regards,
 
 --
 damien
 
 
 





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL Magazine

2012-10-17 Thread Fathi Ben Nasr
Thank You.


On Wed, 2012-10-17 at 12:49 +0200, Albe Laurenz wrote:
 Fathi Ben Nasr wrote:
  Is there a PostgreSQL magazine like the one sent by Oracle to whom
  requests it ?
  
  If answer is yes: how to sbscribe to it ?
  
  Else
  
  I know this could cost a lot to print such magazines, but maybe a pdf
  version could be affordable.
  
  The main idea behind this is to show there a big active community
  behind PostgreSQL, commercial derivatives and success stories of
  companies, not necesserly fortune 500, using PostgreSQL or products,
  like OpenERP, that use it as their backend db server.
  
  Other contents, like function/script of the month, tutorials, etc could
  enrich the content of the magazine.
  End;
 
 There's PG Mag: http://pgmag.org/Start
 
 It does not appear regularly (yet?).
 
 Yours,
 Laurenz Albe





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trajectory of a [Pg] DBA

2012-10-04 Thread Ben Chobot
On Oct 4, 2012, at 1:44 PM, Thalis Kalfigkopoulos wrote:

 Hi all.
 
 I'd like to tap into the list's experience regarding the job of a DBA
 in general and Pg DBA in particular.
 
 I see that most of the DBA job posts ask for Sr or Ssr which is
 understandable given that databases are among a company’s most
 valuable assets, but it is also an obvious catch-22. So I'd like to
 ask the list's part- and full-time DBAs, if it's not too personal, how
 they landed their jobs.
 
 Is it an easier and more common entry point to be a part-time DBA e.g.
 perform DBA duties as part of being a U**X sysadmin?
 
 Is it more common to start as a developer and change focus to DBA?
 
 In particular how does one go about starting as a Pg DBA? Is the most
 common case by migrating from another DBMS?

As somebody standing guilty of looking for a Postgres DBA for a while now and 
passing on many people, I think it's pretty safe to say the following.

We don't really care if you've worked as a DBA professionally or not, senior or 
otherwise. We do want to know that you can work as a member of a team, under 
pressure, and understand about the evils of downtime, but you can get that from 
a lot of jobs. And obviously it's important to know the basics of being a DBA. 
You know, why we have indices, and when not to use them; why we have 
transaction logs and how they work; how a database might drive load on a system 
and how to choose hardware that will cope with that... basically, the stuff 
talked about on this mailing list all the time. :)

But mostly what we care about - and this is where most people fall down - is 
how you learn. Do you absorb the minimum of what it takes to get your task 
done, and then follow that procedure as long as you can? Or do you figure out 
the underlying principles at work, so you can effectively go off-script? I 
can't tell you the amount of times people have told me, "Yeah, I like to keep
my server's load average under 3 as a good rule of thumb" without any
understanding of why. Why not 10? Why not 0.5? They just don't know, and being
a DBA is all about being ready to go off-script when something blows up.

So if you're looking to be a good DBA, read this list, and learn to understand 
things you don't understand. That should get you further than most in an 
interview.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] surprising behavior or nothing to see here?

2012-10-03 Thread Ben Chobot
Today we saw a couple behaviors in postgres we weren't expecting, and I'm not 
sure if there's something odd going on, or this is all business as usual and we 
never noticed before.

In steady-state, we have a 32-core box with a fair amount of ram acting as a 
job queue. It's constantly busy inserting, updating, and deleting a table that 
rarely has more than 100k rows. All access is done via indices, and query times 
are usually just a few ms. So that's all good. This server version is:

PostgreSQL 9.1.5 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 
4.6.3-1ubuntu5) 4.6.3, 64-bit

This morning, one of the two servers that this streams wal files to for hot
standby got locked up in xlog replay pause mode. Nobody noticed this for several
hours. The other server kept replicating just fine.

Within an hour, we had a query idle in transaction that was blocking other 
queries. This happens with our application sometimes, so we killed it and all 
the locks cleared out immediately. We didn't think much of it at the time.

However, soon after that, all other queries started taking longer. Even though 
there wasn't apparently any locking going on, queries were now taking almost 2 
seconds each, and the CPUs were pegged. Eventually, we noticed that if we 
rebuilt the index that was being used by that query, things would speed right 
back up again until autovacuum analyze kicked off, at which point things 
would immediately start taking 2 seconds again. Manually analyzing would not 
immediately cause things to get bad, but they would eventually revert back to 
being too slow. To be clear, even with the 2s query times, the proper index was 
still being used - it was just taking hundreds of ms, instead of a few ms. We 
tried increasing column stats to try to tease out any change in value 
distribution but this had no effect. So this was our first mystery.

Then we noticed that autovacuum wasn't clearing out tuples. Even odder, it 
appeared that when autovacuum started running, all the dead tuples would be 
listed as live tuples in pg_stat_user_tables, and after the vacuum completed, 
would go back to being dead tuples. Vacuum verbose showed that these tuples 
were not yet removable, even though there was no long transaction running. That 
was the second and third mysteries.

Finally, we found the wal replication problem and resumed wal replay. All the
problems listed above disappeared. So that's mystery #4.

My questions are:

1. Could wal replay on a hot streaming asynchronous slave really cause this? It 
certainly seems to have, and we do have hot_standby_feedback turned on, but we 
wouldn't have expected any symptoms from that which would have reduced the 
effectiveness of an index on the master.

2. Is it expected that during a vacuum, dead tuples show up as live tuples in 
pg_stat_user_tables?

3. Does a paused wal replay on an asynchronous slave count as a long running 
transaction, at least in terms of vacuuming out dead tuples on the master? If 
not, what else might keep vacuum from removing dead rows? All I could find on 
teh intarwebz was mention of long-running transactions, which pg_stat_activity 
and pg_locks didn't indicate.

4. What might cause autovacuum analyze to make an index perform worse 
immediately, when a manual vacuum analyze does not have the same effect? And
I'm not talking about changing things so the planner doesn't use the index, but 
rather, having the index actually take longer. 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] surprising behavior or nothing to see here?

2012-10-03 Thread Ben Chobot
On Oct 3, 2012, at 11:50 AM, Tom Lane wrote:

 Ben Chobot be...@silentmedia.com writes:
 4. What might cause autovacuum analyze to make an index perform worse 
 immediately, when a manual vacuum analyze does not have the same affect? And 
 I'm not talking about changing things so the planner doesn't use the index, 
 but rather, having the index actually take longer. 
 
 Dunno about the replication angle, but would this have been a GIN index?
 I'm wondering about possible interference with flushing of its
 pending-insert queue (the FASTUPDATE stuff).


Nope, btree:

create index get_delayed_jobs_index on delayed_jobs (priority, run_at) 
tablespace data1 where locked_at is null and queue='queue' and 
next_in_strand=true;

There are half a dozen other indices on this table too (that weren't applicable 
to the long query) but they're all btrees.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] bgwriter and pg_locks

2012-09-12 Thread Ben Chobot
In an attempt to get a hackfix for 
http://pgfoundry.org/tracker/index.php?func=detailaid=1011203group_id=1000411atid=1376,
 I'm wondering if it's true that, when looking at pg_locks, the only pid which
will have virtualxid = '1/1' and virtualtransaction = '-1/0' will be the 
bgwriter. That seems correct on all my dbs, but that's hardly a definitive test.
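
In query form, the check I'm describing looks like this (whether it really
isolates the bgwriter is exactly the question):

SELECT pid, locktype, mode, granted
FROM pg_locks
WHERE virtualxid = '1/1'
  AND virtualtransaction = '-1/0';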

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] can we avoid pg_basebackup on planned switches?

2012-08-07 Thread Ben Chobot

On Aug 7, 2012, at 9:32 AM, Fujii Masao wrote:

 On Mon, Aug 6, 2012 at 3:29 AM, Ben Chobot be...@silentmedia.com wrote:
 
 Oh, I would have though that doing a clean shutdown of the old master (step 
 1) would have made sure that all the unstreamed wal records would be flushed 
 to any connected slaves as part of the master shutting down. In retrospect, 
 I don't remember reading that anywhere, so I must have made that up because 
 I wanted it to be that way. Is it wishful thinking?
 
 When clean shutdown is requested, the master sends all WAL records to
 the standby,
 but it doesn't wait for the standby to receive them. So there is no
 guarantee that all WAL
 records have been flushed to the standby. Walreceiver process in the
 standby might
 detect the termination of replication connection and exit before
 receiving all WAL records.
 Unfortunately I've encountered that case some times.


Oh, I see. Well, that's unfortunate. Thanks for the help though! It shouldn't 
be too hard to script up what you suggest.
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] maximum number of databases and / or schemas in a single database instance

2012-08-06 Thread Ben Chobot
On Aug 4, 2012, at 12:24 PM, Menelaos PerdikeasSemantix wrote:

 The following page:
 
 http://www.postgresql.org/about/
 
 mentions some limits but not the following:
 
 [1] maximum number of databases per database server instance
 [2] maximum number of schemas per database
 
 Is there empirical information on whether both these values can be as high as 
 a few hundreds? (I don't really need any more for my use case)

As an anecdote, we have a postgres cluster running a single db with ~1200
schemas, each schema having ~600 relations. Tab-completion is a little slow, 
but http://archives.postgresql.org/pgsql-general/2012-03/msg00666.php allowed 
things to be quite reasonable. 

Re: [GENERAL] can we avoid pg_basebackup on planned switches?

2012-08-05 Thread Ben Chobot

On Aug 5, 2012, at 11:12 AM, Fujii Masao wrote:

 On Sat, Jul 28, 2012 at 2:00 AM, Ben Chobot be...@silentmedia.com wrote:
 We make heavy use of streaming replication on PG 9.1 and it's been great for
 us. We do have one issue with it, though, and that's when we switch master
 nodes - currently, the documentation says that you must run pg_basebackup on
 your old master to turn it into a slave. That makes sense when the old
 master had crashed, but it seems that in the case of a planned switch, we
 could do better. Here's what we tried that seemed to work... are we shooting
 ourselves in the foot?
 
 1. Cleanly shut down the current master.
 2. Pick a slave, turn it into the new master.
 
 Before promoting the standby, you have to confirm that all WAL files
 the old master generated have been shipped to the standby which you'll 
 promote. Because the
 standby might terminate the replication before receiving all WAL
 files. Note that there is no clean way to confirm that. For example, to 
 confirm that, you need to
 execute CHECKPOINT in the standby, run pg_controldata in both old master and
 standby, and check whether their latest checkpoint locations are the same. You
 may think to compare the latest checkpoint location in the old master and
 pg_last_xlog_replay_location in the standby. But the former indicates
 the *starting* location of the last WAL record (i.e., shutdown checkpoint WAL 
 record). OTOH,
 the latter indicates the *ending* location of it. So you should not compare 
 them
 without taking into consideration the above mismatch.
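
For illustration, that comparison might look roughly like this (paths are
placeholders; note the start-vs-end caveat above):

psql -c 'CHECKPOINT;'    # on the standby
pg_controldata /var/lib/pgsql/9.1/data | grep 'Latest checkpoint location'    # on both nodes, then compare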
 
 If the standby failed to receive some WAL files, you need to manually copy 
 them
 in pg_xlog from the old master to the standby.

Oh, I would have though that doing a clean shutdown of the old master (step 1) 
would have made sure that all the unstreamed wal records would be flushed to 
any connected slaves as part of the master shutting down. In retrospect, I 
don't remember reading that anywhere, so I must have made that up because I 
wanted it to be that way. Is it wishful thinking?
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] can we avoid pg_basebackup on planned switches?

2012-08-04 Thread Ben Chobot
Anybody?

On Jul 27, 2012, at 10:00 AM, Ben Chobot wrote:

 We make heavy use of streaming replication on PG 9.1 and it's been great for 
 us. We do have one issue with it, though, and that's when we switch master 
 nodes - currently, the documentation says that you must run pg_basebackup on 
 your old master to turn it into a slave. That makes sense when the old master 
 had crashed, but it seems that in the case of a planned switch, we could do 
 better. Here's what we tried that seemed to work... are we shooting ourselves 
 in the foot?
 
 1. Cleanly shut down the current master.
 2. Pick a slave, turn it into the new master.
 3. Copy the new pg_xlog history file over to the old master.
 4. On any other slaves (many of our clusters are 3 nodes), we already have 
 recovery_target_timeline=latest and wal archiving, so they should already 
 be working as slaves of the new master.
 5. Set up recovery.conf on the old master to be like the other slaves.
 6. Start up the old master.
 
 Have we just avoided running pg_basebackup, or have we just given ourselves 
 data corruption? Because we're using wal archiving, can we simplify and leave 
 out step 3?



[GENERAL] can we avoid pg_basebackup on planned switches?

2012-07-27 Thread Ben Chobot
We make heavy use of streaming replication on PG 9.1 and it's been great for 
us. We do have one issue with it, though, and that's when we switch master 
nodes - currently, the documentation says that you must run pg_basebackup on 
your old master to turn it into a slave. That makes sense when the old master 
had crashed, but it seems that in the case of a planned switch, we could do 
better. Here's what we tried that seemed to work... are we shooting ourselves 
in the foot?

1. Cleanly shut down the current master.
2. Pick a slave, turn it into the new master.
3. Copy the new pg_xlog history file over to the old master.
4. On any other slaves (many of our clusters are 3 nodes), we already have 
recovery_target_timeline=latest and wal archiving, so they should already be 
working as slaves of the new master.
5. Set up recovery.conf on the old master to be like the other slaves.
6. Start up the old master.

Have we just avoided running pg_basebackup, or have we just given ourselves 
data corruption? Because we're using wal archiving, can we simplify and leave 
out step 3?
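
For reference, step 5's recovery.conf on the old master would be along these
lines (a sketch only; the hostname, user and archive path are placeholders):

standby_mode = 'on'
primary_conninfo = 'host=new-master port=5432 user=replicator'
recovery_target_timeline = 'latest'
restore_command = 'cp /path/to/wal_archive/%f %p'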

Re: [GENERAL] Catalog Bloat in Development - Frequently dropping/adding schemas and objects

2012-06-29 Thread Ben Madin
David,

the VACUUM FULL VERBOSE command might overcome this - I believe it works by 
effectively doing what you are proposing with a drop database and recreate. It 
does however lock the tables during the process (not a problem in a dev environ 
one assumes) but may not be ideal on a live database.
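
For example (a sketch only; which catalogs matter most will depend on what is
being dropped and recreated):

VACUUM FULL VERBOSE pg_catalog.pg_class;
VACUUM FULL VERBOSE pg_catalog.pg_attribute;
VACUUM FULL VERBOSE pg_catalog.pg_depend;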

cheers

Ben


On 30/06/2012, at 4:45 AM, David Johnston wrote:

 In my current development environment I often drop some or all of the schemas 
 in the database and then re-create them schemas and the objects they contain. 
  When I go to bring up the database in my GUI it takes a considerable amount 
 of time to initialize.  I suspect this is because the catalog tables are 
 becoming bloated.  What is the recommended course of action to de-bloat them? 
  Running an unqualified vacuum does not seem to help.  Is it better to just 
 periodically drop and recreate the database itself or would a vacuum with 
 specific tables listed be sufficient – and if so which tables?
  
 Thanks!
  
 David J.



Re: [GENERAL] Promotion of standby to master

2012-06-29 Thread Ben Chobot
On Jun 29, 2012, at 12:16 PM, Andy Chambers wrote:

 I understand that it's possible to promote a hot standby pg server
 simply by creating the failover file.  In a scenario where there are
 multiple standby servers, is it possible to point the other standby
 servers to the new master without creating a new base backup?
 
 When I tried to do this, I ran into the "timeline 2 of the primary
 does not match recovery target timeline 1" error. I'm just trying to figure
 out if that's because the procedure I followed to promote the standby
 was wrong or because it's just not possible.

On your slaves, you need to have the line:

recovery_target_timeline=latest

This allows the slaves to notice that a new timeline has started and move to
it. The one gotcha is that you have to make sure to promote the slave which is
furthest ahead in terms of replication; if you promote a slave which is
further behind than another slave, that can cause you grief when the
more-advanced slave tries to replicate from the new master.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Customising pg directories

2012-06-28 Thread Ben Carbery
I am building a new server with a separate partition for data and
xlogs. What is the correct way to do this?

Can I create a symlink from /var/lib/pgsql/9.1/data -> /pg_data (and
then a symlink in /pgdata for xlogs -> /pg_xlog)?
Or do I have to modify $PGDATA in the init script?

This is all after installing packages but prior to 'service
postgresql-91 initdb'

cheers

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Simple method to format a string?

2012-06-20 Thread Ben Chobot
On Jun 20, 2012, at 7:43 AM, Emi Lu wrote:

 Good morning,
 
 Is there a simply method in psql to format a string?
 
 For example, adding a space to every three consecutive letters:
 
 abcdefgh -> *** *** ***
 
 Thanks a lot!
 Emi

I'm unaware of such a function (it seems like a generic format() function's 
arguments would be so complex as to be almost unusable) but you can make use of 
locales to do some formatting for you. See 
http://www.postgresql.org/docs/9.1/static/locale.html.
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Partitioning Advice

2012-06-10 Thread Ben Carbery
 do many of the statistical queries use the whole month's data?
 have you considered partitioning the log table by day?  if you can
 reduce the number of rows involved in the table-scans by partitioning
 it'll help performance.


I am summarising by hour, day, week and month. So I guess partitioning by
day would help for the daily and hourly summaries.

Thanks for the suggestions everyone, I'll look into all these ideas.

cheers


Re: [GENERAL] Partitioning Advice

2012-06-07 Thread Ben Carbery


 Since you are on RHEL 6 I would use ext4 throughout.

Great, I'll look into it. However my research suggested a journalled
filesystem is unnecessary for xlogs and I assume ext4 is journalled?


 You say you have I/O problems when stats jobs run.  Can you describe
 those jobs
 and what they are doing?


They summarise all the activity in the logs since the last run, essentially
counting logs. Since there are around 30 columns in each log and I want to
summarise in a number of ways - think multiple but separate group bys with
a regexp to filter out some of the logs which are not interesting -
inevitably there are several passes over the table. Of course regexps are
also very slow in postgres. I found various indices did not help at all;
the query planner thinks a sequential scan is the way to go.



 If you have a lot of sequential scans on a 10GB table, that will suck
 majorly
 no matter how you tune it.


Ok.. but I don't care how long they take, only that they don't affect
new writes to the table. So will splitting xlogs off into a different
partition/lun etc help?



 Two more things that you can try out:
 - Change the I/O scheduler to deadline by booting with
 elevator=deadline.

I'll check it out.


 - Split the 400GB LUN into several smaller LUNs and use tablespaces.


This could be worth doing on the active logging table if splitting off
xlogs won't help. I archive (copy) these records off to a new table
every month and then use inheritance to access the entire logset:

log_master
..inherits
  log_active
  log_201205
  log_201204
  ..etc

This is how I got the active table down to 10GB :)
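
A rough sketch of that layout (the single column is a placeholder; the real
table has ~30 columns):

CREATE TABLE log_master (logged_at timestamptz NOT NULL);
CREATE TABLE log_active () INHERITS (log_master);
CREATE TABLE log_201205 () INHERITS (log_master);
CREATE TABLE log_201204 () INHERITS (log_master);
-- queries against log_master see the rows of all the children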



 I don't say that that is guaranteed to help, but I have made good
 experiences
 with it.

 Yours,
 Laurenz Albe


thanks,

Ben


[GENERAL] Partitioning Advice

2012-06-05 Thread Ben Carbery
I have a postgres server I need to move onto a new OS (RHEL6) on a new VM
and am looking for advice on how to partition the disks to gain some
performance improvement.

In the current environment I am given a single VHDD which I have not
partitioned at all. The SAN performance is pretty good, but we have noticed
slowdowns at various times.. The database does a lot of logging - constant
small writes, with some probably insignificant reads of smaller tables.
Delays in logging can affect the service which is doing the logging and
cause problems upstream. Typically this does not happen, but there are
hourly jobs which generate stats from the logs. Due to their complexity the
query planner always chooses to do sequential scans on the main log
table. This table is truncated monthly when the data is archived to another
table, but peaks in size at around 10GB at the end of the month. Generally
any time the stats jobs are running there are delays which I would like to
reduce/eliminate. There is also a fair bit of iowait on the cpu.

The new server has a great deal more memory which I am hoping will help
(shared_buffers = 8GB, total RAM 20GB), but I am looking at what might be
optimal for the storage configuration. From looking at previous
conversations here I am thinking of something like this..

100GB OS (ext3)
50GB pg_xlog (ext2)
400GB pg_data (ext3 data=writeback noatime?)

Hopefully this would mean the small writes can continue while a large read
is going. Currently there is no streaming replication so only a gig or so
is actually needed for xlogs. We do however use slony to sync some smaller
tables to a secondary which may or may not affect anything.

This is the first time I have needed to delve into the storage
configuration before of a database before so any advice or comments welcome.

Ben


Re: [GENERAL] evaluating subselect for each row

2012-06-03 Thread Ben Chobot
On Jun 3, 2012, at 10:55 AM, Scott Ribe wrote:

 As part of anonymizing some data, I want to do something like:
 
 update foo set bar = (select bar2 from fakes order by random() limit 1);

it may or may not be an option, but update foo set bar=md5(bar) is a pretty 
simple way to redact data.
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] why jasperserver has been changed from MySQL to PostGreSQL

2012-05-22 Thread Ben Chobot
On May 22, 2012, at 7:31 AM, farhad koohbor wrote:

 My question is why jasperserver changed its mind to PostgreSQL. Which of
 the features of PostgreSQL are more powerful than MySQL's?
 Could you please give me a clue?

Postgres is more SQL-compliant and tends to work better at larger scale than 
MySQL. It also is more extendable. For more details, see 
http://www.postgresql.org/about/, and to answer your real question 
definitively, instead of just speculating, ask the jasperserver people 
themselves. :)



Re: [GENERAL] COPY from CSV, passing in default value?

2012-05-16 Thread Ben Madin
Does creating a table with a default not work?

CREATE TABLE salaries (
   Town varchar(30),
   County varchar(30) NOT NULL DEFAULT 'Australia',
   Supervisor varchar(30),
   StartDate date,
   Salary int,
   Benefits int
);

You might also want an auto-incrementing primary key, especially if you are 
importing data so you can delete any duplicates…

CREATE TABLE salaries (
   id serial unique PRIMARY KEY,
   Town varchar(30),
   County varchar(30) NOT NULL DEFAULT 'Australia',
   Supervisor varchar(30),
   StartDate date,
   Salary int,
   Benefits int
);

An alternative that becomes simpler for importing repeatedly is to create a 
temporary table with the same column names as your csv file, but all the data 
types varchar. Import the csv (which is now easy even if there are '' in the 
salary field, which are not int) and then insert (with appropriate casting) the 
results from the temp table into the real table.
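
A sketch along those lines, using the columns from the post below (the file
path and the final casts are assumptions):

CREATE TEMP TABLE salaries_raw (
    Town varchar,
    Supervisor varchar,
    Salary varchar
);

COPY salaries_raw FROM 'C:\salaries.csv' WITH (FORMAT CSV);

INSERT INTO salaries (Town, County, Supervisor, Salary)
SELECT Town, 'County Name', Supervisor, nullif(Salary, '')::int
FROM salaries_raw;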

cheers

Ben




On 15/05/2012, at 1:31 AM, adebarros wrote:

 Assuming I have a table structured like so:
 
 CREATE TABLE salaries (
Town varchar(30),
County varchar(30),
Supervisor varchar(30),
StartDate date,
Salary int,
Benefits int
 );
 
 If I have a CSV with only three of those fields, I can import like this:
 
 COPY salaries (Town, Supervisor, Salary)
 FROM 'C:\salaries.csv'
 WITH (FORMAT CSV);
 
 However, what if I wanted to assign a default value during import to
 populate the County field? In my dreams it would be something like this
 (which does not work):
 
 COPY salaries (Town, 'County Name', Supervisor, Salary)
 FROM 'C:\salaries.csv'
 WITH (FORMAT CSV);
 
 Any ideas?
 
 Thanks.
 
 




Re: [GENERAL] .pgpass not working

2012-05-04 Thread Ben Chobot
On May 4, 2012, at 9:30 AM, Rebecca Clarke wrote:

 I do not want to touch the pg_hba.conf so I have generated the .pgpass file.
 The permissions are set to 600, and I have correctly entered the details into 
 .pgpass; there are no leading spaces.
 
 myhostname:myport:*:postgres:mypassword
 
 However I am still prompted for a password.
 I have tested pg_dump as well and it prompts also.
 
 Does anyone have any suggestions on what may be the culprit. Is there 
 somewhere I need to specify to tell the system to look into the .pgpass file?

Where is the .pgpass file? If it's not in ~/.pgpass or doesn't have the right 
ownership (your permissions are good) then it won't be used. If it's in a 
different location, you might need to make use of the PGPASSFILE environment 
variable.

If you really get stuck, you can always strace psql or pg_dump and see if it 
has problems opening your .pgpass file. 

Re: [GENERAL] SQLSTATE XX000 Internal Error 7

2012-05-03 Thread Ben Madin
G'day Tom,

On 03/05/2012, at 11:57 AM, Tom Lane wrote:

 Ben Madin li...@remoteinformation.com.au writes:
 SQLSTATE[XX000]: Internal error: 7 ERROR:  could not open file 
 base/102979/430122_fsm: Invalid argument
 
 [ scratches head ... ]  AFAICS the only documented reason for open() to
 fail with EINVAL on OS X is 
 
 [EINVAL]   The value of oflag is not valid.
 
 which is surely bogus since that code path calls it with a constant
 value for oflag --- there's no way it could fail just some of the time.
 
 So this is smelling like a kernel or filesystem bug.  I wonder exactly
 which OS X update you're running, and what sort of filesystem the
 database is stored on.

I think that sounds bad! 

The OSX Update is 10.7.3 (11D50)

The System is a 2.66 GHz Intel Core i7 with 8GB RAM.

The database is stored on a partition that looks like :

  Capacity: 447.69 GB (447 687 770 112 bytes)
  Available: 74.96 GB (74 956 308 480 bytes)
  Writable: Yes
  File System:  Journaled HFS+
  BSD Name: disk0s2
  Mount Point:  /
  Content:  Apple_HFS


and the data is stored in the /usr/local/pgsql-9.1/data directory, but there is 
a symlink (as I've retained the previous versions when upgrading), so the 
/usr/local directory looks like:

lrwxr-xr-x   1 root  wheel   9  1 May 11:11 pgsql -> pgsql-9.1
drwxr-xr-x  11 root  wheel 374 17 Feb 21:26 pgsql-8.4
drwxr-xr-x   8 root  admin 272 17 Feb 21:26 pgsql-9.0
drwxr-xr-x   8 root  admin 272 17 Feb 22:41 pgsql-9.1

and the data directory :

drwx--  20 _postgres  _postgres   680  1 May 11:11 data

Is this the sort of exact information you were wondering about?

Since I last posted, I have again received :

PL/pgSQL function fill_ctybnda line 18 at EXECUTE statement
ERROR:  could not open file base/102979/430320_fsm: Invalid argument

and I went looking and found in the base/102979/ directory:

-rw---1 _postgres  _postgres1253376  3 May 11:51 430320
-rw---1 _postgres  _postgres  24576  3 May 11:51 430320_fsm

so it looks to my uneducated eye as though it has been able to open the 
file (quite a few of the files ending in _fsm have 24576 bytes).

(PS: How did you come to decide that it was EINVAL - is that 'Error INVALid 
argument'?)

cheers

Ben









[GENERAL] SQLSTATE XX000 Internal Error 7

2012-05-02 Thread Ben Madin
G'day all,

I'm going to go slowly on this, but I am intermittently (as in sometimes the 
query works fine, sometimes it stops after 5 minutes and I get the message) 
receiving the error message below on a long-running query that is populating a 
newly created table with a PostGIS Geometry column. 

The Error Message is :

SQLSTATE[XX000]: Internal error: 7 ERROR:  could not open file 
base/102979/430122_fsm: Invalid argument

I don't seem to be able to get any search engine to explain what this 
message means - too many quotes, colons and brackets for Google. I'm not even 
sure why a file is being opened, but I assume that the file is part of the data 
storage - which maybe I need to know about, but I haven't needed to so far!

I have also fiddled a bit with the postgresql.conf settings to increase 
work_mem etc. The details of the table and query are below.

So my question is really - what does this error message mean, and where do I 
start looking for what could be causing it? Should I try a debug trace, or just 
look in the logs (I've attached the log entries at the bottom, but maybe I 
should up the logging level)?

I haven't yet posted this to the postgis list, as it looked as though this 
message is a PostgreSQL message, not a PostGIS one.

cheers

Ben



I'm running on :

PostgreSQL 9.1.3 on x86_64-apple-darwin11.3.0, compiled by 
i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) 
(LLVM build 2336.9.00), 64-bit

and PostGIS :

POSTGIS=1.5.3 GEOS=3.3.2-CAPI-1.7.2 PROJ=Rel. 4.7.1, 23 September 2009 
LIBXML=2.7.3 USE_STATS






The table definition at the time that the error is occurring is:

Table system.ctybnda2009
   Column   |            Type             |                           Modifiers
-----------+-----------------------------+-----------------------------------------------------------------
 recordid  | integer                     | not null default nextval('ctybnda2009_recordid_seq'::regclass)
 ccode     | character varying(3)        | 
 year      | integer                     | not null default 2005
 fips      | character varying(2)        | 
 l_1_name  | character varying           | 
 l_2_name  | character varying           | 
 l_3_name  | character varying           | 
 area      | numeric                     | 
 modfiedon | timestamp without time zone | not null default now()
 the_geom  | geometry                    | 
Indexes:
ctybnda2009_recordid_key UNIQUE CONSTRAINT, btree (recordid)
Check constraints:
enforce_dims_the_geom CHECK (st_ndims(the_geom) = 2)
enforce_geotype_the_geom CHECK (geometrytype(the_geom) = 
'MULTIPOLYGON'::text OR the_geom IS NULL)
enforce_srid_the_geom CHECK (st_srid(the_geom) = 4326)


The query that causes the problem is :

INSERT INTO system.ctybnda2009 (ccode, the_geom) 
SELECT m.country, st_multi(st_union(m.geom)) 
FROM maptable m 
LEFT OUTER JOIN countries c 
ON m.country = c.ccode 
WHERE geom IS NOT NULL 
AND m.valstart = 2009
AND (m.valend IS NULL OR m.valend = 2009) 
GROUP BY 1 
ORDER BY 1; 

postgresql.log

2012-05-03 05:18:23 WST ERROR:  could not open file base/102979/430122_fsm: 
Invalid argument
2012-05-03 05:18:23 WST STATEMENT:  INSERT INTO system.ctybnda2011 (ccode, 
the_geom) SELECT m.country, st_multi(st_union(m.geom)) FROM maptable m LEFT 
OUTER JOIN countries c ON m.country = c.ccode WHERE geom IS NOT NULL AND 
m.valstart = $1 AND (m.valend IS NULL OR m.valend = $2) GROUP BY 1 ORDER BY 
1; 





[GENERAL] logging query result size?

2012-05-02 Thread Ben Chobot
I'm not seeing anything in the docs, but is there a way in 9.1 to log the size 
of the query result in the same way that we can log the duration of the query?

