Re: [GENERAL] infinite recursion detected in rules for relation

2015-02-13 Thread pinker
hmm I wanted to show only rows that were inserted today, so instead of
who='me' I wanted to filter, for instance, where timestamp_column=CURRENT_DATE.

Yes, a view would be a solution, but I thought that's the case rules were
made for, isn't it?







Re: [GENERAL] Initializing JsonbValue from cstring

2015-02-13 Thread Pavel Stehule
Hi

you can call the input function, jsonb_in:

/* parse the C string with the jsonb type's input function */
Jsonb *targetjsonbvar = DatumGetJsonb(DirectFunctionCall1(jsonb_in,
CStringGetDatum(cstrvalue)));

Regards

Pavel Stehule



2015-02-13 11:32 GMT+01:00 Igor Stassiy istas...@gmail.com:

 Hi,

 Is there a way to initialize a JsonbValue from a cstring using the public C API
 in postgres 9.4, without the use of functions declared with PG_FUNCTION_ARGS
 http://doxygen.postgresql.org/fmgr_8h.html#adf4dec9b7d23f1b4c68477affde8b7ff ?

 I posted the following question earlier, but received no reply. Please have
 a look:

 postgresql jsonb processing with c api
 http://www.postgresql.org/message-id/cakvojex6wtqmvfzsmx5vwhgfc3yympp2dqnkcppcbauk+1p...@mail.gmail.com
 





[GENERAL] Initializing JsonbValue from cstring

2015-02-13 Thread Igor Stassiy
Hi,

Is there a way to initialize a JsonbValue from a cstring using the public C API
in postgres 9.4, without the use of functions declared with PG_FUNCTION_ARGS
http://doxygen.postgresql.org/fmgr_8h.html#adf4dec9b7d23f1b4c68477affde8b7ff ?

I posted the following question earlier, but received no reply. Please have
a look:

postgresql jsonb processing with c api
http://www.postgresql.org/message-id/cakvojex6wtqmvfzsmx5vwhgfc3yympp2dqnkcppcbauk+1p...@mail.gmail.com




[GENERAL] No repo for postgresql 9.4 for raspberry pi 2

2015-02-13 Thread seb
No 9.4 is available for trusty; the only option is to build from source.

There are only i386 and amd64 packages, no ARM, in the postgres repo for the
current production Ubuntu release, trusty.
http://apt.postgresql.org/pub/repos/apt/dists/trusty-pgdg/9.4/

Debian testing (jessie) has 9.4 for my Raspberry Pi 2, and jessie is expected
to move to production this year.

With the new Raspberry Pi 2 (quad-core, 1 GB RAM, armv7) board expected to
sell 3 million units this year, could we get the postgres repositories updated
for armv7?

I know there are a lot of different ARM processors, but now that the
Raspberry Pi 2 is armv7, it is a good family to support.

Thanks

-- 
Seb


Re: [GENERAL] segmentation fault postgres 9.3.5 core dump perlu related ?

2015-02-13 Thread Guy Helmer

 On Feb 12, 2015, at 3:21 PM, Day, David d...@redcom.com wrote:
 
 Update/Information sharing on my pursuit of  segmentation faults
  
 FreeBSD 10.0-RELEASE-p12 amd64
 Postgres version 9.3.5
  
 Below are three postgres core files generated from two different machines
 (Georgia and Alabama) on Feb 11.
 These cores would not be caused by the environment update issue that I
 last suspected might be causing the segfaults,
 so I am kind of back to square one in terms of thinking about what is occurring.
  
 I am not sure that I understand the associated time events in the
 postgres log file output.  Is this whatever happens to be running in the
 other postgres forked processes when the cored process was detected?
 If this is the case then I have probably been reading too much into the
 content of the postgres log file at the time of the core.
 This probably just represents collateral damage of routine transactions that
 were in other forked processes at the time one of the processes cored.
  
 Therefore I would now just assert that postgres has a sporadic segmentation
 problem, with no known way to reliably cause it,
 and I am uncertain as to how to proceed to resolve it.

. . .

  Georgia-Core 8:38 -  Feb 11
 [New process 101032]
 [New Thread 802c06400 (LWP 101032)]
 Core was generated by `postgres'.
 Program terminated with signal SIGSEGV, Segmentation fault.
 #0  0x00080c4b6d51 in Perl_hfree_next_entry () from 
 /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
 (gdb) bt
 #0  0x00080c4b6d51 in Perl_hfree_next_entry () from 
 /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
 #1  0x00080c4cab49 in Perl_sv_clear () from 
 /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
 #2  0x00080c4cb13a in Perl_sv_free2 () from 
 /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
 #3  0x00080c4e5102 in Perl_free_tmps () from 
 /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
 #4  0x00080bcfedea in plperl_destroy_interp () from 
 /usr/local/lib/postgresql/plperl.so
 #5  0x00080bcfec05 in plperl_fini () from 
 /usr/local/lib/postgresql/plperl.so
 #6  0x006292c6 in ?? ()
 #7  0x0062918d in proc_exit ()
 #8  0x006443f3 in PostgresMain ()
 #9  0x005ff267 in PostmasterMain ()
 #10 0x005a31ba in main ()
 (gdb) info threads
   Id   Target Id Frame
 * 2Thread 802c06400 (LWP 101032) 0x00080c4b6d51 in 
 Perl_hfree_next_entry () from 
 /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
 * 1Thread 802c06400 (LWP 101032) 0x00080c4b6d51 in 
 Perl_hfree_next_entry () from 
 /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
  

Given two of the coredumps are down in libperl and this is FreeBSD 10.0
amd64, have you seen this?

https://rt.perl.org/Public/Bug/Display.html?id=122199

Michael Moll suggested trying setting vm.pmap.pcid_enabled to 0 but I don’t 
recall seeing if that helped.

Guy




[GENERAL] SELECT, GROUP BY, and aggregates

2015-02-13 Thread Ryan Delaney
Why couldn't an RDBMS such as postgres interpret a SELECT that omits the GROUP
BY as implicitly grouping by all the columns that aren't part of an aggregate?

If I do this, Postgres throws an exception that I cannot SELECT a series of
columns including an aggregate without a corresponding GROUP BY clause. But it
knew to throw the error, right?  It must have some method of knowing which
columns aren't part of an aggregate.  Or is it that a column might not have an
aggregate, but still be hard to figure out how to group by it? 

But how would that happen?

If I omit something from GROUP BY, it throws another exception. If I put
something there that doesn't belong, I get a different exception. So it already
knows how to do this! :P
-- 
Regards,
Ryan Delaney
ryan.dela...@gmail.com
https://github.com/rpdelaney
GPG ID: 4096R/311C 10F2 26E0 14E3 8BA4  3B06 B634 36F1 C9E7 771B




Re: [GENERAL] Unknown error while running postgresql_installer_dc46cfee2c\getlocales.exe

2015-02-13 Thread George Weaver

Hi Glenn,

No, and neither did I receive any responses.

I did find the following reference that 9.4 is officially not supported on
XP, Server 2003 or Vista; however, I have not seen anything official on the
PostgreSQL or Enterprise DB sites.


http://www.postgresql.org/message-id/canfyu97_wyaxjge_kegr62mqwovy0exjbqec9cl8stlkakw...@mail.gmail.com

In fact, the Enterprise DB site states: "The 9.x installers are supported on
[...] Windows XP and above."


All the best,
George

- Original Message - 
From: Bald, Glenn

To: gwea...@shaw.ca ; pgsql-general@postgresql.org
Sent: Friday, February 13, 2015 5:49 AM
Subject: Unknown error while running  
postgresql_installer_dc46cfee2c\getlocales.exe



Hi George,
Did you find a resolution? I have exactly the same problem.  Same postgres 
version, same xp with sp3, same error log.

Thanks in advance
Glenn
Glenn Bald | GIS Analyst | Information Services
Forestry Commission

Unknown error while running  
postgresql_installer_dc46cfee2c\getlocales.exe

From:George Weaver gweaver(at)shaw(dot)ca
To:pgsql-general pgsql-general(at)postgresql(dot)org
Subject:Unknown error while running  
postgresql_installer_dc46cfee2c\getlocales.exe

Date:2015-02-06 22:07:01
Message-ID:5B0276776FFA40499867E955C65F3251@D420 (view raw or flat)
Thread:2015-02-06 22:07:01 from George Weaver gweaver(at)shaw(dot)ca
Lists:pgsql-general

Hi List,

Trying to install PostgreSQL 9.4.1 on Windows XP Pro Service Pack 3.

Installation is aborted with the following error:

Unknown error while running C:\Documents and Settings\George Weaver\Local 
Settings\Temp\postgresql_installer_dc46cfee2c\getlocales.exe


bitrock_installer.log output below.

Would this be an issue with trying to install on XP?

Thanks
George

Log started 02/05/2015 at 11:17:31
Preferred installation mode : qt
Trying to init installer in mode qt
Mode qt successfully initialized
Executing C:\Documents and Settings\George Weaver\Local 
Settings\Temp/postgresql_installer_dc46cfee2c/temp_check_comspec.bat

Script exit code: 0

Script output:
test ok
Script stderr:
Could not find registry key 
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 Data 
Directory. Setting variable iDataDirectory to empty value
Could not find registry key 
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 Base 
Directory. Setting variable iBaseDirectory to empty value
Could not find registry key 
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 Service 
ID. Setting variable iServiceName to empty value
Could not find registry key 
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 Service 
Account. Setting variable iServiceAccount to empty value
Could not find registry key 
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 Super 
User. Setting variable iSuperuser to empty value
Could not find registry key 
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 
Branding. Setting variable iBranding to empty value
Could not find registry key 
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 Version. 
Setting variable brandingVer to empty value
Could not find registry key 
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 
Shortcuts. Setting variable iShortcut to empty value
Could not find registry key 
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 
DisableStackBuilder. Setting variable iDisableStackBuilder to empty value

[11:17:37] Existing base directory:
[11:17:37] Existing data directory:
[11:17:37] Using branding: PostgreSQL 9.4
[11:17:37] Using Super User: postgres and Service Account: NT 
AUTHORITY\NetworkService

[11:17:37] Using Service Name: postgresql-9.4
Executing cscript //NoLogo C:\Documents and Settings\George Weaver\Local 
Settings\Temp\postgresql_installer_dc46cfee2c\prerun_checks.vbs

Script exit code: 0
Script output:
The scripting host appears to be functional.
Script stderr:
Executing C:\Documents and Settings\George Weaver\Local 
Settings\Temp\postgresql_installer_dc46cfee2c\vcredist_x86.exe /passive 
/norestart

Script exit code: 0
Script output:
Script stderr:
Executing C:\Documents and Settings\George Weaver\Local 
Settings\Temp\postgresql_installer_dc46cfee2c\getlocales.exe

Script exit code:
Script output:
Script stderr:


Unknown error while running C:\Documents and Settings\George Weaver\Local 
Settings\Temp\postgresql_installer_dc46cfee2c\getlocales.exe










[GENERAL] domain gets cast to a text type

2015-02-13 Thread Igor Stassiy
Hi,

you might not be able to run this code, but maybe you know why there is a
type conversion with the domain; the concept might apply to other examples
as well. I have the following code:

CREATE FUNCTION get_key_jsonb(key text, j jsonb) RETURNS text
LANGUAGE plv8 IMMUTABLE STRICT
AS $$
  plv8.elog(NOTICE, typeof j);
  return "ok";
$$;

CREATE FUNCTION valid_jsonb(j jsonb) RETURNS boolean
LANGUAGE plv8 IMMUTABLE STRICT
AS $$
  return true;
$$;

select get_key_jsonb('ok', '{"ok": true}'::jsonb);

prints "object"; however, if we create a domain for the type

CREATE DOMAIN plv8.jsonb AS jsonb
CONSTRAINT jsonb_check CHECK (valid_jsonb(VALUE));

and replace function input type

DROP FUNCTION get_key_jsonb(text, jsonb);
CREATE FUNCTION get_key_jsonb(key text, j plv8.jsonb) RETURNS text
LANGUAGE plv8 IMMUTABLE STRICT
AS $$
  plv8.elog(NOTICE, typeof j);
  return "ok";
$$;

select get_key_jsonb('ok', '{"ok": true}'::jsonb);

prints "string".

So there is some conversion of the original type 'jsonb' to 'text' when
passed to v8, depending on whether the function argument is defined via a
domain. Do you have an idea why this is the case? Is this the expected
behaviour?

-Igor


Re: [GENERAL] How to hide stored procedure's bodies from specific user

2015-02-13 Thread Merlin Moncure
On Fri, Feb 13, 2015 at 5:17 AM, Saimon Lim aimon.s...@gmail.com wrote:
 Thanks for your help

 I want to restrict some postgres users as much as possible and allow them to
 execute a few of my own stored procedures only.

 If I block access using:

 REVOKE ALL ON pg_catalog.pg_proc FROM PUBLIC;
 REVOKE ALL ON FUNCTION pg_catalog.pg_get_functiondef(oid) FROM PUBLIC;

 the user will fail to select the procedure's body from pg_catalog.pg_proc and
 using psql \sf or \ef.
 Is this method correct?

[FYI -- please try to avoid top-posting]

That is correct. However, I'm not guaranteeing that it's a 100% clean
solution...there may be other loopholes you have to close also.  For
example, if you get an unhandled error inside a function the database
will send the error context back to the client.  Basically you're
playing 'whack-a-mole' -- however, it's not too difficult to hide
stored procedure bodies from the *casual observer* if you take certain
precautions.

A stronger approach would be to write a wrapper for pl/pgsql that
encrypts the function bodies (this is not trivial), or to write them
in C as John noted.

merlin




Re: [GENERAL] segmentation fault postgres 9.3.5 core dump perlu related ?

2015-02-13 Thread Day, David
Guy,

No I had not seen that bug report before.  ( 
https://rt.perl.org/Public/Bug/Display.html?id=122199 )

We did migrate from FreeBSD 9.x (2?) and I think it is true
that we were not experiencing the problem at that time,
so it might be a good fit/explanation for our current experience.

There were a couple of suggestions to follow up on.
I’ll keep the thread updated.

Thanks, a  good start to my  Friday the 13th.


Regards


Dave Day





From: Guy Helmer [mailto:ghel...@palisadesystems.com]
Sent: Thursday, February 12, 2015 6:19 PM
To: Day, David
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] segmentation fault postgres 9.3.5 core dump perlu 
related ?






Re: [GENERAL] Unknown error while running postgresql_installer_dc46cfee2c\getlocales.exe

2015-02-13 Thread Bill Moran

Purely out of curiosity ...

All support for Windows XP was dropped in April of last year ... and this
was many years after mainstream support was dropped (2009).

What possible reason could you have for still running Windows XP?
Furthermore, do you actually expect application vendors to still support
it with their applications?

In an effort to actually be helpful ...

Do you _need_ 9.4? Can you go back (possibly to 9.2) to a version that was
in use closer to when XP was still supported? I don't know if that will
help, but it seems like a reasonable thing to try.

On Fri, 13 Feb 2015 08:10:54 -0600
George Weaver gwea...@shaw.ca wrote:

 Hi Glenn,
 
 No, and neither did I receive any responses.
 
 I did find the following reference that 9.4 is officially not supported on 
 XP, Server 2003 or Vista, however I have not seen anything official on the 
 PostgreSQL or Enterprise DB sites.
 
 http://www.postgresql.org/message-id/canfyu97_wyaxjge_kegr62mqwovy0exjbqec9cl8stlkakw...@mail.gmail.com
 
 In fact, the Enterprise DB site states: The 9.x installers are supported on 
  and Windows XP and above.
 
 All the best,
 George
 
 - Original Message - 
 From: Bald, Glenn
 To: gwea...@shaw.ca ; pgsql-general@postgresql.org
 Sent: Friday, February 13, 2015 5:49 AM
 Subject: Unknown error while running  
 postgresql_installer_dc46cfee2c\getlocales.exe
 
 
 Hi George,
 Did you find a resolution? I have exactly the same problem.  Same postgres 
 version, same xp with sp3, same error log.
 Thanks in advance
 Glenn
 Glenn Bald | GIS Analyst | Information Services
 Forestry Commission
 
 Unknown error while running  
 postgresql_installer_dc46cfee2c\getlocales.exe
 From:George Weaver gweaver(at)shaw(dot)ca
 To:pgsql-general pgsql-general(at)postgresql(dot)org
 Subject:Unknown error while running  
 postgresql_installer_dc46cfee2c\getlocales.exe
 Date:2015-02-06 22:07:01
 Message-ID:5B0276776FFA40499867E955C65F3251@D420 (view raw or flat)
 Thread:2015-02-06 22:07:01 from George Weaver gweaver(at)shaw(dot)ca
 Lists:pgsql-general
 
 Hi List,
 
 Trying to install PostgresSQL 9.4.1 on Window XP Pro Service Pack 3.
 
 Installation is aborted with the following error:
 
 Unknown error while running C:\Documents and Settings\George Weaver\Local 
 Settings\Temp\postgresql_installer_dc46cfee2c\getlocales.exe
 
 bitrock_installer.log output below.
 
 Would this be an issue with trying to install on XP?
 
 Thanks
 George
 

Re: [GENERAL] Unknown error while running postgresql_installer_dc46cfee2c\getlocales.exe

2015-02-13 Thread Bald, Glenn
I would not be using XP if I had a choice. Actually I would not be using
Windows if I had my own way. It's all to do with legacy systems.

I have installed 9.3.6.1, which should be fine. This seems to be
working, or at least it has installed. I will test when I get the time.

-Original Message-
From: Bill Moran [mailto:wmo...@potentialtech.com] 
Sent: 13 February 2015 14:28
To: George Weaver
Cc: Bald, Glenn; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Unknown error while running 
postgresql_installer_dc46cfee2c\getlocales.exe


Purely out of curiosity ...

All support for Windows XP was dropped in April of last year ... and
this was many years after mainstream support was dropped (2009).

What possible reason could you have for still running Windows XP?
Furthermore, do you actually expect application vendors to still support
it with their applications?

In an effort to actually be helpful ...

Do you _need_ 9.4? Can you go back (possibly to 9.2) to a version that
was in use closer to when XP was still supported? I don't know if that
will help, but it seems like a reasonable thing to try.

On Fri, 13 Feb 2015 08:10:54 -0600
George Weaver gwea...@shaw.ca wrote:

 Hi Glenn,
 
 No, and neither did I receive any responses.
 
 I did find the following reference that 9.4 is officially not 
 supported on XP, Server 2003 or Vista, however I have not seen 
 anything official on the PostgreSQL or Enterprise DB sites.
 
 http://www.postgresql.org/message-id/CANFyU97_WyAXjge_kEGR62MQwOVy0Exj
 bqec9cl8stlkakw...@mail.gmail.com
 
 In fact, the Enterprise DB site states: The 9.x installers are 
 supported on  and Windows XP and above.
 
 All the best,
 George
 
 - Original Message -
 From: Bald, Glenn
 To: gwea...@shaw.ca ; pgsql-general@postgresql.org
 Sent: Friday, February 13, 2015 5:49 AM
 Subject: Unknown error while running  
 postgresql_installer_dc46cfee2c\getlocales.exe
 
 
 Hi George,
 Did you find a resolution? I have exactly the same problem.  Same 
 postgres version, same xp with sp3, same error log.
 Thanks in advance
 Glenn
 Glenn Bald | GIS Analyst | Information Services Forestry Commission
 
 Unknown error while running 
 postgresql_installer_dc46cfee2c\getlocales.exe
 From:George Weaver gweaver(at)shaw(dot)ca To:pgsql-general 
 pgsql-general(at)postgresql(dot)org
 Subject:Unknown error while running  
 postgresql_installer_dc46cfee2c\getlocales.exe
 Date:2015-02-06 22:07:01
 Message-ID:5B0276776FFA40499867E955C65F3251@D420 (view raw or flat)
 Thread:2015-02-06 22:07:01 from George Weaver 
 gweaver(at)shaw(dot)ca Lists:pgsql-general
 
 Hi List,
 
 Trying to install PostgresSQL 9.4.1 on Window XP Pro Service Pack 3.
 
 Installation is aborted with the following error:
 
 Unknown error while running C:\Documents and Settings\George 
 Weaver\Local 
 Settings\Temp\postgresql_installer_dc46cfee2c\getlocales.exe
 
 bitrock_installer.log output below.
 
 Would this be an issue with trying to install on XP?
 
 Thanks
 George
 

Re: [GENERAL] Collection

2015-02-13 Thread Pavel Stehule
2015-02-13 14:13 GMT+01:00 Ramesh T rameshparnandit...@gmail.com:

 COLLECT




Hi

It depends on what you need. Collections are not supported by PostgreSQL
- use arrays instead.

http://www.postgresql.org/docs/9.4/static/arrays.html
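
A minimal sketch of the usual substitution (the table and column names here
are hypothetical):

-- Oracle:   cast(COLLECT(r_id) as num)
-- Postgres: aggregate the values into an array instead
SELECT array_agg(r_id) FROM some_table;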

Regards

Pavel Stehule


Re: [GENERAL] infinite recursion detected in rules for relation

2015-02-13 Thread David G Johnston
User-created rules are almost never the correct solution. There are too
many caveats, and views can accomplish nearly everything that a user might
want.

David J.
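
For the filter-by-today case above, a minimal sketch of the view approach
(assuming a hypothetical table log_table(who text, ts timestamptz)):

-- a plain view instead of a SELECT rule; no recursion risk
CREATE VIEW todays_rows AS
  SELECT *
    FROM log_table
   WHERE ts >= CURRENT_DATE;   -- rows inserted today

-- on 9.3+ such a simple single-table view is automatically updatable as well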

On Friday, February 13, 2015, pinker [via PostgreSQL] 
ml-node+s1045698n583782...@n5.nabble.com wrote:

 hmm I wanted to show only rows that was inserted today, so instead of
 who='me' wanted to filter for instance where timestamp_column=CURRENT_DATE.

 Yes, a view would be a solution but I thouhgt that's the case rules were
 made for? Isn't it?







[GENERAL] Collection

2015-02-13 Thread Ramesh T
cast(COLLECT (r_id) as num) in Oracle..

is there a *collect* function in postgres plpgsql? or any alternative for
this?

thanks in advance,


Re: [GENERAL] What's a reasonable maximum number for table partitions?

2015-02-13 Thread Bill Moran
On Fri, 13 Feb 2015 11:12:13 -0500
Vick Khera vi...@khera.org wrote:

 On Thu, Feb 12, 2015 at 7:44 PM, Tim Uckun timuc...@gmail.com wrote:
 
  Does anybody have experience with huge number of partitions if so where
  did you start running into trouble?
 
 I use an arbitrary 100-way split for a lot of tracking info. Just modulo
 100 on the ID column. I've never had any issues with that. If you can
 adjust your queries to pick the right partition ahead of time, which I am
 able to do for many queries, the number of partitions shouldn't matter
 much. Only rarely do I need to query the primary table.
 
 I don't think your plan for 365 partitions is outrageous on modern large
 hardware. For 1000 partitions, I don't know. It will depend on how you can
 optimize your queries before giving them to postgres.

Worked on a project last year where we did 256 partitions. Didn't experience
any problems, but I don't recall if 256 was an arbitrary number or if we
did any significant testing into whether it was the sweet spot. In any event,
we did a LOT of performance testing and found that 256 partitions performed
very well. I second Vick's comments on selecting the partition ahead of time,
in particular, we realized HUGE performance gains on inserts when our code
determined the partition ahead of time and inserted directly into the
partition instead of into the primary table.
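
A minimal sketch of the idea, assuming hypothetical modulo-based child tables
tracking_00 .. tracking_99 under a parent table tracking:

-- inserting into the parent relies on a trigger to route the row:
INSERT INTO tracking (id, payload) VALUES (1234, 'x');

-- the application computes 1234 % 100 = 34 and targets the child directly,
-- skipping the routing trigger entirely:
INSERT INTO tracking_34 (id, payload) VALUES (1234, 'x');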

-- 
Bill Moran




Re: [GENERAL] What's a reasonable maximum number for table partitions?

2015-02-13 Thread Vick Khera
On Fri, Feb 13, 2015 at 11:29 AM, Seref Arikan 
serefari...@kurumsalteknoloji.com wrote:

 Hi Bill,
 Could you point at some resource(s) that discuss inserting directly into
 the partition? Would it be possible to read directly from the partition as
 well?

 When preparing your SQL statement, you just specify the partition directly
like this. Here's a snippet from my code in perl.

my $msg_recipients_modulo = 100; # number of partitions

sub msg_recipients_part($) {
 use integer;
 my $id = shift;
 my $part = $id % $msg_recipients_modulo;
 return 'msg_recipients_' . sprintf('%02d',$part);
}

then when generating the SQL you do

$table = msg_recipients_part($msg_id);
$sql = "SELECT ... FROM $table WHERE ..."

or something similar for insert/update.


Re: [GENERAL] What's a reasonable maximum number for table partitions?

2015-02-13 Thread Seref Arikan
Thanks, google it is then ;)



On Fri, Feb 13, 2015 at 4:31 PM, Bill Moran wmo...@potentialtech.com
wrote:

 On Fri, 13 Feb 2015 16:29:02 +
 Seref Arikan serefari...@kurumsalteknoloji.com wrote:

  Hi Bill,
  Could you point at some resource(s) that discuss inserting directly into
  the partition?

 Not off the top of my head, I would have to google just like
 you would.

  Would it be possible to read directly from the partition as
  well?

 Certainly. Whatever process you use to determine the name of the
 partition in the partition config will work anywhere else. For example,
 if the partition config is picking a table partition based on % 256
 (which is what we were doing, based on an integer entity ID) you can
 do that same math in whatever programming language the application is
 written in (in our case it was PHP).

  On Fri, Feb 13, 2015 at 4:15 PM, Bill Moran wmo...@potentialtech.com
  wrote:
 
   On Fri, 13 Feb 2015 11:12:13 -0500
   Vick Khera vi...@khera.org wrote:
  
On Thu, Feb 12, 2015 at 7:44 PM, Tim Uckun timuc...@gmail.com
 wrote:
   
 Does anybody have experience with huge number of partitions if so
 where
 did you start running into trouble?
   
I use an arbitrary 100-way split for a lot of tracking info. Just
 modulo
100 on the ID column. I've never had any issues with that. If you can
adjust your queries to pick the right partition ahead of time, which
 I am
able to do for many queries, the number of partitions shouldn't
 matter
much. Only rarely do I need to query the primary table.
   
I don't think your plan for 365 partitions is outrageous on modern
 large
hardware. For 1000 partitions, I don't know. It will depend on how
 you
   can
optimize your queries before giving them to postgres.
  
   Worked on a project last year where we did 256 partitions. Didn't
   experience
   any problems, but I don't recall if 256 was an arbitrary number or if
 we
   did any significant testing into whether it was the sweet spot. In any
   event,
   we did a LOT of performance testing and found that 256 partitions
 performed
   very well. I second Vick's comments on selecting the partition ahead of
   time,
   in particular, we realized HUGE performance gains on inserts when our
 code
   determined the partition ahead of time and inserted directly into the
   partition instead of into the primary table.
  
   --
   Bill Moran
  
  
   --
   Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
   To make changes to your subscription:
   http://www.postgresql.org/mailpref/pgsql-general
  


 --
 Bill Moran



Re: [GENERAL] infinite recursion detected in rules for relation

2015-02-13 Thread David G Johnston
On Fri, Feb 13, 2015 at 8:24 AM, pinker [via PostgreSQL] 
ml-node+s1045698n583786...@n5.nabble.com wrote:

 Ok, but in this particular case I don't see any caveats


You mean other than the infinite recursion, right?

and think that could be classic case for rule to be used.
 If it is almost never the correct solution why rules still exists at
 all?


Backward compatibility, the "almost", and the fact that views use rules
as an implementation mechanism.

David J.





Re: [GENERAL] What's a reasonable maximum number for table partitions?

2015-02-13 Thread Seref Arikan
Hi Bill,
Could you point at some resource(s) that discuss inserting directly into
the partition? Would it be possible to read directly from the partition as
well?

Regards
Seref


On Fri, Feb 13, 2015 at 4:15 PM, Bill Moran wmo...@potentialtech.com
wrote:

 On Fri, 13 Feb 2015 11:12:13 -0500
 Vick Khera vi...@khera.org wrote:

  On Thu, Feb 12, 2015 at 7:44 PM, Tim Uckun timuc...@gmail.com wrote:
 
   Does anybody have experience with huge number of partitions if so where
   did you start running into trouble?
 
  I use an arbitrary 100-way split for a lot of tracking info. Just modulo
  100 on the ID column. I've never had any issues with that. If you can
  adjust your queries to pick the right partition ahead of time, which I am
  able to do for many queries, the number of partitions shouldn't matter
  much. Only rarely do I need to query the primary table.
 
  I don't think your plan for 365 partitions is outrageous on modern large
  hardware. For 1000 partitions, I don't know. It will depend on how you
 can
  optimize your queries before giving them to postgres.

 Worked on a project last year where we did 256 partitions. Didn't
 experience
 any problems, but I don't recall if 256 was an arbitrary number or if we
 did any significant testing into whether it was the sweet spot. In any
 event,
 we did a LOT of performance testing and found that 256 partitions performed
 very well. I second Vick's comments on selecting the partition ahead of
 time,
 in particular, we realized HUGE performance gains on inserts when our code
 determined the partition ahead of time and inserted directly into the
 partition instead of into the primary table.

 --
 Bill Moran


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] What's a reasonable maximum number for table partitions?

2015-02-13 Thread Seref Arikan
Ah, I should have thought that it would be simple. Thanks a lot Vick.

Regards
Seref


On Fri, Feb 13, 2015 at 4:54 PM, Vick Khera vi...@khera.org wrote:

 On Fri, Feb 13, 2015 at 11:29 AM, Seref Arikan 
 serefari...@kurumsalteknoloji.com wrote:

 Hi Bill,
 Could you point at some resource(s) that discuss inserting directly into
 the partition? Would it be possible to read directly from the partition as
 well?

 When preparing your SQL statement, you just specify the partition
 directly like this. Here's a snippet from my code in perl.

 my $msg_recipients_modulo = 100; # number of partitions

 sub msg_recipients_part($) {
  use integer;
  my $id = shift;
  my $part = $id % $msg_recipients_modulo;
  return 'msg_recipients_' . sprintf('%02d',$part);
 }

 then in when generating sql you do

 $table =  msg_recipients_part($msg_id);
 $sql = SELECT FROM $table WHERE ...

 or something similar for insert/update.





Re: [GENERAL] Collection

2015-02-13 Thread Raymond O'Donnell
On 13/02/2015 13:13, Ramesh T wrote:
 cast(COLLECT (r_id) as num) in oracle..
 
 is their *collect *function in postgres plpgsql?or  any alternate
 for this..?

I don't use Oracle, but I think array_agg() is the closest - it
aggregates the column into an array.

postgres=# create table test(a integer, b text);
CREATE TABLE
postgres=# insert into test values (1, 'abc');
INSERT 0 1
postgres=# insert into test values (2, 'def');
INSERT 0 1
postgres=# insert into test values (2, 'ghi');
INSERT 0 1
postgres=# select a, array_agg(b) from test group by a;
 a | array_agg
---+---
 1 | {abc}
 2 | {def,ghi}
(2 rows)


Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie




Re: [GENERAL] infinite recursion detected in rules for relation

2015-02-13 Thread pinker
Ok, but in this particular case I don't see any caveats and think that could
be a classic case for a rule to be used.
If it is almost never the correct solution, why do rules still exist at all?







Re: [GENERAL] What's a reasonable maximum number for table partitions?

2015-02-13 Thread Bill Moran
On Fri, 13 Feb 2015 16:29:02 +
Seref Arikan serefari...@kurumsalteknoloji.com wrote:

 Hi Bill,
 Could you point at some resource(s) that discuss inserting directly into
 the partition?

Not off the top of my head, I would have to google just like
you would.

 Would it be possible to read directly from the partition as
 well?

Certainly. Whatever process you use to determine the name of the
partition in the partition config will work anywhere else. For example,
if the partition config is picking a table partition based on % 256
(which is what we were doing, based on an integer entity ID) you can
do that same math in whatever programming language the application is
written in (in our case it was PHP).
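
A minimal sketch, assuming a hypothetical 256-way split with child tables
named like tracking_210 and keyed on entity_id:

-- 1234 % 256 = 210, so the application can query the child directly
SELECT * FROM tracking_210 WHERE entity_id = 1234;

-- versus querying the parent and relying on constraint exclusion
SELECT * FROM tracking WHERE entity_id = 1234;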

 On Fri, Feb 13, 2015 at 4:15 PM, Bill Moran wmo...@potentialtech.com
 wrote:
 
  On Fri, 13 Feb 2015 11:12:13 -0500
  Vick Khera vi...@khera.org wrote:
 
   On Thu, Feb 12, 2015 at 7:44 PM, Tim Uckun timuc...@gmail.com wrote:
  
Does anybody have experience with huge number of partitions if so where
did you start running into trouble?
  
   I use an arbitrary 100-way split for a lot of tracking info. Just modulo
   100 on the ID column. I've never had any issues with that. If you can
   adjust your queries to pick the right partition ahead of time, which I am
   able to do for many queries, the number of partitions shouldn't matter
   much. Only rarely do I need to query the primary table.
  
   I don't think your plan for 365 partitions is outrageous on modern large
   hardware. For 1000 partitions, I don't know. It will depend on how you
  can
   optimize your queries before giving them to postgres.
 
  Worked on a project last year where we did 256 partitions. Didn't
  experience
  any problems, but I don't recall if 256 was an arbitrary number or if we
  did any significant testing into whether it was the sweet spot. In any
  event,
  we did a LOT of performance testing and found that 256 partitions performed
  very well. I second Vick's comments on selecting the partition ahead of
  time,
  in particular, we realized HUGE performance gains on inserts when our code
  determined the partition ahead of time and inserted directly into the
  partition instead of into the primary table.
 
  --
  Bill Moran
 
 
  --
  Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-general
 


-- 
Bill Moran




Re: [GENERAL] What's a reasonable maximum number for table partitions?

2015-02-13 Thread Vick Khera
On Thu, Feb 12, 2015 at 7:44 PM, Tim Uckun timuc...@gmail.com wrote:

 Does anybody have experience with huge number of partitions if so where
 did you start running into trouble?


I use an arbitrary 100-way split for a lot of tracking info. Just modulo
100 on the ID column. I've never had any issues with that. If you can
adjust your queries to pick the right partition ahead of time, which I am
able to do for many queries, the number of partitions shouldn't matter
much. Only rarely do I need to query the primary table.

I don't think your plan for 365 partitions is outrageous on modern large
hardware. For 1000 partitions, I don't know. It will depend on how you can
optimize your queries before giving them to postgres.


Re: [GENERAL] SELECT, GROUP BY, and aggregates

2015-02-13 Thread Bill Moran

 Ryan Delaney ryan.dela...@gmail.com writes:
  Why couldn't an RDBMS such as postgres interpret a SELECT that omits the 
  GROUP
  BY as implicitly grouping by all the columns that aren't part of an 
  aggregate?

I'm Mr. Curious today ...

Why would you think that such a thing is necessary or desirable? Simply add the
columns to the GROUP BY clause and make the request unambiguous.

-- 
Bill Moran





Re: [GENERAL] SELECT, GROUP BY, and aggregates

2015-02-13 Thread Jeff Janes
On Fri, Feb 13, 2015 at 10:26 AM, Bill Moran wmo...@potentialtech.com
wrote:


  Ryan Delaney ryan.dela...@gmail.com writes:
   Why couldn't an RDBMS such as postgres interpret a SELECT that omits
 the GROUP
   BY as implicitly grouping by all the columns that aren't part of an
 aggregate?

 I'm Mr. Curious today ...

 Why would you think that such a thing is necessary or desirable? Simply
 add the
 columns to the GROUP BY clause and make the request unambiguous.


Where would the ambiguity be?

I waste an inordinate amount of time retyping select lists over into the
group by list, or copying and pasting and then deleting the aggregate
clauses.  It is an entirely pointless exercise.  I can't fault PostgreSQL
for following the standard, but it's too bad the standards aren't more
sensible.

Cheers,

Jeff


[GENERAL] Contrib build fault for pgdg postgres 9.2 at debian 6 (squeeze)

2015-02-13 Thread Sergey Burladyan
Hello All!

I install postgresql-server-dev-9.2 from
'deb http://apt.postgresql.org/pub/repos/apt/ squeeze-pgdg main'

and try to build a contrib module from 9.2, but it fails with an error:

$ make USE_PGXS=1 PG_CONFIG=/usr/lib/postgresql/9.2/bin/pg_config
gcc -g -O2 -I/usr/include/mit-krb5 -fPIC -pie -DLINUX_OOM_ADJ=0
-fno-omit-frame-pointer -I/usr/include/mit-krb5 -fPIC -pie
-DLINUX_OOM_ADJ=0 -fno-omit-frame-pointer -Wall -Wmissing-prototypes
-Wpointer-arith -Wdeclaration-after-statement -Wendif-labels
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
-fwrapv -g -pthread  -D_REENTRANT -D_THREAD_SAFE
-D_POSIX_PTHREAD_SEMANTICS -I/usr/include/postgresql -I. -I./
-I/usr/include/postgresql/9.2/server
-I/usr/include/postgresql/internal -D_GNU_SOURCE
-I/usr/include/libxml2  -I/usr/include/tcl8.5  -c -o pgbench.o
pgbench.c
In file included from pgbench.c:37:
/usr/include/postgresql/libpq-fe.h:547: error: expected '=', ',', ';',
'asm' or '__attribute__' before 'lo_lseek64'
/usr/include/postgresql/libpq-fe.h:551: error: expected '=', ',', ';',
'asm' or '__attribute__' before 'lo_tell64'
/usr/include/postgresql/libpq-fe.h:553: error: expected declaration
specifiers or '...' before 'pg_int64'
make: *** [pgbench.o] Error 1


How can I solve this?

/usr/include/postgresql/libpq-fe.h is from libpq-dev; it looks like
libpq-dev has the wrong version:

$ dpkg -S /usr/include/postgresql/libpq-fe.h
libpq-dev: /usr/include/postgresql/libpq-fe.h
$ apt-cache policy libpq-dev
libpq-dev:
  Installed: 9.4.1-1.pgdg60+1
  Candidate: 9.4.1-1.pgdg60+1
  Version table:
 *** 9.4.1-1.pgdg60+1 0
500 http://apt.postgresql.org/pub/repos/apt/ squeeze-pgdg/main
amd64 Packages
100 /var/lib/dpkg/status
 8.4.21-0squeeze1 0
500 http://mirror.yandex.ru/debian/ squeeze/main amd64 Packages
 8.4.20-0squeeze1 0
500 http://security.debian.org/ squeeze/updates/main amd64 Packages

-- 
Sergey Burladyan




Re: [GENERAL] SELECT, GROUP BY, and aggregates

2015-02-13 Thread Tom Lane
Ryan Delaney ryan.dela...@gmail.com writes:
 Why couldn't an RDBMS such as postgres interpret a SELECT that omits the GROUP
 BY as implicitly grouping by all the columns that aren't part of an aggregate?

Per SQL standard, a SELECT with aggregates but no GROUP BY is supposed to
give exactly one row.  What you suggest would not do that.
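
A minimal illustration, using a hypothetical table t(a int, b int):

-- no GROUP BY: the whole table is a single group, so exactly one row
SELECT count(*), max(b) FROM t;

-- once a appears ungrouped in the select list, the spec requires it in
-- GROUP BY, and the result then has one row per group, not one row total
SELECT a, count(*) FROM t GROUP BY a;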

In general we're not that much into assigning made-up semantics to cases
that are specifically disallowed by the spec.  It's usually not too
obvious what the result should be, so we run the risk that the SQL
committee might someday make a contrary decision.  More, this would lose
error detection, and reduce interoperability with other DBMSes that follow
the spec more faithfully.

regards, tom lane




Re: [GENERAL] Contrib build fault for pgdg postgres 9.2 at debian 6 (squeeze)

2015-02-13 Thread Tom Lane
Sergey Burladyan eshkin...@gmail.com writes:
 I install postgresql-server-dev-9.2 from
 'deb http://apt.postgresql.org/pub/repos/apt/ squeeze-pgdg main'

 and try to build contrib module from 9.2 but it fault with error:
 
 $ make USE_PGXS=1 PG_CONFIG=/usr/lib/postgresql/9.2/bin/pg_config
 gcc -g -O2 -I/usr/include/mit-krb5 -fPIC -pie -DLINUX_OOM_ADJ=0
 -fno-omit-frame-pointer -I/usr/include/mit-krb5 -fPIC -pie
 -DLINUX_OOM_ADJ=0 -fno-omit-frame-pointer -Wall -Wmissing-prototypes
 -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels
 -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
 -fwrapv -g -pthread  -D_REENTRANT -D_THREAD_SAFE
 -D_POSIX_PTHREAD_SEMANTICS -I/usr/include/postgresql -I. -I./
 -I/usr/include/postgresql/9.2/server
 -I/usr/include/postgresql/internal -D_GNU_SOURCE
 -I/usr/include/libxml2  -I/usr/include/tcl8.5  -c -o pgbench.o
 pgbench.c
 In file included from pgbench.c:37:
 /usr/include/postgresql/libpq-fe.h:547: error: expected '=', ',', ';',
 'asm' or '__attribute__' before 'lo_lseek64'
 /usr/include/postgresql/libpq-fe.h:551: error: expected '=', ',', ';',
 'asm' or '__attribute__' before 'lo_tell64'
 /usr/include/postgresql/libpq-fe.h:553: error: expected declaration
 specifiers or '...' before 'pg_int64'
 make: *** [pgbench.o] Error 1
 

'lo_lseek64' didn't appear until 9.3, so you have a version skew here.
The problem is evidently the -I/usr/include/postgresql which is causing
it to pull in some installed copy of libpq-fe.h instead of the one that's
in your 9.2 tree.

regards, tom lane




Re: [GENERAL] SELECT, GROUP BY, and aggregates

2015-02-13 Thread Bill Moran
On Fri, 13 Feb 2015 10:48:13 -0800
Jeff Janes jeff.ja...@gmail.com wrote:

 On Fri, Feb 13, 2015 at 10:26 AM, Bill Moran wmo...@potentialtech.com
 wrote:
 
   Ryan Delaney ryan.dela...@gmail.com writes:
Why couldn't an RDBMS such as postgres interpret a SELECT that omits
  the GROUP
BY as implicitly grouping by all the columns that aren't part of an
  aggregate?
 
  I'm Mr. Curious today ...
 
  Why would you think that such a thing is necessary or desirable? Simply
  add the
  columns to the GROUP BY clause and make the request unambiguous.

 Where would the ambiguity be?

With a large, complex query, trying to visually read through a list of
column selections to figure out which ones _aren't_ aggregated and will
be auto-GROUP-BYed would be ... tedious and error prone at best.

You're right, though, it wouldn't be ambiguous ... that was a poor
choice of words on my part.

 I waste an inordinate amount of time retyping select lists over into the
 group by list, or copying and pasting and then deleting the aggregate
 clauses.

Interesting ... I've never kept accurate track of the time I spend doing
things like that, but "inordinate" seems like quite a lot.

In my case, I'm a developer, so I would tend toward creating code on the
client side that automatically compiled the GROUP BY clause if I found
that scenarios like you describe were happening frequently. Of course,
that doesn't help a data analyst who's just writing queries.

 It is an entirely pointless exercise.  I can't fault PostgreSQL
 for following the standard, but its too bad the standards aren't more
 sensible.

I can't speak to the standard and its reasons for doing this, but there
are certainly some whacko things in the standard.

Thanks for the response.

-- 
Bill Moran




[GENERAL] How can I refer to an ANYELEMENT variable in postgresql dynamic SQL?

2015-02-13 Thread Christopher Currie
Cross-posting from stackoverflow in the hope of getting some additional
eyes on the question.

http://stackoverflow.com/questions/28505782/how-can-i-refer-to-an-anyelement-variable-in-postgresql-dynamic-sql

I'm trying to write a PostgreSQL function for table upserts that can be
used for any table. My starting point is taken from a concrete function for
a specific table type:

CREATE TABLE doodad(id BIGINT PRIMARY KEY, data JSON);
CREATE OR REPLACE FUNCTION upsert_doodad(d doodad) RETURNS VOID AS
  $BODY$
BEGIN
  LOOP
UPDATE doodad
   SET id = (d).id, data = (d).data
 WHERE id = (d).id;
IF found THEN
  RETURN;
END IF;

-- does not exist, or was just deleted.

BEGIN
  INSERT INTO doodad SELECT d.*;
  RETURN;
EXCEPTION when UNIQUE_VIOLATION THEN
  -- do nothing, and loop to try the update again
END;

  END LOOP;
END;
  $BODY$
LANGUAGE plpgsql;

The dynamic SQL version of this for any table that I've come up with is
here: SQL Fiddle

CREATE OR REPLACE FUNCTION upsert(target ANYELEMENT) RETURNS VOID AS
$$
DECLARE
  attr_name NAME;
  col TEXT;
  selectors TEXT[];
  setters TEXT[];
  update_stmt TEXT;
  insert_stmt TEXT;
BEGIN
  FOR attr_name IN SELECT a.attname
 FROM pg_index i
 JOIN pg_attribute a ON a.attrelid = i.indrelid
AND a.attnum = ANY(i.indkey)
WHERE i.indrelid = format_type(pg_typeof(target),
NULL)::regclass
  AND i.indisprimary
  LOOP
selectors := array_append(selectors, format('%1$s = target.%1$s',
attr_name));
  END LOOP;

  FOR col IN SELECT json_object_keys(row_to_json(target))
  LOOP
setters := array_append(setters, format('%1$s = (target).%1$s', col));
  END LOOP;

  update_stmt := format(
'UPDATE %s SET %s WHERE %s',
pg_typeof(target),
array_to_string(setters, ', '),
array_to_string(selectors, ' AND ')
  );
  insert_stmt := format('INSERT INTO %s SELECT (target).*',
pg_typeof(target));

  LOOP
EXECUTE update_stmt;
IF found THEN
  RETURN;
END IF;

BEGIN
  EXECUTE insert_stmt;
  RETURN;
EXCEPTION when UNIQUE_VIOLATION THEN
  -- do nothing
END;
  END LOOP;
END;
$$
LANGUAGE plpgsql;

When I attempt to use this function, I get an error:

SELECT * FROM upsert(ROW(1,'{}')::doodad);

ERROR: column "target" does not exist: SELECT * FROM
upsert(ROW(1,'{}')::doodad)

I tried changing the upsert statement to use placeholders, but I can't
figure out how to invoke it using the record:

EXECUTE update_stmt USING target;

ERROR: there is no parameter $2: SELECT * FROM upsert(ROW(1,'{}')::doodad)

EXECUTE update_stmt USING target.*;

ERROR: query "SELECT target.*" returned 2 columns: SELECT * FROM
upsert(ROW(1,'{}')::doodad)

I feel really close to a solution, but I can't figure out the syntax issues.



--

Christopher Currie

Engineering, Usermind

codemon...@usermind.com

206.353.2867 x109


Re: [GENERAL] SELECT, GROUP BY, and aggregates

2015-02-13 Thread Peter Eisentraut
On 2/13/15 1:48 PM, Jeff Janes wrote:
 I waste an inordinate amount of time retyping select lists over into the
 group by list, or copying and pasting and then deleting the aggregate
 clauses.  It is an entirely pointless exercise.  I can't fault
 PostgreSQL for following the standard, but its too bad the standards
 aren't more sensible.

An extension like GROUP BY ALL might be useful, without breaking much.

Also note that you can group by primary key only.
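
A minimal sketch of that, assuming hypothetical tables
orders(id int primary key, customer text) and items(order_id int):

-- id is the primary key, so customer is functionally dependent on it and
-- may appear in the select list without being listed in GROUP BY
SELECT o.id, o.customer, count(*)
  FROM orders o
  JOIN items i ON i.order_id = o.id
 GROUP BY o.id;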






Re: [GENERAL] SELECT, GROUP BY, and aggregates

2015-02-13 Thread Brian Dunavant
To lower the amount of time spent copy-pasting aggregate column names,
it's probably worth noting that Postgres will allow you to shortcut that
with the column position.  For example:

select long_column_name_A, long_column_name_b, count(1)
from foo
group by 1,2
order by 1,2

This works just fine.  It's not in the spec, but postgres supports it.
I'll leave it to others to argue about it being a best practice or
not.


On Fri, Feb 13, 2015 at 1:57 PM, Bill Moran wmo...@potentialtech.com wrote:
 On Fri, 13 Feb 2015 10:48:13 -0800
 Jeff Janes jeff.ja...@gmail.com wrote:

 On Fri, Feb 13, 2015 at 10:26 AM, Bill Moran wmo...@potentialtech.com
 wrote:

   Ryan Delaney ryan.dela...@gmail.com writes:
Why couldn't an RDBMS such as postgres interpret a SELECT that omits
  the GROUP
BY as implicitly grouping by all the columns that aren't part of an
  aggregate?
 
  I'm Mr. Curious today ...
 
  Why would you think that such a thing is necessary or desirable? Simply
  add the
  columns to the GROUP BY clause and make the request unambiguous.

 Where would the ambiguity be?

 With a large, complex query, trying to visually read through a list of
 column selections to figure out which ones _aren't_ aggregated and will
 be auto-GROUP-BYed would be ... tedious and error prone at best.

 You're right, though, it wouldn't be ambiguous ... that was a poor
 choice of words on my part.

 I waste an inordinate amount of time retyping select lists over into the
 group by list, or copying and pasting and then deleting the aggregate
 clauses.

 Interesting ... I've never kept accurate track of the time I spend doing
 things like that, but inordinate seems like quite a lot.

 In my case, I'm a developer so I would tend toward creating code on the
 client side that automatically compiled the GROUP BY clause if I found
 that scenarios like you describe were happening frequently. Of course,
 that doesn't help a data analyst who's just writing queries

 It is an entirely pointless exercise.  I can't fault PostgreSQL
 for following the standard, but its too bad the standards aren't more
 sensible.

 I can't speak to the standard and its reasons for doing this, but there
 are certainly some whacko things in the standard.

 Thanks for the response.

 --
 Bill Moran


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SELECT, GROUP BY, and aggregates

2015-02-13 Thread Igor Neyman


-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Brian Dunavant
Sent: Friday, February 13, 2015 2:11 PM
To: Bill Moran
Cc: Jeff Janes; Ryan Delaney; pgsql-general@postgresql.org
Subject: Re: [GENERAL] SELECT, GROUP BY, and aggregates

To lower the amount of time spent copy pasting aggregate column names, it's 
probably worth noting Postgres will allow you to short cut that with the column 
position.  For example:

select long_column_name_A, long_column_name_b, count(1) from foo group by 1,2 
order by 1,2

This works just fine.  It's not in the spec, but postgres supports it.
I'll leave it to others to argue about it being a best practice or not.

---

I use this feature a lot.

Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unknown error while running postgresql_installer_dc46cfee2c\getlocales.exe

2015-02-13 Thread George Weaver


- Original Message - 



Purely out of curiosity ...

All support for Windows XP was dropped in April of last year ... and this
was many years after mainstream support was dropped (2009).

What possible reason could you have for still running Windows XP?
Furthermore, do you actually expect application vendors to still support
it with their applications?



I work with every variation of Windows (XP/Win 7/Win 8) and Win Servers.
Moved last customer off Server 2003 yesterday...

I simply tried to install 9.4 on XP - didn't work.  You move on.  It would
have been nice having some indication not to try...

9.3 seems to work quite nicely on XP.

George


In an effort to actually be helpful ...

Do you _need_ 9.4? Can you go back (possibly to 9.2) to a version that was
in use closer to when XP was still supported? I don't know if that will
help, but it seems like a reasonable thing to try.

On Fri, 13 Feb 2015 08:10:54 -0600
George Weaver gwea...@shaw.ca wrote:


Hi Glenn,

No, and neither did I receive any responses.

I did find the following reference that 9.4 is officially not supported on
XP, Server 2003 or Vista, however I have not seen anything official on the
PostgreSQL or Enterprise DB sites.

http://www.postgresql.org/message-id/canfyu97_wyaxjge_kegr62mqwovy0exjbqec9cl8stlkakw...@mail.gmail.com

In fact, the Enterprise DB site states: The 9.x installers are supported on
 and Windows XP and above.

All the best,
George

- Original Message - 
From: Bald, Glenn

To: gwea...@shaw.ca ; pgsql-general@postgresql.org
Sent: Friday, February 13, 2015 5:49 AM
Subject: Unknown error while running 
postgresql_installer_dc46cfee2c\getlocales.exe


Hi George,
Did you find a resolution? I have exactly the same problem.  Same postgres
version, same xp with sp3, same error log.
Thanks in advance
Glenn
Glenn Bald | GIS Analyst | Information Services
Forestry Commission

Unknown error while running 
postgresql_installer_dc46cfee2c\getlocales.exe
From:George Weaver gweaver(at)shaw(dot)ca
To:pgsql-general pgsql-general(at)postgresql(dot)org
Subject:Unknown error while running 
postgresql_installer_dc46cfee2c\getlocales.exe
Date:2015-02-06 22:07:01
Message-ID:5B0276776FFA40499867E955C65F3251@D420
Thread:2015-02-06 22:07:01 from George Weaver gweaver(at)shaw(dot)ca
Lists:pgsql-general

Hi List,

Trying to install PostgresSQL 9.4.1 on Window XP Pro Service Pack 3.

Installation is aborted with the following error:

Unknown error while running C:\Documents and Settings\George Weaver\Local
Settings\Temp\postgresql_installer_dc46cfee2c\getlocales.exe

bitrock_installer.log output below.

Would this be an issue with trying to install on XP?

Thanks
George

Log started 02/05/2015 at 11:17:31
Preferred installation mode : qt
Trying to init installer in mode qt
Mode qt successfully initialized
Executing C:\Documents and Settings\George Weaver\Local
Settings\Temp/postgresql_installer_dc46cfee2c/temp_check_comspec.bat
Script exit code: 0

Script output:
 test ok
Script stderr:
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 Data
Directory. Setting variable iDataDirectory to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 Base
Directory. Setting variable iBaseDirectory to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 
Service

ID. Setting variable iServiceName to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 
Service

Account. Setting variable iServiceAccount to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 Super
User. Setting variable iSuperuser to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4
Branding. Setting variable iBranding to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 
Version.

Setting variable brandingVer to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4
Shortcuts. Setting variable iShortcut to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4
DisableStackBuilder. Setting variable iDisableStackBuilder to empty value
[11:17:37] Existing base directory:
[11:17:37] Existing data directory:
[11:17:37] Using branding: PostgreSQL 9.4
[11:17:37] Using Super User: postgres and Service Account: NT
AUTHORITY\NetworkService
[11:17:37] Using Service Name: postgresql-9.4
Executing cscript //NoLogo C:\Documents and Settings\George Weaver\Local
Settings\Temp\postgresql_installer_dc46cfee2c\prerun_checks.vbs
Script exit code: 0
Script output:
 The scripting host appears to be functional.
Script stderr:
Executing C:\Documents and Settings\George Weaver\Local

Re: [GENERAL] What's a reasonable maximum number for table partitions?

2015-02-13 Thread Bill Moran
On Sat, 14 Feb 2015 11:14:10 +1300
Tim Uckun timuc...@gmail.com wrote:

 If I used modulo arithmetic how would the query optimizer know which table
 to include and exclude? For example say I did modulo 100 based on the field
 client_id.  I create a base table with the trigger to insert the data into
 the proper child table. Each table has the constraint (client_id % 100) = X
 
 So if I do select from base table where client_id = 10  would postgres know
 to only select from client_table_10? Normally I would always have a
 client_id in my queries so hopefully the this could be very efficient.

Unless the newest versions of PostgreSQL have improved on this, you have to
give the planner just a bit of a hint ... your query should look like:

SELECT ... WHERE client_id = 10 AND client_id % 100 = 10;

The part after the AND looks silly and redundant, but it guarantees that
the planner will consider the partition layout when it plans the query,
and in every test that I've run the result will be that the planner only
looks at the one child table.
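
A rough sketch of the layout being described (table and column names invented);
the redundant-looking predicate is what lets constraint exclusion kick in:

CREATE TABLE clients (client_id bigint, payload text);
CREATE TABLE clients_10 (CHECK ((client_id % 100) = 10)) INHERITS (clients);

-- With constraint_exclusion = partition (the default), repeating the modulo
-- expression lets the planner prove the other children can be skipped:
EXPLAIN SELECT *
FROM clients
WHERE client_id = 110
  AND client_id % 100 = 10;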

 On Sat, Feb 14, 2015 at 5:12 AM, Vick Khera vi...@khera.org wrote:
 
 
  On Thu, Feb 12, 2015 at 7:44 PM, Tim Uckun timuc...@gmail.com wrote:
 
  Does anybody have experience with huge number of partitions if so where
  did you start running into trouble?
 
 
  I use an arbitrary 100-way split for a lot of tracking info. Just modulo
  100 on the ID column. I've never had any issues with that. If you can
  adjust your queries to pick the right partition ahead of time, which I am
  able to do for many queries, the number of partitions shouldn't matter
  much. Only rarely do I need to query the primary table.
 
  I don't think your plan for 365 partitions is outrageous on modern large
  hardware. For 1000 partitions, I don't know. It will depend on how you can
  optimize your queries before giving them to postgres.
 


-- 
Bill Moran


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SELECT, GROUP BY, and aggregates

2015-02-13 Thread John McKown
I will take a bit of a contrarian position from the OP. I, personally,
prefer that computer _languages_ do exactly and only what _I_ tell them to
do. I do _NOT_ want them to do things for me. IMO, that is why many programs
are unreliable. They make an assumption which is not what the original
programmer really wanted. Of course, I _do_ like having a good IDE which
will help me with suggestions which are based on what I have already typed
in and what else is possible. In the OP's case, that would be something
which would look at the SQL I have already typed in, and have a
highlighted set of column names in the GROUP BY based on the names in the
SELECT. This would complicate the editor, but (again in my opinion) this is
where the help should be available. Basically, I want the _application_
programmer to be responsible for the SELECT, not the SQL engine programmer.

On Fri, Feb 13, 2015 at 1:11 PM, Brian Dunavant br...@omniti.com wrote:

 To lower the amount of time spent copy pasting aggregate column names,
 it's probably worth noting Postgres will allow you to short cut that
 with the column position.  For example:

 select long_column_name_A, long_column_name_b, count(1)
 from foo
 group by 1,2
 order by 1,2

 This works just fine.  It's not in the spec, but postgres supports it.
 I'll leave it to others to argue about it being a best practice or
 not.


 On Fri, Feb 13, 2015 at 1:57 PM, Bill Moran wmo...@potentialtech.com
 wrote:
  On Fri, 13 Feb 2015 10:48:13 -0800
  Jeff Janes jeff.ja...@gmail.com wrote:
 
  On Fri, Feb 13, 2015 at 10:26 AM, Bill Moran wmo...@potentialtech.com
  wrote:
 
Ryan Delaney ryan.dela...@gmail.com writes:
 Why couldn't an RDBMS such as postgres interpret a SELECT that
 omits
   the GROUP
 BY as implicitly grouping by all the columns that aren't part of
 an
   aggregate?
  
   I'm Mr. Curious today ...
  
   Why would you think that such a thing is necessary or desirable?
 Simply
   add the
   columns to the GROUP BY clause and make the request unambiguous.
 
  Where would the ambiguity be?
 
  With a large, complex query, trying to visually read through a list of
  column selections to figure out which ones _aren't_ aggregated and will
  be auto-GROUP-BYed would be ... tedious and error prone at best.
 
  You're right, though, it wouldn't be ambiguous ... that was a poor
  choice of words on my part.
 
  I waste an inordinate amount of time retyping select lists over into the
  group by list, or copying and pasting and then deleting the aggregate
  clauses.
 
  Interesting ... I've never kept accurate track of the time I spend doing
  things like that, but inordinate seems like quite a lot.
 
  In my case, I'm a developer so I would tend toward creating code on the
  client side that automatically compiled the GROUP BY clause if I found
  that scenarios like you describe were happening frequently. Of course,
  that doesn't help a data anaylyst who's just writing queries
 
  It is an entirely pointless exercise.  I can't fault PostgreSQL
  for following the standard, but its too bad the standards aren't more
  sensible.
 
  I can't speak to the standard and it's reasons for doing this, but there
  are certainly some whacko things in the standard.
 
  Thanks for the response.
 
  --
  Bill Moran
 
 
  --
  Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-general


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! 
John McKown


Re: [GENERAL] Hardware requirements for a PostGIS server

2015-02-13 Thread Mathieu Basille
Thanks to Gavin and Alban for additional considerations, all very useful.
As for Linux, I have to admit that I am biased too! I use it heavily, which
is the reason I would be inclined to use it. But after all, since I'm not
going to administer the server, the best choice will probably be IT's
choice! As long as it makes sense from a user perspective (performance,
feature set, usability, etc.)...


Thanks again! As usual, feel free to add more to the discussion, but there
is already plenty of information here that is very useful to start the
process with IT in my center!


Best,
Mathieu.


On 12/02/2015 08:34, Alban Hertroys wrote:

On 12 February 2015 at 00:38, Mathieu Basille
basille@ase-research.org wrote:


Platform


Linux is the platform of choice:
* Easier administration (install/configuration/upgrade), which is also true
for addons/dependencies (starting with PostGIS, but also GEOS, GDAL, PL/R);
* Better performance [4];
* More tuning options (limited with MS systems);


It has to be said that Linux isn't the only choice there. For example,
FreeBSD (or NetBSD/OpenBSD) are popular choices for Postgres database
servers as well and they perform great and have splendid documentation
(an area where I often find Linux a little lacking). They might even
be a bit more stable.

There are also still several commercial UNIX flavours. I can't say how
any of these alternatives (in combination with PostGIS) compare to
Linux though, nor whether PostGIS is even available on all of these,
but I suspect they're at least on par for performance and stability.

Of all of these, Windows is probably the least suitable OS for the job.

Which is the most suitable depends on quite a few things, not in the
least how likely you'll be able to get experienced support for them.
If you're new to the OS and you have to support the system yourself
for any length of time, I think that good documentation is a factor to
take into account.

Am I biased? Definitely.



--

~$ whoami
Mathieu Basille
http://ase-research.org/basille

~$ locate --details
University of Florida \\
Fort Lauderdale Research and Education Center
(+1) 954-577-6314

~$ fortune
« Le tout est de tout dire, et je manque de mots
Et je manque de temps, et je manque d'audace. »
 -- Paul Éluard



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [HACKERS] question on Postgres smart shutdown mode

2015-02-13 Thread Jim Nasby

On 2/12/15 5:46 PM, Bo Tian wrote:

I have a question on PG smart shutdown mode.

When shutting down Postgres by issuing a Smart Shutdown mode
(SIGTERM) request, is there a way for a client to be notified of this
shutdown event? I tried PG_NOTIFY, but I cannot get any
notification events when this happens. BTW, I am relatively new to Postgres.


-hackers is for development of PostgreSQL, not general questions. I'm 
moving this to -general.


To answer your question, I don't believe there's any way to get 
notification. At least, grepping for shutdown in src/backend doesn't 
turn up anything obvious.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What's a reasonable maximum number for table partitions?

2015-02-13 Thread Bill Moran
On Sat, 14 Feb 2015 11:27:52 +1300
Tim Uckun timuc...@gmail.com wrote:

 This might get pretty crazy if I am doing queries like WHERE client_id in
 () or when I am trying to join some table with the client table.   Maybe I
 can precalculate the ids that are going to go into each partition and set
 the constraint as where client_id in (some_huge_list).

The general caveat is that if client_id IN (some huge list), the list will
probably cover most or all of the partitions; PostgreSQL will have to look
at all of them regardless, so giving it the hint probably isn't going to
help in that case.

Obviously, there are corner cases to every generalization, but overall,
the research I did last year showed that partitioning was a big win when
frequent queries could narrow the number of partitions down to just a few,
and in those cases, constructing a WHERE clause that created the desired
effect was pretty easy.  In our case, since the case of selecting from
more than one table partition was unusual, we decided not to put the
effort into optimizing those queries.

This is why it's SO important to partition on the right column. Getting
the wrong column can have you doing all the work of partitioning with
none or few of the benefits. Think of it this way, if I have an orders
table, I can partition on order_id or customer_id or product_id. There
are probably other options, but let's just consider those for this
exercise.

If I partition on order_id, then the case of selecting a single order
when the order_id is known will be very fast, and queries that select
based on product or customer will benefit little if any at all.

If I partition on customer_id, then customers will be able to find all
of their orders very efficiently, but queries that look for all orders
of a specific product or searches for a specific order_id won't benefit
much, if any.

If I partition on product_id, then queries to find out who's buying a
particular product will be very efficient, but queries based on customer
or order_id won't benefit much, if any.

Which one you choose is dependent on your expected workload, and your
performance requirements. You may decide that customers need speed the
most, and that crunching data on products is something that it's OK
to be a little slower. You may find that the most frequent query is to
search by order_id, so overall your application will see the most
benefit by super-optimizing that request. There's no one answer that's
right for every situation ... at least not in my experience.
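
A toy illustration of that trade-off (names invented), with the table
partitioned on customer_id:

CREATE TABLE orders (order_id bigint, customer_id bigint, product_id bigint);
CREATE TABLE orders_c7 (CHECK (customer_id % 10 = 7)) INHERITS (orders);

-- prunes to orders_c7:
SELECT * FROM orders WHERE customer_id = 1237 AND customer_id % 10 = 7;

-- still visits every child, since no CHECK constraint mentions product_id:
SELECT * FROM orders WHERE product_id = 42;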

 On Sat, Feb 14, 2015 at 11:15 AM, Bill Moran wmo...@potentialtech.com
 wrote:
 
  On Sat, 14 Feb 2015 11:14:10 +1300
  Tim Uckun timuc...@gmail.com wrote:
 
   If I used modulo arithmetic how would the query optimizer know which
  table
   to include and exclude? For example say I did modulo 100 based on the
  field
   client_id.  I create a base table with the trigger to insert the data
  into
   the proper child table. Each table has the constraint (client_id % 100)
  = X
  
   So if I do select from base table where client_id = 10  would postgres
  know
   to only select from client_table_10? Normally I would always have a
   client_id in my queries so hopefully the this could be very efficient.
 
  Unless the newest versions of PostgreSQL has improved on this, you have to
  give the planner just a bit of a hint ... you're query should look like:
 
  SELET ... WHERE client_id = 10 AND client_id % 100 = 10;
 
  The part after the AND looks silly and redundant, but it guarantees that
  the planner will consider the partition layout when it plans the query,
  and in every test that I've run the result will be that the planner only
  looks at the one child table.
 
   On Sat, Feb 14, 2015 at 5:12 AM, Vick Khera vi...@khera.org wrote:
  
   
On Thu, Feb 12, 2015 at 7:44 PM, Tim Uckun timuc...@gmail.com wrote:
   
Does anybody have experience with huge number of partitions if so
  where
did you start running into trouble?
   
   
I use an arbitrary 100-way split for a lot of tracking info. Just
  modulo
100 on the ID column. I've never had any issues with that. If you can
adjust your queries to pick the right partition ahead of time, which I
  am
able to do for many queries, the number of partitions shouldn't matter
much. Only rarely do I need to query the primary table.
   
I don't think your plan for 365 partitions is outrageous on modern
  large
hardware. For 1000 partitions, I don't know. It will depend on how you
  can
optimize your queries before giving them to postgres.
   
 
 
  --
  Bill Moran
 


-- 
Bill Moran


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What's a reasonable maximum number for table partitions?

2015-02-13 Thread Tim Uckun
This might get pretty crazy if I am doing queries like WHERE client_id in
() or when I am trying to join some table with the client table.   Maybe I
can precalculate the ids that are going to go into each partition and set
the constraint as where client_id in (some_huge_list).



On Sat, Feb 14, 2015 at 11:15 AM, Bill Moran wmo...@potentialtech.com
wrote:

 On Sat, 14 Feb 2015 11:14:10 +1300
 Tim Uckun timuc...@gmail.com wrote:

  If I used modulo arithmetic how would the query optimizer know which
 table
  to include and exclude? For example say I did modulo 100 based on the
 field
  client_id.  I create a base table with the trigger to insert the data
 into
  the proper child table. Each table has the constraint (client_id % 100)
 = X
 
  So if I do select from base table where client_id = 10  would postgres
 know
  to only select from client_table_10? Normally I would always have a
  client_id in my queries so hopefully the this could be very efficient.

 Unless the newest versions of PostgreSQL has improved on this, you have to
 give the planner just a bit of a hint ... you're query should look like:

 SELET ... WHERE client_id = 10 AND client_id % 100 = 10;

 The part after the AND looks silly and redundant, but it guarantees that
 the planner will consider the partition layout when it plans the query,
 and in every test that I've run the result will be that the planner only
 looks at the one child table.

  On Sat, Feb 14, 2015 at 5:12 AM, Vick Khera vi...@khera.org wrote:
 
  
   On Thu, Feb 12, 2015 at 7:44 PM, Tim Uckun timuc...@gmail.com wrote:
  
   Does anybody have experience with huge number of partitions if so
 where
   did you start running into trouble?
  
  
   I use an arbitrary 100-way split for a lot of tracking info. Just
 modulo
   100 on the ID column. I've never had any issues with that. If you can
   adjust your queries to pick the right partition ahead of time, which I
 am
   able to do for many queries, the number of partitions shouldn't matter
   much. Only rarely do I need to query the primary table.
  
   I don't think your plan for 365 partitions is outrageous on modern
 large
   hardware. For 1000 partitions, I don't know. It will depend on how you
 can
   optimize your queries before giving them to postgres.
  


 --
 Bill Moran



Re: [GENERAL] Question on session_replication_role

2015-02-13 Thread Vasudevan, Ramya


 Here's the part that's slow. The index scan on each partition is taking 
 ~2.5ms, and is being repeated 1847 times *for each partition*.



 What is the table partitioned on?



The table is partitioned on registration_id.

CREATE TABLE emailsubscription.reg_email_subscriptions_p00
(
  CONSTRAINT reg_email_subscriptions_p00_pkey PRIMARY KEY (registration_id, subscription_id),
  CONSTRAINT reg_email_subscriptions_p00_registration_id_check CHECK ((abs(registration_id) % 8::bigint) = 0)
)
INHERITS (emailsubscription.reg_email_subscriptions)
WITH (
  OIDS=FALSE
);





Update:

We changed the query from

SELECT DISTINCT it.recipient_id
FROM  iru.iru_tags it
WHERE it.recipient_id BETWEEN 758587587 and 968696896
AND   it.status = 0
AND   it.last_update_date >= date_trunc('day', now() - interval '90 days')
AND   EXISTS (SELECT DISTINCT res.registration_id
              FROM  emailsubscription.reg_email_subscriptions res
              WHERE res.registration_id = it.recipient_id
              AND   res.subscription_id = 200400);

to:

SELECT DISTINCT it.recipient_id
FROM   iru.iru_tags it
WHERE  it.STATUS = 0
AND    it.last_update_date >= date_trunc('day', now() - interval '90 days')
AND    EXISTS (SELECT res.registration_id
               FROM   emailsubscription.reg_email_subscriptions res
               WHERE  res.registration_id = it.recipient_id
               AND    res.registration_id BETWEEN 8706059856 AND 8706077435
               AND    res.subscription_id = 200400);



And it slightly improved the performance.



  QUERY PLAN

 HashAggregate  (cost=733840.24..734045.35 rows=6837 width=8) (actual time=14208.223..14208.414 rows=891 loops=1)
   Output: it.recipient_id
   Buffers: shared hit=73563 read=18189
   I/O Timings: read=12490.324
   ->  Nested Loop  (cost=30901.28..733823.14 rows=6837 width=8) (actual time=6445.168..14203.967 rows=2547 loops=1)
         Output: it.recipient_id
         Buffers: shared hit=73563 read=18189
         I/O Timings: read=12490.324
         ->  HashAggregate  (cost=30900.72..31284.18 rows=12782 width=8) (actual time=6394.062..6413.045 rows=14452 loops=1)
               Output: res.registration_id
               Buffers: shared hit=14158 read=914
               I/O Timings: read=5771.423
               ->  Append  (cost=0.00..30868.76 rows=12782 width=8) (actual time=85.811..6361.937 rows=14452 loops=1)
                     Buffers: shared hit=14158 read=914
                     I/O Timings: read=5771.423
                     ->  Seq Scan on emailsubscription.reg_email_subscriptions res  (cost=0.00..3470.45 rows=1 width=8) (actual time=75.713..75.713 rows=0 loops=1)
                           Output: res.registration_id
                           Filter: ((res.registration_id >= 8706059856::bigint) AND (res.registration_id <= 8706077435::bigint) AND (res.subscription_id = 200400))
                           Rows Removed by Filter: 77239
                           Buffers: shared hit=569
                     ->  Index Only Scan using reg_email_subscriptions_p00_pkey on emailsubscription.reg_email_subscriptions_p00 res_1  (cost=0.57..3406.75 rows=1612 width=8) (actual time=10.095..611.086 rows=1802 loops=1)
                           Output: res_1.registration_id
                           Index Cond: ((res_1.registration_id >= 8706059856::bigint) AND (res_1.registration_id <= 8706077435::bigint) AND (res_1.subscription_id = 200400))
                           Heap Fetches: 1806
                           Buffers: shared hit=1695 read=110
                           I/O Timings: read=562.961
                     ->  Index Only Scan using reg_email_subscriptions_p01_pkey on emailsubscription.reg_email_subscriptions_p01 res_2  (cost=0.57..3061.12 rows=1401 width=8) (actual time=19.052..849.618 rows=1794 loops=1)
                           Output: res_2.registration_id
                           Index Cond: ((res_2.registration_id >= 8706059856::bigint) AND (res_2.registration_id <= 8706077435::bigint) AND (res_2.subscription_id = 200400))
                           Heap Fetches: 1794
                           Buffers: shared hit=1674 read=120
                           I/O Timings: read=739.525
                     ->  Index Only Scan using reg_email_subscriptions_p02_pkey on emailsubscription.reg_email_subscriptions_p02 res_3  (cost=0.57..3495.50 rows=1662 width=8) (actual time=19.480..1037.415 rows=1806 loops=1)
                           Output: res_3.registration_id
                           Index Cond: ((res_3.registration_id >= 

[GENERAL] Re: How can I refer to an ANYELEMENT variable in postgresql dynamic SQL?

2015-02-13 Thread David G Johnston
Christopher Currie wrote
 Cross-posting from stackoverflow in the hope of getting some additional
 eyes on the question.
 
 http://stackoverflow.com/questions/28505782/how-can-i-refer-to-an-anyelement-variable-in-postgresql-dynamic-sql
 
   update_stmt := format(
 'UPDATE %s SET %s WHERE %s',
 pg_typeof(target),
 array_to_string(setters, ', '),
 array_to_string(selectors, ' AND ')
   );
 
 [...]
 
 EXECUTE update_stmt USING target;
 
 ERROR: there is no parameter $2: SELECT * FROM upsert(ROW(1,'{}')::doodad)
 
 EXECUTE update_stmt USING target.*;
 
 ERROR: query SELECT target.* returned 2 columns: SELECT * FROM
 upsert(ROW(1,'{}')::doodad)

Haven't tried to determine or explain where you are exposing yourself to SQL
injection; but I'm pretty sure you are.

I suggest you learn the difference between a simple string, an SQL
identifier, and a SQL literal as described in the format function
documentation.  Choosing the correct one will offer some protection that you
are forgoing in your current code.  It will also help you better understand
where you can place parameters and where you have to inject data into the
source SQL string.
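
For instance (a throwaway example, not the poster's code), %I quotes an
identifier, %L quotes a literal, and %s splices raw text:

SELECT format('UPDATE %I SET %I = %L', 'doodad', 'data', '{}');
-- => UPDATE doodad SET data = '{}'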

With dynamic SQL putting the word target into the SQL string causes it to
look within that string for a source relation named target.  It will not
look to the calling environment (i.e., pl/pgsql) for a variable of that
name.

Your update_stmt above doesn't have any parameter placeholders so adding a
USING clause to the EXECUTE command is going to fail.
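
To make that concrete, here is a rough, untested sketch of how the inner part
of the function could be reshaped so the record travels through USING instead
of being spliced into the string (it keeps the poster's assumption that the
composite type has the same name as the table):

-- Build 'col = ($1).col' fragments; $1 is bound later by EXECUTE ... USING.
FOR col IN SELECT json_object_keys(row_to_json(target)) LOOP
  setters := array_append(setters, format('%1$I = ($1).%1$I', col));
END LOOP;

update_stmt := format(
  'UPDATE %s SET %s WHERE %s',
  pg_typeof(target),                     -- assumes type name matches table name
  array_to_string(setters, ', '),
  array_to_string(selectors, ' AND ')    -- selectors built the same way
);

EXECUTE update_stmt USING target;        -- $1 is the whole record; no relation
                                         -- named "target" is needed any more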

I have no clue why you are making use of pg_typeof(...).

Consider that (I think...): 'UPDATE %s SET ...' with pg_typeof(1.00) becomes
'UPDATE numeric SET ...'

The function itself also has no protection from race conditions...

Hopefully between the above observations and the documentation you will be
able to at least build up an executable dynamic sql statement - whether it
is safe is another matter entirely.

I would suggest you try building up simpler statements first.

Lastly, I'm not sure how or whether your issues have anything to do with
ANYELEMENT; but I am doubtful that is the case.

David J.





--
View this message in context: 
http://postgresql.nabble.com/How-can-I-refer-to-an-ANYELEMENT-variable-in-postgresql-dynamic-SQL-tp5837899p5837927.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What's a reasonable maximum number for table partitions?

2015-02-13 Thread Tim Uckun
If I used modulo arithmetic how would the query optimizer know which table
to include and exclude? For example say I did modulo 100 based on the field
client_id.  I create a base table with the trigger to insert the data into
the proper child table. Each table has the constraint (client_id % 100) = X

So if I do select from base table where client_id = 10  would postgres know
to only select from client_table_10? Normally I would always have a
client_id in my queries so hopefully this could be very efficient.





On Sat, Feb 14, 2015 at 5:12 AM, Vick Khera vi...@khera.org wrote:


 On Thu, Feb 12, 2015 at 7:44 PM, Tim Uckun timuc...@gmail.com wrote:

 Does anybody have experience with huge number of partitions if so where
 did you start running into trouble?


 I use an arbitrary 100-way split for a lot of tracking info. Just modulo
 100 on the ID column. I've never had any issues with that. If you can
 adjust your queries to pick the right partition ahead of time, which I am
 able to do for many queries, the number of partitions shouldn't matter
 much. Only rarely do I need to query the primary table.

 I don't think your plan for 365 partitions is outrageous on modern large
 hardware. For 1000 partitions, I don't know. It will depend on how you can
 optimize your queries before giving them to postgres.



Re: [GENERAL] How to hide stored procedure's bodies from specific user

2015-02-13 Thread Saimon Lim
Thanks for your help

I want to restrict some postgres users as much as possible and allow them
to execute only a few of my own stored procedures.

If I block access using:

REVOKE ALL ON pg_catalog.pg_proc FROM PUBLIC;
REVOKE ALL ON FUNCTION pg_catalog.pg_get_functiondef(oid) FROM PUBLIC;

the user will fail to select the procedure's body from pg_catalog.pg_proc and
to view it with psql's \sf or \ef.
Is this method correct?
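
For the other half of the goal (letting the role run only a short list of
functions), the usual grant pattern looks roughly like this; the role, schema
and function names here are invented:

REVOKE ALL ON SCHEMA app FROM PUBLIC;
GRANT  USAGE ON SCHEMA app TO restricted_user;
REVOKE EXECUTE ON ALL FUNCTIONS IN SCHEMA app FROM PUBLIC;
GRANT  EXECUTE ON FUNCTION app.do_work(integer) TO restricted_user;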

2015-02-12 11:53 GMT+03:00 Saimon Lim aimon.s...@gmail.com:

 For clarification - I run the commands using psql program.

 2015-02-11 12:54 GMT+03:00 Saimon Lim aimon.s...@gmail.com:

 Hi
 I want to hide my own stored procedures' bodies from the specific user.
 As far as I know, procedure's body is visible in the pg_catalog.pg_proc
 table.

 So, I did the following:
 REVOKE ALL ON pg_catalog.pg_proc FROM PUBLIC;

 And after it, when user tries:
 SELECT * from pg_proc;

 The following error occurs:
 ERROR:  permission denied for relation pg_proc

 It turns out that user don't have access to the body of the procedure.
 But I still can get stored procedure's body using
 \sf function_name
 or with
 \ef function_name

 So, how can I completely hide my own stored procedure's bodies from this
 user?

 Thanks in advance
 Saimon





[GENERAL] Unknown error while running postgresql_installer_dc46cfee2c\getlocales.exe

2015-02-13 Thread Bald, Glenn

Hi George,


Did you find a resolution? I have exactly the same problem.  Same
postgres version, same xp with sp3, same error log. 


Thanks in advance


Glenn


Glenn Bald | GIS Analyst | Information Services

Forestry Commission 

 

Unknown error while running 
postgresql_installer_dc46cfee2c\getlocales.exe 


From:George Weaver gweaver(at)shaw(dot)ca   
To:  pgsql-general pgsql-general(at)postgresql(dot)org  
Subject: Unknown error while running 
postgresql_installer_dc46cfee2c\getlocales.exe  
Date:2015-02-06 22:07:01
Message-ID:  5B0276776FFA40499867E955C65F3251@D420
Thread:  2015-02-06 22:07:01 from George Weaver gweaver(at)shaw(dot)ca
Lists:   pgsql-general
Hi List,

Trying to install PostgresSQL 9.4.1 on Window XP Pro Service Pack 3.

Installation is aborted with the following error:

Unknown error while running C:\Documents and Settings\George
Weaver\Local
Settings\Temp\postgresql_installer_dc46cfee2c\getlocales.exe 

bitrock_installer.log output below.

Would this be an issue with trying to install on XP?

Thanks
George

Log started 02/05/2015 at 11:17:31
Preferred installation mode : qt
Trying to init installer in mode qt
Mode qt successfully initialized
Executing C:\Documents and Settings\George Weaver\Local
Settings\Temp/postgresql_installer_dc46cfee2c/temp_check_comspec.bat 
Script exit code: 0

Script output:
 test ok
Script stderr:
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 Data
Directory. Setting variable iDataDirectory to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4 Base
Directory. Setting variable iBaseDirectory to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4
Service ID. Setting variable iServiceName to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4
Service Account. Setting variable iServiceAccount to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4
Super User. Setting variable iSuperuser to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4
Branding. Setting variable iBranding to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4
Version. Setting variable brandingVer to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4
Shortcuts. Setting variable iShortcut to empty value
Could not find registry key
HKEY_LOCAL_MACHINE\SOFTWARE\PostgreSQL\Installations\postgresql-9.4
DisableStackBuilder. Setting variable iDisableStackBuilder to empty
value
[11:17:37] Existing base directory: 
[11:17:37] Existing data directory: 
[11:17:37] Using branding: PostgreSQL 9.4
[11:17:37] Using Super User: postgres and Service Account: NT
AUTHORITY\NetworkService
[11:17:37] Using Service Name: postgresql-9.4
Executing cscript //NoLogo C:\Documents and Settings\George
Weaver\Local
Settings\Temp\postgresql_installer_dc46cfee2c\prerun_checks.vbs
Script exit code: 0
Script output:
 The scripting host appears to be functional.
Script stderr:
Executing C:\Documents and Settings\George Weaver\Local
Settings\Temp\postgresql_installer_dc46cfee2c\vcredist_x86.exe /passive
/norestart
Script exit code: 0
Script output:
Script stderr:
Executing C:\Documents and Settings\George Weaver\Local
Settings\Temp\postgresql_installer_dc46cfee2c\getlocales.exe 
Script exit code: 
Script output:
Script stderr:
 

Unknown error while running C:\Documents and Settings\George
Weaver\Local
Settings\Temp\postgresql_installer_dc46cfee2c\getlocales.exe 

 

 

+ The Forestry Commission's computer systems may be monitored and 
communications carried out on them recorded, to secure the effective operation 
of the system and for other lawful purposes. +

The original of this email was scanned for viruses by the Government Secure 
Intranet (GSi) virus scanning service supplied exclusively by Cable  Wireless 
in partnership with MessageLabs.

On leaving the GSi this email was certified virus-free

[GENERAL] increasing varchar column size is taking too much time

2015-02-13 Thread AI Rumman
Hi,

I started the following query on PostgreSQL 9.1; it is the only SQL running
on the host, and it has been running for more than an hour and is still
going.

 alter table userdata.table1 alter column name type varchar(512);


Here is the table description:


 \d+ userdata.table1
                                   Table userdata.table1
       Column        |            Type             |       Modifiers        | Storage  | Description
---------------------+-----------------------------+------------------------+----------+-------------
 id                  | character varying(50)       | not null               | extended |
 code                | character varying(32)       | not null               | extended |
 accid               | character varying(50)       | not null               | extended |
 name                | character varying(100)      | not null               | extended |
 create_time         | timestamp with time zone    | not null default now() | plain    |
 modified_time       | timestamp with time zone    | not null default now() | plain    |
 install_date        | timestamp without time zone |                        | plain    |
 recent_scan_date    | timestamp without time zone |                        | plain    |
 update_date         | timestamp without time zone |                        | plain    |
 setting             | character varying(100)      |                        | extended |
 name                | character varying(100)      |                        | extended |
 type                | character varying(8)        |                        | extended |
 version             | character varying(128)      |                        | extended |
 package             | character varying(255)      |                        | extended |
 permission          | text                        |                        | extended |
 trigger             | character varying(10)       |                        | extended |
 reasons             | character varying(200)      |                        | extended |
 note                | character varying(255)      |                        | extended |
 size                | bigint                      |                        | plain    |
 usage               | bigint                      |                        | plain    |
 running             | character varying(4)        |                        | extended |
 location            | character varying(60)       |                        | extended |
 can_stop            | character(1)                |                        | extended |
 can_uninstall       | character(1)                |                        | extended |
 flagged_status      | character(1)                |                        | extended |
 status              | character(1)                |                        | extended |
 consultation_status | character(1)                |                        | extended |
 trust               | character(1)                |                        | extended |
 Indexes:
 table1_pk PRIMARY KEY, btree (id, code)
 table1_accid_id_hashcode_idx btree (accid, id, code)
 table1_accid_idx btree (accid)
 table1_id_idx btree (id)
 Triggers:
 table1s_delete_trigger BEFORE DELETE ON table1 FOR EACH ROW EXECUTE
 PROCEDURE delete_jangles_table1()
 table1s_insert_trigger BEFORE INSERT ON table1 FOR EACH ROW EXECUTE
 PROCEDURE insert_jangles_table1()
 Child tables: table1_0,
   table1_1,
   table1_10,
   table1_2,
   table1_3,
   table1_4,
   table1_5,
   table1_6,
   table1_7,
   table1_8,
   table1_9
 Has OIDs: no


Here are the number of rows and pages in partition:



   relname| reltuples | relpages
 ---+---+--
  table1_0  |  10076840 |   362981
  table1_1  |  10165073 |   366548
  table1_2  |  10046372 |   361838
  table1_3  |  10114727 |   364360
  table1_4  |  10155816 |   366054
  table1_5  |  10188953 |   367023
  table1_6  |  10275270 |   370887
  table1_7  |  10163937 |   366245
  table1_8  |  10262516 |   369350
  table1_9  |  10359893 |   372099
  table1_10 |  10434026 |   375327
  table1| 0 |0



Any idea why the above ALTER statement is taking that much time?
Is it because of the number of rows we have in each partition?
Any suggestions?
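
For what it's worth, one generic thing to check from a second session while
such a statement runs (not specific to this table; on 9.1 the columns are
still called procpid and current_query, 9.2 renamed them to pid and query):

-- Is the ALTER blocked on a lock, or busy rewriting the table?
SELECT procpid, waiting, now() - xact_start AS xact_age, current_query
FROM   pg_stat_activity
WHERE  current_query ILIKE 'alter table%';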