Re: [GENERAL] how long to wait on 9.2 bitrock installer?

2012-09-13 Thread Sachin Srivastava
The reboot might have purged the installation log.

On Thu, Sep 13, 2012 at 11:22 AM, Dann Corbit dcor...@connx.com wrote:

  I should mention also that the new PostgreSQL instance is working fine.
 I am looking forward to testing it.


 From: Dann Corbit
 Sent: Wednesday, September 12, 2012 10:50 PM
 To: 'Sachin Srivastava'
 Cc: pgsql-general@postgresql.org
 Subject: RE: [GENERAL] how long to wait on 9.2 bitrock installer?


 C:\Users\dcorbit\AppData\Local\Temp>dir /s install-postgresql.log

 Volume in drive C has no label.

 Volume Serial Number is 7460-E8CF

 File Not Found


 I did perform a reboot, because Windows Update installed several
 things.  I don't know if that caused the file to go away or not.


 From: Sachin Srivastava [mailto:sachin.srivast...@enterprisedb.com]
 Sent: Wednesday, September 12, 2012 10:47 PM
 To: Dann Corbit
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] how long to wait on 9.2 bitrock installer?


 If possible, can you share the installation log?
 (%TEMP%\install-postgresql.log)

 On Thu, Sep 13, 2012 at 11:12 AM, Dann Corbit dcor...@connx.com wrote:

 I just let it sit.  3.5 hours later, it completed.

  

 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Dann Corbit
 Sent: Wednesday, September 12, 2012 1:24 PM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] how long to wait on 9.2 bitrock installer?

  

 I have several versions of PostgreSQL already installed on this machine.
 We need to test compatibility with PostgreSQL database systems with our
 products.

 I tried to install 9.2 64 bit using the one click installer from this
 location:

 http://www.enterprisedb.com/products/pgdownload.do#windows

  

 I used the unused port 5439, as 5432-5438 are already in use by other
 instances of PostgreSQL.

  

 I am stalled here for more than one hour:

 

  

 Is there any chance that the installer is still doing something useful, or
 do I need to kill the process and wait until the installer is stabilized?
 

  

  

 System info:

  

 Operating System:
   Windows Server 2008 R2 Standard (x64) Service Pack 1 (build 7601)
   Install Language: English (United States)
   System Locale: English (United States)
   Installed: 5/17/2011 9:19:36 AM
   Server Roles: Application Server, Web Server (IIS), File Services, Hyper-V

 System Model:
   Dell Inc. Precision WorkStation T3500
   System Service Tag: 4QV4FP1
   Chassis Serial Number: 4QV4FP1
   Enclosure Type: Tower

 Processor:
   3.20 gigahertz Intel Xeon
   64 kilobyte primary memory cache
   256 kilobyte secondary memory cache
   8192 kilobyte tertiary memory cache
   64-bit ready
   Multi-core (4 total)
   Not hyper-threaded

 Main Circuit Board:
   Board: Dell Inc. 09KPNV A00
   Serial Number: ..CN7082111680FL.
   Bus Clock: 4800 megahertz
   BIOS: Dell Inc. A10 01/21/2011

 USB Storage Use in past 30 Days:
   Generic Flash HS-CF -- drive 2, s/n 26020128B005, rev 4.44   Last Used: 8/21/2012 4:04:38 PM
   Generic Flash HS-COMBO -- drive 3, s/n 26020128B005, rev 4.44   Last Used: 8/21/2012 4:04:38 PM
   CENTON DS Pro -- drive 1, s/n C0603681, rev 8.07   Last Used: 8/21/2012 4:04:37 PM

 Hosted Virtual Machines:
   DCORBITxp   Last Used: 4/16/2012 8:13:43 PM   Status: Stopped

 Drives:
   1000.21 Gigabytes Usable Hard Drive Capacity
   210.02 Gigabytes Hard Drive Free Space
   PLDS DVD-ROM DH-16D5S [Optical drive]
   ARRAY0 [Hard drive] -- drive 0, Not SMART
   CENTON DS Pro USB Device [Hard drive] (66.26 GB) -- drive 1
   Generic Flash HS-CF USB Device [Hard drive] -- drive 2
   Generic Flash HS-COMBO USB Device [Hard drive] -- drive 3

 Memory Modules:
   12286 Megabytes Usable Installed Memory
   Slot 'DIMM 1' has 2048 MB (serial number 83631A8F)
   Slot 'DIMM 2' has 2048 MB (serial number 83631A8D)
   Slot 'DIMM 3' has 2048 MB (serial number 83631A8B)
   Slot 'DIMM 4' has 2048 MB (serial number 83631A87)
   Slot 'DIMM 5' has 2048 MB (serial number 83631A85)
   Slot 'DIMM 6' has 2048 MB (serial number 83631A83)

 Local Drive Volumes:
   c: (NTFS on drive 0) 1000.21 GB

Re: [GENERAL] how long to wait on 9.2 bitrock installer?

2012-09-13 Thread John R Pierce

On 09/12/12 10:42 PM, Dann Corbit wrote:


I just let it sit.  3.5 hours later, it completed.

...

I have several versions of PostgreSQL already installed on this 
machine.  We need to test compatibility with PostgreSQL database 
systems with our products.


I tried to install 9.2 64 bit using the one click installer from this 
location:


http://www.enterprisedb.com/products/pgdownload.do#windows

I did use an unused port of 5439 as 5432-5438 are already in use by 
other instances of PostgreSQL.




are those other instances running?  if they have significant shared 
memory allocations, your 12GB of ram could be significantly blocked up, 
so the new install may have ended up doing a lot of paging.


also, your configuration file says this system has a single 1TB drive?   
that's a weak storage configuration for a database server, especially if 
it's a 7200rpm or less SATA drive.




--
john r pierce                          N 37, W 122
santa cruz ca mid-left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Amazon High I/O instances

2012-09-13 Thread Sébastien Lorion
pgbench initialization has been going on for almost 5 hours now and is still
stuck before vacuum starts .. something is definitely wrong, as I don't
remember it taking so long the first time I created the db. Here are the
current stats:

iostat (xbd13-14 are WAL zpool):

 device    r/s    w/s    kr/s    kw/s qlen svc_t  %b
 xbd8    161.3  109.8  1285.4  3450.5    0  12.5  19
 xbd7    159.5  110.6  1272.3  3450.5    0  11.4  14
 xbd6    161.1  108.8  1284.4  3270.6    0  10.9  14
 xbd5    159.5  109.0  1273.1  3270.6    0  11.6  15
 xbd14     0.0    0.0     0.0     0.0    0   0.0   0
 xbd13     0.0    0.0     0.0     0.0    0   0.0   0
 xbd12   204.6  110.8  1631.3  3329.2    0   9.1  15
 xbd11   216.0  111.2  1722.5  3329.2    1   8.6  16
 xbd10   197.2  109.4  1573.5  3285.8    0   9.8  15
 xbd9    195.0  109.4  1557.1  3285.8    0   9.9  15

zpool iostat (db pool):

 pool   alloc   free   read  write   read  write
 db      143G   255G  1.40K  1.53K  11.2M  12.0M

vmstat:

 procs     memory      page                     disks     faults         cpu
 r b w    avm    fre   flt  re  pi  po    fr  sr ad0 xb8   in   sy    cs us sy id
 0 0 0  5634M    28G     7   0   0   0  7339   0   0 245 2091 6358 20828  2  5 93
 0 0 0  5634M    28G    10   0   0   0  6989   0   0 312 1993 6033 20090  1  4 95
 0 0 0  5634M    28G     7   0   0   0  6803   0   0 292 1974 6111 22763  2  5 93
 0 0 0  5634M    28G    10   0   0   0  7418   0   0 339 2041 6170 20838  2  4 94
 0 0 0  5634M    28G   123   0   0   0  6980   0   0 282 1977 5906 19961  2  4 94

top:

 last pid:  2430;  load averages:  0.72,  0.73,  0.69   up 0+04:56:16  04:52:53
 32 processes:  1 running, 31 sleeping
 CPU:  1.8% user,  0.0% nice,  5.3% system,  1.4% interrupt, 91.5% idle
 Mem: 1817M Active, 25M Inact, 36G Wired, 24K Cache, 699M Buf, 28G Free
 Swap:

   PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
  1283 pgsql       1  34    0  3967M  1896M zio->i  5  80:14 21.00% postgres
  1282 pgsql       1  25    0 25740K  3088K select  2  10:34  0.00% pgbench
  1274 pgsql       1  20    0  2151M 76876K select  1   0:09  0.00% postgres

On Wed, Sep 12, 2012 at 9:16 PM, Sébastien Lorion
s...@thestrangefactory.com wrote:

 I recreated the DB and WAL pools, and launched pgbench -i -s 1. Here
 are the stats during the load (still running):

 iostat (xbd13-14 are WAL zpool):
 device    r/s    w/s   kr/s     kw/s qlen svc_t  %b
 xbd8      0.0  471.5    0.0  14809.3   40  67.9  84
 xbd7      0.0  448.1    0.0  14072.6   39  62.0  74
 xbd6      0.0  472.3    0.0  14658.6   39  61.3  77
 xbd5      0.0  464.7    0.0  14433.1   39  61.4  76
 xbd14     0.0    0.0    0.0      0.0    0   0.0   0
 xbd13     0.0    0.0    0.0      0.0    0   0.0   0
 xbd12     0.0  460.1    0.0  14189.7   40  63.4  78
 xbd11     0.0  462.9    0.0  14282.8   40  61.8  76
 xbd10     0.0  477.0    0.0  14762.1   38  61.2  77
 xbd9      0.0  477.6    0.0  14796.2   38  61.1  77

 zpool iostat (db pool):
 pool   alloc   free   read  write   read  write
 db     11.1G   387G      0  6.62K      0  62.9M

 vmstat:
 procs     memory      page                     disks     faults         cpu
 r b w    avm    fre   flt  re  pi  po    fr  sr ad0 xb8   in   sy    cs us sy id
 0 0 0  3026M    35G   126   0   0   0 29555   0   0 478 2364 31201 26165 10  9 81

 top:
 last pid:  1333;  load averages:  1.89,  1.65,  1.08   up 0+01:17:08  01:13:45
 32 processes:  2 running, 30 sleeping
 CPU: 10.3% user,  0.0% nice,  7.8% system,  1.2% interrupt, 80.7% idle
 Mem: 26M Active, 19M Inact, 33G Wired, 16K Cache, 25M Buf, 33G Free



 On Wed, Sep 12, 2012 at 9:02 PM, Sébastien Lorion 
 s...@thestrangefactory.com wrote:
 
  One more question .. I could not set wal_sync_method to anything other
 than fsync .. is that expected, or should other choices also be available? I
 am not sure how SSD cache flushing is handled on EC2, but I hope it
 is flushing the whole cache on every sync .. As a side note, I got
 corrupted databases (errors about pg_xlog directories not found, etc) at
 first when running my tests, and I suspect it was because of
 vfs.zfs.cache_flush_disable=1, though I cannot prove it for sure.
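
 [Not something raised in the exchange itself, but the active method can be
 checked from psql; a minimal sketch -- the set of accepted values for
 wal_sync_method depends on the platform the server was built on, and
 contrib's pg_test_fsync benchmarks the methods actually available:

 SHOW wal_sync_method;
 -- SET wal_sync_method = '...' rejects values not supported on the platform,
 -- which would be consistent with only fsync being accepted here.
 ]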
 
  Sébastien
 
 
  On Wed, Sep 12, 2012 at 8:49 PM, Sébastien Lorion 
 s...@thestrangefactory.com wrote:
 
  Is dedicating 2 drives to WAL too much? Since my whole raid is
 composed of SSD drives, should I just put it in the main pool?
 
  Sébastien
 
 
  On Wed, Sep 12, 2012 at 8:28 PM, Sébastien Lorion 
 s...@thestrangefactory.com wrote:
 
  Ok, makes sense .. I will update that as well and report back. Thank
 you for your advice.
 
  Sébastien
 
 
  On Wed, Sep 12, 2012 at 8:04 PM, John R Pierce pie...@hogranch.com
 wrote:
 
  On 09/12/12 4:49 PM, Sébastien Lorion wrote:
 
  You set shared_buffers way below what is suggested in Greg Smith's
 book (25% or more of RAM) .. what is the rationale behind that rule of
 thumb? Other values are more or less 

[GENERAL] Planner forces seq scan when select without quoting its values

2012-09-13 Thread Alex Lai

alex1 is a table with 47 million rows and a rule that deletes from another
table whenever an insert is made into alex1.  Insertions into alex1 are 
causing

a seq scan that takes a very long time:

  EXPLAIN insert into alex1 (fileid, archiveset) select 35352974, 10003;
(EXPLAIN output is at the bottom of this post)

Interestingly, quoting the values causes the planner to use an index scan,
making the operation several orders of magnitude faster:

  EXPLAIN insert into alex1 (fileid, archiveset) select '35352974', 
'10003';

(EXPLAIN output is at the bottom of this post)


Can someone please tell me how to write the insert statement without the
quotes so that it still executes quickly?


Some additional information:

All search columns have a b-tree index.

Relevant parameter values:
effective_cache_size = 16GB
cpu_operator_cost = 0.0025
cpu_tuple_cost = 0.01
cursor_tuple_fraction = 0.1
enable_bitmapscan = on
enable_hashagg = on
enable_hashjoin = on
enable_indexscan = on
enable_material = on
enable_mergejoin = on
enable_nestloop = on
enable_seqscan = on
enable_sort = on
enable_tidscan = on


This line from the EXPLAIN output suggests that the rule is causing a
seq scan to be used instead of an index scan:

  ... WHERE ...   a.archiveset = new.archiveset AND a.fileid <> new.fileid.

When I replace the new. with actual integers:

  ... WHERE ...   a.archiveset = 10003 AND a.fileid <> 35352974,

the problem goes away.

That makes me think that the planner is unable to use the index, so it
instead uses a seq scan.



alex1 table:

                        Table "public.alex1"
   Column   |           Type           | Modifiers | Storage | Description
------------+--------------------------+-----------+---------+-------------
 fileid     | integer                  |           | plain   |
 archiveset | integer                  |           | plain   |
 lastchange | timestamp with time zone |           | plain   |
Indexes:
    "pk_alex1_archiveset" btree (archiveset)
    "pk_alex1_archiveset_lastchange" btree (archiveset, lastchange)
    "pk_alex1_fileid" btree (fileid)
    "pk_alex1_fileid_archiveset" btree (fileid, archiveset)
    "pk_alex1_lastchange" btree (lastchange)
Rules:
    alex_rule AS
    ON INSERT TO alex1 DO  DELETE FROM alex1 fa
    WHERE ((fa.fileid, fa.archiveset) IN ( SELECT a.fileid, a.archiveset
        FROM alex1 a
        JOIN filemeta m ON m.fileid = a.fileid
        JOIN filemeta o ON o.esdt::text = m.esdt::text AND o.key::text = m.key::text
        WHERE o.fileid = new.fileid AND a.archiveset = new.archiveset AND a.fileid <> new.fileid))

Has OIDs: no



filemeta table:

                              Table "public.filemeta"
 Column |       Type        | Modifiers | Storage  | Description
--------+-------------------+-----------+----------+----------------------------------------------------------
 fileid | integer           | not null  | plain    | The unique SIPS generated file identifier
 esdt   | character varying |           | extended | The Earth Science Data Type indicates the file contents
 key    | character varying |           | extended |
 source | character varying |           | extended |
Indexes:
    "pk_filemeta" PRIMARY KEY, btree (fileid)
    "ak_filemeta_esdt" btree (esdt)
    "ak_filemeta_esdt_fileid" btree (fileid, esdt)
    "ak_filemeta_esdt_key" btree (esdt, key)
    "ak_filemeta_fileid" btree (fileid)
    "ak_filemeta_key" btree (key)
    "ak_filemeta_source" btree (source)
Foreign-key constraints:
    "fk_filemeta_esdt" FOREIGN KEY (esdt) REFERENCES esdt_def(esdt) DEFERRABLE
    "fk_filemeta_file" FOREIGN KEY (fileid) REFERENCES file(fileid) ON DELETE CASCADE
    "fk_filemeta_source" FOREIGN KEY (source) REFERENCES source_def(source)
Child tables: filemeta_anc,
              filemeta_app,
              filemeta_clg_aux,
              filemeta_l0,
              filemeta_l0r,
              filemeta_mdkey,
              filemeta_ompslookup,
              filemeta_orbital,
              filemeta_timerange
Has OIDs: no




========== SLOW INSERT (without quotes) ==========

explain insert into alex1 (fileid, archiveset) select 35352974, 10003;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Insert on alex1  (cost=0.00..0.02 rows=1 width=8)
   ->  Subquery Scan on "*SELECT*"  (cost=0.00..0.02 rows=1 width=8)
         ->  Result  (cost=0.00..0.01 rows=1 width=0)

 Delete on alex1 fa  (cost=0.00..1023312671395.36 rows=23786988 width=38)
   ->  Nested Loop  (cost=0.00..1023312671395.36 rows=23786988 width=38)
         Join Filter: (SubPlan 1)
         ->  Subquery Scan on "*SELECT*"  (cost=0.00..0.02 rows=1 width=40)
               ->  Result  (cost=0.00..0.01 rows=1 width=0)
         ->  Seq Scan on alex1 fa  (cost=0.00..732896.76 rows=47573976 width=14)
         SubPlan 1
           ->  Nested Loop  (cost=14.98..43004.15 rows=263 width=8)
 

Re: [GENERAL] Where do I get pgAdmin 1.16 for openSuSE?

2012-09-13 Thread Devrim GÜNDÜZ

Hi,

On Wed, 2012-09-12 at 20:51 -0700, Mike Christensen wrote:
 In file included from ../pgadmin/include/pgAdmin3.h:24:0,
  from ./pgAdmin3.cpp:13:
 ../pgadmin/include/ctl/ctlSQLBox.h:17:24: fatal error: wx/stc/stc.h:
 No such file or directory
 compilation terminated. 

Here are the BuildRequires lines that I found in a SuSE spec for pgadmin3:

BuildRequires:  automake
BuildRequires:  gcc-c++
BuildRequires:  libopenssl-devel
BuildRequires:  libxml2-devel
BuildRequires:  libxslt-devel
BuildRequires:  postgresql-devel
BuildRequires:  update-desktop-files
BuildRequires:  wxWidgets
BuildRequires:  wxWidgets-wxcontainer-devel

So, you probably need the last two packages to fix the error you reported.

Regards,
-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




[GENERAL] Multicorn 0.9.1 and 0.9.2 released

2012-09-13 Thread Ronan Dunklau
Hello.

I'm pleased to announce that multicorn 0.9.2 has been released, which is 
compatible with PostgreSQL 9.2 (but no longer with 9.1).

The last release compatible with PG 9.1 is 0.9.1.

The major feature of version 0.9.2 is the ability to declare columns which 
can be used as parameterized paths.

This means that the planner will be able to pick a nested loop over a 
full foreign scan if a lookup by key is advertised as cheaper.
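
[For readers new to Multicorn, a minimal sketch of wiring up a wrapper; the
Python module path 'myfdw.MyFdw' and all table names here are hypothetical
placeholders, not part of this announcement:

CREATE EXTENSION multicorn;
CREATE SERVER my_srv FOREIGN DATA WRAPPER multicorn
    OPTIONS (wrapper 'myfdw.MyFdw');
CREATE FOREIGN TABLE remote_items (id integer, payload text) SERVER my_srv;

-- With 0.9.2, a wrapper that advertises 'id' as a cheap lookup key allows a
-- join like this to be planned as a nested loop with a parameterized scan on
-- remote_items rather than a full foreign scan:
EXPLAIN SELECT * FROM local_orders o JOIN remote_items r ON r.id = o.item_id;
]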

Multicorn website: http://multicorn.org
Multicorn on PGXN: http://pgxn.org/dist/multicorn/

Best regards,

--
Ronan Dunklau


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] RPM contents for 9.2 community release.

2012-09-13 Thread Jeremy Whiting
Hi Devrim,
 Thank you for pointing these facts out. Very helpful and I now have my
installations working without issue.
 Changing the scripts as you have done is very useful for parallel
installations. It will make my work easier.

Regards,
Jeremy

On 12/09/12 16:57, Devrim GUNDUZ wrote:
 Hi Jeremy,

 On Wed, September 12, 2012 6:23 pm, Jeremy Whiting wrote:

  I tried installing using the Direct Download RPMs on two systems I am
 using. On one system the rpm files did not install at all, and on the
 other not all files were installed to the target directory I was
 expecting.

 1) Upgrading postgresql on a 64 bit Fedora 16 desktop from 9.1.5 to 9.2
 using the rpms provided in the download section. I tried installing the
 rpm files with yum but got an error message.

 $ ls
 postgresql92-9.2.0-1PGDG.f16.x86_64.rpm
 postgresql92-devel-9.2.0-1PGDG.f16.x86_64.rpm
 postgresql92-server-9.2.0-1PGDG.f16.x86_64.rpm
 postgresql92-contrib-9.2.0-1PGDG.f16.x86_64.rpm
 postgresql92-docs-9.2.0-1PGDG.f16.x86_64.rpm
 postgresql92-test-9.2.0-1PGDG.f16.x86_64.rpm
 postgresql92-debuginfo-9.2.0-1PGDG.f16.x86_64.rpm
 postgresql92-libs-9.2.0-1PGDG.f16.x86_64.rpm
 $ sudo yum install ./postgresql92-*
 Loaded plugins: auto-update-debuginfo, fastestmirror, langpacks, presto,
 refresh-packagekit
 Examining ./postgresql92-9.2.0-1PGDG.f16.x86_64.rpm:
 postgresql92-9.2.0-1PGDG.f16.x86_64
 ./postgresql92-9.2.0-1PGDG.f16.x86_64.rpm: does not update installed
 package.
 snip

 Community RPMs cannot be used to upgrade Fedora's own RPMs, since the
 package names are different and we have a different layout, so this seems
 ok to me.

 http://yum.postgresql.org/9.2/fedora/fedora-16-x86_64/repoview/

  Did I download the correct rpms?
 Yes.

 2) The second install was to a 64 bit RHEL6 server. The RPM files
 installed using yum successfully. Downloaded the files from

 http://yum.postgresql.org/9.2/redhat/rhel-6-x86_64/repoview/

  However the files that provide the postgresql tooling and server binaries
 were not installed into

 /usr/bin

  I was trying to execute initdb. I did find it in the
 'postgresql92-9.2.0-1PGDG.rhel6.x86_64.rpm' file, but the installation
 target directory is

 /usr/pgsql-9.2/bin
 Right. I have a blog post regarding this in here:
 http://people.planetpostgresql.org/devrim/index.php?/archives/43-How-to-install-PostgreSQL-9.0-Beta-1-to-FedoraCentOSRHEL.html

 (though this post needs some fixes, like init script name, etc)

 I changed the layout, so that we can install multiple versions in parallel.

 For initdb, you can use

 service postgresql-9.2 initdb

 $ ls
 postgresql92-9.2.0-1PGDG.rhel6.x86_64.rpm
 postgresql92-docs-9.2.0-1PGDG.rhel6.x86_64.rpm
 postgresql92-contrib-9.2.0-1PGDG.rhel6.x86_64.rpm
 postgresql92-libs-9.2.0-1PGDG.rhel6.x86_64.rpm
 postgresql92-devel-9.2.0-1PGDG.rhel6.x86_64.rpm
 postgresql92-server-9.2.0-1PGDG.rhel6.x86_64.rpm
 $ ls /usr/pgsql-9.2/bin
 clusterdb   droplang  pg_archivecleanup  pg_ctl  pg_restore
 postgresvacuumlo
 createdbdropuser  pg_basebackup  pg_dump pg_standby
 postmaster
 createlang  ecpg  pgbenchpg_dumpall  pg_test_fsync
 psql
 createuser  initdbpg_config  pg_receivexlog  pg_test_timing
 reindexdb
 dropdb  oid2name  pg_controldata pg_resetxlogpg_upgrade
 vacuumdb
 [postgres@benchserver2 tmp]$

  That means users on the system cannot immediately execute these files.

  Was there a mistake in the packaging paths of these files in the RPM ?
 That is intentional. See here:
 http://svn.pgrpms.org/browser/rpm/redhat/9.2/postgresql/EL-6/postgresql-9.2.spec#L576

 The binaries I put under /usr/bin are the ones that can be used for
 multiple versions, like psql, etc. Utilities like pg_ctl, initdb,
 pg_controldata and pg_resetxlog are not among these, so they are kept
 under /usr/pgsql-9.2/bin.

 Regards,




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Amazon High I/O instances

2012-09-13 Thread Sébastien Lorion
maintenance_work_mem is already 4GB. How large should it be during the
load, then?

Sébastien

On Thu, Sep 13, 2012 at 1:29 AM, John R Pierce pie...@hogranch.com wrote:

 On 09/12/12 10:01 PM, Sébastien Lorion wrote:

 pgbench initialization has been going on for almost 5 hours now and still
 stuck before vacuum starts .. something is definitely wrong as I don't
 remember it took so long first time I created the db


  pgbench initialization with a high scale factor, like the -s 1 I
 frequently use, does take quite a few hours.  you need a large
 maintenance_work_mem, or the create index phase will take a really long
 time to index the 150GB worth of tables it's created.
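
 [A sketch of the session-level form of that advice; the 1GB figure and the
 index are illustrative only, and note that pgbench -i opens its own
 connection, so for pgbench itself the setting belongs in postgresql.conf
 rather than a psql session:

 SET maintenance_work_mem = '1GB';
 CREATE INDEX pgbench_accounts_filler_idx ON pgbench_accounts (filler);
 ]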




 --
 john r pierce                          N 37, W 122
 santa cruz ca mid-left coast




 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Planner forces seq scan when select without quoting its values

2012-09-13 Thread Tom Lane
Alex Lai m...@sesda2.com writes:
EXPLAIN insert into alex1 (fileid, archiveset) select 35352974, 10003;
 [vs]
EXPLAIN insert into alex1 (fileid, archiveset) select '35352974', 
 '10003';

You might be happier using INSERT ... VALUES instead of INSERT
... SELECT for this.  In the former, the parser is going to be more
aggressive about forcing values to the correct datatype, which is the
root of your difficulties here.
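
[A minimal illustration of the two forms; the cast-based variant is an
additional sketch, not from Tom's reply, for cases where the
INSERT ... SELECT shape has to stay:

INSERT INTO alex1 (fileid, archiveset) VALUES (35352974, 10003);

-- explicit casts in the select list give the literals (and hence new.fileid
-- and new.archiveset inside the rule) the integer type the indexed columns
-- expect:
INSERT INTO alex1 (fileid, archiveset) SELECT 35352974::integer, 10003::integer;
]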

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Where do I get pgAdmin 1.16 for openSuSE?

2012-09-13 Thread Mike Christensen
On Thu, Sep 13, 2012 at 12:15 AM, Devrim GÜNDÜZ dev...@gunduz.org wrote:

 Hi,

 On Wed, 2012-09-12 at 20:51 -0700, Mike Christensen wrote:
 In file included from ../pgadmin/include/pgAdmin3.h:24:0,
  from ./pgAdmin3.cpp:13:
 ../pgadmin/include/ctl/ctlSQLBox.h:17:24: fatal error: wx/stc/stc.h:
 No such file or directory
 compilation terminated.

 Here is the buildRequires that I found on a SuSE spec for pgadmin3:

 BuildRequires:  automake
 BuildRequires:  gcc-c++
 BuildRequires:  libopenssl-devel
 BuildRequires:  libxml2-devel
 BuildRequires:  libxslt-devel
 BuildRequires:  postgresql-devel
 BuildRequires:  update-desktop-files
 BuildRequires:  wxWidgets
 BuildRequires:  wxWidgets-wxcontainer-devel

Yea, I had built wxWidgets by hand, but I guess it wasn't in the
search path or installed correctly.  I finally got around that error,
but got some other error that I wasn't able to figure out.

I think I'll just use pgAdmin on my Mac for now :)

Mike


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Auto Vacuum of pg_catalog tables causes huge delay in opening new connections

2012-09-13 Thread Mike Roest
 Good Morning everyone,

   We have an interesting thing happening on one of our DBs: when
autovacuum runs against the pg_catalog tables (or we run a manual vacuum),
we get a large delay in opening new connections.  If this happens during a
busy period on our system we end up with 200-300 connections trying to
open at once.  All the backend processes on the server for the connections
show up in the ps list in a state of startup.  Because the connections don't
actually open until the delay clears, the maximum connection count
configured in the connection pool on our application servers is not being
hit, which causes the large number of attempted connections.
The performance of queries on the existing open connections is also
affected (which is why more connections are being opened: the existing
open ones are releasing back to the pool more slowly).

The database contains roughly 1300 schemas, each with about 1350
objects. Each schema is used to store the data for a single client.  As
part of our demoing system we create 100 new schemas each day, and the
same number of schemas are dropped each day due to expired demos.

On a test server, when I manually run a vacuum on pg_catalog.pg_class and
then attempt to open 20 connections from a remote client to the DB, I get an
average of 64 seconds to get the connections open, with the longest taking
80 seconds.
If I immediately run another vacuum of pg_class (with no drop or creation
of any schemas) and then attempt to open 20 connections from the same
client, I get an average of 84 seconds to get the connections open, with the
longest taking 123 seconds.  The behavior does not seem to be 100% consistent,
however, as sometimes the subsequent vacuum run doesn't cause the
connections to be slow.
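
[Given the schema churn described above, the sizes of the affected catalogs
are worth checking; a sketch of such a query, plain catalog SQL not from the
original report:

SELECT c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size,
       c.reltuples::bigint AS row_estimate
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'pg_catalog'
  AND c.relname IN ('pg_class', 'pg_attribute', 'pg_depend');
]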

We pulled a core dump from one of the backend processes when it was in this
state and got the following backtrace

#0  0x0036e74d5d77 in semop () from /lib64/libc.so.6
#1  0x005d42e3 in PGSemaphoreLock (sema=0x2b8ff150ec40, interruptOK=0 '\000') at pg_sema.c:418
#2  0x0060d796 in LWLockAcquire (lockid=FirstBufMappingLock, mode=LW_SHARED) at lwlock.c:464
#3  0x005fada8 in BufferAlloc (smgr=0x47c7c10, relpersistence=112 'p', forkNum=MAIN_FORKNUM, blockNum=680095, mode=RBM_NORMAL, strategy=0x47c3570, hit=0x7fff6dd6a987 "") at bufmgr.c:531
#4  ReadBuffer_common (smgr=0x47c7c10, relpersistence=112 'p', forkNum=MAIN_FORKNUM, blockNum=680095, mode=RBM_NORMAL, strategy=0x47c3570, hit=0x7fff6dd6a987 "") at bufmgr.c:325
#5  0x005fb4cd in ReadBufferExtended (reln=0x2b8ff54a9540, forkNum=MAIN_FORKNUM, blockNum=680095, mode=RBM_NORMAL, strategy=0x47c3570) at bufmgr.c:246
#6  0x004676fd in heapgetpage (scan=0x47c2d50, page=680095) at heapam.c:223
#7  0x0046804e in heapgettup (scan=0x47c2d50, dir=<value optimized out>, nkeys=2, key=0x47c3680) at heapam.c:556
#8  0x004685fc in heap_getnext (scan=0x47c2d50, direction=1842784336) at heapam.c:1345
#9  0x006c1378 in RelationBuildTupleDesc (targetRelId=<value optimized out>, insertIt=1 '\001') at relcache.c:468
#10 RelationBuildDesc (targetRelId=<value optimized out>, insertIt=1 '\001') at relcache.c:882
#11 0x006c2abe in RelationIdGetRelation (relationId=2610) at relcache.c:1568
#12 0x0046b5fb in relation_open (relationId=2610, lockmode=1842784336) at heapam.c:907
#13 0x0046b673 in heap_open (relationId=20840476, lockmode=1842784336) at heapam.c:1077
#14 0x006bbd27 in SearchCatCache (cache=0x4777980, v1=2655, v2=0, v3=0, v4=<value optimized out>) at catcache.c:1183
#15 0x006c03da in RelationInitIndexAccessInfo (relation=0x2b8ff54b4688) at relcache.c:996
#16 0x006c1aa5 in RelationBuildDesc (targetRelId=<value optimized out>, insertIt=1 '\001') at relcache.c:904
#17 0x006c1cfd in load_critical_index (indexoid=2655, heapoid=2603) at relcache.c:3080
#18 0x006c3be2 in RelationCacheInitializePhase3 () at relcache.c:2890
#19 0x006d7fcb in InitPostgres (in_dbname=0x47182d0 "aspdata", dboid=0, username=<value optimized out>, out_dbname=0x0) at postinit.c:816
#20 0x00619f3d in PostgresMain (argc=<value optimized out>, argv=0x46f2a28, username=0x46f2950 "aspuser") at postgres.c:3650
#21 0x005df87b in ServerLoop () at postmaster.c:3600
#22 0x005e0497 in PostmasterMain (argc=5, argv=0x46f1c10) at postmaster.c:1115
#23 0x0058518e in main (argc=5, argv=<value optimized out>) at main.c:199


As a stop gap we've added 3 additional schema storage DBs and are rotating
the creations between them, to reduce the number of schemas in the single DB
that we had (and also the rate of schema removal and addition in that DB),
and thereby the impact of the vacuum on what used to be our single DB.  But
this only buys us some time: if the vacuum hits us at a busy time for
requests we will get the same backup of 200-300 connections in the startup
state.

Anyone have any suggestions as to how to resolve this?

The 

[GENERAL] What is the state of the art for using LINQ with PostgreSQL?

2012-09-13 Thread Rob Richardson
Greetings!

I would like to begin learning to use LINQ with PostgreSQL.  Ideally, it would 
be good to have free tools to do it, since I probably won't be finished playing 
with it before 30 days are up and I'd have to pay for something.  It would also 
be good if whatever I use has a good set of tutorials that I can follow to 
begin using them.  Where should I start?

I have found the dotConnect provider from DevArt, and I'm going to begin 
playing with their free Express provider.  Does anyone have experience 
working with DevArt's tools, and if so, what did you think?

Thanks very much!

RobR


[GENERAL] Best free tool for relationship extraction

2012-09-13 Thread Alexander Gataric
I need to determine relationships between several tables. Is there a free tool 
to extract these from the catalog tables? Is there an SQL query that does this?
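
[A compact catalog query can serve as a starting point for direct FK
relationships; this is a generic sketch, not from the thread -- Igor Neyman's
recursive information_schema query later in the thread goes further:

SELECT conrelid::regclass  AS child_table,
       confrelid::regclass AS parent_table,
       conname             AS fk_name
FROM pg_constraint
WHERE contype = 'f'
ORDER BY 1, 2;
]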

Thanks
Alex

Sent from my smartphone



Re: [GENERAL] Auto Vacuum of pg_catalog tables causes huge delay in opening new connections

2012-09-13 Thread Tom Lane
Mike Roest mike.ro...@replicon.com writes:
We have an interesting thing happening on one of our DBs: when
 autovacuum runs against the pg_catalog tables (or we run a manual vacuum)
 we get a large delay in opening new connections.

I think you're hitting the problem that was fixed here:

Author: Tom Lane t...@sss.pgh.pa.us
Branch: master Release: REL9_2_BR [532fe28da] 2012-05-26 19:09:52 -0400
Branch: REL9_1_STABLE Release: REL9_1_4 [6c1bf45ea] 2012-05-26 19:09:59 -0400
Branch: REL9_0_STABLE Release: REL9_0_8 [2ce097e6e] 2012-05-26 19:10:05 -0400
Branch: REL8_4_STABLE Release: REL8_4_12 [35cc2be6f] 2012-05-26 19:10:13 -0400
Branch: REL8_3_STABLE Release: REL8_3_19 [422022b12] 2012-05-26 19:10:19 -0400

Prevent synchronized scanning when systable_beginscan chooses a heapscan.

The only interesting-for-performance case wherein we force heapscan here
is when we're rebuilding the relcache init file, and the only such case
that is likely to be examining a catalog big enough to be syncscanned is
RelationBuildTupleDesc.  But the early-exit optimization in that code gets
broken if we start the scan at a random place within the catalog, so that
allowing syncscan is actually a big deoptimization if pg_attribute is large
(at least for the normal case where the rows for core system catalogs have
never been changed since initdb).  Hence, prevent syncscan here.  Per my
testing pursuant to complaints from Jeff Frost and Greg Sabino Mullane,
though neither of them seem to have actually hit this specific problem.

Back-patch to 8.3, where syncscan was introduced.

 For our setup we're running postgres 9.1.1 compiled from source on Centos
 5.8 x64 (Dual Xeon x5650 with 32 gigs of ram)

Try updating ...

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What is the state of the art for using LINQ with PostgreSQL?

2012-09-13 Thread Mike Christensen
 I would like to begin learning to use LINQ with PostgreSQL.  Ideally, it
 would be good to have free tools to do it, since I probably won’t be
 finished playing with it before 30 days are up and I’d have to pay for
 something.  It would also be good if whatever I use has a good set of
 tutorials that I can follow to begin using them.  Where should I start?



 I have found the dotConnect provider from DevArt, and I’m going to begin
 playing with their free “Express” provider.  Does anyone have experience
 working with DevArt’s tools, and if so, what did you think?

I use DevArt's Oracle drivers at work and they're very, very good.
Their sales people are very difficult to get a response from; however,
their support people are knowledgeable and will always respond with
code examples and whatnot.  However, at least the Oracle tools are
quite expensive.  I'm not familiar with their free stuff, but I'd
expect it has limited functionality.

For my Postgres project at home, I use Castle ActiveRecord, which is an
Active Record implementation built on top of NHibernate.  I absolutely
love it, and it's free and open source.  However, Castle ActiveRecord
is no longer actively being worked on, with the assumption that people will
be using either the .NET Entity Framework or Fluent NHibernate, both of
which support LINQ.  Personally, I prefer the Active Record pattern a
lot more, so I am kinda hoping the Castle project is resurrected at some
point.

Mike


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What is the state of the art for using LINQ with PostgreSQL?

2012-09-13 Thread Rob Richardson
Replying partially to my own question:

I downloaded the Express provider from DevArt.  I then sent them an E-mail, 
written as scornfully as I possibly could, suggesting that if they are going 
to provide samples with their code, the samples should probably be tested at 
least once.

We'll see if their customer support people are willing to reply to someone as 
rude as I was, but they thoroughly deserved all the rudeness I used and many 
times more.

RobR


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Rob Richardson
Sent: Thursday, September 13, 2012 12:46 PM
To: pgsql-general General
Subject: [GENERAL] What is the state of the art for using LINQ with PostgreSQL?

Greetings!

I would like to begin learning to use LINQ with PostgreSQL.  Ideally, it would 
be good to have free tools to do it, since I probably won't be finished playing 
with it before 30 days are up and I'd have to pay for something.  It would also 
be good if whatever I use has a good set of tutorials that I can follow to 
begin using them.  Where should I start?

I have found the dotConnect provider from DevArt, and I'm going to begin 
playing with their free Express provider.  Does anyone have experience 
working with DevArt's tools, and if so, what did you think?

Thanks very much!

RobR


Re: [GENERAL] Is there a way to use pack in pl/perl without resorting to pl/perlu?

2012-09-13 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160


Tom Lane replied:
 My recollection is that they're intentionally excluded because they 
 would otherwise be security holes --- there are some format codes that 
 allow direct access to memory, or something like that. 

They are not part of the standard base core opcode group, but part 
of the still_to_be_decided group, because they can create and use 
memory pointers. It's possible that they may someday get moved out 
of this group, in which case they will suddenly start working :)

Herouth Maoz wrote:
 Anyway, I replaced the functionality with 
 $content =~ s/([a-fA-F0-9]{2})/chr(hex($1))/eg; 
 which seems to be doing the same thing as unpack('H*', $content)

They are not doing the same thing, so be careful. :)

 I suspect it's less efficient than unpack, and I hope the function 
 I created won't be too slow for use inside a trigger.

You should be fine with either one. While the unpack is indeed 
around 3x faster, a simple s/// like the above still runs at over 
220,000 times per second on my slow box, so I would not worry 
about the speed. Remember in general to add a /o to the end of your 
regex if the first part is constant, to avoid unneeded recompilation.
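
[An aside beyond the thread: when the task is only hex-text-to-binary
conversion inside the database, core SQL's encode/decode can avoid pl/perl
entirely; a small sketch, assuming a bytea result is acceptable:

SELECT decode('4a6f7921', 'hex');           -- bytea containing 'Joy!'
SELECT encode('\x4a6f7921'::bytea, 'hex');  -- back to hex text
]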

- -- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201209131324
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-BEGIN PGP SIGNATURE-

iEYEAREDAAYFAlBSGn4ACgkQvJuQZxSWSsiLCQCePwOEYvm52IdIc3tQh5KafrfR
vnYAn0zufZffao1VxhKwquVi6OkIHdz5
=7BQ8
-END PGP SIGNATURE-




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What is the state of the art for using LINQ with PostgreSQL?

2012-09-13 Thread Mike Christensen
BTW, they'll probably take a day to respond since they're all in the Ukraine :)

They also seem to respond to questions fairly quickly on
StackOverflow, if you use the DevArt tag.

On Thu, Sep 13, 2012 at 10:26 AM, Rob Richardson
rdrichard...@rad-con.com wrote:
 Replying partially to my own question:



 I downloaded the Express provider from DevArt.  I then set them an E-mail as
 scornfully as I possibly could write suggesting that if they are going to
 provide samples with their code, the samples should probably be tested at
 least once.



 We’ll see if their customer support people are willing to reply to someone
 as rude as I was, but they thoroughly deserved all the rudeness I used and
 many times more.



 RobR





 From: pgsql-general-ow...@postgresql.org
 [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Rob Richardson
 Sent: Thursday, September 13, 2012 12:46 PM
 To: pgsql-general General
 Subject: [GENERAL] What is the state of the art for using LINQ with
 PostgreSQL?



 Greetings!



 I would like to begin learning to use LINQ with PostgreSQL.  Ideally, it
 would be good to have free tools to do it, since I probably won’t be
 finished playing with it before 30 days are up and I’d have to pay for
 something.  It would also be good if whatever I use has a good set of
 tutorials that I can follow to begin using them.  Where should I start?



 I have found the dotConnect provider from DevArt, and I’m going to begin
 playing with their free “Express” provider.  Does anyone have experience
 working with DevArt’s tools, and if so, what did you think?



 Thanks very much!



 RobR


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Auto Vacuum of pg_catalog tables causes huge delay in opening new connections

2012-09-13 Thread Mike Roest
Hi Tom,
   On the test box this seems to have completely resolved our problem.

I'll be scheduling an upgrade on the production cluster to verify it.

Thanks


On Thu, Sep 13, 2012 at 11:08 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Mike Roest mike.ro...@replicon.com writes:
 We have an interesting thing happening on one of our DBs: when
  autovacuum runs against the pg_catalog tables (or we run a manual vacuum)
  we get a large delay in opening new connections.

 I think you're hitting the problem that was fixed here:

 Author: Tom Lane t...@sss.pgh.pa.us
 Branch: master Release: REL9_2_BR [532fe28da] 2012-05-26 19:09:52 -0400
 Branch: REL9_1_STABLE Release: REL9_1_4 [6c1bf45ea] 2012-05-26 19:09:59
 -0400
 Branch: REL9_0_STABLE Release: REL9_0_8 [2ce097e6e] 2012-05-26 19:10:05
 -0400
 Branch: REL8_4_STABLE Release: REL8_4_12 [35cc2be6f] 2012-05-26 19:10:13
 -0400
 Branch: REL8_3_STABLE Release: REL8_3_19 [422022b12] 2012-05-26 19:10:19
 -0400

 Prevent synchronized scanning when systable_beginscan chooses a
 heapscan.

 The only interesting-for-performance case wherein we force heapscan
 here
 is when we're rebuilding the relcache init file, and the only such case
 that is likely to be examining a catalog big enough to be syncscanned
 is
 RelationBuildTupleDesc.  But the early-exit optimization in that code
 gets
 broken if we start the scan at a random place within the catalog, so
 that
 allowing syncscan is actually a big deoptimization if pg_attribute is
 large
 (at least for the normal case where the rows for core system catalogs
 have
 never been changed since initdb).  Hence, prevent syncscan here.  Per
 my
 testing pursuant to complaints from Jeff Frost and Greg Sabino Mullane,
 though neither of them seem to have actually hit this specific problem.

 Back-patch to 8.3, where syncscan was introduced.

  For our setup we're running postgres 9.1.1 compiled from source on Centos
  5.8 x64 (Dual Xeon x5650 with 32 gigs of ram)

 Try updating ...

 regards, tom lane



[GENERAL] Fixing or diagnosing Canceled on identification as a pivot, during write

2012-09-13 Thread Andrus
After switching to PostgreSQL 9.1 serializable transaction level for all 
transactions, errors like

40001:ERROR: could not serialize access due to read/write dependencies among 
transactions
Reason code: Canceled on identification as a pivot, during write.;

started to appear in the log file during posting of a single document.

Code which causes them is below.

The code involves only a single document (in this example, id 95162).  The 
document is probably not accessed by others.
How can I fix or diagnose this error?

tasutud1 is a temporary table created earlier in the transaction:

CREATE TEMP TABLE tasutud1 (dokumnr INTEGER, tasutud NUMERIC(1)) ON COMMIT DROP

Other tables are permanent tables updated and accessed by 7 users concurrently.


The code where the exception occurs is:

CREATE TEMP TABLE ids(dokumnr INT) ON COMMIT DROP;
INSERT INTO ids VALUES(95162);
analyze ids;
UPDATE DOK set
  kinnitatud = TRUE,
  doksumma=CASE WHEN TRUE THEN COALESCE(doksumma.doksumma,0) ELSE 0 END,
  tasumata =CASE WHEN TRUE AND dok.krdokumnr IS NULL and
  dok.doktyyp IN ('G','O') THEN
   doksumma.doksumma-COALESCE(doksumma.tasutud,0) ELSE 0 END
 FROM
(SELECT 
ids.dokumnr,
SUM( CASE WHEN rid.toode is NULL OR LENGTH(RTRIM(rid.toode))<2 OR 
toode.grupp<>'S' or
   (STRPOS(toode.klass,'T')!=0 AND STRPOS(toode.klass,'E')=0)
  THEN
ROUND(COALESCE(rid.hind,0)*CASE WHEN COALESCE(rid.kogus,0)=0 THEN 1 
ELSE rid.kogus END*CASE WHEN COALESCE(rid.kogpak,0)=0 THEN 1 ELSE rid.kogpak 
END,2) ELSE 0 END ) AS doksumma,
max(tasutud1.tasutud) as tasutud
  FROM ids 
  JOIN dok USING(dokumnr)
  JOIN rid USING(dokumnr)
  LEFT JOIN toode USING(toode)
  LEFT JOIN tasutud1 ON tasutud1.dokumnr=ids.dokumnr
WHERE not rid.fifoexpens and not rid.calculrow
  and (not dok.inventuur or rid.kogus<>0 )
GROUP BY 1
) doksumma 
left join bilkaib on bilkaib.dokumnr=doksumma.dokumnr and bilkaib.alusdok='LO'
WHERE dok.dokumnr=doksumma.dokumnr

Should this code be split into multiple commands to find which part causes 
the exception, or is there another approach?
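
[Two general mitigations for SQLSTATE 40001 under serializable isolation,
sketched here as standard advice rather than a diagnosis of this particular
query: have the client retry the whole transaction on 40001, and mark
purely-reading transactions as such so they cannot be canceled as pivots:

BEGIN ISOLATION LEVEL SERIALIZABLE;
-- ... the posting statements; on a 40001 failure the client re-runs
-- the entire transaction from BEGIN ...
COMMIT;

-- read-only work (reports) can be declared read-only; DEFERRABLE waits for
-- a safe snapshot, after which the transaction can never be a pivot:
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
SELECT count(*) FROM dok;
COMMIT;
]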

Andrus.

Re: [GENERAL] Compressed binary field

2012-09-13 Thread Edson Richter

On 12/09/2012 21:50, Edson Richter wrote:

On 12/09/2012 15:09, Kevin Grittner wrote:

VACUUM FREEZE VERBOSE ANALYZE
Sorry, most of the messages are in Portuguese, but I guess numbers are 
more important, right?


INFO:  index "pk_notafiscalarq" now contains 715084 row versions in 
1963 pages

DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.12 sec.
INFO:  "notafiscalarq": found 0 removable and 715084 nonremovable row 
versions in 6684 out of 6684 pages
DETAIL:  0 dead row versions cannot be removed yet.

There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.03s/0.04u sec elapsed 0.38 sec.
INFO:  vacuuming "pg_toast.pg_toast_18409"
INFO:  index "pg_toast_18409_index" now contains 2643728 row versions in 
7251 pages

DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.10 sec.
INFO:  "pg_toast_18409": found 0 removable and 2643728 nonremovable row 
versions in 636949 out of 636949 pages
DETAIL:  0 dead row versions cannot be removed yet.

There were 0 unused item pointers.
0 pages are entirely empty.
CPU 2.45s/1.30u sec elapsed 30.16 sec.
INFO:  analyzing "public.notafiscalarq"
INFO:  "notafiscalarq": scanned 6684 of 6684 pages, containing 
715084 live rows and 0 dead rows; 3 rows sampled, 
715084 estimated total rows



Thanks,

Edson



Anything else I can do from here?

Edson


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Compressed binary field

2012-09-13 Thread Kevin Grittner
Edson Richter edsonrich...@hotmail.com wrote:
 
 Anything else I can do from here?
 
Did that result in more accurate numbers for pg_class.reltuples?
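
[A quick way to check that, as a sketch; the table names are taken from the
VACUUM output quoted upthread:

SELECT relname, reltuples::bigint AS reltuples, relpages
FROM pg_class
WHERE relname IN ('notafiscalarq', 'pg_toast_18409');
]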
 
-Kevin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] how long to wait on 9.2 bitrock installer?

2012-09-13 Thread Dann Corbit
-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
Sent: Wednesday, September 12, 2012 11:00 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] how long to wait on 9.2 bitrock installer?

On 09/12/12 10:42 PM, Dann Corbit wrote:

 I just let it sit.  3.5 hours later, it completed.

 ...

 I have several versions of PostgreSQL already installed on this 
 machine.  We need to test compatibility with PostgreSQL database 
 systems with our products.

 I tried to install 9.2 64 bit using the one click installer from this
 location:

 http://www.enterprisedb.com/products/pgdownload.do#windows

 I did use an unused port of 5439 as 5432-5438 are already in use by 
 other instances of PostgreSQL.


are those other instances running?  if they have significant shared memory 
allocations, your 12GB of ram could be significantly blocked up, so the new 
install may have ended up doing a lot of paging.

I had 2 gigs free RAM when the install started.

also, your configuration file says this system has a single 1TB drive?   
that's a weak storage configuration for a database server, especially if it's a 
7200rpm or less SATA drive.

It's a raid array.   The system diagnostic software is not perfect.
I'm a developer.  The only database activity on this particular machine is 
testing for correctness.



-- 
john r pierce                          N 37, W 122
santa cruz ca mid-left coast





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What is the state of the art for using LINQ with PostgreSQL?

2012-09-13 Thread Rob Richardson
I downloaded the LinqConnect demo, which is what I really need, and which seems 
to work well.  I'm unsure whether their Entity Developer tool is required with 
it, or if I can get the context class I need for inclusion in a C# project some 
other way, such as with something provided inside Visual Studio.

RobR


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What is the state of the art for using LINQ with PostgreSQL?

2012-09-13 Thread Adam Bruss
I use Devart dotConnect for PostgreSQL at work and love it. I use it for 
inserting into and querying a database of regression test results. I also use 
it for managing a database of crash reports. I can't say anything bad about it. 
One of the cool features I like a lot is the number of Get-type functions the 
PgSqlDataReader class has: there is a GetBoolean() for bools, a GetDate() for 
timestamps, etc. It's also as cheap as can be: $200 US for a single developer, 
which gets you the API and the visual developer tools.

Adam Bruss
Senior Development Engineer
AWR Corporation
11520 N. Port Washington Rd., Suite 201
Mequon, WI  53092  USA
P: 1.262.240.0291 x104
F: 1.262.240.0294
E: abr...@awrcorp.com
W: http://www.awrcorp.com/

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Rob Richardson
Sent: Thursday, September 13, 2012 11:46 AM
To: pgsql-general General
Subject: [GENERAL] What is the state of the art for using LINQ with PostgreSQL?

Greetings!

I would like to begin learning to use LINQ with PostgreSQL.  Ideally, it would 
be good to have free tools to do it, since I probably won't be finished playing 
with it before 30 days are up and I'd have to pay for something.  It would also 
be good if whatever I use has a good set of tutorials that I can follow to 
begin using them.  Where should I start?

I have found the dotConnect provider from DevArt, and I'm going to begin 
playing with their free Express provider.  Does anyone have experience 
working with DevArt's tools, and if so, what did you think?

Thanks very much!

RobR


Re: [GENERAL] 9.0 to 9.2 pg_upgrade pain due to collation mismatch

2012-09-13 Thread Peter Eisentraut
On 9/12/12 2:31 PM, Tom Lane wrote:
 C is the official name of that locale.  Not sure how you got it to say
 POSIX ... maybe we didn't have normalization of the locale name back
 then?

Says who?  I think C and POSIX are distinct locales that just happen to
behave the same way.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best free tool for relationship extraction

2012-09-13 Thread Eden Cardim

AG I need to determine relationships between several tables. Is there a
AG free tool to extract these from catalog tables? Is there an SQL that
AG also does this?

https://metacpan.org/module/DBIx::Class

It can't figure out m2m's though, for obvious reasons.

-- 
Polytope tetris is so unrealistic



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best free tool for relationship extraction

2012-09-13 Thread Igor Neyman
From: Alexander Gataric [mailto:gata...@usa.net] 
Sent: Thursday, September 13, 2012 12:52 PM
To: pgsql-general@postgresql.org
Subject: Best free tool for relationship extraction

I need to determine relationships between several tables. Is there a free tool 
to extract these from catalog tables? Is there an SQL that also does this?

Thanks
Alex


Try this SQL:

WITH RECURSIVE FK_recursive(distance, child_table, parent_table, 
FK_constraint_name, unique_constraint_name,
  ON_DELETE, ON_UPDATE, is_deferrable, 
FK_path) AS (
SELECT 1, tc.table_name, ctu.table_name, ctu.constraint_name, 
rc.unique_constraint_name,
   rc.delete_rule, rc.update_rule, tc.is_deferrable, 
quote_ident(ctu.table_name)
FROM information_schema.constraint_table_usage ctu, 
information_schema.table_constraints tc, 
information_schema.referential_constraints rc
WHERE ctu.table_name = 'gp_part_space' and ctu.table_catalog = 'vector'
  and tc.constraint_name = ctu.constraint_name and tc.constraint_type = 
'FOREIGN KEY' and tc.constraint_catalog = 'vector'
  and ctu.constraint_name = rc.constraint_name
  UNION ALL
SELECT er.distance + 1, tc.table_name, ctu.table_name, ctu.constraint_name, 
rc.unique_constraint_name, 
   rc.delete_rule, rc.update_rule, tc.is_deferrable, er.FK_path || ' -> ' 
|| quote_ident(ctu.table_name)
FROM FK_recursive er, information_schema.constraint_table_usage ctu, 
information_schema.table_constraints tc, 
information_schema.referential_constraints rc
WHERE er.child_table = ctu.table_name and ctu.table_catalog = 'vector'
  and tc.constraint_name = ctu.constraint_name and tc.constraint_type = 
'FOREIGN KEY' and tc.constraint_catalog = 'vector'
  and ctu.constraint_name = rc.constraint_name
  )
SELECT distance, child_table, parent_table, FK_constraint_name, 
unique_constraint_name, 
   ON_DELETE, ON_UPDATE, is_deferrable, FK_path || ' -> ' || 
quote_ident(child_table) AS FK_path
  FROM FK_recursive ORDER BY distance, parent_table;

If you get an error like this (possible on 8.4.5):

ERROR:  operator is not unique: smallint[] <@ smallint[]
LINE 1: select $1 <@ $2 and $2 <@ $1
                  ^
HINT:  Could not choose a best candidate operator. You might need to add 
explicit type casts.
QUERY:  select $1 <@ $2 and $2 <@ $1
CONTEXT:  SQL function "_pg_keysequal" during inlining

Then recompile the function:

SET search_path TO information_schema;
CREATE OR REPLACE FUNCTION _pg_keysequal(smallint[], smallint[]) RETURNS boolean
LANGUAGE sql IMMUTABLE
AS 'select $1 operator(pg_catalog.<@) $2 and $2 operator(pg_catalog.<@) $1';
SET search_path TO public;


And then re-run original recursive query.

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump -Fd must create directory

2012-09-13 Thread David Salisbury


It looks to me like you're misusing git.

You should only git init once, and always use that directory.
Then pg_dump, which should create one file per database
with the file name you've specified.
I'm not sure of the flags, but I'd recommend plain text format.

I'm also unsure what you mean by network traffic, as you don't
mention a remote repository, but there are nice visual tools
for you to see the changes to files between your committed
objects.  Re-running git init will more than likely lose all changes
to files.

-ds



On 9/12/12 5:12 AM, François Beausoleil wrote:

Hi all!

Why must pg_dump create a fresh new directory every time? I'm running some tests 
where I dump a database to a directory, git init and git add --all, then dump 
again. When I did that after making some modifications (specifically creating a 
new table and adding a few hundred thousand records), git status told me some 
files had been renamed and others were removed. I worked around this by dumping 
to a new directory, then moving .git manually.

My real-world use case is I have a largish database (46 GiB gzip'd dump) that I 
dump every few hours, and some tables are essentially static. I was thinking of 
saving some network traffic by transferring only the delta.

Any thoughts on this? Is this something that can change, or can be made to 
change? Where are the IDs used in the dump coming from? Can they be made stable?

Thanks!
François Beausoleil

$ cat a.sh
PGDATABASE=${USER}
rm -rf thedump thedump2
psql -c "select * into a from generate_series(1, 10, 1) as t1(a)"
pg_dump -Fd --file=thedump
cd thedump && git init && git add --all . && git commit -m initial && cd ..
psql -c "select * into b from generate_series(1, 10, 1) as t1(b)"
pg_dump -Fd --file=thedump2
mv thedump/.git thedump2/
cd thedump2
git status --short

$ sh a.sh
SELECT 10
Initialized empty Git repository in /root/tmp/thedump/.git/
[master (root-commit) 9f8dc9f] initial
  2 files changed, 0 insertions(+), 0 deletions(-)
  create mode 100644 1882.dat.gz
  create mode 100644 toc.dat
SELECT 10
  D 1882.dat.gz
  M toc.dat
?? 1886.dat.gz
?? 1887.dat.gz




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Amazon High I/O instances

2012-09-13 Thread Sébastien Lorion
I started db creation over, this time with 16GB maintenance_work_mem and
fsync=off and it does not seem to have a great effect. After again 5 hours,
during index creation, disk and cpu are barely used: 95% idle and 2-3 MB/s
writes (150 reads/s, 90 writes/s).

Sébastien

On Thu, Sep 13, 2012 at 1:29 AM, John R Pierce pie...@hogranch.com wrote:

 On 09/12/12 10:01 PM, Sébastien Lorion wrote:

 pgbench initialization has been going on for almost 5 hours now and still
 stuck before vacuum starts .. something is definitely wrong as I don't
 remember it took so long first time I created the db


 pgbench initialization with a high scale factor, like the -s 1 I
 frequently use, does take quite a few hours.you need a large
 maintenance_work_mem, or the create index phase will take a really long
 time to index the 150GB worth of tables its created.




 --
 john r pierce                          N 37, W 122
 santa cruz ca mid-left coast




 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Best free tool for relationship extraction

2012-09-13 Thread Bill Moran
On Thu, 13 Sep 2012 11:51:42 -0500 Alexander Gataric gata...@usa.net wrote:

 I need to determine relationships between several tables. Is there a free 
 tool to extract these from catalog tables? Is there an SQL that also does 
 this?

I've gotten good results with schemaspy.
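
For the SQL half of the question, the declared foreign-key relationships can
be read straight out of pg_constraint. A minimal sketch:

SELECT conrelid::regclass  AS referencing_table,
       confrelid::regclass AS referenced_table,
       conname             AS constraint_name
FROM pg_constraint
WHERE contype = 'f'
ORDER BY 1, 2;

That only covers declared foreign keys, of course; schemaspy will also guess
at implied relationships.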

-- 
Bill Moran wmo...@potentialtech.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Amazon High I/O instances

2012-09-13 Thread John R Pierce

On 09/13/12 2:08 PM, Sébastien Lorion wrote:
I started db creation over, this time with 16GB maintenance_work_mem 
and fsync=off, and it does not seem to have had a great effect. After 
another 5 hours, during index creation, disk and cpu are barely used: 
95% idle and 2-3 MB/s writes (150 reads/s, 90 writes/s).


I've never had to set maintenance_work_mem any higher than 1GB to get 
perfectly good performance.


What's the %busy on the disk? If you have a slow disk device (such as 
a shared virtual disk), 90 writes/sec may be all it's good for. MB/s is 
fairly meaningless when dealing with random committed writes.






--
john r pierce                            N 37, W 122
santa cruz ca mid-left coast




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Planner forces seq scan when select without quoting its values

2012-09-13 Thread Alex Lai

On 09/13/2012 09:59 AM, Tom Lane wrote:

Alex Lai m...@sesda2.com writes:

EXPLAIN insert into alex1 (fileid, archiveset) select 35352974, 10003;
[vs]
EXPLAIN insert into alex1 (fileid, archiveset) select '35352974', '10003';

You might be happier using INSERT ... VALUES instead of INSERT
... SELECT for this.  In the former, the parser is going to be more
aggressive about forcing values to the correct datatype, which is the
root of your difficulties here.

regards, tom lane

Tom,

I noticed that using VALUES works, as in this query:
insert into alex1 (fileid, archiveset) values (35352974, 10003);
The problem I am facing is that our application needs to clone a number of 
rows from another select statement.


I have not found a way to insert based on a select statement without 
writing PL/pgSQL.  I hope to find a way, such as casting the datatype, but I 
do not have a solution.
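
What I am hoping for is something like casting in the select list itself -- 
a sketch only ('some_source' here is a stand-in for our real query, and I am 
guessing at the target column types):

INSERT INTO alex1 (fileid, archiveset)
SELECT s.fileid::integer, s.archiveset::integer
FROM some_source s;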


I tried something like:
... WHERE ... a.archiveset::INTEGER = new.archiveset::INTEGER AND
a.fileid::INTEGER <> new.fileid::INTEGER

but it still does a seq scan.


--
Best regards,


Alex Lai
OMI SIPS DBA ADNET Systems , Inc.
7515 Mission Drive,
Suite A100 Lanham, MD 20706
301-352-4657 (phone)
301-352-0437 (fax)
a...@sesda2.com



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.0 to 9.2 pg_upgrade pain due to collation mismatch

2012-09-13 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On 9/12/12 2:31 PM, Tom Lane wrote:
 "C" is the official name of that locale.  Not sure how you got it to say
 "POSIX" ... maybe we didn't have normalization of the locale name back
 then?

 Says who?

Says setlocale(), at least on the Fedora machine I just checked.
The reason I see this:

$ initdb --locale=POSIX
The files belonging to this database system will be owned by user tgl.
This user must also own the server process.

The database cluster will be initialized with locale "C".
The default database encoding has accordingly been set to "SQL_ASCII".
The default text search configuration will be set to "english".

is that setlocale is returning "C" as the canonical name of the locale.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Need help in reclaiming disk space by deleting the selected records

2012-09-13 Thread Yelai, Ramkumar IN BLR STS
Hi All,

I am a beginner in PostgreSQL and databases. I have a requirement to reclaim 
disk space by deleting the rows in a selected time span. I went through the 
documents and articles to learn how to get the table size 
(http://wiki.postgresql.org/wiki/Disk_Usage).

But before letting the user delete, I have to show the size of the records in 
the selected time span, and I don't know how to calculate the size of the 
selected records.
In addition to this, I assume that after deleting the records I have to run 
the VACUUM FULL command to reclaim the space (please correct me if I am wrong 
or let me know the best approach).

The table looks like this

CREATE TABLE IF NOT EXISTS SN_SamplTable
(
  ID integer NOT NULL,
  "Data" integer,
  CLIENT_COUNT_TIMESTAMP timestamp without time zone
);
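
The closest I have found so far is summing pg_column_size() over the 
candidate rows, but I am not sure it is the right measure, since it ignores 
index and page-level overhead (the date range below is only an example):

SELECT pg_size_pretty(SUM(pg_column_size(t.*)))
FROM SN_SamplTable t
WHERE CLIENT_COUNT_TIMESTAMP BETWEEN '2012-01-01' AND '2012-06-30';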

Please help me with how to proceed on this.

Thanks & Regards,
Ramkumar.




Re: [GENERAL] Fixing or diagnosing Canceled on identification as a pivot, during write

2012-09-13 Thread Kevin Grittner
Andrus kobrule...@hot.ee wrote:
 After switching to the PostgreSQL 9.1 serializable transaction level
 for all transactions, while posting a single document the errors
 
 40001:ERROR: could not serialize access due to read/write
 dependencies among transactions
 Reason code: Canceled on identification as a pivot, during write.;
 
 started to appear in the log file.
 
This means that the database transaction was canceled to prevent
data anomalies from a race condition between different database
transactions.  The documentation says applications using this level
must be prepared to retry transactions due to serialization
failures.  Are these transactions succeeding when they are retried?
What percentage of statements are requiring a retry?  If the
percentage is high enough to cause concern, have you tried the
things recommended in the "For optimal performance" section of the
Serializable docs?
 
http://www.postgresql.org/docs/9.1/interactive/transaction-iso.html#XACT-SERIALIZABLE

 Code which causes them is below.
 
 The code involves only a single document (in this example id 95162). The
 document is probably not accessed by others.
 
It is not necessarily this statement which is causing the conflict;
it might be getting canceled because it happens to be the next
statement run in the transaction after the race condition is
detected based on other statements.
 
 How can I fix or diagnose this error?
 
These are not something that you necessarily need to fix -- they
are necessary to protect your data if you want to use serializable
transactions to protect your data integrity rather than taking out
locks which cause blocking.
 
 tasutud1 is a temporary table created earlier in the transaction:
 
 CREATE TEMP TABLE tasutud1 (dokumnr INTEGER, tasutud NUMERIC(1))
 ON COMMIT DROP
 
Temporary tables do not contribute to this sort of event.
 
 Other tables are permanent tables updated and accessed by 7 users 
 concurrently.
 
 
 UPDATE DOK set
   kinnitatud = TRUE,
   doksumma=CASE WHEN TRUE THEN COALESCE(doksumma.doksumma,0)
 ELSE 0 END,
   tasumata =CASE WHEN TRUE AND dok.krdokumnr IS NULL and
   dok.doktyyp IN ('G','O') THEN
doksumma.doksumma-COALESCE(doksumma.tasutud,0) ELSE 0
 END
  FROM
 (SELECT 
 ids.dokumnr,
 SUM( CASE WHEN rid.toode is NULL OR LENGTH(RTRIM(rid.toode))<2
   OR toode.grupp<>'S' or
(STRPOS(toode.klass,'T')!=0 AND
 STRPOS(toode.klass,'E')=0)
   THEN
 ROUND(COALESCE(rid.hind,0)*CASE WHEN
   COALESCE(rid.kogus,0)=0 THEN 1 
   ELSE rid.kogus END*CASE WHEN COALESCE(rid.kogpak,0)=0 THEN 1
  ELSE rid.kogpak 
 END,2) ELSE 0 END ) AS doksumma,
 max(tasutud1.tasutud) as tasutud
   FROM ids 
   JOIN dok USING(dokumnr)
   JOIN rid USING(dokumnr)
   LEFT JOIN toode USING(toode)
   LEFT JOIN tasutud1 ON tasutud1.dokumnr=ids.dokumnr
 WHERE not rid.fifoexpens and not rid.calculrow
   and (not dok.inventuur or rid.kogus<>0 )
 GROUP BY 1
 ) doksumma 
 left join bilkaib on bilkaib.dokumnr=doksumma.dokumnr and 
 bilkaib.alusdok='LO'
 WHERE dok.dokumnr=doksumma.dokumnr
 
 Should this code be split into multiple commands to find which part
 causes the exception, or is there another idea?
 
Splitting up this statement is probably not a good idea.  You might
want to look at the overall transaction and make sure that it is a
sensible grouping of work -- in particular that for correctness
everything written by it should be persisted and become visible
*atomically*.  If that is *not* required for correctness, then
splitting the transaction into smaller units of work might be a good
idea.
 
The first thing to do is to ensure that you are running through
some sort of framework which will automatically retry any
transaction which fails with SQLSTATE 40001.  You should expect some
exceptions like this, although in most workloads it is a fraction of
a percent.  They can happen on any statement in the transaction,
including COMMIT.
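 
To illustrate the shape of it, here is a bare-bones sketch at the shell
level (the script name is hypothetical; a real application framework
would wrap its own transaction API instead):
 
for try in 1 2 3 4 5; do
    # run the whole posting transaction; psql exits nonzero on error
    psql --set=ON_ERROR_STOP=1 -f post_document.sql 2> err.log && break
    # retry only serialization failures (SQLSTATE 40001)
    grep -q 'could not serialize' err.log || break
done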
 
If it is happening enough to cause a significant performance hit,
please review the documentation cited above and take the suggested
steps.  (Declare transactions READ ONLY where possible, don't leave
connections idle in transaction for extended periods, adjust
configuration settings, etc.)
 
If you're still having problems at that point, it would be helpful
to know more about your overall mix of transactions.
 
-Kevin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Compressed binary field

2012-09-13 Thread Edson Richter

On 13/09/2012 16:12, Kevin Grittner wrote:

Edson Richter edsonrich...@hotmail.com wrote:
  

Anything else I can do from here?
  
Did that result in more accurate numbers for pg_class.reltuples?
  
-Kevin


I don't know how the numbers were not accurate - to me they always seemed 
consistent with what I knew about the data...
Let's repeat all the tests again (note that data grows on a daily basis, so 
the numbers will be a bit different - and yes, I've run the vacuum again):


SELECT pg_size_pretty(pg_database_size('mydatabase'));
 pg_size_pretty
----------------
 7238 MB

SELECT pg_size_pretty(pg_relation_size('notafiscalarq'));
 pg_size_pretty
----------------
 52 MB

SELECT nspname || '.' || relname AS relation,
pg_size_pretty(pg_relation_size(C.oid)) AS size
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_relation_size(C.oid) DESC
LIMIT 10;
            relation             |  size
---------------------------------+---------
 pg_toast.pg_toast_18409         | 4976 MB
 pg_toast.pg_toast_18146         | 290 MB
 public.cotacao                  | 229 MB
 public.elementocalculado        | 179 MB
 public.cotacaotransicaosituacao | 155 MB
 public.log                      | 112 MB
 public.logradouro               | 82 MB
 public.cotacaonf                | 60 MB
 public.notafiscal               | 60 MB
 public.tabelacalculada          | 60 MB


SELECT nspname || '.' || relname AS relation,
pg_size_pretty(pg_total_relation_size(C.oid)) AS total_size
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
AND C.relkind <> 'i'
AND nspname !~ '^pg_toast'
ORDER BY pg_total_relation_size(C.oid) DESC
LIMIT 10;
            relation             | total_size
---------------------------------+------------
 public.notafiscalarq            | 5102 MB
 public.cotacao                  | 331 MB
 public.elementocalculado        | 313 MB
 public.documentotransportearq   | 294 MB
 public.cotacaotransicaosituacao | 233 MB
 public.log                      | 196 MB
 public.logradouro               | 149 MB
 public.cotacaonf                | 118 MB
 public.tabelacalculada          | 116 MB
 public.notafiscal               | 94 MB


SELECT relkind, oid, relfilenode, reltoastrelid,
relpages, reltuples
FROM pg_class
ORDER BY relpages DESC
LIMIT 10;
 relkind |  oid  | relfilenode | reltoastrelid | relpages |  reltuples
---------+-------+-------------+---------------+----------+-------------
 t       | 18413 |       18413 |             0 |   636949 | 2.64373e+06
 t       | 18150 |       18150 |             0 |    37086 |      149502
 r       | 18064 |       18064 |         18086 |    29347 |      639695
 r       | 18179 |       18179 |             0 |    22901 |  1.8172e+06
 r       | 18116 |       18116 |         18121 |    19779 |      724619
 r       | 18343 |       18343 |         18347 |    14325 |      928805
 r       | 18352 |       18352 |             0 |    10488 |      917134
 r       | 18092 |       18092 |             0 |     7695 |      640804
 r       | 18396 |       18396 |         18404 |     7671 |      172792
 r       | 18558 |       18558 |             0 |     7644 |      388332



show block_size;
 block_size
------------
 8192


Regards,

Edson


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need help in reclaiming disk space by deleting the selected records

2012-09-13 Thread Steve Crawford

On 09/13/2012 06:33 AM, Yelai, Ramkumar IN BLR STS wrote:

Hi All,
I am a beginner in PostgreSQL and databases. I have a requirement to 
reclaim disk space by deleting the rows in a selected time span. I 
went through the documents and articles to learn how to get the table 
size (http://wiki.postgresql.org/wiki/Disk_Usage).
But before letting the user delete, I have to show the size of the 
records in the selected time span, and I don't know how to calculate 
the size of the selected records.
In addition to this, I assume that after deleting the records I have to 
run the VACUUM FULL command to reclaim the space (please correct me if 
I am wrong or let me know the best approach).

The table looks like this:
CREATE TABLE IF NOT EXISTS SN_SamplTable
(
ID integer NOT NULL,
"Data" integer,
CLIENT_COUNT_TIMESTAMP timestamp without time zone
);
Please help me with how to proceed on this.


Some things to consider:

1. If you have indexes on the table you need to consider the additional 
disk space recovered there.


2. CLUSTER is typically *way* faster than VACUUM FULL and rebuilds the 
indexes as well, but it temporarily requires enough free disk space to 
write out a copy of the table being clustered.


3. If you can pre-plan for removing old data, for example you are 
collecting log data and need a rolling 3 months, then table partitioning 
is the way to go. You do this using an empty parent table and putting 
the data into child tables, each of which covers a specific time span, 
perhaps one child table per month or per week. When the data is no 
longer required you simply dump the child table if desired and then drop 
it. This is a virtually instant process that does not cause table bloat. 
Partitioning by date is only one way: you could determine that you need 
to drop data by user-ID and partition that way, or by a combination of 
ID and date range. But this method does not work if you need to remove 
arbitrary date ranges.
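
A bare-bones sketch of option 3 with inheritance-based partitions (the 
names and date ranges here are invented for illustration):

CREATE TABLE sn_sample (
    id integer NOT NULL,
    data integer,
    client_count_timestamp timestamp without time zone
);

CREATE TABLE sn_sample_2012_09 (
    CHECK (client_count_timestamp >= '2012-09-01'
       AND client_count_timestamp <  '2012-10-01')
) INHERITS (sn_sample);

-- when September's data expires, reclaim its space immediately:
DROP TABLE sn_sample_2012_09;

Inserts have to be routed to the correct child (typically with a trigger 
on the parent), but dropping a child returns its disk space at once, with 
no VACUUM FULL and no bloat.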


Cheers,
Steve





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best free tool for relationship extraction

2012-09-13 Thread Chris Curvey
I usually just forget her birthday, eye color, or name.  Then she extracts
herself from the relationship.

QED.


Re: [GENERAL] pg_dump -Fd must create directory

2012-09-13 Thread François Beausoleil

On 2012-09-13, at 16:51, David Salisbury wrote:

 
 It looks to me like you're misusing git.
 
 You should only git init once, and always use that directory.
 Then pg_dump, which should create one file per database
 with the file name you've specified.
 Not sure of the flags but I'd recommend plain text format.
 
 I'm also unsure what you mean by network traffic, as you don't
 mention a remote repository, but there are nice visual tools
 for you to see the changes to files between your committed
 objects.  git init will more than likely lose all changes
 to files.

I was just running a test: looking at a way to transfer large amounts of data 
for backup purposes with a tool that's especially suited for deltas. I know 
about rsync, but this was a thought experiment. I was only surprised by 
pg_dump's restriction that it must create a new directory every time, and was 
looking for a rationale.

Also, git init is a safe operation: within a repository, git init says it 
reinitialized the repository, but does not lose files. I haven't tried it 
with local changes or a dirty index.

Finally, when NOT using the plain text format, pg_restore can restore more than 
one table at a time, using the --jobs flag. On a multi-core, multi-spindle 
machine, this can cut down the restore time tremendously.
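
For example (a sketch; the database name is invented):

pg_restore --jobs=4 --dbname=mydb thedump2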

Bye,
François

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Performance of pl/pgsql functions?

2012-09-13 Thread Wells Oliver
Do these tend to perform well? I have some simple formulas in functions
like so:

CREATE OR REPLACE FUNCTION public.stat_foo(a integer, b integer, c integer)
  RETURNS numeric AS
$BODY$

declare ret numeric;

begin
select (a+b) / c::numeric into ret;
return round(ret, 3);
end

$BODY$
LANGUAGE plpgsql IMMUTABLE COST 100;

The reason I'm doing this is that I repeat this formula in a bunch of
views and queries, and it's easier to have one function. Would this somehow
be slower than reproducing the formula in every view it's used in? I'm hoping
not...

-- 
Wells Oliver
wellsoli...@gmail.com


Re: [GENERAL] Performance of pl/pgsql functions?

2012-09-13 Thread Pavel Stehule
2012/9/14 Wells Oliver wellsoli...@gmail.com:
 Do these tend to perform well? I have some simple formulas in functions like
 so:

 CREATE OR REPLACE FUNCTION public.stat_foo(a integer, b integer, c integer)
   RETURNS numeric AS
 $BODY$

 declare ret numeric;

 begin
 select (a+b) / c::numeric into ret;
 return round(ret, 3);
 end

 $BODY$
 LANGUAGE plpgsql IMMUTABLE COST 100;

it is not good

CREATE OR REPLACE FUNCTION public.stat_foo(a integer, b integer, c integer)
RETURNS numeric AS $$
BEGIN
  RETURN round((a + b) / c::numeric, 3);
END
$$ LANGUAGE plpgsql IMMUTABLE;

will be significantly faster

probably a plain SQL function will be fastest

CREATE OR REPLACE FUNCTION public.stat_foo(a integer, b integer, c integer)
RETURNS numeric AS $$
  SELECT round(($1 + $2) / $3::numeric, 3);
$$ LANGUAGE sql;
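
for example, with the corrected definition above:

SELECT stat_foo(7, 5, 8);  -- 1.500

a simple SQL function like this can also be inlined into the calling query
by the planner, so the views should pay little or no per-call overhead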

Regards

Pavel Stehule


 The reason I'm doing this is that I repeat this formula in a bunch of
 views and queries, and it's easier to have one function. Would this somehow
 be slower than reproducing the formula in every view it's used in? I'm hoping
 not...

 --
 Wells Oliver
 wellsoli...@gmail.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general