Re: [PERFORM] requested shared memory size overflows size_t

2010-06-25 Thread Jim Montgomery

Remove me from your email traffic.
 
 Date: Thu, 24 Jun 2010 23:05:06 -0400
 Subject: Re: [PERFORM] requested shared memory size overflows size_t
 From: robertmh...@gmail.com
 To: alvhe...@commandprompt.com
 CC: craig_ja...@emolecules.com; pgsql-performance@postgresql.org
 
 On Thu, Jun 24, 2010 at 7:19 PM, Alvaro Herrera
 alvhe...@commandprompt.com wrote:
  Excerpts from Craig James's message of jue jun 24 19:03:00 -0400 2010:
 
  select relname, pg_relation_size(relname) from pg_class
   where pg_get_userbyid(relowner) = 'emol_warehouse_1'
   and relname not like 'pg_%'
   order by pg_relation_size(relname) desc;
  ERROR:  relation rownum_temp does not exist
 
  emol_warehouse_1= select relname from pg_class where relname = 
  'rownum_temp';
  relname
  --
rownum_temp
  (1 row)
 
  What's the full row?  I'd just add a WHERE relkind = 'r' to the above
  query anyway.
 
 Yeah - also, it would probably be good to call pg_relation_size on
 pg_class.oid rather than pg_class.relname, to avoid any chance of
 confusion over which objects are in which schema.
 
 -- 
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise Postgres Company
 
 -- 
 Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-performance
  

Re: [PERFORM] requested shared memory size overflows size_t

2010-06-24 Thread Craig James

Can anyone tell me what's going on here?  I hope this doesn't mean my system 
tables are corrupt...

Thanks,
Craig


select relname, pg_relation_size(relname) from pg_class
where pg_get_userbyid(relowner) = 'emol_warehouse_1'
and relname not like 'pg_%'
order by pg_relation_size(relname) desc;
ERROR:  relation rownum_temp does not exist

emol_warehouse_1= select relname from pg_class where relname = 'rownum_temp';
   relname
--
 rownum_temp
(1 row)

emol_warehouse_1= \d rownum_temp
Did not find any relation named rownum_temp.
emol_warehouse_1= create table rownum_temp(i int);
CREATE TABLE
emol_warehouse_1= drop table rownum_temp;
DROP TABLE
emol_warehouse_1= select relname, pg_relation_size(relname) from pg_class
where pg_get_userbyid(relowner) = 'emol_warehouse_1'
and relname not like 'pg_%'
order by pg_relation_size(relname) desc;
ERROR:  relation rownum_temp does not exist

emol_warehouse_1= select relname, pg_relation_size(relname) from pg_class;
ERROR:  relation tables does not exist







Re: [PERFORM] requested shared memory size overflows size_t

2010-06-24 Thread Alvaro Herrera
Excerpts from Craig James's message of jue jun 24 19:03:00 -0400 2010:

 select relname, pg_relation_size(relname) from pg_class
  where pg_get_userbyid(relowner) = 'emol_warehouse_1'
  and relname not like 'pg_%'
  order by pg_relation_size(relname) desc;
 ERROR:  relation rownum_temp does not exist
 
 emol_warehouse_1= select relname from pg_class where relname = 'rownum_temp';
 relname
 --
   rownum_temp
 (1 row)

What's the full row?  I'd just add a WHERE relkind = 'r' to the above
query anyway.
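A minimal sketch of that tweak (same query as Craig's, with the suggested filter; relkind = 'r' restricts pg_class to ordinary tables, skipping indexes, views, and composite-type rows):

```sql
SELECT relname, pg_relation_size(relname)
  FROM pg_class
 WHERE pg_get_userbyid(relowner) = 'emol_warehouse_1'
   AND relkind = 'r'
   AND relname NOT LIKE 'pg_%'
 ORDER BY pg_relation_size(relname) DESC;
```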

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [PERFORM] requested shared memory size overflows size_t

2010-06-24 Thread Robert Haas
On Thu, Jun 24, 2010 at 7:19 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
 Excerpts from Craig James's message of jue jun 24 19:03:00 -0400 2010:

 select relname, pg_relation_size(relname) from pg_class
          where pg_get_userbyid(relowner) = 'emol_warehouse_1'
          and relname not like 'pg_%'
          order by pg_relation_size(relname) desc;
 ERROR:  relation rownum_temp does not exist

 emol_warehouse_1= select relname from pg_class where relname = 
 'rownum_temp';
         relname
 --
   rownum_temp
 (1 row)

 What's the full row?  I'd just add a WHERE relkind = 'r' to the above
 query anyway.

Yeah - also, it would probably be good to call pg_relation_size on
pg_class.oid rather than pg_class.relname, to avoid any chance of
confusion over which objects are in which schema.
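Robert's variant might look like this (a sketch: pg_relation_size has accepted an oid argument since 8.1, and the oid identifies the relation unambiguously regardless of search_path):

```sql
SELECT c.relname, pg_relation_size(c.oid) AS bytes
  FROM pg_class c
 WHERE pg_get_userbyid(c.relowner) = 'emol_warehouse_1'
   AND c.relkind = 'r'
 ORDER BY pg_relation_size(c.oid) DESC;
```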

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [PERFORM] requested shared memory size overflows size_t

2010-06-18 Thread Kenneth Marshall
On Fri, Jun 18, 2010 at 12:46:11AM +0100, Tom Wilcox wrote:
 On 17/06/2010 22:41, Greg Smith wrote:
 Tom Wilcox wrote:
 Any suggestions for good monitoring software for linux?

 By monitoring, do you mean for alerting purposes or for graphing purposes? 
  Nagios is the only reasonable choice for the former, while doing at best 
 a mediocre job at the latter.  For the latter, I've found that Munin does a 
 good job of monitoring Linux and PostgreSQL in its out of the box 
 configuration, in terms of providing useful activity graphs.  And you can 
 get it to play nice with Nagios.

 Thanks Greg. I'll check Munin and Nagios out. It is very much for graphing 
 purposes. I would like to be able to perform objective, 
 platform-independent style performance comparisons.

 Cheers,
 Tom

Zabbix-1.8+ is also worth taking a look at and it can run off our
favorite database. It allows for some very flexible monitoring and
trending data collection.

Regards,
Ken



Re: [PERFORM] requested shared memory size overflows size_t

2010-06-18 Thread Greg Smith

Kenneth Marshall wrote:

Zabbix-1.8+ is also worth taking a look at and it can run off our
favorite database. It allows for some very flexible monitoring and
trending data collection.
  


Note that while Zabbix is a perfectly reasonable general solution, the 
number of things it monitors out of the box for PostgreSQL:  
http://www.zabbix.com/wiki/howto/monitor/db/postgresql is only a 
fraction of what Munin shows you.  The main reason I've been suggesting 
Munin lately is because it seems to get all the basics right for new 
users without them having to do anything but activate the PostgreSQL 
plug-in.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [PERFORM] requested shared memory size overflows size_t

2010-06-18 Thread Scott Carey

On Jun 16, 2010, at 1:53 PM, Alvaro Herrera wrote:

 Excerpts from Tom Lane's message of lun jun 14 23:57:11 -0400 2010:
 Scott Carey sc...@richrelevance.com writes:
 Great points.  There is one other option that is decent for the WAL:
 If splitting out a volume is not acceptable for the OS and WAL -- 
 absolutely split those two out into their own partitions.  It is most 
 important to make sure that WAL and data are not on the same filesystem, 
 especially if ext3 is involved.
 
 Uh, no, WAL really needs to be on its own *spindle*.  The whole point
 here is to have one disk head sitting on the WAL and not doing anything
 else except writing to that file.
 
 However, there's another point here -- probably what Scott is on about:
 on Linux (at least ext3), an fsync of any file does not limit to
 flushing that file's blocks -- it flushes *ALL* blocks on *ALL* files in
 the filesystem.  This is particularly problematic if you have pgsql_tmp
 in the same filesystem and do lots of disk-based sorts.
 
 So if you have it in the same spindle but on a different filesystem, at
 least you'll avoid that extra fsync work, even if you have to live with
 the extra seeking.

Yes - especially with a battery-backed caching RAID controller, the whole 
own-spindle thing doesn't really matter: the WAL is written fairly slowly and 
linearly, and any controller worth a damn will batch those writes up efficiently.

By FAR, the most important thing is to have WAL on its own file system.  If 
using ext3 in a way that is safe for your data (data=ordered or better), even 
with just one SATA disk, performance will improve a LOT if data and xlog are 
separated into different file systems.  Yes, an extra spindle is better.

However with a decent RAID card or caching storage, 8 spindles for it all in 
one raid 10, with a partition for xlog and one for data, is often better 
performing than a mirrored pair for OS/xlog and 6 for data so long as the file 
systems are separated.   With a dedicated xlog and caching reliable storage, 
you can even mount it direct to avoid polluting OS page cache.
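One way to lay that out (a hypothetical sketch - the device names and mount points here are assumptions, not from the thread): give xlog a small filesystem of its own on the same array, then point $PGDATA/pg_xlog at it.

```
# /etc/fstab - separate filesystems for data and xlog (hypothetical devices)
/dev/sda1  /pgdata  ext3  defaults,data=ordered  0 2
/dev/sda2  /pgxlog  ext2  noatime                0 2
```

With the server stopped, move the directory and leave a symlink behind, e.g. `mv $PGDATA/pg_xlog /pgxlog/pg_xlog && ln -s /pgxlog/pg_xlog $PGDATA/pg_xlog`, then restart.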



 
 -- 
 Álvaro Herrera alvhe...@commandprompt.com
 The PostgreSQL Company - Command Prompt, Inc.
 PostgreSQL Replication, Consulting, Custom Development, 24x7 support




Re: [PERFORM] requested shared memory size overflows size_t

2010-06-17 Thread Greg Smith

Tom Wilcox wrote:

Any suggestions for good monitoring software for linux?


By monitoring, do you mean for alerting purposes or for graphing 
purposes?  Nagios is the only reasonable choice for the former, while 
doing at best a mediocre job at the latter.  For the latter, I've found 
that Munin does a good job of monitoring Linux and PostgreSQL in its out 
of the box configuration, in terms of providing useful activity graphs.  
And you can get it to play nice with Nagios.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [PERFORM] requested shared memory size overflows size_t

2010-06-17 Thread Tom Wilcox

On 17/06/2010 22:41, Greg Smith wrote:

Tom Wilcox wrote:

Any suggestions for good monitoring software for linux?


By monitoring, do you mean for alerting purposes or for graphing 
purposes?  Nagios is the only reasonable choice for the former, while 
doing at best a mediocre job at the latter.  For the latter, I've found 
that Munin does a good job of monitoring Linux and PostgreSQL in its 
out of the box configuration, in terms of providing useful activity 
graphs.  And you can get it to play nice with Nagios.


Thanks Greg. I'll check Munin and Nagios out. It is very much for 
graphing purposes. I would like to be able to perform objective, 
platform-independent style performance comparisons.


Cheers,
Tom



Re: [PERFORM] requested shared memory size overflows size_t

2010-06-16 Thread Alvaro Herrera
Excerpts from Tom Lane's message of lun jun 14 23:57:11 -0400 2010:
 Scott Carey sc...@richrelevance.com writes:
  Great points.  There is one other option that is decent for the WAL:
  If splitting out a volume is not acceptable for the OS and WAL -- 
  absolutely split those two out into their own partitions.  It is most 
  important to make sure that WAL and data are not on the same filesystem, 
  especially if ext3 is involved.
 
 Uh, no, WAL really needs to be on its own *spindle*.  The whole point
 here is to have one disk head sitting on the WAL and not doing anything
 else except writing to that file.

However, there's another point here -- probably what Scott is on about:
on Linux (at least ext3), an fsync of any file does not limit to
flushing that file's blocks -- it flushes *ALL* blocks on *ALL* files in
the filesystem.  This is particularly problematic if you have pgsql_tmp
in the same filesystem and do lots of disk-based sorts.

So if you have it in the same spindle but on a different filesystem, at
least you'll avoid that extra fsync work, even if you have to live with
the extra seeking.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [PERFORM] requested shared memory size overflows size_t

2010-06-16 Thread Tom Wilcox
Thanks. I will try a more sensible value for wal_buffers. I was 
hoping to keep more in memory and therefore reduce the frequency of disk 
IOs.


Any suggestions for good monitoring software for linux?

On 15/06/2010 00:08, Robert Haas wrote:

On Mon, Jun 14, 2010 at 2:53 PM, Tom Wilcox hungry...@gmail.com wrote:
   

maintenance_work_mem=4GB
work_mem=4GB
shared_buffers=4GB
effective_cache_size=4GB
wal_buffers=1GB
 

It's pretty easy to drive your system into swap with such a large
value for work_mem - you'd better monitor that carefully.

The default value for wal_buffers is 64kB.  I can't imagine why you'd
need to increase that by four orders of magnitude.  I'm not sure
whether it will cause you a problem or not, but you're allocating
quite a lot of shared memory that way that you might not really need.

   





Re: [PERFORM] requested shared memory size overflows size_t

2010-06-14 Thread Tom Wilcox


Hi Bob,

Thanks a lot. Here's my best attempt to answer your questions:

The VM is setup with a virtual disk image dynamically expanding to fill 
an allocation of 300GB on a fast, local hard drive (avg read speed = 
778MB/s ).
WAL files can have their own disk, but how significantly would this 
affect our performance?
The filesystem of the host OS is NTFS (Windows Server 2008 OS 64), the 
guest filesystem is Ext2 (Ubuntu 64).
The workload is OLAP (lots of large, complex queries on large tables run 
in sequence).


In addition, I have reconfigured my server to use more memory. Here's a 
detailed blow by blow of how I reconfigured my system to get better 
performance (for anyone who might be interested)...


In order to increase the shared memory on Ubuntu I edited the System V 
IPC values using sysctl:


sysctl -w kernel.shmmax=16106127360
sysctl -w kernel.shmall=2097152
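As a sanity check on the units here (shmmax is in bytes, shmall in pages; a 4 kB page size is assumed):

```python
PAGE_SIZE = 4096  # bytes; verify with `getconf PAGE_SIZE`

shmmax_bytes = 16106127360   # kernel.shmmax, in bytes
shmall_pages = 2097152       # kernel.shmall, in pages

print(shmmax_bytes / 2**30)               # 15.0 GiB: largest single segment allowed
print(shmall_pages * PAGE_SIZE / 2**30)   # 8.0 GiB: total shared memory allowed
```

Note that shmall as set above caps total shared memory at 8 GiB, which is *below* shmmax, so a 15 GB shared_buffers request would still fail; shmall would need to be at least shmmax/PAGE_SIZE = 3932160 pages to cover it.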

I had some fun with permissions, as I somehow managed to change the 
owner of postgresql.conf to root where it needed to be postgres, 
resulting in failure to start the service (fixed with chown 
postgres:postgres ./data/postgresql.conf and chmod u=rwx ./data -R).


I changed the following params in my configuration file..

default_statistics_target=1
maintenance_work_mem=512MB
work_mem=512MB
shared_buffers=512MB
wal_buffers=128MB

With this config, the following command took  6,400,000ms:

EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;

With plan:
Seq Scan on match_data  (cost=0.00..1392900.78 rows=32237278 width=232) 
(actual time=0.379..464270.682 rows=2961 loops=1)

Total runtime: 6398238.890 ms

With these changes to the previous config, the same command took  
5,610,000ms:


maintenance_work_mem=4GB
work_mem=4GB
shared_buffers=4GB
effective_cache_size=4GB
wal_buffers=1GB

Resulting plan:

Seq Scan on match_data  (cost=0.00..2340147.72 rows=30888572 width=232) 
(actual time=0.094..452793.430 rows=2961 loops=1)

Total runtime: 5614140.786 ms

Then I performed these changes to the postgresql.conf file:

max_connections=3
effective_cache_size=15GB
maintenance_work_mem=5GB
shared_buffers=7000MB
work_mem=5GB

And ran this query (for a quick look - can't afford the time for the 
previous tests..):


EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE match_data_id 
< 10;


Result:

Index Scan using match_data_pkey1 on match_data  (cost=0.00..15662.17 
rows=4490 width=232) (actual time=27.055..1908.027 rows=9 loops=1)

  Index Cond: (match_data_id < 10)
Total runtime: 25909.372 ms

I then ran EnterpriseDB's Tuner on my postgres install (for a dedicated 
machine) and got the following settings and results:


EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE match_data_id 
< 10;


Index Scan using match_data_pkey1 on match_data  (cost=0.00..13734.54 
rows=4495 width=232) (actual time=0.348..2928.844 rows=9 loops=1)

  Index Cond: (match_data_id < 10)
Total runtime: 1066580.293 ms

For now, I will go with the config using 7000MB shared_buffers. Any 
suggestions on how I can further optimise this config for a single 
session, 64-bit install utilising ALL of 96GB RAM. I will spend the next 
week making the case for a native install of Linux, but first we need to 
be 100% sure that is the only way to get the most out of Postgres on 
this machine.


Thanks very much. I now feel I am at a position where I can really 
explore and find the optimal configuration for my system, but would 
still appreciate any suggestions.


Cheers,
Tom

On 11/06/2010 07:25, Bob Lunney wrote:

Tom,

First off, I wouldn't use a VM if I could help it, however, sometimes you have 
to make compromises.  With a 16 Gb machine running 64-bit Ubuntu and only 
PostgreSQL, I'd start by allocating 4 Gb to shared_buffers.  That should leave 
more than enough room for the OS and file system cache.  Then I'd begin testing 
by measuring response times of representative queries with significant amounts 
of data.

Also, what is the disk setup for the box?  Filesystem?  Can WAL files have 
their own disk?  Is the workload OLTP or OLAP, or a mixture of both?  There is 
more that goes into tuning a PG server for good performance than simply 
installing the software, setting a couple of GUCs and running it.

Bob

--- On Thu, 6/10/10, Tom Wilcox hungry...@gmail.com wrote:

  

From: Tom Wilcox hungry...@gmail.com
Subject: Re: [PERFORM] requested shared memory size overflows size_t
To: Bob Lunney bob_lun...@yahoo.com
Cc: Robert Haas robertmh...@gmail.com, pgsql-performance@postgresql.org
Date: Thursday, June 10, 2010, 10:45 AM
Thanks guys. I am currently
installing Pg64 onto a Ubuntu Server 64-bit installation
running as a VM in VirtualBox with 16GB of RAM accessible.
If what you say is true then what do you suggest I do to
configure my new setup to best use the available 16GB (96GB
and native install eventually if the test goes well) of RAM
on Linux.

I was considering starting by using Enterprise DBs tuner to
see

Re: [PERFORM] requested shared memory size overflows size_t

2010-06-14 Thread Robert Haas
On Mon, Jun 14, 2010 at 2:53 PM, Tom Wilcox hungry...@gmail.com wrote:
 maintenance_work_mem=4GB
 work_mem=4GB
 shared_buffers=4GB
 effective_cache_size=4GB
 wal_buffers=1GB

It's pretty easy to drive your system into swap with such a large
value for work_mem - you'd better monitor that carefully.

The default value for wal_buffers is 64kB.  I can't imagine why you'd
need to increase that by four orders of magnitude.  I'm not sure
whether it will cause you a problem or not, but you're allocating
quite a lot of shared memory that way that you might not really need.
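The gap is easy to see (an arithmetic sketch; the 64 kB default matches the 8.x-era PostgreSQL the post describes):

```python
import math

default_wal_buffers = 64 * 1024   # 64 kB, the era's default
configured = 1024**3              # 1 GB, as set above

ratio = configured // default_wal_buffers
print(ratio)                        # 16384
print(round(math.log10(ratio), 1))  # 4.2 - roughly four orders of magnitude
```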

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [PERFORM] requested shared memory size overflows size_t

2010-06-14 Thread Dave Crooke
 disk?  Is the workload OLTP or OLAP, or a mixture of both?  There
 is more that goes into tuning a PG server for good performance than simply
 installing the software, setting a couple of GUCs and running it.

 Bob

 --- On Thu, 6/10/10, Tom Wilcox hungry...@gmail.com wrote:



 From: Tom Wilcox hungry...@gmail.com
 Subject: Re: [PERFORM] requested shared memory size overflows size_t
 To: Bob Lunney bob_lun...@yahoo.com
 Cc: Robert Haas robertmh...@gmail.com,
 pgsql-performance@postgresql.org
 Date: Thursday, June 10, 2010, 10:45 AM
 Thanks guys. I am currently
 installing Pg64 onto a Ubuntu Server 64-bit installation
 running as a VM in VirtualBox with 16GB of RAM accessible.
 If what you say is true then what do you suggest I do to
 configure my new setup to best use the available 16GB (96GB
 and native install eventually if the test goes well) of RAM
 on Linux.

 I was considering starting by using Enterprise DBs tuner to
 see if that optimises things to a better quality..

 Tom

 On 10/06/2010 15:41, Bob Lunney wrote:

 True, plus there are the other issues of increased checkpoint times and
 I/O, bgwriter tuning, etc.  It may be better to let the OS cache the
 files and size shared_buffers to a smaller value.

 Bob Lunney

 --- On Wed, 6/9/10, Robert Haas robertmh...@gmail.com wrote:

 From: Robert Haas robertmh...@gmail.com
 Subject: Re: [PERFORM] requested shared memory size overflows size_t
 To: Bob Lunney bob_lun...@yahoo.com
 Cc: pgsql-performance@postgresql.org, Tom Wilcox hungry...@googlemail.com
 Date: Wednesday, June 9, 2010, 9:49 PM
 On Wed, Jun 2, 2010 at 9:26 PM, Bob Lunney bob_lun...@yahoo.com wrote:
 Your other option, of course, is a nice 64-bit linux variant, which
 won't have this problem at all.

 Although, even there, I think I've heard that after 10GB you don't get
 much benefit from raising it further.  Not sure if that's accurate or
 not...

 -- Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise Postgres Company





Re: [PERFORM] requested shared memory size overflows size_t

2010-06-14 Thread Tom Wilcox
 appreciate any suggestions.

Cheers,
Tom


On 11/06/2010 07:25, Bob Lunney wrote:

Tom,

First off, I wouldn't use a VM if I could help it, however,
sometimes you have to make compromises.  With a 16 Gb machine
running 64-bit Ubuntu and only PostgreSQL, I'd start by
allocating 4 Gb to shared_buffers.  That should leave more
than enough room for the OS and file system cache.  Then I'd
begin testing by measuring response times of representative
queries with significant amounts of data.

Also, what is the disk setup for the box?  Filesystem?  Can
WAL files have their own disk?  Is the workload OLTP or OLAP,
or a mixture of both?  There is more that goes into tuning a
PG server for good performance than simply installing the
software, setting a couple of GUCs and running it.

Bob

--- On Thu, 6/10/10, Tom Wilcox hungry...@gmail.com wrote:

From: Tom Wilcox hungry...@gmail.com
Subject: Re: [PERFORM] requested shared memory size overflows size_t
To: Bob Lunney bob_lun...@yahoo.com
Cc: Robert Haas robertmh...@gmail.com, pgsql-performance@postgresql.org
Date: Thursday, June 10, 2010, 10:45 AM
Thanks guys. I am currently
installing Pg64 onto a Ubuntu Server 64-bit installation
running as a VM in VirtualBox with 16GB of RAM accessible.
If what you say is true then what do you suggest I do to
configure my new setup to best use the available 16GB (96GB
and native install eventually if the test goes well) of RAM
on Linux.

I was considering starting by using Enterprise DBs tuner to
see if that optimises things to a better quality..

Tom

On 10/06/2010 15:41, Bob Lunney wrote:

True, plus there are the other issues of increased checkpoint times and
I/O, bgwriter tuning, etc.  It may be better to let the OS cache the
files and size shared_buffers to a smaller value.

Bob Lunney

--- On Wed, 6/9/10, Robert Haas robertmh...@gmail.com wrote:

From: Robert Haas robertmh...@gmail.com
Subject: Re: [PERFORM] requested shared memory size overflows size_t
To: Bob Lunney bob_lun...@yahoo.com
Cc: pgsql-performance@postgresql.org, Tom Wilcox hungry...@googlemail.com
Date: Wednesday, June 9, 2010, 9:49 PM
On Wed, Jun 2, 2010 at 9:26 PM, Bob Lunney bob_lun...@yahoo.com wrote:
Your other option, of course, is a nice 64-bit linux variant, which
won't have this problem at all.

Although, even there, I think I've heard that after 10GB you don't get
much benefit from raising it further.  Not sure if that's accurate or
not...

-- Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company









Re: [PERFORM] requested shared memory size overflows size_t

2010-06-14 Thread Tom Wilcox
 with
   chown postgres:postgres ./data/postgresql.conf and chmod u=rwx
   ./data -R).

   I changed the following params in my configuration file..

   default_statistics_target=1
   maintenance_work_mem=512MB
   work_mem=512MB
   shared_buffers=512MB
   wal_buffers=128MB

   With this config, the following command took  6,400,000ms:

   EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;

   With plan:
   Seq Scan on match_data  (cost=0.00..1392900.78 rows=32237278
   width=232) (actual time=0.379..464270.682 rows=2961
loops=1)
   Total runtime: 6398238.890 ms

   With these changes to the previous config, the same command
took
5,610,000ms:

   maintenance_work_mem=4GB
   work_mem=4GB
   shared_buffers=4GB
   effective_cache_size=4GB
   wal_buffers=1GB

   Resulting plan:

   Seq Scan on match_data  (cost=0.00..2340147.72 rows=30888572
   width=232) (actual time=0.094..452793.430 rows=2961
loops=1)
   Total runtime: 5614140.786 ms

   Then I performed these changes to the postgresql.conf file:

   max_connections=3
   effective_cache_size=15GB
   maintenance_work_mem=5GB
   shared_buffers=7000MB
   work_mem=5GB

   And ran this query (for a quick look - can't afford the
time for
   the previous tests..):

   EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE
   match_data_id < 10;

   Result:

   Index Scan using match_data_pkey1 on match_data
(cost=0.00..15662.17 rows=4490 width=232) (actual
   time=27.055..1908.027 rows=9 loops=1)
  Index Cond: (match_data_id < 10)
   Total runtime: 25909.372 ms

   I then ran EntrepriseDB's Tuner on my postgres install (for a
   dedicated machine) and got the following settings and results:

   EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE
   match_data_id < 10;

   Index Scan using match_data_pkey1 on match_data
(cost=0.00..13734.54 rows=4495 width=232) (actual
   time=0.348..2928.844 rows=9 loops=1)
  Index Cond: (match_data_id < 10)
   Total runtime: 1066580.293 ms

   For now, I will go with the config using 7000MB shared_buffers.
   Any suggestions on how I can further optimise this config for a
   single session, 64-bit install utilising ALL of 96GB RAM. I
will
   spend the next week making the case for a native install of
Linux,
   but first we need to be 100% sure that is the only way to
get the
   most out of Postgres on this machine.

   Thanks very much. I now feel I am at a position where I can
really
   explore and find the optimal configuration for my system, but
   would still appreciate any suggestions.

   Cheers,
   Tom


   On 11/06/2010 07:25, Bob Lunney wrote:

   Tom,

   First off, I wouldn't use a VM if I could help it, however,
   sometimes you have to make compromises.  With a 16 Gb
machine
   running 64-bit Ubuntu and only PostgreSQL, I'd start by
   allocating 4 Gb to shared_buffers.  That should leave more
   than enough room for the OS and file system cache.
 Then I'd
   begin testing by measuring response times of representative
   queries with significant amounts of data.

   Also, what is the disk setup for the box?  Filesystem?  Can
   WAL files have their own disk?  Is the workload OLTP or
OLAP,
   or a mixture of both?  There is more that goes into
tuning a
   PG server for good performance than simply installing the
   software, setting a couple of GUCs and running it.

   Bob

   --- On Thu, 6/10/10, Tom Wilcox hungry...@gmail.com wrote:

   From: Tom Wilcox hungry...@gmail.com
   Subject: Re: [PERFORM] requested shared memory size overflows size_t
   To: Bob Lunney bob_lun...@yahoo.com
   Cc: Robert Haas robertmh...@gmail.com, pgsql
Re: [PERFORM] requested shared memory size overflows size_t

2010-06-14 Thread Greg Smith

Tom Wilcox wrote:

default_statistics_target=1
wal_buffers=1GB
max_connections=3
effective_cache_size=15GB
maintenance_work_mem=5GB
shared_buffers=7000MB
work_mem=5GB


That value for default_statistics_target means that every single query 
you ever run will take a seriously long time to generate a plan for.  
Even on an OLAP system, I would consider 10,000 an appropriate setting 
for a column or two in a particularly troublesome table.  I wouldn't 
consider a value of even 1,000 in the postgresql.conf to be a good 
idea.  You should consider making the system default much lower, and 
increase it only on columns that need it, not for every column on every 
table.


There is no reason to set wal_buffers larger than 16MB, the size of a 
full WAL segment.  Have you read 
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server yet?  
checkpoint_segments is the main parameter you haven't touched yet you 
should consider increasing.  Even if you have a low write load, when 
VACUUM runs it will be very inefficient running against a large set of 
tables without the checkpoint frequency being decreased some.  Something 
in the 16-32 range would be plenty for an OLAP setup.
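Pulling those suggestions together, a hedged postgresql.conf sketch for this kind of OLAP box (illustrative starting points, not values from the thread):

```
checkpoint_segments = 32          # default is 3; fewer, larger checkpoints
wal_buffers = 16MB                # one full WAL segment; larger buys nothing
default_statistics_target = 100   # keep the global default low
```

Columns that genuinely need finer statistics can then be raised individually, e.g. `ALTER TABLE t ALTER COLUMN c SET STATISTICS 1000;`.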


At 3 connections, a work_mem of 5GB is possibly reasonable.  I would 
normally recommend that you make the default much smaller than that 
though, and instead just increase to a large value for queries that 
benefit from it.  If someone later increases max_connections to 
something higher, your server could run completely out of memory if 
work_mem isn't cut way back as part of that change.
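The per-query alternative Greg describes looks like this (a sketch; the query itself is illustrative):

```sql
-- postgresql.conf keeps a modest default, e.g. work_mem = 64MB
SET work_mem = '5GB';   -- raise only for this session's big sort/hash
EXPLAIN ANALYZE SELECT org, count(*) FROM nlpg.match_data GROUP BY org;
RESET work_mem;
```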


You could consider setting effective_cache_size to something even larger 
than that,


EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE 
match_data_id < 10;


By the way--repeatedly running this form of query to test for 
improvements in speed is not going to give you particularly good 
results.  Each run will execute a bunch of UPDATE statements that leave 
behind dead rows.  So the next run done for comparison sake will either 
have to cope with that additional overhead, or it will end up triggering 
autovacuum and suffer from that.  If you're going to use an UPDATE 
statement as your benchmark, at a minimum run a manual VACUUM ANALYZE in 
between each test run, to level out the consistency of results a bit.  
Ideally you'd restore the whole database to an initial state before each 
test.
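
A sketch of the leveling step Greg suggests, run between benchmark iterations (the table name comes from the query above):

```sql
-- Between UPDATE benchmark runs: reclaim the dead rows the previous run
-- left behind and refresh planner statistics, so each run starts from a
-- comparable state.
VACUUM ANALYZE nlpg.match_data;
-- ... then re-run the EXPLAIN ANALYZE UPDATE under test ...
```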


I will spend the next week making the case for a native install of 
Linux, but first we need to be 100% sure that is the only way to get 
the most out of Postgres on this machine.


I really cannot imagine taking a system as powerful as you're using here 
and crippling it by running through a VM.  You should be running Ubuntu 
directly on the hardware, ext3 filesystem without LVM, split off RAID-1 
drive pairs dedicated to OS and WAL, then use the rest of them for the 
database.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] requested shared memory size overflows size_t

2010-06-14 Thread Scott Carey

On Jun 14, 2010, at 11:53 AM, Tom Wilcox wrote:

 
 
 max_connections=3
 effective_cache_size=15GB
 maintenance_work_mem=5GB
 shared_buffers=7000MB
 work_mem=5GB
 

maintenance_work_mem doesn't need to be so high, it certainly has no effect on 
your queries below.  It would affect vacuum, reindex, etc.

With fast disks like this (assuming your 700MB/sec above was not a typo), make 
sure you tune autovacuum to be much more aggressive than the default 
(increase the allowable cost per sleep by at least 10x).
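
One hedged reading of "increase the allowable cost per sleep by at least 10x" in postgresql.conf terms (the exact values below are illustrative, not from this thread):

```
autovacuum_vacuum_cost_limit = 2000   # ~10x the usual 200-point cost budget
autovacuum_vacuum_cost_delay = 10ms   # optionally shorten the sleep as well
```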

A big work_mem like above is OK if you know that no more than a couple sessions 
will be active at once.  Worst case, a single connection ... probably ... won't 
use more than 2x that amount.  


 For now, I will go with the config using 7000MB shared_buffers. Any 
 suggestions on how I can further optimise this config for a single 
 session, 64-bit install utilising ALL of 96GB RAM. I will spend the next 
 week making the case for a native install of Linux, but first we need to 
 be 100% sure that is the only way to get the most out of Postgres on 
 this machine.
 

Getting the most from the RAM does *_NOT_*  mean making Postgres use all the 
RAM.  Postgres relies on the OS file cache heavily.  If there is a lot of free 
RAM for the OS to use to cache files, it will help the performance.  Both 
Windows and Linux aggressively cache file pages and do a good job at it.






Re: [PERFORM] requested shared memory size overflows size_t

2010-06-14 Thread Scott Carey

On Jun 14, 2010, at 7:06 PM, Greg Smith wrote:

 I really cannot imagine taking a system as powerful as you're using here 
 and crippling it by running through a VM.  You should be running Ubuntu 
 directly on the hardware, ext3 filesystem without LVM, split off RAID-1 
 drive pairs dedicated to OS and WAL, then use the rest of them for the 
 database.
 

Great points.  There is one other option that is decent for the WAL:
If splitting out a volume is not acceptable for the OS and WAL -- absolutely 
split those two out into their own partitions.  It is most important to make 
sure that WAL and data are not on the same filesystem, especially if ext3 is 
involved.


 -- 
 Greg Smith  2ndQuadrant US  Baltimore, MD
 PostgreSQL Training, Services and Support
 g...@2ndquadrant.com   www.2ndQuadrant.us
 
 




Re: [PERFORM] requested shared memory size overflows size_t

2010-06-14 Thread Tom Lane
Scott Carey sc...@richrelevance.com writes:
 Great points.  There is one other option that is decent for the WAL:
 If splitting out a volume is not acceptable for the OS and WAL -- absolutely 
 split those two out into their own partitions.  It is most important to make 
 sure that WAL and data are not on the same filesystem, especially if ext3 is 
 involved.

Uh, no, WAL really needs to be on its own *spindle*.  The whole point
here is to have one disk head sitting on the WAL and not doing anything
else except writing to that file.  Pushing WAL to a different partition
but still on the same physical disk is likely to be a net pessimization,
because it'll increase the average seek distance whenever the head does
have to move between WAL and everything-else-in-the-database.

regards, tom lane



Re: [PERFORM] requested shared memory size overflows size_t

2010-06-11 Thread Bob Lunney
Tom,

First off, I wouldn't use a VM if I could help it; however, sometimes you have 
to make compromises.  With a 16 GB machine running 64-bit Ubuntu and only 
PostgreSQL, I'd start by allocating 4 GB to shared_buffers.  That should leave 
more than enough room for the OS and file system cache.  Then I'd begin testing 
by measuring response times of representative queries with significant amounts 
of data.

Also, what is the disk setup for the box?  Filesystem?  Can WAL files have 
their own disk?  Is the workload OLTP or OLAP, or a mixture of both?  There is 
more that goes into tuning a PG server for good performance than simply 
installing the software, setting a couple of GUCs and running it.

Bob

--- On Thu, 6/10/10, Tom Wilcox hungry...@gmail.com wrote:

 From: Tom Wilcox hungry...@gmail.com
 Subject: Re: [PERFORM] requested shared memory size overflows size_t
 To: Bob Lunney bob_lun...@yahoo.com
 Cc: Robert Haas robertmh...@gmail.com, pgsql-performance@postgresql.org
 Date: Thursday, June 10, 2010, 10:45 AM
 Thanks guys. I am currently
 installing Pg64 onto a Ubuntu Server 64-bit installation
 running as a VM in VirtualBox with 16GB of RAM accessible.
 If what you say is true, then what do you suggest I do to
 configure my new setup to best use the available 16GB (96GB
 and a native install eventually, if the test goes well) of RAM
 on Linux?
 
 I was considering starting by using EnterpriseDB's tuner to
 see if that optimises things better..
 
 Tom
 
 On 10/06/2010 15:41, Bob Lunney wrote:
  True, plus there are the other issues of increased checkpoint times and I/O,
  bgwriter tuning, etc.  It may be better to let the OS cache the files and
  size shared_buffers to a smaller value.
 
  Bob Lunney
 
  --- On Wed, 6/9/10, Robert Haas robertmh...@gmail.com wrote:
 
   From: Robert Haas robertmh...@gmail.com
   Subject: Re: [PERFORM] requested shared memory size overflows size_t
   To: Bob Lunney bob_lun...@yahoo.com
   Cc: pgsql-performance@postgresql.org, Tom Wilcox hungry...@googlemail.com
   Date: Wednesday, June 9, 2010, 9:49 PM
   On Wed, Jun 2, 2010 at 9:26 PM, Bob Lunney bob_lun...@yahoo.com wrote:
    Your other option, of course, is a nice 64-bit linux variant, which
    won't have this problem at all.
  
   Although, even there, I think I've heard that after 10GB you don't get
   much benefit from raising it further.  Not sure if that's accurate or
   not...
  
   -- Robert Haas
   EnterpriseDB: http://www.enterprisedb.com
   The Enterprise Postgres Company
 






Re: [PERFORM] requested shared memory size overflows size_t

2010-06-10 Thread Bob Lunney
True, plus there are the other issues of increased checkpoint times and I/O, 
bgwriter tuning, etc.  It may be better to let the OS cache the files and size 
shared_buffers to a smaller value.  

Bob Lunney

--- On Wed, 6/9/10, Robert Haas robertmh...@gmail.com wrote:

 From: Robert Haas robertmh...@gmail.com
 Subject: Re: [PERFORM] requested shared memory size overflows size_t
 To: Bob Lunney bob_lun...@yahoo.com
 Cc: pgsql-performance@postgresql.org, Tom Wilcox hungry...@googlemail.com
 Date: Wednesday, June 9, 2010, 9:49 PM
 On Wed, Jun 2, 2010 at 9:26 PM, Bob
 Lunney bob_lun...@yahoo.com
 wrote:
  Your other option, of course, is a nice 64-bit linux
 variant, which won't have this problem at all.
 
 Although, even there, I think I've heard that after 10GB
 you don't get
 much benefit from raising it further.  Not sure if
 that's accurate or
 not...
 
 -- 
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise Postgres Company
 






Re: [PERFORM] requested shared memory size overflows size_t

2010-06-09 Thread Robert Haas
On Wed, Jun 2, 2010 at 9:26 PM, Bob Lunney bob_lun...@yahoo.com wrote:
 Your other option, of course, is a nice 64-bit linux variant, which won't 
 have this problem at all.

Although, even there, I think I've heard that after 10GB you don't get
much benefit from raising it further.  Not sure if that's accurate or
not...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [PERFORM] requested shared memory size overflows size_t

2010-06-02 Thread Tom Wilcox

Hi,

Sorry to revive an old thread but I have had this error whilst trying to 
configure my 32-bit build of postgres to run on a 64-bit Windows Server 
2008 machine with 96GB of RAM (that I would very much like to use with 
postgres).


I am getting:

2010-06-02 11:34:09 BST FATAL:  requested shared memory size overflows size_t
2010-06-02 11:41:01 BST FATAL:  could not create shared memory segment: 8
2010-06-02 11:41:01 BST DETAIL:  Failed system call was MapViewOfFileEx.

which makes a lot of sense since I was setting shared_buffers (and 
effective_cache_size) to values like 60GB..


Is it possible to get postgres to make use of the available 96GB RAM on 
a Windows 32-bit build? Otherwise, how can I get it to work?


I'm guessing my options are:

- Use the 64-bit Linux build (Not a viable option for me - unless from a 
VM - in which case recommendations?)

or
- Configure Windows and postgres properly (Preferred option - but I 
don't know what needs to be done here or if I'm testing properly using 
Resource Monitor)


Thanks,
Tom




Re: [PERFORM] requested shared memory size overflows size_t

2010-06-02 Thread Kevin Grittner
Tom Wilcox hungry...@googlemail.com wrote:
 
 Is it possible to get postgres to make use of the available 96GB
 RAM on a Windows 32-bit build?
 
I would try setting shared_buffers to somewhere between 200MB and 1GB
and set effective_cache_size = 90GB or so.  The default behavior of
Windows was to use otherwise idle RAM for disk caching, last I
checked, anyway.
 
-Kevin



Re: [PERFORM] requested shared memory size overflows size_t

2010-06-02 Thread Stephen Frost
* Kevin Grittner (kevin.gritt...@wicourts.gov) wrote:
 Tom Wilcox hungry...@googlemail.com wrote:
  Is it possible to get postgres to make use of the available 96GB
  RAM on a Windows 32-bit build?
  
 I would try setting shared_memory to somewhere between 200MB and 1GB
 and set effective_cache_size = 90GB or so.  The default behavior of
 Windows was to use otherwise idle RAM for disk caching, last I
 checked, anyway.

Sure, but as explained on -general already, all that RAM will only ever
get used for disk cacheing.  It won't be able to be used for sorts or
hash aggs or any other PG operations (PG would use at most
4GB-shared_buffers, or so).

Thanks,

Stephen




Re: [PERFORM] requested shared memory size overflows size_t

2010-06-02 Thread Bob Lunney
Tom,

A 32-bit build can only reference at most 4 GB - certainly not 60 GB.  Also, 
Windows doesn't do well with large shared buffer sizes anyway.  Try setting 
shared_buffers to 2 GB and let the OS file system cache handle the rest.
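
The 4 GB ceiling Bob describes can be sketched numerically (a minimal illustration, not PostgreSQL code; the constant below is just the 32-bit size_t maximum):

```python
# Why a multi-GB shared_buffers request fails on a 32-bit build:
# size_t is 32 bits wide there, so no single shared memory request
# can exceed 2**32 - 1 bytes (~4 GiB).
SIZE_T_MAX_32BIT = 2**32 - 1        # 4294967295 bytes

requested = 60 * 1024**3            # a 60 GiB shared_buffers request, in bytes

# The request overflows size_t, producing the FATAL error quoted above.
print(requested > SIZE_T_MAX_32BIT)  # True
```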

Your other option, of course, is a nice 64-bit linux variant, which won't have 
this problem at all.

Good luck!

Bob Lunney

--- On Wed, 6/2/10, Tom Wilcox hungry...@googlemail.com wrote:

 From: Tom Wilcox hungry...@googlemail.com
 Subject: Re: [PERFORM] requested shared memory size overflows size_t
 To: pgsql-performance@postgresql.org
 Date: Wednesday, June 2, 2010, 6:58 AM
 Hi,
 
 Sorry to revive an old thread but I have had this error
 whilst trying to configure my 32-bit build of postgres to
 run on a 64-bit Windows Server 2008 machine with 96GB of RAM
 (that I would very much like to use with postgres).
 
 I am getting:
 
 2010-06-02 11:34:09 BST FATAL:  requested shared
 memory size overflows size_t
 2010-06-02 11:41:01 BST FATAL:  could not create shared
 memory segment: 8
 2010-06-02 11:41:01 BST DETAIL:  Failed system call was
 MapViewOfFileEx.
 
 which makes a lot of sense since I was setting
 shared_buffers (and effective_cache_size) to values like
 60GB..
 
 Is it possible to get postgres to make use of the available
 96GB RAM on a Windows 32-bit build? Otherwise, how can I get
 it to work?
 
 I'm guessing my options are:
 
 - Use the 64-bit Linux build (Not a viable option for me -
 unless from a VM - in which case recommendations?)
 or
 - Configure Windows and postgres properly (Preferred option
 - but I don't know what needs to be done here or if I'm
 testing properly using Resource Monitor)
 
 Thanks,
 Tom
 
 
 






Re: [PERFORM] requested shared memory size overflows size_t

2010-06-02 Thread Joshua Tolley
On Wed, Jun 02, 2010 at 11:58:47AM +0100, Tom Wilcox wrote:
 Hi,

 Sorry to revive an old thread but I have had this error whilst trying to  
 configure my 32-bit build of postgres to run on a 64-bit Windows Server  
 2008 machine with 96GB of RAM (that I would very much like to use with  
 postgres).

 I am getting:

 2010-06-02 11:34:09 BST FATAL:  requested shared memory size overflows size_t
 2010-06-02 11:41:01 BST FATAL:  could not create shared memory segment: 8
 2010-06-02 11:41:01 BST DETAIL:  Failed system call was MapViewOfFileEx.

 which makes a lot of sense since I was setting shared_buffers (and  
 effective_cache_size) to values like 60GB..

I realize other answers have already been given on this thread; I figured I'd
just refer to the manual, which says, The useful range for shared_buffers on
Windows systems is generally from 64MB to 512MB. [1]

[1] http://www.postgresql.org/docs/8.4/static/runtime-config-resource.html

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




[PERFORM] requested shared memory size overflows size_t

2008-07-15 Thread Uwe Bartels
Hi,

When trying to set shared_buffers greater than 3.5 GB on a 32 GB x86
machine with Solaris 10, I run into this error:
FATAL: requested shared memory size overflows size_t

The Solaris x86 install is 64-bit, and the compiled postgres is 64-bit as well.
PostgreSQL 8.2.5.
max-shm is allowed up to 8GB.

projmod -s -K project.max-shm-memory=(priv,8G,deny) user.postgres


Does anybody have an idea?

Thanks.
Uwe


Re: [PERFORM] requested shared memory size overflows size_t

2008-07-15 Thread Tom Lane
Uwe Bartels [EMAIL PROTECTED] writes:
 When trying to to set shared_buffers greater then 3,5 GB on 32 GB x86
 machine with solaris 10 I running in this error:
 FATAL: requested shared memory size overflows size_t

 The solaris x86 ist 64-bit and the compiled postgres is as well 64-bit.

Either it's not really a 64-bit build, or you made an error in your
math.  What did you try to set shared_buffers to, exactly?  Did you
increase any other parameters at the same time?

regards, tom lane



Re: [PERFORM] requested shared memory size overflows size_t

2008-07-15 Thread Stephen Conley
Hey there;

As Tom notes, maybe you're not using the right postgres binary.  Solaris 10
comes with a postgres, but on SPARC it's compiled 32-bit (I can't speak to
x86 Solaris though).

Assuming that's not the problem, you can be 100% sure if your Postgres
binary is actually 64 bit by using the file command on the 'postgres'
executable.  A sample from 64 bit SPARC looks like this:

postgres:   ELF 64-bit MSB executable SPARCV9 Version 1, UltraSPARC3
Extensions Required, dynamically linked, not stripped

But x86 should show something similar.  I have run Postgres up to about 8
gigs of RAM on Solaris without trouble.  Anyway, sorry if this is obvious /
not helpful but good luck :)

Steve

On Tue, Jul 15, 2008 at 10:25 AM, Tom Lane [EMAIL PROTECTED] wrote:

 Uwe Bartels [EMAIL PROTECTED] writes:
  When trying to to set shared_buffers greater then 3,5 GB on 32 GB x86
  machine with solaris 10 I running in this error:
  FATAL: requested shared memory size overflows size_t

  The solaris x86 ist 64-bit and the compiled postgres is as well 64-bit.

 Either it's not really a 64-bit build, or you made an error in your
 math.  What did you try to set shared_buffers to, exactly?  Did you
 increase any other parameters at the same time?

regards, tom lane
