Re: [GENERAL] max_allowed_packet equivalent in Postgres?

2009-08-13 Thread Bruce Momjian
Alan McKay wrote:
 Hey folks,
 
 I'm installing OTRS/ITSM (and yes, sending the same question to their
 list) and it gives me this warning.  I cannot find an equivalent
 config parameter in Postgres.
 
 Make sure your database accepts packages over 5 MB in size. A MySQL
 database for example accepts packages up to 1 MB by default. In this
 case, the value for max_allowed_packet must be increased. The
 recommended maximum size accepted is 20 MB.

Postgres has no known limit.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_allowed_packet equivalent in Postgres?

2009-08-13 Thread Greg Smith

On Thu, 13 Aug 2009, Alan McKay wrote:


Make sure your database accepts packages over 5 MB in size. A MySQL
database for example accepts packages up to 1 MB by default. In this
case, the value for max_allowed_packet must be increased.


Read "packets" for "packages" for this to make sense; basically they're saying that the 
program sends wide rows back and forth to the client, and as described in 
http://dev.mysql.com/doc/refman/5.1/en/packet-too-large.html there's a low 
default for that limit in that database.
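(For anyone who does need the MySQL-side change OTRS is describing, it's a config fragment along these lines -- a sketch only; the 20M value is just OTRS's recommendation:)

```
# my.cnf -- raise MySQL's client/server packet limit to the
# 20 MB that OTRS recommends (the default is 1 MB)
[mysqld]
max_allowed_packet = 20M
```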


It's possible to run into this general class of issue with PostgreSQL; see 
http://archives.postgresql.org/pgsql-bugs/2006-07/msg00051.php for one 
example.


But that is caused by a problem in the client-side application, not the 
server.  There is no server-side buffer size here as you'll find in MySQL. 
If your client app is coded correctly to handle large packets of data, it 
should work up to the size limits documented at 
http://www.postgresql.org/about/ , so you probably have nothing to worry 
about here.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] max_allowed_packet equivalent in Postgres?

2009-08-13 Thread Sam Mason
On Thu, Aug 13, 2009 at 03:31:39PM -0400, Greg Smith wrote:
 If your client app is coded correctly to handle large packets of data, it 
 should work up to the size limits documented at 
 http://www.postgresql.org/about/ , so you probably have nothing to worry 
 about here.

Is it worth having a note about having enough memory floating around
for those limits to actually be hit in practice?  There would be no
way of creating a row 1.6TB in size in one go; it would take ~800 UPDATE
statements to get it up to that size, as far as I can see.
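(To spell out that arithmetic -- a back-of-the-envelope sketch using the 1 GB per-field and 1600 columns-per-table limits from the page linked above:)

```python
# Back-of-the-envelope: maximum row size, and how many UPDATE
# statements it would take to build it two ~1 GB values at a time.

GB = 1024 ** 3
max_field_bytes = 1 * GB      # documented per-field limit
max_columns = 1600            # documented per-table column limit

max_row_bytes = max_field_bytes * max_columns
print(max_row_bytes / 1024 ** 4)   # 1.5625 -- i.e. ~1.6 TB

updates_needed = max_columns // 2  # two ~1 GB literals per statement
print(updates_needed)              # 800
```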

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] max_allowed_packet equivalent in Postgres?

2009-08-13 Thread Greg Stark
On Thu, Aug 13, 2009 at 11:20 PM, Sam Mason s...@samason.me.uk wrote:
 On Thu, Aug 13, 2009 at 03:31:39PM -0400, Greg Smith wrote:
 If your client app is coded correctly to handle large packets of data, it
 should work up to the size limits documented at
 http://www.postgresql.org/about/ , so you probably have nothing to worry
 about here.

 Is it worth having a note about having enough memory floating around
 for those limits to actually be hit in practice?  There would be no
 way of creating a row 1.6TB in size in one go; it would take ~800 UPDATE
 statements to get it up to that size, as far as I can see.

That wouldn't work actually. If you did something like "UPDATE tab SET
a = a || a", the first thing Postgres does when it executes the
concatenation operator is retrieve the original a and decompress it
(twice in this case). Then it constructs the result entirely in memory
before toasting. At the very least, one copy of a and one copy of the
compressed a have to fit in memory.

To work with objects which don't fit comfortably in memory you really
have to use the lo interface. Toast lets you get away with it only for
special cases like substr() or length() but not in general.
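(For completeness, the large-object route looks roughly like this -- a sketch only; lo_import/lo_export run server-side and need appropriate permissions, and the OID and file paths here are made up:)

```sql
-- Load a big value as a large object instead of one huge field;
-- it can then be read and written in chunks via lo_open/loread/lowrite.
BEGIN;
SELECT lo_import('/tmp/big_value.bin');   -- returns the new object's OID
COMMIT;

-- Later, stream it back out to a file by that OID (16459 is made up):
SELECT lo_export(16459, '/tmp/big_value_copy.bin');
```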

-- 
greg
http://mit.edu/~gsstark/resume.pdf



Re: [GENERAL] max_allowed_packet equivalent in Postgres?

2009-08-13 Thread Sam Mason
On Fri, Aug 14, 2009 at 12:03:37AM +0100, Greg Stark wrote:
 On Thu, Aug 13, 2009 at 11:20 PM, Sam Mason s...@samason.me.uk wrote:
  Is it worth having a note about having enough memory floating around
  for those limits to actually be hit in practice?  There would be no
  way of creating a row 1.6TB in size in one go; it would take ~800 UPDATE
  statements to get it up to that size, as far as I can see.
 
 That wouldn't work actually. If you did something like UPDATE tab set
 a = a || a the first thing Postgres does when it executes the
 concatenation operator is retrieve the original a and decompress it
 (twice in this case). Then it constructs the result entirely in memory
 before toasting. At the very least one copy of a and one copy of the
 compressed a have to fit in memory.

Yup, that would indeed break---I was thinking of a single update per
column.  The ~800 comes from the fact that I think you may just about be
able to squeeze two 1GB literals into memory at a time and hence update
two of your 1600 columns with each update.

 To work with objects which don't fit comfortably in memory you really
 have to use the lo interface. Toast lets you get away with it only for
 special cases like substr() or length() but not in general.

Yup, the lo interface is of course much better for this sort of thing.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] max_allowed_packet equivalent in Postgres?

2009-08-13 Thread Greg Stark
On Fri, Aug 14, 2009 at 12:33 AM, Sam Mason s...@samason.me.uk wrote:
 On Fri, Aug 14, 2009 at 12:03:37AM +0100, Greg Stark wrote:
 On Thu, Aug 13, 2009 at 11:20 PM, Sam Mason s...@samason.me.uk wrote:

  There would be no way of creating a row 1.6TB in size in one go

 I was thinking of a single update per column.

Oh, my bad, you did indeed say row and I assumed column. Yes, you
could create a single row of 1.6TB by doing repeated updates setting
one column at a time to a 1GB datum. (You would have to be using 32k
blocks, though.)
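(The 32k-block requirement can be sanity-checked with some arithmetic; the 18-byte on-disk TOAST pointer size is my assumption from the storage docs, and the ~23-byte tuple header is approximate:)

```python
# Each out-of-line (TOASTed) column leaves a small pointer in the
# main heap tuple, and that tuple must still fit on a single page.

TOAST_POINTER_BYTES = 18   # assumed on-disk TOAST pointer size
TUPLE_HEADER_BYTES = 23    # approximate heap tuple header
columns = 1600

tuple_bytes = columns * TOAST_POINTER_BYTES + TUPLE_HEADER_BYTES
print(tuple_bytes)           # 28823

print(tuple_bytes <= 8192)   # False: won't fit on the default 8k page
print(tuple_bytes <= 32768)  # True: fits if built --with-blocksize=32
```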


-- 
greg
http://mit.edu/~gsstark/resume.pdf
