On 09/02/2010 05:49, John R Pierce wrote:
David Kerr wrote:
maintenance_work_mem = 1GB
So evidently, when it tries to actually allocate 1GB, it can't do it.
Ergo, that setting is too high for your machine.
...
seems like i've got 2GB free.
is this a 64bit postgres build?
if not,
Guillaume Lelarge wrote:
On 09/02/2010 05:49, John R Pierce wrote:
David Kerr wrote:
maintenance_work_mem = 1GB
So evidently, when it tries to actually allocate 1GB, it can't do it.
Ergo, that setting is too high for your machine.
...
seems like i've got 2GB free.
is this a 64bit
On 09/02/2010 09:35, David Kerr wrote:
Guillaume Lelarge wrote:
On 09/02/2010 05:49, John R Pierce wrote:
David Kerr wrote:
maintenance_work_mem = 1GB
So evidently, when it tries to actually allocate 1GB, it can't do it.
Ergo, that setting is too high for your machine.
...
seems like
Guillaume Lelarge wrote:
On 09/02/2010 09:35, David Kerr wrote:
Guillaume Lelarge wrote:
On 09/02/2010 05:49, John R Pierce wrote:
David Kerr wrote:
maintenance_work_mem = 1GB
So evidently, when it tries to actually allocate 1GB, it can't do it.
Ergo, that setting is too high for your
Justin Graf wrote:
On 2/8/2010 7:09 PM, Fredric Fredricson wrote:
Hi!
New to the list with a question that I cannot find the answer to in
the manual or on the internet but I suspect is trivial. If somebody
could point me in the correct direction I would be grateful.
This is what I do
On Tue, Feb 9, 2010 at 1:42 AM, Fredric Fredricson
fredric.fredric...@bonetmail.com wrote:
Justin Graf wrote:
On 2/8/2010 7:09 PM, Fredric Fredricson wrote:
Hi!
New to the list with a question that I cannot find the answer to in the
manual or on the internet but I suspect is trivial. If
On Tue, Feb 9, 2010 at 1:42 AM, Fredric Fredricson
fredric.fredric...@bonetmail.com wrote:
I use locale en_US.UTF-8 but why this should affect how leading characters
in strings are ignored is beyond me.
P.s. this page may shed some light on the subject:
http://en.wikipedia.org/wiki/Collation
I have created an index
create index leadaddress_phone_idx on
leadaddress(regexp_replace((phone)::text, '[^0-9]*'::text, ''::text,
'g'::text));
But the index is not being used.
explain select * from leadaddress where
regexp_replace(phone,'[^0-9]*','','g') like '%2159438606';
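A pattern anchored only at the end, like '%2159438606', cannot use a plain btree expression index, which serves left-anchored matches only. One common workaround (a sketch, assuming the goal is suffix matching on the normalized phone number, and assuming a reverse() function is available — built-in from 9.1, user-defined before that) is to index the reversed string and query it as a prefix:

```sql
-- Hypothetical workaround: index the reversed, digits-only phone number.
-- text_pattern_ops lets the btree serve LIKE prefix matches in any locale.
CREATE INDEX leadaddress_phone_rev_idx ON leadaddress
    ((reverse(regexp_replace(phone::text, '[^0-9]*', '', 'g'))) text_pattern_ops);

-- The trailing-anchor match becomes a leading-anchor match:
SELECT * FROM leadaddress
WHERE reverse(regexp_replace(phone::text, '[^0-9]*', '', 'g'))
      LIKE '6068349512%';   -- i.e. reverse('2159438606') || '%'
```

Writing the reversed pattern as a literal keeps it a plan-time constant, which is what lets the planner turn the LIKE into an index range scan.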
On 09/02/2010 08:43, AI Rumman wrote:
I have created an index
create index leadaddress_phone_idx on
leadaddress(regexp_replace((phone)::text, '[^0-9]*'::text, ''::text,
'g'::text));
But the index is not being used.
explain select * from leadaddress where
I just answered this less than an hour ago... And please don't cross
post to multiple mailing lists.
On Tue, Feb 9, 2010 at 12:43 AM, AI Rumman rumman...@gmail.com wrote:
I have created an index
create index leadaddress_phone_idx on
leadaddress(regexp_replace((phone)::text, '[^0-9]*'::text,
I am using Postgresql 8.1.
I have to reverse a string like 'abc' to 'cba'.
Is there any function to do this?
2010/2/9 David Kerr d...@mr-paradox.net:
It seems like pg_stat_activity truncates the current_query to about 1024
characters.
The field is a text, so i'm wondering if there is a way to see the full query?
(I know i can turn on log_statement=all, or log_min_duration_statement) but
i'd like
This was what I was after:
test=# select version();
version
PostgreSQL 8.3.9 on x86_64-pc-linux-gnu, compiled by GCC
gcc-4.3.real (Debian 4.3.2-1.1) 4.3.2
2010/1/28 Scott Marlowe scott.marl...@gmail.com:
related to maximum per-process data space. I don't know BSD very well
so I can't say if datasize is the only such value for BSD, but it'd be
worth checking. (Hmm, on OS X which is at least partly BSDish, I see
-m and -v in addition to -d, so
Hi,
I'm trying to log statements that are being fired from hibernate, so that i
can benchmark them without hibernate itself.
I want to fire the SQL at the database directly and see what difference i
can make by tweaking things.
But when i set log_statement = 'all', the logging looks like this:
Scott Marlowe wrote:
On Tue, Feb 9, 2010 at 1:42 AM, Fredric Fredricson
fredric.fredric...@bonetmail.com wrote:
I use locale en_US.UTF-8 but why this should affect how leading characters
in strings are ignored is beyond me.
P.s. this page may shed some light on the subject:
On Tue, Feb 9, 2010 at 3:18 AM, Anton Maksimenkov anton...@gmail.com wrote:
2010/1/28 Scott Marlowe scott.marl...@gmail.com:
related to maximum per-process data space. I don't know BSD very well
so I can't say if datasize is the only such value for BSD, but it'd be
worth checking. (Hmm, on
Hi,
in my plpgsql function I'm looping through tables and their fields and i
want to return their field names and the corresponding values.
It all works fine, except i can't retrieve the values, which is really a
bummer.
I tried a couple of things (in vain), of which this seemed the most
Hi,
I have the $SUBJECT problem loading my own
module in PostgreSQL. The server is HP-UX/ia64,
PostgreSQL 8.4.2 was compiled with HP CC.
pl/PgSQL can be loaded fine.
...:/usr/local/pgsql/pgsql-cc-8.4/lib# ldd odbclink.so
libodbc.so.1 => /usr/local/pgsql/runtime/lib/libodbc.so.1
On 09/02/10 09:51, AI Rumman wrote:
I am using Postgresql 8.1.
I have to reverse a string like 'abc' to 'cba'.
Is there any function to do this?
Nothing built-in, but you can write your own easily enough in plpgsql -
google a bit for examples.
--
Richard Huxton
Archonet Ltd
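For 8.1, which predates the reverse() built-in added in 9.1, a minimal plpgsql version might look like this (a sketch along the lines suggested above, not tested on 8.1 itself):

```sql
-- Hypothetical user-defined reverse() for pre-9.1 servers.
CREATE OR REPLACE FUNCTION reverse(text) RETURNS text AS $$
DECLARE
    result text := '';
    i      int;
BEGIN
    -- Walk the string from last character to first, appending each one.
    FOR i IN REVERSE length($1)..1 LOOP
        result := result || substr($1, i, 1);
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql IMMUTABLE STRICT;

-- SELECT reverse('abc');  -- cba
```

Marking it IMMUTABLE also allows it to be used in expression indexes.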
I've got a database that holds a bunch of articles in a table called
'article'. It has a bunch of columns, and each row might hold a few KB
of data, say.
I'm maintaining a separate fulltext database, and so I added a boolean
flag, 'needs_indexing' to my 'article' table to keep track of which
On Tue, Feb 9, 2010 at 11:21, Fredric Fredricson
fredric.fredric...@bonetmail.com wrote:
Scott Marlowe wrote:
On Tue, Feb 9, 2010 at 1:42 AM, Fredric Fredricson
fredric.fredric...@bonetmail.com wrote:
I use locale en_US.UTF-8 but why this should affect how leading characters
in strings are
On 09/02/10 11:07, Willy-Bas Loos wrote:
Hi,
in my plpgsql function I'm looping through tables and their fields and i
want to return their field names and the corresponding values.
It all works fine, except i can't retrieve the values, which is really a
bummer.
I tried a couple of things (in
On Tue, Feb 9, 2010 at 10:31 PM, Richard Huxton d...@archonet.com wrote:
On 09/02/10 09:51, AI Rumman wrote:
I am using Postgresql 8.1.
I have to reverse a string like 'abc' to 'cba'.
Is there any function to do this?
Nothing built-in, but you can write your own easily enough in plpgsql
On 09/02/10 11:25, Ben Campbell wrote:
[snip]
I need to set all
those flags, but it takes _ages_ to do UPDATE article SET
needs_indexing=true;
[snip]
I _think_ the reason it takes so long is that postgresql doesn't modify
rows in place - it creates an entry for the modified row and zaps the
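Given that MVCC writes a whole new row version for every updated row, two common mitigations (a sketch; the `id` column and batch bounds are assumptions, since the actual article schema isn't shown) are to skip rows that already carry the flag, and to update in batches with vacuums in between so dead versions can be reclaimed:

```sql
-- Avoid rewriting rows whose flag is already set (handles NULL too):
UPDATE article SET needs_indexing = true
 WHERE needs_indexing IS DISTINCT FROM true;

-- Or update in id-range batches (hypothetical bounds), vacuuming between:
UPDATE article SET needs_indexing = true WHERE id >=      0 AND id < 100000;
VACUUM article;
UPDATE article SET needs_indexing = true WHERE id >= 100000 AND id < 200000;
VACUUM article;
```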
On Tue, Feb 9, 2010 at 09:53, David Kerr d...@mr-paradox.net wrote:
Guillaume Lelarge wrote:
On 09/02/2010 09:35, David Kerr wrote:
Guillaume Lelarge wrote:
On 09/02/2010 05:49, John R Pierce wrote:
David Kerr wrote:
maintenance_work_mem = 1GB
So evidently, when it tries to
I have this error when selecting from the pg_stat_user_tables. I restarted
postgres and the error went away.
Has anyone else seen this error?
Chris Barnes
[postg...@preventdb02 londiste]$ cat /data/pgsql/data/pg_log/postgresql-Tue.log
WARNING: pgstat wait timeout
WARNING:
2010/2/9 Shoaib Mir shoaib...@gmail.com:
On Tue, Feb 9, 2010 at 10:31 PM, Richard Huxton d...@archonet.com wrote:
On 09/02/10 09:51, AI Rumman wrote:
I am using Postgresql 8.1.
I have to reverse a string like 'abc' to 'cba'.
Is there any function to do this?
Nothing built-in, but you
Hello
I think that Pavel's stuff is more interesting (and effective):
http://pgfoundry.org/projects/pstcollection/
sorry, it is in the orafce package (it could be in pstcollection too, in future)
http://pgfoundry.org/projects/orafce
Regards
Pavel Stehule
--
Vincenzo Romano
NotOrAnd
On 09/02/10 13:52, Chris Barnes wrote:
I have this error when selecting from the pg_stat_user_tables. I
restarted postgres and the error went away.
Has anyone else seen this error?
Chris Barnes
[postg...@preventdb02 londiste]$ cat
/data/pgsql/data/pg_log/postgresql-Tue.log
WARNING: pgstat
I have the $SUBJECT problem loading my own
module in PostgreSQL. The server is HP-UX/ia64,
PostgreSQL 8.4.2 was compiled with HP CC.
pl/PgSQL can be loaded fine.
...:/usr/local/pgsql/pgsql-cc-8.4/lib# ldd odbclink.so
libodbc.so.1 => /usr/local/pgsql/runtime/lib/libodbc.so.1
2010/2/9 Scott Marlowe scott.marl...@gmail.com:
On Tue, Feb 9, 2010 at 3:18 AM, Anton Maksimenkov anton...@gmail.com wrote:
Isn't the usual advice here to log the ulimit setting from the pg
startup script so you can see what it really is for the user at the moment
I think that su is enough:
In
Magnus Hagander wrote:
2010/2/9 David Kerr d...@mr-paradox.net:
It seems like pg_stat_activity truncates the current_query to about 1024
characters.
The field is a text, so i'm wondering if there is a way to see the full query?
(I know i can turn on log_statement=all, or
Magnus Hagander wrote:
On Tue, Feb 9, 2010 at 09:53, David Kerr d...@mr-paradox.net wrote:
Guillaume Lelarge wrote:
On 09/02/2010 09:35, David Kerr wrote:
Guillaume Lelarge wrote:
On 09/02/2010 05:49, John R Pierce wrote:
David Kerr wrote:
maintenance_work_mem = 1GB
So evidently,
Korry Douglas wrote:
I have the $SUBJECT problem loading my own
module in PostgreSQL. The server is HP-UX/ia64,
PostgreSQL 8.4.2 was compiled with HP CC.
pl/PgSQL can be loaded fine.
...:/usr/local/pgsql/pgsql-cc-8.4/lib# ldd odbclink.so
libodbc.so.1 =>
I have a column that contains something formed by concatenating fields
from another table
create table stuff(
glued varchar(30),
...
);
insert into stuff select 'constA,' || field1 || ',' || field2 from
origin where ...;
insert into stuff select 'constB,' || field1 || ',' || field2 from
origin
Hi,
While copying a large'ish file (~ 7.6 million rows) into a table with:
\copy [tablename] ([columns]) FROM '/path/to/large/file' NULL AS ''
The command stopped running after a few minutes with the message:
---cut here---start--
server
Seb wrote:
---cut here---start--
2010-02-09 10:47:03 CST LOG: checkpoints are occurring too frequently (6
seconds apart)
2010-02-09 10:47:03 CST HINT: Consider increasing the configuration
parameter checkpoint_segments.
2010-02-09
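The HINT above points at postgresql.conf; a bulk \copy generates WAL much faster than normal traffic, so raising checkpoint_segments for the duration of the load is the usual response (values here are illustrative, not a recommendation):

```
# postgresql.conf (8.x) — default checkpoint_segments is 3
checkpoint_segments = 30      # allow more WAL between checkpoints during bulk loads
checkpoint_warning  = 30s     # keep warning if checkpoints still come too fast
```

A reload (pg_ctl reload) is enough for checkpoint_segments to take effect; no restart is needed.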
On Mon, Feb 8, 2010 at 5:46 PM, David Kerr d...@mr-paradox.net wrote:
It seems like pg_stat_activity truncates the current_query to about 1024
characters.
The field is a text, so i'm wondering if there is a way to see the full query?
(I know i can turn on log_statement=all, or
On Tue, Feb 9, 2010 at 10:03 AM, Seb splu...@gmail.com wrote:
Hi,
While copying a large'ish file (~ 7.6 million rows) into a table with:
\copy [tablename] ([columns]) FROM '/path/to/large/file' NULL AS ''
The command stopped running after a few minutes with the message:
On Tue, 9 Feb 2010 10:28:50 -0700,
Scott Marlowe scott.marl...@gmail.com wrote:
Are there any PANICs in the log before this? Doesn't seem likely from
your description, but just wanted to be sure.
No, there are no such lines in the log.
AFAICS, the data did make it into the table, after psql
Seb wrote:
On Tue, 9 Feb 2010 10:28:50 -0700,
Scott Marlowe scott.marl...@gmail.com wrote:
Are there any PANICs in the log before this? Doesn't seem likely from
your description, but just wanted to be sure.
No, there are no such lines in the log.
AFAICS, the data did make it into the
On Tue, 9 Feb 2010 14:44:23 -0300,
Alvaro Herrera alvhe...@commandprompt.com wrote:
[...]
Err, remove the renegotiation code from the server.
Oh I meant something from a user's point of view, not hacking the source
code. If there's not a user option, I don't mind the connection loss as
long
Hello.
I'm putting together a database to store the readings from various
measurement devices for later processing. Since these things (water
pressure monitors attached to very large water pipes) take readings at
200Hz and are typically deployed over multiple sites for several months
at a
Seb wrote:
On Tue, 9 Feb 2010 14:44:23 -0300,
Alvaro Herrera alvhe...@commandprompt.com wrote:
[...]
Err, remove the renegotiation code from the server.
Oh I meant something from a user's point of view, not hacking the source
code. If there's not a user option, I don't mind the
Crosstab is indeed very interesting. Thank you for the suggestion Vincenzo.
regards
Davor
Vincenzo Romano vincenzo.rom...@notorand.it wrote in message
news:3eff28921002081133h4b0d7fabm96cc1bc08e579...@mail.gmail.com...
Look for crosstab in the documentation.
Il giorno 8 feb, 2010 8:21
On 2/9/2010 12:47 PM, Asher wrote:
Hello.
I'm putting together a database to store the readings from various
measurement devices for later processing. Since these things (water
pressure monitors attached to very large water pipes) take readings at
200Hz and are typically deployed over
I've only gone up to about a billion rows, but table partitioning seems
to be the way to go to me. I did per-day partitioning, and just had the
job that inserts the daily data add the table automatically. With the
partitioning, it only has to pull up the tables for the specific days,
and is
Guillaume Lelarge wrote:
is this a 64bit postgres build?
if not, you're probably running out of virtual address space in the 32
bit user space, which is limited to like 2gb.
IIRC, the virtual address space in 32bit platforms is 4GB.
it is, but within that 4gb, the kernel uses the
On Feb 9, 2010, at 9:55 AM, Seb wrote:
On Tue, 9 Feb 2010 14:44:23 -0300,
Alvaro Herrera alvhe...@commandprompt.com wrote:
[...]
Err, remove the renegotiation code from the server.
Oh I meant something from a user's point of view, not hacking the source
code. If there's not a user
On Tue, Feb 9, 2010 at 1:55 PM, John R Pierce pie...@hogranch.com wrote:
Guillaume Lelarge wrote:
is this a 64bit postgres build?
if not, you're probably running out of virtual address space in the 32
bit user space, which is limited to like 2gb.
IIRC, the virtual address space in 32bit
Justin Graf wrote:
Well, first: does 200Hz mean 200 samples per channel per second?
That is very fast sampling for a pressure sensor; I would be surprised if
the meters are actually giving real results at that rate. I would look
at reducing that down to what the meter is actually capable
On Tue, Feb 9, 2010 at 10:47 AM, Asher as...@piceur.co.uk wrote:
Hello.
I'm putting together a database to store the readings from various
measurement devices for later processing. Since these things (water pressure
monitors attached to very large water pipes) take readings at 200Hz and are
Boszormenyi Zoltan wrote:
Korry Douglas wrote:
I have the $SUBJECT problem loading my own
module in PostgreSQL. The server is HP-UX/ia64,
PostgreSQL 8.4.2 was compiled with HP CC.
pl/PgSQL can be loaded fine.
...:/usr/local/pgsql/pgsql-cc-8.4/lib# ldd odbclink.so
libodbc.so.1 =>
Asher Hoskins wrote:
If I partition so that each partition holds data for a single channel
(and set a CHECK constraint for this) then I can presumably remove the
channel from the index since constraint exclusion will mean that only
partitions holding the channel I'm interested in will be
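The scheme described above — per-channel child tables whose CHECK constraints let constraint exclusion skip irrelevant partitions — might be sketched like this (table and column names are illustrative, not from the original post):

```sql
CREATE TABLE sample (
    channel     integer          NOT NULL,
    sample_time timestamptz      NOT NULL,
    value       double precision
);

-- One child per channel; the CHECK constraint is what constraint
-- exclusion compares against the query's WHERE clause.
CREATE TABLE sample_ch1 (CHECK (channel = 1)) INHERITS (sample);
CREATE TABLE sample_ch2 (CHECK (channel = 2)) INHERITS (sample);

-- With constraint_exclusion = on, only sample_ch2 is scanned here:
-- SELECT * FROM sample
--  WHERE channel = 2
--    AND sample_time >= '2010-01-01' AND sample_time < '2010-02-01';
```

An index on (sample_time) within each child then covers the typical date-range query without needing channel in the index at all.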
On 09/02/2010 17:05, David Kerr wrote:
Magnus Hagander wrote:
2010/2/9 David Kerr d...@mr-paradox.net:
It seems like pg_stat_activity truncates the current_query to about
1024 characters.
The field is a text, so i'm wondering if there is a way to see the
full query?
(I know i can turn
On 2/9/2010 4:41 PM, Asher Hoskins wrote:
Thanks for that, it looks like partitioning is the way to go. I'm
assuming that I should try and keep my total_relation_sizes less than
the memory size of the machine?
This depends on what the queries look like. As others have stated, when
On Feb 9, 2010, at 3:28 PM, Erik Jones wrote:
Greetings,
We've recently had a database server crash due to a heavy-duty disk failure and
upon rebooting we now have a table showing corruption via the "invalid page
header in block X" message when querying one table in particular, the rest
Greetings,
We've recently had a database server crash due to a heavy-duty disk failure and
upon rebooting we now have a table showing corruption via the "invalid page
header in block X" message when querying one table in particular, the rest have
all checked out via full table dumps with no
On Tue, 2010-02-09 at 15:28 -0800, Erik Jones wrote:
* Set zero_damaged_pages=on, run query that originally showed the
corruption. This reports 3 different blocks with invalid page headers
and reports that they are being zero'd out. Unfortunately,
subsequently querying the table the same
On Feb 9, 2010, at 5:00 PM, Jeff Davis wrote:
On Tue, 2010-02-09 at 15:28 -0800, Erik Jones wrote:
* Set zero_damaged_pages=on, run query that originally showed the
corruption. This reports 3 different blocks with invalid page headers
and reports that they are being zero'd out.
On Tue, 2010-02-09 at 17:14 -0800, Erik Jones wrote:
Anyways, I realized that the dump run with zero_damaged_pages does
actually finish.
Yeah, it should finish, it's just a question of whether the warnings
continue, and if you need to keep zero_damaged_pages on to keep reading.
Also, I found
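The recovery sequence under discussion can be summarized as a psql session (a sketch; the table name is hypothetical, and zero_damaged_pages destroys the damaged pages' contents as they are read, so work on a file-level copy of the cluster if at all possible):

```sql
-- DANGER: irreversibly zeroes any page with an invalid header as it is read.
SET zero_damaged_pages = on;

-- Force a full read of the damaged table so the bad pages get zeroed:
SELECT count(*) FROM damaged_table;   -- hypothetical table name

-- Then pg_dump the table, restore it into a fresh table, and reindex,
-- so no zeroed pages remain in the working copy.
```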
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http
On Tue, Feb 9, 2010 at 8:06 PM, BillR iamb...@williamrosmus.com wrote:
Is it also possible to denormalize by putting the 'channel' data in the
first table (especially if it isn't very much)? Maintaining a foreign key
constraint can impact performance significantly in most RDBMS's, even when
Hi,
Is there a way to create a new composite type from an already existing
composite type?
For example,
CREATE TABLE inventory_item (
    name        text,
    supplier_id integer REFERENCES suppliers,
    price       numeric CHECK (price > 0)
);
produces the following type
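Since every table implicitly defines a composite type with the same name, one way to build a new composite type on top of an existing one (a sketch; the new type's name is illustrative) is simply to use the existing type as a field:

```sql
-- inventory_item, defined by the table above, is itself a composite type,
-- so it can appear as an attribute of a new type:
CREATE TYPE stocked_item AS (
    item     inventory_item,
    quantity integer
);

-- Example value:
-- ROW(ROW('fuzzy dice', 42, 1.99), 1000)::stocked_item
```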
Hello all,
Apologies for the long mail.
I work for a company that provides solutions mostly on a Java/Oracle
platform. Recently we moved one of our products to PostgreSQL. The main
reason was PostgreSQL's GIS capabilities and the inability of government
departments (especially road/traffic)
How can I store a text string of 2 GB or more in PostgreSQL?
Which data type should I use?
Any suggestions, please.
If you will be selecting sets of data within a time range, it should also
improve performance if you can build a clustered index on the sample_time. It
may also be worth looking at whether partitioning by timestamp channel offers
any advantages.
Brent Wood
DBA/GIS consultant
Quick note, please stick to text formatted email for the mailing list,
it's the preferred format.
On Tue, Feb 9, 2010 at 9:09 PM, Jayadevan M
jayadevan.maym...@ibsplc.com wrote:
Hello all,
Apologies for the long mail.
I work for a company that provides solutions mostly on a Java/Oracle
On Tue, Feb 9, 2010 at 9:38 PM, AI Rumman rumman...@gmail.com wrote:
How to save 2 GB or more text string in Postgresql?
Which data type should I use?
If you have to you can use either the lo interface, or you can use
bytea. Large Object (i.e. lo) allows for access much like fopen /
fseek etc
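For concreteness: text and bytea values are capped at 1GB each, while a large object can hold up to 2GB and supports seek-style access. A sketch of the server-side large object functions (the file path is illustrative, and server-side lo_import requires appropriate privileges):

```sql
-- Server-side import: creates a large object and returns its OID.
SELECT lo_import('/tmp/bigfile.txt');      -- hypothetical path

-- Reference the OID from an ordinary table:
CREATE TABLE docs (id serial PRIMARY KEY, body oid);

-- Export back out to a file:
-- SELECT lo_export(body, '/tmp/out.txt') FROM docs WHERE id = 1;
```

Client drivers expose the same facility with streaming read/write, much like fopen/fseek.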
On Feb 9, 2010, at 9:52 PM, Scott Marlowe wrote:
On Tue, Feb 9, 2010 at 9:38 PM, AI Rumman rumman...@gmail.com wrote:
How to save 2 GB or more text string in Postgresql?
Which data type should I use?
If you have to you can use either the lo interface, or you can use
bytea. Large Object
On Tue, Feb 9, 2010 at 11:26 PM, Steve Atkins st...@blighty.com wrote:
On Feb 9, 2010, at 9:52 PM, Scott Marlowe wrote:
On Tue, Feb 9, 2010 at 9:38 PM, AI Rumman rumman...@gmail.com wrote:
How to save 2 GB or more text string in Postgresql?
Which data type should I use?
If you have to you
Thanks for your quick answers.
But if I use a file and then store the name in the database, is it possible
to use text search (tsvector and tsquery) indexing on these external files?
On Wed, Feb 10, 2010 at 12:26 PM, Steve Atkins st...@blighty.com wrote:
On Feb 9, 2010, at 9:52 PM, Scott Marlowe
Asher wrote:
Once loaded into the database the data will never be deleted or
modified and will typically be accessed over a particular date range
for a particular channel (e.g. sample_time >= X AND sample_time <= Y
AND channel = Z). A typical query won't return more than a few million
rows and
2010/2/10 Scott Marlowe scott.marl...@gmail.com:
On Tue, Feb 9, 2010 at 11:26 PM, Steve Atkins st...@blighty.com wrote:
On Feb 9, 2010, at 9:52 PM, Scott Marlowe wrote:
On Tue, Feb 9, 2010 at 9:38 PM, AI Rumman rumman...@gmail.com wrote:
How to save 2 GB or more text string in Postgresql?
I have to write a 700 GB large database to a dump to clean out a lot of dead
records on an Ubuntu server with postgres 8.3.8. What is the proper procedure
to succeed with this - last time the dump stopped at 3.8 GB size I guess.
Should I combine the -Fc option of pg_dump and the split
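One sketch of the -Fc + split approach (the dump command itself is shown as a comment; a dummy file stands in for the dump below so the split/reassemble mechanics can be checked without a database):

```shell
# In practice: pg_dump -Fc mydb > dump.custom
# A dummy file stands in for the dump here:
head -c 1000000 /dev/zero > dump.custom

# Split into chunks the filesystem can handle (200kB here for the demo;
# something like -b 1000m is more realistic for a 700GB database):
split -b 200000 dump.custom dump.part.

# Reassemble in lexical order (split's default suffixes aa, ab, ... sort correctly):
cat dump.part.* > restored.custom
cmp -s dump.custom restored.custom && echo OK
```

pg_restore can then read the reassembled custom-format file; compressing chunks with gzip before transfer works the same way.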
On Tue, Feb 9, 2010 at 11:51 PM, Pavel Stehule pavel.steh...@gmail.com wrote:
2010/2/10 Scott Marlowe scott.marl...@gmail.com:
On Tue, Feb 9, 2010 at 11:26 PM, Steve Atkins st...@blighty.com wrote:
On Feb 9, 2010, at 9:52 PM, Scott Marlowe wrote:
On Tue, Feb 9, 2010 at 9:38 PM, AI Rumman
Hi all,
I'm playing with 8.5devel aka 9.0 and got that:
test=# with a as
(insert into table_a select s, md5(s::text) from
generate_series(0,25) s returning *),
b as
(insert into table_b select id, md5(value) from a where
substring(value,1,1) between '0' and '9')
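The statement appears cut off above; a complete version of this writable-CTE experiment (a guess at the intended final query, with hypothetical table definitions matching the fragment) would look something like:

```sql
-- Hypothetical setup matching the fragment:
CREATE TABLE table_a (id int, value text);
CREATE TABLE table_b (id int, value text);

WITH a AS (
    INSERT INTO table_a
    SELECT s, md5(s::text) FROM generate_series(0, 25) s
    RETURNING *
),
b AS (
    INSERT INTO table_b
    SELECT id, md5(value) FROM a
    WHERE substring(value, 1, 1) BETWEEN '0' AND '9'
    RETURNING *
)
SELECT * FROM b;
```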
On Feb 9, 2010, at 10:38 PM, AI Rumman wrote:
Thanks for your quick answers.
But if I use a file and then store the name in the database, is it possible
to use TEXT search tsvector and tsquery indexing on these external files?
No, no way at all.
A database isn't really the right way to
I have to write a 700 GB large database to a dump to clean out a lot of
dead records on an Ubuntu server with postgres 8.3.8. What is the proper
procedure to succeed with this - last time the dump stopped at 3.8 GB size I
guess. Should I combine the -Fc option of pg_dump and the
On Wed, Feb 10, 2010 at 12:11 AM, Steve Atkins st...@blighty.com wrote:
A database isn't really the right way to do full text search for single files
that big. Even if they'd fit in the database it's way bigger than the
underlying index types tsquery uses are designed for.
Are you sure that
Hello
2010/2/10 karsten vennemann kars...@terragis.net
I have to write a 700 GB large database to a dump to clean out a lot of
dead records on an Ubuntu server with postgres 8.3.8. What is the proper
procedure to succeed with this - last time the dump stopped at 3.8 GB size I
guess. Should
On Feb 9, 2010, at 11:21 PM, Scott Marlowe wrote:
On Wed, Feb 10, 2010 at 12:11 AM, Steve Atkins st...@blighty.com wrote:
A database isn't really the right way to do full text search for single
files that big. Even if they'd fit in the database it's way bigger than the
underlying index
On Wed, Feb 10, 2010 at 10:11 AM, Steve Atkins st...@blighty.com wrote:
On Feb 9, 2010, at 10:38 PM, AI Rumman wrote:
Thanks for your quick answers.
But if I use a file and then store the name in the database, is it possible
to use TEXT search tsvector and tsquery indexing on these external
karsten vennemann wrote:
I have to write a 700 GB large database to a dump to clean out a lot
of dead records on an Ubuntu server with postgres 8.3.8. What is the
proper procedure to succeed with this - last time the dump stopped at
3.8 GB size I guess. Should I combine the -Fc option of
On Tue, Feb 09, 2010 at 08:19:51PM +0500, Anton Maksimenkov wrote:
It means that on OpenBSD i386 we have about 2.2GB of virtual space for
malloc and shm*. So postgres will use that space.
But mmap() uses random addresses. So when you get a big chunk of memory
for shared buffers (say, 2GB) then you
Hi,
I would consider telling Hibernate to log the SQL it is generating to a
file. This can be done by setting the logging category org.hibernate.SQL to
debug, and for the parameters used in the prepared statements I think you
must also enable org.hibernate.type on debug (I have not managed to get
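In log4j terms, the categories mentioned would be enabled roughly like this (a sketch; the exact appender setup depends on the project's logging configuration):

```
# log4j.properties (illustrative)
log4j.logger.org.hibernate.SQL=DEBUG    # logs each SQL statement Hibernate issues
log4j.logger.org.hibernate.type=DEBUG   # logs bound parameter values
```

The logged statements can then be replayed with psql against the database directly, independent of Hibernate.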
On Tue, Feb 9, 2010 at 11:51 PM, Greg Smith g...@2ndquadrant.com wrote:
Asher wrote:
Once loaded into the database the data will never be deleted or modified
and will typically be accessed over a particular date range for a particular
channel (e.g. sample_time >= X AND sample_time <= Y AND