On Mon, 21 Jul 2008, System/IJS - Joko wrote:
Thx a lot Nicolas,
I finally succeeded in logging query statements thanks to your simple explanation.
I have other question:
1. Is there a possibility to automatically log those statements to a table?
I don't know, never tried that (one possible approach is sketched below).
2. All of that stateme
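One possible approach to question 1 (a sketch, assuming PostgreSQL 8.3 or later, where CSV log output was introduced; earlier releases have no csvlog): write the logs in CSV format, then load the rotated files into a table.
In postgresql.conf:
    log_destination = 'csvlog'
    logging_collector = on
    log_statement = 'all'
Then create a table matching the documented csvlog column layout (the manual ships a ready-made postgres_log definition) and load each rotated file:
    -- file name is illustrative; point it at the actual rotated CSV file
    COPY postgres_log FROM '/path/to/pg_log/postgresql.csv' WITH csv;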
On Fri, 18 Jul 2008, System/IJS - Joko wrote:
I added the following to FreeBSD:
/etc/newsyslog.conf:
/var/log/postgresql    600  7    *    @T00  JC
Do I need to create this as a new file?
/etc/syslog.conf:
local0.*    /var/log/postgresql
/usr/local/pgsql/data/po
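For reference, the matching postgresql.conf settings for syslog output would be along these lines (a sketch; syslog_facility has to agree with the local0 selector above):
    log_destination = 'syslog'
    syslog_facility = 'LOCAL0'
    syslog_ident = 'postgres'
    log_statement = 'all'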
On Thu, 17 Jul 2008, Claus Guttesen wrote:
After setting log_statement = 'all' in postgresql.conf,
then rebooting the OS (FreeBSD or CentOS),
I can't find where the log file created by log_statement = 'all' is located...
FYI, postgresql.conf is located at /var/lib/pgsql/data/postgresql.conf.
Many thanks.
I
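A general pointer, not from the thread itself: with the default log_destination = 'stderr', nothing is kept on disk unless the server's stderr is captured. On 8.0-8.2 that means setting, in postgresql.conf:
    redirect_stderr = on
    log_directory = 'pg_log'      # relative to the data directory
    log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
(In 8.3 redirect_stderr was renamed to logging_collector.) With those settings the files land under $PGDATA/pg_log.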
On Fri, 11 Jan 2008, Andrew Sullivan wrote:
On Fri, Jan 11, 2008 at 05:02:36PM -0500, Michael Stone wrote:
networks), but there's a conspicuous lack of a type for (hosts). I
suppose if you really are sure that you want to store hosts and not
networks
Well, part of the trouble is that in the
On Fri, 11 Jan 2008, Tom Lane wrote:
Pomarede Nicolas <[EMAIL PROTECTED]> writes:
As ip4r seems to work very well with PostgreSQL, is there a possibility to
see it merged into PostgreSQL, to have a native 4-byte IPv4 address data
type?
Given that the world is going to IPv6 in a few
On Thu, 10 Jan 2008, Jonah H. Harris wrote:
On Jan 10, 2008 6:25 PM, Steve Atkins <[EMAIL PROTECTED]> wrote:
http://pgfoundry.org/projects/ip4r/
That has the advantage over using integers, or the built-in inet type,
of being indexable for range and overlap queries.
Agreed. ip4r is da bomb.
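A small usage sketch (table and values are hypothetical; assumes ip4r is already installed from the pgfoundry package):
    CREATE TABLE blocklist (net ip4r);
    CREATE INDEX blocklist_net_idx ON blocklist USING gist (net);

    -- containment: which entries cover this single address?
    SELECT * FROM blocklist WHERE net >>= '192.168.1.10'::ip4;

    -- overlap: which entries intersect this block?
    SELECT * FROM blocklist WHERE net && '10.0.0.0/8'::ip4r;
Both queries can use the GiST index, which is the advantage being described above.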
On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
There aren't too many simultaneous transactions on the database; most of the
time a transaction shouldn't exceed one minute (worst case). Except, as I need to run
a VACUUM ANALYZE on the whole database every day, it now takes
On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
But for the data (dead rows), even running a VACUUM ANALYZE every day is
not enough; it doesn't truncate the empty pages at the end, so the data
On Tue, 8 May 2007, Heikki Linnakangas wrote:
Pomarede Nicolas wrote:
But for the data (dead rows), even running a VACUUM ANALYZE every day is
not enough; it doesn't truncate the empty pages at the end, so the data
size remains on the order of 200-300 MB, when only a few effective row
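For completeness, a standard remedy (not necessarily what the thread went on to recommend): plain VACUUM only marks dead rows as reusable; to return the space to the OS the table has to be rewritten, e.g. with VACUUM FULL (or CLUSTER), both of which take an exclusive lock:
    -- table name is illustrative; schedule this in a quiet window
    VACUUM FULL VERBOSE event_spool;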
On Tue, 8 May 2007, Guillaume Cottenceau wrote:
Pomarede Nicolas writes:
Hello to all,
I have a table that is used as a spool for various events. Some
processes write data into it, and another process reads the resulting
rows, does some work, and deletes the rows that were just processed.
As
On Tue, 8 May 2007, [EMAIL PROTECTED] wrote:
On Tue, 8 May 2007, Pomarede Nicolas wrote:
As you can see, with hundreds of thousands of events a day, this table will need
to be vacuumed regularly to avoid taking too much space (data and index).
Note that processing rows is quite fast in fact
Hello to all,
I have a table that is used as a spool for various events. Some processes
write data into it, and another process reads the resulting rows, does some
work, and deletes the rows that were just processed.
As you can see, with hundreds of thousands of events a day, this table will
need to be vacuumed regularly to avoid taking too much space (data and index).
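A minimal sketch of the pattern described above (all names are made up for illustration):
    CREATE TABLE event_spool (
        id       serial PRIMARY KEY,
        payload  text
    );

    -- producers: INSERT INTO event_spool (payload) VALUES ('...');

    -- consumer: read a batch, do the work, delete it in one transaction
    BEGIN;
    SELECT id, payload FROM event_spool ORDER BY id LIMIT 1000;
    -- ... process the rows ...
    DELETE FROM event_spool WHERE id <= 1000;  -- ids of the processed batch
    COMMIT;

    -- the deleted rows stay behind as dead tuples until vacuumed,
    -- hence the regular VACUUM ANALYZE discussed in the replies above
    VACUUM ANALYZE event_spool;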
On Mon, 29 Jan 2007, Florian Weimer wrote:
* Pomarede Nicolas:
I could use PG's internal inet/cidr type to store the IP addresses, which
would take 12 bytes per IP, thus gaining a few bytes per row.
I thought it's down to 8 bytes in PostgreSQL 8.2, but I could be
mistaken.
Apart from ga
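The size claim is easy to check directly (assumes 8.1 or later, where pg_column_size() exists):
    -- reports the on-disk size of the value; 8.2 shrank inet/cidr storage
    SELECT pg_column_size('192.168.1.1'::inet);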
Hello,
I have an authorization table that associates one customer IP with a service
IP to determine a TTL (used by a RADIUS server).
CREATE TABLE auth (
    client  varchar(15),
    service varchar(15),
    ttl     int4
);
Both client and service are IP addresses.
The number of distinct clients can be rather large (say ar
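For comparison, the same table using the built-in type instead of varchar(15) (a sketch):
    CREATE TABLE auth (
        client  inet,    -- was varchar(15)
        service inet,    -- was varchar(15)
        ttl     int4
    );

    -- exact-match lookup as the radius server would do it
    SELECT ttl FROM auth WHERE client = '10.0.0.1' AND service = '192.168.0.5';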