On 29.11.2011 23:44, Filip Rembiałkowski wrote:
did you look at connlimit?
http://www.netfilter.org/projects/patch-o-matic/pom-external.html#pom-external-connlimit
AFAIK, it applies only to ESTABLISHED state, so maybe it suits you.
No, I didn't, and THANKS! That's exactly the hint I needed.
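For reference, a minimal connlimit rule looks like the following (the port, limit, and REJECT action are example values; on recent kernels connlimit is in mainline netfilter, so the patch-o-matic step may not even be needed):

```shell
# Reject more than 5 concurrent TCP connections to PostgreSQL (port 5432)
# from any single source address. Limit and port are assumptions.
iptables -A INPUT -p tcp --dport 5432 -m connlimit --connlimit-above 5 \
    -j REJECT --reject-with tcp-reset
```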
On 29.11.2011 23:49, Tom Lane wrote:
Another way that we've sometimes recommended people handle custom login
restrictions is
(1) use PAM for authentication
(2) find or write a PAM plugin that makes the kind of check you want
Very interesting - I'll first try the connlimit approach hinted at
On Wed, Nov 30, 2011 at 09:23, Heiko Wundram modeln...@modelnine.org wrote:
On 29.11.2011 23:49, Tom Lane wrote:
Another way that we've sometimes recommended people handle custom login
restrictions is
(1) use PAM for authentication
(2) find or write a PAM plugin that makes the kind of
On 30.11.2011 09:26, Magnus Hagander wrote:
I don't believe we do teardown using PAM, just session start. So you'd
have to have your PAM module check the current state of postgresql
every time - not keep some internal state.
Okay, that's too bad - if connlimit doesn't do the trick, I'll try
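If such a PAM module (or any external hook) has to consult PostgreSQL's current state on every login, a per-user connection count can be read from pg_stat_activity; a sketch (the role name is an example):

```shell
# Count live backends for one role; usable from an exec-style PAM hook.
# On 9.0/9.1 pg_stat_activity has one row per backend.
psql -At -c "SELECT count(*) FROM pg_stat_activity WHERE usename = 'someuser'"
```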
Florian Schwendener wrote:
Thank you for your help. As I know little about Linux and only a bit
about makefiles, I really don't know whether
I'm doing the right thing. I've typed this:
root@ubuntu:/home/user/Downloads/odbc_fdw-0.1.0#
PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1 make
make:
MURAT KOÇ wrote:
Version is PostgreSQL 9.0.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc
(GCC) 4.1.2 20080704 (Red Hat 4.1.2-51), 64-bit.
We set logging parameters as below for DDL DML Logging:
logging_collector = on
log_statement = mod
log_line_prefix =
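The prefix value itself was cut off above; for completeness, a commonly used setting (this exact string is an assumption, not what was posted):

```
# postgresql.conf fragment: timestamp, PID, and user@database on every line
logging_collector = on
log_statement = 'mod'
log_line_prefix = '%t [%p] %u@%d '
```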
Hello Everyone,
Can someone please tell me whether a pglesslog release exists for
PG-9.0?
I only see a beta version (1.4.2) for PG 9.0, released some time ago.
Is anyone using pg_lesslog_1.4.2_pg90_beta.tar for PG-9.0 in production
successfully?
Can we use the above version on
Hello,
I've spent a couple of hours trying out the WAL archiving functionality on
PostgreSQL 9.1 (running on Mac OS X). I turned on all the needed options as
specified in the documentation:
wal_level = archive
archive_mode = on
archive_command = 'test ! -f /Volumes/baza/%f && cp %p /Volumes/baza/%f'
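Once the server has been restarted with those settings, archiving can be exercised by forcing a WAL segment switch (on 9.1 the function is pg_switch_xlog; the archive path is the one from the post):

```shell
# Complete the current WAL segment so archive_command fires,
# then check that the segment file actually arrived in the archive.
psql -c "SELECT pg_switch_xlog();"
ls -l /Volumes/baza/
```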
Dear friends,
I have a somewhat unstable link between two different locations
with OpenVPN established and working.
Now, I've configured PostgreSQL 9.0.5 for asynchronous replication.
This morning I got the following message on Slave PostgreSQL log:
Florian Schwendener wrote:
[has problems building odbc_fdw]
Oh, didn't see that! Now it says:
root@ubuntu:/home/user/Downloads/odbc_fdw-0.1.0#
PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1
make: Nothing to be done for `all'.
I remember trying a few options with the make command. Is it
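"Nothing to be done for `all'" usually means the module is already built; a plausible next step (paths taken from the session above, the install/clean targets are standard PGXS targets) is to install it, or to force a rebuild:

```shell
cd /home/user/Downloads/odbc_fdw-0.1.0
# install the already-built module into the PostgreSQL found first in PATH
PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1 install
# or force a full rebuild first
PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1 clean all
```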
Thanks for the reply.
If we set log_statement = 'all', every SQL statement will be logged and the
log file will grow very quickly (including unnecessary SQL statements).
We don't want all SQL statements to be logged, so we are keeping the logging
settings from my previous mail (log_statement = 'mod').
At 16:27 28/11/2011, you wrote:
Hi there!
I built the current PostgreSQL 9.1.1 sources under Ubuntu 11.04 (in
a VMware under Win7).
I followed the steps in this guide:
www.thegeekstuff.com/2009/04/linux-postgresql-install-and-configure-from-source
It seems to work (I can run the server and
On Nov 30, 2011, at 5:02 AM, Edson Richter wrote:
I assume that the OpenVPN got disconnected for a few seconds, and came back
again.
My question is: assuming I have enough wal segments on Master side, does the
Slave get synchronized automatically after the connection is reestablished,
MURAT KOÇ wrote:
If we set log_statement = 'all', every SQL statement will be logged and the
log file will grow
very quickly (including unnecessary SQL statements).
We don't want all SQL statements to be logged, so we are keeping the logging
settings from my previous mail
(log_statement =
BK wrote:
I've spent a couple of hours trying some WAL archiving functionality
on PostgreSQL 9.1 (running on Mac
OS X). I turned on all the needed options as specified in the
documentation:
wal_level = archive
archive_mode = on
archive_command = 'test ! -f /Volumes/baza/%f && cp %p
Hi all,
I have taken a base backup of my master server using the pg_basebackup command
as below:
pg_basebackup -D /tmp/PostgresBackup/ -Ft -Z 9 -l masterbackup -h
localhost -U replication -w
The above created 4 tar files, namely: 16394.tar.gz, 16395.tar.gz,
16396.tar.gz, and base.tar.gz.
I do know that
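For reference, a sketch of how such a backup is typically unpacked (the numeric tarballs are tablespaces, named by OID; the restore path is an assumption):

```shell
# base.tar.gz is the main data directory; each NNNNN.tar.gz is one tablespace
mkdir -p /var/lib/postgresql/restore
tar -xzf base.tar.gz -C /var/lib/postgresql/restore
# each tablespace tarball must be unpacked at the location its symlink
# in restore/pg_tblspc points to (inspect with: ls -l restore/pg_tblspc)
```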
Hi,
An application server performs a lot of small inserts on several tables
(a kind of user-activity logging) in my database. These inserts are
creating a lot of (if not the most of) the traffic on the server. The
server has a hot standby replica working through a very slow channel
between them.
When
Hi Albe,
On Nov 30, 2011, at 2:31 PM, Albe Laurenz wrote:
Verify the current setting with
SELECT setting, source, boot_val, reset_val,
sourcefile, sourceline
FROM pg_settings WHERE name = 'wal_level';
If the setting is not right (which is likely the case), try to find out
the
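If that query shows wal_level is still 'minimal', the fix is to edit postgresql.conf and restart, since wal_level cannot be changed with a reload (paths below are examples):

```shell
# find the line, set it to:  wal_level = archive
grep -n 'wal_level' /usr/local/pgsql/data/postgresql.conf
# a restart (not just a reload) is required for wal_level changes
pg_ctl -D /usr/local/pgsql/data restart
```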
On 30 November 2011, 17:23, BK wrote:
Hi Albe,
On Nov 30, 2011, at 2:31 PM, Albe Laurenz wrote:
Verify the current setting with
SELECT setting, source, boot_val, reset_val,
sourcefile, sourceline
FROM pg_settings WHERE name = 'wal_level';
If the setting is not right (which is
On 30-11-2011 11:17, John DeSoi wrote:
On Nov 30, 2011, at 5:02 AM, Edson Richter wrote:
I assume that the OpenVPN got disconnected for a few seconds, and came back
again.
My question is: assuming I have enough wal segments on Master side, does the
Slave get synchronized automatically
On 11/30/2011 01:43 PM, Tomas Vondra wrote:
On 30 November 2011, 17:23, BK wrote:
Hi Albe,
On Nov 30, 2011, at 2:31 PM, Albe Laurenz wrote:
Verify the current setting with
SELECT setting, source, boot_val, reset_val,
sourcefile, sourceline
FROM pg_settings WHERE name = 'wal_level';
On Tue, 2011-11-22 at 15:55 +, Gregg Jaskiewicz wrote:
for the future it is better to just use the text type, and:
CHECK (length(field) <= 35);
Thanks to all for the responses.
The above seems a prudent way to go in the future.
My assumption is that converting varchar(n) to text would still force a
Reid Thompson reid.thomp...@ateb.com writes:
My assumption is that converting varchar(n) to text would still force a
re-write of the table? i.e. currently there's no officially 'safe' way
to convert the field type w/o incurring a table re-write.
If you do it through ALTER TABLE, yes. Since
Had to restart the import. This time, I tried with a smaller initial disk
size (1 GB) and extended it dynamically. That did not cause any problems.
A different reason might be that I remounted the volume during
the last update to deactivate buffer flushing. Maybe a bad combination.
Let's
I haven't had a chance to experiment with the SET STATISTICS, but that
got me going on something interesting...
Do these statistics look right?
# SELECT attname, n_distinct, most_common_vals, histogram_bounds FROM
pg_stats WHERE tablename = 'cards';
...
card_set_id 905
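For the record, the statistics target is raised per column and only takes effect at the next ANALYZE; a sketch using the table and column shown above (the target of 1000 is an example value):

```shell
# raise the per-column statistics target, then re-gather statistics
psql -c "ALTER TABLE cards ALTER COLUMN card_set_id SET STATISTICS 1000;"
psql -c "ANALYZE cards;"
```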
Oh, didn't see that! Now it says:
root@ubuntu:/home/user/Downloads/odbc_fdw-0.1.0#
PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1
make: Nothing to be done for `all'.
I remember trying a few options with the make command. Is it maybe
already built?
On 30.11.2011 09:46, Albe
Hi Laurenz
Thank you for your help. As I know little about Linux and only a bit
about makefiles, I really don't know whether
I'm doing the right thing. I've typed this:
root@ubuntu:/home/user/Downloads/odbc_fdw-0.1.0#
PATH=/usr/local/pgsql/bin/:$PATH make USE_PGXS=1 make
make: *** No rule to
On Tue, Nov 29, 2011 at 7:49 AM, Heiko Wundram modeln...@modelnine.org wrote:
Hello!
Sorry for that subscribe post I've just sent, that was bad reading on my
part (for the subscribe info on the homepage).
Anyway, the title says it all: is there any possibility to limit the number
of
On 29.11.2011 14:49, Heiko Wundram wrote:
Hello!
Sorry for that subscribe post I've just sent, that was bad reading on my
part (for the subscribe info on the homepage).
Anyway, the title says it all: is there any possibility to limit the
number of connections that a client can have
There's no easy, standard way of doing this in Postgres.
Before we go into workarounds - what's the underlying OS?
2011/11/29 Heiko Wundram modeln...@modelnine.org:
Hello!
Sorry for that subscribe post I've just sent, that was bad reading on my
part (for the subscribe info on the homepage).
On Wed, Nov 30, 2011 at 4:19 PM, panam pa...@gmx.net wrote:
Had to restart the import. This time, I tried with a smaller initial disk
size (1GB) and extended it dynamically. It did not cause any problems.
A different reason might be, that I remounted the volume in between during
the last
On 30.11.2011 23:22, Tyler Hains wrote:
I haven't had a chance to experiment with the SET STATISTICS, but that
got me going on something interesting...
Do these statistics look right?
# SELECT attname, n_distinct, most_common_vals, histogram_bounds FROM
pg_stats WHERE tablename = 'cards';
On 11/30/2011 10:32 PM, Sergey Konoplev wrote:
Would it be more compact, from the point of view of streaming
replication, if we made the application accumulate changes and do one
COPY instead of lots of INSERTs, say, once a minute? And if so,
how can we estimate the effect approximately?
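One way to batch (a sketch; the table and column names are made up): accumulate rows client-side into a tab-separated file and ship it once a minute with a single COPY, which produces a more compact WAL stream than row-by-row INSERTs:

```shell
# accumulate a minute's worth of events as tab-separated rows ...
printf '1\tlogin\n2\tclick\n' > /tmp/activity_batch.tsv
# ... then load them in one statement instead of many INSERTs:
# psql -c "\copy activity_log(user_id, action) from '/tmp/activity_batch.tsv'"
```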
On Nov 30, 2011, at 18:44, Craig Ringer ring...@ringerc.id.au wrote:
On 11/30/2011 10:32 PM, Sergey Konoplev wrote:
Would it be more compact, from the point of view of streaming
replication, if we made the application accumulate changes and do one
COPY instead of lots of INSERTs, say, once a
Hello,
I'm trying to use a domain to define a data-type constraint, let's say
a hypothetical uk_post_code with the pattern LNNLL. I'd enforce no
whitespace, all uppercase.
I would also need a way to normalize before validating: given an input
such as w3 6bq, normalize it to W36BQ before trying to
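A domain CHECK can only validate its input, not rewrite it, so the normalization needs a separate function applied before the cast; a sketch (the domain name is from the post, the function is hypothetical):

```shell
psql <<'SQL'
-- pattern LNNLL: one letter, two digits, two letters, no whitespace
CREATE DOMAIN uk_post_code AS text
    CHECK (VALUE ~ '^[A-Z][0-9][0-9][A-Z][A-Z]$');
-- strip spaces and uppercase before casting into the domain
CREATE FUNCTION normalize_post_code(text) RETURNS uk_post_code AS $$
    SELECT upper(replace($1, ' ', ''))::uk_post_code;
$$ LANGUAGE sql IMMUTABLE;
-- SELECT normalize_post_code('w3 6bq');  -- W36BQ
SQL
```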
Daniele Varrazzo daniele.varra...@gmail.com writes:
I'm trying to use a domain to define a data-type constraint, let's say
a hypothetical uk_post_code with the pattern LNNLL. I'd enforce no
whitespace, all uppercase.
I would also need a way to normalize before validating: given an input
such as
I created a special custom aggregate function to append arrays, defined as:
CREATE AGGREGATE array_accum (anyarray)
(
sfunc = array_cat,
stype = anyarray,
initcond = '{}'
);
On arrays of common types it works without any problems:
SELECT array_accum(i) from (values (ARRAY[1,2]),
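A complete invocation of that aggregate on plain integer arrays, for reference (the second VALUES row is an assumption, since the post is truncated):

```shell
psql -c "SELECT array_accum(i) FROM (VALUES (ARRAY[1,2]), (ARRAY[3,4])) AS t(i);"
# expected result: {1,2,3,4}
```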
Do you have tablespace directories with a symlink to the data directory?
Thanks
VB
On Wed, Nov 30, 2011 at 7:42 PM, Samba saas...@gmail.com wrote:
Hi all,
I have taken a base backup of my master server using pg_basebackup command
as below:
pg_basebackup -D /tmp/PostgresBackup/ -Ft -Z 9