On Mon, 12 Aug 2013 16:05:04 -0400
Bruce Momjian wrote:
> On Mon, Aug 12, 2013 at 08:33:04AM -0700, Joshua D. Drake wrote:
> > Put the pg_xlog on spindles, they are more than fast enough and
> > won't eat up the write life of your SSDs.
> Given its small size and need for fast fsync, I have WAL
On Mon, 12 Aug 2013 11:01:09 -0500 (CDT)
Scott Whitney wrote:
> When you say "16 10K drives," do you mean:
I mean 8 RAID-1 pairs with data striped across the pairs. The Linux
software RAID "offset" scheme is described here:
http://www.ilsistemista.net/index.php/linux-a-unix/35-linux-software-r
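The "offset" scheme described at that link maps onto mdadm's RAID-10
o2 layout. A hedged sketch of the array creation; the device names and
chunk size are invented for illustration, not from the original post:

```shell
# 16 drives in one RAID-10 array using the offset (o2) data layout,
# i.e. mirrored pairs with stripes rotated across the pairs.
# /dev/sd[b-q] and the 256K chunk size are hypothetical.
mdadm --create /dev/md0 --level=10 --layout=o2 \
      --raid-devices=16 --chunk=256 /dev/sd[b-q]
```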
Hi,
We run a fairly write-intensive workload and are looking at upgrading
our Pg servers. (PostgreSQL 9.1; no practical way to upgrade to 9.2 for
a while because we use what's packaged with Debian.)
I'm considering the following configuration:
Dual 4-core Intel CPU (E5620 at 2.4GHz)
192G
From: German Becker
Subject: Re: archive falling behind
> Actually this seems like a very strange filesystem/hw problem. The
> wal segments keep "changing" even after I stopped the database and
> no one is supposedly accessing it:
I've seen this before. It was bad RAM. I bet it's a hardware problem.
On Wed, 24 Apr 2013 14:01:13 +0100
Simon Riggs wrote:
[How to ensure query on hot-standby is not canceled by a
vacuum cleanup on the master.]
>> do I need to use UPDATE or INSERT to ensure that a vacuum cleanup
>> isn't applied until the transaction is finished?
> SELECT is sufficient
Great!
Hi,
I ran a query on a hot-standby (PostgreSQL 9.1) and it was canceled as
per the documentation about vacuum cleanups on the master.
Second time testing, I started a transaction on the master (with
BEGIN) but my query on the hot-standby was still canceled.
Is that because a transaction doesn't
Hi,
I have an application that creates a daily table from a "prototype"
table, so something like:
CREATE TABLE data_20130226 (LIKE data_prototype INCLUDING DEFAULTS
INCLUDING CONSTRAINTS INCLUDING INDEXES);
It would be really nice to be able to add:
INCLUDING TABLESPACE
so that the daily table
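In the meantime, one hedged workaround is to set the tablespace
explicitly right after creation; the tablespace name below is invented
for the example:

```sql
CREATE TABLE data_20130226 (LIKE data_prototype
    INCLUDING DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES);
-- Move the new table to the desired tablespace by hand.  Note that
-- this does not move its indexes; each one needs its own
-- ALTER INDEX ... SET TABLESPACE.
ALTER TABLE data_20130226 SET TABLESPACE daily_space;
```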
Ray Stell wrote:
>> The machine had bad RAM; we were getting undetected/uncorrected single-bit
>> errors creeping through!
> who's the machine/memory vendor?
I don't know exactly; it's a colocated machine that we don't own.
dmidecode (trimmed down) says:
System Information
Manufacturer
Hi,
Back in October 2009, I reported on strange warm-standby problems in
this thread: http://archives.postgresql.org/pgsql-admin/2009-10/msg00170.php
Just in case anyone still cares or is wondering, we found the problem.
The machine had bad RAM; we were getting undetected/uncorrected single-bit
Kevin Grittner wrote:
>> shared_buffers = 24MB
> You should probably set that higher.
Nah. This machine is totally bored; tweaking PostgreSQL would be pointless
since it's so under-utilized.
>> archive_command = '/usr/bin/wal_archive_command.pl %p'
> It would probably be safer to pass in %f, t
Tom Lane wrote:
> So, when it archives successfully the second time, which if either of
> the two mismatched sha1's proves to have been correct?
The one on the master server (lines wrapped for readability).
"local" refers to the master server, and "remote" to the standby
server.
Oct 27 14:26:35
Kevin Grittner wrote:
> Just out of curiosity, could you show us the non-comment portions of
> your postgresql.conf file?
Sure! Here they are.
Regards,
David.
=
data_directory = '/var/lib/postgresql/8.3/main'
hba_file
Hi,
In an effort to track down the problem, I switched to using rsync rather
than scp to copy the files. I also take the SHA1 hash on each end, and
have my archiving script exit with a non-zero status if there's a mismatch.
Sure enough:
Oct 27 14:26:35 colo2vs1 canit-failover-wal-archive[29118]:
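The approach described (copy, then compare checksums end to end, and
fail the archive on mismatch) can be sketched as a minimal
archive_command wrapper. Host name, paths, and tool choices are
assumptions, not the original Perl script:

```shell
#!/bin/sh
# Hedged sketch of a checksum-verifying archive_command.
# Invoke from postgresql.conf as: archive_command = '/path/to/this.sh %p'
set -e
WAL="$1"                                   # full path, passed as %p
NAME=$(basename "$WAL")
DEST=/var/lib/postgresql/wal_archive
rsync -a "$WAL" "standby:$DEST/$NAME"
LOCAL=$(sha1sum "$WAL" | awk '{print $1}')
REMOTE=$(ssh standby "sha1sum $DEST/$NAME" | awk '{print $1}')
# Exit non-zero on mismatch so PostgreSQL retries the segment.
[ "$LOCAL" = "$REMOTE" ]
```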
Tom Lane wrote:
> What about the other direction: the script invoked by the archive
> returns "done" before the bits have all been shipped?
Do you mean the wal_archive_command? It waits for scp to finish.
It's written in Perl; here is the relevant part.
Regards,
David.
[Stuff deleted...]
my
Tom Lane wrote:
> No; there's no WAL change between 8.3.7 and 8.3.8. What seems more
> likely is that you're somehow shipping the WAL files before they're
> quite finished.
I doubt it. Our archive_command works like this:
1) scp the file over to the backup server as root. It's stored in a
fil
Hi,
I have one PostgreSQL 8.3 machine feeding WAL files to another PostgreSQL 8.3
machine that's running in recovery mode. However, fairly often (every
few days), the standby machine breaks out of recovery mode with log messages
like:
2009-10-23 21:47:40 EDT LOG: incorrect resource manager data
Hi,
I'm running PostgreSQL 8.3. Suppose I have master server A shipping logs
to backup server B. At some time in the past, I did a full backup from A
to B, and now B is running in recovery mode, happily consuming WALs.
Q1. If I stop and restart master server A gracefully, do I need to do
anyth
Alan McKay wrote:
> So is this not simply easier to implement with a library of functions
> to convert a string to binary and back?
The representation format has nothing to do with whether a range
search can efficiently use an index, though.
Regards,
David.
--
Sent via pgsql-admin mailing list (pg
I've done some experiments; here are my results for posterity and Google:
I installed the ip4r exension and created the following database:
CREATE TABLE ip4r_networks (
iprange ip4r,
satellite integer
);
CREATE INDEX foo2 ON ip4r_networks USING gist (iprange);
CREATE TABLE networks (
i
Hi,
I have a table like this:
CREATE TABLE networks (
iprange CIDR,
datum INTEGER
);
and I want to efficiently support queries like this:
SELECT * FROM networks WHERE '128.3.4.5' <<= iprange;
There doesn't seem to be any indexing mechanism in core PostgreSQL that
supports this;
> From: "Phillip Smith" <[EMAIL PROTECTED]>
[Automatically-added disclaimer removed]
I run a mailing list and on the signup page, I have this notice:
NOTE: By posting to the MIMEDefang list, you agree that any
disclaimer, legal boilerplate, or restrictions added to your posting
are null and void.
Hi,
We have two PostgreSQL 8.2 database servers: A master and a warm-standby
server. We plan on making an initial backup of the master onto the standby
and then use log-shipping with "real-time" WAL-file processing as described
in http://www.postgresql.org/docs/8.2/static/warm-standby.html
My qu
Hello,
We have a customer running a rather large installation. There are
about 15 machines talking to a PostgreSQL database server. At any
given time, each machine has between 20 and 120 connections to
the PG server, and the aggregate query rate probably hovers at over
1000 per second.
Ou
Alvaro Herrera wrote:
> Connect in standalone mode?
Does not work. Tom Lane replied with the only thing that does work,
which we independently discovered about 30 seconds before hearing from
Tom. :-)
The solution is to modify the PostgreSQL source code to skip the check,
and run the modified binary.
Hi,
Supposing someone stupidly did this:
UPDATE pg_database SET datallowconn = false;
and then closed all the connections to the server.
Is there any way to recover short of nuking everything and
restoring from a backup dump? :-(
Regards,
David.
Hi,
Does anyone run a very busy PostgreSQL database, with lots of read
and write operations that run 24x7? (We're talking on the
neighbourhood of 40 to 60 queries/second, with probably 5% to 10% of
them being INSERT or UPDATE.)
Some of our clients run such a DB, and the nightly VACUUM slows th
Hi, Jochen.
> However, we frequently observe the following message in the log files:
>   PGRES_FATAL_ERROR: ERROR: out of free buffers: time to abort!
> Any hints about what's going on? Suggestions?
Two things might help:
1) Upgrade to the latest CanIt (2.1a), because earlier versions seemed
to tri
On Wed, 9 Jun 2004, Scott Marlowe wrote:
> If you put the application's data into a specific schema, then you can
> dump just that schema with the -n switch...
Thanks. That's a solution for 7.4, but some of our installed base
(especially the older ones with large DB's that cause the problem) are
Hi,
pg_dump can be used to dump an entire database, or just a single
table. Is there a way to make a consistent dump of more than one
table, but less than all of the tables in the database? Doing a bunch
of single-table pg_dumps isn't really an option, because some tables
may change during the d
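For the record, a single pg_dump run reads everything from one
snapshot, and later versions (8.2 and up) accept the -t switch more
than once, which gives a mutually consistent dump of a chosen set of
tables. Database and table names below are hypothetical:

```shell
# Both tables come from the same snapshot, so they are consistent
# with each other even if the database changes during the dump.
pg_dump -t orders -t order_items mydb > two_tables.sql
```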
Hi,
I've been running some tests, and it seems that PostgreSQL has very
bad behavior when multiple clients try to update the same row at the
same time. I realize of course that concurrent updates are not a Good
Thing, but PostgreSQL's reaction to it makes them very dangerous.
I would expect that
On Thu, 5 Feb 2004, Edoardo Ceccarelli wrote:
> Things are worse only for seqscan; when it uses an indexscan, the timing is good.
It might just be that running VACUUM flushed the operating system's
buffer cache.
I always try running two or three EXPLAIN ANALYZE's in a row for the
same query. Usually, t
On Mon, 29 Dec 2003, Charles Haron wrote:
> I want to be able to run the above command as a cron job. I created a
> script with the following command, but I get "ERROR: Attribute 'f' not
> found":
> su - postgres -c 'psql -c "DELETE FROM prg_dates_members WHERE confirm =
> 'f';" comfire'
Why n
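The error comes from shell quoting, not from psql: the single quotes
around f sit inside an outer single-quoted string, so the shell
consumes them and the server sees confirm = f, which it parses as a
column name. A sketch of one fix, using double quotes for the SQL
(same table and database names as in the question):

```shell
# Double-quote the SQL so the inner single quotes around 'f' survive
# the shell and reach psql intact:
su - postgres -c "psql -c \"DELETE FROM prg_dates_members WHERE confirm = 'f';\" comfire"
```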
On Sat, 27 Dec 2003, Sai Hertz And Control Systems wrote:
> select exp(3.3234) as a2144
> Gives me
> 27.754555808589792
Right. That's e^3.3234
Try:
select 10^3.3234;
or:
select dpow(10, 3.3234);
or even:
select exp(3.3234 * ln(10.0));
--
David.
Hi,
In the HISTORY file for PostgreSQL 7.4, it says:
Make free space map efficiently reuse empty index pages, and other
free space management improvements.
In previous releases, B-tree index pages that were left empty
because of deleted rows could only be
> On Mon, 2003-11-10 at 16:58, Gautam Saha wrote:
> Hi:
>
> Is there a ER diagram tool someone can recommend for PostgreSQL?
DIA isn't too bad:
http://www.lysator.liu.se/~alla/dia/
This goes from DIA to PostgreSQL:
http://freshmeat.net/projects/dia2postgres/?topic_id=66%2C259%2C916
And this go
On Wed, 5 Nov 2003, Juan Miguel wrote:
> Therefore, do you know a better Open Source DBMS than Access, that is
> easy to install and integrate with your applications ?
What is your target environment?
We sell a commercial program (http://www.canit.ca) that uses PostgreSQL
internally. For our Re
Hi,
On Red Hat 9, when I run the pg_dump command, it uses the
"INSERT " dump format. I thought it only did that if you used the
--inserts option. How do I force it NOT to use that format?
Regards,
David.
On Sat, 18 Oct 2003, Sam Carleton wrote:
> I am working on creating my first database in Postgres. In the
> script that will create all the tables for the firs time, I would
> like to have a conditional statement to delete a table, if it
> already exists.
As far as I know, there is no way to do
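That was true at the time; for later readers, PostgreSQL 8.2 added an
IF EXISTS clause that does exactly this (table name hypothetical):

```sql
-- Issues a notice instead of an error when the table is absent.
DROP TABLE IF EXISTS my_table;
```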
On Fri, 12 Sep 2003, Andrew Sullivan wrote:
> More or less, yes. The significant part here is that the postmaster
> won't notice that the client is gone until it returns from the work
> it was trying to do. It'll eventually come back, but it'll take some
> time. How low does your contention nee
On Thu, 4 Sep 2003, Sam Barnett-Cormack wrote:
> It might be worth racking your brains to think of other ways. Query
> timeouts?
Either way, if the back-end is waiting on a semaphore, will it time
out the query and terminate? The problem is lots of waiting back-end
processes.
> why not make it
On Thu, 4 Sep 2003, Tom Lane wrote:
> Any process that arrives at the row and finds it already modified by
> some concurrent transaction will wait for that concurrent transaction
> to complete.
Right. And it waits on a semaphore, right? So there's no way to
use select() to wait for EITHER the s
Hi,
I have a weird problem and want to know if I understand what's happening.
I have a table like this:
create table statistics (
    date  DATE DEFAULT current_date,
    key   TEXT,
    value INTEGER DEFAULT 0,
    UNIQUE(date, key)
);
and I have a
Hi,
I have a customer trying to compile PostgreSQL 7.3.1 on HP-UX with GCC, and
he can't do it. He gets these configure errors:
checking build system type... hppa2.0w-hp-hpux11.11
checking host system type... hppa2.0w-hp-hpux11.11
checking which template to use... hpux
[...]
checking sys/socket.
On Mon, 12 May 2003, Naomi Walker wrote:
> We would be interested in replication, so reporting could be done against a
> different server than production.
And I'm interested in replication for failover purposes. Automatic
hot-failover isn't really required for my application, but a "warm"
failov
On Tue, 14 Jan 2003, Vida Luz Arista wrote:
> I am evaluating PostgreSQL for e-government use; I need to know
> which companies and countries are using this database, and for what
> purposes.
My anti-spam solution at http://www.canit.ca/ is built around
PostgreSQL. The largest installati
On Mon, 13 Jan 2003, Christian Brink wrote:
> I am looking for any discussions on when to use BLOB's as opposed to storing
> files on a filesystem. If someone knows of a published discussion or has
> some pearls of wisdom on this I would greatly appreciate it.
Furthermore, I would appreciate know
Hi,
I have a customer running PostgreSQL 7.2.3 on Solaris 9 with
PHP 4.3.0, and he's getting this error:
could not receive data from server: No such file or directory
grepping through the libpq source, it appears that the only way this
message could happen is if recv() returns -1 wit
On Sun, 5 Jan 2003, Jack Flak wrote:
> So, once again, here's my question: how do I go about deleting the
> duplicate entries WITHOUT also deleting the originals?
Maybe a dump with some awk/perl magic followed by a restore might be
the easiest way. :-(
"pg_dump -a -D" might give output that is
On Fri, 3 Jan 2003, Robert Treat wrote:
> The purpose of a regular aka "lazy" vacuum is to mark dead tuples
> generated from updates and deletions as reusable by future inserts and
> updates. It doesn't recover any disk space, but allows currently wasted
> space to be reused. "Full" vacuums, otoh,
On Tue, 31 Dec 2002, Rob Abernethy IV wrote:
> Is there any problem with this line from my pg_hba.conf file:
> # TYPE DATABASE USER IP-ADDRESS IP-MASK METHOD
> local all all 0.0.0.0 0.0.0.0 md5
A type of "local" doesn't take IP-ADDRESS or IP-MASK. It's a UNIX-domain
socket,
On Mon, 23 Dec 2002, Timothy D McKernan wrote:
> Before we push it out into our production environment I wanted to make
> one last check with regards to bugs - is anybody aware of any bugs that
> have popped up in 7.3? I haven't seen any unexpected issues on the
> mailing lists but I wanted to ma
On Sat, 14 Dec 2002, Jesus Sandoval wrote:
> following:
> 1) Backup the database data, pg_dump is of no use because if the table design
> changed then the COPY table FROM stdin produced by pg_dump needs to be
> modified to accommodate the space for the new columns.
Actually, you can use pg_dump wi
On Mon, 2 Dec 2002, Joel Burton wrote:
> Looks right to me: index scan for the less-common option, seqscan for
> the most common. Why don't you think this, as a btree, will work for
> you?
No, I'm sure a btree will work. However, won't the index be
inefficient (i.e., very flat) if there are many
Hi,
I have a table with a column called "state". Each row can be in one
of four states, let's call them 'new', 'pending', 'ok', and 'bad'.
On average, about 95% of the rows will be 'bad', with the remaining
5% being in one of the other three states. If the table has 50K rows
and I just want to p
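A partial index matches the skew described above: if the queries look
for the rare non-'bad' rows, indexing only those rows keeps the index
tiny and avoids a seqscan over the 95% that is 'bad'. A sketch,
assuming a hypothetical table name and that the queries filter with a
compatible predicate:

```sql
-- Indexes only the ~5% of rows that are not 'bad'; a query such as
--   SELECT * FROM mytable WHERE state = 'pending';
-- can use it, since that condition implies state <> 'bad'.
CREATE INDEX mytable_active_state_idx ON mytable (state)
    WHERE state <> 'bad';
```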
On Tue, 8 Oct 2002, Elielson Fontanezi wrote:
> to identify Linux user which starts the shell script to avoid password
> prompt.
There's a virtually undocumented environment variable: PGPASSWORD
PGPASSWORD="secret-password"
export PGPASSWORD
# Now pg_dump will not prompt
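An alternative that keeps the secret out of the process environment is
the ~/.pgpass file, one host:port:database:user:password line per
entry; libpq ignores the file unless its mode is 0600. The entry
values below are placeholders:

```shell
# Hypothetical entry; psql and pg_dump read it automatically.
echo 'localhost:5432:mydb:postgres:secret-password' >> ~/.pgpass
chmod 600 ~/.pgpass
```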