Are there any tricks to speeding up pg_dump aside from doing them from a
replicated machine?
I'm using -Fc with no compression.
> 1) The 250 million records are currently wiped and reinserted as a
> "daily snapshot", and the fastest way I have found, COPY, to do this from
> a file is nowhere near fast enough. SQL*Loader from Oracle
> does some things that I need, i.e. Direct Path access to the db files
> (skipp
> Each COPY only hits one CPU for about 2.1%, but anything over about 5
> kicks the load avg up.
Sounds like disk IO is slowing down the copy then.
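If disk IO is the bottleneck, the usual trick is to do the wipe and
reload in a single transaction with no indexes in place, rebuilding them
once at the end. A minimal sketch, assuming hypothetical table, index,
and file names (TRUNCATE is transaction-safe as of 7.4):

BEGIN;
DROP INDEX daily_snapshot_idx;                -- cheaper to rebuild than to maintain row-by-row
TRUNCATE daily_snapshot;                      -- much faster than DELETE for a full wipe
COPY daily_snapshot FROM '/tmp/snapshot.dat';
CREATE INDEX daily_snapshot_idx ON daily_snapshot (record_id);
ANALYZE daily_snapshot;
COMMIT;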
> I'll get some EXPLAIN ANALYZE output and table structures out there pronto.
On Thu, 2005-01-06 at 12:35 -0500, Dave Cramer wrote:
> Reading can be worse for a normalized db, which is likely what the
> developers were concerned about.
To a point. Once you have enough data that you start running out of
space in memory, normalization starts to rapidly gain ground again.
On Thu, 2005-01-20 at 15:36 +0100, Hervé Piedvache wrote:
> On Thursday 20 January 2005 15:24, Christopher Kings-Lynne wrote:
> > > Is there any solution with PostgreSQL matching these needs ... ?
> >
> > You want: http://www.slony.info/
> >
> > > Do we have to backport our development to MySQL for
On Sat, 2005-01-22 at 12:41 -0600, Bruno Wolff III wrote:
> On Sat, Jan 22, 2005 at 12:13:00 +0900,
> Tatsuo Ishii <[EMAIL PROTECTED]> wrote:
> >
> > Probably VACUUM works well for small to medium size tables, but not
> > for huge ones. I'm considering implementing "on the spot
> > salvagi
aster? I've done
If it's for a single request, you cannot get single processors much
faster than the one you describe as having.
Want to send us a few EXPLAIN ANALYZEs of your longer-running queries?
--
Rod Taylor <[EMAIL PROTECTED]>
On Tue, 2005-02-15 at 09:34 +0800, Michael Ryan S. Puncia wrote:
> Hi,
>
> I have 3 tables in the database with 80G of data; one of them is
> almost 40G and the remaining 2 tables have 20G each.
>
> We use this database mainly for queries, and updating is done only
> quarterly and the database
t the
buffers. I would suppose that TOAST doesn't bypass them.
--
Rod Taylor <[EMAIL PROTECTED]>
On Wed, 2005-02-23 at 15:26 -0300, Bruno Almeida do Lago wrote:
> Is there a real limit for max_connections? Here we have an Oracle server with
> up to 1200 simultaneous connections on it!
If you can reduce them by using something like pgpool between PostgreSQL
and the client, you'll save some head
> I get this problem on my dev (Windows/7.4/Cygwin) environment. But now
> I see that it also has this problem in my production env. Yes, I
> thought it was maybe just a Cygwin/Windows problem .. apparently not :-
Care to try again with logging enabled on the PostgreSQL side within the
de
On Wed, 2005-03-16 at 18:58 +0100, Laurent Martelli wrote:
> Consider this query:
>
> SELECT distinct owner from pictures;
The performance has nothing to do with the number of rows returned, but
rather with the complexity of the calculations and the amount of data to
sift through in order to find it.
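To see where the time goes, compare the DISTINCT plan against a GROUP BY
rewrite; on 7.4 the latter can use a HashAggregate instead of a full
sort, which often helps when there are few distinct owners (a sketch
against the pictures table above):

EXPLAIN ANALYZE SELECT DISTINCT owner FROM pictures;
EXPLAIN ANALYZE SELECT owner FROM pictures GROUP BY owner;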
> Any
On Wed, 2005-03-16 at 19:31 +0100, Laurent Martelli wrote:
> >>>>> "Rod" == Rod Taylor <[EMAIL PROTECTED]> writes:
>
> Rod> On Wed, 2005-03-16 at 18:58 +0100, Laurent Martelli wrote:
> >> Consider this query:
> >>
> >>
On Sun, 2005-03-20 at 00:29 -0400, Alvaro Herrera wrote:
> On Sat, Mar 19, 2005 at 07:05:53PM -0500, Tom Lane wrote:
> > Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > > We probably also need multi-table indexes.
> >
> > As Josh says, that seems antithetical to the main point of partitioning,
> >
many rows from other tables.
Those are not represented in the explains at the moment.
--
Rod Taylor <[EMAIL PROTECTED]>
PGP Key: http://www.rbt.ca/rbtpub.asc
what I thought at first, so I dropped the foreign key constraints. The table
> is referenced by 2 tables, one of
> which has around 200 000 records and the other has 0 records.
Hmm... EXPLAIN ANALYZE your select again, but join both of those
referenced tables to the appropriate columns.
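Something along these lines, where the table and column names are
hypothetical stand-ins for the table and its two referencing tables:

EXPLAIN ANALYZE
SELECT m.*
  FROM master_table m
  JOIN big_ref br ON br.master_id = m.id          -- the ~200,000 row table
  LEFT JOIN empty_ref er ON er.master_id = m.id   -- the 0 row table
 WHERE m.id = 12345;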
--
question, or was
it simply unavailable?
--
Rod Taylor <[EMAIL PROTECTED]>
PGP Key: http://www.rbt.ca/rbtpub.asc
tart here. Then
> some questions with sections of the documentation you should go to.
Do you have any examples where this has worked well (for reference)?
The only real example I have is MS's help, which never gave me the right
answer.
--
Rod Taylor <[EMAIL PROTECTED]>
PGP
recently re-arranging things to make
them easier to find, but the content hasn't changed much.
Thanks for your thoughts!
--
Rod Taylor <[EMAIL PROTECTED]>
PGP Key: http://www.rbt.ca/rbtpub.asc
his sort?
>
> Thanks,
> Michael
>
>
> Michael Mattox
> [EMAIL PROTECTED] / http://www.advweb.com/michael
--
Rod Taylor <[EMAIL PROTECTED]>
PGP Key: http://www.rbt.ca/rbtpub.asc
Depending on the data in the table, the index (ms.datex, monitorx) may
give better results alongside the single index on (ms.monitorx) you
currently have. It's not very likely, though.
--
Rod Taylor <[EMAIL PROTECTED]>
PGP Key: http://www.rbt.ca/rbtpub.asc
rstatusitemx msi
where monitorx.idx = 'M-TEST_1444-TEST_00_10560561260561463219352'
Are you sure you don't have any duplicated constraints from pulling
information in from other tables that you don't need?
Removing some of those nested loops would make a significant impact.
21.36'
AND m.idx = 'M-TEST_1444-TEST_00_10560561260561463219352'
ORDER BY ms.datex DESC;
--
Rod Taylor <[EMAIL PROTECTED]>
PGP Key: http://www.rbt.ca/rbtpub.asc
? What does top say
about cached / buffered data (numbers)?
I see you reduced the random_page_cost to 1.5. Why did you do this (how
is your disk subsystem configured)?
--
Rod Taylor <[EMAIL PROTECTED]>
PGP Key: http://www.rbt.ca/rbtpub.asc
'::timestamp without time zone))
The above index scan is taking the vast majority of the time (nearly 15
seconds of the 16-second total -- stop thinking about sorts!). What
happened to the index on monitorx and datex?
--
Rod Taylor <[EMAIL PROTECTED]>
PGP Key: http://www.rbt.ca/rbtpub.asc
from the 1st is no longer useful (being queried frequently),
drop that index, and recreate it WHERE datex >= .
This will make a significant improvement but will sacrifice the
performance on queries for older timeframes.
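A sketch of that kind of partial index, with hypothetical index names
and cutoff date:

DROP INDEX monitor_status_datex_idx;
CREATE INDEX monitor_status_recent_idx
    ON monitor_status (monitorx, datex)
 WHERE datex >= '2003-06-01';
ANALYZE monitor_status;

Queries whose datex restriction falls at or after the cutoff can use the
much smaller index; older timeframes cannot.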
Since you're doing frequent updates, don't forget to drop any
rganizing your disk array (WAL on one drive by itself, etc.)
REINDEX and VACUUM FULL are just as good.
--
Rod Taylor <[EMAIL PROTECTED]>
PGP Key: http://www.rbt.ca/rbtpub.asc
Being off by 5 minutes probably doesn't matter for a report averaging
over the day, week, or month.
--
Rod Taylor <[EMAIL PROTECTED]>
PGP Key: http://www.rbt.ca/rbtpub.asc
they would never get reused. (Typically because the index is
> on a sequenced field where old pages never get referenced again...)
I lost the thread, but I believe that I meant that REINDEX + VACUUM FULL
== database dump / reload. The results should be similar.
--
Rod Taylor <[EMAIL PROTE
VACUUM
VERBOSE.
Please attach your postgresql.conf file.
Please include the header lines from 'top' output (memory cache / buffer
lines in particular).
--
Rod Taylor
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
PGP Key: http://www.rbt.ca/rbtpub.asc
> > iostat reports heavy disk usage?
Do you know of another way to report disk activity?
> I don't know what that is. I don't have an iostat utility on the
> machine. This is a Debian Linux machine. Is there a package with that
> utility in it?
I don't know. It must be in a package somewhere.
> CREATE VIEW foo AS {complex_slow_query};
>
> SET random_page_cost = 1.5; EXPLAIN ANALYZE SELECT * FROM foo;
>
> Note the time taken. Repeat a few times to get the average.
You pulled everything off disk and tossed it into memory with the first
run, so the results will NOT match your normal situation.
On Thu, 2003-07-03 at 19:05, Sean Chittenden wrote:
> What are the odds of going through and revamping some of the tunables
> in postgresql.conf for the 7.4 release? I was just working with
> someone on IRC and on their 7800 RPM IDE drives, their
> random_page_cost was ideally suited to be 0.32: a
That's one heck of a poor estimate for the number of rows returned.
> -> Seq Scan on mss_fwevent (cost=0.00..223312.60 rows=168478 width=12) (actual
> time=24253.66..24319.87 rows=320 loops=1)
> Unless you provide these, it's difficult to help..
http://archives.postgresql.org/pgsql-performance/2003-05/msg00299.php
Note the thread with Tom and Brian.
On Fri, 2003-07-04 at 09:20, Shridhar Daithankar wrote:
> On 4 Jul 2003 at 9:11, Rod Taylor wrote:
>
> > > Unless you provide these, it's difficult to help..
> >
> > http://archives.postgresql.org/pgsql-performance/2003-05/msg00299.php
>
> Well, e
> Please understand the limits of how much information a consultant can submit
> to an open list like this about a client's confidential information. I've
> answered every question I _can_ answer and when I get hostility in response
> all I can do is sigh and move on.
Is there any chance you coul
> 2. Postgresql uses shared memory, being a process-based architecture. Mysql uses
> process memory, being a threaded application. It does not need kernel settings to
> work and usually works as best it can.
MySQL has other issues with the kernel due to their threading choice
such as memory limits per
> If I connect using -h 127.0.0.1, however, I can _sometimes_ get the
> query to take as long as 1200 msec. The effect is sporadic (of
SSL plays havoc with our system when using local loopback for the host
on both Solaris 7 and 8. It was probably key renegotiation, which 7.4
has addressed.
> I am about to propose a patch that will cause the default shared_buffers
> to be more realistic, say 1000, on machines where the kernel will allow
> it. Not sure if people will let me get away with applying it
> post-feature-freeze, but if so that would change the terms of this
> debate noticeab
> To give you some perspective on the size of the dataset and the
> performance level we are hitting, here are some "good" results based on
> some explains:
Before Tom jumps in taking all the fun out of trying to solve it...
The estimates in the slow queries seem perfectly reasonable. In fact,
On Fri, 2003-07-11 at 11:36, Scott Cain wrote:
> Rod,
>
> I see what you mean about the scan on the feature_pkey taking a long
> time. I tried several things to remedy that. I created an index on
> feature (feature_id,type_id) (which I don't think makes sense since
> feature_id is the primary ke
> ...About one year ago I considered moving to a journaling file system, but
> opted not to because it seems like that's what WAL does for us already. How
> does putting a journaling file system under it add more reliability?
WAL only works if the WAL files are actually written to disk and can be
On Tue, 2003-07-22 at 20:34, Castle, Lindsay wrote:
> Hi all,
>
> I'm working on a project that has a data set of approximately 6 million rows
> with about 12,000 different elements; each element has 7 columns of data.
Are these 7 columns the same for each element?
> units
>
> Thanks,
>
> Lindsay Castle
> EDS Australia
> Midrange & Distributed Tools
> Infrastructure Tools AP
> Ph: +61 (0)8 8464 7101
> Fax: +61 (0)8 8464 2135
On Tue, 2003-07-22 at 21:50, Rod Taylor wrote:
> Ok.. Unless I'm missing something, the data will be static (or near
> static). It also sounds as if the structure is common for elements, so
> you probably only want 2 tables.
I misunderstood. Do what Joe suggested.
Try re-arranging your join structure:
, A_CARS O
  JOIN A_CH_CAR CHT ON (CHT.IDS = O.IDS)
  LEFT OUTER JOIN A_SLUJITELI SLU ON (O.IDS_SLUJITEL = SLU.IDS)
  LEFT OUTER JOIN A_AUTOVOZ AWT ON (O.IDS_AUTOVOZ = AWT.IDS)
  LEFT OUTER JOIN A_COMBOPT CB ON (O.IDS_COMBOPT = CB.IDS)
  LEFT OUTER
> Looks like that IS the case; in fact, it gets that same plan even if I
> don't specify ::country on the country string...
>
> This is obviously something that has changed _big time_ betwixt 7.3 and
> 7.4...
Several issues of this type have been fixed during 7.4, though there are
a few left with
On Fri, 2003-08-08 at 14:53, Andrew Sullivan wrote:
> On Fri, Aug 08, 2003 at 09:40:20AM -0700, Jonathan Gardner wrote:
> >
> > Redhat puts ext3 on by default. Consider switching to a non-journaling FS
> > (ext2?) with the partition that holds your data and WAL.
>
> I would give you exactly the
> > Agreed.. WAL cannot recover something when WAL no longer exists due to a
> > filesystem corruption.
>
> It is true that ext2 isn't good because the file system may not recover,
> but BSD UFS isn't a journalled file system, but does guarantee file
> system recovery after a crash --- it is espec
> specific drives might be a valuable capability. I know that I could go into
> /var/lib/pgsql/data/base and fan the contents out, but this is unwieldy and
> impractical. has any consideration been given to providing a way to manage
> such a deployment?
The ability to take various database objects
On Wed, 2003-08-13 at 11:17, ingrid martinez wrote:
> The queries executed are only inserts; I use a batch of 300 and then
> commit.
>
> insert into FLOWS values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
Any foreign keys on this table? Triggers or Rules?
What kind of hardware do you have?
> Currently there's only a few users in the database for testing purposes,
> and most of the time the user lookup's take 2-3 ms (I have syslog'ing of
> queries and duration turned on), but several times per hour the duration
> for one of the queries is 2-3 seconds (1000 times larger), while the
Couple of questions:
What was the postgresql.conf configuration used? Default?
How many threads of the script ran? Looks like a single user only.
I assume there was nothing else running at the time (cron, sendmail,
etc. were all off?)
Do you know whether the machines were disk I/O bound?
Wa
> I need to increase the overall performance by a factor of 10, while at
> the same time the DB size increases by a factor of 50. e.g. 3000
> inserts/updates or 25,000 selects per second, over a 25GB database with
> most used tables of 5,000,000 and 1,000,000 rows.
Ok.. I would be surprised if yo
On Thu, 2003-08-28 at 12:37, Matt Clark wrote:
> > Ok.. I would be surprised if you needed much more actual CPU power. I
> > suspect they're mostly idle waiting on data -- especially with a Quad
> > Xeon (shared memory bus is it not?).
>
> In reality the CPUs get pegged: about 65% PG and 35% syste
> With regards to other jobs on the server, there is a MySQL server on it
> as well, which from time to time has some multi-second queries generated
> from a webserver also on this host, but the MySQL is running with nice
> 10 (PostgreSQL isn't nice'd).
Do those MySQL queries hit disk hard?
I'
> The references are all to the same table, i.e. they are employee ids, so
> leaving some and not others would make no sense. The table has no
> deletes, a small amount of inserts, and a moderate amount of updates. However
> there are many selects, and it's their performance I am most concerned with.
F
> My question is: is that current peak rate ("300 inserts/updates
> *or* 2500 selects") based upon 1 connection, or many connections?
> With 4 CPUs, and a 4 disk RAID10, I wouldn't be surprised if 4 con-
> current connections gives the optimum speed.
Optimum number of active workers is probably
> i was hoping there was some trickery with sequences that would allow me to
> easily pick a random valid sequence number..?
I would suggest renumbering the data.
ALTER SEQUENCE ... RESTART WITH 1;
UPDATE table SET pkey = DEFAULT;
Of course, PostgreSQL may have trouble with that update due to
e
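If renumbering proves too disruptive, letting the database pick a random
existing row sidesteps the sequence gaps entirely, at the cost of a full
scan and sort (a sketch; table and column names are hypothetical):

SELECT pkey FROM some_table ORDER BY random() LIMIT 1;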
> I uppercased it because config parameters are uppercased in the
> documentation. Do we mention config parameters in any other error
> messages? Should it be lowercased?
How about changing the hint?
Consider increasing CHECKPOINT_SEGMENTS in your postgresql.conf
> Under PG: Data is returned in 26 secs!!
> Under SQL Server: Data is returned in 5 secs.
> Under SQLBase: Data is returned in 6 secs.
> Under SAPDB: Data is returned in 7 secs.
What did you use as the client? Do those times include ALL resulting
data or simply the first few lines?
P
> - the way PostgreSQL expects data to be written to disk without the
>fsync calls for things not to get corrupted in the event of a crash,
>and
If you want the filesystem to deal with this, I believe it is necessary
for it to write the data out in the same order the write requests are
submitted.
On Thu, 2003-09-04 at 13:41, Mary Edie Meredith wrote:
> Our port of OSDL DBT3 test suite to PostgreSQL (see Background
> information below) is nearing completion. We would also like to confirm
> our understanding of an outstanding consistency issue.
>
> We have not been able to do meaningful ker
On Thu, 2003-09-04 at 19:50, Neil Conway wrote:
> On Thu, 2003-09-04 at 13:46, Rod Taylor wrote:
> > Run a VACUUM FULL ANALYZE between runs. This will force a full scan of
> > all data for stats
>
> It will? Are you sure about that?
You're right. According to the do
> How can I improve performance and will version 7.4 bring something
> valuable for my task? Rewrite to some other scripting language is not
> a problem. Trigger is simple enough.
Your best bet is to have additional clients connected to the database
requesting work. Approx NUMCPUs * 2 + 1 seems
On Fri, 2003-09-12 at 12:55, Richard Jones wrote:
> The machine is coming from dell, and i have the option of a
> PERC 3/SC RAID Controller (32MB)
> or software raid.
>
> does anyone have any experience of this controller?
> its an additional £345 for this controller, i'd be interested to know w
On Tue, 2003-09-23 at 20:24, Garrett Bladow wrote:
> Recently we upgraded the RAM in our server. After the install a LIKE query that used
> to take 5 seconds now takes 5 minutes. We have tried the usual suspects, VACUUM,
> ANALYZE and Re-indexing.
>
> Any thoughts on what might have happened?
W
> 20,000 items    takes on average 0.078 seconds/room
> 385,000 items   takes on average 0.11 seconds/room
> 690,000 items   takes on average 0.270 seconds/room
> 1,028,000 items takes on average 0.475 seconds/room
>
> As can be seen the time taken to process each room increas
> I'm running a Datawarehouse benchmark (APB-1) on PostgreSQL. The objective is to
> choose which of the two main dbs (PostgreSQL, MySQL) is fastest. I've run into a
> small problem which I hope could be resolved here.
>
> I'm trying to speed up this query:
>
> select count(*) from actvars, prodleve
On Wed, 2003-10-01 at 08:11, Mindaugas Riauba wrote:
> While writing web application I found that it would
> be very nice for me to have "null" WHERE clause. Like
> WHERE 1=1. Then it is easy to concat additional
> conditions just using $query . " AND col=false" syntax.
>
> But which of the poss
On Wed, 2003-10-01 at 13:30, Dimitri Nagiev wrote:
> Hi all,
>
> I haven't found any official documentation about the postgres sql optimizer
> on the web, so please forgive me if there is such a document and point me to
> the right direction.
>
> I've got the following problem: I cannot make the
On Thu, 2003-10-02 at 14:30, Rong Wu wrote:
> Hi,
>
> I have a select like this:
>
> SELECT MAX(transactionid) FROM cbntransaction WHERE transactiontypeid=0;
For various reasons (primarily MVCC, plus the ability to create custom
aggregates, both of which make it difficult), MAX() is not optimized in
this fashion.
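The standard workaround is to rewrite MAX() as an ORDER BY ... LIMIT 1,
which can walk an index backwards. A sketch, assuming an index on
(transactiontypeid, transactionid); both columns must appear in the
ORDER BY for the planner to use it:

SELECT transactionid
  FROM cbntransaction
 WHERE transactiontypeid = 0
 ORDER BY transactiontypeid DESC, transactionid DESC
 LIMIT 1;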
> item_max_date() looks like this:
>select max(dtstamp) from items where channel = $1 and link = $2;
It is too bad the (channel, link) index doesn't have dtstamp at the end
of it, otherwise the below query would be a gain (might be a small one
anyway).
select dtstamp
from items
where
On Fri, 2003-10-03 at 17:53, Dror Matalon wrote:
> On Fri, Oct 03, 2003 at 05:44:49PM -0400, Rod Taylor wrote:
> > > item_max_date() looks like this:
> > >select max(dtstamp) from items where channel = $1 and link = $2;
> >
> > It is too bad the (channel, li
> > I hope it isn't the first or second one ;)
>
> CREATE or REPLACE FUNCTION item_max_date (int4, varchar) RETURNS
> timestamptz AS '
> select max(dtstamp) from items where channel = $1 and link = $2;
> ' LANGUAGE 'sql';
How about the below?
CREATE or REPLACE FUNCTION item_max_date (int4, var
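The reply is cut off, but presumably it continues in this direction:
replace max() with a LIMIT 1 query whose ORDER BY matches a
(channel, link, dtstamp) index, so the newest row comes from a single
index probe (a sketch of that rewrite):

CREATE OR REPLACE FUNCTION item_max_date (int4, varchar) RETURNS timestamptz AS '
  SELECT dtstamp FROM items
   WHERE channel = $1 AND link = $2
   ORDER BY channel DESC, link DESC, dtstamp DESC
   LIMIT 1;
' LANGUAGE 'sql';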
On Fri, 2003-10-03 at 21:39, Bruce Momjian wrote:
> I have updated the FAQ to be:
>
> In comparison to MySQL or leaner database systems, we are
> faster for multiple users, complex queries, and a read/write query
> load. MySQL is faster for SELECT queries done by a few users.
>
> And for those GUIs, wouldn't it be 97% as good to run an ANALYZE and give the
> approximate record counts for large tables?
Interfaces which run a COUNT(*) like that are broken by design. They
fail to consider that the table may really be a view, which of course could
not be cached with results like
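The approximate count that ANALYZE maintains is already exposed in the
catalogs, so for a plain table (not a view) an interface could read it
for free; it is only as fresh as the last VACUUM or ANALYZE (the table
name here is hypothetical):

SELECT reltuples FROM pg_class WHERE relname = 'bigtable';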
> * Same slide. IIRC postgresql always compresses bytea/varchar. Not too sure
> about which, but there is something that is compressed by default :-)
I'm not sure about that.
Even toasted values are not always compressed, though they certainly can
be and usually are.
On Fri, 2003-10-10 at 18:59, Sean Chittenden wrote:
> > NB> So far:
> >
> > NB> shared_buffers = 1/16th of total memory
> > NB> effective_cache_size = 80% of the supposed kernel cache.
> >
> > Please take into account the blocksize compiled into PG, too...
>
> Would anyone object to a patch that
> > Perhaps you are confusing it with the MySQL query cache?
> Is there plan on developing one (query cache)?
For the most part, prepared queries and cursors give you a greater
advantage due to their versatility -- both of which we do have.
In the cases where an actual cache is useful, the clien
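For example, a client that issues the same statement repeatedly can
prepare it once and execute it with different parameters, recouping most
of the parse and plan cost a query cache would have saved (a sketch;
table and column names are hypothetical):

PREPARE get_user (int4) AS
  SELECT * FROM users WHERE user_id = $1;
EXECUTE get_user(42);
EXECUTE get_user(43);
DEALLOCATE get_user;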
> Any suggestions how to make vacuuming more effective and reducing the
> time it takes to vacuum? I'd settle for less frequent vacuuming or
> perhaps index rebuilding. The database can be re-imported in about an
> hour.
Which version and what are your FSM settings?
On Thu, 2003-10-23 at 08:21, John Pagakis wrote:
> Greetings.
>
> I have a table that will require 100,000 rows initially.
>
> Assume the following (some of the field names have been changed for
> confidentiality reasons):
>
> CREATE TABLE baz (
> baz_number CHAR(15) NOT NULL,
> customer
On Wed, 2003-10-29 at 09:03, Robert Treat wrote:
> I just noticed on one of my tables I have the following two indexes:
>
> Indexes: entity_watch_map_pkey primary key btree (entity_id, watch_id),
> ewm_entity_id btree (entity_id),
>
>
> I can't think of why the second index is there, as
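Since the primary key on (entity_id, watch_id) already handles lookups
on its leading column, the single-column index is redundant and only
adds write overhead; if nothing else depends on it, it can simply be
dropped:

DROP INDEX ewm_entity_id;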
If it is 7.4 beta 5 or later, I would definitely go with A.
Adding indexes after the fact seems to be much quicker. Foreign keys use
the same algorithm prior to beta 5 regardless of timing.
A primary key and unique index will have approx the same performance (a
check for NULL isn't very costly).
> begin;
> analyze foo;
> ERROR: ANALYZE cannot run inside a BEGIN/END block
>
> i am using version 7.2.3.
Time to upgrade. 7.3 / 7.4 allows this to happen.
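On 7.3 and later the same block goes through cleanly:

BEGIN;
ANALYZE foo;
COMMIT;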
On Tue, 2003-11-11 at 18:32, Chris Field wrote:
> We are getting ready to spec out a new machine and are wondering about
> the wisdom of buying a quad versus a dual processor machine. Seeing as
> how postgres is not a threaded application, and this server will only be
> used for log/transaction ana
On Wed, 2003-11-12 at 12:34, Suchandra Thapa wrote:
> I'm moving a webmail service over to use a postgresql database for
> storage and wanted to get any tips for optimizing performance. The
> machine will be a multiprocessor (either 2 or 4 cpu ) system with a raid
> array. What layout should be u
> > How much in total storage? If you have (or will have) > ~6 disks, go
> > for RAID 5 otherwise 10 is probably appropriate.
>
> I'm not sure but I believe there are about 6-8 10K scsi drives on the
> system. There is quite a bit of storage to spare currently so I think
I see.. With 8 drives
> > Most clients will be interested in say the last 7 days worth of data?
> > Great.. Start out with 4GB ram on a good Dual CPU -- Opterons seem to
> > work quite well -- and make sure the motherboard can hold double that in
> > memory for an upgrade sometime next year when you've become popular.
> shared_buffers = 128 # min max_connections*2 or 16, 8KB each
Try 1500.
> sort_mem = 65535 # min 64, size in KB
I'd pull this in. You only have 640MB of RAM, which means it only takes
about 8 large sorts to start swapping.
How about 16000?
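If only a handful of queries need large sorts, sort_mem can also be
raised per session rather than globally (a sketch; the table and column
names are hypothetical):

SET sort_mem = 16000;   -- 16MB for this session only
SELECT * FROM big_table ORDER BY some_column;
RESET sort_mem;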
> fsync = false
I presume you understand the risks involved with
On Thu, 2003-12-18 at 12:04, Conny Thimren wrote:
> Hi,
> This is a kind of newbie question. I've been using Postgres for a long time in a low
> transaction environment - and it is great.
>
> Now I've got an inquiry for using Postgresql in a heavy-load on-line system. This
> system must handle somet
> Especially with very large tables, hearing the disks grind as Postgres scans
> every single row in order to determine the number of rows in a table or the
> max value of a column (even a primary key created from a sequence) is pretty
> painful. If the implementation is not too horrendous, this i
On Mon, 2004-01-05 at 14:48, Stephan Szabo wrote:
> On Mon, 5 Jan 2004, Vivek Khera wrote:
>
> >
> > On Jan 5, 2004, at 1:57 PM, Stephan Szabo wrote:
> >
> > > But, if he's updating the fk table but not the keyed column, it should
> > > no longer be doing the check and grabbing the locks. I
On Mon, 2004-01-05 at 14:57, David Teran wrote:
> ... wow:
>
> executing a batch file with about 4250 selects, including lots of joins and
> other things, PostgreSQL 7.4 is about 2 times faster than FrontBase
> 3.6.27. OK, we will start to make larger tests but this is quite
> interesting already: w
> I'd appreciate it if anyone could share their experience
> in configuring things on the filer for optimal
> performance, or any recommendation that I should be
> aware of.
Netapps are great things. Just beware that you'll be using NFS, and NFS
drivers on many operating systems have been known to be
On Mon, 2004-02-02 at 13:43, Tomasz Myrta wrote:
> On 2004-02-02 19:30, scott.marlowe wrote:
> > Not entirely, since it only has to sort two columns, it will be smaller,
> > and will therefore be somewhat faster.
>
> Can you say something more about it? Will it be enough faster to
on
> that table if there are some defined?
COPY does nearly everything that standard inserts do. RULES are the only
thing that comes to mind. Triggers, indexes, constraints, etc. are all
applied.
--
Rod Taylor
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
PGP Key: http://www
On Wed, 2004-02-11 at 09:23, [EMAIL PROTECTED] wrote:
> my database is very slow. The machine is a 2GHz Xeon processor with
> 256 MB of DDR RAM. My configuration file is this:
I'm not surprised. New values below old.
> sort_mem = 131072 # min 64, size in KB
sort_mem = 8192. (131072KB lets a single sort use 128MB -- half the machine's RAM.)
On Wed, 2004-02-11 at 12:15, [EMAIL PROTECTED] wrote:
> I already went back to the old settings and it is still slow in the
> same way!
Dumb question, but did you restart the database after changing the
config file?