On Thursday 02 Dec 2004 9:37 pm, Dmitry Karasik wrote:
Hi Thomas!
Thomas Look at the ACTUAL TIME. It dropped from 0.029ms (using the index
Thomas scan) to 0.009ms (using a sequential scan.)
Thomas Index scans are not always faster, and the planner/optimizer knows
Thomas this.
On Thursday 09 Sep 2004 6:26 pm, Vic Cekvenich wrote:
What would be performance of pgSQL text search vs MySQL vs Lucene (flat
file) for a 2 terabyte db?
Well, it depends upon lot of factors. There are few questions to be asked
here..
- What is your hardware and OS configuration?
- What type of
On Wednesday 01 Sep 2004 3:36 pm, G u i d o B a r o s i o wrote:
Dear all,
I am currently experiencing troubles with the performance of my
critical database.
The problem is the time that the postgres takes to perform/return a
query. For example, trying the \d tablename command takes
On Monday 09 Aug 2004 7:58 pm, Paul Serby wrote:
I've not maxed out the connections since making the changes, but I'm
still not convinced everything is running as well as it could be. I've
got some big result sets that need sorting and I'm sure I could spare a
bit more sort memory.
You could
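A back-of-envelope for the worst case Paul is worried about (all values hypothetical, not from the thread): sort_mem is allocated per sort operation, so peak usage scales with the number of concurrent sorting backends.

```shell
# Back-of-envelope for worst-case sort memory (values hypothetical).
# sort_mem is per sort operation, so a busy server can use roughly
# connections x sort_mem at peak.
connections=100
sort_mem_kb=8192
echo "worst-case sort memory: $((connections * sort_mem_kb / 1024)) MB"
```

This is why raising sort_mem generously only makes sense when the connection count is bounded.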
Hervé Piedvache wrote:
Josh,
Le mercredi 14 Juillet 2004 18:28, Josh Berkus a écrit :
checkpoint_segments = 3
You should probably increase this if you have the disk space. For massive
insert operations, I've found it useful to have as much as 128 segments
(although this means about 1.5GB disk
gnari wrote:
is there a recommended procedure to estimate
the best value for effective_cache_size on a
dedicated DB server ?
Rule of thumb (on Linux): on a typically loaded machine, observe the cache memory of
the machine and allocate a good chunk of it as effective cache.
To define a good chunk of it,
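The rule of thumb above can be sketched as follows, with a hypothetical `free` reading (the cached figure is invented for illustration): PostgreSQL pages are 8 kB, so divide the cached-kB figure by 8.

```shell
# Sketch of the rule of thumb above with a hypothetical cache reading.
# Suppose `free` reports ~1,600,000 kB cached; effective_cache_size is
# counted in 8 kB pages, so divide the kB figure by 8.
cached_kb=1600000
echo "effective_cache_size = $((cached_kb / 8))"
```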
Hervé Piedvache wrote:
In my case it's a PostgreSQL dedicated server ...
effective_cache_size = 500
For me I give the planner the information that the kernel is able to cache
500 disk pages in RAM
That is what? 38GB of RAM?
free
total used free shared
Missner, T. R. wrote:
Hello,
I have been a happy postgresql developer for a few years now. Recently
I have discovered a very strange phenomenon in regards to inserting
rows.
My app inserts millions of records a day, averaging about 30 rows a
second. I use autovac to make sure my stats and indexes
Bill Chandler wrote:
Hi,
Using PostgreSQL 7.4.2 on Solaris. I'm trying to
improve performance on some queries to my databases so
I wanted to try out various index structures.
Since I'm going to be running my performance tests
repeatedly, I created some SQL scripts to delete and
recreate
On Wednesday 19 May 2004 13:02, [EMAIL PROTECTED] wrote:
- If you can put WAL on separate disk(s), all the better.
Does that mean only the xlog, or also the clog? As far as I understand, the
clog contains some meta-information on the xlog, so presumably it is
flushed to disc synchronously
Doug Y wrote:
Hello,
I've been having some performance issues with a DB I use. I'm trying
to come up with some performance recommendations to send to the
administrator.
Hardware:
CPU0: Pentium III (Coppermine) 1000MHz (256k cache)
CPU1: Pentium III (Coppermine) 1000MHz (256k cache)
Memory:
Anderson Boechat Lopes wrote:
Hi.
I'm new here and I'm not sure if this is the right email to solve my
problem.
Well, I have a very large database, with many tables and many
records. Every day, a great many operations are performed in that DB, with
queries that insert, delete and
Richard Huxton wrote:
Christopher Kings-Lynne wrote:
What's the biggest PostgreSQL database (in size and number of
records) that anyone knows of???
Didn't someone say that RedSheriff had a 10TB postgres database or
something?
From http://www.redsheriff.com/us/news/news_4_201.html
According
Sending again because of MUA error.. Chose a wrong address in From.. :-(
Shridhar
On Wednesday 07 April 2004 17:21, Shridhar Daithankar wrote:
On Wednesday 07 April 2004 16:59, Andrew McMillan wrote:
One thing I recommend is to use ext2 (or almost anything but ext3).
There is no real need
Heiko Kehlenbrink wrote:
[EMAIL PROTECTED]:~ psql -d test -c 'explain analyse select avg(dist)
from massive2 where dist >= (100*sqrt(3.0))::float8 and dist <=
(150*sqrt(3.0))::float8;'
NOTICE: QUERY PLAN:
Aggregate (cost=14884.61..14884.61 rows=1 width=8) (actual
time=3133.24..3133.24
Heiko Kehlenbrink wrote:
Hmm... I would suggest if you are testing, you should try 7.4.2. 7.4 has
some good optimisation for hash aggregates, though I am not sure if it
applies to averaging.
would be the last option till we are running other applications on that 7.2
system
I can understand..
Also try
http://www.databasejournal.com/features/postgresql/article.php/3323561
Shridhar
Rosser Schwarz wrote:
shared_buffers = 4096
sort_mem = 32768
vacuum_mem = 32768
wal_buffers = 16384
checkpoint_segments = 64
checkpoint_timeout = 1800
checkpoint_warning = 30
commit_delay = 5
effective_cache_size = 131072
You didn't mention the OS, so I would take it as either Linux or FreeBSD.
On Friday 27 February 2004 21:03, scott.marlowe wrote:
Linux doesn't work with a pre-assigned size for kernel cache.
It just grabs whatever's free, minus a few megs for easily launching new
programs or allocating more memory for programs, and uses that for the
cache. then, when a request
On Thursday 19 February 2004 14:31, Saleem Burhani Baloch wrote:
Hi,
Thanks every one for helping me. I have upgraded to 7.4.1 on RedHat 8 (RH
9 requires a lot of libs) and set the configuration sent by Chris. Now the
query results in 6.3 sec, waooo. I'm thinking why the 7.1 process
Josh Berkus wrote:
Bill,
Some functions they prototyped in MSSQL even return different types, based
on certain parameters. I'm not sure how I'll do this in Postgres, but I'll
have to figure something out.
We support that as of 7.4.1 to an extent; check out Polymorphic Functions.
To my
Bill Moran wrote:
I have an application that I'm porting from MSSQL to PostgreSQL. Part
of this
application consists of hundreds of stored procedures that I need to
convert
to Postgres functions ... or views?
At first I was going to just convert all MSSQL procedures to Postgres
functions.
On Wednesday 14 January 2004 18:18, Jón Ragnarsson wrote:
I am writing a website that will probably have some traffic.
Right now I wrap every .php page in pg_connect() and pg_close().
Then I read somewhere that Postgres only supports 100 simultaneous
connections (default). Is that a
On Tuesday 06 January 2004 17:48, D'Arcy J.M. Cain wrote:
On January 6, 2004 01:42 am, Shridhar Daithankar wrote:
cert=# select relpages,reltuples::bigint from pg_class where relname=
'certificate';
 relpages | reltuples
----------+-----------
   399070 |  24858736
(1 row)
But:
cert
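For a sense of scale, the relpages figure quoted above translates directly into on-disk size, since each page is 8 kB:

```shell
# Size implied by the pg_class numbers above: pages are 8 kB each.
relpages=399070
echo "approx table size: $((relpages * 8 / 1024)) MB"
```

About 3 GB of heap for ~25 million tuples, which is why the estimate-based approach beats a full count(*) scan.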
Robert Treat wrote:
On Tue, 2004-01-06 at 07:20, Shridhar Daithankar wrote:
The numbers from pg_class are estimates updated by vacuum /analyze. Of course
you need to run vacuum frequent enough for that statistics to be updated all
the time or run autovacuum daemon..
Ran into same problem on my
On Monday 05 January 2004 16:58, David Teran wrote:
We have some tests to check the performance and FrontBase is about 10
times faster than Postgres. We already played around with explain
analyse select. It seems that for large tables Postgres does not use an
index. We often see the scan
On Monday 05 January 2004 17:35, David Teran wrote:
explain analyze SELECT --columns-- FROM KEY_VALUE_META_DATA t0 WHERE
t0.ID_FOREIGN_TABLE = 21110;
i see that no index is being used whereas when i use
explain analyze SELECT --columns-- FROM KEY_VALUE_META_DATA t0 WHERE
t0.ID_FOREIGN_TABLE
On Monday 05 January 2004 17:48, David Teran wrote:
Hi,
The performance will likely be the same. It's just that integer
happens to
be the default integer type and hence it does not need an explicit
typecast. (I
don't remember exactly which integer is default but it is either of
On Tuesday 06 January 2004 07:16, Christopher Browne wrote:
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Paul
Tuckfield) wrote:
Not that I'm offering to do the programming, mind you :) but . .
In the case of select count(*), one optimization is to do a scan of the
On Tuesday 06 January 2004 01:22, Rod Taylor wrote:
Anyway, with Rules you can force this:
ON INSERT UPDATE counter SET tablecount = tablecount + 1;
ON DELETE UPDATE counter SET tablecount = tablecount - 1;
That would generate a lot of dead tuples in the counter table. How about
select
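The rule-based counter Rod sketches could look like the following, assuming a hypothetical table `items` (old-style 7.x rule syntax; none of these names are from the original thread):

```sql
-- Hypothetical demo table and single-row counter.
CREATE TABLE items (id integer);
CREATE TABLE counter (tablecount bigint);
INSERT INTO counter VALUES (0);

-- Keep the counter in step with inserts and deletes, as suggested above.
CREATE RULE items_ins AS ON INSERT TO items
    DO UPDATE counter SET tablecount = tablecount + 1;
CREATE RULE items_del AS ON DELETE TO items
    DO UPDATE counter SET tablecount = tablecount - 1;

-- count(*) then becomes a one-row read (at the cost of dead tuples in
-- counter, which is exactly the objection raised in the thread):
SELECT tablecount FROM counter;
```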
On Thursday 18 December 2003 09:24, David Shadovitz wrote:
Old server:
# VACUUM FULL abc;
VACUUM
# VACUUM VERBOSE abc;
NOTICE: --Relation abc--
NOTICE: Pages 1526: Changed 0, Empty 0; Tup 91528; Vac 0, Keep 0, UnUsed
32. Total CPU 0.07s/0.52u sec elapsed 0.60 sec.
VACUUM
New
Neil Conway wrote:
How can I get the original server to perform as well as the new one?
Well, you have the answer. Dump the database, stop postmaster and restore it.
That should be faster than the original one.
(BTW, SELECT count(*) FROM table isn't a particularly good DBMS
performance
Alfranio Correia Junior wrote:
Postgresql configuration:
effective_cache_size = 35000
shared_buffers = 5000
random_page_cost = 2
cpu_index_tuple_cost = 0.0005
sort_mem = 10240
Lower sort mem to say 2000-3000, up shared buffers to 10K and up effective cache
size to around 65K. That should make it
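In postgresql.conf terms, the advice above comes out roughly as follows (7.x units: shared_buffers and effective_cache_size are counted in 8 kB pages, sort_mem in kB; the exact values below are hypothetical picks within the suggested ranges):

```
shared_buffers = 10000          # ~80 MB, up from 5000
sort_mem = 2048                 # ~2 MB per sort per backend, down from 10240
effective_cache_size = 65536    # ~512 MB of expected kernel cache, up from 35000
```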
Ivar Zarans wrote:
On Fri, Dec 05, 2003 at 06:19:46PM +0530, Shridhar Daithankar wrote:
is correct SQL, but not correct, considering PostgreSQL bugs.
Personally I don't consider it a bug, but anyways.. You are the one facing the
problem, so I understand..
Well, if this is not bug, then what
Torsten Schulz wrote:
Chester Kustarz wrote:
On Mon, 24 Nov 2003, Torsten Schulz wrote:
shared_buffers = 5000    # 2*max_connections, min 16
that looks pretty small. that would only be 40MBytes (8k/page *
5000pages).
http://www.varlena.com/GeneralBits/Tidbits/perf.html
Ok, that's it. I've
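The arithmetic from the post above, spelled out: shared_buffers is counted in 8 kB pages, so 5000 pages is only about 40 MB.

```shell
# shared_buffers counts 8 kB pages; 5000 pages is ~40 MB.
pages=5000
echo "$((pages * 8 / 1000)) MB"
```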
William Yu wrote:
My situation is this. We have a semi-production server where we
pre-process data and then upload the finished data to our production
servers. We need the fastest possible write performance. Having the DB
go corrupt due to power loss/OS crash is acceptable because we can
Matthew T. O'Connor wrote:
But we track tuples because we can compare against the count given by
the stats system. I don't know of a way (other than looking at the FSM,
or contrib/pgstattuple ) to see how many dead pages exist.
I think making pg_autovacuum dependent on pgstattuple is a very good
Josh Berkus wrote:
Shridhar,
However I do not agree with this logic entirely. It pegs the next vacuum
w.r.t current table size which is not always a good thing.
No, I think the logic's fine, it's the numbers which are wrong. We want to
vacuum when updates reach between 5% and 15% of total
On Thursday 20 November 2003 20:00, Matthew T. O'Connor wrote:
Shridhar Daithankar wrote:
I will submit a patch that would account deletes in analyze threshold.
Since you want to delay the analyze, I would calculate analyze count as
deletes are already accounted for in the analyze threshold
On Thursday 20 November 2003 20:29, Shridhar Daithankar wrote:
On Thursday 20 November 2003 20:00, Matthew T. O'Connor wrote:
Shridhar Daithankar wrote:
I will submit a patch that would account deletes in analyze threshold.
Since you want to delay the analyze, I would calculate analyze
Laurent Martelli wrote:
Shridhar == Shridhar Daithankar [EMAIL PROTECTED] writes:
Shridhar Laurent Martelli wrote:
[...]
Should I understand that a join on incompatible types (such as
integer and varchar) may lead to bad performances ?
Shridhar Conversely, you should enforce strict
Laurent Martelli wrote:
Shridhar == Shridhar Daithankar [EMAIL PROTECTED] writes:
[...]
Shridhar 2. Try following query EXPLAIN ANALYZE SELECT * from lists
Shridhar join classes on classes.id=lists.value where
Shridhar lists.id='16'::integer;
Shridhar classes.id=lists.value::integer
Josh Berkus wrote:
Shridhar,
I was looking at the -V/-v and -A/-a settings in pgavd, and really don't
understand how the calculation works. According to the readme, if I set -v
to 1000 and -V to 2 (the defaults) for a table with 10,000 rows, pgavd would
only vacuum after 21,000 rows had
Laurent Martelli wrote:
scott == scott marlowe [EMAIL PROTECTED] writes:
[...]
scott Note here:
scott Merge Join (cost=1788.68..4735.71 rows=1 width=85) (actual
scott time=597.540..1340.526 rows=20153 loops=1) Merge Cond:
scott (outer.id = inner.id)
scott This estimate is WAY off.
Benjamin Bostow wrote:
I am running RH 7.3 running Apache 1.3.27-2 and PostgreSQL 7.2.3-5.73.
When having 100+ users connected to my server I notice that postmaster
consumes up wards of 90% of the processor and I hardly am higher than
10% idle. I did notice that when I kill apache and postmaster
On Friday 14 November 2003 12:51, Rajesh Kumar Mallah wrote:
Hi ,
my database seems to be taking too long for a select count(*)
i think there are a lot of dead rows. I do a vacuum full and it improves,
but again the performance drops in a short while,
can anyone please tell me if anything is wrong
Fred Moyer wrote:
One thing I learned after spending about a week comparing the Athlon (2
ghz, 333 mhz frontside bus) and Xeon (2.4 ghz, 266 mhz frontside bus)
platforms was that on average the select queries I was benchmarking ran
30% faster on the Athlon (this was with data cached in memory so
Jeff wrote:
On Thu, 30 Oct 2003 17:49:08 -0200 (BRST)
alexandre :: aldeia digital [EMAIL PROTECTED] wrote:
Both use: Only postgresql on server. Buffers = 8192, effective cache =
10
Well, I'm assuming you meant 1GB of ram, not 1MB :)
Check a ps auxw to see what is running. Perhaps X is
Jeff wrote:
On Fri, 31 Oct 2003 09:31:19 -0600
Rob Sell [EMAIL PROTECTED] wrote:
Not being one to hijack threads, but I haven't heard of this
performance hit when using HT, I have what should all rights be a
pretty fast server, dual 2.4 Xeons with HT 205gb raid 5 array, 1 gig
of memory. And it
Dror Matalon wrote:
On Mon, Oct 27, 2003 at 01:04:49AM -0500, Christopher Browne wrote:
Most of the time involves:
a) Reading each page of the table, and
b) Figuring out which records on those pages are still live.
The table has been VACUUM ANALYZED so that there are no dead records.
It's
Kamalraj Singh Madhan wrote:
Hi,
I'm having major performance issues with a PostgreSQL 7.3.1 db. Kindly suggest all the possible means by which I can optimize the performance of this database. If not all, some ideas (even if they are common) are also welcome. There is no optimisation done to
Vivek Khera wrote:
JB == Josh Berkus [EMAIL PROTECTED] writes:
JB Actually, what OS's can't use all idle ram for kernel cache? I
JB should note that in my performance docs
FreeBSD. Limited by the value of sysctl vfs.hibufspace from what I
understand. This value is set at boot based on
Hilary Forbes wrote:
If I have a fixed amount of money to spend as a general rule
is it better to buy one processor and lots of memory or two
processors and less memory for a system which is transactional
based (in this case it's handling reservations). I realise the
answer will be a
Harry Broomhall wrote:
#effective_cache_size = 1000    # typically 8KB each
#random_page_cost = 4 # units are one sequential page fetch cost
You must tune the first one at least. Try
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html to tune these
parameters.
2) The EXPLAIN
Alexander Priem wrote:
Dell PowerEdge 1750 machine with Intel Xeon CPU at 3 GHz and 4 GB of RAM.
This machine will contain a PERC4/Di RAID controller with 128MB of battery
backed cache memory. The O/S and logfiles will be placed on a RAID-1 setup
of two 36Gb SCSI-U320 drives (15.000rpm). Database
Rob Nagler wrote:
It seems a simple vacuum (not full or analyze) slows down the
database dramatically. I am running vacuum every 15 minutes, but it
takes about 5 minutes to run even after a fresh import. Even with
vacuuming every 15 minutes, I'm not sure vacuuming is working
properly.
There are
Seum-Lim Gan wrote:
I have a table that keeps being updated and noticed
that after a few days, the disk usage has grown
from just over 150 MB to like 2 GB !
Hmm... You have quite a lot of wasted space there..
I followed the recommendations from the various search
of the archives, changed the
David Griffiths wrote:
It's a slight improvement, but that could be other things as well.
I'd read that how you tune Postgres will determine how the optimizer works
on a query (sequential scan vs index scan). I am going to post all I've done
with tuning tommorow, and see if I've done anything
On Monday 13 October 2003 19:22, Seum-Lim Gan wrote:
I am not sure I can do the full vacuum.
If my system is doing updates in realtime and needs to be
ok 24 hours and 7 days a week non-stop, once I do
vacuum full, even on that table, that table will
get locked out and any query or updates
On Monday 13 October 2003 19:34, Vivek Khera wrote:
SC == Sean Chittenden [EMAIL PROTECTED] writes:
echo effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))
I've used it for my dedicated servers. Is this calculation correct?
SC Yes, or it's real close at least.
http://www.ussg.iu.edu/hypermail/linux/kernel/0310.1/0208.html
Shridhar
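A worked instance of the FreeBSD formula above, with a hypothetical vfs.hibufspace of 200 MB (the sysctl reports bytes; pages are 8192 bytes):

```shell
# Worked example of the formula above; hibufspace value is hypothetical.
hibufspace=$((200 * 1024 * 1024))
echo "effective_cache_size = $((hibufspace / 8192))"
```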
Adrian Demaestri wrote:
We've a table with about 8 million rows, and we need to get rows by the value
of two of its fields( the type of the fields are int2 and int4,
the where condition is v.g. partido=99 and partida=123). We created a
multicolumn index on that fields but the planner doesn't use
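This is likely the classic literal-typing trap on older planners: a bare `99` is typed int4 and will not match an index on an int2 column. A hedged sketch using the column names from the post (the table name is hypothetical):

```sql
-- Reconstruction of the case above; table name is hypothetical.
CREATE TABLE v (partido int2, partida int4);
CREATE INDEX v_partido_partida ON v (partido, partida);

-- With an explicit cast (or a quoted literal) the planner can match
-- the int2 leading column of the index:
SELECT * FROM v
 WHERE partido = 99::int2
   AND partida = 123;
```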
Jeff wrote:
Let me know if there are blatant errors, etc in there.
Maybe even slightly more subtle blatant errors :)
Some minor nitpicks,
* Slide 5, postgresql already features 64 bit port. The sentence is slightly
confusing
* Same slide. IIRC postgresql always compresses bytea/varchar. Not too
Greg Spiegelberg wrote:
The data represents metrics at a point in time on a system for
network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,
speed, and whatever else can be gathered.
We arrived at this one 642 column table after testing the whole
process from data gathering, methods
Bruce Momjian wrote:
OK, I beefed up the TODO:
* Use a fixed row count and a +/- count with MVCC visibility rules
to allow fast COUNT(*) queries with no WHERE clause(?)
I can always give the details if someone asks. It doesn't seem complex
enough for a separate TODO.detail
Stef wrote:
On Fri, 03 Oct 2003 12:32:00 -0400
Tom Lane [EMAIL PROTECTED] wrote:
= What exactly is failing? And what's the platform, anyway?
Nothing is really failing atm, except the funds for better
hardware. JBOSS and some other servers need to be
run on these machines, along with linux,
Jeff wrote:
I'd be interested in tinkering with this, but I'm more interested at the
moment of why (with proof, not anecdotal) Solaris is so much slower than
Linux and what we can do about this. We're looking to move a rather large
Informix db to PG and ops has reservations about ditching Sun
David Griffiths wrote:
And finally,
Here's the contents of the postgresql.conf file (I've been playing with
these setting the last couple of days, and using the guide @
http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html to
make sure I didn't have it mis-tuned):
On Sunday 28 September 2003 09:19, David Griffiths wrote:
No difference. Note that all the keys that are used in the joins are
numeric(10)'s, so there shouldn't be any cast-issues.
Can you make them bigint and see? It might make some difference perhaps.
Checking the plan in the meantime.. BTW
[EMAIL PROTECTED] wrote:
Hi guys
I'm running a Datawarehouse benchmark (APB-1) on PostgreSQL. The objective is to
choose which of the two main dbs (PostgreSQL, MySQL) is fastest. I've run into a
small problem which I hope could be resolved here.
I'm trying to speed up this query:
select count(*)
On 17 Sep 2003 at 11:48, Nick Barr wrote:
Hi,
I have been following a thread on this list Inconsistent performance
and had a few questions especially the bits about effective_cache_size.
I have read some of the docs, and some other threads on this setting,
and it seems to used by the
On 10 Sep 2003 at 22:44, Tom Lane wrote:
James Robinson [EMAIL PROTECTED] writes:
Is this just a dead end, or is there some variation of this that might
possibly work, so that ultimately an undoctored literal number, when
applied to an int8 column, could find an index?
I think it's
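For context, the int8 case under discussion is the well-known pre-8.0 limitation: an undecorated numeric literal is typed int4 and will not match an int8 index. A sketch (table name hypothetical):

```sql
CREATE TABLE t (id int8 PRIMARY KEY);

-- SELECT * FROM t WHERE id = 42;     -- pre-8.0: sequential scan
SELECT * FROM t WHERE id = 42::int8;  -- explicit cast lets the index match
SELECT * FROM t WHERE id = '42';      -- a quoted literal also works
```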
On 8 Sep 2003 at 13:50, Andri Saar wrote:
If this is the best you can get with postgres right now, then I'll just have
to increase the frequency of VACUUMing, but that feels like a hackish
solution :(
Use an autovacuum daemon. There is one in the postgresql contrib module. It was
introduced
On 4 Sep 2003 at 0:48, Relaxin wrote:
All of the databases that I tested the query against gave me immediate
access to ANY row of the resultset once the data had been returned.
Ex. If I'm currently at the first row and then wanted to goto the 100,000
row, I would be there immediately, and if
On 3 Sep 2003 at 6:08, Azlin Ghazali wrote:
Hi,
I'm working on a project to make an application run on MySQL and PostgreSQL.
I find that PostgreSQL runs up to 10 times slower than MySQL. For small records
it is not much of a problem. But as the records grow (up to 12,000 records) the
On 29 Aug 2003 at 0:05, William Yu wrote:
Shridhar Daithankar wrote:
Be careful here, we've seen that with the P4 Xeon's that are
hyper-threaded and a system that has very high disk I/O causes the
system to be sluggish and slow. But after disabling the hyper-threading
itself, our system
Hi all,
I compared 2.6 with elevator=deadline. It did bring some improvement in
performance. But still it does not beat 2.4.
Attached are three files for details.
I also ran a simple insert benchmark to insert a million record in a simple
table with a small int and a varchar(30).
Here are
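The benchmark table described above, as a sketch (names and the data-file path are hypothetical); for a million rows, COPY is the usual fastest load path:

```sql
-- A small int plus varchar(30), per the benchmark description above.
CREATE TABLE bench (id int2, val varchar(30));

-- Bulk rows load far faster via COPY than via row-at-a-time INSERTs:
COPY bench FROM '/tmp/bench.dat';
```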
On 28 Aug 2003 at 1:07, Anders K. Pedersen wrote:
Hello,
We're running a set of Half-Life based game servers that lookup user
privileges from a central PostgreSQL 7.3.4 database server (I recently
ported the MySQL code in Adminmod to PostgreSQL to be able to do this).
The data needed
On 27 Aug 2003 at 15:50, Tarhon-Onu Victor wrote:
Hi,
I have a (big) problem with postgresql when making lots of
inserts per second. I have a tool that is generating an output of ~2500
lines per second. I wrote a script in Perl that opens a pipe to that
tool, reads every
On 28 Aug 2003 at 10:02, Russell Garrett wrote:
The web site queries will jump up one or two orders of magnitude (I have
seen a normally 100ms query take in excess of 30 seconds) in duration at
seemingly random points. It's not always when the transactions are
committing, and it doesn't seem
On 28 Aug 2003 at 11:05, Chris Bowlby wrote:
On Tue, 2003-08-26 at 23:59, Ron Johnson wrote:
What a fun exercises. Ok, lets see:
Postgres 7.3.4
RH AS 2.1
12GB RAM
motherboard with 64 bit 66MHz PCI slots
4 - Xeon 3.0GHz (1MB cache) CPUs
8 - 36GB 15K RPM as RAID10 on a 64 bit
On 28 Aug 2003 at 10:38, Michael Guerin wrote:
IN(subquery) is known to run poorly in 7.3.x and earlier. 7.4 is
generally much better (for reasonably sized subqueries) but in earlier
versions you'll probably want to convert into an EXISTS or join form.
Something else seems to be going on,
Hi all,
I did some benchmarking using pgbench and postgresql CVS head, compiled
yesterday.
The results are attached. It looks like 2.6.0-test4 does better under load but
under light load the performance isn't that great. OTOH 2.4.20 suffer major
degradation compare to 2.6. Looks like linux
On 25 Aug 2003 at 8:44, Stephan Szabo wrote:
On Mon, 25 Aug 2003, Rhaoni Chiu Pereira wrote:
Hi List,
As I said before, I'm not a DBA yet, but I'm learning ... and I
already have a PostgreSQL running, so I have to ask some help...
I got a SQL as folows :
...
Hi all,
Couple of days ago, one of my colleague, Rahul Iyer posted a query regarding
insert performance of 5M rows. A common suggestion was to use copy.
Unfortunately he cannot use copy due to some constraints. I was helping him to
get the maximum out of it. We were playing with a data set of
On 11 Aug 2003 at 23:42, Ron Johnson wrote:
On Mon, 2003-08-11 at 19:50, Christopher Kings-Lynne wrote:
Well, yeah. But given the Linux propensity for introducing major
features in minor releases (and thereby introducing all the
attendant bugs), I'd think twice about using _any_ Linux
http://kerneltrap.org/node/view/715
Might be interesting for people running 2.6. Last I heard, the anticipatory
scheduler did not yield it's maximum throughput for random reads. So they said
database guys would not want it right away.
Anybody using it for testing? Couple of guys are running it
On 8 Aug 2003 at 12:28, mixo wrote:
I have just installed redhat linux 9 which ships with Pg
7.3.2. Pg has to be setup so that data inserts (blobs) should
be able to handle at least 8M at a time. The machine has
two P III 933MHz CPU's, 1.128G RAM (512M*2 + 128M), and
a 36 Gig hd with 1 Gig
On 5 Aug 2003 at 12:28, Christopher Browne wrote:
On Oracle, I have seen performance Suck Badly when using SQL*Load; if
I grouped too many updates together, it started blowing up the
rollback segment, which was a Bad Thing. And in that kind of
context, there will typically be some sweet spot
On 5 Aug 2003 at 8:09, Jeff wrote:
I've been trying to search through the archives, but it hasn't been
successful.
We recently upgraded from pg7.0.2 to 7.3.4 and things were happy. I'm
trying to fine tune things to get it running a bit better and I'm trying
to figure out how vacuum output
On 28 Jul 2003 at 12:27, Josh Berkus wrote:
Unless you're running PostgreSQL 7.1 or earlier, you should be VACUUMing every
10-15 minutes, not every 2-3 hours. Regular VACUUM does not lock your
database. You will also want to increase your FSM_relations so that VACUUM
is more
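The FSM settings referred to live in postgresql.conf; hypothetical 7.3-era values (not from the thread):

```
max_fsm_relations = 1000    # roughly one slot per table and index
max_fsm_pages = 100000      # pages of free space VACUUM can remember
```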
On 25 Jul 2003 at 16:38, Kasim Oztoprak wrote:
this is a kind of directory assistance application. actually the select
statements are not very complex. the database contains 25 million subscriber
records and the operators search for the subscriber numbers or addresses.
there are not much
On 24 Jul 2003 at 9:42, William Yu wrote:
As far as I can tell, the performance impact seems to be minimal.
There's a periodic storm of replication updates in cases where there's
mass updates sync last resync. But if you have mostly reads and few
writes, you shouldn't see this situation.
On 23 Jul 2003 at 16:05, Jean-Christian Imbeault wrote:
I have a database which is constantly being written to. A web server's
log file (and extras) is being written to it. There are no deletions or
updates (at least I think so :).
As the web traffic increases so will the write intensity.
On 21 Jul 2003 at 10:31, Alexander Priem wrote:
What I am thinking about is buying a server with the following specifications:
* 1 or 2 Intel Xeon processors (2.4 GHz).
* 2 Gigabytes of RAM (DDR/ECC).
* Three 36Gb SCSI160 disks (10.000rpm) in a RAID-5 config, giving 72Gb storage
space
Hi Alexander ,
On 21 Jul 2003 at 11:23, Alexander Priem wrote:
So the memory settings I specified are pretty much OK?
As of now yes, You need to test with these settings and make sure that they
perform as per your requirement. That tweaking will always be there...
What would be good
On 21 Jul 2003 at 19:27, Ang Chin Han wrote:
[1] That is, AFAIK, from our testing. Please, please correct me if I'm
wrong: has anyone found that different filesystems produces wildly
different performance for postgresql, FreeBSD's filesystems not included?
well, when postgresql starts
On 17 Jul 2003 at 11:01, Fabian Kreitner wrote:
psql (PostgreSQL) 7.2.2
perg_1097=# VACUUM ANALYZE ;
VACUUM
perg_1097=# EXPLAIN ANALYZE select notiz_id, obj_id, obj_typ
perg_1097-# from notiz_objekt a
perg_1097-# where not exists
perg_1097-# (
perg_1097(# select
On 17 Jul 2003 at 13:12, Fabian Kreitner wrote:
At 11:17 17.07.2003, Shridhar Daithankar wrote:
How about
where ma_id = 2001::integer
and ma_pid = 1097::integer
in above query?
I don't really understand in what way this will help the planner, but I'll try.
That is typecasting
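Spelled out, the typecasting suggestion from the quoted exchange (the table shape is hypothetical; column names are from the post):

```sql
-- If ma_id / ma_pid are not plain int4, a bare literal may keep the
-- 7.2 planner from matching an index on them; explicit casts avoid that.
SELECT notiz_id, obj_id, obj_typ
  FROM notiz_objekt
 WHERE ma_id = 2001::integer
   AND ma_pid = 1097::integer;
```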
On Monday 14 July 2003 01:21, Balazs Wellisch wrote:
Unfortunately, compiling from source is not really an option for us. We
use RPMs only to ease the installation and upgrade process. We have over a
hundred servers to maintain, and having to compile and recompile software
every time a new