1. Is this the performance I can expect?
2. If not, how can I determine the cause?
3. Can I somehow improve the performance without replacing my hardware,
e.g. by tweaking the software?
4. Is there a Linux (Debian) tool that I can use to benchmark write
performance?
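For the last question, a quick-and-dirty write benchmark that needs no extra packages is dd with a forced flush. This is a generic sketch, not something recommended in this thread; the file name and size are arbitrary:

```shell
# Write 256 MB and flush it to disk before dd reports its figure.
# conv=fdatasync includes the cache flush in the timing, so the result
# reflects disk speed rather than the OS page cache.
dd if=/dev/zero of=/tmp/ddtest bs=8k count=32768 conv=fdatasync
rm /tmp/ddtest
```

Tools such as bonnie++ (packaged in Debian) give more detailed numbers.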
--
Groeten,
Joost Kraaijeveld
), or my Debian is too slow, or my PostgreSQL settings are wrong.
But I have no clue where to begin determining the bottleneck (it may
even be normal performance for all I know: I have no experience with
converting such a (large) database).
Any suggestions?
I will test this for a performance improvement, but still, I wonder if ~
100 inserts/second is a reasonable performance for my software/hardware
combination.
--
Groeten,
Joost Kraaijeveld
Askesis B.V.
Molukkenstraat 14
6524NB Nijmegen
tel: 024-3888063 / 06-51855277
fax: 024-3608416
e-mail:
On Fri, 2005-09-23 at 07:05 -0400, Michael Stone wrote:
> On Fri, Sep 23, 2005 at 12:21:15PM +0200, Joost Kraaijeveld wrote:
> >On Fri, 2005-09-23 at 05:55 -0400, Michael Stone wrote:
> I didn't say it was, did I?
No, you did not. But only last week someone's head was
ine is doing during all the time it is doing nothing? Something
like the performance monitor in Windows, but then for Linux?
CTED]:/usr/lib/postgresql/8.0/bin$ ./pgbench -c 10 -t 1 test
tps = 121.486803 (including connections establishing)
tps = 121.493681 (excluding connections establishing)
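With -t 1 each of the 10 clients runs a single transaction, so connection setup and startup noise dominate the result. A longer run gives steadier numbers; a sketch, assuming a running server whose pgbench tables were initialised with -i:

```shell
# Initialise the pgbench tables once, then run a longer test.
./pgbench -i test                 # creates and populates the test tables
./pgbench -c 10 -t 1000 test      # 10 clients, 1000 transactions each
```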
TIA
Hi Gavin,
Thanks for answering.
On Tue, 2005-11-01 at 20:16 +1100, Gavin Sherry wrote:
> On Tue, 1 Nov 2005, Joost Kraaijeveld wrote:
> > 1. Is there a repository somewhere that shows results, using and
> > documenting different kinds of hard- and software setups so that I can
efore, I will look into the OSDL benchmarks. Maybe they are
more suited for my needs: *understanding* performance determinants.
>
> Hope this helps.
You certainly did, thanks.
Device:    tps   kB_read/s   kB_wrtn/s   kB_read   kB_wrtn
hdc       0.00        0.00        0.00         0         0
sda       0.00        0.00        0.00         0         0
sdb     187.13       23.76     8764.36        24      8852
Explain analyse (with PgAdmin):
Seq Scan on orders (cost=0.00..58211.79 rows=1104379 width=40) (actual
time=441.971..3252.698 rows=1104379 loops=1)
Total runtime: 5049.467 ms
Actual execution time: 82163 ms (without getting the data)
uot; > output.txt
real    0m5.554s
user    0m1.121s
sys     0m0.470s
Now *I* am confused. What does PgAdmin do more than giving the query to
the database?
(BTW: I have repeated both measurements and the numbers above were all
from the last measurement I did and are about average)
the data
retrieval runtime (see below)?
-- Executing query:
select objectid from prototype.orders
Total query runtime: 78918 ms.
Data retrieval runtime: 188822 ms.
1104379 rows retrieved.
        145.67
await    0.67   767.19
svctm    0.67     4.97
%util    0.20   100.30
in the grid which can be slow on some
> OS's due to inefficiencies in some grid controls with large data sets.
> That's why we give 2 times - the first is the query runtime on the
> server, the second is data retrieval and rendering (iirc, it's been a
> while).
That is
improvement. I suspect that experimenting with other I/O schedulers
could improve performance, but it is hard to find any useful
documentation about them.
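On 2.6 kernels the scheduler can be inspected and switched per device through sysfs; a sketch, assuming the disk is sda (switching requires root, and older kernels that lack runtime switching need the elevator= boot parameter instead):

```shell
# The scheduler currently in use is shown in [brackets].
cat /sys/block/sda/queue/scheduler
# Switch to the deadline elevator (root required):
# echo deadline > /sys/block/sda/queue/scheduler
```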
But then again, I am not the only one with these problems:
http://www.issociate.de/board/goto/894541/3ware_+_RAID5_
+_xfs_performance.html#msg_894541
http://lkml.org/lkml/2005/4/20/110
http://seclists.org/lists/linux-kernel/2005/Oct/1171.html
I am happy to share the tables against which I am running my ch
On Tue, 2005-11-15 at 12:41 -0700, Steve Wampler wrote:
> Joost Kraaijeveld wrote:
> > If I understand correctly (I have 4GB ram):
> >
> > [EMAIL PROTECTED]:~/tmp$ dd if=/dev/zero of=bigfile bs=8k count=100
> > 100+0 records in
> > 100+0 records out
>
in
100+0 records out
819200 bytes transferred in 166.499230 seconds (49201429 bytes/sec)
The quest continues...
IL PROTECTED]:~/tmp$ dd if=/dev/zero of=bigfile bs=8k count=100
100+0 records in
100+0 records out
819200 bytes transferred in 200.982055 seconds (40759858 bytes/sec)
Which is a remarkable increase in speed (38.9 MB/sec vs 25.7 MB/sec).
Thanks for your suggestions.
The customers table has an index on contactaddress and objectid.
The addresses table has an index on zipcode+housenumber and objectid.
TIA
record
in the database. The GUI (a Java JTable) works with record/row numbers,
which is handy if one has an ISAM database, but not if one uses
PostgreSQL.
I wonder if using a forward scrolling cursor would be faster.
you would need to have a persistent connection.
I just tried it: a cursor is not faster (which does not surprise me at
all, as the amount of work looks the same to me).
I guess there is no solution.
I
can do a "select values > last value in the cache". So I must always
materialize all rows below the wanted index.
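The "select values greater than the last value in the cache" idea, sketched as a hypothetical psql call (assumes objectid is the unique sort key of prototype.orders and that the application remembers the last value it fetched):

```shell
psql -d test -c "
  SELECT objectid
  FROM prototype.orders
  WHERE objectid > 'LAST-CACHED-OBJECTID'   -- placeholder value
  ORDER BY objectid
  LIMIT 100;"
```

This avoids materialising the rows below the wanted index, but only helps when the user scrolls forward from a known position, not when jumping to an arbitrary row number.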
Hi,
Are there guidelines (or any empirical data) available on how to
determine how often a table should be vacuumed for optimum performance,
or is this an experience / trial-and-error thing?
TIA
r "all" or should I try to catch these by a
judiciously chosen log_min_duration_statement ?
TIA
How can I move pg_xlog to another drive on Windows? In Linux I can use a
symlink, but how do I do that on Windows?
Does PostgreSQL lock the entire row in a table if I update only 1
column?
Hi,
I have a table with a file size of 400 MB and an index of 100 MB. Does
PostgreSQL take the file sizes of both the table and the index into
account when determining whether to do a table or an index scan?
TIA
Cost-Based Vacuum Delay settings?
vacuum_cost_delay = 200
vacuum_cost_page_hit = 6
#vacuum_cost_page_miss = 10     # 0-10000 credits
#vacuum_cost_page_dirty = 20    # 0-10000 credits
vacuum_cost_limit = 100
www.powerpostgresql.com/PerfList/) which
mentioned a tripling of the execution time. Not a change from
8201819 ms to 17729 ms.
On Tue, 2007-05-29 at 21:43 +0100, Dave Page wrote:
> Cliff, Jason or Rob era? Could be important...
Cliff and Jason.
Rob is in my Ozzy collection ;-)
disappointing,
especially the write performance of the card, which is extremely poor.
Reading is OK.
runtime: 7941.095 ms
I have indices on:
fki_customers_addresses
customer.lastname (both lastname and trim(uppercase(lastname)))
addresses.city (both city and trim(uppercase(city)))
I
ses of
which 158003 meet the condition
> i would try to create index test on customers(contactAddress,
> trim(uppercase(lastname)));
> or with another ordering of the fields.
>
> try this - create the index, run ANALYZE on the customers table, and
> recheck the explain.
> then try t
number of clients: 10
number of transactions per client: 150
number of transactions actually processed: 1500/1500
tps = 730.651970 (including connections establishing)
tps = 748.538852 (excluding connections establishing)
TIA
Hi Michael,
Thanks for your response.
Michael Fuhr wrote:
> On Mon, Mar 06, 2006 at 04:29:49PM +0100, Joost Kraaijeveld wrote:
>> Below are some results of running pgbench, run on a machine that
>> is doing nothing else than running PostgreSQL with pgbench. The
>> stra
ark) is > 40 MB/sec. My primary goal is to understand the differences
(and then sue the guilty ones ;-)), and then maybe either learn to live
with it or find a solution. The number of write operations/sec during
the update is ~2000/sec. I suspect that the RAID card cannot handle a
lot of small
tabse :-(. Maybe I should invest in other hardware or
re-arrange my RAID5 into a RAID10 (or 50???).
, 5 disk array:
/dev/sdc with the database (and very safe, my MP3 collection ;-))
As I wrote in one of my posts to Michael, I suspect that the card is not
handling the amount of write operations as well as I expected. I wonder if
anyone else sees the same characteristics with this kind of
e to either
> setup RAID10 or if you don't care about the data, just go to RAID0.
Because it is just my development machine I think I will opt for the
latter option. More disk space left.
onal
> speed.
Could you explain the calculation? Why should the number of transactions
be related to the rotational speed of the disk, without saying anything
about the number of bytes per rotation?
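The usual reasoning (my paraphrase; it is not spelled out in the quoted fragment) is that with the drive's write cache off, each commit's fsync must wait until the WAL sector physically passes under the head, which happens once per rotation. The byte count per rotation is irrelevant because commit latency, not bandwidth, limits the rate:

```shell
# Rotational ceiling for a single client committing serially:
# at most one fsync'd commit per platter rotation.
awk 'BEGIN {
    rpm = 7200                          # nominal spindle speed (assumption)
    printf "%.0f rotations/sec => ~%.0f commits/sec ceiling\n", rpm/60, rpm/60
}'
```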
On Fri, 2006-03-10 at 23:57 -0800, David Lang wrote:
> On Sat, 11 Mar 2006, Joost Kraaijeveld wrote:
>
> > On Fri, 2006-03-10 at 13:40 +, Richard Huxton wrote:
> >> Your ATA disk is lying about disk caching being turned off. Assuming
> >> each insert is in a s
nd option, of course.
I assume that for PostgreSQL "written to disk" means after fsync has
returned successfully. In practice that could very well mean that the
data is still in a cache somewhere (controller or hard disk, no longer
in the OS; see also the fsync man page).
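Whether the drive's own write cache is on can be checked with hdparm (assuming an ATA/SATA disk at /dev/sda; root required). With the cache off, a successful fsync means the data actually reached the platter:

```shell
hdparm -W /dev/sda        # show the current write-cache setting
# hdparm -W0 /dev/sda     # disable the write cache (slower but safe)
```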
On Sat, 2006-03-11 at 11:59 -0500, Tom Lane wrote:
> Joost Kraaijeveld <[EMAIL PROTECTED]> writes:
> > I assume that for PostgreSQL "written to disk" is after fsync returned
> > successfully. In practice that could very well mean that the data is
> > stil
Filter: (lastname ~~* 'Boonk'::text)
Total runtime: 18404.883 ms
Both tables are freshly vacuumed and analyzed.
Why the difference, and can I influence the result so that the first
query plan (which is the fastest) is actually used in both cases? (I
would expect that t
If so, changing it
to 900 did not make any difference (PostgreSQL restarted, vacuumed
analysed etc).
What can I do to find the bottleneck (and eventually make it faster)?
s a little bit...(do you really need to view all 80k
> records at once?).
The application is built in Clarion, a 4GL environment. We do not have
any influence over the queries it generates and executes.
h the data)?
PGADMIN:
1. What does the "Total query runtime" really mean? (It was my understanding that it
was the time the database needs to collect the data, without any data transfer).
2. What does the "Data retrieval runtime" really mean? (Is this including the filling
o
select ordernummer from orders a
QUERY PLAN
Seq Scan on orders a (cost=0.00..46808.79 rows=1104379 width=4)
Running time: 98 seconds
B
ON A.Klantnummer=B.Klantnummer ORDER BY A.klantnummer;
fetch 100 in SQL_CUR01;
declare SQL_CUR02 cursor for
SELECT A.ordernummer, B.klantnummer FROM "orders" A LEFT OUTER JOIN "klt_alg" B
ON A.Klantnummer=B.Klantnummer ORDER BY A.klantnummer desc;
fetch 100 in SQL_CUR02;
co
same large difference: 122494 ms vs. 1297 ms).
(for the query plans see below).
Can I, without changing the SQL (because it is generated by a tool) or
explicitly setting "set enable_seqscan = off" for this query, trick
PostgreSQL into taking the fast variant of the query plan?
TIA
with
orders.
I have tested, and if one really wants the whole table, the query with
"set enable_seqscan = on" lasts 137 secs and the query with "set
enable_seqscan = off" lasts 473 secs, so (alas) the planner is right.
I sure would like to have ISAM like behaviour once in a whil
ated MS the query will take, what
are the others)?
1. I assume that (cost=41557.43..110069.51 rows=1100836 width=12) is the
estimated cost and (actual time=21263.858..42845.158 rows=1104380 loops=1) the
actual cost. Is the difference acceptable?
2. If not, what can I do about it?
ate parameters to try to line up estimates and
> reality a bit better.
If I really do a "select *" and display the result, the planner is right
(tested with "set enable_seqscan = off" and "set enable_seqscan = on").
ted to the fetch then I support the proposal. A bit late, I presume.
Hi all,
Is the number of rows in explain the number of rows that is expected to be
visited or retrieved?
1 hour (after which I cancelled the query).
My questions are:
- Is this to be expected?
- Especially as the query over just 1 join takes 32 secs? (on f.word =
s.word omitting everything for n )
- Why does explain say it takes "7876150720 rows"?
- Is there a way to rephrase the q
On Mon, 2011-11-28 at 17:42 +0100, Joost Kraaijeveld wrote:
> - Is there a way to rephrase the query that makes it faster?
This query goes faster (6224 ms, but I am not sure it gives the correct
answer as the result differs from my Java program):
select word, count (word) from unique_words
wh
On Mon, 2011-11-28 at 11:05 -0600, Kevin Grittner wrote:
> Joost Kraaijeveld wrote:
>
> > This query goes faster (6224 ms, but I am not sure it gives the
> > correct answer as the result differs from my Java program):
>
> It seems clear that you want to see words which
; from unique_words
> group by word
> )
> select word, least(countf, counts, countn) from x
> where countf > 0 and counts > 0 and countn > 0
> order by word;
>
> Cranked out rather quickly and untested.
I tested it and it worked as advertised. Takes