On 17/08/2011 7:26 PM, Ogden wrote:
I am using bonnie++ to benchmark our current Postgres system (on RAID
5) with the new one we have, which I have configured with RAID 10. The
drives are the same (SAS 15K). I tried the new system with ext3 and
then XFS but the results seem really outrageous
On 17/08/2011 7:56 PM, Ogden wrote:
On Aug 17, 2011, at 1:33 PM, Gary Doades wrote:
On 17/08/2011 7:26 PM, Ogden wrote:
I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with
the new one we have, which I have configured with RAID 10. The drives are the
same (SAS 15K
On 07/12/2010 7:43 PM, Andy Colson wrote:
On 12/7/2010 1:22 PM, Justin Pitts wrote:
Also, as a fair warning: MSSQL doesn't really care about
transactions, but PG really does. Make sure all your code is properly
starting and committing transactions.
-Andy
I do not understand that
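Andy's warning above is about autocommit: many client libraries commit after every single statement by default, which in PostgreSQL means a WAL flush per statement; grouping work into explicit transactions both speeds this up and makes multi-statement changes atomic. A minimal sketch of the commit/rollback pattern, using Python's stdlib sqlite3 as a stand-in driver (a real PostgreSQL client such as psycopg2 follows the same DB-API shape):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 100)")
conn.commit()

# Both updates run in a single transaction: either both commit or,
# on error, both roll back -- the DB never sees a half-done transfer.
try:
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
    conn.commit()
except sqlite3.Error:
    conn.rollback()
    raise

balances = [row[0] for row in conn.execute("SELECT balance FROM accounts ORDER BY id")]
print(balances)  # [70, 130]
```

Code ported from MSSQL that never calls commit explicitly can appear to "lose" work or pay a per-statement commit cost under PG, which is the confusion being voiced above.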
On 07/12/2010 9:29 PM, Tom Polak wrote:
From EXPLAIN ANALYZE I can see the query ran much faster.
Nested Loop Left Join (cost=0.00..138.04 rows=1001 width=1298) (actual time=0.036..4.679 rows=1001 loops=1)
  Join Filter: (pgtemp1.state = pgtemp2.stateid)
  ->  Seq Scan on pgtemp1
On 28/03/2010 10:07 AM, Tadipathri Raghu wrote:
Hi All,
I want to shed some more light on this by analysing it further:
1. In my example I created a table with one column of type INT (which
occupies 4 bytes).
2. Initially it occupies one page of space in the file, that is 8 KB.
So, here is
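The arithmetic behind point 2 can be sketched as follows; the per-tuple and per-page overheads below are typical of PostgreSQL's 8 KB heap page layout (roughly a 24-byte page header, a 4-byte line pointer per row, and a ~24-byte tuple header, MAXALIGNed to 8) and are my assumptions for illustration, not figures quoted in the thread:

```python
# Back-of-envelope row/page arithmetic for a single-INT-column table.
# Sizes are typical for PostgreSQL on a 64-bit build and are assumptions
# for illustration, not figures from the thread.
PAGE_SIZE = 8192        # default block size (8 KB)
PAGE_HEADER = 24        # fixed page header
LINE_POINTER = 4        # per-tuple item pointer stored in the page
TUPLE_HEADER = 24       # 23-byte heap tuple header, padded to 24
INT_WIDTH = 4           # the INT column itself
MAXALIGN = 8

def align(n, a=MAXALIGN):
    """Round n up to the next multiple of a."""
    return (n + a - 1) // a * a

tuple_size = align(TUPLE_HEADER + INT_WIDTH)   # 28 -> 32 bytes per row
rows_per_page = (PAGE_SIZE - PAGE_HEADER) // (tuple_size + LINE_POINTER)
print(tuple_size, rows_per_page)
```

Under these assumptions a 4-byte column costs ~32 bytes per row on disk, and a little over 200 such rows fit per 8 KB page, which is why the table occupies a whole page from the first insert.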
On 04/01/2010 7:10 PM, Madison Kelly wrote:
Hi all,
I've got a fairly small DB (~850 MB when pg_dump'ed) running on PgSQL
v8.1.11 on a CentOS 5.3 x86_64 Xen-based virtual machine. The filesystem is
ext3 on LVM with 32 MB extents. It's about the only real resource-hungry
VM on the server.
It
On 04/01/2010 8:30 PM, Madison Kelly wrote:
Steve Crawford wrote:
Madison Kelly wrote:
Hi all,
I've got a fairly small DB...
It slows down over time and I can't seem to find a way to get the
performance to return without doing a dump and reload of the
database...
Some questions:
Is
Rajesh Kumar Mallah wrote:
Hi,
I am going to get a Dell 2950 with PERC6i with
8 * 73 15K SAS drives +
300 GB EMC SATA SAN STORAGE,
I seek suggestions from users who can share their experience with
similar hardware. I have the following specific concerns.
1. On the list I read that RAID10 function
Tom Lane wrote:
I increased the size of the test case by 10x (basically s/10/100/)
which is enough to push it into the external-sort regime. I get
amazingly stable runtimes now --- I didn't have the patience to run 100
trials, but in 30 trials I have slowest 11538 msec and fastest
Gary Doades [EMAIL PROTECTED] writes:
I think the reason I wasn't seeing performance issues with normal sort
operations is because they use work_mem, not maintenance_work_mem, which
was only set to 2048 anyway. Does that sound right?
Very probable. Do you want to test the theory by jacking
Tom Lane wrote:
Interesting. I tried your test script and got fairly close times
for all the cases on two different machines:
old HPUX machine: shortest 5800 msec, longest 7960 msec
new Fedora 4 machine: shortest 461 msec, longest 608 msec
(the HPUX machine was doing other stuff
Tom Lane wrote:
shared_buffers is unlikely to impact index build time noticeably in
recent PG releases. maintenance_work_mem would affect it a lot, though.
What setting were you using for that?
Also, I tried upping maintenance_work_mem to 65536 and it didn't make
much difference (maybe 10%
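For reference, the two settings being contrasted in this thread live in postgresql.conf; a hedged sketch with illustrative values (2048 and 65536 are the figures mentioned above; in the 8.x releases these integers are read as kilobytes):

```
# postgresql.conf -- illustrative values from the thread; units are KB in 8.x
work_mem = 2048                # per-operation memory for ordinary sorts and hashes
maintenance_work_mem = 65536   # used by CREATE INDEX, VACUUM, ALTER TABLE ADD FOREIGN KEY
```

Ordinary query sorts are bounded by work_mem, while index builds draw on maintenance_work_mem, which is why tuning one has no visible effect on the other.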
Tom Lane wrote:
I tried forcing PG to use src/port/qsort.c on the Fedora machine,
and lo and behold:
new Fedora 4 machine: shortest 434 msec, longest 8530 msec
So it sure looks like this script does expose a problem on BSD-derived
qsorts. Curiously, the case that's much the worst for
Tom Lane wrote:
So it sure looks like this script does expose a problem on BSD-derived
qsorts. Curiously, the case that's much the worst for me is the third
in the script, while the shortest time is the first case, which was slow
for Gary. So I'd venture that the *BSD code has been tweaked
Dave Cramer wrote:
I understand that but I have seen VM's crash.
This does bring up another point. Since PostgreSQL is not threaded, a
.NET PL would require a separate VM for each connection (unless you can
share the VM?). One of the Java PLs (pl-j) for Postgres has dealt
with this issue.
For
Pierre-Frédéric Caillaud wrote:
On Mon, 10 Jan 2005 12:46:01 -0500, Alex Turner [EMAIL PROTECTED] wrote:
You sir are correct! You can't use perl in MS-SQL or Oracle ;).
Can you benefit from the luminous power of Visual Basic as a pl in
MSSQL ?
The .NET Runtime will be a part of the next MS
Rosser Schwarz wrote:
while you weren't looking, Gary Doades wrote:
The .NET Runtime will be a part of the next MS SQLServer engine.
It won't be long before someone writes a procedural language binding
to PostgreSQL for Parrot [1]. That should offer us a handful or six
more languages that can
Dave Cramer wrote:
I'm curious, why do you think that's serious ? What do you really expect
to do in the stored procedure ? Anything of consequence will seriously
degrade performance if you select it in say a million rows.
I'm not sure what you mean by select it in a million rows. I would
Dave Cramer wrote:
Ok, so one use case is to select a large number of rows and do some
non-trivial operation on them.
I can see where getting the rows inside the server process (i.e. some
procedural language), thereby reducing the round-trip overhead, would be
beneficial. However, how do you deal
Randolf Richardson wrote:
I'm looking for recent performance statistics on PostgreSQL vs. Oracle
vs. Microsoft SQL Server. Recently someone has been trying to convince my
client to switch from SyBASE to Microsoft SQL Server (they originally wanted
to go with Oracle but have since fallen
Tom Lane wrote:
It's also worth noting that work_mem is temporarily set to
maintenance_work_mem, which you didn't tell us the value of:
It's left at the default. (16384).
This would be OK if that is all it used for this type of thing.
My recollection is that hash join chooses hash table
How much RAM can a single postgres backend use?
I've just loaded a moderately sized dataset into postgres and was
applying RI constraints to the tables (using pgadmin on windows). Part
way though I noticed the (single) postgres backend had shot up to using
300+ MB of my RAM!
The two tables are:
On 21 Oct 2004 at 15:50, Thomas F.O'Connell wrote:
If not, should I be REINDEXing manually, as well as VACUUMing manually
after large data imports (whether via COPY or INSERT)? Or will a VACUUM
FULL ANALYZE be enough?
It's not the vacuuming that's important here, just the analyze. If
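A hedged sketch of the pattern being recommended (the table name and file path are hypothetical; COPY's exact options vary by release):

```sql
-- Bulk load, then refresh planner statistics for the loaded table.
COPY orders FROM '/tmp/orders.dat';  -- hypothetical table and path
ANALYZE orders;                      -- cheap: updates row counts and histograms
-- VACUUM reclaims dead-row space; it is not what fixes bad plans after a load.
```

After a large import the planner's row estimates are stale, so ANALYZE (not VACUUM FULL, and usually not REINDEX) is the step that restores good plans.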
Oops, forgot to mention:
PostgreSQL 8.0 beta 2 Windows.
Thanks,
Gary.
On 8 Oct 2004 at 20:32, Gary Doades wrote:
I'm looking at one of my standard queries and have encountered some
strange performance problems.
The query below is to search for vacant staff member date/time slots
On 8 Oct 2004 at 16:04, Tom Lane wrote:
Gary Doades [EMAIL PROTECTED] writes:
If I remove the redundant clauses, the planner now estimates 1000 rows
returned from the table, not unreasonable since it has no statistics.
But *why* in that case, with *more* estimated rows does
On 31 Aug 2004 at 20:59, Jean-Max Reymond wrote:
hi,
I want to optimize the following query and avoid the seq scan on the
table article (1000 rows).
explain SELECT art_id, art_titre, art_texte, rub_titre
FROM article inner join rubrique on article.rub_id = rubrique.rub_id
On 31 Aug 2004 at 21:42, Jean-Max Reymond wrote:
- Original Message -
From: Gary Doades [EMAIL PROTECTED]
Date: Tue, 31 Aug 2004 20:21:49 +0100
Subject: Re: [PERFORM] Optimizing a request
To: [EMAIL PROTECTED]
Have you run ANALYZE on this database after creating the indexes
On 31 Aug 2004 at 22:24, Jean-Max Reymond wrote:
On Tue, 31 Aug 2004 21:16:46 +0100, Gary Doades [EMAIL PROTECTED] wrote:
I can only presume you mean 1 GB RAM. What exactly are your
settings for shared buffers and effective_cache_size?
for 1 GB RAM,
shared_buffers = 65536
On 2 Jun 2004 at 16:45, Merlin Moncure wrote:
'better' does not mean 'faster'. Win32 has a pretty decent journaling
filesystem (NTFS) and a good I/O subsystem which includes IPC. Process
management is poor compared to newer Linux kernels but this is
unimportant except in extreme cases.
Try using
select * from files_t where parent = ::int8
You have declared parent as int8, but the query will assume int4 for the
literal and may not use the index.
Also make sure you have ANALYZEd this table.
Regards,
Gary.
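For context, the cast advice above works around a planner limitation of PostgreSQL releases before 8.0: a bare integer literal was typed int4, which did not match an index on an int8 column. A hedged sketch (files_t and parent are from the quoted query; the literal value 123 is illustrative):

```sql
-- Pre-8.0: an untyped literal is int4 and can defeat an index on an int8 column.
SELECT * FROM files_t WHERE parent = 123;
-- Casting (or quoting) the literal lets the planner match the int8 index.
SELECT * FROM files_t WHERE parent = 123::int8;
```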
On 20 May 2004 at 0:07, Eugeny Balakhonov wrote:
Hello for
On 29 Apr 2004 at 19:17, Tom Lane wrote:
Josh Berkus [EMAIL PROTECTED] writes:
Certainly the fact that MSSQL is essentially a single-user database makes
things easier for them.
Our recent testing (cf the Xeon thread) says that the interlocking we
do to make the world safe for multiple
On 30 Apr 2004 at 7:26, Dennis Bjorklund wrote:
On Fri, 30 Apr 2004, Gary Doades wrote:
I should have also pointed out that MSSQL reported that same index scan
as taking 65% of the overall query time. It was just faster. The
overall query took 103ms in MSSQL.
Are your results based
On 30 Apr 2004 at 8:32, Jeff wrote:
A better comparison query may be a simple "select a from mytable
where a between foo and bar" to get an index scan. In that case it's a
straight-up, vanilla index scan. Nothing else getting in the way.
Yes, you're right and I have done this just to
On 30 Apr 2004 at 9:37, Kevin Barnard wrote:
I was always under the impression that MSSQL used leaf- and row-level
locking and therefore was not a concurrent database in the same sense
that Postgres is. It would still allow for concurrent connections and
such, but updates will get
I guess the real question is, why maintain index flags and not simply
drop the index entry altogether?
A more interesting case would be to have the backend process record
index tuples that it would invalidate (if committed), then on commit
send that list to a garbage collection process.
It's also entirely possible your indices are using inaccurate
statistical information. Have you ANALYZEd recently?
In this example the statistics don't matter. The plans used were the same for
MSSQL and Postgres. I was trying to eliminate the difference in plans
between the two, which
On 29 Apr 2004 at 13:54, Josh Berkus wrote:
Gary,
It's also quite possible that MSSQL simply has a more efficient
index-scanning implementation than we do. They've certainly had
incentive; their storage system sucks big time for random lookups and
they need those fast indexes.
(just
Having picked out an index scan as being the highest time user I
concentrated on that in this case and compared the same index scan on
MSSQL. At least MSSQL reported it as an index scan on the same index
for the same number of rows.
I should have also pointed out that MSSQL reported
I know you will shoot me down, but...
Why is there an entry in the index for a row if the row is not valid?
Wouldn't it be better for the index entry's validity to track the row's
validity? If a particular data value for a query (join, where etc.) can
be satisfied by the index entry itself this
I can understand the performance loss on non-selects from keeping the
index validity state tracking the row validity, but would that outweigh
the performance gains on selects? It depends on your mix of selects to
non-selects, I guess, but other database systems seem to imply that
keeping the index
On 5 Apr 2004 at 8:36, Josh Berkus wrote:
Point taken, though: SQL Server has done a better job of optimizing for
dumb queries. This is something that PostgreSQL needs to work on, as are
self-referential updates on large tables, which also tend to be really slow.
Mind you, in SQL
To: Gary Doades [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Saturday, April 03, 2004 1:59 PM
Subject: Re: [PERFORM] PostgreSQL and Linux 2.6 kernel.
Gary,
There are no indexes on the columns involved in the update, they are
not required for my usual select statements. This is an attempt
the information.
Regards,
Gary.
On 3 Apr 2004 at 16:52, Cott Lang wrote:
On Sat, 2004-04-03 at 03:50, Gary Doades wrote:
On 2 Apr 2004 at 22:36, [EMAIL PROTECTED] wrote:
OK, some more detail:
Before wiping 2.4 off my test box for the second time:
Perhaps I missed it, but which io scheduler
developers and administrators in what works and what
doesn't. The initial migration thus compares a tuned to an untuned version.
/Aaron
- Original Message -
From: Josh Berkus [EMAIL PROTECTED]
To: Gary Doades [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Saturday, April 03, 2004 1:59 PM
On 3 Apr 2004 at 21:23, Mike Nolan wrote:
Almost any cross dbms migration shows a drop in performance. The engine
effectively trains developers and administrators in what works and what
doesn't. The initial migration thus compares a tuned to an untuned version.
I think it is also
Actually it hasn't been my experience either. Most of my queries against
the database, large and small are either a little quicker or no real
difference. I have only really noticed big differences under stress when
memory (RAM) is being squeezed. The main winner on 2.6 seems to be
write
form.
Cheers,
Gary.
On 3 Apr 2004 at 6:50, @g v t c wrote:
Use Set Show_Plan or something of the sort in Query Analyzer. Then
run your SQL. This will change the graphical plan to a text plan
similar to PostgreSQL's, or at least something close to readable.
Gary Doades wrote:
On 2 Apr 2004 at 22
Following on from Josh's response and my previous reply on SQLServer planning.
The main problem query is this one:
SELECT VS.*,VL.TEL1,SC.CONTRACT_ID,SC.CONTRACT_REF, SC.MAX_HOURS,
SC.MIN_HOURS,
(SELECT COUNT(*) FROM TIMESHEET_DETAIL JOIN MAIN_ORDER ON
(MAIN_ORDER.ORDER_ID =
As part of my ongoing evaluation of PostgreSQL I have been doing a little stress
testing.
I thought I would share an interesting result here...
Machine spec:
500 MHz PIII
256MB RAM
old-ish IDE HD (5400RPM)
Linux 2.4.22 kernel (Mandrake 9.2)
I have PostgreSQL 7.4.1 installed and have managed to
As this could become important I will be doing more analysis, but due to
the nature of the issue and trying to keep as many factors constant as
possible, this may take some time.
Cheers,
Gary.
On 2 Apr 2004 at 1:32, Tom Lane wrote:
Gary Doades [EMAIL PROTECTED] writes:
As a test in PostgreSQL