I like their approach... DDR RAM + RAID sanity backup + super reliable
power system. Their prices are on Jupiter (and I don't mean Jupiter,
FL) but hopefully there will be some competition and the inevitable
Nothing unique to them. I have a 4 year old SSD from a now out-of-business
On 3/21/06, Amit Soni [EMAIL PROTECTED] wrote:
I want to compare performance of postgresql database with some other
database.
Somebody must have done some performance testing.
Can you please share that data (performance figures) with me? And if possible,
please share the procedure also, that how
On 3/28/06, Greg Quinn [EMAIL PROTECTED] wrote:
I am using the OleDb connection driver. In my .NET application, I populate
3000 records into the table to test PostgreSQL's speed. It takes about 3-4
seconds.
have you tried:
1. npgsql .net data provider
2. odbc ado.net bridge
merlin
On 3/28/06, Jim C. Nasby [EMAIL PROTECTED] wrote:
Heh, too quick on the send button...
On Tue, Mar 28, 2006 at 09:42:51PM +0200, PFC wrote:
Actually, it's entirely possible to do stuff like web counters, you just
want to do it differently in PostgreSQL. Simply insert into a table
every time
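The insert-per-hit idea can be sketched like this (table and column names are my invention, not from the thread):

```sql
-- one row per page view; inserts are cheap and don't contend
CREATE TABLE hits (
    page   text        NOT NULL,
    hit_at timestamptz NOT NULL DEFAULT now()
);

INSERT INTO hits (page) VALUES ('/index.html');

-- read the counter by aggregating, or roll it up periodically
SELECT count(*) FROM hits WHERE page = '/index.html';
```

The point is to avoid updating a single counter row on every hit, which serializes writers and accumulates dead tuples.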
On 3/29/06, Greg Quinn [EMAIL PROTECTED] wrote:
how many rows does it return ? a few, or a lot ?
3000 Rows - 7 seconds - very slow
Which client library may have a problem? I am using OleDb, though haven't
tried the .NET connector yet.
esilo=# create temp table use_npgsql as select v,
On 3/31/06, Magnus Hagander [EMAIL PROTECTED] wrote:
This is a blatant thread steal... but here we go...
Do people have any opinions on the pgsql driver?
I believe so. I've been using it for a long time with zero problems.
While I don't use many of the exotic features in it, I doubt most
On 4/7/06, Charles A. Landemaine [EMAIL PROTECTED] wrote:
I have a web server with PostgreSQL and RHEL. It hosts a search
engine, and each time some one makes a query, it uses the HDD Raid
array. The DB is not very big, it is less than a GB. I plan to add
more RAM anyway.
What I'd like to do
pdb=# explain analyze SELECT sdate, stime, rbts from lan WHERE (
( bname = 'pluto' ) AND ( cno = 17 ) AND ( pno = 1 ) AND ( ( sdate
&gt;= '2004-07-21' ) AND ( sdate &lt;= '2004-07-21' ) ) ) ORDER BY sdate, stime
;
this query would benefit from an index on
(bname, cno, pno, sdate)
create index
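Spelled out as a statement (the leading column is bname, the column that 'pluto' is a value of; the index name is invented):

```sql
CREATE INDEX lan_bname_cno_pno_sdate_idx
    ON lan (bname, cno, pno, sdate);
```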
On 4/11/06, Simon Dale [EMAIL PROTECTED] wrote:
I'm trying to evaluate PostgreSQL as a database that will have to store a
high volume of data and access that data frequently. One of the features on
our wish list is to be able to use stored procedures to access the data and
I was wondering if
On 4/11/06, Alvaro Herrera [EMAIL PROTECTED] wrote:
Merlin Moncure wrote:
pl/pgsql procedures are a very thin layer over the query engine.
Generally, they run about the same speed as SQL but you are not making
apples to apples comparison. One of the few but annoying limitations
of pl
On 4/11/06, Rodrigo Sakai [EMAIL PROTECTED] wrote:
Hi,
I think this is an old question, but I want to know if it is really
worthwhile to not create some foreign keys and to deal with referential
integrity at the application level?
Specifically, the system we are developing is a
On 4/12/06, Josh Berkus josh@agliodbs.com wrote:
People,
Lately I find people are not so receptive to VxFS, and Sun is promoting
ZFS, and we don't have a reasonable near term option for Raw IO in
Postgres, so we need to work to find a reasonable path for Solaris users
IMO. The long
SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900;
you need to try and solve the problem without using 'offset'. you could do:
BEGIN;
DECLARE crs cursor FOR SELECT * FROM wan ORDER BY stime;
FETCH ABSOLUTE 81900 in crs;
FETCH 49 in crs;
CLOSE crs;
COMMIT;
this may be a bit faster but will
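Another way to avoid OFFSET entirely (my suggestion, not from the thread) is keyset pagination: remember the last stime of the previous page and filter on it, assuming stime is unique enough to page on:

```sql
-- first page
SELECT * FROM wan ORDER BY stime LIMIT 50;

-- next page: the literal is a placeholder for the last stime
-- value seen on the previous page
SELECT * FROM wan
WHERE stime > '2004-07-21 12:34:56'
ORDER BY stime
LIMIT 50;
```

With an index on stime this stays fast no matter how deep you page, whereas OFFSET always scans and discards the skipped rows.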
been doing a lot of pgsql/mysql performance testing lately, and there
is one query that mysql does much better than pgsql...and I see it a
lot in normal development:
select a,b,max(c) from t group by a,b;
t has an index on a,b,c.
in my sample case with cardinality of 1000 for a, 2000 for b,
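For reference, the usual PostgreSQL rewrite for this pattern is DISTINCT ON (my sketch, not from the thread):

```sql
-- equivalent to: select a, b, max(c) from t group by a, b;
SELECT DISTINCT ON (a, b) a, b, c
FROM t
ORDER BY a, b, c DESC;
```

It still visits every row, so it won't match MySQL's loose index scan, but it avoids a separate aggregation step; note the mixed ASC/DESC ordering generally needs a sort unless a matching index exists.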
On 5/25/06, Bruno Wolff III [EMAIL PROTECTED] wrote:
On Thu, May 25, 2006 at 16:07:19 -0400,
Merlin Moncure [EMAIL PROTECTED] wrote:
been doing a lot of pgsql/mysql performance testing lately, and there
is one query that mysql does much better than pgsql...and I see it a
lot in normal
On 5/25/06, Steinar H. Gunderson [EMAIL PROTECTED] wrote:
On Thu, May 25, 2006 at 04:07:19PM -0400, Merlin Moncure wrote:
been doing a lot of pgsql/mysql performance testing lately, and there
is one query that mysql does much better than pgsql...and I see it a
lot in normal development
On 5/25/06, Tom Lane [EMAIL PROTECTED] wrote:
Tom Lane [EMAIL PROTECTED] writes:
Merlin Moncure [EMAIL PROTECTED] writes:
recent versions of mysql do much better, returning same set in 20ms.
Are you sure you measured that right? I tried to duplicate this using
mysql 5.0.21, and I see
On 5/26/06, Tom Lane [EMAIL PROTECTED] wrote:
Well, this bears looking into, because I couldn't get anywhere near 20ms
with mysql. I was using a dual Xeon 2.8GHz machine which ought to be
did you have a key on a,b,c? if I include unimportant unkeyed field d
the query time drops from 70ms to ~
On 5/26/06, Tom Lane [EMAIL PROTECTED] wrote:
Merlin Moncure [EMAIL PROTECTED] writes:
did you have a key on a,b,c?
Yeah, I did
create index t1i on t1 (a,b,c);
Do I need to use some other syntax to get it to work?
can't think of anything, I'm running completely stock, did you do
On 5/26/06, Tom Lane [EMAIL PROTECTED] wrote:
mysql select user_id, acc_id, max(sample_date) from usage_samples group by
1,2
939 rows in set (0.07 sec)
0.07 seconds is not impossibly out of line with my result of 0.15 sec,
maybe your machine is just 2X faster than mine. This is a 2.8GHz
On 6/16/06, Mikael Carneholm [EMAIL PROTECTED] wrote:
We've seen similar results with our EMC CX200 (fully equipped) when
compared to a single (1) SCSI disk machine. For sequential reads/writes
(import, export, updates on 5-10 30M+ row tables), performance is
downright awful. A big DB update
On 6/20/06, Merkel Marcel (CR/AEM4) [EMAIL PROTECTED] wrote:
I use libpqxx to access the database. This might be another bottleneck, but
I assume my query and table setup is the bigger bottleneck. Would it make
sense to fetch the whole array ? (Select map from table where … and parse
the array
Not yet. I would first like to know what the time-consuming part is and
what a workaround would be. If you are sure individual columns for every
entry of the array solve the issue I will joyfully implement it. The
downside of this approach is that the array dimensions are not always the
same in my
On 7/5/06, andy rost [EMAIL PROTECTED] wrote:
fsync = on # turns forced synchronization
have you tried turning this off and measuring performance?
stats_command_string = on
I would turn this off unless you absolutely require it. It is
expensive for what it
On 7/6/06, Eugeny N Dzhurinsky [EMAIL PROTECTED] wrote:
Hello!
I have a postgresql server serving thousands of tables. Sometimes there are
queries which involve several tables.
In postgresql.conf I have these settings:
shared_buffers = 4
work_mem = 8192
maintenance_work_mem = 16384
with all these unsubscribe requests, we can only extrapolate that the
server has no serious performance issues left to solve. good work!
:-)
merlin
On 29 Jun 2006 10:00:35 -0700, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
I have an SP which has cursor iterations. I need to call another SP for
every loop iteration of the cursor. The pseudo code is as follows..
i would suggest converting your code to pl/pgsql and reposting. that
looks awfully
On 7/4/06, Luckys [EMAIL PROTECTED] wrote:
Hi all,
I got this query. I have indexes on the PropertyId and Dates columns across
all the tables, but it still takes ages to get the result. What indexes
would be proposed for this, or am I helpless?
I would suggest posting your table schemas
On 7/7/06, andy rost [EMAIL PROTECTED] wrote:
Hi Merlin,
Thanks for the input. Please see below ...
no problem!
[aside: jeff, great advice on tps determination]
fsync = on # turns forced synchronization
have you tried turning this off and measuring
On 7/26/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Hello,
Sorry for my poor English.
My problem:
I am seeing some performance problems as load increases:
a massive update of 50,000,000 records and 2,000,000 inserts with a weekly
frequency in a huge table (50,000,000+ records, ten fields, 12
On 7/27/06, Eliott [EMAIL PROTECTED] wrote:
Hi!
I hope I'm sending my question to the right list, please don't flame if it's
the wrong one.
I have noticed that while a query runs in about 1.5 seconds on an 8.x-version
PostgreSQL server, on our 7.4.13 it takes around 15-20 minutes. Since we are
On 7/31/06, Jonathan Ballet [EMAIL PROTECTED] wrote:
Hello,
I've read a lot of mails here saying how good the Opteron is with PostgreSQL,
and a lot of people seem to recommend it (instead of Xeon).
I am a huge fan of the opteron but intel certainly seems to have a
winner for workstations.
On 7/29/06, Jochem van Dieten [EMAIL PROTECTED] wrote:
Tweakers.net has done a database performance test between a Sun T2000 (8
core T1) and a Sun X4200 (2 dual core Opteron 280). The database
benchmark is developed inhouse and represents the average query pattern
from their website. It is MySQL
On 8/1/06, George Pavlov [EMAIL PROTECTED] wrote:
I am looking for some general guidelines on what is the performance
overhead of enabling point-in-time recovery (archive_command config) on
an 8.1 database. Obviously it will depend on a multitude of factors, but
some broad-brush statements
On 7/18/06, Alex Turner [EMAIL PROTECTED] wrote:
Remember when it comes to OLTP, massive serial throughput is not gonna help
you, it's low seek times, which is why people still buy 15k RPM drives, and
why you don't necessarily need a honking SAS/SATA controller which can
harness the full
On 8/3/06, Luke Lonergan [EMAIL PROTECTED] wrote:
Merlin,
moving a gigabyte around/sec on the server, attached or no,
is pretty heavy lifting on x86 hardware.
Maybe so, but we're doing 2GB/s plus on Sun/Thumper with software RAID
and 36 disks and 1GB/s on a HW RAID with 16 disks, all SATA.
On 8/7/06, Alvaro Nunes Melo [EMAIL PROTECTED] wrote:
we recently upgraded our dual Xeon Dell to a brand new Sun v40z with 4
opterons, 16GB of memory and MegaRAID with enough disks. OS is Debian
Sarge amd64, PostgreSQL is 8.0.3. on
On 8/9/06, Kenji Morishige [EMAIL PROTECTED] wrote:
I have unlimited rack space, so 2U is not the issue. The boxes are stored in
our lab for internal software tools. I'm going to research those boxes you
mention. Regarding the JBOD enclosures, are these generally just 2U or 4U
units with SCSI
On 8/10/06, Phil Cairns [EMAIL PROTECTED] wrote:
Hi all,
I have an application that uses PostgreSQL to store its data. The
application and an instance of the database have been installed in three
different locations, and none of these three locations have anything to
do with any of the others.
On 8/18/06, Magnus Hagander [EMAIL PROTECTED] wrote:
First off - very few third party tools support Debian. Debian is a
sure-fire way to have an unsupported system. Use RedHat or SuSe
(flame me all you want, it doesn't make it less true).
*cough* BS *cough*
Linux is Linux. It
On 8/24/06, Jeff Davis [EMAIL PROTECTED] wrote:
On Thu, 2006-08-24 at 09:21 -0400, Merlin Moncure wrote:
On 8/22/06, Jeff Davis [EMAIL PROTECTED] wrote:
On Tue, 2006-08-22 at 17:56 -0400, Bucky Jordan wrote:
it's not the parity, it's the seeking. Raid 5 gives you great
sequential i/o
On 8/24/06, Scott Marlowe [EMAIL PROTECTED] wrote:
On Thu, 2006-08-24 at 13:57, Merlin Moncure wrote:
On 8/24/06, Jeff Davis [EMAIL PROTECTED] wrote:
On Thu, 2006-08-24 at 09:21 -0400, Merlin Moncure wrote:
On 8/22/06, Jeff Davis [EMAIL PROTECTED] wrote:
On Tue, 2006-08-22 at 17:56
On 8/24/06, Bucky Jordan [EMAIL PROTECTED] wrote:
Here's benchmarks of RAID5x4 vs RAID10x4 on a Dell Perc5/I with 300 GB
10k RPM SAS drives. I know these are bonnie 1.9 instead of the older
version, but maybe it might still make for useful analysis of RAID5 vs.
RAID10.
-- RAID5x4
i dont
We recently picked up an ADTX SAN here and are having good results with
it. It's pretty flexible, having dual 4Gb FC controllers and also dual
SAS controllers, so you can run it as attached SAS or FC. Both have
their advantages and unfortunately I didn't have time to do much
benchmarking because
-Tyan dual-core/dual-cpu mainboard (
-One Opteron 270 2.0GHz (although our vendor gave us two for some reason)
-Chenbro 3U case (RM31212B) - OK, but not very well thought-out
-8 Seagate SATA drives (yes, we stuck with our vendor of choice, WD
Raptors may have been a better choice)
-3Ware
On 8/28/06, Christopher Browne [EMAIL PROTECTED] wrote:
On 8/28/06, Tom Lane [EMAIL PROTECTED] wrote:
Christopher Browne [EMAIL PROTECTED] writes:
On 8/28/06, Alvaro Herrera [EMAIL PROTECTED] wrote:
There's no solution short of upgrading.
That's a little too negative. There is at least
On 8/29/06, Willo van der Merwe [EMAIL PROTECTED] wrote:
and it has 743321 rows, and an explain analyze select count(*) from
property_values;
you have a number of options:
1. keep a sequence on the property values and query it. if you want
exact count you must do some clever locking however.
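A trigger-maintained counter is one concrete variant of option 1 (all names are invented; note the single counter row serializes concurrent writers, which is the locking caveat mentioned above):

```sql
CREATE TABLE property_values_count (n bigint NOT NULL);
INSERT INTO property_values_count VALUES (0);

CREATE OR REPLACE FUNCTION pv_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE property_values_count SET n = n + 1;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE property_values_count SET n = n - 1;
    END IF;
    RETURN NULL;  -- AFTER trigger; return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER pv_count AFTER INSERT OR DELETE ON property_values
    FOR EACH ROW EXECUTE PROCEDURE pv_count_trig();

-- cheap exact count:
SELECT n FROM property_values_count;
```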
On 8/30/06, Willo van der Merwe [EMAIL PROTECTED] wrote:
This was just an example. All queries have slowed down. Could it be that
I've reached some cut-off and now my disk is thrashing?
Currently the load looks like this:
Cpu0 : 96.8% us, 1.9% sy, 0.0% ni, 0.3% id, 0.0% wa, 0.0% hi, 1.0%
On 8/31/06, Cosimo Streppone [EMAIL PROTECTED] wrote:
Good morning,
- postgresql.conf, especially:
effective_cache_size (now 5000)
bgwriter_delay (500)
commit_delay/commit_siblings (default)
while these settings may help, don't expect too much. ditto shared
buffers. your
On 01 Sep 2006 19:00:52 +0200, Guillaume Cottenceau [EMAIL PROTECTED] wrote:
Hi,
I've been looking at the results from the pg_statio* tables, to
view the impact of increasing the shared buffers to increase
performance.
I think 'shared buffers' is one of the most overrated settings from a
On 9/1/06, Joshua D. Drake [EMAIL PROTECTED] wrote:
I think 'shared buffers' is one of the most overrated settings from a
performance standpoint. however you must ensure there is enough for
things the server does besides caching. It used to be a bigger deal
than it is in modern versions
On 9/7/06, Nimesh Satam [EMAIL PROTECTED] wrote:
We also noticed that the database slows down heavily at a particular
time. Can you suggest any tools which will help in diagnosing the root cause
behind the data load?
possible checkpoint? poorly formulated query? it could be any number
of
On 9/11/06, Scott Marlowe [EMAIL PROTECTED] wrote:
I'd suggest two things.
one: Get a better ERP... :) or at least one you can inject some
intelligence into, and two: upgrade to postgresql 8.1, or even 8.2 which
will be released moderately soon, and if you won't be going into
production
On 9/13/06, Tom Lane [EMAIL PROTECTED] wrote:
IIRC, with these settings PG 8.0 seemed to be about half the speed of
mysql 5.0 w/myisam, which is probably somewhere in the ballpark of the
truth for tests of this nature, ie, single query stream of fairly simple
queries. If you try
On 9/14/06, Scott Marlowe [EMAIL PROTECTED] wrote:
On Wed, 2006-09-13 at 14:36, Merlin Moncure wrote:
another small aside, I caught the sqlite people actually *detuning*
postgresql for performance by turning stats_command_string=on in
postgresql.conf. The way it was portrayed it almost
On 9/18/06, Bucky Jordan [EMAIL PROTECTED] wrote:
My question is at what point do I have to get fancy with those big
tables? From your presentation, it looks like PG can handle 1.2 billion
records or so as long as you write intelligent queries. (And normal PG
should be able to handle that,
On 9/28/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
We urgently need a major performance improvement. We are running the
PostgreSQL 8.1.4 on a Windows 2003 x64 Server on a dual processor, dual core
3.2Ghz Xeon box with 4gb RAM and a RAID (sorry, I don't know what type) disc
subsystem. Sorry
On 9/28/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
are you using the 'copy' interface?
Straightforward inserts - the import data has to be transformed, normalised and
de-duped by the import program. I imagine the copy interface is for more
straightforward data importing. These are - buy
On 9/28/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
The deduplication process requires so many programmed procedures that it
runs on the client. Most of the de-dupe lookups are not straight lookups,
but calculated ones employing fuzzy logic. This is because we cannot dictate
the format of our
On 9/29/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
For reasons I've explained elsewhere, the import process is not well suited
to breaking the data up into smaller segments. However, I'm interested in
what can be indexed. I am used to the idea that indexing only applies to
expressions that
On 10/3/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
Some very helpful people had asked that I post the troublesome code that was
generated by my import program.
I installed a SQL log feature in my import program. I have
posted samples of the SQL statements that cause the biggest delays.
On 10/3/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
Please ignore sample 1 - now that I have the logging feature, I can see that
my query generator algorithm made an error.
can you do explain analyze on the two select queries on either side of
the union separately? the subquery is correctly
On 10/4/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
can you do explain analyze on the two select queries on either side of
the union separately? the subquery is correctly written and unlikely
to be a problem (in fact, good style imo). so let's have a look at
both sides of facil query and
On 10/5/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
Hi Merlin,
Here are the results. The query returned more rows (65 vs 12) because of the
vague postal_code.
right. interestingly, the index didn't work properly anyways.
regardless, this is easily solvable but it looks like we might be
On 10/5/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
do we have a multi-column index on
facility_address(facility_id, address_id)? did you run analyze?
There is an index on facility_address on facility_id.
I didn't create an index on facility_address.address_id because I expected
joins to
On 10/6/06, Scott Marlowe [EMAIL PROTECTED] wrote:
On Fri, 2006-10-06 at 11:44, Carlo Stonebanks wrote:
This didn't work right away, but DID work after running a VACUUM FULL. In
other words, i was still stuck with a sequential scan until after the
vacuum.
I turned autovacuum off in order to
On 10/6/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
how did you determine that it is done every 500 rows? this is the
The import program pages the import table - it is currently set at 500 rows
per page. With each page, I run an ANALYZE.
right, i just wanted to make sure of something (you
On 10/8/06, Jim C. Nasby [EMAIL PROTECTED] wrote:
On Thu, Oct 05, 2006 at 09:30:45AM -0400, Merlin Moncure wrote:
I personally only use explicit joins when doing outer joins and even
then push them out as far as possible.
I used to be like that too, until I actually started using join syntax
I have two systems running 8.2beta1 getting a strange difference of
results in count(*). The query that illustrates the difference is
count(*). this is a synthetic test i use to measure a system's cpu
performance.
System A:
2.2 ghz p4 northwood, HT
win xp
vanilla sata (1 disk)
System B:
amd 64
On 10/9/06, Stephen Frost [EMAIL PROTECTED] wrote:
* Merlin Moncure ([EMAIL PROTECTED]) wrote:
explain analyze select 5000!;
A: 2.4 seconds
B: 1.8 seconds
explain analyze select count(*) from generate_series(1,50);
A: 0.85 seconds
B: 4.94 seconds
Try w/o the explain analyze. It adds
On 10/10/06, Jim C. Nasby [EMAIL PROTECTED] wrote:
Try w/o the explain analyze. It adds quite a bit of overhead and that
might be inconsistent between the systems (mainly it may have to do with
the gettimeofday() calls being implemented differently between Windows
and Linux..).
that was
On 10/12/06, Tom Lane [EMAIL PROTECTED] wrote:
[ This is off-topic for -performance, please continue the thread in
-hackers ]
This proposal seems to deliberately ignore every point that has been
made *against* doing things that way. It doesn't separate the hints
from the queries, it doesn't
On 10/15/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
Hi Merlin,
Well, I'm back. First of all, thanks for your dogged determination to help
me out - it is much appreciated. I owe you a beer or twelve.
The import has been running for a week. The import program got faster as I
tuned things. I
On 10/15/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
that contains full address data
*/
select
f.facility_id,
null as facility_address_id,
null as address_id,
f.facility_type_code,
f.name,
null as address,
f.default_city as city,
f.default_state_code as
On 10/17/06, Mario Weilguni [EMAIL PROTECTED] wrote:
On Tuesday, 17 October 2006 at 11:52, Alexander Staubo wrote:
Lastly, note that in PostgreSQL these length declarations are not
necessary:
contacto varchar(255),
fuente varchar(512),
prefijopais varchar(10)
Instead, use:
contacto
On 10/18/06, Bucky Jordan [EMAIL PROTECTED] wrote:
On 10/17/06, Rohit_Behl [EMAIL PROTECTED] wrote:
Select events.event_id, ctrl.real_name, events.tsds, events.value,
events.lds, events.correction, ctrl.type, ctrl.freq from table events,
iso_midw_control ctrl where events.obj_id =
On 10/18/06, Heikki Linnakangas [EMAIL PROTECTED] wrote:
I would suggest using setting prepareThreshold=0 in the JDBC driver
connection URL, or calling pstmt.setPrepareThreshold(0) in the
application. That tells the driver not to use server-side prepare, and
the query will be re-planned every
On 10/18/06, Tom Lane [EMAIL PROTECTED] wrote:
Merlin Moncure [EMAIL PROTECTED] writes:
this is not really a jdbc issue, just a practical problem with
prepared statements...
Specifically, that the OP is running a 7.4 backend, which was our
first venture into prepared parameterized statements
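The SQL-level equivalent of what the driver does makes the 7.4 behavior easy to see: the plan is built at PREPARE time, before any parameter value is known (statement name and value are invented; the table is from the query upthread):

```sql
PREPARE evt (integer) AS
    SELECT * FROM events WHERE obj_id = $1;  -- planned here, $1 unknown

EXECUTE evt(42);  -- runs the generic plan, whatever 42's selectivity is
DEALLOCATE evt;
```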
On 10/19/06, Jens Schipkowski [EMAIL PROTECTED] wrote:
// select finds out which one has no twin
// a twin is defined as a record with the same attr* values
// decreases speed over time until timeout by postgresql
SELECT *
FROM tbl_reg reg
WHERE register loc1 AND
idreg NOT IN
On 10/19/06, Jens Schipkowski [EMAIL PROTECTED] wrote:
On Thu, 19 Oct 2006 19:32:22 +0200, Merlin Moncure wrote:
1. your database design is the real culprit here. If you want things
to run really quickly, solve the problem there by normalizing your
schema. denormalization is the root cause of many
On 10/19/06, Ron [EMAIL PROTECTED] wrote:
Nonetheless, YMMV. The only sure way to know what is best for your
SW running on your HW under your load conditions is to test, test, test.
anybody have/know of some data on shared buffer settings on 8.1+?
merlin
On 10/21/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
Our Windows-based db server has to integrate with users that work regularly
with Access. When attempting to import users' data from Access MDB files to
PostgreSQL, we try one of two things: either import using EMS SQL Manager's
Data Import
On 10/21/06, Worky Workerson [EMAIL PROTECTED] wrote:
What is the best COPY performance that you have gotten on a normal table?
I know that this is question is almost too general, but it might help
me out a bit, or at least give me the right things to tweak. Perhaps
the question can be
On 10/20/06, Stuart Bishop [EMAIL PROTECTED] wrote:
I would like to understand what causes some of my indexes to be slower to
use than others with PostgreSQL 8.1. On a particular table, I have an int4
primary key, an indexed unique text 'name' column and a functional index of
type text. The
On 10/21/06, Alvaro Herrera [EMAIL PROTECTED] wrote:
Carlo Stonebanks wrote:
Our Windows-based db server has to integrate with users that work regularly
with Access. When attempting to import users' data from Access MDB files to
PostgreSQL, we try one of two things: either import using EMS
On 10/23/06, Worky Workerson [EMAIL PROTECTED] wrote:
The disk load is where I start to get a little fuzzy, as I haven't
played with iostat to figure out what is normal. The local drives
contain PG_DATA as well as all the log files, but there is a
tablespace on the FibreChannel SAN that contains
On 10/25/06, Worky Workerson [EMAIL PROTECTED] wrote:
I'm guessing the high bursts are checkpoints. Can you check your log
files for pg and see if you are getting warnings about checkpoint
frequency? You can get some mileage here by increasing wal files.
Nope, nothing in the log. I have
On 10/26/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
This is pretty interesting - where can I read more on this? Windows isn't
actually hanging, one single command line window is - from its behaviour, it
looks like the TCL postgresql package is waiting for pg_exec to come back
from the commit
On 2/5/05, Dirk Lutzebaeck [EMAIL PROTECTED] wrote:
here is a query which produces over 1G temp file in pgsql_tmp. This
is on pgsql 7.4.2, RHEL 3.0, XEON MP machine with 32GB RAM, 300MB
sort_mem and 320MB shared_mem.
Below is the query and results for EXPLAIN and EXPLAIN ANALYZE. All
tables
On 10/27/06, Worky Workerson [EMAIL PROTECTED] wrote:
I'm hoping that the corporate Oracle machine won't shut down my pg
projects. On total side note, if anyone knows how to best limit
Oracle's impact on a system (i.e. memory usage, etc), I'd be
interested.
rm -rf /usr/local/oracle?
merlin
On 10/28/06, Luke Lonergan [EMAIL PROTECTED] wrote:
Worky (is that your real name? :-)
On 10/27/06 12:08 PM, Worky Workerson [EMAIL PROTECTED] wrote:
Here it is, taken from a spot about halfway through a 'cat file |
psql' load, with the Oracle-is-installed-and-running caveat:
r b swpd
On 10/28/06, Simon Riggs [EMAIL PROTECTED] wrote:
On Thu, 2006-10-26 at 11:06 -0400, Merlin Moncure wrote:
On 10/26/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
This is pretty interesting - where can I read more on this? Windows isn't
actually hanging, one single command line window
On 11/6/06, Brian Hurt [EMAIL PROTECTED] wrote:
I'm having a spot of problem with out storage device vendor. Read
performance (as measured by both bonnie++ and hdparm -t) is abysmal
(~14Mbyte/sec), and we're trying to get them to fix it. Unfortunately,
they're using the fact that bonnie++ is
On 11/8/06, Markus Schaber [EMAIL PROTECTED] wrote:
Hi, Brian,
Brian Hurt wrote:
So the question is: is there an easy to install and run, read-heavy
benchmark out there that I can wave at them to get them to fix the
problem?
For sequential read performance, use dd. Most variants of dd I've
On 11/8/06, Spiegelberg, Greg [EMAIL PROTECTED] wrote:
Merlin,
I'm kinda shocked you had such a bad exp. with the AMS200. We have a
unit here hooked up to a 4-node Linux cluster with 4 databases banging
on it and we get good, consistent performance out of it. All 4 nodes can
throw 25 to 75
On 11/14/06, Cosimo Streppone [EMAIL PROTECTED] wrote:
I must say I lowered shared_buffers to 8192, as it was before.
I tried raising it to 16384, but I can't seem to find a relationship
between shared_buffers and performance level for this server.
My findings are pretty much the same here. I
On 11/15/06, AMIR FRANCO D. JOVEN [EMAIL PROTECTED] wrote:
Hi!
I'm new to PostgreSQL.
My current project uses PostgreSQL 7.3.4.
The problem is like this:
I have a table with 94 fields, and a select with only one result set in only
one client consumes about 0.86 seconds.
The client executes
On 11/15/06, Craig A. James [EMAIL PROTECTED] wrote:
Questions:
1. Any idea what happened and how I can avoid this? It's a *big* problem.
2. Why didn't the database recover? Why are there two processes
that couldn't be killed?
3. Where did the signal 9 come from? (Nobody but me
On 11/25/06, Arnau [EMAIL PROTECTED] wrote:
Hi all,
I have a table of statistics with more than 15 million rows. I'd
like to delete the oldest statistics, which can be about 7 million
rows. Which method would you recommend for doing this? I'd also be
interested in calculating some kind of
On 12/4/06, Joost Kraaijeveld [EMAIL PROTECTED] wrote:
How can I move pg_xlog to another drive on Windows? In Linux I can use a
symlink, but how do I do that on Windows?
you can possibly attempt it with junction points. good luck:
http://support.microsoft.com/kb/205524
merlin