Martin Foster wrote:
Simon Riggs wrote:
On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
[...]
I've seen this behavior before when restarting the web server during
heavy loads. Apache goes from zero connections to a solid 120,
causing PostgreSQL to spawn that many children in short order.
Hasnul Fadhly bin Hasan wrote:
Hi Bryan,
Just wondering, I ran vacuumdb but didn't get the information that you
get about the free space, even when I set the verbose option. How did
you get that?
Thanks,
Hasnul
I believe it is
VACUUM FULL ANALYZE VERBOSE;
At the very end you will get a listing
The first thing to check... Did you do a recent VACUUM ANALYZE? This
updates all the statistics. There are a number of places where it says
rows=1000, which is usually the planner's "I have no idea, let me guess 1000" default.
Also, there are a number of places where the estimates are pretty far
off. For instance:
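For illustration, a minimal sketch of refreshing and then inspecting those statistics; the table name orders is hypothetical:
-- Refresh planner statistics for one table (hypothetical name).
VACUUM ANALYZE orders;
-- Inspect the row/page estimates the planner will now use.
SELECT relname, reltuples, relpages FROM pg_class WHERE relname = 'orders';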
Pallav Kalva wrote:
Hi Everybody.
I have a table in my production database which gets updated
regularly and the stats on this table in pg_class are totally wrong.
I used to run vacuumdb on the whole database once daily, and when I
posted the same problem of wrong stats in the pg_class
Pallav Kalva wrote:
John A Meinel wrote:
Pallav Kalva wrote:
Hi Everybody.
I have a table in my production database which gets updated
regularly and the stats on this table in pg_class are totally
wrong. I used to run vacuumdb on the whole database once daily, and
when I posted the same
Dirk Lutzebaeck wrote:
Greg,
Thanks for your analysis. But I don't get anything better after bumping
the STATISTICS target from 10 to 200.
explain analyze shows that the optimizer is still way off estimating
the rows. Is this normal? It still produces a 1 GB temp file.
I simplified the query a bit, now
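A minimal sketch of raising a per-column statistics target and re-analyzing; the table and column names are hypothetical, and the thread does not show the exact statements used:
-- Raise the statistics target for one column (hypothetical names).
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 200;
-- Re-collect statistics so the new target takes effect.
ANALYZE orders;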
Alex wrote:
Hi,
we just got a new dual processor machine and I wonder if there is a
way to utilize both processors.
Our DB server is basically fully dedicated to postgres. (It's a dual
AMD with 4 GB mem.)
I have a batch job that periodically loads about 8 million records
into a table.
for this
Ken Egervari wrote:
I took John's advice and tried to work with sub-selects. I tried
this variation, which actually seems like it would make a difference
conceptually since it drives on the person table quickly. But to my
surprise, the query runs at about 375 milliseconds. I think it's
Jeremiah Jahn wrote:
I have about 5M names stored in my DB. Currently the searches are very
quick unless they are on a very common last name, i.e. SMITH. The index
is always used, but I still hit 10-20 seconds on a SMITH or Jones
search, and I average about 6 searches a second and max out at about
John Arbash Meinel wrote:
Ken wrote:
Richard,
What do you mean by summary table? Basically a cache of the query
into a table with replicated column names of all the joins? I'd
probably have to wipe out the table every minute and re-insert the
data for each carrier in the system. I'm not sure
Tom Lane wrote:
Magnus Hagander [EMAIL PROTECTED] writes:
There is. I believe QueryPerformanceCounter has sub-microsecond
resolution.
Can we just replace gettimeofday() with a version that's basically:
No, because it's also used for actual time-of-day calls. It'd be
necessary to hack
Tom Lane wrote:
John A Meinel [EMAIL PROTECTED] writes:
Dave Held wrote:
There is always clock().
My experience with clock() on win32 is that CLOCKS_PER_SEC was 1000, and
it had a resolution of 55 clocks/s. When I just did this:
The other problem is it measures process CPU
Karim Nassar wrote:
From http://www.powerpostgresql.com/PerfList/
even in a two-disk server, you can put the transaction log onto the
operating system disk and reap some benefits.
Context: I have a two disk server that is about to become dedicated to
postgresql (it's a sun v40z running gentoo
Joost Kraaijeveld wrote:
Hi all,
Is the number of rows in explain the number of rows that is expected to be
visited or retrieved?
Regards,
Joost Kraaijeveld
Askesis B.V.
Molukkenstraat 14
6524NB Nijmegen
tel: 024-3888063 / 06-51855277
fax: 024-3608416
e-mail: [EMAIL PROTECTED]
web: www.askesis.nl
Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
On Wed, Apr 06, 2005 at 06:09:37PM -0400, Tom Lane wrote:
Can anyone suggest a more general rule? Do we need for example to
consider whether the relation membership is the same in two clauses
that might be opposite sides of a range
[EMAIL PROTECTED] wrote:
Hello,
I'm just in the middle of performance tuning of our database running
on PostgreSQL, and I've several questions (I've searched the online
docs, but without success).
1) When I first use the EXPLAIN ANALYZE command, the time is much
larger than in case of
S.Thanga Prakash wrote:
hi,
I am using psql 7.1.3.
I didn't find the ANALYZE option in the EXPLAIN command.
How do I get the time taken by an SQL procedure/query?
regards,
stp..
Alex Turner wrote:
[snip]
Adding drives will not let you get lower response times than the average seek
time on your drives*. But it will let you reach that response time more often.
[snip]
I believe your assertion is fundamentally flawed. Adding more drives
will not let you reach that response
Richard van den Berg wrote:
We have a table with 1M rows that contain sessions with a start and
finish timestamps. When joining this table with a 10k table with rounded
timestamps, explain shows me sequential scans are used, and the join
takes about 6 hours (2s per seq scan on session table *
Richard van den Berg wrote:
John A Meinel wrote:
I believe the problem is that postgres doesn't recognize how restrictive
a date-range is unless it uses constants.
And it does when using BETWEEN with int for example? Impressive. :-)
select blah from du WHERE time between '2004-10-10' and '2004
Joel Fradkin wrote:
...
I would have spent more $ with Command, but he does need my database to help
me and I am not able to do that.
...
What if someone were to write an anonymization script. Something that
changes any of the data of the database, but leaves all of the
relational information. It
Shachindra Agarwal wrote:
Dear Postgres Masters:
We are using postgres 7.4 in our java application on RedHat linux. The
Java application connects to Postgres via JDBC. The application goes
through a discovery phase, during which it adds a large amount of data into
postgres. Typically, we are adding
Shachindra Agarwal wrote:
Thanks for the note. Please see my responses below:
...
We are using JDBC which supports 'inserts' and 'transactions'. We are
using both. The business logic adds one business object at a time. Each
object is added within its own transaction. Each object add results in
Joel Fradkin wrote:
I did think of something similar just loading the data tables with junk
records and I may visit that idea with Josh.
I did just do some comparisons on timing of a plain select * from tbl where
indexed column = x and it was considerably slower than both MSSQL and MYSQL,
so I am
Shoaib Burq (VPAC) wrote:
Just tried it with the following changes:
shared_buffers = 10600
work_mem = 102400
enable_seqscan = false
still no improvement
Ok here's the Plan with the enable_seqscan = false:
ausclimate=# explain ANALYZE select count(*) from getfutureausclimate;
Actually, you
Bill Chandler wrote:
Mischa,
Thanks. Yes, I understand that not having a large
enough max_fsm_pages is a problem and I think that it
is most likely the case for the client. What I wasn't
sure of was if the index bloat we're seeing is the
result of the bleeding you're talking about or
something
Shoaib Burq (VPAC) wrote:
OK ... so just to clarify... (and pardon my ignorance):
I need to increase the value of 'default_statistics_target' variable and
then run VACUUM ANALYZE, right? If so what should I choose for the
'default_statistics_target'?
BTW, I don't do any sub-selection on the
Matthew Nuzum wrote:
I have this query that takes a little over 8 min to run:
select client, max(atime) as atime from usage_access where atime >=
(select atime - '1 hour'::interval from usage_access order by atime
desc limit 1) group by client;
I think it can go a lot faster. Any suggestions on
Richard Rowell wrote:
I've ported enough of my company's database to Postgres to make
warehousing on PG a real possibility. I thought I would toss my data
migration architecture ideas out for the list to shoot apart..
1. Script on production server dumps the production database (MSSQL) to
a set
Joel Fradkin wrote:
I spent a great deal of time over the past week looking seriously at
Postgres and MYSQL.
Objectively I am not seeing that much of an improvement in speed with
MYSQL, and we have a huge investment in postgres.
So I am planning on sticking with postgres for our production database
Joel Fradkin wrote:
...
I am guessing our app is like 75% data entry and 25% reporting, but the
reporting is taking the toll SQL wise.
This was from my insert test with 15 users.
Test type: Dynamic
Simultaneous browser connections: 15
Warm up time (secs): 0
Test duration: 00:00:03:13
Test
David Roussel wrote:
COPY invokes all the same logic as INSERT on the server side
(row-exclusive locking, transaction log, updating indexes, rules).
The difference is that all the rows are inserted as a single
transaction. This reduces the number of fsync's on the xlog,
which may be a limiting
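As an illustration of the bulk-load path being described, a hedged sketch with a hypothetical table and server-side file path:
-- COPY reads the file on the server side and loads all rows in one statement,
-- so the whole load commits (and fsyncs) once rather than per row.
COPY invoices (id, client, amount) FROM '/tmp/invoices.dat';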
Jeroen van Iddekinge wrote:
Hi,
I understand that when a table contains only a few rows it is better to
do a sequential scan than an index scan. But is this also true for a table
with 99 records?
...
explain select * from tblFolders where id=90;
QUERY PLAN
Geoffrey wrote:
Mischa Sandberg wrote:
After reading the comparisons between Opteron and Xeon processors for
Linux,
I'd like to add an Opteron box to our stable of Dells and Sparcs, for
comparison.
IBM, Sun and HP have their fairly pricey Opteron systems.
The IT people are not swell about
Anjan Dave wrote:
You also want to consider any whitebox opteron system being on the
compatibility list of your storage vendor, as well as RedHat, etc. With
EMC you can file an RPQ via your sales contacts to get it approved,
though not sure how lengthy/painful that process might be, or if it's
Kim Bisgaard wrote:
Hi,
I'm having problems with the query optimizer and FULL OUTER JOIN on
PostgreSQL 7.4. I cannot get it to use my indexes with full outer joins.
I might be naive, but I think that it should be possible?
I have two BIG tables (virtually identical) with 3 NOT NULL columns
Alex Stapleton wrote:
What is the status of Postgres support for any sort of multi-machine
scaling support? What are you meant to do once you've upgraded your box
and tuned the conf files as much as you can? But your query load is
just too high for a single machine?
Upgrading stock Dell boxes (I
Adam Haberlach wrote:
I think that perhaps he was trying to avoid having to buy Big Iron at all.
With all the Opteron v. Xeon around here, and talk of $30,000 machines,
perhaps it would be worth exploring the option of buying 10 cheapass
machines for $300 each. At the moment, that $300 buys you,
Edin Kadribasic wrote:
Hi,
I have a query that is giving the optimizer (and me) great headache. When
its in the good mood the optimizer chooses Hash Left Join and the query
executes in 13ms or so, but sometimes (more and more often) it chooses
Nested Loop Left Join and the execution time goes up
Marc Mamin wrote:
Hello,
I'm not an expert, but I'll give some suggestions.
I'd like to tune Postgres for large data import (using Copy from).
I believe that COPY FROM file is supposed to be faster than COPY FROM
STDIN, but file must be available to the backend process. If you can
do it, you
Alex Turner wrote:
Ok - my common sense alarm is going off here...
There are only 6.446 billion people worldwide. 100 Billion page views
would require every person in the world to view 18 pages of yahoo
every day. Not very likely.
http://www.internetworldstats.com/stats.htm
suggests that there
Sebastian Hennebrueder wrote:
Hello,
I could not find any recommendations for the level of set statistics and
what a specific level actually means.
What is the difference between 1, 50 and 100? What is recommended for a
table or column?
Default I believe is 10. The higher the number, the more
Greg Stark wrote:
Sebastian Hennebrueder [EMAIL PROTECTED] writes:
...
Nested Loop (cost=1349.13..1435.29 rows=1 width=2541) (actual
time=1640.000..3687.000 rows=62 loops=1)
Join Filter: (inner.fid = outer.faufgaben_id)
-> Index Scan
David Parker wrote:
I just got a question from one our QA guys who is configuring a RAID 10
disk that is destined to hold a postgresql database. The disk
configuration procedure is asking him if he wants to optimize for
sequential or random access. My first thought is that random is what we
Michael Stone wrote:
On Tue, May 24, 2005 at 04:35:14PM -0700, Josh Berkus wrote:
Pretty much. There has been discussion about allowing index-only
access to frozen tables, i.e. archive partitions. But it all sort
of hinges on someone implementing it and testing
Is there any way to
SpaceBallOne wrote:
Wondering if someone could explain a pecularity for me:
We have a database which takes 1000ms to perform a certain query on.
If I pg_dump that database then create a new database (e.g. tempdb)
and upload the dump file (thus making a duplicate) then the same query
only
SpaceBallOne wrote:
What version of postgres?
8.0.2 ... but I think I've seen this before on 7.3 ...
There are a few possibilities. If you are having a lot of updates to the
table, you can get index bloat. And vacuum doesn't fix indexes. You have
to REINDEX to do that. Though REINDEX has
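For reference, the REINDEX step mentioned above as a minimal sketch with hypothetical names:
-- Rebuild every index on one table to shed index bloat.
REINDEX TABLE sessions;
-- Or rebuild a single index.
REINDEX INDEX sessions_user_id_idx;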
Jocelyn Turcotte wrote:
Hi all
I don't know if this is normal, but if so I would like to know why and
how I could do it another way other than using unions.
The only thing that *might* work is if you used an index on both keys.
So if you did:
CREATE INDEX rt_edge_start_end_node ON
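The CREATE INDEX statement is cut off in the excerpt above; a plausible completion, assuming the table is rt_edge with columns start_node and end_node (names inferred from the index name, not confirmed by the thread):
-- Composite index covering both keys (table and column names assumed).
CREATE INDEX rt_edge_start_end_node ON rt_edge (start_node, end_node);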
Martin Fandel wrote:
Hi @ all,
I'm trying to tune my postgresql-db, but I don't know if the values are
set right.
I use the following environment for the postgres-db:
# Hardware
cpu: 2x P4 3Ghz
ram: 1024MB DDR 266Mhz
partitions:
/dev/sda3 23G 9,6G 13G
Casey Allen Shobe wrote:
On Wednesday 01 June 2005 20:19, Casey Allen Shobe wrote:
...
Long-term, whenever we hit the I/O limit again, it looks like we really don't
have much of a solution except to throw more hardware (mainly lots of disks
in RAID0's) at the problem. :( Fortunately, with
Michael Stone wrote:
On Mon, Jun 06, 2005 at 10:08:23AM -0500, John A Meinel wrote:
I don't know if you can do it, but it would be nice to see this be 1
RAID1 for OS, 1 RAID10 for pg_xlog,
That's probably overkill--it's a relatively small sequential-write
partition with really small
Marty Scholes wrote:
Has anyone run Postgres with software RAID or LVM on a production box?
What has been your experience?
Yes, we have run Pg for a couple of years with software LVM (mirroring)
against two hardware RAID5 arrays. We host a production Sun box that
runs 24/7.
My experience:
Neil Conway wrote:
Tom Arthurs wrote:
Yes, shared buffers in postgres are not used for caching
Shared buffers in Postgres _are_ used for caching, they just form a
secondary cache on top of the kernel's IO cache. Postgres does IO
through the filesystem, which is then cached by the
Clark Slater wrote:
Hi-
Would someone please enlighten me as
to why I'm not seeing a faster execution
time on the simple scenario below?
there are 412,485 rows in the table and the
query matches on 132,528 rows, taking
almost a minute to execute. vacuum
analyze was just run.
Well,
Kevin Grittner wrote:
It sure would be nice if the optimizer would consider that it had the
leeway to add any column which was restricted to a single value to any
point in the ORDER BY clause. Without that, the application programmer
has to know what indexes are on the table, rather than being
Alex Stapleton wrote:
Oh, we are running 7.4.2 btw. And our random_page_cost = 1
Which is only correct if your entire db fits into memory. Also, try
updating to a later 7.4 version if at all possible.
On 13 Jun 2005, at 14:02, Alex Stapleton wrote:
We have two indexes like so
Veikko Mäkinen wrote:
Hey,
How does Postgres (8.0.x) buffer changes to a database within a
transaction? I need to insert/update more than a thousand rows (maybe
even more than 1 rows, ~100 bytes/row) in a table but the changes
must not be visible to other users/transactions before every
Yves Vindevogel wrote:
Hi,
I have a very simple query on a big table. When I issue a limit
and/or offset clause, the query is not using the index.
Can anyone explain this to me?
You didn't give enough information. What does your index look like that
you are expecting it to use?
Generally,
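A hedged example of the kind of index/ORDER BY pairing under which LIMIT and OFFSET can use an index; the names are hypothetical and whether it applies here depends on the actual table and sort order:
-- An index that matches the ORDER BY lets the planner walk it and stop early.
CREATE INDEX printjobs_eventdate_idx ON printjobs (eventdate);
SELECT * FROM printjobs ORDER BY eventdate LIMIT 25 OFFSET 0;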
Amit V Shah wrote:
After I sent out this email, I found this article from google
http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html
Looks like we can control when the views refresh... I am still kind of
confused, and would appreciate help !!
The create/drop table
Yves Vindevogel wrote:
rvponp=# explain analyze select * from tblPrintjobs order by
loginuser, desceventdate, desceventtime ;
QUERY PLAN
Sort
Jone C wrote:
On second thought... Does a VACUUM FULL help? If so, you might want to
increase your FSM settings.
Thank you for the reply, sorry for delay I was on holiday.
I tried that; it had no effect. I benchmarked 2x before, performed
VACUUM FULL on the table in question post inserts,
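The FSM settings referred to in the excerpt above live in postgresql.conf; the values below are illustrative guesses, not recommendations from the thread, and changing them requires a server restart on the versions being discussed:
# postgresql.conf -- the free space map must be able to track dead tuples between vacuums
max_fsm_pages = 200000
max_fsm_relations = 1000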
Yves Vindevogel wrote:
Hi,
I have another question regarding indexes.
I have a table with a lot of indexes on it. Those are needed to
perform my searches.
Once a day, a bunch of records is inserted in my table.
Say, my table has 1.000.000 records and I add 10.000 records (1% new)
What
Yves Vindevogel wrote:
And, after let's say a week, would that index still be optimal or
would it be a good idea to drop it in the weekend and recreate it.
It depends a little bit on the postgres version you are using. If you
are only ever adding to the table, and you are not updating it or
Yves Vindevogel wrote:
I only add records, and most of the values are random,
except the columns for dates.
I doubt that you would need to recreate indexes. That really only needs
to be done in pathological cases, most of which have been fixed in the
latest postgres.
If you are only
Puddle wrote:
Hello, I'm a Sun Solaris sys admin for a start-up
company. I've got the UNIX background, but now I'm
having to learn PostgreSQL to support it on our
servers :)
Server Background:
Solaris 10 x86
PostgreSQL 8.0.3
Dell PowerEdge 2650 w/4gb ram.
This is running JBoss/Apache as well
Yves Vindevogel wrote:
Hi again all,
My queries are now optimised. They all use the indexes like they should.
However, there's still a slight problem when I issue the offset clause.
We have a table that contains 600.000 records
We display them by 25 in the webpage.
So, when I want the last
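One commonly suggested alternative to a large OFFSET (not necessarily the fix adopted in this thread) is to remember the last key displayed and continue from there; a sketch with hypothetical names:
-- First page: 25 rows in key order.
SELECT * FROM documents ORDER BY id LIMIT 25;
-- Later pages: start after the last id already shown, instead of a huge
-- OFFSET, which has to walk past all earlier rows.
SELECT * FROM documents WHERE id > 12345 ORDER BY id LIMIT 25;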
Tobias Brox wrote:
[EMAIL PROTECTED] - Tue at 08:33:58PM +0200]
I use FreeBSD 4.11 with PostgreSQL 7.3.8.
(...)
database=> explain select date_trunc('hour', time), count(*) as total from
test where p1=53 and time > now() - interval '24 hours' group by
date_trunc order by date_trunc ;
Merlin Moncure wrote:
I need a fast way (sql only preferred) to solve the following problem:
I need the smallest integer that is greater than zero that is not in the
column of a table. In other words, if an 'id' column has values
1,2,3,4,6 and 7, I need a query that returns the value of 5.
Merlin Moncure wrote:
Not so bad. Try something like this:
SELECT min(id+1) as id_new FROM table
WHERE (id+1) NOT IN (SELECT id FROM table);
Now, this probably requires a sequential scan, but I'm not sure how you
can get around that.
Maybe if you got trickier and did some ordering
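One formulation that can avoid the NOT IN sub-select (offered as a sketch, not necessarily the index-scan rewrite mentioned later in this listing) is a correlated NOT EXISTS over the same table, assuming an index on id and a hypothetical table name tbl:
-- Smallest id whose successor is missing; like the query above, it assumes id 1 exists.
SELECT min(t1.id + 1) AS id_new
FROM tbl t1
WHERE NOT EXISTS (SELECT 1 FROM tbl t2 WHERE t2.id = t1.id + 1);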
John A Meinel wrote:
Merlin Moncure wrote:
I need a fast way (sql only preferred) to solve the following problem:
I need the smallest integer that is greater than zero that is not in the
column of a table. In other words, if an 'id' column has values
1,2,3,4,6 and 7, I need a query
Merlin Moncure wrote:
John Meinel wrote:
See my follow up post, which enables an index scan. On my system with
90k rows, it takes no apparent time.
(0.000ms)
John
=:-
Confirmed. Hats off to you, the above is some really wicked querying.
IIRC I posted the same question several months ago
Merlin Moncure wrote:
On Tue, Jun 28, 2005 at 12:02:09 -0400,
Merlin Moncure [EMAIL PROTECTED] wrote:
Confirmed. Hats off to you, the above is some really wicked querying.
IIRC I posted the same question several months ago with no response
and
had given up on it. I think your solution
Sam Mason wrote:
Hi,
I've just been referred here after a conversation on IRC, and everybody
seemed to think I've stumbled upon some strangeness.
The planner (in PG version 8.0.2) is choosing what it thinks is a more
expensive plan. I've got a table of animals (about 3M rows) and their
Emil Briggs wrote:
I'm working with an application where the database is entirely resident in RAM
(the server is a quad opteron with 16GBytes of memory). It's a web
application and handles a high volume of queries. The planner seems to be
generating poor plans for some of our queries which I
Emil Briggs wrote:
I just mentioned random_page_cost, but you should also tune
effective_cache_size, since that is effectively most of your RAM. It
depends what else is going on in the system, but setting it as high as
say 12-14GB is probably reasonable if it is a dedicated machine. With
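For reference, a sketch of how those two settings might look in postgresql.conf for a machine of that size; the values are illustrative, not taken from the thread, and on 7.4/8.0 effective_cache_size is given in 8 kB pages:
# postgresql.conf (illustrative values for a dedicated ~16 GB server)
effective_cache_size = 1572864   # roughly 12 GB expressed as 8 kB pages
random_page_cost = 2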
Niccolo Rigacci wrote:
Hi to all,
I have a performace problem with the following query:
BEGIN;
DECLARE mycursor BINARY CURSOR FOR
SELECT
toponimo,
wpt
FROM wpt_comuni_view
WHERE (
wpt &&
setSRID('BOX3D(4.83 36, 20.16 47.5)'::BOX3D, 4326)
Dario Pudlo wrote:
(first of all, sorry for my English)
Hi.
- Does left join restrict the order in which the planner must join
tables? I've read about join, but I'm not sure about left join...
- If so: Can I avoid this behavior? I mean, make the planner resolve the
query, using
Chris Travers wrote:
John A Meinel wrote:
jobapply wrote:
The 2 queries are almost the same, but ORDER BY x||t is FASTER than ORDER
BY x.
How can that be possible?
Btw: x and x||t are same ordered
phoeniks= explain analyze SELECT * FROM test WHERE i20 ORDER BY x
|| t
jobapply wrote:
The 2 queries are almost the same, but ORDER BY x||t is FASTER than ORDER BY x.
How can that be possible?
Btw: x and x||t are same ordered
phoeniks= explain analyze SELECT * FROM test WHERE i20 ORDER BY x || t;
QUERY PLAN
Dan Harris wrote:
I'm trying to improve the speed of this query:
explain select recordtext from eventactivity inner join ( select
incidentid from k_r where id = 94 ) a using ( incidentid ) inner join (
select incidentid from k_b where id = 107 ) b using ( incidentid );
You might try giving
Dan Harris wrote:
On Jul 14, 2005, at 9:42 AM, John A Meinel wrote:
...
Did you try doing this to see how good the planners selectivity
estimates are?
Well, postgres is estimating around 500 rows each, is that way off? Try
just doing:
EXPLAIN ANALYZE SELECT incidentid FROM k_b WHERE id
Dan Harris wrote:
On Jul 14, 2005, at 9:42 AM, John A Meinel wrote:
You might try giving it a little bit more freedom with:
EXPLAIN ANALYZE
SELECT recordtext FROM eventactivity, k_r, k_b
WHERE eventactivity.incidentid = k_r.incidentid
AND eventactivity.incidentid = k_b.incidentid
Tom Lane wrote:
John A Meinel [EMAIL PROTECTED] writes:
What I don't understand is that the planner is actually estimating that
joining against the new table is going to *increase* the number of
returned rows.
It evidently thinks that incidentid in the k_r table is pretty
nonunique. We
Alison Winters wrote:
Hi,
Our application requires a number of processes to select and update rows
from a very small (10 rows) Postgres table on a regular and frequent
basis. These processes often run for weeks at a time, but over the
space of a few days we find that updates start getting
Dan Harris wrote:
On Jul 14, 2005, at 7:15 PM, John A Meinel wrote:
Is the distribution of your rows uneven? Meaning do you have more rows
with a later id than an earlier one?
There are definitely some id's that will have many times more than the
others. If I group and count them
Dan Harris wrote:
On Jul 14, 2005, at 10:12 PM, John A Meinel wrote:
My biggest question is why the planner thinks the Nested Loop would be
so expensive.
Have you tuned any of the parameters? It seems like something is out of
whack. (cpu_tuple_cost, random_page_cost, etc...)
here's
Oliver Crosby wrote:
Hi,
I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.
Running scripts locally, it takes about 1.5x longer than mysql, and the
load on the server is only about 21%.
I upped the sort_mem to 8192 (kB), and shared_buffers and
effective_cache_size to
Dirk Lutzebäck wrote:
Richard Huxton wrote:
Dirk Lutzebäck wrote:
Hi,
I do not understand the following explain output (pgsql 8.0.3):
explain analyze
select b.e from b, d
where b.r=516081780 and b.c=513652057 and b.e=d.e;
QUERY
Chris Isaacson wrote:
I need COPY via libpqxx to insert millions of rows into two tables. One
table has roughly half as many rows and requires half the storage. In
production, the largest table will grow by ~30M rows/day. To test the
COPY performance I split my transactions into 10,000
Tomeh, Husam wrote:
Nothing was running except the job. The server did not look stressed out
looking at top and vmstat. We have seen slower query performance when
performing load tests, so I run the re-index on all application indexes
and then issue a full vacuum. I ran the same thing on a
I saw a review of a relatively inexpensive RAM disk over at
anandtech.com, the Gigabyte i-RAM
http://www.anandtech.com/storage/showdoc.aspx?i=2480
Basically, it is a PCI card, which takes standard DDR RAM, and has a
SATA port on it, so that to the system, it looks like a normal SATA drive.
Dan Harris wrote:
I am working on a process that will be inserting tens of millions of rows
and need this to be as quick as possible.
The catch is that for each row I could potentially insert, I need to
look and see if the relationship is already there to prevent multiple
entries. Currently
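One common pattern for that kind of existence check during bulk loads, shown as a sketch with hypothetical names rather than the approach settled on in the thread, is to stage the batch and insert only the missing rows in a single statement:
-- Stage the incoming batch (e.g. via COPY into the temp table), then
-- insert only pairs that are not already present.
CREATE TEMP TABLE staging_relationship (parent_id int, child_id int);
INSERT INTO relationship (parent_id, child_id)
SELECT DISTINCT s.parent_id, s.child_id
FROM staging_relationship s
WHERE NOT EXISTS (
    SELECT 1 FROM relationship r
    WHERE r.parent_id = s.parent_id
      AND r.child_id = s.child_id
);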
Luke Lonergan wrote:
Yup - interesting and very niche product - it seems like its only obvious
application is for the PostgreSQL WAL problem :-)
Well, you could do it for any journaled system (XFS, JFS, ext3, reiserfs).
But yes, it seems specifically designed for a battery backed journal.
Alex Turner wrote:
Also seems pretty silly to put it on a regular SATA connection, when
all that can manage is 150MB/sec. If you made it connect directly
to 66/64-bit PCI then it could actually _use_ the speed of the RAM, not
to mention PCI-X.
Alex Turner
NetEconomist
Well, the whole
Matthew Nuzum wrote:
On 7/26/05, Dan Harris [EMAIL PROTECTED] wrote:
I am working on a process that will be inserting tens of millions of rows
and need this to be as quick as possible.
The catch is that for each row I could potentially insert, I need to
look and see if the relationship is already
Karim Nassar wrote:
I ran into a situation today maintaining someone else's code where the
total time of running 2 queries seems to be faster than 1. The original code
was split into two queries. I thought about joining them, but
considering the intelligence of my predecessor, I wanted to test it.
Lane Van Ingen wrote:
I have in my possession some performance tuning documents authored by Bruce
Momjian, Josh Berkus, and others. They give good information on utilities to
use (like ipcs, sar, vmstat, etc) to evaluate disk, memory, etc. performance
on Unix-based systems.
Problem is, I
Matthew Schumacher wrote:
Okay,
Here is the status of the SA updates and a question:
Michael got SA changed to pass an array of tokens to the proc so right
there we gained a ton of performance due to connections and transactions
being grouped into one per email instead of one per token.
Tom Lane wrote:
Matthew Schumacher [EMAIL PROTECTED] writes:
for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1) LOOP
    _token := intokenary[i];
    INSERT INTO bayes_token_tmp VALUES (_token);
END LOOP;

UPDATE bayes_token
SET spam_count =
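One way the per-element loop above is sometimes collapsed into a single statement, assuming PostgreSQL 8.0 or later for generate_series(); a sketch, not the rewrite adopted in the thread:
-- Insert every element of the intokenary array with one INSERT ... SELECT.
INSERT INTO bayes_token_tmp
SELECT intokenary[i]
FROM generate_series(array_lower(intokenary, 1),
                     array_upper(intokenary, 1)) AS i;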
1 - 100 of 167 matches