Tom Lane wrote:
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
On Wed, Apr 06, 2005 at 06:09:37PM -0400, Tom Lane wrote:
Can anyone suggest a more general rule? Do we need for example to
consider whether the relation membership is the same in two clauses
that might be opposite sides of a range restric
[EMAIL PROTECTED] wrote:
Hello,
I'm just in the middle of performance tuning of our database running
on PostgreSQL, and I have several questions (I've searched the online
docs, but without success).
1) When I first use the EXPLAIN ANALYZE command, the time is much
larger than in the case of subsequent
S.Thanga Prakash wrote:
hi,
I am using psql 7.1.3
I didn't find the ANALYSE option in the EXPLAIN command.
How do I get the time taken by a SQL procedure/query?
regards,
stp..
Alex Turner wrote:
[snip]
Adding drives will not let you get lower response times than the average seek
time on your drives*. But it will let you reach that response time more often.
[snip]
I believe your assertion is fundamentally flawed. Adding more drives
will not let you reach that response t
Richard van den Berg wrote:
We have a table with 1M rows that contain sessions with a start and
finish timestamps. When joining this table with a 10k table with rounded
timestamps, explain shows me sequential scans are used, and the join
takes about 6 hours (2s per seq scan on session table * 1
Richard van den Berg wrote:
John A Meinel wrote:
I believe the problem is that postgres doesn't recognize how restrictive
a date-range is unless it uses constants.
And it does when using BETWEEN with int for example? Impressive. :-)
select blah from du WHERE time between '2004-10-10&
Joel Fradkin wrote:
...
I would have spent more $ with Command, but he does need my database to help
me and I am not able to do that.
...
What if someone were to write an anonymization script. Something that
changes any of the "data" of the database, but leaves all of the
relational information. It
Shachindra Agarwal wrote:
Dear Postgres Masters:
We are using postgres 7.4 in our java application on RedHat linux. The
Java application connects to Postgres via JDBC. The application goes
through a ‘discovery’ phase, wherein it adds a large amount of data into
postgres. Typically, we are adding a
Shachindra Agarwal wrote:
Thanks for the note. Please see my responses below:
...
We are using JDBC which supports 'inserts' and 'transactions'. We are
using both. The business logic adds one business object at a time. Each
object is added within its own transaction. Each object add results in 5
Joel Fradkin wrote:
I did think of something similar just loading the data tables with junk
records and I may visit that idea with Josh.
I did just do some comparisons on timing of a plain select * from tbl where
indexed column = x and it was considerably slower than both MSSQL and MYSQL,
so I am s
Shoaib Burq (VPAC) wrote:
Just tried it with the following changes:
shared_buffers = 10600
work_mem = 102400
enable_seqscan = false
still no improvement
Ok here's the Plan with the enable_seqscan = false:
ausclimate=# explain ANALYZE select count(*) from "getfutureausclimate";
Actually, you proba
Bill Chandler wrote:
Mischa,
Thanks. Yes, I understand that not having a large
enough max_fsm_pages is a problem and I think that it
is most likely the case for the client. What I wasn't
sure of was if the index bloat we're seeing is the
result of the "bleeding" you're talking about or
something
Shoaib Burq (VPAC) wrote:
OK ... so just to clarify... (and pardon my ignorance):
I need to increase the value of 'default_statistics_target' variable and
then run VACUUM ANALYZE, right? If so what should I choose for the
'default_statistics_target'?
BTW I only don't do any sub-selection on the V
Matthew Nuzum wrote:
I have this query that takes a little over 8 min to run:
select client,max(atime) as atime from usage_access where atime >=
(select atime - '1 hour'::interval from usage_access order by atime
desc limit 1) group by client;
I think it can go a lot faster. Any suggestions on impr
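The shape of the quoted query can be checked in miniature. The sketch below uses SQLite from the Python stdlib, with plain integer timestamps standing in for PostgreSQL's interval arithmetic (3600 units playing the role of '1 hour'); the table name and column names follow the post, but all data is invented:

```python
import sqlite3

# Latest access per client, restricted to the "hour" (3600 units here)
# preceding the most recent access in the whole table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage_access (client TEXT, atime INTEGER)")
conn.executemany(
    "INSERT INTO usage_access VALUES (?, ?)",
    [("a", 100), ("a", 7000), ("b", 200), ("b", 7200), ("c", 50)],
)

rows = conn.execute(
    """
    SELECT client, max(atime) AS atime
    FROM usage_access
    WHERE atime >= (SELECT atime - 3600 FROM usage_access
                    ORDER BY atime DESC LIMIT 1)
    GROUP BY client
    ORDER BY client
    """
).fetchall()
print(rows)  # [('a', 7000), ('b', 7200)] -- client 'c' falls outside the window
```

Client 'c' has no access inside the final window, so it drops out of the result, which is the filtering the WHERE clause is there to do.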
Richard Rowell wrote:
I've ported enough of my companies database to Postgres to make
warehousing on PG a real possibility. I thought I would toss my data
migration architecture ideas out for the list to shoot apart..
1. Script on production server dumps the production database (MSSQL) to
a set o
Joel Fradkin wrote:
I spent a great deal of time over the past week looking seriously at
Postgres and MYSQL.
Objectively I am not seeing that much of an improvement in speed with
MYSQL, and we have a huge investment in postgres.
So I am planning on sticking with postgres for our production database
Joel Fradkin wrote:
...
I am guessing our app is like 75% data entry and 25% reporting, but the
reporting is taking the toll SQL wise.
This was from my insert test with 15 users.
Test type: Dynamic
Simultaneous browser connections: 15
Warm up time (secs): 0
Test duration: 00:00:03:13
Test itera
Josh Berkus wrote:
Mischa,
Okay, although given the track record of page-based sampling for
n-distinct, it's a bit like looking for your keys under the streetlight,
rather than in the alley where you dropped them :-)
Bad analogy, but funny.
The issue with page-based vs. pure random sampling is th
David Roussel wrote:
COPY invokes all the same logic as INSERT on the server side
(rowexclusive locking, transaction log, updating indexes, rules).
The difference is that all the rows are inserted as a single
transaction. This reduces the number of fsync's on the xlog,
which may be a limiting facto
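The batching effect described here is easy to demonstrate outside PostgreSQL as well. The sketch below uses SQLite (stdlib `sqlite3`) purely to illustrate the principle that one transaction means one flush at COMMIT instead of one per row; it is not a measurement of COPY itself, and all names and data are invented:

```python
import os
import sqlite3
import tempfile
import time

rows = [(i, "payload") for i in range(500)]

def load(batched: bool) -> float:
    """Insert `rows` into a fresh on-disk database; return elapsed seconds."""
    path = os.path.join(tempfile.mkdtemp(), "demo.db")
    conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode
    conn.execute("CREATE TABLE t (id INTEGER, data TEXT)")
    start = time.perf_counter()
    if batched:
        conn.execute("BEGIN")  # one transaction, one flush at COMMIT
        conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
        conn.execute("COMMIT")
    else:
        for r in rows:  # one implicit commit (and flush) per statement
            conn.execute("INSERT INTO t VALUES (?, ?)", r)
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

print(f"per-row: {load(False):.3f}s  one transaction: {load(True):.3f}s")
```

On a disk with honest write barriers, the per-row variant is typically slower by an order of magnitude or more, for the same reason the post gives: fewer flushes per row loaded.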
Jeroen van Iddekinge wrote:
Hi,
I understand that when a table contains only a few rows it is better to
do a sequential scan than an index scan. But does this also hold for a table with
99 records?
...
explain select * from tblFolders where id=90;
QUERY PLAN
--
Jeroen van Iddekinge wrote:
You could tweak with several settings to get it to do an index scan
earlier, but these would probably break other queries. You don't need to
tune for 100 rows, more like 100k or 100M.
Thanks for the response.
The index scan was a little bit faster for id=1 and faster for id=
Geoffrey wrote:
Mischa Sandberg wrote:
After reading the comparisons between Opteron and Xeon processors for
Linux,
I'd like to add an Opteron box to our stable of Dells and Sparcs, for
comparison.
IBM, Sun and HP have their fairly pricey Opteron systems.
The IT people are not swell about unsupport
Anjan Dave wrote:
You also want to consider any whitebox opteron system being on the
compatibility list of your storage vendor, as well as RedHat, etc. With
EMC you can file an RPQ via your sales contacts to get it approved,
though not sure how lengthy/painful that process might be, or if it's
gonn
Anjan Dave wrote:
Wasn't the context switching issue occurring in specific cases only?
I haven't seen any benchmarks for a 50% performance difference. Neither
have I seen any benchmarks of pure disk IO performance of specific
models of Dell vs HP or Sun Opterons.
Thanks,
Anjan
Well, I'm speaking mo
[EMAIL PROTECTED] wrote:
Hello
How can I know the capacity of a PG database?
How many records can my table have?
I saw in a message that someone has 50 000 records; is that possible in a table?
(My table has 8 string fields, each 32 characters long.)
Thanks for your response.
Nanou
The capacity for a PG datab
Kim Bisgaard wrote:
Hi,
I'm having problems with the query optimizer and FULL OUTER JOIN on
PostgreSQL 7.4. I cannot get it to use my indexes with full outer joins.
I might be naive, but I think that it should be possible?
I have two BIG tables (virtually identical) with 3 NOT NULL columns
Station_
Alex Stapleton wrote:
What is the status of Postgres support for any sort of multi-machine
scaling? What are you meant to do once you've upgraded your box
and tuned the conf files as much as you can, but your query load is
still too high for a single machine?
Upgrading stock Dell boxes (I
Adam Haberlach wrote:
I think that perhaps he was trying to avoid having to buy "Big Iron" at all.
With all the Opteron v. Xeon around here, and talk of $30,000 machines,
perhaps it would be worth exploring the option of buying 10 cheapass
machines for $300 each. At the moment, that $300 buys you,
Edin Kadribasic wrote:
Hi,
I have a query that is giving the optimizer (and me) great headache. When
its in the good mood the optimizer chooses Hash Left Join and the query
executes in 13ms or so, but sometimes (more and more often) it chooses
Nested Loop Left Join and the execution time goes up to
Marc Mamin wrote:
Hello,
I'm not an expert, but I'll give some suggestions.
I'd like to tune Postgres for large data import (using Copy from).
I believe that COPY FROM is supposed to be faster than COPY FROM
STDIN, but the file must be available to the backend process. If you can
do it, you should think a
Alex Turner wrote:
Ok - my common sense alarm is going off here...
There are only 6.446 billion people worldwide. 100 Billion page views
would require every person in the world to view about 15 pages of yahoo
every day. Not very likely.
http://www.internetworldstats.com/stats.htm
suggests that there ar
Sebastian Hennebrueder wrote:
Hello,
I could not find any recommendations for the level of SET STATISTICS and
what a specific level actually means.
What is the difference between 1, 50 and 100? What is recommended for a
table or column?
Default I believe is 10. The higher the number, the more s
Greg Stark wrote:
Sebastian Hennebrueder <[EMAIL PROTECTED]> writes:
User-Agent: Mozilla Thunderbird 1.0 (Windows/20041206)
...
"Nested Loop (cost=1349.13..1435.29 rows=1 width=2541) (actual
time=1640.000..3687.000 rows=62 loops=1)"
" Join Filter: ("inner".fid = "outer".faufgaben_id)"
" -> Ind
David Parker wrote:
> I just got a question from one our QA guys who is configuring a RAID 10
> disk that is destined to hold a postgresql database. The disk
> configuration procedure is asking him if he wants to optimize for
> sequential or random access. My first thought is that random is what we
Michael Stone wrote:
On Tue, May 24, 2005 at 04:35:14PM -0700, Josh Berkus wrote:
Pretty much. There has been discussion about allowing index-only
access to "frozen" tables, i.e. archive partitions. But it all sort
of hinges on someone implementing it and testing
Is there any way to
SpaceBallOne wrote:
Wondering if someone could explain a peculiarity for me:
We have a database which takes 1000ms to perform a certain query on.
If I pg_dump that database then create a new database (e.g. "tempdb")
and upload the dump file (thus making a duplicate) then the same query
only tak
SpaceBallOne wrote:
What version of postgres?
8.0.2 ... but I think I've seen this before on 7.3 ...
There are a few possibilities. If you are having a lot of updates to the
table, you can get index bloat. And vacuum doesn't fix indexes. You have
to "REINDEX" to do that. Though REINDEX has
Jocelyn Turcotte wrote:
Hi all
I don't know if this is normal, but if so I would like to know why, and
how I could do it another way other than using unions.
The only thing that *might* work is if you used an index on both keys.
So if you did:
CREATE INDEX rt_edge_start_end_node ON rt_edge(s
Martin Fandel wrote:
Hi @ all,
I'm trying to tune my postgresql-db but I don't know if the values are
set right.
I use the following environment for the postgres-db:
# Hardware
cpu: 2x P4 3Ghz
ram: 1024MB DDR 266Mhz
partitions:
/dev/sda3 23G 9,6G 13G 44%
Tom Lane wrote:
...
Now that I think about it, you were (if I understood your layout
correctly) proposing to put the xlog on your system's root disk.
This is probably a bad idea for performance, because there will always
be other traffic to the root disk. What you are really trying to
accomplis
Casey Allen Shobe wrote:
> On Wednesday 01 June 2005 20:19, Casey Allen Shobe wrote:
>
...
> Long-term, whenever we hit the I/O limit again, it looks like we really don't
> have much of a solution except to throw more hardware (mainly lots of disks
> in RAID0's) at the problem. :( Fortunately,
Michael Stone wrote:
> On Mon, Jun 06, 2005 at 10:08:23AM -0500, John A Meinel wrote:
>
>> I don't know if you can do it, but it would be nice to see this be 1
>> RAID1 for OS, 1 RAID10 for pg_xlog,
>
>
> That's probably overkill--it's a relative
Marty Scholes wrote:
>> Has anyone ran Postgres with software RAID or LVM on a production box?
>> What have been your experience?
>
> Yes, we have run for a couple years Pg with software LVM (mirroring)
> against two hardware RAID5 arrays. We host a production Sun box that
> runs 24/7.
>
> My ex
Neil Conway wrote:
> Tom Arthurs wrote:
>
>> Yes, shared buffers in postgres are not used for caching
>
>
> Shared buffers in Postgres _are_ used for caching, they just form a
> secondary cache on top of the kernel's IO cache. Postgres does IO
> through the filesystem, which is then cached by th
Clark Slater wrote:
> Hi-
>
> Would someone please enlighten me as
> to why I'm not seeing a faster execution
> time on the simple scenario below?
>
> there are 412,485 rows in the table and the
> query matches on 132,528 rows, taking
> almost a minute to execute. vacuum
> analyze was just run.
Clark Slater wrote:
> Hmm, I'm baffled. I simplified the query
> and it is still taking forever...
>
>
> test
> -
> id| integer
> partnumber| character varying(32)
> productlistid | integer
> typeid| integer
>
>
> Indexes:
> "test_p
Kevin Grittner wrote:
It sure would be nice if the optimizer would consider that it had the
leeway to add any column which was restricted to a single value to any
point in the ORDER BY clause. Without that, the application programmer
has to know what indexes are on the table, rather than being
Alex Stapleton wrote:
Oh, we are running 7.4.2 btw. And our random_page_cost = 1
Which is only correct if your entire db fits into memory. Also, try
updating to a later 7.4 version if at all possible.
On 13 Jun 2005, at 14:02, Alex Stapleton wrote:
We have two indexes like so
l1_historica
Veikko Mäkinen wrote:
Hey,
How does Postgres (8.0.x) buffer changes to a database within a
transaction? I need to insert/update more than a thousand rows (maybe
even more, ~100 bytes/row) in a table but the changes
must not be visible to other users/transactions before every
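The visibility the poster asks about is standard transactional behavior: changes made inside an open transaction are invisible to other connections until COMMIT. A minimal demonstration, using SQLite from the Python stdlib rather than Postgres (file name and schema are invented):

```python
import os
import sqlite3
import tempfile

# Two connections to the same on-disk database: a writer holding an open
# transaction, and a reader checking what it can see.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)  # autocommit mode
writer.execute("CREATE TABLE items (id INTEGER)")

reader = sqlite3.connect(path, isolation_level=None)

writer.execute("BEGIN")
writer.executemany("INSERT INTO items VALUES (?)", [(i,) for i in range(1000)])
before = reader.execute("SELECT count(*) FROM items").fetchone()[0]
writer.execute("COMMIT")
after = reader.execute("SELECT count(*) FROM items").fetchone()[0]
print(before, after)  # 0 1000 -- the 1000 rows only appear after COMMIT
```

In PostgreSQL the same holds for any number of inserts/updates: wrap them in one transaction and other sessions see nothing until the COMMIT.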
Yves Vindevogel wrote:
Hi,
I have a very simple query on a big table. When I issue a "limit"
and/or "offset" clause, the query is not using the index.
Can anyone explain this to me?
You didn't give enough information. What does your index look like that
you are expecting it to use?
Generally
Amit V Shah wrote:
After I sent out this email, I found this article from google
http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html
Looks like we can control as to when the views refresh... I am still kind of
confused, and would appreciate help !!
The create/drop table doe
Yves Vindevogel wrote:
rvponp=# explain analyze select * from tblPrintjobs order by
loginuser, desceventdate, desceventtime ;
QUERY PLAN
Sort (cost=345699.06..347256.
Jone C wrote:
On second thought... Does a VACUUM FULL help? If so, you might want to
increase your FSM settings.
Thank you for the reply, sorry for delay I was on holiday.
I tried that; it had no effect. I benchmarked 2x before, performed
VACUUM FULL on the table in question post inserts, the
Yves Vindevogel wrote:
Hi,
I have another question regarding indexes.
I have a table with a lot of indexes on it. Those are needed to
perform my searches.
Once a day, a bunch of records is inserted in my table.
Say, my table has 1.000.000 records and I add 10.000 records (1% new)
What would
Yves Vindevogel wrote:
And, after let's say a week, would that index still be optimal or
would it be a good idea to drop it in the weekend and recreate it.
It depends a little bit on the postgres version you are using. If you
are only ever adding to the table, and you are not updating it or
d
Yves Vindevogel wrote:
I only add records, and most of the values are "random"
Except the columns for dates,
I doubt that you would need to recreate indexes. That really only needs
to be done in pathological cases, most of which have been fixed in the
latest postgres.
If you are only ins
Puddle wrote:
Hello, I'm a Sun Solaris sys admin for a start-up
company. I've got the UNIX background, but now I'm
having to learn PostgreSQL to support it on our
servers :)
Server Background:
Solaris 10 x86
PostgreSQL 8.0.3
Dell PowerEdge 2650 w/4gb ram.
This is running JBoss/Apache as well
Yves Vindevogel wrote:
Hi again all,
My queries are now optimised. They all use the indexes like they should.
However, there's still a slight problem when I issue the "offset" clause.
We have a table that contains 600.000 records
We display them by 25 in the webpage.
So, when I want the last p
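A technique often suggested for this situation (not necessarily what the thread settled on) is keyset pagination: instead of a large OFFSET, remember the last key of the previous page and filter on it, so the index scan starts in the right place rather than skipping N rows. A sketch in SQLite, with an invented table; the page size of 25 mirrors the post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO docs VALUES (?)", [(i,) for i in range(1, 101)])

def page_after(last_id: int, size: int = 25):
    """Return the next page of ids strictly after last_id, no OFFSET needed."""
    return [r[0] for r in conn.execute(
        "SELECT id FROM docs WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, size),
    )]

first = page_after(0)           # ids 1..25
second = page_after(first[-1])  # ids 26..50
print(first[0], first[-1], second[0], second[-1])  # 1 25 26 50
```

The cost of each page stays constant no matter how deep you paginate, whereas OFFSET 599975 still has to walk past 599,975 index entries.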
Tobias Brox wrote:
[EMAIL PROTECTED] - Tue at 08:33:58PM +0200]
I use FreeBSD 4.11 with PostGreSQL 7.3.8.
(...)
database=> explain select date_trunc('hour', time),count(*) as total from
test where p1=53 and time > now() - interval '24 hours' group by
date_trunc order by date_trunc ;
Merlin Moncure wrote:
I need a fast way (sql only preferred) to solve the following problem:
I need the smallest integer that is greater than zero that is not in the
column of a table. In other words, if an 'id' column has values
1,2,3,4,6 and 7, I need a query that returns the value of 5.
I'
Merlin Moncure wrote:
Not so bad. Try something like this:
SELECT min(id+1) as id_new FROM table
WHERE (id+1) NOT IN (SELECT id FROM table);
Now, this requires probably a sequential scan, but I'm not sure how
you
can get around that.
Maybe if you got trickier and did some ordering and
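The quoted query can be checked against the example data from the thread (1, 2, 3, 4, 6, 7, expecting 5). A quick SQLite run, with an illustrative table name in place of the generic "table":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in (1, 2, 3, 4, 6, 7)])

# Smallest id+1 that is not itself present: the first gap in the sequence.
(gap,) = conn.execute(
    "SELECT min(id + 1) AS id_new FROM t "
    "WHERE (id + 1) NOT IN (SELECT id FROM t)"
).fetchone()
print(gap)  # 5
```

One caveat: as written, the query finds the first gap above the smallest existing id, so a table missing 1 itself would need a separate check.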
John A Meinel wrote:
Merlin Moncure wrote:
I need a fast way (sql only preferred) to solve the following problem:
I need the smallest integer that is greater than zero that is not in the
column of a table. In other words, if an 'id' column has values
1,2,3,4,6 and 7, I need a
Michael Stone wrote:
Is it possible to tweak the size of a block that postgres tries to read
when doing a sequential scan? It looks like it reads in fairly small
blocks, and I'd expect a fairly significant boost in i/o performance
when doing a large (multi-gig) sequential scan if larger blocks w
Merlin Moncure wrote:
John Meinel wrote:
See my follow up post, which enables an index scan. On my system with
90k rows, it takes no apparent time.
(0.000ms)
John
=:->
Confirmed. Hats off to you, the above is some really wicked querying.
IIRC I posted the same question several months ago wi
Merlin Moncure wrote:
On Tue, Jun 28, 2005 at 12:02:09 -0400,
Merlin Moncure <[EMAIL PROTECTED]> wrote:
Confirmed. Hats off to you, the above is some really wicked querying.
IIRC I posted the same question several months ago with no response
and
had given up on it. I think your solution
Sam Mason wrote:
Hi,
I've just been referred here after a conversation on IRC and everybody
seemed to think I've stumbled upon some strangeness.
The planner (in PG version 8.0.2) is choosing what it thinks is a more
expensive plan. I've got a table of animals (about 3M rows) and their
movements
Emil Briggs wrote:
I'm working with an application where the database is entirely resident in RAM
(the server is a quad opteron with 16GBytes of memory). It's a web
application and handles a high volume of queries. The planner seems to be
generating poor plans for some of our queries which I ca
Emil Briggs wrote:
I just mentioned random_page_cost, but you should also tune
effective_cache_size, since that is effectively most of your RAM. It
depends what else is going on in the system, but setting it as high as
say 12-14GB is probably reasonable if it is a dedicated machine. With
random_
Niccolo Rigacci wrote:
>Hi to all,
>
>I have a performace problem with the following query:
>
> BEGIN;
> DECLARE mycursor BINARY CURSOR FOR
>SELECT
> toponimo,
> wpt
> FROM wpt_comuni_view
>WHERE (
> wpt &&
> setSRID('BOX3D(4.83 36, 20.16 47.5)'::BOX3
Dario Pudlo wrote:
> (first of all, sorry for my English)
> Hi.
>- Does "left join" restrict the order in which the planner must join
> tables? I've read about join, but i'm not sure about left join...
>- If so: Can I avoid this behavior? I mean, make the planner resolve the
> query, using
jobapply wrote:
> The 2 queries are almost same, but ORDER BY x||t is FASTER than ORDER BY x..
>
> How can that be possible?
>
> Btw: x and x||t are same ordered
>
> phoeniks=> explain analyze SELECT * FROM test WHERE i<20 ORDER BY x || t;
> Q
Chris Travers wrote:
> John A Meinel wrote:
>
>> jobapply wrote:
>>
>>
>>> The 2 queries are almost same, but ORDER BY x||t is FASTER than ORDER
>>> BY x..
>>>
>>> How can that be possible?
>>>
>>> Btw: x and x||t are
Dan Harris wrote:
Gurus,
> even the explain never
finishes when I try that.
Just a short bit. If "EXPLAIN SELECT" doesn't return, there seems to be
a very serious problem, because I think EXPLAIN doesn't actually run the
query; it just runs the query planner. And the query planner shouldn'
Dan Harris wrote:
> I'm trying to improve the speed of this query:
>
> explain select recordtext from eventactivity inner join ( select
> incidentid from k_r where id = 94 ) a using ( incidentid ) inner join (
> select incidentid from k_b where id = 107 ) b using ( incidentid );
You might try giv
Dan Harris wrote:
>
> On Jul 14, 2005, at 9:42 AM, John A Meinel wrote:
...
Did you try doing this to see how good the planners selectivity
estimates are?
>> Well, postgres is estimating around 500 rows each, is that way off? Try
>> just doing:
>> EXPLAIN ANALYZE SE
Dan Harris wrote:
>
> On Jul 14, 2005, at 9:42 AM, John A Meinel wrote:
>
>>
>>
>> You might try giving it a little bit more freedom with:
>>
>> EXPLAIN ANALYZE
>> SELECT recordtext FROM eventactivity, k_r, k_b
>> WHERE eventactivity.incidentid
Tom Lane wrote:
> John A Meinel <[EMAIL PROTECTED]> writes:
>
>>What I don't understand is that the planner is actually estimating that
>>joining against the new table is going to *increase* the number of
>>returned rows.
>
>
> It evidently think
Alison Winters wrote:
> Hi,
>
>
>>>Our application requires a number of processes to select and update rows
>>>from a very small (<10 rows) Postgres table on a regular and frequent
>>>basis. These processes often run for weeks at a time, but over the
>>>space of a few days we find that updates sta
Dan Harris wrote:
>
> On Jul 14, 2005, at 7:15 PM, John A Meinel wrote:
>
>>
>>
>> Is the distribution of your rows uneven? Meaning do you have more rows
>> with a later id than an earlier one?
>>
>
> There are definitely some id's that will have m
Dan Harris wrote:
>
> On Jul 14, 2005, at 10:12 PM, John A Meinel wrote:
>
>>
>> My biggest question is why the planner thinks the Nested Loop would be
>> so expensive.
>> Have you tuned any of the parameters? It seems like something is out of
>> whack.
Oliver Crosby wrote:
Hi,
I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.
Running scripts locally, it takes about 1.5x longer than mysql, and the
load on the server is only about 21%.
I upped the sort_mem to 8192 (kB), and shared_buffers and
effective_cache_size to 6553
Dirk Lutzebäck wrote:
> Richard Huxton wrote:
>
>> Dirk Lutzebäck wrote:
>>
>>> Hi,
>>>
>>> I do not under stand the following explain output (pgsql 8.0.3):
>>>
>>> explain analyze
>>> select b.e from b, d
>>> where b.r=516081780 and b.c=513652057 and b.e=d.e;
>>>
>>>
Chris Isaacson wrote:
> I need COPY via libpqxx to insert millions of rows into two tables. One
> table has roughly half as many rows and requires half the storage. In
> production, the largest table will grow by ~30M rows/day. To test the
> COPY performance I split my transactions into 10,000 r
Tomeh, Husam wrote:
>
> Nothing was running except the job. The server did not look stressed out
> looking at top and vmstat. We have seen slower query performance when
> performing load tests, so I ran the re-index on all application indexes
> and then issue a full vacuum. I ran the same thing on
I saw a review of a relatively inexpensive RAM disk over at
anandtech.com, the Gigabyte i-RAM
http://www.anandtech.com/storage/showdoc.aspx?i=2480
Basically, it is a PCI card, which takes standard DDR RAM, and has a
SATA port on it, so that to the system, it looks like a normal SATA drive.
Th
Dan Harris wrote:
I am working on a process that will be inserting tens of millions of rows
and need this to be as quick as possible.
The catch is that for each row I could potentially insert, I need to
look and see if the relationship is already there to prevent multiple
entries. Currently
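One standard way to push that duplicate check into the database is a unique constraint plus an insert that ignores conflicts. The sketch below uses SQLite's INSERT OR IGNORE; modern PostgreSQL spells the same thing INSERT ... ON CONFLICT DO NOTHING (added in 9.5, so not available in the versions under discussion). The schema is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The UNIQUE constraint makes the database reject duplicate relationships.
conn.execute("CREATE TABLE rel (a INTEGER, b INTEGER, UNIQUE (a, b))")

pairs = [(1, 2), (1, 3), (1, 2), (2, 5), (1, 3)]  # contains duplicates
# OR IGNORE silently skips rows that would violate the unique constraint,
# so no per-row existence lookup is needed in application code.
conn.executemany("INSERT OR IGNORE INTO rel VALUES (?, ?)", pairs)

count = conn.execute("SELECT count(*) FROM rel").fetchone()[0]
print(count)  # 3 distinct pairs survive
```

This trades one round-trip per lookup for letting the unique index do the work during the insert itself, which matters at tens of millions of rows.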
Luke Lonergan wrote:
Yup - interesting and very niche product - it seems like its only obvious
application is for the Postgresql WAL problem :-)
Well, you could do it for any journaled system (XFS, JFS, ext3, reiserfs).
But yes, it seems specifically designed for a battery backed journal.
Th
Alex Turner wrote:
Also seems pretty silly to put it on a regular SATA connection, when
all that can manage is 150MB/sec. If you made it connection directly
to 66/64-bit PCI then it could actually _use_ the speed of the RAM, not
to mention PCI-X.
Alex Turner
NetEconomist
Well, the whole point
Matthew Nuzum wrote:
On 7/26/05, Dan Harris <[EMAIL PROTECTED]> wrote:
I am working on a process that will be inserting tens of millions of rows
and need this to be as quick as possible.
The catch is that for each row I could potentially insert, I need to
look and see if the relationship is alread
Karim Nassar wrote:
> I ran into a situation today maintaining someone else's code where the
> total time of running 2 queries seems to be faster than 1. The original code
> was split into two queries. I thought about joining them, but
> considering the intelligence of my predecessor, I wanted to test i
Lane Van Ingen wrote:
> I have in my possession some performance tuning documents authored by Bruce
> Momjian, Josh Berkus, and others. They give good information on utilities to
> use (like ipcs, sar, vmstat, etc) to evaluate disk, memory, etc. performance
> on Unix-based systems.
>
> Problem is,
Matthew Schumacher wrote:
> Okay,
>
> Here is the status of the SA updates and a question:
>
> Michael got SA changed to pass an array of tokens to the proc so right
> there we gained a ton of performance due to connections and transactions
> being grouped into one per email instead of one per toke
Tom Lane wrote:
> Matthew Schumacher <[EMAIL PROTECTED]> writes:
>
>> for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)
>> LOOP
>>_token := intokenary[i];
>>INSERT INTO bayes_token_tmp VALUES (_token);
>> END LOOP;
>
>
>> UPDATE
>>bayes_token
>> SET
>>spam_count
Matthew Schumacher wrote:
> Matthew Schumacher wrote:
>
>>Tom Lane wrote:
>>
>>
>>
>>>I don't really see why you think that this path is going to lead to
>>>better performance than where you were before. Manipulation of the
>>>temp table is never going to be free, and IN (sub-select) is always
>>>
Matthew Schumacher wrote:
> John A Meinel wrote:
>
>
>>Surely this isn't what you have. You have *no* loop here, and you have
>>stuff like:
>> AND
>>(bayes_token_tmp) NOT IN (SELECT token FROM bayes_token);
>>
>>I'm guessing this isn'
Patrick Hatcher wrote:
> [Reposted from General section with updated information]
> Pg 7.4.5
>
> I'm running an update statement on about 12 million records using the
> following query:
>
> Update table_A
> set F1 = b.new_data
> from table_B b
> where b.keyfield = table_A.keyfield
>
> both keyfield
Dan Harris wrote:
On Aug 10, 2005, at 12:49 AM, Steve Poe wrote:
Dan,
Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a
clarification, since RAID 0 is still a single-point of failure even if
RAID1 is on top of RAID0.
Well, you tell me if I stated incorrectly. There are two ra
Dan Harris wrote:
On Aug 9, 2005, at 3:51 PM, John A Meinel wrote:
Dan Harris wrote:
On Aug 10, 2005, at 12:49 AM, Steve Poe wrote:
Dan,
Do you mean you did RAID 1 + 0 (RAID 10) or RAID 0 + 1? Just a
clarification, since RAID 0 is still a single-point of failure even if
RAID1 is on top
Dan Harris wrote:
> I have a web page for my customers that shows them count of records and
> some min/max date ranges in each table of a database, as this is how we
> bill them for service. They can log in and check the counts at any
> time. I'd like for the counts to be as fresh as possible
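The usual answer to fresh-but-cheap counts is to maintain them in a small side table with triggers, and read from that instead of running count(*). A sketch in SQLite (the same idea ports to PL/pgSQL triggers); all names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (id INTEGER PRIMARY KEY, ts TEXT);
CREATE TABLE row_counts (tbl TEXT PRIMARY KEY, n INTEGER);
INSERT INTO row_counts VALUES ('events', 0);

-- Keep the counter in sync on every insert and delete.
CREATE TRIGGER events_ins AFTER INSERT ON events
BEGIN UPDATE row_counts SET n = n + 1 WHERE tbl = 'events'; END;

CREATE TRIGGER events_del AFTER DELETE ON events
BEGIN UPDATE row_counts SET n = n - 1 WHERE tbl = 'events'; END;
""")

conn.executemany("INSERT INTO events (ts) VALUES (?)", [("t1",), ("t2",), ("t3",)])
conn.execute("DELETE FROM events WHERE id = 1")

n = conn.execute("SELECT n FROM row_counts WHERE tbl = 'events'").fetchone()[0]
print(n)  # 2 -- always current, and reading it is a single-row lookup
```

The count is exact and always fresh, at the cost of a small write amplification on every insert/delete, which is usually a good trade for a billing page consulted far less often than the tables change.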