< '2016-06-01'
AND timerange_transaction @> current_timestamp
ORDER BY metric_value.id_metric, metric_value.id_asset, date;
Which is awesome! Thank you so much for your help, both of you!
Now if only we could make hash joins as fast as JSONB hash lookups :)
Cheers, Chris.
Cartesian join helps and/or how we can get the same speedup
without materialising it.
SELECT id_metric, id_asset, date, value
FROM metric_value
WHERE
date >= '2016-01-01' and date < '2016-06-01'
AND timerange_transaction @> current_timestamp
ORDER BY date, metric_value.id_metric;
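If it helps, a composite index matching the range filter and the sort keys is one thing to try. This is only a sketch; the index name and column order are my assumptions, not something from the thread:

```sql
-- Hypothetical btree index: leading range column, then the sort tiebreaker,
-- so the date filter and the ORDER BY can both use it.
CREATE INDEX metric_value_date_metric_idx
    ON metric_value (date, id_metric);
```

The `timerange_transaction @> current_timestamp` condition would still be applied as a filter unless a separate GiST index covers it.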
Cheers, Chris.
5GB, work_mem = 100MB, seq_page_cost =
0.5, random_page_cost = 1.0, cpu_tuple_cost = 0.01.
- HP ProLiant DL580 G7, Xeon(R) CPU E7-4850 @ 2.00GHz * 80 cores,
hardware RAID, 3.6 TB SAS array.
Thanks again in advance for any suggestions, hints or questions.
Cheers, Chris.
that serve
> different users. If each individual requires its own database-level
> user, pgbouncer would not help at all.
>
> I would look seriously into getting rid of the always-open requirement
> for connections.
— Chris Cogdon
There are several workarounds I can use for this simple case, such as
using a CTE, then doing a rollup on that, but I’m simply reporting what I think
is a bug in the query optimizer.
Thank you for your attention! Please let me know if there’s any additional
information you need, or additional tests you'd like to run.
. But
that doesn't seem to exist either.
best regards,
chris
--
chris ruprecht
database grunt and bit pusher extraordinaíre
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On Mon, Apr 14, 2014 at 12:27 PM, Robert DiFalco
robert.difa...@gmail.com wrote:
I have several related tables that represent a call state. Let's think of
these as phone calls to simplify things. Sometimes I need to determine the
last time a user was called, the last time a user answered a
On Tue, Apr 15, 2014 at 10:56 AM, Chris Curvey ch...@chriscurvey.com wrote:
On Mon, Apr 14, 2014 at 12:27 PM, Robert DiFalco robert.difa...@gmail.com
wrote:
I have several related tables that represent a call state. Let's think of
these as phone calls to simplify things. Sometimes I need
scan. explain
analyze select * from values_view where fkey1 = 1263;
---
Can anyone suggest a way to rewrite this query, or maybe a workaround of
some kind?
Thanks, Chris
Hi guys,
PG = 9.1.5
OS = winDOS 2008R8
I have a table that currently has 207 million rows.
there is a timestamp field that contains data.
more data gets copied from another database into this database.
How do I make this do an index scan instead?
I did an analyze audittrailclinical to no avail.
On Oct 16, 2012, at 20:01 , Evgeny Shishkin itparan...@gmail.com wrote:
Selecting 5 years of data is not selective at all, so postgres decides it is
cheaper to do a seqscan.
Do you have an index on patient.dnsortpersonnumber? Can you post a result
from
select count(*) from patient where
Thanks Bruce,
I have, and I even thought I understood it :).
I just ran an explain analyze on another table, and since then the query plan
has changed. It's now using the index as expected. I guess I have some more
reading to do.
On Oct 16, 2012, at 20:31 , Bruce Momjian br...@momjian.us
Daniel Farina-4 wrote
On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer <ringerc@.id> wrote:
1) Truncate each table. It is too slow, I think, especially for empty
tables.
Really?!? TRUNCATE should be extremely fast, especially on empty tables.
You're aware that you can TRUNCATE many
?
Thanks
Chris
operations, I'm not worrying too much about them now.
Thanks
Chris
On 1 June 2012 14:47, Tom Lane t...@sss.pgh.pa.us wrote:
Chris Rimmer chr...@we7.com writes:
While investigating some performance issues I have been looking at slow
queries logged to the postgresql.log file. A strange thing
as DAS?
Thanks so much!
Best,
Chris
[1]: http://www.b2net.co.uk/netapp/fas3000.pdf
) for data, what would you choose performance-wise?
Again, thanks so much for your help.
Best,
Chris
/experiences in benchmarking storage
when the storage is smaller then 2x memory?
Thanks,
Chris
while
reading the table only once for all indexes and building them all at the same
time. Is there an index build tool that I missed somehow, that can do this?
Thanks,
Chris.
best regards,
chris
--
chris ruprecht
database grunt and bit pusher extraordinaíre
build' test is done.
Maybe, in a future release, somebody will develop something that can create
indexes as inactive and have a build tool build and activate them at the same
time. Food for thought?
On Apr 9, 2011, at 13:10 , Tom Lane wrote:
Chris Ruprecht ch...@ruprecht.org writes:
I have
On 23/03/11 11:52, felix wrote:
I posted many weeks ago about a severe problem with a table that was
obviously bloated and was stunningly slow. Up to 70 seconds just to get
a row count on 300k rows.
I removed the text column, so it really was just a few columns of fixed
data.
Still very
robertmh...@gmail.com (Robert Haas) writes:
On Thu, Feb 10, 2011 at 11:45 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Well, I'm comfortable digging in my heels against doing *lame* hints
just because it's what all the other kids are doing, which I think
is the only thing which would
gnuo...@rcn.com writes:
Time for my pet meme to wiggle out of its hole (next to Phil's, and a
day later). For PG to prosper in the future, it has to embrace the
multi-core/processor/SSD machine at the query level. It has to. And
it has to because the Big Boys already do so, to some extent,
mladen.gog...@vmsinfo.com (Mladen Gogala) writes:
Hints are not even that complicated to program. The SQL parser should
compile the list of hints into a table and optimizer should check
whether any of the applicable access methods exist in the table. If it
does - use it. If not, ignore it.
mladen.gog...@vmsinfo.com (Mladen Gogala) writes:
I must say that this purist attitude is extremely surprising to
me. All the major DB vendors support optimizer hints, yet in the
Postgres community, they are considered bad with almost religious
fervor.
Postgres community is quite unique with
kevin.gritt...@wicourts.gov (Kevin Grittner) writes:
Filip Rembiałkowski plk.zu...@gmail.com wrote:
2011/1/19 Charles.Hou giveme...@gmail.com:
select * from mybook SQL command also increase the XID ?
Yes. Single SELECT is a transaction. Hence, it needs a transaction
ID.
No, not in
msakre...@truviso.com (Maciek Sakrejda) writes:
Is this normal? I'm afraid because my application doesn't run this kind of
statement, so how can I know what is doing these commands? Maybe pg_dump?
I think pg_dump is likely, yes, if you have that scheduled. I don't
think anything in the log
vindex+lists-pgsql-performa...@apartia.org (Louis-David Mitterrand)
writes:
On Tue, Nov 16, 2010 at 11:35:24AM -0500, Chris Browne wrote:
vindex+lists-pgsql-performa...@apartia.org (Louis-David Mitterrand)
writes:
I have to collect lots of prices from web sites and keep track
vindex+lists-pgsql-performa...@apartia.org (Louis-David Mitterrand)
writes:
I have to collect lots of prices from web sites and keep track of their
changes. What is the best option?
1) one 'price' row per price change:
create table price (
id_price primary key,
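Option 1 might be completed along these lines. Everything past the fragment quoted above is an assumption on my part, sketched only to make the shape of the design concrete:

```sql
-- Hypothetical completion of the "one row per price change" table.
CREATE TABLE price (
    id_price   serial PRIMARY KEY,
    id_product integer      NOT NULL,
    price      numeric(12,2) NOT NULL,
    changed_at timestamptz  NOT NULL DEFAULT now()
);
```

The current price for a product is then the row with the latest `changed_at`, and the full history comes for free.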
gentosa...@gmail.com (A B) writes:
If you just wanted PostgreSQL to go as fast as possible WITHOUT any
care for your data (you accept 100% dataloss and datacorruption if any
error should occur), what settings should you use then?
Use /dev/null. It is web scale, and there are good tutorials.
sgend...@ideasculptor.com (Samuel Gendler) writes:
Geez. I wish someone would have written something quite so bold as
'xfs is always faster than ext3' in the standard tuning docs. I
couldn't find anything that made a strong filesystem
recommendation. How does xfs compare to ext4? I wound
cr...@postnewspapers.com.au (Craig Ringer) writes:
Hey, maybe I should try posting YouTube video answers to a few
questions for kicks, see how people react ;-)
And make sure it uses the same voice as is used in the MongoDB is web
scale video, to ensure that people interpret it correctly :-).
--
mladen.gog...@vmsinfo.com (Mladen Gogala) writes:
I have a logical problem with asynchronous commit. The commit
command should instruct the database to make the outcome of the
transaction permanent. The application should wait to see whether the
commit was successful or not. Asynchronous
jnelson+pg...@jamponi.net (Jon Nelson) writes:
Are there any performance implications (benefits) to executing queries
in a transaction where
SET TRANSACTION READ ONLY;
has been executed?
Directly? No.
Indirectly, well, a *leetle* bit...
Transactions done READ ONLY do not generate actual
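For reference, the setting under discussion looks like this in practice (a minimal sketch; the table name is hypothetical):

```sql
BEGIN;
SET TRANSACTION READ ONLY;
SELECT count(*) FROM orders;  -- reads are fine; any write now raises an error
COMMIT;
```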
david_l...@boreham.org (David Boreham) writes:
Feels like I fell through a worm hole in space/time, back to inmos in
1987, and a guy from marketing has just
walked in the office going on about there's a customer who wants to
use our massively parallel hardware to speed up databases...
... As
g...@2ndquadrant.com (Greg Smith) writes:
Yeb Havinga wrote:
* What filesystem to use on the SSD? To minimize writes and maximize
chance for seeing errors I'd choose ext2 here.
I don't consider there to be any reason to deploy any part of a
PostgreSQL database on ext2. The potential for
j...@commandprompt.com (Joshua D. Drake) writes:
On Sat, 2010-07-24 at 16:21 -0400, Greg Smith wrote:
Greg Smith wrote:
Note that not all of the Sandforce drives include a capacitor; I hope
you got one that does! I wasn't aware any of the SF drives with a
capacitor on them were even
Hi there,
I have a simple query where I don't understand the planner's choice to
use a particular index.
The main table looks like this:
# \d sq_ast_attr_val
Table public.sq_ast_attr_val
Column| Type | Modifiers
swamp...@noao.edu (Steve Wampler) writes:
Or does losing WAL files mandate a new initdb?
Losing WAL would mandate initdb, so I'd think this all fits into the
set of stuff worth putting onto ramfs/tmpfs. Certainly it'll all be
significant to the performance focus.
--
select 'cbbrowne' || '@' ||
I have a lot of centos servers which are running postgres. Postgres isn't used
that heavily on any of them, but lately, the stats collector process keeps
causing tons of IO load. It seems to happen only on servers with centos 5.
The versions of postgres that are running are:
8.1.18
8.2.6
8.3.1
I'm also wondering if a re-clustering of the table would work based on
the index that's used.
such that:
CLUSTER core_object USING plugins_plugin_addr_oid_id;
and see if that makes any change in the differences that your seeing.
On 04/13/2010 02:24 PM, Kevin Grittner wrote:
norn
reeds...@rice.edu (Ross J. Reedstrom) writes:
http://www.mythtv.org/wiki/PostgreSQL_Support
That's a pretty hostile presentation...
The page has had two states:
a) In 2008, someone wrote up...
After some bad experiences with MySQL (data loss by commercial power
failure, very bad
t...@sss.pgh.pa.us (Tom Lane) writes:
Ross J. Reedstrom reeds...@rice.edu writes:
On Sat, Mar 20, 2010 at 10:47:30PM -0500, Andy Colson wrote:
(I added the and trust as an after thought, because I do have one very
important 100% uptime required mysql database that is running. Its my
MythTV
cr...@postnewspapers.com.au (Craig Ringer) writes:
On 13/03/2010 5:54 AM, Jeff Davis wrote:
On Fri, 2010-03-12 at 12:07 -0500, Merlin Moncure wrote:
of course. You can always explicitly open a transaction on the remote
side over dblink, do work, and commit it at the last possible moment.
Josh Berkus wrote:
Xufei,
List changed to psql-performance, which is where this discussion belongs.
I am testing the index used by full text search recently.
I have install 8.3.9 and 8.4.2 separately.
In 8.3.9, the query plan is like:
postgres=# explain SELECT s.name as source , t.name
is at fault
here.
Regardless of who/what is at fault, I need to fix it. And to do that I
need to find out what isn't getting released properly. How would I go
about that?
Thanks,
Chris
find it?
Thanks again,
Chris
with:
php -f test3.php
Note my comment in the php file
UNCOMMENT THIS LINE AND MEMORY ISSUE IS FIXED
Thanks for the help everyone.
Chris
attachment: test3.php
(indexed columns);
Do this regularly to keep the index sizes in check.
- Chris
Peter Meszaros wrote:
Hi All,
I use postgresql 8.3.7 as a huge queue. There is a very simple table
with six columns and two indices, and about 6 million records are
written into it every day, continuously
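Periodic index maintenance on such a queue table might look like this (table and index names here are assumptions; REINDEX takes a lock, so it belongs in an off-peak window):

```sql
-- Rebuild one bloated index:
REINDEX INDEX message_queue_priority_idx;
-- Or rebuild every index on the table at once:
REINDEX TABLE message_queue;
```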
.
-chris
Kevin Kempter wrote:
Hi all;
I have a simple query against two very large tables ( 800million rows
in the url_hits_category_jt table and 9.2 million in the url_hits_klk1
table )
I have indexes on the join columns and I've run an explain.
also I've set the default statistics to 250 for
August 2009 11:26 PM
To: Chris Dunn
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance 8.4.0
On Fri, Jul 31, 2009 at 12:22 AM, Chris Dunn chris.d...@bigredsky.com wrote:
constraint_exclusion = on
This is critical if you need it, but a waste of CPU time if you don't.
Other
Suvankar Roy wrote:
Hi all,
Has anybody worked on Greenplum MapReduce programming ?
It's a commercial product, you need to contact greenplum.
--
Postgresql php tutorials
http://www.designmagick.com/
Hi,
Everyone says load test using your app - out of interest how does
everyone do that at the database level?
I've tried playr (https://area51.myyearbook.com/trac.cgi/wiki/Playr) but
haven't been able to get it working properly. I'm not sure what other
tools are available.
TIA.
--
Hi,
I would like to know if my configuration is ok, We run a web application with
high transaction rate and the database machine on Mondays / Tuesdays is always
at 100% CPU with no IO/Wait . the machine is a Dual Xeon Quad core, 12gb RAM,
4gb/s Fibre Channel on Netapp SAN, with pg_xlog on
Robert James wrote:
Thanks for the replies. I'm running Postgres 8.2 on Windows XP, Intel
Core Duo (though Postgres seems to use only one 1 core).
A single query can only use one core, but it will use both if multiple
queries come in.
The queries are self joins on very large tables, with
at the moment of the request?
If it needs to be more real-time, you could expand on this by adding
post insert/delete triggers that automatically update the counts table
to keep it current. In my case it just wasn't necessary.
- Chris
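The trigger approach described above could be sketched like this (table and column names are illustrative, not from the thread):

```sql
-- Hypothetical trigger pair keeping a counts table current.
CREATE FUNCTION bump_counts() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE counts SET n = n + 1 WHERE tbl = TG_TABLE_NAME;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE counts SET n = n - 1 WHERE tbl = TG_TABLE_NAME;
    END IF;
    RETURN NULL;  -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_count_trg
    AFTER INSERT OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE bump_counts();
```

This keeps `SELECT n FROM counts WHERE tbl = 'items'` cheap at the cost of a small write amplification on every insert/delete.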
Robert James wrote:
Hi. I'm seeing some weird behavior in Postgres. I'm running read only
queries (SELECT that is - no UPDATE or DELETE or INSERT is happening at
all). I can run one rather complicated query and the results come
back... eventually. Likewise with another. But, when I run
kelv...@gmail.com (Kelvin Quee) writes:
I will go look at Slony now.
It's worth looking at, but it is not always to be assumed that
replication will necessarily improve scalability of applications; it's
not a magic wand to wave such that presto, it's all faster!
Replication is helpful from a
Віталій Тимчишин wrote:
2009/7/20 Robert James srobertja...@gmail.com
Hi. I notice that when I do a WHERE x, Postgres uses an index, and
when I do WHERE y, it does so as well, but when I do WHERE x OR y,
it doesn't. Why is this so?
It's not
Mathieu Nebra wrote:
Alexander Staubo wrote:
On Tue, Jun 23, 2009 at 1:12 PM, Mathieu Nebra mate...@siteduzero.com wrote:
This flags table has more or less the following fields:
UserID - TopicID - LastReadAnswerID
We are doing pretty much same thing.
My problem is
Is tsvector_update_trigger() smart enough to not bother updating a
tsvector if the text in that column has not changed?
If not, can I make my own update trigger with something like
if new.description != old.description
return tsvector_update_trigger('fti_all', 'pg_catalog.english',
Dimitri Fontaine wrote:
Hi,
On 24 June 2009 at 18:29, Alvaro Herrera wrote:
Oleg Bartunov wrote:
On Wed, 24 Jun 2009, Chris St Denis wrote:
Is tsvector_update_trigger() smart enough to not bother updating a
tsvector if the text in that column has not changed?
no, you should do check
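One way to add that check yourself: on PostgreSQL 9.0+ a trigger `WHEN` clause can skip the tsvector rebuild when the text is unchanged; on earlier versions the check has to live inside a custom trigger function. Table and column names below are illustrative assumptions:

```sql
-- Only recompute the tsvector when the source text actually changed.
CREATE TRIGGER docs_fti_update
    BEFORE UPDATE ON docs
    FOR EACH ROW
    WHEN (OLD.description IS DISTINCT FROM NEW.description)
    EXECUTE PROCEDURE
        tsvector_update_trigger(fti_all, 'pg_catalog.english', description);

-- INSERTs have no OLD row, so they need a separate unconditional trigger:
CREATE TRIGGER docs_fti_insert
    BEFORE INSERT ON docs
    FOR EACH ROW
    EXECUTE PROCEDURE
        tsvector_update_trigger(fti_all, 'pg_catalog.english', description);
```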
cl...@uah.es (Angel Alvarez) writes:
more optimal plan...
morreoptimal configuration...
we suffer a 'more optimal' superlative misuse
there is no 'more optimal' thing, just a simple 'better' thing.
I'm not a native English speaker, but I think it still applies.
If I wanted to be pedantic
Dimitri wrote:
Hi Craig,
yes, you detailed very well the problem! :-)
all those CHAR columns are just due to historical issues :-) as well
they may contain anything else and not only numbers, that's why..
Also, all data inside are fixed, so VARCHAR will not save space, or
what kind of
craig_ja...@emolecules.com (Craig James) writes:
Dave Cramer wrote:
So I tried writing directly to the device, gets around 250MB/s,
reads at around 500MB/s
The client is using redhat so xfs is not an option.
I'm using Red Hat and XFS, and have been for years. Why is XFS not an option
with
mallah.raj...@gmail.com (Rajesh Kumar Mallah) writes:
why is it not a good idea to give end users control over when they
want to run it ?
It's not a particularly good idea to give end users things that they
are likely then to *immediately* use to shoot themselves in the foot.
Turning off
phoenix.ki...@gmail.com (Phoenix Kiula) writes:
[Ppsted similar note to PG General but I suppose it's more appropriate
in this list. Apologies for cross-posting.]
Hi. Further to my bafflement with the count(*) queries as described
in this thread:
Tom Lane wrote:
Chris dmag...@gmail.com writes:
I can see it's doing the extra filter step at the start (4th line) which
is not present without the coalesce/case statement. I just don't
understand why it's being done at that stage.
It's not that hard to understand. With the original view
Hi all,
I have a view that looks like this:
SELECT
CASE
WHEN r.assetid IS NULL THEN p.assetid
ELSE r.assetid
END AS assetid,
CASE
WHEN r.userid IS NULL THEN p.userid
ELSE r.userid
END AS userid, p.permission,
The reason why the CASE is affecting your query planning is because
you are using a query that compares assetid to a constant:
SELECT * from sq_vw_ast_perm where assetid='30748';
When PostgreSQL evaluates this statement, assetid gets expanded either
into a case statement (with your first view
I thought the where condition would cut down on the rows returned, then the
case statement would take effect to do the null check. It seems to be doing
it in reverse ??
# explain analyze SELECT * from sq_vw_ast_perm where assetid='30748';
It appears to me that both of your statements have
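For what it's worth, each CASE pair in that view is equivalent to a COALESCE, which reads more directly and has the same semantics when the alternatives share a type:

```sql
-- Equivalent select list for the view (FROM/JOIN clauses unchanged):
SELECT COALESCE(r.assetid, p.assetid) AS assetid,
       COALESCE(r.userid,  p.userid)  AS userid,
       p.permission
```

Whether the planner treats the two forms differently is version-dependent, so compare EXPLAIN output before relying on it.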
[EMAIL PROTECTED] (Merlin Moncure) writes:
I think the SSD manufacturers made a tactical error chasing the
notebook market when they should have been chasing the server
market...
That's a very good point; I agree totally!
--
output = reverse(moc.enworbbc @ enworbbc)
On Wed, Aug 13, 2008 at 10:59 AM, Decibel! [EMAIL PROTECTED] wrote:
On Aug 12, 2008, at 4:59 PM, Chris Kratz wrote:
Ran into a recurring performance problem with some report queries again
today. In a nutshell, we have filters on either multiple joined tables, or
multiple columns
the
incorrect estimates from snowballing up through the chain of joins.
Are there any other solutions to this problem?
Thanks,
-Chris
Praveen wrote:
Hi All,
I am having a trigger in table, If I update the the table manually
trigger is firing immediately(say 200ms per row), But if I update the
table through procedure the trigger is taking time to fire(say 7 to 10
seconds per row).
Please tell me what kind of
/postgis-users
or
http://www.faunalia.com/cgi-bin/mailman/listinfo/gfoss
(italian).
Anyway, as long as you just compute the difference between
2 given shapes, no index can help you. Indices speed up
searches...
Bye,
Chris.
?
Thanks.
-chris
production= select version();
version
--
PostgreSQL 8.2.6 on x86_64-pc-linux-gnu, compiled by GCC
I'm doing some analysis on temporal usages, and was hoping to make use
of OVERLAPS, but it does not appear that it makes use of indices.
Couching this in an example... I created a table, t1, thus:
metadata=# \d t1
Table public.t1
Column | Type
On Fri, May 30, 2008 at 02:23:46AM +0930, Shane Ambler wrote:
Chris Shoemaker wrote:
[Attn list-queue maintainers: Please drop the earlier version
of this email that I accidentally sent from an unsubscribed address. ]
Hi,
I'm having a strange problem with a slow-running select query
Joshua,
did you try to run the 345 on an IBM ServeRAID 6i?
I have one in mine, but I never actually ran any speed test.
Do you have any benchmarks that I could run and compare?
best regards,
chris
--
chris ruprecht
database grunt and bit pusher extraordinaíre
On May 12, 2008, at 22:11
[EMAIL PROTECTED] (Gauri Kanekar) writes:
Basically we have some background process which updates table1 and
we don't want the application to make any changes to table1 while
vacuum. Vacuum requires exclusive lock on table1 and if any of
the background or application is ON vacuum don't kick
[EMAIL PROTECTED] (A B) writes:
So, it is time to improve performance, it is running to slow.
AFAIK (as a novice) there are a few general areas:
1) hardware
2) rewriting my queries and table structures
3) using more predefined queries
4) tweek parameters in the db conf files
Of these
[EMAIL PROTECTED] (Gauri Kanekar) writes:
We have a table table1 which gets inserts and updates daily in high
numbers, because of which its size is increasing and we have to vacuum
it every alternate day. Vacuuming table1 takes almost 30min and
during that time the site is down. We need to cut down
[EMAIL PROTECTED] (Jesper Krogh) writes:
I have this message queue table.. currently with 8m+
records. Picking the top priority messages seem to take quite
long.. it is just a matter of searching the index.. (just as explain
analyze tells me it does).
Can anyone digest further optimizations
[EMAIL PROTECTED] (Thomas Spreng) writes:
On 16.04.2008, at 01:24, PFC wrote:
The queries in question (select's) occasionally take up to 5 mins
even if they take ~2-3 sec under normal conditions, there are no
sequencial scans done in those queries. There are not many users
connected (around
[EMAIL PROTECTED] (Marinos Yannikos) writes:
This helped with our configuration:
bgwriter_delay = 1ms # 10-1ms between rounds
bgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round
FYI, I'd be inclined to reduce both of those numbers, as it should
reduce the
1. Which datatype should I use to represent the hash value? UUIDs are
also 16 bytes...
md5's are always 32 characters long so probably varchar(32).
2. Does it make sense to denormalize the hash set relationships?
The general rule is normalize as much as possible then only denormalize
Craig Ringer wrote:
Christian Bourque wrote:
Hi,
I have a performance problem with a script that does massive bulk
insert in 6 tables. When the script starts the performance is really
good but will degrade minute after minute and take almost a day to
finish!
Would I be correct in guessing
* Read about configuring and using persistent database connections
(http://www.php.net/manual/en/function.pg-pconnect.php) with PHP
Though make sure you understand the ramifications of using persistent
connections. You can quickly exhaust your connections by using this and
also cause
, actual=2k) causing the planner to
choose nested loops instead of another join type, you might try running the
query with nested loops set to off and see if that helps w/ performance.
Thanks,
-Chris
Yes, turning nested loops off in specific cases has increased performance
greatly. It didn't fix the planner mis-estimation, just the plan it chose.
It's certainly not a panacea, but it's something we now try early on when
trying to speed up a query that matches these characteristics.
-Chris
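The knob being discussed can be confined to a single transaction so nothing else on the server is affected. A sketch (the query itself is a hypothetical stand-in):

```sql
BEGIN;
-- SET LOCAL scopes the change to this transaction only.
SET LOCAL enable_nestloop = off;
EXPLAIN ANALYZE SELECT * FROM report_view WHERE customer_id = 42;
COMMIT;
```

Comparing the EXPLAIN ANALYZE output with and without the setting shows whether the nested-loop choice, rather than the estimates, is the real cost.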
On 3
So my question is this: Shouldn’t VACUUM FULL clean Table C and reclaim
all its space?
You've got concepts mixed up.
TRUNCATE deletes all of the data from a particular table (and works in
all dbms's).
http://www.postgresql.org/docs/8.3/interactive/sql-truncate.html
VACUUM FULL is a
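Side by side, using a hypothetical table_c:

```sql
TRUNCATE TABLE table_c;   -- drops every row and returns the space immediately
VACUUM FULL table_c;      -- keeps the rows; rewrites the table to reclaim
                          -- space occupied by dead (updated/deleted) tuples
```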
[EMAIL PROTECTED] wrote:
On Wed, 12 Mar 2008, sathiya psql wrote:
In the home page itself they were saying testing ... unstable
you are talking about the debian home page right?
then we should not use that for live.
so i prefer 8.1 .
Debian selected the version of
petchimuthu lingam wrote:
C5BK4513
Ahh - you are sending this to the wrong address, these are not being
sent by the postgres mailing list.
Check which address you are replying to next time...
--
Postgresql php tutorials
http://www.designmagick.com/
sathiya psql wrote:
count(*) tooks much time...
but with the where clause we can make this to use indexing,... what
where clause we can use??
Am using postgres 7.4 in Debian OS with 1 GB RAM,
am having a table with nearly 50 lakh records,
Looks suspiciously like a question asked
is a bad idea, and why or why not.
Any other thoughts or suggestions?
Thanks,
-Chris
to have to modify the code to prepend the
problematic queries with this setting and hope the estimator is able to
better estimate this particular query in 8.3.
Thanks for the suggestions,
-Chris