Hi all,
I have a table which has more than 20 records. I need to get
the records that match a condition like this:
where today::date = '2004-11-05';
This is the only condition in the query. There is a btree index on the
column today.
Is there any way to optimise it?
rgds
Antony Paul
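One common cause, offered as a guess since the thread is truncated here:
casting the indexed column in the WHERE clause can keep the planner from
using a plain btree index on it. A minimal sketch of two possible fixes,
assuming a hypothetical table "mytable" and that "today" is a timestamp
without time zone column (the expression index below would fail for
timestamptz, whose cast to date is not immutable):

    -- Option 1: rewrite the cast as a half-open range, so the existing
    -- plain btree index on "today" can be used:
    SELECT * FROM mytable
    WHERE today >= DATE '2004-11-05' AND today < DATE '2004-11-06';

    -- Option 2: index the expression itself and keep the original predicate:
    CREATE INDEX mytable_today_date_idx ON mytable ((today::date));
    SELECT * FROM mytable WHERE today::date = DATE '2004-11-05';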
On Thu, Nov 04, 2004 at 23:32:57 +0000,
Matt Clark <[EMAIL PROTECTED]> wrote:
> >
> >I think in the future there will be a good bit of presentation
> >logic in the client...
>
> Not if Bruno has his way ;-)
Sure there will, but it will be controlled by the client, perhaps taking
suggestio
On Fri, 2004-11-05 at 02:47, Chris Browne wrote:
> Another thing that would be valuable would be to have some way to say:
>
> "Read this data; don't bother throwing other data out of the cache
>to stuff this in."
This is similar, although not exactly the same thing:
http://www.opengroup.or
On Thu, 2004-11-04 at 23:29, Pierre-Frédéric Caillaud wrote:
> There is also the fact that syncing after every transaction could be
> changed to syncing every N transactions (N fixed or depending on the data
> size written by the transactions) which would be more efficient than the
> cu
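For reference, PostgreSQL already has settings in roughly this direction;
a small illustrative sketch using the standard parameters (shown here as a
session experiment, not as tuning advice):

    -- commit_delay makes a committing backend pause briefly so that other
    -- transactions committing at the same time can share a single fsync;
    -- commit_siblings is how many other active transactions must exist
    -- before the delay is taken at all.
    SHOW commit_delay;       -- delay in microseconds, 0 disables it
    SHOW commit_siblings;
    SET commit_delay = 100;  -- per-session; persist it via postgresql.conf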
On Fri, 2004-11-05 at 06:20, Steinar H. Gunderson wrote:
> You mean, like, open(filename, O_DIRECT)? :-)
This disables readahead (at least on Linux), which is certainly not what we
want: for the very case where we don't want to keep the data in cache
for a while (sequential scans, VACUUM), we also want
After a long battle with technology, [EMAIL PROTECTED] (Simon Riggs), an earthling,
wrote:
> On Thu, 2004-11-04 at 15:47, Chris Browne wrote:
>
>> Another thing that would be valuable would be to have some way to say:
>>
>> "Read this data; don't bother throwing other data out of the cache
>>
Matt Clark wrote:
Pierre-Frédéric Caillaud wrote:
check this marvelous piece of 5 minutes of work:
http://boutiquenumerique.com/test/iframe_feed.html
that made me smile :-)
(apologies for bad french)
M
Javascript is not an option for the scripts, one of the mandates of the
project is to su
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] (Markus
Schaber) transmitted:
> We should create a list of those needs, and then communicate those
> to the kernel/fs developers. Then we (as well as other apps) can
> make use of those features where they are available, and u
Pierre-Frédéric Caillaud wrote:
check this marvelous piece of 5 minutes of work:
http://boutiquenumerique.com/test/iframe_feed.html
that made me smile :-)
(apologies for bad french)
M
A note though : you'll have to turn off HTTP persistent
connections in your server (not in your proxy) or you're back to
square one.
I hadn't considered that. On the client side it would seem to be up to
the client whether to use a persistent connection or not. If it does,
then yeah, a
check this marvelous piece of 5 minutes of work:
http://boutiquenumerique.com/test/iframe_feed.html
Yup. If you go the JS route then you can do even better by using JS to
load data into JS objects in the background and manipulate the page
content directly, no need for even an Iframe. Ignore t
Javascript is too powerful to turn on for any random web page. It is only
essential for web pages because people write their web pages to only
work with javascript.
Hmm... I respectfully disagree. It is so powerful that it is impossible
to ignore when implementing a sophisticated app. And it is
All 3 plans have crappy estimates.
Run ANALYZE in production, then send another explain analyze (as an
attachment please, to avoid linewrap).
Er, no other possible answer except Rod's :-)
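To make that advice concrete, a minimal sketch (the table and query are
hypothetical stand-ins for the real ones):

    ANALYZE;  -- refresh planner statistics for all tables (or name one table)
    -- then capture the real plan, with timings, and send the output:
    EXPLAIN ANALYZE SELECT * FROM item WHERE status = 1;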
On Thu, Nov 04, 2004 at 22:37:06 +0000,
Matt Clark <[EMAIL PROTECTED]> wrote:
> >...
>
> Yup. If you go the JS route then you can do even better by using JS to
> load data into JS objects in the background and manipulate the page
> content directly, no need for even an Iframe. Ignore the dul
On Thu, 2004-11-04 at 17:42, [EMAIL PROTECTED] wrote:
> Quoting Rod Taylor <[EMAIL PROTECTED]>:
> > Please send an explain analyze from both.
> I'm sending three explains. In the first, the Dell machine didn't use existing
> indexes, so I turned enable_seqscan off (this is the second explain). The tota
[EMAIL PROTECTED] wrote:
Quoting Rod Taylor <[EMAIL PROTECTED]>:
Please send an explain analyze from both.
I'm sending three explains. In the first, the Dell machine didn't use existing
indexes, so I turned enable_seqscan off (this is the second explain). The total
cost dec
Quoting Rod Taylor <[EMAIL PROTECTED]>:
> Please send an explain analyze from both.
I'm sending three explains. In the first, the Dell machine didn't use existing
indexes, so I turned enable_seqscan off (this is the second explain). The total
cost decreased, but the total time did not. The third explain re
- ITEM table will grow, grow, grow (sold items are not deleted)
WHERE PRODUCT.SECTION_USED_FK IS NOT NULL AND ITEM.STATUS=1 and
(ITEM.KIND=2 or ITEM.KIND=3)
Partial index on item.status?
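A sketch of that suggestion, using the column names from the query above
(the index name is made up; note that the index's WHERE clause must match
the query's predicate for the planner to consider it):

    -- Covers only rows with status = 1, so it stays small even as ITEM
    -- grows, and indexes "kind" to help the (KIND=2 OR KIND=3) test.
    CREATE INDEX item_status1_kind_idx ON item (kind) WHERE status = 1;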
In your webpage, include an iframe with JavaScript to refresh it
every five seconds. The iframe fetches a page from the server which
brings in the new data in the form of generated JavaScript that writes
into the parent window. Thus, you get a very short request every 5
seconds to fetch new
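On the database side, the page the iframe fetches typically just selects
whatever arrived since the client's last request; a hypothetical sketch
(table and column names are made up):

    -- The client remembers the last id it saw (here 12345) and asks only
    -- for newer rows, so each 5-second poll stays very cheap given an
    -- index on id.
    SELECT id, body FROM messages WHERE id > 12345 ORDER BY id;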
Hello,
I am seeking some advice on appropriate indexing. I think I have a rough
idea where to place my indices but would be grateful for some tips from
more experienced people.
The following example shows what is probably the most complex query of
the application.
A few points to give you a rou
I'm guessing (2) - PG doesn't give the results of a query in a stream.
In 1- I was thinking about a cursor...
but I think his problem is more like 2-
In that case one can either code a special-purpose server or use the
following hack:
In your webpage include an iframe with a Java
Alvaro Nunes Melo <[EMAIL PROTECTED]> writes:
> I have a very tricky situation here. A client bought a Dell dual-machine
> to be used as Database Server, and we have a cheaper machine used in
> development. With identical databases, configuration parameters and
> running the same query, our machine
On Thu, 2004-11-04 at 16:06, Alvaro Nunes Melo wrote:
> Hi,
>
> I have a very tricky situation here. A client bought a Dell dual-machine
> to be used as Database Server, and we have a cheaper machine used in
> development. With identical databases, configuration parameters and
> running the same q
Hi,
I have a very tricky situation here. A client bought a Dell dual-machine
to be used as Database Server, and we have a cheaper machine used in
development. With identical databases, configuration parameters and
running the same query, our machine is almost 3x faster.
I tried to increase the sh
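One low-tech check worth running on both machines before digging deeper
(not from the thread, just a standard diagnostic; the table name is a
placeholder):

    -- Stale or missing statistics are a frequent cause of "identical
    -- config, very different plans"; reltuples/relpages are typically
    -- zero on a never-analyzed table.
    SELECT relname, reltuples, relpages FROM pg_class
    WHERE relname = 'your_table';

    ANALYZE your_table;  -- then re-run EXPLAIN ANALYZE on both machines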
These are CGI scripts at the lowest level, nothing more and nothing
less. While I could probably embed a small webserver directly into
the perl scripts and run that as a daemon, it would take away the
portability that the scripts currently offer.
If they're CGI *scripts* then they just use the
1- You have a query that runs for half an hour and you spoon-feed
the results to the client?
(argh)
2- Your script looks for new data every few seconds, sends a
packet, then sleeps, and loops?
If it's 2, I have a ready-made solution for you, just ask.
I'm guessing (2) - PG do
On Thu, Nov 04, 2004 at 03:30:19PM -0500, Martin Foster wrote:
> This should be my last question on the matter: does squid report the
> proper IP address of the client themselves? That's a critical
> requirement for the scripts.
AFAIK it's in some header; I believe they're called "X-Forwarded-For".
Simon Riggs <[EMAIL PROTECTED]> writes:
> On Thu, 2004-11-04 at 19:34, Tom Lane wrote:
>> But only for Postgres' own shared buffers. The kernel cache still gets
>> trashed, because we have no way to suggest to the kernel that it not
>> hang onto the data read in.
> I guess a difference in viewpoi
On Thu, 2004-11-04 at 19:34, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > On Thu, 2004-11-04 at 15:47, Chris Browne wrote:
> >> Something like a "read_uncached()" call...
> >>
> >> That would mean that a seq scan or a vacuum wouldn't force useful data
> >> out of cache.
>
> > ARC
Matt Clark wrote:
Correct, 75% of all hits are on a script that can take anywhere from
a few seconds to half an hour to complete. The script essentially
auto-flushes to the browser so they get new information as it arrives,
creating the illusion of on-demand generation.
This is more li
On Thu, 4 Nov 2004 18:20:18 -0000, Matt Clark <[EMAIL PROTECTED]> wrote:
Correct, 75% of all hits are on a script that can take anywhere from
a few seconds to half an hour to complete. The script essentially
auto-flushes to the browser so they get new information as it arrives,
creating the illusion of on-demand generation.
Simon Riggs <[EMAIL PROTECTED]> writes:
> On Thu, 2004-11-04 at 15:47, Chris Browne wrote:
>> Something like a "read_uncached()" call...
>>
>> That would mean that a seq scan or a vacuum wouldn't force useful data
>> out of cache.
> ARC does almost exactly those two things in 8.0.
But only for P
On Thu, Nov 04, 2004 at 10:47:31AM -0500, Chris Browne wrote:
> Another thing that would be valuable would be to have some way to say:
>
> "Read this data; don't bother throwing other data out of the cache
>to stuff this in."
>
> Something like a "read_uncached()" call...
You mean, like, o
On Thu, 2004-11-04 at 15:47, Chris Browne wrote:
> Another thing that would be valuable would be to have some way to say:
>
> "Read this data; don't bother throwing other data out of the cache
>to stuff this in."
>
> Something like a "read_uncached()" call...
>
> That would mean that a se
> Correct, 75% of all hits are on a script that can take anywhere from
> a few seconds to half an hour to complete. The script essentially
> auto-flushes to the browser so they get new information as it arrives,
> creating the illusion of on-demand generation.
This is more like a s
Matt Clark wrote:
Apache::DBI overall works better for what I require, even if it is not a
pool per se. Now if pgpool supported variable-rate pooling like Apache
does with its children, it might help to even things out. That and
you'd still get the spike if you have to start the webserver
Matt Clark wrote:
Case in point: A first-time visitor hits your home page. A
dynamic page is generated (in about 1 second) and served
(taking 2 more seconds) which contains links to 20 additional
The gain from an accelerator is actually even more than that, as it takes
essentially zero seconds
Myself, I like a small Apache with few modules serving static files (no
dynamic content, no db connections), and with a mod_proxy on a special
path directed to another Apache which generates the dynamic pages (few
processes, persistent connections...)
You get the best of both, static files
> Case in point: A first-time visitor hits your home page. A
> dynamic page is generated (in about 1 second) and served
> (taking 2 more seconds) which contains links to 20 additional
The gain from an accelerator is actually even more than that, as it takes
essentially zero seconds for Apache
> Apache::DBI overall works better for what I require, even if it is not a
> pool per se. Now if pgpool supported variable-rate pooling like Apache
> does with its children, it might help to even things out. That and
> you'd still get the spike if you have to start the webserver and
>
[EMAIL PROTECTED] (Pierre-Frédéric Caillaud) writes:
>> posix_fadvise(2) may be a candidate. Read/write barriers are another one, as
>> well as syncing a bunch of data in different files with a single call
>> (so that the OS can determine the best write order). I can also imagine
>> some interaction w
Kevin Barnard wrote:
I am generally interested in a good solution for this. So far our
solution has been to increase the hardware to the point of allowing
800 connections to the DB.
I don't have the mod loaded for Apache, but we haven't had too many
problems there. The site is split pretty good b
Mario Ivankovits <[EMAIL PROTECTED]> writes:
> After populating the table with 8920 records and running "analyze", the
> scenario gets even worse:
> select * from tt where seckey = 1;
> Seq Scan on tt (cost=0.00..168.50 rows=1669 width=12) (actual
> time=0.000..15.000 rows=1784 loops=1)
> Filter: (se
On Wed, Nov 03, 2004 at 03:53:16PM -0500, Andrew Sullivan wrote:
> and may bust your query out of the cache. Also, we'd need some more
Uh, the data you're querying, of course. Queries themselves aren't
cached.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
I remember when computers were frustratin
Simon Riggs wrote:
All workloads are not created equally, so mixing them can be tricky.
This will be better in 8.0 because seq scans don't spoil the cache.
Apache is effectively able to segregate the workloads because each
workload is "in a directory". SQL isn't stored anywhere for PostgreSQL
to sa
Matt Clark wrote:
I have a dual processor system that can support over 150 concurrent
connections handling normal traffic and load. Now suppose I set up
Apache to spawn all of its children instantly, what will
...
This will spawn 150 children in a short period of time, and as
this takes
"Doct
posix_fadvise(2) may be a candidate. Read/write barriers are another one, as
well as syncing a bunch of data in different files with a single call
(so that the OS can determine the best write order). I can also imagine
some interaction with the FS journalling system (to avoid duplicate
efforts).
The
On Wed, 2004-11-03 at 21:25, Martin Foster wrote:
> Simon Riggs wrote:
> > On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
> >
> >>Is there a way to restrict how much load a PostgreSQL server can take
> >>before dropping queries in order to safeguard the server? I was
> >>looking at the log
Hi, Leeuw,
On Thu, 21 Oct 2004 12:44:10 +0200
"Leeuw van der, Tim" <[EMAIL PROTECTED]> wrote:
> (I'm not sure if it's a good idea to create a PG-specific FS in your
> OS of choice, but it's certainly gonna be easier than getting FS code
> inside of PG)
I don't think PG really needs a specific FS
Mario Ivankovits wrote:
Hello !
Sorry if this has been discussed before, it is just hard to find in the
archives using the words "or" or "in" :-o
I use postgres-8.0 beta4 for windows.
I broke my problem down to a very simple table - two columns
"primary_key" and "secondary_key". Creates and Inse
> I have a dual processor system that can support over 150 concurrent
> connections handling normal traffic and load. Now suppose I set up
> Apache to spawn all of its children instantly, what will
...
> This will spawn 150 children in a short period of time, and as
> this takes
"Doctor, it h