, and with modern machines often having
gigabytes of memory, the question of being limited to a disk cache of
around 200MB is one that comes up often.
On Fri, Feb 27, 2004 at 12:46:08PM +0530, Shridhar Daithankar wrote:
Dror Matalon wrote:
Let me try and say it again. I know that setting effective_cache_size
On Thu, Feb 26, 2004 at 11:55:31AM -0700, scott.marlowe wrote:
On Thu, 26 Feb 2004, Dror Matalon wrote:
Hi,
We have Postgres running on FreeBSD 4.9 with 2 Gigs of memory. As per
repeated advice on the mailing lists we configured effective_cache_size
= 25520, which you get by doing
I was asking. Seems a little on the bleeding edge. Has
anyone tried this?
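A minimal sketch of the arithmetic behind that setting: FreeBSD's vfs.hibufspace (read on a live box with `sysctl vfs.hibufspace`) divided by PostgreSQL's 8192-byte page size. The hibufspace value below is not stated directly in the thread; it is the one implied by 25520 * 8192.

```sql
-- Hedged sketch: effective_cache_size = vfs.hibufspace / page size.
-- 209059840 bytes is the hibufspace value implied by 25520 * 8192.
SELECT 209059840 / 8192 AS effective_cache_size;  -- 25520
```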
On Thu, Feb 26, 2004 at 04:36:01PM -0600, Kevin Barnard wrote:
On 26 Feb 2004 at 13:58, Dror Matalon wrote:
which brings me back to my question: why not make FreeBSD use more of its
memory for disk caching
beyond the 200 megs and
provide a bigger cache to Postgres? I looked on both the postgres and
freebsd mailing lists and couldn't find a good answer to this.
Well, maybe, but not necessarily. It's better to let the OS look after
most of your RAM.
Chris
--
Dror Matalon
Zapatec Inc
If yes, any suggestions on what would be a good size on a 2 Gig machine?
Regards,
Dror
marked as valid, and replays any transactions that haven't been
committed to the db. The end result is that you might lose your last
transaction(s) if the db crashes, but nothing ever gets corrupted.
So what am I missing in this picture?
Regards,
Dror
TIP 4: Don't 'kill -9' the postmaster
Regards,
William
On Mon, Oct 27, 2003 at 12:52:27PM +0530, Shridhar Daithankar wrote:
Dror Matalon wrote:
On Mon, Oct 27, 2003 at 01:04:49AM -0500, Christopher Browne wrote:
Most of the time involves:
a) Reading each page of the table, and
b) Figuring out which records on those pages are still live
On Mon, Oct 27, 2003 at 11:12:37AM -0500, Vivek Khera wrote:
DM == Dror Matalon [EMAIL PROTECTED] writes:
DM effective_cache_size = 25520 -- freebsd formula: vfs.hibufspace / 8192
DM 1. While it seems to work correctly, I'm unclear on why this number is
DM correct. 25520*8 = 204160
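As a hedged sketch of the unit conversion behind that question: effective_cache_size is counted in 8 KB pages, so 25520 pages works out to the ~200 MB cache discussed above.

```sql
-- 25520 pages * 8 KB/page = 204160 KB; * 8192 bytes/page = hibufspace.
SELECT 25520 * 8    AS kilobytes,   -- 204160
       25520 * 8192 AS bytes;       -- 209059840, roughly 199 MB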
On Mon, Oct 27, 2003 at 07:52:06AM -0500, Christopher Browne wrote:
In the last exciting episode, [EMAIL PROTECTED] (Dror Matalon) wrote:
I was answering an earlier response that suggested that maybe the actual
counting took time, so it would take quite a bit longer when there are
more rows.
On Sun, Oct 26, 2003 at 10:49:29PM -0500, Greg Stark wrote:
Dror Matalon [EMAIL PROTECTED] writes:
explain analyze select count(*) from items where channel < 5000;
QUERY PLAN
for 7.4
4.22) Why are my subqueries using IN so slow?
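The FAQ entry referenced above describes the classic pre-7.4 advice: rewrite IN subqueries as EXISTS. A hedged sketch (the channels table and its active column are hypothetical, not from the thread):

```sql
-- Slow on releases before 7.4:
SELECT * FROM items
WHERE channel IN (SELECT id FROM channels WHERE active);

-- Usually much faster: EXISTS lets the planner probe the subquery
-- per outer row instead of materializing and rescanning its result.
SELECT * FROM items i
WHERE EXISTS (SELECT 1 FROM channels c
              WHERE c.id = i.channel AND c.active);
```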
--
Josh Berkus
Aglio Database Solutions
San Francisco
a Perl script in a month or so, but not before that
On Thu, Oct 09, 2003 at 07:07:00PM -0400, Greg Stark wrote:
Dror Matalon [EMAIL PROTECTED] writes:
Actually what finally solved the problem is repeating the
dtstamp > last_viewed
in the sub select
That will at least convince the optimizer to use an index range lookup. But it
still
did the trick, but it
seems like there should be a more elegant/clear way to tell the planner
which constraint to apply first.
Dror
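A hedged sketch of that trick (the query shape and the cutoff value are assumptions, not the poster's actual query): repeating the dtstamp constraint inside the sub select hands the planner an explicit bound there, which it can satisfy with an index range lookup.

```sql
SELECT * FROM items
WHERE dtstamp > '2003-10-01'
  AND dtstamp = (SELECT max(dtstamp) FROM items
                 WHERE dtstamp > '2003-10-01');  -- repeated constraint
```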
On Wed, Oct 08, 2003 at 10:54:24AM -0400, Greg Stark wrote:
Rod Taylor [EMAIL PROTECTED] writes:
On Fri, 2003-10-03 at 17:53, Dror Matalon wrote:
and numeric(24,12) are for future expansion.
I hate limits.
Greg
Dror Matalon wrote:
It's still not quite clear what you're trying to do. Many people's gut
reaction is that you're doing something strange with so many columns in
a table.
Using your example, a different approach might be to do
make a difference either. Explain analyze shows that it didn't
use it.
timestamptz AS '
select max(dtstamp) from items where channel = $1 and link = $2;
' LANGUAGE 'sql';
On Fri, Oct 03, 2003 at 06:10:29PM -0400, Rod Taylor wrote:
On Fri, 2003-10-03 at 17:53, Dror Matalon wrote:
On Fri, Oct 03, 2003 at 05:44:49PM -0400, Rod Taylor wrote:
item_max_date() looks like this:
select max(dtstamp) from items where channel = $1 and link = $2;
It is too
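The fragment quoted earlier in the thread is the body of this function. A hedged reconstruction of the full definition (the argument types, int for channel and varchar for link, are assumptions; only the name, body, and return type appear in the thread):

```sql
CREATE FUNCTION item_max_date(int, varchar) RETURNS timestamptz AS '
    select max(dtstamp) from items where channel = $1 and link = $2;
' LANGUAGE 'sql';
```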