On Thu, Feb 26, 2004 at 06:06:06PM -0500, Christopher Weimann wrote:
> On 02/26/2004-11:16AM, Dror Matalon wrote:
> > >
> > > effective_cache_size changes no cache settings for postgresql; it simply
> > > acts as a hint to the planner about how much of the database is likely
> > > to already be in the OS disk cache.
On Mon, Mar 01, 2004 at 08:30:58PM +1300, Mark Kirkwood wrote:
>
>
> Shridhar Daithankar wrote:
>
> >Dror Matalon wrote:
> >
> >>I've read Matt Dillon's discussion about the freebsd VM at
> >>http://www.daemonnews.org/21/freebsd_vm.html and, with modern machines
often having gigabytes of memory, the question of being limited to a disk
cache of around 200MB is one that comes up often.
On Fri, Feb 27, 2004 at 12:46:08PM +0530, Shridhar Daithankar wrote:
> Dror Matalon wrote:
>
> >Let me try and say it again. Is there a way to increase
the hibufspace beyond the 200 megs and
provide a bigger cache to postgres? I looked both on the postgres and
freebsd mailing lists and couldn't find a good answer to this.
If yes, any suggestions on what would be a good size on a 2 Gig machine?
Regards,
Dror
>
> Well, maybe, but not necessarily. It's better to leave the OS to look after
> most of your RAM.
>
> Chris
>
--
Dror Matalon
Zapatec Inc
1700 MLK Way
Berkeley, CA 94709
http://www.fastbuzz.com
http://www.zapatec.com
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faqs/FAQ.html
what I was asking. Seems a little on the bleeding edge. Has
anyone tried this?
On Thu, Feb 26, 2004 at 04:36:01PM -0600, Kevin Barnard wrote:
> On 26 Feb 2004 at 13:58, Dror Matalon wrote:
>
> >
> > which brings me back to my question why not make Freebsd use more of its
>
On Thu, Feb 26, 2004 at 11:55:31AM -0700, scott.marlowe wrote:
> On Thu, 26 Feb 2004, Dror Matalon wrote:
>
> > Hi,
> >
> > We have postgres running on freebsd 4.9 with 2 Gigs of memory. As per
> > repeated advice on the mailing lists we configured effective_cache_size
re) as valid. When the db
crashes, it reads the log, discards the last "log entry" if it wasn't
marked as valid, and "replays" any transactions that haven't been
committed to the db. The end result is that you might lose your last
transaction(s) if the db crashes, b
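The replay rule described above can be sketched as a toy fragment (purely
illustrative; the entry format is invented and this is not PostgreSQL's
actual WAL layout or recovery code):

```python
# Toy model of the recovery rule described above: on crash recovery,
# drop a trailing log entry that was never marked valid, then replay
# the remaining committed transactions in order.
def replay(log_entries):
    """log_entries: list of (transaction_id, marked_valid) tuples."""
    if log_entries and not log_entries[-1][1]:
        log_entries = log_entries[:-1]   # discard the unmarked tail entry
    return [tx for tx, _valid in log_entries]

# The last transaction was still being written when the "crash" hit,
# so it is lost -- exactly the trade-off described in the thread.
print(replay([("tx1", True), ("tx2", True), ("tx3", False)]))
```

The important property is that only a trailing, incomplete entry is ever
discarded; everything marked valid before it is re-applied.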
did a mv instead of a cp; mv duplicates ownership & permissions exactly.
Sort Key: probeset_id, chip, gene_symbol, title, sequence_description, pfam
>   -> Index Scan using gene_symbol_idx_fun1 on affy_array_annotation
>      (cost=0.00..3391.70 rows=857 width=265)
>        Index Cond: (lower((gene_symbol)::text) = '
On Mon, Oct 27, 2003 at 07:52:06AM -0500, Christopher Browne wrote:
> In the last exciting episode, [EMAIL PROTECTED] (Dror Matalon) wrote:
> > I was answering an earlier response that suggested that maybe the actual
> > counting took time so it would take quite a bit longer
On Mon, Oct 27, 2003 at 11:12:37AM -0500, Vivek Khera wrote:
> >>>>> "DM" == Dror Matalon <[EMAIL PROTECTED]> writes:
>
> DM> effective_cache_size = 25520 -- freebsd formula: vfs.hibufspace / 8192
>
> DM> 1. While it seems to work correctly
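The formula quoted above is just a unit conversion and is easy to sanity
check (the hibufspace value below is an illustrative ~200 MB stand-in, not
the poster's actual sysctl reading):

```python
# effective_cache_size is expressed in 8 kB PostgreSQL pages, so the
# FreeBSD buffer-space byte count is divided by the page size.
hibufspace = 209715200        # bytes; on FreeBSD: sysctl -n vfs.hibufspace
page_size = 8192              # PostgreSQL page size in bytes
effective_cache_size = hibufspace // page_size
print(effective_cache_size)   # -> 25600
```

A real reading of vfs.hibufspace is slightly under 200 MB, which is why
the thread arrives at 25520 rather than a round 25600.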
On Mon, Oct 27, 2003 at 12:52:27PM +0530, Shridhar Daithankar wrote:
> Dror Matalon wrote:
>
> >On Mon, Oct 27, 2003 at 01:04:49AM -0500, Christopher Browne wrote:
> >>Most of the time involves:
> >>
> >>a) Reading each page of the table, and
> >>
On Mon, Oct 27, 2003 at 01:04:49AM -0500, Christopher Browne wrote:
> [EMAIL PROTECTED] (Dror Matalon) wrote:
> > On Sun, Oct 26, 2003 at 10:49:29PM -0500, Greg Stark wrote:
> >> Dror Matalon <[EMAIL PROTECTED]> writes:
> >>
> >> > explain analyze select count(*) from items where channel < 5000;
On Sun, Oct 26, 2003 at 10:49:29PM -0500, Greg Stark wrote:
> Dror Matalon <[EMAIL PROTECTED]> writes:
>
> > explain analyze select count(*) from items where channel < 5000;
> >
btree (dtstamp)
"item_signature" btree (signature)
"items_channel_article" btree (channel, articlenumber)
"items_channel_tstamp" btree (channel, dtstamp)
5. Any other comments/suggestions on the above setup.
Thanks,
Dror
rather than under "My
queries are slow or don't make use of the indexes. Why?"
Also, for 7.4 you might want to take out
4.22) Why are my subqueries using IN so slow?
>
> --
> Josh Berkus
> Aglio Database Solutions
> San Francisco
On Thu, Oct 09, 2003 at 08:35:22PM -0500, Bruno Wolff III wrote:
> On Thu, Oct 09, 2003 at 17:44:46 -0700,
> Dror Matalon <[EMAIL PROTECTED]> wrote:
> >
> > How is doing order by limit 1 faster than doing max()? Seems like the
> > optimizer will need to sort o
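The point behind the question above is that pre-8.0 PostgreSQL could not
use an index for max(), while ORDER BY ... DESC LIMIT 1 can walk an index
backwards and stop at the first row. A minimal, runnable illustration that
the two forms are equivalent (sqlite3 is used here only as a stand-in
database, and the table and column names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (channel INTEGER, dtstamp TEXT)")
con.execute("CREATE INDEX items_dtstamp ON items (dtstamp)")
con.executemany("INSERT INTO items VALUES (?, ?)",
                [(1, "2003-10-0%d" % d) for d in range(1, 8)])

# Aggregate form: the old planner scanned every qualifying row.
(via_max,) = con.execute("SELECT max(dtstamp) FROM items").fetchone()

# Rewrite from the thread: with an index on dtstamp, this can stop
# after reading a single index entry.
(via_limit,) = con.execute(
    "SELECT dtstamp FROM items ORDER BY dtstamp DESC LIMIT 1").fetchone()

assert via_max == via_limit
print(via_limit)   # -> 2003-10-07
```

Same answer either way; only the amount of data the planner has to touch
differs.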
On Thu, Oct 09, 2003 at 07:07:00PM -0400, Greg Stark wrote:
> Dror Matalon <[EMAIL PROTECTED]> writes:
>
> > Actually what finally solved the problem is repeating the
> > dtstamp > last_viewed
> > in the sub select
>
> That will at least convince the opti
; script; nobody so far has stepped forward
> to get it done for 7.4, and I doubt we have time now.
>
> I'll probably create a Perl script in a month or so, but not before that
>
hundred columns with only 3000 or so rows?
Regards,
Dror
> BTW, the int8 and numeric(24,12) are for future expansion.
> I hate limits.
>
> Greg
>
>
> Dror Matalon wrote:
> >It's still not quite clear what you're trying to do. Many people's gut
>
Phone: 614.318.4314
> Fax: 614.431.8388
> Email: [EMAIL PROTECTED]
> Cranel. Technology. Integrity. Focus.
>
that
satisfy "dtstamp > last_viewed". Obviously I want to run the max() only
on a few items. Repeating "dtstamp > last_viewed" did the trick, but it
seems like there should be a more elegant/clear way to tell the planner
which constraint to apply first.
Dror
On Wed,
On Fri, Oct 03, 2003 at 06:10:29PM -0400, Rod Taylor wrote:
> On Fri, 2003-10-03 at 17:53, Dror Matalon wrote:
> > On Fri, Oct 03, 2003 at 05:44:49PM -0400, Rod Taylor wrote:
> > > > item_max_date() looks like this:
> > > >select max(dtstamp) from items
n't the first or second one ;)
CREATE OR REPLACE FUNCTION item_max_date (int4, varchar) RETURNS
timestamptz AS '
    select max(dtstamp) from items where channel = $1 and link = $2;
' LANGUAGE 'sql';
m like it should be something like 15 +
(0.5 * 5) + small overhead = 30 msec or so, rather than the 300 I'm
seeing.
> and possibly build an index on channel, link, dtstamp
Didn't make a difference either. Explain analyze shows that it didn't
use it.
>
> --
> -Josh
19..6977.08 rows=1
width=259) (actual time=1.94..150.55 rows=683 loops=1)
Index Cond: (channel = 2)
Filter: ((dtstamp = item_max_date(2, link)) AND (NOT (hashed subplan)))
SubPlan
-> Seq Scan on viewed_items (cost=0.00..8.19 rows=2 width=4) (actua
On Thu, Oct 02, 2003 at 10:08:18PM -0400, Christopher Browne wrote:
> The world rejoiced as [EMAIL PROTECTED] (Dror Matalon) wrote:
> > I don't have an opinion on how hard it would be to implement the
> > tracking in the indexes, but "select count(*) from some table" is, i
-
> let name="cbbrowne" and tld="libertyrms.info" in String.concat "@" [name;tld];;
> <http://dev6.int.libertyrms.com/>
> Christopher Browne
> (416) 646 3304 x124 (land)
>
around 60 Megs and goes through it, which it could
do in a fraction of the time.
Dror