depending on the type of data
> involved, and queries are normally date qualified.
That sounds interesting. I have to admit that I haven't touched inheritance in
pg at all yet so I find it hard to imagine how this would work. If you have
a chance, would you mind elaborating on i
derived from a parent table makes a lot of sense.
regards
Iain
- Original Message -
From: "Joe Conway" <[EMAIL PROTECTED]>
To: "Iain" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, September 16, 2004 1:07 PM
Subject: Re: [PERFORM]
that you could still do all of that if
you wanted to, by building the predicated view with UNION ALL of each of the
child tables.
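Just so we're talking about the same thing, here is a minimal sketch of what I
understand by it (all of the table and column names below are invented for the
example):

  -- one child table per month, date-qualified by a CHECK constraint
  CREATE TABLE log_parent (
      logged_on  date NOT NULL,
      payload    text
  );

  CREATE TABLE log_2004_09 (
      CHECK (logged_on >= '2004-09-01' AND logged_on < '2004-10-01')
  ) INHERITS (log_parent);

  CREATE TABLE log_2004_10 (
      CHECK (logged_on >= '2004-10-01' AND logged_on < '2004-11-01')
  ) INHERITS (log_parent);

  -- with inheritance, selecting from the parent scans the children too:
  SELECT * FROM log_parent WHERE logged_on >= '2004-10-01';

  -- or, without inheritance, the predicated UNION ALL view does the same job:
  CREATE VIEW log_all AS
      SELECT * FROM log_2004_09 WHERE logged_on <  '2004-10-01'
      UNION ALL
      SELECT * FROM log_2004_10 WHERE logged_on >= '2004-10-01';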
regards
Iain
- Original Message -
From: "Joe Conway" <[EMAIL PROTECTED]>
To: "Christopher Browne" <[EMAIL PROTECTED]>
Cc: <[EM
have experienced the same speedup as the data tables would
be more compact.
regards
Iain
- Original Message -
From: Igor Maciel Macaubas
To: [EMAIL PROTECTED]
Sent: Friday, October 15, 2004 3:38 AM
Subject: [PERFORM] Performance vs Schemas
Hi all,
I rec
or even better to disable caching of any sql not
using binds. I don't think even the mighty Oracle has that option.
As you may have guessed, my vote is for implementing a query cache that
includes plans.
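The closest thing we have today, as far as I know, is a per-session plan via a
prepared statement, something like this (statement and table names invented):

  -- the plan is built once at PREPARE time and reused by each EXECUTE
  PREPARE recent_orders (date) AS
      SELECT * FROM orders WHERE order_date >= $1;

  EXECUTE recent_orders('2004-10-01');
  EXECUTE recent_orders('2004-11-01');

  DEALLOCATE recent_orders;

But that only helps within one session, whereas a shared cache keyed on the
statement text would help everybody.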
I have no specific preference as to data caching. It doesn't seem to be so
important
's what I plan to
do with future PG/Oracle/Hypersonic (my 3 favourite DBMSs) application
development anyway.

Regards
Iain

- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Iain" <[EMAIL PROTECTED]>
Cc: &q
Turbo Linux 7 seems to be agreeing with Curtis:

bi: blocks sent to the block device (blocks/s).
bo: blocks received from the block device

Sorry, it's in Japanese in the original, but bi says "blocks sent to block
device" and bo is "blocks received from block device".

I don't know that mu
mber
what they call it offhand.
If anyone has opinions about that, I'd be happy to hear.
regards
Iain
- Original Message -
From: "Daniel Ceregatti" <[EMAIL PROTECTED]>
To: "Merlin Moncure" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Saturd
Hi Andrew,

I had never heard of Ubuntu before, thanks for the tip.

regards
Iain
- Original Message -
From: "Andrew McMillan" <[EMAIL PROTECTED]>
To: "Iain" <[EMAIL PROTECTED]>
Sent: Monday, November 08, 2004 12:51 PM
Subject:
hich won't be amenable to useful
comparative searching (I didn't read any of the earlier posts so if that
isn't the case, just ignore this). If this is the case, try storing the data
in a date column and see what happens then.
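For example (the table and column names here are only for illustration),
something like:

  -- assuming the dates currently live in a text column called raw_date
  ALTER TABLE access_log ADD COLUMN logged_on date;
  UPDATE access_log SET logged_on = to_date(raw_date, 'DD/MM/YY');
  CREATE INDEX access_log_logged_on_idx ON access_log (logged_on);

  EXPLAIN ANALYSE
      SELECT * FROM access_log WHERE logged_on >= '2004-11-03';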
regards
Iain
test=# explain analyse select * from bigta
conversion but if
it is, I'd expect it to use a YYYY-MM-DD format, which is what I see here.
Something like ... WHERE date >= to_date('11/03/04','DD/MM/YY')
regards
Iain
- Original Message -
From: "BBI Edwin Punzalan" <[EMAIL PROTECTED]>
To: &qu
I always say 'If you pay for quality it only hurts once', but then again I
don't equate high price with high quality ;-)
- Original Message -
From: "Geoffrey" <[EMAIL PROTECTED]>
Something to be said for the old saying, 'you get what you pay for.'
Hi,

without knowing much about your system, it seems to me that the current
status of a client should be represented by a status code on the client
record. History is the list of *past* status codes. The full history,
including the current status of a client would be obtained usi
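Something along these lines is what I'm picturing (the table and column names
are only placeholders):

  CREATE TABLE client (
      client_id    integer PRIMARY KEY,
      status_code  char(2) NOT NULL      -- the *current* status lives here
  );

  CREATE TABLE client_status_history (
      client_id    integer NOT NULL REFERENCES client,
      status_code  char(2) NOT NULL,
      changed_on   timestamp NOT NULL DEFAULT now()
  );

  -- full history, including the current status, for one client:
  SELECT status_code, changed_on
    FROM client_status_history
   WHERE client_id = 1
  UNION ALL
  SELECT status_code, NULL::timestamp
    FROM client
   WHERE client_id = 1
  ORDER BY changed_on;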
Hi,

Sorry, I didn't catch the original message, so I'm not sure if the original
poster mentioned the postgres version that he's using.

I just thought that I'd contribute this observation.

I have a DB that takes several hours to restore under 7.1 but completes in
around
have to copy the dump file from the old server.
I was thinking you could set up a
backup server in this way. On a busy system, it may take a load off the main
server so that running backups with users online shouldn't be a problem. That's
in theory anyway.
regards
Iain
- O
Iain
checkpoint_segments 8
In theory, effective cache size is the amount of memory left over for the OS
to cache the filesystem after running all programs and having 100 users
connected, plus a little slack.
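As a rough worked example (the numbers are invented for a hypothetical 2GB
machine, and effective_cache_size is expressed in 8kB disk pages):

  # postgresql.conf
  # 2GB total, minus what postgres and everything else uses with 100 users
  # connected, might leave about 1GB of OS filesystem cache:
  # 1GB / 8kB per page = 131072 pages
  effective_cache_size = 131072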
regards
Iain
- Original Message -
From: "Amrit Angsusingh" <[EMAIL PROTECTE
ur
aim is to reduce swapping by tuning your memory usage for busy times.
Also, I heard that (most? what versions?) 32-bit Linux kernels are slow at
handling more than 2GB of memory, so a kernel upgrade might be worth
considering.
regards
Iain
best you can, then you can decide if that
is fast enough for you. More memory might help, but I can't say for sure.
There are many other things to consider. I suggest that you spend some time
reading through the performance and maybe the admin lists.
regards
Iain
- Original Message -
e it could cause swapping when the system is busy. In the
not-so-bad case, the effective cache size estimate will just be completely
wrong.
Maybe a global sort memory limit would be a good idea, I don't know.
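Just to spell out the arithmetic (sort_mem is set in kB): a sort_mem of 4096
across 100 connections is about 400MB, and that already assumes only one sort
per connection at a time; a query with several sort or hash steps can use a
multiple of sort_mem, so the true worst case is higher still.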
regards
Iain
Iain wrote:
sort_mem 4096 (=400MB RAM for 100 connections)
If I unde
stering technology I guess.
Nonetheless, I would love to see this kind of functionality in postgres.
Regards
Iain
- Original Message -
From: "Jim C. Nasby" <[EMAIL PROTECTED]>
To: "Bruno Almeida do Lago" <[EMAIL PROTECTED]>
Cc: "'Mitch Pirtle
that doing so should generally reduce the overall processing
time, but if there are contention problems then it could conceivably get
much worse.
regards
Iain
- Original Message -
From: "Alex" <[EMAIL PROTECTED]>
To: "John A Meinel" <[EMAIL PROTECTED]>
Cc:
S
Hi,
just make sure that your freespace map is
big enough and then do a vacuum analyse without the full option.
I can imagine that database performance
might not be as good as it would be after a vacuum full, though I expect that it
wouldn't make much difference.
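For example (the FSM numbers below are just placeholders; they really need to
be sized from what VACUUM VERBOSE reports for the whole database):

  -- in postgresql.conf:
  --   max_fsm_pages     = 200000
  --   max_fsm_relations = 1000
  -- then, run regularly while the database stays online:
  VACUUM ANALYSE;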
regards
anyway.
Would the total time be
reduced by dropping the indexes, then vacuuming and rebuilding the indexes? I
haven't tried anything like this so I can't say.
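If you do try it, I imagine the sequence would look something like this (the
index and table names are made up):

  DROP INDEX big_table_customer_idx;
  VACUUM ANALYSE big_table;
  CREATE INDEX big_table_customer_idx ON big_table (customer_id);

  -- or, to rebuild the existing indexes without dropping them first:
  REINDEX TABLE big_table;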
You should probably say
what version of the db you are using and describe your system a
little.
Regards
Iain
-
Orig
much weight to seq scans based on single-user, straight-line
performance comparisons. If your assumption is correct, then addressing that
might help, though it's bound to have its compromises too...
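One way to check whether that is what's happening (these settings are only for
experimenting in a session, not values I'd suggest for production):

  -- compare the plans and timings with and without seq scans allowed
  EXPLAIN ANALYSE SELECT * FROM some_table WHERE some_col = 42;

  SET enable_seqscan = off;
  EXPLAIN ANALYSE SELECT * FROM some_table WHERE some_col = 42;

  -- if the index plan really is faster, nudging random_page_cost down from
  -- its default of 4 is a less heavy-handed way to shift the balance
  SET random_page_cost = 3;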
regards
Iain
- Original Message -
From: "Mark Aufflick" <
too long, but hasn't
given any specific requirements or limitations. Hopefully you can find
something suitable in the alternatives listed above.

regards
Iain

- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Iain"
Hi Rod,

> Any solution fixing buffers should probably not take into consideration
> the method being performed (do you really want to skip caching a
> sequential scan of a 2 tuple table because it didn't use an index) but
> the volume of data involved as compared to the size of the