On Thu, 27 Jan 2005 12:43:56 -0500, Alexandre Leclerc
<[EMAIL PROTECTED]> wrote:
> On Thu, 27 Jan 2005 17:27:40 +0100, Dawid Kuroczko <[EMAIL PROTECTED]> wrote:
> > On Thu, 27 Jan 2005 10:23:34 -0500, Alexandre Leclerc
> > <[EMAIL PROTECTED]> wrote:
> > > Here a normal listing of design.product_dep
On Fri, 28 Jan 2005 09:07:59 +0100, Dawid Kuroczko <[EMAIL PROTECTED]> wrote:
> On Thu, 27 Jan 2005 12:43:56 -0500, Alexandre Leclerc
> <[EMAIL PROTECTED]> wrote:
> > On Thu, 27 Jan 2005 17:27:40 +0100, Dawid Kuroczko <[EMAIL PROTECTED]>
> > wrote:
> > > On Thu, 27 Jan 2005 10:23:34 -0500, Alexand
Hi Folks,
I am running this query on Postgres 8 beta and it is not
using the right index, whereas if I run the same query on Postgres 7.4
it uses the right index. Here are the explain analyze outputs
for both versions. Can anyone explain this?
Thanks.
tables: att
On Thu, Jan 20, 2005 at 04:02:39PM +0100, Hervé Piedvache wrote:
>
> I don't insist on having data in RAM, but when you use PostgreSQL with
> a big database you know that for quick access, just for reading the index file
> for example, it's better to have as much RAM as possible ... I just want to
On Thu, Jan 20, 2005 at 10:40:02PM -0200, Bruno Almeida do Lago wrote:
>
> I was thinking the same! I'd like to know how other databases such as Oracle
> do it.
You mean "how Oracle does it". They're the only ones in the market
that really have this technology.
A
--
Andrew Sullivan | [EMAIL
On Thu, Jan 20, 2005 at 03:54:23PM +0100, Hervé Piedvache wrote:
> Slony does not use RAM ... but PostgreSQL will need RAM for accessing a
> database
> of 50 GB ... so having two servers with the same configuration replicated by
> Slony does not solve the problem of the scalability of the database.
On Thu, Jan 20, 2005 at 04:07:51PM +0100, Hervé Piedvache wrote:
> Yes, it seems to be the only solution ... but I'm a little disappointed about
> this ... could you explain to me why there is no such kind of
> functionality ... it seems to be a real need for big applications, no?
I hate to be snarky,
On Mon, Jan 24, 2005 at 01:28:29AM +0200, Hannu Krosing wrote:
>
> IIRC it hates pg_dump mainly on master. If you are able to run pg_dump
> from slave, it should be ok.
For the sake of the archives, that's not really a good idea. There
is some work afoot to solve it, but at the moment dumping fr
I was wondering about index types. Oracle has an index type called a
'bitmap' index. They describe this as an index for low cardinality
fields, where only the cardinal values are indexed in a b-tree, and
then it uses a bitmap below that to describe rows. They say that this
type of index is very
At this point I will interject a couple of benchmark numbers based on
a new system we just configured as food for thought.
System A (old system):
Compaq Proliant Dual Pentium III 933 with Smart Array 5300, one RAID
1, one 3 Disk RAID 5 on 10k RPM drives, 2GB PC133 RAM. Original
Price: $6500
Syst
contrib/intarray has an index type which could be what you need.
I was wondering about index types. Oracle has an index type called a
'bitmap' index. They describe this as an index for low cardinality
fields, where only the cardinal values are indexed in a b-tree, and
then it uses a bitmap b
Alex Turner <[EMAIL PROTECTED]> writes:
> I was wondering about index types. Oracle has an index type called a
> 'bitmap' index.
There's a great deal about this in the list archives (probably more in
pgsql-hackers than in -performance). Most of the current interest has
to do with building in-mem
> With the right configuration you can get very serious throughput. The
> new system is processing over 2500 insert transactions per second. We
> don't need more RAM with this config. The disks are fast enough.
> 2500 transaction/second is pretty damn fast.
fsync on/off?
Merlin
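The fsync question is the crux: with fsync off, commits do not wait for the disk, which inflates transactions-per-second numbers at the cost of crash safety. The 8.0-era postgresql.conf setting, shown here only as a sketch:

```
fsync = true    # flush WAL to disk at every commit (crash-safe)
#fsync = false  # much faster on some workloads, but risks data loss on crash
```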
--
Pallav Kalva <[EMAIL PROTECTED]> writes:
> I am running this query on Postgres 8 beta and it is not
> using the right index, whereas if I run the same query on Postgres 7.4
> it uses the right index.
1. Beta which, exactly?
2. Have you ANALYZEd both tables lately?
3. I
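One quick way to act on the ANALYZE suggestion, using the table names that appear later in this thread, is to re-gather statistics and re-run the plan:

```sql
ANALYZE common.string;
ANALYZE common.attribute;
EXPLAIN ANALYZE
SELECT * FROM common.string text1_ WHERE text1_.value = 'squareFeet';
```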
On Fri, 28 Jan 2005 10:59:58 -0500
Alex Turner <[EMAIL PROTECTED]> wrote:
> At this point I will interject a couple of benchmark numbers based on
> a new system we just configured as food for thought.
>
> System A (old system):
> Compaq Proliant Dual Pentium III 933 with Smart Array 5300, one RAI
Alexandre Leclerc wrote:
Here a normal listing of design.product_department_time:
 product_id | department_id | req_time
------------+---------------+----------
        906 | A             |     3000
        906 | C             |     3000
        906 | D             |     1935
        907 | A
Tom Lane wrote:
Pallav Kalva <[EMAIL PROTECTED]> writes:
I am running this query on Postgres 8 beta and it is not
using the right index, whereas if I run the same query on Postgres 7.4
it uses the right index.
1. Beta which, exactly?
Beta 4
2. Have you ANALYZEd bot
Pallav Kalva <[EMAIL PROTECTED]> writes:
>> begin;
>> alter table common.string drop constraint pk_string_stringid;
>> explain analyze ... same query ...
>> rollback;
>>
> what do you mean by rollback exactly? I can drop the pk constraint
> and run explain analyze and see how it behaves.
T
There's a great deal about this in the list archives (probably more in
pgsql-hackers than in -performance). Most of the current interest has
to do with building in-memory bitmaps on the fly, as a way of decoupling
index and heap scan processing. Which is not quite what you're talking
about but sh
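PostgreSQL has no bitmap index access method as such; for low-cardinality columns, one workaround that already exists in 7.4/8.0 is a partial index. Table and column names below are made up for illustration:

```sql
-- Index only the rows carrying the interesting cardinal value;
-- the planner can use it for queries that include the predicate.
CREATE INDEX orders_open_idx ON orders (order_date)
    WHERE status = 'open';
```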
As far as dropping/recreating triggers, there seem to be two strategies:
1. Perform the drop-import-create operation in a transaction, thereby
guaranteeing the accuracy of the counts but presumably locking the
table during the operation, which could take many minutes (up to an
hour or two) in ex
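Strategy 1 can be sketched as follows; the trigger, table, and function names are hypothetical. Because DDL is transactional in PostgreSQL, a failure anywhere rolls back both the drop and the load:

```sql
BEGIN;
DROP TRIGGER maintain_counts ON items;   -- stop per-row count maintenance
COPY items FROM '/tmp/items.dat';        -- bulk load without trigger overhead
CREATE TRIGGER maintain_counts
    AFTER INSERT OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE update_counts();
UPDATE item_counts SET n = (SELECT count(*) FROM items);  -- fix up totals
COMMIT;
```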
Thomas,
> I forgot to mention that I'm running 7.4.6. The README includes the
> caveat that pgmemcache is designed for use with 8.0.
Well, you could always hire Sean to backport it.
> 1. Perform the drop-import-create operation in a transaction, thereby
> guaranteeing the accuracy of the counts
On Fri, 28 Jan 2005 08:34:27 -0800, Joe Conway <[EMAIL PROTECTED]> wrote:
> Alexandre Leclerc wrote:
> >Here a normal listing of design.product_department_time:
> > product_id | department_id | req_time
> > -----------+---------------+----------
> >        906 | A             |
Alexandre Leclerc wrote:
I'm a little bit confused on how to install this contrib. I know my
contrib package is installed, but I don't know how to make it work in
postgresql. (Using 7.4.5-1mdk on Mandrake Linux.)
Find the file tablefunc.sql and redirect it into your database, e.g.
psql mydatabase <
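From inside psql the same load can be done with \i; the path below is an assumption and varies by distribution (on Mandrake-style packaging it may live under /usr/share/postgresql/contrib/):

```sql
\i /usr/share/postgresql/contrib/tablefunc.sql
```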
Hi Tom,
I dropped the primary key constraint and ran the explain analyze on
the same query, and here is what I get: seq scans on both the tables; it
still doesn't make use of the index on the common.attribute table.
QUERY PLAN
-
On Fri, 28 Jan 2005 10:24:37 -0800, Joe Conway <[EMAIL PROTECTED]> wrote:
> Alexandre Leclerc wrote:
> > I'm a little bit confused on how to install this contrib. I know my
> > contrib package is installed, but I don't know how to make it work in
> > postgresql. (Using 7.4.5-1mdk on Mandrake Linux.
Pallav Kalva <[EMAIL PROTECTED]> writes:
> still doesn't make use of the index on the common.attribute table.
What do you get from just plain
explain analyze select * from common.string text1_
where text1_.value='squareFeet';
I get the impression that it must think this will yield a lot of rows.
explain analyze select * from common.string text1_
where text1_.value='squareFeet';
QUERY PLAN
Seq Scan on string text1_ (cost=0.00..4.41 rows=1 wi
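If the row estimate there turns out to be badly off, the usual fix is to raise the column's statistics target and re-analyze so the planner sees a fuller value distribution:

```sql
ALTER TABLE common.string ALTER COLUMN value SET STATISTICS 100;
ANALYZE common.string;
```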
[EMAIL PROTECTED] (Andrew Sullivan) writes:
> On Mon, Jan 24, 2005 at 01:28:29AM +0200, Hannu Krosing wrote:
>>
>> IIRC it hates pg_dump mainly on master. If you are able to run pg_dump
>> from slave, it should be ok.
>
> For the sake of the archives, that's not really a good idea. There
> is som
[EMAIL PROTECTED] (Alex Turner) writes:
> I was wondering about index types. Oracle has an index type called a
> 'bitmap' index. They describe this as an index for low cardinality
> fields, where only the cardinal values are indexed in a b-tree, and
> then it uses a bitmap below that to describe
PFC wrote:
> So, here is something annoying with the current approach: updating rows
> in a table bloats ALL indices, not just those whose indexed values have
> actually been updated. So if you have a table with many indexed fields and
> you often update some obscure timestamp field, all the
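Until that changes, the practical remedy for this kind of bloat is periodic maintenance; the table and index names below are hypothetical:

```sql
VACUUM ANALYZE orders;        -- reclaim dead tuples for reuse, refresh stats
REINDEX INDEX orders_ts_idx;  -- rebuild a bloated index from scratch
```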
I was able to duplicate this behavior with dummy data that had only a
few distinct values for fknamestringid --- the planner then thinks that
the index probe into attribute will match a lot of rows and hence take a
long time. Could we see your pg_stats row for fknamestringid, ie
select * from pg_
On 7.4 I get
select * from pg_stats
where tablename = 'attribute' and attname = 'fknamestringid';
 schemaname | tablename | attname | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs
Pallav Kalva <[EMAIL PROTECTED]> writes:
> On 8
> common | attribute | fknamestringid | 0 | 4 | 80 | {2524,2434,2530,2522,2525,2523,2527,2526,2574,2531} | {0.219333,0.199333,0.076,0.064,0.0616667,0.05,0.045,0.042,0.04,0.0286667} | {2437,2528,2529,253
The random_page_cost value is the same on both versions; the only
difference between the 7.4 version and the 8 version is that 7.4 has 100k
fewer records. For now I created an index on the numericvalue column of the
attribute table, and it used that index and is much faster that way.
It came down to 2
Hervé Piedvache wrote:
My point is that there is no free solution. There simply isn't.
I don't know why you insist on keeping all your data in RAM, but the
MySQL cluster requires that ALL data MUST fit in RAM all the time.
I don't insist on having data in RAM, but when you use PostgreS
On Fri, 28 Jan 2005 11:54:57 -0500, Christopher Weimann
<[EMAIL PROTECTED]> wrote:
> On 01/28/2005-10:59AM, Alex Turner wrote:
> > At this point I will interject a couple of benchmark numbers based on
> > a new system we just configured as food for thought.
> >
> > System A (old system):
> > Compaq
William Yu <[EMAIL PROTECTED]> writes:
> 1 beefy server w/ 32GB RAM = $16K
>
> I know what I would choose. I'd get the mega server w/ a ton of RAM and skip
> all the trickiness of partitioning a DB over multiple servers. Yes your data
> will grow to a point where even the XXGB can't cache everyt