Anton <[EMAIL PROTECTED]> writes:
> I want to ask about a problem with partitioned tables (it was discussed some
> time ago, see below). Is it fixed somehow in 8.2.5?
No. The patch you mention was never considered at all, since it
consisted of a selective quote from Greenplum source code. It would
not
These are the EXPLAIN ANALYZE results:
I ran both queries on CLUSTERed and ANALYZEd tables:
UNION QUERY
explain analyze
select e, p, sum(c) as c
from (
    select e, p, count(*) as c
    from tt_3
    group by e, p
    union
    select e, p, count(*) as c
    from tt_6
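For reference, a minimal completed form of the same pattern, assuming only the
two partitions shown (tt_3 and tt_6) and a hypothetical subquery alias t, would
be:

explain analyze
select e, p, sum(c) as c
from (
    select e, p, count(*) as c from tt_3 group by e, p
    union        -- UNION ALL would avoid the duplicate-elimination pass
    select e, p, count(*) as c from tt_6 group by e, p
) as t
group by e, p;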
I want to ask about a problem with partitioned tables (it was discussed some
time ago, see below). Is it fixed somehow in 8.2.5?
2007/8/24, Luke Lonergan <[EMAIL PROTECTED]>:
> Below is a patch against 8.2.4 (more or less), Heikki can you take a look at
> it?
>
> This enables the use of index scan of a
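For background, on a stock 8.2 installation the usual way to check whether
child partitions are being index-scanned is to look at the plan with
constraint_exclusion enabled; the parent table tt and the predicate below are
illustrative assumptions only, not taken from the patch:

set constraint_exclusion = on;

explain
select e, p
from tt                                 -- hypothetical parent of tt_3, tt_6, ...
where e between 3000000 and 3999999;    -- a predicate matching one child's CHECK constraint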
On Sat, 27 Oct 2007 03:04:47 +0100
Gregory Stark <[EMAIL PROTECTED]> wrote:
> > O.k. first you might be grinding through your 20 checkpoint segments
> > but in reality what I think is happening is you are doing foreign
> > key checks against all of it and slowing things down.
>
> If you're going
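One commonly suggested way to take per-row foreign-key checks out of a large
load or update is to drop the constraint for the duration and re-add it
afterwards; this is a general technique, not necessarily what Greg goes on to
suggest, and the table and constraint names below are purely illustrative:

-- hypothetical schema: child.parent_id references parent(id)
alter table child drop constraint child_parent_fk;

-- ... run the bulk update / load here ...

alter table child
    add constraint child_parent_fk
    foreign key (parent_id) references parent (id);
-- note: re-adding the constraint revalidates every existing row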
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> On Fri, 26 Oct 2007 15:31:44 -0500
> "Campbell, Lance" <[EMAIL PROTECTED]> wrote:
>
>> I forgot to include an additional parameter I am using in
>> postgresql.conf:
>>
>
> O.k. first, just to get it out of the way (and then I will try and
> help).
I changed CLOG Buffers to 16
Running the test again:
# ./read.d
dtrace: script './read.d' matched 2 probes
CPU     ID                    FUNCTION:NAME
  0   1027                       :tick-5sec
/export/home0/igen/pgdata/pg_clog/0024
-27530282192961
/export/hom
On Fri, 26 Oct 2007 15:31:44 -0500
"Campbell, Lance" <[EMAIL PROTECTED]> wrote:
> I forgot to include an additional parameter I am using in
> Postgresql.conf:
>
O.k. first, just to get it out of the way (and then I will try and
help). Please do not top post, it makes replying contextually very
"Pablo Alcaraz" <[EMAIL PROTECTED]> writes:
> Hi List!
>
> I executed 2 equivalents queries. The first one uses a union structure. The
> second uses a partitioned table. The tables are the same with 30 millions of
> rows each one and the returned rows are the same.
>
> But the union query perform
I forgot to post the times:
query-union: 21:59
query-heritage: 1:31:24
Regards
Pablo
Pablo Alcaraz wrote:
Hi List!
I executed two equivalent queries. The first one uses a union
structure. The second uses a partitioned table. The tables are the
same with 30 million rows each and the
"Campbell, Lance" <[EMAIL PROTECTED]> writes:
> QUERY PLAN
> -----------------------------------------------------------------------
>  Hash Join  (cost=1437.71..1046983.94 rows=17333178 width=32)
>    Hash Cond: (result_entry.fk_question_id = question_num
On Fri, 2007-10-26 at 16:37 -0400, Pablo Alcaraz wrote:
> Hi List!
>
> I executed 2 equivalents queries. The first one uses a union structure.
> The second uses a partitioned table. The tables are the same with 30
> millions of rows each one and the returned rows are the same.
>
> But the union
Hi List!
I executed two equivalent queries. The first one uses a union structure.
The second uses a partitioned table. The tables are the same with 30
million rows each and the returned rows are the same.
But the union query performs faster than the partitioned query.
My question is: w
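For readers who have not set up 8.2-style partitioning, a minimal sketch of the
inheritance ("heritage") layout being compared with the union is shown below;
the parent table tt, the choice of e as partition key, and the CHECK ranges are
illustrative assumptions, not details from Pablo's schema:

-- hypothetical parent plus two of the child tables named in the thread
create table tt (e integer, p integer);
create table tt_3 (check (e >= 3000000 and e < 4000000)) inherits (tt);
create table tt_6 (check (e >= 6000000 and e < 7000000)) inherits (tt);

-- constraint_exclusion lets the planner skip children whose CHECK
-- constraint contradicts a query's WHERE clause
set constraint_exclusion = on;

-- the partitioned counterpart of the union query
explain analyze
select e, p, count(*) as c
from tt
group by e, p;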
I forgot to include an additional parameter I am using in
postgresql.conf:
checkpoint_segments = 30
Thanks,
Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
http://webservices.uiuc.edu
PostgreSQL version 8.2.4
Memory = 8 Gig
CPUs: 1 dual-core Xeon running at 3.0
I have a problem with an update query taking over 10 hours to
run. I rebooted my server. I ran the SQL command "analyze". Could
you please help me with any suggestions? I have included the two tables
in
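Since the statement itself is not in this excerpt, the sketch below only
illustrates the usual first step for a multi-hour UPDATE: plain EXPLAIN shows
the chosen plan without executing anything, so it is safe to run even when
EXPLAIN ANALYZE is not. The table name result_entry comes from the plan later
in the thread; the SET list and predicate are placeholders:

-- plain EXPLAIN: plan only, nothing is executed
explain
update result_entry
set fk_question_id = fk_question_id      -- placeholder SET list; the real one was not shown
where fk_question_id is not null;        -- placeholder predicate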
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
> So the ratio of reads vs writes to clog files is pretty huge..
It looks to me that the issue is simply one of not having quite enough
CLOG buffers. Your first run shows 8 different pages being fetched and
the second shows 10. Bearing in mind that w
Also, to give some perspective on the equivalent writes to CLOG,
I used the following script, which runs for 10 seconds, to track all writes to
the clog directory; here is what it came up with... (this is with 500
users running)
# cat write.d
#!/usr/sbin/dtrace -s
syscall::write:entry
/execname=="postg
Hi George,
I first saw the 4M/sec problem during an EAStress-type
run with only 150 connections.
I will try to do more of the testing that Tom has requested today.
Regards,
Jignesh
Gregory Stark wrote:
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
CLOG data is not cached in a
Tom,
Here is what I did:
I started aggregating all the read information:
First I had also added a group by on pid (arg0, arg1, pid), and the counts
were all coming back as 1.
Then I grouped just by filename and location (arg0, arg1 of the reads), and
the counts came back as
# cat read.d
#!/usr/sbin/dtrace -s
On Fri, 26 Oct 2007, Jean-David Beyer wrote:
I think it was Jon Louis Bentley who wrote (in his book, "Writing
Efficient Programs") something to the effect of, "Premature optimization is
the root of all evil."
That quote originally comes from Tony Hoare, popularized by a paper
written by Donald
Chris Browne wrote:
> Further, the Right Thing is to group related data together, and come
> up with a policy that is driven primarily by the need for data
> consistency. If things work well enough, then don't go off trying to
> optimize something that doesn't really need optimization, and perhap
I agree with Tom. Somehow I think increasing NUM_CLOG_BUFFERS just
pushes the symptom out to a higher value. I promise to look into it more
before making any recommendations to increase NUM_CLOG_BUFFERS,
because even though "iGen" showed improvements in that area by increasing
num_clog_buffer
The problem I saw was first highlighted by EAStress runs with PostgreSQL
on Solaris with 120-150 users. I just replicated that via my smaller
internal benchmark that we use here to recreate that problem.
EAStress should be just fine to highlight it.. Just put pg_clog on
O_DIRECT or something s
Heikki Linnakangas wrote:
> Jean-David Beyer wrote:
>
>> My IO system has two Ultra/320 LVD SCSI controllers and 6 10,000rpm SCSI
>> hard drives. The dual SCSI controller is on its own PCI-X bus (the machine
>> has 5 independent PCI-X busses). Two hard drives are on one SCSI controller
>> and the
Hello Tom,
I can confirm that adding the indexes used by the deferred constraint
triggers solved the issue.
Thank you very much for your suggestions.
Best regards,
Giulio Cesare
On 10/24/07, Giulio Cesare Solaroli <[EMAIL PROTECTED]> wrote:
> On 10/24/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>
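For anyone hitting the same symptom, the fix confirmed here is the standard one
of indexing the referencing column that the foreign-key (deferred constraint)
triggers have to look up; the schema below is illustrative, not Giulio's actual
tables:

-- hypothetical schema: child.parent_id references parent(id) via a deferred FK
-- without an index on child.parent_id, each delete/update of a parent row
-- has to scan child sequentially to verify the constraint
create index child_parent_id_idx on child (parent_id);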
"Josh Berkus" <[EMAIL PROTECTED]> writes:
> Actually, 32 made a significant difference as I recall ... do you still have
> the figures for that, Jignesh?
Well it made a difference but it didn't remove the bottleneck, it just moved
it. IIRC under that benchmark Jignesh was able to run with x sess