I used to use the connect-by patch, but have since rewritten everything
to use a nested set model. I was
having problems that, while not immediately traceable back to the
patch, showed up when I started
using it and went away when I stopped (strange locking behavior,
crashing with vacuum full, p
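For anyone unfamiliar with the nested set approach: each row stores left/right interval bounds, and a subtree is fetched with a plain range predicate instead of a recursive connect-by query. A minimal sketch, with invented table and column names (category, lft, rgt):

    CREATE TABLE category (
        id   integer PRIMARY KEY,
        name text NOT NULL,
        lft  integer NOT NULL,   -- left bound of the node's interval
        rgt  integer NOT NULL    -- right bound; descendants fall inside it
    );

    -- All descendants of the node named 'widgets', no recursion needed:
    SELECT c.*
      FROM category c, category p
     WHERE p.name = 'widgets'
       AND c.lft BETWEEN p.lft AND p.rgt
       AND c.id <> p.id
     ORDER BY c.lft;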
Fabio Esposito <[EMAIL PROTECTED]> writes:
> We've recently integrated postgres into an existing mature app. It's a
> time-sensitive 24x7 system. It runs on HP9000, a K370 Dual Processor
> system. Postgres is version 7.3.2. It's spawned as a child from a parent
> supervisory process, and they com
Fabio Esposito <[EMAIL PROTECTED]> writes:
>> Did you start from a fresh initdb, or just drop and recreate user
>> tables? I'm wondering about index bloat on the system tables ...
> I don't think I re-initdb'd it, just dropped. We did try a reindex command
> in the interactive editor, with no succ
On Tue, 23 Mar 2004, Fabio Esposito wrote:
>
> Hello fellow PostgreSQL users.
>
> We've been working on this interesting issue for some time now, and we're
> hoping that someone can help.
>
> We've recently integrated postgres into an existing mature app. It's a
> time-sensitive 24x7 system. I
On Thu, 25 Mar 2004, Ara Anjargolian wrote:
> I've run into this odd planner choice which I don't quite understand.
>
> I have two tables, articles and users, and
> articles.article_id and users.user_id are their primary keys.
>
> Inside articles there are two optional fields author_id1, author_id2
> whic
Fabio,
> I'll have to get back to you on that, but I'm 90% sure it's the default out
> of the box.
Raise it, a lot. Perhaps to 30,000 or 50,000. VACUUM VERBOSE ANALYZE
should show you how many data pages are being reclaimed between vacuums.
Because of your very high rate of updates and delet
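The parameter name is cut off in this excerpt; assuming the setting in question is the free space map size, the change and the follow-up check would look roughly like this (on 7.3 the value only takes effect after a postmaster restart):

    -- Assumed postgresql.conf edit, then restart:
    --   max_fsm_pages = 50000
    -- Between housekeeping runs, see how many pages vacuum finds reclaimable:
    VACUUM VERBOSE ANALYZE;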
On Fri, 26 Mar 2004, Fabio Esposito wrote:
>
> On Fri, 26 Mar 2004, scott.marlowe wrote:
>
> > > It maintains 48 hours of data, so it's not a large database; roughly
> > > <600 MB. We do this by running a housekeeping program in a cron job.
> > > It deletes all data older than 48 hours, then vaccu
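For illustration only (the real table and column names are not in the excerpt), such a housekeeping pass boils down to deleting everything outside the 48-hour window and then vacuuming the table:

    -- Hypothetical purge run from cron:
    DELETE FROM message_log WHERE created_at < now() - interval '48 hours';
    VACUUM ANALYZE message_log;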
"Ara Anjargolian" <[EMAIL PROTECTED]> writes:
> jargol=# explain select user_id, first_names, last_name from articles, users
> where article_id = 5027 and (articles.author_id1 = users.user_id or
> articles.author_id2 = users.user_id);
> Why does it think it MUST do a seq-scan in the second case?
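One common workaround for an OR across two join columns, sketched here against the column names from the question (not a quote from the original reply), is to split the query into a UNION so each arm can use the users primary key index:

    SELECT u.user_id, u.first_names, u.last_name
      FROM articles a JOIN users u ON a.author_id1 = u.user_id
     WHERE a.article_id = 5027
    UNION
    SELECT u.user_id, u.first_names, u.last_name
      FROM articles a JOIN users u ON a.author_id2 = u.user_id
     WHERE a.article_id = 5027;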
Fabio,
> Postgres initially worked wonderfully, fast and solid. It
> performed complex joins in 0.01 seconds, and was able to keep up with our
> message queue. It stayed this way for almost a year during our
> development.
>
> Recently it started eating up the CPU, and cannot keep up with the system
On Fri, Mar 26, 2004 at 07:25:53AM +0100, Manfred Spraul wrote:
> >Compare file sync methods with one 8k write:
> > (o_dsync unavailable)
> > open o_sync, write    6.270724
> > write, fdatasync      13.275225
> > write, fsync,         13.359847
> >
> >
> Odd. Whic
Shiar <[EMAIL PROTECTED]> writes:
>> EXPLAIN ANALYZE SELECT id FROM userinfo WHERE icq <> '0';
> Seq Scan on userinfo (cost=0.00..47355.90 rows=849244 width=4) (actual
> time=0.563..1222.963 rows=48797 loops=1)
>Filter: (icq <> 0::bigint)
> Total runtime: 1258.703 ms
The rows estimate is w
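The estimate here (849244 rows expected vs. 48797 actual) is far off; a hedged first step, assuming stale or too-coarse statistics, is to raise the statistics target for the column, re-analyze, and re-run the EXPLAIN:

    ALTER TABLE userinfo ALTER COLUMN icq SET STATISTICS 100;
    ANALYZE userinfo;
    EXPLAIN ANALYZE SELECT id FROM userinfo WHERE icq <> '0';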
hi
Josh Berkus wrote:
Stalin,
We are evaluating the options for having multiple databases vs. schemas on a
single database cluster for a custom-grown app that we developed. Each app
installs the same set of tables for each service. And the services could easily
be in the thousands. So is it better to ha
Hi All,
We are evaluating the options for having multiple databases vs. schemas on a
single database cluster for a custom-grown app that we developed. Each app
installs the same set of tables for each service. And the services could easily
be in the thousands. So is it better to have 1000 databases vs. 1000
We've got a table containing user data, such as a bigint column 'icq'. To
easily check whether a user has an icq number entered, we made the following
index:
userinfo_icq_ne0_id_key btree (id) WHERE (icq <> 0::bigint),
However, it doesn't seem to be used:
> EXPLAIN ANALYZE SELECT id FROM
I've run into this odd planner choice which I don't quite understand.
I have two tables, articles and users, and
articles.article_id and users.user_id are their primary keys.
Inside articles there are two optional fields, author_id1 and author_id2,
which both reference users.user_id.
And now the plans:
(by the
We have a similarly sized database and we went with schemas. We did
something different, though: we created one schema that contained all
of the tables (we used the public schema) and then created the hundreds
of schemas with views that access only the related rows for a
particular schema. S
Hello fellow PostgreSQL users.
We've been working on this interesting issue for some time now, and we're
hoping that someone can help.
We've recently integrated postgres into an existing mature app. It's a
time-sensitive 24x7 system. It runs on HP9000, a K370 Dual Processor
system. Postgres is
[EMAIL PROTECTED] wrote:
Compare file sync methods with one 8k write:
(o_dsync unavailable)
open o_sync, write    6.270724
write, fdatasync      13.275225
write, fsync,         13.359847
Odd. Which filesystem, which kernel? It seems fdatasync is broken and
On Mar 23, 2004, at 11:16 AM, Subbiah, Stalin wrote:
And we also created rules to allow update, delete, and insert on those
views so that they looked like tables. The reason we did this is
that we ran into issues with too many open files during pg_dump
when
we had thousands of tables instead
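A minimal sketch of that layout, with invented names (public.orders, svc_id, schema s0001), assuming one shared table plus a filtered view and rewrite rules per service schema:

    -- Shared table in public, tagged by service:
    CREATE TABLE public.orders (
        svc_id   integer NOT NULL,
        order_no integer NOT NULL,
        amount   numeric
    );

    CREATE SCHEMA s0001;

    -- Per-service view exposing only that service's rows:
    CREATE VIEW s0001.orders AS
        SELECT order_no, amount FROM public.orders WHERE svc_id = 1;

    -- Rules so the view behaves like a table for inserts and deletes:
    CREATE RULE orders_ins AS ON INSERT TO s0001.orders DO INSTEAD
        INSERT INTO public.orders (svc_id, order_no, amount)
        VALUES (1, NEW.order_no, NEW.amount);

    CREATE RULE orders_del AS ON DELETE TO s0001.orders DO INSTEAD
        DELETE FROM public.orders WHERE svc_id = 1 AND order_no = OLD.order_no;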
Tom Lane wrote:
[EMAIL PROTECTED] writes:
I could certainly do some testing if you want to see how DBT-2 does.
Just tell me what to do. ;)
Just do some runs that are identical except for the wal_sync_method
setting. Note that this should not have any impact on SELECT
performance, only ins
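A hedged outline of such a comparison (pgbench would do as well as DBT-2): keep the runs identical and change only wal_sync_method between them, verifying the active value before each run:

    -- Assumed postgresql.conf edits, one per run, with a restart in between:
    --   wal_sync_method = fsync        -- run 1
    --   wal_sync_method = fdatasync    -- run 2
    --   wal_sync_method = open_sync    -- run 3
    -- Confirm what is actually in effect before starting the load:
    SHOW wal_sync_method;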
On 26 Mar, Manfred Spraul wrote:
> [EMAIL PROTECTED] wrote:
>
>>Compare file sync methods with one 8k write:
>>(o_dsync unavailable)
>>open o_sync, write    6.270724
>>write, fdatasync      13.275225
>>write, fsync,         13.359847
>>
>>
> Odd. Which fi
Quig,
> Your responses have made it much clearer for us. BTW, do you
> think that it's better for us just to rewrite everything so we don't
> need to use the patch at all? Why do others still use it?
Others use it for the same reason you do.
If you want to use the patch for seamless port
[EMAIL PROTECTED] wrote:
> On 26 Mar, Manfred Spraul wrote:
> > [EMAIL PROTECTED] wrote:
> >
> >>Compare file sync methods with one 8k write:
> >>(o_dsync unavailable)
> >>open o_sync, write    6.270724
> >>write, fdatasync      13.275225
> >>write, fsync,
On 26 Mar, Bruce Momjian wrote:
> [EMAIL PROTECTED] wrote:
>> On 26 Mar, Manfred Spraul wrote:
>> > [EMAIL PROTECTED] wrote:
>> >
>> >>Compare file sync methods with one 8k write:
>> >>(o_dsync unavailable)
>> >>open o_sync, write 6.270724
>> >>write, fdatasync
Thanks a lot! We were migrating to Postgres from Oracle, and
every now and then we ran into something that we did not
understand completely; it has been a learning process for us.
Your responses have made it much clearer for us. BTW, do you
think that it's better for us just to rewrite everything so we