Stephan Szabo <[EMAIL PROTECTED]> writes:
> Actually, just thought of something else. If you remove
> the probably redundant p.song_id=s.song_id from the second
> query (since the join ... using should do that) does it
> change the explain output?
I was just about to point that out. The WHERE
Actually, just thought of something else. If you remove
the probably redundant p.song_id=s.song_id from the second
query (since the join ... using should do that) does it
change the explain output?
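Stephan's point is that `JOIN ... USING (song_id)` already equates the two columns, so the extra `WHERE p.song_id = s.song_id` adds nothing to the result. A minimal sketch of that equivalence, using SQLite as a stand-in for Postgres (table contents are invented; Postgres treats USING the same way for this purpose):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE songs (song_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE playlist (song_id INTEGER, waiting INTEGER)")
cur.executemany("INSERT INTO songs VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])
cur.executemany("INSERT INTO playlist VALUES (?, ?)", [(1, 1), (2, 0), (3, 1)])

# With the (probably redundant) explicit equality predicate:
with_pred = cur.execute(
    "SELECT s.song_id FROM playlist p JOIN songs s USING (song_id) "
    "WHERE p.song_id = s.song_id AND p.waiting = 1 ORDER BY 1"
).fetchall()

# Without it -- USING (song_id) already enforces the join equality:
without_pred = cur.execute(
    "SELECT s.song_id FROM playlist p JOIN songs s USING (song_id) "
    "WHERE p.waiting = 1 ORDER BY 1"
).fetchall()

# Both queries return the same rows; only the planner's estimates differ.
```

Whether dropping the redundant predicate changes the EXPLAIN output is exactly what Stephan is asking, since the planner may not recognize the two conditions as the same.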
On Fri, 9 Mar 2001, David Olbersen wrote:
> On Fri, 9 Mar 2001, Stephan Szabo wrote:
>
> ->As
Darn. Well, one of the queries picked that 1 row was going to survive
the nested loop step and the other said 14. I was wondering which one
was closer to being correct at that time.
On Fri, 9 Mar 2001, David Olbersen wrote:
> On Fri, 9 Mar 2001, Stephan Szabo wrote:
>
> ->As a question, how
On Fri, 9 Mar 2001, Stephan Szabo wrote:
->As a question, how many rows does
->select * from playlist p join songs s using (song_id) where
->p.waiting=TRUE;
->actually result in?
Well it depends. Most of the time that playlist table is "empty" (no rows where
waiting = TRUE), however users can (i
On Fri, 9 Mar 2001, David Olbersen wrote:
> On Fri, 9 Mar 2001, Stephan Szabo wrote:
>
> -> Hmm, what were the two queries anyway?
>
> The "slower" query
>
> SELECT
> to_char( p.insertion_time, 'HH:MI AM MM/DD' ) as time_in,
> s.name as title,
>
On Sun, 11 Mar 2001 05:27, Najm Hashmi wrote:
> I have PostgreSQL 7.1b3 running on one of our test servers. It seems
> like PostgreSQL 7.1b3 is not very stable. I want to go back to 7.0.3
> since it is the most stable version available.
Another, imho better, alternative is to move forward to e
On Fri, 9 Mar 2001, Stephan Szabo wrote:
-> Hmm, what were the two queries anyway?
The "slower" query
SELECT
to_char( p.insertion_time, 'HH:MI AM MM/DD' ) as time_in,
s.name as title,
a.name as artist,
s.length as le
> On Fri, 9 Mar 2001, Stephan Szabo wrote:
>
> ->Not entirely. Those are only estimates, so they don't entirely line up
> ->with reality. Also, I notice the first estimates 14 rows and the second
> ->1, which is probably why the estimate is higher. In practice it probably
> ->won't be signif
On Fri, 9 Mar 2001, Stephan Szabo wrote:
->Not entirely. Those are only estimates, so they don't entirely line up
->with reality. Also, I notice the first estimates 14 rows and the second
->1, which is probably why the estimate is higher. In practice it probably
->won't be significantly diffe
On Fri, 9 Mar 2001, David Olbersen wrote:
> Greetings,
> I've been toying around with postgres 7.1beta5's ability to control the
> planner via explicitly JOINing tables. I then (just for giggles) compared the
> difference in the EXPLAIN results.
>
> I'm no super-mondo-DBA or anything, b
Greetings,
I've been toying around with postgres 7.1beta5's ability to control the
planner via explicitly JOINing tables. I then (just for giggles) compared the
difference in the EXPLAIN results.
I'm no super-mondo-DBA or anything, but in my two attempts so far, the numbers
I get out of
On Fri, 9 Mar 2001, Creager, Robert S wrote:
>
> Well, that explains why I wasn't seeing any appreciable speed increase with
> the INITIALLY DEFERRED. I tried mucking in pg_class, and saw a 3 fold
> increase in insert speed on inserts into my table with 2 relational
> triggers. SET CONSTRAINTS
Well, that explains why I wasn't seeing any appreciable speed increase with
the INITIALLY DEFERRED. I tried mucking in pg_class, and saw a 3 fold
increase in insert speed on inserts into my table with 2 relational
triggers. SET CONSTRAINTS ALL DEFERRED does little to nothing to
increase th
On Fri, 9 Mar 2001, Josh Berkus wrote:
> Robert,
>
> > I suspect that the INSERT INTO SELECT in this case will take longer than a
> > CREATE TABLE AS because of the referential integrity check needed on every
> > INSERT (per Tom Lane).
>
> In that case, what about:
>
> a) dropping the referent
Robert,
> I suspect that the INSERT INTO SELECT in this case will take longer than a
> CREATE TABLE AS because of the referential integrity check needed on every
> INSERT (per Tom Lane).
In that case, what about:
a) dropping the referential integrity check;
b) making the referential integrity c
Creager, Robert S writes:
> psql -d tassiv -c "\
> create table observationsII ( \
> ra float8 not null, \
> decl float8 not null, \
> mag float8 not null, \
> smag float8 not null, \
> obs_id serial, \
> file_id int4 references files on delete cascade, \
> star_id int4 references comp_loc on del
Robert,
> How then can I add in a DEFAULT nextval in place of SERIAL and get
> the
> REFERENCES in there? Or can I?
You can't (as far as I know). If that's important to you, you need to
create the table first with a regular CREATE TABLE statement, then do
INSERT INTO. CREATE TABLE AS is, I b
"Creager, Robert S" <[EMAIL PROTECTED]> writes:
> And the next question, should this really be taking 3 hours to insert 315446
> records? I noticed the disk is basically idle during the few times when I
> watched. Would this be because of the index created on obs_id?
Not for a single index. I
Robert,
> Thanks for the pointers. I'm actually working on modifying the structure of
> an existing db, so this is all within Pg. Those INSERT INTOs with SELECTs
> are painfully slow, and I have an larger table to do this to... I guess
> Perl will have to rescue me...
Why don't you post your
Robert,
I can't help you with your performance problem, but I can help you with
CREATE TABLE AS. You've mistaken the syntax; CREATE TABLE AS does not
use column definitions other than the query. Thus, the correct syntax
should be:
> create table observationsII
> AS select o.ra, o.decl
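To see why the column list was rejected: CREATE TABLE AS derives the new table's columns entirely from the SELECT list. A tiny demonstration with SQLite standing in for Postgres (table and column names borrowed from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE observations (ra REAL, decl REAL)")
cur.executemany("INSERT INTO observations VALUES (?, ?)",
                [(1.0, 2.0), (3.0, 4.0)])

# CREATE TABLE AS takes only the query -- no explicit column
# definitions; the new table's columns come from the SELECT list.
cur.execute("CREATE TABLE observationsII AS "
            "SELECT ra, decl FROM observations")

cols = [d[1] for d in cur.execute("PRAGMA table_info(observationsII)")]
# cols now holds the column names inherited from the SELECT list.
```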
Tom, Richard,
Thanks for the advice, guys! This being Postgres, I *knew* there would
be other options.
> > create aggregate catenate(sfunc1=textcat, basetype=text,
> stype1=text, initcond1='');
>
> > Then group by client and catenate(firstname || ' ' || lastname)
>
> With a custom agg
Najm Hashmi <[EMAIL PROTECTED]> writes:
> By the way, 7.1b3 is crashing 3 to 4 times a week.
It would be nice to have some bug reports that might allow us to fix
those crashes.
And no, you can't go back to 7.0 without dump/initdb/reload.
regards, tom lane
Richard Huxton <[EMAIL PROTECTED]> writes:
> But - if you don't care about the order of contacts you can define an
> aggregate function:
> create aggregate catenate(sfunc1=textcat, basetype=text, stype1=text, initcond1='');
> Then group by client and catenate(firstname || ' ' || lastname)
With
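On the Postgres side this relies on the custom `catenate` aggregate defined above. Purely to illustrate the grouping shape it produces, SQLite's built-in `group_concat` does the equivalent (table and column names follow the thread's example; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE contacts (client TEXT, firstname TEXT, lastname TEXT)")
cur.executemany("INSERT INTO contacts VALUES (?, ?, ?)",
                [("acme", "Ann", "Lee"),
                 ("acme", "Bob", "Ray"),
                 ("zed",  "Cal", "Poe")])

# One row per client, with contact names concatenated -- the same
# shape that GROUP BY client + catenate(...) yields in Postgres.
rows = cur.execute(
    "SELECT client, group_concat(firstname || ' ' || lastname, ', ') "
    "FROM contacts GROUP BY client ORDER BY client"
).fetchall()
```

As Richard notes, the order of names within each group is not guaranteed unless you arrange for it.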
I'm sure I'm doing something wrong, and I'm hoping someone can show me the
way of things. Running 7.1beta5 on an Ultra 5, Solaris 2.6 w/256Mb mem. If
I remove the AS, the table creates correctly and I can do the INSERT INTO
with the SELECT clause
psql -d tassiv -c "\
create table observationsI
I have PostgreSQL 7.1b3 running on one of our test servers. It seems like
PostgreSQL 7.1b3 is not very stable. I want to go back to 7.0.3 since it is
the most stable version available. I am just wondering what should I do. Can
I reinstall 7.0.3 on 7.1b3 directly? If not then what steps should
Hi,
probably some environment variables are not set... these are used by
the DBD::Pg install to determine include and lib directories:
POSTGRES_INCLUDE
POSTGRES_LIB
if you installed postgres into /opt/postgres do
export POSTGRES_INCLUDE=/opt/postgres/include
export POSTGRES_LIB=/opt/postgres/lib
Josh Berkus wrote:
> I have an interesting problem. For purpose of presentation to users,
> I'd like to concatenate a list of VARCHAR values from a subtable. To
> simplify my actual situation:
>
> What I'd like to be able to do is present a list of clients and their
> comma-separated co