WHEN lft BETWEEN cleft AND cright THEN lft + treeshift
ELSE lft END,
rht = CASE
WHEN rht BETWEEN leftbound AND rightbound THEN rht + cwidth
WHEN rht BETWEEN cleft AND cright THEN rht + treeshift
ELSE rht END;
RETURN ''Tree has been moved'';
CREATE TABLE tree (
id INTEGER NOT NULL,
parent_id INTEGER,
"name" TEXT NOT NULL,
PRIMARY KEY (id)
);
INSERT INTO tree VALUES (1,NULL,'');
INSERT INTO tree VALUES (2,1,'usr');
INSERT INTO tree VALUES (3,1,'tmp');
INSERT INTO tree VALUES (4,1,'home');
WHEN rgt BETWEEN cleft AND cright THEN rgt + treeshift
ELSE rgt END
WHERE lft < leftrange OR rgt > rightrange;
RETURN ''Tree has been moved'';
END;
'
LANGUAGE 'plpgsql';
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200211251526
--
If they are truly identical, then you must use the hidden 'oid' column to
differentiate the two. No need to peek at the oid, just do this:
DELETE FROM test WHERE oid = (SELECT oid FROM test WHERE column_id=5 LIMIT 1);
--
Greg Sabino Mullane [EMAIL PROTECTED]
(SELECT DISTINCT date_opened
FROM mydates
WHERE date_opened BETWEEN '23-Jan-2003' AND '26-Jan-2003') AS T2
USING (date_opened)
ORDER BY "day" ASC;
The DISTINCT is not strictly needed, but is a safeguard in case the
mydates table has more than one entry with the same date.
T JOIN
(SELECT oid::integer-15 AS mym
FROM pg_type ORDER BY oid ASC LIMIT 12) AS dos
USING (mym);
--
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200306072131
How are each of these tables being populated? Why the distinct? Is it because
there may be duplicate rows in the table? The reason I ask is that it might be
better to use triggers to compute some of the information as it comes in,
depending on which tables are changed and how often.
--
Greg Sabino Mullane [EMAIL PROTECTED]
ped in with
B simply because B has already been lumped in with A?
Perhaps showing us the sample output of an ideal query would be best,
along with some sample rows (e.g. use real insert statements)
--
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200307151045
, you are best off using a function.
--
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200307151137
e tablename. You
also probably want to use EXISTS, not "count..::bool".
--
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200307171005
';
LOOP
myinfo := myinfo || \'0\';
mydec := mydec - 1;
EXIT WHEN mydec < 1;
END LOOP;
END IF;
myinfo := myinfo || \'\'\')\';
ELSE
myinfo := myinfo || myrec.attname;
END IF;
myinfo := m
back. Add a
$dbh->commit()
after your execute line and it should work as expected.
Also note that your "die" on connect will not work because you have RaiseError
set: Leave it off for the connect, then turn it on again immediately via:
$dbh->{RaiseError}=1;
--
Greg Sabino Mullane [EMAIL PROTECTED]
> How can I produce the following desired result?
SELECT * FROM mytable
JOIN
(SELECT goodid, MAX(storehistoryid) AS storehistoryid FROM mytable GROUP by 1) AS a
USING (goodid,storehistoryid);
--
Greg Sabino Mullane [EMAIL PROTECTED]
END LOOP;
RETURN reversed;
END
' LANGUAGE plpgsql IMMUTABLE RETURNS NULL ON NULL INPUT;
pg=> SELECT reverse('A man, a plan, a canal, Panama');
            reverse
-------------------------------
 amanap ,lanac a ,nalp a ,nam A
--
Greg Sabino Mullane [EMAIL PROTECTED]
t columns in the table,
and what you wish to do?
--
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200312022156
e:
SELECT 1;
If you perhaps want the column names, query the system tables.
Otherwise the LIMIT 1 should be very fast, especially if you have no
ordering. Avoid the "SELECT *" if you don't need all the columns
for a little more speed.
--
Greg Sabino Mullane [EMAIL PROTECTED]
Hello, Announcing the release of a new web site: www.SQL-Scripts.com. At
www.SQL-Scripts.Com you can find a collection of SQL Scripts for many
different database systems. Using our search system you can find scripts
that you need quickly and simply. If you have scripts that you use why not
lodge
I am using postgreSQL with Perl::CGI and Perl::DBI::Pg... I would like to be
able to insert a row from my Perl script [$SQL->exec();] and have postgreSQL
return the id of the newly inserted record (new.id) directly to the Perl
script for further processing... Anyone with a solution / idea ???
Nea
This seems rather limiting... Suppose I want a trigger that after insert,
returns the currval(sequence) of the newly inserted row automatically
without having to run another query ???
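One era-appropriate sketch of the usual answer (table and sequence names here are illustrative, not from the thread): insert, then read currval() on the column's backing sequence in the same session. currval() is session-local, so concurrent inserts from other backends cannot disturb it.

```sql
-- Illustrative schema; a serial column is backed by a sequence
-- named <table>_<column>_seq by default.
CREATE TABLE items (
    id   SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);

INSERT INTO items (name) VALUES ('widget');

-- Session-local: safe even with concurrent inserts elsewhere.
SELECT currval('items_id_seq') AS new_id;
```

From Perl DBI this is just a second query on the same handle; later PostgreSQL releases (8.2 and up) also offer INSERT ... RETURNING id, which folds both steps into one statement.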
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED]
ements in the supplied array.
--
Josh Berkus
[EMAIL PROTECTED]
Aglio Database Solutions
San Francisco
--
Greg Johnson <[EMAIL PROTECTED]>
essions" sections of this:
http://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/functions.html
--
greg
Metnetsky <[EMAIL PROTECTED]> writes:
> It's for a class and my professor has a thing for brain teaser type
> questions.
Incidentally, TAs and Profs aren't stupid, and have been known to check on
newsgroups and mailing lists for students asking for people to do their
homework.
~* 'kate'
*confusion*. Oooh, Yeah, this one is a big problem. It means it's not clear
which end of the join to start with. Maybe it would be better to separate this
into two separate queries, give the user the option to search for a user
"kate" or a package "kate" but not both simultaneously.
--
greg
loop.
As a beneficial side effect, it will also limit the damage if one of your
users does a search for "e"...
This only really helps if you can get rid of the OR CMS.package clause...
otherwise it actually needs all the records in case they match a summary
record with a kate package.
--
Greg Stark <[EMAIL PROTECTED]> writes:
> Can I have a GiST index on (foo_id, attribute_set_array) and have it be just
> as fast at narrowing the search to just foo_id = 900 but also speed up the ~
> operation?
Hm, so if I understand what I'm reading I can do this if I
om amongst the equivalent choices based on performance decisions.
Can you show an example where the join order would affect the result set? I
can't think of any.
--
greg
SELECT greatest(a,b) FROM bar
would return one tuple for every record in the table with a single value
representing the greater of bar.a and bar.b.
You could define your own functions to do this but it would be tiresome to
define one for every datatype.
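A minimal sketch of the per-datatype workaround being described, for integers (the function name is made up here):

```sql
CREATE FUNCTION greatest_int(integer, integer) RETURNS integer AS
    'SELECT CASE WHEN $1 > $2 THEN $1 ELSE $2 END;'
LANGUAGE sql IMMUTABLE STRICT;

-- One value per row: the greater of bar.a and bar.b.
SELECT greatest_int(a, b) FROM bar;
```

STRICT makes it return NULL when either input is NULL; and you would have to repeat this for every datatype you care about, which is exactly the tiresome part.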
--
greg
If you had a column that contained the seq of the parent record then this would be
easy. I'm really puzzled how this query as currently specified could be
useful.
--
greg
standard functions is very hard and
inefficient.
The functions to do them don't fit well within the SQL universe either, which
might be why they don't exist yet in postgres.
--
greg
text start_with, int max_depth [, text branch_delim])
- returns keyid, parent_keyid, level, and an optional branch string
- requires anonymous composite type syntax in the FROM clause. See
the instructions in the documentation below.
--
greg
e 26 partitions right off
the bat. Not try to make up arbitrary break points. If those arbitrary
breakpoints turn out to be poorly chosen it'll be a complex manual job to move
them. Whereas if you just have 26 partitions some will be large and some small
and you can move par
ks (like the
arrays are zero-based instead of 1-based) but it works more or less, and it
does exactly what you describe.
But the 7.4 stuff should be much cleaner and more flexible, so if you don't
need it right now you're better off waiting.
--
greg
--
ll the other goodies
you actually need to make it useful?
If not it seems more useful to just use the handy unknown-value thing SQL
already has and turn NaN into a NULL.
--
greg
er to 0, whereas they could
> be closer to +/-1, as well as both closer to the lower or upper bound.
Wouldn't 0.5 and -0.5 be representable exactly as floats?
--
greg
s even though the objects in
the two databases are identical.
--
greg
e
table which isn't what you want and in any case currval takes an argument.
--
greg
o if the rest of the database is still active the tables and indexes being
updated may grow larger than normal. If it goes on for a _really_ long time
they might need a VACUUM FULL at some point to clean them up.
--
greg
xpressions other than simple function calls.
They still should be things that always return the same value, which excludes
subqueries.
--
greg
ies like
SELECT * FROM my_table WHERE userid = ? AND create_date >= date '2003-11-02'
And it'll be able to efficiently pull out just those records, even if there
are thousands more records that are older than 2003-11-02.
This avoids having to create a two-column index with a low-s
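The idea can be sketched as a partial index (names are hypothetical; the predicate has to match the cutoff the queries use):

```sql
-- Indexes only the recent rows, so the index stays small.
CREATE INDEX my_table_recent_idx
    ON my_table (userid)
    WHERE create_date >= date '2003-11-02';
```

Any query whose WHERE clause implies the predicate (userid = ? AND create_date >= date '2003-11-02') can use this small single-column index instead of a two-column one.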
'{1,2}'::integer[]))
(2 rows)
--
greg
er by
clause then, well, you're SOL. Since the OFFSET and LIMIT clauses only kick in
after the where clause restrictions are taken into account.
--
greg
pression came out of the
query you gave. You'll have to give a lot more information about how you're
defining *= and why you think it's related to the function you used to define
the index.
--
greg
other datatypes. You might be able to get a
universal function working using anyelement now, I haven't tried.
--
greg
from transactions
where UPPER(pop) = UPPER('79bcdc8a4a4f99e7c111')
offset 0
)
order by order_date DESC;
The offset 0 prevents the optimizer from pulling the subquery into the outer
query. I think this will prevent it from even considerin
meone
reading the query in that it makes it look like you're trying to encourage it
to use the index on upper(pop). In theory "order by"s on subqueries are
useless and postgres could ignore them, but it doesn't.
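Reassembling the fragment above into one complete query (column names as they appear in the excerpt), the OFFSET 0 fence looks like this:

```sql
-- OFFSET 0 stops the planner from flattening the subquery
-- into the outer query, so it is evaluated as written.
SELECT *
FROM (
    SELECT * FROM transactions
    WHERE upper(pop) = upper('79bcdc8a4a4f99e7c111')
    OFFSET 0
) AS sub
ORDER BY order_date DESC;
```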
--
greg
x scans aggressively
even when they're not faster.
--
greg
You might be better off with something more like this:
SELECT COUNT(P.*) AS p_count, P.p_product_category_id FROM p_product P
WHERE P.p_product_category_id IN (SELECT DISTINCT id FROM
p_product_category) GROUP BY P.p_product_category_id;
obviously tailored to YOUR schema... not mine...
--
Greg
IMARY KEY default nextval('nextbz_seq'),
x varchar NOT NULL REFERENCES a(x)
);
--
Greg Patnude / The Digital Demention
2916 East Upper Hayden Lake Road
Hayden Lake, ID 83835
(208) 762-0762
"Bruno Wolff III" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
&
New feature for PostgreSQL 7.3 & later -- sequences are automatically dropped
when the owning table is dropped -- you need to recreate the sequence
The easiest way is in your CREATE TABLE tblName (
id serial PRIMARY KEY,
blah...,
blah...,
) WITH OIDS;
--
Greg Patnude /
;
Table A would have
1, 99
2, 99
..
99,99
and table B would have
1, 1
1, 2
..
1, 99
Greg Patnude / The Digital Demention
2916 East Upper Hayden Lake Road
Hayden Lake, ID. 83835
(208) 762-0762
Send replies to: [EMAIL PROTECTED]
Website: http://www.left-center.com
000-12-3456'
Either that or the question is: How can I coerce postgreSQL into using an
input / output "mask"...
--
Greg Patnude / The Digital Demention
2916 East Upper Hayden Lake Road
Hayden Lake, ID 83835
(208) 762-0762
ny leading zeros - I just need to make sure that something "looks"
like a valid SSN in the formatting
(nnn-nn-) and that I can store / retrieve it with the appropriate
format -- what I am really trying to accomplish is an "input mask"...
I hadn't considered using a Domain.
t the result to be cached in a single
per-table counter, but that can't be done as simply as that because of
transactions. People have discussed complex solutions to this but it's a much
more complex problem than it appears.
They're really two entirely separate issues.
--
greg
have an array of complete records and when UNNEST is
eventually incorporated into the array code then you could expand those
instead of using the int_array_enum function. Neither of those things are
ready yet as far as I know though.
--
greg
of vendors, kiss
> real standards goodbye.
In the case of SQL was there ever any pretension otherwise? Was the SQL
standard ever really useful as a "real standard"? I can write useful ANSI C89
code that will compile and work on any C compiler. Trying to write portable
SQL92 code t
cept:
> SD["nextno"] = 1
>return SD["nextno"]
> ' language 'plpythonu';
>
> And clearly it can be done faster as a little
> C function.
Does this approach have a hope of working if it's used twice in the same
query?
--
greg
--
_2 ... would be easier to handle?
Or perhaps making $ not require quoting would be helpful?
--
greg
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > Is there a reason postgres goes out of its way to pick names that
> > will be harder to work with than necessary?
>
> If we use ordinary identifiers for system-generated names the
array_aggregate
{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,28,29,30,31,32,33,34,36,37,38,39,40,41,42,43,27,26,44,45,46,47,48,49,50,51,52,53,54,35}
(1 row)
--
greg
--
E FUNCTION concat_agg_accum(text, text) RETURNS text
AS 'select $1 || '', '' || $2'
LANGUAGE sql IMMUTABLE STRICT;
CREATE AGGREGATE concat_agg (
BASETYPE = text,
SFUNC = concat_agg_accum,
STYPE = text
);
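Usage is the same as for any built-in aggregate; a hypothetical example:

```sql
-- Collapses each group's name values into one comma-separated string.
SELECT parent_id, concat_agg(name)
FROM some_table
GROUP BY parent_id;
```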
--
greg
---(end o
the progress of
large batch updates or loads.
--
greg
the second faster (or if they're both still slow) you might
consider posting explain analyze output for both queries. It may be that you
have other issues preventing the optimizer from finding a good plan.
You have run analyze on these tables recently? And you vacuum regularly? And
for the
mple has to be small enough that even using an index scan
you're winning over reading the entire data.
--
greg
Markus Bertheau <[EMAIL PROTECTED]> writes:
> oocms=# SELECT ARRAY(SELECT 1 WHERE FALSE);
> ?column?
> --
>
This one seems strange to me. Shouldn't it result in an empty array?
--
greg
't use the new binary placeholder syntax and emulate it by putting the
parameters directly into the SQL.
--
greg
en realize they were special this way)
--
greg
Ray Aspeitia <[EMAIL PROTECTED]> writes:
> I also would like to pass the delimiter to the aggregate as a parameter and
> I am not sure if it can handle that.
Currently aggregates that take multiple parameters are just not supported.
--
greg
ix userid. You
would have to figure out how to recognize message-ids from your MUA.
--
greg
'return
sleep(shift)' language plperlu;
CREATE FUNCTION
test=# \timing
Timing is on.
test=# select sleep(10) ;
sleep
---
10
(1 row)
Time: 10002.493 ms
--
greg
r,columns FROM tab ORDER BY id,day
That gets the lowest value of "day". Use "ORDER BY id, day DESC" to get the
greatest value of "day".
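The full pattern, with hypothetical column names, is DISTINCT ON plus a matching ORDER BY:

```sql
-- One row per id: the one with the smallest "day".
SELECT DISTINCT ON (id) id, day, othercol
FROM tab
ORDER BY id, day;

-- Swap in "ORDER BY id, day DESC" for the greatest "day" instead.
```

The leading ORDER BY columns must match the DISTINCT ON expressions; the remaining sort columns decide which row survives within each group.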
--
greg
w the changes made to the specific database structure.
Well, CVS can still be useful even if the changes are all in one file. Look at
"cvs annotate" for example. And CVS diff would still give you useful
information.
--
greg
This can easily be switched around to make it "last_6" and you can write
functions to handle 6 records or 9 records. And all of these could be combined
in a single query, so you only have to do the sort once.
Unfortunately you cannot make aggregate functions that take multiple
parameters.
st efficient integer
index lookups. You should still include a comparison against the original
content column since it is possible for there to be a rare crc32 collision.
This doesn't let you do range lookups efficiently. But it does let you look up
specific values quic
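A sketch of the technique, assuming a user-supplied crc32(text) function is installed and declared IMMUTABLE (PostgreSQL has no built-in crc32; all names here are illustrative):

```sql
-- Functional index on the 32-bit checksum of the content column.
CREATE INDEX content_crc_idx ON mytable (crc32(content));

-- The crc32 comparison hits the integer index; the content comparison
-- weeds out the rare crc32 collision.
SELECT *
FROM mytable
WHERE crc32(content) = crc32('some long value')
  AND content = 'some long value';
```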
=> explain select * from botched_view where content_id = 1;
QUERY PLAN
Index Scan using idx_botched_table on botched_table (cost=0.00..3.72 rows=3 width=32)
Index Cond: ((content)::integer = 1)
Greg Stark <[EMAIL PROTECTED]> writes:
> Theo Galanakis <[EMAIL PROTECTED]> writes:
>
> > I created the Index you specified, however it chooses to run a seq scan on
> > the column rather than a Index scan. How can you force it to use that
> > Index..
> >
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > That's all well and good. But when I tried to make a version of your
> > situation that used a function I found it doesn't work so well with
> > functional indexes:
ta it needs to process.
You might need some kind of queue table, or you might be able to get by
without one.
--
greg
strange symptoms when your code has a bug.
Other than that there are things like bit(4) which has a cast to and from
integer. But they won't save you any storage space. If you have multiple
columns like this and want to define a new type that aggregates them all for
storage but lets you a
n.
Which book is it?
Postgres indexes NULLs. It can use them for ORDER BY clauses.
Where it cannot use them is to satisfy "WHERE foo IS NULL" or "WHERE foo IS
NOT NULL" constraints though. That's an implementation detail, but it can be
worked around with part
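The workaround being alluded to is a partial index whose predicate is the IS NULL test itself (table and column names hypothetical):

```sql
-- Small index covering only the rows where foo is null.
CREATE INDEX mytable_foo_null_idx ON mytable (id) WHERE foo IS NULL;

-- The WHERE clause matches the index predicate, so the partial
-- index is usable.
SELECT * FROM mytable WHERE foo IS NULL;
```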
.
(This may have all changed in Oracle 9+. The last I saw of Oracle was 8i)
I wonder if they just tried explain on a bunch of queries and noticed that
postgres wasn't using an index for SELECT * FROM foo ORDER BY bar and came up
with explanations for the patterns they saw?
--
greg
--
nd count(*) > 1
) on (model_fk = model_pk)
This would let the planner have more plans to choose from and might be a big
win if there are lots of brands and models but few that satisfy the criteria
you're looking for.
--
greg
T E Schmitz <[EMAIL PROTECTED]> writes:
> SELECT
> BRAND_NAME,MODEL_NAME
...
> intersect
...
Huh, I never think of the set operation solutions. I'm curious how it compares
speed-wise.
--
greg
like a good idea to emit
* an error message that's better than "syntax error".
*/
So where can I officially register my complaint? :)
--
greg
er just wrap the non-strict expression in a conditional
that tests whether the row was found?
--
greg
er by ...) ss;
But I thought she wanted to get the row number within a group. Not the row
number for the entire result set. A sequence can't do that. Or at least, I
suppose it could but it would be mighty strange to see setval() in a SELECT
query. And I can't think of how to detec
nto your queries
using $column1 then you've got a potential security problem. Unless you're
quoting every variable everywhere religiously using postgres's quoting
functions an attacker can sneak extra SQL into your queries. Potentially
including whole new statements such as "DELETE FROM table"...
--
greg
"C. Bensend" <[EMAIL PROTECTED]> writes:
> The risk of a cron gone wild is acceptable to me at this moment.
Gee, now I have images of late-night advertisements for bofh-porn video tapes
of Cron Jobs Gone Wild(tm) dancing through my head... thanks.
--
greg
---
t in 7.4 and later. If you're still on 7.3 then the
outer join will be better. But in 7.4 I would actually expect the NOT IN or
the NOT EXISTS to be faster.
--
greg
g records and will give an
error if there are multiple records instead of doing strange things.
Neither of these will be particularly pretty on the performance front.
--
greg
Instead of "explain verbose" use "explain analyze". I don't know if it was in
7.1 but you say you reproduced the problem with 7.4. It would be helpful to
see the results of "explain analyze select ..." on the query before and after
the vacuum analyze.
--
greg
less of what page is being fetched. Yours will perform more
and more slowly as the user gets deeper into the results.
Note that both queries are wrong however. You need an "ORDER BY itemkey" or
else nothing guarantees the second page has any relation at all to the first
page.
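Two sketches of fetching "page two" of a result set (table and key column are hypothetical). The first is the OFFSET style; the second, keyset style, stays cheap no matter how deep the user pages, because it never has to skip rows:

```sql
-- OFFSET style: cost grows with the offset.
SELECT * FROM items ORDER BY itemkey LIMIT 20 OFFSET 20;

-- Keyset style: remember the last itemkey shown on page one
-- (say 1020) and start strictly after it.
SELECT * FROM items WHERE itemkey > 1020 ORDER BY itemkey LIMIT 20;
```

Either way, the ORDER BY itemkey is what makes consecutive pages relate to each other at all.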
--
greg
e expecting.
What string? That's just integer arithmetic.
--
greg
onnect to the
database and do the batch jobs.
They can output progress logs or keep information about their progress in some
shared space. They can also control the transaction more freely committing in
the middle of the job if it's safe.
--
greg
ting a constraint
> similar to the original poster's and not realising it has no effect.
well
WHERE foo IN (null, ...)
returns null if foo isn't explicitly in the list (ie, "it may or may not equal
the unknown value in the list"). And I think constraints that retur
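This is easy to see directly: an IN list containing NULL can yield true or NULL, but never false:

```sql
SELECT 1 IN (NULL, 1, 2);  -- true: 1 matches before the NULL matters
SELECT 3 IN (NULL, 1, 2);  -- NULL: "3 might equal the unknown value"
```

Since a CHECK constraint passes whenever its expression is not false, a constraint written this way never rejects anything, which is the trap being discussed.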
terrupted without losing work.
Oh, and this won't work if you have any external references to these values
from outside your database. Say if the value is something like a customer
account number that you've previously sent to customers...
--
greg
ill always come to the same postgres session and has some
garbage collection if the user disappears. And it means the URL is only good
for a limited amount of time. If they bookmark it it'll break if they come
back the next day.)
--
greg
Isn't this a prime example of when to use a servlet or something similar
in function? It will create the cursor, maintain it, and fetch against
it for a particular page.
Greg
-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 20, 2005 10:21
vantage if the data were stored in a temporary table
marked as not having to be WAL logged. Instead it could be automatically
cleared on every database start.
--
greg
"Joel Fradkin" <[EMAIL PROTECTED]> writes:
> I also tried a simple select * from tblcase where clientum = 'SAKS'
Try:
explain analyze select * from tblcase where clientum = 'SAKS'
Send the output.
--
greg
Total runtime: 1034.434 ms"
Well that says it only took 1s. So it seems this is highly dependent on
whether the data is in cache. Perhaps it was in cache on MSSQL when you
profiled it there and not on postgres?
You could put an index on clientnum, but if the data is usually in cache like
this it m