For Oracle info, check out one of my recent posts:
http://archives.postgresql.org/pgsql-sql/2005-01/msg00231.php
For TSQL, well, I was involved in a project where we converted an Oracle db
(with procs, functions, triggers, etc.) to PostgreSQL and MS SQL Server.
plpgsql and plsql are close enough whe
Hello everyone,
given is a table with a version history kind of thing I am currently
working on. Upon this table there is a view and the application interacts
with the view only, updating/inserting/deleting is controlled by rules. It
seems like the record set "OLD" gets changed when it is used i
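For readers unfamiliar with the setup being described, here is a minimal
sketch of a view with rewrite rules over a version-history table (all
table and column names are illustrative, not from the original poster):

```sql
CREATE TABLE items_history (
    id      integer NOT NULL,
    version integer NOT NULL,
    payload text,
    PRIMARY KEY (id, version)
);

-- The view exposes only the latest version of each row.
CREATE VIEW items AS
    SELECT DISTINCT ON (id) id, version, payload
    FROM items_history
    ORDER BY id, version DESC;

-- An UPDATE on the view inserts a new version instead. Inside the
-- rule, OLD is the view row being updated and NEW is the proposed
-- replacement row.
CREATE RULE items_upd AS ON UPDATE TO items
    DO INSTEAD
    INSERT INTO items_history (id, version, payload)
    VALUES (OLD.id, OLD.version + 1, NEW.payload);
```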
Finally, I built the table with all the additional columns created
during the initial creation of the table. The original speed was
obtained!
Quite strange!
Did you VACUUM FULL? ANALYZE? Did you set a default value for the
columns? Hmm, maybe it's not the fact of adding the columns,
We've done some pretty extensive benchmarking and load testing on a
couple of platforms including the Xeon and Opteron. You may have already
bought that Dell box, but I'll say it anyway. Xeon quad processors are a
terrible platform for postgres. Trying
I was puzzled as to why my search slowed down when I added columns.
The VACUUM did not restore the former speed,
which I had obtained before adding the columns.
So, I rebuilt the table with only the smiles column and my original
speed was again obtained (not surprising).
After I added the extra col
I'm quite happy with the speedup in 3, but puzzled over the slowdown in
2.
Could you provide :
- SELECT count(*) FROM structure;
=> NRows
- SELECT avg(length(smiles)) FROM structure;
Then VACUUM FULL ANALYZE structure
Redo your timings and this tim
Bruno Wolff III wrote:
On Tue, Jan 25, 2005 at 21:21:08 -0700,
Dennis Sacks <[EMAIL PROTECTED]> wrote:
One of the things you'll want to do regularly is run a "vacuum analyze".
You can read up on this in the postgresql docs. This is essential to the
indexes being used properly. At a bare minim
On Tue, Jan 25, 2005 at 21:21:08 -0700,
Dennis Sacks <[EMAIL PROTECTED]> wrote:
>
> One of the things you'll want to do regularly is run a "vacuum analyze".
> You can read up on this in the postgresql docs. This is essential to the
> indexes being used properly. At a bare minimum, after you im
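The maintenance commands being recommended, in their simplest form:

```sql
-- Reclaim space from dead rows and refresh the planner statistics
-- for every table in the current database.
VACUUM ANALYZE;

-- Or target a single table (name is illustrative):
VACUUM ANALYZE mytable;
```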
On Tue, Jan 25, 2005 at 10:11:40 -0500,
Joel Fradkin <[EMAIL PROTECTED]> wrote:
> Hi all working my way through our views and all is going very well.
>
> We use datediff in MSSQL a bit and I read about the field1::date -
> field2::date to return the days numerically.
>
> Is there any way to get
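A sketch of the date-difference idiom under discussion (column and table
names are illustrative):

```sql
-- Subtracting two dates yields an integer number of days, which is
-- PostgreSQL's equivalent of MSSQL's DATEDIFF(day, ...).
SELECT enddate::date - startdate::date AS days_elapsed
FROM orders;

-- Subtracting timestamps yields an interval instead; EXTRACT can pull
-- a numeric field out of it.
SELECT EXTRACT(day FROM endts - startts) AS days_part
FROM orders;
```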
Joel Fradkin wrote:
Thank you I will look at that info.
I did do an EXPLAIN ANALYSE on the view and could see it was doing the seq
scan on 3 fields, so I did an index for the three fields and it then chose
an index scan and ran in 27 seconds.
I also did adjust my defaults to much smaller numbers on
Sibtay,
> As you might have observed here, the actual problem is
> how to do assignment to multidimensional array locations using the
> subscript operator.
Maybe post your results, too?
--
Josh Berkus
Aglio Database Solutions
San Francisco
Maybe you should tweak the cpu_index_tuple_cost parameter instead of
disabling sequential scans. The default value is 0.001; you could
change it to a lower value (0.0005 or something).
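The suggested tweak, applied to the current session only (the value is
illustrative):

```sql
-- Lowering cpu_index_tuple_cost makes index scans look cheaper to the
-- planner, nudging it toward indexes without banning seqscans outright.
SET cpu_index_tuple_cost = 0.0005;

-- Verify the setting:
SHOW cpu_index_tuple_cost;
```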
Joel Fradkin wrote:
I tried the SET ENABLE_SEQSCAN=FALSE;
And the result took 29 secs instead of 117.
Aft
Thank you I will look at that info.
I did do an EXPLAIN ANALYSE on the view and could see it was doing the seq
scan on 3 fields, so I did an index for the three fields and it then chose
an index scan and ran in 27 seconds.
I also did adjust my defaults to much smaller numbers on shared buffers (pe
Joel Fradkin wrote:
The postgres is running on Linux Fedora core 3 (production will be redhat on
Dell 4 proc 8 gig box).
My client pgadminIII is running on XP.
Sorry I was not clearer on this.
Ah! you're the gent who had the problems with SE-Linux on Fedora 3.
Sorry - should have made the connecti
"Joel Fradkin" <[EMAIL PROTECTED]> writes:
> I tried the SET ENABLE_SEQSCAN=FALSE;
> And the result took 29 secs instead of 117.
>
> After playing around with the cache and buffers etc I see I am no longer
> doing any swapping (not sure how I got the 100 sec response might have been
> shared buff
I tried the SET ENABLE_SEQSCAN=FALSE;
And the result took 29 secs instead of 117.
After playing around with the cache and buffers etc I see I am no longer
doing any swapping (not sure how I got the 100 sec response might have been
shared buffers set higher, been goofing around with it all morning)
"Joel Fradkin" <[EMAIL PROTECTED]> writes:
> QUERY PLAN
> "Seq Scan on tblcase (cost=0.00..30066.21 rows=37401 width=996) (actual
> time=0.344..962.260 rows=22636 loops=1)"
> " Filter: ((clientnum)::text = 'SAKS'::text)"
> "Total runtime: 1034.434 ms"
Well that says it only took 1s. So it seems
The postgres is running on Linux Fedora core 3 (production will be redhat on
Dell 4 proc 8 gig box).
My client pgadminIII is running on XP.
Sorry I was not clearer on this.
I am playing with the settings now, I got it to return in 100 secs (the view
that is that took 135 on MSSQL). My testing is
I've tested in a relation of mine, with about 20 attributes, and here
are the results:
test=# select count(*) from gestionestareas;
count
447681
(1 row)
test=# explain analyze select * from gestionestareas where agrupable;
QU
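If only a small fraction of rows pass the boolean test, a partial index
is one way to speed up a query like the one being tested (names taken
from the example above; this is a sketch, not something the poster ran):

```sql
-- Index only the rows where the flag is true; the planner can use it
-- for "WHERE agrupable" and never touch the rest of the table.
CREATE INDEX gestionestareas_agrupable_idx
    ON gestionestareas (agrupable)
    WHERE agrupable;

EXPLAIN ANALYZE SELECT * FROM gestionestareas WHERE agrupable;
```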
Well last evening (did not try it this morning) it was taking the extra
time.
I have made some adjustments to the config file per a few web sites that you
all recommended my looking at.
It is now using 137 of 756 meg avail.
it is now taking 8 secs to return 22,000 rows (using pgadminIII in a sql
Joel Fradkin wrote:
Well last evening (did not try it this morning) it was taking the extra
time.
I have made some adjustments to the config file per a few web sites that you
all recommended my looking at.
The crucial one I'd say is the performance guide at:
http://www.varlena.com/varlena/General
> ...
> Where can I find
> primer on PL/pgsql, with lots of examples?
> ...
Download openacs at http://openacs.org/projects/openacs/download/. Look at
directories matching
the pattern openacs-*/packages/*/sql/postgresql/. The older openacs version
4.6.3 has more
examples than openacs 5.
Ge
Joel Fradkin wrote:
QUERY PLAN
"Seq Scan on tblcase (cost=0.00..30066.21 rows=37401 width=996) (actual
time=0.344..962.260 rows=22636 loops=1)"
" Filter: ((clientnum)::text = 'SAKS'::text)"
"Total runtime: 1034.434 ms"
That's only 1 second - to return 22,636 rows. Not 27 seconds, as in the
origi
Alex Turner wrote:
As I read the docs, a temp table doesn't solve our problem, as it does
not persist between sessions. With a web page there is no guarantee
that you will receive the same connection between requests, so a temp
table doesn't solve the problem. It looks like you either have to
cre
QUERY PLAN
"Seq Scan on tblcase (cost=0.00..30066.21 rows=37401 width=996) (actual
time=0.344..962.260 rows=22636 loops=1)"
" Filter: ((clientnum)::text = 'SAKS'::text)"
"Total runtime: 1034.434 ms"
Joel Fradkin
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel. 941-753-7111 ext 305
thank you very much for your reply
I did as you specified and it worked fine
thankz :)
On Wed, 26 Jan 2005 09:29:53 +, Richard Huxton wrote:
> Sibtay Abbas wrote:
> > hello everyone
> >
> > i am having problem with multidimensional arrays in plpgsql following
> > is the source code of the fu
The problem is still open.
The thing is that I have about 20-30 clients that are using that SQL query
where the offset and limit are involved. So I cannot create a temp table,
because that means I'd have to make a temp table for each session...
which is a very bad idea. Cursors som
Sibtay Abbas wrote:
Hello everyone,
I am having a problem with multidimensional arrays in PL/pgSQL. Following
is the source code of the function I am trying to run:
DECLARE
x INTEGER[10][10];
x[3][1] := '20'; --i have even tried x[3][1] = 20
As you might have observed here, t
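A working version of what the function above is attempting. The two key
points: the array must be initialized before element assignment (a NULL
array silently swallows subscript assignments), and assignments belong in
the BEGIN block, not the DECLARE section (the function name is
illustrative):

```sql
CREATE OR REPLACE FUNCTION array_demo() RETURNS integer AS $$
DECLARE
    -- Initialize a 3x3 array of zeros. Bounds written in a
    -- declaration like integer[10][10] are documentation only;
    -- they allocate nothing.
    x integer[] := '{{0,0,0},{0,0,0},{0,0,0}}';
BEGIN
    x[3][1] := 20;         -- subscript assignment goes here
    RETURN x[3][1];
END;
$$ LANGUAGE plpgsql;

SELECT array_demo();  -- returns 20
```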
Din Adrian wrote:
Hello,
I want to use a private table in PostgreSQL (every client sees only his
own data).
Is this possible? How can I do it?
If I understand what you want, then this is traditionally done using views.
CREATE TABLE base_table (
    a integer NOT NULL,
    b text,
    u name,
    PRIMARY KEY (a)
);
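A sketch of the view-based approach being described, continuing from the
table above (the view name and ownership details are illustrative):

```sql
-- Each row carries its owner in column u; the view filters on the
-- logged-in role, so every client sees only their own rows.
CREATE VIEW my_data AS
    SELECT a, b
    FROM base_table
    WHERE u = current_user;

-- Deny direct access to the base table and grant it on the view only.
-- (The table owner retains access regardless.)
REVOKE ALL ON base_table FROM PUBLIC;
GRANT SELECT ON my_data TO PUBLIC;
```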
> > I'm using the PostGIS spatial extension. Some of my spatial
> queries (like
> > live zooming and panning) should only be performed when the column
> > containing the spatial data is spatially indexed, otherwise
> the first query
> > takes almost forever and users will just kill the
> appl
Joel Fradkin wrote:
Basically the question was why would a view use an indexed search on one
result set but a seq search on a larger result set. Same view only
difference is how many rows are returned. The large result set was doing a
seq search and did not return after several minutes. The same sq
Hello,
I want to use a private table in PostgreSQL (every client sees only his
own data).
Is this possible? How can I do it?
Thank you,
Adrian Din
--
Using Opera's revolutionary e-mail client: http://www.opera.com/m2/
am 26.01.2005, um 10:13:52 +0200 mailte Din Adrian folgendes:
> Hello,
> I want to use a private table in PostgreSQL (every client sees only
> his own data).
> Is this possible? How can I do it?
Why?
You can create different users and/or different databases.
Regards, Andreas
--
Andreas Krets