Hello everyone,
First time poster to the mailing list here.
We have been running pgsql for about a year now at
a pretty basic level (I guess) as a backend for custom
web (intranet) application software. Our database so far is a
"huge" (note sarcasm) 10 Mb containing of about 6 or so princi
On Sun, 2005-01-23 at 15:40 -0500, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > Changing the idea slightly might be better: if a row update would cause
> > a block split, then if there is more than one row version then we vacuum
> > the whole block first, then re-attempt the update
Hi,
I have a query which is executed using ilike. The query values are
received from the user and it is executed using PreparedStatement.
Currently all queries are executed as is using ilike, irrespective
of whether they have a pattern-matching character or not. Can using =
instead of ilike boost p
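One way to act on that question - a minimal JDBC sketch, with an invented table and column (customer.name) and invented connection details - is to test the user input for pattern-matching characters and only use ILIKE when they are present, falling back to equality on lower() otherwise:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class IlikeOrEquals {
    public static void main(String[] args) throws Exception {
        String userInput = args.length > 0 ? args[0] : "smith";

        // Fall back to ILIKE only when the input actually contains
        // pattern-matching characters; otherwise use equality on
        // lower(), which stays case-insensitive like ILIKE.
        boolean hasPattern = userInput.contains("%") || userInput.contains("_");
        String sql = hasPattern
                ? "SELECT id, name FROM customer WHERE name ILIKE ?"
                : "SELECT id, name FROM customer WHERE lower(name) = lower(?)";

        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/my_db", "user", "secret");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, userInput);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}

With an index on lower(name), the equality branch can use it, while ILIKE with a leading wildcard generally cannot.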
SpaceBallOne wrote:
Hello everyone,
First time poster to the mailing list here.
We have been running pgsql for about a year now at a pretty basic
level (I guess) as a backend for custom
web (intranet) application software. Our database so far is a "huge"
(note sarcasm) 10 Mb consisting of abou
On Mon, 24 Jan 2005 08:18 pm, Antony Paul wrote:
> Hi,
> I have a query which is executed using ilike. The query values are
> received from the user and it is executed using PreparedStatement.
> Currently all queries are executed as is using ilike, irrespective
> of whether they have a pattern mat
> I then re-wrote the page to use a single select query to call all the
> information needed by PHP to draw the screen. That managed to shave it
> down to 3.5 seconds... but this so far is as fast as I can get the
> page to load. Have tried vacuuming and creating indexes but to no
> avail. (increas
Russell wrote:
> I am not sure what the effect of it being prepared will be, however I've
> had much success with the method above without the queries being prepared.
> Others may be able to offer advice about whether prepare will affect it.
>
There are two general cases I tend to use prepared que
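For context, one of the common cases for prepared statements is prepare-once, execute-many; a rough JDBC sketch (the job table, its columns, and the connection details are invented) of what that looks like:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ReusePrepared {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/my_db", "user", "secret");
             // Prepared once; the parse/plan work can be reused across
             // executions with different parameter values.
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT id, status FROM job WHERE customer_id = ?")) {
            int[] customers = {101, 102, 103};
            for (int customerId : customers) {
                ps.setInt(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("id") + " " + rs.getString("status"));
                    }
                }
            }
        }
    }
}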
I used PreparedStatements to avoid SQL injection attacks, and it is the
best way to do that in JDBC.
rgds
Antony Paul
On Mon, 24 Jan 2005 09:01:49 -0500, Merlin Moncure
<[EMAIL PROTECTED]> wrote:
> Russell wrote:
> > I am not sure what the effect of it being prepared will be, however I've
> > had muc
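To make the SQL-injection point concrete, a small sketch (hypothetical customer table) contrasting string concatenation with a bound parameter:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class InjectionExample {
    // Vulnerable: the user's text becomes part of the SQL itself,
    // so input such as  x' OR '1'='1  changes the query's meaning.
    static ResultSet unsafe(Connection conn, String userInput) throws SQLException {
        Statement st = conn.createStatement();
        return st.executeQuery(
                "SELECT id FROM customer WHERE name ILIKE '" + userInput + "'");
    }

    // Safe: the value is sent as a bound parameter, never as SQL text.
    static ResultSet safe(Connection conn, String userInput) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT id FROM customer WHERE name ILIKE ?");
        ps.setString(1, userInput);
        return ps.executeQuery();
    }
}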
Alex wrote:
> How do you create a temporary view that has only a small subset of the
> data from the DB init? (Links to docs are fine - I can read ;). My
> query isn't all that complex, and my number of records might be from
> 10 to 2k depending on how I implement it.
Well, you can't. My point
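For reference (a sketch with made-up table names and connection details, not necessarily where the reply above was going): a view in PostgreSQL is just a stored query and holds no data of its own, whereas a temporary table actually materializes the subset for the rest of the session:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SubsetExamples {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/my_db", "user", "secret");
             Statement st = conn.createStatement()) {

            // A view: a named query, re-evaluated every time it is used.
            st.execute("CREATE VIEW recent_jobs AS "
                     + "SELECT * FROM job WHERE created > now() - interval '7 days'");

            // A temporary table: copies the subset once and is dropped
            // automatically at the end of the session.
            st.execute("CREATE TEMP TABLE recent_jobs_snap AS "
                     + "SELECT * FROM job WHERE created > now() - interval '7 days'");
        }
    }
}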
Tatsuo,
I agree completely that vacuum falls apart on huge tables. We could
probably do the math and figure out what the ratio of updated rows per
total rows is each day, but on a constantly growing table, that ratio
gets smaller and smaller, making the impact of dead tuples in the table
propo
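The "do the math" part can be approximated from the statistics collector; a hedged sketch (the pg_stat_user_tables/pg_class join is standard, everything else - connection details, the idea of sampling it daily - is assumed):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class UpdateRatio {
    public static void main(String[] args) throws Exception {
        // Rough "updated+deleted rows versus total rows" per table; sample it
        // once a day (or reset the stats) to turn it into a daily rate.
        String sql =
            "SELECT s.relname, s.n_tup_upd, s.n_tup_del, c.reltuples, " +
            "       (s.n_tup_upd + s.n_tup_del) / nullif(c.reltuples, 0) AS churn " +
            "FROM pg_stat_user_tables s JOIN pg_class c ON c.oid = s.relid " +
            "ORDER BY churn DESC";
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/my_db", "user", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString("relname") + " churn=" + rs.getString("churn"));
            }
        }
    }
}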
Tatsuo,
> > Depends on your connection pooling software, I suppose. Most connection
> > pooling software only returns connections to the pool after a user has
> > been inactive for some period ... generally more than 3 seconds. So
> > connection continuity could be trusted.
>
> Not sure what you
On Mon, 2005-01-24 at 09:52 -0800, Josh Berkus wrote:
> [about keeping connections open in web context]
> Ah, clarity problem here. I'm talking about connection pooling tools from
> the client (webserver) side, such as Apache::DBI, PHP's pg_pconnect,
> Jakarta's connection pools, etc. Not po
Ragnar,
> note that these sometimes do not provide connection pooling as such,
> just persistent connections (Apache::DBI)
Yes, right.
> no. you can only count on web-server-process==connection, but not
> web-user==connection, unless you can guarantee that the same user
> client always connects t
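For the Java side of that discussion, a minimal pooling sketch using Jakarta Commons DBCP's BasicDataSource (pool size and credentials invented); the point being that a connection maps to a borrowed slot per request, not to a web user:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.commons.dbcp.BasicDataSource;

public class PooledConnections {
    public static void main(String[] args) throws Exception {
        // One pool shared by the whole web application.
        BasicDataSource pool = new BasicDataSource();
        pool.setDriverClassName("org.postgresql.Driver");
        pool.setUrl("jdbc:postgresql://localhost/my_db");
        pool.setUsername("user");
        pool.setPassword("secret");
        pool.setMaxActive(10);

        // Borrowed for the duration of one request ...
        Connection conn = pool.getConnection();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT now()")) {
            if (rs.next()) {
                System.out.println(rs.getString(1));
            }
        } finally {
            conn.close();   // ... and close() returns it to the pool.
        }
    }
}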
Hi,
I noticed that reltuples is way off when
I vacuum the table and then analyze it.
The count (296901) after vacuum seems
accurate, while the reltuples value (1.90744e+06)
after analyze is far off.
My PG version is 7.3.2 (I know it is old).
Any thoughts?
Thanks,
my_db=# analyze my_tab;
ANALYZE
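One way to reproduce the comparison end to end (a sketch; only my_tab comes from the post above, the connection details are invented, and VACUUM needs autocommit in JDBC):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReltuplesCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/my_db", "user", "secret");
             Statement st = conn.createStatement()) {
            conn.setAutoCommit(true);   // VACUUM cannot run inside a transaction block

            st.execute("VACUUM my_tab");
            printReltuples(st);         // exact: VACUUM visits every page

            st.execute("ANALYZE my_tab");
            printReltuples(st);         // estimate: ANALYZE only samples rows
        }
    }

    static void printReltuples(Statement st) throws Exception {
        try (ResultSet rs = st.executeQuery(
                "SELECT reltuples FROM pg_class WHERE relname = 'my_tab'")) {
            if (rs.next()) {
                System.out.println("reltuples = " + rs.getFloat(1));
            }
        }
    }
}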
> On Mon, 2005-01-24 at 09:52 -0800, Josh Berkus wrote:
> > [about keeping connections open in web context]
> > Ah, clarity problem here. I'm talking about connection pooling tools
> > from
> > the client (webserver) side, such as Apache::DBI, PHP's pg_pconnect,
> > Jakarta's connection pools
Litao Wu <[EMAIL PROTECTED]> writes:
> I noticed that reltuples is way off when
> I vacuum the table and then analyze it.
> The count (296901) after vacuum seems
> accurate, while the reltuples value (1.90744e+06)
> after analyze is far off.
VACUUM derives an exact count because it scans the who
Thanks for the replies, guys.
Chris -
Very cool feature, timing - didn't know about that one. Appears to be taking
the following times in pulling up the page:
web browser: 1.15 sec
postgres: 1.52 sec
other: 0.83 sec
Andrew:
Query looks like the following:
explain analyse SELECT
job.*,
customer.*,
ub
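A way to get that kind of breakdown from the application side as well (sketch only; the join below is invented, since the real query is cut off above) is to time the connect and the query/fetch steps separately:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TimeBreakdown {
    public static void main(String[] args) throws Exception {
        long t0 = System.currentTimeMillis();
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/my_db", "user", "secret");
             Statement st = conn.createStatement()) {
            long t1 = System.currentTimeMillis();

            int rows = 0;
            try (ResultSet rs = st.executeQuery(
                    "SELECT job.*, customer.* FROM job "
                  + "JOIN customer ON customer.id = job.customer_id")) {
                while (rs.next()) {
                    rows++;   // stands in for the work of building the page
                }
            }
            long t2 = System.currentTimeMillis();

            System.out.println("connect:       " + (t1 - t0) + " ms");
            System.out.println("query + fetch: " + (t2 - t1) + " ms (" + rows + " rows)");
        }
    }
}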
Thanks for the reply John,
There are approximately 800 rows total in our job table (which stays
approximately the same because 'completed' jobs get moved to a 'job_archive'
table). The other jobs not shown by the specific query could be on backorder
status, temporarily deleted status, etc.
You
On Sun, Jan 23, 2005 at 03:40:03PM -0500, Tom Lane wrote:
> The real issue with any such scheme is that you are putting maintenance
> costs into the critical paths of foreground processes that are executing
> user queries. I think that one of the primary advantages of the
> Postgres storage design
SpaceBallOne wrote:
Thanks for the reply John,
There are approximately 800 rows total in our job table (which stays
approximately the same because 'completed' jobs get moved to a
'job_archive' table). The other jobs not shown by the specific query
could be on backorder status, temporarily deleted stat
I'm also an autodidact on DB design, although it's well more than a year now.
If you are planning to clean up the design, I strongly suggest getting a visual
tool. Google for something like "database design tool". Some are extremely
expensive (e.g. ERwin, which I think has been renamed, having been bou