Hello all,
I have a problem with index usage and joins.
Attached is some SQL demonstrating my problem.
Why is the index only used in the 2nd query?
Can anybody explain to me how to avoid/fix this?
Thanks in advance,
Sebastian
CREATE TABLE users (
login NAME NOT NULL PRIMARY KEY,
datum
On Saturday, December 11, 2004 at 20:06, Bruno Wolff III wrote:
On Thu, Dec 09, 2004 at 18:32:19 +0100,
Janning Vygen [EMAIL PROTECTED] wrote:
id should be positive
id should not have gaps within the same account
id should start counting at 1 for each account
I can't use sequences
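One common sketch for gapless per-account ids (table and column names here are hypothetical, not from the post) keeps a counter column per account and relies on the row lock taken by UPDATE to serialize concurrent inserts for the same account:

```sql
-- Hypothetical schema: account(acc_id, last_id) and game(acc_id, id, title).
-- The UPDATE locks the account row, so two concurrent inserts for the same
-- account cannot grab the same id; a rollback also undoes the counter
-- increment, leaving no gap.
BEGIN;
UPDATE account SET last_id = last_id + 1 WHERE acc_id = 42;
INSERT INTO game (acc_id, id, title)
    SELECT acc_id, last_id, 'first game' FROM account WHERE acc_id = 42;
COMMIT;
```

The cost of this approach is that inserts for one account serialize on the counter row, which is usually acceptable when gaplessness is a hard requirement.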
Hi,
On Mon, 2004-12-13 at 10:58 +0100, Janning Vygen wrote:
On Saturday, December 11, 2004 at 20:06, Bruno Wolff III wrote:
On Thu, Dec 09, 2004 at 18:32:19 +0100,
Janning Vygen [EMAIL PROTECTED] wrote:
id should be positive
id should not have gaps within the same account
id should
Sebastian Böck wrote:
Richard Huxton wrote:
Can you post the output from your explain analyse calls too? The
statistics aren't going to be the same on different machines.
Sure, here it is.
Thanks. (PS - remember to cc the list too).
EXPLAIN ANALYZE SELECT * FROM v;
Richard Huxton wrote:
Sebastian Böck wrote:
Richard Huxton wrote:
Can you post the output from your explain analyse calls too? The
statistics aren't going to be the same on different machines.
Sure, here it is.
Thanks. (PS - remember to cc the list too).
[output of EXPLAIN ANALYZE]
OK - so what
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Tom Lane wrote:
One of the things on the TODO list is making the size of temp-table
buffers user-configurable. (Temp table buffers are per-backend, they
are not part of the shared buffer arena.) With a large temp-table arena
we'd
If you attempted the inserts within a single transaction and any of
them fail, they will all fail. The server will automatically undo any
and all changes made by the transaction, and any further steps in the
transaction will simply result in the error message you are getting.
You will not be
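The behaviour described above can be seen in a minimal session (table name hypothetical):

```sql
-- Once any statement fails, every later statement in the same
-- transaction is rejected until ROLLBACK or COMMIT ends it.
CREATE TABLE t (id integer PRIMARY KEY);
BEGIN;
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (1);  -- ERROR: duplicate key
INSERT INTO t VALUES (2);  -- ERROR: current transaction is aborted
COMMIT;                    -- ends the transaction; nothing was stored
```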
When I tried to subscribe I got:
Not Found
The requested URL /mj/mj_wwwusr was not found on this server.
Apache/1.3.33 Server at webmail.anachronic.net Port 80
Thank you,
Jim Apsey
Re: disabling OIDs?
For what it's worth, OIDs are required if you ever want to use updateable cursors with the ODBC driver. We discovered this the hard way. Mark Dexter
On Mon, 13 Dec 2004, Jimmie H. Apsey wrote:
When I tried to subscribe I got:
Not Found
The requested URL /mj/mj_wwwusr was not found on this server.
How/where did you subscribe from? We made some changes this past weekend
to deal with some issues that were reported, so the URL should be
I cannot find any information on how to handle the more unusual or
complex data types in embedded SQL with C. The only data type treated
in the docs is VARCHAR, which is handled by the preprocessor.
I am especially interested in handling BYTEA in embedded SQL. How is it
declared?
--
With kind
I am trying to select a part of a text field based on a regular expression,
the data looks like this
Rv0001c_f
Rv0002_r
Rv1003c_r
Etc
I would like to be able to select like this (this is a regular expression I
would do in perl)
SELECT substring(primer_name, '(\w+)\d\d\d\d[c]*_[fr]$') from
When I tried to subscribe I got:
Not Found
The requested URL /mj/mj_wwwusr was not found on this server.
How/where did you subscribe from? We made some changes this
past weekend
to deal with some issues that were reported, so the URL should be
http://mail.postgresql.org/mj/mj_wwwusr
On Monday, December 13, 2004 at 17:37, Bruno Wolff III wrote:
On Mon, Dec 13, 2004 at 10:58:25 +0100,
Janning Vygen [EMAIL PROTECTED] wrote:
On Saturday, December 11, 2004 at 20:06, Bruno Wolff III wrote:
maybe you are right. But with sequences I thought I would have problems
when I do inserts
At 12:16 AM 12/13/2004 -0600, Guy Rouillier wrote:
(3) If we go with more disks, should we attempt to split tables and
indexes onto different drives (i.e., tablespaces), or just put all the
disks in hardware RAID5 and use a single tablespace?
Fast inserts = fast writes.
RAID5 = slower writes.
You
On Mon, Dec 13, 2004 at 19:37:41 +0100,
Janning Vygen [EMAIL PROTECTED] wrote:
ok, I have users who want to manage their sporting competitions, which
(simplified) have games and fixtures (in German "Spieltage"; I hope the word
fixtures is understandable). The German Bundesliga, for example, has 9 games
Phil Endecott [EMAIL PROTECTED] writes:
Does this make sense? I imagine that the temporary table is being added
to these tables and then removed again.
Yes, a temp table has the same catalog infrastructure as a regular
table, so creation and deletion of a temp table will cause some activity
Michael Fuhr [EMAIL PROTECTED] writes:
On Mon, Dec 13, 2004 at 06:17:27PM +, Adam Witney wrote:
I would like to be able to select like this (this is a regular expression I
would do in perl)
Remember that the backslash (\) already has a special meaning in
PostgreSQL string literals.
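Concretely, each backslash in the Perl-style pattern has to be doubled so that it survives the string literal and reaches the regex engine intact (table name hypothetical):

```sql
-- '\\w' in the literal becomes \w for substring()'s POSIX regex;
-- with a capture group present, substring() returns group 1,
-- i.e. the prefix before the four digits.
SELECT substring(primer_name, '(\\w+)\\d\\d\\d\\dc*_[fr]$')
FROM primers;
```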
I'm attempting to select records from my postgresql database using php
based on whether someone is at least 17 years old on the date of a
particular visit.
My sql is:
$db_sql = "SELECT * FROM list WHERE ((visit_date - birth_date) >= 17)";
$db_result = db_exec($db_sql);
$num =
On Mon, 2004-12-13 at 15:15, Kall, Bruce A. wrote:
I'm attempting to select records from my postgresql database using php
based on whether someone is at least 17 years old on the date of a
particular visit.
My sql is:
$db_sql = "SELECT * FROM list WHERE ((visit_date - birth_date) >= 17)";
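As a sketch of one way to express the age test correctly: subtracting two dates yields an integer number of days, not years, so the comparison above never does what was intended. PostgreSQL's age() function returns an interval that compares naturally against '17 years' (column names taken from the post):

```sql
-- age(visit_date, birth_date) returns an interval such as
-- '17 years 3 mons 2 days', which can be compared with another interval.
SELECT *
FROM list
WHERE age(visit_date, birth_date) >= interval '17 years';
```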
Marek Lewczuk wrote:
I've made some tests with PL/Perl and I see that when a PL/Perl function is
executed for the first time, it takes much more time. I know that
this is a shared-library problem - is there a way to preload plperl for
every connection, or maybe I can build plperl into postgresql
It seems that upon dump restore, UPPER indexes either aren't recreated
correctly or are not listed somewhere the query analyzer can know they exist.
I first encountered this problem doing an upgrade from
7.3.7 to 7.4.6. I again encountered this problem replicating a server
(same 7.4.6
I don't know why they use OIDs for cursors. But I do know that if you
run a trace, the SQL that creates the cursor uses OIDs,
so it doesn't work if the table is created without OIDs. Also, if you
want to have updateable cursors against views (i.e., a view with rules
for INSERT, UPDATE, and
Mark Dexter [EMAIL PROTECTED] writes:
For what it's worth, OIDs are required if you ever want to use
updateable cursors with the ODBC driver. We discovered this the hard
way. Mark Dexter
That's unfortunate. Is it because it's difficult to track down the primary key
of the table? Is it
Phil Endecott [EMAIL PROTECTED] writes:
What would happen if I were to rollback at the end of the transaction,
rather than committing (having made no changes)? Would that eliminate
some or all of the catalog writes?
It would avoid fsync'ing the changes at commit time, but not really
reduce
Certainly did analyze. Here are the query plans. Note the non-UPPER query
uses an index scan just fine.
INFO: analyzing public.fin_vendors
INFO: fin_vendors: 4207 pages, 3000 rows sampled, 63063 estimated
total rows
ANALYZE
talisman=# explain analyze select * from fin_vendors where name like
Hello all,
I have just loaded Postgresql 7.3.6-7 onto a new server on the
recommendation of Tom Lane. It is part of Red Hat AS 3.
I have Postgresql 7.1.3-5 running on Red Hat AS 2.1.
I have a simple view from which I select on both systems. The 7.3.6-7
version requires 18+ seconds to do a
Yeah, that suggestion sounds good as long as you ensure that the sort
column has sufficient precision to handle the in-between values. I
would suggest checking for value-above and value-below when inserting,
then using their midpoint. In the event that there is no value-above,
add some
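The midpoint idea can be sketched like this (table and column names hypothetical):

```sql
-- Place a new row between two neighbours by giving it the midpoint of
-- their sort keys; numeric keeps enough precision for many successive
-- midpoint insertions before any renumbering is needed.
CREATE TABLE items (label text, sort_key numeric);
INSERT INTO items VALUES ('a', 2.0);
INSERT INTO items VALUES ('c', 3.0);
INSERT INTO items VALUES ('b', (2.0 + 3.0) / 2);  -- sort_key 2.5
SELECT label FROM items ORDER BY sort_key;        -- a, b, c
```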
On Mon, 13 Dec 2004, Magnus Hagander wrote:
When I tried to subscribe I got:
Not Found
The requested URL /mj/mj_wwwusr was not found on this server.
How/where did you subscribe from? We made some changes this
past weekend
to deal with some issues that were reported, so the URL should be
William Yu [EMAIL PROTECTED] writes:
It seems that upon dump restore, UPPER indexes either aren't recreated
correctly or are not listed somewhere the query analyzer can know they exist.
Seems unlikely. Perhaps you forgot to ANALYZE after reloading?
regards, tom lane
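A minimal check along those lines (the table name appears later in the thread, but the exact index definition is an assumption):

```sql
-- After restoring a dump, the statistics tables are empty until ANALYZE
-- runs; without statistics the planner may never pick the expression
-- index, which looks exactly like a missing index.
CREATE INDEX idx_vendors_upper ON fin_vendors (upper(name));
ANALYZE fin_vendors;
EXPLAIN SELECT * FROM fin_vendors WHERE upper(name) = 'NBC';
```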
I assume that the schema is identical on both systems.
After running vacuum on both systems [for each of the underlying tables
in tpv], what does explain say about the queries?
Are the shared memory buffers identical on both systems?
I'm in a situation where it would be useful to bind a field in a table via
foreign keys to N other tables simultaneously. The table holds a common
type of info which all those other tables use. The many tables refer to the
common table by keeping references to its serial field.
By doing this, I
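The setup being described can be sketched as follows (all table names hypothetical): several tables each carry a foreign key into one shared table, referring to its serial primary key.

```sql
-- One common table; many referring tables point at its serial key.
CREATE TABLE common_info (
    id   serial PRIMARY KEY,
    note text NOT NULL
);
CREATE TABLE orders (
    order_id serial PRIMARY KEY,
    info_id  integer NOT NULL REFERENCES common_info (id)
);
CREATE TABLE shipments (
    shipment_id serial PRIMARY KEY,
    info_id     integer NOT NULL REFERENCES common_info (id)
);
```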
Tom Lane [EMAIL PROTECTED] writes:
I wonder if it's actually corrupt, or if it's just that the index
semantics don't truly match the operator. If the latter, REINDEXing
won't fix it.
I think the index always worked properly in the past. But of course it would
be hard to tell if that was
Greg Stark [EMAIL PROTECTED] writes:
So you don't think this case is worth doing forensics on?
If the problem goes away after REINDEX then I'll write it off as missing
WAL support. rtree is not high enough on my list of priorities to
justify more effort :-(
regards, tom
Hello...
I want to increase my max_connections up to 128
connections, but I was told that I need to recompile my
kernel. I'm a newbie with postgresql and freebsd. How do I
increase max_connections and recompile the freebsd kernel?
Help me please.
Thanks
On Sat, 11 Dec 2004 10:44:59 -0600, Wes wrote:
On 12/9/04 9:23 AM, Peter Haworth [EMAIL PROTECTED] wrote:
It runs RHEL ES v3, kernel 2.4.21-20.ELsmp
It's generally a very stable box which runs a number of postgresql
instances. But last night we hit very high load averages - 10+, vs the
Greetings all,
I'm trying to drop template1 so that I can re-create it from
template0 - I made the mistake of loading a whole pile of crud into
template1 (luckily it's a test installation on my WinXP Pro laptop).
However, when using -
dropdb -U postgres -W template1
at the command line,
Traditionally, PostgreSQL has consciously omitted such things where
they would merely be replicating existing operating system
functionality.
On Unix, cron is the traditional service that provides this
functionality.
I think there's a port to Windows NT, so you could presumably use that
if you
You might want to check some of these sites:
http://www.silverwraith.com/papers/freebsd-kernel.php
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/kernelconfig.html
Anyone have a clue why he would need to recompile his kernel for this,
though?
On Dec 14, 2004, at 5:46 AM, Hengki
Raymond O'Donnell [EMAIL PROTECTED] writes:
dropdb -U postgres -W template1
at the command line, all I get back is dropdb: database removal
failed: ERROR: cannot drop the currently open database.
That's because the dropdb command itself connects to template1.
See the article on
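One commonly cited workaround is to connect as a superuser to some database other than template1 and run roughly the following (a sketch; datistemplate must be cleared before the drop is allowed):

```sql
-- While connected to any database other than template1:
UPDATE pg_database SET datistemplate = false WHERE datname = 'template1';
DROP DATABASE template1;
CREATE DATABASE template1 TEMPLATE template0;
UPDATE pg_database SET datistemplate = true WHERE datname = 'template1';
```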
On Fri, 2004-12-10 at 17:49, Nilesh Doshi wrote:
Hi All,
I'm new to postgres, so I need your help.
We are in the process of migrating from oracle to postgres. DB size is about
400gb.
My question is about schemas in oracle and postgres. Does every schema in
Oracle become a separate
On Tue, 2004-12-14 at 07:49, Christopher Browne wrote:
Traditionally, PostgreSQL has consciously omitted such things where
they would merely be replicating existing operating system
functionality.
On Unix, cron is the traditional service that provides this
functionality.
I think there's
Hi,
it wasn't that! :)
That brrr.,*.:$;,^%^%roaaggh BDE :{ on the client side... that was the
problem.
There were stuck connections in the BDE stack. After closing all BDE
client, the problem went away.
Sorry for disturbing ...
I discover again and again that Postgres is really GOOD. If
Hi!
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Guy Rouillier
Sent: Monday, December 13, 2004 07:17
To: PostgreSQL General
Subject: [GENERAL] High volume inserts - more disks or more CPUs?
(1) Would we be better off with more
Guy Rouillier wrote:
Seeking advice on system configuration (and I have read the techdocs.)
Probably worth reading the archives for the performance list.
We are converting a data collection system from Oracle to PostgreSQL
8.0. We are currently getting about 64 million rows per month; data is
put
Thank you for your answer. I think it's very interesting behaviour. Is
it a feature or a bug?
I have tried this with my JUnit test on other DB systems (e.g. Oracle 9i,
MS SQL Server 2000, MySQL, DB2, Sybase, SAP DB) and it works for each of
these databases (it was possible to run the next command
On Mon, Dec 13, 2004 at 17:04:17 +0100,
[EMAIL PROTECTED] wrote:
Thank you for your answer. I think it's very interesting behaviour. Is
it a feature or bug ?
Until version 8 (which is in release candidate status now), there was
no way to recover from an error within a transaction other than
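The version-8 mechanism referred to is SAVEPOINT; a minimal sketch (table name hypothetical):

```sql
-- ROLLBACK TO SAVEPOINT discards only the failed part and lets the
-- surrounding transaction continue.
BEGIN;
INSERT INTO t VALUES (1);
SAVEPOINT before_risky;
INSERT INTO t VALUES (1);            -- fails: duplicate key
ROLLBACK TO SAVEPOINT before_risky;  -- transaction usable again
INSERT INTO t VALUES (2);            -- works
COMMIT;                              -- rows 1 and 2 are committed
```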
On Mon, 2004-12-13 at 10:46, Jimmie H. Apsey wrote:
When I tried to subscribe I got:
Not Found
The requested URL /mj/mj_wwwusr was not found on this server.
Apache/1.3.33 Server at webmail.anachronic.net Port 80
Hi all,
PostgreSQL v8.0.0rc1, two variants of a user_msg table:
create table user_msg (
    message_id integer not null references message(id)
        on update cascade on delete cascade,
    user_id integer not null,
    status smallint not null default 0,
    is_read boolean not null default false,
    unique
On Mon, Dec 13, 2004 at 06:17:27PM +, Adam Witney wrote:
I am trying to select a part of a text field based on a regular expression,
the data looks like this
Rv0001c_f
Rv0002_r
Rv1003c_r
Etc
I would like to be able to select like this (this is a regular expression I
would do in
On Mon, Dec 13, 2004 at 19:37:41 +0100,
Janning Vygen [EMAIL PROTECTED] wrote:
the other reason why I wanted gapless sequences was that I would love to use
the id in a URL. But it is easy to translate a positional id in
a URL to the database id.
For this you probably
Hi Tom,
I thought of a quicker way to investigate this than strace and did an ls
-lt in the data directory and looked up the tables that seem to change
on every transaction in pg_class. They are the catalog tables:
# ls -lt /var/lib/postgres/data/base/17142/
total 530108
-rw---1
Igor Shevchenko [EMAIL PROTECTED] writes:
In both cases, tables are filled with ~10m of rows, is_read is false in the
1st case, and NULL in the 2nd. I did VACUUM FULL ANALYSE after both
imports.
Here's the problem: in the 2nd case, planner wouldn't choose an index scan
using partial index
On Mon, Dec 13, 2004 at 20:18:57 +0200,
Igor Shevchenko [EMAIL PROTECTED] wrote:
Here's the problem: in the 2nd case, planner wouldn't choose an index scan
using partial index on is_read for the following queries:
explain select * from user_msg where is_read=true;
explain select * from
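For context, a sketch of the partial-index setup (reconstructed from the earlier post, so the index definition is an assumption):

```sql
-- The planner uses a partial index only when it can prove that the
-- query's WHERE clause implies the index predicate; an identical
-- predicate is the one case that is always provable.
CREATE INDEX idx_user_msg_read ON user_msg (user_id) WHERE is_read;
EXPLAIN SELECT * FROM user_msg WHERE is_read;  -- predicate matches exactly
```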
Tom Lane wrote:
Phil Endecott [EMAIL PROTECTED] writes:
Does this make sense? I imagine that the temporary table is being added
to these tables and then removed again.
Yes, a temp table has the same catalog infrastructure as a regular
table, so creation and deletion of a temp table will cause
I have what appears to be a corrupt RTREE index.
The first query shows that of the fifteen records I'm looking at, every one of
them has the @ based condition showing as true. The second shows one record
that really ought to be there not being listed.
I just tried the second query with
William Yu [EMAIL PROTECTED] writes:
Index Scan using idx_finvendors_name on fin_vendors (cost=0.00..4.01
rows=1 width=600) (actual time=0.029..0.036 rows=2 loops=1)
Index Cond: ((name >= 'NBC'::bpchar) AND (name < 'NBD'::bpchar))
Filter: (name ~~ 'NBC%'::text)
Hmm. Apparently
Greg Stark [EMAIL PROTECTED] writes:
I have what appears to be a corrupt RTREE index.
I wonder if it's actually corrupt, or if it's just that the index
semantics don't truly match the operator. If the latter, REINDEXing
won't fix it.
As for the first theory, have you had any database crashes
On Tue, Dec 14, 2004 at 02:06:24 +0200,
Ciprian Popovici [EMAIL PROTECTED] wrote:
I'm in a situation where it would be useful to bind a field in a table via
foreign keys to N other tables simultaneously. The table holds a common
type of info which all those other tables use. The many tables
Tom Lane [EMAIL PROTECTED] writes:
Greg Stark [EMAIL PROTECTED] writes:
So you don't think this case is worth doing forensics on?
If the problem goes away after REINDEX then I'll write it off as missing
WAL support. rtree is not high enough on my list of priorities to
justify more effort
I'm using 7.3.4; is there another way?
On Monday 13 December 2004 14:52, Shridhar Daithankar wrote:
On Thursday 09 Dec 2004 10:37 am, JM wrote:
Hi ALL,
I'm wondering whether sooner or later my disk will be filled up by postgres's
data..
Can anyone give some suggestions on how to deal with
On Tue, Dec 14, 2004 at 02:04:08PM +0800, Jerome Macaranas wrote:
On Monday 13 December 2004 14:52, Shridhar Daithankar wrote:
You could use tablespaces in postgresql 8.0..
Im using 7.3.4 is there another way?
See my earlier followup in this thread:
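For reference, the 8.0 tablespace approach mentioned earlier looks roughly like this (path and names are hypothetical):

```sql
-- A tablespace places new objects on a different filesystem/disk;
-- the directory must exist and be owned by the postgres user.
CREATE TABLESPACE bigdisk LOCATION '/mnt/disk2/pg_tblspc';
CREATE TABLE archive_log (id serial, payload text) TABLESPACE bigdisk;
```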
Hi,
I've made some tests with PL/Perl and I see that when a PL/Perl function is
executed for the first time, it takes much more time. I know that
this is a shared-library problem - is there a way to preload plperl for
every connection, or maybe I can build plperl into the postgresql source?
Thanks in
Joe Conway wrote:
Marek Lewczuk wrote:
I've made some tests with PL/Perl and I see that when a PL/Perl function
is executed for the first time, it takes much more time. I know
that this is a shared-library problem - is there a way to preload
plperl for every connection, or maybe I can build