Hi,
In Oracle, there are 2 ways to do the equivalent of vacuum analyze :
* analyze table xx compute statistics
* analyze table xx estimate statistics
In the second form, you can tell on what percentage of the file you
will do your
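For reference, the two Oracle forms look roughly like this (the SAMPLE percentage clause is a sketch from memory and may vary by Oracle version):

```sql
-- Compute exact statistics by reading the whole table:
ANALYZE TABLE xx COMPUTE STATISTICS;

-- Estimate statistics from a sample of the rows:
ANALYZE TABLE xx ESTIMATE STATISTICS SAMPLE 10 PERCENT;
```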
You can create an auto incrementing field with SERIAL. Take a
look at the FAQ
(http://www.postgresql.org/docs/faq-english.html#4.16.1).
Brent
--- Raymond Chui [EMAIL PROTECTED] wrote:
If I create a table like
create table tablename (
aNum    integer not null,
name    varchar(10)
);
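A sketch of the same table using SERIAL for the auto-incrementing column (SERIAL expands to an integer column with a NOT NULL constraint and a DEFAULT drawn from an implicitly created sequence):

```sql
CREATE TABLE tablename (
    aNum    SERIAL,       -- integer NOT NULL DEFAULT nextval('tablename_anum_seq')
    name    varchar(10)
);

-- Omit aNum on INSERT and the sequence fills it in:
INSERT INTO tablename (name) VALUES ('first');
```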
Yes there is. You can find it in the TODO list under
Performance - Indexes
(http://www.postgresql.org/docs/todo.html). It isn't
slated for the 7.1 release however.
Brent
--- adb [EMAIL PROTECTED] wrote:
I've noticed that select max(the_primary_key) from
some_table
does a table scan. Is
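The usual workaround (not in the truncated message, but widely suggested on these lists) is to rewrite the aggregate as an ORDER BY ... LIMIT query, which the planner can satisfy from the index:

```sql
-- Instead of: SELECT max(the_primary_key) FROM some_table;
SELECT the_primary_key
FROM some_table
ORDER BY the_primary_key DESC
LIMIT 1;
```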
On Tuesday 20 February 2001 13:38, Thierry Besancon wrote:
Hello
I'd like to run postmaster on a workstation with several IP
addresses. What I'd like is to have it bind to one and only one of
those IP addresses.
maybe you can block incoming connections to pgport on the other ip
on my DB server I have 245 "idle" postmaster processes for 130 persistent
httpd DB connections.
Can anyone give me a clue how to manage this number of unused/idle
children and how to have them deallocated more quickly?
--
EDENJOB / APR-Job
Email: [EMAIL PROTECTED]  Home:
There currently is no simple SQL command that accomplishes this.
It can be accomplished by creating an identical new table with
the bool data type change and then running a "INSERT INTO
new_table (SELECT * FROM old_table)". Then you can check your
results, drop the old table, and rename the new
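A minimal sketch of that procedure, assuming a hypothetical old_table whose flag column should become bool (all names here are illustrative):

```sql
-- New table, identical except for the changed column type:
CREATE TABLE new_table (
    id   integer NOT NULL,
    flag bool
);

INSERT INTO new_table (SELECT * FROM old_table);

-- After checking the results:
DROP TABLE old_table;
ALTER TABLE new_table RENAME TO old_table;
```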
I ran into the same issue over the weekend. If you look in the
pgsql-ports email archives or the cygwin email archives you will see
that this is a known problem with cygwin 1.1.8. (I believe it is fixed
in current sources for cygwin). The workaround is to install cygwin
1.1.7. That solved
Hooray! These instructions are just what an
almost-novice needs.
With the exception of changing the password to
'postgresql', the procedures started smoothly.
Ran into a hitch at 'make' which reported that 'bison'
was not installed. I'm running debian potato, so used
the apt-get install of
Hi,
I am currently porting a database from MS Access to
PostgreSQL. I have many views and occasionally some UNIONS among these VIEWS.
I have to keep PostgreSQL 7.0.2 for the moment
(that's what my ISP provides).
So I need to know if anyone has any suggestions
about how to simulate a
Hi,
regarding my earlier question about using -F from Perl, would the following
be the correct line to do it?
$dbh = DBI->connect("dbi:Pg:dbname=logs;options=-F");
--
Konstantin Agouros - NetAge Solutions, Dingolfinger Str. 6, 81673 Muenchen
Tel.: 089 666584-0, Fax: 089 666584-11, Email:
From: "Emmanuel Pierre" [EMAIL PROTECTED]
I am republishing my question because I had no answer, and this is a serious
problem for me... I've used explain, vacuum, indexes... and so on, few
nested queries...
Doesn't appear to be on the list.
I am running PGSql 7.0.3 over Linux 2/ELF with a
If you are going to install DBD::Pg you need lib and include directories
just to install the module
On Tue, Feb 20, 2001 at 04:29:34PM +0100, Jose Manuel Lorenzo Lopez wrote:
Hello PG's,
I have a question concerning the DBI module for postgresql.
I want to use the DBI interface for
Bruce Momjian [EMAIL PROTECTED] writes:
No, we have no ability to randomly pick rows to use for estimating
statistics. Should we have this ability?
That would be really slick, especially given the fact that VACUUM runs
much faster than VACUUM ANALYZE for a lot of PG users. I could
Bill Barnes [EMAIL PROTECTED] writes:
Ran into a hitch at 'make' which reported that 'bison'
was not installed. I'm running debian potato, so used
the apt-get install of bison. Bison is installed in
/usr/bin. I copied it to /home/billb/pgsql.
Still getting the 'bison missing' message.
Bruce Momjian writes:
No, we have no ability to randomly pick rows to use for estimating
statistics. Should we have this ability?
How's reading a sufficiently large fraction of random rows going to be
significantly faster than reading all rows? If you're just going to read
the first n rows
Hi all,
My pgsql DBMS works great except that when I want to dump a database,
pg_dump says that database template1 doesn't exist (actually it does!)
and the dump is aborted...
Any ideas ???
Thanks a lot
Lionel
What you describe is what we do. Full history of all actions in the
data tables are stored elsewhere via a trigger on INSERT, UPDATE /
DELETE and a generic function written in C (to get the transaction ID
they were a part of for postdated rollbacks or transactions where
applicable -- unmodified
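A much-simplified sketch of such a history trigger in PL/pgSQL (the poster uses a C function to also capture the transaction ID; the table and column names below are hypothetical):

```sql
CREATE TABLE important_history (
    changed_at timestamp DEFAULT now(),
    operation  text,      -- 'UPDATE' or 'DELETE', from TG_OP
    id         integer,
    payload    text
);

CREATE FUNCTION log_change() RETURNS opaque AS '
BEGIN
    -- Record the pre-change version of the row:
    INSERT INTO important_history (operation, id, payload)
        VALUES (TG_OP, OLD.id, OLD.payload);
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER important_audit
    AFTER UPDATE OR DELETE ON important_table
    FOR EACH ROW EXECUTE PROCEDURE log_change();
```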
Tressens Lionel [EMAIL PROTECTED] writes:
My pgsql DBMS works great except that when I want to dump a database,
pg_dump says that database template1 doesn't exist (actually it does!)
and the dump is aborted...
Curious. Can you connect to template1 by hand (eg "psql template1")?
If not, try
Bruce Momjian [EMAIL PROTECTED] writes:
How's reading a sufficiently large fraction of random rows going to be
significantly faster than reading all rows? If you're just going to read
the first n rows then that isn't really random, is it?
Ingres did this too, I thought. You could specify a
Thanks. That cleared the bison problem.
flex didn't work the same way though. Copied it also
to /home/billb/pgsql. Reported missing. Needs to go
someplace else?
TIA
Bill
--- Tom Lane [EMAIL PROTECTED] wrote:
Bill Barnes [EMAIL PROTECTED] writes:
Ran into a hitch at 'make' which
Hello,
In our app we must keep a trace of all changes (UPDATEs) done to an
important_table, so that it's possible to get a snapshot of a given
record at a given date.
The implementation strategy we are thinking about:
1. create an important_table_archive which inherits from
important_table,
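Step 1 of the strategy can be sketched as follows (a guess at the intended DDL; the later steps are cut off in the archive):

```sql
-- Child table gets all of important_table's columns via inheritance,
-- plus a timestamp recording when the row version was archived:
CREATE TABLE important_table_archive (
    archived_at timestamp NOT NULL DEFAULT now()
) INHERITS (important_table);
```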
On Mon, Feb 19, 2001 at 08:34:47PM -0600, Larry Rosenman wrote:
* Dave Edmondson [EMAIL PROTECTED] [010219 14:40]:
yes. I ran VACUUM ANALYZE after creating the indices. (Actually, I VACUUM
the database twice a day.) The data table literally has 145972 rows, and
145971 will match
To get a partial VACUUM ANALYZE that was actually usefully faster than
the current code, I think you'd have to read just a few percent of the
blocks, which means much less than a few percent of the rows ... unless
maybe you picked selected blocks but then used all the rows in those
blocks
Bruce Momjian [EMAIL PROTECTED] writes:
I find it hard to believe that VAC ANALYZE is all that much slower than
plain VACUUM anyway; fixing the indexes is the slowest part of VACUUM in
my experience. It would be useful to know exactly what the columns are
in a table where VAC ANALYZE is
Tom Lane writes:
Re-run configure, and watch to make sure that it finds bison this time.
You'll need flex too, if you intend to build from CVS sources.
And if you're going to use the ODBC drivers under Linux (or any other
OS that links C "strings" into read only memory) you'll need pretty
Joseph Shraibman [EMAIL PROTECTED] writes:
Maybe I'm not making myself understood. Another way of asking the same
thing:
Say there is a transaction that is looking at a non-current version of a
row. 'non-current' could be the value it was at the start of the
transaction (and was updated by
Joseph Shraibman [EMAIL PROTECTED] writes:
A caveat on this reply: I've been studying the Postgres internals, but
I have not mastered them.
I understand that keeping different views for different open
transactions can be difficult, but after a transaction that updates a
row is over why
Ian Lance Taylor wrote:
Joseph Shraibman [EMAIL PROTECTED] writes:
A caveat on this reply: I've been studying the Postgres internals, but
I have not mastered them.
I understand that keeping different views for different open
transactions can be difficult, but after a transaction
On Tue, 20 Feb 2001, Joseph Shraibman wrote:
That's because the estimate in this case was 50 and so it's estimating
that going through the index and checking the heap is faster than a
sequence scan. The *estimator* didn't use the index to figure that out,
it's just saying that the best
Err, I wasn't complaining about count(*) per se, I was just using that as a
simple example of something that should be done with an index. Because
if the index doesn't have to worry about rows that aren't current then
you don't even have to go into the heap because the index alone should
have enough
Joseph Shraibman [EMAIL PROTECTED] writes:
Then it should do the same thing no matter what value I use, but when I
do different searches in one case it estimates 50 when there are 16 and
in the other it estimates 502 where there are 502.
Well, it does know the difference between searching
Joseph Shraibman [EMAIL PROTECTED] writes:
I understand that keeping different views for different open
transactions can be difficult, but after a transaction that updates a
row is over why isn't the row marked as 'universally visible' for all
new transactions until another update
Stephan Szabo wrote:
On Tue, 20 Feb 2001, Joseph Shraibman wrote:
Stephan Szabo wrote:
Where are you seeing something that says the estimator/planner using the
index to get an upper bound? The estimator shouldn't be asking either the
index or the heap for anything, it should be
On Tue, 20 Feb 2001, Joseph Shraibman wrote:
Err, I wasn't complaining about count(*) per se, I was just using that as a
simple example of something that should be done with an index. Because
if the index doesn't have to worry about rows that aren't current then
you don't even have to go into
Re-Sending due to rejection after subscribing, before confirming.
Sorry if two make it through...
Background: Trying to use a Cobalt box that has PostgreSQL pre-installed.
I can change localhost "crypt" to "trust" in pg_hba.conf, but I don't
really want to do that long-term.
If I'm reading
On Tue, Feb 20, 2001 at 05:02:22PM -0800, Stephan Szabo wrote:
IIRC, There's something which is effectively :
estimated rows = most common value's frequency*fraction
I think fraction defaults to (is always?) 1/10 for the standard
index type. That's where the 50 comes from. And the
Ian Lance Taylor wrote:
Joseph Shraibman [EMAIL PROTECTED] writes:
I understand that keeping different views for different open
transactions can be difficult, but after a transaction that updates a
row is over why isn't the row marked as 'universally visible' for all
new
Joseph Shraibman [EMAIL PROTECTED] writes:
Why? There is a mechanism for keeping track of which heap tuples are
valid, why not index tuples? It is the nature of indices to be updated
on inserts, why not deletes?
An index is a hint: these tuples *might* be of interest to your
transaction.
Greetings,
Please, what am I doing wrong?
chris=# \d phone_prefix
Table "phone_prefix"
-[ RECORD 1 ]--
Attribute | number
Type | integer
Modifier | not null default nextval('"phone_prefix_number_seq"'::text)
-[ RECORD 2
On Wed, 21 Feb 2001, Christopher Sawtell wrote:
chris=# select phone_prefix.prefix order by random() limit 1 || '-' ||
lpad((random()*1)::int, 4, '0')::text as "Phone Number";
All the things you are selecting need to come in the first part of the
query.
like,
SELECT prefix || '-' ||
Christopher Sawtell writes:
chris=# select phone_prefix.prefix order by random() limit 1 || '-' ||
lpad((random()*1)::int, 4, '0')::text as "Phone Number";
ERROR: parser: parse error at or near "||"
This sure won't fix everything, but at the very least you need to
parenthesize that
Dan Lyke [EMAIL PROTECTED] writes:
So one might think that, with appropriate casting, something more
like:
select (select phone_prefix.prefix order by random() limit 1) || ...
would be more likely to work (modulo some casting and such).
Note this will not work in pre-7.1 releases --- 7.1
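Putting the pieces together, a sketch of the full query for 7.1 or later (note that lpad() wants text arguments, so the cast goes inside the call; column names follow the original example):

```sql
SELECT (SELECT prefix FROM phone_prefix ORDER BY random() LIMIT 1)
       || '-'
       || lpad(((random() * 9999)::int)::text, 4, '0') AS "Phone Number";
```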
Joseph Shraibman [EMAIL PROTECTED] writes:
Note that this all implies that when walking through the index to find
heap tuples, you must check the current validity of each heap tuple.
It is normal for an index tuple to point to a heap tuple which has
been deleted.
snip
I'm
Hey All,
We have a need to grant privileges on entire databases to users and/or
groups. It looks like GRANT just grants on tables and sequences, but I'd
like to know if there's a more direct way to do it. What I'm doing now
is getting a list of tables and sequences and calling grant for
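One common way to script that (an assumption on my part, not from the truncated message) is to generate the GRANT statements from the system catalogs and feed the output back into psql:

```sql
-- Emit one GRANT per user table and sequence; pipe the result back into psql.
-- "someuser" is a placeholder.
SELECT 'GRANT ALL ON "' || relname || '" TO someuser;'
FROM pg_class
WHERE relkind IN ('r', 'S')
  AND relname NOT LIKE 'pg\\_%';
```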