Antonios Christofides [EMAIL PROTECTED] writes:
Is this filenames-instead-of-BLOBs for easier backup common practice?
Any other ideas or comments?
This is a major point of contention. Some people think keeping all data in the
database is a better approach, others think data that isn't
Sean Chittenden [EMAIL PROTECTED] writes:
Some light weight multi-threaded proxy that
relays active connections to the backend and holds idle connections
more efficiently than PostgreSQL...
What excuse is there for postgres connections being heavyweight to begin with?
The only real resource
Dmitry Tkach [EMAIL PROTECTED] writes:
The good news though is that, if you drop (or disable) your pk index
That's what I did, except I had to cascade to the foreign keys and then
recreate them too. And you can't really recreate a primary key constraint, you
just get a unique index which I
Shelby Cain [EMAIL PROTECTED] writes:
My question is: is the builtin MD5 appropriate for this use or should I be
using a function from pl/something? Figures on collision rates would be
nice as well - the typical chunk of text is probably 1k-8k.
Note that MD5 is slow and CPU-intensive. By
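For reference, a minimal sketch of the digest-based duplicate check using the built-in md5() function (table and column names here are invented for illustration):

```sql
-- Store a digest alongside each text chunk and index it for fast lookups.
CREATE TABLE chunk (
    id     serial PRIMARY KEY,
    body   text NOT NULL,
    digest char(32) NOT NULL      -- md5() returns 32 hex characters
);
CREATE INDEX chunk_digest_idx ON chunk (digest);

-- Find pairs of chunks whose text already exists (same digest).
SELECT c1.id, c2.id
  FROM chunk c1
  JOIN chunk c2 ON c1.digest = c2.digest AND c1.id < c2.id;
```

With a 128-bit digest, birthday-paradox odds of an accidental collision stay far below 10^-18 even for billions of 1k-8k chunks, so catastrophic hardware failure is the larger risk.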
Simon Riggs [EMAIL PROTECTED] writes:
If you really do need that many, you can go to the trouble of grouping
them in two levels of nesting, so you have a root table, multiple month
tables and then each month table with multiple day tables (etc).
I wonder if testing deeply nested inheritance
Greg Sabino Mullane [EMAIL PROTECTED] writes:
I maintain that it makes more sense for those few people who regularly look
at system functions to add an S than to have everyone else have to do
things such as \df public.
@@aol(me too).
fwiw, i think few may be a bit optimistic here. Don't
David Pratt [EMAIL PROTECTED] writes:
I just want to get this right because it will be an important part of what I am
preparing. Sorry for the really long message but I don't know if it would make
any sense if I did not fully explain what I am wanting to do. I am not French
so excuse my
Tom Lane [EMAIL PROTECTED] writes:
Last chance for any Kerberos 4 users to speak up --- otherwise I'll
apply this soon.
If you just want someone to test it I can do that. I don't actually use it
normally though.
As far as security goes, the only issue I'm aware of is a) it uses plain DES
David Pratt [EMAIL PROTECTED] writes:
It was suggested that I look at an array.
I think that was me. I tried not to say there's only one way to do it. Only
that I chose to go this way and I think it has worked a lot better for me.
Having the text right there in the column saves a *lot* of
David Pratt [EMAIL PROTECTED] writes:
Hi Greg. Sorry for getting back to you so late on this. I think your idea on
the design is spot on since it will give me referential integrity with my other
and the multi-language will just be a simple two field table with id and
multi-dimensional
I wonder if you could make an SQL type that used text[] as its storage format
but had an output function that displayed the correct text for the current
locale. Where current locale could be something you set by calling a
function at the beginning of the transaction.
Do pg_dump and all the
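A full custom type would need C-level input/output functions, so here is a simplified sketch of the same idea with a plain lookup function and an explicit language argument instead of a per-transaction setting (all names are invented):

```sql
-- text[] storage: element 1 = English, element 2 = French, and so on.
CREATE TABLE message (
    id  serial PRIMARY KEY,
    txt text[] NOT NULL
);

-- Pick the element for a given language, falling back to element 1.
CREATE FUNCTION localized(t text[], lang int) RETURNS text AS $$
    SELECT coalesce($1[$2], $1[1])
$$ LANGUAGE sql STABLE;

-- Usage: SELECT localized(txt, 2) FROM message;   -- French
```

The real proposal above goes further by hiding the lookup in the type's output function, so plain SELECTs would already show the right language.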
Oleg Bartunov oleg@sai.msu.su writes:
Hi there,
sorry if just misunderstanding but we have contrib/hstore available from
http://www.sai.msu.su/~megera/postgres/gist/
which could be used for storing as many languages as you need.
It's sort of perl hash.
Huh. That's pretty neat. I don't
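A sketch of the hstore approach, one key per language (table name invented; requires contrib/hstore from the URL above):

```sql
-- One row per string, one hstore key per language.
CREATE TABLE message (
    id  serial PRIMARY KEY,
    txt hstore
);

INSERT INTO message (txt)
    VALUES ('en=>"Hello", fr=>"Bonjour", de=>"Hallo"');

-- Fetch one language with the -> operator:
SELECT txt -> 'fr' FROM message;
```

Unlike the text[] scheme this needs no agreed-on element numbering; languages can be added per row without touching the others.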
Oleg Bartunov oleg@sai.msu.su writes:
I'd like the search to be completely case insensitive. Can anyone point me in
the right direction?
use your own dictionary ! Read gendict tutorial for details.
http://www.sai.msu.su/~megera/oddmuse/index.cgi/Gendict
I would create simple
Net Virtual Mailing Lists [EMAIL PROTECTED] writes:
The query I want to run against these two tables is something like this:
SELECT
count(*) as count,
category.category,
nlevel(category.category) AS level,
subpath(category.category,0,nlevel(category.category)-1) as parent,
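The query is cut off above; a guess at its complete shape, assuming a second table of items tagged with ltree paths (the item table, its columns, and the join are all assumptions; <@ is contrib/ltree's "descendant of" operator):

```sql
-- Count items per category subtree, with each category's depth and parent.
SELECT count(item.id) AS count,
       category.category,
       nlevel(category.category) AS level,
       subpath(category.category, 0, nlevel(category.category) - 1) AS parent
  FROM category
  LEFT JOIN item ON item.category <@ category.category
 GROUP BY category.category;
```

Grouping by category.category is enough because level and parent are expressions over the grouped column.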
Tom Lane [EMAIL PROTECTED] writes:
Certainly the idea of not having to store a length word for CHAR(1) fields
is not going to inspire anyone to invest the effort involved ;-)
That's a pretty big motivation though. Storage space efficiency is a huge
factor in raw sequential scan speed.
This looks very strange to me:
staging=# select '1.10'::cidr;
    cidr
-------------
 1.10.0.0/16
(1 row)
The normal way to read 1.10 would be as synonymous with 1.0.0.10. This is
even mandated by the POSIX spec for getaddrinfo and company.
--
greg
---(end of broadcast)---
Michael Fuhr [EMAIL PROTECTED] writes:
On Thu, Jul 21, 2005 at 06:38:01PM -0400, Tom Lane wrote:
Greg Stark [EMAIL PROTECTED] writes:
The normal way to read 1.10 would be as synonymous with 1.0.0.10.
That might be the case for IPv6, but it's never been a standard
convention for IPv4
Bryan Field-Elliot [EMAIL PROTECTED] writes:
We have this simple query:
select status, count(*) from customer group by status;
There is already a btree index on status, but, the customer table is
huge, and this query must be executed very frequently... an explain on
this query shows that
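The usual workaround for a frequent count-by-status over a huge table is a small trigger-maintained summary table, so the hot query reads a handful of rows instead of scanning customer. A sketch under assumed names (8.0+ dollar quoting):

```sql
CREATE TABLE customer_status_count (
    status text PRIMARY KEY,
    n      bigint NOT NULL
);

CREATE FUNCTION customer_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE customer_status_count SET n = n + 1 WHERE status = NEW.status;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE customer_status_count SET n = n - 1 WHERE status = OLD.status;
    ELSE  -- UPDATE may move a row between statuses
        UPDATE customer_status_count SET n = n - 1 WHERE status = OLD.status;
        UPDATE customer_status_count SET n = n + 1 WHERE status = NEW.status;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customer_count AFTER INSERT OR UPDATE OR DELETE
    ON customer FOR EACH ROW EXECUTE PROCEDURE customer_count_trig();
```

The trade-off: every writer now contends on the counter rows, so this buys read speed at the cost of write throughput.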
Jan Wieck [EMAIL PROTECTED] writes:
Then again, the stats file is only written. There is nothing that actually
forces the blocks out. On a busy system, one individual stats file will be
created, written to, renamed, live for 500ms and be thrown away by the next
stat files rename operation. I
Jan Wieck [EMAIL PROTECTED] writes:
PostgreSQL itself doesn't work too well with tens of thousands of tables.
Really? AFAIK it should be pretty OK, assuming you are on a filesystem
that doesn't choke with tens of thousands of entries in a directory.
I think we should put down a TODO
Phil Endecott [EMAIL PROTECTED] writes:
Hello again,
Just to give a bit of background, in case it is useful: this is my family tree
website, treefic.com. I have a schema for each user, each with about a dozen
tables. In most cases the tables are small, i.e. tens of entries, but the users
John D. Burger [EMAIL PROTECTED] writes:
If you use a large enough space for the number you can reduce that
probability of an accidental collision to much less than that of
catastrophic hardware failure at which point it isn't noticably better
than having no chance of collisions.
I
Phil Endecott [EMAIL PROTECTED] writes:
Those aren't questions that I need to answer often.
But the fact that they're utterly infeasible in your current design is a bad
sign. Just because you don't need them now doesn't mean you won't need
*something* that spans users later. Sometimes you
Phil Endecott [EMAIL PROTECTED] writes:
Greg Stark wrote:
You're omitting the time spent finding the actual table for the correct
user in your current scheme. That's exactly the same as the log(u) factor
above.
I hope not - can anyone confirm?
I have the impression that within
Tom Lane [EMAIL PROTECTED] writes:
Salem Berhanu [EMAIL PROTECTED] writes:
What exactly is the SQL_ASCII encoding in postgres?
SQL_ASCII isn't so much an encoding as the declaration that you don't
care about encodings.
It's too late to consider renaming this SQL_RAW or something like
Bruce Momjian pgman@candle.pha.pa.us writes:
E'' is more a marker than a type. I realize making E a type might work,
but it seems unusual.
What we could do is backpatch E'' to 8.0.X as a no-op like it will be in
8.1.
What happens if someone already has a type called e ?
--
greg
Bruce Momjian pgman@candle.pha.pa.us writes:
What happens if someone already has a type called e ?
That would be the same problem as someone having a type 'b' or 'x'. It
would still work, but not for casts like text'str'. Those letters are
caught in the lexer before getting into the
Roman Neuhauser [EMAIL PROTECTED] writes:
Yeah, it would be nice if select, insert, update, delete could
be set to log their progress at configured intervals.
One feature that would be handy for doing this would be if Postgres supported
READ DIRTY. Few databases support it, and
Tom Lane [EMAIL PROTECTED] writes:
Would it? My first thought on reading the OP was to recommend
contrib/pgstattuple, but I refrained after thinking that if the table
is all that big, the last thing you need is someone doing a seqscan of
the whole table to see where you are. Much less
I have a job that does a big batch delete/insert. I want it to vacuum (or
probably cluster) when it's finished. But I figure I should sleep for a while
before doing the vacuum to be sure there are no old transactions still
running.
Is there a simple query I can have it do against the system
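One rough answer, sketched against pg_stat_activity (query_start tracks the current query rather than the transaction start, and needs stats_command_string enabled, so this is an approximation, not a guarantee):

```sql
-- Count other backends that have been busy since before the batch job
-- finished; sleep and retry until this returns zero, then vacuum.
SELECT count(*)
  FROM pg_stat_activity
 WHERE query_start < now() - interval '10 minutes'
   AND procpid <> pg_backend_pid();
```

The procpid column is the pre-9.2 spelling; the interval is a placeholder for however long the batch job is expected to overlap old transactions.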
Kevin Murphy [EMAIL PROTECTED] writes:
This is just an FYI for other people out there with large tables: table
clustering sped up my queries from 10-100 times, which I am very happy about.
I'm posting this in case it's ever useful to anybody. If someone reading this
feels that I did
Junaili Lie [EMAIL PROTECTED] writes:
Quick questions:
For big tables with frequent insert, no update, and frequent read
(using indexes), will clustering help?
what should be done on such table other than regular analyze?
comments are appreciated.
If you never have any deletes or updates
Oliver Jowett [EMAIL PROTECTED] writes:
Bernard was also objecting to the overhead of pushing the data down a
TCP pipe when it's already available locally, I think.. I didn't find
any real difference there when I compared the two methods, though.
What makes you think it's necessarily
William Yu [EMAIL PROTECTED] writes:
Allocation of unique IDs that don't collide across servers is a must. For one
project, instead of using numeric IDs, we use CHAR and prepend a unique
server code so record #1 on server A is A01 versus ?x01 on other
servers. For the
Ian Harding [EMAIL PROTECTED] writes:
Brand X doesn't do it in their backend either.
If your Brand X is the same as my Brand X then it's worth noting that they
didn't previously do anything sane in their backend. It used to invalidate all
your views and you had to recompile them before they
Douglas McNaught [EMAIL PROTECTED] writes:
You can use \copy in 'psql' on the client side, but you have to be a
superuser to do COPY on the server side, for security reasons.
I wonder if there's any way to relax this constraint.
If you're connected via a unix domain socket we can know the
Douglas McNaught [EMAIL PROTECTED] writes:
Greg Stark [EMAIL PROTECTED] writes:
If you're connected via a unix domain socket we can know the UID of the client
end. I don't see reproducing the entire unix semantics, but if the file is owned
by the same uid as the user connecting it seems
Bruce Momjian pgman@candle.pha.pa.us writes:
How is this different from materialized views, which is already on the
TODO list?
It's entirely unrelated.
Materialized views are about having all the data stored in advance. They're
really tables that have some sort of process to keep the data in
Bruce Momjian pgman@candle.pha.pa.us writes:
Well, I just added to TODO:
* Allow VIEW/RULE recompilation when the underlying tables change
Is dynamic view an industry-standard name? If so, I will add it to the
TODO.
DYNAMIC is something I made up.
ALTER VIEW RECOMPILE is Oraclese
Tom Lane [EMAIL PROTECTED] writes:
Jim C. Nasby [EMAIL PROTECTED] writes:
How is this different from materialized views, which is already on the
TODO list?
The idea behind the DYNAMIC VIEW is that if you made a DDL change in the
table it could be reflected in the view. So for example,
Martijn van Oosterhout kleptog@svana.org writes:
On Sat, Aug 27, 2005 at 01:20:29AM -0400, Greg Stark wrote:
For that matter it might be handy to be able to grant permission to regular
users to load or dump files to arbitrary locations. The security consequences
would have
Martijn van Oosterhout kleptog@svana.org writes:
There's a lot of discussion about this, yet no-one has demonstrated that
COPY FROM STDIN isn't just as good and avoids all the issues entirely.
In any case here's some quick results from my system. There seems to be a
greater than 21% slowdown
John D. Burger [EMAIL PROTECTED] writes:
Well, they would have access to every world readable file on the
system, ie /etc, /usr, /lib, ... most files are world readable. There's
a lot of discussion about this, yet no-one has demonstrated that COPY
FROM STDIN isn't just as good and avoids
Tom Lane [EMAIL PROTECTED] writes:
Greg Stark [EMAIL PROTECTED] writes:
In any case here's some quick results from my system. There seems to be a
greater than 21% slowdown associated with piping the data through two
processes instead of reading directly.
Well, if the penalty is order
CSN [EMAIL PROTECTED] writes:
I'm considering setting up a script that listens for
notifications for a table and if a row is deleted the
script will delete that row's corresponding files.
One way to deal with this would be to have a boolean flag in the table like
deleted. Update that flag
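A sketch of that flag pattern (names invented): the application flags instead of deleting, the listening script removes the files, and only then are the rows purged, so a crashed script loses no work.

```sql
ALTER TABLE attachment ADD COLUMN deleted boolean NOT NULL DEFAULT false;

-- Applications do this instead of DELETE:
UPDATE attachment SET deleted = true WHERE id = 42;
NOTIFY attachment_gc;

-- The LISTENing script reads the flagged rows, unlinks the files on
-- disk, and finishes with:
DELETE FROM attachment WHERE deleted;
```

On restart the script can simply re-run the flagged-row query, which also covers notifications missed while it was down.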
Clodoaldo Pinto [EMAIL PROTECTED] writes:
2005/8/29, Tom Lane [EMAIL PROTECTED]:
What is the function doing to the table, exactly? DDL changes generally
take exclusive locks ...
This is the transaction:
begin;
select update_last_date();
truncate times_producao;
I think truncate
Bernard [EMAIL PROTECTED] writes:
The postgresql.org server is the only braindead list server I have
seen so far.
Well, welcome to the real Internet.
--
greg
Clodoaldo Pinto [EMAIL PROTECTED] writes:
I'm already doing a vacuum (not full) once a day.
A vacuum full or a cluster is totally out of reach since each take
about one hour. The biggest table is 170 million rows long.
Well a regular vacuum will mark the free space for reuse. If you insert
Antimon [EMAIL PROTECTED] writes:
Thanks for the reply.
I checked new 5.1 pg_ functions and i wanna ask something else. What do
you think about PDO? It is not an abstraction layer, just something
like wrapper. I thought as it supports both widely used dbmss, php
developers would focus on it
Scott Marlowe [EMAIL PROTECTED] writes:
Plus, how is the server supposed to KNOW that you have access to the
file? psql may know who you are, but the server only knows who you are
in the postgresql sense, not the OS sense.
My original suggestion was that clients connected via unix domain
Roman Neuhauser [EMAIL PROTECTED] writes:
I'm looking for an equivalent of my_composite_type[] for use as a
parameter of a pl/pgsql function. What do people use to dodge this
limitation?
Background: I have a few plpgsql functions that basically accept an
array of objects decomposed into
Tom Lane [EMAIL PROTECTED] writes:
Alvaro Herrera [EMAIL PROTECTED] writes:
Yes, if it is selective enough. (It _can_ use the index, which does not
mean that it _will_ use it.) Note that if your example query used the
columns (lname, dob), the answer would be no.
Actually, that last
Chris Browne [EMAIL PROTECTED] writes:
Well, what you clearly want/need is asynchronous multimaster...
I didn't catch anything in his description that answered whether he needs
multimaster or a simple single master with many slaves model would suffice.
I'm involved with Slony-I, which is
John DeSoi [EMAIL PROTECTED] writes:
If you need data to propagate from the clients back to the server then things
get more complicated. Even then you could side step a lot of headaches if you
can structure the application in specific ways, such as guaranteeing that the
John Seberg [EMAIL PROTECTED] writes:
I recently tried to CREATE a UNIQUE INDEX and could
not, due to duplicate values:
CREATE UNIQUE INDEX usr_login ON usr (login);
To try to find the offending row(s), I then executed
the following:
SELECT count(*), login FROM usr GROUP BY login
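(The reply is cut off; one standard refinement of that query, which may or may not be what followed in the thread, is a HAVING clause so only the offending values are listed:)

```sql
-- Show just the login values that block the unique index.
SELECT login, count(*)
  FROM usr
 GROUP BY login
HAVING count(*) > 1;
```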
Tom Lane [EMAIL PROTECTED] writes:
If that does change the results, it indicates you've got strings which
are bytewise different but compare equal according to strcoll(). We've
seen this and other misbehaviors from some locale definitions when faced
with data that is invalid per the
Greg Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
If that does change the results, it indicates you've got strings which
are bytewise different but compare equal according to strcoll(). We've
seen this and other misbehaviors from some locale definitions when faced
Martijn van Oosterhout kleptog@svana.org writes:
On Sat, Sep 17, 2005 at 12:45:17PM -0500, Mike Nolan wrote:
I don't know if it's guaranteed by spec, but it certainly seems silly
for strings to compare equal when they're not. Just because a locale
sorts ignoring case doesn't mean that
Guy Fraser [EMAIL PROTECTED] writes:
So to answer his question he would likely want :
SELECT
array_upper(item,1) - array_lower(item,1) + 1 as elements
FROM
arraytest ;
Note that this doesn't work for empty arrays.
It will return NULL instead of 0.
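A null-safe variant wraps the arithmetic in coalesce so empty arrays count as 0:

```sql
SELECT coalesce(array_upper(item, 1) - array_lower(item, 1) + 1, 0)
           AS elements
  FROM arraytest;
```

(Much later releases, 8.4 and up, offer coalesce(array_length(item, 1), 0) for the same job.)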
--
greg
Neil Conway [EMAIL PROTECTED] writes:
I think debbugs is fairly close to what we'd need, for reasons stated
earlier:
http://archives.postgresql.org/pgsql-hackers/2005-05/msg01156.php
(I think Bugzilla is *completely* the wrong tool for the Postgres
development model.)
I've heard vague
Tom Lane [EMAIL PROTECTED] writes:
If you write, say,
select max(relpages) from pg_class;
and the lexer thinks that it should fold unquoted identifiers to upper
case, then the catalog entries defining these names had better read
PG_CLASS, RELPAGES, and MAX, not the lower-case
Tom Lane [EMAIL PROTECTED] writes:
Greg Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
and the lexer thinks that it should fold unquoted identifiers to upper
case, then the catalog entries defining these names had better read
PG_CLASS, RELPAGES, and MAX, not the lower
Tom Lane [EMAIL PROTECTED] writes:
Greg Stark [EMAIL PROTECTED] writes:
So two identifiers match if either is an unquoted identifier and they match
case insensitively. Or if both are quoted and they match case sensitively.
Which part of adhere to the standard are you failing to get
Tom Lane [EMAIL PROTECTED] writes:
And it's been looked at a few times, and rejected as being far too
bug-prone. The number of ways to screw up by using physical column
number where you should have used logical, or vice versa, is daunting.
One way to make sure there are no such bugs would be
Guy Rouillier [EMAIL PROTECTED] writes:
Tino Wildenhain wrote:
experiment=# SELECT 'a '::char = 'a '::char;
 ?column?
----------
 t
This doesn't show anything useful, because the ::char casting simply
takes the first char of any string:
select 'abc'::char = 'axy'::char
Tom Lane [EMAIL PROTECTED] writes:
Chris Travers [EMAIL PROTECTED] writes:
If I understand the spec correctly, it seems to indicate that this is
specific to the locale/character set.
The spec associates padding behavior with collations, which per spec are
separate from the datatypes ---
Tom Lane [EMAIL PROTECTED] writes:
My advice to the Rails people would be to fix whatever it is in their
code that is assuming a particular sequence name, or indeed assuming
a sequence at all...
Well how else do you find the id of the last inserted record without assuming
a sequence?
--
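(For reference, the sequence-based answers in play here, with assumed table and sequence names:)

```sql
-- Classic approach: ask the sequence behind the serial column.
-- currval() is per-session, so concurrent inserts don't interfere.
INSERT INTO users (name) VALUES ('alice');
SELECT currval('users_id_seq');

-- From 8.2 on, the id can be returned directly:
INSERT INTO users (name) VALUES ('bob') RETURNING id;
```

Both still assume a sequence exists somewhere, which is exactly the point of the question.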
Joshua D. Drake wrote:
On Fri, 22 Feb 2008 16:49:10 -0500
Kynn Jones [EMAIL PROTECTED] wrote:
Hi!
Does PostgreSQL expose its hash function? I need a fast way to hash a
string to a short code using characters in the set [A-Za-z0-9_].
Is md5 good enough? :)
Probably not. He said
Frank D. Engel, Jr. [EMAIL PROTECTED] writes:
Yep, that could cause problems. Okay, now I'm joining the program.
The only thing I can see that would fix this
...
There are well understood mechanisms to fix this. It's a SMOP or simple
matter of programming. What you would do is insert
Florian G. Pflug [EMAIL PROTECTED] writes:
Let's say you have a query: select * from table where field = function().
Maybe this would be clearer with a more egregious example of volatility.
Say you had a function odd() that returns 1 and 0 alternating. That is, it
returns 1 the first time
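Such an alternating function is easy to sketch with a sequence as its hidden state (names invented):

```sql
-- odd() returns 1, 0, 1, 0, ... on successive calls within a session.
CREATE SEQUENCE odd_seq;

CREATE FUNCTION odd() RETURNS int AS $$
    SELECT (nextval('odd_seq') % 2)::int
$$ LANGUAGE sql VOLATILE;
```

Marking it VOLATILE tells the planner it must be re-evaluated for every row; with an IMMUTABLE or STABLE label the planner would be free to call it once and compare every row against the same value.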
Bo Lorentsen [EMAIL PROTECTED] writes:
I understand that, I just can't see why an index lookup can't be used on per
row basis.
Well, how would that work?
--
greg
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
Tom Lane [EMAIL PROTECTED] writes:
Dann Corbit [EMAIL PROTECTED] writes:
Even at that, I think that being able to insert more than one null value
into a unique index should be considered as a bug (or diagnosed as an
error).
Direct your complaints to the ISO SQL standards committee.
Jim C. Nasby [EMAIL PROTECTED] writes:
I recall this being discussed before, but I couldn't manage to find it
in the archives.
Is there any way to see how many rows a running transaction has written?
vacuum analyze verbose only reports visible rows.
Not AFAIK. In the past I've done ls -l
Alex [EMAIL PROTECTED] writes:
I actually just wanted to know if there is a way around this problem. Obviously
it is implemented that way for whatever reason.
The way around is to make all the columns NOT NULL. For most applications
unique indexes don't make much sense on nullable columns.
Tom Lane [EMAIL PROTECTED] writes:
Greg Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
Direct your complaints to the ISO SQL standards committee.
The SQL standard generally treats NULLs as an escape hatch for constraints.
Huh? I thought I was agreeing with you
Dawid Kuroczko [EMAIL PROTECTED] writes:
Don't worry about index bloat. These additional indexes will be used
only when your main (foo_abc_index) is not used, so there won't be
any duplicate data in them.
The main index will have _all_ the tuples in it, even where some of the
columns are
Frank D. Engel, Jr. [EMAIL PROTECTED] writes:
I'm sure this won't work for some reason, but something similar might; why not
create a unique index on a constant where all three are null; something along
these lines (in addition to the others):
CREATE UNIQUE INDEX foo_trio_index ON foo (1)
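Spelled out in full, that trick is a partial unique index over a constant expression, so at most one all-NULL row can exist (column names a, b, c are assumed from context, and the extra parentheses are required for an expression index):

```sql
-- Only rows where all three columns are NULL enter the index, and they
-- all index the same constant, so a second such row collides.
CREATE UNIQUE INDEX foo_trio_index ON foo ((1))
    WHERE a IS NULL AND b IS NULL AND c IS NULL;
```

The same device, indexing ((true)) with no WHERE clause, is sometimes used to enforce single-row tables.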
Dann Corbit [EMAIL PROTECTED] writes:
Would the constraint not be satisfied if each combination (including
NULL) were not also forced to be unique?
I maintain that the constraint is still satisfied.
So, it is satisfied if I stuff thousands of NULL values in there.
And it is satisfied
[EMAIL PROTECTED] writes:
I have a table that gets a column added to it for every insert on another table.
I think you're going to run into a lot of problems with something like that.
Perhaps you should describe the original problem you're trying to solve and
people would be able to suggest
Vanole, Mike [EMAIL PROTECTED] writes:
I need to calculate a moving average and I would like to do it with SQL,
or a Pg function built for this purpose. I'm on Pg 7.4. Is this possible
in Pg without a bunch of self joins, or is there a function available?
Unfortunately moving averages fall
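Without window functions (those only arrived much later, in 8.4), a correlated subquery is about the simplest plain-SQL sketch; the ticks table and its columns are invented here:

```sql
-- Trailing 7-day average per row.  O(rows x window), but it runs on 7.4.
SELECT t.day,
       (SELECT avg(t2.price)
          FROM ticks t2
         WHERE t2.day BETWEEN t.day - 6 AND t.day) AS avg7
  FROM ticks t
 ORDER BY t.day;
```

On 8.4 and later this collapses to avg(price) OVER (ORDER BY day ROWS 6 PRECEDING).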
Marco Colombo [EMAIL PROTECTED] writes:
Exactly. Or, one could say: the standard text format is the one the
platform you are running on dictates. Which is what python does.
Egads. So the set of valid Python programs is different depending on what
platform you're on? That's just, uhm, insane.
Jim C. Nasby [EMAIL PROTECTED] writes:
If you're feeling adventurous, you might look at Oracle's documentation
on their analytic functions and see if you can come up with something
generic for PostgreSQL.
I think the hard part of doing even a simple implementation is precisely the
point I
Dann Corbit [EMAIL PROTECTED] writes:
If someone wanted to put arbitrary aggregates into PostgreSQL, I would
suggest something akin to the RED BRICK API, or better yet, the ATLAS
API:
I also found a good reference for the DB2's SQL2003 Standard OLAP functions:
Kristaps Armanis [EMAIL PROTECTED] writes:
Is there any solution?
I think what you have to do is use pg_resetxlog with -x and a value just shy
of 2^32. Then pg_dump and restore into a fresh database. I've never done it
though, so perhaps you should wait until someone with more experience speaks
Jim C. Nasby [EMAIL PROTECTED] writes:
Well, every other database I've used can do index covering, which means
index scans *are* faster.
Still not necessarily true. In a case like this it would still be random
access which would be slower than sequential access.
Though Oracle is capable of
Florian G. Pflug [EMAIL PROTECTED] writes:
when deleting a lot of rows from a large table - and each time it has to find
referencing tuples by doing an index scan
Are you sure it was even an index scan? And not doing a sequential table scan
for every deletion? In order to do an index scan you
Martijn van Oosterhout kleptog@svana.org writes:
Now, how to store the relationships between them to handle
multiplication and division. Probably go back to base types...
I've thought about this myself quite a bit. I decided that trying to implement
multiplication and division is a bad idea.
TJ O'Donnell [EMAIL PROTECTED] writes:
However, I am concerned that I must place the most selective column first in
my index. I cannot tell, a priori, which column will be most selective. That
depends on the nature of search, which can vary widely each time.
If you're always using operators
Frank D. Engel, Jr. wrote [01/27/05 9:19 AM]:
On Jan 26, 2005, at 6:57 PM, PFC wrote:
Isn't there some free open source algebraic computation toolkit with
equations and units somewhere ?
You mean like the traditional units program available on virtually all Unix
machines?
$ units
Dawid Kuroczko [EMAIL PROTECTED] writes:
Why only -core?
I think it is in good taste that when you find a bug/vulnerability/etc
first you contact the author (in this case: core), leave them some
time to fix the problem and then go on announcing it to the
world.
I think it is perfectly
RTFM.
You have: tempK(1)
You want: tempC
-272.15
--
greg
TJ O'Donnell [EMAIL PROTECTED] writes:
- Seq Scan on structure (cost=0.00..191619.56 rows=1765669 width=32)
  (actual time=8050.437..42117.062 rows=1569 loops=1)
And do you vacuum regularly? Have you done batch updates or deletes and then
never inserted enough
Thomas F.O'Connell [EMAIL PROTECTED] writes:
UPDATE groups
SET count1 = v_group_count1, count2 = v_group_count2, count3 =
For instance, when run, this stored procedure could try to acquire a lock on
users2_groups despite not directly referencing it.
Venkatesh Babu [EMAIL PROTECTED] writes:
Hi,
There aren't any triggers but there are 75262 update
statements. The problem is that we have a datatype
called Collection, and we are fetching the data
rows into it, modifying the data and calling
Collection.save(). This save method generates
Jan Wieck [EMAIL PROTECTED] writes:
No, Peter.
Posting a vulnerability on a public mailing list before there is a known fix
for it means that you put everyone who has that vulnerability into jeopardy.
Vulnerabilities are a special breed of bugs and need to be exterminated a
little
Ed L. [EMAIL PROTECTED] writes:
and I don't know what else.
How about being able to search for queries where expected rows and actual rows
are far apart.
--
greg
Tom Lane [EMAIL PROTECTED] writes:
Martijn van Oosterhout kleptog@svana.org writes:
I've been wondering about that. A while ago the change was made from
outputting a NOTICE with the EXPLAIN output to returning a resultset.
If you could agree on what columns to return it might not be so
Tom Lane [EMAIL PROTECTED] writes:
Well, the point is that there are potentially three types of statements
involved:
1. SELECTs
2. Utility statements that can return tuples (EXPLAIN, SHOW, etc)
3. Utility statements that can't return tuples (ALTER, etc)
I'm not sure
Tom Lane [EMAIL PROTECTED] writes:
In the cases I'm concerned about, DROP VIEW would fail unless you used
CASCADE, and the subsequent CREATE VIEW would not magically bring back
the dependent objects the CASCADE had killed. The entire *point* of
REPLACE VIEW is that it does not have the side
Neil Dugan [EMAIL PROTECTED] writes:
Hi,
I am trying to find out how to get the next record according to a
particular index.
I have a table with a name field and a serial field. The name field
isn't unique so I made an index on name(varchar) serialno(bigserial).
I also have an index just
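The standard answer to "next record in (name, serialno) order" is a keyset query; the placeholders stand for the current row's values, and on releases before 8.2 the row comparison must be spelled out by hand because (a, b) > (x, y) was not evaluated per the spec:

```sql
SELECT name, serialno
  FROM mytable
 WHERE name > :cur_name
    OR (name = :cur_name AND serialno > :cur_serial)
 ORDER BY name, serialno
 LIMIT 1;
```

With the two-column index on (name, serialno) this typically resolves to a single index probe rather than a scan.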