Why is a trigger faster than doing an ALTER after the table is created? I
thought a trigger would be slower because it would be invoked on every
iteration (each new row inserted) during the COPY process.
Benjamin
On Aug 26, 2007, at 8:43 PM, Tom Lane wrote:
Gregory Stark [EMAIL PROTECTED]
I'm trying to see if pgloader will make my work easier for bulkloads.
I'm testing it out and I'm stuck, basically because it can't find the
module TextReader or CSVreader.
Googling doesn't help as there seems to be no reference to a module
named textreader or csvreader.
I'm on Python 2.4.4
I've just accidentally stumbled upon
http://www.postgresql.org/docs/8.2/static/libpq-ldap.html
and thought hey, this is what my friend, a huge BigRDBMS fan, was
telling me about.
Now that I've read it, I think it could be very useful in an
enterprise-ish sort of way
(addressing databases as
Mark wrote:
I am writing a function to extract data either from a table or a query
and output it in xml. I have the logic down, but I cannot work out a
few
things.
1. How can I read the column headings from the returned data set? I
have
resorted to writing the same function in tcl in
On 23.08.2007, at 16:10, Michael Glaesemann wrote:
On Aug 23, 2007, at 7:44 , Kristo Kaiv wrote:
On 23.08.2007, at 11:23, Alban Hertroys wrote:
Since you're setting up replication to another database, you
might as
well try replicating to a newer release and swap them around once
it's
Tom Lane wrote:
Would it be an option to have a checksum somewhere in each
data block that is verified upon read?
That's been proposed before and rejected before. See the archives ...
I searched for checksum and couldn't find it. Could someone
give me a pointer? I'm not talking about WAL
On 8/26/07, Bill Moran [EMAIL PROTECTED] wrote:
I'm curious as to how Postgres-R would handle a situation where the
constant throughput exceeded the processing speed of one of the nodes.
Such a situation is not a problem specific to Postgres-R or to
synchronous replication in general.
On Monday 27 August 2007, Ow Mun Heng wrote:
I'm trying to see if pgloader will make my work easier for bulkloads.
I'm testing it out and I'm stuck, basically because it can't find the
module TextReader or CSVreader.
Googling doesn't help as there seems to be no reference to a module
named
Lange Marcus wrote:
Not that it matters in your case. The password might as well be
"password" - if they get access to the files/application,
it's game
over.
What about having some of the columns encrypted in the database ?
Will that improve things a bit?
Not unless you can
On Mon, 2007-08-27 at 12:22 +0200, Dimitri Fontaine wrote:
On Monday 27 August 2007, Ow Mun Heng wrote:
I'm trying to see if pgloader will make my work easier for bulkloads.
I'm testing it out and I'm stuck, basically because it can't find the
module TextReader or CSVreader.
It's a
On Monday 27 August 2007, Ow Mun Heng wrote:
It's a pgloader-provided module, and the error arises because I forgot to
make sure you can use pgloader without installing it properly in the
system.
After some testing, it seems pgloader is still usable without system
installation at all. But
On Mon, 2007-08-27 at 11:27 +0200, Dimitri Fontaine wrote:
We've just made some tests here with 2.2.1 and as this release contains the
missing files, it works fine without any installation.
Yep... I can confirm that it works. I am using the CSV example.
Goal : similar functionality much like
Hi guys!
I have used pgsql for some time already and am happy with it. Heh, sure this
post has its big BUT :-)
Starting a few months ago, one of our projects encountered loss of one DB
table (at that time, it was version 8.0 or so...) I did some research
and found out the vacuuming was set wrong
Hi,
Marko Kreen wrote:
Such a situation is not a problem specific to Postgres-R or to
synchronous replication in general. Asynchronous replication
will break down too.
Agreed, except that I don't consider slowness as 'breaking down'.
Regards
Markus
For some weird reason, I can't use a stored function nor return data from
sql, just send sql to the database, that's my constraint for now and I have
to deal with it.
I have to create a schema and just after a table in this schema. I can't
check for the existence of the table nor the schema. If
On Mon, Aug 27, 2007 at 08:24:51AM -0300, Marcelo de Moraes Serpa wrote:
With this in mind, I'd like to know if there is something like CREATE OR
REPLACE for tables and schemas so that if the object already exists, it will
just replace it.
Looks like DROP IF EXISTS was made for you.
Have a
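A minimal sketch of that idiom, with hypothetical object names (there is no CREATE OR REPLACE for tables or schemas; dropping first is the usual substitute):

```sql
-- Drop first (no error if absent), then create fresh.
DROP TABLE IF EXISTS audit.entries;
DROP SCHEMA IF EXISTS audit CASCADE;  -- CASCADE also removes contained objects
CREATE SCHEMA audit;
CREATE TABLE audit.entries (id serial PRIMARY KEY, payload text);
```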
Thanks Martijn
On 8/27/07, Martijn van Oosterhout [EMAIL PROTECTED] wrote:
On Mon, Aug 27, 2007 at 08:24:51AM -0300, Marcelo de Moraes Serpa wrote:
With this in mind, I'd like to know if there is something like CREATE OR
REPLACE for tables and schemas so that if the object already exists,
Hello list,
I'm trying to execute the following sentences in a pl/pgsql function.
aNomeProcAudita and pTabAudit are both variables.
DROP FUNCTION IF EXISTS aNomeProcAudita;
DROP TRIGGER IF EXISTS 'Audita_' || pTabAudit || '_trigger';
When I try to create this function
Kevin Kempter wrote:
Hi List;
I have a very large table (52million rows) - I'm creating a copy of it to rid
it of 35G worth of dead space, then I'll do a sync, drop the original table
and rename table2.
Once I have the table2 as a copy of table1 what's the best way to select all
rows
On Mon, 27.08.2007 at 09:40:45 -0300, Marcelo de Moraes Serpa wrote:
Hello list,
I'm trying to execute the following sentences in a pl/pgsql function.
aNomeProcAudita and pTabAudit are both variables.
DROP FUNCTION IF EXISTS aNomeProcAudita;
Which version? DROP
On 8/27/07, Albe Laurenz [EMAIL PROTECTED] wrote:
it could be used as an advocacy lever (you think LDAP directory with
DB-services
is neat? PostgreSQL already has it).
I'm glad that *somebody* else appreciates it :^)
Oh, I do, I do. :)
Then again, apart from libpq I don't see it
How many rows are in this table?
Sanjay wrote:
Hi All,
Say I have a simple table WEBSITE(website_id int4 PRIMARY KEY, name
VARCHAR(30)). While I try this:
EXPLAIN ANALYZE SELECT * FROM WEBSITE WHERE website_id = 1
the output is:
--- Joshua D. Drake [EMAIL PROTECTED] wrote:
Having log_line_prefix with at least %p and %m (or
%t) plus a
log_min_messages of DEBUG2 would be great.
I am getting the additional timestamp/PID on my log
lines now, but no additional debug output...
is log_min_messages one of them that
David Fetter wrote:
Tom Lane committed:
- Restrict pg_relation_size to relation owner, pg_database_size to DB
owner, and pg_tablespace_size to superusers. Perhaps we could
weaken the first case to just require SELECT privilege, but that
doesn't work for the other cases, so use
On Aug 25, 2007, at 1:34 AM, Benjamin Arai wrote:
There has to be another way to do incremental indexing without
losing that much performance.
This is the killer feature that prevents us from using the tsearch2
full text indexer on postgres. we're investigating making a foreign
table
Dawid Kuroczko wrote:
Then again, apart from libpq I don't see it mentioned anywhere.
[...]
Looking at the 8.3devel documentation...
I think it should be mentioned in 18. Server Configuration. probably
somewhere in 18.3 Connections and Authentication, that there is
a possibility of using
On Aug 25, 2007, at 8:12 AM, Phoenix Kiula wrote:
The sentence that caught my attention is "Nokia, Alcatel and Nortel
are all building real-time network nodes on top of MySQL Cluster."
My experiences with MySQL so far have been less than exhilarating
(only tried it for our web stuff, which is
Benjamin Arai [EMAIL PROTECTED] writes:
Why is a trigger faster than doing an ALTER after the table is created? I
thought a trigger would be slower because it would be invoked on every
iteration (each new row inserted) during the COPY process.
Yeah, you'd have the trigger overhead, but the above
On Sat, Aug 25, 2007 at 11:13:45AM -0400, Tom Lane wrote:
In case you hadn't noticed the disconnect between these statements:
if they have to be that close together, there *will* be a single point
of failure. Fire in your data center, for instance, will take out every
copy of your data. So
Albe Laurenz [EMAIL PROTECTED] writes:
Tom Lane wrote:
Would it be an option to have a checksum somewhere in each
data block that is verified upon read?
That's been proposed before and rejected before. See the archives ...
I searched for checksum and couldn't find it. Could someone
give
Kamil Srot [EMAIL PROTECTED] writes:
One more thing:
The project runs a proprietary CMS system and there are more instances of
it with the same database layout in different databases. Every time the
lost table is the same one - the busiest one (mostly read)... and
every time the lost table is
On 8/27/07, Tom Lane [EMAIL PROTECTED] wrote:
that and the lack of evidence that they'd actually gain anything
I find it somewhat ironic that PostgreSQL strives to be fairly
non-corruptible, yet has no way to detect a corrupted page. The only
reason for not having CRCs is that it will slow
Marcelo de Moraes Serpa [EMAIL PROTECTED] writes:
DROP FUNCTION IF EXISTS aNomeProcAudita;
DROP TRIGGER IF EXISTS 'Audita_' || pTabAudit || '_trigger';
Neither of those match the documented syntax for the commands: you have
left off required information. Also, as Andreas
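For reference: DROP TRIGGER needs the table name (DROP TRIGGER name ON table), and DDL built from pl/pgsql variables has to go through EXECUTE. A sketch using the variable names from the original post, assuming the audit function takes no arguments:

```sql
-- Inside the pl/pgsql function body: assemble the DDL as text, then EXECUTE it.
EXECUTE 'DROP FUNCTION IF EXISTS ' || quote_ident(aNomeProcAudita) || '()';
EXECUTE 'DROP TRIGGER IF EXISTS '
     || quote_ident('Audita_' || pTabAudit || '_trigger')
     || ' ON ' || quote_ident(pTabAudit);
```

quote_ident guards against odd characters in the generated names.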
Tom Lane wrote:
Kamil Srot [EMAIL PROTECTED] writes:
One more thing:
The project runs a proprietary CMS system and there are more instances of
it with the same database layout in different databases. Every time the
lost table is the same one - the busiest one (mostly read)... and
every time
Jeff Amiel [EMAIL PROTECTED] writes:
is log_min_messages one of them that requires a
restart?
No, SIGHUP (pg_ctl reload) should be sufficient.
regards, tom lane
Joseph S [EMAIL PROTECTED] writes:
Tom Lane committed:
- Restrict pg_relation_size to relation owner, pg_database_size to DB
owner, and pg_tablespace_size to superusers. Perhaps we could
weaken the first case to just require SELECT privilege, but that
doesn't work for the other cases, so use
On 8/27/07, Jonah H. Harris [EMAIL PROTECTED] wrote:
On 8/27/07, Tom Lane [EMAIL PROTECTED] wrote:
that and the lack of evidence that they'd actually gain anything
I find it somewhat ironic that PostgreSQL strives to be fairly
non-corruptible, yet has no way to detect a corrupted page. The
Jonah H. Harris wrote:
On 8/27/07, Tom Lane [EMAIL PROTECTED] wrote:
that and the lack of evidence that they'd actually gain anything
I find it somewhat ironic that PostgreSQL strives to be fairly
non-corruptible, yet has no way to detect a corrupted page. The only
reason for not having
Trevor Talbot [EMAIL PROTECTED] writes:
On 8/27/07, Jonah H. Harris [EMAIL PROTECTED] wrote:
I find it somewhat ironic that PostgreSQL strives to be fairly
non-corruptible, yet has no way to detect a corrupted page.
But how does detecting a corrupted data page gain you any durability?
All it
On Aug 27, 2007, at 11:04 AM, Andrew Sullivan wrote:
It was a way to scale many small systems for certain kinds of
workloads. My impression is that in most cases, it's a SQL-ish
solution to a problem where someone decided to use the SQL nail
because that's the hammer they had. I can think of
Tom Lane wrote:
Joseph S [EMAIL PROTECTED] writes:
Tom Lane committed:
- Restrict pg_relation_size to relation owner, pg_database_size to DB
owner, and pg_tablespace_size to superusers. Perhaps we could
weaken the first case to just require
Postgres can't be embedded or serverless. Firebird has the embedded feature.
Most of the databases have this capability (hsqldb, derby, oracle, mysql,
firebird, and db2). Derby and hsqldb are the only free embedded databases
for commercial use.
I recently ported a schema from postgres to
--- Tom Lane [EMAIL PROTECTED] wrote:
Jeff Amiel [EMAIL PROTECTED] writes:
is log_min_messages one of them that requires a
restart?
No, SIGHUP (pg_ctl reload) should be sufficient.
Weird
looks like some items are going to syslog and some to
my defined postgres logfile (from -L
--- Tom Lane [EMAIL PROTECTED] wrote:
Jeff Amiel [EMAIL PROTECTED] writes:
is log_min_messages one of them that requires a
restart?
No, SIGHUP (pg_ctl reload) should be sufficient.
Weird
looks like some items are going to syslog and some to
my defined postgres logfile (from -L
--- Joshua D. Drake [EMAIL PROTECTED] wrote:
We are actually diagnosing a similar problem on this
end, where we get a
failure at 1920... I am currently trying to get some
DEBUG output.
Tracking for last few days.
Does not appear to happen when there is little or no user
activity (like Saturday). I
On 8/27/07, Tom Lane [EMAIL PROTECTED] wrote:
Indeed. In fact, the most likely implementation of this (refuse to do
anything with a page with a bad CRC) would be a net loss from that
standpoint, because you couldn't get *any* data out of a page, even if
only part of it had been zapped.
At
Marcelo de Moraes Serpa [EMAIL PROTECTED] writes:
I know that this PostgreSQL C module has a static var that in turn keeps the
integer set by the function set_session_id - but is this var global to the
server's service? Does PostgreSQL maintain one instance of this var per
requested
brian wrote:
Tom Lane wrote:
Kamil Srot [EMAIL PROTECTED] writes:
One more thing:
The project runs a proprietary CMS system and there are more instances
of it with the same database layout in different databases. Every
time the lost table is the same one - the busiest one (mostly
read)...
Stephen Ince wrote:
Postgres can't be embedded or serverless. Firebird has the embedded
feature. Most of the databases have this capability (hsqldb,
derby, oracle, mysql, firebird, and db2). Derby and hsqldb are the only
free embedded databases for commercial use.
A lot of Firebird users
On Mon, Aug 27, 2007 at 06:37:17PM +0200, Kamil Srot wrote:
I don't say it's gone by itself; I'm asking for help debugging this
situation and hopefully find a solution. For the first time it happened,
it had the same symptoms - this specific table was missing and
transaction counter was
Jeff Amiel [EMAIL PROTECTED] writes:
Tracking for last few days.
Does not appear to happen when little or no user
activity (like Saturday) I don't know if that rules
out autovacuum or not (if no update thresholds are
reached, no vacuuming will take place anyway)
Can you correlate these
Martijn van Oosterhout wrote:
On Mon, Aug 27, 2007 at 06:37:17PM +0200, Kamil Srot wrote:
I don't say it's gone by itself; I'm asking for help debugging this
situation and hopefully find a solution. For the first time it happened,
it had the same symptoms - this specific table was missing
On Aug 27, 2007, at 11:47 , Tony Caduto wrote:
Good call on the name limit, I remember running into that when
porting something from MS SQL server to Firebird about 4 years ago.
Just a quick note: PostgreSQL's identifiers are limited to
NAMEDATALEN - 1 (IIRC), which by default is 64 - 1 =
Martijn van Oosterhout wrote:
On Mon, Aug 27, 2007 at 06:57:54PM +0200, Kamil Srot wrote:
Correct... the script does echo "vacuum full;" | $PGDIR/bin/psql -U
postgres $db for each database...
Hope it's correct?
Well, I'd drop the full part, it tends to bloat indexes. Also, did
you
On Mon, Aug 27, 2007 at 09:12:17AM -0700, Jeff Amiel wrote:
Tracking for last few days.
Does not appear to happen when little or no user
activity (like Saturday) I don't know if that rules
out autovacuum or not (if no update thresholds are
reached, no vacuuming will take place anyway)
I
On Aug 27, 2007, at 12:15 PM, Martijn van Oosterhout wrote:
On Mon, Aug 27, 2007 at 09:12:17AM -0700, Jeff Amiel wrote:
Tracking for last few days.
Does not appear to happen when little or no user
activity (like Saturday) I don't know if that rules
out autovacuum or not (if no update
On 8/27/07, Stephen Ince [EMAIL PROTECTED] wrote:
I recently ported a schema from postgres to firebird and found name size
limitations. Firebird has a limitation on the size of its column names,
table names, constraint names and index names. I think the size limitation
on firebird is 31
--- Original Message ---
From: Stephen Ince [EMAIL PROTECTED]
To: Tony Caduto [EMAIL PROTECTED], Greg Smith [EMAIL PROTECTED],
pgsql-general@postgresql.org
Sent: 27/08/07, 17:02:21
Subject: Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished
Postgres can't be
On Mon, Aug 27, 2007 at 06:57:54PM +0200, Kamil Srot wrote:
Correct... the script does echo "vacuum full;" | $PGDIR/bin/psql -U
postgres $db for each database...
Hope it's correct?
Well, I'd drop the full part, it tends to bloat indexes. Also, did
you check it was actually completing (no
Tom Lane wrote:
Joseph S [EMAIL PROTECTED] writes:
Tom Lane committed:
- Restrict pg_relation_size to relation owner, pg_database_size to DB
owner, and pg_tablespace_size to superusers. Perhaps we could
weaken the first case to just require SELECT privilege, but that
doesn't work for the other
Has anyone come across this error before?
LOG: PickSplit method of 2 columns of index
'asset_position_lines_asset_cubespacetime_idx' doesn't support secondary
split
This is a multi-column GiST index on an integer and a cube (a data type
from the postgres cube extension module).
I traced
Kamil Srot wrote:
Martijn van Oosterhout wrote:
On Mon, Aug 27, 2007 at 06:57:54PM +0200, Kamil Srot wrote:
Correct... the script does echo "vacuum full;" | $PGDIR/bin/psql -U
postgres $db for each database...
Hope it's correct?
Well, I'd drop the full part, it tends to bloat
Alvaro Herrera wrote:
Kamil Srot wrote:
Martijn van Oosterhout wrote:
On Mon, Aug 27, 2007 at 06:57:54PM +0200, Kamil Srot wrote:
Correct... the script does echo "vacuum full;" | $PGDIR/bin/psql -U
postgres $db for each database...
Hope it's correct?
Well, I'd
Stephen Ince wrote on 27.08.2007 18:02:
Derby and hsqldb are the only free embedded databases for commercial use.
Well, there are some more:
H2 Database, OneDollarDB (OpenSource version of DaffodilDB), Berkeley DB and
McKoi are free as well (although McKoi seems to be dead).
Then there are a
The time seems entirely spent in fetching rows from table rid.
Perhaps that table is bloated by lack of vacuuming --- can you
show the output from vacuum verbose rid?
INFO: vacuuming firma1.rid
INFO: scanned index rid_pkey to remove 7375 row versions
DETAIL: CPU 0.01s/0.39u sec elapsed 5.46
Perhaps that table is bloated by lack of vacuuming --- can you
show the output from vacuum verbose rid?
Thank you.
After running vacuum and analyze commands the query takes 18 seconds.
This is still very slow, given that my tables are indexed.
How can I speed this up?
set search_path to
I'm using PostgreSQL PostgreSQL 8.2.4 from ODBC 08.02.0300 client.
Postgres log files are polluted with messages
2007-08-27 06:10:38 WARNING: nonstandard use of \\ in a string literal at
character 190
2007-08-27 06:10:38 HINT: Use the escape string syntax for backslashes,
e.g., E'\\'.
Hi All,
Say I have a simple table WEBSITE(website_id int4 PRIMARY KEY, name
VARCHAR(30)). While I try this:
EXPLAIN ANALYZE SELECT * FROM WEBSITE WHERE website_id = 1
the output is:
--
Seq Scan on website
Hi,
Does the psql's \copy command run as a transaction? I think it does, but
somehow when I cancel (in a script) a running import, it seems (I can't
seem to duplicate it on the CLI, though) like a few lines/rows get
inserted anyway...
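One way to make the atomicity explicit is to wrap the \copy in a transaction block yourself; table and file names here are hypothetical:

```sql
BEGIN;
\copy mytable FROM 'data.csv' WITH CSV
COMMIT;   -- a ROLLBACK here instead discards everything the COPY inserted
```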
Hello
I am trying to write a query to find all points that fall within a
given box. However, I cannot seem to find the functionality for
determining whether a point is within a box.
e.g. select box '((0,0),(1,1))' @ point '(0.5,0.5)';
operator does not exist: box @ point
Is this operator for
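The containment operators put the contained value on one particular side. A sketch (in the 8.2-era geometry operators, @ means "contained in or on", so the point goes on the left; later releases also spell this <@ / @>):

```sql
-- Ask whether the point is contained in the box, not the reverse:
SELECT point '(0.5,0.5)' @ box '((0,0),(1,1))';    -- 8.2-era spelling
SELECT point '(0.5,0.5)' <@ box '((0,0),(1,1))';   -- later spelling
SELECT box '((0,0),(1,1))' @> point '(0.5,0.5)';   -- box-contains-point form
```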
Hello,
We tried many things but didn't succeed.
Our DB crashed without any recent backup.
We have 3 elements:
- a backup we did in February,
- 4 WAL files in the pg_xlog folder created in august,
- the base folder (in which there are table files created in august)
Q1 : Is it possible to
Erik Jones wrote:
On Aug 27, 2007, at 12:15 PM, Martijn van Oosterhout wrote:
On Mon, Aug 27, 2007 at 09:12:17AM -0700, Jeff Amiel wrote:
Tracking for last few days.
Does not appear to happen when little or no user
activity (like Saturday) I
In response to Sanjay [EMAIL PROTECTED]:
Hi All,
Say I have a simple table WEBSITE(website_id int4 PRIMARY KEY, name
VARCHAR(30)). While I try this:
EXPLAIN ANALYZE SELECT * FROM WEBSITE WHERE website_id = 1
the output is:
On Monday 27 August 2007 05:21, Sanjay [EMAIL PROTECTED] wrote:
Wondering why it is not using the index, which would have
been
automatically created for the primary key.
Because you not only have just one row in the whole table, 100% of them will
match the query. In short, one page fetch for a
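If the goal is just to confirm that the index is usable, one common trick is to make sequential scans artificially expensive for the session:

```sql
SET enable_seqscan = off;   -- discourage (not forbid) sequential scans
EXPLAIN ANALYZE SELECT * FROM website WHERE website_id = 1;
RESET enable_seqscan;       -- restore the planner's normal behaviour
```

With realistic row counts, the planner picks the index on its own.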
On Mon, Aug 27, 2007 at 12:08:17PM -0400, Jonah H. Harris wrote:
On 8/27/07, Tom Lane [EMAIL PROTECTED] wrote:
Indeed. In fact, the most likely implementation of this (refuse to do
anything with a page with a bad CRC) would be a net loss from that
standpoint, because you couldn't get *any*
--- Joshua D. Drake [EMAIL PROTECTED] wrote:
The machine we are tracking this problem on is also 64bit.
Hmm... looks like 3 different people are tracking a similar issue on 64-bit
platforms... you, Erik and myself.
On Mon, Aug 27, 2007 at 07:15:44PM +0200, Kamil Srot wrote:
OK, I'll drop the full part and do it less often...
This doesn't address your problem, but when you move from VACUUM FULL
to VACUUM, you want to do it _more_ often, not less.
But given what you've posted, I am not even a little bit
On Mon, Aug 27, 2007 at 02:00:02PM +0300, Andrus wrote:
Postgres log files are polluted with messages
2007-08-27 06:10:38 WARNING: nonstandard use of \\ in a string literal at
character 190
2007-08-27 06:10:38 HINT: Use the escape string syntax for backslashes,
e.g., E'\\'.
That's not
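The warning itself is straightforward to address; a sketch of the alternatives:

```sql
-- Explicit escape-string syntax, as the HINT suggests:
SELECT E'C:\\temp';
-- Or silence the warning per-session (also settable in postgresql.conf):
SET escape_string_warning = off;
```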
Maybe someone here can figure it out. Everything updates fine with
this code, except where there's an exception, it's not being rolled back
by the transaction. What I'm trying to do:
Begin a transaction
Do the update, insert, delete checks on each of the data tables,
using a different
I have setup a Postgres server on Debian Etch and successfully connected
to it with various *nix clients but I now have to connect a WinXP
client. On accessing the Postgres site I am directed to a download page,
click on the appropriate link and get automatically directed to a
University of Kent
Andrew Sullivan wrote:
On Mon, Aug 27, 2007 at 07:15:44PM +0200, Kamil Srot wrote:
OK, I'll drop the full part and do it less often...
This doesn't address your problem, but when you move from VACUUM FULL
to VACUUM, you want to do it _more_ often, not less.
Sure, I meant it like
Point taken for the enterprise comparison. The reason for having the
embedded database is to hide the complexity for installing, using, and
configuration of the database from the user of the application. You don't
want a scaled version of the database.
- Original Message -
From:
On Mon, Aug 27, 2007 at 10:03:04PM +0200, Kamil Srot wrote:
Sure, I meant it like I'll do the FULL vacuum less often than daily and
do daily the plain vacuum command.
If you have your servers set up correctly, you should never need to
perform VACUUM FULL.
Well, I do list all databases
Andrew Sullivan wrote:
On Mon, Aug 27, 2007 at 02:00:02PM +0300, Andrus wrote:
Postgres log files are polluted with messages
2007-08-27 06:10:38 WARNING: nonstandard use of \\ in a string literal at
character 190
2007-08-27 06:10:38 HINT: Use the escape string syntax for backslashes,
Andrew Sullivan wrote:
On Mon, Aug 27, 2007 at 10:03:04PM +0200, Kamil Srot wrote:
Sure, I meant it like I'll do the FULL vacuum less often than daily and
do daily the plain vacuum command.
If you have your servers set up correctly, you should never need to
perform VACUUM
Dave,
Thx I will take a look. I was trying to port a postgres schema to a
database that had embedded capability. I could not find any non-commercial
databases that supported triggers, sequences, UDF functions, and stored
procedures. As I remember, Firebird has pretty weak UDF function
Yes, but fortunately for me, unfortunately for the list, it's only
happened to me once so I don't really have anything to go on wrt
repeating the problem. I can only say, Yep! It's happened! I am
watching my db closely, though. Well, my monitoring scripts are :)
On Aug 27, 2007, at
On Mon, Aug 27, 2007 at 10:31:11PM +0200, Kamil Srot wrote:
The script is very simple one:
Well, I don't see anything obvious, but. . .
I can easily rewrite it to use the vacuumdb command, but I doubt it'll
make any difference.
The point is that you don't have to rewrite it. Just run
Hello
I am currently working on creating a build system for an open source portable
project that should be able to build the project on many platforms, POSIX and
non-POSIX such as Windows. Our project has the option for using PostgreSQL.
Searching for PostgreSQL includes/libraries is very easy
There are some limitations to SQL Server Express:
http://www.microsoft.com/sql/downloads/trial-software.mspx
Download SQL Server 2005 Express Edition
Complete a SQL Server Express download, free. There are no time limits
and the software is freely redistributable (with registration). With a
Andrew Sullivan wrote:
I can easily rewrite it to use the vacuumdb command, but I doubt it'll
make any difference.
The point is that you don't have to rewrite it. Just run vacuumdb
-a and it vacuums _all_ databases.
Oh, I have it now! It takes some time, but at the end, I'll
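The replacement for the per-database loop is a single command (per the vacuumdb reference; -z additionally runs ANALYZE):

```shell
vacuumdb -a -z -U postgres    # -a: every database, -z: ANALYZE as well
```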
List,
One of the reasons why I use postgres is because you can insert data and
it will work or give you an error instead of converting, truncating,
etc... well I found a place where postgres makes an erroneous
assumption and I'm not sure this is by design.
When inserting a float such as
Kamil Srot [EMAIL PROTECTED] writes:
# select xmin, age(xmin) from pg_class;
    xmin    |    age
------------+------------
          2 | 2147483647
          2 | 2147483647
          2 | 2147483647
          2 | 2147483647
          2 | 2147483647
          2 | 2147483647
  236838019 |
--- Original Message ---
From: Stephen Ince [EMAIL PROTECTED]
To: Dave Page [EMAIL PROTECTED]
Sent: 27/08/07, 21:30:06
Subject: Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished
Dave,
Thx I will take a look. I was trying to port a postgres schema to a
--- Original Message ---
From: Dizzy [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Sent: 27/08/07, 21:12:55
Subject: [GENERAL] pgsql Windows installer fixed registry key
The pgsql MSI installer does register a registry key but it's random
every time
it installs (probably
Tom Lane wrote:
Kamil Srot [EMAIL PROTECTED] writes:
# select xmin, age(xmin) from pg_class;
    xmin    |    age
------------+------------
          2 | 2147483647
          2 | 2147483647
          2 | 2147483647
          2 | 2147483647
          2 | 2147483647
          2 | 2147483647
  236838019
Matthew Schumacher [EMAIL PROTECTED] writes:
template1=# create table test (number int);
CREATE TABLE
template1=# insert into test (number) values (4.123123123);
INSERT 0 1
Perhaps you'd be happier doing it like this:
regression=# insert into test (number) values ('4.123123123');
ERROR:
On Mon, Aug 27, 2007 at 12:48:34PM -0800, Matthew Schumacher wrote:
When inserting a float such as 4.12322345 into an int column postgres
inserts 4 instead of returning an error telling you that your value
won't fit. I would much rather have the error and check for it since I
can be sure I'll
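The asymmetry Tom demonstrates, spelled out (a numeric literal goes through the assignment cast, while a quoted string must parse as an integer; the exact rounding/truncation rule varies by release):

```sql
CREATE TABLE test (number int);
INSERT INTO test (number) VALUES (4.123123123);    -- accepted: numeric cast to int
INSERT INTO test (number) VALUES ('4.123123123');  -- rejected: invalid input for integer
```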
On Aug 27, 2007, at 4:08 PM, Tom Lane wrote:
Kamil Srot [EMAIL PROTECTED] writes:
# select xmin, age(xmin) from pg_class;
    xmin    |    age
------------+------------
          2 | 2147483647
          2 | 2147483647
          2 | 2147483647
          2 | 2147483647
          2 | 2147483647
Bill Moran [EMAIL PROTECTED] writes:
First off, clustering is a word that is too vague to be useful, so
I'll stop using it.
Right. MySQL Cluster, on the other hand, is a very specific technology.
http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster.html
It is, however, capable of being d*mn