On Thu, 6 Jul 2006, Dann Corbit wrote:
It's the data that contains all the value. The hardware becomes
obsolete when it can no longer keep up with business needs.
... or can no longer be repaired. :)
Hello,
I have a PostgreSQL (8.1) installation for testing purposes which has been
running fine for several months now (Windows XP). I was working with it
yesterday, and today after booting my computer and restarting the
service (I'm starting the service manually, because I don't need the
server
On 7/6/06, Rodrigo Sakai [EMAIL PROTECTED] wrote:
Does anyone know a good tool for doing reverse engineering
of a PostgreSQL database? I tried DBDesigner, but I couldn't get the
relationships!
We used Case Studio 2; it worked quite well.
depesz
-- http://www.depesz.com/ - new, better
On Thu, 2006-07-06 at 20:57, Karl O. Pinc wrote:
Hi,
What is the best pg_dump format for long-term database
archival? That is, what format is most likely to
be able to be restored into a future PostgreSQL
cluster.
Should we want to restore a 20 year old backup
nobody's going to want to
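For reference, the mechanics under discussion, as a minimal sketch (database name hypothetical): -Fp emits plain SQL text, which any future psql, or failing that a human with a text editor, can cope with; -Fc emits the custom archive, which needs a pg_restore that still understands its format.

    pg_dump -Fp mydb | gzip > mydb-archive.sql.gz
    pg_dump -Fc mydb > mydb-archive.dump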
Sorry if this is a duplicate; someone suggested posting to general as well.
I'm using PostgreSQL 8.1.4 in a Hibernate application and I am
attempting to use partitioning via inherited tables. At first I was
going to create a rule per sub-table based on a date range, but found
out with multiple
On Fri, Jul 07, 2006 at 04:39:53AM -0400, Gene wrote:
Sorry if this is a duplicate; someone suggested posting to general as well.
I'm using PostgreSQL 8.1.4 in a Hibernate application and I am
attempting to use partitioning via inherited tables. At first I was
going to create a rule per
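For context, a minimal sketch of the inheritance partitioning being described, patterned on the 8.1 documentation's approach (table, column, and rule names hypothetical):

    CREATE TABLE events (
        event_id   integer,
        created_at date NOT NULL
    );
    -- One child per date range; the CHECK lets constraint_exclusion
    -- (an 8.1 setting) skip irrelevant partitions at plan time.
    CREATE TABLE events_2006q3 (
        CHECK (created_at >= DATE '2006-07-01' AND created_at < DATE '2006-10-01')
    ) INHERITS (events);
    -- A rule per sub-table redirects inserts on the parent:
    CREATE RULE events_insert_2006q3 AS
        ON INSERT TO events
        WHERE created_at >= DATE '2006-07-01' AND created_at < DATE '2006-10-01'
        DO INSTEAD INSERT INTO events_2006q3 VALUES (NEW.*);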
On 07.07.2006 09:20 Thomas Kellerer wrote:
Hello,
I have a PostgreSQL (8.1) installation for testing purposes which has been
running fine for several months now (Windows XP). I was working with it
yesterday, and today after booting my computer and restarting the
service (I'm starting the service
# kleptog@svana.org / 2006-06-22 09:19:44 +0200:
On Tue, Jun 20, 2006 at 02:06:19AM -0700, [EMAIL PROTECTED] wrote:
Such a construction is very slow, but when I modify the SQL to:
OPEN cursor1 FOR SELECT * FROM alias WHERE mask=alias_out
ORDER BY mask LIMIT 100;
it works very fast. It
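The difference should be visible in the plans. A hypothetical check, with a literal standing in for alias_out: without the LIMIT the planner may pick a plan that computes the whole result before returning anything; with it, a fast-start plan such as an index scan on mask.

    EXPLAIN SELECT * FROM alias WHERE mask = 'foo' ORDER BY mask;
    EXPLAIN SELECT * FROM alias WHERE mask = 'foo' ORDER BY mask LIMIT 100;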
On 7/7/2006 17:49, Csaba Nagy [EMAIL PROTECTED] wrote:
On Thu, 2006-07-06 at 20:57, Karl O. Pinc wrote:
Hi,
What is the best pg_dump format for long-term database
archival? That is, what format is most likely to
be able to be restored into a future PostgreSQL
cluster.
Should we want
Good morning List,
I have seen several posts on this concept but have not found a complete
answer.
I’m using BenchmarkSQL to evaluate PostgreSQL in transaction processing
and I work with PostgreSQL 8.1.3 on RHEL4-AS, Itanium-2 processor, 8GB RAM.
The database, generated via BenchmarkSQL and
Hello,
I am currently working on a heavily stressed system and I am
having some difficulty making the background writer work properly.
But before going any further, I would like to be sure that I understand
how transaction processing and management are handled by PostgreSQL. Correct me if
Hello,
I was wondering if each checkpoint flushes all transactions from
the shared buffers or if there could be some left at the end of the
checkpoint?
Because I was trying to lower disk I/O, I lowered the checkpoint
timeout to 150 seconds, so I get twice the number the
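For reference, the knobs involved, with their 8.1 names (the values here are illustrative, not recommendations):

    # postgresql.conf
    checkpoint_timeout   = 150    # seconds between automatic checkpoints
    checkpoint_segments  = 3      # WAL segments between forced checkpoints
    bgwriter_delay       = 200    # ms between background writer rounds
    bgwriter_lru_percent = 1.0    # recently-used buffers scanned per round
    bgwriter_all_percent = 0.333  # share of whole buffer pool scanned per round

As to the question: a checkpoint writes out every buffer that was dirty when it started; pages dirtied while it runs simply wait for a later checkpoint or the background writer.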
# [EMAIL PROTECTED] / 2006-07-06 22:41:27 -0600:
OK I know this is an odd question but I'm working on an app that will rely
more and more on database driven functions, and while the app's source is
in SVN, and I intend for the source of the SQL scripts to also be there, I
was
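One low-tech way to diff what the database actually contains against what SVN says it should: pull the function sources straight out of the catalogs (schema name hypothetical) and keep the output alongside the scripts.

    SELECT p.proname, p.prosrc
    FROM pg_proc p
    JOIN pg_namespace n ON n.oid = p.pronamespace
    WHERE n.nspname = 'public'
    ORDER BY p.proname;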
On Fri, Jul 07, 2006 at 11:49:13AM +0200, Fabrice Franquenk wrote:
2. If the data needed by the transaction is already in the shared buffers, we get
our result instantly. Otherwise the database data files are searched and the
relevant pages are copied into the shared buffers memory zone.
On Fri, Jul 07, 2006 at 11:30:35AM +, Roman Neuhauser wrote:
With the LIMIT it might take a different approach, which
might be worse if you read the whole lot, but better for a limited set.
A fast-start plan so to speak.
That looks like a better approach for a cursor.
For a
Csaba Nagy wrote:
...
Karl, I would say that if you really want data from 20 years ago, keep
it in the custom format, along with a set of the sources of the postgres
which created the dump. Then in 20 years, when you need it, you can
compile the sources and load the data in the original postgres
Dear friends,
owing to a very poor webmail I'm compelled to use, I'm resending my
messed-up message of yesterday.
In PostgreSQL 8.0.7 I have the following table:
\d basedati
Table "public.basedati"
Column | Type |
Michael Loftis wrote:
OK I know this is an odd question but I'm working on an app that will
rely more and more on database driven functions, and while the app's
source is in SVN, and I intend for the source of the SQL scripts to
also be there, I was wondering...what are people doing for
Kenneth Downs [EMAIL PROTECTED] writes:
We went for generating all server-side code out of a data dictionary. This
makes for a significant change in the way change management is handled.
In this scenario change management becomes the analysis of before and
after data dictionaries. If the
What is the most appropriate way to migrate a database from Sybase (SQL
Anywhere 5.504) to Postgres? I found some shareware or freeware migration tools
on the net; which are the best? Is it faster to use one of them or
simply do it manually?
Thanks
Tomas
On 7 Jul 2006 12:50:22 -, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
What is the most appropriate way to migrate a database from Sybase (SQL
Anywhere 5.504) to Postgres? I found some shareware or freeware migration tools
on the net; which are the best? Is it faster to use one of them
Hi,
I am the maintainer of Debian's packages for exim4, a powerful and
versatile Mail Transfer Agent developed in Cambridge and in wide use
throughout the Free Software Community (http://www.exim.org/).
One of our daemon flavours has PostgreSQL support. Our security guys
have found a flaw in
On 7/7/06, Michael Loftis [EMAIL PROTECTED] wrote:
OK I know this is an odd question but I'm working on an app that will rely
more and more on database driven functions, and while the app's source is
in SVN, and I intend for the source of the SQL scripts to also be there, I
was wondering...what
Ben wrote:
On Thu, 6 Jul 2006, Dann Corbit wrote:
It's the data that contains all the value. The hardware becomes
obsolete when it can no longer keep up with business needs.
... or can no longer be repaired. :)
Hi all,
I've got a database of URLs, and when inserting new data into it I want
to make sure that there are no functionally equivalent URLs already
present. For example, 'umist.ac.uk' is functionally the same as
'umist.ac.uk/'.
I find that searching for the latter form, using string
Is this for me? I am
Hi Merlin, as I tried to explain, I do not need just to send some SQL
to postgres; I am working at a higher level of abstraction, where I
need the information (that must be) provided by (any) OleDb Provider.
In my opinion the support for the npgsql driver is OK (not great). It
is much better
On Fri, Jul 07, 2006 at 03:48:00PM +0200, Marc Haber wrote:
Hi,
I am the maintainer of Debian's packages for exim4, a powerful and
versatile Mail Transfer Agent developed in Cambridge and in wide use
throughout the Free Software Community (http://www.exim.org/).
One of our daemon flavours
Jorge Godoy wrote:
Kenneth Downs [EMAIL PROTECTED] writes:
We went for generating all server-side code out of a data dictionary. This
makes for a significant change in the way change management is handled.
In this scenario change management becomes the analysis of "before" and
PostgreSQL has been providing reliable service for our web hosting
company since 1997. Thanks!
Last night we got the following error during a dump of an 8.0.6
database:
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: could not access status of
transaction 245900066
I am currently using a log with the file name format:
log_filename = 'postgresql-%Y-%m.log'
Is there any way to change the filename to start with the database name?
For now I've just added the database name to each line, but it would be
useful to have each DB written to its own file. Or
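The server keeps one log per cluster, not per database, so log_filename itself cannot vary by database. Tagging each line, as described above, is what log_line_prefix is for:

    # postgresql.conf
    log_line_prefix = '%d '    # %d = database name, %u = user name

Splitting into per-database files then becomes a post-processing step over the single log.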
On Fri, Jul 07, 2006 at 03:48:00PM +0200, Marc Haber wrote:
From what I understand, the correct way would be to use
PQescapeStringConn, but that function needs an established connection,
and exim performs string escape early, way before the actual
connection is established.
I just downloaded
Postgresql 8.0.4 on FreeBSD 5.4
I have a table consisting of some 300 million rows that, every couple of
months, has 100 million rows deleted from it (and immediately vacuumed
afterward). Even though it gets routinely vacuumed (the only
deletions/updates are just the quarterly ones), the freespace
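A usual suspect with deletes this size: the free space map must be large enough to remember every page the vacuum freed, or that space is never reused. The relevant 8.0 settings (values illustrative; the tail of a database-wide VACUUM VERBOSE reports what the cluster actually needs):

    # postgresql.conf
    max_fsm_pages     = 2000000   # must cover all pages with reusable space
    max_fsm_relations = 1000

If 100 million deleted rows span more pages than max_fsm_pages can track, the table bloats no matter how promptly it is vacuumed.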
badlydrawnbhoy wrote:
Hi all,
I've got a database of URLs, and when inserting new data into it I want
to make sure that there are no functionally equivalent URLs already
present. For example, 'umist.ac.uk' is functionally the same as
'umist.ac.uk/'.
I find that searching for the latter form,
Mark Stosberg [EMAIL PROTECTED] writes:
Last night we got the following error during a dump of an 8.0.6
database:
pg_dump: Error message from server: ERROR: could not access status of
transaction 245900066
Another dump run during the same time frame did not have this problem,
and running
of course you might need to also keep an image of the current
OS and the hardware you're running on if you really want to be sure it
will work in 20 years :-)
I think that in twenty years most of us will be more worried about our
retirement than the long-term data concerns of the
On Thu, 2006-07-06 at 17:30, Weerts, Jan wrote:
Scott Marlowe wrote:
On Thu, 2006-07-06 at 16:36, Weerts, Jan wrote:
Hi all!
This was 8.1.3 and now is 8.1.4 running on Debian Sarge, locally
compiled without any fancy options.
While the first answer seems much more valid (the
[snip]
as I drop them prior to the huge delete, then create them anew). What
would be the recommended method for reclaiming the disk space lost due
to bloat? Dropping the 5 indexes on the table and doing a VACUUM FULL,
keeping the indexes and doing a VACUUM FULL (does FULL perform the same
On Fri, 2006-07-07 at 01:57, DANTE Alexandra wrote:
Good morning List,
I have seen several posts on this concept but have not found a complete
answer.
I’m using BenchmarkSQL to evaluate PostgreSQL in transaction processing
and I work with PostgreSQL 8.1.3 on RHEL4-AS, Itanium-2 processor,
You could build a function-based index that contains the "simplified"
version of each URL (in your case, the field with the '/' stripped).
Then use the same function on the URL going in. In that case
PostgreSQL will use the index that you created already.
Take a look at the PostgreSQL
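A minimal sketch of that approach, assuming a table urls(url text) and treating only a trailing slash as the difference (both names hypothetical):

    -- Index on the normalized form:
    CREATE INDEX urls_url_norm_idx ON urls ((rtrim(url, '/')));
    -- Apply the same function to both sides so the index is usable:
    SELECT * FROM urls
    WHERE rtrim(url, '/') = rtrim('umist.ac.uk/', '/');

Anything beyond trailing slashes (http:// prefixes, case, www.) calls for a real normalization function, applied identically on both sides.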
Hey, I am using PostgreSQL 8.1.4 on Windows. I have a large amount of data stored in .xls files which I want to insert into my database. The columns in the .xls files are not exactly compatible with the database schema. For example, the event_id in every .xls file starts with 1, while for my database
On Friday 07 July 2006 08:19, Sven Willenberger wrote:
Postgresql 8.0.4 on FreeBSD 5.4
I have a table consisting of some 300 million rows that, every couple of
months, has 100 million rows deleted from it (and immediately vacuumed
afterward). Even though it gets routinely vacuumed (the only
Csaba Nagy [EMAIL PROTECTED] writes:
I don't know for sure, but I guess you would get the least downtime by
not dropping the indexes before the delete, but doing a reindex after it.
Then cluster on the primary key...
My reasoning (correct me if I'm wrong): the deletion speed won't be
affected
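A sketch of that sequence on 8.0, where CLUSTER still takes the old indexname-first syntax (names hypothetical):

    DELETE FROM big_table WHERE created_at < DATE '2006-01-01';
    CLUSTER big_table_pkey ON big_table;  -- rewrites the table without the dead space
    ANALYZE big_table;

CLUSTER takes an exclusive lock and needs disk room for the new copy, but it rebuilds the indexes as a side effect instead of bloating them the way VACUUM FULL can.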
On Fri, 2006-07-07 at 09:55 -0700, Joshua D. Drake wrote:
On Friday 07 July 2006 08:19, Sven Willenberger wrote:
Postgresql 8.0.4 on FreeBSD 5.4
I have a table consisting of some 300 million rows that, every couple of
months, has 100 million rows deleted from it (and immediately vacuumed
Sincerely,
Joshua D. Drake
Doing a quick check reveals that the relation in question currently
consumes 186GB of space (which I highly suspect is largely bloat).
Good lord... 186 GB for a 300 million row table? Unless those are seriously
large rows, you have a TON of bloat.
Joshua D.
How long does it take to do a database dump (piping through gzip -1 to a
file), drop the database, recreate it, and restore it from the backup?
That is my solution, but I don't know how long it would take to restore
your database, and I don't have such large databases.
Secondly, this sounds like a
On Fri, 2006-07-07 at 10:41 -0700, Joshua D. Drake wrote:
Sincerely,
Joshua D. Drake
Doing a quick check reveals that the relation in question currently
consumes 186GB of space (which I highly suspect is largely bloat).
Good lord... 186 GB for a 300 million row table? Unless
On Jul 7, 2006, at 1:19 AM, Csaba Nagy wrote:
On Thu, 2006-07-06 at 20:57, Karl O. Pinc wrote:
Hi,
What is the best pg_dump format for long-term database
archival? That is, what format is most likely to
be able to be restored into a future PostgreSQL
cluster.
Should we want to restore a
Parang Saraf wrote:
Hey,
I am using PostgreSQL 8.1.4 on Windows. I have a large amount of data
stored
in .xls files which I want to insert into my database.
The columns in .xls files are not exactly compatible with the database
schema. For
--On July 7, 2006 12:35:53 PM + Roman Neuhauser [EMAIL PROTECTED]
wrote:
# [EMAIL PROTECTED] / 2006-07-06 22:41:27 -0600:
OK I know this is an odd question but I'm working on an app that will
rely more and more on database driven functions, and while the app's
source is in SVN, and I
On Friday 07 July 2006 13:08, Michael Loftis wrote:
--On July 7, 2006 12:35:53 PM + Roman Neuhauser [EMAIL PROTECTED]
wrote:
# [EMAIL PROTECTED] / 2006-07-06 22:41:27 -0600:
OK I know this is an odd question but I'm working on an app that will
rely more and more on database driven
I guess the solution depends on what is a 'large amount of data'. The most
time-consuming part is going to be converting the single data elements at the
top of each sheet into multiple elements. I would create columns for the data
in the sheet. At the same time I would order the columns to
On Fri, Jul 07, 2006 at 04:29:51AM -0700,
badlydrawnbhoy [EMAIL PROTECTED] wrote
a message of 48 lines which said:
I've got a database of URLs, and when inserting new data into it I
want to make sure that there are no functionally equivalent URLs
already present. For example, 'umist.ac.uk'
OK I'm either insane or found a bug in 8.1.3
If you execute say:
SELECT DISTINCT(ua.user_id),pa.poll_id FROM user_answers ua, poll_answers
pa WHERE pa.poll_answer_id = ua.poll_answer_id AND ua.active='Y';
Everything is fine, however if you run
SELECT pa.poll_id,DISTINCT(ua.user_id) FROM
On Fri, 7 Jul 2006, Michael Loftis wrote:
OK I'm either insane or found a bug in 8.1.3
If you execute say:
SELECT DISTINCT(ua.user_id),pa.poll_id FROM user_answers ua, poll_answers
pa WHERE pa.poll_answer_id = ua.poll_answer_id AND ua.active='Y';
Everything is fine, however if you run
SELECT pa.poll_id,DISTINCT(ua.user_id) FROM user_answers ua, poll_answers
pa WHERE pa.poll_answer_id =
[mailto:[EMAIL PROTECTED] On Behalf Of Adrian Klaver
I guess the solution depends on what is a 'large amount of data'. The
most time consuming part is going to be converting the single data
elements at the top of each sheet into multiple elements. I would
create columns for the data in
--On July 7, 2006 3:22:01 PM -0700 Richard Broersma Jr
[EMAIL PROTECTED] wrote:
Notice: http://www.postgresql.org/docs/8.1/interactive/sql-select.html
According to the syntax for a SELECT, DISTINCT | DISTINCT ON must come
first, before the select list.
So perhaps it is designed
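In other words, DISTINCT is not a per-column function: it qualifies the whole select list and must come first. To deduplicate on one column while still returning another, DISTINCT ON is the tool (using the thread's own tables):

    -- Distinct over the whole (user_id, poll_id) pair:
    SELECT DISTINCT ua.user_id, pa.poll_id
    FROM user_answers ua, poll_answers pa
    WHERE pa.poll_answer_id = ua.poll_answer_id AND ua.active = 'Y';

    -- One row per user_id, keeping the first poll_id in sort order:
    SELECT DISTINCT ON (ua.user_id) ua.user_id, pa.poll_id
    FROM user_answers ua, poll_answers pa
    WHERE pa.poll_answer_id = ua.poll_answer_id AND ua.active = 'Y'
    ORDER BY ua.user_id, pa.poll_id;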
One option is to write a Python translator to create CSV files, or even
an uploader to go directly from the Excel files to the database. There
is at least one module to read Excel files, in all their complexity:
http://cheeseshop.python.org/pypi/xlrd/0.5.2
and a number of Postgres modules:
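Whichever converter is used, the load side stays simple: once each sheet is a CSV file, COPY pulls it in, and leaving event_id out of the column list lets a serial/sequence default assign database-wide ids in place of the per-file ones (table and column names hypothetical):

    COPY events (event_time, event_type, event_value)
    FROM '/tmp/sheet1.csv' WITH CSV HEADER;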
So far I have only got this to work with the Postgres SDBC driver:
http://dba.openoffice.org/drivers/postgresql/index.html
1) Open the Data Source (F4) window in the spreadsheet.
2) Make a connection to the database. I usually do this by opening a table.
This is fairly important, otherwise when
Hello,
I have a PostgreSQL (8.1) installation for testing purposes
which has been running fine for several months now (Windows XP). I
was working with it yesterday, and today after booting my
computer and restarting the service (I'm starting the service
manually, because I don't need the