[EMAIL PROTECTED] (scott.marlowe) wrote in message news:[EMAIL PROTECTED]...
For quite some time. I believe the max table size of 32 TB was in effect
as far back as 6.5 or so. It's not some new thing. Now, the 8k row
barrier was broken with 7.1. I personally found the 8k row size barrier
I need to insert a Blob into a table without using the
org.postgresql.largeobject.* classes, because the db pool I'm
using (Resin) doesn't allow me to cast
((org.postgresql.PGConnection)db) to get access to the Postgres
LargeObject API.
// Generates ClassCastException.
LargeObjectManager lom
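When the connection can't be cast to PGConnection, one alternative that avoids the LargeObject API entirely is a bytea column, which works through plain JDBC (PreparedStatement.setBytes / ResultSet.getBytes). A minimal sketch of the schema side; the table and column names here are invented for illustration:

```sql
-- Hypothetical table: a bytea column stores the binary data inline,
-- so no org.postgresql.largeobject.* classes are needed on the client.
CREATE TABLE images (
    id    serial PRIMARY KEY,
    name  text,
    data  bytea    -- written from JDBC via PreparedStatement.setBytes()
);

-- From psql, small binary values can also be inserted with
-- octal-escaped literals (7.x-era escape syntax shown):
INSERT INTO images (name, data) VALUES ('test', '\\001\\002\\003');
```

Note the trade-off: bytea rows are read and written whole, while large objects support streaming, so for very big blobs the memory cost may matter.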
Hi all,
I'm having a problem connecting VB6 to pgsql-7.3.1 using psqlodbc.
I have a simple select (select * from mytable where pkey = 1;) on a 6-row
table; when I EXPLAIN ANALYZE it, it reports 63 ms (milliseconds), but there are
13 seconds between when I order the execution of the select and it
Greetings,
I have 2 identical queries. One of them finishes fast, the other takes ages.
There are a couple of triggers that run on update. I want to know where
exactly the second query spends its time.
Is there any way to increase debug logging from the psql prompt
so that I can see what's going on behind the scenes?
Regds
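For questions like this, a few session-level settings from the psql prompt can help narrow down where the time goes. A sketch (exact availability of each setting varies by server version, and some may require superuser rights; the UPDATE shown is an invented example):

```sql
-- Make the server chattier for this session only:
SET client_min_messages = debug1;

-- The most direct tool: run the slow statement under EXPLAIN ANALYZE
-- to get actual per-node execution times. Note that on older servers
-- time spent inside triggers is not broken out separately, so a large
-- gap between the plan total and wall-clock time points at triggers.
EXPLAIN ANALYZE
UPDATE mytable SET counter = counter + 1 WHERE pkey = 1;
```

Comparing the EXPLAIN ANALYZE output of the fast and slow query side by side usually shows which plan node (or trigger) accounts for the difference.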
[EMAIL PROTECTED] (Tony Reina) writes:
The PostgreSQL limitations on the users' page
(http://www.postgresql.org/users-lounge/limitations.html) still says
that tables are limited to 16 TB, not 32 TB.
Perhaps it should be updated?
There was some concern at the time it was written as to whether
G'day ...
I've got a script that runs on all the servers that dumps IP traffic
data to a 7.4 database ... they all run at the same time, but I'm starting
to get the following on a reasonably regular basis:
ERROR: deadlock detected at /usr/local/abin/ipaudit2ams.pl line 175.
The code that
Marc G. Fournier [EMAIL PROTECTED] writes:
Now, the scripts are wrap'd in a BEGIN/END ... if a file fails to be
loaded, I want the whole thing to rollback ... the deadlock itself, I'm
presuming, is because two servers are trying to update the same
$ip_id/$port/$company_id record, at the same
Can I use the pg_dump from 7.4.1 on 7.4.2?
I tried copying a couple of libraries that it needed, and I can
copy them one by one as it looks for them, but I am not sure how many it needs in
all for the new version to work.
Purpose is to dump/restore a pretty good-sized db (19GB) on
7.2.4,
On Mon, 5 Apr 2004, Tom Lane wrote:
Marc G. Fournier [EMAIL PROTECTED] writes:
Now, the scripts are wrap'd in a BEGIN/END ... if a file fails to be
loaded, I want the whole thing to rollback ... the deadlock itself, I'm
presuming, is because two servers are trying to update the same
Anjan Dave wrote:
Purpose is to dump/restore a pretty good-sized db (19GB) on 7.2.4, so
I can upgrade it to 7.4.1.
pg_dump should be able to dump databases back to about 7.1, so you
should be good to go. As always, if you have problems, we would like
to hear about it.
By the way, the latest
[EMAIL PROTECTED] (Jaime Casanova) asked:
so, the real question is what is the best filesystem for optimal speed
in postgresql?
The smart-alec answer would be... Veritas, of course!
But seriously, it depends on many factors you have not provided
information about.
- Different operating
After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] (Jaime Casanova) belched
out:
Can you tell me (or at least guide me to a place where I can find the
answer) what are the benefits of filesystems over raw devices?
For PostgreSQL, filesystems have the merit that you can actually use
Hi All,
Any nice docs that cover a complete Linux/Windows installation?
Thanks
Suresh A.
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match
What are your recommendations for code editing in PostgreSQL?
I'm a student at Regis University in Denver, CO.
Thanks
Mark Bross
13565 Detroit St.
Thornton, Co. 80241
Email: [EMAIL PROTECTED]
Ph: 303-252-9255
Fax: 303-252-9556
On Mon, 5 Apr 2004, Tom Lane wrote:
Marc G. Fournier [EMAIL PROTECTED] writes:
D'oh ... just tested my assumption, it was wrong ... *sigh* okay, back
to the drawing board on the code ...
Can't you just change
foreach $company_id ( keys %traffic ) {
to
foreach $company_id ( sort keys %traffic ) {
Hi All,
Can anyone suggest a good URL for Linux and Windows, covering download
through configuration?
Thanks in advance.
Suresh
At some point the PostgreSQL ODBC driver issues a command like this (not
questioning here how useful or correct it might be):
select * from table1 where ctid = '(,)';
This command works (?), returning zero rows without error, even from
psql,
when the db is hosted in the following systems:
-
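For comparison, a well-formed ctid literal names a (block, offset) pair identifying a row's physical location; a sketch of what the driver presumably meant to issue (table name taken from the quoted command, the tid values are illustrative):

```sql
-- Inspect actual ctid values for some rows:
SELECT ctid, * FROM table1 LIMIT 3;

-- A syntactically complete ctid comparison looks like this:
SELECT * FROM table1 WHERE ctid = '(0,1)';
```

The '(,)' form with both fields empty is what the error-versus-zero-rows discrepancy between platforms hinges on.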
Avner [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
I need some information to understand the impact of database size
on
performance.
CUT
1. Is there any impact?
In queries that make use of the data, yes.
2. Does one very large table impact the performance of the
Bradley Kieser wrote:
No, it isn't. Oracle is expensive but it is also the Rolls Royce, it
seems. I am a strictly OpenSource man so I don't really get into the
pricing thing, but I do know that it is also deal-by-deal and depending
on who and what you are, the prices can vary. E.g. Educational
Avner ([EMAIL PROTECTED]) writes:
I need some information to understand the impact of database
size on performance.
Few questions :
1. Is there any impact?
Maybe. Maybe not. It depends on whether you query the large table, and not
least on how you query it, and what indexes you
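To make the index point concrete: with a suitable index, query time depends mostly on how many rows match, not on how big the table has grown. A hedged sketch (table and column names are invented):

```sql
-- Without an index, this forces a sequential scan whose cost grows
-- with the table's total size:
EXPLAIN ANALYZE SELECT * FROM big_table WHERE customer_id = 42;

-- With an index, the planner can touch only the matching rows,
-- so the table's overall size matters far less:
CREATE INDEX big_table_customer_idx ON big_table (customer_id);
ANALYZE big_table;
EXPLAIN ANALYZE SELECT * FROM big_table WHERE customer_id = 42;
```

Comparing the two EXPLAIN ANALYZE outputs (seq scan vs. index scan) is the quickest way to see whether size is actually hurting a given query.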
I recently built my RH9 system and updated the binary RPMs to 7.4.2.
I am new to PostgreSQL, but I think I understand the PDF documentation
provided. However, I couldn't find any documentation on starting with
the binary RPMs. Instead, the documentation seems to be bent on
building postgresql
Marc G. Fournier [EMAIL PROTECTED] writes:
D'oh ... just tested my assumption, it was wrong ... *sigh* okay, back
to the drawing board on the code ...
Can't you just change
foreach $company_id ( keys %traffic ) {
to
foreach $company_id ( sort keys %traffic ) {
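The reason sorting the keys helps: a deadlock requires two sessions to take row locks in opposite orders. A sketch of the bad interleaving (table and column names are invented; the source only shows the Perl side):

```sql
-- Session A:
BEGIN;
UPDATE traffic SET bytes = bytes + 10 WHERE company_id = 1;  -- locks row 1

-- Session B, concurrently:
BEGIN;
UPDATE traffic SET bytes = bytes + 20 WHERE company_id = 2;  -- locks row 2

-- Session A:
UPDATE traffic SET bytes = bytes + 10 WHERE company_id = 2;  -- blocks on B

-- Session B:
UPDATE traffic SET bytes = bytes + 20 WHERE company_id = 1;  -- blocks on A
-- => deadlock detected; one transaction is rolled back.

-- If both sessions update in sorted company_id order instead, one simply
-- waits for the other to commit and no lock cycle can form.
```

Sorting the hash keys in Perl imposes exactly that consistent lock order across all the servers running the script.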
etc.
I need some information to understand the impact of database size on
performance.
Few questions :
1. Is there any impact?
2. Does one very large table impact the performance of the whole DB,
meaning the overall performance and access to the other tables?
3. What are the ways to
Hi!
My name is Mariusz Wojtkiewicz;
I'm from Poland.
In my company we use MS SQL Server 2000 as our database.
Soon we will need a new database in another office.
Our supplier suggests we use the free database PostgreSQL
running on free Linux.
All is fine and beautiful, but there is one problem:
We will need
I am having some trouble restoring the data back on 7.4.1 (made from
pg_dump on 7.2.4), that's the reason I would like to try dumping using
the pg_dump version of 7.4.1.
I had read in a post somewhere by Tom Lane that the pg_dump of 7.4.1 can
be used to dump data on earlier versions, which is
After a long battle with technology, [EMAIL PROTECTED] (Andrew Biagioni), an
earthling, wrote:
Can anyone recommend an editor (windows OR linux) for writing plpgsql
code, that might be friendlier than a standard text editor?
Nice features I can think of might be:
- smart tabbing (1 tab = N
1. Is there any impact?
2. Does one very large table impact the performance of the whole DB,
meaning the overall performance and access to the other tables?
3. What are the ways to reduce the impact of the DB size on DB
performance?
I've run into 2 problems with a very large DB.
A)
Colleagues,
What is the internal difference between an implicit sequence (created
automatically by the serial data type) and an explicit sequence
(created manually)?
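For reference, a sketch of the two forms side by side; a serial column is essentially shorthand for the explicit version, plus a recorded dependency (behavioral details vary across 7.x releases):

```sql
-- Implicit: serial creates the sequence and the column default for you.
CREATE TABLE t1 (id serial PRIMARY KEY);

-- Roughly equivalent explicit form:
CREATE SEQUENCE t2_id_seq;
CREATE TABLE t2 (
    id integer PRIMARY KEY DEFAULT nextval('t2_id_seq')
);

-- The main internal difference is the dependency: the implicit sequence
-- is recorded as belonging to its column, so DROP TABLE t1 removes it
-- automatically, while the explicit t2_id_seq survives DROP TABLE t2.
```

Other than that dependency tracking (and the auto-generated name), the implicit sequence is an ordinary sequence.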
I think I have hit something that can qualify as a bug. How to
reproduce:
== cut here =
I am planning on running PostgreSQL on a Linux box. I will write an
application using VB.NET. I am already able to connect using the Npgsql
provider to my local database running on Windows (Cygwin).
Will it be possible for me to connect to the Linux box using the windows
authentication
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] (Bradley
Kieser) transmitted:
I think as far as PG storage goes you're really on a losing streak
here because PG clustering really isn't going to support this across
multiple servers. We're not even close to the mark as
Hi Team,
Any good URL that talks about Linux and Windows installation?
Thanks in advance
suresh a.
1. a traffic table is read in, and loaded into a hash table that is
ordered by company_id, ip_id and port:
$traffic{$ip_rec{$ip}{'company_id'}}{$ip_id}{$port} += $bytes1 + $bytes2;
2. a foreach loop is run on that resultant list to do the updates to the
database:
foreach $company_id ( keys %traffic ) {
On Fri, Apr 02, 2004 at 03:32:27PM +, Bricklen wrote:
Anyway, as they say, "You get what you pay for."
This has not been my experience at all. The correlation between
software price and quality looks to me to be something very close to
random.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
Hi all,
I have a slave disk with an old PostgreSQL installation. Now I want to
migrate its information to my new primary disk with a new PGSQL
installation. I have to do it this way because the old disk does not boot as
primary any more. It prints a kernel error.
Can somebody help me?
ever try www.postgresql.org?
--( Forwarded letter 1 follows )-
Date: Fri, 02 Apr 2004 01:26:56 +0530
To: [EMAIL PROTECTED]
From: [EMAIL PROTECTED]
Sender: [EMAIL PROTECTED]
Subject: [ADMIN] Installation Docs for Linux and Windows
Hi Team,
Any good url which
D'oh ... just tested my assumption, it was wrong ... *sigh* okay, back
to the drawing board on the code ...
On Mon, 5 Apr 2004, Marc G. Fournier wrote:
On Mon, 5 Apr 2004, Tom Lane wrote:
Marc G. Fournier [EMAIL PROTECTED] writes:
Now, the scripts are wrap'd in a BEGIN/END ... if a
That appears to have fixed it, thanks ... at least it hasn't happened in a
few hours, and it was happening at least once an hour previously ...
On Mon, 5 Apr 2004, Matt Clark wrote:
1. a traffic table is read in, and loaded into a hash table that is
ordered by company_id, ip_id and
On Mon, 5 Apr 2004, Gastón Simone wrote:
Hi all,
I have a slave disk with an old PostgreSQL installation. Now I want to
migrate its information to my new primary disk with a new PGSQL
installation. I have to do it this way because the old disk does not boot as
primary any more. It
No point in beating a dead horse (other than the sheer joy of the thing), since
postgres does not have raw device support, but ...
raw devices, at least on solaris, are about 10 times as fast as cooked file systems
for Informix. This might still be a gain for postgres' performance, but the
On Fri, 2 Apr 2004, Mark Bross wrote:
What are your recommendations for code editing in PostgreSQL?
I'm a student at Regis University in Denver, CO.
Do you mean for editing the backend code itself, stylewise, or do you mean
for editing your own code, like plpgsql functions?
I'll assume you mean
Marc G. Fournier [EMAIL PROTECTED] wrote:
On Mon, 5 Apr 2004, Tom Lane wrote:
Marc G. Fournier [EMAIL PROTECTED] writes:
D'oh ... just tested my assumption, it was wrong ... *sigh* okay, back
to the drawing board on the code ...
Can't you just change
foreach $company_id (
[EMAIL PROTECTED] (Bud Curtis) wrote:
[snip]
When I attempt to start the server, ie.:
-bash-2.05b$ postmaster -D /usr/local/pgsql/data
FATAL: /usr/local/pgsql/data is not a valid data directory
DETAIL: File /usr/local/pgsql/data/PG_VERSION is missing.
-bash-2.05b$
[snip]