On Sat, Nov 22, 2008 at 6:35 AM, David [EMAIL PROTECTED] wrote:
I am trying to use pgAdmin 1.8.4 to edit the pg_hba.conf file on a
PostgreSQL 8.3 database running on Ubuntu 8.10. I get the following
error message:
An error has occurred:
ERROR: absolute path not allowed
CONTEXT: SQL
On Friday 21 November 2008 19:10:45 Tom Lane wrote:
Yeah, I think this is most probably explained by repeat postings
of successive versions of large patches. Still, Ron might be on to
something. I had not considered message lengths in my previous
numbers ...
Also consider that since we
Dave Page wrote:
On Sat, Nov 22, 2008 at 6:35 AM, David [EMAIL PROTECTED] wrote:
I am trying to use pgAdmin 1.8.4 to edit the pg_hba.conf file on a
PostgreSQL 8.3 database running on Ubuntu 8.10. I get the following
error message:
An error has occurred:
ERROR: absolute path not allowed
Alvaro Herrera wrote:
Sam Mason wrote:
the following has links to more:
http://markmail.org/search/?q=list:org.postgresql
Wow, the Spanish list is the 3rd in traffic after hackers and general!
yeah and that tom lane guy sent over 77000(!!!) mails to the lists up to
now ...
Stefan
Ron Mayer wrote:
Joshua D. Drake wrote:
On Fri, 2008-11-21 at 08:18 -0800, Ron Mayer wrote:
Bruce Momjian wrote:
Tom Lane wrote:
... harder to keep
up with the list traffic; so something is happening that a simple
volume count doesn't capture.
If measured in bytes of the gzipped
Bruce Momjian wrote:
Ron Mayer wrote:
Joshua D. Drake wrote:
On Fri, 2008-11-21 at 08:18 -0800, Ron Mayer wrote:
Bruce Momjian wrote:
Tom Lane wrote:
... harder to keep
up with the list traffic; so something is happening that a simple
volume count doesn't capture.
If measured in bytes of
Michelle, I don't think the list is going to change its operations for
one disgruntled user. Since you seem unwilling or unable to employ
the advice already given, maybe your only acceptable option is to
unsubscribe from the list. At least that would eliminate much of the
noise that currently
You did not understand the problem
1) You did not explain the problem.
2) It's your problem, not ours. Your demand that thousands of other people
adapt to your unusual problem is absurdly self-absorbed. Get decent email
service, then subscribe from there. (Or go away. Either works for
I am running Postgres (8.2.11) on Windows.
I have 2 tables, one with users and one with locations.
user_table
----------
user_id   user_code   price    value
1                     245.23   -97.82
2         3            42.67   -98.32
3
I am using Postgresql to store all my research related data. At the
moment I am just finishing my PhD thesis and I want to cite postgresql
correctly but can't find how to do it. Could somebody give me some advice?
Many thanks
tomas
--
Sent via pgsql-general mailing list
Hi,
When I call pgsql procedure in Postgres 8.3 from pgAdmin III
everything is printed in one line:
WARNING: Odczytany token'+48'WARNING: v_processed_strb-
zWARNING: Odczytany tokenb-zWARNING: v_processed_strWARNING:
Odczytany tokenWARNING: v_processed_str
Całkowity czas wykonania ("Total execution time")
I am using plpythonu on Linux, version postgresql-plpython-8.2.9-1.fc7.
Consider a python class called Wibble which is written into a python
module wibble.py.
I wish to use Wibble from within a plpythonu procedure.
If I simply do:
from wibble import Wibble
then I am told the object was not
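A common workaround for this is to extend sys.path inside the function before importing, since the backend's Python interpreter does not know where your module lives. A minimal sketch, assuming wibble.py sits in a directory the server process can read (the path and function name below are hypothetical):

```sql
CREATE FUNCTION use_wibble() RETURNS text AS $$
    # hypothetical path; adjust to wherever wibble.py actually lives
    import sys
    sys.path.append('/usr/local/share/pg_modules')
    from wibble import Wibble
    return Wibble.__name__
$$ LANGUAGE plpythonu;
```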
Hi All,
I need to connect to a version 7.1 PostgreSQL database. Unfortunately, I
cannot get the 7.1.3 source to compile. configure gives this error:
checking types of arguments for accept()... configure: error: could not
determine argument types
I'm not posting to pgsql-bugs because I
Hi all,
I'm developing a little tool in Java that manages database updates through
external text files.
In this tool there is an option that lets the user accept a defined number
of errors during the update and still save the well-formed data.
To do this I start commitment control when the
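The tolerate-N-errors behavior described here is usually built on savepoints, so a single failing statement can be rolled back without abandoning the whole transaction. A minimal SQL sketch (table and column names are assumptions, not from the message):

```sql
BEGIN;
INSERT INTO target (id, val) VALUES (1, 'ok');
SAVEPOINT row2;
INSERT INTO target (id, val) VALUES (1, 'dup');  -- fails on the unique key
ROLLBACK TO SAVEPOINT row2;                      -- discard only the failed row
INSERT INTO target (id, val) VALUES (2, 'ok');
COMMIT;                                          -- the two good rows survive
```

From Java, the same pattern maps onto JDBC's Connection.setSavepoint() and Connection.rollback(Savepoint).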
On 21 Nov, 13:50, [EMAIL PROTECTED] (Ciprian Dorin Craciun)
wrote:
Hello all!
I would like to ask for some advice about the following problem
(related to the Dehems project: http://www.dehems.eu/):
* there are some clients; (the clients are in fact households;)
* each device has
On Fri, Nov 21, 2008 at 3:12 PM, Michal Szymanski [EMAIL PROTECTED] wrote:
On 21 Nov, 13:50, [EMAIL PROTECTED] (Ciprian Dorin Craciun)
wrote:
Hello all!
I would like to ask for some advice about the following problem
(related to the Dehems project: http://www.dehems.eu/):
* there are
(I'm adding the discussion also to the Postgres list.)
On Fri, Nov 21, 2008 at 11:19 PM, Dann Corbit [EMAIL PROTECTED] wrote:
What is the schema for your table?
If you are using copy rather than insert, 1K rows/sec for PostgreSQL seems
very bad unless the table is extremely wide.
The
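Dann's point about COPY can be sketched as follows, from psql (the table and column names are assumptions; COPY streams rows in bulk and avoids per-INSERT parse and plan overhead):

```sql
COPY readings (client, sensor, ts, value) FROM STDIN;
1	1	2008-11-22 10:00:00	20.5
2	1	2008-11-22 10:00:00	19.8
\.
```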
On 22/11/2008 04:33, Michael Thorsen wrote:
select count(*)
from user_table u, locations l
where u.user_code = l.code
and u.price = l.price
and u.value = l.value;
The answer to this should be 2, but when I run my query I get 4 (in fact
Are you sure that's the query that's
On 22/11/2008 16:07, Michael Thorsen wrote:
For the most part yes. The price and value were real columns,
otherwise the rest of it is the same. On a small data set I seem to get
That's almost certainly the problem, so - rounding errors are causing
the equality test in the join to fail. You
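If the columns really are real, a tolerance-based join sidesteps the rounding mismatch. A sketch (the 0.005 epsilon is an assumption; casting both sides to numeric is the other common fix):

```sql
SELECT count(*)
FROM user_table u
JOIN locations l ON u.user_code = l.code
    AND abs(u.price - l.price) < 0.005   -- tolerate float rounding
    AND abs(u.value - l.value) < 0.005;
```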
Ciprian Dorin Craciun wrote:
I would try it if I knew it could handle the load... Do
you have some info about this? Any pointers about the configuration
issues?
Ciprian.
Apart from the configure options at build time you should read -
http://www.sqlite.org/pragma.html
Michael Thorsen [EMAIL PROTECTED] writes:
... I gave a simple example above, but the query runs over 2 tables
with about a million entries in each. So I am unable to verify what is
wrong, but I know the count is incorrect as I should not have more than what
is in the user_table.
You could
I have a table in 8.1.4 which tracks users logged into the db
CREATE TABLE session
(
workplace character(16) NOT NULL,
ipaddress character(20),
logintime character(28),
loggeduser character(10),
CONSTRAINT session_pkey PRIMARY KEY (workplace)
);
Commands executed at logon in same transaction
On Sat, Nov 22, 2008 at 8:04 PM, Shane Ambler [EMAIL PROTECTED] wrote:
Ciprian Dorin Craciun wrote:
I would try it if I knew it could handle the load... Do
you have some info about this? Any pointers about the configuration
issues?
Ciprian.
Apart from the configure
On Sat, Nov 22, 2008 at 2:37 PM, Ciprian Dorin Craciun
[EMAIL PROTECTED] wrote:
Hello all!
SNIP
So I would conclude that relational stores will not make it for
this use case...
I was wondering if you guys are having to do all individual inserts or if
you can batch some number together into
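Batching here could mean multi-row VALUES lists (supported since 8.2), which amortize per-statement overhead across many rows. A sketch with assumed table and column names:

```sql
INSERT INTO readings (client, sensor, ts, value) VALUES
    (1, 1, '2008-11-22 10:00:00', 20.5),
    (1, 2, '2008-11-22 10:00:00', 21.1),
    (2, 1, '2008-11-22 10:00:00', 19.8);
```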
On Sat, Nov 22, 2008 at 11:51 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
On Sat, Nov 22, 2008 at 2:37 PM, Ciprian Dorin Craciun
[EMAIL PROTECTED] wrote:
Hello all!
SNIP
So I would conclude that relational stores will not make it for
this use case...
I was wondering if you guys are
On Sat, Nov 22, 2008 at 4:54 PM, Ciprian Dorin Craciun
[EMAIL PROTECTED] wrote:
On Sat, Nov 22, 2008 at 11:51 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
On Sat, Nov 22, 2008 at 2:37 PM, Ciprian Dorin Craciun
[EMAIL PROTECTED] wrote:
Hello all!
SNIP
So I would conclude that relational
Ciprian Dorin Craciun wrote:
I've tested Sqlite3 as well and it has the same behavior as
Postgres... Meaning at the beginning it goes really nicely at 20k inserts,
drops to about 10k inserts, but after a few million records, the HDD
LED starts to blink non-stop, and then it drops to under 1k
On 21 Nov, 13:50, [EMAIL PROTECTED] (Ciprian Dorin Craciun)
wrote:
What have I observed / tried:
* I've tested without the primary key and the index, and the
results were the best for inserts (600k inserts / s), but the
readings worked extremely slowly (due to the lack of
Is there a datatype in postgres that will automatically update the date when
the row is updated? I know I can do a timestamp and set the default to
now() but once the row is inserted, and then edited, I want the column
updated without editing my application code or adding a trigger. Is this
Alvaro Herrera [EMAIL PROTECTED] writes:
Richard Huxton wrote:
Some of the EXPLAINs on the performance list are practically impossible
to read unless you've got the time to cut+paste and fix line-endings.
Maybe we should start recommending people to post those via
Tom Lane [EMAIL PROTECTED] writes:
Bruce Momjian [EMAIL PROTECTED] writes:
Tom Lane wrote:
So, to a first approximation, the PG list traffic has been constant
since 2000. Not the result I expected.
I also was confused by its flatness. I am finding the email traffic
almost impossible to
Alvaro Herrera [EMAIL PROTECTED] writes:
The problem is, most likely, on updating the indexes. Heap inserts
should always take more or less the same time, but index insertion
requires walking down the index struct for each insert, and the path to
walk gets larger the more data you have.
It's
blackwater dev [EMAIL PROTECTED] writes:
Is there a datatype in postgres that will automatically update the date when
the row is updated?
No, and it's conceptually impossible to make that happen at the datatype
level. Use a trigger.
regards, tom lane
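The trigger approach Tom recommends looks roughly like this (table and column names are placeholders):

```sql
CREATE OR REPLACE FUNCTION touch_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();   -- overwrite the column on every UPDATE
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_updated_at
    BEFORE UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE touch_updated_at();
```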
CREATE TABLE TableName (
    ColumnName DATE DEFAULT CURRENT_DATE
);
ALTER TABLE TableName
ALTER COLUMN ColumnName { SET DEFAULT newdefaultvalue | DROP DEFAULT };
HTH
Martin
Andrus [EMAIL PROTECTED] writes:
I have a table in 8.1.4 which tracks users logged into the db
There have been a number of index-corruption bugs fixed since 8.1.4 ...
In particular, if it's possible that any of these clients abort before
committing these insertions, the vacuum race condition bug
Since you always need the timestamp in your selects, have you tried indexing
only the timestamp field?
Your selects would be slower, but since client and sensor don't have that many
distinct values compared to the number of rows you are inserting maybe the
difference in selects would not be
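The suggestion amounts to maintaining one narrow single-column index per insert instead of a wide composite one. A sketch with assumed table and column names:

```sql
-- one narrow index to update per insert, instead of (client, sensor, ts)
CREATE INDEX readings_ts_idx ON readings (ts);
```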
On Sat, Nov 22, 2008 at 5:54 PM, Scara Maccai [EMAIL PROTECTED] wrote:
Since you always need the timestamp in your selects, have you tried indexing
only the timestamp field?
Your selects would be slower, but since client and sensor don't have that
many distinct values compared to the number
Hello,
searched documentation, FAQ and mailing list archives
(mailing list archive search is voluminous :-) )
but could not find an answer:
I would like to be able to update
several rows to different values at the same time
In Oracle this used to be called an array update or
'collect' update or
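In PostgreSQL the usual equivalent is UPDATE ... FROM with a VALUES list, setting each matched row to its own new value in a single statement. A sketch (table and column names are placeholders):

```sql
UPDATE accounts AS a
SET balance = v.balance
FROM (VALUES (1, 100.00),
             (2, 250.50)) AS v(id, balance)
WHERE a.id = v.id;
```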
Magnus Hagander wrote:
Bruce Momjian wrote:
Ron Mayer wrote:
Joshua D. Drake wrote:
On Fri, 2008-11-21 at 08:18 -0800, Ron Mayer wrote:
Bruce Momjian wrote:
Tom Lane wrote:
... harder to keep
up with the list traffic; so something is happening that a simple
volume count doesn't
On Sun, Nov 23, 2008 at 1:02 AM, Tom Lane [EMAIL PROTECTED] wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
The problem is, most likely, on updating the indexes. Heap inserts
should always take more or less the same time, but index insertion
requires walking down the index struct for each
On Sun, Nov 23, 2008 at 12:26 AM, Alvaro Herrera
[EMAIL PROTECTED] wrote:
Ciprian Dorin Craciun wrote:
I've tested Sqlite3 as well and it has the same behavior as
Postgres... Meaning at the beginning it goes really nicely at 20k inserts,
drops to about 10k inserts, but after a few million records,
On Sun, Nov 23, 2008 at 3:09 AM, Scott Marlowe [EMAIL PROTECTED] wrote:
On Sat, Nov 22, 2008 at 5:54 PM, Scara Maccai [EMAIL PROTECTED] wrote:
Since you always need the timestamp in your selects, have you tried indexing
only the timestamp field?
Your selects would be slower, but since client
On Sun, Nov 23, 2008 at 12:32 AM, Alvaro Herrera
[EMAIL PROTECTED] wrote:
On 21 Nov, 13:50, [EMAIL PROTECTED] (Ciprian Dorin Craciun)
wrote:
What have I observed / tried:
* I've tested without the primary key and the index, and the
results were the best for inserts (600k inserts
43 matches