On Tue, Dec 06, 2005 at 03:51:17PM +0800, Christopher Kings-Lynne wrote:
PL/SQL or PL/PGSQL...
Sheesh, arbitrary restrictions ;-) Something like this then:

CREATE FUNCTION inet2num(inet) RETURNS numeric AS $$
DECLARE
    a text[] := string_to_array(host($1), '.');
BEGIN
    RETURN a[1]::numeric * 16777216 +
           a[2]::numeric * 65536 +
           a[3]::numeric * 256 +
           a[4]::numeric;
END;
$$ LANGUAGE plpgsql;
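For anyone wanting to sanity-check the dotted-quad arithmetic outside the database, here is the same computation in Python (an illustrative sketch, IPv4 only):

```python
# Same arithmetic as the PL/pgSQL inet2num above: each octet of a
# dotted-quad IPv4 address is weighted by a power of 256
# (16777216 = 256^3, 65536 = 256^2).
def inet2num(addr):
    a = [int(part) for part in addr.split('.')]
    return a[0] * 16777216 + a[1] * 65536 + a[2] * 256 + a[3]

print(inet2num('192.168.1.1'))  # 3232235777
```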
On Tue, Dec 06, 2005 at 01:05:12AM -0700, Michael Fuhr wrote:
On Tue, Dec 06, 2005 at 04:10:22PM +0800, Christopher Kings-Lynne wrote:
2005/12/4, Andrew Dunstan [EMAIL PROTECTED]:
Tom said:
Would it work to modify c.h so that it #include's libintl.h, then #undefs
these macros, then #includes port.h to define 'em the way we want?
Some or all of this might need to be #ifdef WIN32, but that seems like
a reasonably noninvasive
2005/12/6, Nicolai Tufar [EMAIL PROTECTED]:
IIRC last time I tried this it didn't work too well ;-( I will have
another go. I think it's the best way to go.
Very well, I will try to put up a patch to implement it in a couple of days.
Oh boy, it is already fixed. Sorry folks, my error.
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bruce Momjian
Sent: 06 December 2005 04:40
To: Andrew Dunstan
Cc: Tom Lane; [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]; pgsql-hackers@postgresql.org
Subject: Re: [PATCHES] [HACKERS]
But, wouldn't the performance be better? And wouldn't asynchronous messages be better processed?
Thanks for replies,
Gustavo.

2005/12/6, Jan Wieck [EMAIL PROTECTED]:
On 12/5/2005 8:18 PM, Gustavo Tonini wrote:
replication (master/slave, multi-master, etc) implemented inside postgres... I would like
Tom Lane wrote:
Please test ...
Well, if you look here you'll see a bunch of Turkish messages, because I
forgot to change the locale back ;-)
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lorisdt=2005-12-06%2011:57:20
Which raises another question: can we force the locale on
On Tue, 2005-12-06 at 10:03 -0200, Gustavo Tonini wrote:
But, wouldn't the performance be better? And wouldn't asynchronous
messages be better processed?
At least for synchronous multi-master replication, the performance
bottleneck is going to be the interconnect between the nodes -
On Sat, 2005-12-03 at 09:21, Simon Riggs wrote:
First off, I think we need to do some more work on partitioning so that
some knowledge about the union set is understood by the optimizer. At
the moment there is no concept of partition key, so its hard to spot
when two union
Andrew Dunstan [EMAIL PROTECTED] writes:
Which raises another question: can we force the locale on Windows, or
are we stuck with the locale that the machine is set to? But maybe that
belongs in another thread.
I thought we'd put in some sort of no-locale switch specifically for
the buildfarm
Actually, scratch that - I'm wrong... It appeared separately from the
other DROP commands...
Chris
Christopher Kings-Lynne wrote:
Hi,
Playing around with this MySQL compatibility library, I noticed that
pg_dump -c does not emit DROP commands for casts. Seems like a bug...?
Chris
On 12/6/2005 8:10 AM, Markus Schiltknecht wrote:
On Tue, 2005-12-06 at 10:03 -0200, Gustavo Tonini wrote:
But, wouldn't the performance be better? And wouldn't asynchronous
messages be better processed?
At least for synchronous multi-master replication, the performance
bottleneck is going
Hello Jan,
On Tue, 2005-12-06 at 10:10 -0500, Jan Wieck wrote:
We need a general API. It should be possible to define on a per-database
level which shared replication module to load on connect. The init
function of that replication module then installs all the required
callbacks at
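The per-database module idea above can be sketched in a few lines. This is entirely hypothetical: every name below is invented for illustration and corresponds to no real PostgreSQL interface.

```python
# Hypothetical sketch of a pluggable replication API: each database is
# configured with a replication module, whose init function installs
# the callbacks it needs at connect time.
class ReplicationRegistry:
    def __init__(self):
        self._modules = {}    # database name -> module init function
        self.callbacks = {}   # event name -> installed callbacks

    def register_module(self, dbname, init_fn):
        """Configure which replication module a database loads on connect."""
        self._modules[dbname] = init_fn

    def install_callback(self, event, fn):
        """Called by a module's init function to hook replication events."""
        self.callbacks.setdefault(event, []).append(fn)

    def on_connect(self, dbname):
        """At connect time, let the configured module install its callbacks."""
        init_fn = self._modules.get(dbname)
        if init_fn is not None:
            init_fn(self)

def demo_module_init(registry):
    # A stand-in for a real module: hook transaction commit.
    registry.install_callback('commit', lambda txn: None)

registry = ReplicationRegistry()
registry.register_module('mydb', demo_module_init)
registry.on_connect('mydb')
print(sorted(registry.callbacks))  # ['commit']
```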
Andrew Dunstan wrote:
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
Which raises another question: can we force the locale on Windows,
or are we stuck with the locale that the machine is set to? But
maybe that belongs in another thread.
I thought we'd put in some sort
Does anyone else find this odd:
mysql=# select extract(microseconds from timestamp '2005-01-01
00:00:00.123');
 date_part
-----------
    123000
(1 row)
mysql=# select extract(microseconds from timestamp '2005-01-01
00:00:01.123');
 date_part
-----------
   1123000
(1 row)
No other extracts
Andrew Dunstan wrote:
I committed the pg_regress change back in Nov but didn't change
buildfarm to use it. And now I look at it more closely I think it
won't work. We have:
# locale
NOLOCALE :=
ifdef NO_LOCALE
NOLOCALE += --no-locale
endif
I think instead of the += line
OK, AndrewSN just pointed out that it's documented to work like that...
...still seems bizarre...
Chris
Christopher Kings-Lynne wrote:
[EMAIL PROTECTED] (Gustavo Tonini) writes:
But, wouldn't the performance be better? And wouldn't asynchronous
messages be better processed?
Why do you think performance would be materially affected by this?
The MAJOR performance bottleneck is normally the slow network
connection between
IMO this is not true. You can get affordable 10GBit network adapters, so you
can have plenty of bandwidth in a db server pool (if they are located in the
same area). Even 1GBit Ethernet greatly helps here, and would make it possible
to balance read-intensive (and not write intensive)
Christopher Kings-Lynne wrote:
OK, AndrewSN just pointed out that it's documented to work like that...
...still seems bizarre...
So it's a gotcha!
--
Alvaro Herrera                           http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
OK, AndrewSN just pointed out that it's documented to work like that...
...still seems bizarre...
It seems reasonably consistent to me. extract() doesn't consider
seconds and fractional seconds to be distinct fields: it's all one
value. The
Tom Lane wrote:
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
OK, AndrewSN just pointed out that it's documented to work like that...
...still seems bizarre...
It seems reasonably consistent to me. extract() doesn't consider
seconds and fractional seconds to be distinct fields: it's
Bruce Momjian pgman@candle.pha.pa.us writes:
Why aren't 'minutes' considered too? Because they aren't 'seconds'.
Well, seconds aren't microseconds either.
Yeah, they are: it's just one field. The other way of looking at it
(that everything is seconds) is served by extract(epoch).
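Tom's point can be restated in a couple of lines of Python (a model of the observed behavior, not PostgreSQL's implementation): the microseconds field is the whole seconds-and-fraction value scaled up, while "everything as seconds" is what extract(epoch) gives you.

```python
# extract(microseconds) scales the combined seconds+fraction field;
# this reproduces the two results Christopher found odd.
def pg_extract_microseconds(seconds_with_fraction):
    return round(seconds_with_fraction * 1_000_000)

print(pg_extract_microseconds(0.123))  # 123000  (from '...00:00:00.123')
print(pg_extract_microseconds(1.123))  # 1123000 (from '...00:00:01.123')
```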
On Fri, 2005-12-02 at 15:49 -0500, Greg Stark wrote:
Rod Taylor [EMAIL PROTECTED] writes:
In the extreme, no amount of added intelligence in the optimizer is going
to
help it come up with any sane selectivity estimate for something like
WHERE radius_authenticate(user) = 'OK'
Tom Lane wrote:
Hm ... between that, the possible crypto connection, and John's
personal
testimony that he actually uses PG for calculations in this range, I'm
starting to lean to the idea that we shouldn't cut the range.
Just to be clear, this John has yet to use NUMERIC for any
John D. Burger [EMAIL PROTECTED] writes:
Tom Lane wrote:
Hm ... between that, the possible crypto connection, and John's
personal testimony
Just to be clear, this John has yet to use NUMERIC for any
calculations, let alone in that range.
My mistake, got confused as to who had said what.
Tom Lane wrote:
John D. Burger [EMAIL PROTECTED] writes:
Tom Lane wrote:
Hm ... between that, the possible crypto connection, and John's
personal testimony
Just to be clear, this John has yet to use NUMERIC for any
calculations, let alone in that range.
My mistake, got confused
On Tue, 2005-12-06 at 16:12 +0200, Hannu Krosing wrote:
On Sat, 2005-12-03 at 09:21, Simon Riggs wrote:
First off, I think we need to do some more work on partitioning so that
some knowledge about the union set is understood by the optimizer. At
the moment there is no
I have added your suggestions to the 8.1.X release notes.
---
Paul Lindner wrote:
-- Start of PGP signed section.
On Sat, Dec 03, 2005 at 10:54:08AM -0500, Bruce Momjian wrote:
Neil Conway wrote:
On Wed, 2005-11-30 at
Bruce Momjian pgman@candle.pha.pa.us writes:
I have added your suggestions to the 8.1.X release notes.
Did you read the followup discussion? Recommending -c without a large
warning seems a very bad idea.
regards, tom lane
Tom Lane wrote:
Bruce Momjian pgman@candle.pha.pa.us writes:
I have added your suggestions to the 8.1.X release notes.
Did you read the followup discussion? Recommending -c without a large
warning seems a very bad idea.
Well, I said it would remove invalid sequences. What else should we
On 12/5/05, Hannu Krosing wrote:
Concurrent CREATE INDEX
Concurrent index NDX1 on table TAB1 is created like this:
1) start transaction. take a snapshot SNAP1
1.1) optionally, remove pages for TAB1 from FSM to force (?) all newer
inserts/updates to happen at end
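The snapshot bookkeeping in the steps above can be simulated with sets standing in for MVCC snapshots (a toy model only; locking, visibility rules, and the "do not use for lookups" flag are all ignored):

```python
# Toy simulation of the two-snapshot concurrent index build: index
# everything visible in SNAP1, let concurrent inserts happen, then
# catch up on rows that appeared between SNAP1 and SNAP2.
def build_index_concurrently(table, run_concurrent_inserts):
    snap1 = set(table)                      # 1) take snapshot SNAP1
    index = set(snap1)                      # 2) index everything in SNAP1
    table.update(run_concurrent_inserts())  # other sessions insert meanwhile
    snap2 = set(table)                      # 3) take snapshot SNAP2
    index.update(snap2 - snap1)             # catch up on rows added in between
    return index

table = {1, 2, 3}
index = build_index_concurrently(table, lambda: {4, 5})
print(sorted(index))  # [1, 2, 3, 4, 5]
```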
Postgres-R, pgcluster, Slony-II. Some more advanced, some less. But
certainly nothing I would send into the ring against Oracle-Grid.
Assuming that you mean Oracle Real Application Cluster (the Grid is more,
right?) I wonder if this technology technically still counts as replication.
Just like MySql!
On Dec 5, 2005, at 10:35 PM, Jan Wieck wrote:
On 12/5/2005 8:18 PM, Gustavo Tonini wrote:
replication (master/slave, multi-master, etc) implemented inside
postgres... I would like to know what has been made in this area.
We do not plan to implement replication inside the
- Asynchronous master to multi-slave. We have a few of those with
Mommoth-Replicator and Slony-I being the top players. Slony-I does
need some cleanup and/or reimplementation after we have a general
pluggable replication API in place.
Does this API actually have people working on it?
I don't see anything in the TODO list. I'm very interested in working on that, if possible...
Gustavo.
Jochem van Dieten [EMAIL PROTECTED] writes:
On 12/5/05, Hannu Krosing wrote:
3) record the index in pg_class, but mark it as do not use for lookups
in a new field. Take snapshot SNAP2. commit transaction.
What happens if another transaction takes a snapshot between SNAP2 and
the commit?
Bruce Momjian wrote:
Tom Lane wrote:
Bruce Momjian pgman@candle.pha.pa.us writes:
I have added your suggestions to the 8.1.X release notes.
Did you read the followup discussion? Recommending -c without a large
warning seems a very bad idea.
Well, I said it would remove invalid
On Tue, 2005-12-06 at 20:50, Jochem van Dieten wrote:
On 12/5/05, Hannu Krosing wrote:
Concurrent CREATE INDEX
Concurrent index NDX1 on table TAB1 is created like this:
1) start transaction. take a snapshot SNAP1
1.1) optionally,
On Tue, 2005-12-06 at 15:12, Tom Lane wrote:
Jochem van Dieten [EMAIL PROTECTED] writes:
On 12/5/05, Hannu Krosing wrote:
3) record the index in pg_class, but mark it as do not use for lookups
in a new field. Take snapshot SNAP2. commit transaction.
What happens if
Can someone comment on this?
---
Joshua D. Drake wrote:
Hello,
The below seems incorrect. If I am in the schema the behavior seems
correct. I can't see or select from the table.
However if I am not in the schema I am
I just found an email that took 5 days to be delivered. Looking at the
headers below, the holdup was between m2x.hub.org and postgresql.org.
Can someone take a look at the two boxes and see what's going on?
Also, would -www have been the better place for this? I'm not sure if
they handle email
Hannu Krosing [EMAIL PROTECTED] writes:
What I have in mind would be something like this to get both SNAP2 and
commit between any transactions:
LOOP:
LOCK AGAINST STARTING NEW TRANSACTIONS
I can hardly credit that let's block startup of ALL new transactions
is a more desirable
Hannu Krosing [EMAIL PROTECTED] writes:
Is it possible to release a lock without commit ?
Yes, but I don't see where that helps you here.
(To do any of this, you'd need to use the same kluge VACUUM does to hold
selected locks across a series of transactions. So in reality you'd
probably be
On Tue, 2005-12-06 at 15:38, Tom Lane wrote:
Hannu Krosing [EMAIL PROTECTED] writes:
What I have in mind would be something like this to get both SNAP2 and
commit between any transactions:
LOOP:
LOCK AGAINST STARTING NEW TRANSACTIONS
I can hardly credit that
Added to TODO:
* Add GUC variable to run a command on database panic or
smart/fast/immediate shutdown
---
Peter Eisentraut wrote:
Martijn van Oosterhout wrote:
3. Add either a GUC or a command line
Bruce Momjian wrote:
One nice solution would be if iconv would report the lines with
errors and you could correct them, but I see no way to do that. The
only thing you could do is to diff the old and new files to see the
problems. Is that helpful? Here is new text I have used:
I think this
On Tue, 2005-12-06 at 15:41, Tom Lane wrote:
Hannu Krosing [EMAIL PROTECTED] writes:
Is it possible to release a lock without commit ?
Yes, but I don't see where that helps you here.
(To do any of this, you'd need to use the same kluge VACUUM does to hold
selected
Nice, updated.
---
Peter Eisentraut wrote:
Bruce Momjian wrote:
One nice solution would be if iconv would report the lines with
errors and you could correct them, but I see no way to do that. The
only thing you
Hannu Krosing [EMAIL PROTECTED] writes:
1) run a transaction repeatedly, trying to hit a point of no concurrent
transactions,
In the sort of 24x7 environment that people are arguing this is needed
for, it's entirely possible that that will *never* succeed.
regards, tom
On Tue, 2005-12-06 at 16:01, Tom Lane wrote:
Hannu Krosing [EMAIL PROTECTED] writes:
1) run a transaction repeatedly, trying to hit a point of no concurrent
transactions,
In the sort of 24x7 environment that people are arguing this is needed
for, it's entirely possible
Bruce Momjian pgman@candle.pha.pa.us writes:
Can someone comment on this?
It's operating as designed. Schemas you don't have USAGE privilege on
are ignored if listed in your search path.
regards, tom lane
I set my locale to Turkish, then did initdb --no-locale. pg_controldata
is set up correctly, as is postgresql.conf, but messages still come out
in Turkish on the log file. So either we aren't doing it right or my
(modern) libintl is hijacking some more stuff. Same result for French,
so it's
I would classify it as a clustered database system (Oracle 10g that is).
Clustered meaning more than one node in the cluster.
ALy.
On Tue, 6 Dec 2005, Michael Meskes wrote:
Postgres-R, pgcluster, Slony-II. Some more advanced, some less. But
certainly nothing I would send into the
Magnus Hagander wrote:
On Sat, Dec 03, 2005 at 03:07:19PM -0800, Uwe C. Schroeder wrote:
the ctid seems to be the solution to my problem. I'm inserting the record in a
transaction so the ctid shouldn't change while the transaction isn't finished
(either rolled back or committed).
One question though. How would I
It was sent by someone not subscribed to the mailing list, and was delayed
for moderator approval ...
On Tue, 6 Dec 2005, Jim C. Nasby wrote:
I just found an email that took 5 days to be delivered. Looking at the
headers below, the holdup was between m2x.hub.org and postgresql.org.
Can
Jim C. Nasby [EMAIL PROTECTED] writes:
Right now you don't. :( ISTM there should be a way to get back the row
you just inserted. Whether a ctid is the right way to do that I don't
know...
I'm going to move this over to -hackers to see what people over there
have to say.
Perhaps the right
Greg Stark [EMAIL PROTECTED] writes:
The benefits of providing something based on ctid is to avoid the inefficiency
of the index lookup on the primary key and it would work on tables without any
primary key. I'm not sure it's worth the effort it would entail for those
narrow use cases
Hannu Krosing [EMAIL PROTECTED] writes:
The scenario where a concurrent create index command is needed is 24/7
OLTP databases, which can't be taken down for maintenance. Usually they
can be arranged to tolerate postponing a few transactions for one
second.
Well, the dominant defining
Why aren't 'minutes' considered too? Because they aren't 'seconds'.
Well, seconds aren't microseconds either.
Yeah, they are: it's just one field. The other way of looking at it
(that everything is seconds) is served by extract(epoch).
Well, it's different in MySQL unfortunately - what does
On Dec 7, 2005, at 10:46 , Christopher Kings-Lynne wrote:
In case you didn't know btw, MySQL 5.1 is out with rather extensive
table partition support. So get coding :D
You do mean MySQL 5.1 alpha is out, right?
Michael Glaesemann
grzm myrealbox com
Hi,
I'm doing some tests with a 700-column table. But when I try to load
some data with INSERT or COPY I got that message. I verified that
BLCKSZ is limiting the tuple size but I couldn't get a clue why it's
not using TOAST. I'm using PostgreSQL 8.0.3 on a Slackware 10.1 box.
Let me know if
On Wed, Dec 07, 2005 at 09:43:30AM +0800, Christopher Kings-Lynne wrote:
Why aren't 'minutes' considered too? Because they aren't 'seconds'.
Well, seconds aren't microseconds either.
Yeah, they are: it's just one field. The other way of looking at it
(that everything is seconds) is served
MySQL 5.0.16 gives an error:
mysql> SELECT EXTRACT (MICROSECOND FROM '2003-01-02 10:30:00.00123');
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
that corresponds to your MySQL server version for the right syntax to use
near 'FROM '2003-01-02 10:30:00.00123')' at line
Euler Taveira de Oliveira [EMAIL PROTECTED] writes:
I'm doing some tests with a 700-column table.
Redesign your schema...
regards, tom lane
Looks like MySQL doesn't allow a space before the open parenthesis
(there isn't one in the manual's example):
mysql> SELECT EXTRACT(MICROSECOND FROM '2003-01-02 10:30:00.00123');
+---+
| EXTRACT(MICROSECOND FROM '2003-01-02 10:30:00.00123') |
On Wed, Dec 07, 2005 at 10:32:20AM +0800, Christopher Kings-Lynne wrote:
MySQL 5.0.16 gives an error:
mysql> SELECT EXTRACT (MICROSECOND FROM '2003-01-02 10:30:00.00123');
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
that corresponds to your MySQL server version
On Wed, Dec 07, 2005 at 10:47:45AM +0800, Christopher Kings-Lynne wrote:
Ok, and what does this give:
SELECT EXTRACT(MICROSECOND FROM '2003-01-02 10:30:01.00123');
mysql> SELECT EXTRACT(MICROSECOND FROM '2003-01-02 10:30:01.00123');
+---+
|
mysql> SELECT EXTRACT(MICROSECOND FROM '2003-01-02 10:30:01.00123');
+---+
| EXTRACT(MICROSECOND FROM '2003-01-02 10:30:01.00123') |
+---+
| 1230
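The difference between the two systems can be modeled in a few lines of Python (a sketch of the observed outputs, not of either implementation): MySQL's MICROSECOND returns only the fractional part of the second, while PostgreSQL's extract(microseconds) scales the whole seconds-and-fraction field.

```python
# MySQL: fractional part only. PostgreSQL: seconds+fraction, scaled.
def mysql_microsecond(seconds_with_fraction):
    whole = int(seconds_with_fraction)
    return round((seconds_with_fraction - whole) * 1_000_000)

def pg_extract_microseconds(seconds_with_fraction):
    return round(seconds_with_fraction * 1_000_000)

print(mysql_microsecond(1.00123))        # 1230
print(pg_extract_microseconds(1.00123))  # 1001230
```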
Thanks Jim.
Right now I just keep using the oid's - but it would be nice to eliminate the
need for that completely.
UC
On Tuesday 06 December 2005 15:01, Jim C. Nasby wrote:
On Sat, Dec 03, 2005 at 03:07:19PM -0800, Uwe C. Schroeder wrote:
the ctid seems to be the solution to my problem.
Hi,
On Tue, 6 Dec 2005, Bruce Momjian wrote:
Nice, updated.
---
I think my suggestion from the other day is useful also.
---
Omar Kilani and I have spent a few hours looking at the problem. For
situations where there
On Tue, Dec 06, 2005 at 11:03:16PM -0300, Euler Taveira de Oliveira wrote:
I'm doing some tests with a 700-column table. But when I try to load
some data with INSERT or COPY I got that message. I verified that the
BLCKSZ is limiting the tuple size but I couldn't have a clue why it's
not using
It would be nice if Postgresql supported multi-octet raw data. Certainly a
lot of what you would do with it would be similar to bytea, but the basic
string functions would be overloaded so that the unit of work would be a
multi-octet word.
Multi-octet instances could be cast to bytea when
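The proposal above can be made concrete with a toy model: a byte string whose unit of work is a fixed-size word rather than a single octet. Everything here is invented for illustration; nothing like this exists in PostgreSQL.

```python
# Toy model of a "multi-octet" type: operations are word-granular,
# and the value can be handed back as plain bytes (the bytea "cast").
class MultiOctet:
    def __init__(self, data, word_size):
        if len(data) % word_size != 0:
            raise ValueError("data length must be a multiple of word_size")
        self.data = bytes(data)
        self.word_size = word_size

    def __len__(self):
        return len(self.data) // self.word_size   # length in words

    def word(self, i):
        start = i * self.word_size
        return self.data[start:start + self.word_size]

    def to_bytea(self):
        return self.data                          # plain byte string

m = MultiOctet(b'\x00\x01\x00\x02', word_size=2)
print(len(m))     # 2 words, not 4 octets
print(m.word(1))  # b'\x00\x02'
```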
On 12/6/2005 11:23 AM, Mario Weilguni wrote:
IMO this is not true. You can get affordable 10GBit network adapters, so you
can have plenty of bandwidth in a db server pool (if they are located in the
same area). Even 1GBit Ethernet greatly helps here, and would make it possible
to balance
Exactly what does vim do that iconv does not? Fuzzy encoding sounds
scary to me.
---
Gavin Sherry wrote:
Hi,
On Tue, 6 Dec 2005, Bruce Momjian wrote:
Nice, updated.
Tom Lane [EMAIL PROTECTED] writes:
Rather than hard-wiring a special case for any of these things, I'd much
rather see us implement INSERT...RETURNING and UPDATE...RETURNING as per
previous suggestions. Then you can fetch pkey, ctid, or whatever you
need.
I happen to think UPDATE RETURNING
Greg Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
Rather than hard-wiring a special case for any of these things, I'd much
rather see us implement INSERT...RETURNING and UPDATE...RETURNING as per
previous suggestions.
I wonder whether the ui tools need anything more low
On 12/6/05, Jan Wieck [EMAIL PROTECTED] wrote:
IMO this is not true. You can get affordable 10GBit network adapters, so
you can have plenty of bandwidth in a db server pool (if they are located in
the same area). Even 1GBit Ethernet greatly helps here, and would make it
possible to
Euler Taveira de Oliveira wrote:
Hi,
I'm doing some tests with a 700-column table. But when I try to load
some data with INSERT or COPY I got that message. I verified that the
BLCKSZ is limiting the tuple size but I couldn't have a clue why it's
not using TOAST. I'm using PostgreSQL 8.0.3 in
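A rough calculation shows why TOAST alone can't save an extremely wide row: each value moved out of line still leaves a pointer in the tuple, and the in-line tuple must fit in one block. The 20-byte pointer size below is an assumption for 8.0-era PostgreSQL, for illustration only.

```python
# Even with every column toasted out of line, the tuple still carries
# one TOAST pointer per column, and the whole tuple must fit in BLCKSZ.
BLCKSZ = 8192
TOAST_POINTER_BYTES = 20   # assumed approximate pointer size

columns = 700
minimum_inline_row = columns * TOAST_POINTER_BYTES
print(minimum_inline_row)            # 14000 bytes
print(minimum_inline_row > BLCKSZ)   # True: the row can never fit
```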
On Tue, 2005-12-06 at 23:19 -0500, Jan Wieck wrote:
It's not so much the bandwidth but more the roundtrips that limit your
maximum transaction throughput.
I completely agree that it is the latency that counts, not the bandwidth.
Does anybody have latency / roundtrip measurements for current
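As a back-of-envelope illustration of the roundtrip limit: if every synchronous commit costs at least one network roundtrip, per-session commit throughput is bounded by 1/RTT. The RTT figures below are assumptions for illustration, not measurements.

```python
# Upper bound on synchronous commits per second for one session,
# given one unavoidable network roundtrip per commit.
def max_sync_commits_per_second(rtt_microseconds):
    return 1_000_000 // rtt_microseconds

print(max_sync_commits_per_second(200))     # ~200 us LAN roundtrip -> 5000
print(max_sync_commits_per_second(30_000))  # ~30 ms WAN roundtrip  -> 33
```

Bandwidth barely enters this bound, which is why faster links alone don't fix synchronous replication throughput.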
On Tue, 2005-12-06 at 19:32, Greg Stark wrote:
Hannu Krosing [EMAIL PROTECTED] writes:
The scenario where a concurrent create index command is needed is 24/7
OLTP databases, which can't be taken down for maintenance. Usually they
can be arranged to tolerate postponing a