Mark Woodward wrote:
I tried to cluster the data along a particular index but had to cancel it
after 3 hours.
If the data is in random order, it's faster to do
SELECT * INTO foo_sorted FROM foo ORDER BY bar
then CREATE INDEX, than to run CLUSTER.
That's because CLUSTER does a full index
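The sort-then-reindex workaround described above can be sketched as plain SQL (a minimal sketch; foo_sorted and the index name are illustrative, not from the thread):

```sql
-- Rewrite the table in the desired physical order, then build the index
-- on the already-sorted data instead of running CLUSTER.
SELECT * INTO foo_sorted FROM foo ORDER BY bar;
CREATE INDEX foo_sorted_bar_idx ON foo_sorted (bar);

-- Optionally swap the tables afterwards (hypothetical names):
-- ALTER TABLE foo RENAME TO foo_old;
-- ALTER TABLE foo_sorted RENAME TO foo;
```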
Bruce Momjian wrote:
OK, great information, updated comment is:
/*
* Many versions of Solaris have broken strtod() --- see bug #4751182.
* This has been fixed in current versions of Solaris:
*
*
Hi,
I've been planning to implement a caching mechanism for non-volatile
(and non-SRF) procedures in PL/scheme, that'll return pre-computed
values in case of a procedure signature overlap[1].
First, I decided to use the fcinfo->flinfo->fn_mcxt context for the storage.
But then I learnt it'll survive
Lukas Kahwe Smith wrote:
Peter Eisentraut wrote:
Am Montag, 18. September 2006 09:20 schrieb Lukas Kahwe Smith:
This just reminds me, are there plans to take into account multibyte
server encodings inside the client quote function?
Huh?
Ah, I just checked the libpq docs and there seems to
Hi, Tom,
Tom Lane wrote:
I miss some info in the context: the function's OID, the function's arguments and
schema. Maybe:
199292 function public.foo(int, int, int) language plpgsql statement
return
line 10
Putting the OID there is a seriously awful idea, not least because it
would make it
On Wed, Oct 04, 2006 at 04:41:44PM -0400, Bruce Momjian wrote:
That will prevent it from being changed by pgindent in the future.
Thanks.
Michael
--
Michael Meskes
Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber:
[EMAIL PROTECTED] writes:
I've been planning to implement a caching mechanism for non-volatile
(and non-SRF) procedures in PL/scheme, that'll return pre-computed
values in case of a procedure signature overlap[1].
This has generally been considered a bad idea even if implemented for
all
Pavel Stehule [EMAIL PROTECTED] writes:
And what about two variants of errcontext driven by a GUC? The first the current
(compatible) one and the second an enhanced one (with OID, params, maybe all possible debug
values), possibly machine-readable. This enhanced variant can be
compatible and shared in all
Pavel Stehule [EMAIL PROTECTED] writes:
And what about two variants of errcontext driven by a GUC? The first the current
(compatible) one and the second an enhanced one (with OID, params, maybe all possible
debug
values), possibly machine-readable. This enhanced variant can be
compatible and shared in all
I am using the netflix database:
      Table "public.ratings"
 Column |   Type   | Modifiers
--------+----------+-----------
 item   | integer  |
 client | integer  |
 day    | smallint |
 rating | smallint |
The query was executed as:
psql -p 5435 -U pgsql -t -A -c select client, item, rating, day
On Thu, Oct 05, 2006 at 11:56:43AM -0400, Mark Woodward wrote:
The query was executed as:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
My question, it looks like the kernel killed psql, and not postmaster. The
postgresql
On Oct 5, 2006, at 11:56 , Mark Woodward wrote:
I am using the netflix database:
      Table "public.ratings"
 Column |   Type   | Modifiers
--------+----------+-----------
 item   | integer  |
 client | integer  |
 day    | smallint |
 rating | smallint |
The query was executed as:
psql -p 5435
Mark Woodward [EMAIL PROTECTED] writes:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
My question, it looks like the kernel killed psql, and not postmaster.
Not too surprising.
Question, is this a bug in psql?
It's really
Mark Woodward [EMAIL PROTECTED] writes:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
My question, it looks like the kernel killed psql, and not postmaster.
Not too surprising.
Question, is this a bug in psql?
It's
On Thu, Oct 05, 2006 at 11:56:43AM -0400, Mark Woodward wrote:
The query was executed as:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
My question, it looks like the kernel killed psql, and not postmaster.
The
postgresql
FWIW, there's a feature in CVS HEAD to instruct psql to try to use a
cursor to break up huge query results like this. For the moment I'd
suggest using COPY instead.
That's sort of what I was afraid of. I am trying to get 100 million
records into a text file in a specific order.
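Until that psql feature is available, the same chunking can be done by hand with a server-side cursor; a minimal sketch (the cursor name and batch size are arbitrary):

```sql
BEGIN;
DECLARE c NO SCROLL CURSOR FOR
    SELECT client, item, rating, day FROM ratings ORDER BY client;
-- Fetch in manageable batches instead of materializing the
-- whole 100M-row result in client memory:
FETCH FORWARD 10000 FROM c;
-- ...repeat the FETCH until it returns no rows...
CLOSE c;
COMMIT;
```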
Tom Lane wrote:
Mark Woodward [EMAIL PROTECTED] writes:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
FWIW, there's a feature in CVS HEAD to instruct psql to try to use a
cursor to break up huge query results like
Graham Davis [EMAIL PROTECTED] writes:
The documentation for to_char states that:
to_char(interval) formats HH and HH12 as hours in a single day, while
HH24 can output hours exceeding a single day, e.g. 24.
However, I cannot get it to work with time intervals that span more than
one day.
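For reference, the documented behavior would look something like this (a sketch; per the docs HH24 should not wrap at 24, though the report above says it does not actually work this way here):

```sql
-- HH and HH12 wrap within a single day; HH24 is documented
-- to be able to exceed 24:
SELECT to_char(interval '2 days 5 hours', 'HH24:MI:SS');
-- per the documentation this should yield 53:00:00,
-- which is exactly what the poster could not obtain
```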
FWIW, there's a feature in CVS HEAD to instruct psql to try to use a
cursor to break up huge query results like this. For the moment I'd
suggest using COPY instead.
That's sort of what I was afraid of. I am trying to get 100 million
records into a text file in a specific order.
Create table as select ... Order by ...
Copy to ...
- Luke
Msg is shrt cuz m on ma treo
-Original Message-
From: Andrew Dunstan [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 05, 2006 12:51 PM Eastern Standard Time
To: Tom Lane
Cc: Mark Woodward;
Tom Lane wrote:
Mark Woodward [EMAIL PROTECTED] writes:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from
ratings order by client" netflix > netflix.txt
FWIW, there's a feature in CVS HEAD to instruct psql to try to use a
cursor to break up huge query results like this.
Hi there,
I'm looking into some strange locking that happens on a WinXP SP2 SMP
machine running 8.1.4 with stats_row_level=on. This is the only
combination (# of CPUs and stats_row_level) that has the problem -
SMP + stats_row_level.
The same test runs fine with one cpu (restarted machine with
It's interesting that there is no problem with 8.2beta1 in any
combination! Any idea what changes from 8.1.4 to 8.2beta1 could
affect the problem?
What do you mean by locking? Do you mean the postgresql process locks up?
E.g., can you still connect to PostgreSQL from another connection? If
On Thu, 5 Oct 2006, Joshua D. Drake wrote:
It's interesting that there is no problem with 8.2beta1 in any
combination! Any idea what changes from 8.1.4 to 8.2beta1 could
affect the problem?
What do you mean by locking? Do you mean the postgresql process locks up?
E.g., can you still connect
Hi there,
I'm looking into some strange locking that happens on a WinXP SP2
SMP machine running 8.1.4 with stats_row_level=on. This is
the only combination (# of CPUs and stats_row_level) that has
the problem - SMP + stats_row_level.
The same test runs fine with one cpu (restarted machine with
>>>>> "MW" == Mark Woodward [EMAIL PROTECTED] writes:
MW> Yea, I've been toying with the idea of that setting lately, I
MW> can't for the life of me understand why it isn't the default
MW> behavior.
Lots of programs handle malloc() failures very badly. Including
daemons. Often it's better in practice to
On Thu, 2006-10-05 at 12:52 -0400, Luke Lonergan wrote:
Create table as select ... Order by ...
Copy to ...
Or in 8.2, COPY TO (SELECT ... ORDER BY) (My, that's a neat feature.)
-Neil
---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
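The 8.2 syntax Neil mentions takes the query in parentheses; a sketch (the output path is illustrative):

```sql
-- 8.2+: COPY can dump an arbitrary SELECT, already sorted,
-- avoiding the huge client-side result set entirely:
COPY (SELECT client, item, rating, day
      FROM ratings
      ORDER BY client)
TO '/tmp/netflix.txt';
```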
:-D
Is that in the release notes?
- Luke
Msg is shrt cuz m on ma treo
-Original Message-
From: Neil Conway [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 05, 2006 02:35 PM Eastern Standard Time
To: Luke Lonergan
Cc: Andrew Dunstan; Tom Lane; Mark Woodward;
On Thu, 2006-10-05 at 14:53 -0400, Luke Lonergan wrote:
Is that in the release notes?
Yes: Allow COPY to dump a SELECT query (Zoltan Boszormenyi, Karel Zak)
-Neil
Hi,
When I allocate a new memory context via
oldmcxt = AllocSetContextCreate(TopMemoryContext, ...)
persistent_mcxt = CurrentMemoryContext;
How can I store persistent_mcxt in a persistent place so that I'll be
able to reach it the next time I'm invoked? Is that possible? If not,
how can I
On Thu, 2006-10-05 at 14:53 -0400, Luke Lonergan wrote:
Is that in the release notes?
Yes: Allow COPY to dump a SELECT query (Zoltan Boszormenyi, Karel Zak)
I remember this discussion, it is cool when great features get added.
Volkan YAZICI [EMAIL PROTECTED] writes:
When I allocate a new memory context via
oldmcxt = AllocSetContextCreate(TopMemoryContext, ...)
persistent_mcxt = CurrentMemoryContext;
ITYM
persistent_mcxt = AllocSetContextCreate(TopMemoryContext, ...)
because the other doesn't do what you
Not to cause any arguments, but this is sort of a standard discussion that
gets brought up periodically and I was wondering if there has been any
softening of the attitudes against an in place upgrade, or movement to
not having to dump and restore for upgrades.
I am aware that this is a difficult
Mark Woodward wrote:
I am currently building a project that will have a huge number of records,
1/2tb of data. I can't see how I would ever be able to upgrade PostgreSQL
on this system.
Slony will help you upgrade (and downgrade, for that matter) with no
downtime at all, pretty much. Of
Hi everyone,
After over a year of problems (old site
http://developer.osdl.org/markw/postgrescvs/) I have resumed producing
daily results of dbt-2 against PostgreSQL CVS code with results here:
http://dbt.osdl.org/dbt2.html
The only really new thing is better described stats on the
Mark Woodward wrote:
I am currently building a project that will have a huge number of records,
1/2tb of data. I can't see how I would ever be able to upgrade PostgreSQL
on this system.
Slony will help you upgrade (and downgrade, for that matter) with no
downtime at all, pretty much. Of
On Oct 5, 2006, at 15:46 , Mark Woodward wrote:
Not to cause any arguments, but this is sort of a standard discussion that
gets brought up periodically and I was wondering if there has been any
softening of the attitudes against an in place upgrade, or movement to
not having to dump and
Indeed. The main issue for me is that the dumping and replication
setups require at least 2x the space of one db. That's 2x the
hardware which equals 2x $$$. If there were some tool which modified
the storage while postgres is down, that would save lots of people
lots of money.
It's time and
On Thu, Oct 05, 2006 at 04:39:22PM -0400, Mark Woodward wrote:
Indeed. The main issue for me is that the dumping and replication
setups require at least 2x the space of one db. That's 2x the
hardware which equals 2x $$$. If there were some tool which modified
the storage while postgres
I came across the following obviously corrupt macro in pg_lzcompress.h:
#define PGLZ_IS_COMPRESSED(_lzdata) ((_lzdata)->varsize != \
e
(_lzdata)->rawsize + e
On 10/5/2006 5:04 PM, Tom Lane wrote:
I came across the following obviously corrupt macro in pg_lzcompress.h:
#define PGLZ_IS_COMPRESSED(_lzdata) ((_lzdata)->varsize != \
e
Jan Wieck [EMAIL PROTECTED] writes:
It's quite possible that there is a fair amount of dead code in there
because it was originally the header for the lztext datatype, which if
memory serves only existed quietly in one release to support slightly
larger rewrite rules before it was
Testing out the new pg_dump exclusion switches I've found that excluding a
table means that no functions or types will be dumped. Excluding one
table shouldn't exclude these objects. My real use case for this
functionality is that I have a database that has 99% of its space used by
one big
Kris Jurka [EMAIL PROTECTED] writes:
Testing out the new pg_dump exclusion switches I've found that excluding a
table means that no functions or types will be dumped. Excluding one
table shouldn't exclude these objects.
I tend to agree ... will see if I can make it happen. (I never did get
Mark Woodward [EMAIL PROTECTED] writes:
Not to cause any arguments, but this is sort of a standard discussion that
gets brought up periodically and I was wondering if there has been any
softening of the attitudes against an in place upgrade, or movement to
not having to dump and restore for
Mark Wong [EMAIL PROTECTED] writes:
After over a year of problems (old site
http://developer.osdl.org/markw/postgrescvs/) I have resumed producing
daily results of dbt-2 against PostgreSQL CVS code with results here:
http://dbt.osdl.org/dbt2.html
This is good to hear! I am curious
On Oct 6, 2006, at 1:50 , Tom Lane wrote:
I'm tempted to propose that we remove the justify_hours call, and tell
anyone who really wants the old results to apply justify_hours() to the
subtraction result for themselves. Not sure what the fallout would be,
though.
I'm tempted to support
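Under Tom's proposal, callers who want the old day-normalized form would wrap the subtraction themselves; a minimal sketch:

```sql
-- justify_hours() folds each 24 hours of an interval into a day,
-- reproducing the pre-change output:
SELECT justify_hours('2006-09-15 23:59:00'::timestamp
                     - '2006-09-01 09:30:41'::timestamp);
-- 14 days 14:28:19
```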
Well, there is a TODO item ( somewhere only we know ...).
Administration
* Allow major upgrades without dump/reload, perhaps using pg_upgrade
http://momjian.postgresql.org/cgi-bin/pgtodo?pg_upgrade
pg_upgrade has been resisting being born, but that discussion seems to
seed *certain* fundamentals
Michael Glaesemann [EMAIL PROTECTED] writes:
Considering how late it is in the cycle, perhaps the change in
behavior should come in 8.3.
Yeah, there's not really enough time to think through the consequences
now. I'd like to experiment with it for 8.3 though.
On Wed, 4 Oct 2006, Tom Lane wrote:
I've applied the patch and it seems to fix the problems.
8.2beta1 + the patch passes all of the regression tests on the IRIX box
(except for the expected difference with the geometry test).
I've applied the attached patch which merges ideas from your
On Oct 5, 2006, at 11:50 AM, Tom Lane wrote:
regression=# select ('2006-09-15 23:59:00'::timestamp - '2006-09-01 09:30:41'::timestamp);
     ?column?
------------------
 14 days 14:28:19
(1 row)
should be reporting '350:28:19' instead.
This is a hack that was done to minimize the changes in
On Oct 5, 2006, at 9:30 AM, Pavel Stehule wrote:
With func oid I can get all other info later, without it, I need
estimate which functions are in stack track.
Why do you need the OID to know exactly what function something is?
What's wrong with schema.function(args)?
--
Jim Nasby
On Oct 3, 2006, at 5:32 PM, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
Why is it timestamptz can store a date and time to 1 microsecond in 8
bytes but a timetz needs 12 to store just the time to 1 microsecond?
It's tracking the timezone explicitly ... something that timestamptz
On Oct 3, 2006, at 2:23 PM, Gregory Stark wrote:
If the space set aside for these transaction ids is full when you're
inserting, I suppose you could just go back to the FSM for another page.
But I don't see any way out when you're deleting. You have to mark xmax
one way or another and if
On Oct 3, 2006, at 4:06 PM, Merlin Moncure wrote:
On 10/3/06, Gregory Stark [EMAIL PROTECTED] wrote:
I can't shake the feeling that merely tweaking the way our varlenas work
with a shortvarlena or with compressed varlena headers is missing the
real source of our headaches. It seems very
On Oct 4, 2006, at 10:52 AM, Markus Schaber wrote:
Andrew Dunstan wrote:
It's not only about documenting the pure existence of the aliases (which
was already documented in the table on the datatype TOC page), it's also
about telling the user which of the names are the ones to avoid, and
On Oct 5, 2006, at 11:15 AM, Mark Woodward wrote:
On Thu, Oct 05, 2006 at 11:56:43AM -0400, Mark Woodward wrote:
The query was executed as:
psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day
from ratings order by client" netflix > netflix.txt
My question, it looks like the