---
Kevin Grittner wrote:
My mistake.
- Any idea when it will get to beta test status?
- Is it currently stable enough to make a real-world test worthwhile?
- Would this be the snapshot you mentioned?:
http://wwwmaster.postgresql.org/download/mirrors-ftp?file=dev%2Fpostgresql
We have what may be a bug in beta2. We have two databases running beta2,
one on Linux and one on Windows. The hardware is identical. The
configuration files are identical. They are being fed identical streams
of data modifications (primarily inserts, very few deletes). We've been
running this
I can't help wondering how a couple thousand context switches per
second would affect the attempt to load disk info into the L1 and
L2 caches. That's pretty much the low end of what I see when the
server is under any significant load.
Kevin Grittner [EMAIL PROTECTED] writes:
I can't hold the database in the problem state much longer -- if there
are any other diagnostic steps you'd like me to take before we clear
the problem, please let me know very soon.
Not at the moment ...
INFO: vacuuming pg_catalog.pg_constraint
Under both 8.1beta2 and a build from this morning's
dev snapshot, this query ran slower than expected:
select count(*)
from DbTranRepository AS dtr
inner join DbTranLogRecord AS dtlr
on (dtlr.countyNo = dtr.countyNo
and dtlr.tranImageSeqNo = dtr.tranImageSeqNo);
See the archives for prior discussions of the problem.
-Kevin
Tom Lane [EMAIL PROTECTED] 10/06/05 9:28 PM
Kevin Grittner [EMAIL PROTECTED] writes:
Under both 8.1beta2 and a build from this morning's
dev snapshot, this query ran slower than expected:
There's a known issue that the planner
Thanks, Tom.
I spent a few hours trying different searches in the archives, and
found three very interesting threads on the topic. All were from
2003. Should I keep digging for more recent threads, or would
these probably represent the current state of the issue?
These left me somewhat
[EMAIL PROTECTED] 10/10/05 4:23 PM
Kevin Grittner [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] 10/06/05 9:28 PM
There's a known issue that the planner tends to overestimate the cost of
inner-index-scan nestloops, because it doesn't allow for the strong
caching effects associated
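For anyone wanting to reproduce the comparison, a minimal diagnostic sketch (mine, not from the thread): the standard enable_* settings can force the nested-loop plan so the two timings can be compared directly. Table and column names are taken from the query quoted above.

-- Session-local, for diagnosis only: disabling merge and hash joins
-- pushes the planner toward the nested-loop/inner-index-scan plan.
set enable_mergejoin = off;
set enable_hashjoin = off;
explain analyze
select count(*)
from DbTranRepository AS dtr
inner join DbTranLogRecord AS dtlr
on (dtlr.countyNo = dtr.countyNo
and dtlr.tranImageSeqNo = dtr.tranImageSeqNo);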
You don't get to read part of a page, but you may be dealing with
probabilities. For example, consider a case where there are ten data
pages, and you are going to read 15% of the tuples. There is a 50%
chance that your scan will start in the first half of a leaf page and
only need two leaf
(gdb) p BufferDescriptors[781]
$1 = {tag = {rnode = {spcNode = 1663, dbNode = 16385, relNode = 2666}, blockNum
= 1}, flags = 70, usage_count = 5, refcount = 4294967294,
wait_backend_pid = 748, buf_hdr_lock = 0 '\0', buf_id = 781, freeNext = -2,
io_in_progress_lock = 1615, content_lock = 1616}
[2005-10-12 09:10:05.734 CDT] 16620 LOG: redo done at 8/A6112F8C
[2005-10-12 09:10:05.761 CDT] 16620 LOG: database system is ready
[2005-10-12 09:10:05.762 CDT] 16620 LOG: transaction ID wrap limit is
1073743018, limited by database dtr
Tom Lane [EMAIL PROTECTED] 10/12/05 9:26 AM
I wrote:
Kevin
*$PostgreSQL: pgsql/src/backend/storage/buffer/bufmgr.c,v 1.195
2005/08/12 23:13:54 momjian Exp $
bufmgr.s file coming in separate (off-list) email.
Tom Lane [EMAIL PROTECTED] 10/12/05 10:10 AM
Alvaro Herrera [EMAIL PROTECTED] writes:
Kevin Grittner wrote:
I'm not sure what you mean regarding
)
movw    16(%ebx), %ax
movb    $0, 28(%ebx)
shrl    %eax
andl    $1, %eax
movl    %eax, %edi
movl    PrivateRefCount, %eax
Tom Lane [EMAIL PROTECTED] 10/12/05 11:00 AM
Kevin Grittner [EMAIL PROTECTED] writes:
bufmgr.s file coming in separate (off-list
I'm running right now with a dev build which was configured with
--enable-cassert (and --enable-debug). Just out of curiosity, I re-ran
the query which chose the sort and mergejoin over the (faster) nested
index scan. The results seem interesting, although I'd need more tests
to consider them
a better way to get clues, see my next
post.
-Kevin
Tom Lane [EMAIL PROTECTED] 10/12/05 5:15 PM
Kevin Grittner [EMAIL PROTECTED] writes:
[$subject]
No, the planner has no idea about the cost of Asserts.
regards, tom lane
I'm looking at trying to fix some clear flaws in costing which cause
some of our real-world queries to choose sub-optimal plans under PostgreSQL.
It's clear that there needs to be a tool to analyze the accuracy of
costing for a variety of queries, both to direct any efforts to fix
problems and to test
Yes I have looked at the TODO list. There is arguably a relationship
to:
* Have EXPLAIN ANALYZE highlight poor optimizer estimates
* Log queries where the optimizer row estimates were dramatically
different from the number of rows actually found?
Neither of these, however, provides a
I totally agree -- any one subplan which has an
unusual ratio in either direction needs to be examined. If you're
getting at something else, please elaborate -- I don't want to miss
anything.
Thanks for your response.
-Kevin
Tom Lane [EMAIL PROTECTED] 10/13/05 12:01 AM
Kevin Grittner [EMAIL
Ouch! I just remembered locale and character sets and
encoding. I can't even begin to get my head around what to
do with those, unless it is just to make the tool agnostic
regarding those issues and test against a variety of setups.
Does that seem adequate?
I flash back to my first attempts to
Thanks, Josh, for the feedback.
It sounds as though you are more focused on picking up costing
problems which happen during production -- which is clearly
valuable, but addresses a somewhat different set of needs than
I was looking at. That said, it seems like there is potential to share
I can confirm that the patch was in the snapshot I picked up this
morning at about 10:30 CDT. We've been using it since then and
have not seen the problem in spite of attempting to provoke it with
database vacuums.
-Kevin
Tom Lane [EMAIL PROTECTED] 10/13/05 2:09 PM
Robert Creager [EMAIL
at 01:52:10PM -0500, Kevin Grittner wrote:
Thanks, Josh, for the feedback.
It sounds as though you are more focused on picking up costing
problems which happen during production -- which is clearly
valuable, but addresses a somewhat different set of needs than
I was looking at. That said
I think I get your point now. If I understand it, you could
accomplish what you want under my rough (exists only in my
head so far) design by creating your own test cases and putting
together a script to run just those. I would be extremely leery of
comparing tests against a database under load
for it, but it is sitting there, pretty
much free for the taking, and seems potentially useful.
-Kevin
Tom Lane [EMAIL PROTECTED] 10/14/05 1:37 PM
Kevin Grittner [EMAIL PROTECTED] writes:
I propose capturing only three values from the output of explain
analyze, and saving it with many columns of context
Dang. Obviously, that's inverted. Also, I'd need to
factor in the setup time. Bother. Are you sure we can't
just make sure the test scripts operate against tables
with accurate statistics?
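In case it helps, a hedged sketch of what "accurate statistics" could mean in practice; the table and column names below are hypothetical:

-- Raise the sample size for an important column, then refresh the
-- statistics, so the planner's estimates reflect the test data.
alter table test_tbl alter column county_no set statistics 1000;
analyze test_tbl;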
Kevin Grittner [EMAIL PROTECTED] 10/14/05 3:34 PM
The ratio I've been looking at should perhaps
I run into this problem sometimes, especially when I realize that
the query I've just started is going to run for a very long time and
not really provide anything useful. I find that I have to close the
shell window to get out of it, and I'm always a bit uncomfortable
doing that.
-Kevin
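One server-side escape hatch, sketched here on 8.1 assumptions (the WHERE clause is illustrative): pg_cancel_backend can cancel the runaway query from a second session instead of closing the window.

-- Requires superuser; current_query is only populated when
-- stats_command_string is on.
select pg_cancel_backend(procpid)
from pg_stat_activity
where current_query like 'select count(*)%';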
, 2005 at 03:34:43PM -0500, Kevin Grittner wrote:
of the two times as a reliability factor. Unfortunately, that
means doubling the number of cache flushes, which is likely
to be the most time-consuming part of running the tests. On
the bright side, we would capture the top level runtimes you
If we stored the actual queries and the EXPLAIN ANALYZE results (when
generated) in the database, what would be the purpose of the node_name,
db_object, and condition_detail columns? They don't seem like they
would be useful for statistical analysis, and it seems like the
information would be
Summary of schema I'm considering. Comments welcome.
When it gets down to the detail, it may make sense to combine
or split some of these. For example, runtime_options should
probably not have a column for each currently known option,
but a child table which maps to all non-default option
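To make that shape concrete, a rough sketch of the child-table idea; every name below is illustrative, nothing is settled:

create table runtime_environment (
    environment_id  serial primary key,
    pg_version      text not null,
    platform        text not null
);

create table runtime_options (
    environment_id  int  not null references runtime_environment,
    option_name     text not null,   -- e.g. 'work_mem'
    option_value    text not null,   -- only non-default settings stored
    primary key (environment_id, option_name)
);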
I'm not interested in storing less information. I'm trying to make sure
that all redundant information is justified. Since I plan to store the
actual query text and the full EXPLAIN ANALYZE output, every
column I pull out and put in another table is redundant data. The
questions are, why do we
Maybe we could associate a set of defaults to runtime_environment,
and you would associate any overrides with the runtime_options.
Does this address both your concerns?
Josh Berkus josh@agliodbs.com
Kevin,
When it gets down to the detail, it may make sense to combine
or split some of
Dann Corbit wrote:
Try this query in Oracle, SQL*Server, DB/2, Informix, etc.:
connxdatasync=# select 1 where cast('a' as varchar(30)) = cast('a ' as
varchar(30));
?column?
--
(0 rows)
For what it's worth, on Sybase ASE I get:
---
1
(1 row affected)
I wonder how widespread the Microsoft behavior is. Sybase ASE,
for example, gives this result set:
30 5 30 5
That seems more appropriate to me.
-Kevin
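For anyone who wants to try this at home, a minimal pair showing where the padding happens in PostgreSQL (this much is documented behavior, not an assumption):

-- varchar comparison keeps the trailing space significant: 0 rows.
select 1 where cast('a' as varchar(30)) = cast('a ' as varchar(30));
-- char(n) comparison blank-pads both operands, so the row is returned.
select 1 where cast('a' as char(30)) = cast('a ' as char(30));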
Dann Corbit [EMAIL PROTECTED] writes:
I guess that additional ambiguity arises if you add additional
Remember the suggestion I made that PostgreSQL add the capability
to define named caches and bind specific objects to those caches?
One of the reasons Sybase recommends using such named caches
(per their performance tuning documentation) is to reduce spinlock
contention.
I don't know whether
Try getting lucky in Google with "spinlock contention is greater".
Bruce Momjian pgman@candle.pha.pa.us
How do other databases deal with this? I can't imagine we are the only
ones. Are we doing something different than them?
If you are going to roll this back in 8.1 to reevaluate the issue, I
think the ANSI/ISO standards should be reviewed as part of that
reevaluation. The standard seems rich enough in this area to
address all of the concerns I've seen expressed on this thread.
All the usual advantages for standards
zone of the timestamp.
What am I missing?
-Kevin
Tom Lane [EMAIL PROTECTED]
Kevin Grittner [EMAIL PROTECTED] writes:
The standard seems rich enough in this area to
address all of the concerns I've seen expressed on this thread.
All the usual advantages for standards compliance accrue
to a timestamp would use the time zone of the timestamp.
What am I missing?
-Kevin
Posting here because it may be an 8.1 pre-release problem. I'll take
it to the admin list if it looks like it's not.
File dumped from 8.1beta3 using pg_dump -Fc on Linux box.
This dump restored successfully onto 8.1RC1 on Linux box.
File FTP'd to Windows box; attempt to restore onto 8.1RC1 fails
in the normal amount of time, so I assume
that all is fine, although I haven't reviewed the tables closely.
-Kevin
Tom Lane [EMAIL PROTECTED]
Kevin Grittner [EMAIL PROTECTED] writes:
File dumped from 8.1beta3 using pg_dump -Fc on Linux box.
This dump restored successfully onto 8.1RC1 on Linux box
I think you're on to something, Magnus. From a Windows cmd prompt
the size matches Linux:
11/01/2005 12:35 PM40,874,747,876 dtr.dump
From an msys command prompt, it does not look good:
-rw-r--r--1 Administ Administ 18446744071634626532 Nov 1 12:35
dtr.dump
I believe we got the
I don't know if this fills in any of the gaps, but...
We passed regression tests with 8.1RC1 on dual hyperthreaded
Xeon systems. One had SUSE 9.3 Professional; the other had
Windows Server 2003 Enterprise Edition (Service Pack 1).
Both had 8 GB RAM and six disk drives set up as RAID 5.
I think the crucial point is that the common IEEE floating point
formats are unable to store an EXACT representation of common
decimal fractions (such as .1) -- they can only store an
APPROXIMATION.
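A quick illustration in SQL (my example, not from the thread):

-- Ten copies of 0.1 in double precision do not sum to exactly 1.0,
-- because 0.1 has no finite binary representation.
select sum(0.1::float8) = 1.0::float8 as is_exact
from generate_series(1, 10);   -- false
-- numeric, by contrast, stores 0.1 exactly.
select sum(0.1::numeric) = 1.0::numeric as is_exact
from generate_series(1, 10);   -- true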
Peter Eisentraut [EMAIL PROTECTED]
Csaba Nagy wrote:
[snip]
Floating point numbers are
No, the IEEE formats cannot store .1 exactly. How close it
comes depends on the rest of the number. For single and
double precision, respectively, the IEEE representations fall
at about:
0.100000001490116119384765625
0.1000000000000000055511151231257827021181583404541015625
Libraries must do
Where someone is doing real work and pgindent creates so many
cosmetic changes for the current CVS repository, would it be
feasible to first commit a whitespace-only no-op revision, so that
real changes can be easily identified? I have seen this approach
work well for others.
-Kevin
Neil
We have spent a lot of time on this to be able to do Windows builds
during the beta cycle. Our DBA manager is going to start with a
fresh machine and attempt to do a full setup from our existing notes
and produce a how-to document. When we have this, somebody
here will submit it to the community
Are you talking about make install, initdb, or something else?
In any event, is it really something you want to give Everyone
the right to directly modify?
-Kevin
Gevik babakhani [EMAIL PROTECTED]
Here I found an installation issue:
- I was installing pg 8.1 on a Windows 2000 box.
- I
Hi Tony,
As the referenced documentation states, the PostgreSQL SERIALIZABLE
transaction isolation level complies with the ANSI/ISO requirements, but
not with a mathematically pure interpretation of the term. (The only
quibble I have with that documentation is that you have to be averting
your
is the
definitive document on how to build on Win32.
---
Kevin Grittner wrote:
We have spent a lot of time on this to be able to do Windows builds
during the beta cycle. Our DBA manager is going to start with a
fresh machine
We were initially logging out of the Windows GUI environment and back in
again to do the Windows builds. Discovering runas made the whole
process MUCH less painful. So far I haven't needed to use any advanced
features of sudo or runas; in my view either is easy to use for the
common
cases. I'll
1) We run a couple Java applications on the same box to provide
middle tier access. When the box is heavily loaded, I think I've
seen about 80% PostgreSQL, 20% Java load.
2) I checked that no antivirus software was running, and had the
techs pare down the services running on that box to the
Tom Lane [EMAIL PROTECTED]
Kevin Grittner [EMAIL PROTECTED] writes:
None of this seems material, however. It's pretty clear that the
problem was exhaustion of the Windows page pool.
...
If we don't want to tell Windows users to make highly technical
changes to the Windows registry in order
There weren't a large number of connections -- it seemed to be
that the one big update query, by itself, would do this. It seemed
to get through a lot of rows before failing. This table is normally
insert only -- so it would likely be getting most or all of the space
for inserting the updated
A couple clarifications:
There were only a few network sockets open.
I'm told that the eventlog was reviewed for any events which
might be related to the failures before it was cleared. They
found none, so that makes it fairly certain there was no 2020
event.
-Kevin
Kevin Grittner [EMAIL
In Sybase ASE (and I'm pretty sure the same is true in Microsoft SQL
Server) the leaf level of the narrowest index on the table is scanned,
following a linked list of leaf pages. Leaf pages can be pretty dense
under Sybase, because they do use prefix compression. A count(*)
on a table with 100
That sounds very much like a CLUSTERED INDEX under Sybase ASE
(or the derivative Microsoft SQL Server). In those products, when you
create a clustered index, the data pages are sorted according to the
index sequence, and are used as the leaf pages in the index. A
clustered index does not have
Due to its size, in the Windows environment we can't dump this
database in any format except plain text, so the zlib issues don't
apply here.
-Kevin
Qingqing Zhou [EMAIL PROTECTED]
By the way, they found that they were getting this on a pg_dump,
too. We will test both failure cases. If
[Apologies for the delayed response; fighting through a backlog.]
I checked with our DBAs, and they are willing to test it.
By the way, they found that they were getting this on a pg_dump,
too. We will test both failure cases. If the test goes OK, we would
be happy to leave it in production
I've just been bitten by the backslash in string literals issue. I
have reviewed the mailing lists and the TODO list. I see that the
direction PostgreSQL is headed is to drop the nonstandard escapes,
unless an extended literal is explicitly used. I've attached a patch
which supports this as a
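A short sketch of the behavior the patch aims at (the GUC name standard_conforming_strings comes up later in this thread; the example strings are mine):

-- With standard-conforming strings on, a backslash is just a character:
set standard_conforming_strings = on;
select 'C:\temp\new';     -- yields C:\temp\new
-- The extended literal keeps the old escape processing, explicitly:
select E'line1\nline2';   -- \n becomes a newline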
On Fri, Dec 9, 2005 at 11:24 am, in message
[EMAIL PROTECTED], Peter Eisentraut
[EMAIL PROTECTED] wrote:
Kevin Grittner wrote:
direction PostgreSQL is headed is to drop the nonstandard escapes,
unless an extended literal is explicitly used. I've attached a
patch
which supports
On Sat, Dec 10, 2005 at 8:01 pm, in message
[EMAIL PROTECTED], Bruce Momjian
pgman@candle.pha.pa.us wrote:
Kevin Grittner wrote:
Since the
nonstandard behavior is in the lexer, I couldn't see any
reasonable way
to base it on a runtime switch. I'm curious what is intended here.
Can
We found a bug in the code from my first patch. Since it was a low
frequency, non-destructive type of problem for us, I was able to take my
time and look over the task a little more closely. Attached is a patch
which should come close to implementing the TODO. In particular, it is
now
On Wed, Jan 25, 2006 at 4:46 pm, in message
[EMAIL PROTECTED], Kevin Grittner
[EMAIL PROTECTED] wrote:
(2) There should probably be some tests added to exercise these
options.
Attached is a patch to address this one. Note that until psql is
fixed, this test will fail. I manually
On Wed, Feb 1, 2006 at 10:50 am, in message
[EMAIL PROTECTED], Bruce Momjian
pgman@candle.pha.pa.us wrote:
Kevin Grittner wrote:
We found a bug in the code from my first patch. Since it was a low
frequency, non-destructive type of problem for us, I was able to
take my
time and look over
On Thu, Feb 9, 2006 at 10:31 pm, in message
[EMAIL PROTECTED], Bruce Momjian
pgman@candle.pha.pa.us wrote:
OK, I got it working. The fix is to add GUC_REPORT to guc.c for
standard_conforming_strings. See the same flag on
session_authorization. That will cause libpq to see any changes
This patch doesn't leave the standard_conforming_strings entry in guc.c
with the GUC_REPORT flag, which it needs for psql to work right. Should
I submit one last patch with this fix and the proper expected
regression file? If so, where should I send it? (The hackers list
won't take a file as
As part of integrating PostgreSQL into our production environment, we're
working on monitoring software, to provide the same kinds of status
reporting and alerts we have implemented for our outgoing commercial
database product. One of the things we show on our big board is
impending failure
On Thu, Feb 23, 2006 at 8:43 pm, in message
[EMAIL PROTECTED],
Neil Conway [EMAIL PROTECTED] wrote:
Kevin Grittner wrote:
Peter Brant, a consultant working with us, has written code which
is
working for this under both Linux and Windows. [...] For Linux, he
used statvfs.
statvfs(2)
On Fri, Feb 24, 2006 at 9:34 am, in message
[EMAIL PROTECTED],
Rod Taylor [EMAIL PROTECTED] wrote:
You don't need to know the free diskspace in real time. A 2 minute
old
value is probably just as good.
Not really, this sort of monitoring has kept us from crashing under our
old database
On Fri, Feb 24, 2006 at 10:57 am, in message
[EMAIL PROTECTED],
Rod Taylor [EMAIL PROTECTED] wrote:
PostgreSQL seems to deal with out of diskspace situations pretty
well
when it impacts a tablespace (global stuff like WAL or
subtransactions
have issues -- but they grow slowly) as far as
On Fri, Feb 24, 2006 at 5:00 pm, in message
[EMAIL PROTECTED], Jim C. Nasby
[EMAIL PROTECTED]
wrote:
Oh, so does it actually involve any server modifications? Or can it
just
go into pgfoundry?
No server modifications. I've got it bundled up as though it were
going to be under contrib, and
On Fri, Feb 24, 2006 at 5:25 pm, in message
[EMAIL PROTECTED],
Alvaro Herrera [EMAIL PROTECTED] wrote:
Having it build with PGXS would be a definite plus for ease of
installation.
It does.
---(end of broadcast)---
TIP 1: if
Just a "wouldn't it be nice if" sort of feature request. I'm not sure
how practical it is.
Someone in our organization wrote a data fix query, which has sort of
odd logic, but it does what they need. The problem is that it ran for
14 hours in a test against a copy of the data. I looked at it and
On Tue, Feb 28, 2006 at 11:05 am, in message
[EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
The issue at hand really has nothing to do with temp indexes, it's
with
the constrained way that the planner deals with EXISTS subplans.
Yet when the index exists, the query is optimized well.
On Tue, Feb 28, 2006 at 11:36 am, in message
[EMAIL PROTECTED], Kevin Grittner
Also, EXISTS works in situations where
you need to compare on multiple columns, so it is useful in many
situations where EXISTS or MIN/MAX techniques just don't work.
Sorry. That should have read:
EXISTS works
On Tue, Feb 28, 2006 at 12:06 pm, in message
[EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
IN works fine on multiple columns:
(foo, bar, baz) IN (SELECT x, y, z FROM ...)
Thanks for pointing that out. I recognize it as valid ANSI/ISO syntax,
using a row value constructor list.
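Spelled out as a runnable shape (table and column names are hypothetical):

-- Multi-column IN using a row value constructor:
select o.*
from order_hist o
where (o.county_no, o.case_no) in (select c.county_no, c.case_no
                                   from case_hist c
                                   where c.closed_date is null);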
On Tue, Feb 28, 2006 at 11:05 am, in message
[EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
The limiting factor is that EXISTS subplans
aren't flattened ... and once that's fixed, I doubt the example
would
need any new kind of join support.
I rewrote the query to use IN predicates
On Tue, Feb 28, 2006 at 3:02 pm, in message
[EMAIL PROTECTED], Jim C. Nasby
[EMAIL PROTECTED]
wrote:
Maybe it's just the way my twisted mind thinks, but I generally
prefer
using a JOIN when possible...
Definitely. But sometimes you don't want one row from a table for each
qualifying row
On Tue, Feb 28, 2006 at 7:22 am, in message
[EMAIL PROTECTED], Simon Riggs
[EMAIL PROTECTED] wrote:
OTOH a few hackers discussed this recently and found that nobody
used
serializable transactions (ST) except during pg_dump.
I've not been able to keep up with all messages on these lists,
On Wed, Mar 1, 2006 at 11:02 am, in message
[EMAIL PROTECTED], Simon Riggs
[EMAIL PROTECTED] wrote:
On Wed, 2006-03-01 at 10:22 -0600, Kevin Grittner wrote:
On Tue, Feb 28, 2006 at 7:22 am, in message
[EMAIL PROTECTED], Simon Riggs
[EMAIL PROTECTED] wrote:
We use serializable
Attached is a simplified example of a performance problem we have seen,
with a workaround and a suggestion for enhancement (hence both the
performance and hackers lists).
Our software is allowing users to specify the start and end dates for a
query. When they enter the same date for both, the
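A hedged reconstruction of the pattern being described (all names invented): when the two bounds are equal, the range form gets a much looser row estimate than the equivalent equality test.

-- Range form the application generates:
select count(*) from event_hist
where event_date >= date '2006-03-13'
  and event_date <= date '2006-03-13';
-- Workaround: collapse to equality when start = end.
select count(*) from event_hist
where event_date = date '2006-03-13';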
I have some odd results from timing two versions of an update query, and
was hoping to get a better handle on how to interpret this. The query
does an update of one table. One version does three NOT IN tests
against three related tables. The other version does the logically
equivalent NOT
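For concreteness, a sketch of the two forms with invented names (the real query appears further down the thread):

-- NOT IN form (one of the three such tests):
update user_tbl
   set is_active = false
 where user_id not in (select s.user_id from session_tbl s);
-- NOT EXISTS form; note the two differ when s.user_id can be null:
-- a null in the subquery makes NOT IN update nothing at all.
update user_tbl
   set is_active = false
 where not exists (select 1 from session_tbl s
                   where s.user_id = user_tbl.user_id);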
On Wed, Mar 22, 2006 at 8:59 pm, in message
[EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
Kevin Grittner [EMAIL PROTECTED] writes:
You didn't show us the explain analyze results,
The below is cut and pasted directly from a psql run, without editing.
bigbird= UPDATE User SET isActive
On Thu, Mar 23, 2006 at 11:27 am, in message
[EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
The run time of the NOT IN query, as measured by elapsed time
between
SELECT CURRENT_TIMESTAMP executions, increased by 31 ms.
Works out to about 30 microsec per node execution, which seems a
On Thu, Mar 23, 2006 at 11:27 am, in message
[EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
Profiling with gprof or some such tool might be educational.
I've never used gprof before, and from a quick scan of the info, it
appears that I need to compile and link a special version of the
Tom Lane [EMAIL PROTECTED] wrote:
If you're still interested in testing CVS HEAD's handling of EXISTS,
I've about finished what I wanted to do with it.
It's been hectic here, but I've managed to let some stuff run in the
background using an old test case from here:
Greg Smith [EMAIL PROTECTED] wrote:
    name     | Recommended | Current |  Min  | Default |   Max
-------------+-------------+---------+-------+---------+----------
 wal_buffers | 1024kB      | 64kB    | 32 kB | 64 kB   | 2048 MB
Personally, I would take the Min, Default, and Max to
I was testing a very complex statistical query, with (among other
things) many EXISTS and NOT EXISTS tests against a build of the source
snapshot from 3 September. (The query looks pretty innocent, but
those aren't tables, they're complicated views.) Under 8.3.3 this
query runs successfully, but
Tom Lane [EMAIL PROTECTED] wrote:
Kevin Grittner [EMAIL PROTECTED] writes:
PortalHeapMemory: 1620992 total in 200 blocks; 5856 free (8 chunks); 1615136 used
ExecutorState: 2787288448 total in 364 blocks; 328 free (5 chunks); 2787288120 used
Ouch. We have created a memory
Tom Lane [EMAIL PROTECTED] wrote:
Also, does the leak still occur if
you just run the query as-is rather than EXPLAIN ANALYZE it?
The machine became unresponsive similar to the early symptoms of the
apparent memory leak cited in this post:
Tom Lane [EMAIL PROTECTED] wrote:
The patch includes regression test changes that illustrate what it
does.
I am not sure about some of the corner cases --- anyone want to see
if
their understanding of the spec for interval string is different?
The patch seems to support extensions to the
Kevin Grittner [EMAIL PROTECTED] wrote:
(4) I'm not 100% sure on this one, but it seemed to me that they
were
requiring year to be four digits and other components (except for
fractional seconds) to be two digits.
That can't be right. Maybe I saw that in datetime literal specs
On Mon, Sep 15, 2008 at 4:58 PM, in message
[EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] wrote:
Ron Mayer [EMAIL PROTECTED] writes:
Unless I'm compiling stuff wrong, it seems HEAD is giving me
slightly different output on Intervals than 8.3 in the roundoff
of seconds. 8.3 was rounding to
Kevin Grittner [EMAIL PROTECTED] wrote:
I find the results on 8.3.3 with integer timestamps surprising:
Even more surprising is the behavior for interval(1) here:
ccdev=# select '1 year 2 mons 3 days 04:05:06.64321'::interval;
interval
Tom Lane [EMAIL PROTECTED] wrote:
Robert Haas [EMAIL PROTECTED] writes:
I am not sure this rule is good. Somebody who develops on
PostgreSQL will have problems when porting to other
databases in the future. Reserved words in standards should be
respected.
If people want to
Tom Lane [EMAIL PROTECTED] wrote:
Paul Schlie [EMAIL PROTECTED] writes:
- yes, if you're willing to compute true CRC's as opposed to
simpler
checksums, which may be worth the price if in fact many/most data
check failures are truly caused by single bit errors somewhere in
the
chain,
Tom Lane [EMAIL PROTECTED] wrote:
Kevin Grittner [EMAIL PROTECTED] writes:
The tables and views aren't that hard; finding a way to generate
enough fake data may be a challenge. (I assume that since it took
over a half hour to run out of memory, the volume of data needs to
be
sufficient
Kenneth Marshall [EMAIL PROTECTED] wrote:
Even more surprising is the behavior for interval(1) here:
[ some context with nonsurprising examples removed ...]
ccdev=# select '1 year 2 mons 3 days 04:05:06.64321'::interval(1);
interval
--
1
Jim Cox [EMAIL PROTECTED] wrote:
if present an INFO message is generated which displays
the schema.tblname just before actual clustering is kicked off (see
example
below).
postgres=# CLUSTER VERBOSE ;
INFO: clustering public.my_b
INFO: clustering public.my_c
INFO: clustering
Tom Lane [EMAIL PROTECTED] wrote:
The attached patch cures the leak for me
I see that the patch was applied. A CVS checkout from this morning
fixes the leak for me, too; the vmstat output stayed rock steady
during the run.
Thanks!
-Kevin