u wanted.
That said, why not just pull what Heikki did into contrib, and add the
necessary mode to heap_update?
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
0% came from. We don't know if
it's indexes or what.
On 4/24/15 3:34 PM, Alvaro Herrera wrote:
Jim Nasby wrote:
Honestly, I'd prefer we exposed some way to influence where a new tuple gets
put, and perhaps better ways of accessing tuples on a specific page. That
would make it a lot easier to handle this in userspace, but it would also
ma
On 4/24/15 5:30 PM, Alvaro Herrera wrote:
Jim Nasby wrote:
It looks like the biggest complaint (aside from allowing a limited number of
tuples to be moved) is in [1] and [2], where Tom is saying that you can't
simply call heap_update() like this without holding an exclusive lock on the
t's still not fixing, but I think it's definitely worth
documenting.
ould be limited to far less than 4B
rows. If each row has 10 fields that toast, you'd be limited to just
400M rows.
's no reason for these to be tied to
specific versions of Postgres. Adding to PGXN would make sense though.
(Though, I dislike using timestamps to do change/history tracking, but
that's just me...)
Or does this affect client and log the same way?
in the past.
I have tried to search in archive, but not getting what is the exact
problem.
Unfortunately I can't find prior discussion now either... :/
On 4/25/15 6:30 AM, Simon Riggs wrote:
On 24 April 2015 at 22:36, Jim Nasby <jim.na...@bluetreble.com> wrote:
Instead of adding forcefsm, I think it would be more useful to
accept a target block number. That way we can actually control where
the new tuple goes. Fo
nitial use case of
create function cn(record) returns bigint language sql as $$
SELECT count(*)
FROM json_each_text( row_to_json($1) ) a
WHERE value IS NULL $$;
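For context, a possible invocation of the function above (the table name and
data here are hypothetical, not from the patch):

```sql
CREATE TABLE t (a int, b text, c date);
INSERT INTO t VALUES (1, NULL, NULL);
SELECT cn(t.*) FROM t;  -- counts the NULL fields in each row; 2 for the row above
```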
Attached patches both pass make check. The plpgsql is WIP, but I think
the SQL one is OK.
'd
definitely want to do it with a range. If you're only keeping the change
time then you can handle it differently.
other functions (such as
json_from_record()). Since that's my original motivation for looking at
this, I'd like that patch to be considered unless there's a big drawback
to it that I'm missing. (For 9.6, of course.)
Or does this affect client and log the same way?
it affect client and log together
maybe "min_context"
+1
On 4/28/15 5:41 AM, José Luis Tallón wrote:
On 04/27/2015 08:49 AM, Jim Nasby wrote:
On 4/25/15 1:19 PM, Bruce Momjian wrote:
Note if you are storing a table with rows that exceed 2KB in size
(aggregate size of each row) then the "Maximum number of rows in a
table" may
On 4/28/15 7:11 AM, Robert Haas wrote:
On Fri, Apr 24, 2015 at 4:09 PM, Jim Nasby
wrote:
>>> When I read that I think about something configurable at
>>> relation-level. There are cases where you may want to have more
>>> granularity of this information at block level b
create function t() returns text language plpgsql as $$ declare m moo;
begin m := null; return m.t; end $$;
select t(); -- Expected error
alter table moo add t text;
select t(); -- Unexpected error
So it seems the correct fix would be to remember the list of every xmin
for every type we saw... unless there's some
On 4/28/15 1:31 PM, Andrew Dunstan wrote:
On 04/28/2015 01:44 PM, Jim Nasby wrote:
On 4/27/15 10:06 PM, Andrew Dunstan wrote:
My point remains that we really need methods of a) getting the field
names from generic records and b) using text values to access fields of
generic records, both as
everything that's necessary? Or are you worried
that doing this could be user-visible (which as long as it's a manual
process I think is OK)?
On 4/29/15 12:18 PM, Robert Haas wrote:
On Mon, Apr 27, 2015 at 5:01 PM, Jim Nasby wrote:
The problem with just having the value is that if *anything* changes between
how you evaluated the value when you created the index tuple and when you
evaluate it a second time you'll corrupt your
decibel@decina.local=# insert into t values('a');
ERROR: "" is null? t
decibel@decina.local=# drop trigger t on t;
DROP TRIGGER
decibel@decina.local=# create trigger t before insert on t for each row
execute procedure tg('null');
CREATE TRIGGER
decibel@decina.local=# inser
that anyone is working on it.
ISTR a comment to the effect of the SQL standard effectively requires
current behavior.
I'd still love to see a way around that though, even if it means some
kind of additional syntax; WITH is a lot nicer way to factor a query
than 10 nested subselects.
you now have all that definition both in your VCS and the
database itself, but ISTM that's a much bigger problem than the small
amount of info we lose from stored functions...
matic for backwards-compatibility
reasons. It also seems like rather a lot of new mechanism to add for
something with (evidently) near-zero user demand.
Ahh, I thought the array started life as an actual array, not char **.
So yeah, not nearly as easy to fix. :(
ith you on one point: the solution to
index bloat (and probably heap bloat, too) is not to clean it up
faster but to create less of it in the first place. Making more
updates HOT is one way to do that.
+1.
1:
http://stackoverflow.com/questions/16401294/how-to-know-linux-scheduler-time-slice
RBM_ZERO_NO_BM_VALID)
+ TerminateBufferIO(bufHdr, false, 0);
+ else
+ TerminateBufferIO(bufHdr, false, BM_VALID);
Simply passing in a 0 seems a bit odd to me; is there anywhere else we
do that?
?column?
----------
   -1.00
(1 row)
The problem is that :: binds more tightly than -:
select (-1)::loan.loan_amount;
ERROR: value for domain loan.loan_amount violates check constraint
"loan_amount__greater_equal_0"
Is this by design? If not, any ideas how bad it'd be to fi
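To spell out the precedence difference with a simplified, hypothetical domain
(the schema qualification from the example above is dropped for brevity):

```sql
CREATE DOMAIN loan_amount AS numeric CHECK (VALUE >= 0);

SELECT -1::loan_amount;   -- parsed as -(1::loan_amount); the cast sees 1, so the check passes
SELECT (-1)::loan_amount; -- the cast sees -1, so the check constraint fails
```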
ar.
To put this in perspective... I've seen things like this take query
runtime from minutes to multiple hours or worse; bad enough that
"behavior break" becomes a valid description.
We definitely need to highlight this in the release notes, and I think
the GUC would be mandatory.
= trunc( random()*45000) ;
You could create a temp table with the random value and JOIN to it:
CREATE TEMP TABLE rnd AS SELECT random()*45000;
Another option might be to use a prepared statement:
PREPARE test AS SELECT ... WHERE customer_id = $1;
EXECUTE test( random()*45000 );
option as we add each one. We need a more general
approach.
Yeah. I think one of the big use cases here is that many environments
are OK with at least ident (if not trust) but only from the local
machine. So you'd probably want to handle that somehow.
ws some people that could cause big problems for.
put something into PGXN first; this
doesn't really feel like it's baked enough for contrib yet. (And I say
that as someone who's really wanted this ability in the past...)
nd whether it succeeded or not. (log in a table, not
just a logfile).
This isn't something that can be done at higher layers either; only the
scheduler will know if the job failed to even start, or whether it tried
to run the job.
s callbacks for every time the bottom-end
scheduler tries to start a job. Otherwise, the top has no clue what the
bottom has actually attempted.
To be clear, I don't think these need to be done in a first pass. I am
concerned about not painting ourselves into a corner though.
ls motivated to fix, there's a typo in the comment for
IndexNextWithReorder (s/his/this/):
+ * Like IndexNext, but his version can also re-check any
n a DB (Firebird had
something like that, but with no payload).
FWIW, numerous people made similar comments about Postgres at this
month's AustinPUG meeting when someone mentioned they haven't used it
yet. Basically, they said they'd love using it. :)
trate custom-scan interface, however, it is
not certain an example always needs to be in-core.
FWIW, having TIDGreaterOperator would be very useful for anyone trying
to un-bloat a table, so it'd be nice if this was at least available as a
PGXN extension.
nger marked as committer I don't think it's
fair to hold you accountable for people not stepping back up.
On 8/26/14, 6:52 PM, Andres Freund wrote:
On 2014-03-21 19:22:31 +0100, Andres Freund wrote:
>Hi,
>
>I've been annoyed at the amount of memory used by the backend local
>PrivateRefCount array for a couple of reasons:
>
>a) The performance impact of AtEOXact_Buffers() on Assert() enabled
>bui
On 8/26/14, 8:45 PM, Michael Paquier wrote:
Hi all,
As mentioned here, we support multiple logging format:
http://www.postgresql.org/docs/devel/static/runtime-config-logging.html
Now what about a json format logging with one json object per log entry?
A single json entry would need more space t
On 8/27/14, 2:23 AM, Heikki Linnakangas wrote:
Does this make sense? In essence, make the relations work like PL/pgSQL
variables do. If you squint a little, the new/old relation is a variable from
the function's point of view, and a parameter from the planner/executor's point
of view. It's jus
On 8/26/14, 8:40 AM, Heikki Linnakangas wrote:
Just so everyone is on the same page on what kind of queries this helps with,
here are some examples from the added regression tests:
-- Test join removals for semi and anti joins
CREATE TEMP TABLE b (id INT NOT NULL PRIMARY KEY, val INT);
CREATE
On 8/24/14, 6:22 AM, Haribabu Kommi wrote:
Yes, we are mainly targeting CPU-limited sequential scans, Because of
this reason
only I want the worker to handle the predicates also not just reading
the tuples from
disk.
In that case, I would suggest focusing on parallel execution of conditions
re
On 8/27/14, 7:33 AM, Fujii Masao wrote:
On Tue, Aug 19, 2014 at 1:07 AM, Robert Haas wrote:
On Fri, Aug 15, 2014 at 7:17 AM, Fujii Masao wrote:
On Fri, Aug 15, 2014 at 3:40 AM, Andres Freund wrote:
On 2014-08-14 14:37:22 -0400, Robert Haas wrote:
On Thu, Aug 14, 2014 at 2:21 PM, Andres Fre
On 8/27/14, 1:38 AM, Andres Freund wrote:
It occurs to me that it'd also be nice to have some
>stats available on how this is performing; perhaps a dtrace probe for
>whenever we overflow to the hash table, and one that shows maximum
>usage for a statement? (Presumably that's not much extra code o
On 9/17/14, 7:40 PM, Jan Wieck wrote:
Exactly. Doing something like
ASSERT (select count(*) from foo
where fk not in (select pk from bar)) = 0;
is a perfectly fine, arbitrary boolean expression. It will probably work well
in a development environment too. And I am very sure that
On 10/2/14, 2:43 PM, Josh Berkus wrote:
>Questions:
>
> 1. Do you agree that a systematic way to report what a DDL command (or
> script, or transaction) is going to do on your production database
> is a feature we should provide to our growing user base?
Yes.
+1
> 2. What do you thin
On 10/2/14, 6:51 AM, Pavel Stehule wrote:
EXECUTE format('UPDATE tbl SET %I = newvalue WHERE key = %L',
colname, keyvalue)
or
-1, because of quoting issues
EXECUTE format('UPDATE tbl SET %I = newvalue WHERE key = $1',
colname)
USING keyvalue;
Better, b
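For completeness, a sketch of the fully parameterized form under discussion
(identifiers still have to go through %I, but every value can be passed via
USING; the names are taken from the example above):

```sql
EXECUTE format('UPDATE tbl SET %I = $1 WHERE key = $2', colname)
   USING newvalue, keyvalue;
```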
On 10/2/14, 11:06 PM, David G Johnston wrote:
Jim Nasby-5 wrote
On 10/2/14, 6:51 AM, Pavel Stehule wrote:
EXECUTE format('UPDATE tbl SET %I = newvalue WHERE key = %L',
colname, keyvalue)
or
-1, because of quoting issues
EXECUTE format('UPDATE tbl SET %I =
On 10/3/14, 4:02 PM, David G Johnston wrote:
Should we also allow:
SELECT
, col1
, col2
, col3
FROM ...
?
I would say yes, if we're going to do this. I don't see it being any worse than
trailing commas.
If we are going to do this, we need to do it EVERYWHERE.
FWIW, the way I normally "work
On 10/4/14, 2:58 PM, Bruce Momjian wrote:
I've committed changes for this in advance of the upcoming 9.4beta3
>release. Hopefully, if this is seriously bad for anyone, we'll hear
>about it from beta testers before it gets into any official back-branch
>releases.
The changes for the Russian Fede
On 10/4/14, 1:21 PM, Jeff Janes wrote:
On Thu, Oct 2, 2014 at 6:21 AM, Michael Banck <michael.ba...@credativ.de> wrote:
Hi,
we have seen repeatedly that users can be confused about why PostgreSQL
is not shutting down even though they requested it. Usually, this is
because
On 10/2/14, 7:28 AM, Robert Haas wrote:
On Thu, Oct 2, 2014 at 4:49 AM, Heikki Linnakangas
wrote:
>An origin column in the table itself helps tremendously to debug issues with
>the replication system. In many if not most scenarios, I think you'd want to
>have that extra column, even if it's no
On 10/4/14, 3:25 PM, Bruce Momjian wrote:
On Sat, Oct 4, 2014 at 03:01:45PM -0500, Jim Nasby wrote:
On 10/4/14, 2:58 PM, Bruce Momjian wrote:
I've committed changes for this in advance of the upcoming 9.4beta3
release. Hopefully, if this is seriously bad for anyone, we'll hear
abo
ons more than once in userspace where I need a way
to properly separate schema and object name. Generally I can make do using reg*
casts and then hitting catalog tables, but it'd be nice if there was an easier
way.
or the server's timezone. The
part that seems hard (at least to me) is the question of how to actually store the
timezone, because I don't think storing the text string "America/Central" is
going to cut it. :/
rmation to generate the
list of commands psql shouldn't do that with.
2) Always run the regression test with auto-commit turned off.
3) Run the regression in both modes (presumably only on the build farm due to
how long it would take).
On 10/6/14, 6:19 PM, Jim Nasby wrote:
FWIW, I agree for timestamptz, but I do wish we had a timestamp datatype that stored the
exact timezone in effect when the data was entered. That can really, REALLY save your
rear if you screw up either timezone in postgresql.conf, or the server's tim
On 10/7/14, 2:11 AM, Feike Steenbergen wrote:
On 7 October 2014 01:41, Jim Nasby wrote:
>The options I see...
>
>1) If there's a definitive way to tell from backend source code what
>commands disallow transactions then we can just use that information to
>generate the l
the only "reasonable" way to do that is to throw them in a _blah
schema and try to further hide them with permissions games. :(
handle it. But C is actually quite well-suited to such tasks.
Yeah, I wouldn't want to attempt this in SQL; I was saying that a built-in
function to do this would be broadly useful, not just for replicating DROPs.
ncy is good.
(From 9.3)
* LWLockConditionalAcquire - acquire a lightweight lock in the specified mode
*
* If the lock is not available, return FALSE with no side-effects.
f it returns whether we must
wait. Note that it's not an exported function.
ISTM that a function attempting a lock would return success, not failure. Even
though it's internal now it could certainly be made external at some point in
the future. But I suppose it's ultimately a matter
y set
hint bits...
horrid. :( There's also the potential to get stuck
in a loop where a BEFORE INSERT trigger turns the tuple into an UPDATE and a
BEFORE UPDATE trigger turns it into an INSERT.
On 10/9/14, 6:59 PM, Gavin Flower wrote:
On 10/10/14 12:38, Jim Nasby wrote:
On 10/8/14, 5:51 PM, Peter Geoghegan wrote:
On Wed, Oct 8, 2014 at 2:01 PM, Kevin Grittner wrote:
>Although the last go-around does suggest that there is at least one
>point of difference on the semantics. Yo
che either.
*/
if (!IsSystemRelation(relation))
return;
ctual backups, but I
think it is very common to use it to produce schema-only (maybe with data from
a few tables as well) dumps for developers. I've certainly wished I could offer
that ability without going full-blown super-user.
odify PITR infrastructure
#2 is probably a weak case that may not be needed; I include it because someone
mentioned stopping a backup that someone else started.
I think #3 should just require superuser.
#1 is what you'd want a more junior person to handle. "Bob needs a snapshot of
that's an acceptable way to solve both problems: if
your smart shutdown is hung, cancel it and connect to see what's going on.
rt the old pg_newsysviews project.
it as internal (especially if the added field defaults to an invalid
value).
s well as supporting leading extra commas).
e with that I can take a stab at implementing it.
Since I tend to be paranoid, I like smart being the default, but seems I'm in
the minority there.
le, you'd certainly want to
use an index on (field_we_care_about, smallint_field) over an index on
(field_we_care_about, big_honking_text_field).
o 9.5, but the intention should certainly be that we support extra
delimiters *everywhere*.
to come up with a
per-page system that could be used to determine when a table needs background
work to be done. The visibility map could serve a lot of this purpose, but I'm
not sure if it would work for getting hint bits set in the background. I think
it would also be a win if we had a w
On 10/19/14, 11:41 AM, Andres Freund wrote:
On 2014-10-18 21:36:48 -0500, Jim Nasby wrote:
On 10/9/14, 4:19 PM, Andres Freund wrote:
On 2014-10-09 18:16:46 -0300, Alvaro Herrera wrote:
Andres Freund wrote:
On 2014-10-09 18:03:00 -0300, Alvaro Herrera wrote:
Bruce Momjian wrote:
I agree
ugh theoretically in
this case I'd think it should be the same for indexes we care about...
pages elog(LOG) how many we skipped. If we
skip more than 1% of the pages we visited (not relpages) then elog(WARNING)
instead.
Comments?
shing this with a straight domain. That
would work, except for this:
WARNING: cast will be ignored because the source data type is a domain
Why do we ignore casts from domains to other data types? I'm guessing because
it's simply not what domains were meant for?
s are
copied over... what happens if we puke before the files get copied? Would it be
better to enable after the scripts are in place?
BTW, Mavericks has a comment that /etc/hostconfig is going away, but google
isn't telling me what's replacing it...
be automated...
Are you thinking we'd commit the expected output of the perl script and have
the regression suite call that script to verify it? That seems like a good way
to fix this. The only better option I can come up with is if the perl script
generated an actual test that we know wo
cess because of stuff the system is doing.
This is actually a huge problem for anyone who's trying to figure out how
useful indexes are; they see usage and think they have queries that are using
the index when in reality they don't.
On 10/20/14, 10:29 AM, Greg Stark wrote:
On Mon, Oct 20, 2014 at 2:57 AM, Jim Nasby wrote:
Currently, a non-freeze vacuum will punt on any page it can't get a cleanup
lock on, with no retry. Presumably this should be a rare occurrence, but I
think it's bad that we just assume that
On 10/20/14, 3:11 PM, Andres Freund wrote:
On 2014-10-19 20:43:29 -0500, Jim Nasby wrote:
On 10/19/14, 11:41 AM, Andres Freund wrote:
On 2014-10-18 21:36:48 -0500, Jim Nasby wrote:
The "weird" part is that if it's not doing a freeze it will just punt
on a page if it can't
On 10/20/14, 7:31 PM, Andres Freund wrote:
On 2014-10-20 19:18:31 -0500, Jim Nasby wrote:
>In the meantime, I think it's worth adding this logging. If in fact this
basically never happens (the current assumption), it doesn't hurt anything. If it
turns out our assumption is wron
On 10/20/14, 11:16 AM, Andrew Dunstan wrote:
On 10/20/2014 11:59 AM, David E. Wheeler wrote:
On Oct 18, 2014, at 7:06 PM, Jim Nasby wrote:
Yes.
The only case I can think of where we wouldn't want this is COPY.
BTW, this should also apply to delimiters other than commas; for example,
locking page
Did this get committed? Should probably add it to the commitfest if not...
On 10/21/14, 4:36 PM, Jeff Janes wrote:
On Mon, Oct 20, 2014 at 5:46 PM, Andres Freund <and...@2ndquadrant.com> wrote:
On 2014-10-20 17:43:26 -0700, Josh Berkus wrote:
> On 10/20/2014 05:39 PM, Jim Nasby wrote:
> > Or maybe vacuum isn't the right way to
make check.
diff --git a/src/backend/access/heap/pruneheap.c
b/src/backend/access/heap/pruneheap.c
index 06b5488..4c40f7e 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/a
On 10/21/14, 5:39 PM, Alvaro Herrera wrote:
Jim Nasby wrote:
Currently, a non-freeze vacuum will punt on any page it can't get a
cleanup lock on, with no retry. Presumably this should be a rare
occurrence, but I think it's bad that we just assume that and won't
warn the user i
On 10/13/14, 8:28 PM, Tom Lane wrote:
Jim Nasby writes:
CacheInvalidateHeapTuple currently does the following tests first; would there
be a performance improvement to testing the system relation case first? We're
almost never in bootstrap mode, so that test is almost always a waste. Is
eded is to limit
the number of connections a user can open. Or perhaps another option would be
to change the permissions on the related functions (do we check ACLs for
internal functions?)
On 10/24/14, 12:21 PM, Robert Haas wrote:
On Fri, Oct 24, 2014 at 1:13 PM, Jim Nasby wrote:
It's a valid concern, but I think the way to handle it if needed is to limit
the number of connections a user can open. Or perhaps another option would
be to change the permissions on the re
On 10/24/14, 2:23 PM, Jim Nasby wrote:
On the serialization structure itself, should we be worried about a mismatch
between available GUCs on the sender vs the receiver? Presumably if the sender
outputs a GUC that the receiver doesn't know about we'll get an error, but what
if
;unknown message type: %c (%zu
bytes)",
+msg.data[0], nbytes);
It'd be useful to also provide DEBUG output with the message itself...
On 10/24/14, 3:26 PM, Robert Haas wrote:
On Fri, Oct 24, 2014 at 3:30 PM, Jim Nasby wrote:
On 10/24/14, 2:23 PM, Jim Nasby wrote:
On the serialization structure itself, should we be worried about a
mismatch between available GUCs on the sender vs the receiver? Presumably if
the sender outputs
On 10/24/14, 4:03 PM, Robert Haas wrote:
On Fri, Oct 24, 2014 at 4:46 PM, Jim Nasby wrote:
On 10/24/14, 12:21 PM, Robert Haas wrote:
- What should we call dsm_unkeep_mapping, if not that?
Only option I can think of beyond unkeep would be
dsm_(un)register_keep_mapping. Dunno that it's
're doing that), but if you were going to go to that extent perhaps it'd be
better to just support cross-database access in a single backend...