> So, what you're really asking for boils down to nestable transactions?
That's how I've thought of savepoints from day one. When I use them
in Python code, I use a with_transaction wrapper, which transparently
uses a transaction or a savepoint.
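A minimal sketch of such a wrapper (names are hypothetical, not Django's API; sqlite3 stands in for Postgres here only because it also supports SAVEPOINT): open a real transaction at depth 0, and a savepoint for any nested block, so an inner failure rolls back only its own work.

```python
import sqlite3
from contextlib import contextmanager

class TransactionManager:
    """Nestable transactions: a real BEGIN at depth 0, SAVEPOINTs below."""
    def __init__(self, conn):
        self.conn = conn
        self.depth = 0

    @contextmanager
    def with_transaction(self):
        if self.depth == 0:
            begin, commit, rollback = "BEGIN", "COMMIT", "ROLLBACK"
        else:
            name = "sp_%d" % self.depth
            begin = "SAVEPOINT " + name
            commit = "RELEASE SAVEPOINT " + name
            rollback = "ROLLBACK TO SAVEPOINT " + name
        self.conn.execute(begin)
        self.depth += 1
        try:
            yield
        except Exception:
            self.depth -= 1
            self.conn.execute(rollback)
            raise
        else:
            self.depth -= 1
            self.conn.execute(commit)

# Demo: the inner failure rolls back to its savepoint without
# killing the outer transaction.
conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t (x INTEGER)")
txn = TransactionManager(conn)
with txn.with_transaction():
    conn.execute("INSERT INTO t VALUES (1)")
    try:
        with txn.with_transaction():
            conn.execute("INSERT INTO t VALUES (2)")
            raise RuntimeError("inner failure")
    except RuntimeError:
        pass
rows = [r[0] for r in conn.execute("SELECT x FROM t")]
```

The calling code never needs to know whether it is at the top level or nested; that decision is made once, inside the wrapper.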
--
Glenn Maynard
--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
user_name, log_type.log_type
ORDER BY user_name, log_type.log_type;
 user_name | log_type | count
-----------+----------+-------
 a         |        1 |     1
 a         |        2 |     2
 a         |        3 |     0
 b         |        1 |     1
 b         |        2 |     0
 b         |
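The shape of that output can be recomputed in Python terms: count events per (user_name, log_type) pair, emitting 0 for pairs with no rows — the SQL equivalent cross-joins users × log_types and LEFT JOINs the event table before grouping. The event data below is made up to match the sample output.

```python
from itertools import product

users = ["a", "b"]
log_types = [1, 2, 3]
# Hypothetical event rows: (user_name, log_type)
events = [("a", 1), ("a", 2), ("a", 2), ("b", 1)]

# Seed every (user, log_type) pair with 0 so empty groups still appear,
# then tally the actual events.
counts = {pair: 0 for pair in product(users, log_types)}
for pair in events:
    counts[pair] += 1
```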
> to tidy up your existing fixes and wrap Django's ORM
> as cleanly as you can. That's assuming they're not interested in patches.
The ORM on a whole is decent, but there are isolated areas where it's
very braindamaged--this is one of them. They have a stable-release
API-comp
se that simply says "wrap this in a
transaction block if one isn't already started, otherwise wrap it in a
savepoint". I don't want to use that code here, because it's nitty
code: it needs to poke at Django internals to figure out whether it's
in a transaction block or not, and dealing with other API
compatibility issues.
--
Glenn Maynard
nd up dropping out of the ORM
and using some uglier SQL to work around this, but this is so trivial
that it's silly to have to do that. I can't do it within the ORM; it
doesn't have the vocabulary.
Any tricks I'm missing? It feels like Postgres is fighting me at
every turn with t
notonic (renumber them in the copy if
necessary). I can't easily test this code, of course, but it's a
simple binary search. Depending on what's triggering this, it may or
may not be able to narrow in on a test case.
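The binary search described above can be sketched generically (the `fails` predicate is a stand-in for whatever reproduces the problem on a copy of the data; it assumes a single offending item, so any prefix containing it fails):

```python
def narrow_failure(items, fails):
    """Binary-search for the shortest failing prefix of `items`."""
    lo, hi = 1, len(items)
    while lo < hi:
        mid = (lo + hi) // 2
        if fails(items[:mid]):
            hi = mid          # bug reproduces: the offender is in this prefix
        else:
            lo = mid + 1      # prefix is clean: the offender is later
    return items[:lo]

# Demo with a stand-in predicate: row 6 is the "bad" one.
culprit = narrow_failure(list(range(10)), lambda rows: 6 in rows)
```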
Tangentially, is there a better way of rolling back a function t
;
CREATE INDEX parents_2to4 ON product(parents[2], parents[3], parents[4]);
... but this throws a parse error. I don't have an immediate need for
this, but I'm curious if this is possible--it seems a natural part of
having a native array type.
--
Glenn Maynard
It's also odd that the "1. Mittelschule ..." line is getting sorted after those.
--
Glenn Maynard
rk. (It'd probably be workable for real
serials, too, with a much larger offset.)
If someone else creates a new sense for that entry after the first
update, it'll sit on the order number you were about to use and the
operation will fail. Serialize so nobody else will insert until
you
00/3" and clicks "delete", you
need to make sure that the one you delete is the same /100/3 that the
user was viewing at the time. That's harder to do...
--
Glenn Maynard
action, if you're not very careful with
locking...
--
Glenn Maynard
lar problems if
your sense is deleted entirely: instead of the row simply ceasing to
exist (and resulting in predictable, checkable errors), you may end up
silently referring to another sense.
Maybe I'm misunderstanding what you're doing, though.
You'd have to have no UNIQUE constraint on th
", and so on.)
This seems embarrassingly simple: return the top rounds for each
stage--but I'm banging my head on it for some reason.
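The "top row per group" pattern being asked for can be sketched in Python terms (column names and data are hypothetical; in Postgres the usual spelling is `SELECT DISTINCT ON (stage) ... ORDER BY stage, score DESC`):

```python
# Keep, for each stage, the round with the highest score.
# Rows are hypothetical (stage, round, score) tuples.
rows = [
    ("qualifier", "round-1", 80),
    ("qualifier", "round-2", 95),
    ("final",     "round-1", 70),
    ("final",     "round-3", 88),
]
best = {}
for stage, rnd, score in rows:
    if stage not in best or score > best[stage][1]:
        best[stage] = (rnd, score)
```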
--
Glenn Maynard
n't in
the table already.
Ideally I'd like it to operate like MySQL's on_duplicate_key_update
option, but for now I'll suffice with just ignoring existing rows and
proceeding with everything else.
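For what it's worth, modern Postgres (9.5+) spells this `INSERT ... ON CONFLICT DO NOTHING` (or `DO UPDATE` for the MySQL-style upsert). A minimal sketch of the ignore-existing-rows behavior, using sqlite3's `INSERT OR IGNORE` analog only so the example is self-contained (table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (word TEXT PRIMARY KEY)")
conn.executemany("INSERT OR IGNORE INTO words VALUES (?)",
                 [("apple",), ("banana",)])
# Re-inserting a duplicate alongside a new row: the duplicate is
# silently skipped, and the rest of the batch still goes in.
conn.executemany("INSERT OR IGNORE INTO words VALUES (?)",
                 [("apple",), ("cherry",)])
rows = sorted(r[0] for r in conn.execute("SELECT word FROM words"))
```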
Thanks,
--
Glenn
ame)) from table1;
This works, so I tried to put that in the cascade but it failed.
Is there any way to accomplish this?
Thanks
Glenn MacGregor
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faqs/FAQ.html
Thanks in advance
Glenn
ps sorry if this isn't the appropriate use for this forum