On Sun, 28 Apr 2019 at 11:30, Adam Johnson wrote:
> I don't think this would be that helpful: Django relies on duck typing, and
> the router docs (
> https://docs.djangoproject.com/en/2.2/topics/db/multi-db/ ) clearly
> document the signatures, and that the methods are all optional:
>
> A router
Hi guys,
It seems to me that an (abstract) base class for the database routers would
be useful for people to inherit from. It would make implementing a router
slightly easier.
I was thinking something along these lines:
import abc

class ConnectionRouterBase(abc.ABC):
    @abc.abstractmethod
    def
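The snippet above is cut off, but it might have continued along these lines, using the four router methods documented in the multi-db topic guide linked earlier. This is an illustrative sketch, not the original author's code; the `ReadReplicaRouter` subclass and the `"replica"` alias are hypothetical:

```python
import abc


class ConnectionRouterBase(abc.ABC):
    """Sketch of an abstract base class for database routers.

    Method names follow the router API in Django's multi-db docs;
    bodies here are illustrative defaults.
    """

    @abc.abstractmethod
    def db_for_read(self, model, **hints):
        """Return the alias of the database to read `model` from."""

    @abc.abstractmethod
    def db_for_write(self, model, **hints):
        """Return the alias of the database to write `model` to."""

    def allow_relation(self, obj1, obj2, **hints):
        """Return True/False, or None for "no opinion"."""
        return None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        """Return True/False, or None for "no opinion"."""
        return None


class ReadReplicaRouter(ConnectionRouterBase):
    """Hypothetical concrete router sending reads to a replica."""

    def db_for_read(self, model, **hints):
        return "replica"

    def db_for_write(self, model, **hints):
        return "default"
```

Note Adam's counterpoint still applies: in Django's duck-typed router protocol all four methods are optional, so an ABC that makes `db_for_read`/`db_for_write` abstract would tighten that contract.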
On Monday 15 June 2015 22:52:09 Rick van Hattem wrote:
On 15 June 2015 at 21:34, Florian Apolloner <f.apollo...@gmail.com> wrote:
>
>
> On Monday, June 15, 2015 at 7:07:38 PM UTC+2, Rick van Hattem (wolph)
> wrote:
>>
>> Would anyone oppose a pull request like this?
>>
>
> Yes, it is highly backwards incompatible.
While I understand the rationale, it's not really possible due to the
underlying Python object:
>>> import datetime
>>> datetime.date()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Required argument 'year' (pos 1) not found
>>> datetime.datetime()
Traceback (most recent
While there are several solutions to this problem, I find myself
scaffolding the Django admin every time I create a new app/model. I even
created an app to do just that (harmless
plug: https://pypi.python.org/pypi/django-admin-generator/).
Anyhow... I've wondered for some time why Django
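The kind of boilerplate such a generator emits can be sketched without Django installed. The helper below is illustrative only (it is not the actual django-admin-generator code): it derives a `list_display`-style tuple from a model-like class by introspection, which is the core of what admin scaffolding automates:

```python
def generate_list_display(model_cls, max_fields=5):
    """Return up to max_fields public attribute names, in definition
    order, suitable as a ModelAdmin.list_display value (sketch)."""
    fields = [
        name
        for name in vars(model_cls)  # class dict preserves definition order
        if not name.startswith("_") and not callable(getattr(model_cls, name))
    ]
    return tuple(fields[:max_fields])


class Article:  # stand-in for a Django model
    title = ""
    author = ""
    published = None


fields = generate_list_display(Article)
```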
Thanks for the help but writing the custom database backend won't be a
problem, I've written one before :)
My goal was simply to move the Django project forward, but it seems the
problems I've encountered in the field are too uncommon for most other
developers to care about or understand.
Thank you all
> On Nov 24, 2014, at 3:36 AM, Rick van Hattem <wo...@wol.ph> wrote:
> > If you fetch N+1 items you know if there are over N items in your list.
>
> Let's stop there. Unfortunately, because of the way libpq works, just
> sending the query and checking the result set size w
without any traceable cause, as the resource starvation
can kill the machine without ever showing anything useful in the logs.
On 24 November 2014 at 12:16, Christophe Pettus <x...@thebuild.com> wrote:
>
> On Nov 24, 2014, at 1:08 AM, Rick van Hattem <wo...@wol.ph> wrote:
>
On 23 November 2014 at 22:57, Christophe Pettus <x...@thebuild.com> wrote:
>
> On Nov 23, 2014, at 1:53 PM, Rick van Hattem <wo...@wol.ph> wrote:
>
> > Very true, that's a fair point. That's why I'm opting for a configurable
> option. Patching this within Django has
>
> Django's ORM is just the wrong level in the software stack for these
> limits, since there are hundreds of other ways to kill the performance of a
> server, and the number of results in a queryset is a poor indicator of
> performance issues.
>
> On Sund
On 23 Nov 2014 22:13, "Christophe Pettus" <x...@thebuild.com> wrote:
>
>
> On Nov 23, 2014, at 1:07 PM, Rick van Hattem <wo...@wol.ph> wrote:
>
> > > Not really, because psycopg already fetched everything.
> >
> > Not if Django limits it b
Hi Florian,
On 23 Nov 2014 16:22, "Florian Apolloner" <f.apollo...@gmail.com> wrote:
>
> Hi Rick,
>
>
> On Sunday, November 23, 2014 1:11:13 PM UTC+1, Rick van Hattem wrote:
>>
>> If/when an unsliced queryset were to reach a certain limit (say, 10,00
o')
> cursor.execute('BIG QUERYSET SQL')
> for row in cursor:  # fetches rows in cursor.itersize chunks
>     pass
>
> I use this in a few scripts that iterate over 10GB of table data. But a
> way to map the rows to Django objects would be nice.
>
>
> On Thursday, 20 No
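The quoted loop relies on a psycopg2 named (server-side) cursor pulling rows in `itersize`-sized batches. Since a live connection isn't available here, the batching behaviour can be illustrated with a pure-Python stand-in; `FakeCursor` and `iter_chunked` are hypothetical names, not psycopg2 API:

```python
class FakeCursor:
    """Stand-in for a psycopg2 named cursor over a big result set."""

    def __init__(self, rows, itersize=2000):
        self._rows = iter(rows)
        self.itersize = itersize
        self.fetches = 0  # counts round-trips to the (fake) server

    def fetchmany(self, size):
        self.fetches += 1
        batch = []
        for _ in range(size):
            try:
                batch.append(next(self._rows))
            except StopIteration:
                break
        return batch


def iter_chunked(cursor):
    """Yield rows one by one, fetching itersize rows per round-trip,
    so the full result set is never materialized at once."""
    while True:
        batch = cursor.fetchmany(cursor.itersize)
        if not batch:
            return
        yield from batch


cur = FakeCursor(range(10), itersize=4)
rows = list(iter_chunked(cur))
```

Mapping such rows back to Django objects is the part the poster wished for; `QuerySet.iterator()` covers much of this use case today.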
On Thursday, November 20, 2014 8:31:06 AM UTC+1, Christian Schmitt wrote:
>
> Nope. A large OFFSET of N will read through N rows, regardless of index
>> coverage. See
>> http://www.postgresql.org/docs/9.1/static/queries-limit.html
>>
>
> That's simply not true.
> If you define an ORDER BY with a
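The alternative being alluded to is keyset ("seek") pagination: instead of counting off an OFFSET, filter on the last key seen, which an index on the sort column can satisfy directly. A pure-Python model of the SQL pattern `WHERE id > :last_id ORDER BY id LIMIT :page_size` (the helper name is illustrative):

```python
def keyset_page(rows, last_id, page_size):
    """Model of: SELECT ... WHERE id > :last_id ORDER BY id LIMIT :n.

    rows is an id-sorted list of (id, payload) tuples standing in for
    an indexed table; we resume from the seek point rather than
    scanning past OFFSET rows.
    """
    page = [r for r in rows if r[0] > last_id][:page_size]
    new_last = page[-1][0] if page else last_id
    return page, new_last


table = [(i, f"row-{i}") for i in range(1, 8)]
page1, last = keyset_page(table, 0, 3)     # ids 1..3
page2, last = keyset_page(table, last, 3)  # ids 4..6
```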
Definitely agree on this: silently altering a query's limit is probably not
the way to go. Raising an exception when there is no limit and lots of
results could be useful.
For the sake of keeping the discussion useful:
- Let's say you have a table with 50,000 items, not an insanely large
amount
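The "raise instead of silently truncating" idea could look like the sketch below. The exception class and helper are hypothetical, not Django API; slicing a list stands in for issuing a `LIMIT max_rows + 1` query:

```python
class QuerySetTooLarge(Exception):
    """Hypothetical error for an unsliced query over the limit."""


def checked_fetch(seq, max_rows):
    """Fetch at most max_rows + 1 items; raise rather than silently
    truncate when the source holds more than max_rows rows."""
    window = list(seq[: max_rows + 1])
    if len(window) > max_rows:
        raise QuerySetTooLarge(
            f"query returned more than {max_rows} rows; "
            "add an explicit slice or raise the limit"
        )
    return window
```

Making the threshold a setting, as proposed, would let sites with the 50,000-row tables opt in without changing behaviour for everyone else.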
>
> If your patches are enough, then live with it, but I don't see a reason to
> optimize Django for big tables.
>
>
>
> 2014-11-18 19:27 GMT+01:00 Rick van Hattem <wo...@wol.ph>:
>
>> That certainly solves one part of the problem. After that I
Paroz <cla...@2xlibre.net> wrote:
> On Tuesday, November 18, 2014 1:58:00 PM UTC+1, Rick van Hattem wrote:
>>
>> Hi guys,
>>
>> As it is right now Django has the tendency to kill either your browser
>> (if you're lucky) or the entire application server w
set size.
>
> I don't think Django should automatically limit these queries.
>
> Regards,
> Michael Manfre
>
> On Tue, Nov 18, 2014 at 7:58 AM, Rick van Hattem <wol...@gmail.com
> > wrote:
>
>> Hi guys,
>>
>> As it is right now Django has the
Hi guys,
As it is right now Django has the tendency to kill either your browser (if
you're lucky) or the entire application server when confronted with a large
database. For example, the admin always does counts for pagination and a
count over a table with many rows (say, in the order of 100M)
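A common workaround for the admin's per-page `COUNT(*)` is to hand the paginator a cheap, possibly approximate count (on PostgreSQL, for example, the planner's estimate in `pg_class.reltuples`). The sketch below is self-contained and illustrative, not Django's `Paginator`; the class and function names are hypothetical:

```python
class ApproximateCountList:
    """Stand-in for a queryset whose count is a precomputed estimate
    rather than a full COUNT(*) per admin page load."""

    def __init__(self, rows, approximate_count):
        self._rows = rows
        self._count = approximate_count

    def __len__(self):
        # Return the estimate; no table scan happens here.
        return self._count

    def __getitem__(self, item):
        return self._rows[item]


def page(obj_list, number, per_page):
    """Minimal pagination: slice one page and report total pages."""
    total_pages = -(-len(obj_list) // per_page)  # ceiling division
    start = (number - 1) * per_page
    return obj_list[start : start + per_page], total_pages


data = ApproximateCountList(list(range(100)), approximate_count=100)
rows, pages = page(data, 2, 10)
```

With a real Django `ModelAdmin`, the same idea is applied by pointing `paginator` at a subclass whose `count` avoids the exact query.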
On Thursday 07 January 2010 01:35:50 Russell Keith-Magee wrote:
> From a cursory inspection, I'm not sure there is much we can do with
> (1) - there isn't a lot of detail on Exception that can be used for a
> capability check, and the only attribute that is actually needed is
> 'source' (albeit in
Hi,
Maybe this question has already been asked, but I am wondering why Jinja2
compatibility can't be fixed in a clean way. Currently the code assumes that
if an exception has a "source" attribute, it is a Django exception and can
be processed as such. (the code: