2014-11-23 13:27 GMT+01:00 Shai Berger :
> Hi Rick,
>
> On Sunday 23 November 2014 14:11:13 Rick van Hattem wrote:
> >
> > So please, can anyone give a good argument as to why any sane person
> would
> > have a problem with a huge default limit which will kill the performance
>
2014-11-20 8:30 GMT+01:00 Schmitt, Christian :
> Nope. A large OFFSET of N will read through N rows, regardless of index
>> coverage. See
>> http://www.postgresql.org/docs/9.1/static/queries-limit.html
>>
>
> That's simply not true.
> If you define an ORDER BY with a well
will make the system more complex, and Django
> is already really complex inside the core ORM.
>
Let's first understand what's needed, then we can decide if it has a place
inside the Django core
>
>
> 2014-11-19 13:59 GMT+01:00 Marco Paolini <markopaol...@gmail.com>:
>
also, the offset + limit pagination strategy of the Django paginator is
sub-optimal, as it has O(N) complexity: SELECT * FROM auth_user LIMIT 100
OFFSET 100 still scans past the 100 skipped rows before returning anything
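As a sketch of the point above (table and column names are assumed from the example, and this is not code from the thread), compare the two query shapes: OFFSET forces the server to read and discard every skipped row, while a keyset/"seek" predicate lets the index jump straight to the page.

```python
# Hypothetical illustration of offset vs. keyset pagination.

def offset_page_sql(page, page_size=100):
    # The server reads page * page_size rows before returning page_size.
    return ("SELECT * FROM auth_user ORDER BY id "
            f"LIMIT {page_size} OFFSET {page * page_size}")

def keyset_page_sql(last_seen_id, page_size=100):
    # An index seek skips directly past the previous page.
    return ("SELECT * FROM auth_user WHERE id > %s "
            f"ORDER BY id LIMIT {page_size}"), (last_seen_id,)

print(offset_page_sql(5))  # note the OFFSET 500 the server must scan past
```

The keyset variant requires a stable, indexed ordering column, which is exactly the "well-defined ORDER BY" caveat raised elsewhere in this thread.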
2014-11-19 13:56 GMT+01:00 Marco Paolini <markopaol...@gmail.com>:
>
>
2014-11-19 13:50 GMT+01:00 Rick van Hattem :
> Definitely agree on this, silently altering a query's limit is probably
> not the way to go. Raising an exception in case of no limit and lots of
> results could be useful.
>
> For the sake of keeping the discussion useful:
> -
What if we do it with asyncio?
2014-10-28 22:47 GMT+01:00 Aymeric Augustin <
aymeric.augus...@polytechnique.org>:
> No, there isn’t.
>
> I assume that “including in core” means at least “making usable in
> combination with WSGI and with the ORM”.
>
> Even if we disregard for a minute the fact
On 18/01/2013 12:55, Anssi Kääriäinen wrote:
On Jan 18, 13:00, Florian Apolloner wrote:
Hi,
On Thursday, January 17, 2013 11:08:01 PM UTC+1, Russell Keith-Magee wrote:
So - while I'm not sure there's a place for this in core (unless you can
demonstrate how to
On 17/01/2013 22:08, Russell Keith-Magee wrote:
On Mon, Dec 31, 2012 at 5:56 PM, Marco Paolini <markopaol...@gmail.com
<mailto:markopaol...@gmail.com>> wrote:
Hi all,
sorry for the noise, forget my previous mail as it was pointing to the
wrong commit,
here's
On Thursday, January 17, 2013, Simon Litchfield wrote:
> Also, did you see psycopg2.extras.DateTimeRange?
>
> No, I missed that one !
Thanks, I'll see if it can be used somehow, but since the range types have
to be used by the core Django code, I doubt they can be imported from a
third party
On Thursday, January 17, 2013, Simon Litchfield wrote:
> Marco, this is great.
Thanks, did you give it a try?
>
> I wonder if it would be possible to add range fields without modifying
> django?
>
Very difficult, there are many small changes scattered around the core
django ORM and driver code
Hi all,
sorry for the noise, forget my previous mail as it was pointing to the wrong
commit,
here's the good one:
https://github.com/mpaolini/django/commit/b754abdeab204949510500ccb1b845b7ad143542
copying here the rest of the original mail:
postgresql since version 9.2 added support for
Hi all,
postgresql since version 9.2 added support for range types [1]
they have a nice set of specialized operators like "overlaps", "left of",
etc... [2]
So I decided to work on a reference implementation for Django
even if it looks like psycopg2 does not fully support yet these data types
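To make the operator semantics concrete, here is a pure-Python sketch of the half-open range behaviour that PostgreSQL's range types implement (the class name is hypothetical; the real types and operators live in the database, and psycopg2's support for them was still incomplete at the time).

```python
# Pure-Python sketch of PostgreSQL range operator semantics.

class NumRange:
    """Half-open [lower, upper) range, matching PostgreSQL's default bounds."""

    def __init__(self, lower, upper):
        self.lower, self.upper = lower, upper

    def overlaps(self, other):   # SQL: r1 && r2
        return self.lower < other.upper and other.lower < self.upper

    def left_of(self, other):    # SQL: r1 << r2
        return self.upper <= other.lower

print(NumRange(1, 5).overlaps(NumRange(4, 9)))  # True
print(NumRange(1, 4).overlaps(NumRange(4, 9)))  # False: upper bound is open
```

The half-open convention is why [1, 4) and [4, 9) do not overlap: the shared endpoint belongs to only one of them.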
On 11/10/2012 15:53, Tom Evans wrote:
On Wed, Oct 10, 2012 at 7:52 AM, Moonlight wrote:
Here is an article comparing various URL dispatchers:
http://mindref.blogspot.com/2012/10/python-web-routing-benchmark.html
What makes the Django URL dispatcher that much slower?
On 04/11/2011 03:05, Anssi Kääriäinen wrote:
On Nov 4, 3:38 am, Marco Paolini<markopaol...@gmail.com> wrote:
Postgresql:
.chunked(): 26716.0kB
.iterator(): 46652.0kB
what if you use .chunked().iterator() ?
Quick test shows that the actual memory used by the queryset is around
1.2Mb.
On 04/11/2011 01:50, Anssi Kääriäinen wrote:
On Nov 4, 1:20 am, Marco Paolini<markopaol...@gmail.com> wrote:
time to write some patches, now!
Here is a proof of concept for one way to achieve chunked reads when
using PyscoPG2. This lacks tests and documentation. I think the
approach i
...
The SQLite3 shared cache mode seems to suffer from the same problem
as MySQL:
"""
At any one time, a single table may have any number of active
read-locks or a single active write lock. To read data from a table, a
connection must first obtain a read-lock. To write to a table, a
connection
Now, calling the .iterator() directly is not safe on SQLite3. If you
do updates to objects not seen by the iterator yet, you will see those
changes. On MySQL, all the results are fetched into Python in one go,
and the only saving is from not populating the _results_cache. I guess
Oracle will just
On 02/11/2011 17:33, Tom Evans wrote:
On Wed, Nov 2, 2011 at 4:22 PM, Marco Paolini<markopaol...@gmail.com> wrote:
On 02/11/2011 17:12, Tom Evans wrote:
If you do a database query that quickly returns a lot of rows from the
database, and each row returned from the database require
On 02/11/2011 17:12, Tom Evans wrote:
On Wed, Nov 2, 2011 at 11:28 AM, Marco Paolini<markopaol...@gmail.com> wrote:
MySQL can do chunked row fetching from the server, but only one row at a time
curs = connection.cursor(CursorUseResultMixIn)
curs.fetchmany(100) # fetches 100 rows, one
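A self-contained sketch of that one-row-at-a-time streaming; the cursor below is a fake standing in for MySQLdb's use-result cursors, so the round-trip cost is visible without a database.

```python
# Sketch: with MySQL's mysql_use_result mode, rows stream from the
# server one at a time, so fetchmany(100) costs 100 round trips.

class FakeUseResultCursor:
    """Simulates one-row-at-a-time streaming (stand-in for MySQLdb)."""

    def __init__(self, rows):
        self._it = iter(rows)

    def fetchmany(self, size):
        out = []
        for _ in range(size):  # one server round trip per row
            try:
                out.append(next(self._it))
            except StopIteration:
                break
        return out

curs = FakeUseResultCursor(range(250))
print(len(curs.fetchmany(100)))  # 100
```

Memory stays flat because only the current batch is ever held client-side; the trade-off is the per-row round trip noted above.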
On 02/11/2011 15:18, Anssi Kääriäinen wrote:
On 11/02/2011 01:36 PM, Marco Paolini wrote:
maybe we could implement something like:
for obj in qs.all().chunked(100):
    pass
.chunked() will automatically issue LIMITed SELECTs
that should work with all backends
I don't think
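A self-contained sketch of the proposed .chunked() strategy, issuing LIMIT/OFFSET queries until the backend runs dry; run_query stands in for the ORM and its interface is assumed, not taken from any actual patch.

```python
# Sketch of .chunked(): repeated LIMITed SELECTs, portable to any backend.

def chunked(run_query, size=100):
    """Yield all rows by fetching them `size` at a time."""
    offset = 0
    while True:
        rows = run_query(limit=size, offset=offset)
        if not rows:
            return
        yield from rows
        if len(rows) < size:  # short batch means we hit the end
            return
        offset += size

# Usage with a fake backend holding 250 rows.
data = list(range(250))
fake = lambda limit, offset: data[offset:offset + limit]
print(sum(1 for _ in chunked(fake, size=100)))  # 250
```

Note the trade-off raised earlier in the thread: each successive chunk pays the OFFSET scan cost again, so this is simple and portable but not free on large tables.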
On 02/11/2011 14:36, Ian Kelly wrote:
On Wed, Nov 2, 2011 at 5:05 AM, Anssi Kääriäinen
wrote:
For PostgreSQL this would be a nice feature. Any idea what MySQL and Oracle
do currently?
If I'm following the thread correctly, the oracle backend already does
chunked
On 02/11/2011 12:05, Anssi Kääriäinen wrote:
On 11/02/2011 12:47 PM, Marco Paolini wrote:
if that option is true, SQLite should open one connection per cursor
and psycopg2 should use named cursors
The SQLite behavior leads to some problems with transaction management -
different connections
On 02/11/2011 10:10, Luke Plant wrote:
On 02/11/11 08:48, Marco Paolini wrote:
thanks for pointing that to me, do you see this as an issue to be fixed?
If there is some interest, I might give it a try.
Maybe it's not fixable, at least I can investigate a bit
Apparently, the protocol
On 02/11/2011 09:43, Luke Plant wrote:
On 02/11/11 00:41, Marco Paolini wrote:
so if you do this:
for obj in Entry.objects.all():
    pass
django does this:
- creates a cursor
- then calls fetchmany(100) until ALL rows are fetched
- creates a list containing ALL fetched rows
- passes
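The fetch-all behaviour described above can be sketched like this; the cursor is a fake and the function is a simplified stand-in for Django's result-cache filling, not its actual code.

```python
# Sketch: default iteration fetchmany()s until the cursor is exhausted
# and keeps every row, so memory grows with the full result set.

class FakeCursor:
    def __init__(self, rows):
        self._rows = list(rows)

    def fetchmany(self, n):
        out, self._rows = self._rows[:n], self._rows[n:]
        return out

def iterate_all(cursor, batch=100):
    results = []                  # analogue of the queryset's result cache
    while True:
        rows = cursor.fetchmany(batch)
        if not rows:
            break
        results.extend(rows)      # ALL rows end up in memory
    return results

print(len(iterate_all(FakeCursor(range(1000)))))  # 1000
```

This is the step that makes iterating a large queryset expensive even when the loop body discards each object immediately.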
On 28/10/2011 15:55, Tom Evans wrote:
On Fri, Oct 28, 2011 at 1:05 PM, Marco Paolini<markopaol...@gmail.com> wrote:
it's a bit more complex: there are basically two phases:
1) row fetching from db using cursor.fetchmany()
2) model instance creation in queryset
both are delayed a
On 07/10/2011 06:29, Tai Lee wrote:
Why is ROUND_HALF_EVEN superior? Perhaps for statistics, where
rounding all halves up would skew results, but I guess not for most
other cases.
If the default rounding behaviour produces different results than a
regular calculator, common spreadsheet and
On 06/10/2011 02:45, Paul McMillan wrote:
.. (A) silent rounding issue:
when a decimal value is saved in db
its decimal digits exceeding decimal_places are rounded off using
.quantize(). Rounding defaults to ROUND_HALF_EVEN (python default).
There is no mention of this behavior
Hi,
I would like to share some thoughts regarding django.db.models.DecimalField:
.. (A) silent rounding issue:
when a decimal value is saved in db
its decimal digits exceeding decimal_places are rounded off using
.quantize(). Rounding defaults to ROUND_HALF_EVEN (python default).
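The rounding behaviour is easy to demonstrate with the stdlib alone; save_decimal below is a rough stand-in for what DecimalField does before writing to the database, not Django's actual code.

```python
# Digits beyond decimal_places are quantized with ROUND_HALF_EVEN
# ("banker's rounding"): exact halves go to the nearest EVEN digit,
# not always up as a regular calculator would.

from decimal import Decimal, ROUND_HALF_EVEN

def save_decimal(value, decimal_places=2):
    """Rough analogue of DecimalField's quantize-on-save."""
    q = Decimal(10) ** -decimal_places
    return value.quantize(q, rounding=ROUND_HALF_EVEN)

print(save_decimal(Decimal("1.005")))  # 1.00  (rounds down: 0 is even)
print(save_decimal(Decimal("1.015")))  # 1.02  (rounds up: 2 is even)
```

This is exactly the surprise described above: two values ending in the same ".5" digit round in different directions, silently, at save time.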
On 29/09/2011 12:12, momo2k wrote:
Hello,
this is my first post here and I hope this is the right place for
discussing ideas for Django features before reporting a ticket.
you don't need to open a ticket: #8995 covers this issue already
I know that searching tickets is not the most thrilling
Karen Tracey wrote:
On Tue, Jun 28, 2011 at 8:44 AM, Jim Dalton wrote:
I have not had time to try out the patch, but did look at it.
Doesn't the base implementation of disable_foreign_key_checks
need to return False
bpeschier wrote:
what if you add an alters_data class attribute to your ModelForm subclass?
class MyForm(ModelForm):
alters_data = True
this way __call__ is not going to get called...
No, but it will be replaced with settings.TEMPLATE_STRING_IF_INVALID.
See:
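A simplified sketch of the mechanism being discussed: resolve below is a stand-in for the template engine's variable resolution, not the real implementation, and TEMPLATE_STRING_IF_INVALID defaults to the empty string as in Django's settings.

```python
# Sketch: a callable with alters_data = True is never invoked during
# template variable resolution; the invalid-string placeholder is
# substituted instead.

TEMPLATE_STRING_IF_INVALID = ""

def resolve(value):
    if callable(value):
        if getattr(value, "alters_data", False):
            return TEMPLATE_STRING_IF_INVALID  # never call it
        return value()
    return value

class MyForm:
    alters_data = True  # the class itself is callable (its constructor)

print(repr(resolve(MyForm)))  # '' -- the constructor never runs
```

Since a class is callable via its constructor, setting alters_data on the class is what prevents the template engine from instantiating it by accident.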
bpeschier wrote:
I am agreeing with the changes as made in [14992] as they make things
more consistent. The snag I run into was a flexible filter for making
ModelForm-instances which took a form-class as an argument via a
Variable. Since classes are callable (their constructor), suddenly I
Russell Keith-Magee wrote:
On Tue, Dec 7, 2010 at 11:21 PM, mpaolini wrote:
Maybe unrelated...
have you had a look at #14662?
It's related, but in the sense that it's the manual manifestation of
what #14799 needed to correct.
The contenttype and auth
I think call_command should return something significant
to let the caller know if the command was successful or not.
Another issue related to this: getting an error return code when calling
management commands from the
commandline
The problem now is that for instance loaddata swallows all
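A hedged sketch of the proposal: run_command and its exit-code convention are hypothetical, not current call_command behaviour, and loaddata here is a toy stand-in for the real command.

```python
# Hypothetical wrapper: propagate success/failure both to Python
# callers (return value) and to the shell (process exit code), instead
# of swallowing errors.

import sys

def run_command(handle, *args):
    """Run a command handler; map exceptions to a shell-style failure code."""
    try:
        result = handle(*args)
    except Exception as exc:
        print(f"error: {exc}", file=sys.stderr)
        return 1                       # non-zero signals failure
    return 0 if result is None else result

def loaddata(fixture):                 # toy stand-in for the real command
    if fixture != "initial.json":
        raise ValueError(f"fixture {fixture!r} not found")

print(run_command(loaddata, "initial.json"))  # 0
```

From the command line, the caller would then pass this value to sys.exit(), so a failed loaddata no longer reports success to the shell.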