Hi Robert,
> On 26 Sep 2016, at 03:03, Robert Roskam wrote:
>
> > The unit in the second graph is requests per minute, which is inconsistent
> > since the first graph is requests per second. This also makes comparison
> > difficult. Also, it doesn't actually show the
> On 22 Sep 2016, at 13:38, Alex Gaynor wrote:
>
> If Django were a different framework, I'd probably think this was a
> reasonable idea. However, Django's ORM is _incredibly_ good at deterring SQL
> injection. In many many years of using and reviewing Django
> On 13 Sep 2016, at 09:28, Erik Cederstrand
> <erik+li...@cederstrand.dk> wrote:
>
> First of all, thanks for taking the time to actually do the measurements!
> It's insightful and very much appreciated.
>
> [...]300K requests in 10 minutes is 500 rps, but t
> On 13 Sep 2016, at 03:41, Robert Roskam wrote:
>
> Hey Chris,
>
> The goal of these tests is to see how channels performs with normal HTTP
> traffic under heavy load with a control. In order to compare accurately, I
> tried to eliminate variances as much as
I think this is better directed at a MySQL list. MySQL shouldn't crash, and
nothing I see indicates that this is a Django issue.
Of course, it's best if you can reproduce the error. Barring that, you'll get a
much more useful stack trace if you build MySQL with debugging symbols. A quick
look at
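As a sketch of what building with debugging symbols might look like, assuming a
CMake-based MySQL 5.x source tree (the flags shown are standard CMake and MySQL
build options; adjust paths and options to your setup):

```shell
# Configure a debug build: CMAKE_BUILD_TYPE=Debug keeps symbols, and
# MySQL's own WITH_DEBUG option enables extra internal checks.
cmake . -DCMAKE_BUILD_TYPE=Debug -DWITH_DEBUG=1
make -j"$(nproc)"
# The resulting mysqld resolves crash stack traces to function names
# instead of raw addresses.
```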
Hello Raony,
I'm sure I'm not aware of all the implications of changing the field length,
but the first question should be "how long is long enough?" When trying to
answer that, this Quora question comes to mind:
https://www.quora.com/Why-are-South-Indian-names-often-long
Kind regards,
Erik
Plus, it's 3 lines of code and faster to implement than to look up the
documentation:
class ReprFieldsMixIn:
    class Meta:
        repr_fields = ('bar', 'baz')

    def __repr__(self):
        return '<%s: %s>' % (self.__class__.__name__, ', '.join(
            '%s=%s' % (f, repr(getattr(self, f))) for f in self.Meta.repr_fields))
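A quick sketch of the mixin in use on a plain class (the mixin body is repeated
so the snippet runs on its own; Widget and its fields are illustrative names,
not from the thread):

```python
class ReprFieldsMixIn:
    """Builds __repr__ from the field names listed in Meta.repr_fields."""

    def __repr__(self):
        return '<%s: %s>' % (
            self.__class__.__name__,
            ', '.join('%s=%s' % (f, repr(getattr(self, f)))
                      for f in self.Meta.repr_fields))


class Widget(ReprFieldsMixIn):
    # Hypothetical class using the mixin.
    class Meta:
        repr_fields = ('bar', 'baz')

    def __init__(self, bar, baz):
        self.bar = bar
        self.baz = baz


print(repr(Widget(1, 'x')))  # <Widget: bar=1, baz='x'>
```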
> On 24 Apr 2016, at 20:58, Claude Paroz wrote:
>
> - I'm afraid this change may result in boilerplate as most custom user models
> will revert to Django's historical (and in my opinion sensible) username
> validation rules.
>
> That's a tough question to estimate.
> On 6 Apr 2016, at 13:42, Marc Tamlyn wrote:
>
> Does anyone (potentially from OS packaging worlds maybe) have a good reason
> NOT to have a dependency?
Here is a list off the top of my head. This is not necessarily an argument
against dependencies, just some things
> On 6 Apr 2016, at 07:29, Anssi Kääriäinen wrote:
>
> It is notable that if the number of items is a secret (say, you don't
> want to reveal how many sales items you have), just having information
> about sequential numbers is bad. In that case you should use UUID,
>
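To sketch the alternative being suggested: random UUIDs expose neither the row
count nor the insertion order. In Django this would typically be a
models.UUIDField(primary_key=True, default=uuid.uuid4); a plain-Python
illustration of the property:

```python
import uuid

# uuid4() values are random, so consecutive ids share no sequential
# relationship -- leaking one id reveals nothing about how many others exist.
a, b = uuid.uuid4(), uuid.uuid4()
print(a != b, a.version, b.version)  # True 4 4
```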
> On 20 Dec 2015, at 01:04, Cristiano Coelho wrote:
>
> About using a custom datetime field that strips microseconds, that won't work
> for raw queries I believe, not even .update statements as they ignore
> pre-save? As the stripping happens (or used to happen) at
> On 19 Dec 2015, at 16:01, Aymeric Augustin wrote:
>
> To be fair, this has a lot to do with MySQL’s lax approach to storing data.
> There are so many situations where it just throws data away happily that one
> can’t really expect to read back data
> On 19 Dec 2015, at 13:15, Cristiano Coelho wrote:
>
> Erik,
> I'm using MySQL 5.6.x and indeed it has microseconds support, but that's not
> the issue.
>
> The issue is that every datetime column created has no microseconds (since
> they were created with Django
> On 19 Dec 2015, at 07:52, Cristiano Coelho wrote:
>
> Hello,
>
> After Django 1.8, the MySQL backend no longer strips microseconds.
> This is giving me some issues when upgrading from 1.7 (I actually upgraded to
> 1.9 directly), since datetimes are not stored
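For context, the pre-1.8 behavior being described amounts to discarding
sub-second precision before values reach MySQL; a minimal sketch of that
stripping:

```python
from datetime import datetime

def strip_microseconds(dt):
    # What the old MySQL backend effectively did: drop sub-second precision
    # so the value matches what a microsecond-less column can store.
    return dt.replace(microsecond=0)

stored = strip_microseconds(datetime(2015, 12, 19, 7, 52, 3, 123456))
print(stored)  # 2015-12-19 07:52:03
```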
Hi devs,
My lengthy email to this list about improving performance of
prefetch_related() seems to have disappeared. Instead, I created a ticket
motivating my need for Prefetch() to be able to tell the prefetch logic to
trust the queryset provided by Prefetch() and not generate a huge and
Hi devs,
When prefetching related items for a queryset returning a large amount of
items, the generated SQL can be quite inefficient. Here's an example:
class Category(models.Model):
    type = models.PositiveIntegerField(db_index=True)

class Item(models.Model):
    category = models.ForeignKey(Category)