#3631: the built-in truncatewords filter can't deal with chinese
-------------------------+--------------------------------------------------
   Reporter:  anonymous  |                Owner:  jacob        
     Status:  closed     |            Component:  Uncategorized
    Version:  SVN        |           Resolution:  wontfix      
   Keywords:             |                Stage:  Unreviewed   
  Has_patch:  0          |           Needs_docs:  0            
Needs_tests:  0          |   Needs_better_patch:  0            
-------------------------+--------------------------------------------------
Changes (by SmileyChris):

  * status:  new => closed
  * needs_better_patch:  => 0
  * resolution:  => wontfix
  * needs_tests:  => 0
  * needs_docs:  => 0

Comment:

 I have zero knowledge of Chinese, so I ask: does Chinese use normal spaces
 between "words"? The filter uses `len(value.split())` to count the words,
 and if the `.split()` isn't working, the problem is obvious.

 Since that is all the built-in filter does, I don't think there's anything
 we can do to make it work for special language cases. Better to write your
 own filter that works (and perhaps submit it as an enhancement).
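 To illustrate: whitespace splitting treats an entire Chinese sentence as a
 single "word", so `truncatewords` never truncates it. A minimal sketch of
 the custom-filter workaround suggested above follows; `truncatechars_cjk`
 is a hypothetical name, it truncates by characters rather than words, and
 the `@register.filter` boilerplate for wiring it into a Django template
 library is omitted here.

 ```python
 # -*- coding: utf-8 -*-

 def truncatechars_cjk(value, limit):
     """Hypothetical filter: truncate by character count instead of
     whitespace-delimited words, since Chinese text has no spaces."""
     if len(value) <= limit:
         return value
     return value[:limit] + "..."

 # Whitespace splitting sees the whole sentence as one "word",
 # so truncatewords' word count is always 1 for such input:
 print(len("这是一个句子".split()))           # 1
 print(truncatechars_cjk("这是一个句子", 4))  # 这是一个...
 ```

 A character-based cut is crude (it can split mid-phrase), but without a
 proper word-segmentation library it is about the best a simple template
 filter can do for Chinese.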

-- 
Ticket URL: <http://code.djangoproject.com/ticket/3631#comment:1>
Django Code <http://code.djangoproject.com/>
The web framework for perfectionists with deadlines