Peter Otten:
In general I think that if you want to promote a particular coding style you
should pick an example where you can demonstrate actual benefits.
The good thing is that Python 3 has only xrange (named range), so
this discussion will be mostly over ;-)
Bye,
bearophile
--
Mensanator:
Ever tried to iterate 403 septillion times?
The OP is talking about formulas, like:
X + Y * Z = W
Where X, Y, Z, W are distinct and in [1, 26], so you have C(26, 4)
combinations, which is way fewer than 26!:
binomial(26, 4)
14950
So this can be solved with a xcombinations() generator.
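A hedged sketch of that search in Python 3: since the formula assigns the four values to specific roles, itertools.permutations (rather than combinations) is one way to enumerate the distinct assignments; the variable names follow the post.

```python
from itertools import permutations

# Assign distinct values from 1..26 to X, Y, Z, W and keep the
# assignments satisfying X + Y * Z == W (illustrative sketch only).
solutions = [(x, y, z, w)
             for x, y, z, w in permutations(range(1, 27), 4)
             if x + y * z == w]
```

A generator expression in place of the list comprehension would give the lazy `xcombinations()`-style behavior the post mentions.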
John Krukoff:
One possibility for the performance difference, is that as I understand
it the psyco developer has moved on to working on pypy, and probably
isn't interested in keeping psyco updated and optimized for new python
syntax.
Somebody correct me if I'm wrong, but last I heard there's
jlist:
I think what makes more sense is to compare the code one most
typically writes. In my case, I always use range() and never use psyco.
If you don't use Python 3 and your loops can be long, then I suggest
you start using xrange a lot :-) (If you use Psyco you don't need
xrange).
alex23:
Honestly, performance benchmarks seem to be the dick size comparison
of programming languages.
I don't agree:
- benchmarks can show you which language to use for your purpose
(because there are many languages, and a scientist has to choose the
right tool for the job);
- it can show where a
jdh2358:
delaunay triangularization
[and more amazing things]
I'm impressed, it's growing very well, congratulations, I use it now
and then. I know people in University that use Python only/mostly
because of matplotlib.
Bye,
bearophile
--
http://mail.python.org/mailman/listinfo/python-list
David C. Ullrich:
Thanks. If I can get it installed and it works as advertised
this means I can finally (eventually) finish the process of
dumping MS Windows: the only reason I need it right now is for
the small number of Delphi programs I have for which straight
Python is really not
Using something like PyParsing is probably better, but if you don't
want to use it you may use something like this:
raw_data = """\
Field f29227: Ra=20:23:46.54 Dec=+67:30:00.0 MJD=53370.06797690 Frames 5 Set 1
Field f31448: Ra=20:24:58.13 Dec=+79:39:43.9 MJD=53370.06811620 Frames 5 Set 2
Field
Erik Max Francis:
If len(bytes) is large, you might want to use `xrange`, too. `range`
creates a list which is not really what you need.
That's right for Python, but Psyco uses normal loops in both cases,
you can time this code in the two situations:
def foo1(n):
count = 0
for i in
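The body of foo1 is cut off above; a hedged completion of the timing idea in Python 3 syntax (where range() behaves like the old xrange()) might look like this — the loop body is an assumption:

```python
import timeit

def foo1(n):
    # simple counting loop; the post claims Psyco compiled range()
    # and xrange() loops the same way under Python 2
    count = 0
    for i in range(n):
        count += 1
    return count

# time the loop both ways by swapping range/xrange under Python 2
print(timeit.timeit(lambda: foo1(10 ** 4), number=100))
```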
On Aug 7, 2:05 am, Jack [EMAIL PROTECTED] wrote:
I know one benchmark doesn't mean much but it's still disappointing to see
Python as one of the slowest languages in the test:
http://blog.dhananjaynene.com/2008/07/performance-comparison-c-java-p...
That Python code is bad, it contains range()
Paul McGuire:
This code creates a single dict for the input lines, keyed by id.
Each value contains elements labeled 'id', 'ra', and 'mjd'.
...
d = dict(
(rec.split()[1][:-1],
dict([('id',rec.split()[1][:-1])] +
[map(str.lower,f.split('='))
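The one-liner above is cut off; a hedged multi-step equivalent, using the sample data quoted earlier in the digest (the trailing Frames/Set columns are omitted here for brevity, and the structure is a reconstruction, not Paul McGuire's exact code):

```python
# Build a dict keyed by field id; each value is a dict of the
# lowercased key=value pairs on the line, plus the id itself.
raw_data = """\
Field f29227: Ra=20:23:46.54 Dec=+67:30:00.0 MJD=53370.06797690
Field f31448: Ra=20:24:58.13 Dec=+79:39:43.9 MJD=53370.06811620
"""

d = {}
for rec in raw_data.splitlines():
    parts = rec.split()
    rec_id = parts[1].rstrip(':')
    fields = dict(pair.lower().split('=') for pair in parts[2:])
    fields['id'] = rec_id
    d[rec_id] = fields
```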
Heiko Wundram:
how about changing the precious self. to .
imagine
self.update()
.update()
simple right?
I suggest you start using Ruby instead.
Bye,
bearophile
--
Simon Strobl:
I had a file bigrams.py with a content like below:
bigrams = {
    "djy": 75,
    "djz": 57,
    "djzoom": 165,
    "dk": 28893,
    "dk.au": 854,
    "dk.b.": 3668,
    ...
}
In another file I said:
from bigrams import bigrams
Probably there's a limit in the module size here. You can
Fred Mangusta:
Could I use the join function to reform the string?
You can write a function to split the words, for example taking into
account the punctuation too, etc.
And, regarding the casetest() function, what do you suggest to do?
Python strings have isupper, islower, istitle methods, they
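A quick interactive check of the case-testing methods just mentioned:

```python
s = "Hello World"
print(s.istitle())   # every word starts with a single uppercase letter
print(s.isupper())   # not all uppercase
print(s.swapcase())  # flips the case of every letter
```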
Bruce Frederiksen:
Your solution is a bit different from what I was thinking about (I was
thinking about a generator function, with yield), but it works.
This line:
return itertools.chain(
itertools.imap(lambda ys: x + ys, ncsub(xs, s + p1)),
Alexnb:
How can I test if the list item is empty without getting that exception?
In Python such a list cell isn't empty, it's absent. So you can use
len(somelist) to see how long the list is before accessing its
items. Often you can iterate on the list with a for, so you don't need
to care of
fprintf:
and yet they all end at the point where a person has enough
knowledge of the syntax, but not really enough to do anything.
A programming language is a tool to solve problems, so first of all:
do you have problems to solve? You can create some visualizations,
some program with GUI, some
Kay Schluehr:
[Yes, I have too much time...]
Thank you for your code. If you allow me to, I may put some code
derived from yours back into the rosettacode.org site.
Here is an evil imperative, non-recursive generator:
I like imperative, non-recursive code :-)
If you count the number of items
boblatest:
I have a long list of memory-heavy objects that I would like to access
in differently sorted order. Let's say I'd like to have lists called
by_date or by_size that I can use to access the objects in the
specified order.
Just create a new list with a different sorting order, for
This post is not about practical stuff, so if you have little time,
you may ignore it.
This is a task of the rosettacode.org site:
http://www.rosettacode.org/wiki/Non_Continuous_Subsequences
A subsequence contains some subset of the elements of this sequence,
in the same order. A continuous
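For reference, a naive hypothetical way to state the task in Python 3 (this is my sketch, not the thread's generator): a subsequence qualifies as non-continuous when its chosen indices contain at least one gap.

```python
from itertools import combinations

def ncsub(seq):
    # yield every subsequence whose index set is not a run of
    # consecutive positions (i.e. has at least one gap)
    n = len(seq)
    for r in range(2, n + 1):
        for idx in combinations(range(n), r):
            if any(b - a > 1 for a, b in zip(idx, idx[1:])):
                yield [seq[i] for i in idx]
```

For [1, 2, 3] the only non-continuous subsequence is [1, 3].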
Brett g Porter:
Fredrik Lundh has written a very clear explanation of this
at http://effbot.org/pyfaq/why-must-self-be-used-explicitly-in-method-de...
Today lots of people know that Ruby exists, so such a FAQ explanation
must explain why Python doesn't use a shorter syntax like, for example,
@foo
John Nagle:
Personally, I think the Shed Skin approach
is more promising.
ShedSkin will probably have scaling problems: as the program size
grows it may need too much time to infer all the types. The author has
a strict policy of refusing any kind of type annotation, and this
makes it impractical.
Something like this may be fast enough:
from itertools import izip
xpartition = lambda seq, n=2: izip(*(iter(seq),) * n)
xprimes = (x for x in xrange(2, 100) if all(x % i for i in xrange(2, x)))
list(xpartition(xprimes))
[(2, 3), (5, 7), (11, 13), (17, 19), (23, 29), (31, 37), (41, 43),
(47,
kj:
OK, I guess that in Python the only way to do what I want to do
is with objects...
There are other ways, like assigning the value out of the function,
because Python functions too are objects:
def iamslow():
    return 100

def foo(x):
    return x + foo.y

foo.y = iamslow()  # slow
Suresh Pillai:
Or 4, since the order of my nodes doesn't matter: swap the node to be
deleted with the last node in the list and then remove the last node of
the list. This is the fastest to date, if using native structures, for
low number nodes being deleted per cycle (def if only deleting
Once in a while I feel free to write about less defined things, saying
mostly wrong things. This post is mostly chat, if you aren't
interested please ignore it.
Python is fit enough for newbie programmers, but some of its
characteristics can confuse them still, like the variables referenced
by
Kay Schluehr:
Sure, use a fixed point combinator. I've just added this recipe:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/576366
Does it work?
Bye,
bearophile
--
Found on Reddit; it's for ECMA(Java)Script, but something similar
may be useful for Python too:
http://jsclass.jcoglan.com/methodchain.html
http://blog.jcoglan.com/2008/07/16/where-did-all-my-code-go-using-ojay-chains-to-express-yourself-clearly/
Bye,
bearophile
--
Marc 'BlackJack' Rintsch:
What's called `MethodChain` there seems to be function composition in
functional languages. Maybe `functools` could grow a `compose()` function.
To me it looks like a rather more refined thing: it's an object, it
has some special methods, etc. I think it's not too much
Peter Otten:
PS: Take these numbers with a grain of salt, they vary a lot between runs.
Another possibility :-)
from itertools import imap
id(x) in imap(id, items)
If you want efficiency you should use a dictionary instead of the list anyway:
I agree, but sometimes you have few items to look
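Peter Otten's identity-membership one-liner (with Python 3's map in place of itertools.imap) behaves like this; the sample lists are illustrative:

```python
x = [1]
items = [x, [2], [3]]
y = [1]  # equal to x, but a distinct object

same_object = id(x) in map(id, items)   # identity-based: x itself is in items
equal_only = id(y) in map(id, items)    # y is only equal, not the same object
value_match = y in items                # plain `in` compares with ==
```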
On Jul 17, 9:50 am, Alexnb:
how can I test to see if the first char of a string is ?
I suggest you try the interactive shell:
>>> "hello"[0]
'h'
>>> "hello"[0] == ""
False
>>> "hello"[0] == "h"
True
>>> "hello".startswith("h")
True
Bye,
bearophile
--
On Jul 16, 4:14 pm, Fredrik Lundh [EMAIL PROTECTED] wrote:
Beema shafreen wrote:
How do I write a regular expression for this kind of sequences
gi|158028609|gb|ABW08583.1| CG8385-PF, isoform F [Drosophila melanogaster]
MGNVFANLFKGLFGKKEMRILMVGLDAAGKTTILYKLKLGEIVTTIPTIGFNVETVE
Sebastian:
I've heard that a 'str' object is immutable. But is there *any* way to
modify a string's internal value?
No, but you can use other kinds of things:
>>> s = "hello"
>>> sl = list(s)
>>> sl[1] = "a"
>>> sl
['h', 'a', 'l', 'l', 'o']
>>> "".join(sl)
'hallo'
>>> from array import array
>>> sa = array("c", s)
>>> sa
Larry Bates:
The only case where it would be faster would be if most of the keys were NOT
in
the dictionary (rather odd use case). Otherwise I believe you will find the
first way quicker as the exceptions are infrequent.
I have written a small benchmark:
from random import shuffle
def
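The benchmark itself is cut off above; a minimal sketch of what such a comparison could look like (function names and data sizes are made up) — timing `k in d` checks against try/except lookups over a mix of hits and misses:

```python
from random import shuffle
from timeit import timeit

def lookup_with_check(d, keys):
    hits = 0
    for k in keys:
        if k in d:        # explicit membership test
            hits += d[k]
    return hits

def lookup_with_except(d, keys):
    hits = 0
    for k in keys:
        try:              # exception only on missing keys
            hits += d[k]
        except KeyError:
            pass
    return hits

d = {i: i for i in range(1000)}
keys = list(range(2000))  # half of these keys miss
shuffle(keys)

print(timeit(lambda: lookup_with_check(d, keys), number=100))
print(timeit(lambda: lookup_with_except(d, keys), number=100))
```

With many misses, the exception path pays the cost of raising and catching KeyError on every miss, which is the point Larry Bates makes.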
bukzor:
You need to use two dictionaries. Here's a class that someone's
written that wraps it up into a single dict-like object for you:
http://www.faqts.com/knowledge_base/view.phtml/aid/4376
It contains code like:
try:
del self.data[item]
except KeyError:
pass
Exceptions are useful
Giampaolo Rodola':
Even if I avoid to re-heapify() it seems that the first element
returned by heappop() is always the smaller one.
Yes, the heappop() function keeps the heap invariant, so it will keep
giving you the smallest item.
I'd like to understand if there are cases where
deleting or
James Fassett:
# the first Pythonic attempt using comprehensions
result_list = [x[0] for x in tuple_list]
# the final functional way
[result_list, _] = zip(*tuple_list)
I really like how Python allows me to do what I feel is the most
natural solution (for a seasoned procedural programmer)
Following links from this thread:
http://groups.google.com/group/comp.lang.python/browse_thread/thread/179e1a45485ab36a#
I have found this perfect hash (minimal too) implementation:
http://burtleburtle.net/bob/hash/perfect.html
I have already translated part of it to D, and it seems to work well
Denis Kasak:
spam = ['a', 'n', 'n', 'a']
eggs = spam[:]
if spam.reverse() == eggs:
    print "Palindrome"
An alternative version:
>>> txt = "anna"
>>> txt == txt[::-1]
True
>>> txt = "annabella"
>>> txt == txt[::-1]
False
Bye,
bearophile
--
Paddy:
You could argue that if the costly RE features are not used then maybe
simpler, faster algorithms should be automatically swapped in but
Many Python implementations contain a TCL interpreter. TCL REs may be
better than Python ones, so it can be interesting to benchmark the
same RE
Cédric Lucantis:
PAT = re.compile('^[ ]*(public|protected|private)[ ]+([a-zA-Z0-9_]+)[ ]+([a-zA-Z0-9_]+)[ ]+\((.*)\).*$')
...
It might be hard to read but will avoid a lot of obscure parsing code.
You can use the VERBOSE mode, to add comments and split that RE into
some lines.
I think the
Terry Reedy:
I believe
wordlist = open('words.txt','r').read().split('\n')
should give you the list in Python.
This looks better (but it keeps the newlines too. Untested. Python
2.x):
words = open("words.txt").readlines()
for i, word in enumerate(words):
    if word.startswith("b"):
        print
[EMAIL PROTECTED]:
I believe Python 3k will (when out of beta) will have a speed
similar to what it has currently in 2.5, possibly with speed ups
in some locations.
Python 3 uses unicode strings and multiprecision integers by default,
so a little slowdown is possible.
Michele Simionato:
It
Kris Kennaway:
I am trying to parse a bit-stream file format (bzip2) that does not have
byte-aligned record boundaries, so I need to do efficient matching of
bit substrings at arbitrary bit offsets.
Is there a package that can do this?
You may take a look at Hachoir or some other modules:
eliben:
Python's pack/unpack don't have the binary format for some reason, so
custom solutions have to be developed. One suggested in the ASPN
cookbook is: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/111286
However, it is very general and thus inefficient.
Try mine, it may be fast
Kris Kennaway:
Unfortunately I didnt find anything else useful here yet :(
I see, I'm sorry, I have found hachoir quite nice in the past. Maybe
there's no really efficient way to do it with Python, but you can
create a compiled extension, so you can see if it's fast enough for
your purposes.
To
Michele Simionato:
Also consider the famous Clinger's maxim
“Programming languages should be designed not by piling feature
on top of feature, but by removing the weaknesses and restrictions
that make additional features appear necessary.”
I'm relaxed, don't worry :-)
I know that maxim, but
Knut Saua Mathiesen:
Any help? :p
My faster suggestion is to try ShedSkin, it may help you produce a
fast enough extension.
If ShedSkin doesn't compile it, its author (Mark) may be quite willing
to help.
bye,
bearophile
--
[EMAIL PROTECTED]:
My non-directed graph will have about 63,000 nodes
and and probably close to 500,000 edges.
That's large, but today not very large anymore. Today very large
graphs probably have more than millions of nodes...
You have to try, but I think any Python graph lib may be fit for
dbpoko...:
Which should be 12 bytes on a 32-bit machine. I thought the space for
growth factor for dicts was about 12% but it is really 100%.
(Please ignore the trailing .2 in my number in my last post, such
precision is silly).
My memory value comes from experiments, I have created a little
Duncan Booth:
What do you get if you change the output to exclude the integers from
the memory calculation so you are only looking at the dictionary
elements themselves? e.g.
The results:
318512 (kbytes)
712124 (kbytes)
20.1529344 (bytes)
Bye,
bearophile
--
Martin v. L.:
However, I think the PEP (author) is misguided in assuming that
making byindex() a method of odict, you get better performance than
directly doing .items()[n] - which, as you say, you won't.
In Python 2.5 .items()[n] creates a whole list, and then takes one
item of such list.
An
dbpoko...:
Why keep the normal dict operations at the same speed? There is a
substantial cost this entails.
I presume now we can create a list of possible odict usages, because I
think that despite everyone using it for different purposes, we may
find some main groups of its usage. I use odicts
Martin v. L.:
http://wiki.python.org/moin/TimeComplexity
Thank you. I think that's not a list of guarantees, but a list of
how things are now in CPython.
If so, what's the advantage of using that method over d.items[n]?
I think I have lost the thread here, sorry. So I explain again what I
Kirk Strauser:
Hint: recursion. Your general algorithm will be something like:
Another solution is to use a better (different) language that has
built-in pattern matching, or that allows you to create it.
Bye,
bearophile
--
Oh, very good, better late than never.
This is my pure Python version, it performs get, set and del
operations too in O(1):
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/498195
Working with C such data structure becomes much faster, because it can
use true pointers.
Then another data
Martin v. L.:
For this API, I think it's important to make some performance guarantees.
I'd appreciate them for all Python collections :-)
It seems fairly difficult to make byindex O(1), and
simultaneously also make insertion/deletion better than O(n).
It may be possible to make both of
Nader:
d = {'a': 1, 'b': 3, 'c': 2, 'd': 3, 'e': 1, 'f': 4}
I want something like:
d.keys(where their values are the same)
That's magic.
With this statement I can get two lists for this example:
l1= ['a','e']
l2=['b','d']
Would somebody tell me how I can do it?
You can
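The reply above is cut off; one way it might have continued is by inverting the dict, grouping keys by shared value (defaultdict is my choice here, not necessarily what the author had in mind):

```python
from collections import defaultdict

d = {'a': 1, 'b': 3, 'c': 2, 'd': 3, 'e': 1, 'f': 4}

# invert the mapping: value -> list of keys that share it
groups = defaultdict(list)
for key, value in d.items():
    groups[value].append(key)

# keep only groups where two or more keys share a value
same_valued = [sorted(keys) for keys in groups.values() if len(keys) > 1]
```

This yields the ['a', 'e'] and ['b', 'd'] groups from the example.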
Andrea Gavana:
Maybe. But I remember a nice quote made in the past by Roger Binns (4
years ago):
The other thing I failed to mention is that the wxPython API isn't very
Pythonic. (This doesn't matter to people like me who are used to GUI
programming - the wxPython API is very much in the
kj:
I have some functions
that require a very long docstring to document, and somehow I find
it a bit disconcerting to stick a few screenfuls of text between
the top line of a function definition and its body.
You may put the main function(s) documentation in the docstring of the
module, and
[EMAIL PROTECTED]:
Do you mean something like this? (notice the many formatting
differences, use a formatting similar to this one in your code)
coords = []
for i in xrange(1, 5):
    for j in xrange(1, 5):
        for k in xrange(1, 2):
            coords.append((i, j, k))
coords *= 10
print
Dennis Lee Bieber, the ghost:
I'd have to wonder why so many recursive calls?
Why not? Maybe the algorithm is written in a recursive style. A
language is good if it allows you to use that style too.
On modern CPUs five levels don't seem like many.
Bye,
bearophile
--
I V:
You might instead want to
wrap the lambdas in an object that will do the comparison you want:
This looks very nice, I haven't tried it yet, but if it works well
then it may deserve to be stored in the cookbook, or better, it may
become the built-in behavior of hashing functions.
Bye,
This may have some bugs left, but it looks a bit better:
from inspect import getargspec

class HashableFunction(object):
    """Class that can be used to wrap functions, to allow their hashing,
    for example to create a set of unique functions."""

func_strings = ['x', 'x+1', 'x+2', 'x']
Ben Finney:
In Python, the philosophy "we're all consenting adults here" applies.
Michael Foord:
They will use whatever they find, whether it is the best way to
achieve a goal or not. Once they start using it they will expect us to
maintain it - and us telling them it wasn't intended to be used
Helmut Jarausch:
I'd ask in comp.compression where the specialists are listening and who are
very helpful.
Asking in comp.compression is a good starting point.
My suggestions (sorry if they look a bit unsorted): it depends on what
language you want to use, how much you want to compress the
bearophile:
So you need to store only this 11 byte long string to be able to
decompress it.
Note that maybe there is a header, that may contain changing things,
like the length of the compressed text, etc.
Bye,
bearophile
--
John Salerno:
What does everyone think about this?
Example 2 builds a list that is then thrown away: just a waste of
memory (and time).
Bye,
bearophile
--
dave, few general comments to your code:
- Instead of using a comment that explains the meaning of a function,
add such things into docstrings.
- Your names can be improved, instead of f you can use file_name or
something like that, instead of convert_file you can use a name that
denotes that the
dave:
Can you have doctests on random functions?
Yes, you can add doctests to methods, functions, classes, module
docstrings, and in external text files.
Bye,
bearophile
--
This may be interesting for the Python developers of the random module:
the SIMD-oriented Fast Mersenne Twister (SFMT) is twice as fast as the
Mersenne Twister:
http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/
One function may be useful to generate integers (randint, randrange,
choice, shuffle, etc),
David:
What do you mean by best possible? Most efficient? Most readable?
What's a good wine? It's not easy to define what's good/best. In
such a context it's a complex balance of correct, short, fast and
readable (and more, because you need to define a context; this context
refers to Psyco too).
George Sakkis:
A faster algorithm is to create a 'key' for each word, defined as the
tuple of ord differences (modulo 26) of consecutive characters.
Very nice solution, it uses the same strategy used to find anagrams,
where the keys are:
"".join(sorted(word))
Such a general strategy to look for a
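A hedged sketch of both keys, the anagram key and the shift-invariant key George Sakkis describes (the function names are mine): words that are Caesar shifts of each other have identical consecutive-difference tuples mod 26.

```python
def anagram_key(word):
    # anagrams share the same sorted letters
    return "".join(sorted(word))

def shift_key(word):
    # tuple of differences (mod 26) of consecutive characters;
    # invariant under a uniform alphabet shift
    return tuple((ord(b) - ord(a)) % 26 for a, b in zip(word, word[1:]))
```

For example "abc" shifted by 23 letters becomes "xyz", and both map to the key (1, 1).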
Sometimes different languages suggests me ways to cross-pollinate
them.
(Note: probably someone has already suggested (and even implemented)
the following ideas, but I'd like to know why they aren't fit for
Python).
Python generators now allow me to package some code patterns common in
my code,
In some algorithms a sentinel value may be useful, so for Python 3.x
sys.maxint may be replaced by an improvement of the following infinite
and neginfinite singleton objects:
class Infinite(object):
    def __repr__(self): return "infinite"
    def __cmp__(self, other):
        if other is
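The snippet is cut off above; a hedged completion of the idea in Python 3 style (where __cmp__ no longer exists) might use functools.total_ordering, though this is my reconstruction, not the original code:

```python
from functools import total_ordering

@total_ordering
class Infinite(object):
    # sentinel that compares greater than everything except itself
    def __repr__(self):
        return "infinite"
    def __eq__(self, other):
        return other is self
    def __gt__(self, other):
        return other is not self

infinite = Infinite()
```

In modern code `float("inf")` or `math.inf` usually fills this sentinel role for numeric comparisons.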
Alexy: But in Python it's very slow...
I'm the first one to say that CPython is slow, but almost any language
is slow if you use wrong algorithms like you do.
There are many ways to solve your problem efficiently; one of the
simpler ones is to not modify the original list:
Jochen Schulz:
Could you please send me an email with an existing From: address? I
tried to reply to you but apparently your From: is forged.
Sorry for the delay, I'll send you an email.
In the meantime I have created a fast BK-tree implementation for
Psyco, on my PC it's about 26+ times faster
Sanhita Mallick: where can I find more in depth info about how to
express graphs in python, and how to use them in a code?
You can look here:
https://networkx.lanl.gov/wiki
My version:
http://sourceforge.net/projects/pynetwork/
Bye,
bearophile
--
ShenLei:
t = self.a.b
t.c = ...
t.d = ...
.vs.
self.a.b.c = ...
self.a.b.d = ...
which one is more effective? Since each dot invokes a hash table lookup, it
may be time consuming. If the compiler can do expression folding, then no
manual folding is needed.
Removing dots creating a
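The idea being discussed, binding self.a.b to a local once instead of repeating the attribute lookups, can be sketched like this (the Node class and the timing harness are illustrative, Python 3 syntax):

```python
from timeit import timeit

class Node:          # hypothetical stand-in objects
    pass

a = Node()
a.b = Node()

def with_dots():
    a.b.c = 1        # each dot is a separate attribute lookup
    a.b.d = 2

def with_alias():
    t = a.b          # one lookup, reused through a fast local
    t.c = 1
    t.d = 2

print(timeit(with_dots, number=100000))
print(timeit(with_alias, number=100000))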
Jochen Schulz:
This solution may be more than you actually need, but I implemented two
metric space indexes which would do what you want (and I wanted to plug
it anyway :)):
Please plug such good things. It seems the Python community is an
endless source of interesting modules I didn't know
Few more notes on the code:
You may use the @property in such situations (or you may just use
attributes, dropping the property). Note that Python doesn't inline
functions calls like Java HotSpot does quite often.
def __children(self):
raise NotImplementedError()
children =
More bits from your code:
neighbours = list()
is the same as:
neighbours = []
If you have a recent enough version of Python you can use:
candidate_is_neighbour = any(distance < n[1] for n in neighbours)
Instead of:
candidate_is_neighbour = bool([1 for n in neighbours if distance < n[1]])
It's shorter
Dennis Benzinger:
You could use the Aho-Corasick algorithm:
http://en.wikipedia.org/wiki/Aho-Corasick_algorithm
I don't know if there's a Python implementation yet.
http://hkn.eecs.berkeley.edu/~dyoo/python/ahocorasick/
Bye,
bearophile
--
cokofree...:
def s(c): return [] if c == [] else s([_ for _ in c[1:] if _ < c[0]]) + [c[0]] + s([_ for _ in c[1:] if _ >= c[0]])
That QuickSort can be written as a lambda too:
s = lambda l: [] if l == [] else s([x for x in l[1:] if x < l[0]]) + [l[0]] + s([x for x in l[1:] if x >= l[0]])
Bye,
bearophile
--
hdante:
it's already time that programmer editors
have input methods advanced enough for generating this:
if x ≠ 0:
∀y ∈ s:
if y ≥ 0: f1(y)
else: f2(y)
Take a look at the Fortress language, by Sun. A free (slow) interpreter is
already available.
(Mathematica too allows you
Duncan Booth:
Both this and raj's suggestion create a single sorted list. Your suggestion
creates two lists: the unsorted one and a separate sorted one. In most
cases the difference is probably insignificant, but if you have a *lot* of
values it might make a difference.
The good thing of
On Mar 27, 3:54 pm, [EMAIL PROTECTED] wrote:
Writing Tkinter menu code used to be rather tedious, uninspiring work.
I figured that I could delegate the job to a program:
I did develop a proggy that takes the following as input, it's part of
my agui project, you can use it as an idea to improve
Diez B. Roggisch:
the author says that the approach is flawed, so at *some*
point it will be discontinued.
Can't Psyco be improved, so it can compile things like:
nums = (i for i in xrange(20) if i % 2)
print sum(nums)
I think the current Psyco runs slower than Python with generators/
[EMAIL PROTECTED]:
Mine does less. But you tell it less to do it.
Some of those fields are optional :-)
Bye,
bearophile
--
dmitrey:
As I have mentioned, I don't know final length of the list, but
usually I know a good approximation, for example 400.
There is no reserve()-like method, but this is a fast enough operation
you can do at the beginning:
l = [None] * 400
It may speed up your code, but the final resizing
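A hedged sketch comparing the two growth strategies (the function names are illustrative): appending lets the list grow by repeated reallocation, while preallocating [None] * n fills slots in place.

```python
def append_grow(n=400):
    result = []
    for i in range(n):
        result.append(i)   # list reallocates as it grows
    return result

def preallocated(n=400):
    result = [None] * n    # one allocation up front
    for i in range(n):
        result[i] = i      # fill the existing slots
    return result
```

If the estimate of 400 is only approximate, a final `del result[count:]` (or slicing) trims the unused tail, which is the resizing the reply alludes to.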
Sean Davis: Java has a BitSet class that keeps this kind of thing
pretty clean and high-level, but I haven't seen anything like it for
python.
If you look around you can usually find Python code able to do most of
the things you want, like (you can modify this code to add the boolean
operations):
dmitrey:
As I have mentioned, I don't know final length of the list, but
usually I know a good approximation, for example 400.
You may want to use collections.deque too, it doesn't produce a Python
list, but it's quite fast in appending (it's a linked list of small
arrays).
Bye,
bearophile
--
Michał Bentkowski:
Why does python create a reference here, not just copy the variable?
I think it's to increase performance, in memory use and running time
(and to have a very uniform way of managing objects).
Bye,
bearophile
--
Kelly Greer:
What is the best way to filter a Python list to its unique members?
If Python is batteries included, then an industrial-strength
unique() seems one of the most requested 'batteries' that's not
included :-) I feel that it's coming in Python 2.6/3.x. In the
meantime:
Paul Rubin: It's at least pretty good. It's not ideal, but nothing
ever is.
I agree. At the moment Python is among the best languages for that,
but it's not perfect for that purpose. I think newbies may need a bit
more structured language, where you have to put things in a certain
order to have a
Yatsek:
Is there simple way to copy a into b (like a[:]) with all copies of
all objects going as deep as possible?
If you only want to copy up to level-2, then you can do something
like:
cloned = [subl[:] for subl in somelist]
Or sometimes safer:
cloned = [list(subl) for subl in somelist]
If
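The reply above is cut off; a small sketch of the copy depths it describes, with hypothetical data (top-level slice, level-2 comprehension, and copy.deepcopy for arbitrary depth):

```python
import copy

a = [[1, 2], [3, 4]]
shallow = a[:]                   # top level copied, inner lists shared
level2 = [sub[:] for sub in a]   # inner lists copied too (two levels)
deep = copy.deepcopy(a)          # copies as deep as possible

level2[0].append(9)              # does not touch a
deep[1].append(9)                # does not touch a
```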
ernesto:
best = list(test)
may be faster than:
best = [x for x in test]
Bye,
bearophile
--
bearophile:
A more computer-friendly (and Pythonic) syntax may be ('are' is a keyword):
Sorry for causing confusion, I was just thinking aloud. English isn't
my first language, and sometimes I slip a bit. Replace that with:
A more computer-friendly (and Pythonic) syntax may be ('are' is meant
[EMAIL PROTECTED]:
a = filter(lambda b: b != None, a)
With None it's better to use is / is not instead of == / !=
Bye,
bearophile
--
Ricky Zhou:
Look at this line:
if 'one' and 'two' in f:
Very cute, it's the first time I've seen a bug like this. I think it's
not a common enough pattern to justify a language change, but a
somewhat smarter computer language may be able to catch that too ;-)
(it's not easy to tell the two meanings
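The two readings of the condition can be demonstrated (f is a made-up sample): Python parses it as `'one' and ('two' in f)`, and the non-empty string 'one' is always truthy.

```python
f = ["two"]

# parsed as: 'one' and ('two' in f)  ->  True whenever 'two' is in f,
# regardless of whether 'one' is
buggy = bool('one' and 'two' in f)

# what the author presumably meant
correct = ('one' in f) and ('two' in f)
```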