Hi all,
anybody can point me to a description of how the default comparison of
list objects (or other iterables) works?
Apparently l1 < l2 is equivalent to all( x < y for x, y in
zip(l1, l2) ), as is shown in the following tests, but I can't find
it described anywhere:
[1,2,3] < [1,3,2]
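For anyone checking this in an interpreter: lexicographic comparison and the zip/all formulation agree on examples like the one above, but they are not equivalent in general. Once a smaller element is found, later pairs are ignored, and unequal lengths break ties. A quick illustrative sketch (the lists here are made up, not from the post):

```python
# Lexicographic comparison: the first unequal pair decides.
assert [1, 2, 3] < [1, 3, 2]    # decided at index 1, since 2 < 3

# Later pairs are never consulted once a decision is made:
assert [1, 9] < [2, 0]          # 1 < 2, so the 9 vs 0 pair is ignored

# ...but the all(...) formulation disagrees on that case:
assert not all(x < y for x, y in zip([1, 9], [2, 0]))

# When one list is a prefix of the other, the shorter one is smaller:
assert [1, 2] < [1, 2, 3]
```

So the zip/all expression only matches true lexicographic ordering when every pair happens to compare the same way.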
On Mon, 16 Aug 2010 13:46:07 +0300, Francesco Bochicchio bieff...@gmail.com
wrote:
anybody can point me to a description of how the default comparison of
list objects (or other iterables) works?
Sequences of the same type are compared using lexicographical ordering:
Francesco Bochicchio wrote:
Hi all,
anybody can point me to a description of how the default comparison of
list objects (or other iterables) works?
Apparently l1 < l2 is equivalent to all( x < y for x, y in
zip(l1, l2) ), as is shown in the following tests, but I can't find
it
I would like to know if it is possible, and how to do this with Python:
I want to design a function to compare lists and return True only if
both lists are equal considering memory location of the list.
I suppose it would be the equivalent of comparing 2 pointers in C++.
Let's call this function
Loic wrote:
I would like to know if it is possible, and how to do this with Python:
I want to design a function to compare lists and return True only if
both lists are equal considering memory location of the list.
I suppose it would be the equivalent of comparing 2 pointers in C++.
Let's
Loic [EMAIL PROTECTED] writes:
I want to design a function to compare lists and return True only if
both lists are equal considering memory location of the list.
I suppose it would be the equivalent of comparing 2 pointers in C++.
Use the is keyword.
print (l1 is l2)
print (l0 is l2)
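To flesh out the `is` suggestion (the names l0, l1 and l2 below are hypothetical stand-ins, since the original bindings aren't shown in the thread):

```python
l0 = [1, 2, 3]
l1 = [1, 2, 3]   # equal contents, but a distinct object
l2 = l1          # a second name bound to the very same object

print(l1 == l0)          # True  -- same contents
print(l1 is l0)          # False -- different objects
print(l1 is l2)          # True  -- same object, like comparing pointers in C++
print(id(l1) == id(l2))  # True  -- id() exposes the "memory location"
```

`is` compares object identity, never contents, which is exactly the pointer-comparison semantics the poster asked for.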
Christian Stapfer [EMAIL PROTECTED] wrote:
This is why we would like to have a way of (roughly)
estimating the reasonableness of the outlines of a
program's design in armchair fashion - i.e. without
having to write any code and/or test harness.
And we would also like to consume vast amounts
Alex Martelli [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer [EMAIL PROTECTED] wrote:
This is why we would like to have a way of (roughly)
estimating the reasonableness of the outlines of a
program's design in armchair fashion - i.e. without
having to write any
Christian Stapfer [EMAIL PROTECTED] wrote:
Alex Martelli [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer [EMAIL PROTECTED] wrote:
This is why we would like to have a way of (roughly)
estimating the reasonableness of the outlines of a
program's design in
[EMAIL PROTECTED] (Alex Martelli) writes:
implementation of the components one's considering! Rough ideas of
*EXPECTED* run-times (big-Theta) for various subcomponents one is
sketching are *MUCH* more interesting and important than asymptotic
worst-case for amounts of input tending to
Alex Martelli [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer [EMAIL PROTECTED] wrote:
Alex Martelli [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer [EMAIL PROTECTED] wrote:
This is why we would like to have a way of (roughly)
Christian Stapfer wrote:
This discussion begins to sound like the recurring
arguments one hears between theoretical and
experimental physicists. Experimentalists tend
to overrate the importance of experimental data
(setting up a useful experiment, how to interpret
the experimental data one
Ron Adam [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer wrote:
This discussion begins to sound like the recurring
arguments one hears between theoretical and
experimental physicists. Experimentalists tend
to overrate the importance of experimental data
(setting
Ron Adam [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer wrote:
This discussion begins to sound like the recurring
arguments one hears between theoretical and
experimental physicists. Experimentalists tend
to overrate the importance of experimental data
(setting
Christian Stapfer wrote:
As to the value of complexity theory for creativity
in programming (even though you seem to believe that
a theoretical bent of mind can only serve to stifle
creativity), the story of the discovery of an efficient
string searching algorithm by D.E.Knuth provides an
On Sun, 16 Oct 2005 15:16:39 +0200, Christian Stapfer wrote:
Come to think of an experience that I shared
with a student who was one of those highly
creative experimentalists you seem to have
in mind. He had just bought a new PC and
wanted to check how fast its floating point
unit was as
Christian Stapfer wrote:
Ron Adam [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer wrote:
This discussion begins to sound like the recurring
arguments one hears between theoretical and
experimental physicists. Experimentalists tend
to overrate the importance of
Fredrik Lundh [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer wrote:
As to the value of complexity theory for creativity
in programming (even though you seem to believe that
a theoretical bent of mind can only serve to stifle
creativity), the story of the
Christian Stapfer wrote:
It turned out that the VAX compiler had been
clever enough to hoist his simple-minded test
code out of the driving loop. In fact, our VAX
calculated the body of the loop only *once*
and thus *immediately* announced that it had finished
the whole test - the
Ron Adam [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer wrote:
Ron Adam [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer wrote:
This discussion begins to sound like the recurring
arguments one hears between theoretical and
experimental
On Sun, 16 Oct 2005 19:42:11 +0200, Christian Stapfer wrote:
Pauli's prediction of
the existence of the neutrino is another. It took
experimentalists a great deal of time and patience
(about 20 years, I am told) until they could finally
muster something amounting to experimental proof
of
Steven D'Aprano [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
On Sun, 16 Oct 2005 15:16:39 +0200, Christian Stapfer wrote:
Come to think of an experience that I shared
with a student who was one of those highly
creative experimentalists you seem to have
in mind. He had just
Steven D'Aprano [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
On Sun, 16 Oct 2005 19:42:11 +0200, Christian Stapfer wrote:
Pauli's prediction of
the existence of the neutrino is another. It took
experimentalists a great deal of time and patience
(about 20 years, I am told) until
Christian Stapfer wrote:
Ron Adam [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer wrote:
Ron Adam [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer wrote:
This discussion begins to sound like the recurring
arguments one hears between
Steven D'Aprano [EMAIL PROTECTED] wrote:
On Sun, 16 Oct 2005 15:16:39 +0200, Christian Stapfer wrote:
It turned out that the VAX compiler had been
clever enough to hoist his simple-minded test
code out of the driving loop.
Optimizations have a tendency to make a complete mess of Big
Ognen Duzlevski [EMAIL PROTECTED] writes:
Optimizations have a tendency to make a complete mess of Big O
calculations, usually for the better. How does this support your
theory that Big O is a reliable predictor of program speed?
There are many things that you cannot predict, however if
On Sun, 16 Oct 2005 20:28:55 +0200, Christian Stapfer wrote:
Experiments
(not just in computer science) are quite
frequently botched. How do you discover
botched experiments?
Normally by comparing them to the results of other experiments, and being
unable to reconcile the results. You may
On Sun, 16 Oct 2005 14:07:37 -0700, Paul Rubin wrote:
The complexity of hashing depends intricately on the data and if
the data is carefully constructed by someone with detailed knowledge
of the hash implementation, it may be as bad as O(n) rather than O(1)
or O(sqrt(n)) or anything like
Steven D'Aprano wrote:
On Sat, 15 Oct 2005 18:17:36 +0200, Christian Stapfer wrote:
I'd prefer a (however) rough characterization
of computational complexity in terms of Big-Oh
(or Big-whatever) *anytime* to marketing-type
characterizations like this one...
Oh how naive.
Why is it that even
Steven D'Aprano [EMAIL PROTECTED] writes:
But if you are unlikely to discover this worst case behaviour by
experimentation, you are equally unlikely to discover it in day to
day usage.
Yes, that's the whole point. Since you won't discover it by
experimentation and you won't discover it by day
On Sat, 15 Oct 2005 06:31:53 +0200, Christian Stapfer wrote:
jon [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
To take the heat out of the discussion:
sets are blazingly fast.
I'd prefer a (however) rough characterization
of computational complexity in terms of Big-Oh
(or
Steven D'Aprano [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
On Sat, 15 Oct 2005 06:31:53 +0200, Christian Stapfer wrote:
jon [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
To take the heat out of the discussion:
sets are blazingly fast.
I'd prefer a (however)
On Sat, 15 Oct 2005 18:17:36 +0200, Christian Stapfer wrote:
I'd prefer a (however) rough characterization
of computational complexity in terms of Big-Oh
(or Big-whatever) *anytime* to marketing-type
characterizations like this one...
Oh how naive.
Why is it that even computer science
Steven D'Aprano [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
On Sat, 15 Oct 2005 18:17:36 +0200, Christian Stapfer wrote:
I'd prefer a (however) rough characterization
of computational complexity in terms of Big-Oh
(or Big-whatever) *anytime* to marketing-type
characterizations
Let me begin by apologizing to Christian as I was too snippy in
my reply, and sounded even snippier than I meant to.
Christian Stapfer wrote:
Scott David Daniels [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
a better set implementation will win if
it can show better performance
jon [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
To take the heat out of the discussion:
sets are blazingly fast.
I'd prefer a (however) rough characterization
of computational complexity in terms of Big-Oh
(or Big-whatever) *anytime* to marketing-type
characterizations like
To take the heat out of the discussion:
sets are blazingly fast.
--
http://mail.python.org/mailman/listinfo/python-list
Christian Stapfer wrote:
Steve Holden [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer wrote:
George Sakkis [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
try to use set.
A
Scott David Daniels [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer wrote:
Steve Holden [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer wrote:
George Sakkis [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer
I have two lists, A and B, that may or may not be equal. If they are not
identical, I want the output to be three new lists, X, Y and Z, where X has
all the elements that are in A, but not in B, and Y contains all the
elements that are in B but not in A. Z will then have the elements that are
in
Odd-R. wrote:
I have two lists, A and B, that may or may not be equal. If they are not
identical, I want the output to be three new lists, X, Y and Z, where X has
all the elements that are in A, but not in B, and Y contains all the
elements that are in B but not in A. Z will then have the elements
try to use set.
L1 = [1, 1, 2, 3, 4]
L2 = [1, 3, 99]
A = set(L1)
B = set(L2)
X = A - B
print(X)
Y = B - A
print(Y)
Z = A | B
print(Z)
Cheers,
pujo
[EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
try to use set.
L1 = [1, 1, 2, 3, 4]
L2 = [1, 3, 99]
A = set(L1)
B = set(L2)
X = A - B
print(X)
Y = B - A
print(Y)
Z = A | B
print(Z)
But how efficient is this? Could you be a bit
more explicit on that
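For what it's worth, a rough complexity sketch of the set-based approach (my reading, not stated in the thread): building each set is linear on average in the list length, and each difference, intersection or union walks its operands once, relying on O(1) average-case hash lookups.

```python
L1 = [1, 1, 2, 3, 4]
L2 = [1, 3, 99]

# Building the sets is O(len(L1) + len(L2)) on average.
A, B = set(L1), set(L2)

# Each operation below is linear in the size of the sets involved.
print(sorted(A - B))   # in L1 but not L2 -> [2, 4]
print(sorted(B - A))   # in L2 but not L1 -> [99]
print(sorted(A & B))   # in both          -> [1, 3]
print(sorted(A | B))   # in either        -> [1, 2, 3, 4, 99]
```

The expected cost is therefore linear overall, though as discussed later in the thread, adversarial or pathological hash collisions can degrade the per-lookup cost.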
Christian Stapfer [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
try to use set.
Sorting the two lists and then extracting
A-B, B-A, A|B, A & B and A ^ B in one single
pass seems to me very likely to be much faster
for large lists.
Why don't you implement it, test it and time it to
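Taking up the "implement it" challenge, the sort-then-single-pass idea might be sketched like this (a hedged sketch under simplifying assumptions — sortable elements, duplicates collapsed — and not code from the thread):

```python
def sorted_diffs(l1, l2):
    """Merge two lists in one pass, returning (only_in_l1, only_in_l2, in_both).

    Sketch of the sort-and-scan idea: sort once, then advance two
    indices in lockstep, classifying each element as it is passed.
    """
    a, b = sorted(set(l1)), sorted(set(l2))
    i = j = 0
    only_a, only_b, both = [], [], []
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            only_a.append(a[i]); i += 1
        elif a[i] > b[j]:
            only_b.append(b[j]); j += 1
        else:
            both.append(a[i]); i += 1; j += 1
    only_a.extend(a[i:])   # leftovers appear in only one list
    only_b.extend(b[j:])
    return only_a, only_b, both

# sorted_diffs([1, 1, 2, 3, 4], [1, 3, 99]) -> ([2, 4], [99], [1, 3])
```

The two sorts make this O(n log n) overall, whereas the set-based version is expected-linear, which is one reason the thread pushes back on the intuition that the merge would be faster for large lists.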
On Mon, 10 Oct 2005 14:34:35 +0200, Christian Stapfer wrote:
Sorting the two lists and then extracting
A-B, B-A, A|B, A & B and A ^ B in one single
pass seems to me very likely to be much faster
for large lists.
Unless you are running a Python compiler in your head, chances are your
intuition
George Sakkis [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
try to use set.
Sorting the two lists and then extracting
A-B, B-A, A|B, A & B and A ^ B in one single
pass seems to me very likely to be much
Christian Stapfer wrote:
George Sakkis [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
try to use set.
Sorting the two lists and then extracting
A-B, B-A, A|B, A & B and A ^ B in one single
pass seems to me
Steve Holden [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer wrote:
George Sakkis [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Christian Stapfer [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
try to use set.
Sorting the two lists and then