python CPU usage 99% on ubuntu aws instance using eventlet

2012-02-02 Thread Teddy Toyama
Okay, I am crossposting this from the eventlet dev mailing list since I am
in urgent need of some help.

I am running eventlet 0.9.16 on a Small (not Micro) reserved Ubuntu
11.10 AWS instance.

I have a socketserver that is similar to the echo server from the examples
in the eventlet documentation. When I first start running the code,
everything seems fine, but I have been noticing that after 10 or 15 hours
the CPU usage goes from about 1% to 99+%. At that point I am unable to make
further connections to the socketserver.

These are the important (hopefully) parts of the code that I'm running:

code
# the part of the code that listens for incoming connections
def socket_listener(self, port, socket_type):
    L.LOGG(self._CONN, 0, H.func(), 'Action:Starting|SocketType:%s' %
           socket_type)
    listener = eventlet.listen((self._host, port))
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    pool = eventlet.GreenPool(2)
    while True:
        connection, address = listener.accept()
        connection.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

        # I want this loop to run as fast as possible.
        #
        # I previously grabbed the first message that a plug/device
        # sent here and used that information to add a new object to
        # the socket_hash. Instead of doing that here I've relocated
        # that logic to the spawned object so that this loop is doing
        # as little work as possible.
        L.LOGG(self._CONN, 0, H.func(),
               'IPAddress:%s|GreenthreadsFree:%s|GreenthreadsRunning:%s' %
               (str(address[0]), str(pool.free()), str(pool.running())))
        pool.spawn_n(self.spawn_socketobject, connection, address,
                     socket_type)
    listener.shutdown(socket.SHUT_RDWR)
    listener.close()
/code
The L.LOGG method simply logs the supplied parameters to a mysql table.

I am running the socket_listener in a thread like so:

code
def listen_phones(self):
    self.socket_listener(self._port_phone, 'phone')

t_phones = Thread(target=self.listen_phones)
t_phones.start()
/code
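For comparison, here is a hypothetical standard-library analogue of the same accept-and-dispatch shape (OS threads instead of greenthreads; all names are invented, not from the original code). One detail worth checking against the original: if I read eventlet's GreenPool semantics correctly, a pool of size 2 means spawn_n blocks the accept loop whenever two handlers are already running, so the pool size deserves a look.

```python
import socket
import threading
from concurrent.futures import ThreadPoolExecutor

def handle(conn, addr):
    # Per-connection worker: echo bytes back until the peer disconnects.
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)

def start_listener(host='127.0.0.1', port=0, workers=100):
    # Same shape as the accept loop above: bind, listen, and hand each
    # accepted socket to a pool so the loop itself does minimal work.
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((host, port))
    listener.listen(128)
    pool = ThreadPoolExecutor(max_workers=workers)

    def accept_loop():
        while True:
            conn, addr = listener.accept()
            pool.submit(handle, conn, addr)

    threading.Thread(target=accept_loop, daemon=True).start()
    return listener.getsockname()[1]
```

This is only a sketch of the structure, not a diagnosis of the CPU spin.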

From my initial google searches I thought the issue might be similar to the
bug reported at
https://lists.secondlife.com/pipermail/eventletdev/2008-October/000140.html but
I am using a newer version of eventlet, so surely that cannot be it?

Is there any additional information I can provide to help further
troubleshoot the issue?

Teddy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python CPU

2011-04-04 Thread Gregory Ewing

Paul Rubin wrote:


You can order 144-core Forth chips right now,

   http://greenarrays.com/home/products/index.html

They are asynchronous cores running at around 700 MHz, so you get an
astounding amount of raw compute power per watt and per dollar.  But for
me at least, it's not that easy to figure out applications where their
weird architecture fits well.


Hmmm... Maybe compiling Python to Forth would make sense?-)

--
Greg


Re: Python CPU

2011-04-04 Thread Gregory Ewing

John Nagle wrote:


A tagged machine might make Python faster.  You could have
unboxed ints and floats, yet still allow values of other types,
with the hardware tagging helping with dispatch.   But it probably
wouldn't help all that much.  It didn't in the LISP machines.


What might help more is having bytecodes that operate on
arrays of unboxed types -- numpy acceleration in hardware.

--
Greg


Re: Python CPU

2011-04-04 Thread Paul Rubin
Gregory Ewing greg.ew...@canterbury.ac.nz writes:
 What might help more is having bytecodes that operate on
 arrays of unboxed types -- numpy acceleration in hardware.

That is an interesting idea as an array or functools module patch.
Basically a way to map or fold arbitrary functions over arrays, with a
few obvious optimizations to avoid refcount churning.  It could have
helped with a number of things I've done over the years.


Re: Python CPU

2011-04-04 Thread geremy condra
On Mon, Apr 4, 2011 at 12:47 AM, Gregory Ewing
greg.ew...@canterbury.ac.nz wrote:
 John Nagle wrote:

    A tagged machine might make Python faster.  You could have
 unboxed ints and floats, yet still allow values of other types,
 with the hardware tagging helping with dispatch.   But it probably
 wouldn't help all that much.  It didn't in the LISP machines.

 What might help more is having bytecodes that operate on
 arrays of unboxed types -- numpy acceleration in hardware.

I'd be interested in seeing the performance impact of this, although I
wonder if it'd be feasible.

Geremy Condra


Re: Python CPU

2011-04-04 Thread Terry Reedy

On 4/4/2011 5:23 AM, Paul Rubin wrote:

Gregory Ewinggreg.ew...@canterbury.ac.nz  writes:

What might help more is having bytecodes that operate on
arrays of unboxed types -- numpy acceleration in hardware.


That is an interesting idea as an array or functools module patch.
Basically a way to map or fold arbitrary functions over arrays, with a
few obvious optimizations to avoid refcount churning.  It could have
helped with a number of things I've done over the years.


For map, I presume you are thinking of an array.map(func) in system code 
(C for CPython) equivalent to


def map(self, func):
    for i, ob in enumerate(self):
        self[i] = func(ob)

The question is whether it would be enough faster. Of course, what would
really be needed for speed are wrapped system-coded funcs that map would
recognize, passing unboxed array units to them and receiving them back. At
that point, we have just about invented 1-D numpy ;-).
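As a concrete (hypothetical) stand-in, the map sketched above already works in pure Python over the stdlib array module; the proposed system-coded version would just move this loop into C:

```python
from array import array

def map_inplace(arr, func):
    # Pure-Python equivalent of the proposed array.map: apply func to
    # each unboxed element and store the result back in place.
    for i, ob in enumerate(arr):
        arr[i] = func(ob)

a = array('d', [1.0, 2.0, 3.0])
map_inplace(a, lambda x: x * x)
print(list(a))  # [1.0, 4.0, 9.0]
```

The interconversion cost Terry mentions is visible here: every element is boxed into a Python float on the way into func and unboxed again on assignment.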


I have always thought the array module was underutilized, but I see now
that it only offers Python code a space saving, at the cost of
interconversion time. To be really useful, arrays of unboxed data need
system-coded functions that operate directly on the unboxed data, as
strings and bytes have. Array comes with a few, but very few, generic
sequence methods, like .count(x) (a special case of reduction).


--
Terry Jan Reedy



Re: Python CPU

2011-04-04 Thread Terry Reedy

On 4/4/2011 1:14 PM, Terry Reedy wrote:

On 4/4/2011 5:23 AM, Paul Rubin wrote:

Gregory Ewinggreg.ew...@canterbury.ac.nz writes:

What might help more is having bytecodes that operate on
arrays of unboxed types -- numpy acceleration in hardware.


That is an interesting idea as an array or functools module patch.
Basically a way to map or fold arbitrary functions over arrays, with a
few obvious optimizations to avoid refcount churning. It could have
helped with a number of things I've done over the years.


For map, I presume you are thinking of an array.map(func) in system code
(C for CPython) equivalent to

def map(self, func):
    for i, ob in enumerate(self):
        self[i] = func(ob)

The question is whether it would be enough faster. Of course, what would
really be needed for speed are wrapped system-coded funcs that map would
recognize, passing unboxed array units to them and receiving them back. At
that point, we have just about invented 1-D numpy ;-).

I have always thought the array module was underutilized, but I see now
that it only offers Python code a space saving, at the cost of
interconversion time. To be really useful, arrays of unboxed data need
system-coded functions that operate directly on the unboxed data, as
strings and bytes have. Array comes with a few, but very few, generic
sequence methods, like .count(x) (a special case of reduction).


After posting this, I realized that ctypes makes it easy to find and 
wrap functions in a shared library as a Python object (possibly with 
parameter annotations) that could be passed to array.map, etc. No 
swigging needed, which is harder than writing simple C functions. So a 
small extension to array with .map, .filter, .reduce, and a wrapper 
class would be more useful than I thought.
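The ctypes route Terry describes can be sketched as follows (a hypothetical illustration assuming a Unix-like system where the C math library can be located; libm's sqrt stands in for an arbitrary shared-library function):

```python
import ctypes
import ctypes.util
from array import array

# Locate and load the C math library; sqrt stands in for any
# system-coded function we might want to map over an array.
libm = ctypes.CDLL(ctypes.util.find_library('m'))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

def array_map(arr, func):
    # Python-level stand-in for the proposed array.map extension.
    for i, x in enumerate(arr):
        arr[i] = func(x)

a = array('d', [1.0, 4.0, 9.0])
array_map(a, libm.sqrt)
print(list(a))  # [1.0, 2.0, 3.0]
```

No swigging needed, as Terry says: declaring restype and argtypes is the whole wrapping step.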



--
Terry Jan Reedy



Re: Python CPU

2011-04-04 Thread John Nagle

On 4/4/2011 12:47 AM, Gregory Ewing wrote:

John Nagle wrote:


A tagged machine might make Python faster. You could have
unboxed ints and floats, yet still allow values of other types,
with the hardware tagging helping with dispatch. But it probably
wouldn't help all that much. It didn't in the LISP machines.


What might help more is having bytecodes that operate on
arrays of unboxed types -- numpy acceleration in hardware.


That sort of thing was popular in the era of the early
Cray machines.  Once superscalar CPUs were developed,
the overhead on tight inner loops went down, and several
iterations of a loop could be in the pipeline at one time,
if they didn't conflict.  Modern superscalar machines have
register renaming, so the same program-visible register on
two successive iterations can map to different registers within
the CPU, allowing two iterations of the same loop to execute
simultaneously.  This eliminates the need for loop unrolling and
Duff's device.

John Nagle



Re: Python CPU

2011-04-04 Thread Paul Rubin
John Nagle na...@animats.com writes:
 That sort of thing was popular in the era of the early
 Cray machines.  Once superscalar CPUs were developed,
 the overhead on tight inner loops went down, and several
 iterations of a loop could be in the pipeline at one time,

Vector processors are back, they just call them GPGPU's now.


Re: Python CPU

2011-04-04 Thread Gregory Ewing

Terry Reedy wrote:
So a 
small extension to array with .map, .filter, .reduce, and a wrapper 
class would be more useful than I thought.


Also useful would be some functions for doing elementwise
operations between arrays. Sometimes you'd like to just do
a bit of vector arithmetic, and pulling in the whole of
numpy as a dependency seems like overkill.
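A minimal sketch of such an elementwise helper over stdlib arrays (the names here are invented for illustration, not an existing API):

```python
from array import array
from operator import add, mul

def elementwise(op, a, b):
    # Pairwise vector arithmetic over stdlib arrays, no numpy needed.
    if len(a) != len(b):
        raise ValueError('arrays must have the same length')
    return array(a.typecode, map(op, a, b))

x = array('d', [1.0, 2.0, 3.0])
y = array('d', [10.0, 20.0, 30.0])
print(list(elementwise(add, x, y)))  # [11.0, 22.0, 33.0]
print(list(elementwise(mul, x, y)))  # [10.0, 40.0, 90.0]
```

As with map, a system-coded version could skip the per-element boxing entirely, which is where the real speed would come from.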

--
Greg


Re: Python CPU

2011-04-04 Thread Gregory Ewing

geremy condra wrote:


I'd be interested in seeing the performance impact of this, although I
wonder if it'd be feasible.


A project I have in the back of my mind goes something
like this:

1) Design an instruction set for a Python machine and
a microcode architecture to support it

2) Write a simulator for it

3) Use the simulator to evaluate how effective it would
be if actually implemented, e.g. in an FPGA.

And if I get that far:

4) (optional) Get hold of a real FPGA and implement it

--
Greg


Re: Python CPU

2011-04-04 Thread Gregory Ewing

Paul Rubin wrote:


Vector processors are back, they just call them GPGPU's now.


Also present to some extent in the CPU, with
MMX, Altivec, etc.

--
Greg


Re: Python CPU

2011-04-03 Thread John Nagle

On 4/2/2011 9:01 PM, Steven D'Aprano wrote:

There were also Forth chips, which let you run Forth in hardware. I
believe they were much faster than Forth in software, but were killed by
the falling popularity of Forth.


The Forth chips were cute, and got more done with fewer gates than
almost anything else.  But that didn't matter for long.
Willow Garage has a custom Forth chip they use in their Ethernet
cameras, but it's really a FPGA.

A tagged machine might make Python faster.  You could have
unboxed ints and floats, yet still allow values of other types,
with the hardware tagging helping with dispatch.   But it probably
wouldn't help all that much.  It didn't in the LISP machines.

John Nagle


Re: Python CPU

2011-04-03 Thread Carl Banks
It'd be kind of hard.  Python bytecode operates on objects, not memory slots, 
registers, or other low-level entities like that.  Therefore, in order to 
implement a Python machine one would have to implement the whole object 
system in the hardware, more or less.

So it'd be possible but not too practical or likely.


Carl Banks


Re: Python CPU

2011-04-03 Thread Paul Rubin
John Nagle na...@animats.com writes:
 The Forth chips were cute, and got more done with fewer gates than
 almost anything else.  But that didn't matter for long.
 Willow Garage has a custom Forth chip they use in their Ethernet
 cameras, but it's really a FPGA.

You can order 144-core Forth chips right now,

   http://greenarrays.com/home/products/index.html

They are asynchronous cores running at around 700 MHz, so you get an
astounding amount of raw compute power per watt and per dollar.  But for
me at least, it's not that easy to figure out applications where their
weird architecture fits well.


Re: Python CPU

2011-04-03 Thread Werner Thie
You probably heard of the infamous FORTH chips like the Harris RTX2000, 
or ShBoom, which implemented a stack-oriented, very low power design 
before there were FPGAs in silicon. To my knowledge the RTX2000 is still 
used for space-hardened applications, and if I search long enough I might 
find the one I had sitting in my cellar.


The chip was at that time so insanely fast that it could produce video 
signals with FORTH programs driving the IO pins. Chuck Moore, the father 
of FORTH, developed the chip in silicon, in FORTH itself.


Given that the instruction set of a FORTH machine is that of a very 
general stack-based von Neumann system, I believe that starting with an 
RTX2000 (which should be available in VHDL) one could quite quickly reach 
a point where things make sense: not going for the 'fastest' CPU ever, 
but for the advantage of having a decent CPU programmable in Python 
sitting on a chip with a lot of hardware available.


Another thing worth mentioning in this context is certainly the work 
available at http://www.myhdl.org/doku.php.


Werner

On 4/3/11 3:46 AM, Dan Stromberg wrote:


On Sat, Apr 2, 2011 at 5:10 PM, Gregory Ewing
greg.ew...@canterbury.ac.nz mailto:greg.ew...@canterbury.ac.nz wrote:

Brad wrote:

I've heard of Java CPUs. Has anyone implemented a Python CPU in VHDL
or Verilog?


Not that I know of.

I've had thoughts about designing one, just for the exercise.

It's doubtful whether such a thing would ever be of practical
use. Without as much money as Intel has to throw at CPU
development, it's likely that a Python chip would always be
slower and more expensive than an off-the-shelf CPU running
a tightly-coded interpreter.

It could be fun to speculate on what a Python CPU might
look like, though.


One with the time and inclination could probably do a Python VM in an
FPGA, no?

Though last I heard, FPGA's weren't expected to increase in performance
as fast as general-purpose CPU's.





Re: Python CPU

2011-04-03 Thread John Nagle

On 4/3/2011 8:44 AM, Werner Thie wrote:

You probably heard of the infamous FORTH chips like the Harris RTX2000,
or ShBoom, which implemented a stack-oriented, very low power design
before there were FPGAs in silicon. To my knowledge the RTX2000 is still
used for space-hardened applications, and if I search long enough I might
find the one I had sitting in my cellar.

The chip was at that time so insanely fast that it could produce video
signals with FORTH programs driving the IO pins. Chuck Moore, the father
of FORTH, developed the chip in silicon, in FORTH itself.


He did version 1, which had a broken integer divide operation.
(Divisors which were odd numbers produced wrong answers. Really.)
I came across one of those in a demo setup at a surplus store in
Silicon Valley, driving the CRT and with Moore's interface that
did everything with chords on three buttons.


Given that the instruction set of a FORTH machine is that of a very
general stack-based von Neumann system, I believe that starting with an
RTX2000 (which should be available in VHDL) one could quite quickly reach
a point where things make sense: not going for the 'fastest' CPU ever,
but for the advantage of having a decent CPU programmable in Python
sitting on a chip with a lot of hardware available.


Willow Garage has VHDL available for a Forth CPU.  It's only 200
lines.

The Forth CPUs have three separate memories - RAM, Forth stack,
and return stack. All three are accessed on each cycle.  Back before
microprocessors had caches, this was a win over traditional CPUs,
where memory had to be accessed sequentially for those functions.
Once caches came in, it was a lose.

It's interesting that if you wanted to design a CPU for Google's
nativeclient approach for executing native code in the browser,
a separate return point stack would be a big help.  Google's
nativeclient system protects return points, so that you can tell,
from the source code, all the places control can go.  This is
a protection against redirection via buffer overflows, something
that's possible on x86 because the return points and other data
share the same stack.

Note that if you run out of return point stack, or parameter
stack, you're stuck.  So there's a hardware limit on call depth.
National Semiconductor once built a CPU with a separate return
point stack with a depth of 20.  Big mistake.

(All of this is irrelevant to Python, though. Most of Python's
speed problems come from spending too much time looking up attributes
and functions in dictionaries.)
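That dictionary-lookup cost is easy to see from plain CPython (a small illustration with an invented class, nothing hardware-specific):

```python
class Greeter:
    def greet(self):
        return 'hello'

g = Greeter()

# The method is found by dictionary search on the type, not the instance.
assert 'greet' not in g.__dict__
assert 'greet' in type(g).__dict__

# Hoisting the bound method into a local performs the dictionary
# lookup once instead of on every iteration of a hot loop.
greet = g.greet
results = [greet() for _ in range(3)]
print(results)  # ['hello', 'hello', 'hello']
```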

John Nagle


Re: Python CPU

2011-04-03 Thread Nobody
On Sun, 03 Apr 2011 10:15:34 -0700, John Nagle wrote:

  Note that if you run out of return point stack, or parameter
 stack, you're stuck.  So there's a hardware limit on call depth.
 National Semiconductor once built a CPU with a separate return
 point stack with a depth of 20.  Big mistake.

The 8-bit PIC microcontrollers have a separate return stack. The PIC10 has
a 2-level stack, the PIC16 has 8 levels, and the PIC18 has 31 levels.

But these chips range from 16 bytes of RAM and 256 words of flash for a
PIC10, through 64-256 bytes of RAM and 1-4K words of flash for a PIC16, up
to 2KiB of RAM and 16K words of flash for a PIC18, so you usually run out
of something else long before the maximum stack depth becomes an issue.



Re: Python CPU

2011-04-02 Thread BartC



Brad hwfw...@gmail.com wrote in message 
news:01bd055b-631d-45f0-90a7-229da4a9a...@t19g2000prd.googlegroups.com...

Hi All,

I've heard of Java CPUs. Has anyone implemented a Python CPU in VHDL
or Verilog?


For what purpose, improved performance? In that case, there's still plenty 
of scope for that on conventional CPUs.


The Java VM is fairly low level (I would guess, not being too familiar with 
it), while the Python VM seems much higher level and awkward to implement 
directly in hardware.


I don't think it's impossible, but the benefits probably would not match 
those of improving, say, CPython on conventional hardware. And if a Python 
CPU couldn't also run non-Python code efficiently, then on a typical 
workload with mixed languages, it could be much slower!


However, wasn't there a Python version that used JVM? Perhaps that might run 
on a Java CPU, and it would be interesting to see how well it works.


--
Bartc 




Re: Python CPU

2011-04-02 Thread Dan Stromberg
On Sat, Apr 2, 2011 at 3:06 PM, BartC b...@freeuk.com wrote:


 However, wasn't there a Python version that used JVM? Perhaps that might
 run on a Java CPU, and it would be interesting to see how well it works.


Jython's still around - in fact, it had a new release not too long ago.

Also, PyPy formerly worked on the JVM, and there's been some interest lately
in reviving that.

And it sounds like the Jython and PyPy(+JVM) people might work together
some.


Re: Python CPU

2011-04-02 Thread Gregory Ewing

Brad wrote:


I've heard of Java CPUs. Has anyone implemented a Python CPU in VHDL
or Verilog?


Not that I know of.

I've had thoughts about designing one, just for the exercise.

It's doubtful whether such a thing would ever be of practical
use. Without as much money as Intel has to throw at CPU
development, it's likely that a Python chip would always be
slower and more expensive than an off-the-shelf CPU running
a tightly-coded interpreter.

It could be fun to speculate on what a Python CPU might
look like, though.

--
Greg


Re: Python CPU

2011-04-02 Thread Dan Stromberg
On Sat, Apr 2, 2011 at 5:10 PM, Gregory Ewing
greg.ew...@canterbury.ac.nzwrote:

 Brad wrote:

  I've heard of Java CPUs. Has anyone implemented a Python CPU in VHDL
 or Verilog?


 Not that I know of.

 I've had thoughts about designing one, just for the exercise.

 It's doubtful whether such a thing would ever be of practical
 use. Without as much money as Intel has to throw at CPU
 development, it's likely that a Python chip would always be
 slower and more expensive than an off-the-shelf CPU running
 a tightly-coded interpreter.

 It could be fun to speculate on what a Python CPU might
 look like, though.


One with the time and inclination could probably do a Python VM in an FPGA,
no?

Though last I heard, FPGA's weren't expected to increase in performance as
fast as general-purpose CPU's.


Re: Python CPU

2011-04-02 Thread Steven D'Aprano
On Sun, 03 Apr 2011 12:10:35 +1200, Gregory Ewing wrote:

 Brad wrote:
 
 I've heard of Java CPUs. Has anyone implemented a Python CPU in VHDL or
 Verilog?
 
 Not that I know of.
 
 I've had thoughts about designing one, just for the exercise.
 
 It's doubtful whether such a thing would ever be of practical use.
 Without as much money as Intel has to throw at CPU development, it's
 likely that a Python chip would always be slower and more expensive than
 an off-the-shelf CPU running a tightly-coded interpreter.

I recall back in the late 80s or early 90s, Apple and Texas Instruments 
collaborated to build a dual-CPU Lisp machine. I don't remember all the 
details, but it was an Apple Macintosh II with a second CPU running (I 
think) a TI Explorer (possibly on a Nubus card?), with an integration 
layer that let the two hardware machines talk to each other. It was 
dual-branded Apple and TI.

It was a major flop. It was released around the time that general purpose 
CPUs started to get fast enough to run Lisp code faster than a 
custom-made Lisp CPU could. I don't remember the actual pricing, so I'm going to 
make it up... you got better performance from a standard Mac II with 
software Lisp for (say) $12,000 than you got with a dedicated Lisp 
machine for (say) $20,000.

(These are vaguely recalled 1980s prices. I'm assuming $10K for a Mac II 
and $2K for the Lisp compiler. Of course these days a $400 entry level PC 
is far more powerful than a Mac II.)

There were also Forth chips, which let you run Forth in hardware. I 
believe they were much faster than Forth in software, but were killed by 
the falling popularity of Forth.


-- 
Steven


Re: Python CPU

2011-04-02 Thread Steven D'Aprano
On Sat, 02 Apr 2011 23:06:52 +0100, BartC wrote:

 However, wasn't there a Python version that used JVM? Perhaps that might
 run on a Java CPU, and it would be interesting to see how well it works.

Not only *was* there one, but there still is: Jython. Jython is one of 
the Big Three Python implementations:

* CPython (the one you're probably using)
* Jython (Python on Java)
* IronPython (Python on .Net)

with PyPy (Python on Python) catching up.

http://www.jython.org/


-- 
Steven


Python CPU

2011-04-01 Thread Brad
Hi All,

I've heard of Java CPUs. Has anyone implemented a Python CPU in VHDL
or Verilog?

-Brad


Re: Python CPU

2011-04-01 Thread jkn
On Apr 1, 4:38 pm, Brad hwfw...@gmail.com wrote:
 Hi All,

 I've heard of Java CPUs.

And Forth CPUs as well, I suspect ;-)

 Has anyone implemented a Python CPU in VHDL
 or Verilog?


I don't think so - certainly not in recent memory. If you look at the
documentation for the Python bytecode, for example:

http://docs.python.org/release/2.5.2/lib/bytecodes.html

you can see why. It starts off nicely enough, and then ...
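A quick way to see the point is the dis module: even trivial source compiles to opcodes that operate on whole objects, not registers (exact opcode names vary across CPython versions, so treat the output as illustrative):

```python
import dis

def add(a, b):
    return a + b

# Every operation works on objects: even `a + b` becomes a generic
# binary opcode whose meaning is resolved at run time.
for instr in dis.get_instructions(add):
    print(instr.opname)
```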

HTH
J^n



Re: Python CPU

2011-04-01 Thread Nobody
On Fri, 01 Apr 2011 08:38:27 -0700, Brad wrote:

 I've heard of Java CPUs. Has anyone implemented a Python CPU in VHDL
 or Verilog?

Java is a statically-typed language which makes a distinction between
primitive types (bool, int, double, etc) and objects. Python is a
dynamically-typed language which makes no such distinction. Even something
as simple as a + b can be a primitive addition, a bigint addition, a
call to a.__add__(b) or a call to b.__radd__(a), depending upon the values
of a and b (which can differ for different invocations of the same code).
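The dispatch described above can be observed with a toy class (names invented for illustration):

```python
class Celsius:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # Called for: Celsius(...) + other
        return Celsius(self.value + float(other))

    def __radd__(self, other):
        # Called for: other + Celsius(...) when other's __add__ fails
        return Celsius(float(other) + self.value)

a = 1 + 2            # primitive int addition
b = 10**100 + 1      # bigint addition, same source-level syntax
c = Celsius(20) + 5  # dispatches to Celsius.__add__
d = 5 + Celsius(20)  # int.__add__ fails, dispatches to Celsius.__radd__
print(a, c.value, d.value)
```

Which method actually runs depends on the runtime types of both operands, so neither the compiler nor a hypothetical Python CPU can know the operation in advance.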

This is one of the main reasons that statically-typed languages exist, and
are used for most production software.



Re: Python CPU

2011-04-01 Thread Stefan Behnel

Nobody, 01.04.2011 18:52:

Java is a statically-typed language which makes a distinction between
primitive types (bool, int, double, etc) and objects. Python is a
dynamically-typed language which makes no such distinction. Even something
as simple as a + b can be a primitive addition, a bigint addition, a
call to a.__add__(b) or a call to b.__radd__(a), depending upon the values
of a and b (which can differ for different invocations of the same code).

This is one of the main reasons that statically-typed languages exist, and
are used for most production software.


I doubt that the reason they are used for most production software is a 
technical one.


Stefan



Re: Python CPU

2011-04-01 Thread geremy condra
On Fri, Apr 1, 2011 at 10:00 AM, Stefan Behnel stefan...@behnel.de wrote:
 Nobody, 01.04.2011 18:52:

 Java is a statically-typed language which makes a distinction between
 primitive types (bool, int, double, etc) and objects. Python is a
 dynamically-typed language which makes no such distinction. Even something
 as simple as a + b can be a primitive addition, a bigint addition, a
 call to a.__add__(b) or a call to b.__radd__(a), depending upon the values
 of a and b (which can differ for different invocations of the same code).

 This is one of the main reasons that statically-typed languages exist, and
 are used for most production software.

 I doubt that the reason they are used for most production software is a
 technical one.

I also suspect that there's some confusion between duck typing and
typelessness going on here.

Geremy Condra


Re: Python CPU

2011-04-01 Thread Dan Stromberg
On Fri, Apr 1, 2011 at 10:00 AM, Stefan Behnel stefan...@behnel.de wrote:

 Nobody, 01.04.2011 18:52:

  Java is a statically-typed language which makes a distinction between
 primitive types (bool, int, double, etc) and objects. Python is a
 dynamically-typed language which makes no such distinction. Even something
 as simple as a + b can be a primitive addition, a bigint addition, a
 call to a.__add__(b) or a call to b.__radd__(a), depending upon the values
 of a and b (which can differ for different invocations of the same code).

 This is one of the main reasons that statically-typed languages exist, and
 are used for most production software.


 I doubt that the reason they are used for most production software is a
 technical one.

 Agreed.

In school, I was taught by a VHDL expert that the distinction between what
gets done in hardware and what gets done in software is largely arbitrary -
other than hardware often being faster than software, and hardware being
less mutable than software.

Lisp machines exist, or at least did at one time - from there, a Python
machine doesn't seem much of a stretch.


Re: Python CPU

2011-04-01 Thread Emile van Sebille

On 4/1/2011 8:38 AM Brad said...

Hi All,

I've heard of Java CPUs. Has anyone implemented a Python CPU in VHDL
or Verilog?

-Brad



http://code.google.com/p/python-on-a-chip/

Emile




Re: Python CPU

2011-04-01 Thread Emile van Sebille

On 4/1/2011 11:28 AM Emile van Sebille said...

On 4/1/2011 8:38 AM Brad said...

Hi All,

I've heard of Java CPUs. Has anyone implemented a Python CPU in VHDL
or Verilog?

-Brad



http://code.google.com/p/python-on-a-chip/


Sorry - wrong url in the cut'n paste buffer -

  http://tsheffler.com/software/python/



Emile







Re: Python CPU

2011-04-01 Thread John Nagle

On 4/1/2011 11:35 AM, Emile van Sebille wrote:

On 4/1/2011 11:28 AM Emile van Sebille said...

On 4/1/2011 8:38 AM Brad said...

Hi All,

I've heard of Java CPUs. Has anyone implemented a Python CPU in VHDL
or Verilog?

-Brad



http://code.google.com/p/python-on-a-chip/


Sorry - wrong url in the cut'n paste buffer -

http://tsheffler.com/software/python/



Emile


   Neither of those is a hardware implementation of Python.
Python on a chip is a small Python-subset interpreter for
microcontrollers.  That could be useful if it's not too slow.

   Sheffler's software is a means for controlling and extending
Verilog simulations in Python.  (Often, you're simulating some
hardware at the gate level which interfaces with other existing
hardware, say a disk, which can be simulated at a much coarser
level.  Being able to write the simulator for the external devices
in Python is useful.)

John Nagle