On Fri, Dec 6, 2013 at 7:56 AM, Kevin LaTona <li...@studiosola.com> wrote:

>
>
>
> Like I said the Python community has an amazingly smart and diverse crowd.
>
>
> So if I am understanding some of what Geremy is saying… it is that Micro
> Python really will not be "Python" as we now know it on an x86 processor.
>
> I am okay with that.
>

It doesn't have anything to do with ARM vs x86- this was my point up front.

You can run python- real Python, CPython or PyPy or Jython or $whatever- on
ARM today. Because the CPython developers are pretty careful when they
write code, compiling and running it pretty much Just Works. If you like I
can probably dredge up a working binary for an Android phone.

The issue is how small a device you can run Python on. In particular,
Python consumes a lot of memory: the max RSS for an empty Python
interpreter a few years ago was something like 3.3MB, which at that time
was completely unacceptable for embedded development.
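For the curious, here's a rough way to check that figure on your own
interpreter on a Unix box (exact numbers vary a lot by platform and Python
version, and note the unit caveat in the comment):

```python
import resource
import sys

# Peak resident set size of this (mostly idle) interpreter process.
# Caveat: ru_maxrss is reported in kilobytes on Linux but in bytes on
# macOS, so interpret the printed figure per-platform.
usage = resource.getrusage(resource.RUSAGE_SELF)
print(sys.version.split()[0], "max RSS:", usage.ru_maxrss)
```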

On top of that, a lot of the semantics are most easily realized in an
essentially all-heap environment with no particular sense of dirty-vs-clean
pages. This has the side effect of essentially forbidding certain common
systems programming practices, like mmap'ing large files to avoid
exhausting physical memory (lots of used heap pages means fewer available
large contiguous virtual memory regions) or taking advantage of COW while
forking (reference counting means lots of dirtied heap pages after a fork).
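For anyone unfamiliar with the mmap trick mentioned above, the idea looks
roughly like this: the OS faults pages of the file in and out on demand, so
a file much larger than physical memory can still be read through a mapping.
(The temp file here is just a small stand-in for a genuinely large file.)

```python
import mmap
import os
import tempfile

# Create a throwaway file to map (stand-in for a genuinely large file).
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 4096)
os.close(fd)

with open(path, "rb") as f:
    # Map the file read-only; pages are faulted in lazily by the OS,
    # so a huge file doesn't need to fit in physical memory at once.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        first = m[0:4]  # touching a slice faults in just that page

os.unlink(path)
print(first)
```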

So doing *really, really* small systems with Python was (and probably still
is) a no-go.

All is not lost, however. Today's "embedded" devices- a modern cellphone,
say- are freaking huge compared to what we would once have been able to
use. They're monster machines even compared to the desktop PCs that Python
was originally developed on! On these devices, Python is perfectly fine.

> For me what Micro Python can be is…
>
> A way to use Python to do some task with an ARM-based IO board.
>

>
> Personally I would prefer using Python to write this kind of code to do
> that… rather than having to deal with all the other options or possible
> current ways.
>
>
>
>
> Damien has planted a seed to get Python growing on a "micro" processor I/O
> board.
>
> From what I have read so far… no reason why this can't grow and get better
> than some of the other current options out there.
>
>
>
> The rest of the world is starting to notice as well.
>
> We are on Wired today: http://bit.ly/1eUIjbk
>
>
> http://www.geeky-gadgets.com/micro-python-offers-python-for-micro-controllers-video-03-12-2013/
>
>
> http://hackaday.com/2013/11/27/interview-with-damien-george-creator-of-the-micro-python-project/
>
>
>
> What we have here is an Australian theoretical physicist who works at the
> University of Cambridge in the UK.
>
> Who also uses Python as I do.
>
> And trust me I have no idea what his day job really is.
>
>
> But both of us like robots and CNC machines, etc.
>
> And he was smart enough to have figured out how to get Python, on some
> level or other… running on 32-bit ARM chips in stand-alone mode.
>
>
> Bottom line for me is even if it never truly will be a full-fledged
> version of Python 3,
>
> I am good with that.
>
>
> As I still will be able to use parts of the Python language.
>

I think this is the issue: it is the Python grammar, but probably not
Python in a lot of meaningful ways. Imagine taking mathematical notation
and making all the symbols mean something different for Fermat primes, or
on every second Tuesday.


> To travel down some new roads… that currently are not possible when
> using Python.
>
>
> This is why I am excited by this project and hope others might see these
> possibilities as well.
>
> Jump on board at
> http://www.kickstarter.com/projects/214379695/micro-python-python-for-microcontrollers
> and make it happen.
>
> As you now have less than 6 days to get in on this production run of IO
> boards.
>
> For all these possibilities it will only set you back about $45.
>

Hmm. Python, for all that I love it, isn't a systems programming language.
Trying to shoehorn fine-grained memory management into the language doesn't
feel like a meaningful win to me.

Having said that, supporting open source hardware is a win. If that's what
you're going for I'd prefer you spent your time/money/resources elsewhere,
but hey- that diversity of opinions and options is part of what makes
openness amazing.


> -Kevin
>
>
>
>
>
>
> On Dec 6, 2013, at 12:56 AM, geremy condra <debat...@gmail.com> wrote:
>
>
> On Dec 5, 2013 11:32 PM, "Chris Barker" <chris.bar...@noaa.gov> wrote:
> >
> > On Thu, Dec 5, 2013 at 11:27 PM, geremy condra <debat...@gmail.com>
> wrote:
> >>
> >> > very cool!
> >>
> >> Hmm, dunno about this.
> >>
> >> His stated goal is to match Python's grammar, but it's possible to
> conform completely to Python's grammar and still not be very much like
> python. In particular, adopting C integer semantics is pretty wild. I can't
> think of a sizeable body of Python code I would count on to handle that
> gracefully.
> >
> > well, I think this is about being able to write python code specifically
> to drive this device -- much less about porting other code to it. And I
> think the C integers are optional. But as a proof of concept, Cython lets
> you optionally use straight C ints, and it's very useful.
>
> I don't think this is optional behavior. At the least it can't
> interoperate between the two modes, because a conversion to a bigint style
> integer would cause an unacceptable heap allocation (per his comments),
> potentially in an interrupt handler.
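To make the integer point concrete, here's a quick sketch using ctypes to
fake 32-bit wraparound. (That Micro Python's machine ints are 32 bits wide
is an assumption on my part; the point is the semantic gap, not the exact
width.)

```python
import ctypes

big = 2**31 - 1  # INT32_MAX

# CPython: integers are arbitrary-precision, so this just grows.
python_result = big + 1  # 2147483648

# C int32 semantics: the same addition wraps around to the most
# negative value (ctypes silently truncates to 32 bits).
c_result = ctypes.c_int32(big + 1).value  # -2147483648

print(python_result, c_result)
```

Any Python code that assumes `x + 1 > x` for positive `x` quietly breaks
under the second set of rules.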
>
> >
> >> Also, I don't understand his comments about avoiding heap allocations
> in interrupt handlers. Stack allocating integers is fine, but how do you
> avoid a heap allocation in a dictionary comprehension or string slice?
> >
> >
> > out of my depth here!
>
> The issue is this: hardware occasionally sends an interrupt which must be
> handled by software. Until it is, the code you wanted to spend time running
> isn't.
>
> (Depending on the interrupt and OS you may also miss other interrupts, but
> I digress.)
>
> This means that interrupt handlers should be very, very fast. By
> comparison garbage collection is very, very slow. QED, you must avoid the
> possibility of triggering a garbage collection pass. The way you do this in
> his scheme is to avoid a (potentially GC-triggering) heap allocation.
>
> The problem is that basically everything in a normal python program is
> heap allocated. Creating an integer or a string allocates space from the
> heap (think of C's malloc()) to hold that data.
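You can watch this happen with the stdlib's tracemalloc: even
innocent-looking lines of Python allocate from the heap.

```python
import tracemalloc

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()

# "Innocent" lines of Python: a small comprehension and a string slice.
# Both allocate new objects on the heap.
squares = [n * n for n in range(1000)]
tail = "hello world"[6:]

after, _ = tracemalloc.get_traced_memory()
allocated = after - before
print("bytes allocated on the heap:", allocated)
```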
>
> When this happens, your malloc implementation essentially rummages around
> in the heap looking for a close-to-optimally sized spot for the variable to
> live. A C program never automatically reclaims this space, so developers
> must manually call free() to return it to the heap (forgetting to do so
> leads to memory leaks; doing it too early leads to use-after-free bugs).
>
> The alternative is to allocate on the stack. This is usually used in
> C-like languages for allocating variables in the local scope: things which
> are free to be destroyed once we leave that scope. When we do that the
> variable is basically just added to the end of the stack, and when we leave
> the local scope we just move the apparent end of the stack back to where it
> was when we entered that scope. This has the effect of automatically
> freeing the used memory.
>
> Of course, this has a few drawbacks.
>
> First, the stack is typically small and fixed size, so it's a bit of a
> scarce shared resource. On top of that, there's some stack overhead every
> time you call a function (if you've ever had your code killed because it
> exceeded the recursion depth limit, this is why). So allocating a lot of
> data on the stack means less room for calling other functions, including
> interrupt handlers.
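The recursion depth limit is easy to demonstrate in CPython, which caps
Python-level recursion precisely so the fixed-size C stack can't overflow:

```python
import sys

def recurse(n=0):
    # Every call consumes a Python frame (and, in CPython, C-stack space);
    # the interpreter caps the depth so the real stack can't overflow.
    return recurse(n + 1)

hit_limit = False
try:
    recurse()
except RecursionError:
    hit_limit = True

print("recursion limit:", sys.getrecursionlimit(), "-> hit:", hit_limit)
```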
>
> Second, it means that a statement of the form global_int = local_int would
> leave global_int pointing off into space when local_int was collected
> unless you copy the value somewhere.
>
> So, where do you copy the value to? Ah, the heap. But now an innocent
> assignment between scopes can lead to a heap allocation- which can't be
> done in an interrupt handler.
>
> So now we know that you can't allocate to the heap in an interrupt
> handler, or assign to an outer scope. I think this rules out
> comprehensions, but am not sure. What else can't you do?
>
> You also probably can't do string operations, since this would take up
> tons of stack memory. Or bigint operations, since this could lead to an
> allocation, or... the list goes on.
>
> So, I'm skeptical about how python-like this will be. I don't know how I
> would fix that without a lot of hackery.
>
> >
> > -Chris
> >
> >
> > --
> >
> > Christopher Barker, Ph.D.
> > Oceanographer
> >
> > Emergency Response Division
> > NOAA/NOS/OR&R            (206) 526-6959   voice
> > 7600 Sand Point Way NE   (206) 526-6329   fax
> > Seattle, WA  98115       (206) 526-6317   main reception
> >
> > chris.bar...@noaa.gov
>
>
>
