Re: [Python-Dev] Can Python implementations reject semantically invalid expressions?

2010-07-02 Thread Craig Citro
> This question has an easy answer - can you possibly tell the difference?


Ok, I'm obviously being silly here, but sure you can:

>>> dis.dis("raise TypeError()")
          0 <114>           26977
          3 <115>            8293
          6 IMPORT_STAR
          7 SETUP_EXCEPT    25968 (to 25978)
         10 <69>
         11 <114>           28530
         14 <114>           10536
>>> dis.dis("1 + '1'")
          0 <49>
          1 SLICE+2
          2 STORE_SLICE+3
          3 SLICE+2
          4 <39>
          5 <49>
          6 <39>

That said, I agree with the point you're making -- they have the same
semantics, so you should be fine substituting one for the other.

Honestly, though, I'd come down on the side of letting the compiler
raise an error -- while I understand that it means you have
*different* behavior, I think it's *preferable* behavior.

-cc


Re: [Python-Dev] Can Python implementations reject semantically invalid expressions?

2010-07-02 Thread Craig Citro
> Whoa.  That's very peculiar looking bytecode.  Is dis.dis behaving as
> it should here?
> BTW, I think you want 'raise TypeError', not 'raise TypeError()'.


Yep, that's embarrassing. I was being lazy: I was expecting to see
different bytecodes, and I did ... so apparently I didn't bother to
actually read what the bytecodes were. It's interesting -- if I'd just
had "raise TypeError" instead of "raise TypeError()", I would have
found this out, because "raise TypeError" is not the byte
representation of a valid sequence of bytecodes. ;)

Anyway, here's what I was going for:

>>> def foo():
...     return 1+'1'
...
>>> def bar():
...     raise TypeError
...
>>> dis.dis(foo)
  2           0 LOAD_CONST               1 (1)
              3 LOAD_CONST               2 ('1')
              6 BINARY_ADD
              7 RETURN_VALUE
>>> dis.dis(bar)
  2           0 LOAD_GLOBAL              0 (TypeError)
              3 RAISE_VARARGS            1
              6 LOAD_CONST               0 (None)
              9 RETURN_VALUE

That said, I totally concede Martin's point -- this is an
implementation-specific thing. It happens that all the major Python
implementations compile to some VM, and I'd bet that these two compile
to different bytecodes on any of them, but that doesn't preclude
another implementation from making a different choice there.
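
(For the record, here's a quick way to see that CPython today treats
this purely as a runtime question -- both definitions compile and exec
without complaint, and the TypeError only shows up when foo() is
actually called. Small self-contained sketch, Python 3 syntax:)

source = """
def foo():
    return 1 + '1'

def bar():
    raise TypeError
"""

code = compile(source, "<example>", "exec")   # compiles fine -- no error here
namespace = {}
exec(code, namespace)                         # defining the functions is also fine

try:
    namespace["foo"]()
except TypeError as exc:
    print("only fails at call time:", exc)    # unsupported operand type(s) ...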

-cc


Re: [Python-Dev] Can Python implementations reject semantically invalid expressions?

2010-07-02 Thread Craig Citro
> But you would be taking a module that will compile and making it uncompilable.


You're absolutely right, and since I definitely *don't* think that the
program "raise TypeError" should cause a CompileError, you could say
it's safer to have a simple rule like "valid syntax = will compile" --
it's probably a slippery slope once you start deciding which bits of
semantics raise CompileErrors and which don't.

However, in this particular case, here's a question: *why* would
someone write "return 1 + '1'"? Did they do it *knowing* what would
happen, or did they just not realize it was an error?

 * If they knew what it was going to do, then I'd say shame on them --
they should have just raised a TypeError instead, and anyone who comes
along to read or maintain that code would thank them for the change.
My impression is that we generally try to discourage people from
writing tricky code with Python, and this would do exactly that.

 * If, on the other hand, it's an accident, then I think it's a
service to the user to let them know as soon as possible. Here I'm
thinking both of someone new to Python, or even a seasoned pro who
makes a few quick fixes before sending some code to someone, and
doesn't happen to test that code path before handing it off.

Either way, I personally prefer the CompileError -- it helps me catch
a stupid mistake if I've made one, and it prevents other people from
writing code I find less clear.
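
(To make "catch a stupid mistake at compile time" concrete: here's a
toy sketch, using the ast module in current Python (3.8+), of the kind
of check an implementation -- or a linter -- could run before running
any code. Purely illustrative; the table of "obviously wrong"
combinations is deliberately tiny, and this isn't anything CPython
actually does.)

import ast

def find_obvious_type_errors(source):
    """Flag additions whose operands are literals of incompatible types."""
    problems = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
            left, right = node.left, node.right
            if (isinstance(left, ast.Constant) and isinstance(right, ast.Constant)
                    and isinstance(left.value, (int, float))
                    and isinstance(right.value, str)):
                problems.append((node.lineno, "adding a number and a string literal"))
    return problems

print(find_obvious_type_errors("def f():\n    return 1 + '1'\n"))
# [(2, 'adding a number and a string literal')]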

My real motive, though, is that I'd like to have more freedom for
Python implementations, *especially* things that let you make more
decisions at compile-time. (This is no doubt influenced by the fact
that I've spent a lot of time thinking about Cython lately.) In this
case, I see it as a win-win -- it gives more freedom to the folks
writing the implementation, and (personally) I find it more pleasing
as a user. Again, I don't think this *particular* case allows us to do
something awesome behind the scenes with Cython -- but the community
starting to consider changes of this ilk *would* be a big win, I
think.

-cc


Re: [Python-Dev] Can Python implementations reject semantically invalid expressions?

2010-07-02 Thread Craig Citro
> "1/0" is much faster to type than "raise SomeError", and it sometimes
> serves the same purpose when debugging.  Let's not forget that not all
> code is written for eternity :)


Doesn't "raise" do the same thing for just two extra characters?

I agree that not all code lives forever -- but I bet we all have
stories about debugging code living on a lot longer than it should
have. ;)

-cc


Re: [Python-Dev] Can Python implementations reject semantically invalid expressions?

2010-07-02 Thread Craig Citro
> To test that adding a string to an integer raises TypeError at
> runtime. That is, something along the lines of:
>
>  with self.assertRaises(TypeError):
>      1 + "1"


Well, this would just mean the test suite would have to change -- that
test would become something like:

with self.assertRaises(TypeError):
    import operator
    operator.add(1, "1")

Of course, checking that the literal syntax now raises a compile error
could be ugly:

with self.assertRaises(CompileError):
    eval('1 + "1"')

... or it could move to test_parser. ;)

> If an end user is doing it rather than an implementation's own test
> suite... well, I have no idea why anyone else would want to do that :)


Exactly -- and if it's a clear win for users, I don't think "it makes
test-writing harder but not impossible" should really be a
counter-argument. Of course, "there's lots of existing code it would
break" is a very good counter-argument ... maybe Py4k. ;)

> It's definitely something of a grey area. The existing test suite
> would likely fail if obviously insane operations between literals
> started throwing SyntaxError, but it would be possible to classify
> some of those tests as implementation specific. However, an
> implementation that did that would need to completely fork any
> affected parts of the test suite, since the implementation specific
> test decorators won't help protect against failures to compile the
> code.


Well, I think there's some momentum towards splitting some of the
tests into Python-standard and implementation-specific things, so that
they can get shared between implementations, right? As long as clear
lines are drawn, isn't it just a question of which bucket these tests
go into?
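
(As a sketch of what that bucketing might look like in practice --
sticking to plain unittest and platform rather than guessing at the
test.support helpers, and with made-up test names -- the semantics
test runs everywhere, while the "the literal form still compiles"
check is explicitly marked as a CPython implementation detail:)

import operator
import platform
import unittest

cpython_only = unittest.skipUnless(
    platform.python_implementation() == "CPython",
    "relies on CPython deferring the error to runtime",
)

class AddIntStrTest(unittest.TestCase):

    def test_add_raises_type_error(self):
        # Language-level semantics: the operation itself raises TypeError.
        with self.assertRaises(TypeError):
            operator.add(1, "1")

    @cpython_only
    def test_literal_form_still_compiles(self):
        # Implementation detail: CPython happily compiles the literal
        # expression and only fails when it is evaluated.
        compile("1 + '1'", "<test>", "eval")

if __name__ == "__main__":
    unittest.main()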

-cc


Re: [Python-Dev] python compiler

2010-04-05 Thread Craig Citro
> For a college project, I proposed to create a compiler for Python. I've
> read something about it, and maybe I see now that I made a bad choice.
> I'd like to hear everyone's opinions.


I don't think everyone thinks this is a bad idea -- for instance,
those of us working on Cython [1], which is itself a descendant of
Pyrex [2]. :)

> Python itself is a highly dynamic language and not amenable to direct
> compilation. Instead, modern just-in-time compiler technology is seen as
> the way to improve Python performance. Projects that are doing this are
> PyPy and Unladen Swallow. A static subset of Python can be statically
> compiled; projects that do that include RPython (part of PyPy) and
> ShedSkin. These are not really Python, though, just Python-like languages
> that happen to be valid subsets of Python.


It's true that JIT compilation really opens up a whole world of
possibilities that Cython currently can't touch. On the other hand,
for some kinds of Python code -- especially, for example, things
related to scientific computing or mathematics -- Cython's a quick
road to massive speedups, because a little bit of static typing can go
a *long* way. It's true that Cython doesn't yet support the full
Python syntax, but this is considered a bug -- we're working hard on
being able to compile all of Python soon.
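
(If you want a feel for what "a little bit of static typing" means in
practice, here's a tiny sketch in Cython's pure-Python mode. It
assumes a reasonably recent Cython is installed so that the cython
shadow module imports, and sum_squares is a made-up example function;
the same file still runs, without the speedup, under plain CPython.)

import cython

# Pure-Python-mode type declarations: still valid Python, but when Cython
# compiles this module the typed locals become C integers and the loop
# runs at C speed.
@cython.locals(n=cython.int, i=cython.int, total=cython.longlong)
def sum_squares(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    print(sum_squares(10))   # 285, whether interpreted or compiled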

-cc

[1] http://www.cython.org/

[2] http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/


Re: [Python-Dev] python compiler

2010-04-05 Thread Craig Citro
> I hate to remind you, but Cython is *not* Python. It does not even plan
> to support all of the parts which are considered Python semantics
> (like tracebacks and frames).


It's true -- we basically compile to C + the Python/C API, depending
on CPython being around for runtime support, and I don't see that
changing anytime soon. (I don't think I tried to claim that we were a
full Python implementation in my original email ...) I'm curious about
the bit you mention, though -- is constructing a call frame for every
Python call really part of the semantics, and not just a CPython
implementation detail? (I've never played with Jython or IronPython to
know if they do this.) We actually *do* construct all the call frames
when doing profiling, so we could turn this on if we needed to for a
strict mode, but usually the additional runtime speedup is more
desirable.
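
(For what it's worth, the reason frames get argued about as semantics
at all is that frame objects leak out into user code. Here's the kind
of small helper, using sys._getframe, that plenty of real-world code
depends on -- a sketch, with made-up function names:)

import sys

def debug(message):
    # Walk one frame up to report who called us -- the kind of introspection
    # that only works if the implementation exposes real frame objects.
    caller = sys._getframe(1)
    print("%s:%d in %s(): %s" % (
        caller.f_code.co_filename,
        caller.f_lineno,
        caller.f_code.co_name,
        message,
    ))

def do_work():
    debug("starting work")

do_work()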

Independent of this, the OP was asking about working on something as
part of a school-related project. I think that if you're looking to
see how a Python-to-C compiler works, you could get quite a bit from
checking out Cython and/or Pyrex, even if your real goal is to create
a Python implementation independent of CPython.

-cc


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-02-02 Thread Craig Citro
> Done. The diff is at
> http://codereview.appspot.com/186247/diff2/5014:8003/7002. I listed
> Cython, Shedskin and a bunch of other alternatives to pure CPython.
> Some of that information is based on conversations I've had with the
> respective developers, and I'd appreciate corrections if I'm out of
> date.


Well, it's a minor nit, but it might be more fair to say something
like "Cython provides the biggest improvements once type annotations
are added to the code." After all, Cython is more than happy to take
arbitrary Python code as input -- it's just much more effective when
it knows something about types. The code to make Cython handle
closures has just been merged ... hopefully support for the full
Python language isn't so far off. (Let me know if you want me to
actually make a comment on Rietveld ...)

Now what's more interesting is whether or not U-S and Cython could
play off one another -- take a Python program, run it with some
generic input data under Unladen and record info about which
functions are hot, and what types they tend to take, then let
Cython/gcc -O3 have a go at these, and lather, rinse, repeat ... JIT
compilation and static compilation obviously serve different purposes,
but I'm curious if there aren't other interesting ways to take
advantage of both.
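
(Just to make the "record which functions are hot, and what types they
see" step concrete, here's a toy recorder using sys.setprofile --
nothing to do with Unladen Swallow's actual feedback machinery, and
hot() is invented for the example:)

import collections
import sys

call_counts = collections.Counter()
arg_types = collections.defaultdict(collections.Counter)

def recorder(frame, event, arg):
    # Count Python-level calls and note the type of each argument seen.
    if event == "call":
        code = frame.f_code
        key = "%s:%s" % (code.co_filename, code.co_name)
        call_counts[key] += 1
        for name in code.co_varnames[:code.co_argcount]:
            if name in frame.f_locals:
                arg_types[key][type(frame.f_locals[name]).__name__] += 1

def hot(x):
    return x * x

sys.setprofile(recorder)
for i in range(1000):
    hot(i)
sys.setprofile(None)

for key, count in call_counts.most_common(3):
    print(count, key, dict(arg_types[key]))
# e.g.  1000 /path/to/script.py:hot {'int': 1000}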

-cc