[pypy-dev] Looking for a thesis
Hi, my name is Antonio and I'm studying computer science at the University of Genoa (Italy). Since I need a thesis to graduate, and since I love Python very much, I'd like to contribute to the PyPy project, if you think I could help. I have already read the online docs and I have played a little with the current version of PyPy.

I have two questions:
1) do you think I could help the project, as I hope?
2) what tasks do you think could be suitable for a thesis?

Obviously, before beginning to code I have to discuss with my professor whether the task is too big or too small for our purposes.

Sorry for my far-from-perfect language, but as you could have guessed I am not a native speaker :-).

Cheers,
Antonio

___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] Re: Looking for a thesis
Michael Hudson wrote:
> Sure. Will you be able to make it to a sprint?

I have never taken part in a sprint, but I'd enjoy trying, if the sprint takes place within a reasonable distance from my home. Yes, Tokyo is definitely too far for me. :-)

> There are so many things you could look at... what are your interests?

I'm interested in a number of topics related to programming language implementations, such as virtual machines, code analysis and optimization, code generation, etc. To give examples of what I'd like to do, I would have liked to implement a javascript backend, but if I'm right there is already an implementation, isn't there? Or, to stay with things that already work, I would have enjoyed working on the type annotator.

> Off the top of my head I can think of: tuning GC parameters, working on alternate representations of app level types (e.g. using the bottom bit of a 'pointer' to mark an integer), exploring threading models, doing stuff with the greenlets/tasklets/coroutines interface, working on interfacing to external functions, ...

The last two points sound interesting, especially the one about greenlets/tasklets/coroutines. As soon as I have finished checking out pypy-dist from svn I will take a look at what is already implemented.

> And as Christian says, how long do you have?

I began programming in 1994, when I was 12; in 2001 I discovered Python, and since then I have used it for almost every task for which I could choose the language. I think I have a good understanding of how the language works, both from a theoretical and an implementation point of view, because I have already done some study of CPython internals for university. I have also written a small compiler for a toy language targeting the CPython Virtual Machine. Apart from Python I have had a lot of experience with other platforms and languages such as C#, C/C++, Pascal, javascript and VB (but I'm not proud of this ;-).
cheers
Anto
Re: [pypy-dev] Re: Looking for a thesis
Beatrice During wrote:
> Hi there! You are very welcome to the team Antonio, and thanks for choosing pypy ;-) Regarding Michael's last question about time - I think he was referring to how much time you will have to finish your thesis. Although it was good to get to know a bit about your background.

Oops... I did say that I'm not so good with English :-). The thesis should take 3-4 months, though I hope to continue with the project after I've completed it, too.

> Hope to see you at one of our upcoming sprints - maybe EuroPython at CERN/Switzerland at the end of June?

Yes, it may be: it's very near to me (I live in Genoa) and I would be happy to participate.

ciao
Anto
[pypy-dev] Svn account?
Hi, as I said I've begun writing the .NET CLI backend; it is still very experimental, but it can already compile some code snippets correctly, such as the algorithm for computing Fibonacci numbers. How can I check my work into svn? I think I should obtain an account, shouldn't I? Are there any guidelines for checking in? Do I only have to follow the coding rules described in doc/coding-guide.txt, or are there other rules I don't know about?

ciao
Anto
Re: [pypy-dev] CLI code generation (was: Svn account?)
holger krekel wrote:
> Hi Antonio,
> On Sun, Mar 19, 2006 at 20:53 +0100, Antonio Cuni wrote:
> > as I said I've begun writing the .NET CLI backend; it is still very experimental but it can already compile correctly some code snippets such as the algorithm for computing fibonacci's numbers.
> cool! I would be interested to hear a bit more about your concrete current approach.

I'm responding here so that others can read, if they are interested.

The first decision I had to take was whether to generate IL code (to be assembled with ilasm) or C# code: I chose the former, mainly because C# lacks the goto statement and it would be difficult to implement flow control. Given this, my current approach is fairly naive and certainly not efficient: at the moment the compiler does a 1-to-1 translation of the low level operations expressed in SSA form; for example, the simple function:

    def bar(a, b):
        return a + b

is compiled into the following IL code:

    .method static public int32 bar(int32 a_1, int32 b_1) il managed
    {
        .locals (int32 v6, int32 v12)
    block0:
        ldarg.s 'a_1'
        ldarg.s 'b_1'
        add
        stloc.s 'v12'
        ldloc.s 'v12'
        stloc.s 'v6'
        br.s block1
    block1:
        ldloc.s 'v6'
        ret
    }

As you can see there are many unnecessary operations: the result of 'add' is stored to v12, loaded from v12, stored in v6 and then loaded from v6! The same is true for the branch instruction, which is unneeded. I think it should be fairly simple to translate from the SSA form to a more stack-friendly form useful for stack-based machines; the question is: where should such code go? Since it could be useful for other backends too, it would be nice to put it in some place where it can be shared by several backends: one option could be to write it as a backend optimization, but in that case we would have to introduce new low level operations for stack manipulation, such as 'push', 'pop' or 'dup'.

ciao
Anto
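[Editor's note: the cleanup described above can be sketched as a tiny peephole pass over the emitted stack instructions. This is an illustrative sketch only; the instruction representation and opcode names are invented and are not actual gencli code.]

```python
# Naive peephole pass over a list of (opcode, arg) stack instructions.
# Opcode names are illustrative, not the real gencli ones.

def peephole(instrs):
    """Collapse 'stloc X' immediately followed by 'ldloc X': the value
    is already on top of the stack, so the round-trip through the local
    is unnecessary (naively assuming X is not read again later)."""
    out = []
    for op, arg in instrs:
        if out and op == 'ldloc' and out[-1] == ('stloc', arg):
            out.pop()          # drop the store; the value stays on the stack
        else:
            out.append((op, arg))
    return out

# The 'bar' example from the mail, as abstract stack instructions:
bar = [
    ('ldarg', 'a'),
    ('ldarg', 'b'),
    ('add', None),
    ('stloc', 'v12'),
    ('ldloc', 'v12'),
    ('stloc', 'v6'),
    ('ldloc', 'v6'),
    ('ret', None),
]

print(peephole(bar))
```

Running this collapses both store/load pairs, leaving just the ldarg/ldarg/add/ret sequence of the hand-written version (the branch removal would need a separate block-merging step).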
Re: [pypy-dev] CLI code generation
Hi Armin,

Armin Rigo wrote:
> I wonder how important this is at the moment. Maybe the .NET JIT compiler is good enough to remove all this. How does the resulting machine code look like?

I have not tried the CLR by Microsoft yet; at this stage I'm using mono under linux, just because I'd like to stay in windows as little as possible ;-). BTW, mono doesn't seem smart enough to optimize the code; consider the following IL methods: the first is generated by my compiler, the second is written by hand:

    .method static public int32 slow(int32 a_1, int32 b_1) il managed
    {
        .locals (int32 v6, int32 v12)
    block0:
        ldarg.s 'a_1'
        ldarg.s 'b_1'
        add
        stloc.s 'v12'
        ldloc.s 'v12'
        stloc.s 'v6'
        br.s block1
    block1:
        ldloc.s 'v6'
        ret
    }

    .method static public int32 fast(int32 a_1, int32 b_1) il managed
    {
        ldarg.s 'a_1'
        ldarg.s 'b_1'
        add
        ret
    }

I used mono's ahead-of-time compiler with all optimizations enabled, then I disassembled the result with objdump -d; here is an extract of the output:

    Disassembly of section .text:

    04f0 <methods>:
     4f0:  55             push   %ebp
     4f1:  8b ec          mov    %esp,%ebp
     4f3:  8b 45 08       mov    0x8(%ebp),%eax
     4f6:  03 45 0c       add    0xc(%ebp),%eax
     4f9:  c9             leave
     4fa:  c3             ret
     4fb:  90             nop
     4fc:  8d 74 26 00    lea    0x0(%esi,1),%esi
     500:  55             push   %ebp
     501:  8b ec          mov    %esp,%ebp
     503:  57             push   %edi
     504:  8b 45 08       mov    0x8(%ebp),%eax
     507:  8b f8          mov    %eax,%edi
     509:  03 7d 0c       add    0xc(%ebp),%edi
     50c:  8b c7          mov    %edi,%eax
     50e:  8d 65 fc       lea    0xfffc(%ebp),%esp
     511:  5f             pop    %edi
     512:  c9             leave
     513:  c3             ret
     514:  8d 74 26 00    lea    0x0(%esi,1),%esi

I don't know x86 assembly very well (to be honest I don't know it at all ;-) but it seems that the 'fast' method spans from 4f0 to 4fc and the 'slow' method spans from 500 to 514, and I think that the first should be more efficient than the latter, shouldn't it? I don't know how smart the JIT and AOT compilers shipped with the MS CLR are, but perhaps it is worth trying to generate smarter code, so that it can run efficiently under mono too. Sure, it is not the task with the highest priority.
> It would probably make sense to write this as a function that takes a single block and produces a list of complex expression objects -- to be defined in a custom way, instead of trying to push this into the existing flow graph model.

I agree, I think this is the simplest solution.

ciao
Anto
Re: [pypy-dev] CLI code generation
Hi Armin,

Armin Rigo wrote:
> Actually, my comment about this was guided by the fact that I didn't see how the existing flow graph model can represent stack operations... Antonio, did you have something more precise in mind? A bunch of operations that often have no argument and/or no return value, but implicitly operate on the stack, e.g. stack_int_mul(), stack_push(v), v=stack_pop()?

Yes, that was precisely what I thought, but it seems a little untidy: as you pointed out, I think the best way to handle stack machines is to transform the SSI operations into a completely new list of stack based operations (or, equivalently, into a complex expression tree). With this scheme we could also easily handle other problems in the future: for example, we could also write a transformer for register-based machines that takes a list of SSI operations and produces a list of operations specifically tailored for that task. Given this, each backend could choose the instruction set that best fits its needs, just as at the moment each backend can choose whether to use lltypesystem or ootypesystem.

Just my 2 euro-cents,

ciao
Anto
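[Editor's note: the "complex expression tree" idea discussed above can be sketched as follows. The operation tuples and variable names are invented for illustration; the real flow graph model is different.]

```python
# Sketch: folding a block of straight-line SSA-like operations into
# nested expression trees. Each op is (result_var, opname, args).
# When a variable is used exactly once, its defining operation is
# inlined at the single use site, producing a nested tuple tree --
# which maps naturally onto a stack machine.

def build_trees(ops):
    uses = {}
    for _, _, args in ops:
        for a in args:
            uses[a] = uses.get(a, 0) + 1
    trees = {}       # var -> pending expression tree, to inline later
    emitted = []
    for res, opname, args in ops:
        tree = (opname,) + tuple(trees.pop(a, a) for a in args)
        if uses.get(res, 0) == 1:
            trees[res] = tree        # defer: inline at the single use
        else:
            emitted.append((res, tree))
    return emitted

# v0 = add(a, b); return v0   -->   return add(a, b)
ops = [('v0', 'add', ('a', 'b')),
       ('ret', 'return', ('v0',))]
print(build_trees(ops))
```

A variable used more than once stays a named local, so the sketch never duplicates computations; that matches the v6/v12 situation in the generated IL, where single-use temporaries are exactly the ones worth folding away.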
Re: [pypy-dev] CLI code generation
Hi Niklaus,

Niklaus Haldimann wrote:
> I'm the guy who's working a bit on the Smalltalk backend at the moment. I'm very interested to see how your CLI backend progresses! Ideally, all high-level backends (those based on the ootypesystem) should be able to share some code and concepts. I haven't yet put much thought into this while working on gensqueak, though. But if you see ways to share some abstractions between gencli and gensqueak, you're very welcome to share your ideas or refactor code. I will also watch what you are doing to look for opportunities to unify things.

I agree, it would be nice to share code, so that it could be reused for future backends, too. I have already taken a look at gensqueak (it was my starting point for writing gencli), but I haven't studied it in depth, partly because I don't know Smalltalk, so it is not easy to follow the code. I will check in my work as soon as I have an svn account, so that you will be able to take a look at it.

A question about the coding rules: must the translator package be written in RPython? I've seen a lot of yield statements, so I think that's not the case: why?

ciao
Anto
[pypy-dev] First version of the CLI backend checked in
Hi, I've just checked in the first version of the CLI backend; for now I've checked it into a branch located at http://codespeak.net/svn/user/antocuni/pypy-antocuni/

At the moment I've tested it only with mono under linux; to run the tests you need to have ilasm and mono in your path; the IL code is generated even if mono is not installed and can be found in /tmp/usession-*.

Comments are welcome,

ciao
Anto
Re: [pypy-dev] First version of the CLI backend checked in
Hi Carl,

Carl Friedrich Bolz wrote:
> I just took a cursory glance at the code, but the tests passed for me out of the box :-).

Great! :-)

> I would say you could check it in the main repository (maybe after adding appropriate skips to the tests, if mono is not installed).

The tests already skip when either mono or ilasm is not found in the path; maybe I'll check that it runs under windows too, and then commit to the main repository.

Just a question about subversion, since it is the first time I've used it and I'm not sure I'm doing things correctly: what's the best way to commit my work into the main repository? After a bit of googling I figured out that I should use the svn merge command, but I haven't understood how :-(.

ciao
Anto
Re: [pypy-dev] First version of the CLI backend checked in
Hi Carl,

Carl Friedrich Bolz wrote:
> I guess you have not changed anything outside of the cli directory in your branch. If this is the case it is easy: just do
>
>     svn cp pypy-antocuni/pypy/translator/cli pypy-dist/pypy/translator/

Done! Now the code should be in the main repository. Thanks for the help.

ciao
Anto
[pypy-dev] Bug in lltypesystem
Hi, I've just found a bug in lltypesystem; to reproduce it, type the following in translatorshell.py:

    >>> from __future__ import division
    >>> def bug(x, y):
    ...     return x/y
    ...
    >>> t = Translation(bug)
    >>> t.annotate([int, int])
    [cut]
    >>> t.rtype()
    [translation:info] already done: Annotating & simplifying
    [translation:info] RTyping...
    [flowgraph] (pypy.rpython.lltypesystem.rclass:619)ll_runtime_type_info
    [annrpython] FunctionGraph of (pypy.rpython.lltypesystem.rclass:619)ll_runtime_type_info__objectPtr at 0xb79ef94c
    - SomePtr(ll_ptrtype=* RuntimeTypeInfo (opaque))
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "/home/anto/pypy/pypy-dist/pypy/translator/interactive.py", line 122, in rtype
        return self.driver.rtype()
      File "/home/anto/pypy/pypy-dist/pypy/translator/driver.py", line 68, in proc
        return self.proceed(backend_goal)
      File "/home/anto/pypy/pypy-dist/pypy/translator/driver.py", line 355, in proceed
        return self._execute(goals, task_skip = self._maybe_skip())
      File "/home/anto/pypy/pypy-dist/pypy/translator/tool/taskengine.py", line 108, in _execute
        res = self._do(goal, taskcallable, *args, **kwds)
      File "/home/anto/pypy/pypy-dist/pypy/translator/driver.py", line 140, in _do
        res = func()
      File "/home/anto/pypy/pypy-dist/pypy/translator/driver.py", line 186, in task_rtype
        crash_on_first_typeerror=not opt.insist)
      File "/home/anto/pypy/pypy-dist/pypy/rpython/rtyper.py", line 144, in specialize
        self.specialize_more_blocks()
      File "/home/anto/pypy/pypy-dist/pypy/rpython/rtyper.py", line 175, in specialize_more_blocks
        self.specialize_block(block)
      File "/home/anto/pypy/pypy-dist/pypy/rpython/rtyper.py", line 286, in specialize_block
        self.translate_hl_to_ll(hop, varmapping)
      File "/home/anto/pypy/pypy-dist/pypy/rpython/rtyper.py", line 413, in translate_hl_to_ll
        resultvar = hop.dispatch()
      File "/home/anto/pypy/pypy-dist/pypy/rpython/rtyper.py", line 625, in dispatch
        return translate_meth(self)
      File "None/fat/pypy/pypy-dist/py/code/source.py:215", line 5, in translate_op_truediv
      File "/home/anto/pypy/pypy-dist/pypy/rpython/rint.py", line 93, in rtype_truediv
        return _rtype_template(hop, 'truediv', [ZeroDivisionError])
      File "/home/anto/pypy/pypy-dist/pypy/rpython/rint.py", line 186, in _rtype_template
        return hop.genop(repr.opprefix+func, vlist, resulttype=repr)
      File "/home/anto/pypy/pypy-dist/pypy/rpython/rmodel.py", line 102, in __getattr__
        raise AttributeError("%s instance has no attribute %s" % (
    AttributeError: FloatRepr instance has no attribute opprefix

ciao
Anto
[pypy-dev] Low level operations and ootypesystem
Hi, I have some doubts about the semantics of some low level operations I have come across while developing the CLI backend.

The first doubt is about overflow-checked operations: I've noticed there are a number of checked operations that can never fail due to their semantics, such as int_lt_ovf or int_rshift_ovf: am I missing something, or are they simply redundant? Moreover, I've found that the rtyper produces both int_lshift_ovf and int_lshift_ovf_val: what's the difference between the two? And what are the semantics of int_floordiv_ovf_zer and int_mod_ovf_zer?

The second question regards uint_neg and uint_abs: considering that an unsigned integer can't be negative, what are they intended to do? Are they simply no-ops?

Finally, the last question is ootypesystem-specific: I've noticed that the rtyper sets the 'meta' field of every instance just after it has been created: what does it contain? It seems to me that it contains the class the object belongs to: am I correct? If so, I could safely ignore those operations, because the CLI automatically tracks the type of each object, couldn't I? Just a curiosity: why the name 'meta' instead of 'class' or 'type'?

Thanks for the help and good coding! :-)

ciao
Anto
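[Editor's note: a rough model of what the overflow-checked operations discussed above are generally expected to do, sketched in Python. This assumes 32-bit signed integers and is illustrative only, not the actual RPython definitions; the '_zer' suffix is modeled as an additional explicit zero-divisor check.]

```python
# Illustrative models of overflow-checked integer operations,
# assuming a 32-bit signed machine word (an assumption; the real
# width is platform dependent).

INT_MIN, INT_MAX = -2**31, 2**31 - 1

def int_add_ovf(x, y):
    # add, raising OverflowError if the result leaves the 32-bit range
    r = x + y
    if not INT_MIN <= r <= INT_MAX:
        raise OverflowError
    return r

def int_lshift_ovf(x, y):
    # left shift can overflow, unlike right shift (hence the doubt
    # above about int_rshift_ovf ever failing)
    r = x << y
    if not INT_MIN <= r <= INT_MAX:
        raise OverflowError
    return r

def int_floordiv_ovf_zer(x, y):
    # '_zer': also checks for a zero divisor; the only overflow case
    # for division is INT_MIN // -1
    if y == 0:
        raise ZeroDivisionError
    r = x // y
    if not INT_MIN <= r <= INT_MAX:
        raise OverflowError
    return r

print(int_add_ovf(1, 2))    # 3
```

Under this model an operation like int_lt_ovf indeed has nothing to check, since a comparison cannot overflow.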
Re: [pypy-dev] Low level operations and ootypesystem
Hi Samuele,

On 3/29/06, Samuele Pedroni [EMAIL PROTECTED] wrote:
> this meta field can contain instances that have further fields beyond class_. class_ contains something of type ootype.Class, which is expected to be the runtime representation of a class in the backend type system. The extra fields are used to implement dynamically looked up class attributes; unless the CLI has direct support for such things, which are _not_ static class attributes, the simplest thing is to follow what the rtyper is asking for.

ok, perhaps I've understood: are we talking about things like classmethods? Or attributes like the ones in the following example?

    class MyClass:
        ClassAttribute = 42

    class MyDerivedClass(MyClass):
        ClassAttribute = 43

> > Just a curiosity: why the name 'meta' instead of 'class' or 'type'?
> to avoid simply thinking that it is the type/class in the Java/JVM etc. sense. This is more similar to the Smalltalk metaclass.

I don't know Smalltalk... are Smalltalk metaclasses similar to the Python ones?

ciao
Anto
[pypy-dev] List and string in ootypesystem
Hi, I've spent the last few hours reading sources in the rpython directory, trying to understand more deeply how lltypesystem and ootypesystem work: I've noticed that the low level representations of strings and lists are the same in both typesystems. My question is: is this a design choice, or has nobody refactored that part yet? I think that if a target natively supports classes and other object oriented constructs, it probably supports strings and sequences too, so we should try to use these facilities as much as possible. I don't remember which people are working on ootypesystem, but if they agree I could try to refactor such things. If I have understood the source well, I should begin by adding 'rlist' and 'rstr' to the list of lazily imported modules in rpython.typesystem.TypeSystem, shouldn't I?

thanks for the attention

ciao
Anto
Re: [pypy-dev] List and string in ootypesystem
Hi Niklaus, you have just preceded me by about 2 hours... I would have sent a mail about rlist anyway. I've spent the last few days working on this topic: I tried to refactor the code to make rlist type-system specific, as rtuple, rclass and others already were. It has been a bit difficult because it was my first hacking in the rpython directory: I had to read the sources carefully to understand in depth how things work, and I'm not sure I have completely understood the whole logic. I've just committed my work in the http://codespeak.net/svn/user/antocuni/pypy-antocuni directory. I'm not satisfied with it because it is a bit messy, and I'm happy to know that now it's no longer needed, because yours is surely better :-). By the way, I don't think it has been a waste of time, because it let me gain some knowledge of PyPy's internals that will be useful in the future.

Niklaus Haldimann wrote:
> I have two more days here at the Leysin sprint to work on lists. After that I again won't have much time to work on PyPy, so Antonio (or anyone else), you're very welcome to continue from here. I think the list code can serve as a good starting point for the other data structures. If there are any questions, I'm happy to answer them (if I can ;)).

Sure, I'll be happy to continue from here: it is much better than starting from scratch as I was planning to do! :-) I think you've saved me from a lot of headaches, thanks! Remind me to offer you a beer the first time we meet ;-).

ciao
Anto
Re: [pypy-dev] List and string in ootypesystem
Niklaus Haldimann wrote:
> Oops, I didn't intend to invalidate your work. ;) I actually checked your user directory yesterday, because you said in an earlier mail that you would work on the branch there. But since I didn't see any changes related to rlist I assumed you decided to postpone this ...

No, I was working on that, but I didn't commit because I wanted to get everything working before doing so... I've had some problems fixing all the various modules that accessed rpython.rlist directly.

> What you did doesn't look so bad, I just looked at it. In general, I'm impressed that you found your way around the RTyper so easily. ;)

Don't worry, it has not been so easy! :-)

> The main difference to our work is that you created many new low-level operations for the list interface. Since there will also have to be a dict and string interface this would lead to an explosion of low-level operations. Our approach also makes testing of these data structures at a lower level easier (see test_oolist.py and test_oortype.py).

The reason behind that is that I found no other way to do it: I really wanted to create as few low-level ops as possible, but I couldn't figure out how. Let me explain better what I did: my first attempt was to provide a low-level operation 'list_lenght' and then implement rtype_is_true in terms of it, so I wrote something like this:

    class ListRepr(AbstractListRepr):

        def rtype_len(self, hop):
            v_lst, = hop.inputargs(self)
            return hop.genop('list_len', [v_lst], resulttype=Signed)

        def rtype_is_true(self, hop):
            v_lst, = hop.inputargs(self)
            return hop.gendirectcall(ll_list_is_true, v_lst)

    def ll_list_is_true(lst):
        return lst is not None and len(lst) != 0

I hoped that the rtyper was smart enough to convert 'len(lst)' into my low-level op 'list_len', but it wasn't: indeed, it generated code that called the len function on a generic PyObject*, and that was not what I wanted. I tried to copy the implementation of lltypesystem.rlist.ll_list_is_true, but I couldn't, because that one can call the ll_lenght() method, which I didn't have. It no longer matters now, but how could I have gotten things working?

ciao
Anto
Re: [pypy-dev] Rtyping classes definitions?
Hi Niklaus,

Niklaus Haldimann wrote:
> There must be a misunderstanding here, the rtyping step is not at all missing. ;) If you look at rtyped graphs you'll see that instances (and classes) have low-level types of the ootype.Instance kind. These Instance types have a _field attribute that is a dict mapping field names to their low-level types. You should be operating with these Instance types, not with ClassDefs; the latter are mostly an implementation detail of the annotation phase.

Yes, there was a misunderstanding :-). I was searching for something containing the list of classes to be rendered, but now I've understood that I should collect information about the classes as I generate the code, which is the way gensqueak works (I was looking at your sources just now).

Thanks a lot for your help; I'm sorry if I'm boring you with my many questions, I hope they will decrease as I get acquainted with PyPy's logic.

ciao
Anto
[pypy-dev] A problem with unbound methods
Hi, I have some problems translating calls to unbound methods. Let me show with an example:

    class MyClass:
        def __init__(self, x):
            self.x = x

    class MyDerivedClass(MyClass):
        def __init__(self, x):
            MyClass.__init__(self, x)

During rtyping the field and method names are mangled, so the __init__ method becomes something like 'o__initvariant0': as long as I call bound methods this is not a problem, because the low-level op oosend contains the right mangled name, but difficulties arise when I try to call an unbound method such as the one shown above; the low-level op that I obtain is this:

    v9 = direct_call((pypy.rpython.ootypesystem.ootype._static_meth object at 0xb78e51ac), self_1, x_2)

Let 'x = op.args[0].value':

    (Pdb) x._name
    'Base.__init__'
    (Pdb) x.graph.name
    'Base.__init__'

As you can see, I can only read the original unmangled name, but that is useless because I have to deal with the mangled one. I've tried to search around for a place where to read it, but I couldn't find one. What can I do?

thanks for the help

ciao
Anto
Re: [pypy-dev] A problem with unbound methods
Hi Samuele,

Samuele Pedroni wrote:
> you should not trust or use graph names in the backend, apart from giving names to things. If a function is reused in more than one class the information would not be useful (this can happen in Python/RPython). The graph would get the name based on the first class under which it was found; this may be unrelated, for example for the class of self, to the method name under which the graph is attached.

Nice to know, I wasn't aware of this. I think I have to rethink my approach to code generation...

> Because there are too many variations about what is allowed in terms of supporting functions vs. just methods, calling superclass implementations of methods even when the method is overridden in a subclass etc. in the targets, right now it is up to the backend to traverse and consider all classes and direct_calls, and if the same graph appears both attached to a method (or methods) in a class (or classes) and in static method(s) in direct call(s), decide what to do. This is also true in general for graphs that appear as more than one method in one place.

Ok, now it's clearer, thanks. So, to answer my original question, I should create a sort of graph database to look up when I need to know where I have put the code for a graph, right? Well, let's begin refactoring! :-)

ciao
Anto
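[Editor's note: the "graph database" idea mentioned above can be sketched as a simple mapping from graphs to the locations where their code was emitted. Class and method names below are illustrative (the mangled name echoes the earlier mail), not actual gencli code.]

```python
# Sketch of a 'graph database': record every (class, method) location
# where a graph's code is rendered, then look it up when emitting calls.

class GraphDB:
    def __init__(self):
        self._locations = {}    # graph -> list of (class_name, method_name)

    def register(self, graph, class_name, method_name):
        """Remember that this graph was emitted as class_name.method_name.
        The same graph may legitimately appear in several classes and/or
        as a static method, as explained in the mail above."""
        self._locations.setdefault(graph, []).append((class_name, method_name))

    def lookup(self, graph):
        """All known emission sites for the graph (possibly empty)."""
        return self._locations.get(graph, [])

db = GraphDB()
db.register('init_graph', 'Base', 'o__initvariant0')
db.register('init_graph', 'Derived', 'o__initvariant0')
print(db.lookup('init_graph'))
```

With such a table, a direct_call to a graph can be resolved to whichever rendered location the backend prefers, instead of trusting the graph's own name.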
Re: [pypy-dev] List and string in ootypesystem
Hi Armin, hi Niklaus,

Armin Rigo wrote:
> I see you added SomeOOList to annotation.model.lltype_to_annotation(). There is already a generic 'def len()' in the base class SomeObject, so that's how the annotator is happy with your ll function's 'len(lst)'. Fine here. If you wanted a .length() method instead, you would need a 'def method_length()' in unaryop.py. On the rtyper side, you need something similar to rpython/rptr.py that maps SomeOOList back to its low-level type, with yet another Repr. It's this repr that must implement the operations you want to be able to use in low-level helpers; e.g. rtype_len() if you want len(lst) to work; or if instead you use .length() in low-level helpers, then you would need an rtype_method_length() in the repr corresponding to SomeOOList (by opposition to the rtype_len() in the repr corresponding to SomeList). Nik's approach is to map lists to regular OO classes and instances, which are already supported in the annotator/rtyper with a similarly indirect approach: SomeOOClass/SomeOOInstance in the annotator, which rpython/ootypesystem/rootype.py maps back to the low-level OO types. Just like rptr.py, this rootype.py is only needed to support low-level helpers.

Thank you for the great explanation: now I see things much more clearly, especially the role played by rptr.py and rootype.py, which I didn't understand very well until now. It seems that, question by question, I'm really getting into PyPy's logic... I hope I will be able to finish my apprenticeship soon :-)

ciao
Anto
Re: [pypy-dev] Summer of Code 2006
Niklaus Haldimann wrote:
> Hi there. Google is doing Summer of Code again this year: http://code.google.com/soc/ It would be possible to enter PyPy directly as a mentoring organization this time, instead of going through the PSF. Last year, student slots were given to mentoring organizations proportional to the number of applications. If there are enough applications for PyPy proper that might bring more students on board than taking slots from the PSF pool as last year (but maybe PyPy's influence in the PSF is big enough, and this doesn't really matter).

That's good news! I'd like to participate in SoC to work on PyPy stuff; I hope I'll be able to apply (and to win :-)).

ciao
Anto
Re: [pypy-dev] Avoiding code duplication
Armin Rigo wrote:
> I think this is kind-of-reasonable. The ADT method approach of the lltypesystem was introduced late during the development of the rtyper; by now, it would be reasonable to define common method names between the ADT methods of the lltypesystem and the GENERIC_METHODS of the ootypesystem. I am unsure about the performance penalty. The current versions of many ll helpers, for example, read the 'items' pointer only once and reuse it; if this gets replaced by ADT methods like 'getitem_nonneg()', it means that although the call is probably inlined, there is still the overhead of reading 'items' on each iteration through the list. Who knows, maybe C compilers will notice and move the read out of the loop. Just give it a try on a small example like ll_listindex(), I guess...

Well, as we decided on #pypy, I've changed the ADT interface. As I wrote in the commit log: the interface of ListRepr and FixedSizeListRepr has changed: two accessor methods have been added, ll_getitem_fast and ll_setitem_fast. They should be used instead of the ll_items()[index] idiom: that way, when ootypesystem's lists support this interface, we will be able to write functions usable with both typesystems with no modification. The various ll_* helper functions have been adapted to use the new interface. Moreover, functions that accessed the l.length field directly have been changed to call the ll_length() method instead, for the same reasons as above.

The next step is to rename ootypesystem's list _GENERIC_METHODS to match the ADT methods in lltypesystem's list; then we could try to share most of the ll_* functions that currently belong only to lltypesystem/rlist.py. I hope I will do it tomorrow.

> A different comment: as you mentioned on IRC it would be nice if the back-end could choose which methods it implements natively. At one point there was the idea that maybe the 'oopspec' attributes that started to show up in lltypesystem/rlist.py (used by the JIT only) could be useful in this respect. If I remember correctly, the idea didn't work out because of the different 'lowleveltype' needed, and the difference in the interface. Merging the ADT method names of lltyped lists and the GENERIC_METHODS of ootyped lists could be a step in this direction again. The interesting point is that each oo back-end could then choose to special-case the ll_xxx() functions with the oopspecs that they recognize, and just translate the other ones normally. (The ll back-ends always translate them all.)

I saw those 'oopspec' attributes, but I didn't understand their exact semantics; your proposal sounds reasonable to me: if I figure it out correctly, this way the typesystem-specific code would be reduced to the minimum, and it will help to port other Reprs such as rdict to ootypesystem, too. I'll investigate a bit in this direction as soon as I can.

Happy Easter to all,

ciao
Anto
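[Editor's note: the point of the shared accessor methods described above can be illustrated with a helper written only against that interface. The FakeList class below is a stand-in invented for the sketch; it is not the real lltype or ootype list implementation.]

```python
# Sketch: a helper written purely in terms of the shared ADT interface
# (ll_length, ll_getitem_fast), so the same source works for any list
# implementation that provides those two methods.

def ll_listindex(lst, obj):
    i = 0
    length = lst.ll_length()
    while i < length:
        if lst.ll_getitem_fast(i) == obj:
            return i
        i += 1
    raise ValueError   # not found

class FakeList:
    """Stand-in implementing the shared interface, for demonstration."""
    def __init__(self, items):
        self._items = items
    def ll_length(self):
        return len(self._items)
    def ll_getitem_fast(self, i):
        return self._items[i]

print(ll_listindex(FakeList([10, 20, 30]), 20))   # 1
```

Note how the helper never touches an 'items' array or a 'length' field directly, which is exactly the property that makes it shareable between the two typesystems.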
Re: [pypy-dev] Question on snake server test runs
Hi Sanghyeon,

Sanghyeon Seo wrote:
> http://snake.cs.uni-duesseldorf.de/pypytest/summary.html I added a py.test option to pretty-print Common Lisp source files, and tests on the snake server started to fail. Samuele told me that it might depend on the working directory from which py.test is run, but no matter how I try, I couldn't reproduce the failure on my machine. So my question is: how are tests run on the snake server? How can I fix this failure?
>
>     E   if conftest.option.prettyprint:
>     AttributeError: Values instance has no attribute 'prettyprint'

I had the same problem with gencli: I solved it using a helper function that catches AttributeError and returns a default value instead, even if it seems more like a hack than a solution. See translator/cli/option.py.

ciao
Anto
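[Editor's note: the workaround described can be sketched as below. The helper name and the Options stand-in are illustrative, modeled on the translator/cli/option.py approach rather than copied from it.]

```python
# Sketch of the workaround: read an option attribute with a fallback
# default when it is missing, instead of letting AttributeError escape.

def getoption(options, name, default=False):
    try:
        return getattr(options, name)
    except AttributeError:
        return default

class Options:          # stand-in for conftest.option
    verbose = True

opts = Options()
print(getoption(opts, 'verbose'))       # True
print(getoption(opts, 'prettyprint'))   # False
```

So the failing `if conftest.option.prettyprint:` check becomes `if getoption(conftest.option, 'prettyprint'):`, which degrades gracefully when the option was never registered.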
[pypy-dev] PyPy is flooding Genova :-)
Hi all, as the subject suggests, people seem to be getting interested in PyPy here at Genova's university, thanks to my thesis :-). As a consequence, next week I'll probably give a talk introducing some interested people to PyPy. Since there are a number of introductory presentations, I was thinking of using one of those instead of writing yet another one; the most up-to-date seems to be the accu2006 one, right? I've noticed that it was written with Keynote: can someone send it to me in a format I can open with OpenOffice or PowerPoint, so that I can add some slides specific to the CLI backend, please? Moreover, my supervising professor is considering the possibility of establishing a more concrete collaboration between my university and the PyPy project, so he would be glad to speak with someone from the core team about this. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
[pypy-dev] Summer of code
Hi all, I'm considering the possibility of applying to Google Summer of Code 2006: obviously the topic of my application would be pypy :-). As you can guess I'd like to continue working on the CLI backend, also considering that I probably won't be able to finish it before I graduate. The point is that right now I can't know how mature gencli will be when SoC starts, so it's difficult to write a good proposal: how can I say where I'll end up if I don't know where I'll be starting from? I could submit a vague proposal, but I guess that something like "working on the CLI backend" is a bit too elusive to be accepted. Any suggestions? ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] Re: Summer of code
Hi Terry, Terry Reedy wrote: I thought it was your thesis project, which you would need to finish. In any case, assuming you do not already have a summer stipend for the same work, I would encourage you to apply -- after reading the FAQ carefully. It is my thesis project, but I don't need to finish it: my supervising professor is happy with my work and told me to code as much as I can, but fortunately I have no mandatory goal to reach. This doesn't mean that I'll abandon gencli as soon as I graduate: I'd like to finish my work as well as possible, and if I can get paid, so much the better! :-) In a couple of sentences, describe PyPy in relation to Python and link to the site. Describe your CLI (what is that?) backend project, how it fits into PyPy, and why it is a useful thing (to other people) to do. List what you have done (and when you began) up to the application date. Then list your next several steps. Indicate what you anticipate doing before the project starts and what you anticipate doing during the project. (I think the FAQ addresses the question of starting 'early' -- after approval but before the official start date -- but I forget the answer. I recommend you find it.) If you think it needed, add a caveat about minor adjustments of schedule. Mention where the code is being deposited and whether it is publicly accessible. If your CLI backend is already approved in principle (when sufficiently well done) as one of the PyPy backends, say that too. And make sure your proposed mentor(s) have contacted Neal to get the URL to sign up with Google. Thanks for the suggestions, they will be useful. I read the student FAQ but missed the part about starting early: it seems that it's fine, so there should be no problem there. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] oopspec
Hi Seo, Sanghyeon Seo wrote: Is there some documentation on the oopspec attribute? How may one use it in a backend? I think there are no docs about the oopspec attribute. Armin wrote these lines some time ago in response to the same question: The oopspec string tells what abstract list operation this particular ll_*() function implements. For example:

    def ll_prepend(l, newitem):
        ...
    ll_prepend.oopspec = 'list.insert(l, 0, newitem)'

means that ll_prepend() is equivalent to an insert with the index set to zero. In the string, the pseudo-arguments between the ( ) are either real argument names of the ll_ function, or constants. So for example, if a backend has got its own way to implement insert() calls in general, it could figure out from the oopspec that the ll_prepend() helper can be replaced by a custom stub invoking the backend's own version of insertion with an index of 0. That's essentially what the JIT does -- see handle_highlevel_operation() in jit/hintannotator/model.py. The CLI backend uses the oopspec attribute for replacing calls to selected low-level helpers with native builtin methods; the code is still very experimental since it doesn't parse the argument line: it simply forwards the call using the first parameter as the target object and the subsequent parameters as the method's arguments. So far the only recognized oopspec is 'list.append' (i.e., ll_append), which is translated to the 'Add' method. If you want to look at my code, see translator/cli/oopspec.py and the _Call.render method in translator/cli/metavm.py; at the moment I put these files in the cli/ directory, but they are general enough to be shared among multiple backends (metavm would be useful only for backends emitting bytecode, so not for gencl or gensqueak, I guess). 
The rationale behind this is that this way a backend can quickly gain full list support by simply supplying basic operations such as ll_getitem_fast & co.; then each backend can choose which operations to optimize based on its knowledge of the target system. Btw, I have a doubt about oopspec, too: Armin said that the 'oopspec' specifies the abstract operation that each ll_* helper implements; does this abstract operation have to be one of the standard list methods, or can I add new operations? I was thinking of adding a 'll_remove_range' or similar to be used by other helpers such as delslice and company; this way backends can replace ll_remove_range with their almost-surely-present equivalent without having to care about Python-specific logic such as slices and so on. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
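To make the "real argument names or constants" rule concrete, here is a hypothetical sketch (not the actual JIT or gencli code) of how a backend might split an oopspec string into its components:

```python
# Hypothetical sketch: split an oopspec string such as
# 'list.insert(l, 0, newitem)' into the operation name plus its
# pseudo-arguments, classifying each one as a real argument of the
# ll_ function or as a constant.

def parse_oopspec(oopspec, ll_argnames):
    head, rest = oopspec.split('(', 1)
    typename, methodname = head.split('.')
    args = []
    for name in rest.rstrip(')').split(','):
        name = name.strip()
        if name in ll_argnames:
            args.append(('arg', name))         # real argument of the ll_ function
        else:
            args.append(('const', int(name)))  # assume numeric constants here
    return typename, methodname, args
```

With this, a backend could replace ll_prepend with its own insert, passing the constant 0 for the index instead of blindly forwarding the parameters.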
Re: [pypy-dev] oopspec
Hi Armin, Armin Rigo wrote: It's quite open; every piece of code using oopspec should be prepared to see names that it doesn't know about, and ignore them. So feel free to add new names. As a guideline, let's stick as far as possible to the Python name for the method or for the __xxx__ special method name (with __ removed): 'remove_range' looks like it could be called 'delslice' in oopspec. I'm unsure about using 'delslice': it makes me think that such an operation behaves exactly like the corresponding Python statement, but that was not my intent: in my mind the difference between 'remove_range' and 'delslice' is that the former doesn't handle negative indexes, so it is likely that the target has some built-in method that implements it natively. Btw, I haven't thought about it too much, so I don't know if such a refactoring is worth the pain: maybe it is simply more effective to write a delslice method in C# (or whatever language fits other backends) and forward the hypothetical 'delslice' to it, even if that would mean that each backend has to write its own implementation if it isn't satisfied with the default one. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
[pypy-dev] MetaVM design
Hi Maciek, hi all! As I announced in my previous e-mail, here are some ideas on how to use metavm for genjs2. I think the best way to use metavm for your purposes is to write a bunch of MicroInstruction classes designed for emitting source code. For example, I guess that some rpython low-level instructions need to be mapped to an infix operator (such as int_add & co.) while others need to be mapped to some support function (such as int_abs). You could write something like this:

class InfixOperator(MicroInstruction):
    def __init__(self, operator):
        self.operator = operator

    def render(self, generator, op):
        generator.emit('(%s %s %s)' %
                       (op.args[0], self.operator, op.args[1]))

class CallHelper(MicroInstruction):
    def __init__(self, helper):
        self.helper = helper

    def render(self, generator, op):
        arglist = ...  # compute arglist from op.args
        generator.emit('%s(%s)' % (self.helper, arglist))

Then, in your opcodes.py:

opcodes = {
    'int_add': InfixOperator('+'),
    'int_abs': CallHelper('abs'),
}

The difficult thing here is to design the Generator interface in a way that is suitable for emitting both asm code and source code. Maybe we could try to separate the two worlds, in this way:

genoo/metavm.py ---

class Generator:
    def function_signature(self, graph):
        pass
    # put here all methods shared by both AsmGenerator and SourceGenerator

class StackBasedGenerator(Generator):
    def emit(self, instr, *args):
        pass
    def call(self, func_name):
        pass
    def load(self, v):
        pass
    def store(self, v):
        pass
    ...

class SourceCodeGenerator(Generator):
    def emit(self, expression):
        pass
    # and so on

genoo/function.py -

class Function(Node, Generator):
    # some shared code
    ...

class StackBasedFunction(Node, StackBasedGenerator):
    ...

class SourceCodeFunction(Node, SourceCodeGenerator):
    ...

gencli/function.py --

class CliFunction(StackBasedFunction):
    ...

genjs2/function.py --

class JsFunction(SourceCodeFunction):
    ...
Once we have this design, each MicroInstruction subclass can decide whether to target the generic Generator interface or one of its subinterfaces. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
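As a quick sanity check of the design above, here is a runnable miniature of the two MicroInstruction classes, with a toy source-code generator that just collects emitted strings (everything here is invented for illustration):

```python
# Runnable miniature of the MicroInstruction idea: a toy generator
# collects source fragments instead of writing a file.

class MicroInstruction(object):
    def render(self, generator, op):
        raise NotImplementedError

class InfixOperator(MicroInstruction):
    def __init__(self, operator):
        self.operator = operator
    def render(self, generator, op):
        generator.emit('(%s %s %s)' % (op.args[0], self.operator, op.args[1]))

class CallHelper(MicroInstruction):
    def __init__(self, helper):
        self.helper = helper
    def render(self, generator, op):
        arglist = ', '.join(str(a) for a in op.args)
        generator.emit('%s(%s)' % (self.helper, arglist))

class ToyOp(object):
    """Stands in for a flow-graph operation with an args list."""
    def __init__(self, args):
        self.args = args

class ToySourceGenerator(object):
    """Collects emitted source fragments instead of writing them out."""
    def __init__(self):
        self.lines = []
    def emit(self, expression):
        self.lines.append(expression)

opcodes = {'int_add': InfixOperator('+'), 'int_abs': CallHelper('abs')}

gen = ToySourceGenerator()
opcodes['int_add'].render(gen, ToyOp(['x', 'y']))
opcodes['int_abs'].render(gen, ToyOp(['z']))
```

After the two render calls, gen.lines holds the two emitted expressions, '(x + y)' and 'abs(z)'.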
Re: [pypy-dev] SoC
Maciek Fijalkowski wrote: Maybe this is a little bit late, but I've finally got broadband. I would like to thank all of the pypy team for helping me with my SoC. A big 'thank you all' from me, too. I hope we'll be able to repay your trust by completing our work nicely :-). ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
[pypy-dev] Missed pypy-sync
Hi all, I've realized just now that I missed the pypy-sync meeting yesterday... sorry! Btw, here is my activity report: LAST: work on unifying rpython/ and cli/ tests; basic string support in gencli NEXT: DDorf sprint BLOCKERS: uni stuff ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
[pypy-dev] Ladies and gentlemen...
... I'm proud to announce that gencli is the first high-level backend that can compile and run rpystone and richards :-). Some early benchmarks (I post them here so I'll know where to find them when I need them later :-)):

pystone ---
gencli:              177429.668653 rpystones/second
genc:                4926108.374384 rpystones/second
genc w/o backendopt: 1592356.687898 rpystones/second

richards
gencli:              28.652590 ms/iteration
genc:                7.431851 ms/iteration
genc w/o backendopt: 16.203486 ms/iteration

ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] [pypy-svn] r29534 - pypy/dist/pypy/rpython/ootypesystem/test
Samuele Pedroni wrote: Records are really like lltypesystem Structs, although because some of their code was copied from Instance, they may give the impression that you can add fields after the fact; but that should not be done, as it breaks the fact that they are supposed to compare by structure. If you need that, you need some other approach, or to introduce some kind of forward definition. _add_fields should really disappear from Record. Ok, so it's my fault, but that doesn't solve my original problem. The problem is that TestCliTuple.test_inst_tuple_add_getitem in test_rpython.py used to fail because the IL code contained two copies of the same Record. After a bit of investigation I discovered that the reason was that I got two Records that compare equal but have different hashes; again, the problem was that the __cached_hash was different from the real one. Then I tried to reproduce the bug, and so I wrote the failing test, thinking that using _add_fields was fine. But after your comment I've understood that this is not the point, because Record._add_fields is called only in its __init__. For now the problem is worked around in database.py (the lines marked with XXX: temporary hack), but the bug is still there: any idea why I get two equal Records with different hashes? ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] [pypy-svn] r29534 - pypy/dist/pypy/rpython/ootypesystem/test
Ok, I've found the bug; it's entirely my fault, because in translator/cli/record.py I attach a new attribute _name to the Record, so the hash is no longer valid. It's probably safer to store the name in some dictionary. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
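To illustrate the hazard: caching a structural hash and then attaching a new attribute leaves two structurally-equal objects with different hashes, which is exactly what breaks dict-based deduplication. A sketch with invented names:

```python
# Sketch of the hazard: a structural hash cached over the instance
# attributes.  Attaching an extra attribute changes the computed hash, so
# two structurally-equal Records end up with different hashes and a dict
# no longer deduplicates them.

class Record(object):
    def __init__(self, fields):
        self.fields = tuple(sorted(fields))
        self._cached_hash = None

    def __eq__(self, other):
        return self.fields == other.fields   # compare by structure only

    def __hash__(self):
        if self._cached_hash is None:
            relevant = tuple(sorted((k, v) for k, v in self.__dict__.items()
                                    if k != '_cached_hash'))
            self._cached_hash = hash(relevant)
        return self._cached_hash

r1 = Record([('x', 'Signed')])
h1 = hash(r1)                    # hash computed and cached now

r2 = Record([('x', 'Signed')])
r2._name = 'Record_0'            # extra attribute attached before hashing
h2 = hash(r2)
```

r1 and r2 compare equal, yet their hashes differ, so a dict keyed on Records can store both copies side by side.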
[pypy-dev] Tomorrow morning...
... I will graduate!! :-) :-) :-) A big, big 'thank you' to all pypy developers, for having welcomed and supported me during the last months: it has been very nice to work with you (and I hope to continue in the future). I still remember my first post on pypy-dev: I was Looking for a thesis (cit.) and rather confused about the pypy architecture: it was only 5 months ago, but it seems like years. Then I started coding and I've never stopped :-). For those interested, the final version of my thesis is here: http://codespeak.net/svn/user/antocuni/tesi/Implementing%20Python%20in%20.NET.pdf ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] .NET build failures
[EMAIL PROTECTED] wrote: Hi Antocuni, Hi Ben, I ran the .NET build last night and it got a lot further. The errors I get now are: [translation:ERROR] * FAILURE * [translation:ERROR] [translation:ERROR] c:\docume~1\ben~1.you\locals~1\temp\usession-16\main.il(66076) : error -- Duplicate label: '__check_block_5' as you correctly spotted, the problem was in opcodes._check; I've fixed that, thanks for the help. I also notice that you appear to have identifiers with hundreds of underscores appended. Is this expected? [translation:ERROR] Assembled global method memo_get_unique_interplevel_subclass_4__ Yes, it's expected. Appending underscores is a way to get unique names for methods and classes, though I didn't expect to get such a big number of underscores. I changed the code and now I append a unique number at the end of the name instead of a list of underscores. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
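The counter-based renaming could look like this minimal sketch (a hypothetical helper, not the actual gencli code):

```python
# Hypothetical helper: instead of appending underscores until a name is
# unique, append an increasing counter, which keeps generated names short.

class NameManager(object):
    def __init__(self):
        self.seen = {}

    def uniquename(self, name):
        n = self.seen.get(name, 0)
        self.seen[name] = n + 1
        if n == 0:
            return name                # first use: keep the plain name
        return '%s_%d' % (name, n)     # later uses: append a counter

nm = NameManager()
first = nm.uniquename('memo_get_unique_interplevel_subclass')
second = nm.uniquename('memo_get_unique_interplevel_subclass')
```

The second request for the same name gets a '_1' suffix instead of another run of underscores.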
Re: [pypy-dev] A bug in cc?
Armin Rigo wrote: In any case, if the bug is still there in the latest gcc, yes, I'd consider reporting it. PyPy is good at pushing many limits of its backends -- e.g. it gave rise to quite a few LLVM bug reports and I wouldn't be surprised if mono was next :-) mono *will be* the next, definitely :-). ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
[pypy-dev] Reference letters for a PhD application
Hi all, as you might know, I've not yet decided what to do when my time at HHU ends; one of the possibilities is to do a PhD here in Genoa, so I'm trying to fill in the application form. To apply I need one to three reference letters stating my capabilities, especially in research. I was wondering if someone who has an official position in a company/university is willing to write such a letter (hoping it will contain a good evaluation :-). The template for the letters is here: http://www.disi.unige.it/dottorato/AMMISSIONE/adm-rules/RecommendationLetter.html Thanks a lot for the help ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] Status of pypy-c under Windows
Scott Dial wrote: Scott Dial wrote: It does indeed appear to be working. I'm currently in the middle of a run. The run completed successfully; it can be found at the same URL it was at before: http://scottdial.com/pypytest/ This is very cool, thank you! I've noticed that most of the CLI tests are skipped because the .NET SDK is not installed: could you install it please, if it's not a problem? I've got a windows box under vmware, but running all the CLI tests there on a regular basis is not very convenient: it would be very useful to have the daily tests on windows as well. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] Google Summer of Code
Hi Armin, hi all! Armin Rigo wrote: Hi all, A few PyPy developers have signed up as mentors for the Google Summer of Code. Students interested in a summer project about PyPy (or any Python-related project) should have a look at: http://wiki.python.org/moin/SummerOfCode So, outside there is the sun (well, not right now :-)), the temperature is getting warmer and it's already time to think about summer of code again :-). First of all, I don't know who and how many signed up as mentors: if the PyPy project needs more, I'm available to sign up as a mentor as well. Next, I'm also considering the possibility of applying as a student again: the PhD grant I will receive from the 1st of April is very low, so I will likely look for an additional job, and it would be great if the job were pypy-related instead of one of those boring projects I used to do before :-). But before applying I want to be sure not to be unfair, because my PhD is also about PyPy and I don't want to be paid twice for the same work: so my idea would be to do something unrelated to gencli & co., just to be sure it's something that does not fit in my PhD program. Moreover, I would be well disposed to withdraw my application if there are many other interesting pypy proposals, because I would not want to steal from someone else the opportunity of having such a great experience and being involved in the project. What do you think about it? As I said, I basically worry about being judged unfair, so I would like to know the opinions of other developers before I begin thinking of a proposal (and in that case I guess that #pypy is much more convenient for discussing it :-)). ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
[pypy-dev] Towards pypy-jvm
Hi Niko, hi all! I've read in the IRC logs that there has been a bit of discussion about what genjvm still lacks in order to translate pypy. Some weeks ago I also tried to translate pypy-jvm; it seems that the two most important missing features are r_dict and weakrefs. The good news is that with some hacks it's possible to get a pypy version that doesn't make use of r_dict or weakrefs: have a look at this IRC log: http://tismerysoft.de/pypy/irc-logs/pypy/%23pypy.log.20070307 The bad news is that even with those changes, genjvm crashed because of a failed assertion, and then I gave up. I've no clue what's going wrong, but maybe it's not terribly hard to fix. I hope this info can help you in some way :-). ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] Towards pypy-jvm
Niko Matsakis wrote: Last time I checked what java's Hashtable offers, I saw that you can't pass custom hashing and equality functions to it, but maybe there is a simple way to do it that I don't know. No, there isn't, but it shouldn't be too hard to cook up some kind of Hashtable substitute that uses small wrapper classes to handle that. I think that's what you did for C#, right? No, for .NET it was simpler because the standard Dictionary type also accepts an optional class that implements the custom functions, so all I needed to do was to create a class for each unique pair of equality and hashing functions (see cli/comparer.py). ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
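The wrapper-class approach for Java could be sketched like this in Python (invented names; the real thing would be a small Java class wrapping each key):

```python
# Python sketch of the wrapper-class idea: wrap each key in an object that
# delegates __eq__ and __hash__ to user-supplied functions, so a plain
# hash table can be used underneath -- analogous to the comparer class
# gencli generates for .NET's Dictionary.

class KeyWrapper(object):
    def __init__(self, key, eq_fn, hash_fn):
        self.key = key
        self.eq_fn = eq_fn
        self.hash_fn = hash_fn

    def __eq__(self, other):
        return self.eq_fn(self.key, other.key)

    def __hash__(self):
        return self.hash_fn(self.key)

class CustomDict(object):
    def __init__(self, eq_fn, hash_fn):
        self.eq_fn = eq_fn
        self.hash_fn = hash_fn
        self._dict = {}   # an ordinary dict/Hashtable underneath

    def _wrap(self, key):
        return KeyWrapper(key, self.eq_fn, self.hash_fn)

    def __setitem__(self, key, value):
        self._dict[self._wrap(key)] = value

    def __getitem__(self, key):
        return self._dict[self._wrap(key)]

# example: case-insensitive string keys
d = CustomDict(lambda a, b: a.lower() == b.lower(),
               lambda a: hash(a.lower()))
d['Foo'] = 42
```

Any spelling of the key now finds the same entry, because equality and hashing both go through the user-supplied functions.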
Re: [pypy-dev] Towards pypy-jvm
Niko Matsakis wrote: Some weeks ago I also tried to translate pypy-jvm; it seems that the two most important missing features are r_dict and weakrefs. Ok, I implemented r_dicts now and checked it in. Not too much work, actually; it ended up fitting fairly naturally into the existing code. Hi Niko! This is very cool! :-) Most of the r_dict tests are in test_objectmodel, though you may also want to have a look at rpython/test/test_rconstantdict. About the distinction between r_dict and custom dict: did you find any place where they are used interchangeably? I would say that r_dict refers to the rpython-level type (objectmodel.r_dict), while custom dict should refer to the low-level type used by the rtyper (ootype.CustomDict). Also, CustomDict would probably be a better name than RDict for your java class, I guess. I guess I'll look at weakrefs next, though no promises as to when that will be. :) Adding weakrefs to gencli was very simple: I just needed to map lltypesystem.llmemory.WeakGcAddress to 'System.WeakReference', add the straightforward support for constants to cli/constant.py, and add the also-straightforward 'cast_ptr_to_weakadr' and 'cast_weakadr_to_ptr' operations to opcodes.py. I don't know about jvm, but I guess it would not be much more complicated. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
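The semantics the two cast operations rely on are essentially those of a weak reference as in Python's own weakref module; a small sketch of the behavior (the analogies to the cast operations are mine):

```python
# Weak-reference semantics, illustrated with Python's weakref module: a
# weak reference does not keep its target alive, and answers None once
# the target has been collected.

import weakref, gc

class Obj(object):
    pass

o = Obj()
w = weakref.ref(o)        # roughly what cast_ptr_to_weakadr provides
alive_before = w() is o   # roughly cast_weakadr_to_ptr while target lives

del o                     # drop the only strong reference
gc.collect()              # make sure the object is really gone
alive_after = w()         # None once the target is collected
```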
Re: [pypy-dev] PyPy JVM Backend
Carl Friedrich Bolz wrote: I think it is time now to do away with the file descriptor simulation; it was useful at one point but is very silly now. Instead, a subclass of pypy.rlib.streamio.Stream should be created that only delegates to the Java/.NET Stream classes, probably making use of the facilities for buffering that the platforms offer. I think it is perfectly reasonable to not have os.open and friends on pypy.net as long as file works. If another place of pypy still uses os.open I am strongly for fixing that. I agree, and this is why I mentioned the problem :-). I think there are two ways to make it work: 1) write a dummy CliFile (or JvmFile) subclass of Stream, then special-case that class in the backend to map directly to System.IO.FileStream (or the Java equivalent); 2) make CliFile or JvmFile real classes, using the interp-level bindings to forward the methods; then, we should modify open_file_as_stream and construct_stream_tower to instantiate these classes instead of the standard ones. In both cases I also think it's not trivial to get all the combinations of mode/options working, because .NET uses a slightly different set of options than posix to determine how to open a file (I don't know about Java). I think that solution (2) is easier to implement and more readable, but so far it's possible only for gencli, because genjvm doesn't provide interp-level bindings to java libraries. By contrast, solution (1) is not trivial to implement if the interfaces of our Stream class and Java's are very different. Maybe a better solution would be to map the dummy streamio.JvmFile to a class written in Java doing the necessary conversions and forwarding to the native stream class. About the app-level os.* functions: I also think that for now we could simply omit them, but in the long term we should write an alternative implementation based on streamio (IronPython does it in a similar way). 
ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
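A minimal sketch of solution (2), with every name hypothetical: a Stream-like class whose methods only delegate to an underlying platform stream object, shown with a fake native stream standing in for System.IO.FileStream:

```python
# Sketch of the delegation idea (all names invented): PlatformFile
# forwards each Stream method to an underlying native stream, the way a
# real CliFile would forward to System.IO.FileStream via the bindings.

class Stream(object):
    """Tiny stand-in for the pypy.rlib.streamio.Stream interface."""
    def read(self, n):
        raise NotImplementedError
    def write(self, data):
        raise NotImplementedError
    def close(self):
        raise NotImplementedError

class PlatformFile(Stream):
    """Delegates everything to a native stream object."""
    def __init__(self, native_stream):
        self.native = native_stream

    def read(self, n):
        return self.native.Read(n)

    def write(self, data):
        self.native.Write(data)

    def close(self):
        self.native.Close()

class FakeNativeStream(object):
    """Stands in for the .NET/Java stream object in this sketch."""
    def __init__(self, data=''):
        self.data = data
        self.pos = 0
        self.closed = False

    def Read(self, n):
        chunk = self.data[self.pos:self.pos + n]
        self.pos += len(chunk)
        return chunk

    def Write(self, data):
        self.data += data

    def Close(self):
        self.closed = True
```

open_file_as_stream would then instantiate PlatformFile instead of the posix-backed stream, and the rest of the stream tower would be unchanged.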
Re: [pypy-dev] PyPy JVM Backend
Maciek Fijalkowski wrote: Probably one good step would be to make our tools (mostly py.test) work without applevel os.dup and friends (it uses them in a few places, also for capturing, but that's quite shallow and capturing can even be tuned with options). +1 (and maybe add a new --noposix option that turns off all those features when running on a platform != posix) ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] PyPy JVM Backend
Maciek Fijalkowski wrote: Part of it is a SoC anyway, so I don't think we care in what order the SoC is done. (and personally I think it makes sense to provide Java bindings first and then to care about the translation). Well, strictly speaking java bindings for rpython are not part of Paul's SoC proposal, but indeed they are probably the most effective way to implement the other features he promised. Btw, I'm not sure it's a good idea to develop them as the first task in PyPy, because it's not straightforward if you have no experience with the rtyper. My suggestion is to start by porting tests to genjvm and fixing the discovered bugs, because that should be a more newcomer-friendly task; then java bindings for rpython and the I/O layer; finally, translation of pypy-jvm. Paul, what do you think of this plan? ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] DLS paper on RPython
[I respond only to you because I don't think Massimo and Davide are interested in such details; I'm also cc-ing pypy-dev] Niko Matsakis wrote: Ok. I'll look at that some. I may have to rewrite my section on exceptions to describe the type-wrapping solution; I wrote it to describe the newer system I started to check in today, but then I found that RPython allows any object, not just Exceptions, to be thrown, meaning that this newer system won't work as well as I thought (and so is currently disabled). Probably the simplest way to solve the problem is not to allow arbitrary objects to be thrown in RPython; a quick grep inside objspace/std and interpreter didn't show any case in which this feature is used, and in any case it should be very easy to patch. Otherwise, you could special-case raise/except when using subclasses of Exception, and not do the wrapping in those cases; of course you would have to be careful when catching all exceptions: something like this:

try:
    foo()
except:
    bar()

needs to be translated to:

try {
    foo();
} catch(Exception e) {   // this is __builtin__.Exception
    bar();
} catch(ExceptionWrapper e) {
    bar();
}

or something similar. I think we should really try to avoid ExceptionWrapper because it's both ugly and probably veeery slow. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] PyPy sync next week?
Maciek Fijalkowski wrote: Ok, so what about usual Thursday 17:00? Fine for me. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] Work plan for PyPy
Maciek Fijalkowski wrote: * What do we do with ootypesystem backends and external functions? Right now this is implemented by the backends, which tends to be a bit of an ugly implementation. My idea would be to have backend-sensitive implementations which access backend-specific RPython functions for accessing the underlying platform classes/functions/whatever. I agree. Supporting external functions directly in the backend is easier, but now that gencli can call .NET code we don't need those functions to be external anymore; we can just provide a real implementation for them. The problem is that at the moment genjvm can't call java functions, so this solution would not work for it. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] [pypy-svn] r44599 - in pypy/dist/pypy/lang/scheme: . test
Jakub Gustak wrote: In that case, the error messages from pylint: E:142:add_lst.lambda: Using unavailable keyword 'lambda' E:145:mul_lst.lambda: Using unavailable keyword 'lambda' I would say that in this case pylint is wrong, though I agree that in most cases lambda is used in a way that is not allowed in RPython. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] sprint report
Carl Friedrich Bolz wrote: Hi all! Since I didn't manage to come to Europython and the sprint afterwards, I would really appreciate it if somebody wrote a sprint report. Since I guess that is kind of unlikely to happen now, could at least everybody write a paragraph about what he worked on? On the first day I worked with Jakub to make the scheme interpreter translatable. On the second day, I paired with Maciek trying to make pypy-c self-hosting: we fixed a few bugs and now it is self-hosting, as long as pypy-c is translated with the same opcodes as the hosting pypy-c. Finally, I spent the third day working again with Jakub on the scheme interpreter and experimenting with method lookup in the interpreter, without concluding anything interesting :-). ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] rffi feature request
Simon Burton wrote: I would like to expose some functions as external symbols when I build a .so:

def foo(i, j):
    return i+j
foo._expose_ = [rffi.INT, rffi.INT]

well, the above code would produce:

extern int foo(int i, int j) {
    return i+j;
}

(and perhaps an accompanying .h file), thereby providing an interface for other C programs. This is rffi producing rather than consuming a C interface. I think that what you need is similar to what carbonpython does: basically, carbonpython is a frontend for the translation toolchain that takes all the exposed functions/classes in an input file and produces a .NET library to be reused by other programs. Functions are exposed with the @export decorator:

@export(int, int)
def foo(a, b):
    return a+b

The frontend creates a TranslationDriver object, but instead of calling driver.setup() it calls driver.setup_library(), which allows passing more than one entry point. I think all this could be reused for your needs. Then, the next step is to teach the backend how to deal with libraries; for genc, this would probably mean not mangling function names, creating an accompanying .h and modifying the Makefile to produce a .so instead of an executable. About name mangling: one possible solution could be to mangle the names as now and put some #defines in the .h to allow the programmer to use non-mangled names. Probably this is the least invasive solution. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] pypy-jvm translating magic?
Hi Nicholas, Nicholas Riley wrote: I was able to translate targetnopstandalone to Java so it seems to work a little, but obviously there is a big difference between that and PyPy. :) Trawling the IRC logs I found: [20:39] antocuni fijal: unfortunately pypy-jvm does not compile out of the box [20:39] antocuni you have to manually patch two lines in rbigint Is this still true? Is the patch available somewhere? Yes, it's still true; the patch is very small and simple:

Index: rlib/rbigint.py
===
--- rlib/rbigint.py (revision 47111)
+++ rlib/rbigint.py (working copy)
@@ -176,7 +176,8 @@
         return _AsLong(self)

     def tolonglong(self):
-        return _AsLongLong(self)
+        raise OverflowError # XXX
+        #return _AsLongLong(self)

     def tobool(self):
         return self.sign != 0

With this, you should be able to translate and compile pypy-jvm. About performance, there are a lot of factors that could affect it. For example, I found that if we inline too much, performance gets worse, probably because the jvm's JIT compiles the same code again and again. Moreover, PyPy provides a number of optimizations to the interpreter which can greatly improve efficiency: there is a --faassen option that includes all the optimizations known to be good for pypy-c, but I found that maybe not all of them are also good for pypy-jvm. For example, I got better results with shadowtracking and methodcache disabled than with the full --faassen; I don't know why, it's probably worth investigating. To conclude, you should try the following command line to get the best results:

./translate.py -b jvm --inline-threshold 0 targetpypystandalone.py --faassen --no-objspace-std-withshadowtracking --no-objspace-std-withmethodcache

You can also try using only --faassen to see if the performance is worse for you too. Any other experiment with the options is very welcome, of course :-). ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] PyPy JVM builds out of the box
Niko Matsakis wrote: It is no longer necessary to apply any patches to get PyPy JVM to build. It should work out of the box now (it does for me, anyhow). There are still a few external functions unimplemented, though I have some partial implementations hanging around. Wow, that's extremely cool! Thanks for your work and for this very good news, Niko :-). ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
[pypy-dev] Small optimization
Hi all, between some interesting and a lot of not-so-interesting talks here at OOPSLA, I also found the time to hack a bit on PyPy :-). Yesterday I tried the hand-written optimization attached to this mail; it seems to make pystone about 6% faster; it's not much, but not so little that we should ignore it, IMHO. Btw, richards does not show any improvement, as expected. Also, it seems to make pypy-cli slower, at least on Mono; this is somewhat surprising, because IronPython relies entirely on this kind of if/elif test to dispatch all operations... I would have bet that our multimethod dispatching was slower, but it does not seem so. I should also try on MS .NET, though. I know that this is not the way to do such an optimization; I guess the correct way would be to teach the multimethod installer which are the hot cases to test first, before going for a full dispatch; I've no clue how to do it, though. :-) ciao Anto

Index: objspace/descroperation.py
===================================================================
--- objspace/descroperation.py	(revision 47491)
+++ objspace/descroperation.py	(working copy)
@@ -413,6 +413,19 @@
 def _make_binop_impl(symbol, specialnames):
     left, right = specialnames
     def binop_impl(space, w_obj1, w_obj2):
+        if symbol == '+':
+            from pypy.objspace.std.intobject import W_IntObject, wrapint
+            from pypy.objspace.std.longobject import add__Long_Long, delegate_Int2Long
+            from pypy.rlib.rarithmetic import ovfcheck
+            if isinstance(w_obj1, W_IntObject) and isinstance(w_obj2, W_IntObject):
+                x = w_obj1.intval
+                y = w_obj2.intval
+                try:
+                    z = ovfcheck(x + y)
+                except OverflowError:
+                    return add__Long_Long(space, delegate_Int2Long(space, w_obj1), delegate_Int2Long(space, w_obj2))
+                return wrapint(space, z)
+
         w_typ1 = space.type(w_obj1)
         w_typ2 = space.type(w_obj2)
         w_left_src, w_left_impl = space.lookup_in_type_where(w_typ1, left)
___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] Small optimization
Christian Tismer wrote: Isn't this the optimization that Michael already tried? See objspace/std/objspace.py line 65. The command line option is --objspace-std-withsmallint. It turned out to not be worth it in many cases. But it is known as a useful special case in the CPython interpreter loop. Moreover, according to Jim Hugunin, it's also a useful optimization in IronPython; that's why I decided to try it. Btw, I chose int+int for the example, but in theory we could apply the same technique to other cases, say getitem on a list with an int index, or getattr on instances with a string attribute. I mean, these are the common cases, so it might be worth special-casing them; of course not manually, that's not the PyPy way :-). ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
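The "test the hot case first, then fall back to full dispatch" pattern discussed above can be sketched in a few lines of plain Python (not RPython, and not PyPy code; `make_fastpath_binop` and the two lambdas are invented names for illustration):

```python
def make_fastpath_binop(fast_type, fast_impl, generic_dispatch):
    # try the hot, mono-typed case first; fall back to full dispatch
    def binop(a, b):
        if type(a) is fast_type and type(b) is fast_type:
            return fast_impl(a, b)
        return generic_dispatch(a, b)
    return binop

# example: an int+int fast path in front of a slower "generic" add
add = make_fastpath_binop(int,
                          lambda a, b: a + b,
                          lambda a, b: float(a) + float(b))
```

The point is only to show the shape of the guard; in PyPy the installer would have to emit such guards automatically rather than writing them by hand.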
Re: [pypy-dev] bern sprint finished
Carl Friedrich Bolz wrote: So, it seems many people liked the blog thing. How about we start a general PyPy blog where we can all post? Should we try to set something up on codespeak or just keep using blogspot? The latter increases the chances that things happen soon :). Any ideas for a title? pypy.blogspot.com seems already taken by a dead one-entry blog :-(. We could then try to get it picked up by the Python planet. +1 for the PyPy blog: I think it would be cool to have. I think the best would be to have it on codespeak, with maybe one general PyPy blog and several per-person ones. Ideally it would be integrated with svn, because we all know that we prefer to write inside emacs/vi/whatever rather than inside a browser :-), but I admit that setting up such a thing would require much more work. About the name: if pypy.blogspot.com is already taken, we could try another blog site; for example, pypy.blogger.com seems to be free; I've no clue about the advantages/features of one site over another, though. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] bern sprint finished
Carl Friedrich Bolz wrote: See? If we decide to do that we won't get a blog anytime soon :-). But I agree that some rest/svn integration later would be nice. Indeed, not being lazy is not our best value :-). blogger and blogspot are the same thing, I think. I went with Maciek's suggestion and registered morepypy.blogspot.com . Whoever wants to be able to post, please send me your google account name. We should move this thing to codespeak eventually, but for now I am happy to get things moving. Ok, makes sense to me; my google account is, surprisingly enough, [EMAIL PROTECTED] :-). ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] [pypy-svn] r48350 - pypy/dist/pypy/objspace/flow
Christian Tismer wrote: understood and done yesterday, alreasy ^ nice typo: I bet it hasn't been hard :-) ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
[pypy-dev] __file__ attribute of mixed modules
Hi all, I'm having trouble running pypy-cli on Windows. One of the problems is due to the __file__ attribute of mixed modules; currently it's something like path/to/mixed/module/*.py. When site.py runs, it tries to compute the abspath of every loaded module; after a few indirections, abspath calls the posix__getfullpath helper written in C#. The most obvious (and probably most correct) way to implement posix__getfullpath is to delegate to System.IO.Path.GetFullPath; here is where the problems come from, since the CLR implementation of GetFullPath complains if we pass it a name containing an asterisk (Mono doesn't). This prevents pypy-cli from starting. One possible solution would be to place a check inside the C# helper and not call GetFullPath when there is an asterisk in the name (or maybe remove the asterisk, call GetFullPath, and re-insert the asterisk). I think this solution is ugly and hackish. Another solution is to change the way we assign __file__ to mixed modules; e.g., we could use the name of the directory itself, or __init__.py instead of *.py. Or maybe some weird name like 'i_dont_really_exist.py', etc. What do you think? ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
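For illustration, the "remove the asterisk, normalize, re-insert it" workaround described above could look roughly like this Python sketch (`getfullpath_skipping_asterisk` is a hypothetical helper, not PyPy or C# code; `os.path.abspath` stands in for the platform's GetFullPath):

```python
import os

def getfullpath_skipping_asterisk(path):
    # never hand the '*' to the platform routine: normalize only the
    # directory part and re-append the '*.py' pseudo-name afterwards
    if path.endswith('*.py'):
        head = os.path.dirname(path)
        return os.path.join(os.path.abspath(head), '*.py')
    return os.path.abspath(path)
```

As the mail says, this is hackish; changing what __file__ is set to in the first place is probably the cleaner fix.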
Re: [pypy-dev] unicode error when building the JVM version of PyPy
Martijn Faassen wrote: Hi there, I just tried to build the JVM version of the trunk with the following command (which may be altogether wrong): python2.4 translate.py --text --batch --backend=jvm targetpypystandalone.py A while into the translation, I got the following error, which I'm dumping here in case it might be useful to someone: [cut] Might have something to do with the recent unicode work? Hi Martijn, thank you for the report. Indeed, the bug was caused by genjvm not handling unicode constants properly. It has been fixed in revision 48638. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] missing things for making PyPy production ready (for some value of production)
Carl Friedrich Bolz wrote: - for PyPy-JVM: bindings to allow the interaction with arbitrary Java libraries, threading support Moreover, I would add to this list the possibility to compile Python to JVM bytecode instead of Python bytecode; maybe a pypy-jvm would be usable even without it, but e.g. developing applets requires it. Hopefully, I'll be able to work on this (and its counterpart for .NET) in the next months. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] missing things for making PyPy production ready (for some value of production)
Martijn Faassen wrote: - for PyPy-JVM: bindings to allow the interaction with arbitrary Java libraries, threading support Does this already exist for PyPy-CLR? Yes, but it's more or less only a proof of concept. You can use .NET classes from Python, but you can't, e.g., inherit from a .NET class and override a method in Python. Also, I suspect it is a bit slow since it's mostly implemented at application level, though maybe this is not an issue. My personal plan is to port this feature to pypy-jvm sooner or later; it should even be possible to share most of the code between the two backends, I think. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] missing things for making PyPy production ready (for some value of production)
Charles Oliver Nutter wrote: Moreover, I would add to this list the possibility to compile Python to JVM bytecode instead of Python bytecode; maybe a pypy-jvm would be usable even without it, but e.g. developing applets requires it. Hopefully, I'll be able to work on this (and its counterpart for .NET) in the next months. Even just a simple backend python compiler for JVM would be a huge start, since it would cover 90% of the cases where people are interested in any sort of Python-on-JVM. On the JRuby project we've gotten the most interest so far from folks simply wanting access to what the JVM and Java platform have to offer, and only recently from people interested in improved performance. The same could likely apply to Python if PyPy could produce a compiler backend for JVM. Uhm, I'm not sure what you mean by "backend python compiler for JVM". In PyPy's terminology, backends are the last piece of the translation toolchain, and they are needed to translate RPython programs. If you meant such a backend, it's already there and works very well (it can compile the whole PyPy interpreter, for example). If you meant a Python-to-JVM-bytecode compiler, that's what I was talking about too :-). I'm not sure that having it will cover 90% of the people interested in pypy-jvm; to fully exploit Python on the JVM we need to allow access to Java classes from Python (and vice versa), but this is unrelated to having a Python-to-JVM compiler (for example, pypy-cli allows access to .NET classes without compiling Python code to CLI). Also, the current plan for having a Python-to-JVM compiler is not to write it by hand, but to generate it automatically by reusing the same techniques we use for the JIT, i.e. by partially evaluating the main interpreter loop with the bytecode treated as a constant. This approach has the advantages of a) being correct by design, b) working for both pypy-jvm and pypy-cli, and c) working not only for Python, but also for the other languages implemented in RPython.
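To give a feel for the partial-evaluation idea mentioned above, here is a toy illustration (plain Python; none of this is PyPy code, and the two-opcode "bytecode" is invented): specializing an interpreter loop on a constant bytecode string removes the per-opcode dispatch.

```python
def interp(bytecode, x):
    # the generic interpreter: dispatch on every opcode at runtime
    for op in bytecode:
        if op == 'D':      # double
            x = x * 2
        elif op == 'I':    # increment
            x = x + 1
    return x

def specialize(bytecode):
    # the bytecode is a constant here, so resolve the dispatch once,
    # up front, and keep only the sequence of actions ("compilation")
    actions = {'D': lambda v: v * 2, 'I': lambda v: v + 1}
    steps = [actions[op] for op in bytecode]
    def compiled(x):
        for step in steps:
            x = step(x)
        return x
    return compiled
```

A real implementation would emit JVM/CLI bytecode for the unrolled steps instead of building closures, but the principle is the same.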
ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] more problems building PyPy-JVM
Martijn Faassen wrote: Hi there, Hi Martijn, To start off, please note I haven't worked with Java in the past. I'm the proper kind of newbie to try out PyPy-JVM. :) [cut] Do I need a different version of jasmin? 'jasmin -version' reports: Jasmin version: v sable-1.2 Yes, it looks like you need another version of jasmin; mine is this:

$ jasmin -version
Jasmin version: v2.2

I think you can download it from here: http://jasmin.sourceforge.net/ I honestly don't know what "sable" means in your version string. Also, for my next step, assuming I get it to work after this, what would I need to do to start it up to run a script or see a Python prompt? The PyPy docs on the website still describe the JVM backend as being in an earlier stage than it really is, so they don't really say yet. Ah, good point, thanks; I suppose I could update the docs to explain how to install everything needed and how to translate pypy-jvm. Anyway, after the compilation you can find your pypy-jvm.jar inside the translator/goal directory; to run it, simply type:

$ java -jar pypy-jvm.jar

Together with the jar file we also create a script called pypy-jvm that invokes java -jar, so if you are on unix you can also type:

$ ./pypy-jvm

ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] missing things for making PyPy production ready (for some value of production)
Charles Oliver Nutter wrote: Wiring in support for calling Java libraries won't be particularly difficult, given the many reflective capabilities already present for Java. But I agree it needs to be there for most people to find a lot of use. Indeed, I think this is the easiest part to do. However, if PyPy on JVM can be brought to a point where it can actually run real Python apps (especially apps that would be useful in the domains where Java is useful) the integration question can be delayed a bit. Jython, for example, can't run anything. If PyPy could beat that in the short term and get something non-trivial running well, it would be a huge bonus. I think that since we are highly compatible with CPython, existing Python apps are likely to work on pypy-jvm, modulo the fact that we still miss many libraries. For example, as Jean-Paul said, he was able to run a twisted IRC bot using pypy-c; I think/hope it will also work on pypy-jvm once we have implemented modules like socket & co. That's the work I meant, thanks for clarifying. Is this actively being worked on right now? The JVM backend has been mostly written by me and Niko Matsakis. At the moment both of us work on it in our spare time, but if my PhD proposal gets accepted I should be able to work on it much more continuously. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] is anyone actively hacking on the JVM backend?
Niko Matsakis wrote: Right now I am wringing out the last few bugs from a check-in that would allow multiple PyPy-generated interpreters to be used simultaneously. Currently, the code uses a few static fields that would conflict if, for example, someone tried to load both a Python and a Smalltalk interpreter into the same JVM. These static fields were used to allow the helper code (src/PyPy.java and friends) to create instances of exception classes and other RPython-generated code. The new code creates a single instance of the PyPy helper class per generated interpreter and uses non-static methods instead, which should be better. I think this new feature is very cool. Is the logic for all the InterLink stuff in jvm or in oosupport? I think it would be nice to reuse the same ideas for the cli backend as well. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] is anyone actively hacking on the JVM backend?
Niko Matsakis wrote: Right now it's mostly in the JVM backend, though I had to insert a few hooks into oosupport. I wasn't sure whether the CLI had the same interlinking problem. Yes, the problems are nearly the same for the cli backend; basically, we want: 1) to raise RPython exceptions from C#/Java code 2) to create records with a specific set of fields. Right now the solution for (1) is very hackish: I have a stub file called main.exe which contains dummy functions to raise exceptions; pypylib.dll is compiled against main.exe, but then we compile the real executable, so at runtime pypylib.dll is linked against a new main.exe which contains functions with the same name and signature as the dummy ones. Basically, we have a circular dependency between pypylib.dll and main.exe; believe it or not, in .NET it works :-). I know, this solution is very ugly, and moreover it forces the executable to be named main.exe. Interlink sounds like a much better approach :-). For (2), the current solution is to write the records we want directly in C#/Java, and let the backend special-case those when rendering the name. Unfortunately, I can't see a general solution here; the interlink approach works only for records which you only need to create and not to manipulate, but e.g. it can't work for StatResult, because you still need to set individual fields after its creation. I guess this is the reason why StatResult is still implemented the old way in genjvm. I can look into whether more of the code could be pulled into oosupport, but if I recall, the two main places where I had to add hooks were: 1. When rewriting the opcode table, I now check whether the action for an opcode is a virtual method: if so, instead of rewriting it to PushAllArgs, Invoke Method, Store Result, I rewrite it to: Push PyPy object, PushAllArgs, Invoke Method, Store Result. Uhm... sorry, I can't see why this is necessary; could you explain in a bit more detail please? 2. I added prepare_ hooks for oostring, oounicode, and primitives, which basically just push the receiver object. We could probably pull more into oosupport, particularly if it gained the ability to talk about static fields and things. That would be nice; it would be a nice sprint topic, if only there were a sprint planned soon :-). ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
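The core of the change discussed above — moving from static helper state to one helper instance per generated interpreter — can be shown with a toy Python sketch (all names invented; the real code is Java/C# generated by the backends):

```python
class Interpreter:
    # each generated interpreter owns its own helper state, so two
    # interpreters (say, Python and Smalltalk) can coexist in one VM
    # without their exception factories clashing
    def __init__(self, name):
        self.name = name
        self._exc_factory = {}

    def register_exception(self, key, exc_class):
        # called by the generated code to hook up its exception classes
        self._exc_factory[key] = exc_class

    def make_exception(self, key):
        # helper code asks *this* interpreter for an instance,
        # instead of consulting a static field shared by everyone
        return self._exc_factory[key]()
```

With static fields, the second interpreter loaded would overwrite the first one's registrations; with per-instance state, each keeps its own.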
Re: [pypy-dev] mapping C# iterator to Python iterator
amit wrote: Taking the example of the System.Collections.ArrayList class. Whether the class implements the interface IEnumerable can be checked using reflection:

b_type = System.Type.GetType(fullname)
ifaces = b_type.GetInterfaces()
for interface in ifaces:
    if interface == IEnumerable:
        print 'yes'

Problems mapping the two functions

d['__iter__'] = d['GetEnumerator']
d['next'] = d['MoveNext']

to make C# objects iterable in the PyPy-Cli module: a) MoveNext() is not available in either the methods or the staticmethods passed to build_wrapper. Sure, that's expected. MoveNext() is a method defined by the IEnumerator interface, not by IEnumerable. Moreover, I just realized that there is a discrepancy between .NET's MoveNext() and Python's next() methods: the former only advances the iterator without returning the current object, while the latter both advances the iterator and returns the current object; also, we should raise StopIteration when we reach the end of the iterator. In other words, next() should be implemented this way:

def next(self):
    if not self.MoveNext():
        raise StopIteration
    return self.get_Current()

To summarize: - for classes implementing IEnumerable: you should map __iter__ to GetEnumerator - for classes implementing IEnumerator: you should add a next() method like the one above, and add an __iter__ that returns self. b) and assignment of GetEnumerator to __iter__ gives the following TypeError:

TypeError: Can't convert object System.Collections.ArrayList+SimpleEnumerator to Python

Could you check in a failing test, please? I had in mind something like

class Dummy:
    __iter__(self):
        return CS.Current()
    next(self):
        bool = CS.MoveNext()
        if bool == false:
            raise StopIteration

This is also a good solution; you should remember to return CS.get_Current() at the end of next(). One thing I don't understand: what is CS? ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
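Putting the pieces above together, a self-contained sketch of adapting the .NET MoveNext()/get_Current() protocol to Python's iterator protocol (plain Python, not pypy-cli code; `FakeEnumerator` is an invented stand-in for a real CLI enumerator object):

```python
class FakeEnumerator:
    # mimics .NET IEnumerator: MoveNext() advances and reports success,
    # get_Current() returns the element at the current position
    def __init__(self, items):
        self._items = items
        self._pos = -1

    def MoveNext(self):
        self._pos += 1
        return self._pos < len(self._items)

    def get_Current(self):
        return self._items[self._pos]


class EnumeratorIterator:
    # the adapter: __iter__ returns self, next() advances and returns,
    # raising StopIteration when MoveNext() reports exhaustion
    def __init__(self, enumerator):
        self._enum = enumerator

    def __iter__(self):
        return self

    def next(self):
        if not self._enum.MoveNext():
            raise StopIteration
        return self._enum.get_Current()

    __next__ = next  # same method under the Python 3 protocol name
```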
Re: [pypy-dev] System calls in Java
Niko Matsakis wrote: there's been some discussion on Jython lists and some experimenting by JRuby people to use JNA (https://jna.dev.java.net/ ), which is a sort of ctypes for Java, for this sort of purpose. This is perfect! I just tried it out and it worked very well. Hi Niko, I noticed that your recent changes have broken the translation of pypy-jvm on tuatara, because now pypy-jvm requires JNA, and it's not installed there. I tried to download JNA from the web but failed to understand which package(s) I need; do I need only jna.jar and/or the platform-specific jar? I wonder whether it might make sense to include the .jar directly in our svn repo, to drop the external dependency on JNA and simplify the life of those who want to build pypy-jvm by themselves. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] mapping C# Delegates to Python
amit wrote:

/*
 * Now in Python what I might want to do is
 *
 * import System.Delegate
 *
 * class Deleg(Delegate):
 *     __init__(self, ...):
 *         pass

I don't think we want to do this. In .NET, System.Delegate is treated specially by the virtual machine, and it needs an exact declaration of the signature. The interoperability between Python and delegates is two-way: 1) having a .NET delegate in a Python variable and being able to call it using the usual Python syntax 2) having a .NET method that takes a delegate and being able to pass it a Python callable. Point 1 is quite easy: it's just a matter of mapping Invoke onto __call__. Point 2 is probably very hard to solve in the general case, because .NET delegates are statically typed and Python callables are not; I haven't thought very deeply about the problem, but I think the solution involves creating a thin wrapper method on the fly. I suggest doing only point 1 for now. The hardest part could be finding a way to test it properly, because we need to find a place in the standard library that returns a delegate we can call; maybe you can find something useful here: http://msdn2.microsoft.com/en-us/library/at6657xa.aspx ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
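Point 1 above — mapping a delegate's Invoke onto __call__ — can be sketched in plain Python (not pypy-cli code; `FakeDelegate` is an invented stand-in for a real System.Delegate instance):

```python
class FakeDelegate:
    # mimics the one method we need from a .NET delegate
    def __init__(self, func):
        self._func = func

    def Invoke(self, *args):
        return self._func(*args)


class DelegateWrapper:
    # expose the delegate through the usual Python call syntax
    def __init__(self, delegate):
        self._delegate = delegate

    def __call__(self, *args):
        return self._delegate.Invoke(*args)
```

Point 2 (passing a Python callable where a delegate is expected) would additionally need a typed wrapper generated on the fly, as the mail notes.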
Re: [pypy-dev] System calls in Java
holger krekel wrote: If you put it somewhere below svn/pypy/dist please insert at least an XXX into the LICENSE file and provide a reference to the project/license of JNA. Done. I added the following lines to the LICENSE file:

License for 'pypy/translator/jvm/src/jna.jar'
=============================================

The file 'pypy/translator/jvm/src/jna.jar' is licensed under the GNU Lesser General Public License, of which you can find a copy here: http://www.gnu.org/licenses/lgpl.html

I didn't add any info about the copyright because I couldn't find any copyright notice either in the sources or on the web site. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] [pypy-svn] r49617 - pypy/branch/clr-module-improvements/pypy/module/clr
Hi Amit, [EMAIL PROTECTED] wrote:

+assembliesToScan = ["/usr/lib/mono/1.0/mscorlib.dll",
+                    "/usr/lib/mono/1.0/System.dll",
+                    "/usr/lib/mono/1.0/System.Web.dll",
+                    "/usr/lib/mono/1.0/System.Data.dll",
+                    "/usr/lib/mono/1.0/System.Xml.dll",
+                    "/usr/lib/mono/1.0/System.Drawing.dll"
+]
+assems = []
+for assemblyName in assembliesToScan:
+    assems.append(System.Reflection.Assembly.LoadFile(assemblyName))

sorry for bugging you again :-), but this is not what we want. We don't need to preload assemblies that the user might not want. For example, in IronPython, to access the System.Xml namespace you need to explicitly load System.Xml.dll by using one of the clr.LoadAssembly* methods. Moreover, what happens if I want to load an assembly that is not on this list? And what if I load an assembly after the list of namespaces has already been computed? I think what you need to do is to maintain a cache of valid namespaces, and check whether a new assembly has been loaded every time you import something. You can easily get the list of currently loaded assemblies by calling AppDomain.CurrentDomain.GetAssemblies(). ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
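The caching scheme suggested above could look roughly like this (plain Python sketch with fake reflection objects; `NamespaceCache`, `FakeType`, `FakeAssembly` and `get_assemblies` are all hypothetical stand-ins for the real AppDomain/Assembly reflection calls):

```python
class FakeType:
    # stands in for a System.Type: only the Namespace property matters here
    def __init__(self, namespace):
        self.Namespace = namespace


class FakeAssembly:
    # stands in for a System.Reflection.Assembly
    def __init__(self, namespaces):
        self._types = [FakeType(ns) for ns in namespaces]

    def GetTypes(self):
        return self._types


class NamespaceCache:
    def __init__(self, get_assemblies):
        # get_assemblies stands in for AppDomain.CurrentDomain.GetAssemblies
        self._get_assemblies = get_assemblies
        self._scanned = set()
        self._namespaces = set()

    def is_valid_namespace(self, name):
        # on every import, rescan only the assemblies loaded since last time
        for assembly in self._get_assemblies():
            if id(assembly) not in self._scanned:
                self._scanned.add(id(assembly))
                for t in assembly.GetTypes():
                    if t.Namespace:
                        self._namespaces.add(t.Namespace)
        return name in self._namespaces
```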
Re: [pypy-dev] adding support for Generic classes in the /module/clr
On Dec 12, 2007 2:11 AM, amit [EMAIL PROTECTED] wrote: Support for generic classes --- the IronPython way ---

from System.Collections.Generic import *
l = List[str]()
l.Add("Hello")
l.Add("Hi")
l.Add(3)    # fails with ValueError
d = Dictionary[str, int]()
d.Add('abc', 1)
d.Add('def', 2)
for i in d.Keys:
    print d[i]

So here in pypy-clr what we'd like to do is

import System.Collections.Generic.LinkedList<str>

We can't use this syntax, because the parser would complain. We need to use the same trick as IronPython, i.e. use square brackets.

l = System.Collections.Generic.LinkedList<str>()
l.Add("hello")
l.Add("hi")
l.Add(3)    # throws a ValueError exception

The import is to be analysed to see whether it's an import of a generic class, and then loaded as

clr.load_cli_generic_class('System.Collections.Generic', 'LinkedList<str>')

No, I don't think we want to do this. I think the best is to mimic IronPython as much as possible. I.e., let the user import the generic class by itself, e.g. import System.Collections.Generic.List; then, if you do List[str], it returns a concrete class that can be instantiated as in IronPython. The implementation of load_cli_generic_class is a black box to me as of now. From the import line we can check if the class is generic, and then reflection can be used to determine: a) the generic type b) the type arguments c) parameter attributes and more. I hope that somehow the existing build_wrapper could be used after putting in some checks, but I am not sure how. Well, basically you have to check whether a class is generic; if it is, we need to construct it using another metaclass which has support for __getitem__; something like this:

class MetaGenericCliClassWrapper(...):
    def __getitem__(self, *types):
        # do something
        return load_cli_class(...)

But before you can use this metaclass, you need to ensure that load_cli_class works fine with concrete instantiations of generic classes. So, the first thing to do is to write a test like this:

def test_generic_class(self):
    ListInt = load_cli_class('System.Collections.Generic', 'List`1<int32>')
    x = ListInt()
    x.Add(42)
    # etc...

Then we need to write other tests for the error cases (e.g., test that if you put a string inside x it raises ValueError). Maybe these tests would work out of the box, I don't know. After that, all we need to do is to provide some nice syntactic sugar like square brackets & co. ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] [pypy-sprint] AVM2 / Tamarin backend at the sprint
Toby Watson wrote: Hi Antonio, Hi Toby, Thanks for the advice and pointers into the code. Would you say this is still a fair assessment of the tasks that have to be done to target a new backend? (pulled from PyPy[cli-backend])
• map ootypesystem's types to CLI Common Type System's types;
• map ootypesystem's low level operations to CLI instructions;
• map Python exceptions to CLI exceptions;
• write a code generator that translates a flow graph into a list of CLI instructions;
• write a class generator that translates ootypesystem classes into CLI classes.
Was this pre or post the refactoring you describe? This list was written before the refactoring, but I think it's more or less still valid. First of all, you need a strong understanding of both ootypesystem and the type system of the VM you want to target: then you can think about how to map the types; for CLI and JVM it was mostly straightforward, but maybe it wouldn't be so for AVM2, e.g. if it doesn't support classes in the same way ootypesystem expects. After you have mapped the types, mapping instructions is just a matter of coding and it shouldn't be too hard. The paragraph about exceptions is mostly historical; at least at the beginning, you shouldn't need to do anything special about exceptions as long as you keep the hierarchy of RPython exceptions separate from the hierarchy of the VM exceptions. The cool thing is that the hardest point (the code generator) has already been implemented in oosupport, and it's very easy to subclass it to target yet another VM.
If you want to have a closer look at how each point is implemented, here are some pointers into the code:
* mapping types:
  - for cli, it's cli/cts.py; the main entry point is the function lltype_to_cts
  - for jvm, it's jvm/typesystem.py; however, the main entry point is the function lltype_to_cts in jvm/database.py
* mapping operations: see cli/opcodes.py and jvm/opcodes.py
* code generator: the base class is in oosupport/function.py, subclassed in cli/function.py and jvm/node.py (class GraphFunction); the Function class knows how to deal with graphs, but it delegates the actual code generation to something specific to each backend; for cli, it's in cli/ilgenerator.py, for jvm it's the GraphFunction class itself.
* class generator: cli/class_.py and the Class class in jvm/node.py
ciao Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
[pypy-dev] First draft of the openjdk proposal
Hi all, I just finished writing the first draft of the openjdk proposal; you can find it under extradoc/proposal/openjdk-challenge.txt. It would be nice to hear comments, remarks and suggestions from whoever is interested, as soon as possible, since the deadline is close (2nd of March). ciao, Anto ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
Re: [pypy-dev] enchancing flow objspace
Carl Friedrich Bolz wrote: weell. That sounds like an extreme hack to me. Maybe we should do something else, like 'assert 0, XXX' instead of just putting XXXs everywhere. +1 ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
[pypy-dev] [Fwd: Re: Fan Programing Language]
Hi Niko, hi all, there is an interesting thread going on on the jvm-languages mailing list; among other things, I discovered that the JVM can handle exceptions much faster if you override the fillInStackTrace method to do nothing instead of building the traceback. I think that since we don't rely on JVM tracebacks for exceptions, overriding that method in all our exception classes might lead to some speedup. Note that HotSpot is smart enough to optimize well the case in which you raise a prebuilt exception, but in all cases in which you have to dynamically construct a new exception (e.g., OperationError) it can't. ciao, Anto

Original Message
Subject: [jvm-l] Re: Fan Programing Language
Date: Tue, 22 Apr 2008 15:05:04 -0400
From: John Cowan [EMAIL PROTECTED]
Reply-To: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
References: [EMAIL PROTECTED] [EMAIL PROTECTED] [EMAIL PROTECTED] [EMAIL PROTECTED]

On Tue, Apr 22, 2008 at 2:41 PM, Jon Harrop [EMAIL PROTECTED] wrote: 2) you are allocating a new exception every time; the optimization described here [1] works only if the exception is pre-allocated. [1] http://blogs.sun.com/jrose/entry/longjumps_considered_inexpensive I think that is not thread safe. Specifically, when the branch conveys information (passed as arguments using a tail call, or embedded in the exception) then you must use a locally allocated exception, right? Yes, you must. However, what makes allocating an exception expensive is the fillInStackTrace method, which has to walk the JVM stack. If you override that in your exception class with a do-nothing method, then locally allocating exceptions is very cheap. -- GMail doesn't have rotating .sigs, but you can see mine at http://www.ccil.org/~cowan/signatures ___ pypy-dev@codespeak.net http://codespeak.net/mailman/listinfo/pypy-dev
[pypy-dev] Running pysqlite on pypy
Hi Gerhard, hi all, in the last days, we have been trying to run django on pypy, using the ctypes based implementation of pysqlite. In doing this, we encountered a problem; we have exchanged a bit of private mails, so I try to sum up here: This snippet explodes when run with pysqlite-ctypes, either on cpython or pypy-c: from pysqlite2.dbapi2 import connect db = connect(':memory:') db.execute('BEGIN IMMEDIATE TRANSACTION') pysqlite2.dbapi2.Cursor object at 0xb7dc4f6c db.execute('COMMIT') Traceback (most recent call last): File stdin, line 1, in ? File /home/exarkun/Scratch/Source/pysqlite3/pysqlite2/dbapi2.py, line 315, in execute return cur.execute(*args) File /home/exarkun/Scratch/Source/pysqlite3/pysqlite2/dbapi2.py, line 483, in execute raise self.connection._get_exception() pysqlite2.dbapi2.OperationalError: SQL logic error or missing database The very same thing happens on python 2.4 + pysqlite2 (non-ctypes version): from pysqlite2.dbapi2 import connect db = connect(':memory:') db.execute('BEGIN IMMEDIATE TRANSACTION') pysqlite2.dbapi2.Cursor object at 0xb7cb0860 db.execute('COMMIT') Traceback (most recent call last): File stdin, line 1, in ? pysqlite2.dbapi2.OperationalError: cannot commit - no transaction is active However, it works perfectly on cpython 2.5 + sqlite3: from sqlite3.dbapi2 import connect db = connect(':memory:') db.execute('BEGIN IMMEDIATE') sqlite3.Cursor object at 0xf7cff050 db.execute('COMMIT') sqlite3.Cursor object at 0xf7cff020 Samuele pointed out that maybe it's just a difference between pysqlite2 and pysqlite3; after more digging, he changed pysqlite-ctypes to print every SQL statement before sending it to sqlite; what he got is this: Python 2.5.1 (r251:54863, Jan 17 2008, 19:35:17) [GCC 4.0.1 (Apple Inc. build 5465)] on darwin Type help, copyright, credits or license for more information. 
>>> from pysqlite2 import dbapi2
>>> db = dbapi2.connect(':memory:')
>>> db.execute('BEGIN IMMEDIATE TRANSACTION')
BEGIN IMMEDIATE TRANSACTION
<pysqlite2.dbapi2.Cursor object at 0x55e90>
>>> db.execute('COMMIT')
COMMIT
COMMIT
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "pysqlite2/dbapi2.py", line 318, in execute
    return cur.execute(*args)
  File "pysqlite2/dbapi2.py", line 489, in execute
    raise self.connection._get_exception()
pysqlite2.dbapi2.OperationalError: SQL logic error or missing database

The double COMMIT is probably what causes the problem; I'm not sure whether it's a bug in pysqlite-ctypes or expected behavior. Gerhard, what do you think about all of this? What's the best way to solve it? ciao, Anto
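For comparison, the same BEGIN/COMMIT interplay can be reproduced with CPython's stdlib sqlite3 module: passing isolation_level=None at connect time puts the module in autocommit mode, so it never emits implicit BEGIN/COMMIT statements of its own and explicit transaction control through execute() works. A minimal sketch (stdlib sqlite3 only, not pysqlite-ctypes, as an illustration of the behavior being discussed):

```python
import sqlite3

# With isolation_level=None the sqlite3 module stays in autocommit mode and
# never issues its own BEGIN/COMMIT, so the explicit statements below do not
# collide with implicit ones (no double COMMIT, no OperationalError).
db = sqlite3.connect(':memory:', isolation_level=None)
db.execute('CREATE TABLE t (x INTEGER)')
db.execute('BEGIN IMMEDIATE TRANSACTION')
db.execute('INSERT INTO t VALUES (1)')
db.execute('COMMIT')
print(db.execute('SELECT x FROM t').fetchone()[0])  # 1
```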
Re: [pypy-dev] Running pysqlite on pypy
Gerhard Häring wrote: I just pushed an ad-hoc fix for this particular problem to the hg repo:

changeset:   328:c42db28b5031
branch:      ctypes
tag:         tip
parent:      317:89e6da6ea1cb
user:        Gerhard Haering [EMAIL PROTECTED]
date:        Sat May 17 03:19:37 2008 +0200
summary:     Quick fix: only use implicit transactions when autocommit is off.

HTH good night; just shout if there's anything else that needs fixing

Hi Gerhard, thank you very much for the fix; indeed, your fix, together with a small patch of mine, solved the problem:

--- a/pysqlite2/dbapi2.py	Tue Jan 15 16:31:23 2008 +0100
+++ b/pysqlite2/dbapi2.py	Sat May 17 09:57:52 2008 +0200
@@ -236,7 +236,7 @@ def connect(database, **kwargs):
     return factory(database, **kwargs)

 class Connection(object):
-    def __init__(self, database, isolation_level="", detect_types=0, *args, **kwargs):
+    def __init__(self, database, isolation_level=None, detect_types=0, *args, **kwargs):
         self.db = c_void_p()
         ret = sqlite.sqlite3_open(database, byref(self.db))

Now the django test that was previously failing passes; I've not run the others yet, but I expect them to pass as well, or to be broken for other reasons :-) One more question: I have two versions of pysqlite-ctypes around:

1) the one which was originally at http://hg.ghaering.de/pysqlite3/; that link is now broken, but you can still download it from http://codespeak.net/~cfbolz/pysqlite3.tar.gz

2) the one in the official pysqlite repo; I got it by doing:

hg clone http://oss.itsystementwicklung.de/hg/pysqlite/ pysqlite
hg up -C ctypes

I found that while (2) is supposed to be newer, it is missing some features that (1) has, in particular Connection.create_function (which is needed by django). I'm not an expert on mercurial, but it seems that some changesets got lost when moving from the old url to the new one. Thank you again for your efforts! ciao, Anto
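The isolation_level default matters because of the implicit transactions mentioned in Gerhard's fix. A small sketch of the difference, using the stdlib sqlite3 module as a stand-in (Connection.in_transaction needs Python >= 3.2; the attribute names are stdlib ones, not pysqlite-ctypes internals):

```python
import sqlite3

# Default isolation_level (""): the module opens an implicit transaction
# before DML statements such as INSERT.
implicit = sqlite3.connect(':memory:')
implicit.execute('CREATE TABLE t (x INTEGER)')
implicit.execute('INSERT INTO t VALUES (1)')
print(implicit.in_transaction)   # True: an implicit BEGIN happened

# isolation_level=None: pure autocommit, no implicit BEGIN is ever issued.
autocommit = sqlite3.connect(':memory:', isolation_level=None)
autocommit.execute('CREATE TABLE t (x INTEGER)')
autocommit.execute('INSERT INTO t VALUES (1)')
print(autocommit.in_transaction)  # False: no implicit BEGIN
```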
Re: [pypy-dev] problems running pypy-jvm
Hi Maciek, Maciej Fijalkowski wrote: First of all, I cannot get jar to work. It's complaining about '@' in front of paths. Even if I remove it, it still cannot find the Main class. What exactly do you mean? Could you paste the error message, please? When I run it by hand I get:

debug: WARNING: library path not found, using compiled-in sys.path
'import site' failed
Error calling sys.excepthook:
debug: OperationError:
debug:  operror-type: ImportError
debug:  operror-value: cannot import name 'curdir'

What's wrong? No clue. I translated pypy-jvm yesterday, just after the merging of the less-meta-instance branch, and it works fine:

[EMAIL PROTECTED] goal $ ./pypy-jvm
Python 2.4.1 (pypy 1.0.0 build 56259) on linux2
Type "help", "copyright", "credits" or "license" for more information.
``nothing is true''

If I move pypy-jvm to another dir and rename my wc to prevent it from finding the compiled-in sys.path, I get another error, which seems reasonable:

[EMAIL PROTECTED] tmp $ ./pypy-jvm
debug: WARNING: library path not found, using compiled-in sys.path
'import site' failed
Python 2.4.1 (pypy 1.0.0 build 56259) on linux2
Type "help", "copyright", "credits" or "license" for more information.
debug: OperationError:
debug:  operror-type: ImportError
debug:  operror-value: No module named _pypy_interact

Which revision is your pypy-jvm? Any clue why it doesn't find the library path? What does ./pypy-jvm -c 'import sys; print sys.path' print? ciao, Anto
Re: [pypy-dev] new -O option
Armin Rigo wrote: While I'm at it, should I remove --allworkingmodules and make it the default? Should this also include the thread module? If additionally the default value for --opt were 3, then just typing ./translate.py would produce a pypy-c executable of the most recommended and most useful kind. +1 The only drawback I see is that with --allworkingmodules the translation takes longer to complete, but I agree that the casual user probably cares more about functionality than translation speed. Does --allworkingmodules also increase the RAM usage? ciao, anto
[pypy-dev] bugday
Hi pypy-dev, we are trying to organize the first PyPy Bugday, but it seems it's hard to find a proper date for it; so, we've set up a poll on doodle: http://doodle.com/73tepx6ktg9cyd29 The poll is mainly targeted at core pypy developers, but of course everyone can vote. Please spend two minutes voting, so that we can pick the best date for it. ciao, a bientot, Anto & Armin
Re: [pypy-dev] Sprint dates
Armin Rigo wrote: Hi, About the February sprint, the proposed dates (mostly the original ones) are: 7-14th. After sorting out the Duesseldorf situation, these dates could be ok too. Does anyone have strong objections? The dates should be ok for me, though I can't promise I will come, because I'm already doing a lot of traveling in that period and I'm not sure I feel like doing another trip... I'll decide later. ciao, Anto
Re: [pypy-dev] Threaded interpretation (was: Re: compiler optimizations: collecting ideas)
Paolo Giarrusso wrote: Hi all, Hi! after the completion of our student project, I have enough experience to say something more. We wrote in C an interpreter for a Python subset and we could make it much faster than the Python interpreter (50%-60% faster). That was due to the usage of indirect threading, tagged pointers and unboxed integers, and a real GC (a copying one, which is enough for the current benchmarks we have - a generational GC would be more realistic but we didn't manage to do it). That's interesting, but it says little about the benefit of threaded interpretation itself, as the speedup could come from the other optimizations. For example, I suspect that for the benchmark you showed, most of the speedup comes from tagged pointers and the better GC. Is it possible to make your eval loop non-threaded? Measuring the difference with and without indirect threading would give a good hint of how much you gain from it. What kind of bytecode do you use? The same as CPython or a custom one? E.g. I found that if we want to handle the EXTENDED_ARG CPython opcode properly, it is necessary to insert a lot of code before jumping to the next opcode. Moreover, tagging pointers with 1 helps a lot for numerical benchmarks, but it is possible that it causes a slowdown for other kinds of operations. Do you have non-numerical benchmarks? (though I know that it's hard to get a fair comparison, because the Python object model is complex and it's not easy to write a subset of it in a way that is not cheating) Finally, as Carl said, it would be nice to know what kind of subset it is. E.g. does it support exceptions, sys.settrace() and sys._getframe()?
In fact, we are at least 50% faster on anything we can run, but also on this benchmark, with the usage of unboxed integers and tagged pointers (we tag pointers with 1 and integers with 0, like V8 does and SELF did, so you can add integers without untagging them):

def f():
    y = 0
    x = 1000
    while x:
        x -= 1
        y += 3
    return y - 2873

And since we do overflow checking (by checking EFLAGS in assembly, even if the code could probably be improved even more), I don't think a comparison on this is unfair in any way. Is your subset large enough to handle e.g. pystone? What is the result? 1) in Python a lot of opcodes are quite complex and time-consuming, That's wrong for a number of reasons - the most common opcodes are probably loads from the constant pool, and loads and stores to/from the locals (through LOAD/STORE_FAST). Right now, our hotspot is indeed the dispatch after the LOAD_FAST opcode. If you run benchmarks like the one shown above, I agree with you. If you consider real-world applications, unfortunately there is more than LOAD_CONST and LOAD_FAST: GETATTR, SETATTR, CALL, etc. are all much more time-consuming than LOAD_{FAST,CONST}. That's your problem - threading helps when you spend most of the time on dispatch, and efficient interpreters get to that point. The question is: is it possible for a full python interpreter to be as efficient as you define it? 2) due to Python's semantics, it's not possible to just jump from one opcode to the next, as we need to do a lot of bookkeeping, like remembering what was the last line executed, etc. No, you don't need that, and not even CPython does it. For exception handling, just _when an exception is thrown_, [cut] Sorry, I made a typo: one needs to remember the last *bytecode* executed, not the last line. This is necessary to implement sys.settrace() properly. I never mentioned exception handling, that was your (wrong :-)) guess.
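The tagging scheme described above can be sketched in a few lines. This is only an illustration in Python of the bit-level trick (the real thing lives in C machine words): integers get tag bit 0 by shifting left, heap references would get tag bit 1. Because tagged integers end in a 0 bit, (a << 1) + (b << 1) == (a + b) << 1, so addition needs no untagging:

```python
# Tagged-integer sketch (V8/SELF style): low bit 0 marks an unboxed
# integer, low bit 1 would mark a heap pointer.
def tag_int(n):
    return n << 1          # value stored in the upper bits, tag bit = 0

def untag_int(t):
    return t >> 1

def is_int(t):
    return (t & 1) == 0

a, b = tag_int(5), tag_int(7)
s = a + b                   # add directly, no untagging needed
print(is_int(s), untag_int(s))  # True 12
```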
If the interpreter loop is able to overflow the Icache, that should be fought through __builtin_expect first, to give hints for jump prediction and lay out slow paths out-of-line. I think that Armin once tried to use __builtin_expect, but I don't remember the outcome. Armin, what was it? Well, my plan is first to try, at some point, to implant threading into the Python interpreter and benchmark the difference - it shouldn't take long, but it has a low priority currently. That would be cool, tell us when you have done it :-).
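To make the dispatch-overhead discussion concrete, here is a toy switch-style interpreter loop (hypothetical opcodes, not CPython's or PyPy's): every handler falls back to the single central dispatch point. Indirect threading instead replicates the "fetch the next handler and jump" step at the end of each handler, which gives the CPU's indirect-branch predictor one predictable site per opcode instead of one shared site:

```python
# Minimal central-dispatch bytecode loop.  All handlers return to the one
# dispatch point at the top of the while loop; threaded dispatch would
# duplicate that step inside each handler.
def run(code):
    stack = []
    pc = 0
    while pc < len(code):          # the single, shared dispatch point
        op, arg = code[pc]
        if op == 'PUSH':
            stack.append(arg)
        elif op == 'ADD':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == 'JNZ':          # jump to arg if top of stack is nonzero
            if stack.pop():
                pc = arg
                continue
        pc += 1
    return stack

print(run([('PUSH', 2), ('PUSH', 3), ('ADD', None)]))  # [5]
```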
Re: [pypy-dev] Threaded interpretation (was: Re: compiler optimizations: collecting ideas)
Carl Friedrich Bolz wrote: About better implementations of the bytecode dispatch I am unsure. Note however, that a while ago we did measurements to see how large the bytecode dispatch overhead is. I don't recall the exact number, but I think it was below 10%. I think it's somewhat more. There is the 'rbench' module that contains geninterpreted versions of both richards and pystone; IIRC last time I tried they were ~50% faster than their interpreted counterparts, on both pypy-c and pypy-cli. Of course, with geninterp you remove more than just the interpretation overhead, as e.g. locals are stored on the stack instead of on a frame. ciao, Anto
Re: [pypy-dev] Support for __getitem__ in rpython?
Paolo Giarrusso wrote: There are at least two ways, once you have a singleton (maybe static) None object around: - box all integers and use only pointers - the slow one; - tagged integers/pointers that you already use elsewhere. So integers of up to 31/63 bits get represented directly, while the other ones go through pointers. I think you are confusing levels: here we are talking about RPython, i.e. the language which our Python interpreter is implemented in. Hence, RPython ints are really like C ints, and you don't want to manipulate C ints as tagged pointers, do you? ciao, Anto
Re: [pypy-dev] Threaded interpretation (was: Re: compiler optimizations: collecting ideas)
Paolo Giarrusso wrote: the question is: is it possible for a full python interpreter to be as efficient as you define it? Well, my guess is: if Prolog, Scheme and so on can, why can't Python? A possible answer is that python is much more complex than prolog; for example, in PyPy we also have rpython implementations of both prolog and scheme (though I don't know how complete the latter is). I quickly counted the number of lines for the interpreters, excluding the builtin types/functions, and we have 28188 non-empty lines for python, 5376 for prolog and 1707 for scheme. I know that the number of lines does not mean much, but I think it's a good hint about the relative complexities of the languages. I also know that being more complex does not necessarily mean that it's impossible to write an efficient interpreter for it; it's an open question. Thanks for the interesting email, but unfortunately I don't have time to answer right now (xmas is coming :-)), so I'll just drop a few quick notes: And while you don't look like that, the mention of tracking the last line executed seemed quite weird. And even tracking the last bytecode executed looks weird, even if maybe it is not. I'm inspecting CPython's Python/ceval.c, and the overhead for instruction dispatch looks comparable. The only real problem I'm getting right now is committing the last bytecode executed to memory. If I store it in a local, I have no problem at all; if I store it in the interpreter context, it's a store to memory, so it hurts performance a lot - I'm still wondering about the right road to take. By tracking the last bytecode executed I was really referring to the equivalent of f_lasti; are you sure you can store it in a local and still implement sys.settrace()? Ok, just done it, the speedup given by indirect threading seems to be about 18% (see also above). More proper benchmarks are needed though. That's interesting, thanks for trying it.
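The f_lasti point can be demonstrated from pure Python: a trace function installed with sys.settrace() can read frame.f_lasti, the offset of the last bytecode instruction executed, which is why an interpreter must keep that value observable whenever tracing may be active. A small sketch (tracer and f are hypothetical names; sys.settrace and f_lasti are the real stdlib API):

```python
import sys

offsets = []

def tracer(frame, event, arg):
    # f_lasti is the offset of the last bytecode instruction executed
    # in this frame; record it for every trace event we see.
    offsets.append((event, frame.f_lasti))
    return tracer            # keep tracing inside the frame too

def f():
    x = 1
    return x + 1

sys.settrace(tracer)
f()
sys.settrace(None)
print(len(offsets) > 0)      # True: the hook observed bytecode offsets
```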
I wonder whether I should try indirect threading in pypy again sooner or later. Btw, are the sources for your project available somewhere? And as you say in the other mail, the overhead given by dispatch is quite a bit more than 50% (maybe). No, it's less. 50% is the total speedup given by geninterp, which removes dispatch overhead but also other things, like storing variables on the stack and turning python-level flow control into C-level flow control (so e.g. loops are expressed as C loops). Am I correct in assuming that geninterpreting _basically_ pastes the opcode handlers together? I guess with your infrastructure, you can even easily embed the opcode parameters inside the handlers; it's just a trivial partial evaluation. That's (part of) what our JIT is doing/will do. But it does much more than that, of course. Merry Christmas to you and all pypyers on the list! ciao, Anto
[pypy-dev] [Fwd: [Fwd: Re: Threaded interpretation]]
Hi, Antoine Pitrou told me that his mail got rejected by the mailing list, so I'm forwarding it. Forwarded message From: Antoine Pitrou solip...@pitrou.net To: pypy-dev@codespeak.net Subject: Re: Threaded interpretation Date: Fri, 26 Dec 2008 21:16:36 + (UTC) Hi people, By reading this thread I had the idea to write a threaded interpretation patch for py3k. The speedup on pybench and pystone is 15-20%. http://bugs.python.org/issue4753 Regards Antoine.
Re: [pypy-dev] Non-circular path through pypy module imports
VanL wrote: Hello, Hi! Sorry for the late response; everyone thought that someone else would answer, but in the end nobody did :-) I have been going through the pypy sources recently and I wanted to start from the leaf modules first (in terms of import dependencies)... but there don't appear to be any leaves. For example, starting with baseobjspace in the interpreter, it has 30 or so non-stdlib imports, many of them delayed by placing them in functions. Even something like rarithmetic eventually pulls in most of the lltype hierarchy. The closest I found to a leaf module was extregistry.py. Is there a reason for the circularities that I just don't get? If not, why are there so many? Well, at least some of the circularities are partly our fault; we know that the import relationships between our modules are currently a mess. Sorry for that. Anyway, I don't think that starting from the leaves is the best way to grasp pypy: it's probably better to pick a topic you are interested in and start from there. If you are more interested in the translation toolchain (i.e., the rpython-to-{c,cli,jvm} compiler), you can start from either the beginning or the end of the chain; look for example at the annotation/ directory or at the various backends in translator/{c,cli,jvm}. On the other hand, if you are more interested in the interpreter (i.e., the python implementation written in rpython), you want to have a look at interpreter/ and objspace/std. The former directory contains the core of the interpreter, while the latter contains the implementation of all the standard builtin types such as lists, dictionaries, classes, etc. A good starting point could be the main loop of the interpreter: look at the dispatch_bytecode function inside interpreter/pyopcode.py.
I assume that you have already read the pypy documentation online; if not, you are strongly encouraged to do so before looking at the source. In particular, this document describes the high-level architecture of pypy: http://codespeak.net/pypy/dist/pypy/doc/architecture.html More generally, all the pypy documentation is available here: http://codespeak.net/pypy/dist/pypy/doc/ ciao, Anto
Re: [pypy-dev] [pypy-svn] r61220 - pypy/trunk/lib-python
fi...@codespeak.net wrote: we need to have a RegrTest for each file starting with test_ anyway...

Modified: pypy/trunk/lib-python/conftest.py
==
--- pypy/trunk/lib-python/conftest.py (original)
+++ pypy/trunk/lib-python/conftest.py Thu Jan 22 10:34:15 2009
@@ -294,6 +294,7 @@
     RegrTest('test_mmap.py'),
     RegrTest('test_module.py', core=True),
     RegrTest('test_multibytecodec.py', skip="unsupported codecs"),
+    RegrTest('test_multibytecodec.py_support', skip="not a test"),

uhm... did you want to write test_multibytecodec_support.py, maybe?
Re: [pypy-dev] parser compiler and _ast
Maciej Fijalkowski wrote: I did a bit of research on the matter of different python interfaces to parsing. This is a bit of a mess, but it is as follows: [cut] 1a. We just always generate the code and throw it away. We lose performance, but we don't care (and we pass tests) A quick google code search reveals that there are a couple of projects using parser, like pychecker, epydoc and quixote. If it's not too time-consuming, I would go for this option. 2. We slowly deprecate parser compiler, with some removal in future +1 3. If we develop a new parser, we drop compatibility with exact concrete syntax trees and we only keep ast around (as the ast module) if people want to interact with it (this is one less worry for writing the parser). That's an option, but not for 1.1, I think. ciao, Anto
Re: [pypy-dev] Europython sprint
Hi Jacob, hi all, Jacob Hallén wrote: The outline of the programme is as follows: * Sunday 28th June and Monday 29th June: Tutorial Days * Tuesday 30th June to Thursday 2nd July: Conference * Friday 3rd July, as long as needed: Sprints We could have a sprint (an internal one?) in parallel with the tutorials if we want. How long would we like to sprint after the conference? Friday-Sunday? Just a quick input from my side: when deciding the sprint dates, we should keep in mind that ECOOP 2009 starts on Monday 6th July. I will surely be there, and possibly other pypyers as well (both Carl Friedrich and Armin showed interest, e.g.). ciao, Anto
Re: [pypy-dev] pypy sandbox
Dalius Dobravolskas wrote: Hello, All, Hi Dalius, [cut] /home/dalius/projects/pypy-dist/pypy/annotation/model.py(537)__init__() Unfortunately pypy/dist is largely outdated nowadays (we plan to copy trunk to dist soon); please try with pypy/trunk. ciao, Anto
Re: [pypy-dev] pypy sandbox
Dalius Dobravolskas wrote: trunk has worked just perfectly. Nice :-) I will try to implement a WSGI sample similar to Django (as mentioned here http://morepypy.blogspot.com/2009/02/wroclaw-2009-sprint-progress-report.html). That's cool. Please keep us informed, and feel free to come to #pypy on irc.freenode.net if you have any questions. ciao, Anto
[pypy-dev] holidays
Hi all, I'm going on holidays next week (skiing :-)), so I won't be online until the 9th. ciao, Anto
Re: [pypy-dev] Leysin Winter Sprint
Armin Rigo wrote: Please *confirm* that you are coming so that we can adjust the reservations as appropriate. The rate so far has been around 60 CHF a night all included in 2-person rooms, with breakfast. There are larger rooms too (less expensive) and maybe the possibility to get a single room if you really want to. Is the 60 CHF per person or per room? Anyone interested in sharing a room? ciao, Anto
Re: [pypy-dev] did we submit any europython talks?
Maciej Fijalkowski wrote: Does anyone know yet whether they're going to the EP? besides Laura of course. Yes, I plan to go there. Submitting a JIT talk would be nice. Armin et al., what do you think?
Re: [pypy-dev] Upcoming Sprint
Carl Friedrich Bolz wrote: I wouldn't mind sharing a room with you and having to pay more for the remaining days (and having a bit of quiet), since the uni is paying anyway. Could you add yourself to this file: I'd also like to share a room. Maybe we can get a triple room?