[fonc] MODULARITY: aosd 2012 - Announcement of Student Events
ANNOUNCEMENT OF STUDENT EVENTS

=== 11th International Conference on Aspect-Oriented Software Development
MODULARITY: aosd 2012
Announcement of Student Events
http://www.aosd.net/2012/program/student-events.html
stude...@aosd.net
March 25-30, 2012
Hasso-Plattner-Institut Potsdam, Germany ===

We'd like to announce the student events for the upcoming MODULARITY: aosd 2012 conference. These events are a valuable opportunity to meet other students, to identify new research ideas, and to present results to other researchers.

*** Student Grants for AOSD Registration ***

We are happy to announce that AOSD Europe will fund up to 10 grants supporting student registrations for MODULARITY: aosd 2012. We target a balanced mix of European countries and select students based on received applications (a short motivation letter of no more than 250 words). Please note: the grants are limited to students from European universities (independent of nationality). A grant includes 250€ that will only be reimbursed after a student has registered for the full conference (350€). Please apply for a grant by e-mail (stude...@aosd.net) by February 20th, 2012.

=== Student Forum / Spring School (full-day) - Sunday March 25th ===

The Student Forum will take place on the Sunday preceding the conference to allow students to meet other students before the main conference begins. As in previous years, the Student Forum will be an interactive format that allows students to interact and brainstorm innovative ways that AOSD research interests intersect. Students will also have the opportunity to hear from and ask questions of domain experts, both on a panel and in small-group settings. Please register for the Student Forum via the AOSD registration system or by e-mail (stude...@aosd.net) no later than March 18th, 2012.

=== Student Poster Session (at the workshop reception) - Monday March 26th ===

The Poster Event is always one of the most exciting and well-attended social events of the conference.
This event, held during the workshop reception on Monday, allows students to present their research to conference attendees while mingling in a social setting. Students who participate in both the Student Forum and the Poster Event have the added advantage of already knowing other participants. Do not miss out on this opportunity to take your research to the next level, clarify problem statements, vet solutions, identify evaluation methods, or just prepare for your dissertation. You can submit a poster by e-mail (stude...@aosd.net) until February 29th, 2012.

=== Student Research Competition - Monday March 26th / Wednesday March 28th ===

AOSD is hosting an ACM SIGPLAN Student Research Competition. The competition, sponsored by Microsoft Research, is an internationally recognized venue that enables undergraduate and graduate students to experience the research world, share their research results with other students and AOSD attendees, and compete for prizes. The ACM SIGPLAN Student Research Competition shares the Poster Session's goal of facilitating students' interaction with researchers and industry practitioners, giving both sides the opportunity to learn of ongoing, current research. Additionally, the Student Research Competition gives students experience with both formal presentations and evaluations. The first and second round will take place as follows:

First Round (Posters): Monday March 26th, 2012 (at the workshop reception)
Second Round (Presentations): Wednesday March 28th, 2012 (early afternoon)

Student participants:
* A Unified Formal Model for Service Oriented Architecture to Enforce Security Contracts - Diana Allam
* Compositional Verification of Events and Aspects - Cynthia Disenfeld
* Controlling Aspects with Membranes - Ismael Figueroa
* Aspect-Oriented Framework for Developing Dynamic Contents - Kohei Nagashima
* Tearing Down The Multicore Barrier For Web Applications - Jens Nicolay
* Adding high-level concurrency to EScala - Jurgen M. Van Ham
* A Scalable and Accurate Approach Based on Count Matrix for Detection Code Clones and Aspects - Yang Yuan

=== For more information please
Re: [fonc] Raspberry Pi
Hi Loup,

Actually, your last guess was how we thought most of the optimizations would be done (as separate code guarded by the meanings). For example, one idea was that Cairo could be the optimizations of the graphics meanings code we would come up with. But Dan Amelang did such a great job at the meanings that they ran fast enough to tempt us to use them directly (rather than on a supercomputer, etc.). In practice, the optimizations we did do are done in the translation chain and in the run-time, and Cairo never entered the picture. However, this is a great area for developing more technique for how math can be made practical -- because the model is so pretty and compact -- and there is much more that could be done here.

Why can't a Nile backend for the GPU board be written? Did I miss something?

Cheers,
Alan

From: Loup Vaillant l...@loup-vaillant.fr
To: fonc@vpri.org
Sent: Wednesday, February 8, 2012 1:29 AM
Subject: Re: [fonc] Raspberry Pi

Jecel Assumpcao Jr. wrote:
> Alan Kay wrote:
> > We have done very little of this so far, and very few optimizations. We can give live dynamic demos in part because Dan Amelang's Nile graphics system turned out to be more efficient than we thought with very few optimizations.
>
> Here is where the binary blob thing in the Raspberry Pi would be a problem. A Nile backend for the board's GPU can't be written, and the CPU can't compare to the PCs you use in your demos.

Maybe as a temporary workaround, it would be possible to use OpenGL (or OpenCL, if available) as the back-end? It would require loading a whole Linux kernel, but maybe this could work? /wild_speculation

> I think it could be a valuable project for interested parties to see about how to organize the separate optimization spaces that use the meanings as references.

I didn't get the part about meanings as references. I understood that meanings meant the concise version of Frank. The optimization space would then be a set of correctness-preserving compiler passes or interpreters. (I believe Frank already features some of that.) Or re-written versions that are somehow guaranteed to behave the same as the concise version they derive from, only faster. But I'm not sure that's the spirit.

Loup.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Raspberry Pi
On 8 February 2012 15:23, Alan Kay alan.n...@yahoo.com wrote:
> Hi Loup
> Why can't a Nile backend for the GPU board be written? Did I miss something?

You can't drive it directly because its specs aren't public. If you use its closed-source Linux driver, you can of course use OpenGL.

-- http://rrt.sc3d.org
Re: [fonc] Raspberry Pi
On Wednesday 08 Feb 2012 7:50:34 AM, Jecel Assumpcao Jr. wrote:
> That is a very good point, but the reason why I have tried to design my computers with 100% removable media is so they can be shared in a classroom. Even at $35 a class might not have one per student. If you have 10 machines and 30 students, each with their own SD card, it works better than if you have permanent stuff inside each computer. Though today's users have learned to cope with hard disks and installing software, the old floppy-only machines were actually easier for them.

+1. A second reason for shareability is reuse. In a classroom, the utility of a project lies primarily in the lessons learnt while building it, not in its end use. So it makes sense to be able to dismantle it and reuse it for another project.

Regards .. Subbu
Re: [fonc] Raspberry Pi
Alan,

> Actually, your last guess was how we thought most of the optimizations would be done (as separate code guarded by the meanings). […] In practice, the optimizations we did do are done in the translation chain and in the run-time, and Cairo never entered the picture. However, this is a great area for developing more technique for how math can be made practical -- because the model is so pretty and compact -- and there is much more that could be done here.

Here is an old idea I had for a cache manager (as described in http://www.merlintec.com/lsi/tech.html):

One feature that distinguishes programs by experts from those of novices is the use of caching as a performance enhancement. Unfortunately, saving results for later reuse greatly decreases source code readability, obscuring program logic and making debugging much harder. Reflection allows us to move caching to a separate implementation layer in a cache manager object. So the application can be written and debugged naively and, after it works, can be annotated to use the cache manager at critical points to significantly improve performance without having to write a new version. This is only possible because Merlin uses message passing at the bottom and includes reflective access to such a fundamental operation. In other words, user applications are never made up mainly of big black boxes which the OS can do nothing about. Even simple math expressions such as '(x * 2) + y' are entirely built from messages that are (in theory -- the compiler actually eliminates most of the overhead) handled by a set of meta-objects. So all that the system has to do when the user annotates an expression as cacheable is to replace the standard send meta-object with one that looks up its arguments in a table (cache) and returns a previously calculated answer if it is found there. Otherwise it works exactly like a normal send meta-object.

An example of how this works is in rendering text. A given font's glyphs might be given as sets of control points for Bezier curves describing their outlines, plus some hints for adjusting these points when scaling. We could then draw a character from a font called myFont on aCanvas with the expression:

  aCanvas drawPixmap: ((myFont glyphFor: 'a' Size: 12) asVectors asPixmap)

This should work perfectly, but will be unacceptably slow. Each time some character must be shown on the display, its points must be scaled by the 'glyphFor:Size:' method, then the control points must be rendered as short vectors approximating the indicated Bezier curves ('asVectors'), and finally these vectors must be used to fill in a picture ('asPixmap') which can then simply be blasted onto the screen for the final result. By marking each of these messages as cacheable, the next time 'glyphFor:Size:' is sent to myFont with exactly 'a' and 12 as arguments it will return the same list of control points without executing the method again. Sending a cacheable 'asVectors' message to the same list of points as before will fetch the same list of vectors as was created the first time, and sending 'asPixmap' to that results in the same character image without actually invoking the filling method once more. So we have replaced three very complex and slow calculations with three simple table lookups.

If you think that even that is too much, you are right. The cached control point lists and short vector lists are not really needed. Unfortunately, the cache manager can do nothing about that, but the user can move the multiply-cached expression into its own method, like this:

  pixmapForFont: f Char: c Size: s = ( (f glyphFor: c Size: s) asVectors asPixmap).
  aCanvas drawPixmap: (pixmapForFont: myFont Char: 'a' Size: 12)

Now we can make only the 'pixmapForFont:Char:Size:' method cacheable if we want. This will save the final pixmaps without also storing the intermediate (and useless to us) results. This did involve rewriting application code, but it actually made the code a little more readable, unlike when caching is hand coded.

> Why can't a Nile backend for the GPU board be written? Did I miss something?

As Reuben Thomas pointed out, the needed information is not available. In fact, there was no information at all about the chip until this week, but now we have some 30% (the most important part for porting the basic functionality of some OS). This is the same issue that the OLPC people had (worse in their case, because people would promise to release the information to get them to use a chip and then just didn't do it). Even dealing with FPGAs is a pain because of the secret parts.

-- Jecel
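[Ed.: Jecel's cacheable-message annotation is, at heart, memoization applied at the message-send layer. A rough Python sketch of the refactored, singly-cached version of his font example, with a decorator standing in for Merlin's replaceable send meta-object and hypothetical stand-in functions for the glyph pipeline:]

```python
from functools import lru_cache

calls = {"glyph": 0}  # count how often the "slow" method really runs

def glyph_for(font, char, size):
    calls["glyph"] += 1
    return ((0, 0), (size, size))    # fake control points

def as_vectors(points):
    return [points]                  # fake short-vector list

def as_pixmap(vectors):
    return "pixmap:" + repr(vectors)

# Caching the whole chain at once, as in 'pixmapForFont:Char:Size:',
# so the intermediate point and vector lists are never stored.
@lru_cache(maxsize=None)
def pixmap_for_font(font, char, size):
    return as_pixmap(as_vectors(glyph_for(font, char, size)))

pixmap_for_font("myFont", "a", 12)
pixmap_for_font("myFont", "a", 12)   # served from the cache
print(calls["glyph"])                # the slow method ran only once: 1
```

This mirrors the point of the refactoring above: annotating only the outer method replaces the three slow calculations with a single table lookup, without caching the useless intermediates.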
Re: [fonc] Raspberry Pi
Alan Kay wrote:
> Actually, your last guess was how we thought most of the optimizations would be done (as separate code guarded by the meanings). […] In practice, the optimizations we did do are done in the translation chain and in the run-time, […]

Okay, thanks. I can't recall the exact reference, but I once read about a middle ground: mechanical optimization passes that are brittle in the face of meaning change. I mean, if you change the concise version of your program, you may have to re-think your optimization passes, but you don't necessarily have to re-write your optimized version directly.

Example {
  A guy at university was assigned to write optimized multiplication for big numbers. Each student would be graded according to the speed of their program. No restriction on the programming language. Everyone started coding in C, but _he_ preferred to start with Scheme. He coded a straightforward version of the algorithm, then set out to manually (but mostly mechanically) apply a set of correctness-preserving transformations, most notably a CPS transformation, and a direct translation to C with gotos. His final program, written in pure C, was the second fastest in his class (and very close to the first, which used assembly heavily). Looking back at what he could have done better, he saw that his program spent most of its time in malloc(). He didn't know how to do it at the time, but had he managed his memory directly, his program would have been first. Oh, and of course, he had much less trouble dealing with mistakes than his classmates. So his conclusion was that speed comes from beautiful programs, not prematurely optimized ones.
}

About Frank, we may imagine using this method in a more automated way, for instance by annotating the source and intermediate code with specialized optimizations that would only work in certain cases. It could be something roughly like this:

  Nile Source             <- Annotations to optimize the Nile program
       |
       |  Compiler pass that checks the validity of the
       |  optimizations, then applies them.
       v
  Maru Intermediate code  <- Annotations to optimize that Maru program
       |
       |  Compiler pass like the above
       v
  C Backend code          <- Annotations (though at this point…)
       |
       |  <- GCC
       v
  Assembly                <- (no, I won't touch that :-)

(Note that instead of annotating the programs, you could manually control the compilers.)

Of course, the second you change the Nile source is the second your annotations at every level won't work any more. But (i) you would only have to redo your annotations, and (ii) maybe not all of them anyway, for there is a slight chance that your intermediate representation didn't change too much when you changed your source code. I can think of one big caveat, however: if the generated code is too big or too cryptic, this approach may not be feasible any more. And I forgot about profiling your program first.

> But Dan Amelang did such a great job at the meanings that they ran fast enough to tempt us to use them directly [so] Cairo never entered the picture.

If I had to speculate from an outsider's point of view, I'd say these good surprises probably apply to almost any domain-specific language. The more specialized a language is, the more domain knowledge you can embed in the compiler, and the more effective the optimizations may be. I know it sounds like magic, but I recall a similar example with Haskell, applied to bioinformatics (again, I can't find the paper).

Example {
  The goal was to implement a super-fast algorithm. The catch was, the resulting program had to accept a rather big number of parameters. Written directly in C, the fact that those parameters could change meant the main loop had to make several checks, slowing the whole thing down. So they built an EDSL based on monads that basically generated a C program after the parameters were read and known, then ran it. Not quite Just-In-Time compilation, but close. The result was of course noticeably faster than the original C program.
}

Therefore, I'm quite optimistic. My best guess right now is that smart compilers will be more than enough to make Frank fast enough, as fast as C[1] or possibly even faster, for two reasons:

1. As far as I know, most languages in Frank are quite specialized. That prior knowledge can be exploited by compilers.
2. The code volume is sufficiently small that aggressive whole-program optimizations are actually feasible.

Such compilers may cost 10 to 100 thousand lines or more, but at least those lines would be strictly contained. Then potential end users wouldn't give up too much hackability in the name of performance.

[1]: http://c2.com/cgi/wiki?AsFastAsCee

Loup.
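[Ed.: the "generate a specialized program once the parameters are known" trick Loup describes can be sketched in a few lines of Python. This is a hypothetical toy kernel, with exec() standing in for the EDSL's C code generation; the real work used a Haskell EDSL emitting C.]

```python
# Instead of a generic loop that re-checks its configuration on every
# iteration, build source text with the parameter checks already
# resolved, then "compile" it once and run the specialized loop.

def make_kernel(scale, offset, clamp=None):
    # Decide the optional clamp check at generation time, not run time.
    body = "    y = x * %d + %d\n" % (scale, offset)
    if clamp is not None:
        body += "    y = min(y, %d)\n" % clamp
    src = ("def kernel(xs):\n"
           "  out = []\n"
           "  for x in xs:\n"
           "%s"
           "    out.append(y)\n"
           "  return out\n") % body
    namespace = {}
    exec(src, namespace)        # generate + compile the specialized loop
    return namespace["kernel"]

kernel = make_kernel(scale=3, offset=1)      # no clamp branch emitted
print(kernel([1, 2, 3]))                     # [4, 7, 10]
```

As in the bioinformatics example, the per-iteration parameter checks vanish entirely from the generated inner loop; only the branches that the parameters actually require are emitted.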
Re: [fonc] Raspberry Pi
Yes, the annotation scheme you mention is essentially how we were going to do it. The idea was that in the optimization space there would be a variety of ways to do X -- e.g. there are lots of ways to do sorting -- and there would be conditions attached to these ways that would allow the system to choose the most appropriate solutions at the most appropriate times. This would include hints, etc. The rule here is that the system had to be able to run correctly with all the optimizations turned off.

And your notions about some of the merits of DSLs (known in the 60s as POLs -- Problem Oriented Languages) are why we took this approach.

Cheers,
Alan

From: Loup Vaillant l...@loup-vaillant.fr
To: fonc@vpri.org
Sent: Wednesday, February 8, 2012 9:24 AM
Subject: Re: [fonc] Raspberry Pi

> […]
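[Ed.: Alan's "conditions attached to these ways" can be sketched as a registry of alternative implementations, each guarded by an applicability predicate, with the meaning (the naive version) as the always-correct fallback. A hypothetical Python illustration using sorting, Alan's own example of "many ways to do X":]

```python
def meaning_sort(xs):
    return sorted(xs)            # the "meaning": always correct

def counting_sort(xs):
    # A faster "way", only valid for small non-negative integers.
    counts = [0] * (max(xs) + 1)
    for x in xs:
        counts[x] += 1
    return [i for i, c in enumerate(counts) for _ in range(c)]

OPTIMIZATIONS = [
    # (guard condition, implementation) pairs
    (lambda xs: bool(xs) and all(isinstance(x, int) and 0 <= x < 256
                                 for x in xs),
     counting_sort),
]

def do_sort(xs, optimize=True):
    if optimize:
        for guard, impl in OPTIMIZATIONS:
            if guard(xs):        # condition attached to this "way"
                return impl(xs)
    return meaning_sort(xs)      # fallback: the meaning itself
```

The rule Alan states holds by construction: with `optimize=False` (all optimizations turned off) every call goes through the meaning and the system still runs correctly.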
[fonc] Doug Engelbart’s Chorded Keyboard as a Multi-touch Interface
This caught my eye on Hackaday - thought people here might be interested http://labs.teague.com/?p=1451

Cheers,
Steve
Re: [fonc] Mitsubishi Luggage Tag, third try
Awesome :)

On Feb 8, 2012, at 8:51 AM, Alan Kay alan.n...@yahoo.com wrote:
> Cheers, Alan
> [attachment: eRAM direct scan.jpg]
Re: [fonc] Raspberry Pi
On 8 February 2012 10:37, Reuben Thomas r...@sc3d.org wrote:
> You can't drive it directly because its specs aren't public. If you use its closed-source Linux driver, you can of course use OpenGL.

Uh, and I should also point out the promising work on an open-source GPU driver:

"... an open-source, reverse-engineered graphics driver for the ARM Mali graphics processor. OpenGL ES triangles are in action on open-source code."

via http://www.phoronix.com/scan.php?page=article&item=arm_mali_reverse&num=1

Regards,
Tony

-- Tony Garnock-Jones tonygarnockjo...@gmail.com http://homepages.kcbbs.gen.nz/tonyg/
[fonc] Onward! 2012 call for papers, due April 13
Call for Research Visions

Do you have an idea that could change the world of software development? Onward! is the place to present it and get constructive criticism from other researchers and practitioners. We are looking for grand visions and new paradigms that could make a big difference in how we build software in 5 or 10 years. We are not looking for research-as-usual papers - conferences like OOPSLA are the place for that. Those conferences require rigorous validation such as theorems or empirical experiments, which are necessary for scientific progress, but which unfortunately can also preclude the discussion of early-stage ideas.

Onward! also requires validation: mere speculation is insufficient. However, Onward! accepts less rigorous methods of validation such as compelling arguments, exploratory implementations, and substantial examples. It bears repeating that we strongly encourage the use of worked-out examples to substantiate your ideas.

This year, Onward! is reaching out to graduate students. You have been taught that conference papers, key to your career, must be solid bricks of incremental research, with scientifically sober claims. But why are you doing research in the first place? You want to change the world with your ideas! You can't talk about that in conference papers. Onward! gives you the chance to spread your wings and share your dreams. We want you to inspire us with your ideas, and perhaps in the process better inspire yourself.

This call is also directed at practicing programmers who are deeply dissatisfied with the state of our art and who have thought long and hard about how to fix it. The committee encourages you to share your hard-won wisdom about how to reform software development. Many practitioners have dismissed computer science conferences as sterile academic exercises. Onward! is different, and asks you to join the conversation for the good of our field.
How else can we ever make progress if we don't share what has been learnt from practical experience? We suggest that to best communicate your ideas you avoid sweeping principles expressed in general terms, especially terms you have coined yourself. It is often more effective to present several detailed examples of how your approach would yield concrete benefits, while also revealing what offsetting disadvantages it may entail. If others are working on related ideas you might consider proposing an Onward! workshop: see the call for Onward! workshops at http://splashcon.org/2012/cfp/due-april-13-2012/389-workshops

*Selection Process*
Onward! papers are peer-reviewed, and accepted papers will appear in the SPLASH proceedings and the ACM Digital Library. Papers will be judged on the potential impact of their ideas and the quality of their presentation.

*Submission*
The submission deadline is April 13, 2012. See the online version of this call (http://splashcon.org/2012/cfp/due-april-13-2012/380-onward-papers) for further details.

*For More Information*
For additional information, clarification, or answers to questions please contact the Onward! Papers Chair, Jonathan Edwards, at onw...@splashcon.org.

*Onward! Papers Committee*
Jonathan Edwards, MIT, USA (chair)
Bjorn Freeman-Benson, New Relic, US
Bret Victor, US
Brian Foote, US
Caitlin Sadowski, UC Santa Cruz, US
Chung-chieh Shan, University of Tsukuba, Japan
Dave Thomas, Bedarra Research, Canada
Derek Rayside, University of Waterloo, Canada
John Field, Google, US
Kevin Sullivan, University of Virginia, US
Klaus Ostermann, University of Marburg, Germany
Mads Torgersen, Microsoft, US
Mark Miller, Google, US
Martin Fowler, ThoughtWorks, US
Nat Pryce, UK
Sean McDirmid, Microsoft Research Asia, China
Tom van Cutsem, Vrije Universiteit Brussel, Belgium