Re: [fonc] Unsolved problem in computer science? Fixing shortcuts.
Context below, sorry about the top-post (stupid smartphone.) I think I remember that in Xanadu, links are two-way streets. When you move the link, I can only assume that both of those pointing devices would need to be updated. I'm not sure how it works though. Is there a central authority involved, can it be distributed, etc? It's hard to visualize a two-way link because I have spent my entire life living in flatland. I even mix up which plane is blue and which is pink sometimes:) The gist I got was that the two-way link concept was a powerful idea which could be applied to more problems than just pages (the mere use of the term is liable to give Ted a headache. Flat paper metaphors and such.) I wouldn't be shocked if a good implementation could be done using a vigilant doubly-linked list (i.e. an object which cares about provenance and has a means of vetting it, like perhaps a touch of public key encryption.) Think of all the talk on this list about publish/subscribe as an object model, pattern directed invocation and such, and then try to imagine all of the ways a two-way link or shortcut might outclass the usual (and fragile-as-glass) one-way link.

BCC Ted Nelson on the off chance that he might like to help us visualize the two-way link idea. (Ted, let me know if I shouldn't forward messages like this to you. Seems like giving some researchers a view into some of your ideas should help you on your way to realizing them. Then again, the road to hell is paved with... irritating people forwarding messages with good intentions.) Cheers, --Casey Ransberger

On Oct 5, 2014, at 5:52 AM, John Carlson yottz...@gmail.com wrote: To put the problem in entirely file system terminology: what happens to a folder with shortcuts into it when you move the folder? How does one automatically repoint the shortcuts? Has this problem been solved in computer science? On Linux, the shortcuts would be symbolic links. I had a dream about SmallStar when I was thinking about this. 
The author was essentially asking me how to fix it. He was showing me a hierarchy, then he moved part of the hierarchy into a subfolder and asked me how to automate it--especially the links to the original hierarchy. In language terms, this would be the equivalent of refactoring a class which gets dropped down into an inner class. This might be solved. I'm not sure. This would be a great problem to solve on the web as well... does Xanadu do this? I think the solution is to maintain non-persistent nodes which are computed at access time, but I'm not entirely clear. I have no idea why I am posting this to cap-talk. There may be some capability issues that I haven't thought of yet. Or perhaps the capability folks have already solved this. For your consideration, John Carlson ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
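John's repointing question can be made concrete with a toy model. This is a minimal sketch assuming a single in-process registry, with none of the distribution, provenance vetting, or public-key machinery mused about above; the names LinkRegistry, createShortcut, and move are invented for illustration. Each shortcut is registered against its target, so the "two-way street" lets a move repoint every referrer at once:

```javascript
// Illustrative only: a central registry of backlinks. Each shortcut
// stores its target path (like a symlink); the registry remembers who
// points where, so moving a target never leaves a dangling shortcut.
class LinkRegistry {
  constructor() {
    this.backlinks = new Map(); // targetPath -> Set of shortcut objects
  }
  createShortcut(targetPath) {
    const shortcut = { target: targetPath };
    if (!this.backlinks.has(targetPath)) {
      this.backlinks.set(targetPath, new Set());
    }
    this.backlinks.get(targetPath).add(shortcut); // the second direction
    return shortcut;
  }
  move(oldPath, newPath) {
    // The backlink set is exactly what a one-way symlink scheme lacks:
    const refs = this.backlinks.get(oldPath) || new Set();
    for (const s of refs) s.target = newPath; // repoint every referrer
    this.backlinks.delete(oldPath);
    this.backlinks.set(newPath, refs);
  }
}
```

A distributed version would have to replace the single Map with some agreement protocol between a link's two ends, which is where the central-authority question above starts to bite.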
Re: [fonc] About the reduce of complexity in educating children to program
Hello Iliya. While you directed your inquiry to the people at VPRI (and I'm not there,) I hope you'll forgive my curiosity. Questions inline, and hopefully not too many of them. On Sep 19, 2014, at 2:16 AM, Iliya Georgiev ikgeorg...@gmail.com wrote: Hello, I am addressing this letter mainly to Mr. Alan Kay and his fellows at VPRI. I have an idea how to reduce the complexity of educating children to program. This seems to be a part of a goal of the VPRI to improve powerful ideas education for the world's children. But in case my idea turns into a success, a moral hazard emerges. If the children (6-14 years old) understand things better and can even program, can they become victims of labor exploitation? All stop. Anyone can be exploited. It just takes a big enough gang of exploitive people. Is this really a question, or is it more of a statement? Up to now they could be exploited physically. From now on they could be exploited mentally. OK, in the north in so-called developed countries they may be protected, but in the south... On the other side, don't we owe the tomorrow people the possibility to understand the world we leave to them? Or will they be savages that use tools, but do not know how they work? I made a half-assed promise at the top to only ask questions, but your phrasing here struck me as stunningly beautiful. The tomorrow people. If I'd written that, I'd be inclined to capitalize and underline the words. It'd be a great title for a science fiction novel. So if you want to bear the burden of the moral hazard, I will send the description of my idea to you and help with what I can. All stop. Why must I decide to bear a burden before I read your words? What risk is there in sharing them without contract? You will judge if it is worth doing. It would be easier if people worked cooperatively. That is a lesson children should learn too. The software could be made by one person, but there may be more challenges than one thinks. 
In case you agree to do it, I will want you to publish the results of the experiment online. And if possible to make the program run in a web browser and to release it freely too, just as you did in some of your recent experiments. Isn't that asking a bit much? Would not asking for permission to publish your results yourself be enough? It is strange that unlike most scientists, I will be equally happy with the success or failure of my idea. Ouch. Yeah, I see what you're trying to say. Once again, I've failed to ask a question. What I'll say instead: I've never known a good scientist who would not be happy to know that her hypothesis was incorrect, because in doing so, she's learned something about the universe and our place in it, which is what she set out to do in the first place. Best regards, Iliya Georgiev Hope I haven't created unnecessary noise with this post. I assure you that you will forgive my curiosity! --Casey
[fonc] Xanadu has a heartbeat!
Thought I'd let folks know that the Xanadu project has made some very good progress. They now have a mostly-finished implementation in JavaScript. http://xanadu.com --Casey
Re: [fonc] SPUTNIK
After reading this as well as I could, I feel like I'd really need to consult with my analyst, Dr. ELIZA, but maybe it's all roses after all? Anyway I was running down the hall and ran into this guy named Markov, are we still there? Next magic trick? On Dec 7, 2013, at 11:32 PM, Евгений Филиппов (Eugene Philippov) egphilip...@gmail.com wrote: BAZALTCOMPENDIUM dirt is earth, the earth heals; the strangers buzz, our own bring weapons 8
[fonc] Software Crisis (was Re: Final STEP progress report abandoned?)
I don't think cut and paste has been the source of the problems with the systems I've worked on (could be a symptom of one or more of the problems.) What I see is long-term systems built around short-term, usually competitive goals, by people who are competing both with one another (for jobs, promotions, raises, social capital, etc,) and also cooperating with one another to compete with other businesses at the same time. Most people's programming habits seem to change dramatically, for example, when they expect to throw something away. Most programmers dump the company every 2-5 years for another (higher-paying) job, so it's *all* disposable code in effect. It's not just the programmers either, it's the decision makers at C-level too, who are quite often building the company to sell the company. Maybe the kids are getting dumber, but what I see when I look around is smart kids being pushed to ship before the bits are ready, and not being *allowed* to fix low-value bugs which gradually accumulate until a system is deep-sixed for being painful to work on. In other words, I don't believe there's a software crisis or any real shortage of programming talent (I know plenty of great programmers who regularly go without work, often because they're unimpressed with the offers they're seeing.) I think it's not a software crisis, I think it's a *management* crisis. I do think new tools to make managers/customers/investors/partners/users/programmers less stressed out could make the overall experience better for all involved, and with that I guess I'm talking about the continuing emergence of an engineering discipline in software. But that's in-house code. OTOH, FreeBSD has usually been pretty stable for me; I don't have to put out its fires very much. Why might this be? Let's try some fun game theory! 
http://www.nature.com/ncomms/2013/130801/ncomms3193/pdf/ncomms3193.pdf On Sun, Sep 8, 2013 at 10:33 AM, Paul Homer paul_ho...@yahoo.ca wrote: Hi Alan, Is the gift really that bad? It certainly is an interesting question. I'm a frequent blogger on the topic of what could probably be described as the ongoing 'software crisis'. We definitely build bigger systems these days, but the quality has likely been declining. There is great software out there, but the world is littered with lots of partially working code that causes lots of problems. Perhaps one could lay this at the feet of better documentation. That is, when I started coding it was hard to find out any information, so I spent a lot of time just playing with the underlying pieces to really understand them and figure out how to use them appropriately. These days, the kids do a quick google, then just copy-paste the results into the code base, mostly unaware of what the underlying 'magic' instructions actually do. So example code is possibly a bad thing? But even if that's true, we've let the genie out of the bottle and he isn't going back in. To fix the quality of software, for example, we can't just ban all cut-paste-able web pages. I definitely agree that we're terrible thinkers, and that for the most part as a species we are self-absorbed and often lazy, so I don't really expect that most programmers will have the same desire that I did to get down to really understanding the details. That type of curiosity is rare. The alternate route out of the problem is to exploit these types of human deficiencies. If some programmers just want to cut-paste, then perhaps all we can do is to just make sure that what they are using is of high enough quality. If someday they want more depth, then it should be available in easily digestible forms, even if few will ever travel that route. 
If most people really don't want to think deeply about their problems, then I think that the best we can do is ensure that their hasty decisions are based on knowledge that is as accurate as possible. It's far better than them just flipping a coin. In a sense it moves our decision making up to a higher level of abstraction. Some people lose the 'why' of the decision, but their underlying choice ultimately is superior, and the 'why' can still be found by digging into the data. In a way, isn't that what we've already done with micro-code, chips and assembler? Or machinery? Gradually we move up towards broader problems... Paul. Sent from my iPad On 2013-09-08, at 10:45 AM, Alan Kay alan.n...@yahoo.com wrote: Hi Paul When I said even scientists go against their training I was also pointing out really deep problems in humanity's attempts at thinking (we are quite terrible thinkers!). If we still make most decisions without realizing why, and use conventional thinking tools as ways to rationalize them, then technologists providing vastly more efficient, wide and deep, sources for rationalizing is the opposite of a great gift. Imagine a Google that also retrieves counter-examples. Or one that actively tries to help find chains of
Re: [fonc] Final STEP progress report abandoned?
John, you're right. I have seen raw binary used as DNA and I left that out. This could be my own prejudice, but it seems like a messy way to do things. I suppose I want to limit what the animal can do by constraining it to some set of safe primitives. Maybe that's a silly thing to worry about, though. If we're going to grow software, I suppose maybe I should expect the process to be as messy as life is:) On Wed, Sep 4, 2013 at 4:06 PM, John Carlson yottz...@gmail.com wrote: I meant to say you could perform and record operations while the program was running. I think people have missed machine language as syntaxless. On Sep 4, 2013 4:17 PM, John Carlson yottz...@gmail.com wrote: On Sep 3, 2013 8:25 PM, Casey Ransberger casey.obrie...@gmail.com wrote: It yields a kind of syntaxlessness that's interesting. Our TWB/TE language was mostly syntaxless. Instead, you performed operations on desktop objects that were recorded (like AppleScript, but with an iconic language). You could even record while the program was running. We had a tiny bit of syntax in our predicates, stuff like range and set notation. Can anyone describe Minecraft's syntax and semantics? -- CALIFORNIA H U M A N
[fonc] Study on the effectiveness of learning software
Maybe relevant. Reading through this now... the findings seem to be broadly depressing. Notably: I get the sense that only commercial products were part of the study. I'm not familiar with any of them; in other words: Logo, Etoys, and Scratch were absent. Full text: http://ies.ed.gov/ncee/pubs/20094041/pdf/20094041.pdf
Re: [fonc] Final STEP progress report abandoned?
I've heavily abridged your message David; sorry if I've dropped important context. My words below... On Sep 3, 2013, at 3:04 PM, David Barbour dmbarb...@gmail.com wrote: Even better if the languages are good for exploration by genetic programming - i.e. easily sliced, spliced, rearranged, mutated. I've only seen this done with two languages. Certainly it's possible in any language with the right semantic chops but so far it seems like we're looking at Lisp (et al) and FORTH. My observation has been that the main quality that yields (ease of recombination? I don't even know what it is for sure) is syntaxlessness. I'd love to know about other languages and qualities of languages that are conducive to this sort of thing, especially if anyone has seen interesting work done with one of the logic languages.
Re: [fonc] Final STEP progress report abandoned?
Yes, in the case of FORTH, the concatenative property is what's interesting in this regard. It yields a kind of syntaxlessness that's interesting. I have to admit no real familiarity with APL (outside of some stunningly elegant solutions I've read to problems on Project Euler!) Thanks for letting me know that there's a familial relationship with FORTH and APL, Brian:) Also, genetic programming in a Prolog? Anyone? On Sep 3, 2013, at 4:45 PM, Brian Rice briantr...@gmail.com wrote: With Forth, you are probably reaching for the definition of a concatenative language like Joy. APL, J, K, etc. would also qualify. -- -Brian T. Rice
Re: [fonc] Final STEP progress report abandoned?
Sorry, I've missed a beat somewhere. Arrowized? What's this bit with arrows? I saw the term arrow earlier and I think I've assumed that it was some slang for the FRP thing (if you think about it, that makes some sense.) But starting with intuitive assumptions is usually a bad plan, so I'd love some clarification if possible. On Sep 3, 2013, at 5:30 PM, David Barbour dmbarb...@gmail.com wrote: Factor would be another decent example of a concatenative language. But I think arrowized programming models would work better. They aren't limited to a stack, and instead can compute rich types that can be evaluated as documents or diagrams. Further, they're really easy to model in a concatenative language. Further, subprograms can interact through the arrow's model - e.g. sharing data or constraints - thus operating like agents in a multi-agent system; we could feasibly model 'chromosomes' in terms of different agents. I've recently (mid August) started developing a language that has these properties: arrowized, strongly typed, concatenative, reactive. I'm already using Prolog to find functions to help me bootstrap (it seems bootstrap functions are not always the most intuitive :). I look forward to trying some genetic programming, once I'm further along. Best, Dave
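Why concatenative code splices so well can be shown in a few lines. Here is a toy stack language, purely illustrative (not real Forth or Factor, and the word set is invented): programs are flat word sequences, so crossover between two programs is just array surgery, and the result is always syntactically valid. A real genetic-programming setup would still need a fitness or validity check, since a spliced program can underflow the stack.

```javascript
// A tiny concatenative interpreter. Numbers push themselves; words
// operate on the stack.
const words = {
  dup: s => s.push(s[s.length - 1]),
  '+': s => s.push(s.pop() + s.pop()),
  '*': s => s.push(s.pop() * s.pop()),
};
function run(program, stack = []) {
  for (const w of program) {
    if (typeof w === 'number') stack.push(w);
    else words[w](stack);
  }
  return stack;
}
// Crossover: cut both parents at arbitrary points and join the pieces.
// No parser, no AST surgery -- this is the "syntaxlessness" payoff.
function crossover(parentA, parentB, i, j) {
  return [...parentA.slice(0, i), ...parentB.slice(j)];
}
```

Splicing a Lisp requires respecting balanced parentheses; here any cut point works, which is why the thread keeps circling back to FORTH and its relatives.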
Re: [fonc] 2D sexpressions for GIS, Frank/Nile
SHRDLU was related to Planner right? There's a Prolog-alike you might start with in the OMetaJS distro... here: http://tinlizzie.org/ometa-js/#Toylog On Fri, Aug 9, 2013 at 12:43 AM, John Carlson yottz...@gmail.com wrote: Has anyone considered GIS sexpressions for Frank/Nile/Open Croquet/Cobalt? First, one would be given a map with capabilities; then one could place sexpressions on the map, on points in areas, like (deposit (sell (bottle (ferment (harvest #grapes #harvest-location-capability)) #wente-riesling)) #bank-of-america-account-capability). Perhaps I am thinking of simfarm but with sexpressions for learning programming and commerce. Perhaps I'll do some googling if no one has anything. Perhaps this is a kind of 2D SHRDLU. -- Casey Ransberger
Re: [fonc] Macros, JSON
Lisp is such a joy to implement. FORTH is fun too. I'm working on a scheme-alike on and off. The idea is to take the message passing and delegation from Self, expose it in Lisp, and then map all of that to JavaScript. One idea I had when I was messing around with OMetaJS was that it might have some kind of escape syntax like (let ((x 1)) #{ x + 1 }# ) would basically mean (let ((x 1)) (+ x 1)) ...which would make doing primitives feel pretty smooth, and also give you the nice JSON syntax. The rule is simple too: '#{' followed by anything:a up until '}#' -> eval(a). Only problem is relating environment context between the two languages, which I haven't bothered to figure out yet. The JS eval() in this case is insufficient. (Sorry about the pseudocode, on a phone and don't keep OMeta syntax in my head...) On Jul 21, 2013, at 1:15 PM, Alan Moore kahunamo...@closedsource.com wrote: JSON is all well and good as far as lowest common denominators go. However, you might want to consider EDN: https://github.com/edn-format/edn On the other hand, if you are doing that then you might as well go *all* the way and re-invent half of Common Lisp :-) http://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule Alan Moore On Sun, Jul 21, 2013 at 10:28 AM, John Carlson yottz...@gmail.com wrote: Hmm. I've been thinking about creating a macro language written in JSON that operates on JSON structures. Has someone done similar work? Should I just create a JavaScript AST in JSON? Or should I create an AST specifically for JSON manipulation? Thanks, John
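John's JSON-macro idea has a small proof-of-concept shape. This is a sketch under heavy assumptions (all names invented; two primitives and one special form, no macro layer): represent programs as plain JSON arrays and walk them in JavaScript, so ["+", "x", 1] is the JSON spelling of (+ x 1).

```javascript
// Illustrative evaluator for JSON-encoded s-expressions.
// ["let", ["x", 1], ["+", "x", 1]] is the JSON form of (let ((x 1)) (+ x 1)).
function evalJson(expr, env = {}) {
  if (!Array.isArray(expr)) {
    // Bare strings act as variable references; numbers are literals.
    return typeof expr === 'string' ? env[expr] : expr;
  }
  const [op, ...args] = expr;
  if (op === 'let') {
    // Special form: bind one name, then evaluate the body in the new env.
    const [[name, val], body] = args;
    return evalJson(body, { ...env, [name]: evalJson(val, env) });
  }
  const prims = { '+': (a, b) => a + b, '*': (a, b) => a * b };
  return prims[op](...args.map(a => evalJson(a, env)));
}
```

A macro layer would then just be a JSON-to-JSON rewrite pass run before evalJson. Note the ambiguity James raises in this thread is already visible here: "x" the variable and "x" the string literal collide.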
Re: [fonc] Macros, JSON
Probably a more usable language would be arrived upon via some extensions to JSON. May I recommend OMetaJS? :) The lack of a unique atomic symbolic literal as distinct from a string is one of the things I'm grappling with right now. To get that I'd need to intern the atoms. Jury's out on whether it's a good idea to try to use JS typed arrays to implement the symbol interning, or to use an under-the-hood tag on the string at the intermediate level to distinguish them (hidden from the programmer, who just sees a Lisp-alike.) On Jul 21, 2013, at 1:45 PM, John Carlson yottz...@gmail.com wrote: Or numbers for pointers... On Jul 21, 2013 3:43 PM, John Carlson yottz...@gmail.com wrote: I think what would be more difficult would be identifying what is persistent and what is runtime values. Also, JSON doesn't contain pointers, so one would have to use strings for pointers. On Jul 21, 2013 3:22 PM, James McCartney asy...@gmail.com wrote: I thought about this briefly. One issue is how to distinguish literal strings from identifiers. -- --- james mccartney
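The tag-under-the-hood option mentioned above can be sketched in a few lines. Purely illustrative (intern and isSymbol are invented names, and a plain Map stands in for the typed-array idea): each symbol name is interned to a single tagged object, so symbols compare by identity and never collide with ordinary strings.

```javascript
// Illustrative symbol interning: one tagged object per distinct name.
const symbolTable = new Map();
function intern(name) {
  if (!symbolTable.has(name)) {
    // The tag is the "under the hood" marker; the Lisp-alike's reader
    // would call intern() and its printer would show just the name.
    symbolTable.set(name, { tag: 'symbol', name });
  }
  return symbolTable.get(name);
}
function isSymbol(x) {
  return x !== null && typeof x === 'object' && x.tag === 'symbol';
}
```

Because interning guarantees one object per name, eq-style identity comparison works, while JSON strings stay plain strings.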
Re: [fonc] 90% glue code
WRT the 90% guess, I usually go for 80% on stuff like that when I make a SWAG where it smells like a Pareto distribution. http://en.wikipedia.org/wiki/Pareto_principle http://en.wikipedia.org/wiki/Pareto_distribution On Tue, Apr 16, 2013 at 7:52 PM, David Barbour dmbarb...@gmail.com wrote: On Tue, Apr 16, 2013 at 2:25 PM, Steve Wart st...@wart.ca wrote: On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich In real systems, 90% of code (conservatively) is glue code. What is the origin of this claim? I claimed it from observation and experience. But I'm sure there are other people who have claimed it, too. Do you doubt its veracity? On Mon, Apr 15, 2013 at 12:15 PM, David Barbour dmbarb...@gmail.com wrote: On Mon, Apr 15, 2013 at 11:57 AM, David Barbour dmbarb...@gmail.com wrote: On Mon, Apr 15, 2013 at 10:40 AM, Loup Vaillant-David l...@loup-vaillant.fr wrote: On Sun, Apr 14, 2013 at 04:17:48PM -0700, David Barbour wrote: On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich In real systems, 90% of code (conservatively) is glue code. Does this *have* to be the case? Real systems also use C++ (or Java). Better languages may require less glue, (even if they require just as much core logic). Yes. The prevalence of glue code is a natural consequence of combinatorial effects. E.g. there are many ways to partition and summarize properties into data-structures. Unless we uniformly make the same decisions - and we won't (due to context-dependent variations in convenience or performance) - then we will eventually have many heterogeneous data models. The same can be said of event models. We can't avoid this problem. At best, we can delay it a little. I should clarify: a potential answer to the glue-code issue is to *infer* much more of it, i.e. auto-wiring, constraint models, searches. We could automatically build pipelines that convert one type to another, given smaller steps (though this does risk aggregate lossiness due to intermediate summaries or subtle incompatibilities). 
Machine-learning could be leveraged to find correspondences between structures, perhaps aiding humans. 90% or more of code will be glue-code, but it doesn't all need to be hand-written. I am certainly pursuing such techniques in my current language development. -- Casey Ransberger
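The "infer the glue" idea above - build a pipeline from one type to another out of smaller converter steps - can be sketched as a graph search. Everything here (the type names, the converter table, findPipeline) is invented for illustration; a breadth-first search finds the shortest chain of converters and composes them:

```javascript
// Illustrative converter table: each entry is one hand-written glue step.
const converters = [
  { from: 'csv', to: 'rows', fn: s => s.split('\n').map(r => r.split(',')) },
  { from: 'rows', to: 'json', fn: rows => JSON.stringify(rows) },
];
// BFS over type names: instead of hand-writing csv->json, infer it by
// composing the smaller steps. Returns a composed function, or null
// if no chain of converters connects the two types.
function findPipeline(from, to) {
  const queue = [[from, []]];
  const seen = new Set([from]);
  while (queue.length) {
    const [type, path] = queue.shift();
    if (type === to) return x => path.reduce((v, f) => f(v), x);
    for (const c of converters) {
      if (c.from === type && !seen.has(c.to)) {
        seen.add(c.to);
        queue.push([c.to, [...path, c.fn]]);
      }
    }
  }
  return null; // no glue path exists
}
```

The thread's caveat applies directly: each composed step may summarize or drop information, so an inferred pipeline is only as lossless as its weakest link.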
Re: [fonc] 90% glue code
This Licklider guy is interesting. CS + psych = cool. This conversation makes me think of things Hofstadter wrote about with regard to isomorphism in Gödel, Escher, Bach. I only have one or two very small things to contribute here. Esperanto doesn't seem to have caught on, and sometimes a good idea takes a long time to catch on :) Mmm, and maybe McCarthy's search for a universal intermediate representation is on-target here too. I've wondered how Frank would deal with the problem of language barrier, especially around metalanguage. On Fri, Apr 19, 2013 at 1:45 AM, Casey Ransberger casey.obrie...@gmail.com wrote: WRT the 90% guess, I usually go for 80% on stuff like that when I make a SWAG where it smells like a Pareto distribution. http://en.wikipedia.org/wiki/Pareto_principle http://en.wikipedia.org/wiki/Pareto_distribution On Tue, Apr 16, 2013 at 7:52 PM, David Barbour dmbarb...@gmail.com wrote: On Tue, Apr 16, 2013 at 2:25 PM, Steve Wart st...@wart.ca wrote: On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich In real systems, 90% of code (conservatively) is glue code. What is the origin of this claim? I claimed it from observation and experience. But I'm sure there are other people who have claimed it, too. Do you doubt its veracity? On Mon, Apr 15, 2013 at 12:15 PM, David Barbour dmbarb...@gmail.com wrote: On Mon, Apr 15, 2013 at 11:57 AM, David Barbour dmbarb...@gmail.com wrote: On Mon, Apr 15, 2013 at 10:40 AM, Loup Vaillant-David l...@loup-vaillant.fr wrote: On Sun, Apr 14, 2013 at 04:17:48PM -0700, David Barbour wrote: On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich In real systems, 90% of code (conservatively) is glue code. Does this *have* to be the case? Real systems also use C++ (or Java). Better languages may require less glue, (even if they require just as much core logic). Yes. The prevalence of glue code is a natural consequence of combinatorial effects. E.g. there are many ways to partition and summarize properties into data-structures. 
Re: [fonc] 90% glue code
It's on the reading list now. Thank you! On Apr 19, 2013, at 5:56 AM, Alan Kay alan.n...@yahoo.com wrote: Wow, automatic spelling correctors suck, especially early in the morning. The only really good -- and reasonably accurate -- book about the history of Lick, ARPA-IPTO (no D, that is when things went bad), and Xerox PARC is Dream Machines by Mitchell Waldrop. Cheers, Alan From: Alan Kay alan.n...@yahoo.com To: Fundamentals of New Computing fonc@vpri.org Sent: Friday, April 19, 2013 5:53 AM Subject: Re: [fonc] 90% glue code The only really good -- and reasonable accurate -- book about the history of Lick, ARPA-IPTO (no D, that is went things went bad), and Xerox PARC is Dream Machines by Mitchel Waldrop. Cheers, Alan From: Miles Fidelman mfidel...@meetinghouse.net To: Fundamentals of New Computing fonc@vpri.org Sent: Friday, April 19, 2013 5:45 AM Subject: Re: [fonc] 90% glue code Casey Ransberger wrote: This Licklider guy is interesting. CS + psych = cool. A lot more than cool. Lick was the guy who:

- was an MIT professor
- pioneered timesharing (bought the first production PDP-1 for BBN) and AI work at BBN
- served as the initial Program Manager at DARPA/IPTO (the folks who funded the ARPANET)
- was Director of Project MAC at MIT for a while
- wrote some really seminal papers - Man-Computer Symbiosis is right up there with Vannevar Bush's As We May Think

/It seems reasonable to envision, for a time 10 or 15 years hence, a 'thinking center' that will incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval./ /The picture readily enlarges itself into a network of such centers, connected to one another by wide-band communication lines and to individual users by leased-wire services. In such a system, the speed of the computers would be balanced, and the cost of the gigantic memories and the sophisticated programs would be divided by the number of users./ - J.C.R. Licklider, Man-Computer Symbiosis http://memex.org/licklider.html, 1960.

- perhaps the earliest conception of the Internet: In a 1963 memo to Members and Affiliates of the Intergalactic Computer Network, Licklider theorized that a computer network could help researchers share information and even enable people with common interests to interact online. (http://web.archive.org/web/20071224090235/http://www.today.ucla.edu/1999/990928looking.html)

Outside the community he kept a very low profile. One of the greats. Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. Yogi Berra
[fonc] Report Card
I wanted to send this message out after the final status report, but since that's indefinitely delayed (keep going!) I'm just going to do it now. Easy question: has keeping this dialogue open been useful to the folks at VPRI, or has it been more of a burden than anything else? I can definitely say that it's been very good for me, in that I learned a hell of a lot reading all of the lovely papers posters cited. It's also been a lot of fun meeting people who were interested in a lot of the same things that I was. I'm not so happy about my own contribution though. Did I do anything at all to advance the state of the art? Well, no. I mostly just flapped my lips. It's asymmetrical; I learned way more than I taught. The best I could do was play sounding board for some of Ian's ideas while dinking around with Maru's guts. BTW if you haven't looked at it, Maru is way cool. VPRI has done something pretty awesome and weird here, in that the dialogue was wide open the whole time. As I gather, it was in the spirit of ARPA. We've had our share of trolls, long-winded posters (raises hand) and just general chaos. I really enjoyed the guy who called us all a bunch of Alan Kay fanboys the other day by the way. That was just priceless. Like we can't think for ourselves! (Alan if I can get an autograph after this I think I'll be set.) So seriously, has this been worthwhile? I'm not just asking VPRI folks, though I'm DEFINITELY asking VPRI folks, I'm also asking everyone else on the list. I learned a lot, huge win for me, and we talked in circles a bunch, some of that was fun. I can also think of a few parts where I felt pretty strongly that it *was* worthwhile. To throw out an example, remember when Dale Schumacher asked pretty poignantly whether or not the original idea behind objects/messages was similar to the actor model? That was like a blockbuster for nerds, it was so awesome. That totally rocked. That's me. Okay, now talk amongst yourselves. Go!
___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
[fonc] CodeSpells, new thread
So I think the original thread drifted a bit. I'm curious about what folks think of the research involved here. I read the paper. A few things stuck out. The thing I'd mention is that it seemed to work (at least superficially) with getting 12-year-olds to (begin to) tackle a programming language which by my own (prejudiced) standards is a rather difficult choice for *adults* who want to program casually. I guess I also identified with the whole set of things they identified as common among kids who learned to code in a quiet hole without any real support. They say that Java wasn't trying to convert the Lisp crowd, so much as the C++ crowd. Lisp, so far, seems a lot more learnable than C++ but that's beside my interest here. Since one of the things I think we ought to be arguing about in this context is how we scale things like Scratch or Etoys up to the sky and down to the metal, I do think the study is relevant. It maybe helps explain how to deal with the trip to the metal end. Or maybe not. OTOH I didn't feel like there were enough numbers in there. It felt very very soft-science, and maybe there's no way around that. And maybe I have a prejudice about soft science. I got the general sense that the smell meant it was working, though, so I'm really interested in seeing what these folks do next. At the end of the day, what works, works, right? Does anyone here know these researchers? Any chance we might be able to pull them into the dialogue, at the risk of wasting time on BS troll threads? I get the sense these are the kind of people I'd like to see posting here. Anyway they've got some *very* relevant experience now, and I think it would be cool to hear about what they're planning to do next. Just a thought. -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] holy grail of FONC?
I think what you might be seeing is a desire in the community to overcome some of the barriers to advancement that one runs into when building an educational system on top of a less educational system. A classic example might be the challenge an avid Etoys user might face when exiting the walled garden of Etoys and starting to try to program in (in this case) something like Smalltalk. Smalltalk is easier to learn than C++ (thanks!) but a lot harder to learn than Etoys. It's sudden. There isn't a very good gradient to the learning curve. It's kind of like, kid, now you're on your own, and you better learn how to use that sword fast if you wanna survive. You go from safe to true become: false pretty fast. See below for one way to ease the unsafe programming problem that's been a product of the FONC work, called worlds. It would be better, one might argue, if the knowledge of one layer might be able to help explain the knowledge needed to tackle the next lower layer. One way to facilitate this might be to build every layer in terms of the layer beneath it, but also, *linguistically* to describe every layer's language in terms of the same substrate. See also, OMeta. Frank seems to do this -- as far as I can discern without using it -- better than anything we've seen yet. Or anyway that's the hope as I understand it. Of course there are other objectives which are related, like getting the total body of work needed to express a fully working system down to the quanta we really actually need to continue our studies. If we can do that, we may be able to make our studies more precise, more accurate. This could be valuable to both educators and to the thousands of slaves in industry (raises hand.) One initiative which is interesting is Worlds, which could function as a kind of exploratory programmer's undo. This has been covered in various papers on the VPRI writings page, and touched upon IIRC in some of the NSF updates. 
It's actually IMHO one of the unsung heroes of what these people have been up to. My favorite quote from anyone related to this effort comes from a private conversation with Ian Piumarta (Ian, if it wasn't cool to share this, I'll let you hit me in the face one time) and he said this: My mission is to discover the Bose-Einstein condensate of computer programming... Which is (I think) to say: I want to find a way to make quantum effects become apparent at a macroscopic scale. If we can figure out how to do something analogous to that in the context of programming, by re-examining the fundamentals we have taken for granted since the birth of the industry, we may be able to apply that knowledge to build massively simpler large scale systems. Please forgive if I've gone on at length about stuff you already knew. As for the Holy Grail? I don't think it exists. There is no dark side of the moon, really. As a matter of fact, it's all dark. (The redacted part of the original studio recording was: the only thing that makes it look light is the sun.) Hugs and such! Casey On Apr 13, 2013, at 6:34 AM, John Carlson yottz...@gmail.com wrote: Is the holy grail of FONC to create an environment where you can use command line, text editor, IDE, and end-user programming to program the same program? Are there any other ways to program? Circuit boards? I believe FONC includes this. Speech and gestures? Does FONC provide a way to use speech and gestures to program? Is this a bit like Intentional Software? This reminds me a bit of Tcl/Tk as well, where programming command line, program and GUI were integrated. What else out there is trying to encompass all kinds of programming in a cross media way? John ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
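For list members who haven't read the Worlds paper yet, the gist is that a world scopes side effects: reads fall through to the parent world, writes stay local until you commit. Here's a toy Python sketch of those semantics as I understand them from the paper -- my own illustration, not VPRI's implementation, and names like `sprout` and `commit` are just borrowed vocabulary:

```python
class World:
    """A scope for side effects: reads fall through to the parent,
    writes stay local until commit() propagates them upward."""

    def __init__(self, parent=None):
        self.parent = parent
        self.bindings = {}  # writes made in this world only

    def sprout(self):
        """Create a child world for speculative changes."""
        return World(self)

    def set(self, name, value):
        self.bindings[name] = value  # local write; parent unaffected

    def lookup(self, name):
        if name in self.bindings:
            return self.bindings[name]
        if self.parent is not None:
            return self.parent.lookup(name)  # read falls through
        raise KeyError(name)

    def commit(self):
        """Propagate this world's writes into its parent."""
        self.parent.bindings.update(self.bindings)
```

Exploratory undo then falls out for free: sprout a world, mutate away, and simply discard it instead of committing if the experiment went wrong.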
Re: [fonc] Scope? [was: The Fanboy Mailing List With No Productivity]
Below. On Apr 13, 2013, at 7:18 AM, Miles Fidelman mfidel...@meetinghouse.net wrote: Ondřej Bílka wrote: This is just a trash bin for people who don't want to do anything. The real work is probably on noise-free mailing list. This is the fanboy list for Alan Kay. Also cannot resist. Well if you are not satisfied you can establish new list. Then we will have: Though... it does raise the question: what is the intended and/or evolved scope of FONC? For the purposes of discussion here, what constitutes new computing? Is it: a. VPRI's work b. programming paradigms and languages (for which http://lambda-the-ultimate.org/ is really the best forum I've seen) c. computational models and paradigms (e.g., massively concurrent systems, AI) d. leading edge applications e. computing paradigms in the large (e.g., biological computing, quantum computing) f. something else? g. some combination of the above? Kinda hard to tell from the discussions, and http://vpri.org/mailman/listinfo/fonc is silent on the question. Miles Fidelman Oh come on. If you'd read everything here: http://vpri.org/html/writings.php ...or followed the dialogue much, you wouldn't have to ask this question. Your pal, Casey ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
[fonc] Please feel free to change the subject line.
What started off (at least ostensibly) as a conversation about NLP ended up being a conversation about the actor model, and the subject did change once, but to something not AFAIK related to actors. If I were less patient about wading through blah blah I might have missed interesting thoughts about actors, which is relevant to my interests. I'm not referring to the off-topic origin of the thread so much as the fact that the subject line didn't track the context as it shifted. Trying not to be too much of a complainer, and I kind of have to applaud the community for trying patiently to bring a thread kicking and screaming back into the topical, as well as Kim Rose for putting the official foot down about what's too far off topic for discussion on this list, so thank you and thank you. -- Casey ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
[fonc] Bio computer
Someone finally got round to making a cell that acts like a transistor. You knew they'd do it eventually;) http://news.sciencemag.org/sciencenow/2013/03/a-computer-inside-a-cell.html?ref=hp -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Bio computer
I actually misspoke... they're getting actual logic into the cell. So more than one circuit. I had the same thought though! Isn't it funny that it's easier to get a digital circuit into a biological form than it is to grow software like cells? Still though, maybe we can learn something about the latter from the former. Who knows. On Fri, Mar 29, 2013 at 6:46 PM, Iian Neill iian.d.ne...@gmail.com wrote: As great an achievement as that is, shouldn't they be aiming to make a transistor like a cell? :-) Regards, Iian Sent from my iPhone On 30/03/2013, at 11:16 AM, Casey Ransberger casey.obrie...@gmail.com wrote: Someone finally got round to making a cell that acts like a transistor. You knew they'd do it eventually;) http://news.sciencemag.org/sciencenow/2013/03/a-computer-inside-a-cell.html?ref=hp -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
[fonc] About Waiting (Re: About HyperCard ( was Re: [squeak-dev] Getting rid of coloured code))
Let's keep cool and wait. Or, just take the Gezira code that's leaked out so far and then run as fast as we can with it. I have a feeling, though, that waiting might end up working out better than a lot of folks expect. The results of these experiments may confront us with new challenges. Who could ask for anything more? Let's keep our heads and just pay close attention, okay? I think that's our best plan, don't you? Even if what they're doing isn't a product or intended that way, because it's really a big complicated science experiment, I think we will get the best output in terms of understanding the work. If we can understand the work well, we can repeat it without much effort, no? Let us be patient, as we may entreat angels unawares. (Bad quote! I got it all wrong!) :) C On Feb 28, 2013, at 2:02 AM, karl ramberg karlramb...@gmail.com wrote: To run the latest and greatest FONC system the Gezira plugin is necessary. I tried to get the Gezira plugin to compile on Windows but I could not get the tool chain right and got lost in all the quirky stuff. It would be nice if there were compiled plugins for all platforms for download somewhere. Karl On Thu, Feb 28, 2013 at 5:09 AM, Yoshiki Ohshima yoshiki.ohsh...@acm.org wrote: It is not that we at Viewpoints are trying to be secretive, but we do have a newer system (or systems). Hopefully we can put some code out when our report is done. (Sorry for keeping people guessing.) -- -- Yoshiki ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
[fonc] Sources for Functional Reactive Programming
Didn't know this term, and worrying that I was completely misunderstanding the use of the term behavior, I googled and found these: http://conal.net/papers/icfp97/ http://haskell.cs.yale.edu/wp-content/uploads/2011/02/genuinely-functional-guis.pdf There's a Wikipedia article, but it's very sparse. Anything else I should read? -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Sources for Functional Reactive Programming
Got it. Thanks! On Thu, Feb 21, 2013 at 1:33 PM, Yoshiki Ohshima yoshiki.ohsh...@acm.org wrote: On Thu, Feb 21, 2013 at 1:15 PM, Casey Ransberger casey.obrie...@gmail.com wrote: Didn't know this term, and worrying that I was completely misunderstanding the use of the term behavior, I googled and found these: http://conal.net/papers/icfp97/ http://haskell.cs.yale.edu/wp-content/uploads/2011/02/genuinely-functional-guis.pdf There's a Wikipedia article, but it's very sparse. Anything else I should read? Yes, I think it is unfortunate that they picked the term behavior to mean continuous time-varying entity. A generic term behavior does not have the connotation of continuous, as far as I can tell. To me, the Flapjax paper and their tutorial are more intuitive (whatever that means) http://www.flapjax-lang.org/publications/ -- -- Yoshiki ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
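For anyone else googling from scratch: the semantic core in the papers linked above is that a behavior is just a function from continuous time to a value, and combinators lift ordinary functions to operate on behaviors. A deliberately naive Python sketch of that semantic model (not any real FRP library; `constant`, `time`, and `lift2` are names I made up for illustration):

```python
import math

# A "behavior": a continuous, time-varying value, modeled as a
# plain function from time (a float) to a value.

def constant(v):
    """A behavior whose value never changes."""
    return lambda t: v

# The identity behavior: its value at time t is t itself.
time = lambda t: t

def lift2(f, a, b):
    """Lift an ordinary 2-argument function to operate on behaviors."""
    return lambda t: f(a(t), b(t))

# A sine wave with amplitude 2 -- defined everywhere, sampled on demand.
wave = lift2(lambda amp, s: amp * s, constant(2.0), lambda t: math.sin(t))
```

The point the papers make is that this denotation is continuous; a real implementation then has to discretize sampling without betraying that meaning.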
Re: [fonc] Paranoid programming language
That's a good name for a programming language! On Wed, Feb 13, 2013 at 4:20 AM, David Pennell pennell.da...@gmail.com wrote: Malbolge (http://en.wikipedia.org/wiki/Malbolge) was featured on an episode of Elementary. It's named after the eighth circle of hell in Dante's Inferno. Malbolge was so difficult to understand when it arrived that it took two years for the first Malbolge program to appear. The first Malbolge program was not written by a human being; it was generated by a beam search (http://en.wikipedia.org/wiki/Beam_search) algorithm designed by Andrew Cooke and implemented in Lisp (http://en.wikipedia.org/wiki/Lisp_programming_language). -david On Wed, Feb 13, 2013 at 6:06 AM, Miles Fidelman mfidel...@meetinghouse.net wrote: Well, for evocative names, there's always Brainfuck (http://en.wikipedia.org/wiki/Brainfuck) - which is a real language, with derivatives even. And the name is truly accurate. :-) John Carlson wrote: Ah first time I came across a language with such an evocative name. Since I am too paranoid to click on a link, perhaps you could summarize. I did a search and it seemed to indicate that the language was a joke. Sigh. On Feb 12, 2013 7:26 PM, Miles Fidelman mfidel...@meetinghouse.net wrote: John Carlson wrote: Is there a computer language (yes I realize games do this) that works like human languages? With features like misdirection, misinterpretation, volume, persuasion? Can we come up with a social language for computers? No, I'm not talking lojban, I'm talking something semantically and/or syntactically ambiguous. Maybe lingodroids is close. More work in this area would be interesting. Well PPL (Paranoid Programming Language) might come close. 
http://zzo38computer.org/backup/paranoid-programming-language.html :-) -- In theory, there is no difference between theory and practice. In practice, there is. Yogi Berra ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Building blocks and use of text
The next big thing probably won't be some version of Minecraft, even if Minecraft is really awesome. OTOH, you and your kids can prove me wrong today with Minecraft Raspberry Pi Edition, which is free, and comes with _source code_. http://mojang.com/2013/02/minecraft-pi-edition-is-available-for-download/ /fanboy On Wed, Feb 13, 2013 at 5:55 PM, John Carlson yottz...@gmail.com wrote: Miles wrote: There's a pretty good argument to be made that what works are powerful building blocks that can be combined in lots of different ways; So the next big thing will be some version of minecraft? Or perhaps the older toontalk? Agentcubes? What is the right 3D metaphor? Does anyone have a comfortable metaphor? It would seem like if there were an open, federated MMO system that supported object lifecycles, we would have something. Do we have an object web yet, or are we stuck with text forever, with all the nasty security vulnerabilities involved? Yes I agree that we lost something when we moved to the web. Perhaps we need to step away from the document model purely for security reasons. What's the alternative? Scratch and Alice? Storing/transmitting ASTs? Does our reliance on https/ssl/tls which is based on streams limit us? When are we going to stop making streams secure and start making secure network objects? Object-capability security anyone? Are we stuck with documents because they are the best thing for debugging? ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Current topics
One of my favorite examples of this was the Beings master's thesis by Doug Lenat at Stanford in the 70s. And this influenced the partial experiment we did in Etoys 15 years ago. There is probably a nice size for such modules -- large enough to both provide and be well tended, and small enough to minimize internal disasters. An interesting and important design problem is to try to (a) vet this idea in or out, and (b) if in, then what kinds of semi-universal modules would be most fruitful? One could then contemplate trying -- inducing -- to get most programmers to program in terms of these modules (they would be the components of an IDE for commerce, etc., instead of the raw programming components of today). This tack would almost certainly also help the mess the law is in going forward ... Note that desires for runnable specifications, etc., could be quite harmonious with a viable module scheme that has great systems integrity. Cheers, Alan ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Current topics
in two different branches where one of them fixes a blocking bug (which is only a minor nuisance for end users) in the bits you shipped last week, and simultaneously working on next week's release, which depends on the changes in the hotfix. When you're perpetually overworked in that scenario and working on several systems at once, the whole business starts to look like a big ball of wibbly-wobbly, timey-wimey... stuff, and our brains like to view things as a strict progression of cause to effect. What day is it again? And, in trying to tease useful analogies from Biology, one I get is that the largest gap in complexity of atomic structures is the one from polymers to the simplest living cells. (One of my two favorite organisms is *Pelagibacter ubique*, which is the smallest non-parasitic standalone organism. Discovered just 10 years ago, it is the most numerous known bacterium in the world, and accounts for 25% of all of the plankton in the oceans. Still it has about 1300+ genes, etc.) 25%? That's like finding out that the world has been round the whole time, ten years ago. What's interesting (to me) about cell biology is just how much stuff is organized to make integrity of life. Craig Venter thinks that a minimal hand-crafted genome for a cell would still require about 300 genes (and the tiniest whole organism still winds up with a lot of components). The kernel is a big thing for such a small thing, as usual! Analogies should be suspect -- both the one to the law, and the one here should be scrutinized -- but this one harmonizes with one of Butler Lampson's conclusions/prejudices: that you are much better off making -- with great care -- a few kinds of relatively big modules as basic building blocks than to have zillions of different modules being constructed by vanilla programmers. One of my favorite examples of this was the Beings master's thesis by Doug Lenat at Stanford in the 70s. And this influenced the partial experiment we did in Etoys 15 years ago. 
Partial experiment? Can you be more specific? Should I assume that you mean to point at e.g. Morph (and by extension, Object) in Squeak as largish general purpose base objects or cells or genomes? There is probably a nice size for such modules -- large enough to both provide and be well tended, and small enough to minimize internal disasters. I've had lots of fun arguments about this with coworkers. I have a prejudice: I hate having to scroll down, so I tend toward deeply factored systems (like most of Squeak.) The disadvantage there is I probably spend more time mousing about unraveling the timey-wimey ball than I would if I were looking at files with thousands of lines of code. A friend, though (I wish I could remember the name of the paper), put me onto a study where they tried to measure the relationship between defects and function/method size, and the results surprised me. The systems with the shortest methods had more failures than the systems with slightly longer methods, but that fell off a cliff shortly afterward. Longer than a certain size, defect density seemed to hockey stick right through the roof, and at least that part I would expect. BCC friend and former coworker, who might be able to find the paper so I can cite my source on that :/ An interesting and important design problem is to try to (a) vet this idea in or out, and (b) if in, then what kinds of semi-universal modules would be most fruitful? (a) I think this is going to be hard to actually measure on a number of levels, which is to say, fun! One could then contemplate trying -- inducing -- to get most programmers to program in terms of these modules (they would be the components of an IDE for commerce, etc., instead of the raw programming components of today). This tack would almost certainly also help the mess the law is in going forward ... Note that desires for runnable specifications, etc., could be quite harmonious with a viable module scheme that has great systems integrity. 
Cheers, Alan ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] photography and programming
We seem to have changed subjects:) Fine then! If you can make the same camera, or a better one using fewer lenses, you win. I'm a fan of the Hasselblad design. The detachable back end opens up a lot of possibilities. One possibility is to take test shots with the Polaroid back end, for e.g. a quick lighting test before moving on to the expensive film that goes into the standard back, wherein you can't even see what you've shot until you've developed it, which necessarily can't happen until after the shoot. Here's why this is on topic: if we can make a camera that's completely understandable by a single individual, but can't shoot anything but black and white (bear with me, I'm playing with words and concepts a bit) because development of color photos takes too long to be practical, with a design analogous to a Hasselblad, we can just swap out the back and end up with the FONC idea that optimizations can be kept separate from meaning and the math of the meaning in a modular way, and... Now I'm going to do something which is arguably a bit mean: for as many lenses as your SLR eschews, is it easier for you to explain concretely to a novice (for example, a small child) what your SLR does than it is for me to explain how my Hasselblad works? I have a feeling that explaining the actual optical chip is going to be something that's very difficult. Probably, if I tried to teach a kid how a camera works, my victim would have a working camera years before yours would have a real chip that could recognize a single pixel, and my game is mostly made out of a small hole in a milk carton. For all of humankind doing decades of this stuff, I really wish it was the other way around. You should let me play with your SLR sometime:) but I'd honestly rather die developing film in a poorly ventilated darkroom than shoot with a camera that I am neither able, nor allowed to, understand. Does that make sense? Casey P.S. This is one of the better metaphors that I've seen on the list. Awesome! 
On Dec 5, 2012, at 10:21 AM, Randy MacDonald array...@ns.sympatico.ca wrote: If you can span the same space with fewer tools, that is good. If you need 1 lens to cover all subjects, so be it. It sounds like it is a problem of fit, not something independent of the problem space. No need to discuss the benefits of SLR's, that is just stretching the analogy. On 12/4/2012 10:16 PM, John Carlson wrote: Wouldn't it be best to make programming a bit like single lens photography instead of dual (or triple) lens photography? It would seem like the fewer lenses you use, the less likely it would be for one of them to be scratched. Unless somehow there was a compensating factor in the lenses. My 2 bits. Metaphor isn't quite right, but perhaps you see my point. Where's my post-mature optimization? John Damn the torpedoes, we're going full speed ahead and getting nowhere Carlson ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc -- --- |\/| Randy A MacDonald | If the string is too tight, it will snap |\\| array...@ns.sympatico.ca| If it is too loose, it won't play... BSc(Math) UNBF '83 | APL: If you can say it, it's done. Natural Born APL'er | I use Real J Experimental webserver -NTP{ gnat }- ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Obviously Kogge-Stone
Oh, I was mostly fishing to see if anyone was doing anything fun with hardware description. The present time sees a bigger trade between performance and power consumption, but that's all in the realm of optimization. A mentor of mine in the software world once said something to me to the effect of Don't look under the ISA, you don't want to see what's down there. Of course, that only encouraged me to look. After having done so, I'm seeing a lot of the same stuff there that motivated FONC. Cruft built up over generations of deadlines and back compat, etc. I'm pretty sure one could really put the STEPS treatment to hardware design, but there's a catch. If you wanted to run Frank over a machine designed to be understandable (rather than fast and compatible) and Frank hadn't seen anything more in the way of optimization than what the researchers working on it had to overcome with optimizations in order to complete their research, one might end up with a whole system too slow to experiment with. That's conjecture (obviously) given that I don't have Frank to play out over my FPGA rig, but I tend to trust my gut. Hence the post:) So I got to thinking, while I was doing my little armchair exploration of Verilog, what if we could declare/specify behavior with some concrete algebra, and (I know, I know, I added an and) find a way to do optimization in an automated way. Thought experiment: what if I designed a behavior-specifying language, which could be reduced to S-expressions, and then tried to find a set of fitness functions around performance? Would it be possible to converge on a performance goal by ripping off nature? If it was possible, how much compute would I need to converge on an acceptable solution? Probably crazy expensive compute because I'm doing two things that don't want to sleep in the same room... verifying correctness (think a lot of tests in some test language) and optimization (convergence on some set of perf metrics.) 
Not just gen algs but gen programming. This popped into my head after thumbing through this thing: http://www.amazon.com/gp/aw/d/0262111888 Of course, I'm skeptical (of both the content of that book and of my own crazy ideas!) While a performance metric (in some continuum of flops and watts) *does* seem like something GP might be able to optimize, correctness *absolutely* does not. Hence the question about language. Has anyone here read that? I'm quite sure I haven't understood all of the content, and with as obscure as it seems to be, it could be full of bunk. Of course, with regard to obscurity, one might say the same thing of other stuff folks around here know, understand well, and love. I could go into specific ideas I've had, but I won't, because I haven't really a framework for vetting them. Instead, I'm going to ship the Wouldn't It Be Cool If and let the good people of the list slaughter my little thought experiment with the raw unrelenting debunking power of a group of people who like science. Bonus points if anyone can name the Dylan song title that I spoofed for the thread, but that's way OT so please reply direct! Casey On Nov 30, 2012, at 3:35 PM, David Barbour dmbarb...@gmail.com wrote: Could you clarify what you're thinking about? Is your question about metaprogramming of Verilog (with an implicit assumption that Verilog will save battery life)? I've spent much time thinking about language and protocol design to extend battery resources. I happen to think the real wins are at higher levels - avoiding unnecessary work, amortizing work over time, linear logics, graceful degradation of services based on power access. (Questions about power and energy were common in survivable networking courses.) Low level power saving is a common aspect of mobile computer architecture design. But it's hard to push fundamentally better hardware designs without an existing body of software that easily fits it. 
On Nov 30, 2012 2:06 PM, Casey Ransberger casey.obrie...@gmail.com wrote: Since I'm running out of battery, and my adder is starting to go oh so slowly, I thought I might challenge the lovely people of the list to make it stop draining my battery so quickly. :D My first challenge idea was for someone to make it stop raining in Seattle, but I realized that I was asking a lot with that. Verilog would be cool, but better if you're translating whatcha got to Verilog with OMeta, and you've come up with some randomly pretty language for wires! Come on, someone else has to be thinking about this;) -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
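Casey's thought experiment above (a hard correctness constraint wrapped around a soft performance objective) can be sketched with a toy genetic algorithm. This is purely illustrative Python of my own, not anything from the thread or a real hardware flow: the stand-in "correctness check" is a parity test, and candidates that fail it get negative-infinite fitness rather than partial credit, which is exactly why mixing verification with optimization gets expensive so fast:

```python
import random

# Toy GA: evolve an integer vector toward TARGET while treating
# "correctness" (here: matching TARGET's parity) as a hard constraint.
# Invalid candidates score -inf, so selection can't reward them.
TARGET = [3, 1, 4, 1, 5, 9]

def fitness(cand):
    if sum(cand) % 2 != sum(TARGET) % 2:  # stand-in correctness check
        return float('-inf')
    return -sum(abs(a - b) for a, b in zip(cand, TARGET))

def mutate(cand):
    out = list(cand)
    i = random.randrange(len(out))
    out[i] = max(0, out[i] + random.choice([-1, 1]))
    return out

def evolve(generations=500, pop_size=20, seed=0):
    random.seed(seed)
    pop = [[random.randrange(10) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)     # best first
        pop = pop[:pop_size // 2]               # keep the top half
        pop += [mutate(random.choice(pop)) for _ in range(pop_size - len(pop))]
    return max(pop, key=fitness)
```

The cliff-shaped fitness landscape (everything invalid is equally worthless) is the crux: GP can climb a smooth flops-and-watts gradient, but it gets no gradient at all from a correctness predicate, which is why the two "don't want to sleep in the same room."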
[fonc] Obviously Kogge-Stone
Since I'm running out of battery, and my adder is starting to go oh so slowly, I thought I might challenge the lovely people of the list to make it stop draining my battery so quickly. :D My first challenge idea was for someone to make it stop raining in Seattle, but I realized that I was asking a lot with that. Verilog would be cool, but better if you're translating whatcha got to Verilog with OMeta, and you've come up with some randomly pretty language for wires! Come on, someone else has to be thinking about this;) -- Casey Ransberger
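[Editor's note] The subject line's joke has a real circuit behind it: a Kogge-Stone adder computes carries with a parallel-prefix network in O(log n) combine stages instead of rippling them bit by bit. A minimal software sketch of the idea, with plain Python standing in for the Verilog the thread wishes for (the function name and bit width are mine, not anything from the thread):

```python
def kogge_stone_add(a, b, width=8):
    """Add a and b (mod 2**width) using a Kogge-Stone carry network."""
    ai = [(a >> i) & 1 for i in range(width)]
    bi = [(b >> i) & 1 for i in range(width)]
    g = [x & y for x, y in zip(ai, bi)]   # generate: this bit creates a carry
    p = [x ^ y for x, y in zip(ai, bi)]   # propagate: this bit passes a carry on
    # Parallel-prefix combine at distances 1, 2, 4, ...: log2(width) stages,
    # which is why the hardware version beats a ripple-carry adder's n stages.
    G, P, d = g[:], p[:], 1
    while d < width:
        G = [G[i] | (P[i] & G[i - d]) if i >= d else G[i] for i in range(width)]
        P = [P[i] & P[i - d] if i >= d else P[i] for i in range(width)]
        d *= 2
    # After the prefix pass, G[i] is the carry out of bits 0..i,
    # so the carry into bit i is G[i-1] (and 0 into bit 0).
    carry = [0] + G[:-1]
    out = 0
    for i in range(width):
        out |= (p[i] ^ carry[i]) << i
    return out
```

The same prefix structure (distances doubling each stage) is what gets laid out in wires in the hardware version.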
Re: [fonc] Not just clear of mind
I like a nice Benedict with a mimosa on a Saturday brunch, but lots of my friends in Seattle think it's gross and/or immoral that I eat chicken eggs and pig meat (actually I've quit the pig part, so I don't get the Bennie anymore.) I think kids do need some guidance, but the things they're really going to glom onto are the things they figure out that they like all on their own. So: why not take the kid to brunch and get it a Benedict, then, instead of being focused on stopping it from having ice cream? If she likes the Benedict, win. If not, try something else. Bonus points for getting something other than a Benedict for yourself and letting her pick off your plate in the event that the Benedict doesn't suit her fancy. In programming: a buffet of language choices is probably a good plan in general. Most programmers have seen C and something comparable to Perl. Everyone on this list knows that there are more than two ideas. Keeping the stuff in the buffet healthy is key, you're right. But young people, more so than anyone else, desire the freedom to choose. I'd suggest that the real dodge is to give them a long list of healthy things to choose from. Okay, it's 7:13am and now I'm gonna eat some ice cream, crack a beer open, listen to 50 Cent, and do some crimes. When I get back though, Bach is ON. ;) On Mon, Oct 1, 2012 at 9:24 AM, John Pratt jpra...@gmail.com wrote: Children will eat ice cream for breakfast if you don't stop them. -- Casey Ransberger
Re: [fonc] success building maru in maru - a bunch of bugs I had to fix to make it happen
Inline! On Aug 19, 2012, at 3:47 PM, Ian Piumarta i...@vpri.org wrote: Hi Shawn, My starting point was the published sources for Maru 2.1 here: http://piumarta.com/software/maru Thanks for posting this. Your issues with %typecheck, header and buffer were due to generating the evaluator with %define-accessors set to %define-safe-accessors in boot.l. Setting %define-accessors to %define-unsafe-accessors fixes them. Yes, playing it safe ends up in the middle of the road, which isn't safe at all. Your other observations arising from bit rot were spot-on and I've updated the sources and made a maru-2.2 tarball that can generate a working evaluator from emit.l and eval.l. I took the liberty of writing subr_read from scratch, to match the one in eval.c, and updated various other things to better match the current features. Fantastic! Maru is really lovely and new bits are awesomesauce. Although 'make test' and 'make test2' are working again, the languages implemented by eval.c and eval.l are slightly different due to development in the former that has not been carried forward into the latter. That's funny and fascinating. The bootstrap is not evolving alongside the bootstrapped thing. What's funny is that it can still bootstrap, and is still metacircular, while the prototype sits around collecting dust. It's kind of like the smoldering mess that's left after a rocket leaves for the moon. Good work! Regards, Ian I'm really pleased to see that Maru still lives. No BCPL clone? Is Maru it at this point? Feel free of course to decline to comment on this paragraph until the time is right. P.S. You mentioned something about a generation scavenging GC for it, did you do that? 'Cause if you did we might have the roots of a fast (enough) Lisp that's purer than Scheme. 'Scuse the French, but holy shit. --Casey
[fonc] Fundamentals Of New Guitar (was Re: Alan Kay in the news [german])
(top post) Forgive me if I'm keeping us off topic here, but I like music too:) Here's my excuse: working on doing an assembler (using Ian's peg/leg) and emulator for an expanded TinyComputer which I plan to use in a programming game. *cough* My dad had his hands smashed several times while working. Kind of a lousy coincidence. The doctors told him a couple of times that he'd never play guitar again, but he kept at it. He learned to retune his guitar to make playable again certain chords (e.g., the A-shaped barre) which were extremely painful for him to play, and in the process, he expanded his ability by leaps and bounds. He rarely plays in standard tuning anymore, because he can make much more interesting music using open and hybrid tunings. I think in at least one sense his misfortune was my great fortune, because I was exposed early on to an instrument which was not assumed to be fixed, but almost infinitely malleable. There's nothing quite like the sound of a Dm played as a twelve string harmonic, something that just can't be done at all in standard without extra arms. The only thing I could bend further than that guitar was the computer. Makes me think maybe the reason I got into programming languages themselves might have had something to do with the way my dad talked about music. Long story short, I was mugged a few weeks ago, and lost a chunk of my right palm in the process; I was out of commission for a little while but it seems that there was no actual skeletal, neural, or muscular damage, just a really deep flesh wound. I'm lucky and still playing with the band. I was maybe a little bit stupid, because I refused to stop practicing, even when the wound was infected. I've been playing bass with the band because I wanted to grow and I tend to do the same stuff over and over on the guitar. I just slapped some gauze on it, wrapped it tight with tape, took an aspirin and played with a pick for a month until it had healed sufficiently.
Here's to finding new expressiveness in the face of adversity! --Casey On Thu, Jul 19, 2012 at 6:16 PM, Alan Kay alan.n...@yahoo.com wrote: Hi John Sorry to hear about your nerve problems. I got a variety of books to get started -- including Anton Shearer's and Christopher Parkening's. Then I started corresponding with a fabulous and wonderfully expressive player in the Netherlands I found on YouTube-- Enno Voorhorst Check out: http://www.youtube.com/watch?v=viVl-G4lFQ4 I like his approach very much -- part of it is that he started out as a violin player, and still does a fair amount of playing in string quartets, etc. You can hear that his approach to tremolo playing is that of a solo timbre rather than an effect. And some of the violin ideas of little to no support for the left hand do work well on classical guitar. But many of the barres (especially the hinged ones) do require some thumb support. What has been interesting about this process is to find out how much of the basic classical guitar technique is quite different from steel string jazz chops -- it's taken a while to unlearn some spinal reflexes that were developed a lifetime ago. Cheers, Alan -- *From:* John Zabroski johnzabro...@gmail.com *To:* Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing fonc@vpri.org *Sent:* Thursday, July 19, 2012 5:40 PM *Subject:* Re: [fonc] Alan Kay in the news [german] On Wed, Jul 18, 2012 at 2:01 PM, Alan Kay alan.n...@yahoo.com wrote: Hi Long, I can keep my elbows into my body typing on a laptop. My problem is that I can't reach out further for more than a few seconds without a fair amount of pain from all the ligament tendon and rotator cuff damage along that axis. If I get that close to the keys on an organ I still have trouble reaching the other keyboards and my feet are too far forward to play the pedals. Similar geometry with the piano, plus the reaches on the much wider keyboard are too far on the right side. 
Also at my age there are some lower back problems from trying to lean in at a low angle -- this doesn't work. But, after a few months I realized I could go back to guitar playing (which I did a lot 50 years ago) because you can play guitar with your right elbow in. After a few years of getting some jazz technique back and playing in some groups in New England in the summers, I missed the polyphonic classical music and wound up starting to learn classical guitar a little over a year ago. This has proved to be quite a challenge -- much more difficult than I imagined it would be -- and there was much less transfer from jazz/steel string technique than I would have thought. It not only feels very different physically, but also mentally, and has many extra dimensions of nuance and color that are both its charm and what makes it quite a separate learning experience. Cheers, Alan Hey Alan, That's awesome that you are learning classical guitar. Are you
[fonc] Publish/subscribe vs. send
Here's the real naive question... I'm fuzzy about why objects should receive messages but not send them. I think I can see the mechanics of how it might work, I just don't grok why it's important. What motivates? Are we trying to eliminate the overhead of ST-style message passing? Is publish/subscribe easier to understand? Does it lead to simpler artifacts? Looser coupling? Does it simplify matters of concurrency? I feel like I'm still missing a pretty important concept, but I have a feeling that once I've grabbed at it, several things might suddenly fit and make sense.
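[Editor's note] For readers skimming the archive, the contrast the question gestures at can be made concrete. A toy sketch (all names hypothetical, not any particular system from the thread) of the publish/subscribe style, where an object registers interest in a topic instead of being handed a message by a caller that holds a direct reference to it:

```python
class Bus:
    """A toy event bus. Publishers need no reference to receivers,
    which is one candidate answer to the 'looser coupling?' question."""
    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # Deliver to everyone who declared interest, in subscription order.
        for callback in self.subscribers.get(topic, []):
            callback(payload)

# Direct send couples the caller to one known receiver:
#   logger.log("ready")        # caller must hold `logger`
# Publish/subscribe inverts this: receivers opt in.
bus = Bus()
log = []
bus.subscribe("boot", log.append)
bus.subscribe("boot", lambda msg: log.append(msg.upper()))
bus.publish("boot", "ready")
# log is now ["ready", "READY"]
```

The publisher's code never changes as receivers come and go, which is the decoupling (and concurrency-friendliness) usually claimed for this style.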
[fonc] [cross-post] X-Prize guy announces interest in education reform at SXSW, wants ideas
Figured this was broadly applicable enough for a cross-post. Also left a note at the Scratch site. They're talking about bringing someone experienced on to tackle some of the bigger (read: social) challenges around education reform, and they also seem to be looking for some kind of killer app. Has anyone got a killer app lying around for education that mostly just needs some very large carrot dangled and a whole lot of love to see implemented in schools? -- Casey Ransberger
Re: [fonc] [cross-post] X-Prize guy announces interest in education reform at SXSW, wants ideas
And I forgot the link. Naturally :/ http://www.forbes.com/sites/georgeanders/2012/03/11/x-prize-founder-seeks-ideas-to-fix-education/ On Mon, Mar 12, 2012 at 11:53 AM, Casey Ransberger casey.obrie...@gmail.com wrote: Figured this was broadly applicable enough for a cross-post. Also left a note at the Scratch site. They're talking about bringing someone experienced on to tackle some of the bigger (read: social) challenges around education reform, and they also seem to be looking for some kind of killer app. Has anyone got a killer app lying around for education that mostly just needs some very large carrot dangled and a whole lot of love to see implemented in schools? -- Casey Ransberger -- Casey Ransberger
Re: [fonc] OT: Hypertext and the e-book
Below. On Mar 7, 2012, at 3:13 PM, BGB cr88...@gmail.com wrote: thoughts: admittedly, I am not really much of a person for reading fiction (I tend mostly to read technical information, and most fictional material is more often experienced in the form of movies/TV/games/...). I did find the article interesting though. I wonder: why really do some people have such a thing for traditional books? they are generally inconvenient, can't be readily accessed: they have to be physically present; one may have to go physically retrieve them; it is not possible to readily access their information (searching is a pain); ... Books? First, the smell. Especially old books. I have a friend who has a Kindle. It smells *nothing* like a library, and I do think something is lost there. It's also, ironically, the weight of them. The sense of holding something *real* that in turn holds information. When you move, it takes work to keep a book, so one tends to keep the most important books one has, whereas with digital we just keep whatever we have rights to read, because there's no real expense in keeping. We also can't really share, at least not yet. Not in any legal model. Second: when I finish a book, I usually give it away to someone else who'd enjoy it. Unless I've missed a headline, I can't do this with ebooks any more readily than that dubstep-blackmetal-rap album we still need to record when I buy it on iTunes (or whatever.) ;)
[fonc] How Books Work (was: something about ebooks)
Here's a book. You read it. Do you know how it works? Of course. Lots of people, most people know how your book works. Can you tell me everything about how a Kindle works? If it stopped working, would you know how to fix it?
Re: [fonc] Error trying to compile COLA
Below. On Feb 29, 2012, at 5:43 AM, Loup Vaillant l...@loup-vaillant.fr wrote: Yes, I'm aware of that limitation. I have the feeling however that IDEs and debuggers are overrated. When I'm Squeaking, sometimes I find myself modeling classes with the browser but leaving method bodies to 'self break' and then write all of the actual code in the debugger. Doesn't work so well for hacking on the GUI, but, well. I'm curious about 'debuggers are overrated' and 'you shouldn't need one.' Seems odd. Most people I've encountered who don't use the debugger haven't learned one yet. At one company (I'd love to tell you which but I signed a non-disparagement agreement) when I asked why the standard dev build of the product didn't include the debugger module, I was told you don't need it. When I went to install it, I was told not to. I don't work there any more...
Re: [fonc] Sorting the WWW mess
On Thu, Mar 1, 2012 at 7:04 AM, Alan Kay alan.n...@yahoo.com wrote: Hi Loup snip However, Ted Nelson said a lot in each of the last 5 decades about what kinds of linking do the most good. (Chase down what he has to say about why one-way links are not what should be done.) He advocated from the beginning that the provenance of links must be preserved (which also means that you cannot copy what is being pointed to without also copying its provenance). This allows a much better way to deal with all manner of usage, embeddings, etc. -- including both fair use and also various forms of micropayments and subscriptions. If only we could find a way to finally deal with all that intertwingularity! One way to handle this requirement is via protection mechanisms that real objects can supply. Cheers, Alan -- *From:* Loup Vaillant l...@loup-vaillant.fr *To:* fonc@vpri.org *Sent:* Thursday, March 1, 2012 6:36 AM *Subject:* Re: [fonc] Sorting the WWW mess Martin Baldan wrote: That said, I don't see why you have an issue with search engines and search services. Even on your own machine, searching files with complex properties is far from trivial. When outside, untrusted sources are involved, you need someone to tell you what is relevant, what is not, who is lying, and so on. Google got to dominate that niche for the right reasons, namely, being much better than the competition. I wasn't clear. Actually, I didn't want to state my opinion. I can't find the message, but I (incorrectly?) remembered Alan saying that one-way links basically created the need for big search engines. As I couldn't imagine an architecture that could do away with centralized search engines, I wanted to ask about it. That said, I do have issues with Big Data search engines: they are centralized. That alone gives them more power than I'd like them to have. If we could remove the centralization while keeping the good stuff (namely, finding things), that would be really cool. Loup. 
-- Casey Ransberger
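[Editor's note] To make the two-way-link idea in this thread concrete: a toy sketch (an entirely hypothetical API, not Xanadu's actual design) of a link store that records both endpoints plus provenance, so that moving a target can repoint every referring link instead of leaving the dangling one-way links the thread complains about:

```python
class LinkStore:
    """Hypothetical two-way link registry. Every link is visible from
    both endpoints and carries its provenance, per the thread's premise."""
    def __init__(self):
        self.links = []  # each link: {"source", "target", "provenance"}

    def link(self, source, target, provenance):
        self.links.append({"source": source, "target": target,
                           "provenance": provenance})

    def incoming(self, target):
        # The two-way property: a target can enumerate its referrers.
        return [l for l in self.links if l["target"] == target]

    def move(self, old_target, new_target):
        # Because links are two-way, the store can find and update
        # every reference; a one-way link would silently dangle here.
        for l in self.incoming(old_target):
            l["target"] = new_target

store = LinkStore()
store.link("essay.txt", "quotes/ted.txt", provenance="casey, 2014")
store.move("quotes/ted.txt", "archive/ted.txt")
# store.incoming("archive/ted.txt") now holds the repointed link
```

A real system would of course need this registry to be distributed and tamper-evident (the thread's public-key aside); this only shows why bidirectionality makes the folder-move problem tractable at all.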
Re: [fonc] Error trying to compile COLA
Inline. On Thu, Mar 1, 2012 at 2:56 PM, Loup Vaillant l...@loup-vaillant.fr wrote: On 01/03/2012 at 22:58, Casey Ransberger wrote: Below. On Feb 29, 2012, at 5:43 AM, Loup Vaillant l...@loup-vaillant.fr wrote: Yes, I'm aware of that limitation. I have the feeling however that IDEs and debuggers are overrated. When I'm Squeaking, sometimes I find myself modeling classes with the browser but leaving method bodies to 'self break' and then write all of the actual code in the debugger. Doesn't work so well for hacking on the GUI, but, well. Okay I take it back. Your use case sounds positively awesome. It's fun:) I'm curious about 'debuggers are overrated' and 'you shouldn't need one.' Seems odd. Most people I've encountered who don't use the debugger haven't learned one yet. Spot on. The only debugger I have used up until now was a semi-broken version of gdb (it tended to miss stack frames). Oh, ouch. Missed frames. I hate it when things are ill-framed. I can't say I blame you. GDB is very *NIXy. Not really very friendly to newcomers. Crack open a Squeak image and break something. It's a whole different experience. Where is this nil value coming from? is a question that I can answer more easily in a ST-80 debugger than I can in any other that I've tried (exception of maybe Self.) The button UI on the thing could probably use a bit of modern design love (I'm sure I'm going to be trampled for saying so!) but otherwise I think it's a great study for what the baseline debugging experience ought to be for a HLL (why deal with less awesome when there's more awesome available under the MIT license as a model to work from?) Of course, I'm saying *baseline.* Which is to say that we can probably go a whole lot further with these things in the future. I'm still waiting on that magical OmniDebugger that Alessandro Warth mentioned would be able to deal with multiple OMeta-implemented languages;) Loup.
-- Casey Ransberger
Re: [fonc] COLAs or CLOAs? : are lambda systems fundamentally simpler than object systems?
There's always http://en.wikipedia.org/wiki/Actor_model and http://www.dalnefre.com/wp/humus/ ...which seem to make concurrency less of a PITA. Like most languages that crystalize a particular style, though, there's some learning involved for folks (like me!) who hadn't really thought about the actor model in a deep way. On Sun, Feb 12, 2012 at 9:15 AM, Steve Wart st...@wart.ca wrote: Simplicity, like productivity, is an engineering metric that can only be measured in the context of a particular application. Most successful programming languages aren't mathematically pure but some make it easier than others to use functional idioms (by which I mean some mechanism to emulate the types of operations you describe below). However many problems are simpler to solve without such abstractions. That's why practical languages like Smalltalk-80 and Lisp and their mainstream derivatives have succeeded over their more pure counterparts. I think I responded to this message because I was sick in bed yesterday reading the intro paragraphs of The Algorithmic Beauty of Plants ( http://algorithmicbotany.org/papers/#abop), and shared the introduction of DOL-systems with my son. We both enjoyed it, but I'm not sure how this translates into reasoning about distributed systems. Can the distributed computation model you describe be formalized as a set of rewrite rules, or is the black box model really about a protocol for message dispatch? Attempts to build distributed messaging systems haven't been particularly simple. In fact I consider both CORBA and Web Services to be failures for that reason. It's very difficult to use OO in this way without imposing excessive knowledge about the internal representation of objects if you need to serialize parameters or response objects. HTTP seems to have avoided this by using MIME types, but this is more about agreed upon engineering standards rather than computational abstractions.
Cheers, Steve On Sun, Feb 12, 2012 at 4:02 AM, Jakob Praher j...@hapra.at wrote: We would have to define what you mean by the term computation. Computation is a way to transform a language syntactically by defined rules. The lambda calculus is a fundamental way of performing such transformation via reduction rules (the alpha, beta, eta rules). In the end the beta-reduction is term substitution. But abstraction and substitution in a general-purpose von Neumann-style computer have to be modelled accordingly: A variable in the computer is a memory location/a register that can be updated (but it is not a 1:1 correspondence). E.g. A function in a computer is a jump to a certain code location, having to write to certain locations in memory/registers to get the arguments passed. IMHO the computational model of objects and method dispatch is more of a black box / communication-oriented model. One does not know much about the destination and dispatches a message, interpreting the result. In functional languages the model is more white boxed. One can always decompose a term into subterms and interpret it. Therefore functional languages do not grow easily to distributed programming, where the knowledge over the terms is limited. Best, Jakob -- Casey Ransberger
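[Editor's note] Jakob's point that beta-reduction "is term substitution" can be shown in a few lines. A sketch (capture-naive, so it assumes all bound variable names are distinct; the term encoding is mine) of one normal-order reduction step over a tiny term representation:

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", func, arg)

def substitute(term, name, value):
    """[name := value]term, without alpha-renaming (distinct names assumed)."""
    tag = term[0]
    if tag == "var":
        return value if term[1] == name else term
    if tag == "lam":
        if term[1] == name:           # binder shadows the name; stop here
            return term
        return ("lam", term[1], substitute(term[2], name, value))
    return ("app", substitute(term[1], name, value),
                   substitute(term[2], name, value))

def beta_reduce(term):
    """One beta step: (\\x. body) arg  ->  body[x := arg]."""
    if term[0] == "app" and term[1][0] == "lam":
        func = term[1]
        return substitute(func[2], func[1], term[2])
    return term

# K combinator: (\x. \y. x) applied to a free variable `a`
k = ("lam", "x", ("lam", "y", ("var", "x")))
reduced = beta_reduce(("app", k, ("var", "a")))
# reduced == ("lam", "y", ("var", "a"))
```

This is the "white box" Jakob describes: the whole computation is visible as syntax being rewritten, with nothing hidden behind a dispatch boundary.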
Re: [fonc] Mitsubishi Luggage Tag, third try
Awesome:) On Feb 8, 2012, at 8:51 AM, Alan Kay alan.n...@yahoo.com wrote: Cheers, Alan eRAM direct scan.jpg
Re: [fonc] One more year?!
Below. On Jan 21, 2012, at 6:26 PM, BGB cr88...@gmail.com wrote: like, for example, if a musician wanted to pursue various musical forms. say, for example: a dubstep backbeat combined with rap-style lyrics sung using a death-metal voice or similar, without the man (producers, ...) demanding all the time that they get a new album together (or that their fans and the man expect them to stay with their existing sound and theme), and if they just gave them something which was like and so wub-wub-wub, goes the sub-sub-sub, as the lights go blim-blim-blim, as shorty goes rub-run-run, on my hub-hub-hub, as my rims go spin-spin-spin or something... (all sung in deep growls and roars), at which point maybe the producers would be very unhappy (say, if he was hired on to be part of a tween-pop boy-band, and adolescent females may respond poorly to bass-filled wubbing growl-rap, or something...). or such... This is probably the raddest metaphor that I have ever seen on a mailing list. BGB FTW! P.S. If you want to get this song out the door, I'm totally in. Dubsteprapmetal might be the next big thing. I can do everything except the drums. We should write an elegant language for expressing musical score in OMeta and use a simulated orchestra!
Re: [fonc] Inspired 3D Worlds
Check out Ken Musgrave. He makes whole planets with fractals. It's cool. Twisting knobs is a lot less work than manual 3D modeling and such. http://www.amazon.com/gp/aw/d/1558608486 On Jan 16, 2012, at 7:31 PM, BGB cr88...@gmail.com wrote: On 1/16/2012 6:47 PM, Casey Ransberger wrote: Top post. Heightmapping can go a really long way. Probably not news though:) I am still not certain, since a lot of this has a lot more to do with my own project than with general issues in computing. I had messed with a few technologies already. height-maps (long ago, not much used since then, generally randomized). the issue was mostly one of being not terribly interesting, but it makes sense if one wants terrain (and is fairly cheap in terms of memory use and performance impact). a more advanced variety would be to combine a height-map with a tile-map, where the terrain generator would also vary the texture-map to give a little more interest. I have considered this as a possibility. also tried randomly generated voxel terrain (similar to Minecraft, using perlin noise). issues were of being difficult to integrate well with my existing technology, and being very expensive in terms of both rendering and memory usage (particularly for storing intermediate meshes). one may need to devote about 500MB-1GB of RAM to the problem to have a moderately sized world (with similar specifics to those in Minecraft). I suspect that, apart from making something like Minecraft, the technology is a bit too expensive and limited to really be all that generally useful at this point in time and on current hardware (I suspect, however, it will probably be much more relevant on future HW). I also tried randomly generated grid-based areas (basically, stuff is built from pre-made parts and randomly-chosen parts are put on a grid). I had also tried combining this with maze-generation algorithms. the results were functional but also nothing to get excited about.
the big drawback was that I couldn't really think of any way to make the results of such a grid based generator particularly interesting (this is I think more so with a first-person viewpoint: such a structure is far less visually interesting from the inside than with a top-down or isometric view). it could work if one were sufficiently desperate, but I doubt it would be able to hold interest of players for all that long absent something else of redeeming value. the main maps in my case mostly use Quake/Doom3/... style maps, composed mostly of entities (defined in terms of collections of key/value pairs representing a given object), brushes (convex polyhedra), patches (Bezier surfaces), and meshes (mostly unstructured polygonal meshes). these would generally be created manually, by placing every object and piece of geometry visible in the world, but this is fairly effort-intensive, and simply running head first into it tends to quickly drain my motivation (resulting in me producing worlds which look like big boxes with some random crap in them). sadly, random generation not on a grid of some sort is a much more complex problem (as is random generation directly in terms of unstructured or loosely-structured geometry). fractals exist and work well on things like rocks or trees or terrain, but I haven't found a good way to apply them to the general map generation problem (such as generating an interesting place to run around in and battle enemies, and get to the exit). the problem domain is potentially best suited to some sort of maze algorithm, but in my own tests, this fairly quickly stopped being all that interesting. the upper end I think for this sort of thing was likely the .Hack series games (which had a lot of apparently randomly generated dungeons). it is sad that I can't seem to pull off maps even half as interesting as those (generally created by hand) in commercial games from well over a decade ago.
I can have a 3D engine which is technically much more advanced (or, at least, runs considerably slower on much faster hardware with moderately more features), but apart from reusing maps made by other people for other games, I can't make it even a small amount nearly as interesting or inspiring. On Jan 16, 2012, at 8:45 AM, David Barbour dmbarb...@gmail.com wrote: Consider offloading some of your creativity burden onto your computer. The idea is: It's easier to recognize and refine something interesting than to create it. So turn it into a search, recognition, and refinement problem, and automate creation. There are various techniques, which certainly can be combined: * constraint programming * generative grammar programming * genetic programming * seeded fractals You might be surprised about how much of a world can be easily written with code rather than mapping. A map can be simplified by marking regions
Re: [fonc] Inspired 3D Worlds
Top post. Heightmapping can go a really long way. Probably not news though:) On Jan 16, 2012, at 8:45 AM, David Barbour dmbarb...@gmail.com wrote: Consider offloading some of your creativity burden onto your computer. The idea is: It's easier to recognize and refine something interesting than to create it. So turn it into a search, recognition, and refinement problem, and automate creation. There are various techniques, which certainly can be combined: * constraint programming * generative grammar programming * genetic programming * seeded fractals You might be surprised about how much of a world can be easily written with code rather than mapping. A map can be simplified by marking regions up with code and using libraries of procedures. Code can sometimes be simplified by having it read a simple map or image. Remember, the basic role of programming is to automate that which bores you. Regards, Dave On Sun, Jan 15, 2012 at 4:18 PM, BGB cr88...@gmail.com wrote: I am generally personally stuck on the issue of how to make interesting 3D worlds for a game-style project while lacking in both personal creativity and either artistic skill or a team of artists to do it (creating decent-looking 3D worlds generally requires a fair amount of effort, and is in-fact I suspect somewhat bigger than the effort required to make a passable 3D model of an object in a 3D modeling app, since at least generally the model is smaller and well-defined). it seems some that creativity (or what little of it exists) is stifled by it requiring a large amount of effort (all at once) for the activity needed to express said creativity (vs things which are either easy to do all at once, or can be easily decomposed into lots of incremental activities spread over a large period of time). trying to build a non-trivial scene (something which would be passable in a modern 3D game) at the level of dragging around and placing/resizing/... 
cubes and/or messing with individual polygon-faces in a mapper-tool is sort of a motivation killer (one can wish for some sort of higher level way to express the scene). meanwhile, writing code, despite (in the grand scale) requiring far more time and effort, seems to be a lot more enjoyable (but, one can't really build a world in code, as this is more the mapper-tool's domain).
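[Editor's note] One of the "seeded fractals" David mentions can be illustrated cheaply: midpoint displacement, the one-dimensional cousin of diamond-square, generates terrain-like heightlines by recursively perturbing midpoints with a shrinking random spread. A sketch (function and parameter names are mine; the fixed seed is what makes it a "seeded" fractal, i.e. the same world every run):

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5, seed=42):
    """Fractal heightline: returns 2**depth + 1 heights.

    Each pass halves the segments and displaces every new midpoint by a
    random amount; shrinking the spread each pass by `roughness` is what
    gives the self-similar, terrain-like look."""
    rng = random.Random(seed)
    heights = [float(left), float(right)]
    spread = 1.0
    for _ in range(depth):
        nxt = []
        for a, b in zip(heights, heights[1:]):
            nxt.append(a)
            nxt.append((a + b) / 2 + rng.uniform(-spread, spread))
        nxt.append(heights[-1])   # keep the right endpoint
        heights = nxt
        spread *= roughness
    return heights

line = midpoint_displacement(0.0, 0.0, depth=6)
print(len(line))  # 65 sample heights
```

The 2D version (diamond-square) applies the same shrinking-displacement idea over a grid, which is the usual cheap route to the heightmaps discussed upthread.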
Re: [fonc] Debugging PEGs and Packrats
Inline. Thanks for your reply! On Dec 28, 2011, at 12:20 PM, John Leuner je...@subvert-the-dominant-paradigm.net wrote: Hi Casey In my OMeta implementations I have found that simply recording the position of the deepest error and then printing out the remaining input text was sufficient to debug my grammars. This is something I hadn't thought of. It would tell one where the grammar choked. That's interesting. I think the next step I would take (if that wasn't sufficient) is to create a test suite that tests each grammar rule independently, successively building up to the complex input that is failing. I'd been doing this for terminals, but it seems like non-terminals get harder to write as unit tests, parser state and all. I suppose integration tests could work, but it seems the more complex the grammatical structure, the more I experience diminishing returns with tests. I just end up with tests that fail mysteriously which is my original problem. Do you have any examples about? I wonder if maybe there's something essential I'm failing to understand. Maybe looking at your tests might send me along to an a-ha moment. Thanks again! John On Tue, 2011-12-13 at 23:17 -0800, Casey Ransberger wrote: I know this has come up before. Hopefully I'm not about to repeat a lot. Debugging this stuff just seems really hard. And significantly harder than what I've experienced working with e.g. Yacc. Hypothesis: Yacc had a lot of time to bake before I ever found it. PEGs are new, so there's been less overall experience with debugging them. I've experimented in what little time I can devote with OMeta, PetitParser, and Treetop. The debugging experience has been roughly consistent across all three. One particular issue which has bugged me: memoization seems to carry a lot of instance-state that's really hard to comprehend when the grammar isn't working as I expect. It's just really hard to use that ocean of information to figure out what I've done wrong. 
Given that with these new parsing technologies, we're pretty lucky to see parse error as an error message, I can't help but think that it's worth studying debugging strategies. Heh. :D I'm really not complaining, I'm just pointing it out. Has anyone here found any technique(s) which makes debugging a grammar written for a PEG/packrat less of a pain in the butt? I'd be really interested in hearing about it. ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
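John's deepest-error trick from the thread is simple enough to sketch. The following is a hypothetical minimal PEG-style matcher (my own construction, not OMeta, PetitParser, or Treetop): every failed match updates a single farthest-failure position, and on overall failure, printing the remaining input from that point shows where the grammar choked.

```python
class Parser:
    def __init__(self, text):
        self.text = text
        self.farthest = 0  # deepest position at which any rule failed

    def fail(self, pos):
        self.farthest = max(self.farthest, pos)
        return None  # None signals "no match"

    def lit(self, s, pos):
        if self.text.startswith(s, pos):
            return pos + len(s)
        return self.fail(pos)

# grammar: expr <- "a" ("+" "a")*   (just enough to demonstrate)
p = Parser("a+a+b")

def atom(pos):
    return p.lit("a", pos)

def plus_atom(pos):
    pos = p.lit("+", pos)
    return atom(pos) if pos is not None else None

def expr(pos):
    pos = atom(pos)
    while pos is not None:
        nxt = plus_atom(pos)
        if nxt is None:
            break
        pos = nxt
    return pos

end = expr(0)
if end != len(p.text):
    # the farthest-failure position points straight at the problem
    print("parse error, remaining input: %r" % p.text[p.farthest:])
    # prints: parse error, remaining input: 'b'
```

The nice property is that backtracking alternatives don't bury the real error: even if an earlier choice "succeeds" by consuming less input, the deepest failure recorded anywhere tells you where the parse actually went wrong.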
Re: [fonc] History of computing talks at SJSU
Below. On Dec 16, 2011, at 9:03 PM, Wesley Smith wesley.h...@gmail.com wrote: Some things are just expensive. No one has found an acceptable solution. These are things we should avoid in the infrastructure underneath a personal computing experience:) Or figure out how to amortize them over time. I think recent raytracing apps are a good example of this. You can preview the image as it is rendered to see if it's just right and if not, tweak it. Another example is scraping data to build a database that will inform autocompletion and other productivity enhancing UI effects. Sometimes gathering and parsing out the data to put in the database can be expensive, but it can easily be done in a background thread without any cost to responsiveness. I'm sure there are plenty of other examples. wes Totally. Look for new ways to make expensive things cheap. Look for ways to turn NP-complete linear! And never ever stop trying. Just don't put factorial complexity in my email client if you can avoid it:);):P
Re: [fonc] History of computing talks at SJSU
That's really funny:) On Dec 16, 2011, at 7:13 PM, John Zabroski johnzabro...@gmail.com wrote: On Fri, Dec 16, 2011 at 10:10 PM, John Zabroski johnzabro...@gmail.com wrote: On Fri, Dec 16, 2011 at 10:04 PM, Jecel Assumpcao Jr. je...@merlintec.com wrote: Steve Dekorte wrote: [NeXTStation memories versus reality] I still have a running Apple II. My slowest working PC is a 33MHz 486, so I can't directly do the comparison I mentioned. But I agree we shouldn't trust what we remember things feeling like. -- Jecel The Apple booting up faster was not simply a feeling, but a fact owing to its human-computer interaction demands. They set fast boot as a design criterion. Jef Raskin talks about this in the book The Humane Interface. Even modern attempts to reduce boot time have not been that good, such as upstart, an event-driven alternative to init. Eugen has some very good points about human limits of managing performance details, though. Modern approaches to performance are already moving away from such crude methods. By the way, slight tangent: Modern operating systems, with all their hot-swapping requirements, do a poor job distinguishing device error from continuously plugging-in and plugging-out the device. For example, if you have an optical mouse and damage it, it might slowly die and your entire system will hang because 99% of your CPU will be handling plugin and plugout events.
Re: [fonc] History of computing talks at SJSU
Below. Abridged. On Dec 16, 2011, at 1:42 PM, Steve Dekorte st...@dekorte.com wrote: FWIW, in my memory, my old NeXTstation felt as snappy as modern desktops but when I ran across one at the Computer History Museum it felt painfully slow. I've had similar experiences with seeing old video games and finding the quality of the graphics to be much lower than I remembered. This is just a guess, but I suspect what we remember is strongly influenced by our emotional reactions which in turn are shaped by our expectations. At the time, my expectations were lower. This is an excellent point. At work I'm using a 32-bit single core machine that's 0.6 GHz slower than my personal 64-bit dual core machine. Once in a while, I notice that it's slower. I have a feeling, though, that this is a consequence of slower hardware *compounded* by expensive software, because most of the time, I can't tell the difference at all. What I'm saying is in part that the computational power of modern computers typically eclipses my personal need for computing power. When things are suddenly slow, I suspect algorithm/data structure. Whereas: it used to be that everything seemed to take a long time. Some things are just expensive. No one has found an acceptable solution. These are things we should avoid in the infrastructure underneath a personal computing experience:)
Re: [fonc] History of computing talks at SJSU
Below. On Dec 16, 2011, at 3:19 PM, Alan Kay alan.n...@yahoo.com wrote: And what Engelbart was upset about was that the hands out -- hands together style did not survive. The hands out had one hand with the 5 finger keyboard and the other with the mouse and 3 buttons -- this allowed navigation and all commands and typing to be done really efficiently compared to today. Hands together on the regular keyboard only happened when you had bulk typing to do. Are you talking about the so-called chording keyboard? I had an idea years ago to have a pair of twiddlers (the one chording keyboard I'd seen was called a twiddler) which tracked movement of both hands over the desktop, basically giving you two pointing devices and a keyboarding solution at the same time. Now it's all trackpads and touch screens, and my idea seems almost Victorian:)
Re: [fonc] History of computing talks at SJSU
Inline and greatly abridged. On Dec 14, 2011, at 5:09 PM, Jecel Assumpcao Jr. je...@merlintec.com wrote: About Joss, we normally like to plot computer improvement on a log scale. But if you look at it on a linear scale, you see that many years go by initially where we don't see any change. So the relative improvement in five years is more or less the same no matter what five years you pick, but the absolute improvement is very different. When I needed a serious computer for software development back in 1985 I built an Apple II clone for myself, even though that machine was already 8 years old at the time (about five Moore cycles). That's just so cool. Someday I want to make an Apple IIgs clone because that thing rocked and the emulator I have is dog-slow:/ but we've talked about that before! The state of the art in personal computers at the time was the IBM PC AT (6MHz iAPX286) which was indeed a few times faster than the Apple II, but not enough to make a qualitative difference for me. If I compare a 1992 PC with one from 2000, the difference is far more important to me. Okay so this is where stuff gets funny to me. My computer, if you look at the clock and the cores, is blazing fast. You can see it once in a while: when doing something graphically intensive (the GPU is also really fast) or something straightforwardly computationally expensive, like compiling C code with all of the optimizations on. But in general... my computer is only a tiny bit faster than the one I had in the early nineties. In terms of day-to-day stuff, it's only gotten a tiny bit faster. Sometimes I sit there looking at an hourglass or a beach ball and think to myself: this only used to happen when I was waiting on a disk to spin about. There isn't even a disk in this thing. What the hell? Hypothesis: Mainstream software slows down at a rate slightly less than mainstream hardware speeds up. It's an almost-but-not-quite-inverse Moore's Law.
Unless someone else has called this out directly, I'm calling it Joe's Law, because I don't want to deal with the backlash! Cheers, -- Jecel
[fonc] Debugging PEGs and Packrats
I know this has come up before. Hopefully I'm not about to repeat a lot. Debugging this stuff just seems really hard. And significantly harder than what I've experienced working with e.g. Yacc. Hypothesis: Yacc had a lot of time to bake before I ever found it. PEGs are new, so there's been less overall experience with debugging them. I've experimented in what little time I can devote with OMeta, PetitParser, and Treetop. The debugging experience has been roughly consistent across all three. One particular issue which has bugged me: memoization seems to carry a lot of instance-state that's really hard to comprehend when the grammar isn't working as I expect. It's just really hard to use that ocean of information to figure out what I've done wrong. Given that with these new parsing technologies, we're pretty lucky to see parse error as an error message, I can't help but think that it's worth studying debugging strategies. Heh. :D I'm really not complaining, I'm just pointing it out. Has anyone here found any technique(s) which makes debugging a grammar written for a PEG/packrat less of a pain in the butt? I'd be really interested in hearing about it.
Re: [fonc] new document
+1 is a low-bandwidth way to express yes, let's explore what this person is talking about some more because this is interesting. If it's a problem, maybe the rule should be to place [+1] before the subject line. But that's going to split the thread in lots of mail readers... Especially when there's a strong polar dispute between people on squeak-dev, this has worked really well in the short time I've been around. One time I got in trouble for saying +1000. I guess I'm only allowed one. :) Casey On Nov 8, 2011, at 5:14 PM, Joel Healy joel.h.he...@gmail.com wrote: I started this +1 thing. It was indeed referring to Sean's comment (that is why Sean's comment was quoted). I thought about this post before I made it. Normally I would not spam a thread with a me too comment, but in this case I thought that it was important to let everyone know that there are people who read this list and website who may not feel qualified to participate much but none the less rely on them as a vital source of information. Dave, I did not realize that you owned this topic. I wasn't even aware that you started this topic. If I infringed on your intellectual property rights, I apologize. As far as moving my comment to a new topic, that just doesn't seem reasonable to me. Starting a new topic with I agree with what Sean said in another topic seems to me to be a poor organizational structure. If I violated some list etiquette by expressing my opinion as +1, I do sincerely apologize. Perhaps my opinion is of no value. I doubt that I will offer any in the future, so there is no need to chastise me further. Regards, Joel On Tue, Nov 8, 2011 at 5:32 PM, David Barbour dmbarb...@gmail.com wrote: `+1`? Really? I seriously do not appreciate having my mail spammed in this manner. If you're offering an opinion on the article, try to say something specific and relevant to those who might have skimmed it. Which parts interested you? 
If you're referring to Sean's comment for recording the outreach events, please consider moving it to another topic. On Tue, Nov 8, 2011 at 3:17 PM, Julian Leviston jul...@leviston.net wrote: +1 On 09/11/2011, at 6:30 AM, Kevin Driedger wrote: +1 !! ]{evin ])riedger On Tue, Nov 8, 2011 at 1:05 PM, Joel Healy joel.h.he...@gmail.com wrote: +1 Joel Healy On Tue, Nov 8, 2011 at 10:49 AM, DeNigris Sean s...@clipperadams.com wrote: On Nov 7, 2011, at 6:08 PM, karl ramberg wrote: http://www.vpri.org/pdf/tr2011004_steps11.pdf It's so exciting to watch the project come along. I can't wait to eventually play with it! With every annual report, I think what a shame it is that there are so many talks given about it (~20 this year) and so few (~3) are recorded. Given the vital importance of this project, and all the work that must go into preparing the talks, it seems like a great waste to share this knowledge with only the few academics who happen to be at the various conferences. I attend about 6 conferences a year and still feel like I'm missing all the fun. Why doesn't VPRI just take the bull by the horns and record them even if the conferences don't? Consumer video equipment is so good now, it probably wouldn't cost anything but a few conversations - even an iPhone video could work! Sean
[fonc] (eval metacircularly forever)
Re: [fonc] Tension between meta-object protocol and encapsulation
Top-post: thanks for your reply. I'll check this stuff out. Somehow I missed this when you sent it. Casey On Sep 7, 2011, at 1:50 PM, David Barbour dmbarb...@gmail.com wrote: On Wed, Sep 7, 2011 at 12:48 PM, Casey Ransberger casey.obrie...@gmail.com wrote: It seems to me that there is tension here, forces pulling in orthogonal directions. In systems which include a MOP, it seems as though encapsulation is sort of hosed at will. Certainly I'm not arguing against the MOP, it's one of the most powerful ideas in programming. For some things, it seems absolutely necessary, but then... there's the abuse of the MOP. Is this tension irreconcilable? There are patterns for meta-object protocol that protect encapsulation (and security). Gilad Bracha discusses capability-secure reflection and MOP by use of 'Mirrors' [1][2]. The basic idea with mirrors is that the authority to reflect on an object can be kept separate from the authority to otherwise interact with the object - access to the MOP is not universal. Maude's reflective towers [3] - which I feel are better described as 'towers of interpreters' - are also a secure basis. Any given interpreter is constrained to the parts of the model it is interpreting. By extending the interpreter model, developers are able to achieve ad-hoc extensions similar to a MOP. These two classes of mechanisms are actually quite orthogonal, and can be used together. For example, developers can provide frameworks or interpreters in a library, and each interpreter 'instance' can easily export a set of mirror capabilities (which may then be fed to the application as arguments). [1] Mirrors: Design Principles for Meta-level Facilities of Object-Oriented Programming Language. Gilad Bracha and David Ungar. http://bracha.org/mirrors.pdf [2] The Newspeak Programming Platform. http://bracha.org/newspeak.pdf (sections 3,4). [3] Maude Manual Chapter 11: Reflection, Metalevel Computation, and Strategies. 
http://maude.cs.uiuc.edu/maude2-manual/html/maude-manualch11.html We can use power responsibly. The trick is to control who holds it, so that power is isolated to the 'responsible' regions of code. Regards, Dave
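The mirror idea is small enough to miniaturize. Below is a hypothetical Python sketch (my own names, not taken from Bracha and Ungar's papers): reflective access goes through a separate Mirror object, so merely holding an object does not grant the authority to inspect its internals. Note that Python can't actually enforce the separation (vars() is ambiently available to everyone), so this illustrates the pattern as a discipline rather than a guarantee; a capability-secure language makes it a real boundary.

```python
class Point:
    """An ordinary object with a small public protocol."""
    def __init__(self, x, y):
        self._x, self._y = x, y

    def moved(self, dx, dy):
        return Point(self._x + dx, self._y + dy)

class Mirror:
    """Capability granting reflective access to one subject object."""
    def __init__(self, subject):
        self._subject = subject

    def slot_names(self):
        # reflection lives here, not on the object itself
        return sorted(vars(self._subject))

    def get_slot(self, name):
        return vars(self._subject)[name]

def plot(point):
    # ordinary client code: holds the object but no mirror, so it is
    # expected to use only the public protocol
    return point.moved(1, 1)

p = Point(2, 3)
m = Mirror(p)  # whoever wires up the system decides who receives this

q = plot(p)            # clients compute with p normally
print(m.slot_names())  # only the inspector/debugger holding m reflects
```

The separation means a MOP can exist for tools (inspectors, debuggers, serializers) without every piece of application code being able to pry objects open, which is exactly the tension the thread is about.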
Re: [fonc] Maru: anon symbols
Sweet! Maru is really fun:) On Oct 17, 2011, at 8:26 PM, Kurt Stephens k...@kurtstephens.com wrote: I've been toying with maru. Added anonymous symbol (gensym) support here: https://github.com/kstephens/maru/tree/anon-symbol Rewrote the pop macro form to use gensym. Has anyone ported ometa to maru? If not, I might try. -- KAS
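For anyone following along who hasn't met gensym: the point of anonymous symbols is hygiene in macro expansions like the pop Kurt mentions. Here's a rough sketch of the idea in Python (assumed semantics, not Maru's actual implementation): each call mints a fresh, never-before-seen symbol name, so a macro can introduce temporaries without ever capturing a user's variable.

```python
import itertools

_counter = itertools.count()

def gensym(prefix="g"):
    # every call returns a unique, never-repeated symbol name
    return "%s%04d" % (prefix, next(_counter))

def expand_pop(list_var):
    # A pop-style macro expansion written as a text template. With a
    # fixed temporary name like `tmp`, the expansion would silently
    # break whenever the caller's list variable were itself named tmp;
    # gensym makes collision impossible.
    tmp = gensym("tmp")
    return ("(let ((%s (car %s)))"
            " (set %s (cdr %s)) %s)") % (tmp, list_var, list_var, list_var, tmp)

a = gensym()
b = gensym()
assert a != b  # no two anonymous symbols ever collide

expansion = expand_pop("stack")
print(expansion)
```

Real Lisp-family gensyms return uninterned symbol objects rather than stringly-unique names, which is strictly stronger (even printing and re-reading the name can't recreate the symbol), but the counter sketch captures the hygiene argument.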
Re: [fonc] Lessons from COLA
Have you looked at Maru? It's likely(?) what you're after. Or was. On Wed, Sep 28, 2011 at 7:43 PM, Erick Lavoie erick.lav...@gmail.comwrote: I've found the vision of a simple, open and evolutionary adaptable programming language substrate, as described in Albert [1], tantalizing. I especially like the idea of dynamically evolving a language 'from within' a fluid substrate. I am left wondering the extent to which this vision was realized and its actual benefits. The commit log of idst [2] shows that work has been done up until 2009 (minus the minor commit in 2010). The 2009 and 2010 NSF reports make no mention of the COLA system, the latter report mentions Nothing as the target for all the other DSLs. No comment has been made on the language(s) used to explore the VMs design. The git mirror for idst and ocean are not accessible on Micheal FIG website anymore. The latest papers citing [1] dates from 2009 [3]. Based on Alan's comment that it is not an incremental project [...]. We actually throw away much of our code and rewrite with new designs pretty often [4], I would assume that there is no interest in pursuing the COLA project anymore. However, I would still be interested in lessons learned from developing and using COLA for implementing VMs: Was a fully bootstrapped dynamic implementation ever realized (Coke-based implementation of the Pepsi compiler and Coke compiler/evaluator)? Was the system used to implement major subsystems (Worlds, Continuations, On Stack Replacement, Multi-Threading, Network-Transparent and Fault-Tolerant Distributed semantic)? What were the major gains in the practice of developing a new VM when using a dynamic substrate? What was not significant in practice? Was the evolve from within strategy truly fruitful? If not, was it a problem in the vision or in its implementation? Was there still a need to start from scratch instead of using COLA to prototype new implementation techniques or bootstrap a new system? Why? 
I understand that the VPRI team is probably racing against the clock to put the final touches to Frank and producing paperwork for the NSF, so I guess more detailed answers might come next year (or hopefully in the last report) but I am still interested in relevant pointers to article/documentation/code and experience from members of this list which might provide elements of answer to those questions. In the present, I am interested in developing a COLA-like system that could serve as an exploration vehicle for research on high-performance meta-circular VMs for dynamic languages. The main objective would be to drastically reduce the amount of effort needed to test new ideas about object representations, calling conventions, memory management, compiler optimizations, etc. and pave the way for more dynamic optimizations. I would like the system to: 1. allow interactive and safe development of multiple natively-running object representations and dynamic compilers with full tool support (debugger, profiler, inspector) 2. easily migrate the system to a different implementation language The first property would serve to bring the benefits of a live programming environment to VM development (as possible in Squeak, through simulation) and the second would serve a) to facilitate the dissemination of the implementation techniques (including meta-circular VM construction) in existing language communities (Python, JavaScript, Scheme, etc.) and b) obtain expressive and performant notation(s) for VM research without having to start from scratch each time. Answers to the aforementioned questions would guide my own implementation strategy. I am also interested in pointers to past work that might be relevant to such a pursuit. 
Erick [1] http://piumarta.com/papers/albert.pdf [2] http://piumarta.com/svn2/idst/trunk [3] http://scholar.google.com/scholar?cites=8937759201081236471&as_sdt=2005&sciodt=1,5&hl=en [4] http://vpri.org/mailman/private/fonc/2010/001352.html -- Casey Ransberger
Re: [fonc] OOP
also risk exposing some implementation nastiness, or may be used in confusing and/or counter-intuitive ways (since there may not be any obvious way to tell them apart from built-in functions or similar). some of what could be done in using macros can be done in my case using the quote and unquote keywords, but these are sort of a lame joke vs macros. for example: unquote foo(quote x+3); would be (sort of) similar to: (eval (foo '(+ x 3))) although possible could be to do similar with the syntax: foox+3; I don't really like this option (I don't really like the ... syntax for sake of it creating ambiguities). foo!(...) or foo![...] could also be possible. also: foo:(...) but this would create a whitespace sensitive operation. foo::(...), could also work, and is not *too* ugly. note that, ideally, this would be a cached operation, rather than naively rebuilding the code-fragment on every execution (or being a one off static operation), but this leads to another problem: how to make this semantically-transparent?... (I lack ideas for an obvious flushing/invalidation mechanism in this case). this could lead to several edge cases: a case where it is preferable to have it be one-off; a case where it is preferable to dynamically re-build the expression. - I'm not sure where, if at all, security comes in doesn't really appear to me that Lisp was really designed with security in mind. snip I can't really comment on a lot of this (too far outside my area). -- Casey Ransberger
[fonc] Line endings
Did Squeak pick up Macintosh style line endings when it travelled through Apple, or did Apple pick up Smalltalk style line endings when it travelled through PARC? I've been wondering about this for a while now.
VR for the rest of us (was Re: [fonc] Re: SecondPlace, QwaqLife or TeleSim? Open ended, comments welcome)
Inline and abridged... and rather long anyhow. I *really* like some of the ideas that are getting tossed around. On Tue, Aug 9, 2011 at 2:05 AM, BGB cr88...@gmail.com wrote: On 8/8/2011 6:55 PM, Casey Ransberger wrote: I almost missed this thread. I'm also hunting that grail. VR for consumers that isn't lame. CC'd FONC because I think this is actually relevant to that conversation. My feeling is, and I may be wrong, that the problems with Second Life are twofold: [sorry in advance, I mostly ended up running off in an unrelated direction, but maybe it could still be interesting]. You're fine, I do it all the time:) IMO, probably better (than centralized servers) is to have independent world-servers which run alongside a traditional web-server (such as Apache or similar). This appears more or less to be the way OpenQwaq works. I'm pretty sure that I haven't fully comprehended everything the server does and how that relates to the more familiar (to me) client, though. I note that the models and such seem to live on the server, and then get sent (synced?) to the client. one can jump to a server using its URL, pull down its local content via HTTP, and connect to a server which manages the shared VR world, ... Ah, you're talking about running in a web browser? Yeah, that will probably happen, but the web browser strikes me as a rather poor choice of life support system for a 3D multimedia collaboration and learning environment at least as of today... OTOH I guess it solves the problem of not being able to deploy (e.g.) GPL'd code on platforms like iOS. I should say that I'm a huge fan of things like Clamato and Lively Kernel, but I'm not sure the WebGL thing is ready for prime time, and I'm not sure how something like e.g. Croquet will translate at this point in time. I also don't have a Croquet implemented in Javascript lying around anywhere, and it's not exactly a small amount of work to implement the basis. 
I don't even understand how all of the parts work or interact yet... a partial issue though becomes how much client-specific content to allow, for example, if clients use their own avatars (which are bounced along using the webserver to distribute them to anyone who sees them), and a person's avatar is derived from copyrighted material, there is always a risk that some jerkface lawyers may try to sue the person running the server, or the author of the VR software, for copyright infringement (unless of course one uses the usual sets of legal disclaimers and terms of use agreements and similar). Heh, yes. Fortunately there are places one can go to purchase assets which can then be used under commercially compatible licenses... to be honest, though, the avatar I've been testing with is *cough* Tron. Found it on the web and couldn't really resist. Got to take him out of there before I can deploy anything, I think, but I Am Not A Lawyer, so I can't say that I actually know, and like most folks, I'm going to play it safe... what I do know is that this is slightly embarrassing :O Working on an original protagonist/avatar for my game but she's not quite done yet. It's all dialed in but the clothes aren't right yet. Having to learn to use this pile of expensive 3D animation software as I go... I really wish I could just draw everything using a pencil and then use a lightbox to transfer the keyframes to cel and paint, but I don't know how to make hand drawn animation work in 3D. This is actually why I was curious about the availability of the sources to SketchPad, because that constraints in 3D idea seems to underlie the automated inbetweening that goes on nowadays and you could do stuff in 3D using a light pen with SketchPad, which seems better than what I have now in a lot of ways. allowing user-created worlds on a shared server (sort of like a Wiki) poses similar problems. the temptation for people to be lazy and use copyrighted images/music/...
in their worlds is fairly large, and nearly any new technology is a major bait for opportunistic lawsuit-crazy lawyers... So it *seems* like the way most businesses deal with this is by taking UGC down without quarter whenever someone complains. I'll probably end up having to do something like this. It's still painful because one then needs to employ people to actually handle that every day. I don't know, maybe there's some way to use community policing to accomplish this. In my view, though, if it happens, it isn't the worst problem in the world to have. It means someone noticed that your product/service or what have you exists! And if it was fatal, I don't think YouTube would still be on the internet. In fact all of that bad press probably helped YouTube get traction. yep, and there is a question of what exact purposes these 3D worlds serve vs more traditional systems, such as good old web-pages. I think being able to point at things and see by the eyes and the angle of the head what people are looking
Re: VR for the rest of us (was Re: [fonc] Re: SecondPlace, QwaqLife or TeleSim? Open ended, comments welcome)
Cut it down to what I'm responding to, and inline. On Tue, Aug 9, 2011 at 11:09 AM, Steve Wart st...@wart.ca wrote: Despite its commercial nature Minecraft seems very open and easy to adapt. Interestingly this implementation does a lot more to show that Java is fast enough for real-time 3D environments than Croquet was able to with Squeak. Croquet always felt awkward to me, partly it was performance, but it was also because some of the primitives were too primitive. Have you checked out OpenQwaq? Runs on Cog. I have a feeling if I ran the server on a different computer, rather than in VMWare on the same modest hardware, performance would be a non-issue unless I allowed extremely complex meshes or high-rez textures in. It's totally acceptable and usable even the way I'm currently running it, which is in a relatively resource starved way. It chunks just a wee bit from time to time. I've been really impressed with the performance so far. It would not, in any previous year, have occurred to me to run an application that rendered 3D graphics alongside an application that virtualized a big old enterprise operating system at the same time on the same machine, but here I am doing it:) Regards, Steve
Re: VR for the rest of us (was Re: [fonc] Re: SecondPlace, QwaqLife or TeleSim? Open ended, comments welcome)
On Tue, Aug 9, 2011 at 4:17 PM, David Barbour dmbarb...@gmail.com wrote: The best way to have a conversation with someone is in person, I think it depends on the nature of the conversation. There are significant advantages to written conversations, such as: the ability to spend more time thinking about our responses, the ability to operate at different times, and having a written record you can search and reference. Well put. This is an excellent point, and I stand *quite* corrected. I wrote this while slightly irked that a message I sent via a popular textual communication medium was too long. I still prefer a mailing list for most of the stuff I like to talk about: case in point. -- Casey Ransberger
[fonc] Large places and lots of users (was Re: VR for the rest of us)
Aha. This explains why various wonderful people warned me that what I wanted to do would end up being expensive to implement. There's been some talk about federated worlds and this is I *think* part of what I'm after. So let me ask you this: if I can find a way to cut down the time to load a new room far enough to where users don't have to notice it very much, e.g. by caching a lot of assets on disk, etc, find a way so that one can see into the next room in a matrix of rooms, and then just make the portals invisible and at compass boundaries between spaces, do you still think I'd need to rip out TeaTime and replace it with something of a completely different design in order to build a large, apparently (but not actually) continuous space? I realize that I'm probably pushing my luck:) but crowds are important for large groups like everyone, so this just got twice as interesting! On Tue, Aug 9, 2011 at 5:37 PM, David Barbour dmbarb...@gmail.com wrote: On Tue, Aug 9, 2011 at 3:40 PM, BGB cr88...@gmail.com wrote: ideally, we should probably be working with higher-level entities instead of lower-level geometry. I agree with rendering high-level concepts rather than low-level geometries. But I favor a more logical model - i.e. rendering a set of logical predicates. Either way, we have a set of records to render. But predicates can be computed dynamically, a result of composing queries and computing views. Predicates lack identity or state. This greatly affects how we manage the opposite direction: modeling user input. possibly, ultimately all levels should be expressed, but what should be fundamental, what should be expressed in each map, ... is potentially a subject of debate. I wouldn't want to build in any 'fundamental' features, except maybe strings and numbers. 
But we should expect a lot of de-facto standards - including forms, rooms, avatars, clothing, doors, buildings, landscapes, materials, some SVG equivalent, common image formats, video, et cetera - as a natural consequence of the development model. It would pay to make sure we have a lot of *good* standards from the very start, along with a flexible model (e.g. supporting declarative mixins might be nice). I am not familiar with the Teatime protocol. apparently Wikipedia doesn't really know about it either... Teatime was developed for Croquet. You can look it up on the VPRI site. But the short summary is: * Each computer has a redundant copy of the world. * New (or recovering) participant gets snapshot + set of recent messages. * User input is sent to every computer by distributed transaction. * Messages generated within the world run normally. * Logical discrete clock with millisecond precision; you can schedule incremental events for future. * Smooth interpolation of more cyclic animations without discrete events is achieved indirectly: renderer provides render-time. This works well for medium-sized worlds and medium numbers of participants. It scales further by connecting a lot of smaller worlds together (via 'portals'), which will have separate transaction queues. It is feasible to make it scale further yet using specialized protocols for handling 'crowds', e.g. if we were to model 10k participants viewing a stage, we could model most of the crowd as relatively static NPCs, and use some content-distribution techniques. But at this point we're already fighting the technology, and there are still security concerns, disruption tolerance concerns, and so on. Regards, Dave -- Casey Ransberger
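The bullet-point summary of Teatime is concrete enough to sketch. The toy below is my own construction (nothing to do with Croquet's real code) and shows only the core invariant: replicas that apply the same user-input messages in the same order, with deterministic message execution, stay identical, and a late joiner needs only a snapshot plus the messages since.

```python
import copy

class WorldReplica:
    def __init__(self, state=None):
        # each participant holds a full redundant copy of the world
        self.state = state if state is not None else {"avatars": {}}
        self.clock = 0  # logical discrete clock

    def apply(self, msg):
        # deterministic: same messages, same order => same state everywhere
        kind, who, arg = msg
        if kind == "join":
            self.state["avatars"][who] = {"pos": arg}
        elif kind == "move":
            x, y = self.state["avatars"][who]["pos"]
            dx, dy = arg
            self.state["avatars"][who]["pos"] = (x + dx, y + dy)
        self.clock += 1

    def snapshot(self):
        return copy.deepcopy(self.state), self.clock

# broadcast the same ordered user input to two replicas
msgs = [("join", "casey", (0, 0)), ("move", "casey", (2, 1))]
a, b = WorldReplica(), WorldReplica()
for m in msgs:
    a.apply(m)
    b.apply(m)
assert a.snapshot() == b.snapshot()  # the replicas agree

# a late (or recovering) joiner starts from a snapshot, not from history
state, clock = a.snapshot()
c = WorldReplica(state)
c.clock = clock
assert c.snapshot() == a.snapshot()
```

What this leaves out is exactly where the engineering lives: agreeing on the message order across an unreliable network (the "distributed transaction" bullet), keeping execution deterministic in the presence of floating point and timers, and the render-time interpolation trick for smooth animation between discrete events.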
Re: VR for the rest of us (was Re: [fonc] Re: SecondPlace, QwaqLife or TeleSim? Open ended, comments welcome)
on. Regards, Dave [1] http://inform7.com/ [2] http://awelonblue.wordpress.com/2011/07/05/transaction-tribulation/ [3] http://awelonblue.wordpress.com/2011/05/21/comparing-frp-to-rdp/ -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
[fonc] Re: SecondPlace, QwaqLife or TeleSim? Open ended, comments welcome
On Sun, Aug 7, 2011 at 3:48 AM, Giulio Prisco giu...@gmail.com wrote: SecondPlace, QwaqLife or TeleSim? Open ended, comments welcome. http://giulioprisco.blogspot.com/2011/08/secondplace-qwaqlife-or-telesim.html

I almost missed this thread. I'm also hunting that grail: VR for consumers that isn't lame. CC'd FONC because I think this is actually relevant to that conversation. My feeling is, and I may be wrong, that the problems with Second Life are twofold:

1. There are technical problems with the implementation. It's very crashy and didn't scale well. My suspicion is that the biggest problem they have is getting the user-generated texture maps out to all of the clients on the fly. This leads to usability issues, etc. It can take minutes to get to the point where one is actually participating after arriving in a sim; while all those textures and meshes load, it just thrashes like crazy. Also there's the server-centric architecture, which is usually harder to scale than peer-to-peer technology. I have not yet determined the weight of the OpenQwaq server, though, because I don't have enough machines to build out a production-like environment currently. It seems a touch crawly running all of the services on my modest laptop under a single CentOS host in VMWare while the client is also running at the same time (heh), and this is not a real measure of server or client performance:) it's currently just a way to warm up my apartment.

2. Their entire business model ended up being a cultural toxin. Free accounts mean spam and griefing/trolling/abuse. A profit motive for users seemed like a good idea at the outset, as it's about the most marketable universal out there, but it seems that DRM+UGC = red light district, real estate, fashion, and a handful of enterprise applications which would probably be served at least as well by Teleplace. I think one ultimately wants user generated content, but I'm not sure what the right way to do it is.
One might read a book about Logo:) Minecraft has been running with an honor system for a while now, and people just don't seem to mess with each other as much there. They're implementing some anti-griefing stuff around treasure chests now, but users build beautiful sculptures, ALUs, delay line memory, etc. Not a red light district to be found. I think it's because you actually have to pay in order to play, and you can't make money in there. My guess is the real difference is in the business model, but it might also be that low-resolution volumetrics don't lend themselves as well to amplifying some of the less-awesome universals, instead amplifying much more creative and often non-universal activity. It's almost like Lego geology. These are all just hunches though. I could be wrong. -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] HotDraw's Tool State Machine Editor
After I heard about the Wolfram Alpha integration, I decided to give it a shot. I didn't like the price tag. At the time I was interested in chemical simulations, and I didn't know of anything else that'd let me do them out of the box (without requiring me to already have a much greater command of chemistry than I do presently.) I was on an A-Life kick, wondering if I might be able to find some kind of simplified artificial chemistry that'd let me get an evolutionary process going at a (pseudo-)chemical level without leaving me starved for computing resources. I really dug the way Alpha would in some cases be able to supply me with m-expression representations of things. Kind of wish they'd just used Lisp for Mathematica. Gridding it up isn't cheap either, so I ended up doing some weird art with it before abandoning it in favor of Squeak, which I can parallelize at the process level without trouble, and of course the green threads work well enough for some things too. I did really like the touch of HyperCard that was going on with it though. All in all, I haven't used it enough to justify the expense, and it takes too much space on the disk:(

On Jul 28, 2011, at 12:14 PM, David Leibs david.le...@oracle.com wrote: Ah, powerful notations and the Principia. These are some of my favorite things. :-) An excellent example of a difficult subject made easier by a wonderful tool is Mathematica. I think the Wolfram folks have done a really great job over the last 25 years with Mathematica. You can directly use math notation, but it is converted into the much more powerful internal representation of Mathematica terms. Just think of Mathematica terms as s-expressions and you are close. Mathematica's math notation system is built on a general system that supports notations, so you could build a notation system that is what chemists would want. The Mathematica Notebook is a wonderful system for exploration.
Notebooks let you build beautiful publishable documents that typically also contain live code for simulations and modeling. Of course a Mathematica notebook is just a bunch of Mathematica terms. I wish web browsers were as good as Mathematica notebooks. They are like Smalltalk workspaces on steroids. I wish there were an open source, Mathematica-notebook-like read-canonicalize-evaluate-present shell. At the bottom of Mathematica is the Mathematica language, which is a very Lispy functional language with a gigantic library of primitives. There is no Type Religion in Mathematica's functional programming. The documentation is extensive and made from Mathematica notebooks, so all examples are both beautiful and executable. The learning curve is very, very high because there is so much there, but you can start simply, just by evaluating 3+4, and grow. It's very open ended. The Mathematica language is also great for meta programming. Last year my son was working for the University of Colorado physics department building a model of the interaction of the solar wind with the interstellar medium. His code made big use of meta programming to dramatically boost performance. He would partially evaluate his code/math in the context of interesting boundary conditions, then use Simplify to reduce the code, then run it through the low-level Mathematica compiler. He was able to get a 100x performance boost this way. Mathematica was his first programming language and he has used it regularly for about 14 years. To give you a taste, let's implement the most beautiful Newton's forward difference algorithm from the Principia. See http://mathworld.wolfram.com/FiniteDifference.html for the background. The code below (hopefully not too mangled by blind email systems) repeatedly takes differences until a zero is found, resulting in a list of lists of all differences. The first elements are then harvested and the last zero dropped.
Now just make some parallel lists that will get passed to the difference-term algorithm. I could have written a loop, but APL and Mathematica have taught me that Transpose, Apply, and a level spec can write my loops for me without error. Once you have all the terms in a list, just join them with Plus. Finally, run the whole thing through Simplify. The differenceTerm and fallingFactorial helper functions should look familiar to those who have played with modern functional programming. Note I pass in a symbol, and the Mathematica rule-reducing system naturally does partial evaluation. I realize I have left a lot of Mathematica details unexplained, but I am just trying to give a taste. Using the data from the mathworld web page, we can evaluate praiseNewton[{1, 19, 143, 607, 1789, 4211, 8539}, n] and get 1+7 n+2 n^2+3 n^3+6 n^4. Without the Simplify we would have gotten: 1+18 n+53 (-1+n) n+39 (-2+n) (-1+n) n+6 (-3+n) (-2+n) (-1+n) n
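For readers without Mathematica, the same construction (take repeated differences until a row of zeros, harvest the leading entries, recombine) can be sketched in plain Python. This is a hypothetical stand-in for praiseNewton, not its actual code; binomial coefficients C(n,k) play the role of fallingFactorial[n,k]/k!:

```python
from math import comb

def forward_differences(values):
    # Repeatedly take differences until a row of all zeros, then
    # harvest the first element of each nonzero row.
    table = [list(values)]
    while any(table[-1]):
        row = table[-1]
        table.append([b - a for a, b in zip(row, row[1:])])
    return [row[0] for row in table[:-1]]

def newton_poly(values):
    # Newton's forward-difference formula: p(n) = sum_k D^k f(0) * C(n, k),
    # where C(n, k) = fallingFactorial(n, k) / k!.
    coeffs = forward_differences(values)
    return lambda n: sum(c * comb(n, k) for k, c in enumerate(coeffs))

data = [1, 19, 143, 607, 1789, 4211, 8539]
p = newton_poly(data)
print([p(i) for i in range(7)])  # reproduces the input sequence
```

For this data the harvested leading differences are 1, 18, 106, 234, 144; dividing by k! gives exactly the 1, 18, 53, 39, 6 coefficients of the unsimplified form quoted above, and the polynomial agrees with the simplified 1+7n+2n^2+3n^3+6n^4.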
Re: Growth, Popularity and Languages - was Re: [fonc] Alan Kay talk at HPI in Potsdam
I want to try using a fluffy pop song to sell a protest album... it worked for others before me:) If you're really lucky, some people will accidentally listen to your other songs. (metaphorically speaking) A spoonful of sugar -- Casey On Jul 25, 2011, at 10:47 PM, Alan Kay alan.n...@yahoo.com wrote: The trivial take on computing today by both the consumers and most of the professionals would just be another pop music to wince at most of the time, if it weren't so important for how future thinking should be done. ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Alan Kay talk at HPI in Potsdam
On Jul 26, 2011, at 3:28 PM, BGB cr88...@gmail.com wrote: On 7/26/2011 9:05 AM, David Barbour wrote: On Tue, Jul 26, 2011 at 1:50 AM, BGB cr88...@gmail.com wrote: whether or not compiling to bytecode is itself an actually effective security measure, it is the commonly expected security measure. Is it? I've not heard anyone speak that way in many, many years. I think people are getting used to JavaScript. for web-apps maybe, but it will likely be a long time before it becomes adopted by commercial application software (where the source-code is commonly regarded as a trade-secret). Worth pointing out that server-side JS dodges this problem. Now that Node is out there, people are actually starting to do stuff with JS that doesn't run on the client, so it's happening... whether or not it's a real qualitative improvement for anyone. ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
[fonc] Physics and Types
Please forgive - not a physicist. Ian mentioned something about a Bose-Einstein Condensate for computer programming once, and this really jumped out at me. I've seen math and I've seen biology applied, at least in metaphor, to our problems. I can't think of a lot of stories about applying physics to these troubles, and I wonder why. I keep hearing about purely mathematical type systems. I will admit that being focused on type seems to have been a bit of an intellectual digression for me, perhaps because the Air Force base really wanted me to take classes centered on Ada, and when I eventually found Smalltalk, I felt almost a sense of relief. I don't think I've ever seen a set of types that looked anything like #(strong weak electro slipperyGraviton) asSet. Maybe it's because this is too much like a type system for an assembly language? I probably don't have sufficient command of these fields to even ask smart questions. I'm asking anyway. With the discussion of particles and fields it seems at least topical. Any takers? ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Last programming language
Even if it were possible to have a last language, it would be double plus ungood. On Mon, Jul 18, 2011 at 8:58 AM, Paul Homer paul_ho...@yahoo.ca wrote: Realistically, I think Gödel's Incompleteness Theorem implies that there can be no 'last' programming language (formal system). But I think it is possible for a fundamentally different paradigm to make a huge leap in our ability to build complex systems. My thinking from a couple of years back: http://theprogrammersparadox.blogspot.com/2009/04/end-of-coding-as-we-know-it.html Paul. --- On Mon, 7/18/11, BGB cr88...@gmail.com wrote: From: BGB cr88...@gmail.com Subject: Re: [fonc] Last programming language To: Fundamentals of New Computing fonc@vpri.org Received: Monday, July 18, 2011, 6:28 AM On 7/18/2011 2:56 AM, Casey Ransberger wrote: Smells like Kool-Aide. I smell bullshit. Dude is selling a book tour or something. Let's just pick the POS we have now and run with it? Seriously? How many times has that gone well? Dude is on a book-tour or something. Let him have it. for most people and most projects, advice like just pick C or Java or C# or similar generally aligns fairly well with the path to highest likely productivity (get code written and out the door to customers, ...). if it is something common, then there are less likely to be slowdowns or similar due to some of the development team members getting confused, or having area-of-responsibility confusion or similar. the bigger question is what can be done which hasn't already been done? and more so, why does it necessarily matter? and, if there is something great waiting, how does one best go about finding it and making productive use of it? ... one potentially overlooked issue in the video: 40 years ago, threads and multiprocessor systems were not exactly common; now they are pretty much everywhere, but the most common languages tend to be fairly incompetent at effectively utilizing them. though not fundamentally new, this is at least a relevant change.
for example, what is a not-crappy way to go about writing code, say, for a GPU?... maybe there are better answers than, say, well, pretend you are running loops over big arrays (CUDA) and well, just run C on the thing (OpenCL). IMO, I sort of like mailboxes and asynchronous and trans-thread function/method calls, but these are relative novelties (vs the ever-present lock a mutex or enter a critical section or similar model). ... or such... On Jul 17, 2011, at 11:31 AM, karl ramberg karlramb...@gmail.com wrote: Hi Here is an interesting video about programming languages http://skillsmatter.com/podcast/agile-testing/bobs-last-language Karl ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc -- Casey Ransberger
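The "mailboxes and asynchronous calls" alternative to the lock-a-mutex model can be sketched in a few lines of Python with the standard library. The names here are illustrative, not any particular framework's API: each "actor" owns its state and communicates only through its mailbox, so no lock guards the state itself:

```python
import queue
import threading

def adder_actor(mailbox, results):
    # The actor's state (`total`) is private to this thread; other
    # threads influence it only by posting messages to the mailbox.
    total = 0
    while True:
        msg = mailbox.get()
        if msg is None:          # poison pill: shut down and report
            results.put(total)
            return
        total += msg

mailbox, results = queue.Queue(), queue.Queue()
t = threading.Thread(target=adder_actor, args=(mailbox, results))
t.start()
for n in (1, 2, 3):
    mailbox.put(n)              # asynchronous "sends"; no shared lock
mailbox.put(None)
t.join()
total = results.get()
print(total)  # 6
```

The synchronization lives entirely inside the thread-safe queues, which is the point: the programmer reasons about messages, not critical sections.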
Re: [fonc] Last programming language
Memorizing Pi is a dumb old trick, like ripping a phone book in half. I can memorize Pi, I mean if I wanted to spend my time that way, but it's all a dodge: I'm just ripping a phone book in half, and the whole trick is to twist it just so, so that the entire heroic thing only ever happens over a few pages at a time, but quickly enough that people think you know how to do something really special. I saw one of this guy's talks once - or at least, I think so, wasn't he at Rails Conf? Anyway: I think dude's talent is a form of stagecraft. Can we please blow this guy off and get back to inventing the future??? On Jul 17, 2011, at 2:51 PM, BGB cr88...@gmail.com wrote: On 7/17/2011 2:33 PM, Craig Latta wrote: That talk would have been a whole lot better if he had grounded it with a discussion of how constraints are good for creativity. It's how he should have spent the time where he went on about memorizing Pi for no good reason... if memorizing Pi is a measure of programmer status, it likely puts down those of us who only know 10s of digits of Pi... granted, it is probably not that terribly important to know far more digits of Pi than can be reasonably represented in a sanely-sized floating point constant. funny though is I don't really believe in taking things away from languages, rather just providing ideally clean and concise constructs to take over the role of more hairy constructs. for example, if/while/for/... don't mean goto shouldn't exist in a language or should be branded as evil as a result; rather, they provide better alternatives such that things like goto are more of a break-glass-in-case-of-emergency feature (or can be taken as a sign that, when one finds oneself using it, things have likely already gone fairly far south as far as this particular code is concerned...). so, I think it is more about providing better and cleaner alternatives to common problems than about trying to take things away.
___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
[fonc] Fractal Planets
I bought a product called MojoWorld, which was written by one of Mandelbrot's students. It's a bit long for an update, but it still works on my machine, though I do suspect it's running under emulation (haven't checked, having too much fun.) It's basically a system of dialogs which group inputs into categories like atmosphere, terrain, ocean, etc. These inputs can be hooked up to a number of functions which can be composed. The output is a fractal planet (of naturally arbitrary detail.) So much fun! I thought about mentioning it here for two reasons: a) I wonder if something like this wouldn't be a great way to teach people about fractals. and b) These planets fit into very small files, the biggest one I've made is 53KB. In that one I used these area-effect devices called parameter bombs to shove the otherwise evenly distributed terrain around into continents. It renders noticeably slower with these in place as well. Before I did that, the same planet was a 16KB file. What this seems to tell me is that what's represented in the file is just the math for generating the planet, not the generated planet at all. It does export meshes to common 3D formats, which is nice. Does anyone here know of any open source software that does this stuff? The author's dissertation is here: http://portal.acm.org/citation.cfm?id=193498 ...and the book they wrote later can be purchased here: http://www.amazon.com/Texturing-Modeling-Third-Procedural-Approach/dp/1558608486 -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
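The observation that the file stores "just the math for generating the planet, not the generated planet" is the heart of procedural modelling, and it can be illustrated without any MojoWorld-specific machinery. A toy 1D midpoint-displacement terrain (not MojoWorld's actual algorithm; the function and parameter names are made up for illustration) shows how a seed plus a couple of parameters determines arbitrarily much detail:

```python
import random

def midpoint_displacement(seed, depth, roughness=0.5):
    # The entire "landscape" is determined by (seed, depth, roughness):
    # that tiny tuple is all a file would ever need to store.
    rng = random.Random(seed)
    heights = [0.0, 0.0]
    amp = 1.0
    for _ in range(depth):
        nxt = []
        for a, b in zip(heights, heights[1:]):
            # Keep each point and insert a jittered midpoint after it.
            nxt += [a, (a + b) / 2 + rng.uniform(-amp, amp)]
        nxt.append(heights[-1])
        heights = nxt
        amp *= roughness     # finer scales get smaller displacements
    return heights

terrain = midpoint_displacement(seed=42, depth=8)
print(len(terrain))  # 257 samples, all from a seed and two numbers
```

Re-running with the same seed regenerates the identical terrain, which is why the planet file can stay tiny: the mesh is recomputed on demand, at whatever level of detail is asked for.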
[fonc] Fwd: Fractal Planets
From the dissertation: In the author's opinion, however, the greatest import of this work lies not in its technical merit as original research in the field of computer graphics, but rather in its significance as the development of a truly novel medium and creative process for the visual arts. The medium is numbers, strings, and logic; the process is distinguished by the use of deterministic formal logic, as embodied in a computer program, to obtain artistic self-expression in representational imagery (i.e., in realistic pictures of something familiar, as opposed to impressionistic or abstract imagery). The fascinating challenge of encapsulating maximal expressive power in terse logical formalisms motivates our emphasis on procedural modelling. -- Forwarded message -- From: Casey Ransberger casey.obrie...@gmail.com Date: Fri, Jul 8, 2011 at 6:49 PM Subject: Fractal Planets To: Fundamentals of New Computing fonc@vpri.org -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Eternal computing
I can't help wondering whether or not it was any easier to keep a system running when systems were big enough to climb inside of. When my tablet bricks and refuses to take a flash, I can open the machine (I mean I can break it open) but the part that computes and remembers is all one piece now. I enjoyed swapping parts out of desktop machines, looking for defective components. It was like a meditation. Of course I would have to power them down first, and I can only imagine this has been generally true for all electronic computers. I used to take a pair of broken computers, and use the best (working) parts from both to make a computer that would often be better overall than either machine was when they still worked. I liked doing this, and so people started bringing me a lot of old broken computers. Usually whenever I built a new one, I would give the old one away, and this motivated people, as it happened, to keep bringing me their junk, so I could be perpetually looking for a better machine. I hadn't ever looked for a biological metaphor in what I was doing with those obsolete junkers, but I think I can see one now. This is a great thread. On Jun 25, 2011, at 9:39 AM, Steve Wart st...@wart.ca wrote: I've been thinking about eternal computing not so much in the context of software, but more from a cultural level. Software ultimately runs on some underlying physical computing machine, and physical machines are always changing. If you want a program to run for a long time, the software needs to be flexible enough to move from host to host without losing its state. That's more of a requirements statement than an insight, and it's not a particularly steep hurdle (given some expectation of down time), so I'll leave it at that for now. I recently stumbled across the work of Quinlan Terry, whom I had never heard of until I did a search for an inscription in a print that caught my eye. 
I found this essay helps capture what makes him different from most people designing buildings today: http://www.qftarchitects.com/essays/sevenmisunderstandings.php I don't make any claims that these observations have anything to do with software, except in a more general sense of the cultural values that influence design. I suppose the pitfalls of trivializing something because it seems familiar apply to software as well as any other design discipline. We have an engineering culture that pursues change at an ever increasing rate. The loss of eternal values in physical architecture is sad indeed, especially in the context of urban sprawl and the now rampant deterioration of buildings that were built a generation ago, to last only a single generation. The ongoing global financial mess is arguably a result of short-term thinking. Economics matters. One of the intriguing facets of computing is the incredible amount of money the industry generates and consumes. And nowhere is short-term thinking more generously rewarded than in the continual churn of new computing devices and software. Personally I find it overwhelming, and I have been trying to keep up for 30 years. Clearly it's not slowing down. I think there's a good reason for the ever-increasing rate of change in computer technology, and that it is the nature of computation itself. Seth Lloyd has a very interesting perspective on revolutions in information processing: http://www.edge.org/3rd_culture/lloyd06/lloyd06_index.html If you consider that life itself is computational in nature (not a big leap given what we know about DNA), it's instructive to think about the amount of energy most organisms expend on the activities surrounding sexual reproduction. As our abilities to perform artificial computations increase, it seems that more and more of our economic life will be driven by computing activities. Computation is an essential part of what we are.
In this context, I wonder what to make of the 10,000 year clock: http://www.10000yearclock.net/learnmore.html First, I'm skeptical that something made of metal will last 10,000 years. But suppose it would be possible to build a clock that lasts that long. If in a fraction of a second I have a device that can execute billions of instructions, what advantage does stone-age (or iron-age) technology offer beyond longevity? I think the key advantage is that no computation takes place in isolation. Every time you calculate a result, the contextual assumptions that held at the start of that calculation have changed. Other computations by other devices may have obviated your result or provided you with new inputs that can allow you to continue processing. Which means running for a long time is no longer a simple matter of saving your state and jumping to a new host, since all the other hosts that you are interacting with have made assumptions about you too. It starts to look like a model of life, where the best way to free up resources
Re: [fonc] Eternal computing
On Jun 29, 2011, at 2:03 PM, Wesley Smith wesley.h...@gmail.com wrote: The aspect of Bersgon that I was thinking about though was the concept of duration, particularly that of the cerebral interval (the time between a received movement and an executed movement), which generates perception. Yet perception is both matter (made of up of neurons, cells, chemical networks, sensors, ...) and the perception of matter. It's a self-loop of something perceiving itself. We see the same kind of self-loop pattern in von Foerster's Cybernetics of Epistemology and Notes on an Epistemology of Living Things where computation is understood as com + putare or thinking together. Thinking together is a really interesting thought. Have you ever read Minsky's Society of Mind? I'm wanting to quote it but I lent my copy to a curious stranger two days ago, and I don't want to misquote, so I'm just going to have to recommend it:) It's one of my favorite books to lend people. They always come back with stars in their eyes. ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
[fonc] Conventions on the travelers (Re: [CAG] Humus, syntactic preferences, and all-caps)
Abridged. Below. On Tue, Jun 21, 2011 at 6:36 AM, Dale Schumacher dale.schumac...@gmail.com wrote: Just to be clear. The ? symbol represents undefined. The NIL symbol is semantically equivalent to (), the empty tuple/list. And finally, the _ symbol represents the wildcard pattern with no implied binding. See the Humus Overview (http://www.dalnefre.com/wp/humus/humus-overview/) for details. Oh, heh! I stand corrected. This one is always sticky because there's been so much variation in different languages. IIRC even Lisp and Scheme differ slightly on this one, right? This reminds me of one of the funniest things that I ever saw at work. This architect, whose identity I will protect (I mean, a nice, smart, ambitious guy usually), wanted to roll his own cross-platform RPC. I kicked and screamed and pointed at things like Thrift because I really didn't want to end up owning another unnecessary proprietary RPC mechanism, but I lost that argument and it got built and we deployed it. You should have seen me nearly fall out of my chair laughing when someone hit the first bug, which I had a feeling would be in there. I can't claim that I found it myself (I was juggling a lot at the time), but I was two cubes away and heard the howling expletive emitted by the guy who did. I had a plan to go looking for that problem. I was kinda robbed;) Ruby and Perl (the two languages of central interest at the time) differ slightly with regard to the truth value of the number 0. Ruby wants to say, yeah, zero is an object and it isn't nil. Meanwhile, Perl programmers regularly use zero as a shorthand for falsehood. CC FONC because it's a funny story about ill-advised use of language. The poor Perl folks ended up having to write FALSE, or 'FALSE', if I remember right (BCC'd someone who was there, maybe his recollection about that is better than mine.) They were sooo mad. Fortunately, I wasn't working on the Perl parts much, because if I had been, I probably would have led the revolt.
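The mismatch at the heart of that bug can be sketched with two tiny predicates. This is a rough Python rendering of the two truthiness rules (ignoring Perl's extra "the string '0' is false" wrinkle), not anyone's actual RPC code:

```python
def perl_style_truthy(v):
    # Perl (roughly): 0, "", and undef are false.
    # Python's bool() happens to agree for these values.
    return bool(v)

def ruby_style_truthy(v):
    # Ruby: only nil and false are falsy; 0 is an object and is truthy.
    return v is not None and v is not False

# The same wire value, 0, means "false" to one side and "true" to the other:
assert perl_style_truthy(0) is False
assert ruby_style_truthy(0) is True
```

Any serialization boundary between the two has to pick an encoding for booleans explicitly, which is exactly what forced the Perl folks into writing FALSE by hand.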
-- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
[fonc] Re: [CAG] Link list
Hmm. You know, I bet the nice folks at VPRI wouldn't mind a links page about the actor model on their FONC wiki. Another potential home for it might be Ward's wiki; I mean, I'd call actors a pattern on a first try, even if it's probably more than a pattern at the end of the day. CC FONC list. On Sat, Jun 18, 2011 at 12:32 PM, Dale Schumacher dale.schumac...@gmail.com wrote: I'm not well-versed in the ways of google groups, but it appears to be a simple mailing list. No file storage. No permanent pages. We'll have to share content, such as links, in message content for now. The Actor References thread is probably a good place for that sort of thing. On Sat, Jun 18, 2011 at 2:23 PM, Casey Ransberger casey.obrie...@gmail.com wrote: No, not linked list:) Does goog groups give us a wiki? It occurs to me that my own bookmarks list is a bad way for other people to find the stuff that folks have been kind enough to point me at. If I was going to try to start a movement, I think I'd want a links page for it. -- Casey Ransberger -- You received this message because you are subscribed to the Google Groups Computational Actor's Guild group. To post to this group, send email to computational-actors-gu...@googlegroups.com. To unsubscribe from this group, send email to computational-actors-guild+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/computational-actors-guild?hl=en. -- Casey Ransberger ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Consolidation and collaboration
I'm asking myself how relevant the projects I hack on are in this context. Others probably are too. Of the stuff that didn't disappear into the commercial void, recently it's been mostly Smalltalk for me, and FONC is not about Smalltalk; Smalltalk is almost a footnote here, I think. Without a doubt, the only project I've worked on thus far that even begins to scratch the surface of this subject is Cuis. I'm not a researcher, so I'm inclined toward systems that I can use today, which adds a bit of interesting tension to our approach. Basically it means that I am not currently in a position to burn the disk packs, as I intend to make a living with those disks. So we're starting with what we've got and whittling our way down to the smallest system that gives us the leverage that we already have. I was surprised to find that, just by refactoring as I went, I was in some cases able to add features and still end up with less code than what I found when I got involved. It's been a wonderful meditation, a much more intentional working style than what I experienced in industry. The end goals, though, are similar. Personal computing in a much smaller bag, etc. If folks took a look at http://www.jvuletich.org/Cuis/Index.html and said yes, a note about this belongs on the FONC wiki, I would gladly do the touch typing to make it happen. I think the nascent hardware project that seems to be emerging before me may make interesting material for the FONC wiki, but it will be some time before that yields anything of interest beyond discussion. I'm doing library science right now, gathering what people before me were able to learn.
I should really add a list. On Sat, Jun 18, 2011 at 10:56 AM, BGB cr88...@gmail.com wrote: On 6/16/2011 8:43 AM, Frederick Grose wrote: On Thu, Jun 16, 2011 at 4:34 AM, BGB cr88...@gmail.com wrote: On 6/15/2011 8:04 PM, BGB wrote: On 6/15/2011 3:22 PM, Ian Piumarta wrote: On Jun 15, 2011, at 14:09 , BGB wrote: http://vpri.org/fonc_wiki description sounds like it is specific to the FoNC / VPRI projects... Sorry about that. I left the original main page, figuring that people would just start a new page and when some useful content had been accumulated we'd rearrange things on the main page. Misconception corrected. yeah, cool. (was gone much of the day, recently got back). I went and created this page (partly as a test): http://vpri.org/fonc_wiki/index.php/Dynamic_typing I generally tried to keep it fairly generic. I opted with this for the moment, rather than describing my own stuff, as I am not certain the level of project-specificness which is appropriate. ended up writing a few more articles, mostly about generic programming-language stuff, but then was thinking that maybe the point of this is not to do like a half-assed Wikipedia and describe a bunch of general topics that probably most people here already know about (type-systems, class/instance and prototype OO, vtables and method dispatching, ...). even if, yes, I found some of this stuff personally relevant when implementing my own VM stuff. (sadly, much of my own thinking largely boils down to personal experiences and trivia...). Practical experience, thoughtfully recorded, often helps in learning. I guess the alternative would be that we (myself and others on this list?) write about our respective projects, and then comment on them?... Having a glossary available for new learners is valuable, especially when terms are hyper-linked either to internal or other wiki pages such as those on Wikipedia.
http://vpri.org/fonc_wiki/index.php/Glossary When we get the version upgrade, we should install http://www.mediawiki.org/wiki/Extension:SpecialInterwiki , which allows convenient interwiki linking. added more content, doesn't seem like anyone else is adding content though... was sort of hoping interesting stuff would pop up. there is also some uncertainty as to how objective some of the added content is, where my perspective is naturally limited to my own experiences. did describe my language some here: http://vpri.org/fonc_wiki/index.php/BGBScript however, this just describes the language, rather than the surrounding VM framework in general (may go do this next). it also does not describe the language in any comprehensive sense. or such... ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc -- Casey Ransberger
History's Forced-Perspective (was Re: [fonc] Consolidation and collaboration)
On Jun 15, 2011, at 8:55 AM, Ian Piumarta piuma...@speakeasy.net wrote: Invention receives no attention, and innovation (even when incorrectly understood) receives lip service in the press, but no current-day vehicle exists to nurture it. +360 +360! I love this expression, it doesn't just say I agree, it also says let's think our way back around to where we are now and see what we can learn. In this case, I'll offer a smaller variation made out of exactly half as much stuff: +180 As in, let's turn this situation around. We'll end up at a different vantage point, where history's forced-perspective might appear more obvious. ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Consolidation and collaboration
+1 For an example of how wonderful and also not-Wikipedia this can be, check out: http://c2.com/cgi/wiki?PortlandPatternRepository If you haven't seen this yet, it's the best wiki ever, a sprawling hyperlinked conversation that covers just about every concept in programming, with lots of opinion and historical tidbits (i.e., it's not an encyclopedia at all and isn't trying to be) and a focus on people, places, and patterns. On Wed, Jun 15, 2011 at 7:30 AM, Carl Gundel ca...@psychesystems.com wrote: Why not use a wiki to collaborate and organize thoughts and information? -Carl From: fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] On Behalf Of CHM de Beer Sent: Wednesday, June 15, 2011 10:21 AM To: fonc@vpri.org Subject: [fonc] Consolidation and collaboration Hello fonc members, Over the past year I have greatly enjoyed, and benefited from, threads on this mailing list, written by individuals with far greater understanding and insight than I will ever master. The diversity, and somewhat seasonal traffic, does make me wonder if we are maximising the impact of our efforts. Would there be value in a platform for us to: capture all the ideas and initiatives, distil them into groups, reduce them to a handful of concepts to explore, and finally focus all our efforts on them? Obviously that means I may have to relinquish a pet project, but I am surprisingly comfortable with that, if substantial progress on the fundamentals of new computing results. Consider the typical mail from Dr. Kay. He would comment: Back in 196x, we considered *this*, but elected to go with *that*, because of *some reason*, or we did *this*, going forward you should consider *something else*. In my imagination I can see as many opinions as there were people in the room. Yet the language suggests the initiatives were reduced to a handful, and then pursued with vigour. 
Just think of what we can do by following the same pattern, and we have the added benefit of doing it as a virtual, distributed team. Significant action is needed, because I fear the odds are stacked against us. Invention receives no attention, and innovation (even when incorrectly understood) receives lip service in the press, but no current-day vehicle exists to nurture it. The only hope I have is that a number of talented individuals pool their energy and collaborate towards fundamentally changing computing. I am willing to start a database of ideas and initiatives if there are at least a few in the fonc group that agree in principle. Regards, Marius -- mobile: +1 604 369 1854 skype: chmdebeer twitter: twitter.com/chmdebeer ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc -- Casey Ransberger
Re: [fonc] Alternative Web programming models?
Amplification: if I wagered a guess, I'd go with "of human reach" or "of potential leverage". I also have one amp that goes up to 11, which is really nice because sometimes I like a touch of extra kick for the solo. On Jun 13, 2011, at 9:35 AM, Julian Leviston jul...@leviston.net wrote: On 14/06/2011, at 1:17 AM, Alan Kay wrote: It would be great if everyone on this list would think deeply about how to have an eternal system, and only be amplified by it. Hi Alan, You might need to elucidate a little more on this for me to personally understand you. Not sure how others feel, but the Worlds work seems to be just a description of a versioning pattern applied to running program state. Why is it especially interesting? In the Ruby community, we have gem, which is a package manager, and also bundler, the two of which handle dependency management and sets of bundles of dependencies in context and in situ elegantly and beautifully. Depending on your requirements when writing code, you can point to a version of a gem, the latest version, or say things like "versions greater than 2.3". It works really well. It also fits very neatly with your idea of (Alexander's? ;-)) the arch and biological cellular structure being a scalable system: this system is working in practice extremely well. (Mind you, there's a global namespace, so it will eventually get crowded I'm sure ;-)) What do you mean by an eternal system? Do you mean a system which lasts forever? And what do you mean by amplified? Do you mean amplified as in our energy around this topic, or something else? Sorry for not understanding you straight away, Regards, Julian. ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Logo and Silicon
Inline and abridged. On Jun 13, 2011, at 1:03 PM, Jecel Assumpcao Jr. je...@merlintec.com wrote: Have you looked at the ALUs that kids have been making in Minecraft? You can _walk around_ in there. Inside the simulated microprocessor, and actually watch the electrons walk down the Redstone wire. And when you want the birds-eye, a simple server mod lets you fly way up and look down. I watched some movies of this and while very neat, it has some of the limitations of Visual6502. If I had actually played with it and had been able to choose what to look at, it might have been more understandable. I'm not sure what they let you do with Minecraft currently without paying for it (it's pretty cheap anyway), but I know that at one point you could play single player for free, which is all you need. I ended up thinking it was the best creative game I'd seen since SimCity, so I just went ahead and bought it. Fortunately you pay once and you're done, which I think is a very respectable business model in today's age. You can fly around in maps and modify them, but not *run* them, using the map editor here: http://davidvierra.com/mcedit.html The Elements of Computing Systems seems to have influenced the Minecraft creative community -- I haven't read it myself. It kind of blows my mind that people have the patience to do computational stuff in-between letting arrows fly at Creepers (I hate those things.) You can get schematic data for various Redstone (read: cellular automata) projects here: http://www.mcschematics.com/index.php?board=79.0 It's worth noting that the default texture pack is not well suited to viewing this stuff. I ended up making my own. I hope this isn't too far off-topic, but I'm fascinated by anything that tricks the rest of us into programming of any kind. -- Jecel ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Age and Language (was Re: [fonc] Alternative Web programming models?)
Below. On Jun 13, 2011, at 2:16 PM, C. Scott Ananian csc...@laptop.org wrote: On Mon, Jun 13, 2011 at 4:02 PM, BGB cr88...@gmail.com wrote: Consider what it'd be like if we didn't represent code as text... and represented it maybe as a series of ideograms or icons (TileScript nod). Syntax errors don't really crop up any more, do they? Given a slightly nicer User Interface than tilescript, you could still type your code, (ie use the keyboard to fast-select tokens), but the computer won't validate any input that isn't in its dictionary of known possible syntactically correct items given whatever context you're in. I think "tiles prevent syntax errors" is a red herring. Sure, you can prevent stupid typos by offering only tiles with correctly spelled keywords, but that's not really a major problem in ordinary experience. The more pernicious errors aren't especially affected one way or the other by tile-based systems. (You could just as accurately say that strongly-typed systems prevent errors.) Agreed, when we're talking about adults, and especially ones who've already learned to code. When it comes to kids and non-programming adults though, I do think that e.g. Scratch is really powerful. I don't have the cognitive science to back up the statement that I'm about to make, so I'm hoping folks will try to shoot some holes in it. Kids may not yet have the linguistic development that serious programming requires. Adults who don't already code may find themselves short on some of the core concepts that conventional programming languages expect of the user. In both cases, I think visual systems can get useless syntactic hurdles out of the way, so that users can focus on developing a command of the core concepts at work. Inviting criticism! Fire away, ladies and gentlemen. ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Logo and Silicon
Jecel, thanks for your reply. Inline. On Jun 11, 2011, at 4:38 PM, Jecel Assumpcao Jr. je...@merlintec.com wrote: Casey, Here's a fun thought: while staring at the Visual6502 visualization, it occurred to me that the likes of Verilog and VHDL probably represent a rather tall order to new folks (like, hey, me,) and the idea popped in there. (snip) But did you actually understand the Visual6502 and not just the idea of it? Nope. But being able to see it compute struck me. I do think I took something of value from the experience: I just don't know what it is yet. I didn't, and I am reasonably familiar with that processor at the schematic level, and am also an integrated circuit designer (I have created a few chips at the rectangle level). Even simple microprocessors are hard to grok, yes, because they're vast. The next watershed, though, might be finding a relatively simple architecture which is also fast. Field programmability gives me a touch of hope that systems will be able to optimize adaptively to the behavior of the user(s) driving, and evolution itself is a pretty straightforward idea, but this is just a thought-example. Most likely the shape it would take would end up surprising me. Biology is a great place to look for working concurrent systems, at least I think so, so hopefully that's a worthwhile thought experiment. The problem is quantitative - there are just too many rectangles changing color at once and there are too many to fit in the screen at a reasonable size. You have to blow them waay up, slow them waay down, and then focus on units of things, I think. Have you looked at the ALUs that kids have been making in Minecraft? You can _walk around_ in there. Inside the simulated microprocessor, and actually watch the electrons walk down the Redstone wire. And when you want the birds-eye, a simple server mod lets you fly way up and look down. 
It was the thing that jumped out at me and said: it's time, mortals can make processors now, which means you can do anything. Quit your job and go, Case. I probably won't have time to finish a design, but I'll have learned a lot in failing, at least. I really hate to deal with structural designs in Verilog or VHDL (as opposed to behavioral designs) which is why I use TkGate. I'm going to have to look up TkGate, because I don't know the difference. Unfortunately, we get into quantitative problems again with screen sizes. My hand-drawn schematics in the 1980s were always one to three pages of very large paper. You needed a big desk to be able to fully open them up, and you could see both the big picture and details at the same time. It was easy to quickly trace some signal from one side of the design to the other. Imagine walking alongside the wire as the electron travels. While your view is very limited to your specific locus of attention, you can zoom out. A heads-up display would probably help. Now people do schematics on letter sized paper. The project is broken down into some 20 or so pages. Each page has just one or two integrated circuits (or subblocks) in them and wires running to the edge of the page to connectors that indicate other pages. In other words, this is a netlist and not a schematic and there is no advantage compared to the same thing in VHDL. It has the disadvantage of taking up 20 pages to do what VHDL would do in just 3. Sure. So in this hypothetical Logo, which I'm calling WeeHDL like a right parody, you should be able to do macroscopic things like what you do in Verilog. We seem to have learned that different sets of metaphors help explain different sets of problems. The problem I have with Verilog seems to be that it's used to avoid thinking about the very details that I hope to understand. I obviously want a lot of abstraction, but I also want to be able to understand the mapping between these representations, which got me thinking OMeta, etc. 
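WeeHDL exists only as a thought experiment in this thread, but the core idea can be made concrete in a few lines: a Logo-style turtle whose pen deposits rectangles of metal on a chip layer as it moves. This is a minimal sketch under stated assumptions; all names here (MetalTurtle, Rect, the layer name "metal1") are invented for illustration, not part of any real tool.

```python
# A sketch of a turtle that "draws with metal ink": each forward move
# leaves behind a rectangle of wire on the current layer, accumulating
# a simple layout. Right-angle turns only, to keep geometry rectangular.

from dataclasses import dataclass

@dataclass
class Rect:
    layer: str
    x: float
    y: float
    w: float
    h: float

class MetalTurtle:
    """Turtle graphics, except every move leaves a wire behind."""

    def __init__(self, layer="metal1", width=1.0):
        self.x, self.y = 0.0, 0.0
        self.heading = 0          # 0=east, 90=north, 180=west, 270=south
        self.layer = layer
        self.width = width        # wire width
        self.rects = []           # the accumulated layout

    def forward(self, dist):
        dx = {0: dist, 90: 0, 180: -dist, 270: 0}[self.heading]
        dy = {0: 0, 90: dist, 180: 0, 270: -dist}[self.heading]
        # Record the wire segment as a rectangle on the current layer.
        x0, y0 = min(self.x, self.x + dx), min(self.y, self.y + dy)
        w = abs(dx) or self.width
        h = abs(dy) or self.width
        self.rects.append(Rect(self.layer, x0, y0, w, h))
        self.x += dx
        self.y += dy

    def left(self, degrees=90):
        self.heading = (self.heading + degrees) % 360

# Draw an L-shaped wire: 10 units east, then 5 units north.
t = MetalTurtle()
t.forward(10)
t.left()
t.forward(5)
print(len(t.rects))   # -> 2 rectangles of metal
```

From here, "macroscopic things like what you do in Verilog" would amount to composing these turtle procedures into reusable cells, which is exactly the kind of representation-mapping an OMeta-style translator could target.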
It dawned on me that I could probably make a little Logo where the turtles draw with metal ink. Has anyone tried anything like this before? Does it seem like a good idea to try it now? You might like Chuck Moore's OKAD system, which is used to create the GreenArray chips. The software is not available, but there are videos of him giving demos of it. Mostly in his fireside chats: Oh, I looked at their site the first day that I became aware that I wanted to actually build a computer instead of keeping the blinders on about my hardware. I didn't know that he was involved in that. The TinyComputer jumped out at me as a system that I actually wouldn't mind writing assembly on, and it's been a long time since I've said that. Going to look at GreenArrays again. http://www.ultratechnology.com/rmvideo.htm http://www.ultratechnology.com/okad.htm http://www.colorforth.com/vlsi.html Note that the software evolved quite a bit from the early 1990s (when it was a "paint the rectangles" style) to
This vs. That (was Re: [fonc] Alternative Web programming models?)
Hahaha, this is it exactly! Perpendicular, but a poignant friend/mentor of mine said real software engineering hasn't emerged because there aren't enough people dying yet. He said that after I made my bid on what the difference is. My angle was: the difference between software and engineering is just that when the real bridge you designed falls over with people on it, you probably won't work again, whereas in software we just apologize to the users and ship a nice hotfix for them. In any event it seems that he and I agree that the difference is usually one of consequence, or lack thereof. I must tip my hat, however, to Alan's argument that we haven't even found our arches yet. This just resonates with me, especially after stomaching all of this best-practice-as-religion crap in industry; I really want more evidence that we haven't got a clue what we're doing yet, because it would be lovely to dispel the myth that we do. On Jun 10, 2011, at 3:00 PM, Craig Latta cr...@netjam.org wrote: Can I ask how this is not an OS? Operating systems have more entertaining failure modes... if a really bad crash can render the hardware unbootable, it's an operating system. :) -C -- Craig Latta www.netjam.org/resume +31 6 2757 7177 + 1 415 287 3547 ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Alternative Web programming models?
On Jun 9, 2011, at 11:01 AM, Josh Gargus j...@schwa.ca wrote: Conceptually, yes. In practice, no, because the HTML/DOM render-target is also the lingua franca that makes the Web searchable and mashupable. So I'd like to first point out that you're making a great point here, so I hope it isn't too odd that I'm about to try to tear it down. Devil's advocate? While the markup language has proven quite mashable/searchable, I think it's worth noting that just about *any* structured, broadly applied _convention_ will give you that; it could have been CSV, if SGML hadn't been tapped. One of the nicest things about markup has been free-to-cheap accessibility for blind folks... with most languages you can embed in a web page, this tends to go out the door quickly, and AJAX probably doesn't help either. If the browser was an operating system, I imagine we'd find a more traditional route to this kind of accessibility, which is about text-to-speech, and if you have the text, you should be able to search it too. Take a moment to imagine how different the world might be today if the convention had been e.g. s-exprs. How many linguistic context shifts do you think you'd need to build a web application in that world? While I love programming languages, when I have a deadline? bouncing back and forth between five or six languages probably hurts my productivity. Not to mention that we end up compensating in industry by hyperspecializing. I wish it was easier to hire people who just knew how to code, instead of having to qualify them as backend vs. frontend. I mean seriously. It's like specializing in putting a patty that someone else cooked on a bun, in terms of personal empowerment. Factory work, factory thinking. I'm the button pusher and your job is to assemble the two parts that I send down the line every five seconds when I push the button. Patty, bun. 
I hoped Seaside might help a touch, and the backend guys all seem to really dig it (hey, now we can make web apps all by ourselves, without the burden of the wrangling boring markup-goop) but the frontend folks I've talked to (in-trench, not online) are hard pressed to have time to learn a whole new system. Since they build the part that the stakeholders actually see, I think they end up with more in the way of random asks from business folks, which have this way of making it clear over engineering managers, etc. There's also the problem wherein you have a whole bunch of people out there who've never seen anything else and don't have any context for why someone like me might be displeased with the current state of affairs. It'd be nice to be able to sort out how many of these problems are cognitive+technical versus cultural/social. The most interesting thing I've seen so far was when I was at a (now sold/defunct) company called Snapvine. We integrated telephony with social networking sites. Anyway, I spent more time looking at MySpace than I wanted to, and was stunned to discover: Kids with MySpace pages were learning HTML and CSS just to trick out and add something unique to their profiles, and didn't seem to relate what they were doing to software at all. I wasn't sure if I was supposed to smile or frown when that realization hit me. That's about when I started talking to people about HyperCard again, which is ultimately how I found my way to Squeak, and then this list. ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
[fonc] A Message for Frank about HyperCard
Frank, Really good to hear that you've taken your first steps. You have great parents and a promising future. Keep up the good work! I was really impressed that you've already gotten into HyperCard; I must have been fully 12 or 13 years old before I noticed it sitting on my own computer. I thought I'd let you know about a trouble I had with it, once, way back when, in case you ever run into anything similar in your own journey through life: I looked through the menus, and I couldn't find the share this stack option. It was particularly frustrating because I knew I wanted to click it, but I didn't really know why back then. It took me another roughly 15 years to figure that part out. Turns out there's a connection between epidemiology and the transmission of ideas, and the upshot is that the share this option ends up being the most powerful thing in the whole system, regardless of what other features are there. Also, don't let the popular kids get you down. A little bit of share this can change everything overnight when it comes to popularity. Hope this helps! Your pal, Casey ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Alternative Web programming models?
You know this isn't usable with the browser I have handy at the moment, but I can already see it. Really interesting, I can imagine it would look more or less like this. Thanks for putting me onto this, Ian. On Jun 9, 2011, at 2:52 PM, Ian Piumarta piuma...@speakeasy.net wrote: On Jun 9, 2011, at 14:38 , Casey Ransberger wrote: Take a moment to imagine how different the world might be today if the convention had been e.g. s-exprs. http://hop.inria.fr/ ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Re: Electrical Actors?
Ooooh, this looks shiny. I must admit having a taste for the stochastic. Reading. Thank you! Nope... drat. Clicking to request access to the document instead. On Jun 5, 2011, at 10:00 PM, Max OrHai max.or...@gmail.com wrote: You might get a kick out of this toy model I made to demonstrate how a mesh (or cloud) of minimal hardware actors can work together to compute. It's the latest in a series of explorations of the particle / field concept... http://cs.pdx.edu/~orhai/mesh-sort I think there's a lot that can be done with fine-grained homogeneous self-contained hardware in quantity, and I may get around to building a poor man's Connection Machine out of a bucketful of microcontrollers one of these days. The AVR is quite a capable machine for $5 apiece! -- Max (big snip for bandwidth)___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] Re: Electrical Actors?
Below:) On Jun 5, 2011, at 11:19 PM, C. Scott Ananian csc...@laptop.org wrote: I explored this idea a bit once upon a time in the context of Java: http://cscott.net/Publications/design.pdf The bibliography cites most of the related work I know about. --scott Reading it now -- thanks for sharing this. I remember feeling so fascinated when I heard people talking about JavaOS on a chip; while I was aware of people jamming whole operating systems into ROM, the idea of an OS written mostly in a high level language by itself was completely new to me then (I was in highschool, I had only just secured Internet access for the first time) and it was just so compelling... You can do that?? was what I remember thinking, wait, how's that work... Oh I don't even care, forget about this manual memory management thing then! May I have a triple tall mocha with no whip? And do you do those in bulk orders? It's a shame that it took me so many years to find Smalltalk. Sometimes I wish I could go back in time just to point myself at things. I may not have listened to future-me then, though, so I suppose the real lesson is to remember that I probably don't know anything of import even now, and that the best ideas I've got presently are likely to find hard judgement in my own eventual hindsight. ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Re: [fonc] languages
I've heard of an IDE called VisualAge (I think?) that was written in Smalltalk but could parse and to a degree reason about other languages, but I've never seen it. Have you looked for that thing, or was it just not so great? On Jun 5, 2011, at 11:55 PM, BGB cr88...@gmail.com wrote: On 6/5/2011 11:03 PM, C. Scott Ananian wrote: On Sun, Jun 5, 2011 at 8:35 PM, BGB cr88...@gmail.com wrote: I would personally like to see an IDE which was: more-or-less language neutral, to what extent this was practical (more like traditional standalone editors); not tied to or hard-coded for particular tools or build configurations (nearly everything would be actions tied to high-level scripts, which would be customizable per-project, and ideally in a readily human-editable form); not being tied to a particular operating system; ... This is Eclipse. Granted, it's an IDE which is designed-by-committee and hard to love, but it answers all of your requirements. --scott I don't believe Eclipse is it, exactly... it handles multiple languages, yes, and can be used with multiple operating systems, and supports multiple compiler backends, ... however, AFAIK, pretty much all of the logic is written in Java-based plugins, which is not ideal (and so, essentially the logic is tied to Eclipse itself, and not to the individual projects). I was imagining something a little different here, such as the project control files being more like Makefiles or Bash-scripts, and so would be plain-text and attached to the project (along with the source files), where it is possible to control things much more precisely per-project. more precisely, I had imagined essentially a hybrid of Makefiles and Bash. also imagined was the possibility of using JavaScript (or similar) as the build-control language, just using JS in a manner similar to Make+Bash, likely with some special-purpose API functionality (to make it more usable for Make-like purposes). 
a difficulty with JS though is that, normally, IDEs like things to be fairly declarative, and JS code is not declarative, unless the JS is split into multiple parts: info about the project proper is stored in a JSON-based format, and then any build logic is JS files attached to the project. so, the IDE would mostly just manage files and editors, and invoke the appropriate scripts as needed, and many IDE actions essentially just call functions, and so one causes something to happen by replacing the default action functions (such as in a script loaded by the project file). actually, conceptually I like the JS route more, even if it would likely be a little more verbose than a Bash-like syntax. IMO, the next best alternative is SciTE, so what I was imagining would be a more expanded version of SciTE. then there is also CMake, ... there is also SCons, which is conceptually related to the prior idea, but is based on Python. but, for the most part, I have mostly just ended up sticking with good old text editors and makefiles, as these have served me well, despite their drawbacks (the cost of switching to an alternative strategy likely being somewhat higher than that of doing nothing and staying with the present strategy). IOW, the "if it ain't broke, don't fix it" strategy... or such... ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
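The split BGB describes, declarative project info in JSON, with build behavior supplied by replaceable action functions, is compact enough to sketch. BGB imagined this in JavaScript; the sketch below uses Python for illustration, and every file name and manifest key in it is invented, not from any real IDE.

```python
# A sketch of the proposed IDE design: the project manifest is pure
# declarative JSON, and the IDE drives everything through a table of
# named action functions that a project-attached script may replace.

import json

# Declarative project metadata, as it might appear in a project file.
MANIFEST = json.loads("""
{
  "name": "demo-project",
  "sources": ["main.c", "util.c"],
  "actions_script": "build_actions.py"
}
""")

# Default IDE actions: the IDE just looks up and calls these by name.
def default_build(project):
    return "cc -o %s %s" % (project["name"], " ".join(project["sources"]))

ACTIONS = {"build": default_build}

# A script loaded from the project file would replace entries in ACTIONS,
# which is how per-project build logic customizes the IDE's behavior:
def custom_build(project):
    # Project-specific logic, analogous to a Makefile rule.
    return "make " + project["name"]

ACTIONS["build"] = custom_build

print(ACTIONS["build"](MANIFEST))   # -> make demo-project
```

The appeal of this shape is that the manifest stays machine-readable for the IDE's declarative needs, while all imperative build logic lives in ordinary, per-project script files.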
Re: [fonc] languages
Inline On Jun 6, 2011, at 10:48 AM, Alan Kay alan.n...@yahoo.com wrote: It was ... and is mostly associated with what came to be called Algol 58, but not Algol 60. Another way to look at it is that almost all systems are difficult to maintain down the line -- partly because they were not designed with this in mind -- and this is true for most programming languages. However, I don't think this is necessary, but more an artifact of incomplete design. This, and design drift, wherein over time various forms of pseudo-arch get piled up and end up jutting out at weird angles:) Cheers, Alan ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
Language in Test (was Re: [fonc] languages)
I'm actually not talking about the potty mouths:) APL is up there on my list now, but it hasn't knocked Prolog out of the top slot. I've done a bunch of test automation. I really enjoy testing because on a good day it can approach something reminiscent of science, but OTOH the test code I ended up wrangling (often my own code) wound up the worst sort of mess, for a few reasons. Not-test code that I've worked on or composed myself has always been a lot better, for reasons I don't totally understand yet. I can toss some guesses out there: One is that people who do automation are often outnumbered 5:1 or worse by the people making the artifacts under examination, such that there's too much to do in order to do anything very well. Another is, testing often fails to strike decision makers as important enough to invest much in, so you end up hacking your way around blocking issues with the smallest kludge you can think of, instead of instrumenting proper hooks into the thing under test, which usually takes a little longer, and risks further regression in the context of a release schedule. Things I learned from Smalltalk and Lisp have been really useful for reducing the amount of horror in the test code I've worked on, but it's still kind of bad. Actually I was inspired to look for an EDSL in my last adventure that would cut down on the cruft in the test code there, which was somewhat inspired by STEPS, and did seem to actually help quite a bit. Use of an EDSL proved very useful, in part just because most engineering orgs I've been in don't seem to want to let me use Lisp. Being able to claim honestly that I'd implemented what I'd done in Ruby seemed to help justify the unorthodox approach to my managers. I did go out of my way to explain that what I'd done was compose a domain specific language, and this did not seem to get the same kind of resistance, just a few raised eyebrows and funny looks. 
I keep getting the feeling that the best tool for the job of encoding and running invariants might be a Prolog, and so this one is currently at the top of my list of things to understand deeply. Anyone else here ever deal with a bunch of automation? Ever find a way to apply what you know about programming languages themselves in the context of software verification? Because I would *love* to hear about that! On Jun 5, 2011, at 7:06 PM, David Leibs david.le...@oracle.com wrote: I love APL! Learning APL is really all about learning the idioms and how to apply them. This takes quite a lot of training time. Doing this kind of training will change the way you think. Alan Perlis quote: A language that doesn't affect the way you think about programming, is not worth knowing. I love this quote. Thanks for your (snip) -David Leibs ___ fonc mailing list fonc@vpri.org http://vpri.org/mailman/listinfo/fonc
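The test EDSL mentioned in this thread was written in Ruby and never posted to the list, so it can't be shown. What can be sketched is the general shape such a DSL takes: named invariants registered declaratively, then run as a batch. The sketch below is in Python, and all names in it (invariant, run_all, the example check) are invented for illustration.

```python
# A sketch of a tiny embedded testing DSL: checks are registered under
# prose descriptions, so test intent reads like a specification rather
# than boilerplate, which is the cruft-reduction the thread describes.

checks = []

def invariant(description):
    """Decorator that registers a named check with the runner."""
    def register(fn):
        checks.append((description, fn))
        return fn
    return register

@invariant("a saved record round-trips unchanged")
def _check_roundtrip():
    record = {"id": 1, "name": "x"}
    saved = dict(record)          # stand-in for a real save/load cycle
    return saved == record

def run_all():
    """Run every registered invariant; True only if all of them hold."""
    results = [(desc, fn()) for desc, fn in checks]
    return all(ok for _, ok in results)

print(run_all())   # -> True
```

The same registration-plus-runner shape is what makes a Prolog attractive for this job: each invariant becomes a clause, and the runner becomes a query over all of them.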
[fonc] Alto-2?
Hello all, I've found myself with the first sizable chunk of free time I've had in years. I've been having so much fun! But I must admit, after a bunch of hustle-your-butt software work, the software part isn't completely satisfying me. I miss taking apart computers. It's wonderful that they've gotten so small, but it comes at a price, I think. No one's really figured out a way to make something that small which leaves room for serviceability. When I was a kid, I learned _so much_ with the case open. Somewhere I read about an XO installation where they found a little girl who'd set up an assembly line and was doing her own repairs on other kids laptops. No one asked her to, she just decided to do it. It really warmed my heart:) and I couldn't help feeling some nostalgia, because I was *totally* that kid. And when you add free time to life long love, well, hah! I'm gonna build a computer this year. I was thinking it would be fun to throw out the Intel architecture and look at alternatives. I know nothing of silicon, not really, and so I'm liable to grab parts off of the shelf, though that visual-6502 simulator I found on the web has me tempted all the same. For a CPU, I thought it might be interesting, and temporarily future-proof, to go with something ARM. I know people have had the Squeak VM running on ARM chips, which is sort of my only req'ment anymore, outside of the web browser which lets me live in the modern world. But then I stopped. What about Frank? I have a feeling Frank should work anywhere, but since I've only seen things Ian is doing, I thought I'd stop to ask. If I wanted to be able to run VPRI's bits (if and) when they become generally available, is there a particular chip architecture I should go with? Okay that's the first question. The other question is, was there anything in particular about the Alto that folks on this list miss? 
Would the Alto make an interesting case study for me to explore, or have modern computers imitated it to the point where it isn't the thing to examine? I'm picking my way through the Wikipedia article, but it occurs to me that, not having used the thing, it might be hard for the details on the wiki to jump out at me in any sort of aha moment. Not sure that the tech is at the point where I can hope to construct something Dynabook-shaped, but I know that I can make one improvement to the interim desktop design just by using a flat panel that will swivel into a portrait orientation:)
Re: [fonc] Alto-2?
Totally. "People who are really serious about software should make their own hardware." That quote's been stuck in my head ever since I saw Jobs giving the commencement speech at Stanford on YouTube. I've been wanting to build a computer again more and more ever since my usual vendor started using proprietary screws to keep me out of my laptop (it works... I don't really feel like reinventing the screwdriver head...)

Like I said, I don't believe that I have the requisite knowledge to do silicon design myself, even if it has gotten cheap enough. If it makes you feel better, though, I do believe that an interim machine of the early 21st century ought to be as self-describing and self-implementing as possible, so I'm currently thinking about making two models:

a) I want to play with software
b) I want to play with FPGAs

I've also thought about actually shipping something where the *only* option was to go with one of those FPGA development boards. I haven't investigated the ramifications of going that route, but if I had to guess, I bet those boards cost more than the deployment boards, and I'm not sure what the difference is performance-wise.

Thanks for recommending SiliconSqueak. Jecel's project is so awesome! And while I totally can't wait to have one:) I think what I can do this year will likely be limited to integrating off-the-shelf parts. That said, I'm hoping I can create something interesting even with those constraints. I've bounced email back and forth with Jecel, and I really like his point of view:)

On May 25, 2011, at 4:57 PM, Max OrHai max.or...@gmail.com wrote:

This sounds like a really cool project, and I hope you report to the list as you make progress. Have you looked at Jecel Assumpcao's SiliconSqueak? An awful lot can be done on the cheap with modern FPGAs, so long as you don't stray too far from the conventional CPU design space... (For an example of what I mean by "too far", check out http://cellmatrix.com or http://greenarraychips.com).
I really wish more people designed whole systems, both hardware and software, these days.

-- Max

On Wed, May 25, 2011 at 2:44 PM, Casey Ransberger casey.obrie...@gmail.com wrote: (snip)
Re: [fonc] Beats
Cool! I've been hoping to see some more multimedia stuff happen for Ruby, and I actually like the little DSL they've got going there: it's very visual, and a grid is perfect when what you're emulating is a drum machine, which usually has a grid interface or some such and doesn't know about inexact timing like a drummer does. It looks like fun... too many shiny distractions:)

On Mon, May 16, 2011 at 8:21 PM, Josh McDonald j...@joshmcdonald.info wrote:

Thought you guys would get a kick out of this YAML-WAV sequencer written in Ruby: https://github.com/jstrait/beats

-- Therefore, send not to know For whom the bell tolls. It tolls for thee.

Josh 'G-Funk' McDonald - j...@joshmcdonald.info - http://twitter.com/sophistifunk - http://flex.joshmcdonald.info/

-- Casey Ransberger
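The grid-as-DSL idea is easy to prototype. Below is a minimal, hypothetical Python sketch (not the actual Beats gem format, which is YAML) of turning a drum-machine grid into a list of timed trigger events; the `schedule` function and the 'X'/'.' notation are made up for illustration.

```python
# Minimal sketch of a drum-machine grid: each track is a string where
# 'X' means "trigger the sample on this step" and '.' means rest.
# (A toy, not the YAML song format the Beats gem actually uses.)

def schedule(pattern, bpm=120, steps_per_beat=4):
    """Expand a {track: grid_string} pattern into sorted (time_sec, track) events."""
    step_dur = 60.0 / bpm / steps_per_beat  # seconds per grid cell
    events = []
    for track, grid in pattern.items():
        for i, cell in enumerate(grid):
            if cell == "X":
                events.append((round(i * step_dur, 4), track))
    return sorted(events)

# One 16-step bar: four-on-the-floor kick, backbeat snare, straight hats.
pattern = {
    "kick":  "X...X...X...X...",
    "snare": "....X.......X...",
    "hat":   "X.X.X.X.X.X.X.X.",
}

for t, track in schedule(pattern)[:6]:
    print(t, track)
```

The nice property of the grid notation is exactly what the note above says: it's rigidly quantized, so the "score" doubles as its own visualization; a renderer would just mix each track's sample into a WAV buffer at the computed offsets.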