Re: [fonc] Reading Maxwell's Equations

2010-02-26 Thread Andrey Fedorov
I've been reading Roy and Haridi's Concepts, Techniques, and Models of
Computer Programming [1], and I see it as a great practical approach to
VPRI's ideas on the principles of programming languages. Is there anyone
more authoritative here who could chime in on the intellectual connection (if
there is one)?

Cheers,
Andrey

1. http://www.amazon.com/dp/0262220695

On Fri, Feb 26, 2010 at 7:50 PM, Gerry J geral...@tpg.com.au wrote:

 John, et al
 I am interested in what you think are the better approach alternatives to
 handle complexity and size (etc), what criteria should apply and why one
 ranks higher than another.
 For example, should a language support both actors and be model driven?
 Is a mix of type inference and explicit typing with operators (like OCAML)
 better than extremely late binding, and for what?
 Should there be a hierarchy of syntax compatible languages, with different
 restrictions, say extremely late binding at the top, and fully typed and OS
 or device driver oriented at the bottom?
 (ie pick the right tool in the family, size of hand held screwdriver up to
 exchangeable bits for a power tool).
 Thanks for your interesting references and insights.

 Regards,
 Gerry Jensen






 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Reading Maxwell's Equations

2010-02-26 Thread Andrey Fedorov
  ... what criteria should apply [to complexity] and why one ranks
higher than another.

I don't have a chance to look up the source, but Dijkstra has a
wonderful observation that every implementation of a solution in CS
includes two kinds of complexity: that intrinsic to the problem, and
that introduced by the tools you're using to express it in a way
computers can interpret.

While not a well-defined measure, I think this is a powerful intuition.

- Andrey

Sent from my cell. Please forgive abbreviations and typos.

On Feb 26, 2010, at 7:50 PM, Gerry J geral...@tpg.com.au wrote:

 John, et al
 I am interested in what you think are the better approach
 alternatives to handle complexity and size (etc), what criteria
 should apply and why one ranks higher than another.
 For example, should a language support both actors and be model
 driven?
 Is a mix of type inference and explicit typing with operators (like
 OCAML) better than extremely late binding, and for what?
 Should there be a hierarchy of syntax compatible languages, with
 different restrictions, say extremely late binding at the top, and
 fully typed and OS or device driver oriented at the bottom?
 (ie pick the right tool in the family, size of hand held screwdriver
 up to exchangeable bits for a power tool).
 Thanks for your interesting references and insights.

 Regards,
 Gerry Jensen





 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Reading Maxwell's Equations

2010-02-26 Thread Andrey Fedorov

   ... what criteria should apply [to complexity] and why one ranks higher
 than another.


Ack, please forgive the copy/paste fumble in my last message. I found the
source of Dijkstra's observation, and it seems I had added quite a bit of my
own conclusions. The original was:

For us scientists it is very tempting to blame the lack of education of the
 average engineer, the short-sightedness of the managers and the malice of
 the entrepreneurs for this sorry state of affairs, but that won't do. You
 see, while we all know that unmastered complexity is at the root of the
 misery, we do not know what degree of simplicity can be obtained, nor to
 what extent the intrinsic complexity of the whole design has to show up in
 the interfaces. We simply do not know yet the limits of disentanglement. We
 do not know yet whether intrinsic intricacy can be distinguished from
 accidental intricacy. We do not know yet whether trade-offs will be
 possible. We do not know yet whether we can invent for intricacy a
 meaningful concept about which we can prove theorems that help. To put it
 bluntly, we simply do not know yet what we should be talking about, but that
 should not worry us, for it just illustrates what was meant by intangible
 goals and uncertain rewards. (November 19th, 2000 - EWD1304:
 http://www.cs.utexas.edu/users/EWD/transcriptions/EWD13xx/EWD1304.html)


I think he's a little too doom-and-gloom here. I think it's fair to say that
a given solution to a problem has some intrinsic complexity - one that
depends on which cognitive models we decide to use to consider the problem.
Exploring the intrinsic complexity of problems and solutions is the domain
of mathematics.

Then there is the related complexity of designing a machine to derive the
solution for a given instance of the problem (or as is more common nowadays,
designing an algorithm to run on modern hardware). The latter complexity is
extrinsic to the problem itself, and also, in some sense, extrinsic to
our cognitive reasoning about it.

EWD seems most pessimistic about the meta-properties of our models -
measuring the complexities in relation to the problem and to each other. But
while modeling our cognitive models is a much younger domain, there are
certainly people asking these questions under the guise of foundations of
mathematics, cogsci of mathematics, or cognitive science in general.
Personally, I find Lakoff and Núñez's Where Mathematics Comes From rather
convincing in its broad strokes, even if lacking in testable hypotheses.

Cheers,
Andrey
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Reading Maxwell's Equations

2010-02-27 Thread Andrey Fedorov
John,

Have you been able to find any good definitions for your use of
trustworthiness? The wikipedia article
about trustworthy computing [1] makes it sound like something which
originated in Microsoft's marketing department.

Using intuitive definitions, the three metrics you mention seem to
be synonymous. Code size, when measured in the number of thoughts it takes
to conceptualize it, is synonymous with complexity. As for the right way
to write trustworthy code, the only convincing argument I've heard is from
SPJ:

Tony Hoare has this wonderful turn of phrase in which he says your code
 should obviously have no bugs rather than having no obvious bugs. So for me
 I suppose beautiful code is code that is obviously right. It's kind of
 limpidly transparent. -Simon Peyton Jones, from Peter Seibel's Coders At
 Work


Just keep it as simple (and short) as possible.
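
To make that concrete, here's a toy Python sketch (the function names are
hypothetical, purely for illustration): two correct palindrome checks, one of
which is obviously right and one which merely has no obvious bugs.

def is_palindrome_obvious(s: str) -> bool:
    # Obviously right: a palindrome reads the same reversed.
    return s == s[::-1]


def is_palindrome_fiddly(s: str) -> bool:
    # Also correct, but you have to check the index arithmetic and the
    # loop bound yourself before you believe it.
    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            return False
        i += 1
        j -= 1
    return True


assert is_palindrome_obvious("level") and is_palindrome_fiddly("level")
assert not is_palindrome_obvious("levels") and not is_palindrome_fiddly("levels")

Both are small; only the first is limpidly transparent.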

Cheers,
Andrey

1. http://en.wikipedia.org/wiki/Trustworthy_Computing

On Fri, Feb 26, 2010 at 6:15 PM, John Zabroski johnzabro...@gmail.com wrote:

 I've been following this project for a long time, and only recently joined
 the mailing list.

 For a long time, I did not fully understand Alan Kay's thoughts on software
 architecture, despite reading many of his press interviews and watching his
 public presentations.  What I've come to feel is that Alan has a partially
 complete vision, and some inconsistent viewpoints likely blocking a complete
 vision of computer science.

 For example, I had heard Alan refer to Lisp as Maxwell's Equations of
 computer science, but did not fully grasp what that really meant.  When I
 first played around with OMeta, I described it to a friend at MIT as
 ridiculously meta. This idea was pretty much confirmed by Ian Piumarta's
 widespread unreasonable behavior whitepaper, which basically argues that
 we can't truly do software engineering until we actually know what that
 means, so the best approach to go with is extremely late-binding.  The idea
 to use syntax-directed interpretation via PEG is an obvious way to achieve
 this, as it addresses one of the three key stumbling blocks to building real
 software engineering solutions -- size.

 But I am not convinced VPRI really has a solution to the remaining two
 stumbling blocks: complexity and trustworthiness.

 In terms of complexity, I think I'll refer back to Alan Kay's 1997 OOPSLA
 speech, where he talks about doghouses and cathedrals.  Alan mentions Gregor
 Kiczales' The Art of the Meta Object Protocol as one of the best books
 written in the past 10 years on OOP-work.  I don't really understand this,
 because AMOP is entirely about extending the block-structured, procedural
 message passing approach to OO using computational reflection.  From what
 I've read about Smalltalk and the history of its development, it appears the
 earliest version of Smalltalk I could read about/heard of, Smalltalk-72,
 used an actors model for message passing.  While metaobjects allow
 implementation hiding, so do actors.  Actors seems like a far better
 solution, but it is also obviously not covered by Goldberg and Robson's
 Smalltalk-80 Blue Book.  To be clear, I very much dislike Kiczales model and
 think it says a lot about current practice in Java-land that most people
 abuse reflection through the use of tools like AspectJ.  Yet, aspect-weaving
 is also seen in the Executable UML realm, where you draw pictures about a
 problem domain, and separate concerns into their own domains. But it seems
 way more pure than AMOP because a model-driven compiler necessarily will
 bind things as late as necessary, in part thanks to a clockless, concurrent,
 asynchronous execution model.  The aspect-weaving seen here is therefore
 different, and the entire model is connect the dots using handshaking
 protocols.

 For me, besides the execution model, the other most basic measure of
 complexity is, for how much complexity you add to the system, how many more
 features can you produce?  UNIX hit a blocking point almost immediately due
 to its process model, where utility authors would tack on extra functions to
 command-line programs like cat.  This is where Kernighan and Pike coined the
 term cat -v Considered Harmful, because cat had become way more than just
 a way to concatenate two files.  But I'd argue what K&P miss is that the
 UNIX process model, with pipes and filters as composition mechanisms on
 unstructured streams of data, not only can't maximize performance, it can't
 maximize modularity, because once a utility hits a performance wall, a
 programmer goes into C and adds a new function to a utility like cat so that
 the program does it all at once.  So utilities naturally grow to become
 monolithic.  Creating the Plan9 and Inferno operating systems just seems
 incredibly pointless from this perspective, and so does Google's Go
 Programming Language (even the tools for Go are monolithic).

 Apart from AMOP, Alan has not really said much about what interests him and
 what doesn't 

Re: [fonc] Reading Maxwell's Equations

2010-02-28 Thread Andrey Fedorov
Considering the ambition of the project relative to its resources, I think
it's reasonable for STEPS to keep a low profile and spend less effort on
educating than one might like.

That said, I'd appreciate a simple suggested reading list for independent
study - in my case, for someone with an undergrad in CS.

*That* said, this section http://vpri.org/html/writings.php is wonderful.

Cheers,
Andrey

On Sun, Feb 28, 2010 at 11:50 AM, Reuben Thomas r...@sc3d.org wrote:

 On Sunday, February 28, 2010, Brian Gilman brian.gil...@gmail.com wrote:
  That having been said, I think the project is an interesting one, but I'm
  not sure it's really ready for tons of publicity yet.

 Think of a software project as like Plato's model of the soul as a
 charioteer with two horses, one immortal and one mortal, only without
 the goal of reaching heaven. The mortal horse is the imperatives of
 the real world: developers, money, users, releases and so on, while
 the immortal horse represents elegance, simplicity, performance,
 design perfection. A successful project usually manages to keep the
 two horses in relative harmony, making something good and practical.
 VPRI seems to have started off with just the immortal horse (or, if
 you take the view that the project's members are gods, two immortal
 horses).

 In other words, I think you have it the wrong way round: it is
 precisely by caring about one's public that one fixes the rough edges
 so that the code is releasable and usable even when it's not finished
 (and it never is). This is the whole point of release early, release
 often: stay in touch with the real world.

 I think it's scandalous that a publicly-funded non-secret project
 does not have far stricter requirements for public engagement than are
 apparent here.

 I would add that the reason I care is because I have a great deal of
 respect for Ian Piumarta in particular: I was blown away by his
 Virtual Virtual Machine work when I went to INRIA Rocquencourt in
 1999, greatly impressed by his code generation work on Smalltalk (at
 least that did get out the door), and really excited when I first came
 across COLA. This stuff should be out there!

 --
 http://rrt.sc3d.org

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Recommended reading (was: Reading Maxwell's Equations)

2010-03-01 Thread Andrey Fedorov
Does anyone have recommended reading that would help me understand
the low-level implementation of STEPS? Would reading up on OOVM [1] and other
embedded Smalltalk systems be worthwhile?

- Andrey

1. http://www.smalltalk.org/versions/OOVM.html

On Sun, Feb 28, 2010 at 5:42 PM, Dan Amelang daniel.amel...@gmail.com wrote:

 On Sun, Feb 28, 2010 at 9:53 AM, Andrey Fedorov anfedo...@gmail.com
 wrote:
  Considering the ambition of the project relative to its resources, I
 think
  it's reasonable for STEPS to keep a low profile and spend less effort on
  educating than one might like.

 Thank you :) We do have limited resources and wild ambitions. And I
 won't be able to answer emails as thoroughly as I am today for that
 reason.

  That said, I'd appreciate a simple suggested reading list for
 independent
  study - in my case, for someone with an undergrad in CS.

 A reasonable suggestion. Besides the list on the VPRI website, you
 could also look at the references in the writings. Also, Alan likes to
 give people references to read, so you could try him, and report back
 here (with his permission).

 Dan

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] my two cents

2010-03-01 Thread Andrey Fedorov
Also available in non-proprietary format via your browser here:
http://docs.google.com/viewer?url=http://research.microsoft.com/en-us/um/people/toddpro/papers/disruptive.ppt

Cheers,
Andrey

On Mon, Mar 1, 2010 at 5:53 PM, Dirk Muysers dmuys...@hotmail.com wrote:

  Everybody in this discussion should have read the following:


 http://research.microsoft.com/en-us/um/people/toddpro/papers/disruptive.ppt

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] my two cents

2010-03-02 Thread Andrey Fedorov
It's a must in a hyperbolic sense - Dirk is just excited about a good
presentation. I think any idea that encourages attempts at discovering
perspectives different from those in fashion is important - this
presentation falls into that category.

Cheers,
Andrey

On Tue, Mar 2, 2010 at 10:18 AM, John Zabroski johnzabro...@gmail.com wrote:

 Explain to me why that document is a must, because I don't get it.
 Somebody has to put their foot down, put some weight in his or her butt,
 push back, and say, BULLSHIT.  I don't get it.

 Looking at Richard Hamming's three questions:

 1. I am simultaneously interested in open, reflective, dynamically
 distributed and dynamically federated systems
 2. I am a practitioner, so the most important problem is squishing out
 applications for my company
 3. This is a hobby, and for a hobby I want to build something that
 radically changes the future -- anything less is a complete waste.  I
 consider all databases and graphical user interfaces to be examples of 1,
 and so my interests map out all over the place.  One big problem in a field
 this large would be dangerously myopic.

 Also, Richard Hamming or Todd Proebsting isn't going to hire me, so these
 questions are pretty irrelevant to this discussion (for me).

 What I possess is an extremely closed-minded and visionary type of
 personality, and a desire to settle for nothing less.  The last thing I am
 going to do is appeal to Todd for what I should be doing or asking Alan Kay
 or Ian Piumarta about.  I should just go directly to Alan or Ian and push
 back on their ideas.  A much better article than Todd's powerpoint would be
 Accept Defeat: The Neuroscience of Screwing Up
 http://www.wired.com/magazine/2009/12/fail_accept_defeat/all/1

 To summarize what I don't find compelling in Todd's presentation:
 1. Everything he said of value is obvious to anyone with any perspective at
 all
 2. It's all been said before
 3. Flight Data Recorders is a fancy way to say all software is a living
 system, and computer scientists suck at studying living systems, and
 programmers lack sufficient tools for repairing living systems
 4. Flight Data Recorders' biggest disadvantage is not mentioned: dealing
 with purity
 5. Checkpoints/Undo: the biggest flaw in this idea is always the same,
 and is the same as my criticism of Warth and Kay's Worlds idea: you are
 *still* moving the program counter, and so state *is* changing.  Any attempt
 to hide the fact that the program counter is moving seems like an abstraction
 failure to me.
 6. Checkpoints/Undo does not mention attribute grammars as a current
 solution.
 7. Checkpoints/Undo: Completely goes against the earlier point in the
 powerpoint that smart algorithms will do more for performance than compiler
 optimization
 8. Compiler optimization vs. algorithms in general is a false dichotomy in
 a maximally open, maximally reflective system, and I'll claim anyone who
 thinks this dichotomy is real will not push themselves into an *extreme
 position* necessary to radically innovate
 9. Disruptive Database Integration appears to be LINQ, which relies on
 monad comprehensions and does not facilitate metalogic substitution.  LINQ
 hits a complexity wall early, and currently Visual Studio and .NET Assembly
 compilation introduces real abstraction barriers to how people think about
 database integration.  Typed data access has never been a real problem, it
 is a red herring only MS Research would latch onto, because it looks cool
 and sells VS licenses.
 10. Disruptive Parsing: Working with Concrete Syntax Trees only makes
 sense if you need to work with concrete syntax trees; advocating unnecessary
 abstraction weight seems silly
 11. Disruptive XML Manipulation: Misses the point that hardwiring data
 interchange to XML means weak service encapsulation and disallows for
 extreme late-binding between the client and server; this raises questions
 about versionability, immediately
 12. Disruptive constraint solvers: In my experience, the biggest problem
 with constraint solvers today -- in terms of putting them into a language --
 is that they are mostly designed by people who don't have professional
 training and experience designing languages
 13. Distributed programming: Performance matters.  The speed of light is
 your bottleneck.  Stuff like fault tolerance isn't that complicated and the
 bigger issue will be what to distribute and how to distribute it, e.g. fast
 algorithms for dynamic reconfiguration that don't have long system pause
 characteristics

 I'm going to conclude by saying the three stumbling blocks are size,
 complexity and trustworthiness.  Paying attention to all of Todd P's big
 problems just puts your focus on solutions-oriented thinking, rather than
 fundamentals-oriented thinking.

 On Mon, Mar 1, 2010 at 5:53 PM, Dirk Muysers dmuys...@hotmail.com wrote:

  Everybody in this discussion should have read the following:


 

Re: [fonc] my two cents

2010-03-02 Thread Andrey Fedorov
John Zabroski wrote:

 the three stumbling blocks are size, complexity and trustworthiness


How are these different?

A small program is a simple program by definition, assuming it's expressed
in an intuitively comprehensible way. And a simple program is a program I
can trust to do what I think it does. Conversely, the only reason I wouldn't
trust a program (assuming I trust the compilers/interpreters) is because it
would be too complicated to understand. That's what I meant when I quoted
SPJ earlier:

Tony Hoare has this wonderful turn of phrase in which he says your code
 should obviously have no bugs rather than having no obvious bugs. So for me
 I suppose beautiful code is code that is obviously right. It's kind of
 limpidly transparent. -Simon Peyton Jones, from Peter Seibel's Coders At
 Work


Cheers,
Andrey
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] my two cents

2010-03-02 Thread Andrey Fedorov
 Sean Heber wrote:

 I could send a single message to an object that, by itself, seemed obvious
 and straightforward but which actually causes a huge cascade of messages to
 flow among a huge set of objects that result in all sorts of unexpected
 consequences to the state of the original object which drastically alters
 the meaning of future messages to that object. That's the kind of complexity
 that causes problems, IMO, and it has nothing to do with size.


We just have a difference of definitions then. I would say the side-effects
of a given operation definitely contribute to its size.
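
Here's a toy Python sketch of what I mean (the classes are hypothetical,
invented just for this example): a call that reads as one small step, but
whose side-effects cascade through other objects and change the meaning of
later calls.

class Inventory:
    def __init__(self):
        self.stock = {"widget": 1}
        self.listeners = []

    def remove(self, item):
        self.stock[item] -= 1
        # The cascade: every listener reacts to the change...
        for listener in self.listeners:
            listener.on_stock_change(self, item)


class Pricing:
    def __init__(self, catalog):
        self.catalog = catalog

    def on_stock_change(self, inventory, item):
        # ...and one of them quietly mutates shared pricing state.
        if inventory.stock[item] == 0:
            self.catalog[item] *= 2  # scarcity surcharge


class Store:
    def __init__(self):
        self.catalog = {"widget": 10}
        self.inventory = Inventory()
        self.inventory.listeners.append(Pricing(self.catalog))

    def place_order(self, item):
        self.inventory.remove(item)  # looks like one small step
        return self.catalog[item]    # but its meaning just changed


print(Store().place_order("widget"))  # 20, not the 10 you'd expect from place_order alone

The visible program is tiny; the effective size includes the whole cascade.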

Perhaps another kind of example is constructing parser grammars. I've spent
 a lot of time looking at OMeta examples and thinking I understand it. After
 all, they are often short and seemingly simple. Then I sit down and try to
 implement one from scratch and fail miserably. Perhaps I have yet to learn
 the correct mental model to use when building these, but they sure seem
 small and simple when I'm not trying to write one...


That's unrelated - as I said, I'm assuming you trust the interpreter. The
problems you (and I, as well) have with understanding OMeta don't have a
place in a model of computation. Whatever model we use for computation, I
think it shouldn't confuse the notions of programmer competency and
expressive power of a language.

John Zabroski wrote:

 My simplest counter-example would be...


A counter-example to your definitions of simple, small, and
trustworthy isn't enough, though, because I don't agree with how you're
using those words. I don't think the way you're using them conforms to any
useful model. I'm literally asking for a general method I could use to
measure the simplicity, the size, and the trustworthiness of a program
in a way that is consistent.

Let me be explicit. Here is my model: A programming language combines a
certain number of perspectives from which it lets you express algorithms
(a kernel language [1] tries to isolate a single perspective).
Perspectives consist of a set of related metaphors [2] which provide
thinking-blocks for you to structure an algorithm. The size of an
algorithm expressed from a certain perspective is measured in the number of
these thinking blocks your mind needs to understand its behavior. Finding
the optimal perspective from which to express an algorithm will lead to a
representation which is at once small, simple, and trustworthy.
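
As a rough, hypothetical illustration (Python, with informal block counts of
my own - not a rigorous measure): the same computation expressed from two
perspectives, where the number of thinking-blocks you hold in your head
differs.

nums = [3, 1, 4, 1, 5, 9, 2, 6]

# Perspective 1: stateful iteration.
# Informal thinking-blocks: an accumulator, a loop, a branch, an update.
total = 0
for n in nums:
    if n % 2 == 0:
        total += n * n

# Perspective 2: a declarative description of the result.
# Informal thinking-blocks: a filter, a mapping, a reduction.
total_declarative = sum(n * n for n in nums if n % 2 == 0)

assert total == total_declarative == 56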

The Big Problem In Our Field is finding two things: --one-- new perspectives
of computation and --two-- ways of implementing them in machines.

An important feature of this model is isolating these definitions from
questions about the learning curve or experience of a given programmer.
It doesn't address the pertinent question of how well can a given
programmer solve a given problem?, as that would involve measuring the
intuition a programmer has in regard to different perspectives, and the
size of the solution from those perspectives. Those are non-trivial to
measure.

Cheers,
Andrey

1. This term is from Roy and Haridi's Concepts, Techniques, and Models of
Computer Programming. I remember reading Kay refer to such a language as
being a crystallization of style.
2. http://en.wikipedia.org/wiki/Conceptual_metaphor

On Tue, Mar 2, 2010 at 4:59 PM, Sean Heber s...@fifthace.com wrote:

 On Mar 2, 2010, at 3:18 PM, Andrey Fedorov wrote:

  John Zabroski wrote:
  the three stumbling blocks are size, complexity and trustworthiness
 
  How are these different?
 
  A small program is a simple program by definition, assuming it's
 expressed in an intuitively comprehensible way.

 I'm not so sure that a small program is necessarily simple.

 I could send a single message to an object that, by itself, seemed obvious
 and straightforward but which actually causes a huge cascade of messages to
 flow among a huge set of objects that result in all sorts of unexpected
 consequences to the state of the original object which drastically alters
 the meaning of future messages to that object. That's the kind of complexity
 that causes problems, IMO, and it has nothing to do with size.

 Perhaps another kind of example is constructing parser grammars. I've spent
 a lot of time looking at OMeta examples and thinking I understand it. After
 all, they are often short and seemingly simple. Then I sit down and try to
 implement one from scratch and fail miserably. Perhaps I have yet to learn
 the correct mental model to use when building these, but they sure seem
 small and simple when I'm not trying to write one...

 l8r
 Sean


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] my two cents

2010-03-03 Thread Andrey Fedorov

 Let's consider a program is a network with objects as nodes and paths of
 communication as edges...


Channels between nodes are part of an algorithm, and hence count
towards its size. Again, I'm looking for a *well-defined* measurement of
size and complexity where the two aren't the same.
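
For instance, here's the flavor of measurement I'm asking for, sketched
hypothetically in Python (both measures are my own crude stand-ins, not
anyone's official definitions): a size measure and a topology measure that
agree on a linear chain of objects but come apart on a fully connected one.

import itertools

def size_measure(nodes, edges):
    # Crude "size": how much there is to read.
    return len(nodes) + len(edges)

def entanglement_measure(nodes, edges):
    # Crude "complexity": average number of channels per node,
    # i.e. how entangled the topology is.
    degree = {n: 0 for n in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return sum(degree.values()) / len(nodes)

nodes = list(range(8))
chain = [(i, i + 1) for i in range(7)]           # N objects, N-1 channels
clique = list(itertools.combinations(nodes, 2))  # every object talks to every other

for name, edges in [("chain", chain), ("clique", clique)]:
    print(name, size_measure(nodes, edges), entanglement_measure(nodes, edges))
# chain:  size 15, entanglement 1.75
# clique: size 36, entanglement 7.0

Whether either of these captures what we actually care about is exactly the
open question.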

Often in science, relations amongst objects are not considered as
 significant.


What?

This was particularly the case in biology for a long time where a large
 number of practitioners thought they could understand biological systems in
 their entirety by isolating subsystems and not considering that in reality
 these systems exist in an environment


Even if this were true once, modern practitioners obviously don't believe
it. It's now reasonably well accepted that evolutionary systems seem to
thrive on tight coupling and side effects, whereas engineered systems
attempt to minimize them. While really fascinating, evolved systems aren't in
the scope of what most programming is about.

if complexity can be understood in terms of topology and graph theory


But how can complexity be understood in terms of topology and graph
theory? What about the complexity of a sorting algorithm? Are you sure you
know what topology is?

Cheers,
Andrey

 On Tue, Mar 2, 2010 at 9:25 PM, Wesley Smith wesley.h...@gmail.com wrote:

 On Tue, Mar 2, 2010 at 1:18 PM, Andrey Fedorov anfedo...@gmail.com
 wrote:
  John Zabroski wrote:
 
  the three stumbling blocks are size, complexity and trustworthiness
 
  How are these different?
  A small program is a simple program by definition, assuming it's
 expressed
  in an intuitively comprehensible way.

 I disagree that a small program is by definition a simple program.
 Let's consider a program as a network with objects as nodes and paths
 of communication as edges.  If you have a program with N objects and
 N-1 edges, it's going to be a simple and pretty linear program.  If
 however, in the other direction, each object itself has N-1 edges for
 N*(N-1) paths of communication you'll quickly end up with a complex
 system without growing in size.  Complexity is a question of topology
 and graph theory, size is a question of quantity.

 Often in science, relations amongst objects are not considered as
 significant.  This was particularly the case in biology for a long
 time where a large number of practitioners thought they could
 understand biological systems in their entirety by isolating
 subsystems and not considering that in reality these systems exist in
 an environment.  A powerful case for thinking about relationships was
 made by Bertalanffy in his book General Systems Science.  What
 particularly struck me about his work is the difference between
 informational feedback/openness (e.g. Cybernetics) and structural
 feedback/openness particularly of the thermodynamic kind.  In
 Cybernetics, it's not possible for a system to evolve into a more
 complex structure whereas with thermodynamic feedback and input of
 negative entropy it is.  This may seem like it has nothing to do with
 computer science or software, but I think it does.  These concepts can
 show us how to understand complex systems, especially those that are
 capable of changing in structure.

 Incidentally, there's a link between the ideas of Bertalanffy (and
 also the related ideas of Prigogine) to Lambda Calculus via the work
 of Walter Fontana and his work on Abstract Chemistry
 (http://tuvalu.santafe.edu/~walter/Papers/barrier.pdf,
 http://tuvalu.santafe.edu/~walter/Pages/publications.html)


  And a simple program is a program I
  can trust to do what I think it does. Conversely, the only reason I
 wouldn't
  trust a program (assuming I trust the compilers/interpreters) is because
 it
  would be too complicated to understand. That's what I meant when I quoted


 As for trustworthiness ... if complexity can be understood in terms of
 topology and graph theory, then trustworthiness can be understood in
 terms of applying graph theoretical techniques to verify or guarantee
 behavior of the structures represented by the network.

 All in all, I think there's a pretty clear distinction to be made.

 wes

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] my two cents

2010-03-04 Thread Andrey Fedorov
Size in terms of number of bytes or number of lines isn't interesting
to me. It's a variable that is easy to measure, but is dependent on too many
other variables. Just as with any model, we should be looking for variables
as independent as possible.

Complexity of an algorithm without programmers is absolutely nonsensical.
I was talking about complexity *assuming maximal proficiency with a model of
computation*. All I'm doing is taking out the programmer's skill as a free
variable - I'm fixing a variable in order to get rid of a dimension, nothing
more.

Alejandro said:

 Which one of the following systems is more complex?


Those aren't systems, they are graphs. You can define complexity to be
whatever you wish. If you define it as (| edges | + | vertices |), system B
is more complex.

I think I'm on quite a different wavelength than you guys. Arguing about the
inherent meaning of complexity is an idiotic exercise in futility - leave
it to the philosophy undergrads.

John Zabroski wrote:

 Andrey Fedorov wrote:

 Let me be explicit. Here is my model: [...]

 This is wrong, of course.


A model is a bunch of definitions - I defined the inter-dependence of
the arbitrary terms size, complexity, perspective, etc. in terms of the
models used in Lakoff's work on cognition. Just because it doesn't agree with
your intuition about what the words inherently mean doesn't make it
wrong. I agree that if you were to ask me to find measures and actual
mathematical relationships between these variables, I would have to do a lot
more thinking.

 Concepts, Techniques, and Models of Computer Programming is inferior to
 Design Concepts in Programming Languages by Franklyn Turbak and David
 Gifford.  The former is about how to think in Oz, while the latter is how
 to think like a programming language designer.


That's a bit crass, but thanks for the recommendation. I'll bump it to the
next thing on my reading list after I'm done with CTM.


 Of course, you are actually not that far off, too.  The problem is that
 when all solutions are uniformly small, you need a way to compare their
 expressive power when they are extended.  Thus, you study complexity in
 terms of what changes and what is fixed (the absence of change), and also
 how many bits of information you've encoded into each solution size.  Then,
 there is a question of how much effort you must expend to make something
 trustworthy.  Trustworthiness itself has complexity issues, e.g. how many
 bits of information are added, removed or changed from one executable
 specification to another.  And just studying bits of information is only one
 technique, derived from work by Kolmogorov and Chaitin.  Here is a great
 example using a precedence graph: http://cr.yp.to/qhasm/20050210-fxch.txt
 We can therefore measure, for example, the complexity in mapping a model to
 a linear store, and also try to predict timing characteristics of the
 physical model.  We can also then compare them to other models.


You're talking way past me. By complexity I just mean how hard is this to
understand? An experiment to measure the complexity of some code would be
taking an experienced Java engineer, giving him Java code he hasn't seen,
and asking him to understand it to the point of being able to recreate it
from memory, or modify it, or something else to demonstrate he's
internalized it to some extent. I don't see how Kolmogorov or Chaitin's
models will help here.

There are two schools of thought I've seen about making something
trustworthy: (1) repeating your intentions - like type systems, unit
tests, etc., and (2) creating abstractions. I'm not interested in (1), as I
don't think it's viable in the long-term, and (2) collapses nicely into the
definition of complexity.
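
A toy Python contrast of the two schools (the names and code are
hypothetical): (1) restates the intention as a check every time the value is
touched; (2) builds an abstraction under which the bad state simply can't be
constructed.

# School (1): repeat the intention as a check at each use site.
def deposit_checked(balance: int, amount: int) -> int:
    new_balance = balance + amount
    assert new_balance >= 0, "balance must never go negative"
    return new_balance


# School (2): an abstraction that makes a negative balance unrepresentable,
# so the property holds everywhere without being restated.
class NonNegative:
    def __init__(self, value: int):
        if value < 0:
            raise ValueError("negative value")
        self.value = value

    def add(self, amount: int) -> "NonNegative":
        return NonNegative(self.value + amount)


print(deposit_checked(10, 5))        # 15
print(NonNegative(10).add(5).value)  # 15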

CTM really doesn't do a great job discussing any such stuff. Chapter 10 on
 GUIs is a good example; they make a similar low slope encoding decision
 Microsoft architect Mike Hillberg made; they argue that using syntax to
 express containment is a good idea.  I am saying syntax is better for
 instead expressing the mathematical relationships that define that
 containment, because if you need to change containment relations, it is
 algebraically manipulable.  You can then separately define how to make
 changes conflict-serializable.  WPF doesn't think this way; instead, it
 defines concepts like a Visual Tree, Logical Tree, Visual and Freezable
 objects.  This in turn leads to the API controlling you, and so you spend
 most of your most productive hours figuring out how you can manipulate the
 static production rules of WPF's Retained Mode architecture.  Retained Mode
 should be more flexible, and also allow for higher-level control of resource
 virtualization, as well as defining UI compositions in a way more amenable to
 combining interaction graphs and scene graphs.  -- This is intended to
 totally *blow up* any preconceived notions you have about user interfaces
 and how they should be built e.g. what you maybe have read

Re: [fonc] System A vs B, what?

2010-03-04 Thread Andrey Fedorov
The picture you gave isn't a system, it's a directed graph. I guess you're
implying anything you imagine to be a system can be represented as a graph
- but what *is* a system?

Also, you can define the complexity of a graph in any way you like. Until
you show that this definition is somehow representative of the real world,
you're just masturbating.

Cheers,
Andrey
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] my two cents

2010-03-04 Thread Andrey Fedorov
Michael Arnoldus wrote:

 On Mar 4, 2010, at 15:50 , John Zabroski wrote:

 There are many *forms* of complexity and using just one *metric* seems
 silly to me.

 Makes sense.


So you suggest we should keep saying complexity without specifying which
kind we're talking about? No! Pick one form of complexity and find a clear
definition of it. Show its relevance by pointing to data it explains. As
long as we are talking about complexity without narrowing it down to one
of the many forms you so wisely point out exist, we're going to be babbling
nonsense with as much result as freshman philosophy majors with a bong.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] System A vs B, what?

2010-03-05 Thread Andrey Fedorov
 Alejandro Garcia wrote:

 Andrey Fedorov wrote:

 The picture you gave isn't a system, it's a directed graph. I guess
 you're implying anything you imagine to be a system can be represented as
 a graph - but what *is* a system?


 Well it isn't a system in the same sense that a map isn't the terrain. I
 think people call those things a representation.


 Precisely! So we *are* on the same page. It's a representation which
 doesn't always preserve a system's complexity (without defining
 complexity).


 No we are not in the same page. I'm pretty sure that if a map shows a
 mountain and I go to the terrain that mountain is going to be there. [...]
 I'd better got to know them. For example I can't see the entire USA or the
 entire planet but with a good map I can make pretty good Idea of how that
 system looks.


But a good map-maker includes the important stuff and discards the
details. When complexity - or, more specifically, the possible states of
each component - is part of the important stuff, a CRT is not a good
representation of a system.

So all I'm getting from your earlier point is that the CRT representation of
 a system can't be used to define complexity. So it's a crappy
 representation, after all.


 The point of the diagrams was to show that some people think of complexity
 as the number of nodes + arrows in a system.(ie how many words it takes to
 describe it), while other people see complexity as the number of degrees of
 freedom (possible states) of the system.


Both seem like perfectly adequate definitions of properties of a system.
Might be a good idea to call one of them in vitro complexity and the other
live complexity. Or maybe the former is better referred to as size? I see
how that connects to the earlier thread now - great point.

 System A has 16 possible states
 System B has 2
 So in essence system B is equivalent to just one circle! Isn't that
 simple (Inherent Simplicity as Goldratt would call it)


Are those numbers you derived from the picture alone? If you did, could you
go through the math? Unless I'm misunderstanding the notation (could you
link to a rigorous definition?), I see System B having a lot more than 2
states.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] System A vs B, what?

2010-03-05 Thread Andrey Fedorov
On Fri, Mar 5, 2010 at 4:56 PM, John Zabroski johnzabro...@gmail.com wrote:

 On Fri, Mar 5, 2010 at 1:14 PM, Andrey Fedorov anfedo...@gmail.com wrote:

 If we *do* want to define complexity, we could put a constraint on
 these CRT graphs, like nodes have no state? This is starting to smell like
 the classical argument against OOP.

 What classical argument are you referring to?


That objects are a bad way to maintain state. I'm still undecided on if it's
a good argument or not. If we add that constraint: that each node represents
a function without side effects, I imagine the complexity of a system can be
wholly defined from a graph representing it. Such a graph is more or less
what Haskell code defines.
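
Here's a rough sketch of that idea in Python (the wiring scheme is my own
hypothetical construction, not how Haskell actually works): when every node
is a side-effect-free function, the graph alone pins down what the system
computes - evaluation order and caching can't change the answer.

# Each node is a pure function; edges say whose outputs feed whose inputs.
nodes = {
    "xs":     lambda: [3, 1, 2],
    "sorted": lambda xs: sorted(xs),
    "total":  lambda xs: sum(xs),
    "report": lambda srt, tot: {"sorted": srt, "total": tot},
}
edges = {
    "sorted": ["xs"],
    "total":  ["xs"],
    "report": ["sorted", "total"],
}

def evaluate(name, cache=None):
    # Memoized evaluation over the graph; purity means the cache is safe.
    if cache is None:
        cache = {}
    if name not in cache:
        inputs = [evaluate(dep, cache) for dep in edges.get(name, [])]
        cache[name] = nodes[name](*inputs)
    return cache[name]

print(evaluate("report"))  # {'sorted': [1, 2, 3], 'total': 6}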

In system A, nodes have no explicit state - in other words, state is not
 given a name the outside world can refer to and inquire upon.


But what are the nodes? Maybe I'm still just not following what that
graph-like thing represents - according to Wikipedia, Current Reality
Trees (http://en.wikipedia.org/wiki/Current_reality_tree_(TOC)) are
graphs (*not* necessarily trees) which are meant to model observed
phenomena, the type that occurs in the real world. This has nothing to do
with writing algorithms - algorithms are structures which are rigorously
defined and run on hardware you trust to adhere to those rigorous
definitions upon execution.


 From the perspective of side effects, system A can still have deadlocks
 and/or race conditions (ordering side-effects that lead to unsafe
 computational sequences).  We can adorn each node in A with an effect type,
 possibly allowing each node to have a type defined by a separate type
 system.


 In system B, nodes have explicit coordination of computational sequences,
 but that explicit coordination does not guarantee safety.  As I understand
 current thinking in category theory, such as work by Glynn Winskel on
 general nets, the big idea is to unfold System B into an occurrence net,
 thus giving its precise semantics, which we can then show to be either (a)
 inconsistent (b) consistent (c) undecidable (d) unknown [no technique for
 unfolding is known].


So are nodes computational sequences, and the arrows just represent the
sequence in which they run? But if they have side-effects of the kind where
any node can change the behavior of any other node, what's the point of what
order they run in?
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] System A vs B, what?

2010-03-08 Thread Andrey Fedorov
Thanks for the recommendation Felix, Alan.

On Mar 5 Alejandro Garcia wrote:

  Andrey, this is what I mean when I say system B has only two states: [...]


Two questions and a comment:

Q1 - the way you're reasoning about it makes directed edges
seem unnecessary. Why not just use undirected edges?

Q2 - if we are following Wikipedia's definition of
CRTs (http://en.wikipedia.org/wiki/Current_reality_tree_(TOC)),
an arrow from x to y means IF x THEN y, which is different from x IFF y
(which you seem to be using). Given Wikipedia's definition, it
should be possible for x = F and y = T. Hence, we can set your original node
to F, and the rest of the nodes to T. No?
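
To make Q2 concrete, a small Python check (just a sanity check of the truth
tables, nothing more): under IF x THEN y the assignment x = F, y = T
survives; under x IFF y it doesn't.

from itertools import product

def implies(x, y):  # IF x THEN y
    return (not x) or y

def iff(x, y):      # x IFF y
    return x == y

assignments = list(product([False, True], repeat=2))
print("satisfy IF-THEN:", [a for a in assignments if implies(*a)])
print("satisfy IFF:    ", [a for a in assignments if iff(*a)])
# IF-THEN admits (False, True); IFF rules it out - which is why setting the
# root to F and everything downstream to T stays consistent with the
# IF-THEN reading of a CRT arrow.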

C1 - there's a difference between making accurate representations of
observed systems and creating reusable but intuitive formal structures.
The former asks how do I describe and predict reality?; the latter asks
given these axioms (i.e. a CPU instruction set), what can I build? The
former is the role of experimental science (i.e. physics), the latter is math
(i.e. computer science). CRTs seem designed to describe the former.
Programming is about answering the latter. I'm not sure if we're all making
that distinction - the one between what is true and what is real [1].

Cheers,
Andrey

1. As Alan succinctly comments here:
http://www.edge.org/q2005/q05_8.html#kay
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] my two cents

2010-03-08 Thread Andrey Fedorov
On Mar 7 Pascal J. Bourguignon wrote:

 How hard is this to understand? depends on the processor, the person
 who tries to understand it.


Exactly: her familiarity with the language, and the programming style, etc.
Lots of variables, so I suggest we fix some as constants:

- the programmer is seeing this particular algorithm for the first time
- the programmer is ultimately familiar with the programming language
and style (she has maximal intuition for it)

By fixing these variables, I think we're left with only (1) the programming
style in question and (2) the type of problem.

On Mar 8, Michael Arnoldus wrote:

 Would it be fair to say that what we're really searching for is a formally
 specified processor that makes the programmer as efficient as possible?


Yes, for a given problem. I think we're searching for formal languages (and
formal structures) which are intuitive expressions of algorithms. Formal
means a machine is able to execute them. Intuitive means people find it
easy to reason using them.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Code Bubbles

2010-03-11 Thread Andrey Fedorov
Julian Leviston wrote:

 As Self and Smalltalk have tried to point out to us, make programming SO
 EASY that it is the SAME THING as using the computer.


But it *is* the same thing, by definition. The only difference is in the
interfaces: GMail is a specialized/simple interface, a C compiler is a
general/complicated one. With time, we'll figure out how to make general
interfaces which are simple.

- Andrey
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Code Bubbles

2010-03-12 Thread Andrey Fedorov
Julian Leviston jul...@leviston.net wrote:

 To restate my point, simply: programming computers is not as easy as using
 them, and using them is not even as easy or useful as it could be.


Don't get me wrong - I completely understand your intuition. I have it too.
But beware! Intuition weighs heavy with prejudices unique to the immediate
present.

Try to formalize what you're saying - in particular: what is programming,
and what is using?

I propose: using a computer means inputting some data for it to interpret.
One kind of data is incomplete instructions meant to be mixed with
further input at a later point: those are programs. As we move forward,
we'll keep finding better ways of structuring these incomplete
instructions so that they are both easy to complete (the way
things are easy to use) and general in their purpose (what we
intuit now as programming).
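
A tiny Python illustration of what I mean by incomplete instructions (the
translator example is hypothetical): the program fixes part of the input now
and is completed by input supplied later.

from functools import partial

def translate(dictionary, word):
    # A complete instruction needs both the dictionary and the word.
    return dictionary.get(word, word)

# "Programming": writing down an incomplete instruction - the dictionary is
# fixed now, the word is input to be supplied later.
to_french = partial(translate, {"cat": "chat", "dog": "chien"})

# "Using": supplying the remaining input.
print(to_french("cat"))   # chat
print(to_french("bird"))  # bird (no translation known, passes through)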

To reiterate - I'm not saying there's no difference between programming
and using, I'm saying the difference is a temporary artifact unique to our
present state of affairs.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Systems and artifacts

2010-04-30 Thread Andrey Fedorov
I've noticed the word artifact used in a similar sense as system, with
no overly obvious distinction [1]. One that comes to mind is an artifact
being something we're considering in relation to its human origins, and
system being something we are considering in terms of finding an optimal
representation. For example, we could consider TCP/IP an artifact if we're
talking about its design, or a system if we're talking about studying its
inherent properties [2].

Or is this off?

Cheers,
Andrey

1. The former, mostly in Brooks' The Design of Design. The latter, mostly
in writings relating to VPRI's work.
2. Kay makes a similar distinction between invention/engineering and
research/science here:
http://computinged.wordpress.com/2010/04/23/alan-kay-on-hoping-that-simple-is-not-too-simple/#comment-2318
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] new document

2011-11-08 Thread Andrey Fedorov

 there are people who read this list and website who may not feel qualified
 to participate much but none the less rely on them as a vital source of
 information


Fellow lurker here. Thanks for pointing that out! Still, e-mails that say
simply +1, me too, or I agree are a pain to receive on e-mail clients
where they appear as a separate message, so it's usually courteous to avoid
sending them to mailing lists.

Back to the original topic, I wanted to drop a quick congrats to everyone
on the amazing progress. It was a real treat to read this update, thanks
for your hard work!

Is there any chance I (someone with general programming knowledge, but
without much intimacy with STEPS) could get Frank running on a virtual
machine on OSX, or is that not worth attempting just yet?

Cheers,
Andrey
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc