On 10/10/10 1:18 PM, Julian Leviston wrote:
> On 11/10/2010, at 2:39 AM, Paul D. Fernhout wrote:
>> Software is never "done". :-) Especially because the world keeps
>> changing around it. :-) And especially when it is "research" and doing
>> basic research looking for new ideas. :-)
>
My answer can be best expressed simply and deeply thus:

"I don't see the unix command 'ls' being rewritten every day or even
every year."

http://git.busybox.net/busybox/log

:-)

Do you understand what I'm trying to get at? It's possible to use an 'ls'
replacement if I so choose, but that's simply my preference. 'ls' itself
hasn't been touched much in a long time, just as the ADD assembler
instruction is pretty similar across platforms. Get my drift?

http://en.wikipedia.org/wiki/Cover_Flow

:-)

And there's a lawsuit about that ongoing, by the way, with a US$250 million or so judgment being appealed.

I really dislike software patents. :-( Or being essentially forced to accede to them as a condition of employment:
  http://www.freepatentsonline.com/6513009.html

From:
  http://developer.yahoo.com/yui/theater/video.php?v=crockford-yuiconf2009-state
"Douglas Crockford: So one of the lessons is that patents and open systems are not compatible. I think the solution to that incompatibility is to close the Patent Office [applause]"

Part of the role of a language meta-description is implementation of
every translatable artefact. Thus if some source item requires some
widget, that widget comes along for the ride as part of the source
language (and framework) meta-description.

Well, licenses may get in the way, as they did for my translation of Delphi to Java and Python. Often code we have control over or responsibility for is only a small part of a large set of interdependent modules.

Also, you may call a service and that service may need to be reimplemented or rethought with an entire chain of conceptual dependencies...

One issue is, what are the boundaries of the system?

Do they even include things like the documentation and culture surrounding the artifact?

From:
  http://en.wikipedia.org/wiki/Social_constructivism
"Social constructivism is a sociological theory of knowledge that applies the general philosophical constructionism into social settings, wherein groups construct knowledge for one another, collaboratively creating a small culture of shared artifacts with shared meanings. When one is immersed within a culture of this sort, one is learning all the time about how to be a part of that culture on many levels. Its origins are largely attributed to Lev Vygotsky. ... Social constructivism is closely related to social constructionism in the sense that people are working together to construct artifacts. However, there is an important difference: social constructionism focuses on the artifacts that are created through the social interactions of a group, while social constructivism focuses on an individual's learning that takes place because of their interactions in a group. ... Vygotsky's contributions reside in Mind in Society (1930, 1978) and Thought and Language (1934, 1986). [2] Vygotsky independently came to the same conclusions as Piaget regarding the constructive nature of development. ... An instructional strategy grounded in social constructivism that is an area of active research is computer-supported collaborative learning (CSCL). This strategy gives students opportunities to practice 21st-century skills in communication, knowledge sharing, critical thinking and use of relevant technologies found in the workplace."

So, can a truly useful translation system function outside that social context, including arbitrary legal constructs?

So, what are the boundaries of the translation task? They may be fuzzier than they appear at first (or even sharper and more arbitrary, like above, due to legal issues or social risk assessments or even limited time).

I'm possibly missing something, but I don't see the future as being a
simple extension of the past... it should not be that we simply create
"bigger worlds" as we've done in the past (think virtual machines) but
rather look for ways to adapt things from worlds to integrate with each
other. Thus, I should not be looking for a better IDE, or programming
environment, but rather take the things I like out of what exists... some
people like to type their code, others like to drag 'n drop it. I see no
reason why we can't stop trying to re-create entire universes inside the
machines we use and simply split things at a component-level. We're
surely smarter than reinventing the same pattern again and again.

Despite what I wrote above, I basically agree with your main point here. :-)

Still, Chuck Moore rewrote Forth over and over again. He liked doing it, and it slowly improved. And as I read elsewhere, it would really upset the people at the company he worked with that he would rewrite Forth from scratch for every new processor (although relying on some microfiche from previous versions for inspiration). One account even said he'd rewrite basic routines like multiplication that came with systems, reasoning that, having written a multiplication routine ten times, his was more likely to work right than one written by someone who had only ever written one once. :-)

Chuck Moore is a bit more like a Jazz pianist in that sense than a Classical composer. :-)

From:
  http://www.advogato.org/article/1034.html
"Question to [Chuck Moore]: "Would you consider developing a new language from scratch?" CM: "No. I develop languages all the time. Each application requires one. But they’re all based on Forth. Forth has been called a language tool-kit. It is the starting point for an application language, much as the infant brain has the ability to learn any human language. ..."

But I more and more see Chuck Moore's point. I've been reimplementing the same darn triples for almost thirty years, but I like it -- it kind of has a Zen quality to it. I'm doing it yet again in JavaScript to learn about JavaScript. For me, that is like being a pianist and sitting down at a new piano in a new room and playing an old tune. :-) It is programming as performance more than programming as bottom-dollar maximal reuse. :-) But it is not incompatible with craftsmanship. Who do you want to redo your siding -- the carpenter who has done it once, or the carpenter who has done it ten times? Of course, at some point stuff gets boring and we automate it, sure. But a lot of people still play Jazz music live even if you can get it cheap on CD or even for free lots of places on the internet.
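For the curious, here is roughly the sort of thing I mean by "the same darn triples" -- a toy sketch of my own for this email, with an illustrative API (TripleStore, add, match with null as a wildcard), not the code of any particular system:

```javascript
// A toy triple store: add (subject, relationship, object) triples,
// then query with null in any position as a wildcard.
function TripleStore() {
  this.triples = [];
}

TripleStore.prototype.add = function (subject, relationship, object) {
  this.triples.push([subject, relationship, object]);
};

// Return every triple matching the pattern; null matches anything.
TripleStore.prototype.match = function (subject, relationship, object) {
  return this.triples.filter(function (t) {
    return (subject === null || t[0] === subject) &&
           (relationship === null || t[1] === relationship) &&
           (object === null || t[2] === object);
  });
};
```

You can write that in an afternoon in almost any language, which is exactly why it makes a good "old tune" to play on a new piano.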

At the link above there is some text (not sure if Chuck Moore said it) of: "I despair. Technology, and our very civilization, will get more and more complex until it collapses. There is no opposing pressure to limit this growth. No environmental group saying: Count the parts in a hybrid car to judge its efficiency or reliability or maintainability. All I can do is provide existence proofs: Forth is a simple language; OKAD is a simple design tool; GreenArrays offers simple computer chips. No one is paying any attention."

Now, is it not a good thing if lots of people have experience writing "ls" from scratch? Isn't redundancy in human cultural learning overall a good thing? To know that there are thousands of people who could write ls in their sleep, the same way concert pianists could play Chopsticks in their sleep?

And also that one may even figure out some way to improve it (make it faster, make it more intuitive, make it do something new, make it run on another piece of hardware, improve the testing methodology for it, and so on)?

Can you really say you "grok" the ls program (assuming it is standalone) if you have not written it? Maybe you have to use someone else's "ls" for monetary reasons or lack of interest in it, but maybe for the right person, it would be fun to write it again? Perhaps you can rewrite it with your own twist on it, like putting it into BusyBox or putting some sort of "CoverFlow" or other GUI on top of it as an extension of the file list idea? Or maybe redesign "ls" out of existence entirely by using a semantic web of triples to store data with not a file in sight?

What does it even mean that there is an ls "program" if it calls libraries? Where are the boundaries of "ls"? Isn't any implementation of ls just a sort of ripple in a larger wave pool of ideas constantly in interplay, like a lake's surface in the rain?

And even the people who maintain "ls" can get stuck in various problem states -- seemingly absurd ones, but real ones; example:
  http://www.busybox.net/FAQ.html
"The "linux" or "asm" directories of /usr/include contain Linux kernel headers, so that the C library can talk directly to the Linux kernel. In a perfect world, applications shouldn't include these headers directly, but we don't live in a perfect world. ... The BusyBox developers spent two years trying to figure out a clean way to do all this. There isn't one. The losetup in the util-linux package from kernel.org isn't doing it cleanly either, they just hide the ugliness by nesting #include files. Their mount/loop.h #includes "my_dev_t.h", which #includes <linux/posix_types.h> and <linux/version.h> just like we do. There simply is no alternative."

So, there is an example of people spending two years trying to write a better "ls" from one particular perspective (and failing, and making do with a kludge). :-)

Anyway, so I still maintain, software is never done because the world is continually changing around it.

Maybe in theory software like "ls" should be done. Maybe someday it will be done. But I don't see that anytime in the next decade or two with all the changes going on.

So, that's an interesting issue you raise with this example. How does FONC relate to implementing and debugging something as seemingly trivial and basic as "ls" that has caused experts two years of hair-pulling in the real world?

And even then, it looks like this version of ls still has "known bugs". :-)

http://busybox.sourcearchive.com/documentation/1:1.10.2-1ubuntu3/coreutils_2ls_8c-source.html
"""
/*
 * To achieve a small memory footprint, this version of 'ls' doesn't do any
 * file sorting, and only has the most essential command line switches
 * (i.e., the ones I couldn't live without :-) All features which involve
 * linking in substantial chunks of libc can be disabled.
 *
 * Although I don't really want to add new features to this program to
 * keep it small, I *am* interested to receive bug fixes and ways to make
 * it more portable.
 *
 * KNOWN BUGS:
 * 1. ls -l of a directory doesn't give "total <blocks>" header
 * 2. ls of a symlink to a directory doesn't list directory contents
 * 3. hidden files can make column width too large
 *
 * NON-OPTIMAL BEHAVIOUR:
 * 1. autowidth reads directories twice
 * 2. if you do a short directory listing without filetype characters
 *    appended, there's no need to stat each one
 * PORTABILITY:
 * 1. requires lstat (BSD) - how do you do it without?
 */
"""

So, there is a todo list for an ls implementation. :-)

You picked the example, not I. :-)

And I'm glad you did, because talking about specific examples brings us back to reality as to the history of computing and its likely future. From a historian of computing (among other things):
  http://www.princeton.edu/~hos/Mahoney/articles/medichi/medichi07.html
"The history of software is the history of how various communities of practitioners have put their portion of the world into the computer. That has meant translating their experience and understanding of the world into computational models, which in turn has meant creating new ways of thinking about the world computationally and devising new tools for expressing that thinking in the form of working programs. It has also meant deciding, in each realm of practice, which aspects of experience can be computed and which cannot, and establishing a balance between them. Thus the models and tools that constitute software reflect the histories of the communities that created them and cannot be understood without knowledge of those histories, which extend beyond computers and computing to encompass the full range of human activities. All software, even the most current, is in that sense "legacy" software. That is what makes the history of software hard, and it is why the history of software matters to the current practitioner."

So, it sounds like to me as long as the communities keep changing, the tools will keep changing. :-) Even "ls".

Now, you might rightfully say this situation with "ls" still having known bugs somewhere after more than forty years
  http://www.unix.org/what_is_unix/history_timeline.html
is unfair, it is absurd, it is an awful commentary on the state of affairs in the world of computing, it is an endless waste of effort better spent playing with puppies and raising children and being neighborly, and so on. And you might even be right. :-) (Ignoring how reimplementing ls might be involved somehow in all those activities if you stretch. :-) But, nonetheless, that's the way it is. With just a couple minutes of Googling, I found a file for "ls.c" with known bugs and a "to do" list. :-)

So, with OMeta being a new thing, well, I would expect it might still have some known bugs and a todo list, given that ls has them after forty years? :-)

Is there any tiny program anybody can point to that has remained stable for a long time and not proliferated through versions, adaptations, and stuff like that (including as it was ported from system to system if it was useful)?

To be fair, of course, I'm playing with boundaries here. :-)
  http://en.wikipedia.org/wiki/Finite_and_Infinite_Games
"Finite players play within boundaries; infinite players play with boundaries."

Sure, if you draw really tight boundaries and look around really hard you can probably say "The command ls on system XYZ which has been in use for three decades straight as an embedded Whatsit has not changed during all that time". But see how tight you have to draw the boundaries to do so? And, just for fun, I challenge you to find an actual example you can point to with source code where ls has been stable for three decades in production without any code changes to the actual ls command (even if the underlying system changed). There probably is one somewhere, I would think? Maybe in an oil refinery or nuclear power plant system? Maybe in the code used with an avionics system on top of VMS -- but that might be "DIR"? But it would take some digging -- and chances are, unless you work with such systems, you'd never be able to find that out since the code of such systems often has not been published in an open way. An IBM mainframe might be your best bet as to where to look?

But one can ask, where are the tools that would let us debug, maintain, and even reimplement all the ls implementations out there? What would such a tool look like? Would such a tool seem like Chuck Moore somehow? :-)

--Paul Fernhout
http://www.pdfernhout.net/
====
The biggest challenge of the 21st century is the irony of technologies of
abundance in the hands of those thinking in terms of scarcity.

_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
