Re: [fonc] Stephen Wolfram on the Wolfram Language

2014-09-25 Thread BGB

On 9/24/2014 6:39 PM, David Leibs wrote:
I think Stephen is misrepresenting the Wolfram Language when he says 
it is a big language. He is really talking about the built in library 
which is indeed huge.  The language proper is actually simple, 
powerful, and lispy.

-David



I think it is partly a matter of size along two axes:
features built into the language core and its basic syntax;
features that can be built on top of the language via library code
and extensibility mechanisms.


a lot of mainstream languages have tended to be bigger in terms of 
built-in features and basic syntax (ex: C++ and C#);
a lot of other languages have had more in terms of extensibility 
features, with less distinction between library code and the core language.


of course, if a language generally has neither, it tends to be regarded 
as a toy language.


more so if the implementation lacks sufficient scalability to allow
implementing a reasonably sized set of library facilities (say, for
example, if it always loads from source and there is a relatively high
overhead for loaded code).



sometimes, it isn't as clear-cut as apparent
complexity == implementation complexity.


for example, a more complex-looking language could reduce down to a
somewhat simpler underlying architecture (say, the language is itself
largely syntax sugar);
OTOH, a simple-looking language could actually have a somewhat more
complicated implementation (say, because a lot of complex analysis and
internal machinery is needed to make it work acceptably).


in many cases, the way things are represented in the high-level language
vs nearer the underlying implementation may be somewhat different, so
the representational complexity may be reduced at one point and
expanded at another.



another related factor I have seen is whether the library API design
focuses more on core abstractions and building things from these, or
focuses more on a large number of specific use-cases. for example, Java
having a class for nearly every way they could think up that a person
might want to read/write a file, as opposed to, say, a more generic
multipurpose IO interface.
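
as a rough sketch of what the "core abstraction" style could look like in
C (all names here are hypothetical, not any particular library's API): a
single generic stream interface, with each concrete source or sink
implemented as a backend behind it:

    /* hypothetical generic stream interface; files, sockets, memory
     * buffers, ... would all be backends behind the same struct */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Stream Stream;
    struct Stream {
        size_t (*read) (Stream *s, void *buf, size_t n);
        size_t (*write)(Stream *s, const void *buf, size_t n);
        void   (*close)(Stream *s);
        void   *impl;              /* backend-specific state */
    };

    /* one concrete backend: a stream over a stdio FILE */
    static size_t file_read (Stream *s, void *buf, size_t n)
    { return fread(buf, 1, n, (FILE *)s->impl); }
    static size_t file_write(Stream *s, const void *buf, size_t n)
    { return fwrite(buf, 1, n, (FILE *)s->impl); }
    static void   file_close(Stream *s)
    { fclose((FILE *)s->impl); free(s); }

    Stream *stream_open_file(const char *path, const char *mode)
    {
        FILE *fp = fopen(path, mode);
        Stream *s;
        if (!fp) return NULL;
        s = malloc(sizeof(*s));
        if (!s) { fclose(fp); return NULL; }
        s->read = file_read; s->write = file_write; s->close = file_close;
        s->impl = fp;
        return s;
    }

    /* generic code only ever sees Stream, never the concrete backend */
    int main(void)
    {
        Stream *s = stream_open_file("hello.txt", "w");
        if (s) { s->write(s, "hello\n", 6); s->close(s); }
        return 0;
    }

the use-case-driven style would instead grow a separate class or function
family per scenario, with little shared between them.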



generally, though, complexity has tended to be less of an issue than
utility and performance.
for most things, it is preferable to have a more useful language, even
at the cost of a more complex compiler, at least up until the point
where the added complexity outweighs any marginal gains in utility or
performance.


where is this point exactly? it is subject to debate.


On Sep 24, 2014, at 3:32 PM, Reuben Thomas r...@sc3d.org wrote:


On 24 September 2014 23:20, Tim Olson tim_ol...@att.net wrote:


Interesting talk by Stephen Wolfram at the Strange Loop conference:

https://www.youtube.com/watch?v=EjCWdsrVcBM

He goes in the direction of creating a big language, rather
than a small kernel that can be built upon, like Smalltalk, Maru,
etc.


Smalltalk and Maru are rather different: Ian Piumarta would argue, I 
suspect, that the distinction between small and large languages 
is an artificial one imposed by most languages' inability to change 
their syntax. Smalltalk can't, but Maru can. Here we see Ian making 
Maru understand Smalltalk, ASCII state diagrams, and other things:


https://www.youtube.com/watch?v=EGeN2IC7N0Q

That's the sort of small kernel you could build Wolfram on.

Racket is a production-quality example of the same thing:
http://racket-lang.org/


--
http://rrt.sc3d.org/


Re: [fonc] Modern General Purpose Programming Language

2013-11-08 Thread BGB

On 11/6/2013 3:55 AM, Chris Warburton wrote:

BGB cr88...@gmail.com writes:


it is sad, in premise, that hard-coded Visual Studio projects, and raw
Makefiles, are often easier to get to work when things don't go just
right. well, that and one time recently managing to apparently get on
the bad side of some developers for a FOSS GPL project, by going and
building part of it using MSVC (for plugging some functionality into
the app), but in this case, it was the path of least effort (the other
code I was using with it was already being built with MSVC, and I
couldn't get the main project to rebuild from source via the
approved routes anyways, ...).

weirder yet, some of the better development experiences I have had
have been in developing extensions for closed-source commercial
projects (without any ability to see their source-code, or for that
matter, even worthwhile API documentation), which should not be.

This is probably an artifact of using Windows (I assume you're using
Windows, since you mention Windows-related programs). Unfortunately
building GNU-style stuff on Windows is usually an edge case; many *NIX
systems use source-level packages (ports, emerge, etc.) which forces
developers to make their stuff build on *NIX. If a GNU-style project
even works on Windows at all, most people will be grabbing a pre-built
binary (programs as executables, libraries as DLLs bundled with whatever
needs them).

It's probably a cultural thing too; you find GNU-style projects awkward
on your Microsoft OS, I find Microsoft-style projects awkward on my GNU
OS.


primarily Windows, but often getting Linux apps rebuilt on Linux doesn't 
go entirely smoothly either, like trying to get VLC Media Player rebuilt 
from source on Ubuntu, and having some difficulties mostly due to things 
like library version issues.



granted, generally it does work a little better: at least most of
the time the provided configure scripts won't blow up in one's face, in
contrast to Windows and Cygwin or similar, where most of the time
things will fail to build (short of some fair amount of manual
intervention and hackery...).


and, a few times, I have just found it easier to go and port things over
to building with MSVC (and fix up any cases where they try to use
GCC-specific functionality).




granted, it would also require a shift in focus:
rather than an application being simply a user of APIs and resources,
it would instead need to be a provider of an interface for other
components to reuse parts of its functionality and APIs, ideally with
some decoupling such that neither the component nor the application
need to be directly aware of each other.

Sounds a lot like Web applications. I've noticed many projects using
locally-hosted Web pages as their UI; especially with languages like
Haskell where processing HTTP streams and automatically generating
Javascript fits the language's style more closely than wrapping an OOP
UI toolkit like GTK.


can't say; I haven't done much with web apps.

I was more thinking though of some of what (limited) experience I have 
had with things like driver development.


like, there is some bit of hair in the mix (handling event messages
and/or dealing with COM+), but in some ways a sort of more subtle
elegance in being able to see one's code "just work" in a lot of
various apps without having to mess with anything in those apps to make
it work.



but, in general, I try to write code so as to minimize
dependencies on uncertain code or functionality.


a few times I have guessed wrong, such as allowing my codec code to
directly depend on my project's VFS and MM/GC system, but later on
ending up hacking over it a little to make the thing operate as a
self-contained codec driver in this case.


this doesn't necessarily mean shunning use of any functionality outside
of one's control (in a "Not Invented Here" sense), but rather it
involves some level of routing such that functionality can enable or
disable itself depending on whether or not the required functionality is
available (for example, a Linux build of a program can't use
Windows-specific APIs, ..., but it sort of misses out if it can't use
them when built for Windows simply because they are not also available
for a Linux build).




sort of like codecs on Windows: you don't have to go write a plugin
for every app that uses media (or, worse, hack on their code), nor
does every media player or video-editing program have to be aware of
every possible codec or media container format, they seemingly, just
work, you install the appropriate drivers and it is done.

the GNU-land answer more tends to be we have FFmpeg / libavcodec and
VLC Media Player, then lots of stuff is built by building lots of
things on top of these, which isn't quite the same thing (you need to
deal with one or both to add a new codec, hacking code into and
rebuilding libavcodec, or modifying or plugging into VLC to add
support, ...).

Windows tends to be we have

Re: [fonc] Modern General Purpose Programming Language (Was: Task management in a world without apps.)

2013-11-05 Thread BGB

On 11/5/2013 7:15 AM, Miles Fidelman wrote:
Casey Ransberger casey.obrien.r at gmail.com writes:


A fun, but maybe idealistic idea: an application of a computer 
should just be what one decides to do with it at the time.


I've been wondering how I might best switch between tasks (or 
really things that aren't tasks too, like toys and documentaries and 
symphonies) in a world that does away with most of the application 
level modality that we got with the first Mac.


The dominant way of doing this with apps usually looks like either 
the OS X dock or the Windows 95 taskbar. But if I wanted less shrink 
wrap and more interoperability between the virtual things I'm 
interacting with on a computer, without forcing me to multitask 
(read: do more than one thing at once very badly,) what's my best 
possible interaction language look like?


I would love to know if these tools came from some interesting 
research once upon a time. I'd be grateful for any references that 
can be shared. I'm also interested in hearing any wild ideas that 
folks might have, or great ideas that fell by the wayside way back when.




For a short time, there was OpenDoc - which really would have turned 
the application paradigm on its head.  Everything you interacted with 
was an object; with methods incorporated into its container.  E.g., 
if you were working on a document, there was no notion of a word 
processor, just the document with embedded methods for interacting 
with it.




a while ago, I had started writing (but didn't finish, or at least not
to a level I would want to send it) about the relationship between
object-based and dataflow-based approaches to modular systems (where, in
both cases, the application could be largely dissolved in favor of
interacting components and generic UIs).


but, the line gets kind of fuzzy, as what people often call "OOP"
actually covers several distinct sets of methodologies, and people so
often focus on lower-level aspects (class vs not-a-class,
inheritance trees, ...) that there is a tendency to overlook
higher-level aspects, like whether the system is composed of objects
interacting by passing messages through certain interfaces, or whether it
is working with a data-stream where the objects don't really interact at
all and instead produce and consume data in a set of shared representations.
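
a crude illustration of the distinction, in C, with purely made-up names
(a sketch, not anyone's actual framework): in the first style the caller
invokes an interface the component exposes; in the second style
components only read and write a shared data representation:

    /* style 1: object/message style -- the caller sends a "message" by
     * invoking a method through an interface the component exposes */
    typedef struct Filter Filter;
    struct Filter {
        int   (*process)(Filter *self, int sample);
        void  *state;
    };

    static int run_through(Filter *f, int sample)
    { return f->process(f, sample); }

    /* style 2: dataflow style -- components never see each other; they
     * only consume and produce a shared representation (a plain buffer) */
    typedef struct {
        int *samples;
        int  count;
    } Chunk;

    /* a stage reads one Chunk and writes another, with no knowledge of
     * whatever produced its input or will consume its output */
    static void gain_stage(const Chunk *in, Chunk *out, int gain)
    {
        int i;
        for (i = 0; i < in->count; i++)
            out->samples[i] = in->samples[i] * gain;
        out->count = in->count;
    }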



then, there is the bigger issue from an architectural POV, namely: can
App A access anything from within App B? short of both developers
having access to, and the ability to hack on, each other's source code
(or, often, to get the thing rebuilt from source at all).


so, we have some problems:
lack of shared functionality (often short of what has explicitly been 
made into shared libraries or similar);
frequent inability to add new functionality to existing apps (or UIs),
short of having access to, and the ability to modify, their source code
for one's own uses;

lots of software that is a PITA to get to rebuild from source (*1);
...

*1: especially in GNU land, where they pride themselves on freely
available source, but the ever-present GNU Autoconf system has a problem:

it very often has a tendency not to work;
it is often annoyingly painful to get it to work when it has decided it
doesn't want to;
very often developers set some rather arbitrary restrictions on a
project's build-probing, like "must have exactly this version of this
library" to build, even when it will often still build and work with
later (and earlier) versions of the library;

...

it is sad, in premise, that hard-coded Visual Studio projects, and raw 
Makefiles, are often easier to get to work when things don't go just 
right. well, that and one time recently managing to apparently get on 
the bad side of some developers for a FOSS GPL project, by going and 
building part of it using MSVC (for plugging some functionality into the 
app), but in this case, it was the path of least effort (the other code 
I was using with it was already being built with MSVC, and I couldn't 
get the main project to rebuild from source via the approved routes 
anyways, ...).


weirder yet, some of the better development experiences I have had have 
been in developing extensions for closed-source commercial projects 
(without any ability to see their source-code, or for that matter, even 
worthwhile API documentation), which should not be.



not that I don't think these problems are solvable, but maybe the
"spaghetti string mess" that is GNU-land at present isn't really an
ideal solution. like, there might be a need to address general
architectural issues (provide solid core APIs, ...), rather than just
daisy-chaining everything in a somewhat ad-hoc manner.



but, as an assertion:
with increasing modularity and ability to share functionality between 
apps, and to extend 

Re: [fonc] Macros, JSON

2013-07-21 Thread BGB

On 7/21/2013 12:28 PM, John Carlson wrote:


Hmm.  I've been thinking about creating a macro language written in 
JSON that operates on JSON structures.  Has someone done similar 
work?  Should I just create a JavaScript AST in JSON? Or should I 
create an AST specifically for JSON manipulation?




my scripting language mostly uses S-Expressions for its AST format.

my C frontend mostly used an XML variant.
I had a few times considered a hybrid, essentially like XML with a more 
compact syntax (*1).


in the future, most likely I would just use S-Expressions.
while S-Exps are slightly more effort in some cases to extend, they are 
generally faster than manipulating XML, and are easier to work with.



JSON could work, but its syntax is slightly more than what is needed for 
something like this, and its data representation isn't necessarily ideal.


EDN looks ok.


*1:
node := '<' tag (key'='value)* node* '>'
<tag key=value <tag2 key2=val2> <tag3>>

where value was a literal value with one of several types, IIRC:
integer type;
floating-point type;
string.

note that there were no free-floating raw values.
a free-floating value would instead be wrapped in a node.

had also considered using square braces:
[array [int val=1234] [real val=3.14] [string val="string"] [symbol
val=symbol]]


the allowed forms would otherwise have been fairly constrained.
the constrained structure would be mostly for sake of performance and 
similar.


note:
the XML variant used by my C frontend also ended up (quietly) adding 
support for raw numeric values, mostly because of the added overhead of 
converting between strings and numbers.
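
for illustration, a minimal sketch (in C, with made-up field names) of
what such a constrained node representation might look like, including
the raw numeric attribute values mentioned above:

    /* hypothetical node structure for the constrained XML-like format:
     * a node has a tag, key=value attributes (int/real/string only),
     * and child nodes; free-floating values are always wrapped in a node */
    typedef enum { ATTR_INT, ATTR_REAL, ATTR_STRING } AttrType;

    typedef struct Attr {
        const char *key;
        AttrType    type;
        union { long i; double f; const char *s; } val;  /* raw value, no string conversion */
        struct Attr *next;
    } Attr;

    typedef struct Node {
        const char  *tag;       /* e.g. "array", "int", "string" */
        Attr        *attrs;     /* e.g. val=1234 */
        struct Node *children;  /* e.g. [array [int val=1234] [real val=3.14]] */
        struct Node *next;      /* next sibling */
    } Node;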




Thanks,

John





Re: [fonc] Natural Language Wins

2013-04-06 Thread BGB

On 4/6/2013 10:59 AM, John Carlson wrote:


When I was studying Revelation in the 1980s.  We thought this same 
scripture referred to the European Union.  We also thought that Jesus 
had to return by 1988, because that was one generation past when the 
Jews returned to Israel in 1948. It seems that god has a way of 
overturning predictions.




read something recently that asserted that the prediction still held, 
only that the generation was 80 years rather than 40, thus putting the 
end-event somewhere around 2028 (with the rebuilding of the temple and 
tribulation and so on happening before this).


it still remains to be seen how things will turn out.



Some answered questions: http://reference.bahai.org/en/t/ab/SAQ/

On Apr 6, 2013 5:32 AM, Kirk Fraser overcomer@gmail.com wrote:


 Most likely your personal skills at natural language are 
insufficient to understand Revelation in the Bible, like mine were 
until I spent lots of time learning.  Now I can predict the current 
Pope Francis will eventually help create the 7 nation Islamic 
Caliphate with 3 extra-national military powers like Hamas in Rev. 13, 
17:3.  You must understand natural language well if you want to 
program it well.  Many grad students hack out an NLP project that 
works at an uninspiring level.  To go beyond the state of the art, you 
must learn to understand beyond state of the art.


 Claims that the world has progressed beyond some past century are 
true for technology but not in human behavior.  People still have wars 
large and small.  Some of the worst human behavior can be seen in 
courts during wars between family members, and some of that behavior 
comes from lawyers.  Human behavior can only be improved by everyone 
pursuing the absolute perfection of God and his human form Jesus 
Christ, the Creator.  We must go beyond the state of the art churches, 
to learn from the true church which Jesus practiced with His students, 
before He left and they quit doing much of what He did and taught.


 Because his first graduates ignored His teaching of equal status 
under Him  instead pursuing positions over others and their money, 
today we have inherited that culture instead of Jesus' life where it 
is possible to be fed directly by God's miracles without need of 
money.  So I propose a return from today's advanced culture to 
Jesus' absolute perfection. www.freetom.info/truechurch


 In view of the human spiritual awakening possible that way, 
computers are only a temporary support until we get there.  Watch 
videos archived at www.sidroth.org some of 
which are lame but others are impressive showing what is happening now 
giving the idea the perfect culture of Jesus' church is possible.


 Love Absolute Truth,
 Kirk W. Fraser


 On Fri, Apr 5, 2013 at 10:23 PM, Steve Taylor s...@ozemail.com.au wrote:


 Charlie Derr wrote:

 Nevertheless I'm finding some of this conversation truly 
fascinating (though I'm having a little trouble figuring out

 what is truth and what isn't).


 I'm just waiting for Kirk to mention Atlantis or the Rosicrucians. 
It feels like it could be any moment...




 Steve





 --
 Kirk W. Fraser
 http://freetom.info/TrueChurch - Replace the fraud churches with the 
true church.
 http://congressionalbiblestudy.org - Fix America by first fixing its 
Christian foundation.

 http://freetom.info - Example of False Justice common in America







Re: [fonc] Natural Language Wins

2013-04-06 Thread BGB

On 4/6/2013 12:13 PM, Reuben Thomas wrote:
On 6 April 2013 18:09, Eugen Leitl eu...@leitl.org wrote:


On Sat, Apr 06, 2013 at 12:08:35PM -0500, John Carlson wrote:
 The Lord will return like a thief in the night:
 http://bible.cc/1_thessalonians/5-2.htm
 Is this predictable?  Is there more than one return?  Jews
believe in one
 Messiah.  Christians believe in 2 Messiahs (Jesus and his
return).  Anyone
 for 3 or 4 or more?

Can the list moderator please terminate this thread?
Anyone? Anyone? Bueller?


Indeed, this is a list about the Foundations of New Computing; please 
stay on-topic.




yeah...

it is possibly a notable property that most topics on the internet tend 
to diverge into a debate about religion and/or politics (regardless of 
the original topic in question).



but, elsewhere, one can find people getting into inflamed
debates about other things as well, including in computing:

UTF-16 vs UTF-32;
RGBA vs DXT;
little-endian vs big-endian in file-formats;
x86 vs ARM;
choice of programming language;
...

so, in a way, people arguing about stuff like this may be inevitable.


one assertion that can be made here is that people seem to be 
overzealous in their choice and application of universals, often without 
a lot of evidence to support their choices.


so, one thing ends up being true to one person and false to another, if 
for no other reason than differences in terms of basic assumptions, and 
a tendency to regard these assumptions as absolute (rather than, say, as 
probabilities).


well, along with an excess of people making value judgements, rather
than framing things more in terms of cost/benefit or similar, ...


but, sometimes it seems that regardless of ones' choice of basic 
assumptions, someone somewhere will still take issue with it.


expecting everyone to agree on much of anything is probably unrealistic...



Re: [fonc] DSL Engineering: Designing, Implementing and Using Domain-Specific Languages

2013-01-25 Thread BGB

On 1/25/2013 10:11 AM, Kurt Stephens wrote:

Don't know if this has been posted/discussed before:

http://dslbook.org/

The 560-page book is donationware.  Lots to read here.  :)


nifty, may have to go read it...


it does make me wonder:
how viable is donationware as an option for software, vs, say, the 
shareware model?


(I wonder this partly as my 3D engine is sort-of donationware at 
present, but I had considered potentially migrating to a shareware model 
eventually...).


note: this doesn't apply to my scripting stuff, which is intended to be
open-source (currently uses the MIT license) but is presently bundled
with the 3D engine. it could be made available separately, as some people
have expressed concern over downloading proprietary code to get the free
code (admittedly I am not entirely sure what the issue is here, but
either way).



also, I am left wondering if anyone has a good idea for a good way to
determine when things like bounds-checks and null-checks can safely be
omitted (say, with array and pointer operations)?


I guess probably a way of statically determining that both the array
exists and the index will always be within array bounds, but I am not
entirely sure exactly what this check would itself look like (like if
there is a good heuristic approximation, ...).


type-checking is a little easier, at least as far as one can determine 
when/where things have known types and propagate them forwards, which 
can be used in combination with explicitly declared types, ... however, 
the size of the array or value-range of an integer is not part of the 
type and may not always be immediately obvious to a compiler.


then again, looking online, this may be a non-trivial issue (but it does
add several checks and conditional-jump operations to each array access).
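
as a hedged sketch of one possible heuristic (not necessarily what any
existing VM does): the check can be dropped when the compiler can prove
the index is non-negative, bounded by the array's length, and that the
length cannot change across the accesses in question:

    /* hypothetical facts a compiler might record for one array access;
     * the heuristic: omit the checks only when everything is proven */
    typedef struct {
        int lower_known;       /* a lower bound on the index was proven */
        int lower;             /* that lower bound (want >= 0) */
        int upper_is_length;   /* proven: index < length of this same array */
    } IndexFacts;

    typedef struct {
        IndexFacts idx;
        int length_invariant;  /* the array's length cannot change in this region */
        int array_non_null;    /* the array reference was proven non-null */
    } ArrayAccessInfo;

    static int can_omit_checks(const ArrayAccessInfo *a)
    {
        return a->array_non_null
            && a->idx.lower_known && a->idx.lower >= 0
            && a->idx.upper_is_length
            && a->length_invariant;
    }

in practice this mostly pays off for loop induction variables, where the
bounds fall out of the loop condition.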



well, at least luckily the security checks have a good solution:
if the code belongs to the VM root, most security checks can be omitted.
the only one that can't really be safely omitted relates to method calls
(we don't want root calling blindly into insecure code).


often this check can be done as a static check (by checking the source
and destination rights when generating a "call-info" structure or
similar, which will be flushed whenever new code is loaded or the VM
otherwise suspects that a significant scope-change has occurred).


actually, static-call caching is itself a heuristic:
it assumes that if the site being called is a known function or static 
method declaration, then the call is static;
if the site being called is a generic variable or similar, a dynamic 
call is used (which will use lookups, type-checks, ... to go about 
making the call).


most of this is largely transparent to high-level code, and mostly is
figured out in the bytecode -> threaded-code stage, and most of this
information may be reused by the JIT.
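
roughly, the call-info idea above could be sketched like this in C
(structure and field names are illustrative, not the actual VM's):

    typedef enum { CALL_STATIC, CALL_DYNAMIC } CallKind;

    typedef struct {
        CallKind kind;
        void    *target;         /* resolved target for static calls */
        int      caller_rights;  /* security domain of the calling code */
        int      callee_rights;  /* security domain of the callee */
    } CallInfo;

    /* decided once when the call-info structure is generated (and redone
     * whenever new code is loaded); a less-trusted callee forces the
     * slower dynamic path, which does lookups and checks per call */
    static CallInfo make_call_info(int callee_is_known_static, void *static_target,
                                   int caller_rights, int callee_rights)
    {
        CallInfo ci;
        ci.caller_rights = caller_rights;
        ci.callee_rights = callee_rights;
        if (callee_is_known_static && callee_rights >= caller_rights) {
            ci.kind   = CALL_STATIC;   /* direct call, no per-call lookup */
            ci.target = static_target;
        } else {
            ci.kind   = CALL_DYNAMIC;  /* run-time lookup + type checks */
            ci.target = 0;
        }
        return ci;
    }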



or such...



Re: [fonc] deriving a POL from existing code

2013-01-23 Thread BGB

On 1/9/2013 11:53 AM, David Barbour wrote:


On Wed, Jan 9, 2013 at 9:37 AM, John Carlson yottz...@gmail.com wrote:


I've been collecting references to game POLs on:
http://en.wikipedia.org/wiki/Domain-specific_entertainment_language


That's neat. I'll definitely peruse.



interesting...


my own language may loosely fit in there, being mostly developed in 
conjunction with a 3D engine, and not particularly intended for 
general-purpose programming tasks...


like, beyond just ripping off JS and AS3 and similar, it has some amount
of specialized constructs mostly for 3D-game-related stuff (like
vector-math and similar).


well, and some just plain obscure stuff, ...


BTW: recently (mostly over the past few days), I went and wrote a 
simplistic JIT for the thing (I have not otherwise had a working JIT for 
several years).


it turns out if one factors out most of the hard-parts in advance, 
writing a JIT isn't actually all that difficult (*1).


in my case it gave an approx 20x speedup, bringing it from around 60x 
slower than C with (plain C) threaded code, to around 3x slower than C, 
or at least for my selection-sort test and similar... (the recursive 
Fibonacci test is still fairly slow though, at around 240x slower than 
C). as-is, it mostly just directly compiles a few misc things, like 
arithmetic operators and variable loads/stores and similar, leaving most 
everything else as call-threaded code (where the ASM code mostly just 
directly calls C functions to carry out operations).
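
for reference, "call-threaded code" in this sense could look roughly like
the following sketch (made-up structures, not the actual VM): the
compiled form is just an array of handler calls, and the JIT only needs
to inline the cheap operations while leaving the rest as calls into C:

    /* hypothetical call-threaded representation: each opcode becomes a
     * (handler, operand) pair; "executing" is calling them in order */
    typedef struct VMState { long regs[16]; } VMState;
    typedef void (*OpFn)(VMState *vm, long operand);

    typedef struct { OpFn fn; long operand; } ThreadedOp;

    static void op_load_const(VMState *vm, long v) { vm->regs[0]  = v; }
    static void op_add_const (VMState *vm, long v) { vm->regs[0] += v; }

    static void run_threaded(const ThreadedOp *ops, int n, VMState *vm)
    {
        int i;
        for (i = 0; i < n; i++)
            ops[i].fn(vm, ops[i].operand);  /* one indirect call per opcode */
    }

    /* a JIT in this style emits native code for the easy ops (arithmetic,
     * loads/stores) and plain "call op_xxx" instructions for everything else */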


in the selection sort test, the goal is basically to sort an array using
selection sort. for a 64k element array, this currently takes about 15s
for C, and around 49s for BS. with the interpreter, this operation takes
a good number of minutes.



*1:
current JIT, 1.2 kloc;
rest of core interpreter, 18 kloc;
rest of script interpreter (parser, front-end bytecode compiler, ...), 
32 kloc;
VM runtime (dynamic typesystem, OO facilities, C FFI, ...) + 
assembler/linker + GC: 291 kloc;

entire project, 1.32 Mloc.

so, yes, vs 291 kloc (or 1.32 Mloc), 1.2 kloc looks pretty small.

language rankings in project (excluding tools) by total kloc:
C (1.32 Mloc), BGBScript (16.33 kloc), C++ (16.29 kloc).

there may be some amount more script-code embedded in data files or 
procedurally generated, but this is harder to measure.



or such...



Re: [fonc] Current topics

2013-01-03 Thread BGB

On 1/3/2013 7:27 PM, Miles Fidelman wrote:

BGB wrote:

Whoa, I think you just invented nanotech organelles, at least this

is the first time I've heard that idea and it seems pretty
mind-blowing.  What would a cell use a cpu for?


mostly so that microbes could be programmed in a manner more like 
larger-scale computers.


say, the microbe has its basic genome and capabilities, which can be 
treated more like hardware, and then a person can write behavioral 
programs in a C-like language or similar, and then compile them and 
run them on the microbes.


for larger organisms, possibly the cells could network together and 
form into a sort of biological computer, then you can possibly have 
something the size of an insect with several GB of storage and 
processing power rivaling a modern PC, as well as possibly other 
possibilities, such as the ability to communicate via WiFi or similar.


you might want to google biological computing - you'll start finding 
things like this:
http://www.guardian.co.uk/science/blog/2009/jul/24/bacteria-computer 
(title: Bacteria make computers look like pocket calculators)




FWIW: this is like comparing a fire to an electric motor.


yes, but you can't use a small colony of bacteria to do something like 
drive an XBox360, they just don't work this way.


with bacteria containing CPUs, you could potentially do so.

and, by the time you got up to a colony the size of an XBox360, the 
available processing power would be absurd...



this is not a deficiency of the basic biological mechanisms (which are 
in-fact quite powerful), but rather their inability to readily organize 
themselves into a larger-scale computational system.





alternatively could be the possibility of having an organism with 
more powerful neurons, such that rather than neurons communicating 
via simple impulses, they can send more complex messages (neuron 
fires with extended metadata, ...). then neurons can make more 
informed decisions about whether to fire off a message.



cells do lots of nifty stuff, but most of their functionality is more 
based around cellular survival than about computational tasks.


Ummm have you heard of:

1. Brains (made up of cells),

2. Our immune systems,

3. The complex behaviors of fungi



yes, but observe just how pitifully these things do *at* traditional 
computational tasks...


for all the raw power in something like the human brain, and the ability 
of humans to possess things like general intelligence, ..., we *still* 
have to single-step in a stupid graphical debugger and require *hours* 
to think about and write chunks of code (and weeks or months to write a 
program), and a typical human can barely even add or subtract numbers in 
a reasonable time-frame (with the relative absurdity that, with all 
their raw power, a human finds it easier just to tap the calculation 
into a calculator, in the first place).



meanwhile, a C compiler can churn through and compile around a million 
lines of code in around 1 minute or so, a task for which a human has no 
hope to even attempt.


something is clearly deficient for the human mind at this task.


Think massively parallel/distributed computation focused on organism 
level survival and behavior.  If you want to program colonies of nano 
machines (biological or otherwise), you're going to have to start 
thinking of very different kinds of algorithms, running on 
something a lot more powerful than a small cpu programmed in C.




I am thinking of billions of small CPUs programmed in C, and probably 
organized into micrometer or millimeter scale networks. there would be a 
reason why each cell would have its own CPU (built out of basic 
biological components).


also, humans would probably use a C-like language mostly because it 
would be most familiar, but need not be executed exactly like how it 
would on a modern computer (they may or may not have an ISA as would be 
currently understood).


probably these would need to mesh together somehow and simulate the 
functionality of larger computers, and would likely work by distributing 
computation and memory storage among individual cells.


even if the signaling and organization is moderately inefficient, likely 
it could be made up for by using redundancy and bulk.



similarly, tasks that would, at the larger scale, be accomplished via 
robots and bulk mechanical forces, could be performed instead by 
cooperative actions by individual cells (say, millions of cells all push 
on something in the same direction at the same time, or they start 
building a structure by secreting specific chemicals at specific 
locations, ...).



Start thinking billions of actors, running on highly parallel 
hardware, and we might start approaching what cells do today.  (FYI, 
try googling micro-tubules and you'll find some interesting papers 
on how these sub-cellular structures just might act like associative 
arrays :-)




they don't do the same things...

as-noted

Re: [fonc] Current topics

2013-01-02 Thread BGB

On 1/2/2013 10:31 PM, Simon Forman wrote:

On Tue, Jan 1, 2013 at 7:53 AM, Alan Kay alan.n...@yahoo.com wrote:

The most recent discussions get at a number of important issues whose
pernicious snares need to be handled better.

In an analogy to sending messages most of the time successfully through
noisy channels -- where the noise also affects whatever we add to the
messages to help (and we may have imperfect models of the noise) -- we have
to ask: what kinds and rates of error would be acceptable?

We humans are a noisy species. And on both ends of the transmissions. So a
message that can be proved perfectly received as sent can still be
interpreted poorly by a human directly, or by software written by humans.

A wonderful specification language that produces runable code good enough
to make a prototype, is still going to require debugging because it is hard
to get the spec-specs right (even with a machine version of human level AI
to help with larger goals comprehension).

As humans, we are used to being sloppy about message creation and sending,
and rely on negotiation and good will after the fact to deal with errors.

We've not done a good job of dealing with these tendencies within
programming -- we are still sloppy, and we tend not to create negotiation
processes to deal with various kinds of errors.

However, we do see something that is actual engineering -- with both care
in message sending *and* negotiation -- where eventual failure is not
tolerated: mostly in hardware, and in a few vital low-level systems which
have to scale pretty much finally-essentially error-free such as the
Ethernet and Internet.

My prejudices have always liked dynamic approaches to problems with error
detection and improvements (if possible). Dan Ingalls was (and is) a master
at getting a whole system going in such a way that it has enough integrity
to exhibit its failures and allow many of them to be addressed in the
context of what is actually going on, even with very low level failures. It
is interesting to note the contributions from what you can say statically
(the higher the level the language the better) -- what can be done with
meta (the more dynamic and deep the integrity, the more powerful and safe
meta becomes) -- and the tradeoffs of modularization (hard to sum up, but
as humans we don't give all modules the same care and love when designing
and building them).

Mix in real human beings and a world-wide system, and what should be done?
(I don't know, this is a question to the group.)

There are two systems I look at all the time. The first is lawyers
contrasted with engineers. The second is human systems contrasted with
biological systems.

There are about 1.2 million lawyers in the US, and about 1.5 million
engineers (some of them in computing). The current estimates of programmers
in the US are about 1.3 million (US Dept of Labor counting programmers and
developers). Also, the Internet and multinational corporations, etc.,
internationalizes the impact of programming, so we need an estimate of the
programmers world-wide, probably another million or two? Add in the ad hoc
programmers, etc? The populations are similar in size enough to make the
contrasts in methods and results quite striking.

Looking for analogies, to my eye what is happening with programming is more
similar to what has happened with law than with classical engineering.
Everyone will have an opinion on this, but I think it is partly because
nature is a tougher critic on human built structures than humans are on each
other's opinions, and part of the impact of this is amplified by the simpler
shorter term liabilities of imperfect structures on human safety than on
imperfect laws (one could argue that the latter are much more of a disaster
in the long run).

And, in trying to tease useful analogies from Biology, one I get is that the
largest gap in complexity of atomic structures is the one from polymers to
the simplest living cells. (One of my two favorite organisms is Pelagibacter
ubique, which is the smallest non-parasitic standalone organism. Discovered
just 10 years ago, it is the most numerous known bacterium in the world, and
accounts for 25% of all of the plankton in the oceans. Still it has about
1300+ genes, etc.)

What's interesting (to me) about cell biology is just how much stuff is
organized to make integrity of life. Craig Ventor thinks that a minimal
hand-crafted genome for a cell would still require about 300 genes (and a
tiniest whole organism still winds up with a lot of components).

Analogies should be suspect -- both the one to the law, and the one here
should be scrutinized -- but this one harmonizes with one of Butler
Lampson's conclusions/prejudices: that you are much better off making --
with great care -- a few kinds of relatively big modules as basic building
blocks than to have zillions of different modules being constructed by
vanilla programmers. One of my favorite examples of this was the Beings
master's thesis 

Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2013-01-01 Thread BGB

On 12/31/2012 10:47 PM, Marcus G. Daniels wrote:

On 12/31/12 8:30 PM, Paul D. Fernhout wrote:
So, I guess another meta-level bug in the Linux Kernel is that it is 
written in C, which does not support certain complexity management 
features, and there is no clear upgrade path from that because C++ 
has always had serious linking problems.
But the ABIs aren't specified in terms of language interfaces, they 
are architecture-specific.  POSIX kernel interfaces don't need C++ 
link level compatibility, or even extern "C" compatibility 
interfaces.  Similarly on the device side, that's packing command 
blocks and such, byte by byte.  Until a few years ago, GCC was the 
only compiler ever used (or able) to compile the Linux kernel.  It is 
a feature that it all can be compiled with one open source toolchain.  
Every aspect can be improved.




granted.

typically, the actual call into kernel-land is a target-specific glob of 
ASM code, which may then be wrapped up to make all the various system calls.



as for ABIs a few things could help:
if the C++ ABI was defined *along with* the C ABI for a given target;
if the C++ compilers would use said ABI, rather than each rolling their own;
if the ABI were sufficiently general to be more useful to multiple 
languages (besides just C and C++);

...

in this case, the C ABI could be considered a formal subset of the C++ ABI.


admittedly, if I could have my say, I would make some changes to the way
struct/class passing and returning is handled in SysV / AMD64, namely
make it less complicated/evil: say, the struct is either passed in
a single register, or passed as a reference (no decomposition and
passing via multiple registers).
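
to make the complaint concrete (a sketch; the register assignments below
reflect my understanding of the SysV AMD64 rules, so treat them as
approximate rather than authoritative):

    /* under SysV AMD64, a small struct like this is decomposed: the two
     * ints travel packed in one integer register, the double in an SSE
     * register, and the callee reassembles it; structs over 16 bytes go
     * to memory instead.  the simpler rule suggested above would be:
     * fits in one register, or else is passed by reference. */
    typedef struct { int a, b; double x; } Small;   /* 16 bytes: split across two registers */
    typedef struct { double m[4]; } Big;            /* 32 bytes: passed in memory */

    double use_small(Small s)      { return s.a + s.b + s.x; }

    /* the "passed as a reference" style made explicit: */
    double use_big(const Big *b)   { return b->m[0] + b->m[3]; }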


more-so, probably also provide spill-space for arguments passed in
registers (more like in Win64).



granted, this itself may illustrate part of the problem:
with many of these ABIs, not everyone is happy, so there is a lot of
temptation for compiler vendors to go their own way (making mix-and-match
with code compiled by different compilers, or sometimes with
different compiler options, unsafe...).


sometimes, it may usually work, but sometimes fail, due to minor ABI 
differences.



From that thread I read that those in the Linus camp are fine with 
abstraction, but it has to be their abstraction on their terms. And 
later in the thread, Theodore Ts'o gave an example of opacity in the 
programming model:


a = b + "/share/" + c + serial_num;

Arguing where you can have absolutely no idea how many memory 
allocations are

done, due to type coercions, overloaded operators

Well, I'd say just write the code in concise notation.  If there are 
memory allocations they'll show up in valgrind runs, for example. Then 
disassemble that function and understand what the memory allocations 
actually are.  If there is a better way to do it, then either change 
abstractions, or improve the compiler to do it more efficiently.   
Yes, there can be an investment in a lot of stuff. But just defining 
any programming model with a non-obvious performance model as a bad 
programming model is shortsighted advice, especially for developers 
outside of the world of operating systems.   That something is 
non-obvious is not necessarily a bad thing.   It just means a bit more 
depth-first investigation.   At least one can _learn_ something from 
the diversion.




yep.

some of this is also a bit of a problem for many VM based languages, 
which may, behind the scenes, chew through memory, while giving little 
control of any of this to the programmer.


in my case, I have been left fighting performance in many areas with my 
own language, admittedly because its present core VM design isn't 
particularly high performance in some areas.



though, one can still be left looking at a sort of ugly wall:
the wall separating static and dynamic types.

dynamic types is a land of relative ease, but not particularly good 
performance.
static types is a land of pain and implementation complexity, but also 
better performance.


well, there is also the fixnum issue, where a fixnum may be just 
slightly smaller than an analogous native type (it is the curse of the 
28-30 bit fixnum, or the 60-62 bit long-fixnum...).


this issue is annoying specifically because it gets in the way of having
an efficient fixnum type that also maps to a sensible native type (like
int) while keeping intact the usual definition that int is exactly
32 bits and/or that long is exactly 64 bits.
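
a worked sketch of the arithmetic (the tag width is chosen purely for
illustration): with a few low bits reserved as a tag, the remaining
payload is only 60-62 bits, so a tagged "long" can no longer cover the
full 64-bit range and ends up boxed:

    #include <stdint.h>

    /* 3 tag bits here purely for illustration; the payload is then 61
     * bits, i.e. a signed range of about +/- 2^60, short of a full int64 */
    #define TAG_BITS   3
    #define TAG_FIXNUM 1

    static int64_t fixnum_max(void)
    { return ((int64_t)1 << (63 - TAG_BITS)) - 1; }           /* 2^60 - 1 */

    static int fits_fixnum(int64_t v)
    { return v >= -(fixnum_max() + 1) && v <= fixnum_max(); }

    static uint64_t box_fixnum(int64_t v)                     /* assumes fits_fixnum(v) */
    { return ((uint64_t)v << TAG_BITS) | TAG_FIXNUM; }

    static int64_t unbox_fixnum(uint64_t r)
    { return (int64_t)r >> TAG_BITS; }  /* arithmetic shift restores the sign */

    /* anything outside the fixnum range has to fall back to a boxed long */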


but, as a recent attempt at trying to switch to untagged value types
revealed, even with an interpreter core that is "mostly statically
typed", making this switch may still open a big can of worms in some
other cases (because there are still holes in the static type-system).



I have been left considering the possibility of instead making a compromise:
int, float, and double can be represented directly;
long, however, would (still) be handled as a boxed-value.

this 

Re: [fonc] Incentives and Metrics for Infrastructure vs. Functionality

2013-01-01 Thread BGB

On 1/1/2013 2:12 PM, Loup Vaillant-David wrote:

On Mon, Dec 31, 2012 at 04:36:09PM -0700, Marcus G. Daniels wrote:

On 12/31/12 2:58 PM, Paul D. Fernhout wrote:
2. The programmer has a belief or preference that the code is easier
to work with if it isn't abstracted. […]

I have evidence for this poisonous belief.  Here is some production
C++ code I saw:

   if (condition1)
   {
 if (condition2)
 {
   // some code
 }
   }

instead of

   if (condition1 &&
       condition2)
   {
 // some code
   }

-

   void latin1_to_utf8(std::string & s);

instead of

   std::string utf8_of_latin1(std::string s)
or
   std::string utf8_of_latin1(const std::string & s)

-

(this one is more controversial)

   Foo foo;
   if (condition)
 foo = bar;
   else
 foo = baz;

instead of

   Foo foo = condition
   ? bar
   : baz;

I think the root cause of those three examples can be called step by
step thinking.  Some people just can't deal with abstractions at all,
not even functions.  They can only make procedures, which do their
thing step by step, and rely on global state.  (Yes, global state,
though they do have the courtesy to fool themselves by putting it in a
long lived object instead of the toplevel.)  The result is effectively
a monster of mostly linear code, which is cut at obvious boundaries
whenever `main()` becomes too long ("too long" generally being a
couple hundred lines).  Each line of such code _is_ highly legible,
I'll give them that.  The whole however would frighten even Cthulhu.


part of the issue may be a tradeoff:
does the programmer think in terms of abstractions and using high-level 
overviews?
or, does the programmer mostly think in terms of step-by-step operations 
and make use of their ability to keep large chunks of information in memory?


it is a question maybe of whether the programmer sees the forest or the 
trees.


these sorts of things may well have an impact on the types of code a 
person writes, and what sorts of things the programmer finds more readable.



like, for a person who can mentally more easily deal with step-by-step 
thinking, but can keep much of the code in their mind at-once, and 
quickly walk around and explore the various possibilities and scenarios, 
this kind of bulky low-abstraction code may be preferable, since when 
they walk the graph in their mind, they don't really have to stop and 
think too much about what sorts of items they encounter along the way.


in their mind's eye, it may well look like a debugger stepping at a rate
of roughly 5-10 statements per second or so. they may or may not
be fully aware of how their mind does it, but they can vaguely see the
traces along the call-stack, ghosts of intermediate values, and the
sudden jump of attention to somewhere where a crash has occurred or an
exception has been thrown.


actually, I had before compared it to ants:
it is like one's mind has ants in it, which walk along trails, either
stepping code, or trying out various possibilities, ...
once something "interesting" comes up, it starts attracting more of
these mental ants, until it has a whole swarm, and then a clearer
image of the scenario or idea may emerge in one's mind.


but, abstractions and difficult concepts are like oil to these ants, 
where if ants encounter something they don't like (like oil) they will 
back up and try to walk around it (and individual ants aren't 
particularly smart).



and, probably, other people use other methods of reasoning about code...




Re: [fonc] Wrapping object references in NaN IEEE floats for performance (was Re: Linus...)

2013-01-01 Thread BGB

On 1/1/2013 6:36 PM, Paul D. Fernhout wrote:

On 1/1/13 3:43 AM, BGB wrote:

here is mostly that this still allows for type-tags in the
references, but would likely involve a partial switch to the use of
64-bit tagged references within some core parts of the VM (as a partial
switch away from magic pointers). I am currently leaning towards
putting the tag in the high-order bits (to help reduce 64-bit arithmetic
ops on x86).


One idea I heard somewhere (probably on some Squeak-related list 
several years ago) is to have all objects stored as floating point NaN 
instances (NaN == Not a Number). The biggest bottleneck in practice 
for many applications that need computer power these days (like 
graphical simulations) usually seems to be floating point math, 
especially with arrays of floating point numbers. Generally when you 
do most other things, you're already paying some other overhead 
somewhere already. But multiplying arrays of floats efficiently is 
what makes or breaks many interesting applications. So, by wrapping 
all other objects as instances of floating point numbers using the NaN 
approach, you are optimizing for the typically most CPU intensive case 
of many user applications. Granted, there is going to be tradeoffs 
like integer math and so looping might then probably be a bit slower? 
Perhaps there is some research paper already out there about the 
tradeoffs for this sort of approach?




I actually tried this already...

I had borrowed the idea originally off of Lua (a paper I was reading 
talking about it mentioned it as having been used in Lua).



the problems were, primarily on 64-bit targets:
my other code assumed value-ranges which didn't fit nicely in the 52-bit 
mantissa;

being a NaN obscured the pointers from the GC;
it added a fair bit of cost to pointer and integer operations;
...

granted, you only really need 48 bits for current pointers on x86-64;
the problem was that other code had already been assuming a 56-bit
tagged space when using pointers ("spaces"), leaving a little bit of a
problem of 56 > 52.


so, everything was crammed into the mantissa somewhat inelegantly, and 
the costs regarding integer and pointer operations made it not really an 
attractive option.


all this was less of an issue with 32-bit x86, as I could essentially 
just shove the whole pointer into the mantissa (spaces and all), and 
the GC wouldn't be confused by the value.



basically, what "spaces" is, is that a part of the address space will
basically be used and divided up into a number of regions for various
dynamically typed values (the larger ones being for fixnum and flonum).


on 32-bit targets, spaces is 30 bits, and located between the 3GB and 
4GB address mark (which the OS generally reserves for itself). on 
x86-64, currently it is a 56-bit space located at 0x7F00_.
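
for illustration, classifying a reference against such a region might
look like the following sketch (the constants are made up for the
example; they are not the actual layout):

    #include <stdint.h>

    /* hypothetical "spaces" layout: a reserved slice of the address space,
     * subdivided per dynamic type; anything outside it is an object
     * pointer the GC can follow normally */
    #define SPACES_BASE  0x7F0000000000ull          /* illustrative base address */
    #define SPACES_SIZE  (1ull << 40)               /* illustrative size */
    #define FIXNUM_BASE  (SPACES_BASE)                       /* sub-region for fixnums */
    #define FLONUM_BASE  (SPACES_BASE + (SPACES_SIZE >> 1))  /* sub-region for flonums */

    static int ref_in_spaces(uint64_t r)
    { return r >= SPACES_BASE && r < SPACES_BASE + SPACES_SIZE; }

    static int ref_is_fixnum(uint64_t r)
    { return r >= FIXNUM_BASE && r < FLONUM_BASE; }

    static int ref_is_object_pointer(uint64_t r)
    { return !ref_in_spaces(r); }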




For more background, see:
  http://en.wikipedia.org/wiki/NaN
For example, a bit-wise example of a IEEE floating-point standard 
single precision (32-bit) NaN would be: s111 1111 1axx xxxx xxxx xxxx 
xxxx xxxx, where s is the sign (most often ignored in applications), a 
determines the type of NaN, and x is an extra payload (most often 
ignored in applications)


So, information about other types of objects would start in that 
extra payload part. There may be some inconsistency in how hardware 
interprets some of these bits, so you'd have to think about if that 
could be worked around if you want to be platform-independent.


See also:
  http://en.wikipedia.org/wiki/IEEE_floating_point

You might want to just go with 64 bit floats, which would support 
wrapping 32 bit integers (including as pointers to an object table if 
you wanted, even up to probably around 52 bit integer pointers); see:

  IEEE 754 double-precision binary floating-point format: binary64
  http://en.wikipedia.org/wiki/Binary64



yep...

my current tagging scheme partly incorporates parts of double, mostly in 
the sense that some tags were chosen mostly such that a certain range of 
doubles could be passed through unmodified and with full precision.


the drawback is that 0 is special, and I haven't yet thought up a good 
way around this issue.


admittedly I am not entirely happy with the handling of fixnums either 
(more arithmetic and conditionals than I would like).



here is what I currently have:
http://cr88192.dyndns.org:8080/wiki/index.php/Tagged_references



does sometimes seem like I am going in circles at times though...


I know that feeling myself, as I've been working on semantic-related 
generally-triple-based stuff for going on 30 years, and I still feel 
like the basics could be improved. :-)




yes.

well, in this case, it is that I have bounced back and forth between 
tagged-references and magic pointers multiple times over the years.


granted, this would be the first time I am doing so using fixed 64-bit 
tagged references.



granted, on x86-64, I will probably end up later merging a lot of this 
back

Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-30 Thread BGB

On 12/30/2012 10:49 PM, Paul D. Fernhout wrote:
Some people here might find of interest my comments on the situation 
in the title, posted in this comment here:

http://slashdot.org/comments.pl?sid=3346421&cid=42430475

After citing Alan Kay's OOPSLA 1997 The Computer Revolution Has Not 
Happened Yet speech, the key point I made there is:
Yet, I can't help but feel that the reason Linus is angry, and 
fearful, and shouting when people try to help maintain the kernel and 
fix it and change it and grow it is ultimately because Alan Kay is 
right. As Alan Kay said, you never have to take a baby down for 
maintenance -- so why do you have to take a Linux system down for 
maintenance?


Another comment I made in that thread cited Andrew Tanenbaum's 1992 
comment that "it is now all over but the shoutin'":
http://developers.slashdot.org/comments.pl?sid=3346421&threshold=0&commentsort=0&mode=thread&cid=42426755



So, perhaps now, twenty years later, we finally see the shouting begin as
the monolithic Linux kernel reaches its limits as a community process? :-)
Still, even if true, it was a good run.


The main article can be read here:
http://developers.slashdot.org/story/12/12/29/018234/linus-chews-up-kernel-maintainer-for-introducing-userspace-bug 



This is not to focus on personalities or the specifics of that mailing 
list interaction -- we all make mistakes (whether as leaders or 
followers or collaborators), and I don't fully understand the culture 
of the Linux Kernel community. I'm mainly raising an issue about how 
software design affects our emotions -- in this case, making someone 
angry probably about something they fear -- and how that may point the 
way to better software systems like FONC aspired to.




dunno...

in this case, I think Torvalds was right, however, he could have handled 
it a little more gracefully.


code breaking changes are generally something to be avoided wherever 
possible, which seems to be the main issue here.


sometimes it is necessary though, but usually this needs to be for a 
damn good reason.
more often though this leads to a shim, such that new functionality can 
be provided, while keeping whatever exists still working.


once a limit is hit, then often there will be a clean break, with a 
new shiny whatever provided, which is not backwards compatible with the 
old interface (and will generally be redesigned to address prior 
deficiencies and open up routes for future extension).


then usually, both will coexist for a while, usually until one or the 
other dies off (either people switch to the new interface, or people 
rebel and stick to the old one).


in a few cases in history, this has instead led to forks, with the old
and new versions developing in different directions, and becoming
separate and independent pieces of technology.


for example, seemingly unrelated file formats that have a common 
ancestor, or different CPU ISA's that were once a single ISA, ...


likewise, at each step, backwards compatibility may be maintained, but 
this doesn't necessarily mean that things will remain static. sometimes, 
there may still be a common-subset, buried off in there somewhere, or in 
other cases the loss of occasional archaic details, will cause what 
remains of this common subset to gradually fade away.




as for design and emotions:
I think people mostly prefer to stay with familiar things.
unfamiliar things will often drive people away, especially if they look
scary or different, whereas people will be more forgiving of things
which look familiar, even if they are different internally.


often this may well amount to shims as well, where something familiar 
will be emulated as a shim on top of something different. even if it is 
actually fake, people will not care, they can just keep on doing what 
they were doing before.


granted, yes, when some people look into the heart of computing, and 
see this seeming mountain of things held together mostly by shims and 
some amount of duct tape, they regard it as a thing of horror. others 
may see it, and be like this is just how it is.


luckily, it doesn't go on indefinitely, as often with enough shims, it 
will create a sufficiently thick layer of abstraction to where it may 
become more reasonable to rip out a lot of it, while only maintaining 
the surface-level details (for sake of compatibility). compatibility may 
be maintained, even if a lot of what goes on in-between has since 
changed, and things can be extended that much longer...


granted, by this point, it is often less the thing it once was so much 
as an emulator.
but, under the surface, what is the real-thing, and what is an emulator, 
isn't really always all that certain. what usually defines an emulator 
then, is not so much about what it actually does, but how much of a big 
ugly seam there is in it doing so.



or such...



Re: [fonc] Falun Dafa

2012-12-22 Thread BGB

On 12/22/2012 5:52 PM, Julian Leviston wrote:

Thank you, captain obvious.

Man is a three-centered (three-brained if you will) being. Focussing 
on only one of the brains is by definition imbalanced.


Bring back the renaissance man.



so, if, say, a person likes computers, but largely lacks either an 
emotional or creative side, is this implying that computers somehow took 
away their emotions and creativity, or is it more likely the case that 
they didn't really have them to begin with?...


like, a person after a while, observing that they rarely feel much of 
anything, no longer have much of any real sense of romantic interest, 
have little intrinsic creative motivation, are unable to understand 
symbolism, tend to see the world in a literal manner, ...


and, then wonder: so it is? what now?...

doesn't really seem like it is the computer's fault any more than a 
person noting that they are also partially color-blind.


unless I have missed the point?...


a more obvious downside though is that generally, doing lots of stuff on 
a computer keeps the user nailed down to their chair. even though they 
might realize that getting up and doing stuff might be better for their 
health, doing so is time away from working on stuff...


I guess a mystery then would be if, some time in the future, there will 
be ways of using computers which don't effectively require the users to 
be sitting in a chair all day (ideally without compromising either the 
user experience or capabilities). (granted, yes, traditional exercise 
can be tiring/unpleasant though...).



as for the mentioned practice, it seems like it could conflict with a 
person's religious beliefs (many people consider these types of things 
as being occult).


more often a person might do something like memory-verses or similar 
instead (like, memorize and recite John 3:16 or similar, ...).


or such...



Julian

On 23/12/2012, at 4:28 AM, John Pratt jpra...@gmail.com wrote:



I want to tell everyone on this list about something I found.

Maybe someone out there hears what I say, thinks I am pretty
crazy for saying it to an entire mailing list, but appreciates it.

That is the kind of person I am sometimes.  I might tell a CEO
not to use high-class mustard on a hotdog and genuinely wonder afterwards
why he gets angry.  So, similarly, I am going to tell all of you to
go to FalunDafa.org http://falundafa.org/ because this is the best 
thing I have done

to extricate myself cognitively from computer prison that we
all live in.

It is true that computers are impressive, but they are also injurious
in other respects and if people won't acknowledge the downsides
to what they do to our cognition, I don't think that is ok, either. I am
actually a generalist on this subject, so I don't take technical stances
on this minor subject or that minor subject inside the vast field of
computer science.  But what holds true for me also holds true for you,
that computers draw you in to a certain, narrow type of thinking that
needs to be balanced by true, traditional, /human/ things like music 
or dance or art.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Falun Dafa

2012-12-22 Thread BGB

On 12/22/2012 9:11 PM, Julian Leviston wrote:

I think you've missed the point.

The point is... you need to use your body and your emotions as well as 
your mind. Our society is overly focussed on the mind.




could be, fair enough...


emotions are hard though: nearly completely absent at one moment, 
showing up and being distracting at another, and generally not 
very easy to make much sense of. but, I guess, if ignored too 
much they can start to fade away altogether; if not controlled, 
they can make a mess of things, leading to poor judgement and irrational 
behavior. most often when emotions do show up, they are like "I am 
bored and lonely, and this kind of sucks", which isn't really all that 
helpful. in other cases, they might show up, cause a sense of sadness, 
and erode one's ability to do stuff, which also isn't really helpful.


for most things in life, it doesn't seem to make much difference, but 
it does apparently have a bit of a dampening effect in the relationship 
sense, like no one is really interested, which probably doesn't help 
matters all that much (and it doesn't help when one can know with 
statistical near-certainty that it won't go anywhere, most often because 
there is some critical incompatibility, or because the other person 
has only a short period of time before they lose interest and go elsewhere).


most else is the sort of short-lived emotional state which arises from 
watching TV shows or similar, but, when the show ends, everything is as 
it was before.


nevermind things like poetry or similar, which are more just confusing 
and cryptic than anything else (what does it mean? who knows? wait, it 
was about drinking coffee? oh well, whatever...).


even for as ineffective as it ultimately is, a person can still get a 
lot more of an effect by watching a pile of anime or similar (say, a 
person can get ~ 75 hours of emotional stimulation by watching ~ 150 
episodes of InuYasha, then be looked down on by others for doing so, or 
similar...).


and, sometimes, there are good shows, some of which a person can wish 
there were more of (like, say, Invader Zim), but then again, there are 
always new shows (like MLP: FiM...).
and elsewhere, there are videos on YouTube, like all the endless 
Gangnam Style parodies.


otherwise, a person is left to realize that their life is kind of empty 
and unproductive, and seemingly all their emotions can really do 
is remind them how lame their life is (and there isn't even really 
much to want; like, say, there is no real way to build a newer/better 
computer without dumping lots of money into overly expensive parts, and 
better still would be finding a way to earn some sort of income...).


but, even as such, it is hard to imagine though if/how it could be any 
different.



in a way, such is life...


but, at least I am sort of making a game, and putting some videos of it 
on YouTube:

http://www.youtube.com/watch?v=GRVaCPgVxb8

and, a video about some of the high-level architecture:
http://www.youtube.com/watch?v=TlamKh8vUJ0

nevermind if it amounts to anything much more than this (hardly anyone 
cares, no one makes donations).


but, keeping going is still better than falling into despair, even if 
everything does eventually all amount to nothing.



or such...



Julian

On 23/12/2012, at 1:52 PM, BGB cr88...@gmail.com wrote:



On 12/22/2012 5:52 PM, Julian Leviston wrote:

Thank you, captain obvious.

Man is a three-centered (three-brained if you will) being. Focussing 
on only one of the brains is by definition imbalanced.


Bring back the renaissance man.



so, if, say, a person likes computers, but largely lacks either an 
emotional or creative side, is this implying that computers somehow 
took away their emotions and creativity, or is it more likely the 
case that they didn't really have them to begin with?...


like, a person after a while, observing that they rarely feel much of 
anything, no longer have much of any real sense of romantic interest, 
have little intrinsic creative motivation, are unable to understand 
symbolism, tend to see the world in a literal manner, ...


and, then wonder: so it is? what now?...

doesn't really seem like it is the computer's fault anymore than a 
person also noting that they are also partially color-blind.


unless I have missed the point?...


a more obvious downside though is that generally, doing lots of stuff 
on a computer keeps the user nailed down to their chair. even though 
they might realize that getting up and doing stuff might be better 
for their health, doing so is time away from working on stuff...


I guess a mystery then would be if, some time in the future, there 
will be ways of using computers which don't effectively require the 
users to be sitting in a chair all day (ideally without compromising 
either the user experience or capabilities). (granted, yes, 
traditional exercise can be tiring

Re: [fonc] How it is

2012-10-03 Thread BGB
 of my VM project is not that I have such big 
fancy code, or fancy code generation, but rather that my C code has 
reflection facilities. even for plain old C code, reflection can be 
pretty useful sometimes (allowing things that would otherwise be 
impossible, or at least, rather impractical).


so, in a way, reflection metadata is what makes my fancy C FFI possible.

this part was made to work fairly well, even if, admittedly, this is 
something of a fairly limited scope.



much larger "big concept" things though would likely require "big 
concept" metadata, and this is where the pain would begin.


with FFI gluing, the task is simpler, like:
on one side, I have a 'string' or 'char[]' type, and on the other, a 
'char *' type, what do I do?


usually, the types are paired in a reasonably straightforward way, and 
the number of arguments match, ... so the code can either succeed (and 
generate the needed interface glue), or fail at doing so.
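
a rough sketch of what this sort of metadata-driven gluing amounts to, 
in plain C (the structures and names are made up for illustration, not 
the actual implementation):

#include <stdio.h>
#include <string.h>

/* hypothetical reflection record for one C function: name, return-type
 * signature, and argument-type signature ("i" = int, "s" = char*) */
typedef struct {
    const char *name;
    const char *ret;
    const char *args;
    void      (*fn)(void);   /* stored generically; cast back based on 'args' */
} ffi_entry;

static int greet(const char *who) { printf("hello, %s\n", who); return 0; }

static ffi_entry ffi_table[] = {
    { "greet", "i", "s", (void (*)(void))greet },
};

/* toy marshaller: given a script-side string argument, check the metadata
 * and forward it as a char* -- the straightforward 1:1 pairing case */
static int ffi_call_str(const char *name, const char *arg0) {
    for (size_t i = 0; i < sizeof(ffi_table)/sizeof(ffi_table[0]); i++) {
        if (strcmp(ffi_table[i].name, name) == 0 &&
            strcmp(ffi_table[i].args, "s") == 0) {
            int (*f)(const char *) = (int (*)(const char *))ffi_table[i].fn;
            return f(arg0);
        }
    }
    return -1;  /* no matching signature: glue generation would fail here */
}

int main(void) { return ffi_call_str("greet", "world"); }

the point being: as long as the metadata pairs up 1:1 with what the C 
side expects, glue like this can be produced mechanically; when it 
doesn't, the generator can only give up.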


but, admittedly, figuring out something like "how do I make these two 
unrelated things interact?", where there is not a clear 1:1 mapping, is 
not such an easy task.



this is then where we get into APIs, ABIs, protocols, ... where each 
side defines a particular (and, usually narrow) set of defined 
interactions, and interacts in a particular way.


and this is, itself, ultimately limited.

for example, a person can plug all manner of filesystems into the Linux 
VFS subsystem, but ultimately there is a restriction here: it has to be 
able to present itself as a hierarchical filesystem.


more so, given the way it is implemented in Linux, it has to be possible 
to enumerate the files. so, sad as it is, you can't really just implement 
the internet as a Linux VFS driver (say, 
/mnt/http/www.google.com/#hl=en...), although some other VFS-style 
systems allow this.
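
as a rough illustration of the constraint (a toy interface, not the 
actual kernel API): any backend has to be able to hand back a finite 
listing of entries, and an "all of HTTP" backend simply can't:

#include <stdio.h>

/* illustrative-only backend interface, loosely in the spirit of a VFS */
typedef struct {
    const char *name;
    /* list entries under 'path' into out[], return count, or -1 if the
     * backend cannot enumerate (which a hierarchical FS must be able to) */
    int (*list)(const char *path, const char *out[], int max);
} toy_backend;

static int ramfs_list(const char *path, const char *out[], int max) {
    (void)path;
    const char *fixed[] = { "etc", "home", "tmp" };
    int n = 0;
    while (n < 3 && n < max) { out[n] = fixed[n]; n++; }
    return n;
}

static int httpfs_list(const char *path, const char *out[], int max) {
    /* the set of all URLs under "/" is not enumerable, so this backend
     * cannot satisfy the interface */
    (void)path; (void)out; (void)max;
    return -1;
}

int main(void) {
    toy_backend fs[] = { { "ramfs", ramfs_list }, { "httpfs", httpfs_list } };
    const char *names[8];
    for (int i = 0; i < 2; i++) {
        int n = fs[i].list("/", names, 8);
        printf("%s: %d entries\n", fs[i].name, n);
    }
    return 0;
}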



so, in this sense, it still requires intelligence to put the pieces 
together, and design the various ways in which they may interoperate...



I really don't know if this helps, or is just me going off on a tangent.



Paul.



*From:* BGB cr88...@gmail.com
*To:* fonc@vpri.org
*Sent:* Tuesday, October 2, 2012 5:48:14 PM
*Subject:* Re: [fonc] How it is

On 10/2/2012 12:19 PM, Paul Homer wrote:

It always seems to be that each new generation of programmers
goes straight for the low-hanging fruit, ignoring that most of it
has already been solved many times over. Meanwhile the real
problems remain. There has been progress, but over the couple of
decades I've been working, I've always felt that it was '2 steps
forward, 1.99 steps back.



it depends probably on how one measures things, but I don't think
it is quite that bad.

more like, I suspect, a lot has to do with pain-threshold:
people will clean things up so long as they are sufficiently
painful, but once this is achieved, people no longer care.

the rest is people mostly recreating the past, often poorly,
usually under the idea this time we will do it right!, often
without looking into what the past technologies did or did not do
well engineering-wise.

or, they end up trying for something different, but usually this
turns out to be recreating something which already exists and
turns out to typically be a dead-end (IOW: where many have gone
before, and failed). often the people will think why has no one
done it before this way? but, usually they have, and usually it
didn't turn out well.

so, a blind rebuild starting from nothing probably wont achieve
much.
like, it requires taking account of history to improve on it
(classifying various options and design choices, ...).


it is like trying to convince other language/VM
designers/implementers that expecting the end programmer to have
to write piles of boilerplate to interface with C is a problem
which should be addressed, but people just go and use terrible
APIs usually based on registering the C callbacks with the VM
(or they devise something like JNI or JNA and congratulate
themselves, rather than being like this still kind of sucks).

though in a way it sort of makes sense:
many language designers end up thinking like this language will
replace C anyways, why bother to have a half-decent FFI?
whereas it is probably a minority position to design a language
and VM with the attitude C and C++ aren't going away anytime soon.


but, at least I am aware that most of my stuff is poor imitations
of other stuff, and doesn't really do much of anything actually
original, or necessarily even all that well, but at least I can
try to improve on things (like, rip-off and refine).

even, yes, as misguided and wasteful as it all may seem sometimes...


in a way it can be distressing though when one has created
something that is lame

Re: [fonc] How it is

2012-10-03 Thread BGB

On 10/3/2012 2:46 PM, Paul Homer wrote:

I think it's because that's what we've told them to ask for :-)

In truth we can't actually program 'everything', I think that's a 
side-effect of Godel's incompleteness theorem. But if you were to take 
'everything' as being abstract quantity, the more we write, the closer 
our estimation comes to being 'everything'. That perspective lends 
itself to perhaps measuring the current state of our industry by how 
much code we are writing right now. In the early years, we should be 
writing more and more. In the later years, less and less (as we get 
closer to 'everything'). My sense of the industry right now is that 
pretty much every year (factoring in the economy and the waxing or 
waning of the popularity of programming) we write more code than the 
year before. Thus we are only starting :-)





yeah, this seems about right.

from my own experience, new code being written in any given area tends 
to drop off once that part is reasonably stable or complete, apart from 
occasional tweaks/extensions, ...


but, there is always more to do somewhere else, so on average the code 
gradually gets bigger, as more functionality gets added in various areas.


and, I often have to decide where I will not invest time and effort.

so, yeah, this falls well short of everything...



Paul.


*From:* Pascal J. Bourguignon p...@informatimago.com
*To:* Paul Homer paul_ho...@yahoo.ca
*Cc:* Fundamentals of New Computing fonc@vpri.org
*Sent:* Wednesday, October 3, 2012 3:32:34 PM
*Subject:* Re: [fonc] How it is

Paul Homer paul_ho...@yahoo.ca mailto:paul_ho...@yahoo.ca writes:

 The on-going work to enhance the system would consist of modeling data, 
 and creating transformations. In comparison to modern software development, 
 these would be very little pieces, and if they were shared are intrinsically 
 reusable (and recombinable).

Yes, that gives L4Gs.  Eventually (when we'll have programmed
everything) all computing will be only done with L4Gs: managers
specifying their data flows.

But strangely enough, users are always asking for new programs... Is it 
because we've not programmed every function already, or because we will 
never have them all programmed?


-- 
__Pascal Bourguignon__ http://www.informatimago.com/

A bad day in () is better than a good day in {}.




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-02 Thread BGB

On 10/2/2012 12:19 PM, Paul Homer wrote:
It always seems to be that each new generation of programmers goes 
straight for the low-hanging fruit, ignoring that most of it has 
already been solved many times over. Meanwhile the real problems 
remain. There has been progress, but over the couple of decades I've 
been working, I've always felt that it was '2 steps forward, 1.99 
steps back.




it depends probably on how one measures things, but I don't think it is 
quite that bad.


more like, I suspect, a lot has to do with pain-threshold:
people will clean things up so long as they are sufficiently painful, 
but once this is achieved, people no longer care.


the rest is people mostly recreating the past, often poorly, usually 
under the idea this time we will do it right!, often without looking 
into what the past technologies did or did not do well engineering-wise.


or, they end up trying for something different, but usually this turns 
out to be recreating something which already exists and turns out to 
typically be a dead-end (IOW: where many have gone before, and failed). 
often the people will think why has no one done it before this way? 
but, usually they have, and usually it didn't turn out well.


so, a blind rebuild starting from nothing probably won't achieve much.
like, it requires taking account of history to improve on it 
(classifying various options and design choices, ...).



it is like trying to convince other language/VM designers/implementers 
that expecting the end programmer to have to write piles of boilerplate 
to interface with C is a problem which should be addressed, but people 
just go and use terrible APIs usually based on registering the C 
callbacks with the VM (or they devise something like JNI or JNA and 
congratulate themselves, rather than being like "this still kind of sucks").
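
to make the complaint concrete, here is a tiny sketch of the sort of 
hand-written glue in question (the "VM" is a made-up stand-in, not 
JNI/JNA or any real API):

#include <stdio.h>

/* hypothetical embedding API, for illustration only: a tiny stand-in
 * "VM" that passes arguments through an int array, just to show the
 * shape of hand-written glue code */
typedef struct { int args[8]; int ret; } vm_state;

static int  vm_arg_int(vm_state *vm, int idx)      { return vm->args[idx]; }
static void vm_return_int(vm_state *vm, int value) { vm->ret = value; }

/* the actual C function we want the scripting language to call */
static int add(int a, int b) { return a + b; }

/* the boilerplate being complained about: one wrapper like this per
 * exported function, written and maintained by hand */
static void glue_add(vm_state *vm) {
    int a = vm_arg_int(vm, 0);
    int b = vm_arg_int(vm, 1);
    vm_return_int(vm, add(a, b));
}

int main(void) {
    vm_state vm = { { 2, 3 }, 0 };
    glue_add(&vm);
    printf("add -> %d\n", vm.ret);
    return 0;
}

with reflection metadata of the sort mentioned elsewhere, a wrapper like 
glue_add could be generated automatically rather than written and 
maintained by hand for every exported function.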


though in a way it sort of makes sense:
many language designers end up thinking like this language will replace 
C anyways, why bother to have a half-decent FFI? whereas it is 
probably a minority position to design a language and VM with the 
attitude C and C++ aren't going away anytime soon.



but, at least I am aware that most of my stuff is poor imitations of 
other stuff, and doesn't really do much of anything actually original, 
or necessarily even all that well, but at least I can try to improve on 
things (like, rip-off and refine).


even, yes, as misguided and wasteful as it all may seem sometimes...


in a way it can be distressing though when one has created something 
that is lame and ugly, but at the same time is aware of the various 
design tradeoffs that have caused them to design it that way (like, a 
cleaner and more elegant design could have been created, but might have 
suffered in another way).


in a way, it is a slightly different experience I suspect...



Paul.


*From:* John Pratt jpra...@gmail.com
*To:* fonc@vpri.org
*Sent:* Tuesday, October 2, 2012 11:21:59 AM
*Subject:* [fonc] How it is

Basically, Alan Kay is too polite to say what
we all know to be the case, which is that things
are far inferior to where they could have been
if people had listened to what he was saying in the 1970's.

Inefficient chip architectures, bloated frameworks,
and people don't know at all.

It needs a reboot from the core, all of it, it's just that
people are too afraid to admit it.  New programming languages,
not aging things tied to the keyboard from the 1960's.

It took me 6 months to figure out how to write a drawing program
in cocoa, but a 16-year-old figured it out in the 1970's easily
with Smalltalk.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-02 Thread BGB

On 10/2/2012 5:48 PM, Pascal J. Bourguignon wrote:

BGB cr88...@gmail.com writes:


On 10/2/2012 12:19 PM, Paul Homer wrote:

 It always seems to be that each new generation of programmers goes
 straight for the low-hanging fruit, ignoring that most of it has
 already been solved many times over. Meanwhile the real problems
 remain. There has been progress, but over the couple of decades
 I've been working, I've always felt that it was '2 steps forward,
 1.99 steps back.

it depends probably on how one measures things, but I don't think it
is quite that bad.

more like, I suspect, a lot has to do with pain-threshold: people will
clean things up so long as they are sufficiently painful, but once
this is achieved, people no longer care.

the rest is people mostly recreating the past, often poorly, usually
under the idea this time we will do it right!, often without looking
into what the past technologies did or did not do well
engineering-wise.

or, they end up trying for something different, but usually this
turns out to be recreating something which already exists and turns
out to typically be a dead-end (IOW: where many have gone before, and
failed). often the people will think why has no one done it before
this way? but, usually they have, and usually it didn't turn out
well.

One excuse for this however, is that sources for old research projects
are not available generally, the more so for failed projects. At most,
there's a paper describing the project and some results, but no source,
much less machine readable sources.  (The fact is that those sources
were on punch cards or other unreadable media).


a lot of this is true for things which are much more recent as well.



so, a blind rebuild starting from nothing probably wont achieve
much.  like, it requires taking account of history to improve on it
(classifying various options and design choices, ...).

Sometimes while not making great scientific or technological advances,
it still improves things.  Linus wanted to learn unix and wrote Linux
and Richard wanted to have the sources and wrote GNU, and we get
GNU/Linux which is better than the other unices.


well, except, in both of these cases, they were taking account of things 
which happened before:

both of them knew about, and were basing their design efforts off of, Unix.


the bigger problem is not with people being like "I am going to write my 
own version of X", but, rather, a person running into the problem 
without really taking into account that X ever existed, or without 
putting any effort into understanding how it worked.




it is like trying to convince other language/VM designers/implementers
that expecting the end programmer to have to write piles of
boilerplate to interface with C is a problem which should be
addressed, but people just go and use terrible APIs usually based on
registering the C callbacks with the VM (or they devise something
like JNI or JNA and congratulate themselves, rather than being like
this still kind of sucks).

though in a way it sort of makes sense: many language designers end up
thinking like this language will replace C anyways, why bother to
have a half-decent FFI? whereas it is probably a minority
position to design a language and VM with the attitude C and C++
aren't going away anytime soon.

but, at least I am aware that most of my stuff is poor imitations of
other stuff, and doesn't really do much of anything actually original,
or necessarily even all that well, but at least I can try to improve
on things (like, rip-off and refine).

even, yes, as misguided and wasteful as it all may seem sometimes...

in a way it can be distressing though when one has created something
that is lame and ugly, but at the same time is aware of the various
design tradeoffs that has caused them to design it that way (like, a
cleaner and more elegant design could have been created, but might
have suffered in another way).

in a way, it is a slightly different experience I suspect...

I would say that for one thing the development of new ideas would have
to be done in autarcy: we don't want and can't support old OSes and old
languages, since the fundamental principles will be different.

But then I'd observe the fate of those different systems, even with a
corporation such as IBM backing them, such as OS/400, or BeOS.  Even if
some of them could find a niche, they remain quite confidential.


yeah. the problem is, a new thing is hard.
it is one thing to sell, for example, an x86 chip with a few more 
features hacked on, and quite another to try to sell something like an 
Itanium.


people really like their old stuff to keep on working, and for better or 
worse, it makes sense to keep the new thing as a backwards-compatible 
extension.




On the other hand, more or less relatively recently, companies have been
able to develop and sell new languages/systems: Sun did Java/JVM and
it's now developed by Google in Android systems;  Apple promoted

Re: [fonc] Deployment by virus

2012-07-19 Thread BGB

On 7/19/2012 7:32 AM, Eugen Leitl wrote:

On Thu, Jul 19, 2012 at 02:28:18PM +0200, John Nilsson wrote:

More work relative to an approach where full specification and control is
feasible. I was thinking that in a not too distant future we'll want to
build systems of such complexity that we need to let go of such dreams.

It could be enough with one system. How do you evolve a system that has
emerged from some initial condition directed by user input? Even with only
one instance of it running you might have no way to recreate it so you must
patch it, and given sufficient complexity you might have no way to know how
a binary diff should be created.

It seems a great idea for evolutionary computation (GA/GP) but an
awful idea for human engineering.


it comes back to the idea of total complexity vs perceived or external 
complexity:
as the system gets larger and more complex, the level of abstraction 
tends to increase.


the person can still design and engineer a system, just typically 
working at a higher level of abstraction (and a fair amount of 
conceptual layering).


so, yeah, traditional engineering and development practices will 
probably continue on well into the foreseeable future.




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Component-based software

2012-07-18 Thread BGB

On 7/18/2012 4:04 PM, Miles Fidelman wrote:

Tomasz Rola wrote:

On Sun, 15 Jul 2012, Ivan Zhao wrote:

By Victorian plumbing, I meant the standardization of the plumbing 
and
hardware components at the end of the 19th century. It greatly 
liberated
plumbers from fixing each broken toilet from scratch, to simply 
picking and

assembling off the shelf pieces.



There was (or even still is) a proposition to make software from
prefabricated components. Not much different to another proposition 
about
using prefabricated libraries/dlls etc. Anyway, seems like there is a 
lot

of component schools nowadays, and I guess they are unable to work with
each other - unless you use a lot of chewing gum and duct tape.



It's really funny, isn't it - how badly software components have 
failed.  The world is littered with component libraries of various 
sorts, that are unmitigated disasters.


Except... when it actually works.  Consider:
- all the various c libraries
- all the various java libraries
- all the various SDKs floating around
- cpan (perl)

Whenever we use an include statement, or run a make, we're really 
assembling from huge libraries of components.  But we don't quite 
think of it that way for some reason.




yeah.

a few factors I think:
how much is built on top of the language;
how much is mandatory when interacting with the system (basically, in 
what ways does it impose itself on the way the program is structured or 
works, what sorts of special treatment does it need when being used, ...).


libraries which tend to be more successful are those which operate at a 
level much closer to that of the base language, and which avoid placing 
too many special requirements on the code using the library (must always 
use memory-allocator X, object system Y, must register global roots with 
the GC, ...).
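
as a small sketch of what staying close to the base language can look 
like (plain C, made-up names; the caller keeps control of allocation 
rather than the library imposing its own):

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

/* caller-supplied allocation pair: the library never insists on its own */
typedef struct {
    void *(*alloc)(size_t n);
    void  (*release)(void *p);
} allocator;

typedef struct {
    allocator a;
    char    **items;
    size_t    count, cap;
} strlist;

static void strlist_init(strlist *l, allocator a) {
    l->a = a; l->items = NULL; l->count = 0; l->cap = 0;
}

static int strlist_push(strlist *l, const char *s) {
    if (l->count == l->cap) {
        size_t ncap = l->cap ? l->cap * 2 : 4;
        char **ni = l->a.alloc(ncap * sizeof(char *));
        if (!ni) return -1;
        if (l->count) memcpy(ni, l->items, l->count * sizeof(char *));
        if (l->items) l->a.release(l->items);
        l->items = ni;
        l->cap = ncap;
    }
    l->items[l->count++] = (char *)s;
    return 0;
}

int main(void) {
    allocator libc = { malloc, free };   /* caller keeps using malloc/free */
    strlist l;
    strlist_init(&l, libc);
    strlist_push(&l, "hello");
    strlist_push(&l, "world");
    for (size_t i = 0; i < l.count; i++)
        printf("%s\n", l.items[i]);
    libc.release(l.items);
    return 0;
}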


say, a person building a component library for C is like:
ok, I will build a GC and some OO facilities;
now I am going to build some generic containers on top of said OO library;
now I am going to write a special preprocessor to make it easier to use;
...

while ignoring issues like:
what if the programmer still wants or needs to use something like 
malloc or mmap?;
how casual may the programmer be regarding the treatment of object 
references?;
what if the programmer wants to use the containers without using the OO 
facilities?;
what if for some reason the programmer wants to write code which does 
not use the preprocessor, and call into code which does?;

...

potentially, the library can build a large collection of components, but 
they don't play well with others (say, due to large amounts of 
internal dependencies and assumptions in the design). this means that, 
potentially, interfacing a codebase built on the library with another 
codebase may require an inordinate amount of additional pain.



in my case I have tried to, where possible, avoid these sorts of issues 
in my own designs, partly by placing explicit restrictions on what sorts 
of internal dependencies and assumptions are allowed when writing 
various pieces of code, and trying to keep things, for the most part, 
fairly close to the metal.



or such...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-17 Thread BGB

On 7/17/2012 9:04 AM, Pascal J. Bourguignon wrote:

David-Sarah Hopwood david-sa...@jacaranda.org writes:


On 17/07/12 02:15, BGB wrote:

so, typically, males work towards having a job, getting lots money, ... and 
will choose
females based mostly how useful they are to themselves (will they be faithful, 
would they
make a good parent, ...).

meanwhile, females would judge a male based primarily on their income, 
possessions,
assurance of continued support, ...

not that it is necessarily that way, as roles could be reversed (the female 
holds a job),
or mutual (both hold jobs). at least one person needs to hold a job though, and 
by
default, this is the social role for a male (in the alternate case, usually the 
female is
considerably older, which has a secondary limiting factor in that females have 
a viable
reproductive span that is considerably shorter than that for males, meaning 
that the
older-working-female scenario is much less likely to result in offspring, ...).

in this case, then society works as a sort of sorting algorithm, with better 
mates
generally ending up together (rich business man with trophy wife), and worse 
mates ending
up together (poor looser with a promiscuous or otherwise undesirable wife).

Way to go combining sexist, classist, ageist, heteronormative, cisnormative, 
ableist
(re: fertility) and polyphobic (equating multiple partners with undesirability)
assumptions, all in the space of four paragraphs. I'm not going to explain in 
detail
why these are offensive assumptions, because that is not why I read a mailing 
list
that is supposed to be about the Fundamentals of New Computing. Please stick 
to
that topic.

It is, but it is the reality, and the reason of most of our problems
too.  And it's not by putting an onus on the expression of these choices
that you will repress them: they come from the deepest, our genes and
the genetic selection that has been applied on them for millennia.

My point here being that what's needed is a change in how selection of
reproductive partners is done, and obviously, I'm not considering doing
it based on money or political power.   Of course, I have none of either
:-)


yeah.

don't think that this is me saying that everything should operate this 
way, rather that, at least from my observations, this is largely how it 
does already. (whether it is good or bad then is a separate and 
independent issue).


the issue with a person going outside the norm may not necessarily be 
that it is bad or wrong for them to do so, but that it may risk putting 
them at a social disadvantage.


in the original context, it was in relation to a person trying to 
maximize their own pursuit of self-interest, which would tend to 
probably overlap somewhat with adherence to societal norms.



granted, that is not to say, for example, that everything I do is 
socially advantageous:
for example, being a programmer / computer nerd carries its own set of 
social stigmas and negative stereotypes (and in many ways I still hold 
minority views on things, ...).


an issue though is that society will not tend to see a person as they 
are as a person, but will rather tend to see a person in terms of a 
particular set of stereotypes.




And yes, it's perfectly on-topic, if you consider how science and
technology developments are directed.  Most of our computing technology
has been created for war.


yes.



Or said otherwise, why do you think this kind of refundation project
hasn't the same kind of resources allocated to the commercial or
military projects?



I am not entirely sure I understand the question here.

if you mean, why don't people go and try to remake society in a 
different form?

well, I guess that would be a hard one.

about as soon as people start trying to push for any major social 
changes, there is likely to be a large amount of resistance and backlash.


it is much like, if you have one person pushing for progressive 
ideals, you will end up with another pushing for conservative ideals, 
typically with relatively little net change. (so, sort of a societal 
equal-and-opposite effect). (by progressive and conservative here, I 
don't necessarily mean them exactly as they are used in current US 
politics, but more in general).


there will be changes though in a direction where nearly everyone agrees 
that this is the direction they want to go, but people fighting or 
trying to impose their ideals on the other side is not really a good 
solution. people really don't like having their personal freedoms and 
choices being hindered, or having their personal ideals and values torn 
away simply because this is how someone else feels things should be 
(the problem is that promoting something for one person also tends to 
come at the cost of imposing it on someone else).


a better question would be:
what sort of things have come up where nearly everyone has agreed and 
ended up going along with it?


people don't as often think as much about these ones, since

Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-17 Thread BGB

On 7/17/2012 11:12 AM, Loup Vaillant wrote:

Pascal J. Bourguignon wrote:

BGB cr88...@gmail.com writes:

dunno, I learned originally partly by hacking on pre-existing
codebases, and by cobbling things together and seeing what all did and
did not work (and was later partly followed by looking at code and
writing functionally similar mock-ups, ...).

some years later, I started writing a lot more of my own code, which
largely displaced the use of cobbled-together code.

from what I have seen in code written by others, this sort of cobbling
seems to be a fairly common development process for newbies.



I learn programming languages basically by reading the reference, and by
exploring the construction of programs from the language rules.


When I started learning programming on my TI82 palmtop in high school, 
I started by copying programs verbatim.  Then, I gradually started to 
do more and more from scratch. Like BGB.


But when I learn a new language now, I do read the reference (if any), 
and construct programs from the language rules. Like Pascal.


Maybe there's two kinds of beginners: beginners in programming itself, 
and beginners in a programming language.




yep.


likewise, many people who aren't really programmers, but are just trying 
to get something done, probably aren't really going to take a formal 
approach to learning programming, but are more likely going to try to 
find code fragments off the internet they can cobble together to make 
something that basically works.


sometimes, it takes a while to really make the transition, from being 
someone who wrote a lot of what they had by cobbling and imitation, to 
being someone who really understands how it all actually works.




Loup.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-17 Thread BGB

On 7/17/2012 9:47 PM, David-Sarah Hopwood wrote:

[Despite my better judgement I'm going to respond to this even though it is
seriously off-topic.]


in all likelihood, the topic will probably end pretty soon anyways.
don't really know how much more can really be said on this particular 
subject anyways.


but, yeah, probably this topic has gone on long enough.



On 17/07/12 17:18, BGB wrote:

an issue though is that society will not tend to see a person as they are as a 
person, but
will rather tend to see a person in terms of a particular set of stereotypes.

Society doesn't see people as anything. We do live in/with a culture where
stereotyping is commonplace, but the metonymy of letting the society stand for 
the
people in it is inappropriate here, because it is individual people who 
*choose* to
see other people in terms of stereotypes, or choose not to do so.

You're also way too pessimistic about the extent to which most reasonably 
well-educated
people in practice permit cultural stereotypes to override independent thought. 
Most
people are perfectly capable of recognizing stereotypes -- even if they 
sometimes need a
little prompting -- and understanding what is wrong with them.


a big factor here is how well one person knows another.
stereotypes and generalizations are a much larger part of the 
interaction process when dealing with people who are either strangers or 
casual acquaintances.


if the person is known by much more than a name and a face and a few 
other bits of general information, yes, then maybe they will take a 
little more time to be a little more understanding.




I speak from experience: it is entirely possible to live your life in a way 
that is
quite opposed to many of those cultural stereotypes that you've expressed 
concerning
sexuality, gender expression, employment, reproductive choices, etc., and still 
be
accepted as a matter of course by the vast majority of people. As for the 
people who don't
accept that, *it's their fault* that they don't get it. No excuses of the form
"society made me think that way".


I think it depends some on the cultural specifics as well, since how 
well something may go over may depend a lot on where a person is, and 
who they are interacting with.


if a person is located somewhere where these things are fairly common 
and generally considered acceptable (for example: California), it may go 
over a lot easier with people than somewhere where it is less commonly 
accepted (for example: Arkansas or Alabama or similar).


likewise, it may go over a bit easier with people who are generally more 
accepting of these forms of lifestyle (such as more non-religious / 
secular type people), than it will with people who are generally less 
accepting of these behaviors (say, people with a more conservative leaning).



(I would prefer not go too much more into this, since yeah, here 
generally isn't really the place for all this.).



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-16 Thread BGB

On 7/16/2012 8:00 AM, Pascal J. Bourguignon wrote:

Miles Fidelman mfidel...@meetinghouse.net writes:


Pascal J. Bourguignon wrote:

Miles Fidelman mfidel...@meetinghouse.net writes:

And seems to have turned into something about needing to recreate the
homebrew computing milieu, and everyone learning to program - and
perhaps why don't more people know how to program?

My response (to the original question) is that folks who want to
write, may want something more flexible (programmable) than Word, but
somehow turning everone into c coders doesn't seem to be the answer.

Of course not.  That's why there are languages like Python or Logo.



More flexible tools (e.g., HyperCard, spreadsheets) are more of an
answer -  and that's a challenge to those of us who develop tools.
Turning writers, or mathematicians, or artists into coders is simply a
recipe for bad content AND bad code.

But everyone learns mathematics, and even if they don't turn out
professionnal mathematicians, they at least know how to make a simple
demonstration (or at least we all did when I was in high school, so it's
possible).

Similarly, everyone should learn CS and programming, and even if they
won't be able to manage software complexity at the same level as
professionnal programmers (ought to be able to), they should be able to
write simple programs, at the level of emacs commands, for their own
needs, and foremost, they should understand enough of CS and programming
to be able to have meaningful expectations from the computer industry
and from programmers.

Ok... but that begs the real question: What are the core concepts that
matter?

There's a serious distinction between computer science, computer
engineering, and programming.  CS is theory, CE is architecture and
design, programming is carpentry.

In math, we start with arithmetic, geometry, algebra, maybe some set
theory, and go on to trigonometry, statistics, calculus, .. and
pick up some techniques along the way (addition, multiplication, etc.)


in elementary school, I got out of stuff, because I guess the school 
figured my skills were better spent doing IT stuff, so that is what I 
did (and I guess also because, at the time, I was generally a bit of a 
smart kid compared to a lot of the others, since I could read and do 
arithmetic pretty well already, ...).


by high school, it was the Pre-Algebra / Algebra 1/2 route (basically, 
the lower route), so basically the entirety of high school was spent 
solving for linear equations (well, apart from the first one, which was 
mostly about hammering out the concept of variables and PEMDAS).


took 151A at one point, which was basically like algebra + matrices + 
complex numbers + big sigma, generally passed this.



tried to do other higher-level college level math classes later, total 
wackiness ensues, me having often little idea what is going on and 
getting lost as to how to actually do any of this stuff.


although, on the up-side, I did apparently manage to impress some people 
in a class by mentally calculating the inverse of a matrix... (nevermind 
ultimately bombing on nearly everything else in that class).



general programming probably doesn't need much more than pre-algebra or 
maybe algebra level stuff anyways, but maybe touching on other things 
that are useful to computing: matrices, vectors, sin/cos/..., the big 
sigma notation, ...




In science, it's physics, chemistry, biology,  and we learn some
lab skills along the way.

What are the core concepts of CS/CE that everyone should learn in
order to be considered educated?  What lab skills?  Note that there
still long debates on this when it comes to college curricula.

Indeed.  The French National Education is answering to that question
with its educational programme, and the newly edited manual.

https://wiki.inria.fr/sciencinfolycee/TexteOfficielProgrammeISN

https://wiki.inria.fr/wikis/sciencinfolycee/images/7/73/Informatique_et_Sciences_du_Num%C3%A9rique_-_Sp%C3%A9cialit%C3%A9_ISN_en_Terminale_S.pdf



can't say much on this.


but, a person can get along pretty well provided they get basic literacy 
down fairly solidly (can read and write, and maybe perform basic 
arithmetic, ...).


most other stuff is mostly optional, and won't tend to matter much in 
daily life for most people (and most will probably soon enough forget it 
anyways once they no longer have a school trying to force it down their 
throats and/or needing to cram for tests).


so, the main goal in life is basically finding employment and basic job 
competence, mostly with education being as a means to an end: getting 
higher paying job, ...


(so, person pays colleges, goes through a lot of pain and hassle, gets a 
degree, and employer pays them more).




Some of us greybeards (or fuddy duddies if you wish) argue for
starting with fundamentals:
- boolean logic
- information theory
- theory of computing
- hardware design
- machine language programming (play with microcontrollers in the lab)

Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-16 Thread BGB

On 7/16/2012 11:22 AM, Pascal J. Bourguignon wrote:

BGB cr88...@gmail.com writes:


general programming probably doesn't need much more than pre-algebra
or maybe algebra level stuff anyways, but maybe touching on other
things that are useful to computing: matrices, vectors, sin/cos/...,
the big sigma notation, ...

Definitely.  Programming needs discrete mathematics and statistics much
more than the mathematics that are usually taught (which are more useful
eg. to physics).


yes, either way.

college experience was basically like:
go to math classes, which tend to be things like Calculus and similar;
brain melting ensues;
no degree earned.

then I had to move, and the college here would require taking a bunch 
more different classes, and I would still need math classes, making 
trying to do so not terribly worthwhile.




but, a person can get along pretty well provided they get basic
literacy down fairly solidly (can read and write, and maybe perform
basic arithmetic, ...).

most other stuff is mostly optional, and wont tend to matter much in
daily life for most people (and most will probably soon enough forget
anyways once they no longer have a school trying to force it down
their throats and/or needing to cram for tests).

No, no, no.  That's the point of our discussion.  There's a need to
increase computer-literacy, actually programming-literacy of the
general public.


well, I mean, they could have a use for computer literacy, ... depending 
on what they are doing.
but, do we need all the other stuff, like US History, Biology, 
Environmental Science, ... that comes along with it, and which doesn't 
generally transfer from one college to another?...


they are like, "no, you have World History, we require US History" or 
"we require Biology, but you have Marine Biology."


and, one can ask: does your usual programmer actually even need to know 
who the past US presidents were and what things they were known for? or 
the differences between Ruminant and Equine digestive systems regarding 
their ability to metabolize cellulose?


maybe some people have some reason to know, most others don't, and for 
them it is just the educational system eating their money.




The situation where everybody would be able (culturally, with a basic
knowing-how, an with the help of the right software tools and system) to
program their applications (ie. something totally contrary to the
current Apple philosophy), would be a better situation than the one
where people are dumbed-down and are allowed to use only canned software
that they cannot inspect and adapt to their needs.


yes, but part of the problem here may be more about the way the software 
industry works, and general culture, rather than strictly about education.


in a world where typically only closed binaries are available, and where 
messing with what is available may risk a person facing legal action, 
then it isn't really a good situation.


likewise, the main way which newbies tend to develop code is by 
copy-pasting from others and by making tweaks to existing code and data, 
again, both of which may put a person at legal risk (due to copyright, 
...), and often results in people creating programs which they don't 
actually have the legal right to possess much less distribute or sell to 
others.



yes, granted, it could be better here.
FOSS sort of helps, but still has limitations.

something like, the ability to move code between a wider range of 
compatible licenses, or safely discard the license for sufficiently 
small code fragments (say, under 25 or 50 or 100 lines or so), could make sense.



all this is in addition to technical issues, like reducing the pain and 
cost by which a person can go about making changes (often, it requires 
the user to be able to get the program to be able to rebuild from 
sources before they have much hope of being able to mess with it, 
limiting this activity more to serious developers).


likewise, it is very often overly painful to make contributions back 
into community projects, given:
usually only core developers have write access to the repository (for 
good reason);

fringe developers typically submit changes via diff patches;
usually this itself requires communication with the developers (often 
via subscribing to a developer mailing-list or similar);
nevermind the usual hassles of making the patches just so, so that the 
core developers will actually look into them (they often get fussy over 
things like which switches they want used with diff, ...);

...

ultimately, this may mean that the vast majority of minor fixes will 
tend to remain mostly in the hands of those who make them, and not end 
up being committed back into the main branch of the project.


in other cases, it may lead to forks, mostly because non-core 
developers can't really deal with the core project leader, who lords 
over the project or may just be a jerk-face, or a group of people may 
want features which the core doesn't feel are needed

Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-16 Thread BGB

On 7/16/2012 8:59 PM, David-Sarah Hopwood wrote:

On 17/07/12 02:15, BGB wrote:

so, typically, males work towards having a job, getting lots money, ... and 
will choose
females based mostly how useful they are to themselves (will they be faithful, 
would they
make a good parent, ...).

meanwhile, females would judge a male based primarily on their income, 
possessions,
assurance of continued support, ...

not that it is necessarily that way, as roles could be reversed (the female 
holds a job),
or mutual (both hold jobs). at least one person needs to hold a job though, and 
by
default, this is the social role for a male (in the alternate case, usually the 
female is
considerably older, which has a secondary limiting factor in that females have 
a viable
reproductive span that is considerably shorter than that for males, meaning 
that the
older-working-female scenario is much less likely to result in offspring, ...).

in this case, then society works as a sort of sorting algorithm, with better 
mates
generally ending up together (rich business man with trophy wife), and worse 
mates ending
up together (poor looser with a promiscuous or otherwise undesirable wife).

Way to go combining sexist, classist, ageist, heteronormative, cisnormative, 
ableist
(re: fertility) and polyphobic (equating multiple partners with undesirability)
assumptions, all in the space of four paragraphs. I'm not going to explain in 
detail
why these are offensive assumptions, because that is not why I read a mailing 
list
that is supposed to be about the Fundamentals of New Computing. Please stick 
to
that topic.



sorry to anyone who was offended by any of this, it was not my intent to 
cause any offense here.



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-15 Thread BGB

On 7/14/2012 5:11 PM, Iian Neill wrote:

Ivan,

I have some hope for projects like the Raspberry Pi computer, which aims to 
replicate the 'homebrew' computing experience of the BBC Micro in Britain in 
the 1980s. Of course, hardware is only part of the equation -- even versatile 
hardware that encourages electronic tinkering -- and the languages and software 
that are bundled with the Pi will be key.


yeah, hardware is one thing, software another.



Education is ultimately the answer, but what kind of education? Our computer 
science education is itself a product of our preconceptions of the field of 
computing, and to some degree fails to bridge the divide between the highly skilled 
technocratic elite and the personal computer consumer. The history of home 
computing in the Eighties shows the power of cheap hardware and practically 'bare 
metal' systems that are conceptually graspable. And I suspect the fact that BASIC 
was an interpreted language had a lot to do with fostering experimentation and play.


maybe it would help if education people would stop thinking that CS is 
some sort of extension of Calculus or something... (and stop assigning 
scary-level math classes as required for CS majors). this doesn't really 
help for someone whose traditional math skills sort of run dry much past 
the level of algebra (and who finds things like set theory to not really 
make any sense, where these classes like to use it like gravy they put 
on everything... in a class about SQL, yes, set theory is mentioned 
there as well, and put up on the board, but at least for that class, it was 
almost never mentioned again once the actual SQL part got going and the 
teacher made his way past the select statement).


along with programming classes which might leave a person for the 
first few semesters using pointy-clicky graphical things, and drawing 
flowcharts in Visio or similar (and/or writing out desk checks on paper).


now, how might it be better taught in schools?...
I don't know.


maybe something that up front goes into the basic syntax and behavior of 
the language, then has people go write stuff, and is likewise maybe 
taught starting earlier.


for example, I started learning programming in elementary school (on my 
own), and others could probably do likewise.



classes could maybe teach from a similar basis: like, here is the 
language, and here is what you can type to start making stuff happen, 
... (with no flowcharting, desk-checks, or set-notation, anywhere to be 
seen...).


the rest then is basically climbing up the tower and learning about 
various stuff...

like, say, if there were a semester-long class for the OpenGL API, ...



Imagine if some variant of Logo had been built in, that allowed access to the 
machine code subroutines in the way BASIC did...


could be nifty.
I don't really think the problem is as much about language though, as 
much as it is about disinterest + perceived difficulty + lack of 
sensible education strategies + ...




Regards,
Iian


Sent from my iPhone

On 15/07/2012, at 7:41 AM, Miles Fidelman mfidel...@meetinghouse.net wrote:


Ivan Zhao wrote:

45 years after Engelbart's demo, we have a read-only web and Microsoft Word 2011, a gulf between 
users and programmers that can't be wider, and the scariest part is that 
most people have been indoctrinated long enough to realize there could be alternatives.

Naturally, this is just history repeating itself (a la pre-Gutenberg scribes, 
Victorian plumbers). But my question is, what can we learn from these 
historical precedences, in order to to consciously to design our escape path. A 
revolution? An evolution? An education?

HyperCard meets the web + P2P?

--
In theory, there is no difference between theory and practice.
In practice, there is. -- Yogi Berra

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 9:19 AM, Randy MacDonald wrote:

On 6/10/2012 1:15 AM, BGB wrote:
meanwhile, I have spent several days on-off pondering the mystery of 
if there is any good syntax (for a language with a vaguely C-like 
syntax), to express the concept of execute these statements in 
parallel and continue when all are done.

I believe that the expression in Dyalog APL is:

⍎¨statements

or

{execute}{spawn}{each}statements.



I recently thought about it off-list, and came up with a syntax like:
async! {A}{B}{C}

but, decided that this isn't really needed at the more moment, and is a 
bit extreme of a feature anyways (and would need to devise a mechanism 
for implementing a multi-way join, ...).


actually, probably in my bytecode it would look something like:
mark
mark; push A; close; call_async
mark; push B; close; call_async
mark; push C; close; call_async
multijoin

(and likely involve adding some logic into the green-thread scheduler...).


ended up basically opting in this case for something simpler which I had 
used in the past:
callback events on timers. technically, timed callbacks aren't really 
good, but they work well enough for things like animation tasks, ...
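
roughly the sort of thing meant, as a toy sketch in C (made-up names, 
busy-polling for simplicity; not the actual scheduler):

#include <stdio.h>
#include <time.h>

/* a tiny scheduler polls the clock and fires callbacks whose deadline
 * has passed -- "good enough" for animation-style tasks */
typedef struct {
    double when;                 /* deadline, in seconds */
    void (*fn)(const char *);
    const char *arg;
    int fired;
} timer_event;

/* clock() is CPU time, which tracks wall time closely in this busy loop */
static double now(void) { return (double)clock() / CLOCKS_PER_SEC; }

static void say(const char *msg) { printf("%s\n", msg); }

int main(void) {
    double t0 = now();
    timer_event ev[2] = {
        { t0 + 0.5, say, "half a second", 0 },
        { t0 + 1.0, say, "one second",    0 },
    };
    int pending = 2;
    while (pending > 0) {
        for (int i = 0; i < 2; i++) {
            if (!ev[i].fired && now() >= ev[i].when) {
                ev[i].fn(ev[i].arg);
                ev[i].fired = 1;
                pending--;
            }
        }
    }
    return 0;
}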


but, I may still need to think about it.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 10:05 AM, Randy MacDonald wrote:
@BGB, if the braces around the letters defer execution, as my 
memories of Perl confirm, this is perfect.  With APL, quoting an 
expression accomplishes the same end: '1+1'




no, the braces indicate a code block (in statement context), and it is 
the "async" keyword which indicates that there is deferred execution. 
(in my language, quoting indicates symbols or strings, as in "this is a 
string", 'a', or 'single-quoted string', where "a" is always a string, 
but 'a' is a character-literal).


in an expression context, the braces indicate creation of an ex-nihilo 
object, as in {x: 3, y: 4}.


the language sort-of distinguishes between statements and expressions, 
but this is more relaxed than in many other languages (it is more built 
on context than on a strict syntactic divide, and in most cases an 
explicit return is optional since any statement/expression in tail 
position may implicitly return a value).



the letters in this case were just placeholders for the statements which 
would go in the blocks.


for example:
if(true)
{
    printf("A\n");
    sleep(1000);
    printf("B\n");
    sleep(1000);
}
printf("Done\n");

executes the print statements synchronously, causing the thread to sleep 
for 1s in the process (so, "Done" is printed 1s after "B").


and, with a plain async keyword:
async {
    sleep(1000);
    printf("A\n");
}
printf("Done\n");

will print "Done" first, and then print "A" about 1 second later (since 
the block is folded into another thread).


technically, there is another operation, known as a join.

var a = async { ... };
...
var x = join(a);

where the join() will block until the given thread has returned, and 
return the return value from the thread.
generally though, a join in this form only makes sense with a single 
argument (and would be implemented in the VM using a special bytecode op).


an extension would be to implicitly allow multiple joins, as in:
join(a, b, c);//wait on 3 threads
except, now, the return value doesn't make much sense anymore, and likewise:
join(
async{A},
async{B},
async{C});
is also kind of ugly.

in this case, the syntax:
async! {A}{B}{C};
although, this could also work:
async! {A}, {B}, {C};

either would basically mean async with join, and essentially mean 
something similar to the 3-way join (basically, as syntax sugar). it may 
also imply we don't really care what the return value is.


basically, the "!" suffix has ended up on several of my keywords to 
indicate alternate forms, for example: "a as int" and "a as! int" will 
have slightly different semantics (the former will return null if the 
cast fails, and the latter will throw an exception).
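going back to the async-block/join part: a rough Python equivalent of 
those semantics (each async block runs in its own thread, and join() 
blocks and hands back the block's return value; the names here are made 
up for illustration, this is not the actual VM mechanism):

import threading, time

def spawn(block):
    # start 'block' in another thread; returns a handle usable with join_value()
    result = {}
    def runner():
        result["value"] = block()
    t = threading.Thread(target=runner)
    t.start()
    return (t, result)

def join_value(handle):
    t, result = handle
    t.join()
    return result.get("value")    # the block's return value, like join(a)

a = spawn(lambda: (time.sleep(1), print("A"), 42)[-1])
print("Done")                     # printed first; "A" follows about a second later
print(join_value(a))              # blocks until the thread returns, then prints 42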



but, since I got to thinking about it again, I started writing up more 
of the logic for this (adding multiway join logic, ...).





On another note, I agree with the thesis that OO is just message passing:

  aResult ? someParameters 'messageName' to anObject ?? so, once 
'to' is defined, APL does OO.


I was thinking 'new' didn't fit, but

   'new' to aClass

convinced me otherwise.

It also means that 'object oriented language' is a category error.



my language is a bit more generic, and loosely borrows much of its 
current syntax from JavaScript and ActionScript.


however, it has a fair number of non-JS features and semantics as 
well.
it is hardly an elegant, cleanly designed, or minimal language, but it 
works, and is a design more based on being useful to myself.




On 6/16/2012 11:40 AM, BGB wrote:


I recently thought about it off-list, and came up with a syntax like:
async! {A}{B}{C}



--
---
|\/| Randy A MacDonald   | If the string is too tight, it will snap
|\\|array...@ns.sympatico.ca|   If it is too loose, it won't play...
  BSc(Math) UNBF '83  | APL: If you can say it, it's done.
  Natural Born APL'er | I use Real J
  Experimental webserver http://mormac.homeftp.net/
-NTP{ gnat }-


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc




Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 11:36 AM, Randy MacDonald wrote:
@BGB, by the 'same end' I meant transforming a statement into something 
that a flow control operator can act on, like if () {...} else {}. The 
domain of the execute operator in APL is quoted strings.  I did not 
mean that the 'same end' was allowing asynchronous execution.




side note:
a lot of how this is implemented came from how it was originally 
designed/implemented.


originally, the main use of the call_async opcode was not for async 
blocks, but rather for explicit asynchronous function calls:
foo!(...);//calls function, doesn't wait for return (return value is 
a thread-handle).

likewise:
join(foo!(...));
would call a function asynchronously, and join against the result 
(return value).


async was also later added as a modifier:
async function bar(...) { ... }

where the function will be called asynchronously by default:
bar(...);//will perform an (implicit) async call

for example, it was also possible to use a lot of this to pass messages 
along channels:

chan!(...);//send a message, don't block for receipt.
chan(...);//send a message, blocking (would wait for other end to join)
join(chan);//get message from channel, blocks for message

a lot of this though was in the 2004 version of the language (the VM was 
later re-implemented, twice), and some hasn't been fully reimplemented 
(the 2004 VM was poorly implemented and very slow).
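for what it's worth, the channel behavior above maps fairly directly onto 
a queue with a completion count; a rough Python sketch of those three 
operations (an approximation for illustration, not the 2004 VM's actual 
implementation):

import queue, threading

class Channel:
    def __init__(self):
        self._q = queue.Queue()

    def send_async(self, msg):     # like chan!(...): send, don't wait for receipt
        self._q.put(msg)

    def send_sync(self, msg):      # like chan(...): send, wait for the other end
        self._q.put(msg)
        self._q.join()             # blocks until the receiver marks it handled

    def receive(self):             # like join(chan): block until a message arrives
        msg = self._q.get()
        self._q.task_done()
        return msg

chan = Channel()
threading.Thread(target=lambda: print("got:", chan.receive())).start()
chan.send_sync("hello")            # returns once the receiver has taken the message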


the async-block syntax was added later, and partly built on the concept 
of async calls.



but, yeah, probably a lot of people here have already seen stuff like 
this before.






Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 1:39 PM, Wesley Smith wrote:

If things are expanding then they have to get more complex, they encompass
more.

Aside from intuition, what evidence do you have to back this statement
up?  I've seen no justification for this statement so far.  Biological
systems naturally make use of objects across vastly different scales
to increase functionality with a much less significant increase in
complexity.  Think of how early cells incorporated mitochondria whole
hog to produce a new species.


in code, the latter example is often called "copy / paste".
some people demonize it, but if a person knows what they are doing, it 
can be used to good effect.


a problem is partly how exactly one defines complex:
one definition is in terms of visible complexity, where basically 
adding a feature causes code to become harder to understand, more 
tangled, ...


another definition, apparently more popular among programmers, is to 
simply obsess on the total amount of code in a project, and just 
automatically assume that a 1 Mloc project is much harder to understand 
and maintain than a 100 kloc project.


if the difference is that the smaller project consists almost entirely 
of hacks and jury-rigging, it isn't necessarily much easier to understand.


meanwhile, building abstractions will often increase the total code size 
(IOW: adding complexity), but consequently make the code easier to 
understand and maintain (reducing visible complexity).


often the code using an abstraction will be smaller, but usually adding 
an abstraction will add more total code to the project than that saved 
by the code which makes use of it (except past a certain point, namely 
where the redundancy from the client code will outweigh the cost of the 
abstraction).
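put as arithmetic: if the abstraction itself costs roughly A lines and 
saves s lines at each of its n call sites, the project only gets smaller 
once n*s exceeds A. a trivial illustration in Python (numbers made up):

def total_loc(abstraction_loc, per_use_loc, saved_per_use, uses):
    # with the abstraction: pay for it once, each call site gets cheaper
    with_abstraction = abstraction_loc + uses * (per_use_loc - saved_per_use)
    without_abstraction = uses * per_use_loc
    return with_abstraction, without_abstraction

print(total_loc(300, 50, 20, 10))   # (600, 500): too few uses, project got bigger
print(total_loc(300, 50, 20, 30))   # (1200, 1500): past break-even, it pays off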



for example:
MS-DOS is drastically smaller than Windows;
but, if most of what we currently have on Windows were built directly on 
MS-DOS (with nearly every app providing its own PMode stuff, driver 
stack, ...), then the total wasted HD space would likely be huge.


and, developing a Windows-like app on Windows is much less total effort 
than doing similar on MS-DOS would be.




Also, I think talking about minimum bits of information is not the
best view onto the complexity problem.  It doesn't account for
structure at all.  Instead, why don't we talk about Gregory Chaitin's
[1] notion of a minimal program.  An interesting biological parallel
to compressing computer programs can be found in looking at bacteria
DNA.  For bacteria near undersea vents where it's very hot and genetic
code transcriptions can easily go awry due to thermal conditions, the
bacteria's genetic code has evolved into a compressed form that reuses
chunks of itself to express the same features that would normally be
spread out in a larger sequence of DNA.


yep.

I have sometimes wondered what an organism which combined most of the 
best parts of what nature has to offer would look like (an issue seems 
to be that most major organisms seem to be more advanced in some ways 
and less advanced in others).




wes

[1] http://www.umcs.maine.edu/~chaitin/
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc




Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 2:20 PM, Miles Fidelman wrote:

BGB wrote:


a problem is partly how exactly one defines complex:
one definition is in terms of visible complexity, where basically 
adding a feature causes code to become harder to understand, more 
tangled, ...


another definition, apparently more popular among programmers, is to 
simply obsess on the total amount of code in a project, and just 
automatically assume that a 1 Mloc project is much harder to 
understand and maintain than a 100 kloc project.


And there are functional and behavioral complexity - i.e., REAL 
complexity, in the information theory sense.


I expect that there is some correlation between minimizing visual 
complexity and lines of code (e.g., by using domain specific 
languages), and being able to deal with more complex problem spaces 
and/or develop more sophisticated approaches to problems.




a lot depends on what code is being abstracted, and how much code can be 
reduced by how much.


if the DSL makes a lot of code a lot smaller, it will have a good effect;
if it only makes a little code only slightly smaller, it may make the 
total project larger.



personally, I tend not to worry too much about total LOC, and am more 
concerned with how much personal effort is required (to 
implement/maintain/use it), and how well it will work (performance, 
memory use, reliability, ...).


but, I get a lot of general criticism for how I go about doing things...


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread BGB

On 6/14/2012 10:19 PM, John Zabroski wrote:


Folks,

Arguing technical details here misses the point. For example, a 
different conversation can be started by asking "Why does my web 
hosting provider say I need an FTP client?" Already technology is way 
too much in my face, and I hate seeing programmers blame their tools 
rather than their misunderstanding of people.


Start by asking yourself how you would build these needs from scratch 
to bootstrap something like the Internet.


What would a web browser look like if the user didn't need a separate 
program to put data somewhere on their web server and could just use 
one uniform mechanism? Note I am not getting into nice-to-have 
features like resumption of paused uploads due to weak or episodic 
connectivity, because that too is basically a technical problem -- and 
it is not regarded as academically difficult either. I am simply 
taking one example of how users are forced to work today and asking 
"why not something less technical?" All I want to do is upload a file, 
and yet I have all these knobs to tune and things to install, and 
none of it takes my work context into consideration.




idle thoughts:
there is Windows Explorer, which can access FTP;
would be better if it actually remembered login info, had automatic 
logic, and could automatically resume uploads, ...


but, the interface is nice, as an FTP server looks much like a 
directory, ...



also, at least in the past, pretty much everything *was* IE:
could put HTML on the desktop, in directories (directory as webpage), ...
but most of this went away AFAICT (then again, not really like IE is 
good).


maybe, otherwise, the internet would look like local applications or 
similar. they can sit on desktop, and maybe they launch windows. IMHO, I 
don't as much like tabs, as long ago Windows basically introduced its 
own form of tabs:

the Windows taskbar.

soon enough, it added another nifty feature:
it lumped various instances of the same program into popup menus.


meanwhile, browser tabs are like Win95 all over again, with the thing 
likely to experience severe lag whenever more than a few pages are open 
(and often have responsiveness and latency issues).


better maybe if more of the app ran on the client, and if people would 
use more asynchronous messages (rather than request/response).


...

so, then, webpages could have a look and feel more like normal apps.




Why do I pay even $4 a month for such crappy service?

On Jun 11, 2012 8:17 AM, Tony Garnock-Jones 
tonygarnockjo...@gmail.com mailto:tonygarnockjo...@gmail.com wrote:


On 9 June 2012 22:06, Toby Schachman t...@alum.mit.edu
mailto:t...@alum.mit.edu wrote:

Message passing does not necessitate a conceptual dependence on
request-response communication. Yet most code I see in the
wild uses
this pattern.


Sapir-Whorf strikes again? ;-)

I rarely
see an OO program where there is a community of objects who
are all
sending messages to each other and it's conceptually ambiguous
which
object is in control of the overall system's behavior.


Perhaps you're not taking into account programs that use the
observer/observable pattern? As a specific example, all the uses
of the dependents protocols (e.g. #changed:, #update:) in
Smalltalk are just this. In my Squeak image, there are some 50
implementors of #update: and some 500 senders of #changed:.

In that same image, there is also protocol for events on class
Object, as well as an instance of Announcements loaded. So I think
what you describe really might be quite common in OO /systems/,
rather than discrete programs.

All three of these aspects of my Squeak image - the dependents
protocols, triggering of events, and Announcements - are
encodings of simple asynchronous messaging, built using the
traditional request-reply-error conversational pattern, and
permitting conversational patterns other than the traditional
request-reply-error.

As an aside, working with such synchronous simulations of
asynchronous messaging causes all sorts of headaches, because
asynchronous events naturally involve concurrency, and the
simulation usually only involves a single process dispatching
events by synchronous procedure call.

Regards,
  Tony
-- 
Tony Garnock-Jones

tonygarnockjo...@gmail.com mailto:tonygarnockjo...@gmail.com
http://homepages.kcbbs.gen.nz/tonyg/

___
fonc mailing list
fonc@vpri.org mailto:fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc





Re: [fonc] The Web Will Die When OOP Dies

2012-06-09 Thread BGB

On 6/9/2012 9:28 PM, Igor Stasenko wrote:

While I agree with the guy's bashing of HTTP,
the second part of his talk is complete bullshit.


IMO, he did raise some valid objections regarding JS and similar though 
as well.


these are also yet more areas though where BS differs from JS: it uses 
different semantics for == and === (in BS, == compares by value 
for compatible types, and === compares values by identity).
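loosely, it is the same line Python draws between value equality and 
identity (just an analogy for illustration, not BS's exact rules):

a = [1, 2, 3]
b = [1, 2, 3]
print(a == b)    # True: compared by value, like "==" on compatible types
print(a is b)    # False: two distinct objects, like "===" comparing identity
c = a
print(a is c)    # True: same object, so identical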



granted, yes, bashing OO isn't really called for, or at least not 
without making more specific criticisms.


for example, I am not necessarily a fan of Class/Instance OO or deeply 
nested class hierarchies, but I really do like having objects to hold 
things like fields and methods, but don't necessarily like it to be a 
single form with a single point of definition, ...


would this mean I am for or against OO?...

I had before been accused of being anti-OO because I had asserted that, 
rather than making deeply nested class hierarchies, a person could 
instead use some interfaces.


the problem is partly that OO often means one thing for one person and 
something different for someone else.




He mentions a kind of 'signal processing' paradigm,
but we already have it: message passing.
Before i learned smalltalk, i was also thinking that OOP is about
structures and hierarchies, inheritance.. and all this
private/public/etc etc bullshit..
After i learned smalltalk , i know that OOP it is about message
passing. Just it. Period.
And no other implications: the hierarchies and structures is
implementation specific, i.e.
it is a way how an object handles the message, but it can be
completely arbitrary.

I think that indeed, it is a big industry's fault being unable to
grasp simple and basic idea of message passing
and replace it with horrible crutches with tons of additional
concepts, which makes it hard
for people to learn (and therefore be effective with OOP programming).


yeah.


although a person may still implement a lot of this for the sake of 
convention, partly because it is just sort of expected.


for example, does a language really need classes or instances (vs, say, 
cloning or creating objects ex-nihilo)? not really.


then why have them? because people expect them; they can be a little 
faster; and they provide a convenient way to define and answer the 
question "is X a Y?", ...


I personally like having both sets of options though, so this is 
basically what I have done.




meanwhile, I have spent several days on-off pondering the mystery of 
whether there is any good syntax (for a language with a vaguely C-like 
syntax) to express the concept of "execute these statements in parallel 
and continue when all are done".


practically, I could allow doing something like:
join( async{A}, async{B}, async{C} );
but this is ugly (and essentially abuses the usual meaning of join).

meanwhile, something like:
do { A; B; C; } async;
would just be strange, and likely defy common sensibilities (namely, in 
that the statements would not be executed sequentially, in contrast to 
pretty much every other code block).


I was left also considering another possibly ugly option:
async![ A; B; C; ];
which just looks weird...

for example:
async![
    { sleep(1000); printf("A, "); };
    { sleep(2000); printf("B, "); };
    { sleep(3000); printf("C, "); }; ];
printf("Done\n");

would print "A, B, C, Done", with a 1s delay before each letter.
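the same behavior sketched with Python's asyncio, for comparison 
(gather() being the "continue when all are done" part; just an 
illustration, not the proposed syntax):

import asyncio

async def delayed(seconds, text):
    await asyncio.sleep(seconds)
    print(text + ", ", end="", flush=True)

async def main():
    # the implicit join of async![ ... ]: wait for all three blocks
    await asyncio.gather(delayed(1, "A"), delayed(2, "B"), delayed(3, "C"))
    print("Done")

asyncio.run(main())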


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] iconic representations of powerful ideas

2012-06-04 Thread BGB

On 6/4/2012 6:48 AM, Miles Fidelman wrote:

BGB wrote:


and, recently devised a hack for creating component layered JPEG 
images, or, basically, a hack to allow creating JPEGs which also 
contained alpha-blending, normal maps, specular maps, and luma maps 
(as an essentially 16-component JPEG image composed of multiple 
component layers, with individual JPEG images placed end-to-end with 
marker tags between them to mark each layer).



dunno if anyone would find any of this all that interesting though.


well, I'd certainly be interested in seeing that hack!

Miles Fidelman



from a comment in my JPEG code:
--
BGB Extensions:
APP11: BGBTech Tag
ASCIZ TagName
Tag-specific data until next marker.

AlphaColor:
AlphaColor
RGBA as string (red green blue alpha).

APP11 markers may indicate component layer:
FF,APP11,CompLayer\0, layername:ASCIZ
RGB: Base RGB
XYZ: Normal XYZ (XZY ordering)
SpRGB: Specular RGB
DASe: Depth, Alpha, Specular-Exponent
LuRGB: Luma RGB
Alpha: Mono alpha layer

Component Layouts:
3 component: (no marker, RGB)
4 component: RGB+Alpha
7 component: RGB+Alpha+LuRGB
8 component: RGB+XYZ+DASe
12 component: RGB+XYZ+SpRGB+DASe
16 component: RGB+XYZ+SpRGB+DASe+LuRGB
--

AlphaColor was a prior extension, basically for in-image chroma-keys.
the RGB color specifies the color to be matched, and A specifies how 
strongly the color is matched (IIRC, it is the distance to Alpha=128 
or so).


it was imagined that this could be calculated dynamically per-image, but 
doing so is costly, so typically a fixed color is specified during 
encoding (such as cyan or magenta).



CompLayer marks the component layers.
currently, this tag precedes the SOI tags.

example:
FF,APP11, CompLayer\0, RGB\0
FF,SOI
...
FF,EOI
FF,APP11, CompLayer\0, XYZ\0
FF,SOI
...
FF,EOI
...


basically:
most component-layers are generic 4:2:0 RGB/YUV layers (except the mono 
alpha layer, which is monochrome).


the layers may share the same Huffman and Quantization tables (by having 
only the first layer encode them).


the RGB layer always comes first, so a decoder that doesn't know of the 
extension, will just see the basic RGB components. also all layers are 
the same resolution.



this is hardly an ideal design, but was intended more to allow a 
simple encoder/decoder tweak to handle it (currently, it is 
encoded/decoded by a function which may accept 4 RGBA buffers, and may 
shuffle things around slightly to encode them into the layers).


the in-program layers are:
RGBA;
XYZD ('D' may be used for parallax mapping, and represents the relative 
depth of the pixel);
Specular (RGBe), this basically gives the reflection color and shininess 
of surface pixels;

Luma (RGBA).


so, yes, it is all a bit of a hack...
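for what it's worth, pulling the layers back apart is just splitting the 
byte stream on those markers; a rough Python sketch (assuming the exact 
layout described in the comment above: an APP11 "CompLayer" segment 
naming each layer, followed by an ordinary SOI..EOI image; this is not 
the actual decoder, and the file name below is made up):

SOI, EOI, APP11 = b"\xff\xd8", b"\xff\xd9", b"\xff\xeb"
TAG = b"CompLayer\x00"

def split_component_layers(data):
    layers = {}
    pos = 0
    while True:
        m = data.find(APP11, pos)
        if m < 0:
            break
        t = data.find(TAG, m + 2, m + 20)   # tag name shortly after the marker
        if t < 0:
            pos = m + 2
            continue
        name_end = data.index(b"\x00", t + len(TAG))
        name = data[t + len(TAG):name_end].decode("ascii")   # "RGB", "XYZ", ...
        soi = data.index(SOI, name_end)
        eoi = data.index(EOI, soi) + 2
        layers[name] = data[soi:eoi]        # one SOI..EOI stream per layer
        pos = eoi
    return layers

# layers = split_component_layers(open("some_texture.jpg", "rb").read())
# base_rgb, normals = layers.get("RGB"), layers.get("XYZ")

(with the caveat that, since later layers may omit their Huffman and 
Quantization tables, the split-out streams aren't necessarily decodable 
on their own.)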


there was also a little fudging to my AVI code to allow these videos 
to be used for surface video-mapping (basically, the video is streamed 
into all 4 layers at the same time).


example use-cases of something like this would likely be things like 
making animated textures which resemble moving parts (such as metal 
gears and blinking lights), or alternatively as a partial alternative to 
using 3D modeled character faces (the face moving is really the texture 
and animation frames, rather than 3D geometry), however presently this 
is likely a better fit for animated textures than for video-maps.


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] iconic representations of powerful ideas

2012-06-03 Thread BGB

On 6/3/2012 8:31 PM, Shawn Morel wrote:

I'm a very visual learner / thinker. I usually find it mentally painful (yes 
brow furrowing, headache inducing) to think of hard (distant) ideas until I can 
find an image in my mind's eye. Understood that not everyone thinks like this :)


I guess I often think visually as well, though with both a lot of 
pictures and text (but, how does one really know for certain?...).


I also tend to be a bit of a concrete thinker (or, a sensing type in 
psychology terms).




I was re-reading the original NSF grant proposal, in particular after reading 
this passage:

Key to the tractability of this approach is the separation of the kernel into two 
complementary facets: representation of executable specifications (structures of 
message-passing objects) forming symbolic expressions and the meaning of those 
specifications (interpretation of their structure) that yields concrete behavior.

I was gliding along the surface of a dynamically shifting Klein bottle.

Curious what other people might think.


personally I don't much understand the core goals of the project all 
that well either.


I lurk some, and respond if something interesting shows up, and 
sometimes make a fool of myself in the process, but oh well...


as well, it sometimes seems to me like maybe I am some sort of 
generalized antagonist for many people or something, at least given 
how many often pointless arguments seem to pop up (in general).



but, thinking of visual things:

I had recently looked over the SWF spec, and noticed that to some 
degree, at this level Flash looks a good deal like some sort of 
animated photoshop-like thing (both seem to be composed of stacks of 
layers and similar). or, at least, I found it kind of interesting.


then was recently left dealing with the idea of systems being driven 
from the top-down, rather than how I am more familiar with them in 
games: basically as interacting finite-state-machines (although top-down 
wouldn't likely replace FSMs, but they could be used in combination).



and, recently devised a hack for creating component layered JPEG 
images, or, basically, a hack to allow creating JPEGs which also 
contained alpha-blending, normal maps, specular maps, and luma maps (as 
an essentially 16-component JPEG image composed of multiple component 
layers, with individual JPEG images placed end-to-end with marker tags 
between them to mark each layer).


the main purpose was mostly though that I could have more advanced 
video-mapped surfaces (and, for the most part, I use MJPEG AVIs for 
these). there wasn't any other clearly better way.



among other things...

dunno if anyone would find any of this all that interesting though.


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The problem with programming languages

2012-05-09 Thread BGB

On 5/9/2012 12:13 AM, Jarek Rzeszótko wrote:

There is an excellent video by Feynman on a related note:

http://www.youtube.com/watch?v=Cj4y0EUlU-Y

A damn good way to spend six minutes IMO...



yep.


I was left previously trying to figure out whether thinking using text 
was more linguistic/verbal or visual thinking, given it doesn't really 
match well with either:
verbal thinking is generally described as people thinking with words and 
sounds;

visual thinking is generally described as pictures / colors / emotions / ...

so, one can wonder, where does text fit?...

granted, yes, there is some mode-changing as well, as not everything 
seems to happen the same way all the time, and I can often push things 
around if needed (natural language can alternate between auditory and 
textual forms, ...).


I have determined though that I can't really read and also visualize 
the story (apparently, many other people do this), as all I can really 
see at the time is the text. probably because my mind is more busy 
trying to buffer up the text, and the space is already used up and so 
can't be used for drawing pictures (unless I use up a lot of the space 
for drawing a picture, in which case there isn't much space for holding 
text, ...).


I can also write code while also listening to someone talk, such as in a 
technical YouTube video or similar, since the code and person talking 
are independent (and usually relevant visuals are sparse and can be 
looked at briefly). but, I can't compose an email and carry on a 
conversation with someone at the same time, because they interfere (but 
I can often read and carry on a conversation though, though it is more 
difficult to entirely avoid topical bleed-over).



despite thinking with lots of text, I am also not very good at math, as 
I still tend to find both arithmetic and symbolic manipulation type 
tasks as fairly painful (but, these are used heavily in math classes).


when actually working with math, in a form that I understand, it is 
often more akin to wireframe graphics. for example, I can see the 
results of a dot-product or cross-product (I can see the orthogonal 
cross-bars of a cross-product, ...), and can mentally let the system 
play out (as annotated/diagrammed 3D graphics) and alter the results 
and see what happens (and the math is the superstructure of lines and 
symbols interconnecting the objects).


yet, I can't usually do this effectively in math classes, and usually 
have to resort to much less effective strategies, such as trying to 
convert the problem into a C-like form, and then evaluating this 
in-head, to try to get an answer. similarly, this doesn't work unless I 
can figure out an algorithm for doing it, or just what sort of thing the 
question is even asking for, which is itself often problematic.


another irony is that I don't really like flowcharts, as I personally 
tend to see them as often a very wasteful/ineffective way of 
representing many of these sorts of problems. despite both being 
visually-based, my thinking is not composed of flow-charts (and I much 
prefer more textual formats...).



or such...



Cheers,
Jarosław Rzeszótko


Re: [fonc] The problem with programming languages

2012-05-08 Thread BGB

On 5/8/2012 2:56 PM, Julian Leviston wrote:

Isn't this simply a description of your thought clearing process?

You think in English... not Ruby.

I'd actually hazard a guess and say that really, you think in a 
semi-verbal semi-physical pattern language, and not a very well formed 
one, either. This is the case for most people. This is why you have to 
write hard problems down... you have to bake them into physical form 
so you can process them again and again, slowly developing what you 
mean into a shape.




in my case I think my thinking process is a good deal different.

a lot more of my thinking tends to be a mix of visual/spatial thinking, 
and thinking in terms of glyphs and text (often source-code, and often 
involving glyphs and traces which I suspect are unique to my own 
thoughts, but are typically laid out in the same character cell grid 
as all of the text).


I guess it could be sort of like if text were rammed together with 
glyphs and PCB traces or similar, with the lines weaving between the 
characters, and sometimes into and out of the various glyphs (many of 
which often resemble square boxes containing circles and dots, sometimes 
with points or corners, and sometimes letters or numbers, ...).


things may vary somewhat, depending on what I am thinking about at the time.


my memory is often more like collections of images, or almost like 
pages in a book, with lots of information drawn onto them, usually in 
a white-on-black color-scheme. there is typically very little color or 
movement.


sometimes it may include other forms of graphics, like pictures of 
things I have seen, objects I can imagine, ...



thoughts may often use natural-language as well, in a spoken-like form, 
but usually this is limited either to when talking to people or when 
writing something (if I am trying to think up what I am writing, I may 
often hear echoes of various ways the thought could be expressed, and 
of text as it is being written, ...). reading often seems to bypass this 
(and go more directly into a visual form).



typically, thinking about programming problems seems to be more like 
being in a storm of text flying all over the place, and then bits of 
code flying together from the pieces.


if any math is involved, often any relevant structures will be 
themselves depicted visually, often in geometry-like forms.


or, at least, this is what it looks like, I really don't actually know 
how it all works, or how the thoughts themselves actually work or do 
what they do.


I think all this counts as some form of visual thinking (though I 
suspect probably a non-standard form based on some stuff I have read, 
given that colors, movement, and emotions don't really seem to be a 
big part of this).



or such...



On 09/05/2012, at 2:20 AM, Jarek Rzeszótko wrote:

Example: I have been programming in Ruby for 7 years now, for 5 years 
professionally, and yet when I face a really difficult problem the 
best way still turns out to be to write out a basic outline of the 
overall algorithm in pseudo-code. It might be a personal thing, but 
for me there are just too many irrelevant details to keep in mind 
when trying to solve a complex problem using a programming language 
right from the start. I cannot think of classes, method names, 
arguments etc. until I get a basic idea of how the given computation 
should work like on a very high level (and with the low-level details 
staying fuzzy). I know there are people who feel the same way, 
there was an interesting essay from Paul Graham followed by a very 
interesting comment on MetaFilter about this:




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc




Re: [fonc] Quiet and light weight devices

2012-04-23 Thread BGB

On 4/21/2012 1:57 PM, Andre van Delft wrote:

TechCrunch has an interview with Linus Torvalds. He uses a MacBook Air (iOS, 
BTW):


sure it is not OS X?...

although, it is kind of funny that he would be using a computer not 
running his own OS...




[Start of Quote]
I have to admit being a bit baffled by how nobody else seems to have done 
what Apple did with the Macbook Air – even several years after the first 
release, the other notebook vendors continue to push those ugly and *clunky* 
things. Yes, there are vendors that have tried to emulate it, but usually 
pretty badly. I don’t think I’m unusual in preferring my laptop to be thin and 
light.

Btw, even when it comes to Apple, it’s really just the Air that I think is 
special. The other apple laptops may be good-looking, but they are still the 
same old clunky hardware, just in a pretty dress.

I’m personally just hoping that I’m ahead of the curve in my strict requirement 
for “small and silent”. It’s not just laptops, btw – Intel sometimes gives me 
pre-release hardware, and the people inside Intel I work with have learnt that 
being whisper-quiet is one of my primary requirements for desktops too. I am 
sometimes surprised at what leaf-blowers some people seem to put up with under 
their desks.

I want my office to be quiet. The loudest thing in the room – by far – should 
be the occasional purring of the cat. And when I travel, I want to travel 
light. A notebook that weighs more than a kilo is simply not a good thing 
(yeah, I’m using the smaller 11″ macbook air, and I think weight could still be 
improved on, but at least it’s very close to the magical 1kg limit).

[End of Quote]
http://techcrunch.com/2012/04/19/an-interview-with-millenium-technology-prize-finalist-linus-torvalds/

I agree with Linus, especially on the importance of silence (I don't travel 
that much yet). I intend never to buy a noisy Windows PC any more. My 1-year-old 
4GB iMac is pretty silent, but with every tab I open in Chrome I hear some 
soft rumble that irritates me heavily. My iPad is nicely quiet when it should 
be.


well, it would be an improvement...

I suspect it is possible that PC fan noise may be a source of mild 
tinnitus in my case.
it is not noticeable when near a computer, but often when away from a 
computer or somewhere quiet, there is a ringing noise oddly similar to 
that of PC fans.


otherwise, I don't usually mind the fans all that much (it is a 
tradeoff...).



but, I once got a fan with a fairly high CFM rating (I forget now the 
exact number, something like 1600 CFM or something...), but ended up not 
using it, as the thing sounded more like a vacuum cleaner than a 
normal PC fan (and could also be heard from other rooms in the house).


IIRC, it was in 120mm form (and fairly thick as well), and it had 
double-sided metal grates (and also metal blades IIRC). if powered, it 
was also strong enough to propel itself along on a surface if kept 
upright (but, sadly, not strong enough to lift its own weight, which 
would have been funny...), and IIRC had thicker-than-usual wiring and a 
dedicated molex connector as well (it claimed that molex connector all 
to itself). it was also extra heavy as well.


so, it served more as a novelty than anything else...

funny would have been to tape it onto a CD and have it slide around like 
an airboat or similar, apart from the wire-length issues with a typical PSU.



luckily, fans like this are not exactly standard components...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-09 Thread BGB

On 4/8/2012 8:26 PM, Miles Fidelman wrote:

BGB wrote:

On 4/4/2012 5:26 PM, Miles Fidelman wrote:

BGB wrote:
Not so sure.  Probably similar levels of complexity between a 
military sim. and, say, World of Warcraft.  Fidelity to real-world 
behavior is more important, and network latency matters for the 
extreme real-time stuff (e.g., networked dogfights at Mach 2), but 
other than that, IP networks, gaming class PCs at the endpoints, 
serious graphics processors.  Also more of a need for 
interoperability - as there are lots of different simulations, 
plugged together into lots of different exercises and training 
scenarios - vs. a MMORPG controlled by a single company.




ok, so basically a heterogeneous MMO, and distributed.




well, yes, but I am not entirely sure how many non-distributed 
(single server) MMO's there are in the first place.


presumably, the world has to be split between multiple servers to 
deal with all of the users.


some older MMOs had shards, where users on one server wouldn't be 
able to see what users on a different server were doing, but this is 
AFAIK generally not really considered acceptable in current MMOs 
(hence why the world would be divided up into areas or regions 
instead, presumably with some sort of load-balancing and similar).


unless of course, this is operating under a different assumption of 
what a distributed-system is than one which allows a load-balanced 
client/server architecture.


Running on a cluster is very different between having all the 
intelligence on the individual clients.  As far as I can tell, MMOs by 
and large run most of the simulation on centralized clusters (or at 
least within the vendor's cloud).  Military sims do EVERYTHING on the 
clients - there are no central machines, just the information 
distribution protocol layer.


yes, but there are probably drawbacks with this performance-wise and 
reliability wise.


not that all of the servers need to be run in a single location or be 
owned by a single company, but there are some general advantages to the 
client/server model.





reading some stuff (an overview of the DIS protocol, ...), it 
seems that the level of abstraction is in some ways a bit higher 
(than game protocols I am familiar with); for example, it will 
indicate the entity type in the protocol, rather than, say, the 
name of its 3D model.
Yes.  The basic idea is that a local simulator - say a tank, or an 
airframe - maintains a local environment model (local image 
generation and position models maintained by dead reckoning) - what 
goes across the network are changes to its velocity vector, and 
weapon fire events.  The intent is to minimize the amount of data 
that has to be sent across the net, and to maintain speed of image 
generation by doing rendering locally.




now, why, exactly, would anyone consider doing rendering on the 
server?...


Well, render might be the wrong term here.  Think more about map 
tiling.  When you do map applications, the GIS server sends out map 
tiles.  Similarly, at least some MMOs do most of the scene generation 
centrally.  For that matter, think about moving around Google Earth in 
image mode - the data is still coming from Google servers.


The military simulators come from a legacy of flight simulators - VERY 
high resolution imagery, very fast movement.  Before the simulation 
starts, terrain data and imagery are distributed in advance - every 
simulator has all the data needed to generate an out-the-window view, 
and to do terrain calculations (e.g., line-of-sight) locally.




ok, so sending polygons and images over the net.

so, by "very", is the implication that they are sending large numbers of 
1024x1024 or 4096x4096 texture maps/tiles or similar?...


typically, I do most texture art at 256x256 or 512x512.

but, anyways, presumably JPEG or similar could probably make it work.


ironically, all this leads to more MMOs using client-side physics, 
and more FPS games using server-side physics, with an MMO generally 
having a much bigger problem regarding cheating than an FPS.


For the military stuff, it all comes down to compute load and network 
bandwidth/latency considerations - you simply can't move enough data 
around, quickly enough to support high-res. out-the-window imagery for 
a pilot pulling a 2g turn.  Hence you have to do all that locally.  
Cheating is less of an issue, since these are generally highly managed 
scenarios conducted as training exercises.  What's more of an issue is 
if the software in one sim. draws different conclusions than the 
software in an other sim. (e.g., two planes in a dogfight, each 
concluding that it shot down the other one) - that's usually the 
result of a design bug rather than cheating (though Capt. Kirk's "I 
don't believe in the no-win scenario" line comes to mind).




this is why most modern games use client/server.

some older games (such as Doom-based games) determined things like AI 
behaviors and damage on each

Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-04 Thread BGB

On 4/3/2012 9:29 PM, Miles Fidelman wrote:

BGB wrote:


On 4/3/2012 10:47 AM, Miles Fidelman wrote:


Hah.  You've obviously never been involved in building a CGF 
simulator (Computer Generated Forces) - absolute spaghetti code when 
you have to have 4 main loops, touch 2000 objects (say 2000 tanks) 
every simulation frame.  Comparatively trivial if each tank is 
modeled as a process or actor and you run asynchronously.


I have not encountered this term before, but does it have anything to 
do with an RBDE (Rigid Body Dynamics Engine), or what is often called 
simply a "physics engine"? this would be something like Havok or ODE or 
Bullet or similar.


There is some overlap, but only some - for example, when modeling 
objects in flight (e.g., a plane flying at constant velocity, or an 
artillery shell in flight) - but for the most part, the objects being 
modeled are active, and making decisions (e.g., a plane or tank, with 
a simulated pilot, and often with the option of putting a 
person-in-the-loop).


So it's really impossible to model these things from the outside 
(forces acting on objects), but more from the inside (run 
decision-making code for each object).




fair enough...

but, yes, very often in cases where one is using a physics engine, this 
may be combined with the use of internal logic and forces as well, 
albeit admittedly there is a split:
technically, these forces are applied directly by whatever code is using 
the physics engine, rather than by the physics engine itself.


for example: just because it is a physics engine doesn't mean that it 
necessarily has to be realistic, or that objects can't supply their 
own forces.


I guess, however, that this would be closer to the main server end in 
my case, namely the part that manages the entity system and NPC AIs and 
similar (and, also, the game logic is more FPS style).


still not heard the term CGF before though.


in this case, the basic timestep update is basically to loop over all 
the entities in the scene and call their "think" methods (things like 
AI and animation are generally handled via think methods), and maybe do 
things like updating physics (if relevant), ...


this process is single threaded with a single loop though.

I guess it is arguably event-driven though:
handling timing is done via events (think being a special case);
most interactions between entities involve events as well;
...

many entities and AIs are themselves essentially finite-state-machines.
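roughly, that main loop looks something like this (a Python sketch; the 
entity fields and method names are just placeholders, not the engine's 
actual API):

import time

class Entity:
    def __init__(self, think_interval=0.1):
        self.state = "idle"                  # entities are basically small FSMs
        self.think_interval = think_interval
        self.next_think = 0.0                # when think() should next fire

    def think(self, now):
        # AI / animation state transitions would go here
        if self.state == "idle":
            self.state = "wander"

def run_frames(entities, frames=200, dt=1.0 / 20):   # fixed-frequency virtual timer
    now = 0.0
    for _ in range(frames):
        for e in entities:                   # single thread, single loop
            if now >= e.next_think:
                e.think(now)
                e.next_think = now + e.think_interval
        # physics / movement updates for the same timestep would go here
        now += dt
        time.sleep(dt)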


or such...


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-04 Thread BGB

On 4/4/2012 9:29 AM, Miles Fidelman wrote:

BGB wrote:

On 4/4/2012 6:35 AM, Miles Fidelman wrote:

BGB wrote:


still not heard the term CGF before though.
If you do military simulations, CGF (Computer Generated Forces) and 
SAF (Semi-Automated Forces) are the equivalent terms of art to game 
engine.  Sort of.




military simulations as in RTS (Real Time Strategy) or similar, or 
something different?... (or, maybe even like realistic simulations 
used by actual military, rather than for purposes of gaming?...).


Well, there are really two types of simulations in use in the military 
(at least that I'm familiar with):


- very detailed engineering models of various sorts (ranging from 
device simulations to simulations of say, a sea-skimming missile vs. a 
gattling gun point-defense weapon).  (think MATLAB and SIMULINK type 
models)




don't know much all that much about MATLAB or SIMULINK, but do know 
about things like FEM (Finite Element Method) and CFD (Computational 
Fluid Dynamics) and similar.


(left out a bunch of stuff, mostly about FEM, CFD, and particle systems, 
in games technology and wondering about how some of this stuff compares 
with their analogues as used in an engineering context).



- game-like simulations (which I'm more familiar with): but these are 
serious games, with lots of people and vehicles running around 
practicing techniques, or experimenting with new weapons and tactics, 
and so forth; or pilots training in team techniques by flying missions 
in a networked simulator (and saving jet fuel); or decision makers 
practicing in simulated command posts -- simulators take the form of 
both person-in-the-loop (e.g., flight sim. with a real pilot) and 
CGF/SAF (an enemy brigade is simulated, with information inserted into 
the simulation network so enemy forces show up on radar screens, 
heads-up displays, and so forth)


For more on the latter, start at:

http://en.wikipedia.org/wiki/Distributed_Interactive_Simulation
http://www.sisostds.org/



so, sort of like: this stuff is to gaming what IBM mainframes are to PCs?...

I had mostly heard about military people doing all of this stuff using 
decommissioned vehicles and paintball and similar, but either way.


I guess game-like simulations are probably cheaper.





Wikipedia hasn't been very helpful here regarding a lot of this 
(it doesn't seem to know about most of these terms).


well, it does know about game engines and RTS though.


Maybe check out 
http://www.mak.com/products/simulate/computer-generated-forces.html 
for an example of a CGF.




looked briefly, yes, ok.


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future

2012-04-03 Thread BGB

On 4/3/2012 9:46 AM, Miles Fidelman wrote:

David Barbour wrote:


On Tue, Apr 3, 2012 at 8:25 AM, Eugen Leitl eu...@leitl.org 
mailto:eu...@leitl.org wrote:


It's not just imperative programming. The superficial mode of human
cognition is sequential. This is the problem with all of mathematics
and computer science as well.


Perhaps human attention is basically sequential, as we're only able 
to focus our eyes on one point and use two hands.  But I think humans 
understand parallel behavior well enough - maintaining multiple 
relationships, for example, and predicting the behaviors of multiple 
people.


And for that matter, driving a car, playing a sport, walking and 
chewing gum at the same time :-)




yes, but people have built-in machinery for this, in the form of the 
cerebellum.
relatively little conscious activity is generally involved in these 
sorts of tasks.


if people had to depend on their higher reasoning powers for basic 
movement tasks, people would likely be largely unable to operate 
effectively in basic day-to-day tasks.





If you look at MPI debuggers, it puts people into a whole other
universe of pain than just multithreading.


I can think of a lot of single-threaded interfaces that put people in 
a universe of pain. It isn't clear to me that distribution is at 
fault there. ;)




Come to think of it, tracing flow-of-control through an 
object-oriented system REALLY is a universe of pain (consider the 
difference between a simulation - say a massively multiplayer game - 
where each entity is modeled as an object, with one or two threads 
winding their way through every object, 20 times a second; vs. 
modeling each entity as a process/actor).




FWIW, in general I don't think much about global control-flow.

however, there is a problem with the differences between:
global behavior (the program as a whole);
local behavior (a local collection of functions and statements).

a person may tend to use general fuzzy / intuitive behavior for 
reasoning about the system as a whole, but will typically use fairly 
rigid sequential logic for thinking about the behavior of a given piece 
of code.


there is a problem if the individual pieces of code are no longer 
readily subject to analysis.



the problem I think with multithreading isn't so much that things are 
parallel or asynchronous, but rather that things are very often 
inconsistent.


if two threads try to operate on the same piece of data at the same 
time, this will often create states which would be impossible had either 
been operating on the data individually (and, very often, changes made in 
one thread will not be immediately visible to others, say, because the 
compiler did not actually emit a write of the change back to memory, or 
did not make the other thread reload the variable).


hence, people need things like the volatile modifier, use of atomic 
operations, things like mutexes or synchronized regions, ...
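the classic toy case of that, and the mutex fix, in Python (a sketch; 
CPython's GIL hides a lot of this in practice, but the lost-update 
problem on the unlocked counter is the same idea):

import threading

counter = 0
lock = threading.Lock()

def unsafe_inc(n):
    global counter
    for _ in range(n):
        counter += 1           # read-modify-write; two threads can interleave here

def safe_inc(n):
    global counter
    for _ in range(n):
        with lock:             # the "synchronized region"
            counter += 1

threads = [threading.Thread(target=unsafe_inc, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                 # may come up short of 400000; swap in safe_inc to fix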



this leaves several possible options:
systems go further in this direction, with little expectation of global 
synchronization unless some specific mechanism is used (two threads 
working on a piece of memory may temporarily each see their own local copy);
or, languages/compilers go the other direction, so that one thread 
changing a variable is mandated to be immediately visible to other threads.


one option is more costly than the other.

as-is, the situation seems to be that compilers lean on one side (only 
locally consistent), whereas the HW tries to be globally consistent.


a question, then, is: assuming HW is not kept strictly consistent, how 
best to handle this (regarding both language design and performance)?



however, personally I think abandoning local sequential logic and 
consistency would be a bad move.


I am personally more in favor of message passing, and either the ability 
to access objects synchronously, or to pass messages to the object, which 
may in turn be synchronous.


consider, for example:
class Foo
{
    sync function methodA() { ... }        //synchronous (only one such method executes at a time)
    sync function methodB() { ... }        //synchronous
    async function methodC() { ... }       //asynchronous / concurrent (calls will not block)
    sync async function methodD() { ... }  //synchronous, but calls will not block
}

...
var obj=new Foo();

//thread A
obj.methodA();
//thread B
obj.methodB();

the VM could enforce that the object only executes a single such method 
at a time (but does not globally lock the object, unlike synchronized).


similarly:
//thread A
async obj.methodA();
//thread B
async obj.methodB();

which works similarly, except neither thread blocks (in this case, obj 
functions as a virtual process, and the method call serves more as a 
message pass). note that, if methods are not sync, then they may 
execute concurrently.


note that obj.methodC(); will behave as if the async keyword were 
given (it may be called concurrently). obj.methodD(); will behave as 

[fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-03 Thread BGB
(changed subject, as this was much more about physics simulation than 
about concurrency).


yes, this is a big long personal history dump type thing, please 
ignore if you don't care.



On 4/3/2012 10:47 AM, Miles Fidelman wrote:

David Barbour wrote:


Control flow is a source of much implicit state and accidental 
complexity.


A step processing approach at 20Hz isn't all bad, though, since at 
least you can understand the behavior of each frame in terms of the 
current graph of objects. The only problem with it is that this 
technique doesn't scale. There are easily up to 15 orders of 
magnitude in update frequency between slow-updating and fast-updating 
data structures. Object graphs are similarly heterogeneous in many 
other dimensions - trust and security policy, for example.


Hah.  You've obviously never been involved in building a CGF simulator 
(Computer Generated Forces) - absolute spaghetti code when you have to 
have 4 main loops, touch 2000 objects (say 2000 tanks) every 
simulation frame.  Comparatively trivial if each tank is modeled as a 
process or actor and you run asynchronously.




I have not encountered this term before, but does it have anything to do 
with an RBDE (Rigid Body Dynamics Engine), often simply called a 
physics engine?

this would be something like Havok or ODE or Bullet or similar.

I have written such an engine before, but my effort was single-threaded 
(using a fixed-frequency virtual timer, with time-step subdivision to 
deal with fast-moving objects).


probably would turn a bit messy though if it had to be made internally 
multithreaded (it is bad enough just trying to deal with irregular 
timesteps, blarg...).


however, it was originally considered to potentially run in a separate 
thread from the main 3D engine, but I never really bothered as there 
turned out to not be much point.



granted, one could likely still parallelize it while keeping everything 
frame-locked though, like having the threads essentially just subdivide 
the scene-graph and each work on a certain part of the scene, doing the 
usual thing of all of them predicting/handling contacts within a single 
time step, and then all updating positions in-sync, and preparing for 
the next frame.


in the above scenario, the main cost would likely be how to best go 
about efficiently dividing up work among the threads (the usual strategy 
I use is work-queues, but I have doubts regarding their scalability).


side note:
in my own experience, simply naively handling/updating all objects 
in-sequence doesn't tend to work out very well when mixed with things 
like contact forces (example: check if object can make move, if so, 
update position, move on to next object, ...). although, this does work 
reasonably well for Quake-style physics (where objects merely update 
positions linearly, and have no actual contact forces).


better seems to be (a rough sketch follows this list):
for all moving objects, predict where the object wants to be in the next 
frame;

determine which objects will collide with each other;
calculate contact forces and apply these to objects;
update movement predictions;
apply movement updates.
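
a rough sketch of that ordering in C (hypothetical types, with the 
collision / contact-force math stubbed out; not the actual engine code):

#include <stdio.h>

typedef struct { float pos[3], vel[3], pred[3], force[3]; } Body;

/* stub: a real step would test the predicted positions pairwise and
   accumulate contact forces into each body's force vector */
static void accumulate_contact_forces(Body *b, int n) { (void)b; (void)n; }

void physics_step(Body *b, int n, float dt)
{
    /* 1. predict where each moving object wants to be next frame */
    for (int i = 0; i < n; i++)
        for (int k = 0; k < 3; k++)
            b[i].pred[k] = b[i].pos[k] + b[i].vel[k] * dt;

    /* 2/3. determine collisions and apply contact forces (stubbed here) */
    accumulate_contact_forces(b, n);

    /* 4/5. update the movement predictions, then apply movement in-sync */
    for (int i = 0; i < n; i++)
        for (int k = 0; k < 3; k++) {
            b[i].vel[k] += b[i].force[k] * dt;   /* unit mass assumed */
            b[i].pos[k] += b[i].vel[k] * dt;
            b[i].force[k] = 0.0f;
        }
}

int main(void)
{
    Body b = { {0,0,0}, {1,0,0}, {0,0,0}, {0,0,0} };
    for (int frame = 0; frame < 20; frame++)      /* 20 steps at 20Hz = 1 second */
        physics_step(&b, 1, 1.0f/20);
    printf("x after 1 second: %g\n", b.pos[0]);   /* ~1.0, no contacts involved */
    return 0;
}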

however, interpenetration is still not avoided (sufficient forces will 
still essentially push objects into each other). theoretically, one can 
disallow interpenetration (by doing like Quake-style physics and simply 
disallow any post-contact updates which would result in subsequent 
interpenetration), but in my prior attempts to enable such a feature, 
the objects would often become stuck and seemingly entirely unable to 
move, and were in-fact far more prone to violently explode (a pile of 
objects will seemingly become stuck-together and immovable, maybe for 
several seconds, until ultimately all of them will violently explode 
outward at high velocities).


allowing objects to interpenetrate was thus seen as the lesser evil, 
since, even though objects were violating the basic assumption that 
rigid bodies aren't allowed to exist in the same place at the same 
time, typically (assuming the collision-detection and force-calculation 
functions are working correctly, itself easier said than done), this 
will generally correct itself reasonably quickly (the contact forces 
will push the objects back apart, until they reach a sort of 
equilibrium), and with far less incidence of random explosions.


sadly, the whole physics engine ended up a little rubbery as a result 
of all of this, but it seemed reasonable, as I have also observed 
similar behavior to some extent in Havok, and have figured out that I 
could deal with matters well enough by using a simpler (Quake-style) 
physics engine for most non-dynamic objects. IOW: for things using AABBs 
(Axis-Aligned Bounding Boxes) and similar, and other related solid 
objects which can't undergo rotation, a very naive check-and-update 
strategy works fairly well, since such objects only ever undergo 
translational movement.


admittedly, I also never was able to get constraints 

Re: [fonc] Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future

2012-03-27 Thread BGB

On 3/27/2012 12:23 PM, Miles Fidelman wrote:

karl ramberg wrote:

Slides/pdf:
http://www.dynamic-languages-symposium.org/dls-11/program/media/Ungar_2011_EverythingYouKnowAboutParallelProgrammingIsWrongAWildScreedAboutTheFuture_Dls.pdf 






Granted that their approach to an OLAP cube is new, but the folks 
behind Erlang, and Carl Hewitt, have been talking about massive 
concurrency, and assuming inconsistency as the future trend, for years.


Miles Fidelman



yeah, it seems a bit like an overly dramatic and hand-wavy way of saying 
stuff that probably many of us knew already (besides maybe some guy off 
in the corner being like "but I thought mutex-based locking could scale 
up forever?!"...).


granted, language design may still need some work to find an ideal 
programming model for working with concurrent systems, but I still more 
suspect it will probably end up looking more like an existing language 
with better concurrency features bolted on than some fundamentally new 
approach to programming (like, say, C or C++ or C# or ActionScript or 
similar, with message-passing and constraints or similar bolted on).



another issue I can think of:
how does Tilera compare, say, with AMD Fusion?...

a quick skim over what information I could find was not showing any 
strong reasons (technical or economic) which would be leaning in 
Tilera's favor vs AMD Fusion (maybe there is something more subtle?...).


both seem to be highly parallel VLIW architectures, ...


granted, as-is, one is still stuck using things like CUDA or OpenCL, but 
maybe something can be found to be able to largely eliminate needing 
these (or, gloss over them).


a partial idea that comes up is that of having a sort of bytecode format 
which can be compiled into the particular ISAs of the particular cores.


or, alternatively, throwing some sort of x86 to VLIW JIT / 
trans-compiler or similar into the mix.



or such...



Re: [fonc] OT? Polish syntax

2012-03-19 Thread BGB

On 3/18/2012 6:54 PM, Martin Baldan wrote:

BGB, please see my answer to shaun. In short:

_ I'm not looking for stack-based languages. I want a Lisp which got
rid of (most of) the parens by using fixed arity and types,
without any loss of genericity, homoiconicity or other desirable
features. REBOL does just that, but it's not so good regarding
performance, the type system, etc.


fair enough...


but, hmm... one could always have 2 stacks: create a stack over the 
stack, in turn reversing the RPN into PN, and also gets some meta 
going on...


+ 2 * 3 4 = 24

commands are pushed left to right, execution then consists of popping off 
and executing said commands (the pop/execute loop continues until the 
stack is empty). execution then proceeds left-to-right.
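
a small self-contained sketch of the idea in C (just enough to evaluate 
"+ 2 * 3 4"; a toy, not one of the actual interpreters):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *toks[] = { "+", "2", "*", "3", "4" };
    const char *cmd[32]; int ncmd = 0;   /* stack of pending commands */
    double      val[32]; int nval = 0;   /* stack of values */

    /* push the commands left to right */
    for (int i = 0; i < 5; i++)
        cmd[ncmd++] = toks[i];

    /* pop and execute until the command stack is empty; operands to the
       right thus get evaluated before the operators that consume them */
    while (ncmd > 0) {
        const char *t = cmd[--ncmd];
        if (!strcmp(t, "+")) {
            double a = val[--nval], b = val[--nval];
            val[nval++] = a + b;
        } else if (!strcmp(t, "*")) {
            double a = val[--nval], b = val[--nval];
            val[nval++] = a * b;
        } else {
            val[nval++] = atof(t);       /* literals just become values */
        }
    }
    printf("%g\n", val[0]);   /* prints 14 (per the correction later in the thread) */
    return 0;
}

pending work just sits on the command stack, which is also roughly why 
recursion and pause/resume fall out fairly easily in this style.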


ironically, I have a few specialized interpreters that actually sort of 
work this way:
one such interpreter uses a similar technique to implement a naive 
batch-style command language. something similar was also previously used 
in a text-to-speech engine of mine.


nifty features:
no need to buffer intermediate expressions during execution (no ASTs or 
bytecode);
no need for an explicit procedure call/return mechanism (the process is 
largely implicit, however one does need a mechanism to push the contents 
of a procedure, although re-parsing works fairly well);

easily handles recursion;
it also implicitly performs tail-call optimization;
fairly quick/easy to implement;
handles pause/resume easily (since the interpreter is non-recursive).

possible downsides:
not particularly likely to be high-performance (although an 
implementation using objects or threaded code seems possible);

behavior can be potentially reasonably counter-intuitive;
...



_ I *hate* infix notation. It can only make sense where everything has
arity 3, like in RDF.


many people would probably disagree.
whether culture or innate, infix notations seem to be fairly popular.

actually, it can be noted that many of the world's languages are SVO 
(and many others are SOV), so there could be a pattern here.


a reasonable tradeoff IMO is using prefix notation for commands and 
infix notation for arithmetic.




_ Matching parens is a non-issue. Just use Paredit or similar ;)


I am currently mostly using Notepad2, which does have parenthesis 
matching via highlighting.


however, the issue isn't as much with just using an editor with 
parenthesis matching, but more an issue when quickly typing something 
interactively. one may have to make extra mental effort to get the 
counts of opening and closing parentheses right, potentially distracting 
from the task at hand (typing in a command or math expression or 
similar). it also doesn't help matters that the parentheses are IMO more 
effort to type than some other keys.



granted, C style syntax isn't perfect for interactive use either. IMO, 
probably the more notable issue in this case is having to type commas. 
one can fudge it though (say, by making commas and semicolons generally 
optional).


one of the better syntax designs for interactive use seems to be the 
traditional shell-command syntax. behind this is probably C-like syntax, 
followed by RPN, followed by S-Expressions.


although physically RPN is probably a little easier to type than C style 
syntax, a downside is that one may have to mentally rework the 
expressions prior to typing them. another downside is that of being 
difficult to read or decipher later.



something like REBOL could possibly work fairly well here, given it has 
some structural similarity to shell-command syntax.




_ Umm, whitespace sensitive sounds a bit dangerous. I have enough
with Python :p


small-scale whitespace sensitivity actually seems to work out a bit 
nicer than larger scale whitespace sensitivity IMO. large-scale 
constrains the overall formatting and may end up needing to be worked 
around. small-scale generally has a much smaller impact, and need not 
influence overall code formatting.


the main merit it has is that it can reduce the need for commas (and/or 
semicolons), since the parser can use whitespace as a separator (and 
space is an easier key to hit).


however, many people like to use whitespace in weird places in code, 
which would carry the drawback that with such a parser, such tendencies 
would lead to incorrect code parsing.


example:
foo (x)
x = 3
  +4
...
could likely lead to the code being parsed incorrectly in several places.

otherwise, one has to write instead:
foo(x)
x=3+4
or possibly also allowed:
foo(
  x)
x=3+
  4
which would be more obvious to the parser.


or, alternatively, whitespace sensitivity can allow things like:
dosomething 2 -3 4*9-2

to be parsed without being ambiguous (except maybe to human readers due 
to variable-width font evilness, where font designers seem to like to 
often hide the spaces, but one can assume that most real 
programmers, if given the choice, will read code with a fixed-width 
font...). otherwise, I have had generally good luck

Re: [fonc] OT? Polish syntax

2012-03-19 Thread BGB

On 3/19/2012 5:24 AM, Martin Baldan wrote:

but, hmm... one could always have 2 stacks: create a stack over the stack,
in turn reversing the RPN into PN, and also gets some meta going on...

Uh, I'm afraid one stack is one too many for me. But then again, I'm
not sure I get what you mean.


in traditional RPN (PostScript, Forth, ...), one directly executes 
commands from left-to-right.

in this case, one pushes commands left to right, and then pops and 
executes them in a loop.
so, there is a stack for values, and a stack for the future (commands 
awaiting execution).


naturally enough, it flips the notation (since sub-expressions are 
executed first).



+ 2 * 3 4 =  24

Wouldn't that be + 2 * 3 4 =  14 in Polish notation? Typo?


or mental arithmetic fail, either way...
I vaguely remember writing this, and I think the mental arithmetic came 
out wrong.




commands are pushed left to right, execution then consists of popping off and
executing said commands (the pop/execute loop continues until the stack is
empty). execution then proceeds left-to-right.

Do you have to get busy with rot, swap, drop,  over etc?
That's the problem I see with stack-based languages.


if things are designed well, these mostly go away.

mostly it is a matter of making every operation do the right thing and 
expect arguments in a sensible order.


a problem, for example, in the design of PostScript, is that people 
tried to give the operations their intuitive ordering, but this leads 
to both added awkwardness and additional need for explicit stack operations.



say, for example, one can have a language with:
/someName someValue bind
or:
someValue /someName bind

though seemingly a trivial difference, one form is likely to need more 
swap/exch calls than the other.


likewise:
array index value setindex
vs:
value array index setindex
...


dup is a little harder, but generally I have found that dup tended to 
appear in places where a higher-level / compound operation was more 
sensible.


(granted, for example, such compound operations are a large portion of my 
interpreter's bytecode ISA, but many help improve performance by 
optimizing common operations).


an example, suppose one compiles for an operation like:
j=i++;

one could emit, say:
load i; dup; push 1; binary add; store i; store j;

with all of the lookups and type-checks along the way.

also possible is the sequence:
lpostinc_fn 1; lstore_f 2;
(assume 1 and 2 are the lexical variable indices for i and j, both 
inferred fixnum).

or:
postinc_s i; store j;
(collapsed operation, but not knowing/giving exact types or locations).

now, what has happened?:
the first 5 operations collapse into a single operation, in the former 
case, specialized also for a lexical variable and for fixnums (say, due 
to type-inference);

what is left is a simple store.

as-noted, most of this was due to interpreter-specific 
micro-optimizations, and a lot of this is ignored in the 
(newer/incomplete) JIT (which mostly decomposes the operations again, 
and uses more specialized variable allocation and type-inference logic).


these sorts of optimizations are also to some extent language and 
use-case specific, but they do help somewhat with performance of a plain 
interpreter.
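
as a rough, hypothetical sketch (a toy switch interpreter with made-up 
opcode names, not the actual ISA), the same "j=i++;" can run either as 
the 6 generic ops or, here, as a single collapsed compound op:

#include <stdio.h>

enum { OP_LOAD, OP_DUP, OP_PUSH1, OP_ADD, OP_STORE, OP_POSTINC_STORE, OP_END };

static void run(const int *code, int *vars)
{
    int stack[16], sp = 0;
    for (int pc = 0; code[pc] != OP_END; ) {
        switch (code[pc++]) {
        case OP_LOAD:  stack[sp++] = vars[code[pc++]];  break;
        case OP_DUP:   stack[sp] = stack[sp-1]; sp++;   break;
        case OP_PUSH1: stack[sp++] = 1;                 break;
        case OP_ADD:   sp--; stack[sp-1] += stack[sp];  break;
        case OP_STORE: vars[code[pc++]] = stack[--sp];  break;
        case OP_POSTINC_STORE: {                  /* collapsed "j=i++;" */
            int i = code[pc++], j = code[pc++];
            vars[j] = vars[i]++;
            break; }
        }
    }
}

int main(void)
{
    /* vars[0] is i, vars[1] is j */
    int generic[] = { OP_LOAD,0, OP_DUP, OP_PUSH1, OP_ADD,
                      OP_STORE,0, OP_STORE,1, OP_END };
    int collapsed[] = { OP_POSTINC_STORE,0,1, OP_END };
    int v1[2] = { 5, 0 }, v2[2] = { 5, 0 };
    run(generic, v1);
    run(collapsed, v2);
    printf("generic: i=%d j=%d / collapsed: i=%d j=%d\n", v1[0], v1[1], v2[0], v2[1]);
    return 0;
}

one dispatch and no stack traffic versus six dispatches, which is most of 
where the interpreter-level win comes from.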



similar could likely be applied to a stack language designed for use by 
humans, where lots of operations/words are dedicated to common 
constructions likely to be used by a user of the language.




_ I *hate* infix notation. It can only make sense where everything has
arity 3, like in RDF.


many people would probably disagree.
whether culture or innate, infix notations seem to be fairly popular.

My beef with infix notation is that you get ambiguity, and then this
ambiguity is usually eliminated with arbitrary policies of operator
priority, and then you still have to use parens, even with fixed
arity. In contrast, with pure Polish notation, once you accept fixed
arity you get unambiguity for free and you get rid of parens for
everything (except, maybe, explicit lists).

For instance, in infix notation, when I see:

2 + 3 * 4

I have to remember that it means:

2 + (3*4)

But then I need the parens when I mean:

(2 + 3) * 4

In contrast, with Polish notation, the first case would be:

+ 2 * 3 4

And the second case would be:

* 4 + 2 3

Which is clearly much more elegant. No parens, no operator priority.


many people are not particularly concerned with elegance though, and 
tend to take it for granted what the operator precedences are and where 
the parens go.


this goes even for the (arguably poorly organized) C precedence hierarchy:
many new languages don't change it because people expect it a certain way;
in my case, I don't change it mostly for sake of making at least some 
effort to conform with ECMA-262, which defines the hierarchy a certain way.


the advantage is that, assuming the precedences are sensible, much more 
commonly used operations have higher precedence, and so don't need 
explicit 

Re: [fonc] OT? Polish syntax

2012-03-15 Thread BGB

On 3/15/2012 9:21 AM, Martin Baldan wrote:

I have a little off-topic question.
Why are there so few programming languages with true Polish syntax? I
mean, prefix notation, fixed arity, no parens (except, maybe, for
lists, sequences or similar). And of course, higher order functions.
The only example I can think of is REBOL, but it has other features I
don't like so much, or at least are not essential to the idea. Now
there are some open-source clones, such as Boron, and now Red, but
what about very different languages with the same concept?

I like pure Polish notation because it seems as conceptually elegant
as Lisp notation, but much closer to the way spoken language works.
Why is it that this simple idea is so often conflated with ugly or
superfluous features such as native support for infix notation, or a
complex type system?


because, maybe?...
harder to parse than Reverse-Polish;
less generic than S-Expressions;
less familiar than more common syntax styles;
...

for example:
RPN can be parsed very quickly/easily, and/or readily mapped to a stack, 
giving its major merit. this gives it a use-case for things like textual 
representations of bytecode formats and similar. languages along the 
lines of PostScript or Forth can also make reasonable assembler 
substitutes, but with higher portability. downside: typically hard to read.


S-Expressions, however, can represent a wide variety of structures. 
nearly any tree-structured data can be expressed readily in 
S-Expressions, and all they ask for in return is a few parentheses. 
among other things, this makes them fairly good for compiler ASTs. 
downside: hard to match parens or type correctly.


common syntax (such as C-style), while typically harder to parse, and 
typically not all that flexible either, has all the usual stuff people 
expect in a language: infix arithmetic, precedence levels, statements 
and expressions, ... and the merit that it works fairly well for 
expressing most common things people will care to try to express with 
them. some people don't like semicolons and others don't like 
sensitivity to line-breaks or indentation, and one generally needs 
commas to avoid ambiguity, but most tend to agree that they would much 
rather be using this than either S-Expressions or RPN.


(and nevermind some attempts to map programming languages to XML based 
syntax designs...).


or, at least, this is how it seems to me.


ironically, IMO, it is much easier to type C-style syntax interactively 
while avoiding typing errors than it is to type S-Expression syntax 
interactively while avoiding typing errors (maybe experience, maybe not, 
dunno). typically, the C-style syntax requires less total characters as 
well.


I once designed a language syntax specially for the case of being typed 
interactively (for terseness and taking advantage of the keyboard 
layout), but it turned out to be fairly difficult to remember the syntax 
later.


some of my syntax designs have partly avoided the need for commas by 
making the parser whitespace sensitive regarding expressions, for 
example "a -b" will parse differently than "a-b" or "a - b". however, 
there are some common formatting quirks which would lead to frequent 
misparses with such a style: "foo (x+1);" will parse as 2 expressions, 
rather than as a function call.
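
purely as an illustration of that rule (a toy classifier, not the actual 
tokenizer), the distinction can be keyed off whether '-' has whitespace 
before it but not after:

#include <stdio.h>
#include <ctype.h>

static const char *classify_minus(const char *s, int i)
{
    int sp_before = (i > 0) && isspace((unsigned char)s[i-1]);
    int sp_after  = isspace((unsigned char)s[i+1]);
    if (sp_before && !sp_after)
        return "unary (starts a new argument/expression)";
    return "infix subtraction";
}

int main(void)
{
    const char *samples[] = { "a -b", "a-b", "a - b" };
    for (int k = 0; k < 3; k++)
        for (int i = 0; samples[k][i]; i++)
            if (samples[k][i] == '-')
                printf("%-6s -> %s\n", samples[k], classify_minus(samples[k], i));
    return 0;
}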


a partial downside is that it can lead to visual ambiguity if code is 
read using a variable-width font (as opposed to the good and proper 
route of using fixed-width fonts for everything... yes, this world is 
filled with evils like variable-width fonts and the inability to tell 
apart certain characters, like the Il1 issue and similar...).


standard JavaScript also uses a similar trick for implicit semicolon 
insertion, with the drawback that one needs to use care when breaking 
expressions otherwise the parser may do its magic in unintended ways.



the world likely goes as it does due to lots of many such seemingly 
trivial tradeoffs.


or such...



Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-14 Thread BGB

On 3/14/2012 8:57 AM, Loup Vaillant wrote:

Michael FIG wrote:

Loup Vaillant l...@loup-vaillant.fr writes:


You could also play the human compiler: use the better syntax in the
comments, and implement a translation of it in code just below.  But
then you have to manually make sure they are synchronized.  Comments
are good.  Needing them is bad.


Or use a preprocessor that substitutes the translation inline
automatically.


Which is a way of implementing the syntax… How is this different than
my Then you write the parser?  Sure you can use a preprocessor, but
you still have to write the macros for your new syntax.



in my case, this can be theoretically done already (writing new 
customized parsers), and was part of why I added block-strings.


most likely route would be translating code into ASTs, and maybe using 
something like (defmacro) or similar at the AST level.


another route could be I guess to make use of quote and unquote, 
both of which can be used as expression-building features (functionally, 
they are vaguely similar to quasiquote in Scheme, but they haven't 
enjoyed so much use thus far).



a more practical matter though would be getting things nailed down 
enough so that larger parts of the system can be written in a language 
other than C.


yes, there is the FFI (generally seems to work fairly well), and one can 
shove script closures into C-side function pointers (provided arguments 
and return types are annotated and the types match exactly, but I don't 
entirely trust its reliability, ...).


slightly nicer would be if code could be written in various places which 
accepts script objects (either via interfaces or ex-nihilo objects).


abstract example (ex-nihilo object):
var obj={render: function() { ... } ... };
lbxModelRegisterScriptObject("models/script/somemodel", obj);

so, if some code elsewhere creates an object using the given model-name, 
then the script code is invoked to go about rendering it.


alternatively, using an interface:
public interface IRender3D { ... }   //contents omitted for brevity
public class MyObject implements IRender3D { ... }
lbxModelRegisterScriptObject("models/script/somemodel", new MyObject());

granted, there are probably better (and less likely to kill performance) 
ways to make use of script objects (as-is, using script code to write 
objects for use in the 3D renderer is not likely to turn out well 
regarding the framerate and similar, at least until if/when there is a 
good solid JIT in place, and it can compete more on equal terms with C 
regarding performance).



mostly the script language was intended for use in the game's server 
end, where typically raw performance is less critical, but as-is, there 
is still a bit of a language-border issue that would need to be worked 
on here (I originally intended to write the server end mostly in script, 
but at the time the VM was a little less solid (poorer performance, more 
prone to leak memory and trigger GC, ...), and so the server end was 
written more quick-and-dirty in plain C, using a design fairly similar 
to a mix of the Quake 1 and 2 server-ends). as-is, it is not entirely 
friendly to the script code, so a little more work is needed.


another possible use case is related to world-construction tasks 
(procedural world-building and similar).


but, yes, all of this is a bit more of a mundane way of using a 
scripting language, but then again, everything tends to be built from 
the bottom up (and this just happens to be where I am currently at, at 
this point in time).


(maybe at which point in time I am stuck less worrying about which 
language is used where and about cross-language interfacing issues, then 
allowing things like alternative syntax, ... could be more worth 
exploration. but, in many areas, both C and C++ have a bit of a gravity 
well...).



or such...



Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-14 Thread BGB

On 3/14/2012 11:31 AM, Mack wrote:

On Mar 13, 2012, at 6:27 PM, BGB wrote:

SNIP

the issue is not that I can't imagine anything different, but rather that doing 
anything different would be a hassle with current keyboard technology:
pretty much anyone can type ASCII characters;
many other people have keyboards (or key-mappings) that can handle 
region-specific characters.

however, otherwise, typing unusual characters (those outside their current 
keyboard mapping) tends to be a bit more painful, and/or introduces editor 
dependencies, and possibly increases the learning curve (now people have to 
figure out how these various unorthodox characters map to the keyboard, ...).

more graphical representations, however, have a secondary drawback:
they can't be manipulated nearly as quickly or as easily as text.

one could be like drag and drop, but the problem is that drag and drop is 
still a fairly slow and painful process (vs, hitting keys on the keyboard).


yes, there are scenarios where keyboards aren't ideal:
such as on an XBox360 or an Android tablet/phone/... or similar, but people 
probably aren't going to be using these for programming anyways, so it is 
likely a fairly moot point.

however, even in these cases, it is not clear there are many clearly better 
options either (on-screen keyboard, or on-screen tile selector, either way it is likely 
to be painful...).


simplest answer:
just assume that current text-editor technology is basically sufficient and call it 
good enough.

Stipulating that having the keys on the keyboard mean what the painted symbols 
show is the simplest path with the least impedance mismatch for the user, there are 
already alternatives in common use that bear thinking on.  For example:

On existing keyboards, multi-stroke operations to produce new characters 
(holding down shift key to get CAPS, CTRL-ALT-TAB-whatever to get a special 
character or function, etc…) are customary and have entered average user 
experience.

Users of IDE's like EMACS, IntelliJ or Eclipse are well-acquainted with special 
keystrokes to get access to code completions and intention templates.

So it's not inconceivable to consider a similar strategy for typing non-character graphical elements.  One 
could think of say… CTRL-O, UP ARROW, UP ARROW, ESC to type a circle and size it, followed by CTRL-RIGHT 
ARROW, C to enter the circle and type a c inside it.

An argument against these strategies is the same one against command-line 
interfaces in the CLI vs. GUI discussion: namely, that without visual 
prompting, the possibilities that are available to be typed are not readily 
visible to the user.  The user has to already know what combination gives him 
what symbol.

One solution for mitigating this, presuming rich graphical typing was desirable, would 
be to take a page from the way touch type cell phones and tablets work, showing symbol 
maps on the screen in response to user input, with the maps being progressively refined as the user 
types to guide the user through constructing their desired input.

…just a thought :)


typing, like on phones...
I have seen 2 major ways of doing this:
hit key multiple times to indicate the desired letter, with a certain 
timeout before it moves to the next character;
type out characters, phone shows first/most-likely possibility, hit a 
key a bunch of times to cycle through the options.



another idle thought would be some sort of graphical/touch-screen 
keyboard, but it would be a matter of finding a way to make it not suck. 
using on-screen inputs in Android devices and similar kind of sucks:
pressure and sensitivity issues, comfort issues, lack of tactile 
feedback, smudges on the screen if one uses their fingers, and 
potentially scratches if one is using a stylus, ...


so, say, a touch-screen with these properties:
similar sized (or larger) than a conventional keyboard;
resistant to smudging, fairly long lasting, and easy to clean;
soft contact surface (me thinking sort of like those gel insoles for 
shoes), so that ideally typing isn't an experience of constantly hitting 
a piece of glass with ones' fingers (ideally, both impact pressure and 
responsiveness should be similar to a conventional keyboard, or at least 
a laptop keyboard);
ideally, some sort of tactile feedback (so, one can feel whether or not 
they are actually hitting the keys);
being dynamically reprogrammable (say, any app which knows about the 
keyboard can change its layout when it gains focus, or alternatively the 
user can supply per-app keyboard layouts);
maybe, there could be tabs to change between layouts, such as a US-ASCII 
tab, ...

...

with something like the above being common, I can more easily imagine 
people using non-ASCII based input methods.


say, one is typing in US-ASCII, hits a math-symbol layout where, for 
example, the numeric keypad (or maybe the whole rest of the keyboard) is 
replaced by a grid of math symbols, or maybe also have a drawing 
tablet tab

Re: [fonc] Apple and hardware

2012-03-14 Thread BGB

On 3/14/2012 3:55 PM, Jecel Assumpcao Jr. wrote:

snip, interesting, but no comment


If you have a good version of confinement (which is pretty simple HW-wise) you
can use Butler Lampson's schemes for Cal-TSS to make a workable version of a
capability system.

The 286 protected mode was good enough for this, and was extended in the
386. I am not sure all modern x86 processors still implement these, and
if they do it is likely that actually using them will hurt performance
so much that it isn't an option in practice.


the TSS?...

it is still usable on x86 in 32-bit Protected-Mode.

however, it generally wasn't used much by operating systems, and in the 
transition to x86-64, was (along with the GDT and LDT) mostly reduced to 
a semi-vestigial structure.


its role is generally limited to holding register state and the stack 
pointers when doing inter-ring switches (such as an interrupt-handler 
transferring control into the kernel, or when transferring control into 
userspace).


however, it can no longer be used to implement process switching or 
similar on modern chips.




And, yep, I managed to get them to allow interpreters to run on the iPad, but 
was
not able to get Steve to countermand the no sharing rule.

That is a pity, though at least having native languages makes these
devices a reasonable replacement for my old Radio Shack PC-4 calculator.
I noticed that neither Matlab nor Mathematica are available for the
iPad, but only simple terminal apps that allow you to access these
applications running on your PC. What a waste!


IMHO, this is at least one reason to go for Android instead...

not that Android is perfect though, as admittedly I would prefer if I 
could have a full/generic ARM version of Linux or similar, but alas.


sadly, I am not getting a whole lot of use out of the tablet I have, 
development-wise, which is lame considering that was a major reason I 
bought it (I ended up doing far more development in Linux ARMv5TEL in 
QEMU preparing to try to port stuff to Android).


more preferable would have been:
if the NDK didn't suck as badly;
if there were, say, a C API for the GUI stuff (so one could more easily 
just use C without having to deal with Java or the JNI).


would have likely been a little happier had Android been more like just 
a ARM build of a more generic Linux distro or something.



or such...



Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-13 Thread BGB

On 3/12/2012 9:01 PM, David Barbour wrote:


On Mon, Mar 12, 2012 at 8:13 PM, Julian Leviston jul...@leviston.net wrote:



On 13/03/2012, at 1:21 PM, BGB wrote:


although theoretically possible, I wouldn't really trust not
having the ability to use conventional text editors whenever
need-be (or mandate use of a particular editor).

for most things I am using text-based formats, including for
things like world-maps and 3D models (both are based on arguably
mutilated versions of other formats: Quake maps and AC3D models).
the power of text is that, if by some chance someone does need to
break out a text editor and edit something, the format wont
hinder them from doing so.



What is text? Do you store your text in ASCII, EBCDIC,
SHIFT-JIS or UTF-8? If it's UTF-8, how do you use an ASCII editor
to edit the UTF-8 files?

Just saying' ;-) Hopefully you understand my point.

You probably won't initially, so hopefully you'll meditate a bit
on my response without giving a knee-jerk reaction.




I typically work with the ASCII subset of UTF-8 (where ASCII and UTF-8 
happen to be equivalent).


most of the code is written to assume UTF-8, but languages are designed 
to not depend on any characters outside the ASCII range (leaving them 
purely for comments, and for those few people who consider using them 
for identifiers).


EBCDIC and SHIFT-JIS are sufficiently obscure that one can generally 
pretend that they don't exist (FWIW, I don't generally support codepages 
either).


a lot of code also tends to assume Modified UTF-8 (basically, the same 
variant of UTF-8 used by the JVM). typically, code will ignore things 
like character normalization or alternative orderings. a lot of code 
doesn't particularly know or care what the exact character encoding is.


some amount of code internally uses UTF-16 as well, but this is less 
common as UTF-16 tends to eat a lot more memory (and, some code just 
pretends to use UTF-16, when really it is using UTF-8).




Text is more than an arbitrary arcane linear sequence of characters. 
Its use suggests TRANSPARENCY - that a human could understand the 
grammar and content, from a relatively small sample, and effectively 
hand-modify the content to a particular end.


If much of our text consisted of GUIDs:
  {21EC2020-3AEA-1069-A2DD-08002B30309D}
This might as well be
  {BLAHBLAH-BLAH-BLAH-BLAH-BLAHBLAHBLAH}

The structure is clear, but its meaning is quite opaque.



yep.

this is also a goal, and many of my formats are designed to at least try 
to be human editable.
some number of them are still often hand-edited as well (such as texture 
information files).



That said, structured editors are not incompatible with an underlying 
text format. I think that's really the best option.


yes.

for example, several editors/IDEs have expand/collapse, but still use 
plaintext for the source-code.


Visual Studio and Notepad++ are examples of this, and a more advanced 
editor could do better (such as expand/collapse on arbitrary code blocks).


these are also things like auto-completion, ... which are also nifty and 
work fine with text.



Regarding multi-line quotes... well, if you aren't fixated on ASCII 
you could always use unicode to find a whole bunch more brackets:

http://www.fileformat.info/info/unicode/block/cjk_symbols_and_punctuation/images.htm
http://www.fileformat.info/info/unicode/block/miscellaneous_technical/images.htm
http://www.fileformat.info/info/unicode/block/miscellaneous_mathematical_symbols_a/images.htm
Probably more than you know what to do with.



AFAIK, the common consensus in much of programmer-land, is that using 
Unicode characters as part of the basic syntax of a programming language 
borders on evil...



I ended up using:
[[ ... ]]
and:
""" ... """ (basically, same syntax as Python).

these seem probably like good enough choices.

currently, the [[ and ]] braces are not real tokens, and so will only 
be parsed specially as such in the particular contexts where they are 
expected to appear.


so, if one types:
2<[[3, 4], [5, 6]]
the '<' will be parsed as a less-than operator.

but, if one writes instead:
var str=[[
some text...
more text...
]];

it will parse as a multi-line string...

both types of string are handled specially by the parser (rather than 
being handled by the tokenizer, as are normal strings).
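
a rough sketch of the sort of scan involved (a hypothetical helper, 
assuming the even-nesting behavior mentioned elsewhere in the thread; 
not the actual parser code):

#include <stdio.h>
#include <string.h>

/* returns the index just past the matching "]]", or -1 on failure;
   the parser would only try this where a value expression is expected */
static int block_string_end(const char *s)
{
    if (strncmp(s, "[[", 2) != 0) return -1;
    int depth = 0, i = 0;
    while (s[i]) {
        if (s[i] == '[' && s[i+1] == '[') { depth++; i += 2; }
        else if (s[i] == ']' && s[i+1] == ']') { depth--; i += 2; if (depth == 0) return i; }
        else i++;
    }
    return -1;
}

int main(void)
{
    const char *src = "[[\nsome text...\nmore text...\n]];";
    int end = block_string_end(src);
    if (end > 0)   /* print the body, stripping the enclosing [[ ]] */
        printf("string body: %.*s\n", end - 4, src + 2);
    return 0;
}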



or such...



Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-13 Thread BGB

On 3/13/2012 4:37 PM, Julian Leviston wrote:


On 14/03/2012, at 2:11 AM, David Barbour wrote:




On Tue, Mar 13, 2012 at 5:42 AM, Josh Grams j...@qualdan.com wrote:


On 2012-03-13 02:13PM, Julian Leviston wrote:
What is text? Do you store your text in ASCII, EBCDIC,
SHIFT-JIS or
UTF-8?  If it's UTF-8, how do you use an ASCII editor to edit
the UTF-8
files?

Just saying' ;-) Hopefully you understand my point.

You probably won't initially, so hopefully you'll meditate a bit
on my
response without giving a knee-jerk reaction.

OK, I've thought about it and I still don't get it.  I understand
that
there have been a number of different text encodings, but I
thought that
the whole point of Unicode was to provide a future-proof way out
of that
mess.  And I could be totally wrong, but I have the impression
that it
has pretty good penetration.  I gather that some people who use the
Cyrillic alphabet often use some code page and China and Japan use
SHIFT-JIS or whatever in order to have a more compact representation,
but that even there UTF-8 tools are commonly available.

So I would think that the sensible thing would be to use UTF-8 and
figure that anyone (now or in the future) will have tools which
support
it, and that anyone dedicated enough to go digging into your data
files
will have no trouble at all figuring out what it is.

If that's your point it seems like a pretty minor nitpick.  What am I
missing?


Julian's point, AFAICT, is that text is just a class of storage that 
requires appropriate viewers and editors, doesn't even describe a 
specific standard. Thus, another class that requires appropriate 
viewers and editors can work just as well - spreadsheets, tables, 
drawings.


You mention `data files`. What is a `file`? Is it not a service 
provided by a `file system`? Can we not just as easily hide a storage 
format behind a standard service more convenient for ad-hoc views and 
analysis (perhaps RDBMS). Why organize into files? Other than 
penetration, they don't seem to be especially convenient.


Penetration matters, which is one reason that text and filesystems 
matter.


But what else has penetrated? Browsers. Wikis. Web services. It 
wouldn't be difficult to support editing of tables, spreadsheets, 
drawings, etc. atop a web service platform. We probably have more 
freedom today than we've ever had for language design, if we're 
willing to stretch just a little bit beyond the traditional 
filesystem+text-editor framework.


Regards,

Dave


Perfectly the point, David. A token/character in ASCII is equivalent 
to a byte. In SHIFT-JIS, it's two, but this doesn't mean you can't 
express the equivalent meaning in them (ie by selecting the same 
graphemes) - this is called translation ;-)


this is partly why there are codepoints.
one can work in terms of codepoints, rather than bytes.

a text editor may internally work in UTF-16, but saves its output in 
UTF-8 or similar.

ironically, this is basically what I am planning/doing at the moment.

now, if/how the user will go about typing UTF-16 codepoints, this is not 
yet decided.
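
for example, a minimal codepoint-to-UTF-8 encoder looks about like this 
(a generic sketch, no surrogate or range validation; not code from the 
editor itself):

#include <stdio.h>

static int utf8_encode(unsigned cp, unsigned char *out)
{
    if (cp < 0x80)    { out[0] = cp; return 1; }
    if (cp < 0x800)   { out[0] = 0xC0 | (cp >> 6);
                        out[1] = 0x80 | (cp & 0x3F); return 2; }
    if (cp < 0x10000) { out[0] = 0xE0 | (cp >> 12);
                        out[1] = 0x80 | ((cp >> 6) & 0x3F);
                        out[2] = 0x80 | (cp & 0x3F); return 3; }
    out[0] = 0xF0 | (cp >> 18);
    out[1] = 0x80 | ((cp >> 12) & 0x3F);
    out[2] = 0x80 | ((cp >> 6) & 0x3F);
    out[3] = 0x80 | (cp & 0x3F); return 4;
}

int main(void)
{
    unsigned char buf[4];
    int n = utf8_encode(0x03BB, buf);        /* U+03BB, greek small lambda */
    for (int i = 0; i < n; i++) printf("%02X ", buf[i]);
    printf("\n");                            /* prints: CE BB */
    return 0;
}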



One of the most profound things for me has been understanding the 
ramifications of OMeta. It doesn't just parse streams of 
characters (whatever they are) in fact it doesn't care what the 
individual tokens of its parsing stream is. It's concerned merely with 
the syntax of its elements (or tokens) - how they combine to form 
certain rules - (here I mean valid patterns of grammar by rules). If 
one considers this well, it has amazing ramifications. OMeta invites 
us to see the entire computing world in terms of sets of 
problem-oriented-languages, where language is a liberal word that 
simply means a pattern of sequence of the constituent elements of a 
thing. To PEG, it basically adds proper translation and true 
object-orientism of individual parsing elements. This takes a while to 
understand, I think.


Formats here become languages, protocols are languages, and so are 
any other kind of representation system you care to name (computer 
programming languages, processor instruction sets, etc.).


possibly.

I was actually sort of aware of a lot of this already though, but didn't 
consider it particularly relevant.



I'm postulating, BGB, that you're perhaps so ingrained in the current 
modality and approach to thinking about computers, that you maybe 
can't break out of it to see what else might be possible. I think it 
was turing, wasn't it, who postulated that his turing machines could 
work off ANY symbols... so if that's the case, and your programming 
language grammar has a set of symbols, why not use arbitrary (ie not 
composed of english letters) ideograms for them? (I think these days 
we call these things icons ;-))


You might say but how will people name their variables - well 
perhaps for those things

Re: [fonc] Error trying to compile COLA

2012-03-12 Thread BGB

On 3/12/2012 10:24 AM, Martin Baldan wrote:


that is a description of random data, which granted, doesn't apply to most
(compressible) data.
that wasn't really the point though.

I thought the original point was that there's a clear-cut limit to how
much redundancy can be eliminated from computing environments, and
that thousand-fold (and beyond) reductions in code size per feature
don't seem realistic. Then the analogy from data compression was used.
I think it's a pretty good analogy, but I don't think there's a
clear-cut limit we can estimate in advance, because meaningful data
and computations are not random to begin with. Indeed, there are
islands of stability where you've cut all the visible cruft and you
need new theoretical insights and new powerful techniques to reduce
the code size further.


this is possible, but it assumes, essentially, that one doesn't run into 
such a limit.


if one gets to a point where every fundamental concept is only ever 
expressed once, and everything is built from preceding fundamental 
concepts, then this is a limit, short of dropping fundamental concepts.




for example, I was able to devise a compression scheme which reduced
S-Expressions to only 5% their original size. now what if I want 3%, or 1%?
this is not an easy problem. it is much easier to get from 10% to 5% than to
get from 5% to 3%.

I don't know, but there may be ways to reduce it much further if you
know more about the sexprs themselves. Or maybe you can abstract away
the very fact that you are using sexprs. For instance, if those sexprs
are a Scheme program for a tic-tac-toe player, you can say write a
tic-tac-toe player in Scheme and you capture the essence.


the sexprs were mostly related to scene-graph delta messages (one could 
compress a Scheme program, but this isn't really what it is needed for).


each expression basically tells about what is going on in the world at 
that moment (objects appearing and moving around, lights turning on/off, 
...). so, basically, a semi-constant message stream.


the specialized compressor was doing better than Deflate, but was also 
exploiting a lot more knowledge about the expressions as well: what the 
basic types are, how things fit together, ...


theoretically, about the only way to really do much better would be 
using a static schema (say, where the sender and receiver have a 
predefined set of message symbols, predefined layout templates, ...). 
personally though, I really don't like these sorts of compressors (they 
are very brittle, inflexible, and prone to version issues).


this is essentially what "write a tic-tac-toe player in Scheme" implies:
both the sender and receiver of the message need to have a common notion 
of both "tic-tac-toe player" and "Scheme". otherwise, the message can't 
be decoded.


a more general strategy is basically to build a model from the ground 
up, where the sender and reciever have only basic knowledge of basic 
concepts (the basic compression format), and most everything else is 
built on the fly based on the data which has been seen thus far (old 
data is used to build new data, ...).


in LZ77 based algos (Deflate: ZIP/GZ/PNG; LZMA: 7zip; ...), this takes 
the form of a sliding window, where any recently seen character 
sequence is simply reused (via an offset/length run).
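
a toy illustration of the sliding-window idea (just the offset/length 
copy; obviously not real Deflate):

#include <stdio.h>

typedef struct { char lit; int offset, length; } Tok;  /* lit != 0 -> literal */

int main(void)
{
    /* decodes to "abcabcabcd": 3 literals, 1 overlapping back-reference, 1 literal */
    Tok toks[] = { {'a',0,0}, {'b',0,0}, {'c',0,0}, {0,3,6}, {'d',0,0} };
    char out[64]; int n = 0;

    for (int i = 0; i < 5; i++) {
        if (toks[i].lit) { out[n++] = toks[i].lit; continue; }
        for (int k = 0; k < toks[i].length; k++) {   /* copy from offset bytes back */
            out[n] = out[n - toks[i].offset];
            n++;
        }
    }
    out[n] = 0;
    printf("%s\n", out);
    return 0;
}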


in my case, it is built from primitive types (lists, symbols, strings, 
fixnums, flonums, ...).




I expect much of future progress in code reduction to come from
automated integration of different systems, languages and paradigms,
and this integration to come from widespread development and usage of
ontologies and reasoners. That way, for instance, you could write a
program in BASIC, and then some reasoner would ask you questions such
as "I see you used a GOTO to build a loop. Is that correct?" or "this
array is called 'clients', do you mean it as in server/client
architecture or in the business sense?". After a few questions like
that, the system would have a highly descriptive model of what your
program is supposed to do and how it is supposed to do it. Then it
would be able to write an equivalent program in any other programming
language. Of course, once you have such a system, there would be much
more powerful user interfaces than some primitive programming
language. Probably you would speak in natural language (or very close)
and use your hands to point at things. I know it sounds like full-on
AI, but I just mean an expert system for programmers.


and, of course, such a system would likely be, itself, absurdly complex...

this is partly the power of information entropy though:
it can't really be created or destroyed, only really moved around from 
one place to another.



so, one can express things simply to a system, and it gives powerful 
outputs, but likely the system itself is very complex. one can express 
things to a simple system, but generally this act of expression tends to 
be much more complex. in either case, the complexity is 

Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-12 Thread BGB

On 3/12/2012 6:31 PM, Josh McDonald wrote:
Since it's your own system end-to-end, why not just stop editing 
source as a stream of ascii characters? Some kind of simple structured 
editor would let you put whatever you please in strings without 
requiring any escaping at all. It'd also make the parsing simpler :)




although theoretically possible, I wouldn't really trust not having the 
ability to use conventional text editors whenever need-be (or mandate 
use of a particular editor).


for most things I am using text-based formats, including for things like 
world-maps and 3D models (both are based on arguably mutilated versions 
of other formats: Quake maps and AC3D models). the power of text is 
that, if by some chance someone does need to break out a text editor and 
edit something, the format wont hinder them from doing so.



but, yes, that "Inventing on Principle" / "Magic Ink" video did rather 
get my interest up in terms of wanting to support a much more streamlined 
script-editing interface.


I recently had a bit of fun writing small script fragments to blow up 
light sources and other things, and figure if I can get a more advanced 
text-editing interface thrown together, more interesting things might 
also be possible.


"blow the lights": all nearby light sources explode (with fiery particle 
explosion effects and sounds), and the area goes dark.


current leaning is to try to throw something together vaguely 
QBasic-like (with a proper text editor, and probably F5 as the Run 
key, ...).


as-is, I already have an ed / edlin-style text editor, and ALT + 1-9 as 
console-change keys (and now have multiple consoles, sort of like Linux 
or similar), ... was considering maybe the fancier text editor would use 
ALT-SHIFT + A-Z for switching between modules. will see what I can do here.



or such...



--

Enjoy every sandwich. - WZ

Josh 'G-Funk' McDonald
   - j...@joshmcdonald.info






Re: [fonc] Error trying to compile COLA

2012-03-11 Thread BGB

On 3/11/2012 5:28 AM, Jakub Piotr Cłapa wrote:

On 28.02.12 06:42, BGB wrote:

but, anyways, here is a link to another article:
http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem


Shannon's theory applies to lossless transmission. I doubt anybody 
here wants to reproduce everything down to the timings and bugs of the 
original software. Information theory is not thermodynamics.




Shannon's theory also applies to some extent to lossy transmission, as it also 
sets a lower bound on the size of the data as expressed with a certain 
degree of loss.


this is why, for example, with JPEGs or MP3s, getting a smaller size 
tends to result in reduced quality. the higher quality can't be 
expressed in a smaller size.



I had originally figured that the assumption would have been to try to 
recreate everything in a reasonably feature-complete way.



this means such things in the OS as:
an OpenGL implementation;
a command-line interface, probably implementing ANSI / VT100 style 
control-codes (even in my 3D engine, my in-program console currently 
implements a subset of these codes);

a loader for program binaries (ELF or PE/COFF);
POSIX or some other similar OS APIs;
probably a C compiler, assembler, linker, run-time libraries, ...;
network stack, probably a web-browser, ...;
...

then it would be a question of how small one could get everything while 
still implementing a reasonably complete (if basic) feature-set, using 
any DSLs/... one could think up to shave off lines of code.


one could probably shave off OS-specific features which few people use 
anyways (for example, no need to implement support for things like GDI 
or the X11 protocol). a simple solution being that OpenGL largely is 
the interface for the GUI subsystem (probably with a widget toolkit 
built on this, and some calls for things not directly supported by 
OpenGL, like managing mouse/keyboard/windows/...).


also, potentially, a vast amount of what would be standalone tools, 
could be reimplemented as library code and merged (say, one has the 
shell as a kernel module, which directly implements nearly all of the 
basic command-line tools, like ls/cp/sed/grep/...).


the result of such an effort, under my estimates, would likely still end 
up in the Mloc range, but maybe one could get from say, 200 Mloc (for a 
Linux-like configuration) down to maybe about 10-15 Mloc, or if one 
tried really hard, maybe closer to 1 Mloc, and much smaller is fairly 
unlikely.



apparently this wasn't the plan though, rather the intent was to 
substitute something entirely different in its place, but this sort of 
implies that it isn't really feature-complete per-se (and it would be a 
bit difficult trying to port existing software to it).


someone asks: "hey, how can I build Quake 3 Arena for your OS?", and 
gets back a response roughly along the lines of "you will need to 
largely rewrite it from the ground up".


much nicer and simpler would be if it could be reduced to maybe a few 
patches and modifying some of the OS glue stubs or something.



(tangent time):

but, alas, there seems to be a bit of a philosophical split here.

I tend to be a bit more conservative, even if some of this stuff is put 
together in dubious ways. one adds features, but often ends up 
jerry-rigging things, and using bits of functionality in different 
contexts: like, for example, an in-program command-entry console, is not 
normally where one expects ANSI codes, but at the time, it seemed a sane 
enough strategy (adding ANSI codes was a fairly straightforward way to 
support things like embedding color information in console message 
strings, ...). so, the basic idea still works, and so was applied in a 
new context (a console in a 3D engine, vs a terminal window in the OS).


side note: internally, the console is represented as a 2D array of 
characters, and another 2D array to store color and modifier flags 
(underline, strikeout, blink, italic, ...).
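
roughly like this (made-up sizes and flag names, just to illustrate the 
layout):

#include <stdio.h>

#define CON_W 80
#define CON_H 25

enum { CATTR_UNDERLINE = 1, CATTR_STRIKEOUT = 2, CATTR_BLINK = 4, CATTR_ITALIC = 8 };

typedef struct {
    char     chars[CON_H][CON_W];   /* the visible characters */
    unsigned attr [CON_H][CON_W];   /* per-cell color + modifier flags */
    int      curs_x, curs_y;        /* cursor position */
} Console;

int main(void)
{
    static Console con;                 /* zero-initialized: blank screen */
    con.chars[0][0] = '>';
    con.attr [0][0] = CATTR_BLINK;      /* a blinking prompt cell */
    printf("cell(0,0)='%c' attr=%u\n", con.chars[0][0], con.attr[0][0]);
    return 0;
}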


the console can be used both for program-related commands, accessing 
cvars, and for evaluating script fragments (sadly, limited to what can 
be reasonably typed into a console command, which can be a little 
limiting for much more than "make that thing over there explode" or 
similar). functionally, the console is less advanced than something like 
bash or similar.


I have also considered the possibility of supporting multiple consoles, 
and maybe a console-integrated text-editor, but have yet to decide on 
the specifics (I am torn between a specialized text-editor interface, or 
making the text editor be a console command which hijacks the console 
and probably does most of its user-interface via ANSI codes or similar...).


but, it is not obvious what is the best way to integrate a text-editor 
into the UI for a 3D engine, hence why I have had this idea floating 
around for months, but haven't really acted on it (out of humor, it 
could be given a VIM-like user-interface... ok, probably not, I was 
imagining more like MS

Re: [fonc] Magic Ink and Killing Math

2012-03-10 Thread BGB

On 3/8/2012 9:32 PM, David Barbour wrote:
Bret Victor's work came to my attention due to a recent video, 
Inventing on Principle


http://vimeo.com/36579366

If you haven't seen this video, watch it. It's especially appropriate 
for the FoNC audience.




although I don't normally much agree with the concept of principle, 
most of what was shown there was fairly interesting.


kind of makes some of my efforts (involving interaction with things via 
typing code fragments into a console) seem fairly weak... OTOH, at the 
moment, I am not entirely sure how such a thing would be implemented 
either (very customized JS interpreter? ...).


it also much better addresses the problem of "how do I easily get an 
object handle for that thing right there?", which is an unsolved issue 
in my case (one can directly manipulate things in the scene via script 
fragments entered as console commands, provided they can get a reference 
to the entity, which is often a much harder problem).



timeliness of feedback is something I can sort of relate to, as I tend 
to invest a lot more effort in things I can do fairly quickly and test, 
than in things which may take many hours or days before I see much of 
anything (and is partly a reason I invested so much effort into my 
scripting VM, rather than simply just write everything in C or C++ and 
call it good enough, even despite the fair amount of code that is 
written this way).



most notable thing I did recently (besides some fiddling with getting a 
new JIT written), was adding a syntax for block-strings. I used [[ ... 
]] rather than triple-quotes (like in Python), mostly as this syntax is 
more friendly to nesting, and is also fairly unlikely to appear by 
accident, and couldn't come up with much obviously better at the 
moment, {{ ... }} was another considered option (but is IIRC already 
used for something), as was the option of just using triple-quote (would 
work, but isn't readily nestable).


this was itself a result of a quick thing which came up while writing 
about something else:
how to deal with the problem of easily allowing user-defined syntax in a 
language without the (IMO, relatively nasty) feature of allowing 
context-dependent syntax in a parser?...


most obvious solution: some way to easily create large string literals, 
which could then be fed into a user-defined parser / eval. then one can 
partly sidestep the matter of syntax within a syntax. granted, yes, it 
is a cheap hack...
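
a minimal sketch of that idea in C (the names here, dsl_register / dsl_eval, 
are purely hypothetical): the host language only needs to hand the raw 
block-string to whatever parser was registered for it, and the embedded 
syntax stays somebody else's problem:

#include <stdio.h>
#include <string.h>

/* a user-defined parser just receives the raw text of the block-string */
typedef int (*DslParseFn)(const char *src);

typedef struct { const char *name; DslParseFn parse; } DslEntry;

#define MAX_DSL 16
static DslEntry dsl_table[MAX_DSL];
static int      dsl_count;

static void dsl_register(const char *name, DslParseFn fn)
{
    if (dsl_count < MAX_DSL) {
        dsl_table[dsl_count].name  = name;
        dsl_table[dsl_count].parse = fn;
        dsl_count++;
    }
}

/* look up the named parser and hand it the string literal */
static int dsl_eval(const char *name, const char *src)
{
    int i;
    for (i = 0; i < dsl_count; i++)
        if (!strcmp(dsl_table[i].name, name))
            return dsl_table[i].parse(src);
    fprintf(stderr, "no parser registered for '%s'\n", name);
    return -1;
}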




Anyhow, since then I've been perusing Bret Victor's other works at:
http://worrydream.com

Which, unfortunately, renders painfully slowly in my Chrome browser 
and relatively modern desktop machine. But sludging through the first 
page to reach content has been rewarding.


One excellent article is Magic Ink:

http://worrydream.com/MagicInk/

In this article, Victor asks `What is Software?` and makes a case that 
`interaction` should be a resource of last resort, that what we often 
need to focus on is presenting context-relevant information and 
graphics design, ways to put lots of information in a small space in a 
non-distracting way.


And yet another article suggests that we Kill Math:

http://worrydream.com/KillMath/

Which focuses on using more concrete and graphical representations to 
teach concepts and perform computations. Alan Kay's work teaching 
children (Doing With Images Makes Symbols) seems to be relevant here, 
and receives a quote.


Anyhow, I think there's a lot here everyone, if you haven't seen it 
all before.




nifty...

(although, for me, at the moment, it is after 2AM, I am needing to 
sleep...).




Regards,

Dave



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-10 Thread BGB


On 3/10/2012 2:21 AM, Wesley Smith wrote:

most notable thing I did recently (besides some fiddling with getting a new
JIT written), was adding a syntax for block-strings. I used <[[ ... ]]>
rather than triple-quotes (like in Python), mostly as this syntax is more
friendly to nesting, and is also fairly unlikely to appear by accident, and
couldn't come up with much obviously better at the moment, {{ ... }}
was another considered option (but is IIRC already used for something), as
was the option of just using triple-quote (would work, but isn't readily
nestable).


You should have a look at Lua's long string syntax if you haven't already:

[[ my
long
string]]


this was briefly considered, but would have a much higher risk of clashes.

consider someone wants to type a nested array:
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
which is not so good if this array is (randomly) parsed as a string.

preferable is to try to avoid syntax which is likely to appear by 
chance, as then programmers have to use extra caution to avoid any 
magic sigils which might have unintended behaviors, but can pop up 
randomly as a result of typing code using only more basic constructions 
(I try to avoid this much as I do ambiguities in general, and is partly 
also why, IMO, the common AS, T syntax for templates/generics is a 
bit nasty).



the syntax:
<[[ ... ]]>

was chosen as it had little chance of clashing with other valid syntax 
(apart from, potentially, the CDATA end marker "]]>" for XML, which at 
present would need to be escaped if using this syntax for globs of XML).


a clash is possible, as the language does include unary < and > 
operators, which could, conceivably, be applied to a nested array. this 
is, however, rather unlikely, and could be fixed easily enough with a space.


as-is, they have an even-nesting rule.
WRT uneven-nesting, they can be escaped via '\' (don't really like, as 
it leaves the character as magic...).


<[[
this string has an embedded \]]>...
but this is ok.
]]>
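
a rough C sketch of scanning such a string, assuming the rules described 
above (<[[ and ]]> must nest evenly, '\' escapes an unbalanced delimiter); 
illustration only, not the actual parser:

#include <string.h>

/* 's' points just past the opening <[[ ; on success, returns a pointer just
   past the matching ]]> and sets *body_end to the start of that closer
   (so the raw body is the range [original s, *body_end)). */
static const char *scan_block_string(const char *s, const char **body_end)
{
    int depth = 1;
    while (*s) {
        if (*s == '\\' && s[1]) { s += 2; continue; }   /* escaped character */
        if (!strncmp(s, "<[[", 3)) { depth++; s += 3; continue; }
        if (!strncmp(s, "]]>", 3)) {
            if (--depth == 0) { *body_end = s; return s + 3; }
            s += 3;
            continue;
        }
        s++;
    }
    return NULL;    /* ran off the end: unterminated block-string */
}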


OTOH (other remote possibilities):
{ ... }
was already used for insert-here expressions in XML literals:
<foo>{generateSomeNode()}</foo>

(...) or ((...)) just wouldn't work (high chance of collision).

#(...), #[...], and #{...} are already in use (tuple, float vector or 
matrix, and list).


example:
vector: #[0, 0, 0]
quaternion: #[0, 0, 0, 1]Q
matrix: #[[1, 0, 0] [0, 1, 0] [0, 0, 1]]
list: #{#foo, 2, 3; #v}
note: (...) parens, [...] array, {...} dictionary/object (example: {a: 
3, y: 4}).


@(...), @[...], and @{...} are still technically available.

also possible:
/[...]/ , /[[...]]/
would be passable mostly only as /.../ is already used for regex 
syntax (inherited from JS...).


hmm:
? ... ?
? ... ?
(available, currently syntactically invalid).

likewise:
\ ... \, ...
| ... |

...

so, the issue is mostly lacking sufficient numbers of available (good) 
brace types.
in a few other cases, this lack has been addressed via the use of 
keywords and type-suffixes.



but, a keyword would be lame for a string, and a suffix wouldn't work.



You can nest by matching the number of '=' between the brackets:

[===[
a
long
string [=[ with a long string inside it ]=]
xx
]===]


this would be possible, as otherwise this syntax would not be 
syntactically valid in the language.


[=[...]=]
would be at least possible.

not that I particularly like this syntax though...
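
for reference, the level-matching scheme described above is easy enough to 
scan; a small C sketch (illustrative only, and ignoring details of the real 
Lua lexer such as skipping a leading newline):

#include <string.h>

/* expects 's' to point at the opening '[' of "[===[ ... ]===]"; returns a
   pointer just past the matching closer, with the payload in [*body, *body_end). */
static const char *scan_long_bracket(const char *s,
                                     const char **body, const char **body_end)
{
    int level = 0;
    if (*s != '[') return NULL;
    s++;
    while (*s == '=') { level++; s++; }   /* count the '=' signs: the "level" */
    if (*s != '[') return NULL;           /* not actually a long bracket */
    *body = ++s;
    while (*s) {
        if (*s == ']') {
            const char *p = s + 1;
            int n = 0;
            while (*p == '=') { n++; p++; }
            if (*p == ']' && n == level) { *body_end = s; return p + 1; }
        }
        s++;
    }
    return NULL;                          /* unterminated */
}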


(inlined):

On 3/10/2012 2:43 AM, Ondřej Bílka wrote:

On Sat, Mar 10, 2012 at 01:21:42AM -0800, Wesley Smith wrote:
You should have a look at Lua's long string syntax if you haven't 
already: 

Better to be consistent with rest of scripting languages(bash,ruby,perl,python)
and use heredocs.


blarg...

heredoc syntax is nasty IMO...

I deliberately didn't use heredocs.

if I did, I would probably use the syntax:
#<<END; ... END
or similar...


Python uses triple-quotes, which I had also considered (just, they 
couldn't nest):

"""
lots of stuff...
over multiple lines...
"""

this would mean:
<[[
lots of stuff...
over multiple lines...
]]>
possibly also with the Python syntax:
"""
...
"""


or such...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] OT: Hypertext and the e-book

2012-03-08 Thread BGB
of the required textbooks for college courses 
(some fairly solidly worse than the usual "teach yourself X in Y 
units-of-time" books), with the major difference that a "teach yourself" 
book is something like $15, rather than more like $150 for typical 
college textbooks...


one of the worst I saw was primarily screenshots from Visual Studio with 
arrows drawn on them and comments like "click here" and "drag this 
there", and pretty much the whole book was this, and I wasn't exactly 
impressed... (and each chapter was basically a bunch of "follow the 
screenshots and you will end up with whatever program was the assignment 
for this chapter"...).


and the teacher didn't even accept any homework; it was simply "click to 
check yes that you have read the chapter / done the assignment", and receive credit.


not that I necessarily have a problem with easy classes, but there 
probably needs to be some sort of reasonable limit.


nevermind several classes that have apparently outsourced the homework 
to the textbook distributor.


sometimes though, it is kind of difficult to have a very positive 
opinion of education (or, at least, CS classes... general-education 
classes tend to be actually fairly difficult... but then the classes 
just suck).


a person is probably almost better off learning CS being self-taught, 
FWIW (except when it is the great will of parents that a person goes and 
gets a degree and so on).


unless maybe it is just personal perception, and to others the general 
education classes are "click yes to pass" and the CS classes are 
actually difficult (and actually involve doing stuff...). sometimes 
though, the classes are harder, and may actually ask the student to 
write some code (and I know someone who is having difficulty, having 
encountered such a class, with an assignment mostly consisting of using 
linked-lists and reading/writing the contents of text files).



Anyway, I probably wouldn't have replied to this post at all except 
that I wanted to let you all know about an especially relevant project 
which is trying to raise money to publish a book. I joined Kickstarter 
just to support this thing, and I am very reluctant to join websites 
these days. If you were at SPLASH 2010 in Reno, you might recall Slim 
from his Onward! film presentation. I think he's really onto 
something, although his language is a little touchy-feely. Please, 
check it out. If you believe better design is a necessary part of 
better computing (as do I) then please consider a pledge.


http://kck.st/whvn03

(Oops, I just checked, and he's made the goal! Well, I've already 
wrote this, and I still mean it, but perhaps with a little less urgency.)


-- Max



ok.



On Wed, Mar 7, 2012 at 4:11 PM, Mack m...@mackenzieresearch.com 
mailto:m...@mackenzieresearch.com wrote:


I am a self-admitted Kindle and iPad addict, however most of the
people I know are real book aficionados for relatively
straight-forward reasons that can be summed up as:

-   Aesthetics:  digital readers don't even come close to
approximating the experience of reading a printed and bound paper
text.  To some folks, this matters a lot.

-   A feeling of connectedness with history: it's not a
difficult leap from turning the pages of a modern edition of
'Cyrano de Bergerac' to perusing a volume that was current in
Edmund Rostand's time.  Imagining that the iPad you hold in your
hands was once upon a shelf in Dumas Pere's study is a much bigger
suspension of disbelief.  For some people, this contributes to a
psychological distancing from the material being read.

-   Simplicity of sharing:  for those not of the technical
elite, sharing a favored book more closely resembles the kind of
matching of intrinsics that happens during midair refueling of
military jets than the simple act of dropping a dog-eared
paperback on a friend's coffee table.

-   Simplicity.  Period.  (Manual transmissions and paring
knives are still with us and going strong in this era of
ubiquitous automatic transmissions and food processors.  Facility
and convenience doesn't always trump simplicity and reliability.
 Especially when the power goes out.)

Remember Marshall Mcluhan's observation: The medium is the
message?  Until we pass a generational shift where the bulk of
readers have little experience of analog books, these
considerations will be with us.

-- Mack

m...@mackenzieresearch.com mailto:m...@mackenzieresearch.com



On Mar 7, 2012, at 3:13 PM, BGB wrote:

 On 3/7/2012 3:24 AM, Ryan Mitchley wrote:
 May be of interest to some readers of the list:

 http://nplusonemag.com/bones-of-the-book


 thoughts:
 admittedly, I am not really much of a person for reading fiction
(I tend mostly to read technical information, and most fictional
material is more often experienced in the form of
movies/TV/games

Re: [fonc] OT: Hypertext and the e-book

2012-03-08 Thread BGB

On 3/8/2012 7:51 AM, David Corking wrote:

BGB said:

by contrast, a wiki is often a much better experience, and similarly allows
the option of being presented sequentially (say, by daisy chaining articles
together, and/or writing huge articles). granted, it could be made maybe a
little better with a good WYSIWYG style editing system.

potentially,  maybe, something like MediaWiki or similar could be used for
fiction and similar.

Take a look at both Wikibooks and the booki project (which publishes
flossmanuals.net)


so, apparently, yes...



a mystery is why, say, LCD panels can't be made to better utilize ambient
light

Why isn't the wonderful dual-mode screen used by the OLPC XO more widely used?


it is a mystery.

seems like it could be useful (especially for anyone who has ever tried 
to use a laptop... outside...).
back-lights just can't match up to the power of the sun, as even with 
full brightness, ambient background light makes the screen look dark (a 
loss of color is a reasonable tradeoff).



my brother also had a Neo Geo Pocket, which was a handheld gaming 
device which was usable in direct sunlight (because it used reflection 
rather than a backlight).


apparently, there is also a type of experimental LCD which pulls off 
color without using a color mask, which could also be nifty if combined 
with the use of reflected light.



personally, I would much rather have an LCD than an electronic paper 
display, given a device with an LCD could presumably also be used as a 
computer of some sort, without very slow refreshing. like, say, a tablet 
style thing which is usable in direct sunlight. likewise, one's e-books 
can be PDFs (vs some obscure device-specific format).




the one area I think printed books currently have a slight advantage (vs
things like Adobe Reader and similar), is the ability to quickly place
custom bookmarks (would be nice if one could define user-defined bookmarks
in Reader, and if it would remember wherever was the last place the user was
looking in a given PDF).

Apple Preview, and perhaps other PDF readers, already do this.


except, like many Apple products, it is apparently Mac only...

it seems like an obvious enough feature, but Adobe Reader doesn't have it.
I haven't really thought to check if there were other PDF viewers that 
could do so.



Have fun! David
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] OT: Hypertext and the e-book

2012-03-08 Thread BGB

On 3/8/2012 12:34 PM, Max Orhai wrote:



On Thu, Mar 8, 2012 at 7:07 AM, Martin Baldan martino...@gmail.com 
mailto:martino...@gmail.com wrote:



 - Print technology is orders of magnitude more environmentally
benign
 and affordable.


That seems a pretty strong claim. How do you back it up? Low cost and
environmental impact are supposed to be some of the strong points of
ebooks.


Glad you asked! That was a pretty drastic simplification, and I'm 
conflating 'software' with 'hardware' too. Without wasting too much 
time, hopefully, here's what I had in mind.


I live in a city with some amount of printing industry, still. In the 
past, much more. Anyway, small presses have been part of civic life 
for centuries now, and the old-fashioned presses didn't require much 
in the way of imports, paper mills aside. I used to live in a smaller 
town with a mid-sized paper mill, too. No idea if they're still in 
business, but I've made my own paper, and it's not that hard to do 
well in small batches. My point is just that print technology 
(specifically the letterpress) can be easily found in the real world 
which is local, nontoxic, and sustainable (in the sense of only 
needing routine maintenance to last indefinitely) in a way that I 
find hard to imagine of modern electronics, at least at this point in 
time. Have you looked into the environmental cost of manufacturing and 
disposing of all our fragile, toxic gadgets which last two years or 
less? It's horrifying.




I would guess, apart from macro-scale parts/materials reuse (from 
electronics and similar), one could maybe:
grind them into dust and extract reusable materials via means of 
mechanical separation (magnetism, density, ..., which could likely 
separate out most bulk glass/plastic/metals/silicon/..., which could then 
be refined and reused);
maybe feed whatever is left over into a plasma arc, and maybe use either 
magnetic fields or a centrifuge to separate various raw elements (dunno 
if this could be made practical);
or maybe dissolve it with strong acids and use chemical means to extract 
elements (could also be expensive);
or, lacking a better (cost effective) option, simply discard it.



the idea for a magnetic-field separation could be:
feed material through a plasma arc, which will basically convert it into 
mostly free atoms;

a large magnetic coil accelerates the resultant plasma;
a secondary horizontal magnetic field is applied (similar to the one in 
a CRT), causing elements to deflect based on relative charge (valence 
electrons);
depending on speed and distance, there is likely to be a gravity based 
separation as well (mostly for elements which have similar charge but 
differ in atomic weight, such as silicon vs carbon, ...);
eventually, all of them ram into a wall (probably chilled), with a more 
or less 2D distribution of the various elements (say, one spot on the 
wall has a big glob of silicon, and another a big glob of gold, ...). 
(apart from mass separation, one will get mixes of similarly charged 
elements, such as globs of silicon carbide and titanium-zirconium and 
similar)


an advantage of a plasma arc is that it will likely result in some 
amount of carbon-monoxide and methane and similar as well, which can be 
burned as fuel (providing electricity needed for the process). this 
would be similar to a traditional gasifier.



but, it is possible that in the future, maybe some more advanced forms 
of manufacturing may become more readily available at the small scale.


a particular example is that it is now at least conceivably possible 
that lower-density lower-speed semiconductor electronics (such as 
polymer semiconductors) could be made at much smaller scales and cheaper 
than with traditional manufacturing (silicon wafers and optical 
lithography), but at this point there is little economic incentive for 
this (companies don't care, as they have big expensive fabs to make 
chips, and individuals and communities don't care as they don't have 
much reason to make their own electronics vs just buying those made by 
said large semiconductor manufacturers).


similarly, few people have much reason to invest much time or money in 
technologies which are likely to max out in the MHz range.


but, conceivably, one could make a CPU, and memory, essentially using 
conductive and semiconductive inks and old-style printing plates 
(possibly, say, on a celluloid substrate), if needed (making a CPU 
probably itself sort of resembling a book...). also sort of imagining 
here the idle thought of movable-type logic gates and similar, ...



granted, such a scenario is very unlikely at present (it would likely 
only occur due to a collapse of current manufacturing or distribution 
architecture). any restoration of the ability to do large scale 
manufacture is likely to result in a quick return to faster and more 
powerful technologies (such as optical lithography).


apart from a loss of 

Re: [fonc] OT: Hypertext and the e-book

2012-03-07 Thread BGB

On 3/7/2012 3:24 AM, Ryan Mitchley wrote:

May be of interest to some readers of the list:

http://nplusonemag.com/bones-of-the-book



thoughts:
admittedly, I am not really much of a person for reading fiction (I tend 
mostly to read technical information, and most fictional material is 
more often experienced in the form of movies/TV/games/...).


I did find the article interesting though.

I wonder: why really do some people have such a thing for traditional books?

they are generally inconvenient, can't be readily accessed:
they have to be physically present;
one may have to go physically retrieve them;
it is not possible to readily access their information (searching is a 
pain);

...

by contrast, a wiki is often a much better experience, and similarly 
allows the option of being presented sequentially (say, by daisy 
chaining articles together, and/or writing huge articles). granted, it 
could be made maybe a little better with a good WYSIWYG style editing 
system.


potentially,  maybe, something like MediaWiki or similar could be used 
for fiction and similar.
granted, this is much less graphically elaborate than some stuff the 
article describes, but I don't think text is dead yet (and generally 
doubt that fancy graphical effects are going to kill it off any time 
soon...). even in digital forms (where graphics are moderately cheap), 
likely text is still far from dead.


it is much like how magazines filled with images have not killed books 
filled solely with text, despite both being printed media (granted, 
there are college textbooks, which are sometimes in some ways almost 
closer to being very large and expensive magazines in these regards: 
filled with lots of graphics, a new edition for each year, ...).



but, it may be a lot more about the information being presented, and who 
it is being presented to, than about how the information is presented. 
graphics work great for some things, and poor for others. text works 
great for some things, and kind of falls flat for others.


expecting all one thing or the other, or expecting them to work well in 
cases for which they are poorly suited, is not likely to turn out well.



I also suspect maybe some people don't like the finite resolution or 
usage of back-lighting or similar (like in a device based on a LCD 
screen). there are electronic paper technologies, but these generally 
have poor refresh times.


a mystery is why, say, LCD panels can't be made to better utilize 
ambient light (as opposed to needing all the light to come from the 
backlight). idle thoughts include using either a reflective layer, or a 
layer which responds strongly to light (such as a phosphorescent layer), 
placed between the LCD and the backlight.



but, either way, things like digital media and hypertext displacing the 
use of printed books may be only a matter of time.


the one area I think printed books currently have a slight advantage (vs 
things like Adobe Reader and similar), is the ability to quickly place 
custom bookmarks (would be nice if one could define user-defined 
bookmarks in Reader, and if it would remember wherever was the last 
place the user was looking in a given PDF).


the above is a place where web-browsers currently have an advantage, as 
one can more easily bookmark locations in a web-page (at least apart 
from frames evilness). a minor downside though is that bookmarks are 
less good for temporarily marking something.


say, if one can not only easily add bookmarks, but easily remove or 
update them as well.



the bigger possible issues (giving books a partial advantage):
they are much better for very-long-term archival storage (print a book 
with high-quality paper, and with luck, a person finding it in 1000 or 
2000 years can still read it), but there is far less hope of most 
digital media remaining intact for anywhere near that long (most current 
digital media tends to have a life-span more measurable in years or 
maybe decades, rather than centuries).


most digital media requires electricity and is weak against things like 
EMP and similar, which also contributes to possible fragility.


these need not prevent use of electronic devices for convenience-sake or 
similar, but does come with the potential cost that, if things went 
particularly bad (societal collapse or widespread death or similar), the 
vast majority of all current information could be lost.


granted, it is theoretically possible that people could make bunkers 
with hard-copies of large amounts of information and similar printed on 
high-quality acid-free paper and so on (and then maybe further treat 
them with wax or polymers).


say, textual information is printed as text, and maybe data either is 
represented in a textual format (such as Base-85), or is possibly 
represented via a more compact system (a non-redundant or semi-redundant 
dot pattern).
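
as a concrete example of representing data as plain text, a tiny 
Ascii85-style encoder in C (every 4 input bytes become 5 printable 
characters in '!'..'u'; illustrative only, with the usual padding rule but 
no framing or 'z' shortcut):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static void base85_encode(const uint8_t *in, size_t len, FILE *out)
{
    size_t i;
    for (i = 0; i < len; i += 4) {
        uint32_t v = 0;
        size_t   n = (len - i < 4) ? (len - i) : 4, j;
        char     c[5];
        for (j = 0; j < 4; j++)            /* pack up to 4 bytes, zero-padded */
            v = (v << 8) | (j < n ? in[i + j] : 0);
        for (j = 0; j < 5; j++) {          /* 5 digits, base 85, most significant first */
            c[4 - j] = (char)('!' + (v % 85));
            v /= 85;
        }
        fwrite(c, 1, n + 1, out);          /* n input bytes -> n+1 output characters */
    }
}

int main(void)
{
    const char msg[] = "hello, world";
    base85_encode((const uint8_t *)msg, sizeof(msg) - 1, stdout);
    putchar('\n');
    return 0;
}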


say (quick calculation) one could fit up to around 34MB on a page at 72 
DPI, though possibly 16MB/page 

Re: [fonc] OT: Hypertext and the e-book

2012-03-07 Thread BGB
 it...).


I guess how things go will depend mostly on the common majority or similar.


it is likely similar with books and programming:
people who like lots of books and reading, will tend to like doing so, 
and will make up the majority position of readers (as strange and alien 
as their behaviors may seem to others);
those who like programming will, similarly, continue doing so, and thus 
make up the majority position of programmers (as similarly strange and 
alien as this may seem, given how often and negatively many people 
depict nerds and similar...).


ultimately, whoever makes up the fields, controls the fields, and 
ultimately holds control over how things will be regarding said field. 
so, books are controlled by literature culture, much like computers 
remain mostly under the control of programmer culture (except those 
parts under the control of business culture and similar...).



or such...





On Mar 7, 2012, at 3:13 PM, BGB wrote:


On 3/7/2012 3:24 AM, Ryan Mitchley wrote:

May be of interest to some readers of the list:

http://nplusonemag.com/bones-of-the-book


thoughts:
admittedly, I am not really much of a person for reading fiction (I tend mostly 
to read technical information, and most fictional material is more often 
experienced in the form of movies/TV/games/...).

I did find the article interesting though.

I wonder: why really do some people have such a thing for traditional books?

they are generally inconvenient, can't be readily accessed:
they have to be physically present;
one may have to go physically retrieve them;
it is not possible to readily access their information (searching is a pain);
...

by contrast, a wiki is often a much better experience, and similarly allows the 
option of being presented sequentially (say, by daisy chaining articles 
together, and/or writing huge articles). granted, it could be made maybe a 
little better with a good WYSIWYG style editing system.

potentially,  maybe, something like MediaWiki or similar could be used for 
fiction and similar.
granted, this is much less graphically elaborate than some stuff the article 
describes, but I don't think text is dead yet (and generally doubt that fancy 
graphical effects are going to kill it off any time soon...). even in digital 
forms (where graphics are moderately cheap), likely text is still far from dead.

it is much like how magazines filled with images have not killed books filled 
solely with text, despite both being printed media (granted, there are college 
textbooks, which are sometimes in some ways almost closer to being very and 
large expensive magazines in these regards: filled with lots of graphics, a new 
edition for each year, ...).


but, it may be a lot more about the information being presented, and who it is 
being presented to, than about how the information is presented. graphics work 
great for some things, and poor for others. text works great for some things, 
and kind of falls flat for others.

expecting all one thing or the other, or expecting them to work well in cases 
for which they are poorly suited, is not likely to turn out well.


I also suspect maybe some people don't like the finite resolution or usage of 
back-lighting or similar (like in a device based on a LCD screen). there are 
electronic paper technologies, but these generally have poor refresh times.

a mystery is why, say, LCD panels can't be made to better utilize ambient light 
(as opposed to needing all the light to come from the backlight). idle thoughts 
include using either a reflective layer, or a layer which responds strongly to 
light (such as a phosphorescent layer), placed between the LCD and the 
backlight.


but, either way, things like digital media and hypertext displacing the use of 
printed books may be only a matter of time.

the one area I think printed books currently have a slight advantage (vs things 
like Adobe Reader and similar), is the ability to quickly place custom 
bookmarks (would be nice if one could define user-defined bookmarks in Reader, 
and if it would remember wherever was the last place the user was looking in a 
given PDF).

the above is a place where web-browsers currently have an advantage, as one can more easily 
bookmark locations in a web-page (at least apart from frames evilness). a minor 
downside though is that bookmarks are less good for temporarily marking something.

say, if one can not only easily add bookmarks, but easily remove or update them 
as well.


the bigger possible issues (giving books a partial advantage):
they are much better for very-long-term archival storage (print a book with 
high-quality paper, and with luck, a person finding it in 1000 or 2000 years 
can still read it), but there is far less hope of most digital media remaining 
intact for anywhere near that long (most current digital media tends to have a 
life-span more measurable in years or maybe decades, rather than centuries).

most digital media requires electricity and is weak against

[fonc] on script performance and scalability (Re: Error trying to compile COLA)

2012-03-03 Thread BGB
basically, the same thing as before, but I encountered this elsewhere 
(on Usenet), and figured I might see what people here thought about it:

http://www.codingthewheel.com/game-dev/john-carmack-script-interpreters-considered-harmful

yes, granted, this is a different domain from what the people here are 
dealing with.



BTW: I was recently doing some fiddling with working on a new JIT 
(unrelated to the video), mostly restarting a prior effort (started 
writing a new JIT a few months ago, stopped working on it as there were 
more important things going on elsewhere, this effort is still not a 
high-priority though, ...).


not really sure if stuff related to writing a JIT is particularly 
relevant here, and no, I am not trying to spam, even if it may seem like 
it sometimes.



On 3/2/2012 10:25 AM, BGB wrote:

On 3/2/2012 3:07 AM, Reuben Thomas wrote:

On 2 March 2012 00:43, Julian Levistonjul...@leviston.net  wrote:
What if the aim that superseded this was to make it available to the 
next
set of people, who can do something about real fundamental change 
around

this?

Then it will probably fail: why should anyone else take up an idea
that its inventors don't care to promote?


yeah.

most people are motivated essentially by getting the job done, and 
if a technology doesn't exist yet for them to use, most often they 
will not try to implement one (instead finding something which exists 
and making it work), or if they do implement something, it will be 
their thing, their way.


so, it makes some sense to try to get a concrete working system in 
place, which people will build on, and work on.


granted, nearly everything tends towards big and complex, so there 
is not particularly to gain by fighting it. if one can get more done 
in less code, this may be good, but I don't personally believe 
minimalism to be a good end-goal in itself (if it doesn't offer much 
over the existing options).



Perhaps what is needed is to ACTUALLY clear out the cruft. Maybe 
it's not
easy or possible through the old channels... too much work to 
convince too

many people who have so much history of the merits of tearing down the
existing systems.

The old channels are all you have until you create new ones, and
you're not going to get anywhere by attempting to tear down existing
systems; they will be organically overrun when alternatives become
more popular. But this says nothing about which alternatives become
more popular.



yep.

this is a world built on evolutions, rather than on revolutions.
a new thing comes along, out-competes the old thing, and eventually 
takes its place.

something new comes along, and does the same.
and so on...

the most robust technologies then are those which have withstood the 
test of time despite lots of competition, and often which have been 
able to adapt to better deal with the new challenges.


so, if one wants to defeat what exists, they may need to be able to 
create something decidedly better than what exists, at the things it 
does well, and should ideally not come with huge costs or drawbacks 
either.



consider, a new systems language (for competing against C):
should generate more compact code than C;
should be able to generate faster code;
should not have any mandatory library dependencies;
should ideally compile faster than C;
should ideally be easier to read and understand than C;
should ideally be more expressive than C;
...

and, avoid potential drawbacks:
impositions on the possible use cases (say, unsuitable for writing an 
OS kernel, ...);
high costs of conversion (can't inter-operate with C, is difficult to 
port existing code to);
steep learning curve or weirdness (should be easy to learn for C 
developers, shouldn't cause them to balk at weird syntax or semantics, 
...);

language or implementation is decidedly more complex than C;
most of its new features are useless for the use-case;
it poorly handles features essential to the use case;
...

but, if one has long lists, and compares many other popular languages 
against them, it is possible to generate estimates for how and why 
they could not displace it from its domain.


not that it means it is necessarily ideal for every use case, for 
example, Java and C# compete in domains where both C and C++ have 
often done poorly.


neither really performs ideally in the other's domain: C works about 
as well for developing user-centric GUI apps as Java or C# works as a 
systems language, which is to say, not very well.


and, neither side works all that well for high-level scripting, hence 
a domain largely controlled by Lua, Python, Scheme, JavaScript, and so 
on.


but, then these scripting languages often have problems scaling to the 
levels really needed for application software, often showing 
weaknesses: for example, the merit (in the small) of more easily declaring 
variables while mostly ignoring types becomes a mess of 
difficult-to-track bugs and run-time exceptions as the code gets 
bigger, and one may start running

Re: [fonc] Sorting the WWW mess

2012-03-02 Thread BGB

On 3/2/2012 8:37 AM, Martin Baldan wrote:

Julian,

I'm not sure I understand your proposal, but I do think what Google
does is not something trivial, straightforward or easy to automate. I
remember reading an article about Google's ranking strategy. IIRC,
they use the patterns of mutual linking between websites. So far, so
good. But then, when Google became popular, some companies started to
build link farms, to make themselves look more important to Google.
When Google finds out about this behavior, they kick the company to
the bottom of the index. I'm sure they have many secret automated
schemes to do this kind of thing, but it's essentially an arms race,
and it takes constant human attention. Local search is much less
problematic, but still you can end up with a huge pile of unstructured
data, or a huge bowl of linked spaghetti mess, so it may well make
sense to ask a third party for help to sort it out.

I don't think there's anything architecturally centralized about using
Google as a search engine, it's just a matter of popularity. You also
have Bing, Duckduckgo, whatever.


yeah.

the main thing Google does is scavenging and aggregating data.
and, they have done fairly well at it...

and they make money mostly via ads...



  On the other hand, data storage and bandwidth are very centralized.
Dropbox, Google docs, iCloud, are all sympthoms of the fact that PC
operating systems were designed for local storage. I've been looking
at possible alternatives. There's distributed fault-tolerant network
filesystems like Xtreemfs (and even the Linux-based XtreemOS), or
Tahoe-LAFS (with object-capabilities!), or maybe a more P2P approach
such as Tribler (a tracker-free bittorrent), and for shared bandwidth
apparently there is a BittorrentLive (P2P streaming). But I don't know
how to put all that together into a usable computing experience. For
instance, squeak is a single file image, so I guess it can't benefit
from file-based capabilities, except if the objects were mapped to
files in some way. Oh, well, this is for another thread.


agreed.

just because I might want to have better internet file-systems, doesn't 
necessarily mean I want all my data to be off on someone's server somewhere.


much more preferable would be if I could remotely access data stored on 
my own computer.


the problem is that neither OS's nor networking hardware were really 
designed for this:
broadband routers tend to assume by default that the network is being 
used purely for pulling content off the internet, ...


at this point, it means convenience either requires some sort of central 
server to pull data from, or bouncing off of such a server (sort of like 
some sort of Reverse FTP, the computer holding the data connects to a 
server, and in turn makes its data visible on said server, and other 
computers connect to the server to access data stored on their PC, 
probably with some file-proxy magic and mirroring and similar...).


technically, the above could be like a more organized version of a P2P 
file-sharing system, and could instead focus more on sharing for 
individuals (between their devices) or between groups. unlike with a 
central server, it allows for much more storage space (one can easily 
have TB of shared space, rather than worrying about several GB or 
similar on some server somewhere).


nicer would be if it could offer a higher-performance alternative to a 
Mercurial or GIT or similar style system or similar (rather than simply 
being a raw shared filesystem).



better though would be if broadband routers and DNS worked in a way 
which made it fairly trivial for pretty much any computer to be easily 
accessible remotely, without having to jerk off with port-forwarding and 
other things.



potentially, if/when the last mile internet migrates to IPv6, this 
could help (as then presumably both NAT and dynamic IP addresses can 
partly go away).


but, it is taking its time, and neither ISPs nor broadband routers seem 
to yet support IPv6...





-Best

  Martin

On Fri, Mar 2, 2012 at 6:54 AM, Julian Levistonjul...@leviston.net  wrote:

Right you are. Centralised search seems a bit silly to me.

Take object orientedism and apply it to search and you get a thing where
each node searches itself when asked...  apply this to a local-focussed
topology (ie spider web serch out) and utilise intelligent caching (so
search the localised caches first) and you get a better thing, no?

Why not do it like that? Or am I limited in my thinking about this?

Julian

On 02/03/2012, at 4:26 AM, David Barbour wrote:


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-01 Thread BGB

On 3/1/2012 8:04 AM, Reuben Thomas wrote:

On 1 March 2012 15:02, Julian Levistonjul...@leviston.net  wrote:

Is this one of the aims?

It doesn't seem to be, which is sad, because however brilliant the
ideas you can't rely on other people to get them out for you.


this is part of why I am personally trying to work more to develop 
products than doing pure research, and focusing more on trying to 
improve the situation (by hopefully increasing the number of viable 
options) rather than remake the world.


there is also, at this point, a reasonable lack of industrial strength 
scripting languages.
there are a few major industrial strength languages (C, C++, Java, C#, 
etc...), and a number of scripting languages (Python, Lua, JavaScript, 
...), but not generally anything to bridge the gap (combining the 
relative dynamic aspects and easy of use of a scripting language, with 
the power to get stuff done as in a more traditional language).


a partial reason I suspect:
many script languages don't scale well (WRT either performance or 
usability);
many script languages have jokingly bad FFI's, combined with a lack of 
good native libraries;

...


or such...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-01 Thread BGB

On 3/1/2012 10:12 AM, Loup Vaillant wrote:

BGB wrote:

there is also, at this point, a reasonable lack of industrial strength
scripting languages.
there are a few major industrial strength languages (C, C++, Java, C#,
etc...), and a number of scripting languages (Python, Lua, JavaScript,
...), but not generally anything to bridge the gap (combining the
relative dynamic aspects and easy of use of a scripting language, with
the power to get stuff done as in a more traditional language).


What could you possibly mean by industrial strength scripting language?

When I hear about an industrial strength tool, I mostly infer that 
the tool:

 - spurs low-level code (instead of high-level meaning),
 - is moderately difficult to learn (or even use),
 - is extremely difficult to implement,
 - has paid-for support.



expressiveness is a priority (I borrow many features from scripting 
languages, like JavaScript, Scheme, ...). the language aims to have a 
high-level of dynamic abilities in most areas as well (it supports 
dynamic types, prototype OO, lexical closures, scope delegation, ...).
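
as a toy illustration of the prototype OO / delegation part (made-up names, 
nothing like the actual VM structures), property lookup can just walk a 
delegation chain:

#include <string.h>

typedef struct Property {
    const char      *name;
    double           value;
    struct Property *next;
} Property;

typedef struct Object {
    Property      *props;    /* this object's own slots */
    struct Object *proto;    /* where lookups delegate if nothing is found locally */
} Object;

static Property *obj_lookup(Object *obj, const char *name)
{
    for (; obj; obj = obj->proto) {            /* walk the delegation chain */
        Property *p;
        for (p = obj->props; p; p = p->next)
            if (!strcmp(p->name, name))
                return p;
    }
    return NULL;                               /* not found anywhere in the chain */
}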



learning curve or avoiding implementation complexity were not huge 
concerns (the main concession I make to learning curve is that it is in 
many regards fairly similar to current mainstream languages, so if a 
person knows C++ or C# or similar, they will probably understand most of 
it easily enough).


the main target audience is generally people who already know C and C++ 
(and who will probably keep using them as well). so, the language is 
mostly intended to be used mixed with C and C++ codebases. the default 
syntax is more ActionScript-like, but Java/C# style declaration syntax 
may also be used (the only significant syntax differences are those 
related to the language's JavaScript heritage, and the use of "as" and 
"as!" operators for casts in place of C-style cast syntax).


generally, its basic design is intended to be a bit less obtuse than C 
or C++ though (the core syntax is more like that in Java and 
ActionScript in most regards, and more advanced features are intended 
mostly for special cases).



the VM is intended to be free, and I currently have it under the MIT 
license, but I don't currently have any explicit plans for support. it 
is more of a "use it if you want" proposition, provided as-is, and so on.


it is currently given on request via email, mostly due to my server 
being offline and probably will be for a while (it is currently 1600 
miles away, and my parents don't know how to fix it...).



but, what I mostly meant was that it is designed in such a way to 
hopefully deal acceptably well with writing largish code-bases (like, 
supporting packages/namespaces and importing and so on), and should 
hopefully be competitive performance-wise with similar-class languages 
(still needs a bit more work on this front, namely to try to get 
performance to be more like Java, C#, or C++ and less like Python).


as-is, performance is less of a critical failing though, since one can 
put most performance critical code in C land and work around the weak 
performance somewhat (and, also, my projects are currently more bound by 
video-card performance than CPU performance as well).



in a few cases, things were done which favored performance over strict 
ECMA-262 conformance though (most notably, there are currently 
differences regarding default floating-point precision and similar, due 
mostly to the VM presently needing to box doubles, and generally double 
precision being unnecessary, ... however, the VM will use double 
precision if it is used explicitly).
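
a rough illustration (not the actual VM) of the tradeoff: a tagged value can 
carry a 32-bit float in-line, while a full double doesn't fit next to the 
tag and ends up heap-boxed, which is what makes "double by default" costly:

#include <stdlib.h>

typedef enum { TAG_FIXNUM, TAG_FLONUM32, TAG_BOXED_DOUBLE } ValTag;

typedef struct {
    ValTag tag;
    union {
        int     i;    /* small integer, stored directly */
        float   f;    /* single precision, stored directly */
        double *pd;   /* double precision, boxed on the heap */
    } u;
} Value;

static Value mk_float(float f)
{
    Value v; v.tag = TAG_FLONUM32; v.u.f = f; return v;
}

static Value mk_double(double d)    /* explicit doubles pay for an allocation */
{
    Value v; v.tag = TAG_BOXED_DOUBLE;
    v.u.pd  = malloc(sizeof(double));   /* error handling omitted in this sketch */
    *v.u.pd = d;
    return v;
}

static double as_number(Value v)
{
    switch (v.tag) {
    case TAG_FIXNUM:       return v.u.i;
    case TAG_FLONUM32:     return v.u.f;
    case TAG_BOXED_DOUBLE: return *v.u.pd;
    }
    return 0.0;
}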




If you meant something more positive, I think Lua is a good candidate:
 - Small (and hopefully reliable) tools.
 - Fast implementations.
 - Widely used in the gaming industry.
 - Good C FFI.
 - Spurs quite higher-level meaning.



Lua is small, and fairly fast (by scripting language terms).

its use in the gaming industry is moderate (it still faces competition 
against several other languages, namely Python, Scheme, and various 
engine-specific languages).


not everyone (myself included) is entirely fond of its Pascal-ish syntax 
though.


I also have doubts how well it will hold up to large-scale codebases though.


its native C FFI is moderate (in that it could be a lot worse), but 
AFAIK most of its ease of use here comes from the common use of SWIG 
(since SWIG shaves away most need for manually-written boilerplate).


the SWIG strategy though is itself a tradeoff IMO, since it requires 
some special treatment on the part of the headers, and works by 
producing intermediate glue code.
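
for comparison, this is roughly what a single hand-written binding looks 
like against Lua's C API; it is this sort of per-function glue that SWIG 
generates for you:

#include <math.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

/* some ordinary C function we want visible from script code */
static double vec3_len(double x, double y, double z)
{
    return sqrt(x * x + y * y + z * z);
}

/* the glue: pull arguments off the Lua stack, call the C function, push the result */
static int l_vec3_len(lua_State *L)
{
    double x = luaL_checknumber(L, 1);
    double y = luaL_checknumber(L, 2);
    double z = luaL_checknumber(L, 3);
    lua_pushnumber(L, vec3_len(x, y, z));
    return 1;                               /* number of return values */
}

int main(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);
    lua_register(L, "vec3_len", l_vec3_len);      /* expose it by name to scripts */
    luaL_dostring(L, "print(vec3_len(3, 4, 12))");
    lua_close(L);
    return 0;
}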


similarly, it doesn't address the matter of potential semantic 
mismatches between the languages (so the interfaces tend to be fairly 
basic).



in my case, a similar system to SWIG is directly supported by the VM, 
does not generally require boilerplate code (but does require any 
symbols to be DLL exports on Windows), and the FFI is much more tightly

Re: [fonc] Can semantic programming eliminate the need for Problem-Oriented Language syntaxes?

2012-03-01 Thread BGB

On 3/1/2012 10:25 AM, Martin Baldan wrote:
Yes, namespaces provide a form of jargon, but that's clearly not 
enough. If it were, there wouldn't be so many programming languages. 
You can't use, say, Java imports to turn Java into Smalltalk, or 
Haskell or Nile. They have different syntax and different semantics. 
But in the end you describe the syntax and semantics with natural 
language. I was wondering about using a powerful controlled language, 
with a backend of, say, OWL-DL, and a suitable syntax defined using 
some tool like GF (or maybe OMeta?).




as for Java:
this is due in large part to Java's lack of flexibility and expressiveness.

but, for a language which is a good deal more flexible than Java, why not?

I don't think user-defined syntax is strictly necessary, but things 
would be very sad and terrible if one were stuck with Java's syntax 
(IMO: as far as C-family languages go, it is probably one of the least 
expressive).


a better example I think was Lisp's syntax, where even if at its core 
fairly limited, and not particularly customizable (apart from reader 
macros or similar), still allowed a fair amount of customization via macros.



but, anyways, yes, the language problem is still a long way from 
solved, and so instead it is a constant stream of new languages trying 
to improve things here and there vs the ones which came before.



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-01 Thread BGB

On 3/1/2012 2:58 PM, Casey Ransberger wrote:

Below.

On Feb 29, 2012, at 5:43 AM, Loup Vaillantl...@loup-vaillant.fr  wrote:


Yes, I'm aware of that limitation.  I have the feeling however that
IDEs and debuggers are overrated.

When I'm Squeaking, sometimes I find myself modeling classes with the browser 
but leaving method bodies to 'self break' and then write all of the actual code 
in the debugger. Doesn't work so well for hacking on the GUI, but, well.

I'm curious about 'debuggers are overrated' and 'you shouldn't need one.' Seems 
odd. Most people I've encountered who don't use the debugger haven't learned 
one yet.


agreed.

the main reason I can think of why one wouldn't use a debugger is 
because none are available.
however, otherwise, debuggers are a fairly useful piece of software 
(generally used in combination with debug-logs and unit-tests and similar).


sadly, I don't yet have a good debugger in place for my scripting 
language, as mostly I am currently using the Visual-Studio debugger 
(which, granted, can't really see into script code). granted, this is 
less of an immediate issue as most of the project is plain C.




At one company (I'd love to tell you which but I signed a non-disparagement agreement) 
when I asked why the standard dev build of the product didn't include the debugger 
module, I was told you don't need it. When I went to install it, I was told 
not to.

I don't work there any more...


makes sense.


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-01 Thread BGB

On 3/1/2012 3:56 PM, Loup Vaillant wrote:

Le 01/03/2012 22:58, Casey Ransberger a écrit :

Below.

On Feb 29, 2012, at 5:43 AM, Loup Vaillantl...@loup-vaillant.fr  wrote:


Yes, I'm aware of that limitation.  I have the feeling however that
IDEs and debuggers are overrated.


When I'm Squeaking, sometimes I find myself modeling classes with the 
browser but leaving method bodies to 'self break' and then write all 
of the actual code in the debugger. Doesn't work so well for hacking 
on the GUI, but, well.


Okay I take it back. Your use case sounds positively awesome.


I'm curious about 'debuggers are overrated' and 'you shouldn't need 
one.' Seems odd. Most people I've encountered who don't use the 
debugger haven't learned one yet.



Spot on.  The only debugger I have used up until now was a semi-broken
version of gdb (it tended to miss stack frames).



yeah...

sadly, apparently the Visual Studio debugger will miss stack frames, 
since it apparently often doesn't know how to back-trace through code in 
areas it doesn't have debugging information for, even though presumably 
pretty much everything is using the EBP-chain convention for 32-bit code 
(one gets the address, followed by question-marks, and the little 
message "stack frames beyond this point may be invalid").
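
for reference, the EBP-chain convention itself is simple: each frame starts 
with "push ebp; mov ebp, esp", so [ebp] holds the caller's saved EBP and 
[ebp+4] the return address. a rough sketch of walking it in C (GCC/Clang 
specific, assumes frame pointers aren't omitted; purely illustrative):

#include <stdio.h>

struct frame {
    struct frame *prev;    /* saved frame pointer of the caller */
    void         *ret;     /* return address into the caller */
};

static void backtrace_from(struct frame *fp, int max_frames)
{
    int i;
    for (i = 0; fp && i < max_frames; i++) {
        printf("#%d  return address %p\n", i, fp->ret);
        fp = fp->prev;     /* follow the chain; a NULL (or garbage) link ends it,
                              which is where "frames may be invalid" comes from */
    }
}

int main(void)
{
    backtrace_from((struct frame *)__builtin_frame_address(0), 16);
    return 0;
}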



a lot of the time, this happens in my case in stack frames where the crash 
has occurred in code which has a call-path going through the BGBScript 
VM (and the debugger apparently isn't really sure how to back-trace 
through the generated code).


note: although I don't currently have a full/proper JIT, some amount of 
the execution path often does end up being through generated code (often 
through piece-wise generate code fragments).


ironically, in AMD Code Analyst, this apparently shows up as "unknown 
module", and often accounts for more of the total running time than does 
the interpreter proper (although typically still only 5-10%, as the bulk 
of the running time tends to be in my renderer and also in nvogl32.dll 
and kernel.exe and similar...).



or such...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-29 Thread BGB

On 2/29/2012 5:34 AM, Alan Kay wrote:
With regard to your last point -- making POLs -- I don't think we are 
there yet. It is most definitely a lot easier to make really powerful 
POLs fairly quickly  than it used to be, but we still don't have a 
nice methodology and tools to automatically supply the IDE, debuggers, 
etc. that need to be there for industrial-strength use.




yes, agreed.

the basic infrastructure is needed, and to a large degree this is the 
harder part, but it is far from a huge or impossible undertaking (it is 
more a matter of scaling: namely tradeoffs between performance, 
capabilities, and simplicity).


another issue though is the cost of implementing the POL/DSL/... vs the 
problem area being addressed: even if creating the language is fairly 
cheap, if the problem area is one-off, it doesn't really buy much.


a typical result is that of creating cheaper languages for more 
specialized tasks, and considerably more expensive languages for more 
general-purpose tasks (usually with a specialized language falling on 
its face in the general case, and a general-purpose language often 
being a poorer fit for a particular domain).



the goal is, as I see it, to make a bigger set of reusable parts, which 
can ideally be put together in new and useful ways. ideally, the IDEs 
and debuggers would probably work similarly (by plugging together logic 
from other pieces).




in my case, rather than trying to make very flexible parts, I had been 
focused more on making modular parts. so, even if the pipeline is itself 
fairly complicated (as are the parts themselves), one could presumably 
split the pipeline apart, maybe plug new parts in at different places, 
swap some parts out, ... and build something different with them.


so, it is a goal of trying to move from more traditional software 
design, where everything is tightly interconnected, to one where parts 
are only loosely coupled (and typically fairly specialized, but 
reasonably agnostic regarding their use-case).


so, say, one wants a new language with a new syntax, there are 2 major 
ways to approach this:
route A: have a very flexible language (or meta-language), where one 
can change the syntax and semantics at will, ... this is what VPRI seems 
to be working towards.


route B: allow the user to throw together a new parser and front-end 
language compiler, reusing what parts from the first language are 
relevant (or pulling in other parts maybe intended for other languages, 
and creating new parts as needed). how easy or difficult it is, is then 
mostly a product of how many parts can be reused.


so, a language looks like an integrated whole, but is actually 
internally essentially built out of LEGO blocks... (with parts 
essentially fitting together in a hierarchical structure). it is also 
much easier to create languages with similar syntax and semantics, than 
to create ones which are significantly different (since more differences 
mean more unique parts).
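
a very rough C sketch of the "LEGO block" idea (every name here is made up): 
each stage is a function pointer over an agreed-on intermediate form, so a 
new surface language mostly means supplying a new front-end and reusing the 
rest of the pipeline:

typedef struct AstNode  AstNode;     /* parse tree / AST */
typedef struct IrModule IrModule;    /* middle-end intermediate representation */

typedef AstNode  *(*ParseFn)(const char *src);          /* front-end */
typedef IrModule *(*LowerFn)(AstNode *ast);             /* AST -> IR */
typedef int       (*EmitFn)(IrModule *ir, void *out);   /* back-end: interpreter, JIT, ... */

typedef struct {
    const char *name;
    ParseFn     parse;
    LowerFn     lower;
    EmitFn      emit;
} LangPipeline;

static int compile_with(const LangPipeline *lang, const char *src, void *out)
{
    AstNode  *ast = lang->parse(src);
    IrModule *ir;
    if (!ast) return -1;
    ir = lang->lower(ast);
    if (!ir) return -1;
    return lang->emit(ir, out);
}

/* a new language with similar semantics reuses lower/emit and only swaps the parser:
   LangPipeline mylang = { "mylang", mylang_parse, generic_lower, jit_emit };        */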


granted, most of the languages I have worked on implementing thus far, 
have mostly been bigger and more expensive languages (I have made a 
few small/specialized languages, but most have been short-lived).


also, sadly, my project currently also contains a few places where there 
are architecture splits (where things on opposite sides work in 
different ways, making it difficult to plug parts together which exist 
on opposite sides of the split). by analogy, it is like where the screw 
holes/... don't line up, and where the bolts are different sizes and 
threading, requiring ugly/awkward adapter plates to make them fit.


essentially, such a system would need a pile of documentation, hopefully 
to detail what all parts exist, what each does, what inputs and outputs 
are consumed and produced, ... but, writing documentation is, sadly, 
kind of a pain.



another possible issue is that parts from one system wont necessarily 
fit nicely into another:
person A builds one language and VM, and person B makes another language 
and VM;
even if both are highly modular, there may be sufficient structural 
mismatches to make interfacing them be difficult (requiring much pain 
and boilerplate).



some people have accused me of "Not Invented Here", mostly for sake of 
re-implementing things theoretically found in libraries, but sometimes 
this is due to legal reasons (don't like the license terms), and other 
times because the library would not integrate cleanly into the project. 
often, its essential aspects can be decomposed and its functionality 
is re-implemented from more basic parts. another advantage is that these 
parts can often be again reused in implementing other things, or allow 
better inter-operation between a given component and those other 
components it may share parts with (and the parts may be themselves more 
useful and desirable than the thing itself).


this also doesn't mean creating a "standard of non-standard" (some 
people have accused me of this, but I 

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread BGB

On 2/28/2012 10:33 AM, Reuben Thomas wrote:

On 28 February 2012 16:41, BGBcr88...@gmail.com  wrote:

  - 1 order of magnitude is gained by removing feature creep.  I agree
   feature creep can be important.  But I also believe most feature
   belong to a long tail, where each is needed by a minority of users.
   It does matter, but if the rest of the system is small enough,
   adding the few features you need isn't so difficult any more.


this could help some, but isn't likely to result in an order of magnitude.

Example: in Linux 3.0.0, which has many drivers (and Linux is often
cited as being mostly drivers), actually counting the code reveals
about 55-60% in drivers (depending how you count). So that even with
only one hardware configuration, you'd save less than 50% of the code
size, i.e. a factor of 2 at very best.



yeah, kind of the issue here.

one can shave code, reduce redundancy, increase abstraction, ... but 
this will only buy so much.


then one can start dropping features, but how many can one drop and 
still have everything still work?...


one can be like: "well, maybe we will make something like MS-DOS, but in 
long-mode?" (IOW: single-big address space, with no user/kernel 
separation, or conventional processes, and all kernel functionality is 
essentially library functionality).



ok, how small can this be made?
maybe 50 kloc, assuming one is sparing with the drivers.


I once wrote an OS kernel (long-dead project, ended nearly a decade 
ago); running a line counter on the whole project, I get about 
84 kloc. further investigation: 44 kloc of this was due to a copy of 
NASM sitting in the apps directory (I tried to port NASM to my OS at the 
time, but it didn't really work correctly, very possibly due to a 
quickly kludged-together C library...).



so, a 40 kloc OS kernel, itself at the time bordering on barely working.

what sorts of HW drivers did it have: ATA / IDE, console, floppy, VESA, 
serial mouse, RS232, RTL8139. how much code was in drivers: 11 kloc.


how about the VFS: 5 kloc, which includes (FS drivers): BSHFS (IIRC, a 
TFTP-like shared filesystem), FAT (12/16/32), RAMFS.


another 5 kloc goes into the network code, which included TCP/IP, ARP, 
PPP, and an HTTP client+server.


boot loader was 288 lines (ASM), setup was 792 lines (ASM).

boot loader: copies boot files (setup.bin and kernel.sys) into RAM 
(in the low 640K). seems hard-coded for FAT12.


setup was mostly responsible for setting up the kernel (copying it to 
the desired address) and entering protected mode (jumping into the 
kernel). this is commonly called a second-stage loader, partly because 
it does a lot of stuff which is too bulky to do in the boot loader 
(which is limited to 512 bytes, whereas setup can be up to 32kB).


setup magic: enables A20, loads the GDT, enters big-real mode, checks 
for MZ and PE markers (the kernel was PE/COFF it seems), copies the 
kernel image to the VMA base, pushes the kernel entry point to the 
stack, remaps IRQs, and executes a 32-bit return (jumping into protected 
mode).


around 1/2 of the setup file is code for jumping between real and 
protected mode and for interfacing with VESA.


note: I was using PE/COFF for apps and libraries as well.
IIRC, I was using a naive process-based model at the time.


could a better HLL have made the kernel drastically smaller? I have my 
doubts...



add the need for maybe a compiler, ... and the line count is sure to get 
larger quickly.


based on my own code, one could probably get a basically functional C 
compiler in around 100 kloc, though smaller might be possible (this 
would include the compiler+assembler+linker).


apps/... would be a bit harder.


in my case, the most direct path would be just dumping all of my 
existing libraries on top of my original OS project, and maybe dropping 
the 3D renderer (since it would be sort of useless without GPU support, 
OpenGL, ...). this would likely weigh in at around 750-800 kloc (but 
could likely be made into a self-hosting OS, since a C compiler would be 
included, and there is also a C runtime floating around).


this is all still a bit over the stated limit.


maybe, one could try to get a functional GUI framework and some basic 
applications and similar in place (probably 100-200 kloc more, at 
least).


probably, by this point, one is looking at something like a Windows 3.x 
style disk footprint (maybe using 10-15 MB of HDD space or so for all 
the binaries...).



granted, in my case, the vast majority of the code would be C, probably 
with a smaller portion of the OS and applications being written in 
BGBScript or similar...





Re: [fonc] Error trying to compile COLA

2012-02-28 Thread BGB

On 2/28/2012 5:36 PM, Julian Leviston wrote:


On 29/02/2012, at 10:29 AM, BGB wrote:


On 2/28/2012 2:30 PM, Alan Kay wrote:
Yes, this is why the STEPS proposal was careful to avoid the 
current day world.


For example, one of the many current day standards that was 
dismissed immediately is the WWW (one could hardly imagine more of a 
mess).




I don't think the web is entirely horrible:
HTTP basically works, and XML is ok IMO, and an XHTML variant could 
be ok.


Hypertext as a structure is not beautiful nor is it incredibly useful. 
Google exists because of how incredibly flawed the web is and if 
you look at their process for organising it, you start to find 
yourself laughing a lot. The general computing experience these days 
is an absolute shambles and completely crap. Computers are very very 
hard to use. Perhaps you don't see it - perhaps you're in the trees - 
you can't see the forest... but it's intensely bad.




I am not saying it is particularly good, just that it is ok and not 
completely horrible.


it is, as are most things in life, generally adequate for what it does...

it could be better, and it could probably also be worse...


It's like someone crapped their pants and google came around and said 
hey you can wear gas masks if you like... when what we really need to 
do is clean up the crap and make sure there's a toilet nearby so that 
people don't crap their pants any more.




IMO, this is more when one gets into the SOAP / WSDL area...


granted, moving up from this, stuff quickly turns terrible (poorly 
designed, and with many shiny new technologies which are almost 
absurdly bad).



practically though, the WWW is difficult to escape, as a system 
lacking support for this is likely to be rejected outright.


You mean like email? A system that doesn't have anything to do with 
the WWW per se that is used daily by millions upon millions of people? 
:P I disagree intensely. In exactly the same way that facebook was 
taken up because it was a slightly less crappy version of myspace, 
something better than the web would be taken up in a heartbeat if it 
was accessible and obviously better.


You could, if you chose to, view this mailing group as a type of 
living document where you can peruse its contents through your email 
program... depending on what you see the web as being... maybe if you 
squint your eyes just the right way, you could envisage the web as 
simply being a means of sharing information to other humans... and 
this mailing group could simply be a different kind of web...


I'd hardly say that email hasn't been a great success... in fact, I 
think email, even though it, too is fairly crappy, has been more of a 
success than the world wide web.




I don't think email and the WWW are mutually exclusive, by any means.

yes, one probably needs email as well, as well as probably a small 
mountain of other things, to make a viable end-user OS...



however, technically, many people do use email via webmail interfaces 
and similar.
nevermind that many people use things like Microsoft Outlook Web 
Access and similar.


so, it is at least conceivable that a future exists where people read 
their email via webmail and access usenet almost entirely via Google 
Groups and similar...


not that it would be necessarily a good thing though...





Re: [fonc] Error trying to compile COLA

2012-02-27 Thread BGB

On 2/27/2012 10:30 AM, Steve Wart wrote:

Just to zero in on one idea here



Anyway I digress... have you had a look at this file?:

http://piumarta.com/software/maru/maru-2.1/test-pepsi.l

Just read the whole thing - I found it fairly interesting :) He's
built pepsi on maru there... that's pretty fascinating, right?
Built a micro smalltalk on top of the S-expression language...
and then does a Fast Fourier Transform test using it...


my case: looked some, but not entirely sure how it works though.


See the comment at the top:
./eval repl.l test-pepsi.l
eval.c is written in C, it's pretty clean code and very cool. Then 
eval.l does the same thing in a lisp-like language.


Was playing with the Little Schemer with my son this weekend - if you 
fire up the repl, cons, car, cdr stuff all work as expected.




realized I could rip the filename off the end of the URL to get the 
directory, got the C file.


initial/quick observations:
apparently uses Boehm;
the type system works a bit differently than my stuff, but seems to expose a 
vaguely similar interface (except I tend to put 'dy' on the front of 
everything here, so dycar(), dycdr(), dycaddr(), and most 
predicates have names like dyconsp() and similar, and often I 
type-check using strings rather than an enum, ...; a rough sketch of 
this sort of interface follows after this list);
the parser works a bit differently than my S-Expression parser (mine 
tend to be a bit more if/else driven, and typically read characters 
either from strings or stream objects);

ANSI codes with raw escape characters (text editor not entirely happy);
mixed tabs and spaces not leading to very good formatting;
simplistic interpreter, albeit it is not entirely clear how the built-in 
functions get dispatched;

...
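
for illustration, a very rough sketch of what such a 'dy'-prefixed, 
string-type-checked cons interface could look like (the representation 
and signatures here are made up for the example, not the actual code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct DyObj DyObj;
struct DyObj {
    const char *type;           /* type checked via strings, not an enum */
    union {
        struct { DyObj *car, *cdr; } cons;
        long fixnum;
        const char *str;
    } u;
};

static DyObj *dyalloc(const char *type) {
    DyObj *o = calloc(1, sizeof(DyObj));
    o->type = type;
    return o;
}

DyObj *dyfixnum(long v) {
    DyObj *o = dyalloc("fixnum"); o->u.fixnum = v; return o;
}
DyObj *dycons(DyObj *a, DyObj *d) {
    DyObj *o = dyalloc("cons");
    o->u.cons.car = a; o->u.cons.cdr = d;
    return o;
}
int dyconsp(DyObj *o)    { return o && !strcmp(o->type, "cons"); }
DyObj *dycar(DyObj *o)   { return dyconsp(o) ? o->u.cons.car : NULL; }
DyObj *dycdr(DyObj *o)   { return dyconsp(o) ? o->u.cons.cdr : NULL; }
DyObj *dycaddr(DyObj *o) { return dycar(dycdr(dycdr(o))); }

int main(void) {
    /* build the list (1 2 3) and pull out the third element */
    DyObj *lst = dycons(dyfixnum(1),
                 dycons(dyfixnum(2),
                 dycons(dyfixnum(3), NULL)));
    printf("%ld\n", dycaddr(lst)->u.fixnum);  /* prints 3 */
    return 0;
}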

a much more significant difference:
in my code, this sort of functionality is spread over many different 
areas (over several different DLLs), so one wouldn't find all of it in 
the same place.


will likely require more looking to figure out how the parser or syntax 
changing works (none of my parsers do this, most are fixed-form and 
typically shun context dependent parsing).



some of my earlier/simpler interpreters were like this though, vs newer 
ones which tend to have a longer multiple-stage translation pipeline, 
and which make use of bytecode.



Optionally check out the wikipedia article on PEGs and look at the 
COLA paper if you can find it.




PEGs: apparently I may have been using them informally already (thinking 
they were EBNF), although I haven't used them for directly driving a 
parser. typically, they have been used occasionally for describing 
elements of the syntax (in documentation and similar), at least when not 
using the lazier system of syntax via tables of examples.


may require more looking to try to better clarify the difference between 
a PEG and EBNF...
(the only difference I saw listed was that PEGs are ordered, but I would 
have assumed that a parser based on EBNF would have been implicitly 
ordered anyways, hmm...).
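
to illustrate the "ordered" part, a tiny hand-written sketch (not taken 
from Maru or OMeta) of how a PEG-style choice commits to the first 
alternative that matches, backtracking the input position between 
attempts; a plain EBNF "A | B" just lists alternatives with no stated 
priority:

#include <stdio.h>
#include <string.h>

static const char *src;
static int pos;

/* match a literal, advancing on success */
static int lit(const char *s) {
    int n = strlen(s);
    if (!strncmp(src + pos, s, n)) { pos += n; return 1; }
    return 0;
}

/* PEG rule:  greeting <- "ab" / "a"  */
static int greeting(void) {
    int save = pos;
    if (lit("ab")) return 1;   /* first alternative wins if it matches */
    pos = save;                /* backtrack before trying the next one */
    return lit("a");
}

int main(void) {
    src = "abc"; pos = 0;
    if (greeting())
        printf("matched %d chars (ordered choice picked \"ab\")\n", pos);
    return 0;
}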



Anyhow, it's all self-contained, so if you can read C code and 
understand a bit of Lisp, you can watch how the syntax morphs into 
Smalltalk. Or any other language you feel like writing a parser for.


This is fantastic stuff.



following the skim and some more looking, I think I have a better idea 
how it works.



I will infer:
top Lisp-like code defines behavior;
syntax in middle defines syntax (as comment says);
(somehow) the parser invokes the new syntax, internally converting it 
into the Lisp-like form, which is what gets executed.



so, seems interesting enough...


if so, my VM is vaguely similar, albeit without the syntax definition or 
variable parser (the parser for my script language is fixed-form and 
written in C, but does parse to a Scheme-like AST system).


the assumption would have been that if someone wanted a parser for a new 
language, they would write one, assuming the semantics mapped tolerably 
to the underlying VM (exactly matching the semantics of each language 
would be a little harder though).


theoretically, nothing would really prevent writing a parser in the 
scripting language, just I had never really considered doing so (or, for 
that matter, even supporting user-defined syntax elements in the main 
parser).



the most notable difference between my ASTs and Lisp or Scheme:
all forms are special forms, and function calls need to be made via a 
special form (this was mostly to help better detect problems);

operators were also moved to special forms, for similar reasons;
there are lots more special forms, most mapping to HLL constructs (for, 
while, break, continue, ...);

...

as-is, there are also a large number of bytecode operations, many 
related to common special cases.


for example, a recent addition called jmp_cond_sweq reduces several 
instructions related to switch into a single operation, partly 
intended for micro-optimizing (why 3 opcodes when one only needs 1?), 
and also partly intended to be used as a VM 

Re: [fonc] Error trying to compile COLA

2012-02-27 Thread BGB

On 2/27/2012 10:31 AM, David Harris wrote:

Alan ---

I appreciate both your explanation and what you are doing.  Of course 
jealousy comes into it, because you guys appear to be having a lot of 
fun mixed in with your hard work, and I would love to be part of that.  I 
know where I would be breaking down the doors if I was starting a 
masters or doctorate.   However, I made my choices a long time 
ago, and so will have to live vicariously through your reports.  The 
constraint system, a la Sketchpad, is a laudable experiment and I 
would love to see a hand-constructible DBjr.  You seem to be 
approaching a much more understandable and malleable system, and 
achieving more of the promise of computers as imagined in the sixties 
and seventies, rather than what seems to be the more mundane and 
opaque conglomerate that is generally the case now.




WRT the mundane opaque conglomerate:
I sort of had assumed that is how things always were and (presumably) 
always would be, just with the assumption that things were becoming more 
open due to both the existence of FOSS, and the rising popularity of 
scripting languages (Scheme, JavaScript, Python, ...).


like, in contrast to the huge expensive closed systems of the past 
(like, the only people who really know how any of it works were the 
vendors, and most of this information was kept secret).


I had sort of guessed that the push towards small closed embedded 
systems (such as smart-phones) was partly a move to try to 
promote/regain vendor control over the platforms (vs the relative 
user-freedom present on PCs).



(I don't know if I will ever go for a masters or doctorate though, as I 
have been in college long enough just going for an associates' degree, 
and assume I will be trying to find some way to get a job or money or 
similar...).




Keep up the excellent work,
David


On Monday, February 27, 2012, Alan Kay alan.n...@yahoo.com wrote:

 Hi Julian
 I should probably comment on this, since it seems that the STEPS 
reports haven't made it clear enough.

 STEPS is a science experiment not an engineering project.



I had personally sort of assumed this, which is why I had thought it 
acceptable to mention my own ideas and efforts as well, which would be a 
bit more off-topic if the focus were on a single product or piece of 
technology (like, say, hanging around on the LLVM or Mono lists, writing 
about code-generators or VM technology in general, rather than the topic 
being restricted to LLVM or Mono, which is the focus of these lists).


but, there are lots of tradeoffs, say, between stuff in general and 
pursuit of trying to get a marketable product put together, so trying 
for the latter to some extent impedes the former in my case.


although, I would personally assume everyone decides and acts 
independently, in pursuit of whatever is of most benefit to themselves, 
and make no claim to have any sort of absolute position.


(decided to leave out me going off into philosophy land...).


but, anyways, I tend to prefer the personal freedom to act in ways which 
I believe to be in my best interests, rather than being endlessly judged 
by others for random development choices (such as choosing to write a 
piece of code to do something when there is a library for that), like, 
if I can easily enough throw together a piece of code to do something, 
why should I necessarily subject myself to the hassles of a dependency 
on some random 3rd party library, and why do other people feel so 
compelled to make derisive comments for such a decision?


and, I would assume likewise for others. like, if it works, who cares?
like, if I write a piece of code, what reason would I have to think this 
somehow obligates other people to use it, and what reason do other 
people seem to have to believe that I think that it does?
like, what if I just do something, and maybe people might consider using 
it if they find it useful, can agree with the license terms, ... ?
and, if in-fact it sucks too hard for anyone else to have much reason to 
care, or is ultimately a dead-end, what reason do they have to care?

...

but, I don't think it means I have to keep it all secret either, but I 
don't really understand people sometimes... (though, I guess I am 
getting kind of burnt out sometimes of people so often being judgmental...).


or, at least, this is how I see things.


 It is not at all about making and distributing an operating system 
etc., but about trying to investigate the tradeoffs between problem 
oriented languages that are highly fitted to problem spaces vs. 
what it takes to design them, learn them, make them, integrate them, 
add pragmatics, etc.
 Part of the process is trying many variations in interesting (or 
annoying) areas. Some of these have been rather standalone, and some 
have had some integration from the start.
 As mentioned in the reports, we made Frank -- tacking together some 
of the POLs that were done as satellites -- to 

Re: [fonc] Error trying to compile COLA

2012-02-27 Thread BGB

On 2/27/2012 1:27 PM, David Girle wrote:

I am interested in the embedded uses of Maru, so I cannot comment on
how to get from here to a Frank-like GUI.  I have no idea how many
others on this list are interested in the Internet of Things (IoT),
but I expect parts of Frank will be useful in that space.  Maybe 5kLOC
will bring up a connected, smart sensor system, rather than the 20kLOC
target VPRI have in mind for a programming system.

David


IoT: had to look it up, but it sounds like something which could easily 
turn very cyber-punky or end up being abused in some sort of dystopic 
future scenario. accidentally touch some random object and suddenly the 
person has a price on their head and police jumping in through their 
window armed with automatic weapons or something...


and escape is difficult as doors will automatically lock on their 
approach, and random objects will fly into their path as they try to 
make a run for it, ... (because reality itself has something akin to the 
Radiant AI system from Oblivion or Fallout 3).


(well, ok, not that I expect something like this would necessarily 
happen... or that the idea is necessarily a bad idea...).



granted, as for kloc:
code has to go somewhere, I don't think 5 kloc is going to work.

looking at the Maru stuff from earlier, I would have to check, but I 
suspect it may already go over that budget (by quickly looking at a few 
files and adding up the line counts).



admittedly, I don't as much believe in the tiny kloc goal, since as-is, 
getting a complete modern computing system down into a few Mloc would 
already be a bit of an achievement (vs, say, a 14 Mloc kernel running a 
4 Mloc web browser, on a probably 10 Mloc GUI framework, all being 
compiled by a 5 Mloc C compiler, add another 1 Mloc if one wants a 3D 
engine, ...).



yes, one can make systems much smaller, but typically at a cost in terms 
of functionality, like one has a small OS kernel that only runs on a 
single hardware configuration, a compiler that only supports a single 
target, a web browser which only supports very minimal functionality, ...


absent a clearly different strategy (what the VPRI people seem to be 
pursuing), the above outcome would not be desirable, and it is generally 
much more desirable to have a feature-rich system, even if potentially 
the LOC counts are far beyond the ability of any given person to 
understand (and if the total LOC for the system, is likely, *huge*...).


very coarse estimates:
a Linux installation DVD is 3.5 GB;
assume for a moment that nearly all of this is (likely) compressed 
program-binary code, and assuming that code tends to compress to approx 
1/4 its original size with Deflate;

so, probably 14GB of binary code;
my approx 1Mloc app compiles to about 16.5 MB of DLLs;
assuming everything else holds (and the basic assumptions are correct), 
this would work out to ~ 849 Mloc.
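
redoing the same back-of-the-envelope arithmetic (decimal units assumed 
here):

#include <stdio.h>

int main(void) {
    double dvd_bytes      = 3.5e9;            /* ~3.5 GB install DVD             */
    double binary_bytes   = dvd_bytes * 4.0;  /* assume Deflate packed it ~4:1   */
    double bytes_per_kloc = 16.5e6 / 1000.0;  /* ~1 Mloc app -> ~16.5 MB of DLLs */
    double kloc           = binary_bytes / bytes_per_kloc;
    printf("~%.0f Mloc\n", kloc / 1000.0);    /* prints roughly 849 Mloc         */
    return 0;
}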


(a more realistic estimate would need to find how much is program code 
vs data files, and maybe find a better estimate of the binary-size to 
source-LOC mapping).



granted, there is probably a lot of redundancy which could likely be 
eliminated, and if one assumes it is a layered tower strategy (a large 
number of rings, with each layer factoring out most of what resides 
above it), then likely a significant reduction would be possible.


the problem is, one is still likely to have an initially fairly large 
wind up time, so ultimately the resulting system, is still likely to 
be pretty damn large (assuming it can do everything a modern OS does, 
and more, it is still likely to be probably well into the Mloc range).



but, I could always be wrong here...



On Mon, Feb 27, 2012 at 7:01 AM, Martin Baldan martino...@gmail.com  wrote:

David,

Thanks for the link. Indeed, now I see how to run  eval with .l example
files. There are also .k  files, which I don't know how they differ from
those, except that .k files are called with ./eval filename.k while .l
files are called with ./eval repl.l filename.l where filename is the
name of the file. Both kinds seem to be made of Maru code.

I still don't know how to go from here to a Frank-like GUI. I'm reading
other replies which seem to point that way. All tips are welcome ;)

-Martin


On Mon, Feb 27, 2012 at 3:54 AM, David Girle davidgi...@gmail.com  wrote:

Take a look at the page:

http://piumarta.com/software/maru/

it has the original version you have + current.
There is a short readme in the current version with some examples that
will get you going.

David





Re: [fonc] Error trying to compile COLA

2012-02-27 Thread BGB

On 2/27/2012 10:08 PM, Julian Leviston wrote:

Structural optimisation is not compression. Lurk more.


probably will drop this, as arguing about all this is likely pointless 
and counter-productive.


but, is there any particular reason for why similar rules and 
restrictions wouldn't apply?


(I personally suspect that similar applies to nearly all forms of 
communication, including written and spoken natural language, and a 
claim that some X can be expressed in Y units does seem a fair amount 
like a compression-style claim).



but, anyways, here is a link to another article:
http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem


Julian

On 28/02/2012, at 3:38 PM, BGB wrote:


granted, I remain a little skeptical.

I think there is a bit of a difference though between, say, a log table, and a 
typical piece of software.
a log table is, essentially, almost pure redundancy, hence why it can be 
regenerated on demand.

a typical application is, instead, a big pile of logic code for a wide range of 
behaviors and for dealing with a wide range of special cases.


executable math could very well be functionally equivalent to a highly compressed program, but 
note in this case that one needs to count both the size of the compressed program, and also the size of the 
program needed to decompress it (so, the size of the system would also need to account for the compiler and 
runtime).

although there is a fair amount of redundancy in typical program code (logic 
that is often repeated,  duplicated effort between programs, ...), eliminating 
this redundancy would still have a bounded reduction in total size.

increasing abstraction is likely to, again, be ultimately bounded (and, often, 
abstraction differs primarily in form, rather than in essence, from that of 
moving more of the system functionality into library code).


much like with data compression, the concept commonly known as the Shannon 
limit may well still apply (itself setting an upper limit to how much is 
expressible within a given volume of code).



Re: [fonc] Error trying to compile COLA

2012-02-26 Thread BGB

On 2/25/2012 7:48 PM, Julian Leviston wrote:
As I understand it, Frank is an experiment that is an extended version 
of DBJr that sits atop lesserphic, which sits atop gezira which sits 
atop nile, which sits atop maru all of which utilise ometa and 
the worlds idea.


If you look at the http://vpri.org/html/writings.php page you can see 
a pattern of progression that has emerged to the point where Frank 
exists. From what I understand, maru is the finalisation of what began 
as pepsi and coke. Maru is a simple s-expression language, in the same 
way that pepsi and coke were. In fact, it looks to have the same 
syntax. Nothing is the layer underneath that is essentially a symbolic 
computer - sitting between maru and the actual machine code (sort of 
like an LLVM assembler if I've understood it correctly).




yes, S-Expressions can be nifty.
often, they aren't really something one advertises, or uses as a 
front-end syntax (much like Prototype-OO and delegation: it is a 
powerful model, but people also like their classes).


so, one ends up building something with a C-like syntax and 
Class/Instance OO, even if much of the structure internally is built 
using lists and Prototype-OO. if something is too strange, it may not be 
received well though (like people may see it and be like just what the 
hell is this?). better then if everything is just as could be expected.



in my case, they are often printed out in debugging messages though, as 
a lot of my stuff internally is built using lists (I ended up recently 
devising a specialized network protocol for, among other things, sending 
compressed list-based messages over a TCP socket).


probably not wanting to go too deeply into it, but:
it directly serializes/parses the lists from a bitstream;
a vaguely JPEG-like escape-tag system is used;
messages are Huffman-coded, and make use of both a value MRU/MTF and 
LZ77 compression (many parts of the coding also borrow from Deflate);
currently, I am (in my uses) getting ~60% additional compression vs 
S-Expressions+Deflate, and approximately 97% compression vs plaintext 
(plain Deflate got around 90% for this data).


the above was mostly used for sending scene-graph updates and similar in 
my 3D engine, and is maybe overkill, but whatever (although, luckily, it 
means I can send a lot more data while staying within a reasonable 
bandwidth budget, as my target was 96-128 kbps, and I am currently using 
around 8 kbps, vs closer to the 300-400 kbps needed for plaintext).



They've hidden Frank in plain sight. He's a patch-together of all 
their experiments so far... which I'm sure you could do if you took 
the time to understand each of them and had the inclination. They've 
been publishing as much as they could all along. The point, though, is 
you have to understand each part. It's no good if you don't understand it.




possibly, I don't understand a lot of it, but I guess part of it may be 
knowing what to read.
there were a few nifty things to read here and there, but I wasn't 
really seeing the larger whole I guess.



If you know anything about Alan  VPRI's work, you'd know that their 
focus is on getting children this stuff in front as many children as 
possible, because they have so much more ability to connect to the 
heart of a problem than adults. (Nothing to do with age - talking 
about minds, not bodies here). Adults usually get in the way with 
their stuff - their knowledge sits like a kind of a filter, 
denying them the ability to see things clearly and directly connect to 
them unless they've had special training in relaxing that filter. We 
don't know how to be simple and direct any more - not to say that it's 
impossible. We need children to teach us meta-stuff, mostly this 
direct way of experiencing and looking, and this project's main aim 
appears to be to provide them (and us, of course, but not as 
importantly) with the tools to do that. Adults will come secondarily - 
to the degree they can't embrace new stuff ;-). This is what we need 
as an entire populace - to increase our general understanding - to 
reach breakthroughs previously not thought possible, and fast. Rather 
than changing the world, they're providing the seed for children to 
change the world themselves.


there are merits and drawbacks here.

(what follows here is merely my opinion at the moment, as stated at a 
time when I am somewhat in need of going to sleep... ).



granted, yes, children learning stuff is probably good, but the risk is 
also that children (unlike adults) are much more likely to play things 
much more fast and loose regarding the law, and might show little 
respect for existing copyrights and patents, and may risk creating 
liability issues, and maybe bringing lawsuits to their parents (like, 
some company decides to sue the parents because little Johnny just 
went and infringed on several of their patents, or used some of their IP 
in a personal project, ...).


( and, in my case, I learned 

Re: [fonc] Error trying to compile COLA

2012-02-26 Thread BGB

On 2/26/2012 3:53 AM, Julian Leviston wrote:
What does any of what you just said have to do with the original 
question about COLA?




sorry, I am really not good with topic, was just trying to respond to 
what was there, but it was 2AM...

(hmm, maybe I should have waited until morning? oh well...).

as for getting COLA to compile, I have little idea, hence why I did not 
comment on this.
it seemed to be going off in the direction of motivations, ... which I 
can comment on.


likewise, I can comment on Prototype OO and S-Expressions, since I have 
a lot more experience using these, ... (both, just so happen, are things 
that seem to be seen very negatively by average programmers, vs say 
Class/Instance and XML, ...). however, both continue to be useful, so 
they don't just go away (like, Lists/S-Exps are easier to work with than 
XML via DOM or similar, ...).



but, yes, maybe I will go back into lurk mode now...


Julian

On 26/02/2012, at 9:25 PM, BGB wrote:


On 2/25/2012 7:48 PM, Julian Leviston wrote:
As I understand it, Frank is an experiment that is an extended 
version of DBJr that sits atop lesserphic, which sits atop gezira 
which sits atop nile, which sits atop maru all of which 
utilise ometa and the worlds idea.


If you look at the http://vpri.org/html/writings.php page you can 
see a pattern of progression that has emerged to the point where 
Frank exists. From what I understand, maru is the finalisation of 
what began as pepsi and coke. Maru is a simple s-expression 
language, in the same way that pepsi and coke were. In fact, it 
looks to have the same syntax. Nothing is the layer underneath that 
is essentially a symbolic computer - sitting between maru and the 
actual machine code (sort of like an LLVM assembler if I've 
understood it correctly).




yes, S-Expressions can be nifty.
often, they aren't really something one advertises, or uses as a 
front-end syntax (much like Prototype-OO and delegation: it is a 
powerful model, but people also like their classes).


so, one ends up building something with a C-like syntax and 
Class/Instance OO, even if much of the structure internally is built 
using lists and Prototype-OO. if something is too strange, it may not 
be received well though (like people may see it and be like just 
what the hell is this?). better then if everything is just as could 
be expected.



in my case, they are often printed out in debugging messages though, 
as a lot of my stuff internally is built using lists (I ended up 
recently devising a specialized network protocol for, among other 
things, sending compressed list-based messages over a TCP socket).


probably not wanting to go too deeply into it, but:
it directly serializes/parses the lists from a bitstream;
a vaguely JPEG-like escape-tag system is used;
messages are Huffman-coded, and make use of both a value MRU/MTF and 
LZ77 compression (many parts of the coding also borrow from Deflate);
currently, I am (in my uses) getting ~60% additional compression vs 
S-Expressions+Deflate, and approximately 97% compression vs plaintext 
(plain Deflate got around 90% for this data).


the above was mostly used for sending scene-graph updates and similar 
in my 3D engine, and is maybe overkill, but whatever (although, 
luckily, it means I can send a lot more data while staying within a 
reasonable bandwidth budget, as my target was 96-128 kbps, and I am 
currently using around 8 kbps, vs closer to the 300-400 kbps needed 
for plaintext).



They've hidden Frank in plain sight. He's a patch-together of all 
their experiments so far... which I'm sure you could do if you took 
the time to understand each of them and had the inclination. They've 
been publishing as much as they could all along. The point, though, 
is you have to understand each part. It's no good if you don't 
understand it.




possibly, I don't understand a lot of it, but I guess part of it may 
be knowing what to read.
there were a few nifty things to read here and there, but I wasn't 
really seeing the larger whole I guess.



If you know anything about Alan  VPRI's work, you'd know that their 
focus is on getting children this stuff in front as many children as 
possible, because they have so much more ability to connect to the 
heart of a problem than adults. (Nothing to do with age - talking 
about minds, not bodies here). Adults usually get in the way with 
their stuff - their knowledge sits like a kind of a filter, 
denying them the ability to see things clearly and directly connect 
to them unless they've had special training in relaxing that filter. 
We don't know how to be simple and direct any more - not to say that 
it's impossible. We need children to teach us meta-stuff, mostly 
this direct way of experiencing and looking, and this project's main 
aim appears to be to provide them (and us, of course, but not as 
importantly) with the tools to do that. Adults will come secondarily 
- to the degree they can't embrace new stuff

[fonc] OT? S-Exps and network (Re: Error trying to compile COLA)

2012-02-26 Thread BGB

On 2/26/2012 11:33 AM, Martin Baldan wrote:


Guys, I find these off_topic comments (as in not strictly about my 
idst compilation  problem)  really interesting. Maybe I should start a 
new thread? Something like «how can a newbie start playing with this 
technology?». Thanks!




well, ok, hopefully everyone can tolerate my OT-ness here...
(and hopefully, my forays deep into the land of trivia...).


well, ok, I am not personally associated with VPRI though, mostly just 
lurking and seeing if any interesting topics come up (but, otherwise, am 
working independently on my own technology, which includes some VM stuff 
and a 3D engine).


( currently, no code is available online, but parts can be given on 
request via email or similar if anyone is interested, likewise goes for 
specs, ... )



recently, I had worked some on adding networking support for my 3D 
engine, but the protocol is more generic (little about it is 
particularly specific to 3D gaming, and so could probably have other uses).


internally, the messaging is based on lists / S-Expressions (it isn't 
really clear which term is better, as lists is too generic, and 
S-Expressions more refers to the syntax, rather than their in-program 
representation... actually it is a similar terminology problem with XML, 
where the term may ambiguously either be used for the textual 
representation, or for alternative non-text representations of the 
payload, IOW: Binary XML, and similar).


but, either way, messages are passed point-to-point as lists, typically 
using a structure sort of like:

(wdelta ... (delta 315 (org 140 63 400) (ang 0 0 120) ...) ...)

the messages are free-form (there is no schema, as the system will try 
to handle whatever messages are thrown at it, but with the typical 
default behavior for handlers of ignoring anything which isn't 
recognized, and the protocol/codec is agnostic to the types or format of 
the messages it is passing along, as long as they are built 
from lists or similar...).


as-is, these expressions are not eval'ed per-se, although the typical 
message handling could be itself regarded as a crude evaluator (early 
versions of my original Scheme interpreter were not actually all that 
much different). theoretically, things like ASTs or Scheme code or 
whatever could be easily passed over the connection as well.


in-program, the lists are dynamically typed, and composed primarily of 
chains of cons cells, with symbols, fixnums, flonums, strings, 
... comprising most of the structure (these are widely used in my 
projects, but aren't particularly unique to my project, though seemingly 
less well-known to most more mainstream programmers).


as-is, currently a small subset of the larger typesystem is handled, and 
I am currently ignoring the matter of list cycles or object-identity 
(data is assumed acyclic, and currently everything is passed as a copy).



at the high-level, the process currently mostly looks like:
process A listens on a port, and accepts connections, and then handles 
any messages which arrive over these connections, and may transmit 
messages in response.

process B connects to A, and may likewise send and receive messages.

currently, each end basically takes whatever messages are received, and 
passes them off to message-processing code (walks the message 
expressions and does whatever). currently, queues are commonly used for 
both incoming and outgoing messages, and most messages are asynchronous.


neither end currently needs to worry about the over-the-wire format of 
these lists.
a system resembling XMPP could probably also be built easily enough (and 
may end up being done anyways).


lists were chosen over XML mostly for sake of them being more convenient 
to work with.



actually, I did something similar to all this long ago, but this effort 
fell into oblivion and similar had not been done again until fairly 
recently (partly involving me reviving some old forgotten code of mine...).




now, on to the protocol itself:
it is currently built over raw TCP sockets (currently with nodelay set);
messages are encoded into lumps, which are basically tags followed by 
message data (lumps are also used for stream-control purposes, and may 
relay other types of messages as well).


currently, a system of tags resembling the one in JPEG is used, except 
that the tags are 4 bytes (with 3 bytes of magic and 1 byte to 
indicate the tag type, a longer magic was used to reduce the number of 
times it would need to be escaped in a bitstream). currently, no length 
is used (instead, one knows a complete message lump has been received 
because the end-tag is visible). this currently means an 8 byte overhead 
per-message lump due to tags (there are also Deflate lumps, but these 
have the added overhead of a decoded-length and a checksum, needed for 
technical reasons, leading to 16 bytes of overhead).
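
for illustration, a rough sketch of this kind of framing (the magic 
value and tag codes here are made up for the example, not the actual 
protocol constants):

#include <stdio.h>
#include <string.h>

enum { TAG_MSG_BEGIN = 0x01, TAG_MSG_END = 0x02 };
static const unsigned char MAGIC[3] = { 0xF7, 0x3A, 0xC5 };  /* hypothetical */

/* emit one 4-byte tag: 3 bytes of magic plus 1 byte of tag type */
static void emit_tag(unsigned char *out, int *n, unsigned char type) {
    memcpy(out + *n, MAGIC, 3); out[*n + 3] = type; *n += 4;
}

/* wrap one message lump: begin tag, payload bitstream, end tag;
   no length field, so 8 bytes of overhead per lump, as noted above */
static int emit_lump(unsigned char *out, const unsigned char *payload, int len) {
    int n = 0;
    emit_tag(out, &n, TAG_MSG_BEGIN);
    memcpy(out + n, payload, len); n += len;  /* real code would escape any magic here */
    emit_tag(out, &n, TAG_MSG_END);
    return n;
}

int main(void) {
    unsigned char buf[64];
    int n = emit_lump(buf, (const unsigned char *)"(delta ...)", 11);
    printf("lump is %d bytes (11 payload + 8 overhead)\n", n);
    return 0;
}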


message lumps are themselves a bitstream, and are currently built out of 
a collection of 

Re: [fonc] Error trying to compile COLA

2012-02-26 Thread BGB

On 2/26/2012 8:23 PM, Julian Leviston wrote:
I'm afraid that I am in no way a teacher of this. I'm in no way 
professing to know what I'm talking about - I've simply given you my 
observations. Perhaps we can help each other, because I'm intensely 
interested, too... I want to understand this stuff because it is chock 
full of intensely powerful ideas.




yep, generally agreed.

I generally look for interesting or useful ideas, but admittedly have a 
harder time understanding a lot of what is going on or being talked 
about here (despite, I think, generally being fairly knowledgeable 
about most things programming-related).


there may be a domain mismatch or something though.


admittedly, I have not generally gotten as far as being really able to 
understand Smalltalk code either, nor for that matter languages too much 
different from vaguely C-like Procedural/OO languages, except maybe 
ASM, which I personally found not particularly difficult to 
learn/understand or read/write, the main drawbacks of ASM being its 
verbosity and portability issues.


granted, this may be partly a matter of familiarity or similar, since I 
encountered both C and ASM (along with BASIC) at fairly young age (and 
actually partly came to understand C originally to some degree by 
looking at the compiler's ASM output, getting a feel for how the 
constructs mapped to ASM operations, ...).



The elitism isn't true... I've misrepresented what I was meaning to 
say - I simply meant that people who aren't fascinated enough to 
understand won't have the drive to understand it... until it gets to a 
kind of point where enough people care to explain it to the people who 
take longer to understand... This makes sense. It's how it has always 
been. Sorry for making it sound elitist. It's not, I promise you. When 
your time is limited, though (as the VPRI guys' time is), one needs to 
focus on truly expounding it to as many people as you can, so one can 
teach more teachers first... one teaches the people who can understand 
the quickest first, and then they can propagate and so on... I hope 
this is clear.




similarly, I was not meaning to imply that children having knowledge is 
a bad thing, but sadly, it seems to run contrary to common cultural 
expectations.


granted, it is possibly the case that some aspects of culture are broken 
in some ways, namely, that children are kept in the dark, and there is 
this notion that everyone should be some sort of unthinking and passive 
consumer. except, of course, for the content producers, which would 
generally include both the media industry (TV, movies, music, ...) as a 
whole and to a lesser extent the software industry, with a sometimes 
questionable set of Intellectual Property laws in an attempt to uphold 
the status quo (not that IP is necessarily bad, but it could be better).


I guess this is partly why things like FOSS exist.
but, FOSS isn't necessarily entirely perfect either.


but, yes, both giving knowledge and creating a kind of safe haven seem 
like reasonable goals, where one can be free to tinker around with 
things with less risk from some overzealous legal department somewhere.


also nice would be if people were less likely to accuse all of ones' 
efforts of being useless, but sadly, this probably isn't going to happen 
either.



this is not to imply that I personally necessarily have much to offer, 
as beating against a wall may make one fairly well aware of just how far 
there is left to go, as relevance is at times a rather difficult goal 
to reach.


admittedly, I am maybe a bit dense as well. I have never really been 
very good with abstract concepts (nor am I particularly intelligent 
in the strict sense). but, I am no one besides myself (and have no one 
besides myself to fall back on), so I have to make due (and hopefully 
try to avoid annoying people too much, and causing them to despise me, ...).


like, the only way out is through and similar.


I don't think it was a prank. It's not really hidden at all. If you 
pay attention, all the components of Frank are there... like I said. 
It's obviously missing certain things like Nothing, and other 
optimisations, but for the most part, all the tech is present.


sorry for asking, but is there any sort of dense people friendly 
version, like maybe a description on the Wiki or something?...


like, so people can get a better idea of what things are about and how 
they all work and fit together?... (like, in the top-down description 
kind of way?).



My major stumbling block at the moment is understanding OMeta fully. 
This is possibly the most amazing piece of work I've seen in a long, 
long time, and there's no easy explanation of it, and no really simple 
explanation of the syntax, either. There are the papers, and source 
code and the sandboxes, but I'm still trying to understand how to use 
it. It's kind of huge. I think perhaps I need to get a grounding in 
PEGs before I start on OMeta because 

Re: [fonc] Error trying to compile COLA

2012-02-26 Thread BGB

On 2/26/2012 11:43 PM, Julian Leviston wrote:

Hi,

Comments line...

On 27/02/2012, at 5:33 PM, BGB wrote:



I don't think it was a prank. It's not really hidden at all. If you 
pay attention, all the components of Frank are there... like I said. 
It's obviously missing certain things like Nothing, and other 
optimisations, but for the most part, all the tech is present.


sorry for asking, but is there any sort of dense people friendly 
version, like maybe a description on the Wiki or something?...


like, so people can get a better idea of what things are about and 
how they all work and fit together?... (like, in the top-down 
description kind of way?).




I don't think this is for people who aren't prepared to roll up their 
sleeves and try things out. For a start, learn SmallTalk. It's not 
hard. Go check out squeak. There are lots of resources to learn SmallTalk.




could be.

I messed with Squeak some before, but at the time got 
confused/discouraged and gave up after a little while.







My major stumbling block at the moment is understanding OMeta fully. 
This is possibly the most amazing piece of work I've seen in a long, 
long time, and there's no easy explanation of it, and no really 
simple explanation of the syntax, either. There are the papers, and 
source code and the sandboxes, but I'm still trying to understand 
how to use it. It's kind of huge. I think perhaps I need to get a 
grounding in PEGs before I start on OMeta because there seems to be 
a lot of assumed knowledge there. Mostly I'm having trouble with the 
absolute, complete basics.


Anyway I digress... have you had a look at this file?:

http://piumarta.com/software/maru/maru-2.1/test-pepsi.l

Just read the whole thing - I found it fairly interesting :) He's 
built pepsi on maru there... that's pretty fascinating, right? Built 
a micro smalltalk on top of the S-expression language... and then 
does a Fast Fourier Transform test using it...




my case: looked some, but not entirely sure how it works though.



You could do what I've done, and read the papers and then re-read them 
and re-read them and re-read them... and research all references you 
find (the whole site is totally full of references to the entire of 
programming history). I personally think knowing LISP and SmallTalk, 
some assembler, C, Self, Javascript and other things is going to be 
incredibly helpful. Also, math is the most helpful! :)




ASM and C are fairly well known to me (I currently have a little over 1 
Mloc of C code to my name, so I can probably fairly safely say I know 
C...).



I used Scheme before, but eventually gave up on it, mostly due to:
problems with the Scheme community (seemed to be very fragmented and 
filled with elitism);
I generally never really could get over the look of S-Expression 
syntax (and also the issue that no one was happy unless the code had 
Emacs formatting, but I could never really get over Emacs either);
I much preferred C style control flow (which makes arbitrary 
continues/breaks easier), whereas changing flow through a loop in Scheme 
often meant seriously reorganizing it;

...

Scheme remains as a notable technical influence though (and exposure to 
Scheme probably had a fairly notable impact on much of my subsequent 
coding practices).



JavaScript I know acceptably, given my own scripting language is partly 
based on it.
however, I have fairly little experience using it in its original 
context: for fiddling around with web-page layouts (and never really got 
into the whole AJAX thing).


I messed around with Self once before, but couldn't do much 
interesting with it; I found the language spec and some papers on 
the VM fairly interesting though, so I scavenged a bunch of ideas from there.


the main things I remember:
graph of objects, each object being a bag of slots, with the ability 
to delegate to any number of other objects, and the ability to handle 
cyclic delegation loops;

using a big hash-table for lookups and similar;
...

many of those ideas were incorporated into my own language/VM (with 
tweaks and extensions: such as my VM has lexical scoping, and I later 
added delegation support to the lexical environment as well, ...). (I 
chose free-form / arbitrary delegation rather than the single-delegation 
of JavaScript, due to personally finding it more useful and interesting).


I had noted, however, that my model differs in a few significant ways 
from the description of Lieberman Prototypes on the site (my clone 
operation directly copies objects, rather than creating a new empty 
object with copy-on-write style semantics).
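
for illustration, a minimal sketch of this kind of multi-delegation slot 
lookup (the names, fixed limits, and cycle-guard here are hypothetical, 
not the actual VM code):

#include <stdio.h>
#include <string.h>

#define MAX_SLOTS 8
#define MAX_DELEGATES 4

typedef struct Obj Obj;
struct Obj {
    const char *slot_name[MAX_SLOTS];
    int         slot_val[MAX_SLOTS];
    int         n_slots;
    Obj        *delegate[MAX_DELEGATES];
    int         n_delegates;
    int         mark;   /* visit flag so cyclic delegation doesn't loop */
};

static void set_slot(Obj *o, const char *name, int val) {
    o->slot_name[o->n_slots] = name;
    o->slot_val[o->n_slots++] = val;
}
static void add_delegate(Obj *o, Obj *d) { o->delegate[o->n_delegates++] = d; }

static int lookup(Obj *o, const char *name, int *out) {
    int i;
    if (!o || o->mark) return 0;          /* already visited: stop */
    o->mark = 1;
    for (i = 0; i < o->n_slots; i++)      /* local slots first */
        if (!strcmp(o->slot_name[i], name)) {
            *out = o->slot_val[i]; o->mark = 0; return 1;
        }
    for (i = 0; i < o->n_delegates; i++)  /* then each delegate in turn */
        if (lookup(o->delegate[i], name, out)) { o->mark = 0; return 1; }
    o->mark = 0;
    return 0;
}

int main(void) {
    Obj a = {0}, b = {0}, c = {0};
    int v;
    set_slot(&a, "x", 1);
    set_slot(&b, "y", 2);
    add_delegate(&c, &a); add_delegate(&c, &b);  /* c delegates to both a and b */
    add_delegate(&a, &c);                        /* ...and a cycle back to c    */
    if (lookup(&c, "y", &v))
        printf("y = %d\n", v);  /* finds y via b; the cycle is harmless */
    return 0;
}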



the current beast looks like a mix of like a C/JS/AS mix on the surface, 
but internally may have a bit more in common than Scheme and Self than 
it does with other C-like languages (much past the syntax, and 
similarities start to fall away).


but, yet, I can't really understand SmallTalk code...


granted, math is a big weak area of mine:
apparently, my effective

[fonc] misc: bytecode and level of abstraction

2012-01-22 Thread BGB
I don't know if this topic has probably been already beat to death, or 
is otherwise not very interesting or relevant here, but alas...


it is a question, though, what the ideal level of abstraction (and 
generality) in a VM is.



for example, LLVM is fairly low level (using a statically-typed 
SSA-form as an IR, and IIRC a partially-decomposed type-system).
the JVM is a little higher level, being a statically-typed stack machine 
(using primitive types for stack elements and operations), with an 
abstracted notion of in-memory class layout;
MSIL/CIL is a little higher still, abstracting the types out of the 
stack elements (all operations work against inferred types, and unlike 
the JVM there is no notion of "long and double take 2 stack slots", ...).


both the JVM and MSIL tend to declare types from the POV of their point 
of use, rather than from their point of declaration. hence, the load 
or call operations directly reference a location giving the type of 
the variable.


similarly, things like loads / stores / method-calls/dispatching / ... 
are resolved prior to emitting the bytecode.



in my VMs, I have tended to leave the types at the point of 
declaration, hence all the general load/store/call operations merely 
link to a symbolic-reference.


one of my attempts (this VM never got fully implemented) would have 
attempted to pre-resolve all scoping (like in the JVM or .NET, but ran 
into problems WRT a complex scoping model), but I have not generally 
done this.


my current VM only does so for the lexical scope, which is treated 
conceptually as a stack:
all variable declarations are pushed to the lexical environment, and 
popped when a given frame exits;
technically, function arguments are pushed in left-to-right order, 
meaning that (counter-intuitively) their index numbers are reverse of 
their argument position;
unlike in JBC or MSIL, the index does not directly reference a declared 
variable's declaration, merely its relative stack position, hence the 
declaration also needs to be inferred;
note that it being (conceptually) a stack also does not imply it is 
physically also represented as a stack.


hence, in the above case, the bytecode is not too far removed from the 
source code.
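
for illustration, a small sketch of this indexing scheme (hypothetical, 
not the actual VM code): declarations go onto a conceptual lexical 
stack, and a variable is referenced by its position relative to the top, 
so arguments pushed left-to-right end up with indices in reverse of 
their argument order.

#include <stdio.h>
#include <string.h>

static const char *lex_stack[64];
static int lex_top = 0;

static void push_decl(const char *name) { lex_stack[lex_top++] = name; }

/* index 0 = most recently declared variable, counting down the stack */
static int lookup_index(const char *name) {
    int i;
    for (i = lex_top - 1; i >= 0; i--)
        if (!strcmp(lex_stack[i], name))
            return lex_top - 1 - i;
    return -1;
}

int main(void) {
    /* compiling something like: function f(a, b, c) { var t; ... } */
    push_decl("a"); push_decl("b"); push_decl("c");  /* args, left to right */
    push_decl("t");                                  /* a local in the body */
    printf("a=%d b=%d c=%d t=%d\n",
           lookup_index("a"), lookup_index("b"),
           lookup_index("c"), lookup_index("t"));    /* prints a=3 b=2 c=1 t=0 */
    return 0;
}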



I guess one can argue that, as one moves up the abstraction ladder, 
the amount of work needed in making the VM becomes larger (it deals with 
far more semantic issues, and is arguably more specific to the 
particular languages in use, ...).


I suspect it is much less clear cut than this though, for example, 
targeting a dynamic-language (such as Scheme or JavaScript) to a VM such 
as LLVM or JBC (pre JDK7) essentially requires implementing much of the 
VM within the VM, and may ultimately reduce how effectively the VM can 
optimize the code (rather than merely dealing with the construct, now 
the first VM also has to deal with how the second VM's constructs were 
implemented on top of the first VM).


a secondary issue is when the restrictions of such a VM (particularly 
the JVM) impede what can be effectively expressed within the VM, running 
counter to the notion that higher abstraction necessarily equates to 
greater semantic restrictions.



the few cases I can think of where the argument does make a 
difference include:


the behavior of variable scoping (mostly moot for JVM, which pretty much 
hard-codes this);
the effects of declaration modifiers (moot regarding JVM and .NET, which 
manage modifiers internally).


the shape of the type-system and numeric tower (likewise as the above, 
although neither enforces a particular type-system, neither gives much 
room for it to be effectively done much differently, likewise in LLVM 
and ASM one is confined to whatever is provided by the HW).


the behavior of specific operators as applied to specific types. this 
may arguably be a merit of the JVM and .NET vs my own VMs: since both 
VMs only perform operations directly against primitive types, the 
behavior of mixed-type cases is de-facto left to the language and 
compiler. this may be ultimately a moot point, as manual type-coercion 
or scope-qualified operator overloading could achieve the same ends. 
similarly, a high-level VM could also (simply) discard the notion of 
built-in/hard-coded operator+type semantics, and instead expect the 
compiled code to either overload operators or import a namespace 
containing the desired semantics (say, built-in or library-supplied 
overloaded operators). more-so, unlike the JVM and .NET strategies, this 
does not mandate static typing (prior to emitting bytecode) in order to 
achieve language-specific type-semantics.


in the above case (operators being a result of an implicit import), if 
Language-A disallows string+int, Language-B interprets it as "append 
the string (a) with int::toString(b)", and Language-C as "offset the 
string by int chars", well then, the languages can each do so without 
interfering with the others.
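
for illustration, a rough sketch of the "operators come from an imported 
namespace" idea (everything here is made up for the example, not an 
existing VM's API): each language imports its own handler for 
string+int, and the dispatcher just goes through whatever table is in 
scope.

#include <stdio.h>
#include <string.h>

typedef void (*add_str_int_fn)(const char *s, int i, char *out, int outsz);

/* Language B: "abc" + 1  ->  "abc1" (append the number as text) */
static void b_add(const char *s, int i, char *out, int outsz) {
    snprintf(out, outsz, "%s%d", s, i);
}

/* Language C: "abc" + 1  ->  "bc" (offset the string by N chars) */
static void c_add(const char *s, int i, char *out, int outsz) {
    snprintf(out, outsz, "%s", s + i);
}

typedef struct { const char *lang; add_str_int_fn add_str_int; } OpNamespace;

static const OpNamespace namespaces[] = {
    { "lang-b", b_add },
    { "lang-c", c_add },
    /* "lang-a" imports no handler, so string+int is simply an error there */
};

int main(void) {
    char out[64];
    int i;
    for (i = 0; i < 2; i++) {
        namespaces[i].add_str_int("abc", 1, out, sizeof(out));
        printf("%s: \"abc\" + 1 = \"%s\"\n", namespaces[i].lang, out);
    }
    return 0;
}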


...


or, in effect, I 

Re: [fonc] One more year?!

2012-01-22 Thread BGB

On 1/22/2012 5:30 PM, Dion Stewart wrote:

Is there a hard line between science and art?

I lean towards Richard Gabriel's and Kevin Sullivan's views on this one.

How do artists and scientists work? The same.

http://dreamsongs.com/Files/BetterScienceThroughArt.pdf



I was actually going to argue something vaguely similar, but was more 
busy with writing something else (a professional musician may not be 
necessarily that much different from a scientist or engineer, and a mad 
scientist may not necessarily be too much different from traditional 
notions of an artist).





 How do artists and scientists work? The same
On Jan 22, 2012, at 3:51 PM, Reuben Thomas wrote:

On 22 January 2012 21:26, Casey Ransberger casey.obrie...@gmail.com wrote:

Below.

On Jan 21, 2012, at 6:26 PM, BGB cr88...@gmail.com wrote:


like, for example, if a musician wanted to pursue various musical 
forms.
say, for example: a dubstep backbeat combined with rap-style lyrics 
sung
using a death-metal voice or similar, without the man (producers, 
...)

demanding all the time that they get a new album together


Only art is not science: it doesn't have pieces you can take apart and
reuse in the same way (technique does).

So it's not an analogy that works.

(I did a PhD in computer science, and I make my living as a singer.)

--
http://rrt.sc3d.org


Re: [fonc] One more year?!

2012-01-22 Thread BGB

On 1/22/2012 5:11 PM, Julian Leviston wrote:

On 23/01/2012, at 8:26 AM, Casey Ransberger wrote:


Below.

On Jan 21, 2012, at 6:26 PM, BGB cr88...@gmail.com  wrote:


like, for example, if a musician wanted to pursue various musical forms. say, for example: a dubstep backbeat combined 
with rap-style lyrics sung using a death-metal voice or similar, without the man (producers, ...) demanding 
all the time that they get a new album together (or that their fans and the man expect them to stay with 
their existing sound and theme), and if they just gave them something which was like and so wub-wub-wub, goes the 
sub-sub-sub, as the lights go blim-blim-blim, as shorty goes rub-run-run, on my hub-hub-hub, as my rims go 
spin-spin-spin or something... (all sung in deep growls and roars), at which point maybe the producers would be 
very unhappy (say, if he was hired on to be part of a tween-pop boy-band, and adolescent females may respond poorly to 
bass-filled wubbing growl-rap, or something...).


or such...

This is probably the raddest metaphor that I have ever seen on a mailing list.

BGB FTW!

P.S.

If you want to get this song out the door, I'm totally in. Dubsteprapmetal 
might be the next big thing. I can do everything except the drums. We should 
write an elegant language for expressing musical score in OMeta and use a 
simulated orchestra!

Oh come on, Dub Step Rap Metal has been done before... Korn is basically what 
that is...  Just because you're not CALLING it dubstep doesn't mean it doesn't 
have the dubstep feel.


I was more giving it as an example of basically wanting to do one thing 
while being obligated (due to prior work) to do something very different.


say, if a musician (or scientist/programmer/...) has an established 
audience, and is expected to produce more of the same, they may have 
less personal freedom to explore other alternatives (and doing so may 
alienate many of their fans). a real-life example being Metallica 
incorporating a lot of Country Western elements.


in the example, the idea is that the producers may know full well that 
if their promoted boy-band suddenly released an album containing lots of 
bass and growling (rather than dancing around on stage being 
pretty-boys) then the audience of teenage girls might be like what the 
hell is this? and become disillusioned with the band (costing the 
producers a pile of money).


this does not necessarily mean that an idea is fundamentally new or 
original though.




Interesting, also, that you chose dubstep here, because that's a genre that's been basically 
raped in a similar way to what has been done to the ideas in object-orientism in 
order to get it into the mainstream :) People think dubstep is just a wobble bass... but it's 
actually more about the feel of the dub break...shrug


possibly. I encountered some amount of it before, which ranged between 
pretty cool and simplistic and actually kind of sucks (take 
whatever, put a pile of bass on it, call it good enough...).


some of it has just been the wub-wub-wub part with pretty much nothing 
else going on.



I had briefly experimented (I am not really a musician, just tinkered 
some) with trying to combine the wub-wub-wub part with a beat 
(apparently, someone else thought it sounded more like techno or 
industrial). I did tests trying to sing (poorly) doing both rap-style 
and growl-voice lyrics (in both cases about matters of programming), but 
didn't try combining them at the time as this would have been physically 
difficult (both require some level of physical effort, and I also have 
little personal interest either in the rough-tough thug from da hood 
or the gloom and doom and corpses images traditionally associated with 
the two lyrical styles). (actually, I partly vaguely remember rap in 
the form of MC Hammer and Vanilla Ice and similar, from before the 
days of thugz from da hood, although this is stuff from very long 
ago... the attempts I made had more in common stylistically with the 
latter style than with MC Hammer and similar, which were closer to 
actually singing the lyrics rather than saying lots of rhyming words 
to a fixed beat, ...).


my own musical interests have mostly been things like 
house/trance/industrial/... and similar...


I don't really have either instruments or any real skill with 
instruments, so what tests I had done had been purely on my computer 
(mostly using Audacity and similar, in this case). some past experiments 
had involved using tweaks to a custom written MIDI synthesizer (which 
allows, among other things, using arbitrary sound-effects as patches), 
however I haven't as-of-yet devised a good way to express non-trivial 
patterns in the MIDI command-language, leaving it as slightly less 
effort to just use multi-track sound-editing instead...



but, I have little intention at the moment of doing much of anything 
really serious with regards to musical stuff... (later? who knows, 
just

Re: [fonc] One more year?!

2012-01-22 Thread BGB

On 1/22/2012 7:16 PM, Casey Ransberger wrote:

Below and mile off-topic...

On Jan 22, 2012, at 4:11 PM, Julian Levistonjul...@leviston.net  wrote:


On 23/01/2012, at 8:26 AM, Casey Ransberger wrote:


Below.

On Jan 21, 2012, at 6:26 PM, BGBcr88...@gmail.com  wrote:


like, for example, if a musician wanted to pursue various musical forms. say, for example: a dubstep backbeat combined 
with rap-style lyrics sung using a death-metal voice or similar, without the man (producers, ...) demanding 
all the time that they get a new album together (or that their fans and the man expect them to stay with 
their existing sound and theme), and if they just gave them something which was like and so wub-wub-wub, goes the 
sub-sub-sub, as the lights go blim-blim-blim, as shorty goes rub-run-run, on my hub-hub-hub, as my rims go 
spin-spin-spin or something... (all sung in deep growls and roars), at which point maybe the producers would be 
very unhappy (say, if he was hired on to be part of a tween-pop boy-band, and adolescent females may respond poorly to 
bass-filled wubbing growl-rap, or something...).


or such...

This is probably the raddest metaphor that I have ever seen on a mailing list.

BGB FTW!

P.S.

If you want to get this song out the door, I'm totally in. Dubsteprapmetal 
might be the next big thing. I can do everything except the drums. We should 
write an elegant language for expressing musical score in OMeta and use a 
simulated orchestra!

Oh come on, Dub Step Rap Metal has been done before... Korn is basically what 
that is...  Just because you're not CALLING it dubstep doesn't mean it doesn't 
have the dubstep feel.

Interesting, also, that you chose dubstep here, because that's a genre that's been basically 
raped in a similar way to what has been done to the ideas in object-orientism in 
order to get it into the mainstream :) People think dubstep is just a wobble bass... but it's 
actually more about the feel of the dub break...shrug

Julian


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc

Julian,

Generally good points but I'm pretty sure the Korn I've heard wasn't dubstep. 
It's also crap. T.M. :D


admittedly, I have never heard Korn (that I am aware of), so have no 
real idea what it sounds like in particular. only that I think it is 
often associated with Linkin Park, which generally sounds like crap IMO 
(although I remember one instance where I heard something, and had a 
response roughly along the lines of what the hell is this?, my brother 
said it was Linkin Park, I was surprised, but don't remember what it 
sounded like, much beyond the response of ?... strange... and sounds 
kinda like crap...).


for some mysterious/unknown reason, my mom likes Linkin Park, I don't 
know...
(and, my dad mostly likes music from when he was young, which mostly 
amounts to heavy metal and similar).



little if anything in that area that generally makes me think dubstep 
though...


(taken loosely enough, most gangsta-rap could be called dubstep if 
one turns the sub-woofer loud enough, but this is rather missing the 
point...).


or such...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] One more year?!

2012-01-22 Thread BGB

On 1/22/2012 8:57 PM, Julian Leviston wrote:


On 23/01/2012, at 2:30 PM, BGB wrote:

little if anything in that area that generally makes me think 
dubstep though...


(taken loosely enough, most gangsta-rap could be called dubstep 
if one turns the sub-woofer loud enough, but this is rather missing 
the point...).


Listen to this song. It's dubstep. Popular dubstep has been raped to 
mean brostep or what skrillex plays... but this song is original 
dubstep.


two cents. mine.

http://www.youtube.com/watch?v=IlEkvbRmfrA



sadly... my internet sucks too much recently to access YouTube (it, 
errm, doesn't even try to go, just says an error occurred. please try 
again later). at this point, it is hard to access stupid Wikipedia 
or Google without connection-timed-out errors, but this is the free 
internet provided by the apartment complex, which according to tracert is 
apparently behind 3 layers of NAT (the local network, and two different 
192.169.*.* network addresses).


also no access to usenet (because NNTP is blocked), or to FTP/SSH/... 
(also blocked).

and sites like StackOverflow, 4chan, ... are black-listed, ...

but, yeah, may have to pay money to get real internet (like, via a 
cable-modem or similar).



but, anyways, I have had an idea (for music generation).

as opposed to either manually placing samples on a timeline (like in 
Audacity or similar), or the stream of note-on/note-off pulses and 
delays used by MIDI, an alternate idea comes up:
one has a number of delayed relative events, which are in turn piped 
through any number of filters.


then one can procedurally issue commands of the form "in N seconds from 
now, do this", with commands being relative to a base-time (and the 
ability to adjust the base-time based either on a constant value or on how 
long it would take a certain expression to finish playing).
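
a minimal sketch of that command form (invented names, plain Java, just to 
pin the idea down; a real mixer would render the events into audio buffers 
rather than just run actions in time order):

import java.util.PriorityQueue;

public class RelativeScheduler {
    static class Event implements Comparable<Event> {
        final double at; final Runnable action;
        Event(double at, Runnable action) { this.at = at; this.action = action; }
        public int compareTo(Event o) { return Double.compare(at, o.at); }
    }

    private double baseTime = 0.0;
    private final PriorityQueue<Event> queue = new PriorityQueue<>();

    // "in N seconds from now, do this", relative to the current base-time
    void in(double seconds, Runnable action) { queue.add(new Event(baseTime + seconds, action)); }

    // push the base-time forward, e.g. by how long the phrase just queued will take
    void advance(double seconds) { baseTime += seconds; }

    // drain events in time order
    void run() { while (!queue.isEmpty()) queue.poll().action.run(); }

    public static void main(String[] args) {
        RelativeScheduler s = new RelativeScheduler();
        s.in(0.0, () -> System.out.println("kick at t=0.0"));
        s.in(0.5, () -> System.out.println("snare at t=0.5"));
        s.advance(1.0);  // next phrase starts one bar later
        s.in(0.0, () -> System.out.println("kick at t=1.0"));
        s.run();
    }
}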


likewise, expressions/events can be piped through filters.
filters could either apply a given effect (add echo or reverb, ...), or 
could be structural (such as to repeat or loop a sequence, potentially 
indefinitely), or possibly sounds could be entirely simulated (various 
waveform patterns, such as sine, box, and triangle, ...).


the main mix would be either a result of evaluating a top-level 
expression, or possibly some explicit send this to output command.


evaluation of a script would be a little more complex and expensive than 
MIDI, but really why should this be a big issue?...


the advantage would be mostly that it would be easier to specify 
beat-progressions, and individually tweak things, without the pile of 
undo/redo, copy/pasting, and saving off temporary audio samples, as 
would be needed in a multi-track editor.



it is unclear if this would be reasonably suited to a generic 
script-language (such as BGBScript in my case), or if this sort of thing 
would be better suited to a DSL.


a generic script language would have the advantage of making it easier 
to implement, but would potentially make the syntax more cumbersome. in 
such a case, most mix-related commands would likely accept/return 
stream handles (the mixer would probably itself be written in plain C 
either way). my current leaning is this way (if I should choose to 
implement this).


a special-purpose DSL could have a more narrowly defined syntax, but 
would probably make the implementation more complex (and could ultimately 
hinder usability if the purpose is too narrow, say, because one can't 
access files or whatever...).



say, for example, if written in BGBScript syntax:
var wub=mixLoadSample("sound/patches/wub.wav");
var wub250ms=mixScaleTempoLength(wub, 0.25);
var wub125ms=mixScaleTempoLength(wub, 0.125);
var drum=mixLoadSample("sound/patches/drum.wav");
var cymbal=mixLoadSample("sound/patches/cymbal.wav");

//play sequences of samples (returning the composed stream handle)
function beatPlay3Play1(a, b)
    { return mixPlaySequence([a, a, a, b]); }
function beatPlay3Play2(a, b)
    { return mixPlaySequence([a, a, a, b, b]); }

var beat0=mixBassBoost(beatPlay3Play2(wub250ms, wub125ms), 12.0);
//add 12dB of bass

var beat1=mixPlaySequenceDelay([drum, cymbal], 0.5);//drum and cymbal
var beat2=mixPlayTogether([beat0, beat1]);

mixPlayOutput(beat2);//mix and send to output device


or such...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] One more year?!

2012-01-21 Thread BGB

On 1/21/2012 8:11 AM, Peter Franta wrote:
VPRI answer to their Government and Private Funders, not those of us 
who have the fortune to observe as they go.


It is my understanding the deliverable is not a product but lessons 
learned to go and do it for real! Not ideal for us but then they're 
not serving us.


Clearly this doesn't match your expectations. This is frustrating to 
you but are your expectations their concern? Where do these 
expectations come from?


Popular/Successful != Better. It all depends on your pov and what you 
value.




yeah. I am just a random observer as well, and operating in the far more 
mundane world of spewing code and trying to make it all work... (and 
wondering if/when/where I will ever make an income).


there is some amount of experimentation, but most of this is itself 
much more mundane, and not really a central focus (it is more of a try 
various things and see what sticks experience).



likewise, it can be interesting to gather and try out ideas, and see 
what all works.
sometimes, the new thing can be simpler overall, even if initially or 
conceptually more complex.



for example, at a basic level, dynamic typing is more complex than 
static typing (which can be seen as a type-checked variant of an untyped 
system). once things start getting more complex, and one is dealing with 
less trivial type semantics, then dynamic typing becomes much simpler 
(the main difference is that dynamic typing is not well-supported on 
current hardware, leading to complex/awkward type inference in an 
attempt to get reasonable performance).


but, then there is a tradeoff:
static types work better in the simple case for finding many types of 
errors (even a good type-inference algo with type-validation may, with a 
loose type-system, accept bad input and coerce it into a 
semantically-valid, but unintended form).


dynamic type-systems work better when the types are less trivial, and/or 
may become, well, dynamic (at which point, many good old static 
languages lead either to overly complex mechanisms, or a need to 
strong-arm the type-system via lots of casts). even in an otherwise 
well-developed static language, there may end up being stupid/arbitrary 
restrictions which don't follow from either the implementation or the 
conceptual semantics (such as, in several languages, the inability to 
pass/convert a generic-container for a subclass into a generic container 
for the superclass, ...).
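
as a concrete instance of that restriction (in Java; the classes are 
invented for the example):

import java.util.ArrayList;
import java.util.List;

public class VarianceExample {
    static class Animal {}
    static class Dog extends Animal {}

    // declared over the superclass
    static void countAnimals(List<Animal> animals) { System.out.println(animals.size()); }

    // the usual workaround: a bounded wildcard in the signature
    static void countAnimalsWildcard(List<? extends Animal> animals) { System.out.println(animals.size()); }

    public static void main(String[] args) {
        List<Dog> dogs = new ArrayList<>();
        dogs.add(new Dog());

        // countAnimals(dogs);        // compile error: List<Dog> is not a List<Animal>,
                                      // even though every Dog is an Animal
        countAnimals(new ArrayList<Animal>(dogs));  // workaround: copy into a new container
        countAnimalsWildcard(dogs);                 // or: loosen the signature
    }
}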


in these cases, it may well make sense to leave it up to the programmer 
which semantics they want to use.


for example, in my own language, if a declared type is used, it is 
treated more like in a static language (coercing if needed and the types 
are compatible, or potentially making an error otherwise).


some other languages treat things more loosely, making me wonder what 
really is the point of having type annotations in this case. what is 
gained from annotations if the compiler does not enforce the types and 
the types don't impact run-time behavior? in terms of optimization, in 
this case, they offer little more than what could have been reasonably 
gained by type-inference. their only real obvious merit is to serve as a 
hint for what type the variable should be (if people happen to be 
looking at the declarations at the time).


granted, when one has both static and dynamic type-semantics, this is 
potentially more complex than either case by itself (although, having 
both does allow some potentially interesting semantics, such as the 
ability to handle semantics which would likely be slow in a 
dynamically-typed language, but which would be overly complicated or not 
possible to represent with traditional uses of generics or templates, 
such as conditional or evaluated types).



but, who knows, maybe there are better ways to deal with all this as well?

how much of the concept of 'types' is itself real, and how much is an 
artifact of the existing implementations? how much of existing 
programming practice is real, and how much is essentially cruft?


simple example:
there are fields and there are methods;
sometimes we want to access fields directly, and sometimes to limit such 
access.


so, then the public/private notion popped up, with the usual convention 
being to keep all fields as private. then, people introduced 
getter/setter methods (and there is then a whole lot of obj.getX() and 
obj.setX(value)).


then, languages have added properties, so that the extra notation can be 
shaved back off (allowing, once again, obj.X and obj.X=value;), and 
have similarly allowed short-hand dummy property declarations, such as 
public double X { get; set; };
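
for concreteness, the getter/setter form of the convention described above 
(in Java, which has no property sugar; the class and field names are just 
for the example):

class Point {
    private double x;                      // field kept private by convention

    public double getX() { return x; }     // callers write obj.getX()
    public void setX(double v) { x = v; }  // and obj.setX(value)
}

// languages with properties (the C# form quoted above) let callers go back to
// writing obj.X and obj.X = value while still being able to trap the access.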


although there are sensible reasons behind all of this, one can also 
wonder: what exactly is the point? couldn't one, say, just as 
easily (assuming the language handled it gracefully) declare the field 
initially as public, and if later it needs to be trapped, simply change 
it to private, drop in the property 

Re: [fonc] One more year?!

2012-01-21 Thread BGB

On 1/21/2012 11:23 AM, Shawn Morel wrote:

Reuben,

Your response is enlightening in many ways. I think it reinforces for me how computer science is really more 
of an automation pop culture. As a society, we've become engrossed with product yet we spend only 
a few minutes at most usually playing around with the artefacts produced - forming surface judgements and 
rarely if ever deconstructing them in order to understand their inner environment. As an academic discipline, 
I fear we've become entrenched in dogma around the accidental complexities we now consider core 
to our understanding of a problem - a running executable, bits of ASCII sigils organized in files.


dunno if you are objecting just to the notion of executables, or to 
files in general.


I suspect in both cases, though, it may be due to some extent to a 
user/psychological need for humans to be able to say "there is something, 
and there it is".


users then need to have a sense of files existing, and to be able to 
interact with them (copy them around, ...). otherwise, there would 
likely need to be something to take on a similar role. if it were merely 
a sea of data, some people might be happy, but others would be 
somewhat less happy (the OS and file-system are often already becoming 
enough of such a sea of data as it is).


to some extent, the program binary is also, similarly, a representation 
of the thing that is the program (although, some naive users gain the 
misconception that it is the icon which is the app, leading sometimes 
to people copying the icon onto a thumb-drive or similar, going onto 
another system, and having little idea why their application no longer 
works on the new system).


but, like the FS, applications have increasingly started to become a 
sea of data, as the program becomes a whole directory of files, or has 
its contents spread all over within the OS's directories.


hence, some systems have made use of an executable which is really an 
executable read-only archive or database (JVM and .NET).
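
a minimal sketch of that idea on the JVM side: treating a single jar 
archive as "the application" and loading code out of it at run time (the 
path and class name here are made up for the example):

import java.net.URL;
import java.net.URLClassLoader;
import java.lang.reflect.Method;

public class RunFromArchive {
    public static void main(String[] args) throws Exception {
        URL jar = new URL("file:///path/to/app.jar");               // hypothetical archive path
        try (URLClassLoader loader = new URLClassLoader(new URL[] { jar })) {
            Class<?> entry = loader.loadClass("com.example.Main");  // hypothetical entry class
            Method main = entry.getMethod("main", String[].class);
            main.invoke(null, (Object) new String[] { "Arg0", "Arg1" });
        }
    }
}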


a few systems have gone further still, having the app be essentially 
distributed as a self-contained virtual file-system. Steam does this, 
with the program being itself a self-contained read-only VFS, but 
allowing the application to write into its VFS (the actual file data is 
stored separately in this case).


I have even encountered a few applications before distributed as VMware 
images (with an entire OS, typically FreeDOS or a stripped-down/hacked 
Win9x or WinXP, bundled as a part of the app), granted, in the cases 
where I have encountered this, I have been left to doubt both the 
legality and competence of the developers (just how poorly made and 
hacked-together does an app need to be before it justifies shipping 
an entire OS with it in order to run it?... meanwhile, does MS 
necessarily approve of Win9x or WinXP being used as a redistributable 
run-time for said app?...).


granted, at least if Win98 blue-screens in VMware, it is a little less 
annoying than on real HW, but then one can be left to wonder if it just 
doesn't run well in an emulator, or if it really was that unreliable?... 
(granted, I do sort of remember Win98 being one of those OS's I thought 
was too terrible to really bother actually using, me instead mostly 
running Linux and later Win2K, before later migrating back mostly into 
Windows land just prior to the release of WinXP...).


nevermind unverified reports that WinME was worse than Win98 (with both 
poor reliability and also software compatibility problems...).



but, anyways, application-as-a-runnable-VFS is possibly an idea worth 
exploring. now, whether or not it is allowed/possible for a user to 
extract the contents of such an application is a more open issue (as is 
whether or not the VFS is read-only or self-modifying).


ideally, it could also be done better than with the Valve/Steam GCF 
system, and be more open (ideally... it could be done without manual 
LoadLibrary() / GetProcAddress() and API-fetching hacks, or an 
otherwise overly constrained/annoying bootstrap process...). if one were 
still using native binaries, a probable strategy could be to compile the 
program as native DLLs or SO's, and have the launcher potentially 
incorporate a customized loader (launcher opens the VFS, looks for 
appropriate binaries for the architecture, loads them into the address 
space, and executes them).


one could make the VFS be an EXE, but my personal testing shows this 
to be pointless on modern Windows (if the file is executed directly, the 
program associated with its file extension will be invoked, whatever the 
extension is).


say, an image foobar.myvm exists, and is invoked as foobar.myvm Arg0 
Arg1; apparently the VM will then be called with a command-line something like:

C:\Program Files (x86)\MyVM\myvm.exe E:\...\foobar.myvm Arg0 Arg1

(note that, if launching via WinMain(), it will be a single big string 
as-above, but if by main() it will of course be split into the 

Re: [fonc] Inspired 3D Worlds

2012-01-17 Thread BGB

On 1/17/2012 10:58 AM, karl ramberg wrote:



On Tue, Jan 17, 2012 at 5:43 PM, Loup Vaillant l...@loup-vaillant.fr 
mailto:l...@loup-vaillant.fr wrote:


David Barbour wrote:



On Tue, Jan 17, 2012 at 12:30 AM, karl ramberg
karlramb...@gmail.com mailto:karlramb...@gmail.com
mailto:karlramb...@gmail.com mailto:karlramb...@gmail.com
wrote:

   I don't think you can do this project without a
understanding of
   art. It's a fine gridded mesh that make us pick between
practically
   similar artifacts with ease and that make the engineer
baffled. From
   a engineering standpoint there is not much difference between a
   random splash of paint and a painting by Jackson Pollock.
You can
   get far with surprisingly little resources if done correctly.

   Karl


I think, even with an understanding of art and several art history
classes in university, it is difficult to tell the difference
between a
random splash of paint and a painting by Jackson Pollock.

Regards,

Dave


If I recall correctly, there is a method: zoom in.  Pollock's
paintings
are remarkable in that they tend to display the same amount of entropy
no matter how much you zoom in (well, up to 100, actually).  Like a
fractal.

(Warning: this is a distant memory, so don't count me as a reliable
source.)

Loup.


My point here was not to argue about a specific artist or genre but 
that the domain of art is very different from that of the engineer. 
What makes some music lifeless and some the most awe-inspiring you have 
heard in your whole life?



game art doesn't need to be particularly awe-inspiring, so much as 
"basically works and is not total crap".


for example, if the game map is just:
spawn near the start;
kill a few guys standing in the way;
hit the exit.

pretty much no one will be impressed.

in much the same way, music need not be the best thing possible, but 
if it generally sounds terrible or is just a repeating drum loop, this 
isn't so good either.



the issue, though, is that the level of effort needed just to reach 
mediocre is often still a good deal of effort, as one may be 
comparing oneself against a mountain of other people, many trying to 
do the minimum they can get away with, and many others actually trying 
to make something decent.


it is more so a problem when one's effort is already spread fairly thin:
between all of the coding, graphics and sound creation, 3D modeling and 
map creation, ...


it can all add up fairly quickly (even if one cuts many corners in many 
places).


what all I have thus far technically sort of works, but still falls a 
bit short of what was the norm in commercial games in the late-90s / 
early-2000s era.


it is also going on a much longer development time-frame. many 
commercial games get from concept to release in 6 months to 1 year, 
rather than requiring years, but then again, most companies don't have 
to build everything from the ground up (they have their own base of 
general art assets, will often license the engine from someone else, 
...), as well as having a team of people on the project (vs being a 
single-handed effort), ...



a lot of this is still true of the 3D engine as well, for example my 
Scripting VM is still sort of lame (I am using an interpreter, rather 
than a JIT, ...), my renderer architecture kind of sucks and doesn't 
perform as well as could be hoped (ideally, things would be more modular 
and cleanly written, ...), ...


note: mostly I am using an interpreter as JITs are a lot more effort to 
develop and maintain IME, and the interpreter is fast enough... the 
interpreter is mostly using indirect threaded code (as this is a 
little faster and more flexible than directly dispatching bytecode via a 
switch(), although the code is a little bigger given each opcode 
handler needs its own function).
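
a toy illustration of the dispatch difference (in Java for consistency 
with the other sketches here; a real threaded-code interpreter would 
typically be C with function pointers or computed gotos, and this is not 
the actual VM's dispatch code):

import java.util.ArrayDeque;
import java.util.Deque;

public class ThreadedInterp {
    interface Op { void exec(Deque<Integer> stack); }   // one handler per opcode

    static final Op PUSH1 = s -> s.push(1);
    static final Op ADD   = s -> s.push(s.pop() + s.pop());
    static final Op PRINT = s -> System.out.println(s.peek());

    // "decode" once: translate raw opcodes into handler references ahead of time,
    // so the inner loop just invokes the next handler instead of re-running a switch
    static Op[] decode(int[] bytecode) {
        Op[] table = { PUSH1, ADD, PRINT };             // opcode -> handler table
        Op[] threaded = new Op[bytecode.length];
        for (int i = 0; i < bytecode.length; i++) threaded[i] = table[bytecode[i]];
        return threaded;
    }

    public static void main(String[] args) {
        Op[] program = decode(new int[] { 0, 0, 1, 2 }); // push1 push1 add print -> 2
        Deque<Integer> stack = new ArrayDeque<>();
        for (Op op : program) op.exec(stack);            // dispatch loop: no switch per opcode
    }
}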



likewise, after the Doom3 source code came out, I was left to realize 
just how drastically the engines differed internally (I had sort of 
assumed that Carmack was doing similar stuff with the internals).


the issue is mostly that my engine pulls off worse framerates on current 
hardware using the stock Doom3 maps than the Doom3 engine does (and 
leads to uncertainty regarding whether scenes can be sufficiently 
large/complex while still performing adequately).



for example:
my engine uses a mostly object-based scene-graph, where objects are 
roughly split into static objects (brushes, patches, meshes, ...) 
and dynamic objects (3D models for characters and entities, 
brush-models, ...);
it then does basic (dynamic) visibility culling (frustum and occlusion 
checks) and makes use of a dynamically-built BSP-tree;
most of the rendering is done (fairly directly) via a thin layer of 
wrappers over OpenGL (the shader system);
many rendering operations are implemented via 

Re: [fonc] Inspired 3D Worlds

2012-01-17 Thread BGB

On 1/17/2012 5:10 PM, David Barbour wrote:
On Tue, Jan 17, 2012 at 2:57 PM, BGB cr88...@gmail.com 
mailto:cr88...@gmail.com wrote:


game art doesn't need to be particularly awe inspiring, so much
as basically works and is not total crap.


It can't be awe inspiring all the time, anyway. Humans would quickly 
become jaded to that sort of stimulation.




partly agreed (although, maybe not jaded, but more like what is awe 
inspiring one year becomes mundane/mandatory the next).



actually, it is also an issue with many classic map generation 
technologies:
people become used to them, and used to seeing more impressive things 
being generated by hand (often with uninteresting aspects increasingly 
subtly offloaded to tools).


once something better takes hold, it is harder to keep interest in the 
older/simpler/more-mundane technologies.



so, now many people take for granted technologies which were novelties 
10 years ago (real-time rigid-body physics / ragdoll / ..., the ability 
to have light-sources move around in real-time, the ability to have 
dynamic shadows, ...), and which were possibly unimaginable 15 or 20 
years ago.


if a person went directly from being exposed to games like, say, 
Wolfenstein 3D and Sonic The Hedgehog, ... to seeing games like 
Portal 2 and Rage, what would their response be?


but, to those who have lived though it, it seems like nothing 
particularly noteworthy.


15 years ago, the big new things were having 3D modeled characters, 
colored lighting, and maybe translucent geometry. 20 years ago, it was 
having any sort of real-time 3D at all (well, that and FMV).



The quality of a game map depends on many properties other than visual 
appeal. A program that creates maps for a first-person shooter should 
probably have concepts such as defensible positions, ambush positions, 
snipe positions and visual occlusion, reachable areas, path-generation 
for AIs.




yeah. path-finding data can be built after the fact, it just tends to be 
fairly expensive to rebuild.




One might express `constraints` such as:
* a spawn zone should not be accessible from a snipe position.
* a capture-the-flag map should ensure that every path from the enemy 
flag to the base moves past at least one good snipe position and one 
good ambush position.
* there should be some `fairness` in quality and quantity of 
defensible-position resources for the different teams.
* the map needs to occlude enough that we never have more than K 
lights/triangles/objects/etc. in view at any given instant.




yep.

actually, the bigger issue regarding performance isn't really how many 
lights/polygons/... are visible, but more like the total screen-space 
taken by everything which needs to be drawn.


a single large polygon with a whole bunch of light sources right next to 
it, could be a much nastier problem than a much larger number of 
light-sources and a large number of tiny polygons.


it is stuff right up near the camera which seems to actually eat up the 
vast majority of rendering time, whereas a model of the same complexity 
some distance away may be much cheaper (LOD and similar may help, 
though interestingly, LOD helps much more with reducing the 
computational costs of animating character models than it does with 
renderer performance per se).



also, using fragment shaders can be fairly expensive (kind of a problem 
in my case, as most of my lighting involves the use of fragment shaders).


currently, there are multiple such shaders in my case (for per-pixel 
phong lighting):

one which uses the OpenGL lighting model (not used much);
one which uses a Quake-style lighting model (for attenuation), but is 
otherwise like the above;
one which uses a Quake-style lighting model (like the above), but adds 
support for normal and specular map textures (renderer avoids using this 
one where possible... as it is expensive);
one which uses a Doom3 style lighting model (box + falloff + projection 
textures) in addition to normal and specular maps (not currently used, 
considered possible reintroduction as a special effect feature).


the issue is mostly that when one has a shader pulling from around 6 
textures (the falloff texture needs to be accessed twice), the shader is 
a bit expensive.


note that the normal is actually a bump+normal map (bump in alpha), and 
the specular map is a specular+exponent map (specular color, with 
exponent-scale in alpha). in all cases, these are combined with the 
normal material properties (ambient/diffuse/specular/emission/...).



for related reasons:
adding a normal or specular map to a texture can make it nicer looking, 
but adding a normal map also makes it slower (and a possible performance 
feature would be to essentially disable the use of normal and specular 
maps).


there also seems to be a relation between texture size and performance 
(smaller resolution == faster).


it is also a non-linear tradeoff:
a large increase in texture resolution or use

Re: [fonc] Inspired 3D Worlds

2012-01-17 Thread BGB

On 1/17/2012 9:50 PM, Julian Leviston wrote:
There are different kinds of art, just like there are different 
qualities of everything.


I think you may find on closer inspection that there can be things 
that are intrinsically beautiful, or intrinsically awe-inspiring to 
humanity as a whole. I don't think that's silly, and I'm perfectly ok 
with the fact that you might think it's silly, but I feel the need to 
let you (all) know this.


I'm told, for example, that the Sistine Chapel is one such thing... or 
the Grand Canyon. I know of a few things in Sydney where I live that 
seem to have a common effect on most people... (some of the churches, 
or architecture we have here, for example - even though we have such a 
young culture, the effect is still there).


It doesn't strike me as being that there is anything different in 
computer art or architecture than other art or architecture in this 
regard.


While I agree that computer game art doesn't *have* to be 
awe-inspiring (in an absolute, non-relative, non-subjective, objective 
sense) in order to be computer game art, or qualify as being of a 
standard which is enough to be acceptable to most people as being 
computer game art (ie qualifying for the title), I think it 
nonetheless matters in a general sense to aspire to such a high 
standard of quality in everything, irrespective of whether it's 
computer game art, or ordinary art, architecture of buildings, or 
architecture of information systems.


This is, after all, why we attempt anything at its root, isn't it? or 
is it simply to satisfy some form of mediocre whimsy? or to get by 
so to speak?


Contrast things that last with things that don't last. I personally 
don't hold that good graphics from a technical standpoint are 
inherently or necessarily awe-inspiring, because usually the context 
of technology yields little meaning compared to the context of 
culture, but good graphics from a technical standpoint are able, 
obviously, to transmit a message that *is* awe-inspiring (ie the media 
/ conduit / accessibility channel). In other words, the technology of 
quadrupling memory capacity and processor speed provides little impact 
on the kinds of meanings I can make from a social  cultural 
perspective. If I print my book on a different type of paper, it 
doesn't change the message of the book, but rather perhaps the 
accessibility of it. That is, except, perhaps for the cases such as 
the recently new Superman movie, where providing a similar visual 
and feel context to the previous movies provides more meaning to the 
message BECAUSE of current fashions of style in direction/production 
in movies. It actually adds to the world and meaning in this case - 
but this is a case of feedback, which IMHO is an exception to prove 
the rule.


This segues rather neatly to the question of content being contained 
within a context that simultaneously holds it and gives it meaning. 
The semantic content and context, in contrast to the content and 
context of the accessibility / conduit / media.


This brings the question Where is the semantic value held? to bear 
on the situation. If the point (ie meaning) of a game and therefore 
its visual design is not to impact the senses in some form of 
objective visual art, but rather to provide a conduit of accessibility 
to impact the mind in some form of objective mental art, then I would 
agree that visual art need not be very impressive or awe inspiring 
in order to achieve its aim.


Perhaps, however, the entire point of the game is simply to make 
money in which case none of my comments hold value. :)


Also, a question that springs to mind is... do you find any of the 
popularly impressive movies or graphics of the current day 
awe-inspiring? I find them quite cool... impressive in a technical 
sense, but not in a long-lasting impacting sense... obviously ( - to 
me, at least, and my friends - ) technology is inherently and 
constantly subject to fashion and incredibly time-sensitive, therefore 
there is little meaning contained in the special effects or 
technological advancements that are possible. I think we long ago 
passed the point where technology allowed us to build anything we can 
fantasise about... for example, I find inception, or the godfather, or 
even games like wizard of wor to be far more entertaining from the 
point of view of what they do to me in totality, than I do with 
something like transformers 2, for example.




interesting thoughts, albeit admittedly a bit outside my area...

I actually liked the Transformers movies (except: too much humans, not 
enough robot-on-robot battle), and admittedly sort of like Transformers 
in general (have watched through most of the shows, ...), and have taken 
some influence from the franchise (although, I also have many other 
things I liked / borrowed ideas from: Ghost In The Shell, Macross, 
Zone Of The Enders, Gundam, ...).



in general, it was still better in many regards than Cowboys 

Re: [fonc] Inspired 3D Worlds

2012-01-16 Thread BGB

On 1/16/2012 10:26 PM, Neu Xue wrote:

There are commercial big boxes with some random crap in them game worlds now 
and
have been since the 8-bit era.
The games that stood out by immersing us despite the limitations of technology 
were usually
the ones which were lovingly crafted.


very possibly.
time and effort make a good product;
quick and dirty makes a poor product, despite the availability of more 
advanced technologies.


like, many old games did fairly well even with few pixels to work with, 
and even a fairly high-resolution texture can still look terrible. say, 
a 512x512 canvas doesn't mean some quick-and-dirty passes with an 
airbrush and paint-can tool and throwing some emboss at it will look good.


with some time and practice though, it gets a little easier to start 
making artwork that looks more passable.



otherwise:
I was left idly imagining the possibility of using a good old scripting 
language (probably BGBScript in my case) to assist in building worlds. 
say, higher-level API commands could be added to the mapper, so I could 
issue commands like "build a room here with these textures and 
dimensions" or "generate some terrain over there" as API calls or similar.


loops or functions could also generate things like stairs and similar, ...

then, it can partly be a process of writing scripts, invoking them in 
the mapper, and optionally saving out the results if it looks about like 
what was being hoped for (maybe followed by some amount of manual fine 
tuning...).
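
a tiny sketch of the sort of script meant here, with invented API names 
standing in for mapper commands (not the actual mapper API); e.g. a loop 
stamping out a staircase from brushes:

public class StairsScript {
    // stand-in for a mapper command like "add an axis-aligned box brush with a texture"
    static void addBrush(double x, double y, double z,
                         double w, double d, double h, String tex) {
        System.out.printf("brush at (%.0f, %.0f, %.0f), size %.0fx%.0fx%.0f, tex=%s%n",
                          x, y, z, w, d, h, tex);
    }

    public static void main(String[] args) {
        int steps = 8;
        double stepW = 64, stepD = 32, stepH = 16;
        for (int i = 0; i < steps; i++) {
            // each step is one brush, pushed forward and made taller than the last
            addBrush(0, i * stepD, 0, stepW, stepD, (i + 1) * stepH, "base/stair");
        }
    }
}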


similarly, the commands would probably be usable from the console as 
well (as-is, BGBScript code can already be entered interactively at the 
console), in addition to the existing GUI-based mapping interface.


probably the underlying world structure would remain built out of 
entities, convex polyhedrons (brushes), Bezier patches, and polygon 
meshes. (unlike some other ideas, this wouldn't drastically change how 
my engine works, or even require redesigning/recreating my tools...).



sorry if all this is a bother to anyone, being solidly not really about 
programming per-se...





From: BGBcr88...@gmail.com
To: Fundamentals of New Computingfonc@vpri.org
Sent: Tuesday, 17 January 2012, 3:31
Subject: Re: [fonc] Inspired 3D Worlds

8

these would generally be created manually, by placing every object
and piece of geometry visible in the world, but this is fairly
effort-intensive, and simply running head first into it tends to
quickly drain my motivation (resulting in me producing worlds which
look like big boxes with some random crap in them).

8

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] OLPC related

2011-11-16 Thread BGB

On 11/14/2011 4:42 PM, Max OrHai wrote:
Criticism of the OLPC project is easy to find, so I won't repeat much 
of it here, except to say that I find their whole model obnoxiously 
paternalistic; it's based on centralized government-controlled 
institutions (that is, schools), government and NGO subsidies for 
equipment, dependence on foreign administration and technical service, 
and a general all-too-familiar neo-colonial obliviousness of the real 
needs of poor societies rather than the vision of a few rich 
founders. A children's computer is of necessity hard to distinguish 
from a toy, and the line between children and adults is, I believe, 
an artificial one. Enforcing this fake distinction just gives poor 
kids yet another developmental dead end.


The societies of the so-called developing world (or global South 
or whatever) desperately needs real, affordable, resilient IT 
infrastructure, but OLPC isn't the ones giving it to them. See, for 
example, the UN International Telecom Union report for 2011:


http://www.itu.int/ITU-D/ict/publications/idi/2011/Material/MIS_2011_without_annex_5.pdf

Truly affordable and robust decentralized mesh networking is 
technically attainable, but the political ramifications are pretty 
intense. We tend to take our free speech for granted here in the 
rich world, but even here all of our communications technologies 
conform to government-mandated regulation and monitoring policies. In 
short, the authorities can turn them off.


Together with some fellow students at Portland State, I am currently 
developing a bottom-up approach to digital communications 
technology, engineered around a hard bottom-line per-device price tag 
of US$20, unsubsidized and no TV or network connection needed. We're 
not trying to replace broadband, but supplement it 
with distributed, persistent, self-maintaining localized message-board 
infrastructure. Here are a few links to technologies and concepts 
we're working with:


http://www.contiki-os.org/
http://www.eluaproject.net/
http://www.zeromq.org/
http://soft.vub.ac.be/amop/research/tuples and 
http://agentgroup.unimo.it/wiki/index.php/TOTA

http://en.wikipedia.org/wiki/Social_capital
http://villageearth.org/appropriate-technology/appropriate-technology-sourcebook/introduction-to-the-appropriate-technology-sourcebook 
http://villageearth.org/


We're still in early stages of this thing, but I'm very serious about 
it. We're not publicizing it very widely at the moment so we can focus 
development effort on the hardware prototypes. Once that stabilizes 
somewhat, we'll be looking at starting some manufacturing co-ops and 
coordinating open-source software development. If anyone here is 
interested or wants to know more, please just email me.




sorry, I have not looked at all the linked info...


however, I am left thinking maybe mesh networking could be done, say, 
with RF-based relay base stations.


then one can have a dual-level wireless network:
802.11b/g for LAN;
something else for WAN (say, more powerful  200MHz or 600MHz 
transmission or similar).


if one could get approval (so governments allow it), one option could be 
to search for unused NTSC/ATSC-band channels, and use those for network 
(if no TV is detected, a device might consider using it for internet 
bandwidth). or, maybe regional base-stations could allocate channels 
for use as network channels (individual base stations then know they can 
transmit over them because they detect a carrier signal or similar).


ideally, the whole network can configure itself more-or-less autonomously...


so, one goes and puts the base-station in their house, and it provides LAN.
then, they hook up an external/outside antenna, which provides WAN 
(like, they put it on their roof, or mounted on a pole).


maybe the WAN can be Ethernet-like (listen, try to transmit, retry if 
collision detected).
maybe IPv6 can be used as the WAN protocol (with either IPv6 or IPv4 for 
LAN).


the advantage of IPv6 here is that it can more easily auto-configure 
without needing a DHCP server or similar, but with a LAN, the 
base-station itself can provide DHCP services (allowing for IPv4 to work).


connection to the normal (IPv4) internet could be aided by the use of IP 
tunneling or similar...



or such...



-- Max


On Mon, Nov 14, 2011 at 8:06 AM, Carl Gundel ca...@psychesystems.com 
mailto:ca...@psychesystems.com wrote:


One very important thing the XO laptop has is mesh networking
technology,
and not just for use in the bush.  A way to free the general computing
public.  An alternate internet free from monopoly control.  Now
that I say
it more than one would be even better.

-Carl

-Original Message-
From: fonc-boun...@vpri.org mailto:fonc-boun...@vpri.org
[mailto:fonc-boun...@vpri.org mailto:fonc-boun...@vpri.org] On
Behalf Of
David Corking
Sent: Monday, November 14, 2011 10:56 AM
To: Fundamentals of New Computing
Subject: 

Re: [fonc] IBM eyes brain-like computing

2011-10-29 Thread BGB

On 10/29/2011 6:46 AM, karl ramberg wrote:

On Sat, Oct 29, 2011 at 5:06 AM, BGBcr88...@gmail.com  wrote:

On 10/28/2011 2:27 PM, karl ramberg wrote:

On Fri, Oct 28, 2011 at 6:36 PM, BGBcr88...@gmail.comwrote:

On 10/28/2011 7:28 AM, K. K. Subramaniam wrote:

On Thursday 27 Oct 2011 11:27:39 PM BGB wrote:

most likely, processing power will stop increasing (WRT density and/or
watts) once the respective physical limits are met (basically, it would
no longer be possible to get more processing power in the same space or
using less power within the confines of the laws of physics).

The adoption of computing machines at large is driven primarily by three
needs
- power (portable), space/weight and speed. The last two are now
solvable
in
the large but the third one is still stuck in the dark ages. I
recollect
a
joke by Dr An Wang (founder of Wang Labs) in keynote during the 80s that
goes
something like this:

A man struggled to lug two heavy suitcases into a bogie in a train that
was
just about to depart. A fellow passenger helped him in and they start a
conversation. The man turns out to be a salesman from a company that
made
portable computers. He showed one that fit in a pocket to his fellow
passenger.
It does everything that a mainframe does and more and it costs only
$100.
Amazing! exclaimed the passenger as he held the marvel in his hands,
Where
can I get one?. You can have this piece, said the gracious gent, as
thank
you gift for helping me. Thank you very much. the passenger was
thrilled
beyond words as he gingerly explored the new gadget. Soon, the train
reached
the next station and the salesman stepped out. As the train departed,
the
passenger yelled at him. Hey! you forgot your suitcases!. Not
really!
the
gent shouted back. Those are the batteries for your computer.

;-) .. Subbu

yeah...

this is probably a major issue at this point with hugely multi-core
processors:
if built, they would likely use lots of power and produce lots of heat.

this is sort of also an issue with video cards, one gets a new/fancy
nVidia
card, which is then noted to have a few issues:
it takes up two card slots (much of this apparently its heat-sink);
it is long enough that it partially sticks into the hard-drive bays;
it requires a 500W power supply;
it requires 4 plugs from the power-supply;
...

so, then one can joke that they have essentially installed a brick into
their computer.

nevermind it getting high framerates in games...


however, they would have an advantage as well:
people can still write their software in good old C/C++/Java/...

it is likely that the existence of existing programming languages and
methodologies will continue to be necessary of new computing
technologies.


also, likewise people will continue pushing to gradually drive-down the
memory requirements, but for the most part the power use of devices has
been
largely dictated by what one can get from plugging a power-cord into the
wall (vs either running off batteries, or OTOH, requiring one to plug in
a
240V dryer/arc-welder/... style power cord).


elsewhere, I designed a hypothetical ISA, partly combining ideas from ARM
and x86-64, with a few unique ways of representing instructions (the
idea
being that they are aligned values of 1/2/4/8 bytes, rather than either
more
free-form byte-patterns or fixed-width instruction-words).

or such...


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


This is also relevant regarding understanding how to make these computers
work:

http://www.infoq.com/presentations/We-Really-Dont-Know-How-To-Compute

seems interesting, but is very much a pain trying to watch as my internet is
slow and the player doesn't really seem to buffer up the video all that far
when paused...


but, yeah, eval and reflection are features I really like, although sadly
one doesn't really have much of anything like this standard in C, meaning
one has to put a lot of effort into making a lot of scripting and VM
technology primarily simply to make up for the lack of things like 'eval'
and 'apply'.


this becomes at times a point of contention with many C++ developers, where
they often believe that the greatness of C++ for everything more than
makes up for its lack of reflection or dynamic features, and I hold that
plain C has a lot of merit if-anything because it is more readily amendable
to dynamic features (which can plug into the language from outside), which
more or less makes up for the lack of syntax sugar in many areas...

The notion I get from this presentation is that he is against C and
static languages in general.
It seems lambda calculus derived languages that are very dynamic and
can self generate code
is the way he thinks the exploration should take.


I was not that far into the video at the point I posted, due mostly to 
slow internet, and the player not allowing the "pause, let it buffer, 
and come back later" strategy generally needed for things

Re: [fonc] IBM eyes brain-like computing

2011-10-28 Thread BGB

On 10/28/2011 7:28 AM, K. K. Subramaniam wrote:

On Thursday 27 Oct 2011 11:27:39 PM BGB wrote:

most likely, processing power will stop increasing (WRT density and/or
watts) once the respective physical limits are met (basically, it would
no longer be possible to get more processing power in the same space or
using less power within the confines of the laws of physics).

The adoption of computing machines at large is driven primarily by three needs
- power (portable), space/weight and speed. The last two are now solvable in
the large but the third one is still stuck in the dark ages. I recollect a
joke by Dr An Wang (founder of Wang Labs) in keynote during the 80s that goes
something like this:

A man struggled to lug two heavy suitcases into a bogie in a train that was
just about to depart. A fellow passenger helped him in and they start a
conversation. The man turns out to be a salesman from a company that made
portable computers. He showed one that fit in a pocket to his fellow passenger.
It does everything that a mainframe does and more and it costs only $100.
Amazing! exclaimed the passenger as he held the marvel in his hands, Where
can I get one?. You can have this piece, said the gracious gent, as thank
you gift for helping me. Thank you very much. the passenger was thrilled
beyond words as he gingerly explored the new gadget. Soon, the train reached
the next station and the salesman stepped out. As the train departed, the
passenger yelled at him. Hey! you forgot your suitcases!. Not really! the
gent shouted back. Those are the batteries for your computer.

;-) .. Subbu


yeah...

this is probably a major issue at this point with hugely multi-core 
processors:

if built, they would likely use lots of power and produce lots of heat.

this is sort of also an issue with video cards, one gets a new/fancy 
nVidia card, which is then noted to have a few issues:

it takes up two card slots (much of this apparently its heat-sink);
it is long enough that it partially sticks into the hard-drive bays;
it requires a 500W power supply;
it requires 4 plugs from the power-supply;
...

so, then one can joke that they have essentially installed a brick into 
their computer.


nevermind it getting high framerates in games...


however, they would have an advantage as well:
people can still write their software in good old C/C++/Java/...

it is likely that support for existing programming languages and 
methodologies will continue to be necessary for new computing technologies.



also, likewise people will continue pushing to gradually drive-down the 
memory requirements, but for the most part the power use of devices has 
been largely dictated by what one can get from plugging a power-cord 
into the wall (vs either running off batteries, or OTOH, requiring one 
to plug in a 240V dryer/arc-welder/... style power cord).



elsewhere, I designed a hypothetical ISA, partly combining ideas from 
ARM and x86-64, with a few unique ways of representing instructions 
(the idea being that they are aligned values of 1/2/4/8 bytes, rather 
than either more free-form byte-patterns or fixed-width instruction-words).


or such...


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] IBM eyes brain-like computing

2011-10-28 Thread BGB

On 10/28/2011 2:27 PM, karl ramberg wrote:

On Fri, Oct 28, 2011 at 6:36 PM, BGBcr88...@gmail.com  wrote:

On 10/28/2011 7:28 AM, K. K. Subramaniam wrote:

On Thursday 27 Oct 2011 11:27:39 PM BGB wrote:

most likely, processing power will stop increasing (WRT density and/or
watts) once the respective physical limits are met (basically, it would
no longer be possible to get more processing power in the same space or
using less power within the confines of the laws of physics).

The adoption of computing machines at large is driven primarily by three
needs
- power (portable), space/weight and speed. The last two are now solvable
in
the large but the third one is still stuck in the dark ages. I recollect
a
joke by Dr An Wang (founder of Wang Labs) in keynote during the 80s that
goes
something like this:

A man struggled to lug two heavy suitcases into a bogie in a train that
was
just about to depart. A fellow passenger helped him in and they start a
conversation. The man turns out to be a salesman from a company that made
portable computers. He showed one that fit in a pocket to his fellow
passenger.
It does everything that a mainframe does and more and it costs only
$100.
Amazing! exclaimed the passenger as he held the marvel in his hands,
Where
can I get one?. You can have this piece, said the gracious gent, as
thank
you gift for helping me. Thank you very much. the passenger was
thrilled
beyond words as he gingerly explored the new gadget. Soon, the train
reached
the next station and the salesman stepped out. As the train departed, the
passenger yelled at him. Hey! you forgot your suitcases!. Not really!
the
gent shouted back. Those are the batteries for your computer.

;-) .. Subbu

yeah...

this is probably a major issue at this point with hugely multi-core
processors:
if built, they would likely use lots of power and produce lots of heat.

this is sort of also an issue with video cards, one gets a new/fancy nVidia
card, which is then noted to have a few issues:
it takes up two card slots (much of this apparently its heat-sink);
it is long enough that it partially sticks into the hard-drive bays;
it requires a 500W power supply;
it requires 4 plugs from the power-supply;
...

so, then one can joke that they have essentially installed a brick into
their computer.

nevermind it getting high framerates in games...


however, they would have an advantage as well:
people can still write their software in good old C/C++/Java/...

it is likely that the existence of existing programming languages and
methodologies will continue to be necessary of new computing technologies.


also, likewise people will continue pushing to gradually drive-down the
memory requirements, but for the most part the power use of devices has been
largely dictated by what one can get from plugging a power-cord into the
wall (vs either running off batteries, or OTOH, requiring one to plug in a
240V dryer/arc-welder/... style power cord).


elsewhere, I designed a hypothetical ISA, partly combining ideas from ARM
and x86-64, with a few unique ways of representing instructions (the idea
being that they are aligned values of 1/2/4/8 bytes, rather than either more
free-form byte-patterns or fixed-width instruction-words).

or such...


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


This is also relevant regarding understanding how to make these computers work:

http://www.infoq.com/presentations/We-Really-Dont-Know-How-To-Compute


seems interesting, but is very much a pain trying to watch as my 
internet is slow and the player doesn't really seem to buffer up the 
video all that far when paused...



but, yeah, eval and reflection are features I really like, although 
sadly one doesn't really have much of anything like this standard in C, 
meaning one has to put a lot of effort into making a lot of scripting 
and VM technology primarily simply to make up for the lack of things 
like 'eval' and 'apply'.



this becomes at times a point of contention with many C++ developers, 
who often believe that the greatness of C++ for everything more 
than makes up for its lack of reflection or dynamic features; I hold 
that plain C has a lot of merit if anything because it is more readily 
amenable to dynamic features (which can plug into the language from 
outside), which more or less makes up for the lack of syntax sugar in 
many areas...


although, granted, in my case, the language I eval is BGBScript and not 
C, but in many cases they are similar enough that the difference can 
be glossed over. I had considered, but never got around to, creating a 
language I was calling C-Aux, which would have taken this further, being 
cosmetically similar to and mostly (85-95% ?) source-compatible with C, 
but being far more dynamic (being designed to more readily allow quickly 
loading code from source, supporting eval, ...). essentially, in a 
practical sense C-Aux would

Re: [fonc] IBM eyes brain-like computing

2011-10-27 Thread BGB

On 10/27/2011 10:10 AM, Steve Dekorte wrote:




BGBcr88...@gmail.com  wrote:

  Leitl wrote:

John Zabroski wrote:


Kurzweil addresses that.

As far as I know Kurzweil hasn't presented anything technical or even detailed.
Armwaving is cheap enough.

yep, one can follow a polynomial curve to pretty much anything...
actually getting there is another matter.

I wonder what the curve from the early Roman civilization looked liked and how 
that compared to the actual data from the Dark Ages.


probably:
sharp rise...
plateau...
collapse...
dark ages then begin.

a lot was forgotten for a while, but then in the following centuries 
much of what was lost was recovered, and then the original roman empire 
was surpassed.



now, things are rising at the moment, and may either:
continue indefinitely;
hit a plateau and stabilize;
hit a plateau and then follow a downward trend.

most likely, processing power will stop increasing (WRT density and/or 
watts) once the respective physical limits are met (basically, it would 
no longer be possible to get more processing power in the same space or 
using less power within the confines of the laws of physics).


granted, I suspect there may still be a ways to go (it is possible that 
such a computer might not even necessarily be matter as currently 
understood).


then again, the limits of what is practical could come a bit sooner.

a fairly conservative estimate would be if people hit the limits of what 
could be practically done with silicon, and then called it good enough.


otherwise, people may migrate to other possibilities, such as graphene 
or photonics, or maybe build anyonic systems, or similar.




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc

