Re: [fonc] Morphic 3 defensive disclosure

2013-12-05 Thread Gath-Gealaich
On Tue, 03 Dec 2013 23:24:12 -0300
J. Vuletich (mail lists) juanli...@jvuletich.org wrote:

 Hi Folks,
 
 The first defensive disclosure about Morphic 3 has been accepted and  
 published at  
 http://www.defensivepublications.org/publications/prefiltering-antialiasing-for-general-vector-graphics
 and http://ip.com/IPCOM/000232657 ..
 
 Morphic 3 is described at  
 http://www.jvuletich.org/Morphic3/Morphic3-201006.html

On http://www.jvuletich.org/Morphic3/Morphic3-201006.html, you claim:

 Anti-aliasing is usually considered a technique to avoid stairway
 artifacts on rendered images. This is a simplistic view on the
 problem. Aliasing is a consequence of sampling continuous functions
 (images, photos, sound, etc). Makers of digital cameras and audio
 software know and use the theory behind it. You can read more at
 http://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem.

 Researchers know all this. The best textbooks say it. However,
 existing graphics software completely ignores the theory.

 ... This allows for mathematically proved alias free rendering. As no
 existing application does this ...

I'm somewhat puzzled by this. I've always thought that this was the
whole idea behind the stochastic sampling thingy that the ILM/Pixar
people patented (http://www.google.com/patents/US4897806) in the 1980s
to achieve mathematically proven alias-free rendering (as you put it) of
arbitrarily shaded arbitrary geometry (even geometry shaded with
non-analytical functions). Of course, it trades aliasing for noise, but
the noise can be made arbitrarily low (and for animations it may not
matter all that much anyway, since one expects some grain or noisiness
in live footage, so completely noise-free sampling might even look
unnatural). They certainly didn't ignore the problem; they had been
studying numerous analytical and non-analytical solutions for the better
part of the 1980s and finally struck gold with stochastic sampling and
PRMan.
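
(To make the trade-off concrete, here is a minimal Python sketch of regular
versus jittered supersampling of a high-frequency coverage function. The
function and sample counts are invented for illustration; this is not the
method from the patent or from Morphic 3, just the general idea of turning
coherent aliasing into incoherent noise.)

    import random

    def coverage(x, y):
        # Hypothetical coverage/shading function with frequencies far above
        # the pixel rate (a fine checkerboard), so naive sampling aliases.
        return 1.0 if (int(x * 50) + int(y * 50)) % 2 == 0 else 0.0

    def pixel_regular(px, py, n=4):
        # Regular n-by-n supersampling: samples sit on a fixed grid, so any
        # frequency that beats against the grid aliases coherently.
        total = 0.0
        for i in range(n):
            for j in range(n):
                total += coverage(px + (i + 0.5) / n, py + (j + 0.5) / n)
        return total / (n * n)

    def pixel_jittered(px, py, n=4):
        # Stochastic (jittered) supersampling: one random offset per cell.
        # Aliasing energy becomes broadband noise that shrinks as n grows.
        total = 0.0
        for i in range(n):
            for j in range(n):
                total += coverage(px + (i + random.random()) / n,
                                  py + (j + random.random()) / n)
        return total / (n * n)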

-- Gath
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-14 Thread Gath-Gealaich
On Sat, Apr 13, 2013 at 8:29 PM, David Barbour dmbarb...@gmail.com wrote:


 On this forum, 'Nile' is sometimes proffered as an example of the power of
 equational reasoning, but is a domain specific model.


Isn't one of the points of idst/COLA/Frank/whatever-it-is-called-today to
simplify the development of domain-specific models to such an extent that
their casual application becomes conceivable, and indeed even practical, as
opposed to designing a new one-size-fits-all language every decade or so?

I had another idea the other day that could profit from a domain-specific
model: a model for compiler passes. I stumbled upon the nanopass approach
[1] to compiler construction some time ago and found that I like it. Then
it occurred to me that if one could express the passes in some sort of
domain-specific language, the total compilation pipeline could be assembled
from the individual passes in a much more efficient way than would be the
case if the passes were written in something like C++.

In order to do that, however, no matter what the intermediate values in the
pipeline would be (trees? annotated graphs?), the individual passes would
have to be analyzable in some way. For example, two passes may or may not
interfere with each other, and therefore may or may not be commutative,
associative, and/or fusable (in the same sense that, say, Haskell maps
over lists are fusable). I can't imagine that C++ code would be analyzable
in this way, unless one were to use some severely restricted subset of
C++. It would be ugly anyway.
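
(Here is a rough Python sketch of what I mean by "analyzable"; the names
and metadata fields are mine, invented for illustration, not anything
taken from the nanopass framework itself.)

    from dataclasses import dataclass
    from typing import Callable, FrozenSet

    @dataclass(frozen=True)
    class Pass:
        # A pass described declaratively: a per-node rewrite plus the sets
        # of node attributes it reads and writes. The metadata is what lets
        # whatever assembles the pipeline reason about the pass.
        name: str
        rewrite: Callable
        reads: FrozenSet[str] = frozenset()
        writes: FrozenSet[str] = frozenset()

    def commute(a: Pass, b: Pass) -> bool:
        # Conservative check: two passes may be reordered (or fused) only
        # if neither writes an attribute the other reads or writes.
        return not (a.writes & (b.reads | b.writes)) and \
               not (b.writes & (a.reads | a.writes))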

Composing the passes by fusing the traversals and transformations would
decrease the number of memory accesses, speed up the compilation process,
and encourage the compiler writer to write more fine-grained passes, in the
same sense that deep inlining in modern language implementations encourages
the programmer to write small and reusable routines, even higher-order
ones, without severe performance penalties. Lowering the barrier to
implementing such a problem-specific language seems to make such an
approach viable, perhaps even desirable, given how convoluted most
production compilers seem to be.
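
(Continuing the toy sketch above: if two per-node passes commute, their
traversals can be collapsed into a single walk, in the same spirit as map
fusion. Again, purely illustrative, not how any real nanopass system
does it.)

    def fuse(a: Pass, b: Pass) -> Pass:
        # Collapse two commuting per-node passes into one, so the tree is
        # traversed (and pulled through memory) once instead of twice.
        assert commute(a, b), "non-commuting passes cannot be fused blindly"
        return Pass(name=a.name + "+" + b.name,
                    rewrite=lambda node: b.rewrite(a.rewrite(node)),
                    reads=a.reads | b.reads,
                    writes=a.writes | b.writes)

    def run(p: Pass, tree: dict) -> dict:
        # A single bottom-up traversal applying the (possibly fused) rewrite.
        children = [run(p, child) for child in tree.get("children", [])]
        return p.rewrite({**tree, "children": children})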

(If I've just written something that amounts to complete gibberish, please
shoot me. I just felt like writing down an idea that occurred to me
recently and bouncing it off somebody.)

- Gath

[1] Kent Dybvig, A nanopass framework for compiler education (2005),
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.72.5578
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Natural Language Wins

2013-04-05 Thread Gath-Gealaich
On Fri, Apr 5, 2013 at 7:59 PM, Kirk Fraser overcomer@gmail.com wrote:

 I was pointing out that innovation for its own sake is worthless then was
 agreeing with the view that not all the world's inventions come from people
 who think in English yet pointing out communicating in English is best for
 world wide distribution.  I don't really know how many Jews who won Nobel
 Prizes thought in Hebrew, English, or even Russian.  But it is as you wrote
 possible that Hebrew is more efficient.


No, it's not. Whorfianism has been all but refuted. The only area in which
the idea holds water, quite ironically, is formal/computer/programming
languages (or so Paul Graham says, but he's right, as far as I can tell).

Something about their culture  tends to be productive compared to others.
  Perhaps it's their orientation toward God, which is defined as absolute
 spiritual perfection.  That in itself would tend to produce more efficient
 thought.


They've been oppressed by intellectually impoverished Christians for two
millennia, denied the right to work in agriculture and the crafts, and
forced into knowledge-oriented professions such as medicine or finance.
Of course this nurtures a specific culture, and with the (most likely
involuntary) need to become as indispensable to others as possible in
order to avoid getting killed by hilt-happy Easter celebrators, they were
virtually forced into what is usually referred to as overachievement
(although here I have to admit, despite my earlier point, that you English
speakers have the weirdest notions in your language).

English has a property that unfortunately allows it to be redefined with
 liberal definitions which are inefficient.


^^^ This is a thoroughly nonsensical and meaningless statement.


  Computers need smarter software to exceed the performance of Watson and
 OpenCyc to create worthwhile innovations automatically.  I think working to
 automate Bible analysis is an efficient way to produce smarter software.
  But based the failures of automatic translators, computers may be slow to
 think flawlessly.


Again, you're completely ignoring the actual nature of speech, demonstrated
by such phenomena as the existence of idiolects, referential indeterminacy,
diachronic shifts, etc. Language is what it is because there's a
common-sense component to its processing in our brains, and once you have
that, you've successfully replicated a human being in silicon. Until that
happens, all bets are off.

(I'm tempted to wager that the inverse also holds:
has_human_intelligence(X) :- understands_language(X). Although the fact
that an average human being picked from the general population often fails
at simple logical reasoning sort of suggests that this intelligence is of a
slightly different kind than what we usually mean by saying "he's
intelligent" or "he's a genius".)

- Gath
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Natural Language Wins

2013-04-04 Thread Gath-Gealaich
The first math language Fortran was soon displaced in business by more
readable code afforded by Cobol's longer variable names.

Fortran was displaced in business because early Fortran had neither
structures nor random record-oriented file access, and because of some
silly government requirements for computer system procurement.

There are three basic statements in any computer language: assignment, If
then else, and loop.

...except for those languages that have none of these three? I'd rather
argue that all languages have

1) primitives,
2) means of composition,
3) means of abstraction.

Some languages lack the third (Excel?), but these are not especially useful
on a large scale.
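
(A toy Python illustration of the three, just to be concrete; nothing about
it is specific to any particular language, and the example is mine.)

    from functools import reduce

    # 1) Primitives: numbers and the arithmetic operators.
    # 2) Means of composition: nesting expressions and function applications.
    # 3) Means of abstraction: naming a composite so it can be reused.

    def mean(xs):
        # Built purely by composing primitives with a library fold; no
        # explicit assignment, if/then/else, or loop statement appears here.
        return reduce(lambda acc, x: acc + x, xs, 0) / len(xs)

    print(mean([1, 2, 3, 4]))  # 2.5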

Now to complete the project without corporate resources, it is necessary
to select an NLP application which is both more powerful and physically
smaller than IBM's Watson which won against Jeopardy's best players.  The
most powerful NLP text in history is the Bible which is only 4 Mb instead
of Watson's 4 Tb.

I have absolutely no idea what "powerful" is supposed to mean in this
context, but I'd bet the Reuters corpora against the Bible any day of the
week. The Bible sounds like horrible source material for any automated NLP
endeavor, whether research-oriented or production-oriented, since on every
level (lexical, semantic, factual) it is schizophrenically disconnected
from modern textual material.

This level of NLP mastery in or external to an outside and indoor robot
could be used to end poverty, illiteracy, crime, terrorism, and war around
the world by growing and serving food, educating and entertaining a family
with the same language and religion cradle to Ph.D

what? O_o;

- Gath
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Natural Language Wins

2013-04-04 Thread Gath-Gealaich
On Thu, Apr 4, 2013 at 11:13 PM, Kirk Fraser overcomer@gmail.com wrote:

 Liberal dictionaries have definitions that are by default wrong.


There's no such thing as a "liberal dictionary".


  For evidence of language decay, read definitions from the 1988
 Webster's Collegiate vs. the current Webster's.  Pure word and definition
 is needed to understand truth.


There's no such thing as "pure words". Language is a dynamic, evolving,
feedback-driven entity that grows and adapts to new conditions, with the
meanings of words broadening ("dog"), narrowing ("hound"), shifting
("computer"), etc.


 People who love to lie get along without words meaning things.


...I won't comment on that nonsense.


  For example the current political fight on marriage demonstrates some
 people couldn't care less for truth, only for employer's spouse benefits to
 be shared with roommates.


The political fight over marriage? I don't live in the US, so I have little
understanding of what you're talking about, but the word "marriage" seems
to be applied in most cultures around the globe to some sort of binding
social contract between individuals, related to nurturing younglings for
the next generation, with vastly different rights and obligations arising
from such a union across different cultures. This makes the meaning of the
word "marriage" highly contextual. (But I admit freely that my understanding
of cultural anthropology is limited to having skimmed through the
Encyclopedia of World Cultures. It was worth it, though - and quite
fascinating at that.)

- Gath
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] education experimental target

2013-03-03 Thread Gath-Gealaich
On Sun, Mar 3, 2013 at 5:40 PM, David Girle davidgi...@gmail.com wrote:

 Given the interest the Raspberry Pi is enjoying in education, the new
 platform coming out of TI towards the end of April, might be an interesting
 target for any fonc experiments runnable on ARM.

 http://beagleboard.org/unzipped/

Is this going to require another dose of proprietary binary blobs? With the
Pi, you at least have the prospect of being able to compile your graphics
stuff from Nile into something that actually uses the graphics hardware the
way it's supposed to be used. (For the same reason, current AMD APU
offerings seem much more attractive. At least you have the documentation.)

Cheers,

Gath
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] education experimental target

2013-03-03 Thread Gath-Gealaich
On Sun, Mar 3, 2013 at 8:02 PM, Loup Vaillant-David
l...@loup-vaillant.fr wrote:

 Err, what did you mean exactly?  Could Nile use hardware graphics on
  the rPi? Why (not)?


 My current understanding is that to do this, we would have to
 interface with the rPi's firmware blob, effectively compiling Nile
 into using OpenGL.  It looks like it's possible, though not
 optimal (not only in terms of speed, but also because it may require
 more code). Is that correct?


You ought to be able to compile it directly into the GPU's instruction set:

https://github.com/hermanhermitage/videocoreiv

https://github.com/hermanhermitage/videocoreiv/wiki/VideoCore-IV-Programmers-Manual

If we can't do that, it's a waste of good silicon.

If I were to port the FoNC SW onto something new, I'd never go into HW
without public specs. I have quite a lot of ideas for applications (AI, ML,
OCR etc.) that would benefit from extra processing power. So much power,
and it should go to waste? The very idea just gives me the creeps.

But I somehow can't shake the feeling that, from this perspective, anything
ARM-based on the market is still inferior to something like the AMD T40E.
Do you know of any truly FLOSS-friendly ARM SoCs, by any chance?

Gath
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc