Re: An AI program that teaches itself

2017-10-25 Thread Brent Meeker



On 10/25/2017 5:37 AM, Bruno Marchal wrote:
I am not entirely sure of this. I think that in the long term, the 
free market can work, both for preserving resources and for happiness.
We might have a different feeling due to the fact that it does not 
seem to have worked for us, but the reason is that we don't have a 
free market, given that we have the prohibition laws. Even at the 
start, Henry Ford, who made his first 300 Ford cars in hemp, and using 
hemp, defended hemp for building cars by saying that it is a 
renewable resource, and that it would not perturb the current 
concentration of O_2 and CO_2. If the market had been free, most 
people would have used hemp (which was the petrol before petrol) 
instead of petrol. 


Nonsense.  Hemp was grown for rope; it was never a fuel.  Henry Ford 
built a car whose body panels were made from plant cellulose, mostly 
from soybeans but including 10% hemp.  But it was never shown to be 
economically viable or durable enough to replace steel. Notice that when 
GM built plastic-bodied cars (the Corvette, Saturn, Fiero...), they did 
not make the plastic from soybeans or hemp, and the cars have not aged 
well.  The plastic hardens and cracks.


To sell something as toxic and disgusting as petrol, you *need* to 
abolish the free market, which is what happened. After that you do 
lose happiness, and you do destroy basically everything quickly, 
hopefully in a reversible way.
The free market is like evolution: it does not see anything in the long 
term, but it can still lead to building things which can see on longer 
and longer terms.


Bruno 


I'm afraid you've become a crank on this point... as though marijuana 
were the basis and measure of world capitalism.


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: An AI program that teaches itself

2017-10-25 Thread Bruno Marchal

Hi Telmo,








With AlphaGo, it is curious that the heuristic half of the system is
also the one that becomes a black box.


When you have the time, you might elaborate on this.


AlphaGo combines search trees and neural networks.

The old-school approach to solving turn-based games such as checkers,
chess, etc. is the minimax search tree algorithm. The idea of
minimax is simple: suppose you start with a given board state and it's
your turn. You consider all of your possible moves, then all possible
opponent moves from each of those, and so on, alternating between
players. The goal is to maximize your expected outcome on each of your
moves, while minimizing your expected outcome for every opponent move
(thus the name). Intuitively, it tries to find the strongest possible
play assuming the strongest possible opposition.
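The idea fits in a few lines of Python. This is a toy sketch over a hand-built tree, not the code of any real engine: leaves hold scores from the maximizing player's viewpoint, and internal nodes are just lists of subtrees.

```python
def minimax(node, maximizing):
    # Leaves carry a score from the maximizer's viewpoint;
    # internal nodes are lists of child subtrees.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    # Take the best value for us, assume the opponent takes the worst.
    return max(values) if maximizing else min(values)

# A tiny hand-made tree: we move, then the opponent replies.
tree = [[3, 5], [2, 9], [0, 1]]
# The opponent minimizes each branch to 3, 2, 0; we pick the max.
print(minimax(tree, True))  # → 3
```

Note how the 9 in the middle branch is irrelevant: a strong opponent never lets us reach it.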

The problem, of course, is combinatorial explosion. A first remedy
is alpha-beta pruning: when exploring a branch, once the algorithm
already knows a play that is guaranteed to be better than the one
being explored, it stops (prunes) that branch.
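A minimal sketch of the pruning step, using the same toy representation (numeric leaves, lists as internal nodes); real engines track far more state than this:

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    # alpha: best value the maximizer is guaranteed so far;
    # beta:  best value the minimizer is guaranteed so far.
    if isinstance(node, (int, float)):
        return node
    for child in node:
        v = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            alpha = max(alpha, v)
        else:
            beta = min(beta, v)
        if alpha >= beta:   # a better line is already known elsewhere:
            break           # prune the remaining siblings
    return alpha if maximizing else beta

tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, True))  # → 3, same answer as plain minimax
```

On this tree the leaves 9 and 1 are never visited: as soon as a branch is shown to be worse than a known alternative, its remaining siblings are skipped.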

Then this can be further improved with heuristics. For example, one
can have a heuristic function for chess that assigns a utility value
to a board configuration, based on the pieces remaining on each side
and their positions. By exploring the most promising branches first,
more branches can be pruned earlier.
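For instance, a bare-bones material-count heuristic for chess might look like this. The piece values are the conventional textbook ones; positional terms are omitted, and the string-based board encoding is purely an illustrative assumption:

```python
# Conventional material values; positional terms are omitted for brevity.
PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(board):
    # `board` here is just a string of piece letters:
    # uppercase = our pieces, lowercase = the opponent's.
    score = 0
    for piece in board:
        if piece.upper() in PIECE_VALUE:
            value = PIECE_VALUE[piece.upper()]
            score += value if piece.isupper() else -value
    return score

# We have a queen and a pawn against a rook: 9 + 1 - 5 = 5 in our favour.
print(material_score("QPr"))  # → 5
```

Sorting candidate moves by such a score before searching them is what lets alpha-beta prune earlier.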

Then it can be made even more aggressive by using the heuristic
speculatively, cutting branches even if it is not certain (but just
likely) that they will be weaker. One strategy for chess is to go as
deep as time allows, and fall back to heuristic pruning once there is
no more time. The more powerful the computer and the more clever the
implementation, the deeper you can go, and this is how Deep Blue
eventually defeated a grandmaster (plus a dictionary of openings and
endgames from human masters playing the game, chess textbooks, etc.).

AlphaGo replaces the heuristic function with neural networks: the
policy network and the value network. The value network learns to
assign a value to a board configuration.


Ah! OK.




It also uses a stochastic version of minimax, using a Monte Carlo
technique. Instead of following all branches, or following them in
some heuristically-determined order, it samples them. The sampling is
guided by a probability assigned to each future state. The policy
network learns to assign these probabilities.
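A toy version of that sampling idea, with a uniform prior standing in for the probabilities a policy network would learn. This is illustrative only: real Monte Carlo tree search also backs values up the tree and balances exploration against exploitation.

```python
import random

def sampled_value(node, prior, rollouts=500, seed=0):
    """Estimate a position's value by sampling lines of play instead of
    expanding every branch. Leaves are numeric scores; internal nodes
    are lists of subtrees. `prior` assigns each child a weight (here a
    stand-in for what a learned policy network would provide)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rollouts):
        branch = node
        while not isinstance(branch, (int, float)):
            # Pick one child at random, weighted by the prior.
            branch = rng.choices(branch, weights=prior(branch))[0]
        total += branch
    return total / rollouts

uniform = lambda children: [1.0] * len(children)
tree = [[3, 5], [2, 9], [0, 1]]
estimate = sampled_value(tree, uniform)
# With a uniform prior the estimate drifts toward the average leaf
# (about 3.3); a trained policy would concentrate samples on strong lines.
```

The point of the learned prior is exactly to make this sampling spend its budget on the branches that matter.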


OK.





In the first version of AlphaGo, the policy network was first
trained to replicate the moves of human masters. Then, it was
further improved by playing against itself. The first stage is
supervised learning, the second is reinforcement learning (more
similar to dopamine-based learning, if you will). The new version was
able to do it purely by reinforcement learning, with no reference to
human-generated examples. It became a master by exploring the game
from scratch.

The neural networks used are convolutional networks, of the kind
usually applied to image recognition. Instead of taking a large
number of inputs (the entire board) at once, they scan it. A smaller
square starts at the top-left, feeding the input of the network, and
then iteratively roams the board.
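The scanning idea can be sketched as a single kernel sliding over a small board. Real convolutional networks learn many kernels and stack layers with nonlinearities; this shows only the windowed sum at the heart of it.

```python
def convolve(board, kernel):
    # Slide a k-by-k kernel over an n-by-n board, producing one
    # weighted sum per window position (no padding, stride 1).
    n, k = len(board), len(kernel)
    out = []
    for i in range(n - k + 1):          # window rows, top to bottom
        row = []
        for j in range(n - k + 1):      # window columns, left to right
            acc = 0
            for a in range(k):
                for b in range(k):
                    acc += board[i + a][j + b] * kernel[a][b]
            row.append(acc)
        out.append(row)
    return out

# A 5x5 checkerboard of stones (1) and empty points (0).
board = [[1 if (r + c) % 2 else 0 for c in range(5)] for r in range(5)]
kernel = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]  # just counts stones in the window
result = convolve(board, kernel)  # a 3x3 map of local stone counts
```

Because the same kernel is reused at every position, a pattern learned in one corner of the board applies everywhere; that weight sharing is what makes the approach suit Go so well.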

It's a very clever hybrid, combining the strengths of old-fashioned
symbolic methods with more recent statistical learning.


Cool.




What I meant is that the search-tree part is purely logical and
deterministic, while the neural networks learn to have the right
intuition about which board configurations look promising. The
search tree is easy to understand, while the neural network becomes a
complex black box. So the programmers of AlphaGo can inspect its data
structures and explain what it hopes to achieve with a given move, but
they are not capable of explaining to you why the program bet on
exploring certain ideas and not others.


Thanks, I understand now.




I find this akin to the bicameral model of the brain, which I know
you like. Here the corpus callosum is simply the piece of code that
plugs the output of the neural networks into the Monte Carlo search
algorithm.

It seems obvious from the above description that this cannot easily be
extrapolated to creating computer programs (or performing
self-modification), but it is also clear that this hybrid approach
looks promising. I like it very much. There is a big fight in AI
between its "tribes" (symbolic, connectionist / statistical,
evolutionary).


Yes, that fight is part of the process. Tomorrow, the "clever" 
machines will ask to see the archives of that fight. Their origin.





I think that wonderful things will be built by
combining all of these ideas, and using their respective strengths
where appropriate.



What is missing (but has no economic value) is to make such a  
machine whose only goal is to survive by itself, and multiply.  
It would be implemented by a re-entry of all of the above into itself  
(a circular neural net). Instead of learning games, it 

Re: [everythinglist] - A comically knotty inflation, giving rise to our 3 dimension universe?

2017-10-25 Thread Bruno Marchal


On 24 Oct 2017, at 04:35, 'Chris de Morsella' via Everything List wrote:

Would be a neat explanation for the engine driving the epoch of  
inflation... also wonder what the implications would be for the  
multiverse hypothesis that relies upon a mechanism of eternal  
inflation (leading to an infinity of bubble universes), if inflation  
is instead an extremely short-lived phenomenon driven by the latent  
energy of cosmic knots. All speculative, but nevertheless also  
thought-provoking. I especially like how it would provide a  
mechanism for why we experience a 3-D + time geometry and not some  
other number of dimensions.

-Chris

Why is our universe three dimensional? Cosmic knots could untangle  
the mystery
Next time you’re untangling your earbuds, remember that knots may  
have played a crucial part in kickstarting our universe, and without  
them we wouldn’t live in 3D. That’s the strange story pitched by  
physicists in a new paper, to help plug a few plot







Quite interesting. I suspect knots and braids (and the Temperley-Lieb  
algebra) are intermediate between numbers and quantized geometries,  
and indeed provide the constraints for the "spatial" dimensions, a bit  
like I suspect the finite simple groups determine the particle  
symmetries. Programming a quantum topological universal machine/number  
is essentially encoding information in braids, which I think should be  
enough for the digital unitary matrices.


Note that that site crashed my old computer, but fortunately not my  
small portable. Some sites have no respect for old machines!


Bruno











http://iridia.ulb.ac.be/~marchal/





Re: An AI program that teaches itself

2017-10-25 Thread Bruno Marchal


On 23 Oct 2017, at 15:49, Telmo Menezes wrote:

On Sat, Oct 21, 2017 at 3:58 PM, John Clark wrote:
On Sat, Oct 21, 2017 at 12:33 AM, Brent Meeker wrote:




The problem is that, like most real problems, improving computer  
code has no simple one-dimensional measure of "better". Go games  
are won or lost.


A computer program that does the same thing as another but is  
smaller and executes faster is objectively better; and although  
there is no guarantee, small fast programs usually have fewer bugs  
than large slow programs, and the bugs they do have are easier to  
find and fix.


This is not necessarily the case. In engineering practice it is common
to use the expression "premature optimization". The idea is: don't try
to make programs as fast as you can, because this hurts readability
and maintainability. Only optimize for speed when you absolutely must.

There is a biological equivalent: the idea of "evolution of
evolvability". Some species hit local maxima and strongly optimize
along one dimension, but this also places them in a dead end. Less
optimized solutions can be more easily evolvable beyond the local
maxima. This is why modern scientists use Python instead of C
whenever they can, even though Python is roughly an order of
magnitude slower than C.

And if you complain that speed, size and robustness are 3 dimensions,  
not one, then try making the most money. That's the great thing about  
the Free Market: one dimension rules them all.


The above is also a problem with the free market. The free market is
incredibly efficient in utilizing resources to spread the maximum
amount of gizmos to the maximum amount of people. It is not
necessarily optimally efficient in preserving resources for what
really matters in the long term, or creating incentives for individual
happiness, or anything long-term to be honest.



I am not entirely sure of this. I think that in the long term, the  
free market can work, both for preserving resources and for happiness.
We might have a different feeling due to the fact that it does not  
seem to have worked for us, but the reason is that we don't have a  
free market, given that we have the prohibition laws. Even at the  
start, Henry Ford, who made his first 300 Ford cars in hemp, and using  
hemp, defended hemp for building cars by saying that it is a renewable  
resource, and that it would not perturb the current concentration of  
O_2 and CO_2. If the market had been free, most people would have  
used hemp (which was the petrol before petrol) instead of petrol. To  
sell something as toxic and disgusting as petrol, you *need* to  
abolish the free market, which is what happened. After that you do  
lose happiness, and you do destroy basically everything quickly,  
hopefully in a reversible way.
The free market is like evolution: it does not see anything in the  
long term, but it can still lead to building things which can see on  
longer and longer terms.


Bruno




You seem to love astrophysics -- I do too, but you are surely more
knowledgeable. Who pays for the astrophysicists and their equipment?
Would the free-market ever do that? Maybe once there's a clear path to
profit. Elon Musk is banking on that, but would Elon Musk take the
leap without the previous efforts by NASA and other such agencies? I
think this is equivalent to the local maxima problem that I allude to
above.

Best,
Telmo.



John K Clark










http://iridia.ulb.ac.be/~marchal/


