Far more dangerous than this or that belief is the _practice_ of _imposing_
beliefs on non-consenting adults.

Second only to that is the danger of natural intelligence subverting
relatively unbiased machine learning so as to prevent the discovery of
truths deemed "immoral".  That's why I'm so adamant about lossless
compression as the model selection criterion.
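
As a minimal sketch of what I mean (the models and data here are
hypothetical, and zlib stands in for a real codelength estimate): encode
each candidate model together with the data's residuals under that model,
and keep whichever total is shortest, with no appeal to whether the
result is palatable.

    import zlib

    def total_codelength(model_src: bytes, residuals: bytes) -> int:
        # Two-part MDL code: bits to state the model, plus bits to
        # state the data given the model (its prediction residuals).
        return len(zlib.compress(model_src)) + len(zlib.compress(residuals))

    # Hypothetical data: a linear trend with a small periodic wobble.
    data = [2 * i + (i % 3) for i in range(1000)]

    # Model A claims nothing: residuals are the raw data.
    model_a, resid_a = b"lambda i: 0", bytes(str(data), "ascii")

    # Model B claims y = 2*i: residuals are tiny and compress well.
    model_b = b"lambda i: 2 * i"
    resid_b = bytes(str([y - 2 * i for i, y in enumerate(data)]), "ascii")

    for name, m, r in [("A", model_a, resid_a), ("B", model_b, resid_b)]:
        print(name, total_codelength(m, r))  # B wins: shorter total code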

On Wed, Feb 12, 2020 at 7:16 PM Alan Grimes via AGI <agi@agi.topicbox.com>
wrote:

> While it's more than a bit startling to see active moderation on this
> list where it had been running wild for at least four years, I can't say
> I totally disagree. I think the poster's intent was to probe whether
> the moderators were going to be a bunch of SJWs, and that he did the
> test a bit clumsily. But that's not really the relevant question; the
> question is whether the list policy is going to become actively "woke".
> Latent wokeness is actually tolerable as long as it doesn't have any
> visible outcomes. While there isn't much to defend about the guy who was
> kicked in the last few days, the question of where moderation was when
> Mentifex was off his meds a few months ago remains.
> 
> My own personal line is crossed when there is any suppression of groups
> of people, or special promotion of other groups of people on purely
> demographic grounds, or demands that we waste time on alleged issues of
> oppression or on anything else of political relevance. I, for one, will
> not make any special accommodations to anyone for any claim of
> "disadvantage" or "oppression".
> 
> Actually, that's not what I meant to write about. I have two posts that
> I need to write in the next few days, this being one of them.
> 
> I'm becoming increasingly convinced that signal processing is the
> correct ontological framework for understanding what the brain does.
> What we think of as language and thought are, ultimately, signals that
> are processed by rules encoded in other signals. I feel that this is a
> somewhat stronger paradigm than pattern recognition, which has been a
> long-standing concept in AI. Patterns are, ultimately, tools
> for synthesizing and analyzing signals. We can say that understanding is
> the act of decomposing a signal into its constituent patterns.
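> 
> As a concrete toy (the choice of numpy and of two sinusoids is purely
> illustrative): a Fourier decomposition recovers the constituent
> "patterns" hidden in a noisy signal.
> 
>     import numpy as np
> 
>     # A signal built from two hidden "patterns" plus noise.
>     t = np.linspace(0.0, 1.0, 1024, endpoint=False)
>     signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
>     signal += 0.1 * np.random.default_rng(0).normal(size=t.size)
> 
>     # "Understanding" here = decomposing the signal into constituents.
>     spectrum = np.fft.rfft(signal)
>     freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
>     strongest = freqs[np.argsort(np.abs(spectrum))[-2:]]
>     print(sorted(strongest))  # ~[5.0, 40.0], the two hidden patterns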
> 
> Deep learning has produced the successes it has because it attempts to
> decompose signals the way the brain seems to.
> 
> But the fundamental problem with all of the architectures I have heard
> of is that they attempt to pass things straight through the layers like
> a ganglion. While there are perfectly acceptable demonstrations of
> problems being solved by this approach, the approach is missing critical
> elements that would allow it to be intelligent. While there is a very
> good chance that, if the missing elements and principles are identified,
> human-level intelligence can be realized, there is also the opportunity
> to go far beyond.
> 
> I am led to this position because the way the human brain models things
> and the way it does pattern matching reveal it to be basically using
> linear models of concepts. Deep neural networks do this too. What deep
> neural networks don't have is a high-level organization and
> iterative/sequential thought.
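> 
> A toy contrast (the layer sizes and feedback wiring here are
> illustrative assumptions, not a brain model): the first function fires
> each layer once, straight through; the second feeds its output back and
> repeats the pass, a crude stand-in for iterative/sequential thought.
> 
>     import numpy as np
> 
>     rng = np.random.default_rng(0)
>     W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(8, 4))
>     W_back = 0.1 * rng.normal(size=(4, 16))  # feedback projection
> 
>     def feedforward(x):
>         # Straight-through pass: each layer fires exactly once.
>         return np.tanh(np.tanh(x @ W1) @ W2)
> 
>     def iterative(x, steps=5):
>         # The output is fed back into the input and the pass repeats.
>         y = np.zeros(4)
>         for _ in range(steps):
>             y = np.tanh(np.tanh((x + y @ W_back) @ W1) @ W2)
>         return y
> 
>     x = rng.normal(size=16)
>     print(feedforward(x), iterative(x))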
> 
> A working AI mind using DNNs would have a basic processing pathway
> through each simulated cortical region and a mixture of autonomic
> (cortico-cortical) and intentional (cortico-thalamo-cortical) pathways.
> Furthermore, I think a "magic recipe" exists that will allow one to
> measure some information source and come up with exactly the right
> neural network parameters to process it correctly. The finished mind
> would have to incorporate this recipe as a meta-learning layer to some
> extent.
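> 
> In skeleton form (the region names, gating rule, and recipe stub are
> all placeholders for illustration):
> 
>     class Region:
>         """One simulated cortical region with a local network."""
>         def __init__(self, name, source_stats):
>             self.name = name
>             # The hypothetical "recipe": derive network parameters
>             # from measured statistics of the information source.
>             self.params = {"units": max(8, int(source_stats["entropy_bits"] * 4))}
> 
>         def process(self, signal):
>             return f"{self.name}({signal})"
> 
>     class Thalamus:
>         """Gate for intentional (cortico-thalamo-cortical) routing."""
>         def relay(self, signal, target, attend):
>             return target.process(signal) if attend else None
> 
>     v1 = Region("V1", {"entropy_bits": 6.0})
>     pfc = Region("PFC", {"entropy_bits": 3.0})
>     thal = Thalamus()
> 
>     out = v1.process("retina")
>     print(pfc.process(out))            # autonomic: direct cortico-cortical
>     print(thal.relay(out, pfc, True))  # intentional: gated via the thalamus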
> 
> While DNNs as they are today won't go much further than they already
> have, if you can figure out how to build the framework I just outlined,
> I'm pretty sure you will have an AGI of at least the human level.
> 
> It should be quite obvious that when you add in more advanced signal
> processing techniques and other concepts from computer science, things
> get a hell of a lot more interesting. Consider a mind that is not
> limited to linear models. Consider a mind that can dynamically
> re-organize and extend itself far beyond what human neural plasticity is
> capable of. Consider a mind that is able to utilize low-level hardware
> far more directly and effectively than any possible human brain
> emulation.
> 
> --
> Clowns feed off of funny money;
> Funny money comes from the FED
> so NO FED -> NO CLOWNS!!!
> 
> Powers are not rights.
> 
