Re: [agi] AGI = GLOBAL CATASTROPHIC RISK?

2021-01-18 Thread korrelan
;)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T86b555c591599ac6-Mb9deaf949ef4f59948bce062
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Anyone else work on AGI 24/356?

2020-11-01 Thread korrelan
Yup... 24/365, which is nine more days than you... :)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9960e338ab119a23-Mca02c7ce69df322a63b28a4e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: [Singularity] Re: The singularity nobody is talking about.

2020-03-30 Thread korrelan
Lock, we have been friends for years; don't diss me... I don't deserve it... and 
you will regret it.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T219f799b8db5224f-M3fb4f2dd4c13dce855c0079b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The limitations of the validity of compression.

2020-03-30 Thread korrelan
One of the main methods the human brain uses for learning is repetition; constant repeated exposure to information/ knowledge slowly engrains it into your connectome. The compound effect is subliminal: your subconscious remembers everything. Just the fact that you have read or seen something lays down neural markers, which are used during sleep to ingrain the engrams into your knowledge base.


Advertising companies have known about and leveraged these phenomena for decades; if you hear something enough times it will have a noticeable effect on you. Everything you see or read affects your mindset and mental abilities… and no one is immune. As advertisers know, what bolsters the effect is attention: if they can get you to pay attention, even for a split second, even if you don’t believe what they are saying, they have done their work… and you will be affected.


My point being… reading nonsensical ramblings, especially if you are concentrating, trying to decode/ make sense of what the writer is saying… does affect your mind/ skill set/ knowledge and intelligence. This phenomenon can affect not only what you think but how you think; your intelligence is based on/ driven by your knowledge base, after all… be careful what you feed your brain.


K:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T2a0cd9d392f9ff94-Mc27fbdf3ad65fbcaa87ed462
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Take my IQ test

2020-02-08 Thread korrelan
The test is subjective; the true point of the test is revealed by the last 
question and your poor results. K:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6eceeeb18240293e-Mf1469635dab0e402b15c861a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: General Intelligence vs. no-free-lunch theorem

2020-02-05 Thread korrelan
Perhaps you don't understand me, or perhaps you have misinterpreted what I wrote; both are valid possibilities. K:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T353f2000d499d93b-M22f68379805c0e885c7aac52
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: General Intelligence vs. no-free-lunch theorem

2020-02-05 Thread korrelan
@James


Seriously? That’s the example/ narrow pigeon-hole argument you are going to use? That’s what prediction means to you?


Your example is a calculated sequence: the answer is derived from the next logical calculation; it cannot be found from the prior sequence, hence the whole concept of prediction is negated. What you require is a calculation… or some magic.


You will be telling me next that numbers (pi), mathematics, language or even the perceived colours of the rainbow are native/ innate to reality, that they aren’t human-derived constructs/ concepts… if no humans existed, would these concepts still exist?


I think we both consider prediction very differently.


K:)


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T353f2000d499d93b-M9c5361b237ddc3cc55b54512
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: General Intelligence vs. no-free-lunch theorem

2020-02-05 Thread korrelan
BTW the whole ‘no search is better than random’ argument is moot; it’s negated by a schema that doesn’t have to search, one where all required ‘relative’ data is instantly available.


https://youtu.be/OO8lR3j1Vfc
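
To make the ‘no search’ point concrete, here is a minimal sketch (my own illustration, not korrelan’s code; the names store/ remember/ recall are hypothetical): if the store is content-addressed, retrieval is a direct lookup keyed by the pattern itself, so there is no search step for a ‘better than random’ comparison to apply to.

store = {}  # content-addressed store: the pattern itself is the key

def remember(pattern, associations):
    store[pattern] = associations      # index by content, not by position

def recall(pattern):
    return store.get(pattern)          # O(1) direct lookup, no search

remember("bow wave", ["ship", "short-term prediction"])
print(recall("bow wave"))              # -> ['ship', 'short-term prediction']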


And whilst I’m here, Legg is also wrong… there is indeed an elegant/ simple universal theory of prediction, and of intelligence for that matter. Prediction is not a mathematical formula/ algorithm; the bow wave of a ship accurately predicts where the ship will be in the (short-term) future, etc. Perspective of the problem space matters.


https://youtu.be/deqCJRiwshg


K:)



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T353f2000d499d93b-M3b4c4fa23d4170c6196cfeec
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: General Intelligence vs. no-free-lunch theorem

2020-02-05 Thread korrelan
Hi, I’ve been paying attention and thought I’d throw my own penny’s worth into the mix.


The problem with any human-derived idea/ theorem like ‘no free lunch’ is that… it’s formulated/ constructed by humans, and as such can only be applied using the depth/ expanse of our own knowledge, understanding and experience. I’ve always considered this a very narrow, almost arrogant perspective. Even mathematical proof is relative to our current understanding of our reality. We should use our theories/ knowledge as a guide only, not as set rules.


@Danko


>Otherwise, we would have only one machine learning algorithm, not multiple ones. Each of them works better in one situation and worse in another.


We only know of one truly intelligent system; the multiple AI algorithms you speak of have arisen because they each tap only one aspect of the human connectome’s schema, which is obviously why they are all based on a similar model (nodes/ connections).


>Then we go and say "let me make a network of neurons as similar to the human brain as possible". Maybe this works. The NFL theorem goes again, telling us "Nope, included that too in my proof. Not working."


But it does indeed work; what you are stating is that no one ‘so far’ has figured it out. You are again applying a narrow/ relative rule (NFL) which is limited/ constrained by human/ your understanding/ knowledge. Even E=mc² should be taken as a guide, not a rule… we as a race do not know everything.


https://www.youtube.com/user/korrelan


K:)



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T353f2000d499d93b-M096dd58cfd94f7cc70a0592d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: OpenAI’s Robot Hand Won't Stop Rotating The Rubik’s Cube 

2019-11-23 Thread korrelan
It's a slippery slope lol...

https://www.youtube.com/watch?v=CWwikNgKvQE

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf92664c5505fb143-Mc7ec6ada0bfb48de417a0124
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Standard Model of AGI

2019-11-18 Thread korrelan
If no one speaks up he will claim that everyone on this site agrees... semantic 
games.

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T28a97a3966a63cca-M8c2b5893180ea5842db6a337
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-18 Thread korrelan
Hi James, as I'm sure you are aware I was referring to sensory salience, and while some may not consider/ understand it as 'science', it is nevertheless still relevant/ applicable to this model.

I'm not really concerned about 'political bias' at this stage in the system's development, although I would hope that its innate intelligence will win through; politics, after all, is a human-derived concept and as such is both flawed by human traits and inefficient at best.

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-M465d80973364a8b266945b6f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-17 Thread korrelan
IMO compression... or to be more precise... salient spatio-temporal compression is a key/ major factor in mammalian intelligence... it gets less lossy/ more focused through exposure/ repetition/ experience.

https://youtu.be/OO8lR3j1Vfc

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-Md365e0e7be8097da5da03a2e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-01 Thread korrelan
I agree with WOM...

Lossless relies on a mathematical algorithm to re-constitute the data; compression is achieved through re-coding/ translation/ a more efficient use of the storage/ transmission medium. Lossy relies on embedded knowledge within the decoder to re-constitute the original data.

> It's there, all the information of the original results is there, it really is, it just takes longer.

I could compress the whole of the top paragraph too... 123. Given just 123 you would never arrive at/ decode that paragraph without the stored knowledge of what 123 actually means.
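
A minimal sketch of that point (my own hypothetical illustration, not from the thread; KNOWLEDGE, encode and decode are made-up names): a codec where the decoder's embedded knowledge, not the transmitted token, carries the information. Without the shared dictionary, "123" decodes to nothing.

KNOWLEDGE = {  # embedded knowledge shared by encoder and decoder
    "123": "Lossless relies on a mathematical algorithm to re-constitute the data...",
}

def encode(text):
    # Map known text to its short token; otherwise send it verbatim.
    for token, known in KNOWLEDGE.items():
        if text == known:
            return token
    return text

def decode(token):
    # A decoder without KNOWLEDGE cannot expand "123" back into the paragraph.
    return KNOWLEDGE.get(token, token)

assert decode(encode(KNOWLEDGE["123"])) == KNOWLEDGE["123"]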

:)

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-Mdbed577e43e574b4b02b625e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: group on Telegram if you would like please join here

2019-11-01 Thread korrelan
I joined a few days ago and left/ deleted it yesterday; it's going to take a month of inactivity for my account to be fully removed, apparently.

Typical chat environment: no considered posts/ content... too much posturing/ nonsense and not enough thinking.

I read all the various feeds... er... no.

:)

https://sites.google.com/view/korrtecx 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc7f6ce83ae20b45a-M7528f8796e4ed52614d1a3cf
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: DEAR ASK A ROBOT: Who is God?

2019-10-26 Thread korrelan
I knew I was normal... didn't I... Yes we did... bl**dy psychiatrists know 
nothing... they just don't understand us.

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T22987f362748a39b-M24c3da6e5ab651a396a65fae
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: How to stay focused for longer

2019-10-25 Thread korrelan
Sleep is *extremely* important to both your physical and mental well-being.

Although your brain is constantly trying to consolidate the synaptic connections representing new learning, it's only during sleep that it manages to catch up and clear away the transmitters associated with said learning. Regular deep, sound sleep reinforces the hierarchies in the same order you learned them: cause and effect, etc.

You should use sleep as a tool: read/ learn/ ponder an important topic an hour before you sleep, and give your mental mechanisms a chance to ingrain the information before you pile more on top the next day.

Lack of sleep hinders mental function and learning; you are actually hindering your overall productivity. Better to spend less time thinking more clearly than vice versa.

Deprive yourself at your own peril.

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T214348ea19c23aa2-Mc30488e5da0a7b07f290c736
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: How These Self-Aware Robots Are Redefining Consciousness:

2019-10-20 Thread korrelan
Whoa... the first bot in the video (white) is definitely a man in a costume... 
the rest is cool though.

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0bac50bd46a02459-Mca17c178714c753967ecb792
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] COMPUTE THIS!!!

2019-10-20 Thread korrelan
Even a simple, common narrow ANN does not use a language to generate its 'intelligence'; yes, the program is written in a language, but the calculations/ processing are generated by a process/ schema... not the language.


Human languages are just a common communication protocol; there is no intelligence engrained within language.


There are thousands of human languages, all with differing syntax and structures, yet they all serve the same purpose. If there were any kind of 'intelligence' wrapped up in these languages they would have a commonality that reflected the underlying logic/ processes... they don't.


I suppose it depends on the type of system you are trying to create. A system based on the processing/ understanding of language is only ever going to be a mimic; it will appear to understand by mimicking the patterns/ syntax found in said language; a chatbot is a good example.


IMO it's the underlying engine, the intelligent process that is capable of learning and using our many languages, that is the goal of AGI.


Language is a crude protocol at best… describe the colour blue to a blind person… it can't be done… language is not the seat of human intelligence.


:) 



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T4f01e8a4b34d0e2a-Mc3dd1b5ac89a07d71e91b4dc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] COMPUTE THIS!!!

2019-10-20 Thread korrelan
I agree with Alan, an intelligence learns language... language does not make an 
intelligence.

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T4f01e8a4b34d0e2a-Mac422d06aafa3982ec53b114
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Robber's Rules

2019-10-15 Thread korrelan
I based my solution on the information given, the definition of the problem.


It’s a mediation dilemma; my solution is valid by negation of the original problem, a translation of the original problem space. Negotiation through the justice system is negated by the greed of the robbers: they get 80%. The robbers, by nature, will not tell other robbers about the money for fear of being robbed themselves… hence they are a lone party. Even if they suspect a ruse, their greed and self-confidence in being ‘robbers’ will allow the ruse to play out/ continue. The dilemma is then not how to negotiate but how to kill the robbers, a much simpler dilemma.


A lot was explained about the first party, the robbers, and their moral standing. Nothing was expressed about the second party. Most people would naturally assume that the second party, because they had legal right to the money, are morally superior and thus vulnerable to the first… not necessarily so.


Human nature is diverse; my point being that no matter how ‘bad ass’ a party perceives themselves to be, there is always someone worse, i.e. the second party. Assumptions are extremely dangerous for both parties. The translation of the problem space ultimately comes down to greedy robbers vs. intelligent psychopathic killers. If done correctly, no one would be the wiser as to the ultimate fate of the robbers; there would be no proof or evidence… no dilemma… no problem... lie to the robbers, let their greed and sense of superiority seal their fate...


No part of the described problem mentioned a solution where the second party would exit with the moral high ground.

:)

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5dd6b6c7d648588e-M860c15e7e5d58a148fcf0b41
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Robber's Rules

2019-10-15 Thread korrelan
Get a legal contract signed by both parties stating that if the robbers allow 
you to accept the money, you will give them 80%. Get the money... Kill the 
robbers.  Win.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5dd6b6c7d648588e-M7255fc4c033e7af236b5ec4c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Whats everyones goal here?

2019-10-12 Thread korrelan
What's everyone's goal here?

Why... it's the same goal I have every night...

https://www.youtube.com/watch?v=XJYmyYzuTa8

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td4a5dff7d017676c-Mf88cc9024126f93861efc224
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Job market.

2019-10-06 Thread korrelan
The perceived complexity of any rendering/ reality is relative to the conscious perception/ resolution of the observer... hence, your discussion is moot... unless you can provide a standard observer-resolution constant?

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8eabd59f2f06cc50-Mb04d5a4f7fa17122d5b87416
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-24 Thread korrelan
Reading back up the thread I do seem rather stern or harsh in my opinions; if I came across this way I apologise.

Believe it or not I'm quite an amicable chap, I just lack/ forget the social graces on occasion.

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M9364edb559421575f774cb09
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-24 Thread korrelan
The realisation/ understanding that the human brain is a closed system is, to me, a first-order/ obvious/ primary concept when designing an AGI, or in my case a neuromorphic brain simulation.


> Your model looks like it has a complexity barrier.

On the contrary, I’m negating a complexity barrier by representing processing at the basest level, akin to binary on a von Neumann machine. In my opinion the only way to create a human-level AGI is to start at the bottom with a human-derived connectome model and build up hierarchically.


> I'm not pursuing a complex system model perhaps that's our disconnect here?


Yes... perhaps that's our disconnect; cheers for the protocol exchange.

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M3adf8a31d6cfc7293fc1c5a6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-24 Thread korrelan
>How does ANY brain acting as a pattern reservoir get filled?


No one fills/ places/ forces/ feeds/ writes information directly into a brain; the brain is presented with external patterns, produced by other brains, that represent information/ learning and trigger a similar scenario in the receiver’s brain.


It’s the singular closed intelligence of the learning brain that makes sense of the information and creates its own internalised version, for its own usage, based on its own knowledge and experiences.


When you talk to someone you are not imparting any kind of information; language is a common protocol designed to trigger the same/ similar mental model in the receiver, but it’s the receiver’s version of reality that comprises the information, not the sender’s.


Take this post as an example: I’m trying to explain a concept in a manner that will enable your personal closed internal simulation of reality to recognise/ simulate the same/ similar concept, but you will be using your own knowledge and intelligence to grasp it… not mine.


> Uhm... a "closed system" that views. Not closed then?


Does any of the actual visual information you gather from viewing ever leave your closed system? You can convert it into a common protocol and describe it to another brain, but the actual visual information stays with you.


A brain can’t learn anything without the innate intelligence to do so.


:)

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-Mebf2be0557ae3129c04da517
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-23 Thread korrelan
From the reference/ perspective point of a single intelligence/ brain there are no other brains; we are each a closed system, and a different version of you exists in every other brain.


We don’t receive any information from other brains; we receive patterns that our own brain interprets, based solely on our own learning and experience. There is no actual information encoded in any type of language or communication protocol; without the interpretation/ intelligence of the receiver the data stream is meaningless.


:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M376259a89e4e444d954a3076
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-22 Thread korrelan
> Or network on top of network on top of network... turtles.


Close... though more... networks within networks...

Consciousness seems so elusive because it is not an ‘intended’ product of the connectome directly recognising sensory patterns; consciousness is an extra layer. The interacting synaptic networks produce harmonics because each is using a specific frequency to communicate with its logical/ connected neighbours. The harmonics/ interference patterns travel through the synaptic network just like normal internal/ sensory patterns. Consciousness uses the same internal/ external networks that are the product of learning through external experiences, but… it’s disconnected/ out of phase with the normal deep pattern-recognition processes… it’s an interference by-product that piggy-backs on/ influences the global thought pattern.


It’s similar to hypnotism or deep meditation… cortical regions learn the harmonics… our sub-conscious is just out of phase, or to be more precise, our consciousness is out of phase with the ‘logical’ intelligence of our connectome.


Our consciousness is like… just the surface froth: reading between the lines, or the summation of the interacting logical pattern-recognition processes.


Consciousness is just the sound of all the gears grinding.


https://www.youtube.com/user/korrelan
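
As a crude numerical illustration of the interference idea (my own toy sketch, with assumed frequencies; it is not anything from the model itself): sum two networks ‘communicating’ at nearby frequencies and a slow beat envelope emerges that neither network carries on its own.

import numpy as np

fs = 1000
t = np.arange(0, 2.0, 1.0 / fs)

net_a = np.sin(2 * np.pi * 40.0 * t)   # one network's carrier frequency
net_b = np.sin(2 * np.pi * 42.0 * t)   # a neighbour, slightly detuned

combined = net_a + net_b               # what the shared medium carries
# sin(x) + sin(y) = 2 sin((x+y)/2) cos((x-y)/2), so the sum is amplitude-
# modulated at the 2 Hz difference frequency: a slow 'harmonic' that
# neither network emits alone, riding on top of the normal activity.
envelope = 2 * np.abs(np.cos(2 * np.pi * 1.0 * t))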


:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M5717525ccecbe5e67a353269
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-18 Thread korrelan
https://youtu.be/UcBDSoVs42M

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T354278308a7acf85-Medb7bbf510105f6790c7a770
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Narrow AGI

2019-08-10 Thread korrelan
>Legg proved there is no such thing as a simple, universal learner. So we can stop looking for one.


With all due respect to everyone involved, this kind of comprehensive sweeping statement is both narrow-minded and counter-productive.


>Suppose you have a simple learner that can predict any computable sequence of symbols with some probability at least as good as random guessing. Then I can create a simple sequence that your predictor will get wrong 100% of the time. My program runs a copy of your program and outputs something different from your guess.
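
For concreteness, here is a minimal sketch of the diagonal construction being quoted (my own illustration, not code from the thread; the function names are made up): the adversary replays the predictor on the history so far and emits the opposite bit, so the predictor is wrong at every step.

def adversary_sequence(predictor, length):
    # Diagonalisation: run a copy of the predictor on the prefix and
    # output the symbol it did NOT guess.
    history = []
    for _ in range(length):
        guess = predictor(history)     # predictor sees the prefix only
        history.append(1 - guess)      # emit the opposite bit
    return history

def majority_predictor(prefix):
    # Any fixed predictor works as the victim, e.g. 'guess the majority bit'.
    return 1 if sum(prefix) * 2 >= len(prefix) else 0

seq = adversary_sequence(majority_predictor, 10)
# majority_predictor mispredicts every bit of seq, by construction.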


This is nonsensical, especially with regard to AGI; just the temporal aspect of reality invalidates the premise within any meaningful context. Prediction is not a learned quality, it’s a natural phenomenon generated by physical fields/ laws (pressure/ inertia/ gravity/ time/ etc).


Take the bow/ pressure wave of a ship as an analogy. The bow wave is an accurate prediction, generated by the medium (water), of where the ship will be in the future. The properties of the bow wave are also affected by the state of the water: waves or wakes from other passing ships. In the big scheme of things this predictive schema is a very simple system, yet it’s impossible to recreate the exact sequence, or indeed the qualities of the prediction, post happening.


This video shows a similar effect in a neural schema: the resultant pressure wave or inertia of the GTP thought pattern is generated by the preceding thought patterns. They are unique to this experience and would have totally different properties post happening… your program can’t run a copy and create a predictor, because the prediction was partially temporally based… and the time has passed.


https://www.youtube.com/watch?v=I1Dyj5hgvtc


Language and mathematics are constructs created by an intelligent system; they are not an insight into how the intelligent system functions.


 :)



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-Mf1c94ca843df1698b8964378
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Reflections on OpenAI + Microsoft

2019-07-29 Thread korrelan
IMO OpenAI is trying to establish itself as a governing body; they are trying to lay the infrastructure so that when legislation is passed (and it will be) they will be in the best position, and thus be appointed, to control/ oversee AI technology. If appointed as some kind of governing body, OpenAI will have access to all the latest AI tech developments; you won't be able to put anything out into the market unless they have reviewed/ tested it first. This is what MS are investing in.



Reading between the lines, dubious tactics at best...



https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62f3c1490f51



How convenient that they saved the day and also reinforced their agenda at the 
same time.



:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7338823268197294-M723e531a6a0ebc8da9e8acd2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Mature mid-AGI Mens Latina

2019-07-26 Thread korrelan
He's working the numbers. If he can saturate the web with enough nonsensical posts (especially on key sites) that seem to make sense to a layman, then he will get a following. If enough 'laymen' can be convinced then his theory/ work/ crap has weight. Once he has the 'weight' he is more likely to convince a 'layman' with serious money to invest, boom... he's rich. Let's call it cult theory... lol.

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41abf1800d026522-M753f2eac703f4e5db7e1a514
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] ARGH!!!

2019-07-04 Thread korrelan
Hi Colin


>In my most recent posting there is this interaction that answers one of your questions:

I presume you mean my comment on spare cash and resources; it was actually a statement, not a question, but I understand your point of view and predicament. In an ideal scenario your method, without using computers, might indeed produce results. I do feel that 10-15 years is a little optimistic though, considering what has to be done, not least of which is physically designing your actual chip, prototyping, testing, etc.

I don’t mean to patronise, and I understand your desire for an actual physical chip along with the empirical truths, but have you tried simulating your ideas/ concepts on a computer? Investors are going to need something besides a theory; anything is better than nothing. I’m sure a decent simulation encompassing the essence of EMFs could be produced: orientation, field strength, scope/ penetration, wavelength, etc.


If military nuclear physicists can trust a computer simulation to design/ predict the yield/ qualities of a nuclear device without gaining the empirical data from an actual detonation, then I’m sure you could simulate EMF fields at a sufficient resolution to gain insights.


Not to mention the simulation of quantum qubits…


*The empirical evidence derives from simulations of two universal random quantum circuits, one with depth 27 for 49 qubits and one with depth 23 for 56 qubits.*

https://www.tomshardware.co.uk/julich-46-qubit-simulation-top-supercomputers,news-57562.html


>I am assuming you didn't mean the statement the way it looks.


Yup I did; the key word here is ‘functions’. I agree there’s loads of data regarding the structure of the brain, but everything regarding its actual operation/ functioning, how it achieves conscious thought, etc. is speculation… otherwise this site wouldn't exist; we would already have AGIs.


With regard to the rest of your post, with all due respect, it’s a pointless discussion. If we don’t have the infrastructure, cash or time to adopt the schema within a meaningful time scale then I don’t see how it’s relevant to us at this point in time.


By all means fight for change in the future and redefine AGI research; you may go down in the annals of history as the guy who enabled AGI to be created… if the rest of us ultimately fail.


:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T4cc8d68d18f1759a-Mdd3b0f6c13987e88ee2d259f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] ARGH!!!

2019-07-04 Thread korrelan
@Robert


Taking dementia as a single problem, there can be a myriad of theories that describe its function. Taken in the context of a system that can also simulate many other human mental conditions, you start to limit the possibilities, which is the point of the exercise.


The destination is unknown; you have to feel your way. I’m not building a human consciousness; we have plenty of them and they’re easy to make. I'm building a scalable intelligence, based on the mechanisms that allow humans to do what we do. And to do that I need to consider and simulate anything and everything; all the information is useful for cross-referencing and gaining insights.


I’m currently a self-employed software engineer; my vocational career has included vehicle mechanic, teaching/ lecturing, precision engineer, electronics/ design engineer, and corporate software designer/ programmer; I think you get the gist. Everyone has a skill set and a ‘mind type’, and I tend to fall on the practical side of the spectrum. Hence I always approach a problem space from this point of view: I see the brain as a complex deterministic bio-electrochemical machine and nothing more, cause and effect. There is no special magic, soul or other paranormal force at work; it’s not derived from quantum effects, rocks aren't conscious, and neither is the universe.


@Jim


I’m sure your speculations as to what the Wright brothers were thinking are on par, though the relevance of my reference seems to have gone amiss. Perhaps I should have used the term ‘mind-set’ rather than methodology; apologies, my bad.


@Alan


>How much hardware do you need?

I wish it were a matter of throwing computing resources at the problem to expedite progress. I’m currently running a 12 x 4-core PC cluster, which is sufficient at the moment. My one saving grace regarding computing power has been that the GTP complexity is roughly linearly proportional to the maturity/ experience of the connectome, so the older it gets the slower it gets (in simulation). The generality of how the knowledge is stored also helps to a degree; many diverse concepts can be learned using the same knowledge facets, just recombined.


And just for the record, although I didn’t think it would be a necessary thing to state: when I mention consciousness or self-awareness I’m not referring to the human-level phenomena, I’m referring to the mechanisms/ phenomena within my model that I construe as the equivalent.


The key point is that I’m not building a human mind… it’s a machine/ alien equivalent, a scalable improvement leveraging what I believe to be the essence/ seat of our intelligence.
 
:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T87761d322a3126b1-M46f6549919a9d6cde4412d41
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] ARGH!!!

2019-07-04 Thread korrelan
Unless you have the spare cash, time and resources, the whole argument is moot, and you must find another way of achieving the goals within your means. You can negate most of the above by taking a leaf out of the Wright brothers’ methodology… take a leap of faith (in yourself) and just build the damn thing, make it work… prove it works.


Every now and again I like to take a break from teaching/ designing my AGIs and consider human frailties, checking whether my design can simulate the symptoms and/ or give any insights into the prognosis/ diagnosis or cure.


I have a list, roughly ordered by complexity, and today it’s the turn of terminal or paradoxical lucidity (PL). Paradoxical lucidity is one of nature's cruellest tricks: approx. 75% of patients with long-term dementia will fully/ partially become conscious/ lucid shortly before they die. It’s a very complex diagnosis that ties into many other conditions, and I’m greatly over-simplifying the topic for the purpose of explanation.


 
https://www.sciencedirect.com/science/article/pii/S1552526019300950


Considering the phenomenon in its simplest terms obviously begs the question of how this can happen/ function. It seems intuitive that, for normal(ish) function to return, the symptoms of dementia cannot be caused by permanent damage/ change, nor can something like a build-up of amyloid plaque be ultimately responsible; but something is impeding consciousness, so what could it be?


Keep in mind I have already done this for a myriad of conditions and phenomena, so I have insight into how my model behaves/ functions. I’ve replicated optical/ audio illusions, pareidolia, schizophrenia, hallucinations, hypnotism, meditation (states of mind), epilepsy, anaesthesia, NDE, and many more, all within the same model.


Firstly I read as much empirical information about the subject as possible. Then I formulate a theory of how those symptoms could arise and manifest within my model. I then alter the model's balances and test, repeating until I get the desired results, making notes all the way.


Within my model, memory consolidation and consciousness are extremely sensitive to the base frequencies of the Global Thought Pattern (GTP). The high-dimensional facets of memories are encoded/ indexed by the state of the GTP performing the task at hand; consciousness manifests from the harmonics within the GTP.


 
https://www.youtube.com/watch?v=dJmdWfDTgLQ


This shows a small section (1.2 mm², 0.01%, 10K neurons, 200K synapses) of cerebral cortex from my model; I use it for testing hypotheses and it encompasses all the functionality of the full model. It has learned 40K memory engrams, segmented into 80 pattern concepts, along with a regular base GTP rhythm. The graph function (lower left) is equivalent to real-time colour-coded Golgi staining, and shows the confidence the model has in recognising the current pattern, indicated by the scrolling bar. Notice the actual pattern stream/ matrix on the upper right, along with the injected regular GTP rhythm just below it. On the first pass it shows a very high confidence in recognising all the patterns; both the episodic sequence memories and the memories regarding the pattern structure are being recalled/ accessed. On the second pass I change the base frequency of just the GTP; notice how the memory retrieval/ recognition becomes sporadic. On the third pass I cut the GTP, and the confidence drops completely even though the 80 patterns are still being injected. I then re-establish the GTP and normal operation resumes. This shows how reliant on/ sensitive to the state of the underlying base GTP frequencies the system is.
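
As a toy numerical illustration of that frequency sensitivity (my own minimal sketch with assumed numbers; nothing like the real model): treat recall confidence as the normalised correlation between the rhythm a memory was indexed under and the current base rhythm. Shift or cut the base frequency and the confidence collapses, mirroring the three passes above.

import numpy as np

fs = 1000                         # samples per second
t = np.arange(0, 1.0, 1.0 / fs)  # one second of simulated time

def confidence(f_stored, f_current):
    # Memories are indexed by the rhythm they were stored under; recall
    # confidence here is the normalised correlation between that stored
    # index rhythm and the current base rhythm.
    stored = np.sin(2 * np.pi * f_stored * t)
    current = np.sin(2 * np.pi * f_current * t)
    denom = np.linalg.norm(stored) * np.linalg.norm(current)
    return 0.0 if denom == 0 else abs(np.dot(stored, current)) / denom

print(confidence(10.0, 10.0))   # pass 1: matched rhythm -> ~1.0
print(confidence(10.0, 11.5))   # pass 2: shifted rhythm -> near 0
print(confidence(10.0, 0.0))    # pass 3: rhythm cut     -> 0.0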


The slow onset of dementia hints at the second pass; it’s not like the global GTP disruption caused by anaesthetic, so I don’t think it’s an imbalance in the neurotransmitter levels/ medium. It must also be affecting the well-established networks with diminished plasticity; otherwise the brain would just adapt to the disruptions and wouldn’t then be able to exhibit the PL phenomenon.


So one cause of dementia could be an alteration of the base frequencies within the GTP, and the PL phenomenon could mean that whatever is causing the phase change is related to a condition that rises or reduces/ diminishes just before death, allowing the GTP to phase back through its normal frequency domain and thus allowing consciousness to temporarily return. My current main candidate is intracranial pressure, as altering the shape of the connectome can also have adverse effects on the phase of the GTP; further pondering is required.


My point being that… although there is no empirical data on how the human brain functions, it is still possible to gain insights and build a working model through experimentation and cross-reference; and although this is a low-resolution insight into the functioning of the brain, it hints that so far my schema is correct.


Indeed, IMO this is the only way to do it; you have to work the problems. Applying/ finding 

Re: [agi] test

2019-06-30 Thread korrelan
the bot's head/ arm movements.


To me it’s not a matter of writing a theory; we already know of an intelligent schema, we just have to figure out how it actually functions and build it.


There is more information @ the following...
 
https://www.youtube.com/user/korrelan


https://sites.google.com/view/korrtecx


 :)

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf97c751029c2e4db-M6238e7e5cd342c968571ee19
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] test

2019-06-26 Thread korrelan
*Abstraction in its main sense is a conceptual process where general rules and concepts are derived from the usage and classification of specific examples, literal ("real" or "concrete") signifiers, first principles, or other methods.*

*"An abstraction" is the outcome of this process—a concept that acts as a common noun for all subordinate concepts, and connects any related concepts as a group, field, or category.* [1]



To consider a modern PC at its lowest level of abstraction, it would be the binary/ logic gates, if not the movement of electrons around the circuits. As you abstract away each layer of complexity (high-level language, machine code, registers, etc.) the underlying binary gate schema is always evident in the design and operation.


We can trace this backwards because we designed the system and have insight, but this cannot be applied to the human brain; it would be akin to understanding the innards of a combustion engine by just listening to the sounds it makes.


Human-derived logic, calculus, language, set theory, etc. can be considered concepts equivalent to high-level programming languages; you need to consider the low-level/ machine code and the mechanisms that comprise these concepts. How does a biological neuron have to function in order to recognise a face, learn physics and build a Moon lander?


Trying to build an AGI using symbolism, spoken language or C++ code, for example, is IMO like trying to build a car from cars. (I seem to have a car theme going.)


I haven’t written any papers, just the KorrTecx site ATM.


There may be many ways to build an AGI, and even more efficient schemas (bird, plane). Everyone has a theory; I figure the best way to prove my point is to build it and let it explain itself.


:) 



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf97c751029c2e4db-Mc93db46e109bf19faeab4b67
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] test

2019-06-26 Thread korrelan
I require a frequency-modulated speech/ phoneme engine; none exist, so I’ve just started building my own.


 https://www.youtube.com/watch?v=ffY37q44O4E
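
To picture the kind of output such an engine might produce, here is a toy vowel generator (entirely my own hypothetical sketch, not korrelan’s engine; the formant values are rough assumptions): a pitch-rate envelope modulating a few fixed formant partials, written to a WAV file with the standard library.

import math, struct, wave

fs = 16000
dur = 0.5
f0 = 120.0                                           # glottal pitch (Hz)
formants = [(700, 1.0), (1200, 0.5), (2600, 0.25)]   # rough /a/-like formants

frames = []
for n in range(int(fs * dur)):
    t = n / fs
    # Crude source-filter stand-in: the pitch-rate envelope modulates
    # the summed formant partials.
    env = 0.5 * (1 + math.sin(2 * math.pi * f0 * t))
    s = env * sum(a * math.sin(2 * math.pi * f * t) for f, a in formants)
    frames.append(struct.pack('<h', int(max(-1.0, min(1.0, s / 2)) * 32767)))

with wave.open('toy_vowel.wav', 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(fs)
    w.writeframes(b''.join(frames))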


 :)


 


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf97c751029c2e4db-Mef4c5310539dee5243232fb4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] test

2019-06-26 Thread korrelan
For the past 20 years I’ve been working on a compromise.


A massively parallel electro-chemical simulation/ emulation of the human brain, one that has to exist in a 3D volumetric space and time.

There are no standard/ common neural nets, weights, biases, sigmoid functions or even back-propagation. All ‘computation’ is performed by the Global Thought Pattern (GTP) that constantly flows/ cycles through the 3D connectome model. ‘Computation’ is achieved by spatio-temporal, frequency-modulated networks and the waves that propagate between them; short-term/ working memory is the GTP, and long-term memories are stored/ engrained in the 3D physical structure of the connectome.
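
A minimal sketch of one ingredient of that description, under my own assumptions (an illustration only, not the actual simulator; the Neuron/ connect/ propagate names are made up): neurons placed in a 3D volume with spike propagation delay proportional to the 3D distance between them, which is what makes the computation spatio-temporal.

import heapq, math

class Neuron:
    def __init__(self, x, y, z):
        self.pos = (x, y, z)
        self.targets = []              # list of (neuron, delay_ms)

def connect(a, b, velocity_mm_per_ms=1.0):
    d = math.dist(a.pos, b.pos)        # axonal length ~ 3D distance (mm)
    a.targets.append((b, d / velocity_mm_per_ms))

def propagate(source):
    # Event-driven wavefront: spikes arrive later at more distant neurons.
    events, seen = [(0.0, id(source), source)], set()
    while events:
        t, _, n = heapq.heappop(events)
        if id(n) in seen:
            continue
        seen.add(id(n))
        print(f"spike at {n.pos} t={t:.2f} ms")
        for tgt, delay in n.targets:
            heapq.heappush(events, (t + delay, id(tgt), tgt))

a, b, c = Neuron(0, 0, 0), Neuron(3, 4, 0), Neuron(0, 0, 10)
connect(a, b); connect(a, c); connect(b, c)
propagate(a)   # a fires at 0 ms, b at 5 ms, c at 10 ms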

The model is as biologically accurate as required to enable the functioning of the GTP; the connectome, lobes, axons, dendrite trees, neurotransmitters, calcium channels, EMF and blood flow are all simulated, and over the years I have narrowed the requirements down to the key criteria.

It has plasticity, prediction and long/ short-term memories; it can repair after damage/ stroke using plausible mechanisms, adapts over time, can recognise objects/ sounds and make decisions, requires sleep cycles and even dreams.

Whilst not a direct hit on the ?, I feel I'm closer to it than most.
https://www.youtube.com/channel/UCXUxxBJGhs1klBtD8lgLyAA


:) 



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf97c751029c2e4db-Md04bfaea10b9891966926a53
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Do Neural Networks Need To Think Like Humans?

2019-03-10 Thread korrelan
@Matt - I agree.

This is an AGI ocular input module; at the end of the video you can see the results from using a polar retina: a high-resolution fovea with the periphery degrading to lower resolutions. There are videos showing the negation of rotational/ scale invariance using the same technique.

https://www.youtube.com/watch?v=SPr8KhqVCeo
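
For anyone wanting to experiment with the general idea, a comparable effect can be had with OpenCV's log-polar remap (a minimal sketch of a similar technique, my assumption rather than korrelan's actual module; 'input.png' is a hypothetical image): sampling is dense at the centre (the fovea) and falls off toward the periphery, and rotation/ scaling about the centre become simple shifts in the remapped image.

import cv2

img = cv2.imread('input.png')            # hypothetical input image
h, w = img.shape[:2]
center = (w / 2, h / 2)

# Log-polar transform: rows map to angle, columns to log-radius, so
# rotation about the centre becomes a vertical shift and scaling
# becomes a horizontal shift; resolution is highest near the centre.
retina = cv2.warpPolar(img, (w, h), center, min(center),
                       cv2.WARP_POLAR_LOG)
cv2.imwrite('retina.png', retina)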

Korrelan :) 

https://sites.google.com/view/korrtecx
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T4aa81fd9912dbd39-Medc34dc5fd5308f2a5686d05
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: For the ammusement of the Homunculus.

2019-02-23 Thread korrelan
I tend to program in VB6 & WIN/API, *not* VB.NET; it's really quick and easy for prototyping and compiles to native code, and with all the optimizations on it runs fast enough for this type of work. If I require more speed I use C++ and create a DLL.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9974b13cda814d12-Mcd5c4e841c8ff4aedd0f18d0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: For the ammusement of the Homunculus.

2019-02-23 Thread korrelan
https://youtu.be/jrHT6Rx_y7s

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9974b13cda814d12-M07a564afeec52c3a9b1084e8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Discussing hardware.

2019-02-07 Thread korrelan




If there is no image above then... oops.

https://www.youtube.com/channel/UCXUxxBJGhs1klBtD8lgLyAA

:)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td8baca9fcbb1-M3feae4efb55e093d12b50e45
Delivery options: https://agi.topicbox.com/groups/agi/subscription