Joel,

You make some good points that make me think more deeply about this, but I'm still locked into a non-intentional KC. I'm going to try to explain this in light of my definition of knowledge creation, which is different from others out there today.

Per one of my last notes, knowledge is a structure of symbols chosen by society. "Logic" is a logical structure of knowledge, which is always composed of symbols. "Meaning" is synonymous with knowledge as well. Basically, there are many mixed-up terms associated with knowledge, and all of them really point to knowledge structure. We know all knowledge is structured because all knowledge can be categorized and all knowledge can be connected in related groups of three.

My online book, Knowledge Machine (http://www.anti-knowledge.com/book/00_Title.htm), explains this in Chapter 4 - The Philosophy of Knowledge. This chapter is a hard read, but shows how many terms that describe facets of knowledge are overlapping, synonymous, or conflicting in their definitions. It's my opinion that, properly understood, all of these point to a single meaning for knowledge which is simply logical structure. Granted, there are two basic types of knowledge, one convergent (science), and one expanding (technology), but both of these expand as structures.

This expansion occurs on the "Cutting Edge" of that knowledge. It is at the cutting edge, as with much of the discussion on a list like this, that new structural connections are formed and new knowledge is created.

These structural connections are formed by either recognizing and correcting erroneous connections, or by recognizing a lack of structure (which is a question), and adding new structure to it. The knowledge creation process then, looks like this:

1. Definition/Solution/Structure/Meaning (knowledge context)
2. Question/Query/Problem (perceived lack of structure)
3. Logical Operation (connects/structures/defines)
4. Advanced Definition/Solution/Structure/Meaning
5. Return to Step 2
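
The five steps above can be sketched as a simple loop. This is only an illustration of the cycle's shape; the function names, the toy "knowledge" (a set of integers), and the stopping condition are my own assumptions, not part of the process as defined:

```python
def create_knowledge(structure, find_question, logical_operation, max_steps=10):
    """Iterate the five-step knowledge creation cycle.

    structure         -- the current knowledge context (step 1)
    find_question     -- returns a perceived lack of structure, or None (step 2)
    logical_operation -- connects the question into new structure (steps 3-4)
    """
    for _ in range(max_steps):                # step 5: return to step 2
        question = find_question(structure)   # step 2: perceived lack of structure
        if question is None:                  # no gap perceived: loop halts
            break
        structure = logical_operation(structure, question)  # steps 3-4
    return structure

# Toy illustration: "knowledge" is a set of integers, a "question" is the
# smallest missing integer below 5, and the "logical operation" adds it.
known = {0, 2, 4}
gap = lambda s: next((n for n in range(5) if n not in s), None)
add = lambda s, q: s | {q}
result = create_knowledge(known, gap, add)
# result is now {0, 1, 2, 3, 4}
```

The point of the sketch is that nothing in the loop decides or intends anything; it only closes perceived gaps in structure until no gap remains.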

All knowledge, without exception, is created following this simple process. Creating new knowledge is simply creating a larger, cold, heartless, non-emotional, non-intentional, non-deciding, monolithic mental structure. Again, don't confuse this with human intention built into the code... I'm talking about pure knowledge here.

For example, in physics one would gather data, by human or machine, that is fed to the AKC system, which recognizes a lack of knowledge structure and then assembles this into structure. This is artificial knowledge creation. It requires four things:

1. A relevant data source that can be harvested.
2. The ability to recognize existing knowledge structure in this context.
3. The ability to recognize and structure questions, or the perceived lack of structure, into knowledge.
4. A loop... to go forth and do steps 1-3 to the limits of the storage capacity (this is where I see 'the goal').
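
A minimal sketch of those four requirements as a harvest-and-structure loop. Everything here (the function name, the toy data, using a capacity cap as "the goal") is a hypothetical illustration, not a reference to any real system:

```python
def akc_loop(source, storage_limit):
    """Toy artificial knowledge creation: harvest items from a data source,
    keep only those not already structured, and stop at the storage limit."""
    knowledge = set()
    for item in source:                        # 1) harvest a relevant data source
        if item in knowledge:                  # 2) recognize existing structure
            continue
        knowledge.add(item)                    # 3) structure the gap into knowledge
        if len(knowledge) >= storage_limit:    # 4) loop to the storage limit
            break
    return knowledge

facts = ["mass", "energy", "mass", "spin", "charge", "parity"]
stored = akc_loop(facts, storage_limit=3)
# stored holds the first three distinct items: {"mass", "energy", "spin"}
```

Note that the stopping rule (the storage limit) is supplied from outside the loop, which is the sense in which 'the goal' is fed in rather than originated.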

Granted, when you look instead at replication of the human system, not strictly AKC, you run into all kinds of intention issues, because humans have friendly and unfriendly intentions. It is possible for a machine to have intentionality, but I believe it has to be transferred to the machine. The machine cannot set a goal in and of itself without parameters being fed into it. The Internet is one big pile of human intentions. One could argue that it is emerging into something greater, and I'd agree, but the drive to emerge is coming from intention that is fed into it.

It is also possible for a machine to recognize human intention as a data point. But this is not originating an intention. Intention is either gathered or fed into a machine. The machine is a tool.

That said, I can definitely see how someone could see this going exactly the other way, and I respect your views on this, Joel. Just keep in mind the main point I want to make... the KEY question that either side needs to answer is:

** From where does intention originate?  **

If folks answer this question, they answer the 'friendly/not friendly AI' question.

This, however, is a much deeper question than:
'What happens if I copy human intention to a machine?'

Some of the technical discussion in this forum is tough for me to follow, but it seems to me that this is where most of the discussion in this thread has been. The answer to that question is simple: unfriendly human intentions transferred to a machine = unfriendly machines.

When someone related to this list the story of a cut-throat e-mail group of AI researchers, it's a bit scary to think that they are headed down this path with those kinds of personal intentions. Fights and bad intentions inside of people can easily be transferred to a machine... without a doubt. Which is why the first order of business is dealing with human intentions on a spiritual level.

So I see the answer to the key question above as 'intention is spiritual' and believe that it originates outside of the mind and outside of knowledge itself.

I really enjoy this kind of discussion... I hope my comments are coming across in e-mail with fervor and not 'attitude.' I try to stay in dialogue and not debate, but am not always successful. E-mail is not a very good tool for dialogue. Thanks for taking the time to chat with me.

Kind Regards,

Bruce LaDuke
Managing Director

Instant Innovation, LLC
Indianapolis, IN
[EMAIL PROTECTED]
http://www.hyperadvance.com




----Original Message Follows----
From: "Joel Pitt" <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: [email protected]
Subject: Re: [singularity] Singularity - Human or Machine?
Date: Mon, 11 Sep 2006 16:36:12 +1200

Hi Bruce,

By Sol I meant our sun....

I think that knowledge creation can't be separated from intention.
Creating knowledge implies there is direction or purpose to a system.
Giving it a wealth of data and then telling it to generate knowledge
will not go anywhere without a goal - whether this is the
classification/categorisation of data that many "machine learning"
algorithms carry out these days or some other concept of knowledge,
there is still a process heading towards some optimal state. You could
argue that this is the goal programmed by the system creators, but
when the system becomes particularly complex, the goals are not always
completely clear, and several goals can start competing for attention.

Of course you could still attempt to prevent a machine from carrying out
physical actions, but one of the concerns about unfriendly AI is that
given the room to improve itself it could discover methods of
influencing the real world that we couldn't conceive of ourselves.

If you believe that machines cannot become self-aware then I can't
argue with that, but even non-self-aware systems can have goals.

-Joel

On 9/11/06, Bruce LaDuke <[EMAIL PROTECTED]> wrote:
Joel,

I apologize; I'm not sure I understand how you're using the term 'Sol'
here. But I think I see where you are going with this, so I'm going to take
a run at it anyway.

Key words in your question are 'decide' and 'take apart.'  The knowledge
creation process is distinct from the decision process and action or
performance.

It is possible to advance knowledge beyond known limits and never 'decide to
do' anything.  Advancing that knowledge creates a potential to do, though,
which does need to be managed.  Related to this, I separate 'futuring' into
three categories:

1) Social advance - The center is knowledge creation
2) Social context - The center is the balance of interests
3) Industry - The center is supply and demand

In this definition, social advance = cumulative created knowledge that has
been accepted by society.  So then the knowledge creation process, and
really knowledge itself, gives a society options to decide upon, and to do
things with.  The more knowledge a society has, the more that society can
potentially decide to do with it. But 'deciding and doing' are not inherent
to knowledge creation....these are very much distinct in their operation.

For example, it would be possible to increase our nanotechnology knowledge
beyond comprehensible limits and still not decide as a society to do
anything with that knowledge.  Or we could decide to base our entire
economic system on a 'molecular economy,' as we are basically starting to do
now.  The implication here is that we have in knowledge the power to do.
Power to make material multiplied times lighter and stronger than steel, or
power to make nanobombs that can level a city from your shirt pocket.
Neither is executed without an intention and decision to do.

Social context is how we deal with these options: how society, for
example, copes with change and volatility associated with knowledge advance.
It is in this social context that decisions are made. Decisions require
consciousness and intention.  The barrier is teaching the machine to have
intention.  A machine can anticipate intention, but I don't see a machine
originating it, because this is a function of consciousness, which I see as
residing outside of logic and knowledge.

Industry is the science of making things.  It is application or 'doing' in
society.  Granted, we do things within the social context as well (e.g.
philanthropy or war), but by and large, industry is the actionable arm of
society.  This is likely where a machine would 'do' something, if it had
intention and could decide.

I said all this to say that artificial knowledge creation can be an automated
expansion of knowledge to storage limits, independent of any decisions,
social context and its application, or industrial application. By nature of
how the knowledge creation process really works, this is exactly how I think
it will look...a self-expanding resource and not an intentional
decision-making machine.

But I can't deny that, at some juncture, we may find ourselves dealing with
a conscious or aware machine that can choose and can then act through
cyber-benevolence, cyber-terrorism, robotics, etc.  But as I understand the
knowledge creation process, this is more science fiction than reality.  I
see the paradigm more naturally evolving as an automated 'resource' that
expands to its storage limits and that is, and will always be, incapable of
intentionality or decision-making (unless these are loaded into it by a
human).

The tricky thing here is that it is possible to load intention or decision
criteria into a machine, such that it makes judgements based on the
intention/decision it is given...an extension of the expert system type of
thing.  "Machine, when you reach these GPS coordinates, nanobomb, blow up."
The intention and decision in this scenario ultimately extends out from a
human being and is not 'originated' by the machine.  Based on my current
understanding of the knowledge creation process, which again is very cold,
logical, and predictable... the most likely scenario is that
intention and decisions will always ultimately originate from the human
creator (though sometimes this 'origin' will be buried under pre-loaded
decision trees).
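
That kind of pre-loaded decision criterion can be sketched as a trivial rule check. The coordinates, names, and tolerance below are invented for illustration; the point is that the human who writes the rule, not the machine that evaluates it, is the origin of the intention:

```python
# A pre-loaded decision criterion: the machine only evaluates a condition
# a human gave it. The intention resides entirely in the rule's author.
TARGET = (39.7684, -86.1581)   # hypothetical GPS coordinates

def check_trigger(position, target=TARGET, tolerance=0.01):
    """Return True only when position is within tolerance of the target."""
    lat_ok = abs(position[0] - target[0]) <= tolerance
    lon_ok = abs(position[1] - target[1]) <= tolerance
    return lat_ok and lon_ok

check_trigger((40.0, -86.0))        # False: condition not met
check_trigger((39.7684, -86.1581))  # True: condition met, as the author intended
```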

I definitely can't deny that machine awareness is possible, but I think it's
more prudent at this juncture in human history to manage the social context
created by human beings. It's my opinion, that human beings themselves will
intend or decide their own fate, and that as we learn more and more about
the knowledge creation process, the machine/technology is going to be proven
to be only a tool that magnifies the potential for human intentionality,
decision, and action...be it benevolent or destructive action...not an
intentional, decision-making entity.  When we develop true AKC, we'll start
to understand how deep, and spiritual, the mystery of human intention,
decision-making, awareness, and consciousness really is.  And given the
powerful machine/technology tools we will hold in our hands, it will be this
facet of humanity that we need to mature.  In my opinion, this is what we
are all going to face at singularity...our own humanity and not the machine.

Kind Regards,

Bruce LaDuke
Managing Director

Instant Innovation, LLC
Indianapolis, IN
[EMAIL PROTECTED]
http://www.hyperadvance.com


--
-Joel

"Wish not to seem, but to be, the best."
               -- Aeschylus

-------
AGIRI.org hosts two discussion lists: http://www.agiri.org/email
[singularity] = more general, [agi] = more technical

To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]

