Jim

These are just my thoughts for now. They may not necessarily be correct.

By CAS-based SDLC I mean a systems development life cycle geared specifically 
towards complex adaptive systems, supported in engineering terms by a method 
and a management system designed for the purposes of regenerative systems. 
From this you can gather my perspective on what AI should be.

I accept that the definition of AI has fallen into ever-broader categories. 
However, XAI seems most specific in wanting to trace, and to a degree even 
control, autonomous-machine outcomes backwards and forwards within machine 
logic. My contention is that this calls for a specialized systems-engineering 
approach, not smart programming alone.

I think it is safe to say that we probably do live in a binary universe. I 
find your comments in this regard illuminating. It would follow that the 
engineering approach I'm referring to should be able to interact seamlessly 
with mathematical models. Satisfying such an objective in an "explainable" 
manner would at least require auto-converting contextual information into 
binary constructs. In other words, it might be feasible to state that any 
image of reality should convert to a testable, objectively manageable binary 
construct of the same reality. Human-driven machine programming is one typical 
means of such conversion; machine-to-machine programming is another.

In trying to understand the XAI question, my view is this: the XAI 
stakeholders are searching for mature products of extensive R&D into non-human 
translation from X-input to binary-driven regenerative AI. Functional, yet 
structured in such a manner as to retain human control over it. When AI 
regulation eventually arrives, as it surely must, this would give the holders 
of such technology a distinctive market advantage, in that they would be able 
to offer transparency (in the sense of governance).

Paying a few million dollars for a winning submission, and collecting all the 
other submissions for "free", is a quick and cheap alternative to trying to 
develop the technology yourself over the next 5-10 years.

I would think that the race against time is truly on now.


I'm grateful for this initiative. It reminds me of the value of the work I 
have already concluded (as R&D people tend to think they exist in isolation 
with their passionate ideas in the making). Further, it inspires me to open my 
work up more to the levels of scientific adaptation you imply.

In the end there is only one qualification a product needs to satisfy: does it 
work? As you quite rightly said, how can anything be explained to work if 
there is no demonstrable knowledge of how it works, or does not work? Further 
to that, how does anyone demonstrate exact knowledge of how anything works 
that was not designed, or somehow replicated, by themselves? The engineering 
approach I'm advancing here should satisfy the aforementioned knowledge 
criteria.

Regards,

Robert Benjamin

________________________________
From: Jim Bromer <jimbro...@gmail.com>
Sent: 03 February 2017 09:40 PM
To: AGI
Subject: Re: [agi] Re: Digest for AGI

I have been working on a lot of other things, so at this point I would be 
interested in working on a common AI project (in other words, XAI).

I personally do not think that neural nets per se, or drawing results from 
mathematical measures (again, per se), have the potential to solve this 
problem. I do agree (I realize) that some kind of network of related idea-like 
data has to be used to achieve stronger AI (like XAI), but I just don't feel 
that creating recognition nets that could recognize the components of other 
learning nets has much potential in the short term. I've been wrong about a 
lot of things before, so I could be wrong about this. I am not sure what 
Nanograte Knowledge Technologies was talking about when he mentioned a 
CAS-based SDLC.

There is an interesting instructional site on machine learning by Andrew Ng at 
http://openclassroom.stanford.edu/MainFolder/CoursePage.php?course=MachineLearning

If you want to get started in ML: I had a problem with it because there is 
something wrong with me. Andrew Ng seems to want the student to be able to 
make a few inferences which he would have to make if he were figuring it out 
for himself. So it is as if he leaves out a few key steps so you can have the 
fun of feeling like you are missing something but not quite getting it. The 
reason I am not going through the angst of figuring it all out for myself is 
that I want to save time by getting someone else to teach the key steps to me. 
I guess I have an emotional issue with his style of teaching.

When I got stuck I was, after a week or two, able to guess what he did not 
specifically state, but then I had so much other stuff to do that I never got 
back to it. It was like: OK, he is taking some particular (relatively simple) 
mathematical theory and then using approximation steps to write a simple ML 
application to solve it. After dealing with the annoyance of wondering why he 
did not specify a simple key step, I ended up thinking that I've done things 
like that enough times that I don't have to learn what he is teaching. Why 
should I spend more time on it? However, I do appreciate what he is doing in 
making that study freely available (regardless of my complaints), and I may 
continue online instruction - perhaps when I am more mature emotionally.
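For anyone following along, the pattern described above - taking a simple 
mathematical model and solving it by approximation steps - can be sketched in 
a few lines. This is only an illustration of the general idea, not anything 
from Ng's course materials; the learning rate and data are made up:

```python
# Fit y = w*x + b by gradient descent on mean squared error.
# Each step is an "approximation step": nudge w and b downhill.
def fit_line(xs, ys, lr=0.01, steps=5000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean((w*x + b - y)^2) with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated from y = 2x + 1; the fit should recover roughly that.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
w, b = fit_line(xs, ys)
```

The point is that nothing here requires a closed-form solution; repeated 
approximation alone gets you there, which is the step Jim describes Ng leaving 
implicit.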

Jim Bromer

On Thu, Feb 2, 2017 at 12:39 PM, Rustam Eynaliyev 
<rustam.eynali...@gmail.com> wrote:
Hi guys,

I'm looking to participate in the General AI challenge.

My background:
CS degree from Indiana University; currently a full-stack web developer 
working on government projects.

Looking to learn AI and machine learning in the near future, apply my 
newly-learned knowledge to the General AI competition, and maybe make a career 
switch to Machine Learning engineer/researcher.

Please email me if you're interested in joining me.

Rustam

On Mon, Jan 16, 2017 at 6:31 AM, <a...@listbox.com> wrote:
This is a digest of messages to AGI.
Digest Contents

  1.  Re: [agi] Explainable Artificial Intelligence
  2.  Re: [agi] Explainable Artificial Intelligence
  3.  Unsharp Mask?
  4.  Re: Unsharp Mask?

Re: [agi] Explainable Artificial 
Intelligence<https://www.listbox.com/member/archive/303/2017/01/20170109075356:AC4083B6-D66A-11E6-9365-BC080493EEC8>

Sent by Jim Bromer <jimbro...@gmail.com> at Mon, 9 Jan 2017 07:53:49 -0500

The explanation (sorry for the duality of that term in this discussion) seems 
to imply that the explanatory AI would either have to be significantly better 
than anything in contemporary Deep Learning, or it would have to work with 
Deep Learning so that it could be used to explain what features the DL network 
was using to make its categorical selections. (DL is the dominant AI method 
right now.) Old AI never really got to the point where it could draw good 
inferences on a sufficiently wide spectrum of data. So as long as the features 
in the data were strongly consistent with the features in the training, or 
strongly related to a simple defined range, old AI could work. DL is important 
precisely because it has noticeably extended AI capabilities. I do think DL is 
a kind of hybrid of old AI and NNs. Jim Bromer [ trailing quoted section 
removed ]

Re: [agi] Explainable Artificial 
Intelligence<https://www.listbox.com/member/archive/303/2017/01/20170110091535:3E847E9C-D73F-11E6-8CFF-5667BCDDB970>

Sent by Jim Bromer <jimbro...@gmail.com> at Tue, 10 Jan 2017 09:15:26 -0500

Explainable AI may start out as being as difficult as np-complete. There are 
some cases where it will work out well, but there are going to be a lot of 
other cases where it won't. Human thinking has a lot of unfilled spaces, and 
that may be a key to solving problems that are np-hard. By substituting good 
estimates and approximations *at the right time* we can *begin* to explain why 
we make some decision (once we have had some practice). A discrete-based 
system (like Watson) can give you a response about why a poor alternative was 
disqualified, but it cannot explain that decision in any depth beyond noting 
that it got some kind of score and how that score might have been arrived at. 
That is the problem with combinations of lossy decision methods (including 
discrete decision processes where the record of the precise steps taken is 
disposed of once the product of the method is computed). And systems where the 
reasons are distributed and combined across the memory (like neural nets) are 
probably as difficult as np-complete problems. As a neural net grows larger it 
will be more difficult to keep track of all the embedded 'feature detection 
objects', and if each one of them has to be related to all the others (to be 
able to use them in explanations) then this will result in a combinatoric 
explosion. Maybe a deep learning net (a hybrid system) could be used to 
analyze a deep learning net, but if so, why isn't this a straightforward 
project?
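The combinatoric explosion mentioned above is easy to make concrete: if every 
embedded 'feature detection object' has to be related to every other one, the 
number of pairwise relations grows quadratically with the size of the net. A 
quick sketch (the feature counts are arbitrary illustrative numbers):

```python
# Number of unordered pairs among n feature detectors: n choose 2.
# This is the minimum bookkeeping if every detector must be related
# to every other detector to support explanations.
def pairwise_relations(n_features: int) -> int:
    return n_features * (n_features - 1) // 2

for n in (10, 100, 1000, 10000):
    print(n, pairwise_relations(n))
```

A tenfold increase in detectors means roughly a hundredfold increase in 
relations to track, which is the scaling problem in a nutshell.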

Unsharp 
Mask?<https://www.listbox.com/member/archive/303/2017/01/20170111101243:6266B664-D810-11E6-82C1-D511BDDDB970>

Sent by Jim Bromer <jimbro...@gmail.com> at Wed, 11 Jan 2017 10:12:33 -0500

Unsharp Mask? Jim Bromer

Re: Unsharp 
Mask?<https://www.listbox.com/member/archive/303/2017/01/20170111112210:186A357C-D81A-11E6-B416-CFA9F54E02C3>

Sent by Jim Bromer <jimbro...@gmail.com> at Wed, 11 Jan 2017 11:22:03 -0500

You need a primary subject against a background for an unsharp mask to work on 
an out-of-focus image. Jim Bromer [ trailing quoted section removed ]
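For readers unfamiliar with the technique: unsharp masking sharpens an image 
by adding back the difference between the image and a blurred copy of itself. 
A minimal NumPy sketch (a crude box blur stands in for the usual Gaussian; 
function names are my own):

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int = 1) -> np.ndarray:
    """Crude box blur via averaging shifted copies (edges padded)."""
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Sharpened = original + amount * (original - blurred)."""
    return img + amount * (img - box_blur(img))
```

Note that on a uniform region the image equals its own blur, so the mask adds 
nothing - which is exactly Jim's point about needing a subject against a 
background for the technique to have anything to work on.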







-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
