On the state of Artificial Intelligence Part the second
By whazat (Sun Apr 18, 2004 at 12:29:40 PM EST) (all tags)
My views on the state of AI, because the previous diary missed out a couple of interesting groups, and of course missed out my view on AI ;)

There has recently been a bit of discussion on Artificial Intelligence on this site, and I would like to present my comments and provide a place for further discussion. And as I missed out, I want to rekindle it.

Before I present my own view it would be remiss of me to neglect other groups of Artificial Intelligence researchers not mentioned in the original diary.

The first and most interesting is the group of computer scientists such as Marcus Hutter and Juergen Schmidhuber. These are the inductionist equivalent of the logicians. They are also the antithesis of the connectionists, but their formalism is based on the maths of Algorithmic Information Theory rather than logic. As such they take their inspiration from Gödel, Turing and Kolmogorov. It is very proof-heavy, and interesting. However the approaches are generally not suited to real-time problems, and I have doubts about how useful the systems would be in some situations, such as dealing with other learning systems (which I will not go into here). They are also not very biologically plausible.

Then there is the Artificial General Intelligence group, or Real AI. I probably do them a disservice by lumping together Ben Goertzel and co. and Eliezer Yudkowsky and co. However they both represent a form of AI I don't like. It is the one that tries to create computational structures for the words we have created for things in our minds, such as concepts, memories and thinking. For once I am with the behaviourists on this one: these things should not be explicitly coded for.

So where do I fit into all this? Peg me somewhere between a Gödel Machine and evolution. Whereas Juergen relies on proof to make sure that his machines continue to improve, I try to use the less reliable but also less computationally expensive method of competition to make sure that the best self-modifiers survive. Some of my thoughts can be found here at my Codesoup project page.
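The competition idea can be sketched as a toy evolutionary loop (the task, the (value, step) genome and the population size are all my own illustrative assumptions, not Codesoup's actual design):

```python
import random

random.seed(0)
POP_SIZE = 20

def fitness(value, target=0.5):
    # toy task: how close the candidate's value is to a fixed target
    return -abs(value - target)

def mutate(value, step):
    # a candidate "self-modifies" by proposing a perturbed copy of itself
    return value + random.uniform(-step, step)

# each candidate is (value, mutation step); the step is part of the genome,
# so *how* a candidate modifies itself is also under selection
pop = [(random.uniform(0, 1), random.uniform(0.01, 0.5)) for _ in range(POP_SIZE)]

for generation in range(100):
    offspring = [(mutate(v, s), s * random.choice([0.9, 1.1])) for v, s in pop]
    # competition rather than proof: parents and offspring fight for the
    # same slots, and only the best self-modifiers survive
    pop = sorted(pop + offspring, key=lambda c: fitness(c[0]), reverse=True)[:POP_SIZE]

best_value, best_step = pop[0]
```

No guarantee of improvement is proved here; it just tends to happen because worse self-modifiers keep losing the competition for population slots.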

I dodge the issue of how to define intelligence somewhat: I am simply trying to develop a system that can adapt and mould itself to a user's desires and feedback, because this is what I think people want from an intelligent system.

On the state of Artificial Intelligence Part the second | 9 comments (9 topical, 0 hidden)
Bah by whazat (3.00 / 0) #1 Sun Apr 18, 2004 at 12:36:54 PM EST
Mucked up the link.

Also forgot the most forgotten group of AI researchers at the moment: neuroscientists. Once we have a concrete understanding of how neurons and glial cells do that thing they do, we will have a way of ascertaining whether our ideas about intelligence have any meaning at all.

The revolution will not be realised

Re: meaning by EvergrowingPulsatingBrain (3.00 / 0) #2 Sun Apr 18, 2004 at 01:12:50 PM EST
I too hope the debate revives; I missed the first one.

Picking up on meaning, I don't think it's so hard actually. Within a framework of predictive learning, meaning might just be the outcome(s) of some state and action. As a state becomes more flexible, that is, as its outcome is less defined, the cloud of meanings becomes fuzzier too. Affective meaning could just be whatever affect arises from the situational meaning.
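That predictive notion of meaning can be sketched in a few lines (the states, actions and outcomes here are my own made-up illustrations, not from Cohen's paper):

```python
from collections import Counter, defaultdict

# toy sketch of "meaning as predicted outcome": the meaning of a
# (state, action) pair is just the distribution of outcomes seen so far
experience = defaultdict(Counter)

def observe(state, action, outcome):
    experience[(state, action)][outcome] += 1

def meaning(state, action):
    # the "cloud of meanings": each observed outcome with its frequency
    outcomes = experience[(state, action)]
    total = sum(outcomes.values())
    return {o: n / total for o, n in outcomes.items()}

# a wall reliably blocks pushing; a sticky door is less well defined,
# so its meaning-cloud has more than one outcome in it
for _ in range(10):
    observe("at_wall", "push", "blocked")
for outcome in ["opens", "opens", "stuck"]:
    observe("at_door", "push", outcome)
```

The wall's meaning collapses to a single certain outcome, while the door's stays spread over two, matching the "less defined, fuzzier cloud" idea.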

There's a paper [pdf] by Paul Cohen, who is working on getting robots to learn their own representations/meanings of the world.

I'm interested in behavioral, bottom-up AI though, so I admit I'm not versed in the whole philosophical debate.

Well by whazat (3.00 / 0) #5 Mon Apr 19, 2004 at 10:50:31 PM EST
I have seen quite a few projects that try to alter the representation in some way. So I am not sure what saying that an AI has meaning does for you (especially if you use "meaning" in a technical sense, so that it loses meaning to the lay person).

The revolution will not be realised
[ Parent ]
meaning by EvergrowingPulsatingBrain (3.00 / 0) #6 Tue Apr 20, 2004 at 07:31:16 AM EST
"I have seen quite a few projects that try to alter the representation in some way"

Really? Can you name a couple of them?

As to meaning (if you're referring to the paper), he claims that terms like 'path' and 'obstacle' must be defined by the programmer, which is the hardest part of the work; the robot then basically does search. His idea is to get robots to define their own terms from their own experience, though that is just beginning.

The original article's discussion of AI missed the whole behaviour-based branch and its offshoots. You mention biological plausibility, so you must be familiar with it. It seems to have blossomed (it's hard to trace the lineage, but...) into modelling cognitive architectures and effects from human development. AI isn't dry, it's just gone into new channels.

[ Parent ]
Invented many times by whazat (3.00 / 0) #7 Tue Apr 20, 2004 at 10:27:48 AM EST
I searched CiteSeer for feature creation and got this.

It references work from 1983 and earlier on similar ideas, one author being Mitchell, who wrote the Machine Learning textbook for my course.

It's a fairly obvious thing in Machine Learning, as a common thing to do is the inverse, feature selection, which selects a subset of the features presented. I knew people had done it because one of my classmates did something similar, using a GP to construct new features from existing ones; one of the features learnt allowed a decision tree to distinguish circles from squares using only height and area (IIRC).
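A minimal sketch of that kind of constructed feature, assuming the area/height² ratio is the sort of thing the GP might have found (the ratio, the threshold and the data are my illustration, not the classmate's actual result):

```python
import math
import random

random.seed(1)

# labelled shapes described only by height and area, as in the
# circles-vs-squares example: for a shape of height h,
# circle area = pi * (h/2)**2, square area = h**2
shapes = []
for _ in range(50):
    h = random.uniform(1, 10)
    shapes.append(("circle", h, math.pi * (h / 2) ** 2))
    h = random.uniform(1, 10)
    shapes.append(("square", h, h * h))

# constructed feature: area / height**2 is ~0.785 for every circle and
# exactly 1.0 for every square, so a one-node decision tree with a
# threshold between the two separates them perfectly
def classify(height, area, threshold=0.9):
    return "circle" if area / height ** 2 < threshold else "square"

accuracy = sum(classify(h, a) == label for label, h, a in shapes) / len(shapes)
```

Neither raw feature separates the classes on its own; it's the constructed ratio that makes a single threshold sufficient.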

Re: behaviour-based computing, I think it is the best way of programming robots we have at the moment, but like Rodney Brooks I think something is missing.

The revolution will not be realised

[ Parent ]
The difference by EvergrowingPulsatingBrain (3.00 / 0) #8 Tue Apr 20, 2004 at 11:52:39 AM EST
with what at least that Othello stuff does is that the features are arbitrarily concocted (IIRC) and then selected for their usefulness; they were not raised to the fore by the machine's own experience. What Cohen is doing is segmenting common, representative robot experiences and turning prototype versions of them into concepts. I.e. it just creates them; it's only assumed that they'll be useful. The interest of it is that it transitions from sub-symbolic to symbolic by itself, and that the symbols are grounded. It's conceptualizing its world on its own, complete with some idea of 'meaning'; at least that's the goal.

".. like Rodney Brooks I think something is missing."

Has he said something like this? I thought he was a 'behavior is all' guy.

That Gödel Machine paper looks interesting; you say you're somewhere between behavior and deep self-reflection? Unless I'm misinterpreting it, that's kind of like Piaget's theory: that abstractions arise from reflection on behavior. I'll have to check it out, thanks.

[ Parent ]
Hmm by whazat (3.00 / 0) #9 Tue Apr 20, 2004 at 10:07:14 PM EST
Have a look here for Brooks' latest statements.

I'll have a closer look at things and get back to you on the meaning stuff.

The revolution will not be realised

[ Parent ]
Hmm. by eann (3.00 / 0) #3 Sun Apr 18, 2004 at 02:06:19 PM EST

I find your ideas intriguing, and would like to subscribe to your newsletter.

I wonder where I fit in. I deal with a lot of CI method people (GA, NN, IFN, etc.), but the research I'm currently doing is more about using CBR as a meta-CI "knowledge manager". Does that still make me a "not-AI" too-specific hack?

It's okay. My own interests are more in cognitive processes in interaction with information tools, with a side interest in informetric domain analysis, but the CBR is what pays the bills for the next few months, so I do it.

I'm not trolling here...just trying to earn my badge in the PC Police Force. —tps12
$email =~ s/0/o/;

Hmm by whazat (3.00 / 0) #4 Mon Apr 19, 2004 at 10:34:13 PM EST
Sorry about not getting back to you. I have decided to try and stop surfing the Web so much and get on with my project.

I would say that there is a pretty big gap between those who are trying to discover what goes on inside an intelligent mind and most AI researchers. I think the term narrow AI (borrowed from the AGI people) is a fair description of the latter. It is like creating fragments of an intelligence* without having the framework or glue to put them all together. If you think your case-based reasoning is sufficient glue, then you are a general AI researcher.

*They really are fragments of intelligence, for they must have some analogue in their creators' minds; otherwise there would be no way of knowing whether they worked as expected. However, they may exist only in the creator rather than in humans in general.

The revolution will not be realised

[ Parent ]