!!! On Artificial Intelligence:

~[following are some notes by [Murray] related to [Artificial Intelligence].]

"SMPA: the sense-model-plan-act framework. See section 3.6 for more details of how
the SMPA framework inuenced the manner in which robots were built over the 
following years, and how those robots in turn imposed restrictions on the ways in 
which intelligent control programs could be built for them." -- [Brooks|RodneyBrooks], p.2
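
To make the contrast concrete, here is a minimal sketch in Python of what an SMPA-style control loop looks like. This is my own illustration rather than anything from Brooks' paper, and the sensing, modelling, planning and acting functions below are hypothetical stubs:

{{{
# A minimal sketch of the classical sense-model-plan-act (SMPA) control loop
# that Brooks criticises. This is my own illustration, not Brooks' code; the
# sensing, modelling, planning and acting functions are hypothetical stubs.

def sense():
    """Read the robot's sensors (stub: pretend an obstacle is detected ahead)."""
    return {"obstacle_ahead": True}

def update_world_model(model, percepts):
    """Fold new percepts into an internal symbolic model of the world."""
    model.update(percepts)
    return model

def plan(model):
    """Deliberate over the internal model and produce a sequence of actions."""
    return ["turn_left", "forward"] if model.get("obstacle_ahead") else ["forward"]

def act(action):
    """Send a single action to the actuators (stub: just print it)."""
    print("executing:", action)

def smpa_step(model):
    """One pass through the sense -> model -> plan -> act pipeline."""
    model = update_world_model(model, sense())
    for action in plan(model):
        act(action)
    return model

if __name__ == "__main__":
    smpa_step({})
}}}

Brooks' criticism, as I read it, is that nearly all of the effort in this style goes into the internal model and the planner, whereas a situated, embodied robot lets the world itself stand in for much of that model ("the world is its own best model", below).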

From Brooks "Intelligence Without Reason"[5]:

    There are a number of key aspects characterizing this style of work.
    
* __Situatedness__: The robots are situated in the world — they do not deal with abstract descriptions, but with the here and now of the world directly influencing the behavior of the system.
* __Embodiment__: The robots have bodies and experience the world directly — their actions are part of a dynamic with the world and have immediate feedback on their own sensations.
* __Intelligence__: They are observed to be intelligent — but the source of intelligence is not limited to just the computational engine. It also comes from the situation in the world, the signal transformations within the sensors, and the physical coupling of the robot with the world.
* __Emergence__: The intelligence of the system emerges from the system's interactions with the world and from sometimes indirect interactions between its components — it is sometimes hard to point to one event or place within the system and say that is why some external action was manifested.

Brooks notes that the evolution of machine intelligence is somewhat similar to biological evolution, with "''punctuated equilibria''" as a norm, where "there have been long periods of incremental work within established guidelines, and occasionally a shift in orientation and assumptions causing a new subfield to branch off. The older work usually continues, sometimes remaining strong, and sometimes dying off gradually."

He expands upon these four concepts starting on page 14.

* The key idea from situatedness is: ''The world is its own best model.''
* The key idea from embodiment is: ''The world grounds regress.''
* The key idea from intelligence is: ''Intelligence is determined by the dynamics of interaction with the world.''
* The key idea from emergence is: ''Intelligence is in the eye of the observer.''

I might note that Brooks' criticisms of the field of [Knowledge Representation] reflect my own findings from the four years I spent doing doctoral research on KR at the Knowledge Media Institute.

%%blockquote
   It is my opinion, and also Smith's, that there is a fundamental problem still and one 
   can expect continued regress until the system has some form of embodiment. — Brooks 1991
%%

The lack of grounding of abstract representation is evident in the field's almost complete failure to even bother to definitively explicate the two terms in its title: "Knowledge" and "Representation". How can one rationally explore a field when one doesn't yet know what "knowledge" is, or when there is no epistemologically sound definition of the word "representation"? The greatest advances in that area belong to the likes of C.S. Peirce, John Dewey, Wilfrid Sellars, Richard Rorty and Robert Brandom, but this work seems (at this point in time) to remain disconnected from the concept of "embodiment" as explored in robotics.

I must agree with Brooks that embodiment is a necessary precondition for research into intelligence. Brooks' paper is from 1991; my doctoral programme began in 2002. I wish I'd read his paper prior to that. I met Doug Lenat in 2000 and over dinner in Austin even toyed with the idea of working for his company, Cycorp, the corporate home of the Cyc Ontology. The whole thing is a giant chess set, a massive undertaking that as of 2020 is still essentially doing what it did when I saw it for the first time in 1979; as Brooks says, it has simply followed the advances in computing technology without providing any real breakthroughs.

! Regarding Scale or Size

%%blockquote
"The limiting factor on the amount of portable computation is not weight of the computers directly, but the electrical power that is available to run them. Empirically we have observed that the amount of electrical power available is proportional to the weight of the robot." — Brooks [5], p. 18
%%

! On Fear of AI

Don't be like Elon Musk. You have little to fear from AI. Here's [Rodney Brooks], former director of the [MIT Computer Science and Artificial Intelligence Laboratory|https://en.wikipedia.org/wiki/MIT_Computer_Science_and_Artificial_Intelligence_Laboratory]: 
%%blockquote
For about thirty years we have known the full “wiring diagram” of the 302 neurons in the worm ''[C. elegans|https://en.wikipedia.org/wiki/Caenorhabditis_elegans]'', along with the 7,000 connections between them. This has been incredibly useful for [understanding how|https://www.scientificamerican.com/article/c-elegans-connectome/] behavior and neurons are linked. But it has been a thirty years study with hundreds of people involved, all trying to understand just 302 neurons. And according to the [OpenWorm|https://en.wikipedia.org/wiki/OpenWorm] project trying to simulate C. elegans bottom up, they are not yet half way there. To simulate a human brain with 100 billion neurons and a vast number of connections is quite a way off. So if you are going to rely on the Singularity to upload yourself to a brain simulation I would try to hold off on dying for another couple of centuries.

-- ''Rodney Brooks, "[The Seven Deadly Sins of Predicting the Future of AI|https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/]"''
%%


----

[{Tag ArtificialIntelligence MurrayAltheim}]