Cognitive architecture

A cognitive architecture refers both to a theory about the structure of the human mind and to a computational instantiation of such a theory used in artificial intelligence (AI) and computational cognitive science.[1] One of the main goals of a cognitive architecture is to summarize the various results of cognitive psychology in a comprehensive computer model. The results, however, must be formalized to the extent that they can serve as the basis of a computer program. The formalized models can be used to further refine a comprehensive theory of cognition and, more immediately, as a commercially usable model. Successful cognitive architectures include ACT-R (Adaptive Control of Thought – Rational) and Soar.

The Institute for Creative Technologies defines cognitive architecture as: "hypothesis about the fixed structures that provide a mind, whether in natural or artificial systems, and how they work together – in conjunction with knowledge and skills embodied within the architecture – to yield intelligent behavior in a diversity of complex environments."[2]

History

Herbert A. Simon, one of the founders of the field of artificial intelligence, stated that the 1960 thesis by his student Ed Feigenbaum, EPAM, provided a possible "architecture for cognition"[3] because it included some commitments about how more than one fundamental aspect of the human mind worked (in EPAM's case, human memory and human learning).

John R. Anderson began research on human memory in the early 1970s, and his 1973 thesis with Gordon H. Bower provided a theory of human associative memory.[4] He incorporated further aspects of his research on long-term memory and thinking processes into this work and eventually designed a cognitive architecture he called ACT. He and his students were influenced by Allen Newell's use of the term "cognitive architecture"; Anderson's lab used it to refer to the ACT theory as embodied in a collection of papers and designs (there was no complete implementation of ACT at the time).

In 1983 John R. Anderson published the seminal work in this area, entitled The Architecture of Cognition.[5] One can distinguish between the theory of cognition and its implementation. The theory outlined the structure of the various parts of the mind and made commitments to the use of rules, associative networks, and other aspects; the cognitive architecture implements the theory on computers. The software used to implement the cognitive architectures was itself also called a "cognitive architecture". Thus, a cognitive architecture can also refer to a blueprint for intelligent agents: it proposes (artificial) computational processes that act like certain cognitive systems (most often, like a person) or that act intelligently under some definition. Cognitive architectures form a subset of general agent architectures. The term 'architecture' implies an approach that attempts to model not only the behavior, but also the structural properties, of the modelled system.

Distinctions

Cognitive architectures can be symbolic, connectionist, or hybrid.[6][7][8] Some cognitive architectures or models are based on a set of generic rules, as in, e.g., the Information Processing Language (e.g., Soar, based on the unified theory of cognition; similarly ACT-R). Many of these architectures rest on the analogy that the mind is like a computer. In contrast, subsymbolic processing specifies no such rules a priori and relies on emergent properties of processing units (e.g., nodes). Hybrid architectures combine both types of processing (CLARION, for example). A further distinction is whether the architecture is centralized, with a neural correlate of a processor at its core, or decentralized (distributed). The decentralized flavor became popular in the mid-1980s under the names parallel distributed processing and connectionism, a prime example being neural networks. A further design issue is the choice between a holistic and an atomistic, or (more concretely) modular, structure. By analogy, this extends to issues of knowledge representation.[9]
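The symbolic, rule-based style of processing can be illustrated with a minimal recognize–act cycle: rules match the contents of a working memory and fire actions that change it. This is only an illustrative sketch of the general idea; the fact format and the rule are invented here and do not reproduce the actual representations of Soar or ACT-R.

```python
# Working memory: a set of facts represented as tuples (an assumed,
# simplified encoding, not any specific architecture's format).
working_memory = {("goal", "add", 2, 3)}

def match_add(wm):
    """Condition side of a single production: find an 'add' goal."""
    for fact in wm:
        if fact[0] == "goal" and fact[1] == "add":
            return fact
    return None

def cycle(wm):
    """One recognize-act cycle: match a rule, then fire its action."""
    fact = match_add(wm)
    if fact is None:
        return False  # no rule matched; halt
    _, _, a, b = fact
    wm.discard(fact)          # action: consume the goal...
    wm.add(("result", a + b))  # ...and assert its result
    return True

while cycle(working_memory):
    pass

print(working_memory)  # -> {('result', 5)}
```

Real production systems generalize this loop with many rules, pattern variables, and a conflict-resolution strategy for choosing which matched rule fires.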

In traditional AI, intelligence is often programmed from above: the programmer is the creator who makes something and imbues it with intelligence, though many traditional AI systems were also designed to learn (e.g., improving their game-playing or problem-solving competence). Biologically inspired computing, on the other hand, sometimes takes a more bottom-up, decentralised approach; bio-inspired techniques often involve specifying a set of simple generic rules or a set of simple nodes, from whose interaction the overall behavior emerges. The hope is to build up complexity until the end result is something markedly complex (see complex systems). However, it is also arguable that systems designed top-down on the basis of observations of what humans and other animals can do, rather than on observations of brain mechanisms, are biologically inspired too, though in a different way.
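The bottom-up idea above can be sketched with a toy decentralised system: each node follows one simple local rule (adopt the majority value among itself and its two ring neighbours), and a global pattern that no individual rule mentions, the smoothing away of isolated dissenters into stable blocks, emerges from their interaction. The topology and update rule here are illustrative assumptions, not drawn from any particular bio-inspired system.

```python
def step(states):
    """Synchronously update every node by local majority vote
    over itself and its two neighbours on a ring."""
    n = len(states)
    out = []
    for i, s in enumerate(states):
        votes = [states[(i - 1) % n], s, states[(i + 1) % n]]
        out.append(1 if sum(votes) >= 2 else 0)
    return out

# Initial configuration with two isolated minority cells.
states = [1, 1, 1, 0, 1, 0, 0, 0, 1, 1]
for _ in range(5):
    states = step(states)

print(states)  # -> [1, 1, 1, 1, 0, 0, 0, 0, 1, 1]
```

No node knows about "blocks", yet the population settles into a stable block structure — a small instance of the emergent behavior the bio-inspired approach relies on.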

Notable examples

A comprehensive review of implemented cognitive architectures was undertaken in 2010 by Samsonovich et al.[10] and is available as an online repository.[11] Some well-known cognitive architectures, in alphabetical order:

See also

References

  1. Lieto, Antonio; Bhatt, Mehul; Oltramari, Alessandro; Vernon, David (May 2018). "The role of cognitive architectures in general artificial intelligence" (PDF). Cognitive Systems Research. 48: 1–3. doi:10.1016/j.cogsys.2017.08.003. hdl:2318/1665249.
  2. Refer to the ICT website: http://cogarch.ict.usc.edu/
  3. https://saltworks.stanford.edu/catalog/druid:st035tk1755
  4. "This Week's Citation Classic: Anderson J R & Bower G H. Human associative memory. Washington," in: CC. Nr. 52 Dec 24-31, 1979.
  5. John R. Anderson. The Architecture of Cognition, 1983/2013.
  6. Vernon, David; Metta, Giorgio; Sandini, Giulio (April 2007). "A Survey of Artificial Cognitive Systems: Implications for the Autonomous Development of Mental Capabilities in Computational Agents". IEEE Transactions on Evolutionary Computation. 11 (2): 151–180. doi:10.1109/TEVC.2006.890274.
  7. Lieto, Antonio; Chella, Antonio; Frixione, Marcello (January 2017). "Conceptual Spaces for Cognitive Architectures: A lingua franca for different levels of representation". Biologically Inspired Cognitive Architectures. 19: 1–9. arXiv:1701.00464. Bibcode:2017arXiv170100464L. doi:10.1016/j.bica.2016.10.005.
  8. Lieto, Antonio; Lebiere, Christian; Oltramari, Alessandro (May 2018). "The knowledge level in cognitive architectures: Current limitations and possible developments" (PDF). Cognitive Systems Research. 48: 39–55. doi:10.1016/j.cogsys.2017.05.001. hdl:2318/1665207.
  9. Lieto, Antonio; Lebiere, Christian; Oltramari, Alessandro (May 2018). "The knowledge level in cognitive architectures: Current limitations and possible developments" (PDF). Cognitive Systems Research. 48: 39–55. doi:10.1016/j.cogsys.2017.05.001. hdl:2318/1665207.
  10. Samsonovich, Alexei V. (2010). "Toward a Unified Catalog of Implemented Cognitive Architectures". BICA. 221: 195–244.
  11. "Comparative Repository of Cognitive Architectures".
  12. Douglas Whitney Gage (2004). Mobile robots XVII: 26–28 October 2004, Philadelphia, Pennsylvania, USA. Society of Photo-optical Instrumentation Engineers. page 35.
  13. Albus, James S. (August 1979). "Mechanisms of planning and problem solving in the brain". Mathematical Biosciences. 45 (3–4): 247–293. doi:10.1016/0025-5564(79)90063-4.
  14. Anwar, Ashraf; Franklin, Stan (December 2003). "Sparse distributed memory for 'conscious' software agents". Cognitive Systems Research. 4 (4): 339–354. doi:10.1016/S1389-0417(03)00015-9.
  15. Lieto, Antonio; Radicioni, Daniele P.; Rho, Valentina (25 June 2016). "Dual PECCS: a cognitive system for conceptual representation and categorization" (PDF). Journal of Experimental & Theoretical Artificial Intelligence. 29 (2): 433–452. doi:10.1080/0952813X.2016.1198934. hdl:2318/1603656.
  16. Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Graves, Alex; Antonoglou, Ioannis; Wierstra, Daan; Riedmiller, Martin (2013). "Playing Atari with Deep Reinforcement Learning". arXiv:1312.5602 [cs.LG].
  17. Graves, Alex; Wayne, Greg; Danihelka, Ivo (2014). "Neural Turing Machines". arXiv:1410.5401 [cs.NE].
  18. Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis (25 February 2015). "Human-level control through deep reinforcement learning". Nature. 518 (7540): 529–533. doi:10.1038/nature14236. PMID 25719670.
  19. "DeepMind's Nature Paper and Earlier Related Work".
  20. Schmidhuber, Jürgen (2015). "Deep learning in neural networks: An overview". Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003. PMID 25462637.
  21. Taylor, J.H.; Sayda, A.F. (2005). "An Intelligent Architecture for Integrated Control and Asset Management for Industrial Processes". Proceedings of the 2005 IEEE International Symposium on Intelligent Control / Mediterranean Conference on Control and Automation. pp. 1397–1404. doi:10.1109/.2005.1467219. ISBN 0-7803-8937-9.
  22. A Framework for comparing agent architectures, Aaron Sloman and Matthias Scheutz, in Proceedings of the UK Workshop on Computational Intelligence, Birmingham, UK, September 2002.
  23. Weston, Jason; Chopra, Sumit; Bordes, Antoine (2014). "Memory Networks". arXiv:1410.3916.
  24. Cox, Michael T. (23 December 2017). "A Model of Planning, Action, and Interpretation with Goal Reasoning" (PDF). cogsys.
  25. "Cognitive Architecture".
  26. Eliasmith, C.; Stewart, T. C.; Choo, X.; Bekolay, T.; DeWolf, T.; Tang, Y.; Rasmussen, D. (29 November 2012). "A Large-Scale Model of the Functioning Brain". Science. 338 (6111): 1202–1205. doi:10.1126/science.1225266. PMID 23197532.
  27. Denning, Peter J. (1989). "Sparse Distributed Memory". URL: https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19920002425.pdf
  28. Kanerva, Pentti (1988). Sparse Distributed Memory. The MIT Press. ISBN 978-0-262-11132-4.
  29. Mendes, Mateus; Crisostomo, Manuel; Coimbra, A. Paulo (2008). "Robot navigation using a sparse distributed memory". 2008 IEEE International Conference on Robotics and Automation. pp. 53–58. doi:10.1109/ROBOT.2008.4543186. ISBN 978-1-4244-1646-2.
  30. Jockel, S.; Lindner, F.; Jianwei Zhang (2009). "Sparse distributed memory for experience-based robot manipulation". 2008 IEEE International Conference on Robotics and Biomimetics. pp. 1298–1303. doi:10.1109/ROBIO.2009.4913187. ISBN 978-1-4244-2678-2.
  31. Rinkus, Gerard J. (15 December 2014). "Sparsey™: event recognition via deep hierarchical sparse distributed codes". Frontiers in Computational Neuroscience. 8: 160. doi:10.3389/fncom.2014.00160. PMC 4266026. PMID 25566046.
  32. Franklin, Stan; Snaider, Javier (16 May 2012). "Integer Sparse Distributed Memory". Twenty-Fifth International FLAIRS Conference.
  33. Snaider, Javier; Franklin, Stan (2014). "Vector LIDA". Procedia Computer Science. 41: 188–203. doi:10.1016/j.procs.2014.11.103.
  34. Rolls, Edmund T. (2012). "Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet". Frontiers in Computational Neuroscience. 6: 35. doi:10.3389/fncom.2012.00035. PMC 3378046. PMID 22723777.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.