Artificial stupidity
Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks.[1] However, within the field of computer science, artificial stupidity is also used to refer to a technique of "dumbing down" computer programs in order to deliberately introduce errors in their responses.
History
Alan Turing, in his 1950 paper Computing Machinery and Intelligence, proposed a test for intelligence which has since become known as the Turing test.[2] While there are a number of different versions, the original test, described by Turing as being based on the "imitation game", involved a "machine intelligence" (a computer running an AI program), a female participant, and an interrogator. Both the AI and the female participant were to claim that they were female, and the interrogator's task was to work out which was which by examining the participants' responses to typed questions.[2] It is not clear whether Turing intended the interrogator to know that one of the participants was a computer; however, while discussing possible objections to his argument, Turing raised the concern that "machines cannot make mistakes".[2]
It is claimed that the interrogator could distinguish the machine from the man simply by setting them a number of problems in arithmetic. The machine would be unmasked because of its deadly accuracy.
— Turing, 1950, page 448
As Turing then noted, the reply to this is a simple one: the machine should not attempt to "give the right answers to the arithmetic problems".[2] Instead, deliberate errors should be introduced to the computer's responses.
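A minimal sketch of Turing's suggestion (a hypothetical illustration with invented function names, not code from the paper) might pause before answering and occasionally perturb the result of an arithmetic question:

```python
import random
import time


def answer_addition(a, b, error_rate=0.1, max_delay=3.0):
    """Answer "what is a + b?" the way a human impostor might.

    Hypothetical illustration: a short pause imitates the time a person
    needs to calculate, and with probability `error_rate` the answer is
    perturbed by a small, plausible slip (e.g. a carrying error)."""
    time.sleep(random.uniform(0.5, max_delay))   # humans do not answer instantly
    result = a + b
    if random.random() < error_rate:
        result += random.choice([-10, -1, 1, 10])
    return result


# The addition Turing poses in his example dialogue.
print(answer_addition(34957, 70764))
```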
Applications
Within computer science, there are at least two major applications for artificial stupidity: the generation of deliberate errors in chatbots attempting to pass the Turing test or to otherwise fool a participant into believing that they are human; and the deliberate limitation of computer AIs in video games in order to control the game's difficulty.
Chatbots
The first Loebner prize competition was run in 1991. As reported in The Economist, the winning entry incorporated deliberate errors – described by The Economist as "artificial stupidity" – to fool the judges into believing that it was human.[3] This technique has remained a part of the subsequent Loebner prize competitions, and reflects the issue first raised by Turing.
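The Economist's description suggests a very simple mechanism. A hypothetical sketch of such deliberate typing errors (invented code, not the actual 1991 entry) might randomly drop, double, or transpose characters in the chatbot's output:

```python
import random


def add_typing_errors(text, error_rate=0.05):
    """Return `text` with occasional human-like typing slips.

    Hypothetical sketch: a small fraction of characters are dropped,
    doubled, or swapped with their neighbour so the output looks typed
    rather than printed."""
    chars = list(text)
    out = []
    i = 0
    while i < len(chars):
        if chars[i].isalpha() and random.random() < error_rate:
            slip = random.choice(["drop", "double", "swap"])
            if slip == "drop":
                pass                                  # omit the character
            elif slip == "double":
                out.extend([chars[i], chars[i]])      # repeat it
            elif slip == "swap" and i + 1 < len(chars):
                out.extend([chars[i + 1], chars[i]])  # transpose with the next one
                i += 1
            else:
                out.append(chars[i])
            i += 1
            continue
        out.append(chars[i])
        i += 1
    return "".join(out)


print(add_typing_errors("Whimsical conversation is harder than arithmetic."))
```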
Game design
Lars Lidén argues that good game design involves finding a balance between the computer's "intelligence" and the player's ability to win. By finely tuning the level of "artificial stupidity", it is possible to create computer-controlled players that allow the human player to win, but do so "without looking unintelligent".[4]
Algorithms
There are many ways to deliberately introduce poor decision-making into search algorithms. Take the minimax algorithm, for example: it is an adversarial search algorithm popularly used in games in which two or more players compete against each other. Its purpose is to choose the move that maximizes one's own chance of winning while avoiding moves that maximize the opponent's chance of winning. An algorithm like this is extremely beneficial to the computer, as computers are able to search thousands of moves ahead. To "dumb down" the algorithm and allow for different difficulty levels, its heuristic (evaluation) function has to be tweaked. Normally, very large scores are given to winning states; reducing these payoffs lowers the chance that the algorithm will steer play towards a winning state.
Creating heuristic functions that allow for stupidity is more difficult than one might think. If the heuristic always finds the best move, the computer opponent plays near-perfectly, making the game frustrating and unenjoyable; but if the heuristic is too poor, the game is also unenjoyable. A balance of good and bad moves in an adversarial game therefore relies on a well-implemented heuristic function.
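As a minimal sketch of this idea (a hypothetical illustration, not code from any of the cited sources), the following Python fragment implements a depth-limited minimax for tic-tac-toe whose evaluation function combines a large payoff for winning states with a small positional bonus; scaling the winning payoff down via an assumed skill parameter makes the search start to prefer merely "nice-looking" positions over actual wins.

```python
# Hypothetical sketch: depth-limited minimax with a tunable "skill" parameter.
# Lowering `skill` shrinks the payoff for winning states relative to the
# positional bonus, "dumbing down" the computer without touching the search.

WIN_SCORE = 100      # payoff normally given to a winning state
CENTER_BONUS = 10    # small positional bonus for holding the centre square

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
         (0, 4, 8), (2, 4, 6)]               # diagonals


def winner(board):
    """Return 'X', 'O', or None for a 9-character board string."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None


def evaluate(board, skill):
    """Score a position from X's point of view."""
    score = 0
    if winner(board) == 'X':
        score += WIN_SCORE * skill
    elif winner(board) == 'O':
        score -= WIN_SCORE * skill
    if board[4] == 'X':
        score += CENTER_BONUS
    elif board[4] == 'O':
        score -= CENTER_BONUS
    return score


def minimax(board, player, depth, skill):
    """Return (score, move); 'X' maximizes, 'O' minimizes."""
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if winner(board) or depth == 0 or not moves:
        return evaluate(board, skill), None
    best_score, best_move = None, None
    for move in moves:
        child = board[:move] + player + board[move + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X',
                           depth - 1, skill)
        if best_score is None or \
           (player == 'X' and score > best_score) or \
           (player == 'O' and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move


if __name__ == '__main__':
    # X can win immediately at square 2, or merely grab the centre at square 4.
    board = "XX   O O "
    print(minimax(board, 'X', depth=2, skill=1.0))   # takes the win (move 2)
    print(minimax(board, 'X', depth=2, skill=0.05))  # prefers the centre (move 4)
```

With skill = 1.0 the winning payoff dominates and the computer takes the win; with skill = 0.05 the win is worth less than the centre bonus, so the search picks the plausible-looking but non-winning move, illustrating the kind of tuning described above.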
Other applications
According to its definition, a sufficiently developed artificial stupidity program would be able to deliberately produce worst-case behaviour for a given situation. This would enable computer programmers and analysts to find flaws quickly and so minimize the errors remaining in the code. It is, however, mostly expected to be used within the development and debugging stages of computer software.
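As a hedged sketch of how this might look in practice (hypothetical helper names, not drawn from the cited sources), a test harness could wrap a function so that it sometimes misbehaves on purpose, forcing the surrounding code to reveal how it copes with bad results:

```python
import random


def make_stupid(fn, failure_rate=0.3):
    """Wrap `fn` so that it sometimes behaves badly on purpose.

    Hypothetical fault-injection helper: during development and
    debugging, deliberately wrong or degenerate results help reveal
    callers that silently assume the happy path."""
    def wrapper(*args, **kwargs):
        roll = random.random()
        if roll < failure_rate / 2:
            raise TimeoutError("injected fault: simulated failed call")
        if roll < failure_rate:
            return None              # degenerate result the caller must handle
        return fn(*args, **kwargs)
    return wrapper


# Exercise code that consumes an average while faults are being injected.
average = make_stupid(lambda xs: sum(xs) / len(xs))
for _ in range(10):
    try:
        print(average([1, 2, 3]))
    except TimeoutError as exc:
        print("caller must cope with:", exc)
```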
Arguments on artificial stupidity
The Economist states that if Alan Turing's prediction is achieved, "it will be a dreadful anticlimax". Attempting to create a machine that merely mimics the behaviour and level of intelligence of a human being is largely pointless: computers were invented to assist humans in performing tasks that would be too tedious or time-consuming to perform by hand.[3]
As mentioned above regarding the Loebner Prize competition, the winning program was able to trick judges by introducing deliberate typing errors. The Economist argues that nobody would want a computer that could not type properly.[3]
T. Durham, author of the article "On Artificial Stupidity", seems to view computers as naturally possessing intelligence: it is only bad programming that makes a computer appear unintelligent. Durham states the opposite of what people would normally believe: "there is no such thing as AI, only artificial stupidity, which is what happens when computers are not given the knowledge they need."[5]
The term has appeared in connection with other subject areas, mostly related to computing technology but also with respect to human behaviour. Sadie Plant suggests that Ada Lovelace, the nineteenth-century prophetess of computing, employed something like artificial stupidity to criticize those who underestimated the future potential of calculating machines.[6][7] Hito Steyerl has applied the term to the stupidity of not recognizing the power and dangers of algorithms because of their invisibility.[8] On a different note, artificial stupidity, for Avital Ronell, describes the shocking misuse of intelligence measurement during the eugenics era in the USA: people were artificially deemed unintelligent.[9][10]

Artist and researcher Micheál O'Connell employed the term artificial stupidity in his doctoral thesis[11] with reference both to questions of human intelligence and to technological intelligence. Artificial stupidity can refer to the usually dubious human ability to "act stupid" or "dumb down", but he suggests that it can also be a means of creative unearthing. He cites John Roberts' attention to contemporary artists and their use of a "thinking stupidity" as a "rejection of the dominant discourses".[12][13] One suggestion is that a reason for the resilience of the "art system" is its cunning inverse pretentiousness: on its own, art seems stupid from the point of view of utilitarian interests, commerce, entertainment, academia and political agendas. As well as discussing art and creativity, O'Connell advocates a reassessment of the significance of human intelligence against the often palpable stupidity of the technological sphere, including AI understood in the broadest sense.
Artificial stupidity as a limitation of artificial intelligence
Artificial stupidity is not just the deliberate introduction of errors into a computer's responses; it can also be seen as a reflection of the limitations of computer artificial intelligence. Dr. Jay Liebowitz argues that "if intelligence and stupidity naturally exist, and if AI is said to exist, then is there something that might be called 'artificial stupidity'?"[14]
Liebowitz identified the following limitations:
- Ability to possess and use common sense
- Development of deep reasoning systems
- Ability to vary an expert system's explanation capability
- Ability to get expert systems to learn
- Ability to have distributed expert systems
- Ability to easily acquire and update knowledge
— Liebowitz, 1989, Page 109
References
- O’Connell, M., 2017. Art as ‘artificial stupidity’. [online] Falmer, East Sussex: University of Sussex. Available at: <http://sro.sussex.ac.uk/id/eprint/67604> [Accessed 3 Aug. 2020]. p. 44
- Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind, LIX (236): 433–460, doi:10.1093/mind/LIX.236.433, ISSN 0026-4423
- "Artificial Stupidity", The Economist, 324 (7770): 14, 1992-09-01,
the first event was held in 1991
- Lidén, Lars (2004), S. Rabin (ed.), "Artificial Stupidity: The art of making intentional mistakes", AI Game Programming Wisdom 2, Charles River Media, Inc., pp. 41–48
- Durham, T. (21 March 1985), "On Artificial Stupidity", Computing, The Magazine, pp. 4–5
- Plant, S., 1998. Zeros and ones: digital women and the new technoculture. London: Fourth Estate. p. 89
- O’Connell, M., 2017. Art as ‘artificial stupidity’. [online] Falmer, East Sussex: University of Sussex. Available at: <http://sro.sussex.ac.uk/id/eprint/67604> [Accessed 8 Aug. 2017]. p. 170
- Steyerl, Hito; Crawford, Kate (23 January 2017). "Data Streams". The New Inquiry. Retrieved 8 August 2017.
- Ronell, A., 2002. Stupidity. Urbana, IL: University of Illinois Press. pp. 59–60
- O’Connell, M., 2017. Art as ‘artificial stupidity’. [online] Falmer, East Sussex: University of Sussex. Available at: <http://sro.sussex.ac.uk/id/eprint/67604> [Accessed 8 Aug. 2017]. p. 168
- O’Connell, M., 2017. Art as ‘artificial stupidity’. [online] Falmer, East Sussex: University of Sussex. Available at: <http://sro.sussex.ac.uk/id/eprint/67604> [Accessed 8 Aug. 2017].
- Roberts, J., 1996b. Mad For It! by John Roberts. [online] Everything Magazine. Available at: <http://bak.spc.org/everything/e/hard/text/roberts1.html> [Accessed 14 Jun. 2016].
- O’Connell, M., 2017. Art as ‘artificial stupidity’. [online] Falmer, East Sussex: University of Sussex. Available at: <http://sro.sussex.ac.uk/id/eprint/67604> [Accessed 8 Aug. 2017]. p. 174
- Liebowitz, Jay (July 1989). "If There is Artificial Intelligence, Is There Such Thing As Artificial Stupidity?". SIGART Newsletter. 109.
Further reading
- http://www.c2.com/cgi/wiki?ArtificialStupidity Describes artificial stupidity in a humorous context
- TEDx: "The Turing Test, Artificial Intelligence and the Human Stupidity"