Winograd Schema Challenge

The Winograd Schema Challenge (WSC) is a test of machine intelligence proposed by Hector Levesque, a computer scientist at the University of Toronto. Designed to be an improvement on the Turing test, it is a multiple-choice test that employs questions of a very specific structure: they are instances of what are called Winograd Schemas, named after Terry Winograd, professor of computer science at Stanford University.[1]

On the surface, Winograd Schema questions simply require the resolution of anaphora: the machine must identify the antecedent of an ambiguous pronoun in a statement. This makes it a task of natural language processing, but Levesque argues that for Winograd Schemas, the task requires the use of knowledge and commonsense reasoning.[2]

Nuance Communications announced in July 2014 that it would sponsor an annual WSC competition, with a prize of $25,000 for the best system that could match human performance.[3] However, the prize is no longer offered.

Background

The Winograd Schema Challenge was proposed in the spirit of the Turing test. Proposed by Alan Turing in 1950, the Turing test plays a central role in the philosophy of artificial intelligence. Turing proposed that, instead of debating whether a machine can think, the science of AI should be concerned with demonstrating intelligent behavior, which can be tested. But the exact nature of the test Turing proposed has come under scrutiny, especially since a chatbot named Eugene Goostman was claimed to have passed it in 2014. One of the major concerns with the Turing test is that a machine could pass it through brute force or trickery rather than genuine intelligence.[4]

The Winograd Schema Challenge was proposed in part to address the weaknesses revealed by the kinds of programs that performed well on the Turing test.[5]

Turing's original proposal was what he called the imitation game, which involves free-flowing, unrestricted conversations in English between human judges and computer programs over a text-only channel (such as teletype). In general, the machine passes the test if interrogators are not able to tell the difference between it and a human in a five-minute conversation.[4]

Eugene Goostman

On June 7, 2014, a computer program named Eugene Goostman was declared to be the first AI to have passed the Turing test in a competition held by the University of Reading in England. In the competition, Eugene was able to convince 33% of the judges that they were talking with a 13-year-old Ukrainian boy.[6] The supposed victory of a thinking machine sparked controversy over the Turing test. Critics claimed that Eugene passed the test simply by fooling the judges and taking advantage of its purported identity. For example, it could easily evade key questions by joking around and changing the subject, and the judges would forgive its mistakes because Eugene identified as a teenager who spoke English as a second language.[7]

Weaknesses of the Turing test

The performance of Eugene Goostman exhibited some of the Turing test's problems. Levesque identifies several major issues,[2] summarized as follows:[8]

  • Deception: The machine is forced to construct a false identity, which is not a component of intelligence.
  • Conversation: Much of the interaction may qualify as "legitimate conversation"—jokes, clever asides, points of order—without requiring intelligent reasoning.
  • Evaluation: Humans make mistakes, and judges would often disagree on the results.

Winograd Schemas

The key factor in the WSC is the special format of its questions, which are derived from Winograd Schemas. Questions of this form may be tailored to require knowledge and commonsense reasoning in a variety of domains. They must also be carefully written not to betray their answers by selectional restrictions or statistical information about the words in the sentence.

Origin

The first cited example of a Winograd Schema (and the source of their name) is due to Terry Winograd:[9]

The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.

The choices of "feared" and "advocated" turn the schema into its two instances:

The city councilmen refused the demonstrators a permit because they feared violence.

The city councilmen refused the demonstrators a permit because they advocated violence.

The question is whether the pronoun "they" refers to the city councilmen or the demonstrators, and switching between the two instances of the schema changes the answer. The answer is immediate for a human reader but proves difficult to emulate in machines. Levesque[2] argues that knowledge plays a central role in these problems: the answer to this schema has to do with our understanding of the typical relationships between and behavior of councilmen and demonstrators.

Since the original proposition of the Winograd Schema Challenge, Ernest Davis, a professor at New York University, has compiled a list of over 140 Winograd Schemas from various sources as examples of the kinds of questions that should appear on the Winograd Schema Challenge.[10]

Formal description

A Winograd Schema Challenge question consists of three parts:

  1. A sentence or brief discourse that contains the following:
    • Two noun phrases of the same semantic class (male, female, inanimate, or group of objects or people),
    • An ambiguous pronoun that may refer to either of the above noun phrases, and
    • A special word and alternate word, such that if the special word is replaced with the alternate word, the natural resolution of the pronoun changes.
  2. A question asking the identity of the ambiguous pronoun, and
  3. Two answer choices corresponding to the noun phrases in question.

A machine will be given the problem in a standardized form which includes the answer choices, thus making it a binary decision problem.
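The three-part structure above can be made concrete in code. The following is a minimal sketch (the class and field names are hypothetical, not part of any official WSC format) that encodes Winograd's original example as a template with a special/alternate word slot and generates the two instances, each a binary-choice problem:

```python
from dataclasses import dataclass


@dataclass
class WinogradSchema:
    """One schema: a template sentence with a [special/alternate] word slot."""
    template: str    # contains "{word}" where the special/alternate word goes
    special: str     # the word whose substitution flips the pronoun's referent
    alternate: str
    question: str
    answers: tuple   # the two candidate noun phrases (same semantic class)
    correct: dict    # maps each word choice to the correct answer

    def instances(self):
        """Yield the two (sentence, correct answer) pairs the schema generates."""
        for word in (self.special, self.alternate):
            yield self.template.format(word=word), self.correct[word]


# Winograd's original example, encoded as a schema:
schema = WinogradSchema(
    template="The city councilmen refused the demonstrators a permit "
             "because they {word} violence.",
    special="feared",
    alternate="advocated",
    question="Who [feared/advocated] violence?",
    answers=("the city councilmen", "the demonstrators"),
    correct={"feared": "the city councilmen",
             "advocated": "the demonstrators"},
)

for sentence, answer in schema.instances():
    print(sentence, "->", answer)
```

Because the two answer choices are supplied with each instance, a system's task reduces to picking one of two noun phrases, which is what makes scoring fully automatic.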

Advantages

The Winograd Schema Challenge has the following purported advantages:

  • Knowledge and commonsense reasoning are required to solve them.
  • Winograd Schemas of varying difficulty may be designed, involving anything from simple cause-and-effect relationships to complex narratives of events.
  • They may be constructed to test reasoning ability in specific domains (e.g., social/psychological or spatial reasoning).
  • There is no need for human judges.[5]

Pitfalls

One difficulty with the Winograd Schema Challenge is the development of the questions. They need to be carefully tailored to ensure that they require commonsense reasoning to solve. For example, Levesque[5] gives the following example of a so-called Winograd Schema that is "too easy":

The women stopped taking pills because they were [pregnant/carcinogenic]. Which individuals were [pregnant/carcinogenic]?

The answer to this question can be determined on the basis of selectional restrictions: in any situation, pills do not get pregnant, women do; women cannot be carcinogenic, but pills can. Thus the answer can be derived without reasoning or any understanding of the sentences' meaning—all that is necessary is data on the selectional restrictions of "pregnant" and "carcinogenic".

Activity

In 2016 and 2018, Nuance Communications sponsored a competition, offering a grand prize of $25,000 for the top scorer above 90% (for comparison, humans correctly answer 92–96% of WSC questions[11]). However, the 2018 competition was cancelled[12] and the prize is no longer offered.[13]

The Twelfth International Symposium on the Logical Formalizations of Commonsense Reasoning was held on March 23–25, 2015 at the AAAI Spring Symposium Series at Stanford University, with a special focus on the Winograd Schema Challenge. The organizing committee included Leora Morgenstern (Leidos), Theodore Patkos (The Foundation for Research & Technology Hellas), and Robert Sloan (University of Illinois at Chicago).[14]

The 2016 Winograd Schema Challenge was run on July 11, 2016 at IJCAI-16. There were four contestants. The first round of the contest was to solve PDPs—pronoun disambiguation problems, adapted from literary sources rather than constructed as pairs of sentences.[15] The highest score achieved was 58% correct, by Quan Liu et al., of the University of Science and Technology of China.[16] Hence, by the rules of that challenge, no prizes were awarded, and the challenge did not proceed to the second round. The organizing committee in 2016 was Leora Morgenstern, Ernest Davis, and Charles Ortiz.

70% accuracy on 70 manually selected problems from the 273[11] in the original Winograd Schema dataset was achieved in 2017 by a neural association model designed for commonsense knowledge acquisition.[17] In June 2018, a score of 63.7% accuracy was achieved on the full dataset using an ensemble of recurrent neural network language models,[18] marking the first use of deep neural networks that learn from independent corpora to acquire commonsense knowledge. In 2019, a score of 90.1% was achieved on the original Winograd Schema dataset by fine-tuning the BERT language model with appropriate WSC-like training data, thereby sidestepping the need to learn commonsense reasoning.[11] The general language model GPT-3 achieved a score of 88.3% without task-specific fine-tuning in 2020.[19] A more challenging, adversarial "Winogrande" dataset of 44,000 problems was designed in 2019. The state of the art on this larger dataset as of August 2020 remains the 84.6% reported for fine-tuned BERT.[19]

A version of the Winograd Schema Challenge is one part of the GLUE (General Language Understanding Evaluation) benchmark collection of challenges in automated natural language understanding.[20]

References

  1. Ackerman, Evan (29 July 2014). "Can Winograd Schemas Replace Turing Test for Defining Human-level AI". IEEE Spectrum. Retrieved 29 October 2014.
  2. Levesque, H. J. (2014). "On our best behaviour". Artificial Intelligence. 212: 27–35. doi:10.1016/j.artint.2014.03.007.
  3. "Nuance announces the Winograd Schemas Challenge to Advance Artificial Intelligence Innovation". Business Wire. 28 July 2014. Retrieved 9 November 2014.
  4. Turing, Alan (October 1950). "Computing Machinery and Intelligence" (PDF). Mind. LIX (236): 433–460. doi:10.1093/mind/LIX.236.433. Retrieved 28 October 2014.
  5. Levesque, Hector; Davis, Ernest; Morgenstern, Leora (2012). The Winograd Schema Challenge. Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning. Retrieved 29 October 2014.
  6. Ackerman, Evan (October 2014). "A Better Test Than Turing". IEEE Spectrum. 51 (10): 20–1. doi:10.1109/mspec.2014.6905475.
  7. Lewis, Tanya (11 August 2014). "Brainy Machines Need An Updated IQ Test, Expert Say". Live Science. Retrieved 28 October 2014.
  8. Michael, Julian (18 May 2015). The Theory of Correlation Formulas and Their Application to Discourse Coherence. UT Digital Repository. p. 6. hdl:2152/29979.
  9. Winograd, Terry (January 1972). "Understanding Natural Language" (PDF). Cognitive Psychology. 3 (1): 1–191. doi:10.1016/0010-0285(72)90002-3. Retrieved 4 November 2014.
  10. Davis, Ernest. "A Collection of Winograd Schemas". cs.nyu.edu. NYU. Retrieved 30 October 2014.
  11. Sakaguchi, Keisuke; Ronan Le Bras; Bhagavatula, Chandra; Choi, Yejin (2019). "WinoGrande: An Adversarial Winograd Schema Challenge at Scale". arXiv:1907.10641 [cs.CL].
  12. Boguslavsky, I.M.; Frolova, T.I.; Iomdin, L.L.; Lazursky, A.V.; Rygaev, I.P.; Timoshenko, S.P. (2019). "Knowledge-based approach to Winograd Schema Challenge" (PDF). Proceedings of the International Conference of Computational Linguistics and Intellectual Technologies. Moscow.
  13. "Winograd Schema Challenge". CommonsenseReasoning.org. Retrieved 24 January 2020.
  14. "AAAI 2015 Spring Symposia". Association for the Advancement of Artificial Intelligence. Retrieved 1 January 2015.
  15. Davis, Ernest; Morgenstern, Leora; Ortiz, Charles (Fall 2017). "The First Winograd Schema Challenge at IJCAI-16". AI Magazine.
  16. Liu, Quan; Jiang, Hui; Ling, Zhen-Hua; Zhu, Xiaodan; Wei, Si; Hu, Yu (2016). "Commonsense Knowledge Enhanced Embeddings for Solving Pronoun Disambiguation Problems in Winograd Schema Challenge". arXiv:1611.04146 [cs.AI].
  17. Liu, Quan; Jiang, Hui; Evdokimov, Andrew; Ling, Zhen-Hua; Zhu, Xiaodan; Wei, Si; Hu, Yu (2017). "Cause-Effect Knowledge Acquisition and Neural Association Model for Solving A Set of Winograd Schema Problems". Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence: 2344–2350. doi:10.24963/ijcai.2017/326. ISBN 9780999241103.
  18. Trinh, Trieu H.; Le, Quoc V. (26 September 2019). "A Simple Method for Commonsense Reasoning". arXiv:1806.02847 [cs.AI].
  19. Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; et al. (2020). "Language Models are Few-Shot Learners". arXiv:2005.14165 [cs.CL].
  20. "GLUE Benchmark". GlueBenchmark.com. Retrieved 30 July 2019.