Culturomics

Culturomics is a form of computational lexicology that studies human behavior and cultural trends through the quantitative analysis of digitized texts.[1][2] Researchers data mine large digital archives to investigate cultural phenomena reflected in language and word usage.[3] The term is an American neologism first described in a 2010 Science article, "Quantitative Analysis of Culture Using Millions of Digitized Books", co-authored by Harvard researchers Jean-Baptiste Michel and Erez Lieberman Aiden.[4]

Michel and Aiden helped create the Google Labs project Google Ngram Viewer, which uses n-grams to analyze the Google Books digital library for cultural patterns in language use over time.
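
As a rough illustration of the kind of computation behind such tools, the Python sketch below computes the yearly relative frequency of a phrase in a small year-tagged corpus. The corpus and the query phrase are invented for the example; the actual Ngram Viewer queries precomputed n-gram tables built from the Google Books scans rather than raw text.

    # Minimal sketch: relative yearly frequency of an n-gram in a year-tagged corpus.
    # The tiny in-memory "corpus" is illustrative only.
    from collections import Counter

    corpus = {  # year -> list of documents (hypothetical sample texts)
        1900: ["the telegraph changed communication", "the telegraph and the press"],
        1950: ["television changed communication", "the press and television"],
        2000: ["the internet changed communication forever"],
    }

    def ngrams(tokens, n):
        return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def yearly_relative_frequency(corpus, phrase):
        n = len(phrase.split())
        freq = {}
        for year, docs in corpus.items():
            counts, total = Counter(), 0
            for doc in docs:
                grams = ngrams(doc.split(), n)
                counts.update(grams)
                total += len(grams)
            freq[year] = counts[phrase] / total if total else 0.0
        return freq

    print(yearly_relative_frequency(corpus, "the telegraph"))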

Because the Google Ngram data set is not an unbiased sample[5] and does not include metadata,[6] there are several pitfalls when using it to study language or the popularity of terms.[7] Medical literature, for example, accounts for a large but shifting share of the corpus,[8] and the data set does not account for how often a given work is printed or read.

Studies

Figure: Narrative network of the 2012 US elections[9]

In a study called Culturomics 2.0, Kalev H. Leetaru examined news archives, including print and broadcast media (television and radio transcripts), for words that imparted tone or "mood" as well as geographic data.[10][11] The research retroactively predicted the 2011 Arab Spring and successfully estimated the final location of Osama bin Laden to within 124 miles (200 km).[10][11]

A 2012 paper by Alexander M. Petersen and co-authors[12] found a "dramatic shift in the birth rate and death rates of words":[13] deaths have increased and births have slowed. The authors also identified a universal "tipping point" in the life cycle of new words: about 30 to 50 years after their origin, words either enter the long-term lexicon or fall into disuse.[13]

Culturomic approaches have been taken in the analysis of newspaper content in a number of studies by I. Flaounas and co-authors, which showed macroscopic trends across different news outlets and countries. In 2012, a study of 2.5 million articles suggested that gender bias in news coverage depends on topic, and showed how the readability of newspaper articles is related to topic.[14] A separate study by the same researchers, covering 1.3 million articles from 27 countries,[15] showed macroscopic patterns in the choice of stories to cover: in particular, countries made similar choices when they were linked economically, geographically and culturally, with cultural links revealed by similarity in voting for the Eurovision Song Contest. This study was performed on a vast scale using statistical machine translation, text categorisation and information extraction techniques.

The possibility of detecting mood shifts in a vast population by analysing Twitter content was demonstrated in a study by T. Lansdall-Welfare and co-authors.[16] The study considered 84 million tweets generated by more than 9.8 million users from the United Kingdom over a period of 31 months, showing how public sentiment in the UK changed with the announcement of spending cuts.
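
A minimal sketch of this general style of analysis (not the method of the cited study) is a lexicon-based mood score aggregated by month; the word lists and posts below are hypothetical.

    # Minimal sketch: monthly lexicon-based mood score over (date, text) posts.
    # Word lists and example posts are hypothetical; the cited study used far
    # larger data and its own sentiment measures.
    from collections import defaultdict
    from datetime import date

    POSITIVE = {"happy", "great", "good", "win"}
    NEGATIVE = {"sad", "angry", "cuts", "worried"}

    posts = [
        (date(2010, 9, 3), "feeling great about the weekend"),
        (date(2010, 10, 20), "worried about the spending cuts"),
        (date(2010, 10, 21), "angry and sad about the cuts"),
    ]

    def monthly_mood(posts):
        scores = defaultdict(list)
        for day, text in posts:
            tokens = text.lower().split()
            score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
            scores[(day.year, day.month)].append(score)
        return {month: sum(vals) / len(vals) for month, vals in scores.items()}

    print(monthly_mood(posts))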

In a 2013 study by S. Sudhahar and co-authors, the automatic parsing of textual corpora enabled the extraction of actors and their relational networks on a vast scale, turning textual data into network data. The resulting networks, which can contain thousands of nodes, are then analysed using tools from network theory to identify the key actors, the key communities or parties, and general properties such as the robustness or structural stability of the overall network, or the centrality of certain nodes.[17]
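
To illustrate only the network-analysis step (extracting actors and relations from text requires a syntactic parser and is outside the scope of this sketch), the Python example below builds a small actor network from invented subject-verb-object triples and computes centrality and communities with the networkx library.

    # Minimal sketch: turn (actor, relation, actor) triples into a network.
    # The triples are invented for illustration.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    triples = [
        ("Candidate A", "criticized", "Candidate B"),
        ("Candidate B", "debated", "Candidate A"),
        ("Candidate A", "endorsed", "Senator C"),
        ("Senator C", "supported", "Candidate A"),
        ("Candidate B", "met", "Governor D"),
    ]

    G = nx.Graph()
    for subj, rel, obj in triples:
        if G.has_edge(subj, obj):
            G[subj][obj]["weight"] += 1
        else:
            G.add_edge(subj, obj, weight=1)

    # Key actors by betweenness centrality; groups by modularity-based communities.
    centrality = nx.betweenness_centrality(G)
    communities = greedy_modularity_communities(G)

    print(sorted(centrality.items(), key=lambda kv: -kv[1]))
    print([sorted(c) for c in communities])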

In a 2014 study by T. Lansdall-Welfare and co-authors, 5 million news articles were collected over 5 years[18] and analyzed, suggesting a significant shift in the sentiment of nuclear power coverage corresponding with the Fukushima disaster. The study also extracted the concepts associated with nuclear power before and after the disaster, explaining the change in sentiment as a change in narrative framing.

In 2015, a study revealed the bias of the Google Books data set, which "suffers from a number of limitations which make it an obscure mask of cultural popularity,"[5] and called into question the significance of many of the earlier results.

Culturomic approaches can also contribute towards conservation science through a better understanding of human-nature relationships. In 2016, a publication by Richard Ladle and colleagues (doi:10.1002/fee.1260) highlighted five key areas where culturomics can be used to advance the practice and science of conservation: recognizing conservation-oriented constituencies and demonstrating public interest in nature; identifying conservation emblems; providing new metrics and tools for near-real-time environmental monitoring and for supporting conservation decision making; assessing the cultural impact of conservation interventions; and framing conservation issues and promoting public understanding.

In 2017, a study correlated joint pain with Google search activity and temperature.[19] While the study observed higher search activity for hip and knee pain (but not arthritis) at higher temperatures, it did not (and could not) control for other relevant factors such as physical activity. Mass media misinterpreted the result as "myth busted: rain does not increase joint pain",[20][21] whereas the authors speculated that the observed correlation was due to "changes in physical activity levels".[22]
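
At its core, such a study rests on a correlation between two time series, for example daily temperature and daily search volume. The toy numbers below are invented and only show the computation, not the study's actual modelling; a high correlation by itself says nothing about the confounders mentioned above.

    # Minimal sketch: Pearson correlation between temperature and search volume.
    # The numbers are invented; the cited study used real search and weather data
    # and more careful statistical modelling.
    import numpy as np

    temperature_c = np.array([5, 8, 12, 15, 18, 21, 24, 27])              # hypothetical daily temperatures
    knee_pain_searches = np.array([90, 92, 97, 101, 104, 110, 113, 118])  # hypothetical search volumes

    r = np.corrcoef(temperature_c, knee_pain_searches)[0, 1]
    print(f"Pearson r = {r:.2f}")  # a high r still says nothing about causation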

Criticism

Linguists and lexicographers have expressed skepticism regarding the methods and results of some of these studies, including one by Petersen et al.[23] Others have demonstrated bias in the Ngram data set: their results "call into question the vast majority of existing claims drawn from the Google Books corpus",[5] and suggest that "instead of speaking about general linguistic or cultural change, it seems to be preferable to explicitly restrict the results to linguistic or cultural change ‘as it is represented in the Google Ngram data’",[6] because it is unclear what caused the observed change in the sample.

References

  1. Cohen, Patricia (16 December 2010). "In 500 Billion Words, New Window on Culture". New York Times.
  2. Hayes, Brian (May–June 2011). "Bit Lit". American Scientist. 99 (3): 190. doi:10.1511/2011.90.190. Archived from the original on 2016-10-18. Retrieved 2011-09-09.
  3. Letcher, David W. (April 6, 2011). "Cultoromics: A New Way to See Temporal Changes in the Prevalence of Words and Phrases" (PDF). American Institute of Higher Education 6th International Conference Proceedings. 4 (1): 228. Archived from the original (PDF) on March 3, 2016. Retrieved September 9, 2011.
  4. Michel, Jean-Baptiste; Lieberman Aiden, Erez (16 December 2010). "Quantitative Analysis of Culture Using Millions of Digitized Books". Science. 331 (6014): 176–82. doi:10.1126/science.1199644. PMC 3279742. PMID 21163965.
  5. Pechenick, Eitan Adam; Danforth, Christopher M.; Dodds, Peter Sheridan (2015-10-07). "Characterizing the Google Books Corpus: Strong Limits to Inferences of Socio-Cultural and Linguistic Evolution". PLOS ONE. 10 (10): e0137041. arXiv:1501.00960. Bibcode:2015PLoSO..1037041P. doi:10.1371/journal.pone.0137041. ISSN 1932-6203. PMC 4596490. PMID 26445406.
  6. Koplenig, Alexander (April 2017). "The impact of lacking metadata for the measurement of cultural and linguistic change using the Google Ngram data sets—Reconstructing the composition of the German corpus in times of WWII". Digital Scholarship in the Humanities. 32 (1): 169–188. doi:10.1093/llc/fqv037. ISSN 2055-7671.
  7. Zhang, Sarah. "The Pitfalls of Using Google Ngram to Study Language". WIRED. Retrieved 2017-05-24.
  8. Comparison of example terms
  9. Sudhahar, Saatviga; Veltri, Giuseppe A.; Cristianini, Nello (2015). "Automated analysis of the US presidential elections using Big Data and network analysis". Big Data & Society. 2. doi:10.1177/2053951715572916. S2CID 62188746.
  10. Leetaru, Kalev H. (5 September 2011). "Culturomics 2.0: Forecasting Large-Scale Human Behavior Using Global News Media Tone In Time And Space". First Monday. 16 (9). doi:10.5210/fm.v16i9.3663. Archived from the original on 4 April 2012. Retrieved 9 September 2011.
  11. Quick, Darren (7 September 2011). "Culturomics research uses quarter-century of media coverage to forecast human behavior". Gizmag.com. Retrieved 9 September 2011.
  12. Petersen, Alexander M. (15 March 2012). "Statistical Laws Governing Fluctuations in Word Use from Word Birth to Word Death". Scientific Reports. 2: 313. arXiv:1107.3707. Bibcode:2012NatSR...2E.313P. doi:10.1038/srep00313. PMC 3304511. PMID 22423321.
  13. "The New Science of the Birth and Death of Words ", CHRISTOPHER SHEA, Wall Street Journal, March 16, 2012
  14. Flaounas, Ilias; Ali, Omar; Lansdall-Welfare, Thomas; De Bie, Tijl; Mosdell, Nick; Lewis, Justin; Cristianini, Nello (2013). "Research Methods in the Age of Digital Journalism". Digital Journalism. 1: 102–116. doi:10.1080/21670811.2012.714928. S2CID 61080552.
  15. Flaounas, Ilias; Turchi, Marco; Ali, Omar; Fyson, Nick; De Bie, Tijl; Mosdell, Nick; Lewis, Justin; Cristianini, Nello (2010). "The Structure of the EU Mediasphere". PLOS ONE. 5 (12): e14243. Bibcode:2010PLoSO...514243F. doi:10.1371/journal.pone.0014243. PMC 2999531. PMID 21170383.
  16. Lansdall-Welfare, Thomas; Lampos, Vasileios; Cristianini, Nello (2012). "Effects of the recession on public mood in the UK". Proceedings of the 21st international conference companion on World Wide Web - WWW '12 Companion. p. 1221. doi:10.1145/2187980.2188264. ISBN 9781450312301. S2CID 1825992.
  17. Sudhahar, Saatviga; De Fazio, Gianluca; Franzosi, Roberto; Cristianini, Nello (2015). "Network analysis of narrative content in large corpora". Natural Language Engineering. 21: 81–112. doi:10.1017/S1351324913000247.
  18. Lansdall-Welfare, Thomas; Sudhahar, Saatviga; Veltri, Giuseppe A.; Cristianini, Nello (2014). "On the coverage of science in the media: A big data study on the impact of the Fukushima disaster". 2014 IEEE International Conference on Big Data (Big Data). pp. 60–66. doi:10.1109/BigData.2014.7004454. hdl:2381/31439. ISBN 978-1-4799-5666-1. S2CID 7686818.
  19. Telfer, Scott; Obradovich, Nick (2017-08-09). "Local weather is associated with rates of online searches for musculoskeletal pain symptoms". PLOS ONE. 12 (8): e0181266. Bibcode:2017PLoSO..1281266T. doi:10.1371/journal.pone.0181266. ISSN 1932-6203. PMC 5549896. PMID 28792953.
  20. "Are achy joints associated with rain? Google suggests otherwise". NBC News. Retrieved 2017-08-10.
  21. "This Myth About Joint Pain Is Total Crap". Men's Health. 2017-08-10. Retrieved 2017-08-10.
  22. "Rain increases joint pain? Google suggests otherwise: People's activity levels -- increasing as temperatures rise, to a point -- are likelier than the weather itself to cause pain that motivates online searches, researchers say". ScienceDaily. Retrieved 2017-08-10.
  23. "When physicists do linguistics", BEN ZIMMER, Boston Globe, February 10, 2013

Further reading

  • Culturomics.org, website of the Cultural Observatory at Harvard, directed by Erez Lieberman Aiden and Jean-Baptiste Michel