Automatic indexing
Automatic indexing is the computerized process of scanning large volumes of documents against a controlled vocabulary, taxonomy, thesaurus or ontology and using those controlled terms to quickly and effectively index large electronic document depositories. These keywords or controlled terms are applied by training a system on rules that determine which words to match. Additional elements, such as syntax, usage, proximity, and other algorithms, may be taken into account depending on the system and what is required for indexing. The rules are commonly expressed as Boolean statements that gather and capture the indexing information from the text.[1] As the number of documents increases exponentially with the proliferation of the Internet, automatic indexing will become essential to maintaining the ability to find relevant information in a sea of irrelevant information. Natural language systems are trained using seven different methods to help cope with this sea of irrelevant information: morphological, lexical, syntactic, numerical, phraseological, semantic, and pragmatic. Each of these looks at different parts of speech and terms to build a domain for the specific information being covered, and together they form the automated indexing process.[1]
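The rule-based matching described above can be illustrated with a minimal sketch. The controlled vocabulary, the Boolean rules, and the index_document function below are hypothetical examples chosen for illustration, not a standard implementation.

```python
import re

# Hypothetical controlled vocabulary: each preferred term carries a simple
# Boolean rule over word stems ("all" = every stem must appear in the text,
# "any" = at least one stem must appear).
CONTROLLED_VOCABULARY = {
    "Information retrieval": {"all": ["retriev", "document"]},
    "Machine learning": {"all": ["train", "model"]},
    "Indexing": {"any": ["index", "keyword"]},
}

def index_document(text, vocabulary=CONTROLLED_VOCABULARY):
    """Return the controlled terms whose Boolean rule matches the text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    assigned = []
    for term, rule in vocabulary.items():
        stems = rule.get("all", [])
        if stems and all(any(t.startswith(s) for t in tokens) for s in stems):
            assigned.append(term)
            continue
        stems = rule.get("any", [])
        if stems and any(t.startswith(s) for t in tokens for s in stems):
            assigned.append(term)
    return assigned

print(index_document("The system retrieves documents by matching indexed keywords."))
# Expected output: ['Information retrieval', 'Indexing']
```

A production system would layer proximity constraints, syntax rules, and usage statistics on top of this basic stem matching, as described above.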
The automated process can encounter problems, and these are primarily caused by two factors: 1) the complexity of the language; and 2) the lack of intuitiveness and the difficulty in extrapolating concepts out of statements on the part of the computing technology.[2] These are primarily linguistic challenges, and the specific problems involve the semantic and syntactic aspects of language.[2] The problems are measured against a set of defined keywords, which make it possible to determine the accuracy of the system in terms of Hits, Misses, and Noise. These terms refer, respectively, to exact matches, keywords that a computerized system missed but a human would not have, and keywords that the computer selected but a human would not have. A good system should score above 85% Hits relative to the 100% represented by human indexing, which limits Misses and Noise combined to 15% or less. This scale provides a basis for judging what is considered a good automatic indexing system and shows where problems are being encountered.[1]
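The Hits/Misses/Noise comparison can be expressed as a short calculation. The term lists and the evaluate_indexing helper below are illustrative assumptions; they simply show how a hit rate relative to human indexing relates to the 85% guideline mentioned above.

```python
def evaluate_indexing(machine_terms, human_terms):
    """Compare machine-assigned index terms with a human indexer's terms.

    Hits   = terms assigned by both,
    Misses = terms only the human assigned,
    Noise  = terms only the machine assigned.
    The hit rate is reported as a share of the human indexing (100%).
    """
    machine, human = set(machine_terms), set(human_terms)
    hits, misses, noise = machine & human, human - machine, machine - human
    hit_rate = len(hits) / len(human) if human else 0.0
    return {
        "hits": len(hits),
        "misses": len(misses),
        "noise": len(noise),
        "hit_rate": round(hit_rate, 2),
        "acceptable": hit_rate >= 0.85,  # guideline described above
    }

# Hypothetical comparison for a single document
print(evaluate_indexing(
    machine_terms=["Indexing", "Thesauri", "Ontologies", "Databases"],
    human_terms=["Indexing", "Thesauri", "Ontologies", "Controlled vocabulary"],
))
# Here the hit rate is 0.75, below the 85% guideline, so this system would need tuning.
```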
History
There are scholars who cite that the subject of automatic indexing attracted attention as early as the 1950s, particularly with the demand for faster and more comprehensive access to scientific and engineering literature.[3] This attention began with work on text processing published by H. P. Luhn in a series of papers between 1957 and 1959. Luhn proposed that a computer could handle keyword matching, sorting, and content analysis. This was the beginning of automatic indexing and of the formula for pulling keywords from text based on frequency analysis. It was later determined that frequency alone was not sufficient for good descriptors, but this began the path to where we are now with automatic indexing.[4]

This was highlighted by the information explosion, which was predicted in the 1960s[5] and came about through the emergence of information technology and the World Wide Web. The prediction was prepared by Mooers, who outlined the expected role that computing would have for text processing and information retrieval. It said that machines would be used to store documents in large collections and that these machines would be used to run searches. Mooers also predicted the online retrieval environment for indexing databases, and went on to predict an induction inference machine that would revolutionize indexing.[4] This phenomenon required the development of an indexing system that could cope with the challenge of storing and organizing vast amounts of data and could facilitate information access.[6][7]

New electronic hardware further advanced automated indexing since it overcame the barrier imposed by old paper archives, allowing the encoding of information at the molecular level.[5] With this new hardware, tools were developed for assisting users in managing files, organized into categories such as PDM suites like Outlook or Lotus Notes and mind-mapping tools such as MindManager and FreeMind. These allow users to focus on storage and on building a cognitive model.[8] Automatic indexing is also partly driven by the emergence of the field of computational linguistics, which steered research that eventually produced techniques such as the application of computer analysis to the structure and meaning of languages.[3][9] Automatic indexing is further spurred by research and development in the area of artificial intelligence and self-organizing systems, also referred to as thinking machines.[3]
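The frequency-analysis idea attributed to Luhn can be sketched as follows; the stop list and the frequency_keywords helper are simplifications chosen for illustration and do not reproduce Luhn's original formulation.

```python
import re
from collections import Counter

# Small illustrative stop list; Luhn worked with frequency cut-offs rather
# than a fixed word list, so this is a simplification.
STOP_WORDS = {"the", "of", "and", "a", "to", "in", "is", "that", "for", "it", "by"}

def frequency_keywords(text, top_n=5):
    """Rank candidate keywords by raw term frequency (frequency-analysis sketch)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOP_WORDS and len(t) > 2)
    return [word for word, _ in counts.most_common(top_n)]

sample = ("Automatic indexing assigns index terms to documents. "
          "Frequent terms in a document often describe the document's subject.")
print(frequency_keywords(sample, top_n=3))
```

As noted above, frequency alone was later found insufficient for good descriptors, which is why controlled vocabularies and additional rules came to supplement it.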
See also
- Subject indexing – the process which is automated by automatic indexing
- Tag (metadata)
- Web indexing
References
- Hlava, Marjorie M. (31 January 2005). "Automatic Indexing: A Matter of Degree". Bulletin of the American Society for Information Science and Technology. 29 (1): 12–15. doi:10.1002/bult.261.
- Cleveland, Ana; Cleveland, Donald (2013). Introduction to Indexing and Abstracting: Fourth Edition. Santa Barbara, CA: ABC-CLIO. p. 289. ISBN 9781598849769.
- Riaz, Muhammad (1989). Advanced Indexing and Abstracting Practices. Delhi: Atlantic Publishers & Distributors. p. 263.
- Salton, Gerard (September 1987). "Historical Note: The Past Thirty Years in Information Retrieval". Journal of the American Society for Information Science. 38 (5): 375.
- Torres-Moreno, Juan-Manuel (2014). Automatic Text Summarization. Hoboken, NJ: John Wiley & Sons. pp. xii. ISBN 9781848216686.
- Kapetanios, Epaminondas; Sugumaran, Vijayan; Spiliopoulou, Myra (2008). Natural Language and Information Systems: 13th International Conference on Applications of Natural Language to Information Systems, NLDB 2008 London, UK, June 24-27, 2008, Proceedings. Berlin: Springer Science & Business Media. p. 350. ISBN 978-3-540-69857-9.
- Basch, Reva (1996). Secrets of the Super Net Searchers: The Reflections, Revelations, and Hard-won Wisdom of 35 of the World's Top Internet Researchers. Medford, NJ: Information Today, Inc. pp. 271. ISBN 0910965226.
- Jayaweera, Y. D.; Johar, Md Gapar Md; Perera, S. N. "Open Journal Systems".
- Armstrong, Susan (1994). Using Large Corpora. Cambridge, MA: MIT Press. p. 291. ISBN 0262510820.