Everyday Cataloging Concerns
Everyday Cataloging Concerns seeks to identify important areas of research from the perspective of practicing catalogers. We are particularly interested in the kinds of research that need to be done in three specific areas: the design of catalogs; the practice of cataloging; and the tools and standards used by catalogers in their daily work. We seek to identify research topics and project ideas that will lead to improvements in each of these three areas.
Clustering Fiction Works to Improve Online Catalog Displays
1999 - 2000
This study will determine procedures for automatic clustering of records retrieved in online library catalog searches for works of fiction. Automatic clustering will contribute to efforts to ease the problem of information overload for system users.
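The core idea can be sketched as a toy in Python: group retrieved catalog records into work clusters using a normalized author-title key. The matching rules below (dropping leading articles, stripping punctuation) are illustrative assumptions, not the procedures the study itself developed.

```python
import re
from collections import defaultdict

def work_key(record):
    """Build a simplified clustering key from author and title.
    A rough stand-in for the matching rules such a study would test."""
    author = re.sub(r"[^a-z]", "", record["author"].lower())
    # Drop leading articles so 'The Hobbit' and 'Hobbit' cluster together.
    title = re.sub(r"^(the|a|an)\s+", "", record["title"].lower())
    title = re.sub(r"[^a-z]", "", title)
    return (author, title)

def cluster_records(records):
    """Group retrieved records that appear to represent the same work."""
    clusters = defaultdict(list)
    for rec in records:
        clusters[work_key(rec)].append(rec)
    return clusters

records = [
    {"author": "Tolkien, J. R. R.", "title": "The Hobbit", "year": 1937},
    {"author": "Tolkien, J.R.R.", "title": "Hobbit", "year": 1997},
    {"author": "Austen, Jane", "title": "Emma", "year": 1815},
]
clusters = cluster_records(records)
print(len(clusters))  # two work clusters instead of three separate records
```

Displaying one entry per cluster, rather than one per record, is what eases the information-overload problem for users scanning long result lists.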
Jin Ha Lee
Appeal Factors: Enabling Crossmedia Advisory Services
Providing readers’ advisory (RA) is widely acknowledged by the library community as a mission-critical service. However, current RA practices and tools focus heavily on the recommendation of books and audiobooks, excluding wide swaths of library collections in other formats. Additionally, librarians and RA recommendation engines currently rely on metadata fields related to topic and genre, which are limited in their ability to generate great recommendations. We are conducting a three-year research project investigating the common “appeal factors” across multiple types of media, including books, films, video games, graphic novels, and music, to support the provision of robust, 21st century readers’ advisory services in libraries. The goal of this research is to enable libraries to use appeal factors to provide crossmedia advisory services.
Constructing a Metadata Schema for Video Games and Interactive Media
2012 - 2014
The primary objective of this research is to create a metadata schema that can capture the essential information about video games and interactive media in a standardized way, allowing for better navigation through a game collection as well as improved interoperability across multiple organizational systems. Based on a comprehensive domain analysis and empirical data obtained from various user studies, our end goal is to develop a metadata schema specifying the important information features, their definitions, and their attributes. We hope to augment existing standards in the Library and Information Science (LIS) field, such as the Functional Requirements for Bibliographic Records (FRBR) and related standards, as well as assist organizations with video game collections by providing a formal metadata schema and encoding schemes that can be used across multiple game-related websites and other resources.
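To give a concrete sense of what a formal schema of "information features, definitions, and attributes" might look like, here is a hypothetical sketch in Python. The element names below (platform, point of view, and so on) are placeholders of our own choosing, not the elements the project actually specified.

```python
from dataclasses import dataclass, field, asdict
from typing import List, Optional

@dataclass
class GameRecord:
    """Hypothetical core elements for a video game metadata record.
    Element names are illustrative, not the project's actual schema."""
    title: str
    platform: List[str]
    release_year: Optional[int] = None
    developer: Optional[str] = None
    genre: List[str] = field(default_factory=list)
    # Elements like these go beyond traditional bibliographic description:
    point_of_view: Optional[str] = None     # e.g., "first-person"
    number_of_players: Optional[str] = None

record = GameRecord(
    title="Example Quest",
    platform=["PC"],
    release_year=2012,
    genre=["role-playing"],
    point_of_view="third-person",
)
# Serializing to a plain dict is one way such records could be
# exchanged across multiple organizational systems.
print(asdict(record)["title"])  # → Example Quest
```

The interoperability goal is what makes the standardized, typed structure matter: the same record can then be encoded consistently across game-related websites and other resources.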
Linked Data for Professional Education (LD4PE)
The University of Washington Information School’s Linked Data for Professional Education (LD4PE) project will develop a web-based Linked Data Exploratorium to support structured discovery of the learning resources made available online by open educational resource (OER) and commercial providers. The project will include the development of a competency framework for linked data that supports indexing learning resources according to specific competencies or abilities measured against a standard, as well as the skills and knowledge they address. The Exploratorium will assign global identifiers (URIs) to statements of competency, then cite those URIs in metadata descriptions of learning resources. LD4PE will support the education and training of professionals in the use of linked data technology, as well as promote the use of open technology to facilitate discovery of knowledge and cultural heritage across national memory institutions and smaller institutions at the local level.
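The URI-citation pattern can be sketched with plain RDF-style triples in Python. All URIs and the ex:teaches predicate below are hypothetical placeholders; the project's actual identifiers and vocabulary would differ.

```python
# Hypothetical namespaces; the Exploratorium's real URIs would differ.
COMP = "http://example.org/competency/"
RES = "http://example.org/resource/"

# 1. Assign a global identifier (URI) to a statement of competency.
competency = COMP + "write-basic-sparql-query"

# 2. Cite that URI in metadata descriptions of learning resources,
#    modeled here as simple (subject, predicate, object) triples.
triples = [
    (competency, "rdf:type", "ex:Competency"),
    (competency, "skos:prefLabel", "Write a basic SPARQL query"),
    (RES + "sparql-tutorial", "dcterms:title", "SPARQL in One Hour"),
    (RES + "sparql-tutorial", "ex:teaches", competency),
    (RES + "ld-course", "ex:teaches", competency),
]

def resources_teaching(triples, competency_uri):
    """Structured discovery: find resources addressing a competency."""
    return [s for s, p, o in triples
            if p == "ex:teaches" and o == competency_uri]

print(resources_teaching(triples, competency))
```

Because the competency has a global identifier rather than a free-text label, any provider's resource description can cite it, and discovery queries like the one above work across collections.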
Joseph T. Tennis
Planning for development of an integrated learning environment to teach Linked Data technologies
2011 - 2012
"Planning for development of an integrated learning platform to teach the principles and process of metadata design for a Linked Data environment" is a proposal by the University of Washington Information School for an IMLS Level 1 Planning Grant under the National Leadership Program. The project plans to convene a workshop of educators and technology experts to assess requirements for a software platform in support of teaching the design of metadata for the modern Web environment.
Ethics and Intentionality in Knowledge Organization
Part of evaluating knowledge organization systems is knowing whether the actions you take are intentional, informed, and in accordance with particular ethical beliefs. To that end I have looked at some aspects of ethics and intentionality in knowledge organization. Ethics and intentionality also reinforce issues raised by teleology, teleonomy, and subject ontogeny. So I see these three as a family of research interests that help us understand knowledge organization.
Engaged Knowledge Organization (EKO)
If we take knowledge organization to be a craft, then we are assuming it requires skill, time, and care, and results in a work of art. If we think that this work can do good or harm, we have to be sure that the time and care we take is well intentioned, or engaged. This line of work asks: what does it take for us to understand our intentions in knowledge organization, and how, upon reflection, can we act in an engaged way with the work of organizing knowledge?
Metatheory of Indexing and Knowledge Organization
Metatheory, as outlined by Ritzer, can help us better understand theory, serve as a prelude to future theory development, and provide an overarching perspective on theory. Some also say it can serve to evaluate theory. My work in metatheory aims to better understand indexing theory, classification theory, and other regimes of theory so we might improve the theoretically informed practices of indexing, classification, and related work.
Descriptive Informatics and Framework Analysis
Descriptive Informatics looks at metadata in the wild and asks: how do we conceptualize different species of metadata, and how diverse are their design requirements and implementations? We analyze these conceptual constructs of metadata through Framework Analysis, asking: how can we conceptualize the differences and similarities that obtain between species of metadata schemes, and how does what we find affect our rubrics for the design, use, management, and evaluation of such systems?
Ontomon
One way to describe the similarities and differences between indexing languages is to measure the terms used in them. We can create visualizations of these measurements to inspect the comparisons. Such radar graphs are evocative of creatures, and such measurements are not unlike cladistic taxonomy. So I have coined the term ontomon (ontology monster) to characterize these visualizations.
Teleology and Teleonomy in Metadata
We can also compare metadata by investigating the purposes of its creation, maintenance, and use. For example, we can compare and contrast metadata for bibliographic control with metadata for the purpose of presuming authenticity. In fact, we must carry out this kind of analysis in order to do work in ontogenic analysis; otherwise we cannot assume a constant purpose over time.
Framework Analysis
In order to understand the differences between ontologies, thesauri, and other knowledge organization systems, we have developed another analytical technique called framework analysis. This looks not only at the structure of the system, but also at the work practices that surround it and the discourse that outlines the purpose, rhetoric, and context of the system. The information organization framework (IOF) is thus the unit of analysis, which is larger than that of other analyses of KOS.
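A minimal sketch of the measurement step behind an ontomon: reduce each indexing language's vocabulary to a small vector of measurements that a radar graph could then plot. The dimensions chosen below are our own illustrative assumptions, not a published ontomon metric.

```python
def measure(terms):
    """Reduce an indexing language's vocabulary to a few measurements.
    These dimensions (size, average term length, share of compound
    terms) are illustrative choices, not a published ontomon metric."""
    n = len(terms)
    return {
        "term_count": n,
        "avg_term_length": sum(len(t) for t in terms) / n,
        "compound_share": sum(" " in t for t in terms) / n,
    }

thesaurus_a = ["dogs", "working dogs", "cats", "domestic animals"]
thesaurus_b = ["canines", "felines", "pets"]

profile_a = measure(thesaurus_a)
profile_b = measure(thesaurus_b)
# Plotting each profile on a radar graph yields the creature-like
# silhouette that motivates the name "ontomon".
print(profile_a["term_count"], profile_b["term_count"])  # → 4 3
```

Comparing the two profiles axis by axis is what supports the cladistics-like comparison of indexing languages described above.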
Ontogenic analysis is the process of following a subject through an indexing language. There are many open questions about the power of this method, but more and more people find it useful. Three key areas are affected by it: preservation metadata, online access tools, and interoperability. Research in ontogenic analysis has resulted in a few new constructs useful for evaluating indexing languages over time, some of which are listed below.
Subject Ontogeny
Subject ontogeny is the life of a subject in an indexing language (e.g., a classification scheme like the DDC). Examining how a subject is treated over time tells us about the anatomy of an indexing language. For example, gypsies as a subject has been handled differently in different editions of the DDC.
Scheme Change
Indexing languages (schemes) change over time in order to stay up to date. However, there are implications for discoverability when schemes change. Understanding how schemes change is part of ontogenic analysis and helps designers think about their future users.
Collocative Integrity
If an indexing language changes over time, how does that affect the power of the scheme to collocate? Is there a threshold below which a scheme becomes useless?
Semantic Gravity
Linked to collocative integrity, semantic gravity is the weight of an outdated class number in cataloguing practice. Often libraries will keep an old number because they think it helps users.
Coordinate Enunciation
Once we have examined the life of a subject, we want to ask whether the concepts in the indexing language match those in contemporaneously published literature. We can now mine HathiTrust data to answer these questions.
Structural, Word-Use, and Textual Change
There are three kinds of change that occur through revising an indexing language (scheme). The first kind is structural change, which affects the semantics of the scheme by changing the relationships that obtain between values in the scheme, e.g., moving eugenics out of biology. Word-use change affects meaning, but not structure per se; an example of word-use change is changing gipsies-outcast races to people with status defined by changes in residence. Textual changes are changes in the semantic relationship between the scheme and the literature it organizes. For example, you can find collections using the DDC that have both “sanitation of the race” books and “berries, nuts, and seeds” books in the same class.
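Collocative integrity admits a toy quantitative reading: the share of item pairs a scheme collocated before a revision that remain collocated after it. This formulation, and the class numbers below, are illustrative assumptions of ours, not a published measure.

```python
from itertools import combinations

def collocated_pairs(assignments):
    """All pairs of items that share a class number in one edition."""
    by_class = {}
    for item, cls in assignments.items():
        by_class.setdefault(cls, []).append(item)
    pairs = set()
    for items in by_class.values():
        pairs.update(combinations(sorted(items), 2))
    return pairs

def collocative_integrity(old, new):
    """Toy measure: fraction of previously collocated pairs that
    the revised scheme still collocates (1.0 = fully preserved)."""
    old_pairs = collocated_pairs(old)
    if not old_pairs:
        return 1.0
    return len(old_pairs & collocated_pairs(new)) / len(old_pairs)

# Hypothetical class numbers: a revision moves book b3 out of its class.
old_edition = {"b1": "575", "b2": "575", "b3": "575", "b4": "641"}
new_edition = {"b1": "575", "b2": "575", "b3": "363", "b4": "641"}
print(collocative_integrity(old_edition, new_edition))  # → 1/3 ≈ 0.333
```

A threshold question like the one posed above then becomes empirical: how low can this fraction fall before the scheme stops being useful for retrieval?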
Inferring the hierarchical structure of citation networks to improve semantic search of the scholarly literature
2014 - 2016
In past studies we have found it possible to build a powerful recommendation engine based upon the hierarchical structure of the scholarly literature, as extracted using our InfoMap network clustering algorithm. Here we propose to extend this approach to build methods for semantic search that take the structure of the scholarly literature into account, guiding researchers to important documents within knowledge communities to which their query terms are of greatest relevance. By combining hierarchical citation analysis with text-based searching, we aim to provide new tools for scholarly navigation.
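The combination can be sketched as a toy ranking function in Python: text-overlap scores are boosted by how relevant a document's citation-derived cluster (its "knowledge community") is to the query as a whole. The corpus, cluster labels, and weighting below are illustrative assumptions, not the project's actual method or the InfoMap algorithm itself.

```python
from collections import defaultdict

# Toy corpus: cluster labels stand in for the groups a citation-based
# clustering of the scholarly literature would produce.
docs = {
    "d1": {"cluster": "netsci", "terms": {"community", "detection", "networks"}},
    "d2": {"cluster": "netsci", "terms": {"random", "walks", "networks"}},
    "d3": {"cluster": "ir", "terms": {"semantic", "search", "retrieval"}},
}

def search(query_terms, docs, cluster_weight=0.5):
    """Rank documents by text overlap, boosted by the total relevance
    of the document's knowledge community to the query."""
    query = set(query_terms)
    cluster_rel = defaultdict(int)
    for d in docs.values():
        cluster_rel[d["cluster"]] += len(query & d["terms"])
    def score(name):
        d = docs[name]
        return len(query & d["terms"]) + cluster_weight * cluster_rel[d["cluster"]]
    return sorted(docs, key=score, reverse=True)

print(search({"networks", "detection"}, docs))  # → ['d1', 'd2', 'd3']
```

Note that d2 outranks d3 despite weaker term overlap because it sits in the community most relevant to the query; that cluster-level boost is the structural signal a purely text-based search would miss.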