Reification (knowledge representation)

Lambert M. Surhone & Mariam T. Tennoe & Susan F. Henssonow

Language: English

Published: Oct 15, 2022


Reification in knowledge representation involves representing factual assertions so that they can themselves be referred to by other assertions and then manipulated in some way; for example, logical assertions from different witnesses can be compared in order to determine their credibility.

The statement "John is six feet tall" is an assertion that commits the speaker to its truth, whereas the reified statement "Mary reports that John is six feet tall" defers that commitment to Mary. In this way, statements can be incompatible without creating contradictions in reasoning. For example, "John is six feet tall" and "John is five feet tall" are mutually exclusive (thus incompatible); but the reified statements "Mary reports that John is six feet tall" and "Paul reports that John is five feet tall" are not incompatible, since all that can be inferred from them is that either Mary or Paul (or both) is, in fact, incorrect.
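The distinction can be sketched in code. Below, a plain assertion is a (subject, predicate, object) triple, and a reified assertion wraps it in another triple whose object is the inner statement; all names here are illustrative, not taken from any particular KR library.

```python
# A sketch of reification with nested tuples. A bare assertion commits
# to a fact directly; reify() defers that commitment to a reporter.

def assertion(subject, predicate, obj):
    """Represent a factual assertion as a triple."""
    return (subject, predicate, obj)

def reify(reporter, statement):
    """Wrap a statement so commitment is deferred to the reporter."""
    return (reporter, "reports", statement)

def incompatible(a, b):
    """Two triples conflict if they share subject and predicate
    but assert different objects."""
    return a[:2] == b[:2] and a[2] != b[2]

john_six = assertion("John", "height", "six feet")
john_five = assertion("John", "height", "five feet")

mary_says = reify("Mary", john_six)
paul_says = reify("Paul", john_five)

print(incompatible(john_six, john_five))   # True: the bare facts conflict
print(incompatible(mary_says, paul_says))  # False: the reports do not
```

The two reports coexist in the knowledge base without contradiction; a reasoner can still conclude from the conflicting inner statements that at least one reporter is wrong.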

See also
Reification (linguistics)
Reification (fallacy)
Reification (computer science)

Knowledge representation
Knowledge representation and reasoning is an area of artificial intelligence concerned with how to formally "think": how to use a symbol system to represent a domain of discourse (that which can be talked about), together with functions, which may or may not lie within the domain of discourse themselves, that allow inference (formalized reasoning) about the objects in that domain.

Generally speaking, some kind of logic is used both to supply a formal semantics for how reasoning functions apply to symbols in the domain of discourse, and to supply (depending on the particulars of the logic) operators such as quantifiers and modal operators that, together with an interpretation theory, give meaning to the sentences of the logic.
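The idea of symbols plus functions that license inference can be illustrated with a minimal forward chainer: atomic symbols stand for facts, and rules (premises, conclusion) are the machinery of formalized reasoning. The facts and rules below are invented for illustration.

```python
# A minimal forward-chaining sketch: repeatedly apply rules until no
# new facts can be derived (a fixed point).

def forward_chain(facts, rules):
    """facts: set of symbols; rules: list of (premises, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (("human",), "mortal"),
    (("mortal", "greek"), "famous_example"),
]
derived = forward_chain({"human", "greek"}, rules)
print(sorted(derived))  # ['famous_example', 'greek', 'human', 'mortal']
```

Note that the symbols carry no meaning by themselves; it is the interpretation theory, not the chaining procedure, that connects them to the domain of discourse.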

When we design a knowledge representation (and a knowledge representation system to interpret sentences in the logic and derive inferences from them), we have to make trade-offs across a number of design dimensions, described in the following sections.

The single most important decision to be made, however, is the expressivity of the KR. The more expressive the language, the easier (and more compact) it is to "say something". However, more expressive languages are harder to automatically derive inferences from. An example of a less expressive KR is propositional logic.

An example of a more expressive KR is autoepistemic temporal modal logic. Less expressive KRs (those formally less expressive than set theory) may be both complete and consistent. More expressive KRs may be neither complete nor consistent.
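The trade-off is concrete for propositional logic: because it is less expressive, entailment is decidable by brute-force truth-table enumeration, as the sketch below shows (the knowledge base and query are arbitrary examples).

```python
# Decidable entailment for propositional logic: KB entails query iff
# the query is true in every model that satisfies the KB.
from itertools import product

def entails(kb, query, symbols):
    """kb and query are functions from a model (dict of symbol -> bool)
    to bool; symbols is the list of propositional symbols."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False  # counterexample model found
    return True

# KB: (p -> q) and p.  Query: q.  Modus ponens should hold.
kb = lambda m: (not m["p"] or m["q"]) and m["p"]
query = lambda m: m["q"]
print(entails(kb, query, ["p", "q"]))  # True
```

The enumeration takes time exponential in the number of symbols, which already hints at why inference gets harder as expressivity grows: for first-order logic and beyond, no such terminating enumeration exists in general.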

The key problem is to find a KR (and a supporting reasoning system) that can make the inferences your application needs in time, that is, within the resource constraints appropriate to the problem at hand.
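One simple way to honor the "in time" requirement is to bound the reasoner itself. The sketch below reuses forward chaining but stops after a fixed number of derivation rounds, returning whatever was proved within the budget; the function name and step limit are illustrative choices, not a standard technique from any particular system.

```python
# Resource-bounded forward chaining: derive facts for at most
# max_steps rounds, then return whatever has been established.

def bounded_chain(facts, rules, max_steps):
    facts = set(facts)
    for _ in range(max_steps):
        new = {c for ps, c in rules if set(ps) <= facts and c not in facts}
        if not new:
            break  # fixed point reached before the budget ran out
        facts |= new
    return facts

facts = bounded_chain({"human"}, [(("human",), "mortal")], max_steps=5)
print(sorted(facts))  # ['human', 'mortal']
```

An answer produced this way is sound but possibly incomplete: facts whose derivations need more rounds than the budget allows are simply not reported, which is one concrete form of the trade-off the text describes.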

This tension between the kinds of inference an application needs, what counts as "in time", and the cost of generating the representation itself is what makes knowledge representation engineering interesting.