The definition of truth value relies on both an interpretation for the constants of KIF and an assignment for its variables. In encoding knowledge, we often have in mind a specific interpretation for the constants in our language, but we want our variables to range over the universe of discourse (either existentially or universally). In order to provide a notion of semantics that is independent of the assignment of variables, we turn to the notion of satisfaction.
An interpretation logically satisfies a sentence if and only if the truth value of the sentence is true for all variable assignments. Whenever this is the case, we say that the interpretation is a model of the sentence. Extending this notion to sets of sentences, we say that an interpretation is a model of a set of sentences if and only if it is a model of every sentence in the set.
Obviously, a variable assignment has no effect on the truth value of a sentence without free variables (i.e. a ground sentence or one in which all variables are bound). Consequently, if an interpretation satisfies such a sentence for one variable assignment, it satisfies it for every variable assignment.
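The definition of satisfaction can be made concrete with a small evaluator. The sketch below is illustrative, not real KIF machinery: sentences are modeled as nested Python tuples, variables as strings beginning with `$`, and an interpretation as a mapping from each relation name to the set of argument tuples for which it holds. All names here (`holds`, `satisfies`, and the sample relations) are hypothetical.

```python
from itertools import product

def holds(sentence, interp, universe, assignment):
    """Truth value of a sentence under a single variable assignment."""
    op = sentence[0]
    if op == "=>":
        return (not holds(sentence[1], interp, universe, assignment)
                or holds(sentence[2], interp, universe, assignment))
    if op == "forall":
        # ("forall", ("$x", ...), body): true under every binding of the
        # quantified variables to objects in the universe.
        vars_, body = sentence[1], sentence[2]
        return all(holds(body, interp, universe,
                         {**assignment, **dict(zip(vars_, combo))})
                   for combo in product(universe, repeat=len(vars_)))
    # Otherwise an atom (relation arg ...): replace variables by their values.
    args = tuple(assignment.get(a, a) for a in sentence[1:])
    return args in interp.get(op, set())

def satisfies(interp, universe, sentence, variables):
    """True iff the sentence holds under *every* assignment of its
    variables, i.e. the interpretation is a model of the sentence."""
    return all(holds(sentence, interp, universe, dict(zip(variables, combo)))
               for combo in product(universe, repeat=len(variables)))

universe = {"a", "b"}
interp = {"apple": {("a",)}, "red": {("a",)}}
rule = ("=>", ("apple", "$x"), ("red", "$x"))
print(satisfies(interp, universe, rule, ["$x"]))  # True: the one apple is red
```

Note that for a ground sentence `variables` is empty, so `satisfies` reduces to a single call of `holds`, matching the observation above that variable assignments are irrelevant to sentences without free variables.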
The occurrence of free variables in a sentence means that the sentence is true for all assignments of those variables. For example, the sentence (p $x) means that the relation denoted by p holds of every object in the universe of discourse. In other words, the meaning of a sentence with free variables is the same as the meaning of a universally quantified sentence in which all of the free variables are bound by the universal quantifier. In KIF, we use this fact to sanction the dropping of prefix universal quantifiers that do not occur inside the scope of existential quantifiers. In other words, we are permitted to write (=> (apple $x) (red $x)) in place of the more cumbersome (forall ($x) (=> (apple $x) (red $x))).
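This convention can be expressed as a purely syntactic transform: collect the free variables of a sentence and wrap the sentence in a prefix universal quantifier binding them. The sketch below uses the same hypothetical nested-tuple encoding as before (variables are strings beginning with `$`); the helper names are made up for illustration.

```python
def free_vars(sentence):
    """Collect the free variables of a nested-tuple sentence."""
    op = sentence[0]
    if op == "forall":
        # ("forall", ("$x", ...), body): the listed variables are bound.
        return free_vars(sentence[2]) - set(sentence[1])
    if op in ("=>", "and", "or", "not"):
        return set().union(*(free_vars(s) for s in sentence[1:]))
    # Atom (relation arg ...): variables are the "$"-prefixed arguments.
    return {a for a in sentence[1:] if isinstance(a, str) and a.startswith("$")}

def universal_closure(sentence):
    """Wrap a sentence in a prefix forall binding all of its free
    variables, mirroring KIF's convention that free variables are
    read as universally quantified."""
    fv = tuple(sorted(free_vars(sentence)))
    return ("forall", fv, sentence) if fv else sentence

rule = ("=>", ("apple", "$x"), ("red", "$x"))
print(universal_closure(rule))
# ('forall', ('$x',), ('=>', ('apple', '$x'), ('red', '$x')))
```

Applying `universal_closure` to an already-closed sentence leaves it unchanged, since it has no free variables.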
Unfortunately, the notion of satisfaction is disturbing in that it is relative to an interpretation. As a result, different individuals and different programs with different interpretations may disagree on the truth of a sentence.
It is true that, as we add more sentences to a knowledge base, the set of models generally decreases. The goal of knowledge encoding is to write enough sentences so that unwanted interpretations are eliminated. Unfortunately, this is not always possible. Given this fact, how are we to interpret expressions when multiple interpretations remain? The answer is to generalize over interpretations, just as we earlier generalized over variable assignments.
If we have a set of sentences, we say that the set logically entails a sentence if and only if every model of the set is also a model of the sentence.
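Over a finite vocabulary of ground atoms, this definition can be checked directly by enumeration. The sketch below is deliberately minimal and hypothetical: it restricts attention to ground sentences, identifies an interpretation with the set of ground atoms it makes true, and allows only atoms and implications between atoms. A set of sentences entails a conclusion exactly when every interpretation that models the whole set also models the conclusion.

```python
from itertools import chain, combinations

def true_in(sentence, interp):
    """A ground atom is true iff it is in the interpretation; an
    implication ("=>", p, q) is true unless p holds and q does not."""
    if sentence[0] == "=>":
        return sentence[1] not in interp or sentence[2] in interp
    return sentence in interp

def entails(delta, phi, atoms):
    """delta entails phi iff every model of delta, among all
    interpretations over the given ground atoms, is also a model of phi."""
    all_interps = chain.from_iterable(
        combinations(atoms, r) for r in range(len(atoms) + 1))
    return all(true_in(phi, set(i))
               for i in all_interps
               if all(true_in(s, set(i)) for s in delta))

atoms = [("apple", "a"), ("red", "a")]
delta = [("apple", "a"), ("=>", ("apple", "a"), ("red", "a"))]
print(entails(delta, ("red", "a"), atoms))  # True
```

The enumeration also illustrates the earlier point about models shrinking: of the four interpretations over these two atoms, only one models both sentences of delta, and that one makes ("red", "a") true. Dropping the implication from delta leaves a model in which ("red", "a") is false, so the entailment no longer holds.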
With this notion, we can rephrase the goal of knowledge representation as follows. It is to encode enough sentences so that every conclusion we desire is logically entailed by our set of sentences. It is a sad fact that this is not always possible, but it is the ideal toward which we strive.