Formal proofs in Relational Logic are analogous to formal proofs in Propositional Logic. The major difference is that there are additional rules of inference and/or axiom schemata to deal with quantifiers and quantified sentences.
As with Propositional Logic, proofs in Relational Logic are based on rules of inference. Recall that a rule of inference consists of (1) a set of sentence patterns, called premises, and (2) another set of sentence patterns, called conclusions. Whenever we have sentences that match the premises of the rule, it is acceptable to infer sentences matching the conclusions. The following paragraphs describe some of these rules.
The rule shown below is called Modus Ponens (MP). The significance of this rule is that, whenever sentences of the form (φ ⇒ ψ) and φ have been established, then it is acceptable to infer the sentence ψ as well.
|φ ⇒ ψ|
|φ|
|----------|
|ψ|
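As a concrete sketch, Modus Ponens can be implemented as a pattern-matching operation on sentences. The tuple representation below, with ("=>", phi, psi) standing for an implication, is our own assumption, not notation from the text.

```python
# A minimal sketch of Modus Ponens as an operation on sentences.
# ("=>", phi, psi) is an assumed representation of (phi => psi).

def modus_ponens(implication, antecedent):
    """If implication has the form (phi => psi) and antecedent is phi,
    return the conclusion psi; otherwise return None."""
    if isinstance(implication, tuple) and len(implication) == 3 and implication[0] == "=>":
        _, phi, psi = implication
        if phi == antecedent:
            return psi
    return None  # the rule does not apply
```

For instance, given the implication ("=>", "p", "q") and the sentence "p", the function returns "q".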
Modus Tollens (MT) is the reverse of Modus Ponens. If we believe that (φ ⇒ ψ) is true and we believe that ψ is false, then we can infer that φ is false as well.
|φ ⇒ ψ|
|¬ψ|
|----------|
|¬φ|
And Elimination (AE) states that, whenever we believe a conjunction of sentences, then we can infer each of the conjuncts. In this case, note that there are multiple conclusions.
|φ ∧ ψ|
|----------|
|φ|
|ψ|
And Introduction (AI) states that, whenever we believe some sentences, we can infer their conjunction.
|φ|
|ψ|
|----------|
|φ ∧ ψ|
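Both conjunction rules are equally mechanical. The sketch below uses an assumed tuple representation, with ("and", phi, psi) standing for a conjunction; this representation is ours, not the text's.

```python
# Sketches of And Elimination and And Introduction.
# ("and", phi, psi) is an assumed representation of (phi ∧ psi).

def and_elimination(conjunction):
    """From (phi and psi), infer each conjunct; note the multiple conclusions."""
    if isinstance(conjunction, tuple) and conjunction[0] == "and":
        return list(conjunction[1:])
    return []  # the rule does not apply

def and_introduction(phi, psi):
    """From phi and psi separately, infer their conjunction."""
    return ("and", phi, psi)
```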
In addition to these rules of inference for logical operators, we have some rules of inference appropriate to quantified sentences.
Universal Instantiation allows us to reason from the general to the particular. It states that, whenever we believe a universally quantified sentence, we can infer an instance of that sentence in which the universally quantified variable is replaced by any appropriate term.
|∀ν.φ|
|----------|
|φ[ν←τ]|
|where τ is free for ν in φ|
For example, consider the sentence ∀y.hates(jane,y). From this premise, we can infer that Jane hates Jill, i.e. hates(jane,jill). We also can infer that Jane hates herself, i.e. hates(jane,jane). We can even infer that Jane hates her mother, i.e. hates(jane,mom(jane)).
In addition, we can use universal instantiation to create conclusions with free variables. For example, from ∀y.hates(jane,y), we can infer hates(jane,y) or, equivalently, hates(jane,z). In doing so, however, we have to be careful to avoid conflicts with other variables and quantifiers in the quantified sentence. This is the reason for the constraint on the replacement term.
As an example of what can go wrong without this constraint, consider the sentence ∀y.∃z.hates(y,z), i.e. everybody hates somebody. From this sentence, it makes sense to infer ∃z.hates(x,z), i.e. x hates somebody. However, we do not want to infer ∃z.hates(z,z), i.e. that there is someone who hates himself.
We can avoid this problem by obeying the restriction on the Universal Instantiation rule. We say that a term τ is free for a variable ν in an expression φ if and only if no free occurrence of ν in φ lies within the scope of a quantifier of some variable in τ. For example, the term x is free for y in ∃z.hates(y,z). However, the term z is not free for y, since y is being replaced by z and y occurs within the scope of a quantifier of z. Thus, we cannot substitute z for y in this sentence, and we avoid the problem we have just described.
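The "free for" test itself is mechanical. The sketch below uses an assumed tuple representation, with ("forall", v, body) and ("exists", v, body) for quantified sentences and tuples such as ("hates", "y", "z") for atoms; the set `variables` says which symbols count as variables. All of these conventions are ours, not the text's.

```python
# A sketch of the "free for" test: a term t is free for variable v in
# formula phi iff no free occurrence of v in phi lies inside the scope
# of a quantifier that binds a variable of t.

QUANTIFIERS = ("forall", "exists")

def free_vars(phi, variables):
    """Free variables of a formula or term."""
    if isinstance(phi, tuple) and phi[0] in QUANTIFIERS:
        return free_vars(phi[2], variables) - {phi[1]}
    if isinstance(phi, tuple):
        result = set()
        for part in phi[1:]:
            result |= free_vars(part, variables)
        return result
    return {phi} if phi in variables else set()

def free_for(term, v, phi, variables):
    """True iff substituting term for v in phi causes no variable capture."""
    if isinstance(phi, tuple) and phi[0] in QUANTIFIERS:
        _, bound, body = phi
        if bound == v:
            return True          # v is not free below this quantifier
        if bound in free_vars(term, variables) and v in free_vars(body, variables):
            return False         # a variable of term would be captured
        return free_for(term, v, body, variables)
    if isinstance(phi, tuple):   # connective or atom: check the parts
        return all(free_for(term, v, part, variables)
                   for part in phi[1:] if isinstance(part, tuple))
    return True
```

On the example from the text, x is free for y in ∃z.hates(y,z), but z is not.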
Existential Instantiation (EI) allows us to eliminate existential quantifiers. Like universal instantiation, this rule states that we can infer an instance of the quantified sentence in which the existentially quantified variable is replaced by a suitable term. There are two cases of existential instantiation.
The first case of Existential Instantiation covers the situation when the sentence within the quantifier (i.e. the scope) contains no free variables other than the quantified variable. In this case, the quantifier can be dropped and the variable can be replaced by an arbitrary new constant.
|∃ν.φ|
|----------|
|φ[ν←σ]|
|where σ is a new object constant|
|where φ has no free variables other than ν|
When there are free variables other than the quantified variable in the scope of the quantified sentence, those variables must be taken into account. In this case the variable is replaced by a functional term consisting of a new function constant applied to the non-quantified free variables in the quantified sentence.
|∃ν.φ|
|----------|
|φ[ν←π(ν1, ... , νn)]|
|where π is a new function constant|
|where ν1, ... , νn are all the free variables in φ other than ν|
For example, if we have the premise ∃z.hates(y,z) and if foe is a new function constant, we can use Existential Instantiation to infer the sentence hates(y,foe(y)). The term foe(y) here is a term designating the person y hates.
The mention of free variables in the replacement term is intended to capture the relationship between the value of the existentially quantified variable and the values for the free variables in the expression. Without this restriction, we would be able to instantiate the sentence ∀x.∃y.hates(x,y) and the sentence ∃y.∀x.hates(x,y) in the same way, despite their very different meanings.
Of course, when there are no free variables in an expression, the variable can be replaced by a function of no arguments or, equivalently, by a new constant. For example, if we have the sentence ∃y.∀x.hates(x,y), and mike is a new object constant, we can infer ∀x.hates(x,mike), i.e. everyone hates Mike.
Note that, in using Existential Instantiation, it is extremely important to avoid object and function constants that have been used already. Without this restriction, we would be able to infer hates(jill,jill) from the somewhat weaker fact ∃z.hates(jill,z).
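Both cases of Existential Instantiation can be sketched as a single Skolemization step. The tuple representation below and the fresh names c1, c2, ... and f1, f2, ... (standing in for "new" object and function constants) are our own assumptions.

```python
# A sketch of Existential Instantiation: replace the quantified variable
# by a new constant when the scope has no other free variables, and by a
# new function of those variables otherwise.

import itertools

_fresh = itertools.count(1)   # source of names never used before

def free_vars(phi, variables):
    """Free variables of a formula or term."""
    if isinstance(phi, tuple) and phi[0] in ("forall", "exists"):
        return free_vars(phi[2], variables) - {phi[1]}
    if isinstance(phi, tuple):
        result = set()
        for part in phi[1:]:
            result |= free_vars(part, variables)
        return result
    return {phi} if phi in variables else set()

def substitute(phi, v, term):
    """Replace every occurrence of v in phi by term (assumes v is not
    re-quantified inside phi, as is the case for a top-level EI step)."""
    if phi == v:
        return term
    if isinstance(phi, tuple):
        return tuple(substitute(part, v, term) for part in phi)
    return phi

def existential_instantiation(phi, variables):
    """phi must be ("exists", v, body); return the instantiated body."""
    _, v, body = phi
    others = sorted(free_vars(body, variables) - {v})
    n = next(_fresh)
    skolem = ("f%d" % n, *others) if others else "c%d" % n
    return substitute(body, v, skolem)
```

On the text's examples, ∃z.hates(y,z) becomes hates(y,f1(y)), while ∃y.∀x.hates(x,y) becomes ∀x.hates(x,c2) (with f1 and c2 playing the roles of foe and mike).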
Given any set of inference rules, we say that a conclusion φ is derivable from a set of premises Δ if and only if (1) φ is a member of Δ, or (2) φ is the result of applying a rule of inference to sentences derivable from Δ. A derivation of φ from Δ is a sequence of sentences in which each sentence either is a member of Δ or is the result of applying a rule of inference to elements earlier in the sequence.
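This definition has a least-fixed-point flavor that can be sketched directly: keep applying the rules until no new sentences appear. The code below does this for Modus Ponens alone, in an assumed tuple representation where ("=>", phi, psi) stands for an implication.

```python
# A sketch of derivability: the closure of a set of premises under a
# single rule of inference (Modus Ponens), computed by iterating until
# no new sentence is derived.

def derivable(premises):
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for s in list(derived):
            # Modus Ponens: from (phi => psi) and phi, infer psi.
            if (isinstance(s, tuple) and s[0] == "=>"
                    and s[1] in derived and s[2] not in derived):
                derived.add(s[2])
                changed = True
    return derived
```

For example, from {p ⇒ q, q ⇒ r, p} both q and r are derivable.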
As an illustration of these concepts, consider the following problem. We know that horses are faster than dogs and that there is a greyhound that is faster than every rabbit. We know that Harry is a horse and that Ralph is a rabbit. Our job is to derive the fact that Harry is faster than Ralph. For simplicity, we use the rules of inference mentioned above rather than axiom schemata. The curious reader can check that the proof could also be done with the standard axiom schemata and Modus Ponens.
First, we need to formalize our premises. The relevant sentences follow. Note that we have added two facts about the world not stated explicitly in the problem: that greyhounds are dogs and that our speed relationship is transitive.
Our goal is to show that Harry is faster than Ralph. In other words, starting with the preceding sentences, we want to derive the following sentence.
|f(harry,ralph)|
The derivation of this conclusion goes as shown below. The first six lines correspond to the premises just formalized. The seventh line is the result of applying Existential Instantiation to the second sentence. Because there are no free variables, we replace the quantified variable by the new object constant gary. The eighth and ninth lines come from And Elimination. The tenth line is a Universal Instantiation of the ninth line. In the eleventh line, we use Modus Ponens to infer that Gary is faster than Ralph. Next, we instantiate the sentence about greyhounds and dogs and infer that Gary is a dog. Then, we instantiate the sentence about horses and dogs; we use And Introduction to form a conjunction matching the antecedent of this instantiated sentence; and we infer that Harry is faster than Gary. In the final sequence, we instantiate the transitivity sentence, again form the necessary conjunction, and infer the desired conclusion.
|1.||∀x.∀y.(h(x) ∧ d(y) ⇒ f(x,y))||Premise|
|2.||∃y.(g(y) ∧ ∀z.(r(z) ⇒ f(y,z)))||Premise|
|3.||∀y.(g(y) ⇒ d(y))||Premise|
|4.||∀x.∀y.∀z.(f(x,y) ∧ f(y,z) ⇒ f(x,z))||Premise|
|5.||h(harry)||Premise|
|6.||r(ralph)||Premise|
|7.||g(gary) ∧ ∀z.(r(z) ⇒ f(gary,z))||Existential Instantiation: 2|
|8.||g(gary)||And Elimination: 7|
|9.||∀z.(r(z) ⇒ f(gary,z))||And Elimination: 7|
|10.||r(ralph) ⇒ f(gary,ralph)||Universal Instantiation: 9|
|11.||f(gary,ralph)||Modus Ponens: 10, 6|
|12.||g(gary) ⇒ d(gary)||Universal Instantiation: 3|
|13.||d(gary)||Modus Ponens: 12, 8|
|14.||∀y.(h(harry) ∧ d(y) ⇒ f(harry,y))||Universal Instantiation: 1|
|15.||h(harry) ∧ d(gary) ⇒ f(harry,gary)||Universal Instantiation: 14|
|16.||h(harry) ∧ d(gary)||And Introduction: 5, 13|
|17.||f(harry,gary)||Modus Ponens: 15, 16|
|18.||∀y.∀z.(f(harry,y) ∧ f(y,z) ⇒ f(harry,z))||Universal Instantiation: 4|
|19.||∀z.(f(harry,gary) ∧ f(gary,z) ⇒ f(harry,z))||Universal Instantiation: 18|
|20.||f(harry,gary) ∧ f(gary,ralph) ⇒ f(harry,ralph)||Universal Instantiation: 19|
|21.||f(harry,gary) ∧ f(gary,ralph)||And Introduction: 17, 11|
|22.||f(harry,ralph)||Modus Ponens: 20, 21|
As with Propositional Logic, this derivation is completely mechanical. Each conclusion follows from previous conclusions by a mechanical application of a rule of inference. On the other hand, in producing this derivation, we rejected numerous alternative inferences. Making these choices intelligently is one of the key problems in automating the process of inference.
In our discussion of Propositional Logic, we discovered that it was possible to produce a sound and complete proof system involving just a single rule of inference and some axiom schemata. A similar result holds for Relational Logic. However, instead of limiting ourselves to one rule of inference, we include two, viz. Modus Ponens and Universal Generalization.
As for axiom schemata, we begin with the Mendelson axiom schemata for Propositional Logic. Since Relational Logic includes all of the logical operators in Propositional Logic, we have all of the same axiom schemata (and more).
The Implication Introduction schema (II), together with Modus Ponens, allows us to infer implications.
The Implication Distribution schema (ID) allows us to distribute one implication over another. If a sentence φ implies that ψ implies χ, then, if φ implies ψ, φ also implies χ.
The Contradiction Realization schema (CR) permits us to infer a sentence if the negation of that sentence implies some sentence and its negation.
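The displayed forms of these three schemata are not reproduced above. In standard Mendelson-style presentations, which match the descriptions just given, they read as follows (a reconstruction, not the text's own display).

```latex
% Implication Introduction (II)
\varphi \Rightarrow (\psi \Rightarrow \varphi)

% Implication Distribution (ID)
(\varphi \Rightarrow (\psi \Rightarrow \chi)) \Rightarrow
  ((\varphi \Rightarrow \psi) \Rightarrow (\varphi \Rightarrow \chi))

% Contradiction Realization (CR)
(\lnot\varphi \Rightarrow \psi) \Rightarrow
  ((\lnot\varphi \Rightarrow \lnot\psi) \Rightarrow \varphi)
```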
To these schemata for our logical operators, we add some schemata having to do with quantification.
The Universal Distribution schema (UD) allows us to distribute quantification over implication.
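In standard presentations, which match the description just given, this schema reads as follows (a reconstruction, not the text's own display).

```latex
% Universal Distribution (UD)
\forall\nu.(\varphi \Rightarrow \psi) \Rightarrow
  (\forall\nu.\varphi \Rightarrow \forall\nu.\psi)
```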
The Universal Instantiation schema (UI) states that, whenever a set of sentences contains a universally quantified sentence ∀ν.φ, it is acceptable to add a copy of φ in which all occurrences of ν have been replaced by any term τ that is free for ν in φ.
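In standard presentations, this schema reads as follows (a reconstruction, not the text's own display); the side condition is the same "free for" restriction as in the rule of inference.

```latex
% Universal Instantiation (UI), where \tau is free for \nu in \varphi
\forall\nu.\varphi \Rightarrow \varphi[\nu \leftarrow \tau]
```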
The Universal Instantiation schema is effectively the Universal Instantiation rule of inference in the form of a schema. In fact, given Modus Ponens, it allows us to draw all the same conclusions. This is the reason we can drop that rule in our definition of proof.
These axiom schemata together form the Mendelson axiom schemata for Relational Logic. The interesting thing about the Mendelson axiom schemata is that, together with Modus Ponens and Universal Generalization, they alone are sufficient to prove all logical consequences from any set of premises.
As with Propositional Logic, Mendelson's axiom schemata deal with only a subset of our language. They mention only negations, implications, and universally quantified sentences. Fortunately, as with Propositional Logic, it is possible to convert any Relational Logic sentence into a logically equivalent sentence in terms of these operators by applying the following transformations.
|(φ ⇔ ψ)||→||(φ ⇒ ψ) ∧ (ψ ⇒ φ)|
|(φ ⇐ ψ)||→||(ψ ⇒ φ)|
|(φ ∧ ψ)||→||¬(φ ⇒ ¬ψ)|
|(φ ∨ ψ)||→||(¬φ ⇒ ψ)|
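These transformations can be applied recursively from the inside out. The sketch below uses an assumed tuple representation, with ("<=>", phi, psi), ("<=", phi, psi), ("and", phi, psi), ("or", phi, psi), ("not", phi), and ("=>", phi, psi) standing for the corresponding sentences; the representation is ours, not the text's.

```python
# A sketch of the operator-elimination transformations: rewrite a
# sentence into an equivalent one that uses only "not" and "=>".

def eliminate(phi):
    if not isinstance(phi, tuple):
        return phi
    op = phi[0]
    args = [eliminate(part) for part in phi[1:]]  # transform subparts first
    if op == "<=>":
        p, q = args
        # (p <=> q)  becomes  (p => q) and (q => p), then eliminate the "and"
        return eliminate(("and", ("=>", p, q), ("=>", q, p)))
    if op == "<=":
        p, q = args
        return ("=>", q, p)
    if op == "and":
        p, q = args
        return ("not", ("=>", p, ("not", q)))
    if op == "or":
        p, q = args
        return ("=>", ("not", p), q)
    return (op, *args)  # "not", "=>", quantifiers, atoms pass through
```

For example, (p ∨ q) becomes (¬p ⇒ q), and (p ∧ q) becomes ¬(p ⇒ ¬q).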
A proof of a conclusion from a set of premises is a sequence of sentences terminating in the conclusion in which each item is either (1) a premise, (2) an instance of an axiom schema, or (3) the result of applying a rule of inference to earlier items in the sequence.
If there exists a proof of a sentence φ from a set Δ of premises and the Mendelson axiom schemata using Modus Ponens and Universal Generalization, then φ is said to be provable from Δ (written as Δ ⊢ φ) and is called a theorem of Δ.
Finally, we have soundness and completeness results for this proof system. A set of sentences Δ logically entails a sentence φ if and only if φ is provable from Δ.
Soundness Theorem: If φ is provable from Δ, then Δ logically entails φ.
Completeness Theorem: If Δ logically entails φ, then φ is provable from Δ.
1. Proofs. Give a formal proof of the sentence ∀x.(p(x) ⇒ r(x)) from ∀x.(p(x) ⇒ q(x)) and ∀x.(q(x) ⇒ r(x)).
2. Proofs. The law says that it is a crime to sell an unregistered gun. Red has several unregistered guns, and all of them were purchased from Lefty. Give a formal proof that Lefty is a criminal.