Herbrand Semantics

Michael Genesereth
Timothy Hinrichs
Eric Kao

Abstract: The traditional semantics for First Order Logic (sometimes called Tarskian semantics) is based on the notion of interpretations of constants. Herbrand semantics is an alternative semantics based directly on truth assignments for ground sentences rather than interpretations of constants. Herbrand semantics is simpler and more intuitive than Tarskian semantics; and, consequently, it is easier to teach and learn. Moreover, it is more expressive. For example, while it is not possible to finitely axiomatize integer arithmetic with Tarskian semantics, this can be done easily with Herbrand Semantics. The downside is a loss of some common logical properties, such as compactness and completeness. However, there is no loss of inferential power. Anything that can be proved according to Tarskian semantics can also be proved according to Herbrand semantics. In this article, we define Herbrand semantics and report in detail on its properties.

1. Introduction

The traditional semantics for First Order Logic (sometimes called Tarskian semantics) is based on the notion of interpretations. An interpretation consists of an arbitrary set of objects (called the universe of discourse) and an interpretation function (1) that maps object constants into elements of this set, (2) that maps function constants into functions on this set, and (3) that maps relation constants into relations on this set.

Herbrand semantics is an alternative semantics for First Order Logic based directly on truth assignments for ground sentences rather than interpretations. A model is simply a truth assignment for the ground atoms of our language. (Equivalently, it is an arbitrary subset of the ground atoms in our language.) In Herbrand semantics, there is no external universe and no interpretation function for constants. In effect, all ground terms are treated as opaque - they "represent" themselves.

Herbrand semantics is widely used in computational settings, notably in database systems, logic programming, and answer set programming. And it has begun to supplant Tarskian semantics in introductory courses in Logic.

One important advantage of Herbrand semantics is simplicity. It is conceptually simpler than Tarskian semantics; and, consequently, it is easier to teach and to learn. To see this, consider the meaning of quantifiers and variables under the two different semantics.

In Tarskian semantics, the meaning of quantifiers is based on the notion of variable assignments and versions of variable assignments. A variable assignment is a mapping of variables to objects in the universe of discourse associated with an interpretation. A version of a variable assignment is a variable assignment in which one of the variables is mapped to a possibly different object in the universe of discourse. A universally quantified sentence is said to be true for a given interpretation and variable assignment if and only if it is true for every version of the given variable assignment. This is a mouthful to say and even harder for people to understand.

In Herbrand semantics, the meaning of quantifiers is much simpler. A universally quantified sentence is true if and only if all of the instances of the quantified sentence are true. An existentially quantified sentence is true if and only if there is some instance that is true. That is all. There are no interpretations and no variable assignments and no variations of variable assignments.

Moreover, our common intuitions about quantifiers are respected. In Herbrand semantics, if a sentence φ(τ) holds for every ground term τ, then the sentence ∀x.φ(x) is true. In other words, a universally quantified sentence is logically equivalent to the set of instances of its scope. (This property does not hold in Tarskian semantics.)
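This instance-based reading of the quantifiers is easy to operationalize. In the sketch below (the encoding is our own, not from the text), a Herbrand model is the set of true ground atoms, and a quantified sentence is evaluated by checking its ground instances:

```python
# A Herbrand model, encoded as the set of true ground atoms over the
# (hypothetical) vocabulary {a, b, p, q}.
ground_terms = ["a", "b"]
model = {("p", "a"), ("p", "b"), ("q", "a", "a")}

def holds_forall(template, model, terms):
    # forall x.phi(x) is true iff every ground instance phi(t) is true
    return all(template(t) in model for t in terms)

def holds_exists(template, model, terms):
    # exists x.phi(x) is true iff some ground instance phi(t) is true
    return any(template(t) in model for t in terms)

print(holds_forall(lambda t: ("p", t), model, ground_terms))  # True
print(holds_exists(lambda t: ("q", t, t), model, ground_terms))  # True
```

With a finite set of ground terms, the quantifiers reduce to finite conjunctions and disjunctions, exactly as the text describes.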

A second important feature of Herbrand semantics is expressiveness. While it is not possible to finitely axiomatize integer arithmetic with Tarskian semantics, this can be done with Herbrand Semantics. While it is not possible to formalize transitive closure in first-order logic with Tarskian semantics, this can be done with Herbrand semantics without introducing negation as failure or model minimization.

The downside of Herbrand semantics is a loss of some logical properties possessed by Tarskian semantics. However, as we shall argue, Herbrand semantics preserves the properties that really matter.

First of all, Herbrand Logic is not compact - there are infinite sets of sentences that are inconsistent while every finite subset is consistent. The upshot of this is that there are infinite sets of sentences where we cannot demonstrate unsatisfiability with a finite argument within the language itself. Fortunately, this does not cause any practical difficulties, since in all cases of practical interest we are working with finite sets of premises.

Another deficiency of Herbrand semantics vis-a-vis Tarskian semantics is that with Herbrand semantics there are restrictions on the cardinality of the worlds that can be axiomatized. Since there is no external universe, the cardinality of the structures that can be axiomatized is equal to the number of ground terms in the language. (To make things easy, we can always choose a countable language. We can even choose an uncountable language, though doing so would ruin some of the nice properties of Herbrand semantics. On the positive side, it is worth noting that in many practical applications we do not care about uncountable sets. Although there are uncountably many real numbers, remember that there are only countably many floating point numbers.) More significantly, recall that the Lowenheim-Skolem Theorem assures us that even with Tarskian semantics we cannot write sentences that distinguish models of different infinite cardinalities. So, it is unclear whether this restriction has any practical significance.

Herbrand Logic shares most important properties with Tarskian Logic. The deductive calculus is complete for all finite axiomatizations. In the absence of function constants, the calculus derives the exact same set of sentences.

When we add functions, we lose this nice property. However, we get some interesting benefits in return. For one, it is possible in Herbrand Logic (with functions) to finitely axiomatize arithmetic. As we know from Gödel, this is not possible in a first-order language with Tarskian semantics. The downside is that we lose completeness. However, it is nice to know that we can at least define things, even though we cannot prove them. Moreover, as mentioned above, we do not actually lose any consequences that we are able to deduce with First-Order Logic.

Note that Herbrand semantics is not new here. It is widely used in discussions of automated reasoning and logic programming and databases. However, in most cases, it is treated (justifiably) as a special case of first-order semantics. In this paper, we turn things around - we start with Herbrand semantics, we study its logical and computational properties, and we then talk about how it can be extended to First Order Logic.

We begin the paper with a definition of the syntax of First Order Logic. We then introduce Herbrand semantics. We discuss its expressiveness and equivalence of various subsets to other logics. We then discuss its proof theory. Finally, we talk about the relationship between Herbrand semantics and Tarskian semantics.

2. Syntax

In Propositional Logic, sentences are constructed from a basic vocabulary of propositional constants. In Herbrand Logic, there are no propositional constants; instead we have object constants, function constants, relation constants, and variables.

In our examples here, we write both variables and constants as strings of letters, digits, and a few non-alphanumeric characters (e.g. "_"). By convention, variables begin with letters from the end of the alphabet (viz. u, v, w, x, y, z). Examples include x, ya, and z_2. By convention, all constants begin with either alphabetic letters (other than u, v, w, x, y, z) or digits. Examples include a, b, 123, comp225, and barack_obama.

Note that there is no distinction in spelling between object constants, function constants, and relation constants. The type of each such word is determined by its usage or, in some cases, by an explicit specification.
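As a rough illustration of the lexical convention, tokens can be classified with a pattern like the following (the regular expression is our own approximation of the convention described above):

```python
import re

# Variables begin with u-z; constants begin with other letters or
# digits.  This regex is our own approximation of the convention.
def is_variable(token):
    return re.fullmatch(r"[u-z][A-Za-z0-9_]*", token) is not None

tokens = ["x", "ya", "z_2", "a", "123", "barack_obama"]
print([t for t in tokens if is_variable(t)])  # ['x', 'ya', 'z_2']
```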

As we shall see, function constants and relation constants are used in forming complex expressions by combining them with an appropriate number of arguments. Accordingly, each function constant and relation constant has an associated arity, i.e. the number of arguments with which that function constant or relation constant can be combined. A function constant or relation constant that can be combined with a single argument is said to be unary; one that can be combined with two arguments is said to be binary; one that can be combined with three arguments is said to be ternary; more generally, a function or relation constant that can be combined with n arguments is said to be n-ary.

A Herbrand vocabulary consists of a set of object constants, a set of function constants, a set of relation constants, and an assignment of arities for each of the function constants and relation constants in the vocabulary. (Note that this definition here is slightly non-traditional. In many textbooks, a vocabulary (called a signature) includes a specification of function constants and relation constants but not object constants, whereas our definition here includes all three types of constants.)

A term is defined to be a variable, an object constant, or a functional expression (as defined below). Terms typically denote objects presumed or hypothesized to exist in the world; and, as such, they are analogous to noun phrases in natural language, e.g. Joe or the car's left front wheel.

A functional expression, or functional term, is an expression formed from an n-ary function constant and n terms enclosed in parentheses and separated by commas. For example, if f is a binary function constant and if a and y are terms, then f(a,y) is a functional expression, as are f(a,a) and f(y,y).

Note that functional expressions can be nested within other functional expressions. For example, if g is a unary function constant and if a is a term, g(a) and g(g(a)) and g(g(g(a))) are all functional expressions.

There are three types of sentences in Herbrand Logic, viz. relational sentences (the analog of propositions in Propositional Logic), logical sentences (analogous to the logical sentences in Propositional Logic), and quantified sentences (which have no analog in Propositional Logic).

A relational sentence is an expression formed from an n-ary relation constant and n terms. For example, if q is a relation constant with arity 2 and if a and y are terms, then the expression shown below is a syntactically legal relational sentence.

q(a, y)

Logical sentences are defined as in Propositional Logic. There are negations, conjunctions, disjunctions, implications, and equivalences. The syntax is exactly the same, except that the elementary components are relational sentences rather than proposition constants.

Quantified sentences are formed from a quantifier, a variable, and an embedded sentence. The embedded sentence is called the scope of the quantifier. There are two types of quantified sentences in Herbrand Logic, viz. universally quantified sentences and existentially quantified sentences.

A universally quantified sentence is used to assert that all objects have a certain property. For example, the following expression is a universally quantified sentence asserting that, if p holds of an object, then q holds of that object and itself.

(∀x.(p(x) ⇒ q(x,x)))

An existentially quantified sentence is used to assert that some object has a certain property. For example, the following expression is an existentially quantified sentence asserting that there is an object that satisfies p and, when paired with itself satisfies q as well.

(∃x.(p(x) ∧ q(x,x)))

Note that quantified sentences can be nested within other sentences. For example, in the first sentence below, we have quantified sentences inside of a disjunction. In the second sentence, we have a quantified sentence nested inside of another quantified sentence.

(∀x.p(x)) ∨ (∃x.q(x,x))
∀x.(∃y.q(x,y))

As with Propositional Logic, we can drop unneeded parentheses in Herbrand Logic, relying on precedence to disambiguate the structure of unparenthesized sentences. In Herbrand Logic, the precedence relations of the logical operators are the same as in Propositional Logic, and quantifiers have higher precedence than logical operators.

The following examples show how to parenthesize sentences with both quantifiers and logical operators. The sentences on the right are partially parenthesized versions of the sentences on the left. (To be fully parenthesized, we would need to add parentheses around each of the sentences as a whole.)

∀x.p(x) ⇒ q(x)    (∀x.p(x)) ⇒ q(x)
∃x.p(x) ∧ q(x)    (∃x.p(x)) ∧ q(x)

Notice that, in each of these examples, the quantifier does not apply to the second relational sentence, even though, in each case, that sentence contains an occurrence of the variable being quantified. If we want to apply the quantifier to a logical sentence, we must enclose that sentence in parentheses, as in the following examples.

∀x.(p(x) ⇒ q(x))
∃x.(p(x) ∧ q(x))

An expression in Herbrand Logic is ground if and only if it contains no variables. For example, the sentence p(a) is ground, whereas the sentence ∀x.p(x) is not.

An occurrence of a variable is free if and only if it is not in the scope of a quantifier of that variable. Otherwise, it is bound. For example, y is free and x is bound in the following sentence.

p(y) ⇒ ∃x.q(x,y)

A sentence is open if and only if it has free variables. Otherwise, it is closed. For example, the first sentence below is open and the second is closed.

p(y) ⇒ ∃x.q(x,y)
∀y.(p(y) ⇒ ∃x.q(x,y))

3. Herbrand Semantics

The Herbrand base for a Herbrand vocabulary is the set of all ground relational sentences that can be formed from the constants of the language. Said another way, it is the set of all sentences of the form r(t1,...,tn), where r is an n-ary relation constant and t1, ... , tn are ground terms.

For a vocabulary with object constants a and b, no function constants, and relation constants p and q where p has arity 1 and q has arity 2, the Herbrand base is shown below.

{p(a), p(b), q(a,a), q(a,b), q(b,a), q(b,b)}

It is worthwhile to note that, for a given relation constant and a finite set of terms, there is an upper bound on the number of ground relational sentences that can be formed using that relation constant. In particular, for a set of terms of size b, there are b^n distinct n-tuples of terms; and hence there are b^n ground relational sentences for each n-ary relation constant. Since the number of relation constants in a vocabulary is finite, this means that, in the absence of function constants, the Herbrand base is also finite.
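For the running example vocabulary (object constants a and b, relations p of arity 1 and q of arity 2), this counting gives 2^1 + 2^2 = 6 sentences, which we can check with a quick sketch (the tuple representation of atoms is our own):

```python
from itertools import product

# Running example: object constants a, b; relations p (arity 1) and
# q (arity 2); no function constants, so the ground terms are just a, b.
terms = ["a", "b"]
relations = {"p": 1, "q": 2}

herbrand_base = {
    (r,) + args
    for r, n in relations.items()
    for args in product(terms, repeat=n)
}
print(len(herbrand_base))  # 2**1 + 2**2 = 6
```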

Of course, not all Herbrand bases are finite. In the presence of function constants, the number of ground terms is infinite; and so the Herbrand base is infinite. For example, in a language with a single object constant a and a single unary function constant f and a single unary relation constant p, the Herbrand base consists of the sentences shown below.

{p(a), p(f(a)), p(f(f(a))), ...}
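Such an infinite base can only be enumerated lazily, one term at a time. A small sketch (the string encoding of terms is our own):

```python
from itertools import islice

def ground_terms():
    # a, f(a), f(f(a)), ... generated lazily, one application of f
    # at a time; the full Herbrand base is infinite.
    t = "a"
    while True:
        yield t
        t = f"f({t})"

print(list(islice((f"p({t})" for t in ground_terms()), 3)))
# ['p(a)', 'p(f(a))', 'p(f(f(a)))']
```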

A truth assignment for a Herbrand Logic language is a function that maps each ground relational sentence in the Herbrand base to a truth value. As in Propositional Logic, we use the digit 1 as a synonym for true and 0 as a synonym for false; and we refer to the value assigned to a ground relational sentence by writing the relational sentence with the name of the truth assignment as a superscript. For example, the truth assignment i defined below is an example for the case of the language mentioned a few paragraphs above.

p(a)^i = 1
p(b)^i = 0
q(a,a)^i = 1
q(a,b)^i = 0
q(b,a)^i = 1
q(b,b)^i = 0

As with Propositional Logic, once we have a truth assignment for the ground relational sentences of a language, the semantics of our operators prescribes a unique extension of that assignment to the complex sentences of the language.

The rules for logical sentences in Herbrand Logic are the same as those for logical sentences in Propositional Logic. A truth assignment i satisfies a negation ¬φ if and only if i does not satisfy φ. Truth assignment i satisfies a conjunction (φ1 ∧ ... ∧ φn) if and only if i satisfies every φi. Truth assignment i satisfies a disjunction (φ1 ∨ ... ∨ φn) if and only if i satisfies at least one φi. Truth assignment i satisfies an implication (φ ⇒ ψ) if and only if i does not satisfy φ or does satisfy ψ. Truth assignment i satisfies an equivalence (φ ⇔ ψ) if and only if i satisfies both φ and ψ or it satisfies neither φ nor ψ.

In order to define satisfaction of quantified sentences, we need the notion of instances. An instance of an expression is an expression in which all variables have been consistently replaced by ground terms. Consistent replacement here means that, if one occurrence of a variable is replaced by a ground term, then all occurrences of that variable are replaced by the same ground term.

A universally quantified sentence is true for a truth assignment if and only if every instance of the scope of the quantified sentence is true for that assignment. An existentially quantified sentence is true for a truth assignment if and only if some instance of the scope of the quantified sentence is true for that assignment.

As an example of these definitions, consider the sentence ∀x.(p(x) ⇒ q(x,x)). What is its truth value under the truth assignment shown above? According to our definition, a universally quantified sentence is true if and only if every instance of its scope is true. For this language, there are just two instances. See below.

p(a) ⇒ q(a,a)
p(b) ⇒ q(b,b)

We know that p(a) is true and q(a,a) is true, so the first instance is true. q(b,b) is false, but so is p(b) so the second instance is true as well. Since both instances are true, the original quantified sentence is true.

Now let's consider a case with nested quantifiers. Is ∀x.∃y.q(x,y) true or false for the truth assignment shown above? As before, we know that this sentence is true if and only if every instance of its scope is true. The two possible instances are shown below.

∃y.q(a,y)
∃y.q(b,y)

To determine the truth of the first of these existential sentences, we must find at least one instance of the scope that is true. The possibilities are shown below. Of these, the first is true; and so the first existential sentence is true.

q(a,a)
q(a,b)

Now, we do the same for the second existentially quantified sentence. The possible instances follow. Of these, again the first is true; and so the second existential sentence is true.

q(b,a)
q(b,b)

Since both existential sentences are true, the original universally quantified sentence must be true as well.
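The evaluation just performed by hand can be mirrored mechanically. In the sketch below (the encoding is our own), the truth assignment i is represented as the set of true ground atoms, and each quantifier becomes a loop over the ground terms a and b:

```python
# The truth assignment i from the text, encoded as the set of true
# ground atoms (everything else is false).
i = {("p", "a"), ("q", "a", "a"), ("q", "b", "a")}
terms = ["a", "b"]

def implies(p, q):
    return (not p) or q

# forall x.(p(x) => q(x,x)): check every ground instance
s1 = all(implies(("p", x) in i, ("q", x, x) in i) for x in terms)

# forall x.exists y.q(x,y): nested loops over the ground terms
s2 = all(any(("q", x, y) in i for y in terms) for x in terms)

print(s1, s2)  # True True
```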

A truth assignment i satisfies a sentence with free variables if and only if it satisfies every instance of that sentence. A truth assignment i satisfies a set of sentences if and only if i satisfies every sentence in the set.

4. Example - Modular Arithmetic

In this example, we show how to characterize Modular Arithmetic in Herbrand Logic. In Modular Arithmetic, there are only finitely many objects. For example, in Modular Arithmetic with modulus 4, we would have just four integers - 0, 1, 2, 3 - and that's all. Our goal here is to define the addition relation. Admittedly, this is a modest goal; but, once we see how to do this, we can use the same approach to define other arithmetic relations.

Let's start with the same relation, which is true of every number and itself and is false for numbers that are different. We can completely characterize the same relation by writing ground relational sentences, one positive sentence for each number and itself and negative sentences for all of the other cases.

same(0,0)   ¬same(0,1)   ¬same(0,2)   ¬same(0,3)
¬same(1,0)   same(1,1)   ¬same(1,2)   ¬same(1,3)
¬same(2,0)   ¬same(2,1)   same(2,2)   ¬same(2,3)
¬same(3,0)   ¬same(3,1)   ¬same(3,2)   same(3,3)

Now, let's axiomatize the next relation, which, for each number, gives the next larger number, wrapping back to 0 after we reach 3.

next(0,1)
next(1,2)
next(2,3)
next(3,0)

Properly, we should write out the negative literals as well. However, we can save that work by writing a single axiom asserting that next is a functional relation, i.e., each number has exactly one successor.

∀x.∀y.∀z.(next(x,y) ∧ next(x,z) ⇒ same(y,z))

In order to see why this saves us the work of writing out the negative literals, we can write this axiom in the equivalent form shown below.

∀x.∀y.∀z.(next(x,y) ∧ ¬same(y,z) ⇒ ¬next(x,z))

The addition table for Modular Arithmetic is the usual addition table for arbitrary numbers except that we wrap around whenever we get past 3. For such a small arithmetic, it is easy to write out the ground relational sentences for addition, as shown below.

plus(0,0,0)   plus(1,0,1)   plus(2,0,2)   plus(3,0,3)
plus(0,1,1)   plus(1,1,2)   plus(2,1,3)   plus(3,1,0)
plus(0,2,2)   plus(1,2,3)   plus(2,2,0)   plus(3,2,1)
plus(0,3,3)   plus(1,3,0)   plus(2,3,1)   plus(3,3,2)

As with next, we avoid writing out the negative literals by writing a suitable functionality axiom for plus.

∀x.∀y.∀z.∀w.(plus(x,y,z) ∧ ¬same(z,w) ⇒ ¬plus(x,y,w))

That's one way to do things, but we can do better. Rather than writing out all of those relational sentences, we can use Herbrand Logic to define plus in terms of same and next and use that axiomatization to deduce the ground relational sentences. The definition is shown below. First, we have an identity axiom. Adding 0 to any number results in the same number. Second we have a successor axiom. If z is the sum of x and y, then the sum of the successor of x and y is the successor of z. Finally, we have our functionality axiom once again.

∀y.plus(0,y,y)
∀x.∀y.∀z.∀x2.∀z2.(plus(x,y,z) ∧ next(x,x2) ∧ next(z,z2) ⇒ plus(x2,y,z2))
∀x.∀y.∀z.∀w.(plus(x,y,z) ∧ ¬same(z,w) ⇒ ¬plus(x,y,w))

One advantage of doing things this way is economy. With these sentences, we do not need the ground relational sentences about plus given above. They are all logically entailed by our sentences about next and the definitional sentences. A second advantage is versatility. Our sentences define plus in terms of next for arithmetic with any modulus, not just modulus 4.
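To see concretely that the ground plus facts follow from these axioms, one can forward-chain from the identity facts, using the successor axiom as a rule. A minimal sketch (the fixpoint loop and the relational encoding are our own, not part of the logic):

```python
# Forward chaining: start with the identity facts plus(0,y,y) and close
# under the successor axiom.  The modulus is a parameter, as the text
# notes; here we use 4 to match the example.
n = 4
next_rel = {(x, (x + 1) % n) for x in range(n)}

plus = {(0, y, y) for y in range(n)}          # identity: plus(0,y,y)
changed = True
while changed:
    changed = False
    for (x, y, z) in list(plus):
        for (x_, x2) in next_rel:
            if x_ != x:
                continue
            for (z_, z2) in next_rel:
                # successor axiom: plus(x,y,z) & next(x,x2) & next(z,z2)
                #                  => plus(x2,y,z2)
                if z_ == z and (x2, y, z2) not in plus:
                    plus.add((x2, y, z2))
                    changed = True

print((3, 1, 0) in plus)  # True: 3 + 1 = 0 (mod 4)
```

The fixpoint contains exactly the 16 ground atoms listed in the table above.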

5. Example - Peano Arithmetic

Now, let's look at the problem of encoding arithmetic for all of the natural numbers. Since there are infinitely many natural numbers, we need infinitely many terms.

One way to get infinitely many terms is to have a vocabulary with infinitely many object constants. While there is nothing wrong with this in principle, it makes the job of axiomatizing arithmetic effectively impossible, as we would have to write out infinitely many literals to capture our successor relation.

An alternative approach is to represent numbers using a single object constant (e.g. 0) and a single unary function constant (e.g. s). We can then represent every number n by applying the function constant to 0 exactly n times. In this encoding, s(0) represents 1; s(s(0)) represents 2; and so forth. With this encoding, we automatically get an infinite universe of terms, and we can write axioms defining addition and multiplication as simple variations on the axioms of Modular Arithmetic.

Unfortunately, even with this representation, axiomatizing Peano Arithmetic is more challenging than axiomatizing Modular Arithmetic. We cannot just write out ground relational sentences to characterize our relations, because there are infinitely many cases to consider. For Peano Arithmetic, we must rely on logical sentences and quantified sentences, not just because they are more economical but because they are the only way we can characterize our relations in finite space.

Let's look at the same relation first. The axioms shown here define the same relation in terms of 0 and s.

∀x.same(x,x)
∀x.(¬same(0,s(x)) ∧ ¬same(s(x),0))
∀x.∀y.(¬same(x,y) ⇒ ¬same(s(x),s(y)))

It is easy to see that these axioms completely characterize the same relation. By the first axiom, the same relation holds of every term and itself. The other two axioms tell us what is not true. The second axiom tells us that 0 is not the same as any composite term. The same holds true with the arguments reversed. The third axiom builds on these results to show that non-identical composite terms of arbitrary complexity do not satisfy the same relation. Viewed the other way around, to see that two non-identical terms are not the same, we just strip away occurrences of s from each term till one of the two terms becomes 0 and the other one is not 0. By the second axiom, these are not the same, and so the original terms are not the same.
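The stripping argument in the preceding paragraph can be phrased as a small decision procedure on ground s-terms (the string representation of terms is our own):

```python
# Decide same(t1, t2) on ground s-terms by stripping matching
# occurrences of s, mirroring the argument in the text.
def same(t1, t2):
    while t1 != "0" and t2 != "0":
        t1, t2 = t1[2:-1], t2[2:-1]   # peel one s(...) from each side
    return t1 == t2

print(same("s(s(0))", "s(s(0))"))  # True
print(same("s(0)", "s(s(0))"))     # False
```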

Once we have the same relation, we can define the other relations in our arithmetic. The following axioms define the plus relation in terms of 0, s, and same. Adding 0 to any number results in that number. If adding a number x to a number y produces a number z, then adding the successor of x to y produces the successor of z. Finally, we have a functionality axiom for plus.

∀y.plus(0,y,y)
∀x.∀y.∀z.(plus(x,y,z) ⇒ plus(s(x),y,s(z)))
∀x.∀y.∀z.∀w.(plus(x,y,z) ∧ ¬same(z,w) ⇒ ¬plus(x,y,w))

The axiomatization of multiplication is analogous. Multiplying any number by 0 produces 0. If a number z is the product of x and y and w is the sum of y and z, then w is the product of the successor of x and y. As before, we have a functionality axiom.

∀y.times(0,y,0)
∀x.∀y.∀z.∀w.(times(x,y,z) ∧ plus(y,z,w) ⇒ times(s(x),y,w))
∀x.∀y.∀z.∀w.(times(x,y,z) ∧ ¬same(z,w) ⇒ ¬times(x,y,w))

That's all we need - just three axioms for same and three axioms for each arithmetic function.
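Read operationally, the base and successor axioms compute sums and products of s-numerals by structural recursion. A sketch under that reading (the string encoding of numerals is our own):

```python
# s-numerals: num(3) == "s(s(s(0)))".  The plus and times axioms
# become structural recursion on the first argument.
def num(k):
    return "0" if k == 0 else f"s({num(k - 1)})"

def plus(x, y):
    # plus(0,y,y);  plus(s(x),y,s(z)) when plus(x,y,z)
    return y if x == "0" else f"s({plus(x[2:-1], y)})"

def times(x, y):
    # times(0,y,0);  times(s(x),y,w) when times(x,y,z) and plus(y,z,w)
    return "0" if x == "0" else plus(y, times(x[2:-1], y))

print(plus(num(2), num(3)) == num(5))   # True
print(times(num(2), num(3)) == num(6))  # True
```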

Before we leave our discussion of Peano arithmetic, it is worthwhile to look at the concept of Diophantine equations. A polynomial equation is a sentence composed using only addition, multiplication, and exponentiation with fixed exponents (that is, numbers, not variables). For example, the expression shown below in traditional math notation is a polynomial equation.

x² + 2y = 4z

A natural Diophantine equation is a polynomial equation in which the variables are restricted to the natural numbers. For example, the polynomial equation here is also a Diophantine equation and happens to have a solution in the natural numbers, viz. x=4 and y=8 and z=8.

Diophantine equations can be readily expressed as sentences in Peano Arithmetic. For example, we can represent the Diophantine equation above with the sentence shown below.

∀x.∀y.∀z.∀u.∀v.∀w.(times(x,x,u) ∧ times(s(s(0)),y,v) ∧ plus(u,v,w) ⇒ times(s(s(s(s(0)))),z,w))

This is a little messy, but it is doable. And we can always clean things up by adding a little syntactic sugar to our notation to make it look like traditional math notation.

Once this mapping is done, we can use the tools of logic to work with these sentences. In some cases, we can find solutions; and, in some cases, we can prove that solutions do not exist. This has practical value in some situations, but it also has significant theoretical value in establishing important properties of Herbrand Logic, a topic that we discuss in a later section.
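For small bounds, searching for solutions amounts to enumerating ground instances and testing the equation. A brute-force sketch for the equation above (the search bound of 10 is an arbitrary assumption):

```python
from itertools import product

# Enumerate ground triples up to an arbitrary bound and test the
# equation x^2 + 2y = 4z directly.
solutions = [(x, y, z)
             for x, y, z in product(range(10), repeat=3)
             if x * x + 2 * y == 4 * z]

print((4, 8, 8) in solutions)  # True, the solution cited in the text
```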

6. Logical Properties

Recall that compactness means that, if an infinite set of sentences is unsatisfiable, there is some finite subset that is unsatisfiable. It guarantees finite proofs.

Theorem 5 (Non-compactness): Herbrand logic is not compact.

Proof. Consider the following infinite set of sentences.

p(a)
p(f(a))
p(f(f(a)))
p(f(f(f(a))))
...

Assuming the vocabulary is {p, a, f}, the ground terms are a, f(a), f(f(a)), ..., and this set of sentences entails ∀x.p(x). Add in the sentence ∃x.¬p(x). Clearly, the infinite set is unsatisfiable. However, every finite subset is satisfiable with respect to the vocabulary {p, a, f}. (Every finite subset is missing either ∃x.¬p(x) or one of the sentences above. If it is the former, the set is satisfiable; if it is the latter, the set can be satisfied by making the missing sentence false.) Thus, compactness does not hold.

Model theory in Herbrand logic is much simpler than it is in first-order logic.

Unlike first-order logic, where two models A and B can be equivalent with varying strengths, i.e. elementarily equivalent, ..., isomorphic, and equal, there is no such hierarchy in Herbrand Logic. Two models either satisfy all the same sentences because they are the exact same model, or they disagree on some ground atom, i.e. one satisfies it, and the other satisfies its negation. Thus, the set of ground literals satisfied by a model uniquely identifies it.

In our discussion of model theory, we use the usual notation for the set of models that satisfy a given sentence set ∆: Mod[∆]. The set of sentences entailed by a set of sentences ∆ is denoted Cn[∆]. Also, the set of sentences true in all models in the set M is denoted as usual by Th[M].

Theorem 7: Let S be the set of ground literals satisfied by a Herbrand model M. Then S is satisfied by exactly one Herbrand model, namely M. Equivalently, Mod[Th[M]] = {M}.

Corollary 8: Lowenheim-Skolem-Tarski does not hold in Herbrand logic.

Proof. Lowenheim-Skolem-Tarski is a theorem of first-order logic which states that if a set of sentences has a model of some infinite size, it has a model of every infinite size. Clearly this theorem does not hold in Herbrand logic, since (over a countable vocabulary) every infinite Herbrand model is countable.

Unlike in Finite Herbrand Logic, not all theories in Herbrand Logic are finitely axiomatizable.

Theorem 8: Some Herbrand models are not finitely axiomatizable.

Proof. Consider a vocabulary with the set of ground terms {a, f(a), f(f(a)), f(f(f(a))), ...} and a single unary relation constant p. Each Herbrand model over this vocabulary is a subset of the set of ground atoms {p(a), p(f(a)), p(f(f(a))), ...}. There are 2^ω such subsets, which is uncountably infinite.

Since there are only countably many sentences from a countable vocabulary, there are models without finite axiomatizations.

Skolemization is another mainstay of many logics that does not have the usual effect in Herbrand logic.

Theorem 6: In Herbrand logic, Skolemization does not preserve satisfiability.

Proof. The following set of sentences is unsatisfiable.

p(a)
∃x.¬p(x)

Skolemizing produces a satisfiable set of sentences: p(a), ¬p(k)

This result is not surprising given the dependence of Herbrand satisfaction on the vocabulary.
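The counterexample can be checked exhaustively, since each vocabulary involved generates only finitely many ground atoms. A sketch (the encoding of atoms as strings is our own):

```python
from itertools import product

# With vocabulary {p, a}, the only ground atom is p(a), so there are
# just two Herbrand models, and {p(a), exists x. not p(x)} is
# unsatisfiable.  Skolemizing introduces a fresh constant k; over
# vocabulary {p, a, k} the set {p(a), not p(k)} is satisfiable.

def models(atoms):
    # enumerate all truth assignments as sets of true atoms
    for bits in product([False, True], repeat=len(atoms)):
        yield {a for a, b in zip(atoms, bits) if b}

# before Skolemization: the ground terms are just {a}
before = any("p(a)" in m and any(f"p({t})" not in m for t in ["a"])
             for m in models(["p(a)"]))

# after Skolemization: sentences p(a) and not p(k); ground terms {a, k}
after = any("p(a)" in m and "p(k)" not in m
            for m in models(["p(a)", "p(k)"]))

print(before, after)  # False True
```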

7. Proof Theory

Corollary 7 (Infinite Proofs): In Herbrand logic, some entailed sentences admit only infinite proofs.

Proof. The proof of the non-compactness theorem demonstrates a set of sentences that entails ∀x.p(x). Any finite proof appeals to only finitely many premises and so must omit at least one of those sentences; the remaining premises do not entail ∀x.p(x). Thus no finite proof can exist for ∀x.p(x).

8. Herbrand Logic Versus First Order Logic

Now that we have presented the semantics and proof systems for both First Order Logic and Herbrand Logic, we can talk about deciding which logic to use in a given situation. We start by comparing the two logics with respect to several theoretical properties.

For any given set of equality-free sentences Σ, we know that every truth assignment that satisfies Σ in Herbrand Logic corresponds to a First Order Logic interpretation that satisfies Σ. As a result, if an equality-free sentence φ is satisfied in all of the First Order Logic interpretations that satisfy Σ, then φ is also satisfied in all of the Herbrand Logic truth assignments that satisfy Σ.

From the definition of logical entailment, we obtain the following theorem: whenever an equality-free sentence is entailed under First Order Logic, it is also entailed under Herbrand Logic. In this sense, Herbrand Logic is at least as powerful as First Order Logic with respect to entailment.

Theorem 10.3: Given a set of equality-free sentences Σ and an equality-free sentence φ, if Σ |=FOL φ then Σ |=HL φ.

A similar result holds for sentences with equality when we extend Herbrand Logic to incorporate equality.

We also know that the same set of sentences conveys more information in Herbrand Logic than in First Order Logic in the sense that the set of sentences admits fewer satisfying worlds and entails more conclusions. However, in a given situation, which logic should we use?

In general, we should use the logic that allows us to simply and accurately model the information available. In the case that the set of objects is known and fixed in advance, Herbrand Logic is the natural choice because it is simple and accurate in modeling the information.

In many applications, the exact set of objects is not fixed, but a "covering" set of identifiers can be used instead. For example, the set of United States presidents is not fixed, but we can fix the Herbrand universe {a, s(a), s(s(a)), ...} to stand for all actual and potential presidents. For another example, new English essays are written every day, but we can fix a Herbrand universe to stand for all existing and potential English essays. Herbrand Logic continues to make sense in these situations because of its simplicity and deductive power.

Finally, there are cases when not even a covering set of identifiers can be fixed in advance. For example, consider the class of all arithmetic structures where the plus operator is commutative. In this class the size of the underlying set of objects can be arbitrarily large, exceeding any fixed set of identifiers (even an infinite set of identifiers). In these cases, Herbrand Logic is inappropriate because it cannot accurately model the information available. First Order Logic, which imposes no restrictions on the set of objects, is the more appropriate choice.

First Order Fitch is sound and complete for First Order Logic. In Herbrand Logic, we have the analogous soundness theorem but not the completeness theorem.

Theorem 10.4 (Soundness of Herbrand Fitch): Given a set of sentences Σ and a sentence φ, if Σ |-HL φ then Σ |=HL φ.

Theorem 10.5 (Incompleteness of Herbrand Fitch): There exists a set of sentences Σ and a sentence φ such that Σ |=HL φ but it is not the case that Σ |-HL φ.

The upshot is that Herbrand Logic itself is expressively powerful but the proof system for Herbrand Logic is not as powerful as its semantics. Even so, the proof system for Herbrand Logic remains more powerful than the proof system for First Order Logic.

Theorem 10.6: Given a set of equality-free sentences Σ and an equality-free sentence φ, if Σ |-FOL φ then Σ |-HL φ.

A similar result holds for sentences with equality when we include the two equality rules into Fitch for Herbrand Logic.