Chapter 15 - Variations


15.1 Introduction

In this chapter, we give brief descriptions of a few additional types of Logic Programming: Logic Production Systems, Constraint Logic Programming, Disjunctive Logic Programming, Existential Logic Programming, Answer Set Programming, and Inductive Logic Programming.

15.2 Logic Production Systems

Production systems are a programming language paradigm that has been widely used both for computer applications such as expert systems and for representing the processes involved in human thinking. Rules in a production system have the form: conditions ==> actions. The rules are executed in a cycle: facts in a working memory are matched with the conditions to determine the actions to be executed. A working memory is similar to a dataset considered in this volume. If the conditions of more than one rule match, a selection is performed to choose which rule to execute. The execution of a rule involves adding facts to and removing facts from the working memory. An example rule selection strategy is to associate priorities with the rules, and choose a higher priority rule over a lower priority rule. The repeated execution of the rules generates the successive states of the working memory, and this behavior provides an operational semantics for the rules.
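
To make the cycle concrete, here is a small Python sketch of the match-select-act loop described above. It is our own illustration, not the notation of any particular production system: rules are encoded as a priority, a set of condition facts, facts to add, and facts to remove, and the light example is simplified to a single light.

# Each rule: (priority, condition facts, facts to add, facts to remove).
rules = [
    (2, {"button_pushed", "light_off"}, {"light_on"}, {"light_off", "button_pushed"}),
    (1, {"button_pushed", "light_on"}, {"light_off"}, {"light_on", "button_pushed"}),
]

def cycle(working_memory, max_steps=10):
    """Repeatedly match, select the highest-priority matching rule, and execute it."""
    working_memory = set(working_memory)
    for _ in range(max_steps):
        matching = [r for r in rules if r[1] <= working_memory]   # conditions are a subset of memory
        if not matching:
            break
        _, _, additions, removals = max(matching, key=lambda r: r[0])
        working_memory = (working_memory - removals) | additions
    return working_memory

print(cycle({"light_off", "button_pushed"}))   # {'light_on'}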

Production rules have been used to model three kinds of situations: stimulus-response associations; forward chaining; and goal-reduction. We saw an example of a stimulus-response association in Section 14.2 in the system with three lights: whenever the user pushes a button, which is a stimulus, the response is for the system to toggle the corresponding light.

push_button :: p ==> ~p
push_button :: ~p ==> p

The following rule, which we saw in Section 4.5, is an example of forward chaining: any time we add a parent fact, new grandparent facts are derived and added to the dataset. Such rules can also be generalized as operations for updating materialized views as shown in Section 15.3.

parent(X,Y) & parent(Y,Z) ==> grandparent(X,Z)
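
To illustrate, the following Python sketch (our own simplified encoding, not the book's implementation) forward-chains with this rule: each time a parent fact is added, any newly derivable grandparent facts are computed and added to the dataset. The constants art, bob, and cal are hypothetical.

def add_parent(dataset, x, y):
    """Add parent(x,y) and forward-chain to derive any new grandparent facts."""
    dataset.add(("parent", x, y))
    for rel, a, b in list(dataset):
        if rel != "parent":
            continue
        if b == x:                                   # parent(a,x) & parent(x,y)
            dataset.add(("grandparent", a, y))
        if a == y:                                   # parent(x,y) & parent(y,b)
            dataset.add(("grandparent", x, b))

facts = set()
add_parent(facts, "art", "bob")
add_parent(facts, "bob", "cal")
print(("grandparent", "art", "cal") in facts)        # True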

As an example of goal reduction, and in the context of the planning problem considered in Section 11.4, consider the following rule. It states that if our goal is the state do(a,s), that is, the state that results from performing action a in state s, and it is possible to perform action a in state s, then that goal can be reduced to the goal of achieving state s.

goal(do(a,s)) & possible(a,s) ==> goal(s)

From the above examples, we can see that a natural analogy exists between production rules and the dynamic rules and operations considered in this volume. In the Logic Production Systems (LPS) framework, production rules of the first kind are represented as dynamic rules, and production rules of the second and third kinds are represented as view definitions. In addition to giving dynamic rules an operational semantics similar to that of production systems, LPS also gives dynamic rules a logical semantics: it interprets dynamic rules as declarative sentences that need to be made true in a model that contains a sequence of time-stamped relation values, external events, and actions.

15.3 Constraint Logic Programming

Consider the Peano Arithmetic from Section 10.2, and the following query:

number(L) & number(M) & add(L,M,N) & add(L,M,s(N))

As the above query is equivalent to proving N = N + 1, it is trivially false, but when posed to a logic programming system, it will run indefinitely. This is because the evaluation algorithm considered in Chapter 8 does not check the satisfaction of constraints across subgoals. In a constraint logic programming (CLP) system, the set of all constraints is tested for satisfiability at each step of the evaluation, and the system is therefore able to answer that the above query is false.
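
As a sketch of what happens differently in a CLP system, the following Python fragment collects the arithmetic constraints contributed by the two add subgoals and asks a solver whether they are jointly satisfiable. We use the sympy library purely as a stand-in for the CLP system's built-in arithmetic solver; this is an assumption of the illustration, not part of the framework itself.

from sympy import symbols, linsolve

L, M, N = symbols("L M N")

# add(L,M,N) contributes L + M = N; add(L,M,s(N)) contributes L + M = N + 1.
constraints = [L + M - N, L + M - (N + 1)]

# The system is inconsistent, so the solver reports an empty solution set,
# which is how a CLP system can answer the query immediately.
print(linsolve(constraints, [L, M, N]))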

In addition to checking global satisfaction of constraints, a CLP system allows constraints to be stated directly as equations; it allows constraints to appear in queries; and during the evaluation of the queries, it can generate new constraints. To illustrate these features, consider the following program that computes the sum S of integers from 0 to N:

sumto(0,0)
sumto(N,S) :- N ≥ 1 & N ≤ S & sumto(N-1,S-N)

In the second rule above, constraints are stated directly as arithmetic inequalities, and arithmetic expressions appear as arguments to a subgoal. Furthermore, the second rule is unsafe because both N and S are used in its body before they are bound. A CLP system is able to handle such unsafe rules. It can also generate new constraints during the evaluation. For example, during the evaluation of a query such as sumto(N,S) & S ≤ 1, the second rule above will lead to the following expanded version of its body:

N = N1 & S = S1 & N1 ≥ 1 & N1 ≤ S1 & sumto(N1-1,S1-N1)

The above examples illustrate the simplest form of constraints handled by CLP systems: arithmetic equality and inequality. Such arithmetic constraints can appear in the view definitions considered in this volume. We saw an example use of such arithmetic inequality constraints in the Cryptarithmetic problem in Section 6.5, even though the solution of that problem did not require checking constraints across subgoals. In a constraint logic programming framework, the unification procedure of Chapter 8 is generalized to invoke the constraint solver whenever the expressions to be matched contain constraints. At each step of the evaluation, we must find a unifier between the selected subgoal and the head of a matching view definition, and generate any new constraints. In addition, we must check the consistency of the current set of constraints with the constraints in the body of that view definition. Thus, two solvers are involved: the unification procedure and the constraint solver specific to the constraints in use.

Numerous extensions to the basic framework of CLP exist for problems that go beyond simple arithmetic equality and inequality constraints. Some CLP systems allow constraints in which the values are floating-point numbers or are defined using polynomial equations. CLP has also been used for combinatorial search problems, for example, the Map Coloring problem from Chapter 3. In some combinatorial problems, the goal is not just to find one solution but to find optimal solutions according to one or more optimization criteria, to find all solutions, to replace some or all constraints with preferences, or to handle a distributed setting where the constraints are spread among several agents.
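
As a simple illustration of the finite-domain case, the following Python sketch treats a small map-coloring instance as a constraint problem: each region ranges over a finite domain of colors, and each adjacency contributes a disequality constraint. The regions and adjacencies are hypothetical, and the generate-and-test search below merely stands in for the constraint-propagation techniques a real CLP system would use.

from itertools import product

regions = ["wa", "nt", "sa", "q"]                      # hypothetical region names
adjacent = [("wa", "nt"), ("wa", "sa"), ("nt", "sa"), ("nt", "q"), ("sa", "q")]
colors = ["red", "green", "blue"]                      # the finite domain

def solutions():
    """Enumerate colorings that satisfy every disequality constraint."""
    for assignment in product(colors, repeat=len(regions)):
        coloring = dict(zip(regions, assignment))
        if all(coloring[r1] != coloring[r2] for r1, r2 in adjacent):
            yield coloring

print(next(solutions()))   # e.g. {'wa': 'red', 'nt': 'green', 'sa': 'blue', 'q': 'red'}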

15.4 Disjunctive Logic Programming

In Chapter 2, we defined a dataset as a collection of simple facts that characterize the state of an application area. Facts in a dataset are assumed to be true; facts that are not included in the dataset are assumed to be false. There are situations when our knowledge of the application domain is incomplete, in that, given a set of facts, we know that one or more of them must be true but we do not know which. For example, given a person object joe, we know that one of male(joe) or female(joe) must be true, but we may not know which of these is true.

To understand the difficulty posed by incomplete information, consider a world in which we have two objects a and b, a unary relation p, and a disjunctive sentence (p(a) | p(b)) (where | is the or operator). Recall from Chapter 7 that a factoid is logically entailed by a closed logic program if and only if it is true in every model of the program. In this example, a set containing p(a), a set containing p(b), and a set containing both p(a) and p(b) are all models of the program, but their intersection is empty; hence neither p(a) nor p(b) is entailed. If, as we do for ordinary datasets, we then treat the non-entailed facts as false, we can conclude both ~p(a) and ~p(b), which contradicts our disjunctive sentence. There has been extensive research on disjunctive logic programming to investigate techniques that allow us to reason effectively with such incompleteness. We consider one such technique next.

One technique is based on minimal models, that is, models no proper subset of which is also a model. We compute a set of definite facts as those ground atomic facts that occur in all of the minimal models or in none of the minimal models. If we need to determine whether (p(a) | p(b)) is true, it suffices to check whether every minimal model contains p(a) or contains p(b). If so, we can conclude that (p(a) | p(b)) is true. To better appreciate this technique, let us consider a more involved disjunctive logic program, shown below.

q(a)
p(a) | p(b)

The above program has two minimal models: one containing q(a) and p(a), and the other containing q(a) and p(b). Here, the set of definite facts includes q(a), as it appears in both minimal models, and it includes q(b), as it does not appear in any minimal model. We can establish the truth of p(a) | p(b) by verifying that each minimal model contains either p(a) or p(b).
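
The following brute-force Python sketch (our own illustration, intended only to make the definitions concrete rather than to be efficient) enumerates all interpretations over the Herbrand base of this program, keeps the models, discards the non-minimal ones, and then reads off the definite facts and the status of p(a) | p(b).

from itertools import combinations

base = ["p(a)", "p(b)", "q(a)", "q(b)"]

def satisfies(interp):
    # The program consists of the fact q(a) and the disjunction p(a) | p(b).
    return "q(a)" in interp and ("p(a)" in interp or "p(b)" in interp)

interps = [set(c) for r in range(len(base) + 1) for c in combinations(base, r)]
models = [m for m in interps if satisfies(m)]
minimal = [m for m in models if not any(n < m for n in models)]

print(minimal)                                                # the two minimal models above
print([a for a in base if all(a in m for m in minimal)])      # ['q(a)']  definitely true
print([a for a in base if all(a not in m for m in minimal)])  # ['q(b)']  definitely false
print(all("p(a)" in m or "p(b)" in m for m in minimal))       # True: p(a) | p(b) holds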

15.5 Existential Logic Programming

An existential rule is a rule that has an atom with a functional term in its head. Such rules are also known as tuple generating dependencies in database systems. For our purposes, a logic program that contains existential rules is referred to as an existential logic program. Consider the following existential rules.

owns(X,house(X)) :- instance_of(X,person)
has_parent(X,f(X)) :- instance_of(X,person)
has_parent(X,g(X)) :- instance_of(X,male)
instance_of(f(X),person) :- instance_of(X,person)
instance_of(g(X),person) :- instance_of(X,person)

The first rule asserts that if X is a person, then X owns house(X). The second rule asserts that if X is a person, then f(X) is the parent of that person. The third rule asserts that if X is a male, then g(X) is the parent of X. The fourth and fifth rules assert that for every person X, f(X) and g(X) are also instances of person. Each of these five rules has a functional term in its head and is, therefore, an existential rule.

Existential rules can be written in Basic Logic Programming, but their effective use presents two new challenges: termination of reasoning and reasoning with incomplete knowledge.

The first existential rule shown above is the simplest form of an existential rule and, by itself, it does not present any problem with the termination of reasoning. The fourth existential rule, however, leads to non-terminating behavior, because it can be applied recursively to its own conclusions, leading to infinitely many of them.
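
The following small Python sketch (our own illustration) shows the problem: starting from a hypothetical fact instance_of(john,person), each application of the fourth rule produces a new person term that immediately matches the rule body again. The depth cap stands in for the non-termination; without it, the loop would never stop.

def forward_close(start_term, max_new=4):
    """Apply instance_of(f(X),person) :- instance_of(X,person) forward, with a cap."""
    facts = {(start_term, "person")}
    term = start_term
    for _ in range(max_new):
        term = f"f({term})"        # the functional term introduced by the rule head
        facts.add((term, "person"))
    return facts

print(forward_close("john"))
# {('john', 'person'), ('f(john)', 'person'), ('f(f(john))', 'person'), ...}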

Let us next consider an example of incomplete knowledge. Suppose our dataset contains instance_of(john,person) and instance_of(john,male). From the second rule, we conclude has_parent(john,f(john)), and from the third rule, we conclude has_parent(john,g(john)), but the relationship between f(john) and g(john) is under-specified. Logically, f(john) and g(john) are two separate objects, but there are situations when it may be desirable to conclude that they refer to the same person.

15.6 Answer Set Programming

While defining the semantics of logic programs in Section 7.3, we said that an interpretation D satisfies a ground atom p if and only if p is in D. We further said that D satisfies a ground negation ~p if and only if p is not in D. This approach to defining the semantics is also known as negation as failure, because a negated atom is taken to be satisfied simply on account of the atom's absence from D. For a safe and stratified logic program, the negation-as-failure semantics ensures that there is a unique model.

Answer Set Programming (ASP) is an approach for defining the semantics of logic programs that may not be stratified. For example, consider the following rules.

p(1)    p(2)    p(3)
q(3) :- ~r(3)
r(X) :- p(X) & ~q(X)

The above rules are not stratified. In ASP, these rules lead to the two answer sets shown below.

Answer Set 1: {p(1), p(2), p(3), q(3), r(1), r(2)}

Answer Set 2: {p(1), p(2), p(3), r(1), r(2), r(3)}

An answer set solver is a program that takes an answer set program as an input and outputs all the answer sets of that program. A typical answer set solver does not require an input query. In this section, we consider the semantics of answer set logic programs, and some of their important extensions.

The first step in defining the answer set semantics is to compute the set of all ground instances of our rules. For example, for the program considered above, the grounded program is shown below.

p(1)    p(2)    p(3)
q(3) :- ~r(3)
r(1) :- p(1) & ~q(1)
r(2) :- p(2) & ~q(2)
r(3) :- p(3) & ~q(3)

A program that does not contain any negated atoms, or that contains negated atoms but is safe and stratified, has exactly one answer set. The answer set of such a program is identical to its extension as defined in Section 7.3.

We next consider programs that contain negated atoms and are not stratified. To decide whether a set S of ground atoms is an answer set, we form the reduct of the grounded program with respect to S as follows. For every rule of the grounded program such that S contains none of the atoms that appear negated in the body of the rule, we drop the negated atoms from that rule and retain its head and positive subgoals in the reduct. All other rules of the grounded program are dropped altogether. The reduct does not contain any negated atoms, and we compute its extension as defined in Section 7.3. If the extension coincides with S, then S is an answer set of the given program.

As an example, suppose we wish to test if S = {p(1), p(2), p(3)} is an answer set of the grounded program shown above. The reduct of the program with respect to S is shown below.

p(1)    p(2)    p(3)
q(3)
r(1) :- p(1)
r(2) :- p(2)
r(3) :- p(3)

The extension of the reduct is {p(1), p(2), p(3), r(1), r(2), r(3), q(3)} which is different from S. Hence, S = {p(1), p(2), p(3)} is not an answer set of this program.

Now, suppose S = {p(1), p(2), p(3), q(3), r(1), r(2)}. The reduct of the program with respect to this new candidate set is shown below.

p(1)    p(2)    p(3)
q(3)
r(1) :- p(1)
r(2) :- p(2)

The extension of this reduct is {p(1), p(2), p(3), q(3), r(1), r(2)}, which is identical to S. Hence, S = {p(1), p(2), p(3), q(3), r(1), r(2)} is an answer set of this program.
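
The reduct construction is easy to mechanize. The following Python sketch (our own encoding of the grounded program above, not an ASP solver) builds the reduct for a candidate set S, computes its extension, and compares the two, reproducing the tests just shown and confirming the second answer set as well.

# Each ground rule: (head, positive body atoms, negated body atoms).
rules = [
    ("p(1)", [], []), ("p(2)", [], []), ("p(3)", [], []),
    ("q(3)", [], ["r(3)"]),
    ("r(1)", ["p(1)"], ["q(1)"]),
    ("r(2)", ["p(2)"], ["q(2)"]),
    ("r(3)", ["p(3)"], ["q(3)"]),
]

def extension(positive_rules):
    """Least fixpoint of a ground program without negation."""
    facts = set()
    changed = True
    while changed:
        changed = False
        for head, body in positive_rules:
            if all(b in facts for b in body) and head not in facts:
                facts.add(head)
                changed = True
    return facts

def is_answer_set(S):
    # Reduct: drop rules whose negated atoms intersect S; strip negation elsewhere.
    reduct = [(head, pos) for head, pos, neg in rules if not set(neg) & S]
    return extension(reduct) == S

print(is_answer_set({"p(1)", "p(2)", "p(3)"}))                          # False
print(is_answer_set({"p(1)", "p(2)", "p(3)", "q(3)", "r(1)", "r(2)"}))  # True
print(is_answer_set({"p(1)", "p(2)", "p(3)", "r(1)", "r(2)", "r(3)"}))  # True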

Answer set semantics provides an elegant way to define the meaning of unstratified logic programs. ASP has been found to be a useful approach for the declarative specification of a broad range of combinatorial problems, especially those that involve complex constraints. In addition, the ASP framework lends itself to easy generalization to deal with arithmetic and disjunction. Public-domain and commercial ASP solvers with impressive run-time performance are now available.

15.7 Inductive Logic Programming

Induction is reasoning from the specific to the general. For example, consider the following dataset on kinship that is similar to what we have considered in the earlier chapters.

parent(a,b)    parent(a,c)    parent(d,b)
father(a,b)    father(a,c)    mother(d,b)
male(a)    female(c)    female(d)

Given the above dataset, we can use inductive reasoning to infer the following rules (or view definitions):

father(X,Y) :- parent(X,Y) & male(X)
mother(X,Y) :- parent(X,Y) & female(X)

In Inductive Logic Programming, given a dataset, a set of starting view definitions, and a target predicate, we infer a view definition for the target predicate. In the example above, we are given a dataset and no starting view definitions, and we infer the view definitions of father and mother.

In the context of Inductive Logic Programming, the dataset is also referred to as a set of positive examples. Some inductive reasoning algorithms also take as input a set of negative examples. If negative examples are not provided, they can be computed as the set of ground atoms in the Herbrand base that are not present in the dataset. The positive and negative examples taken together are known as the training data.

There are two broad classes of Inductive Logic Programming algorithms: top-down and inverse deduction. In a top-down approach to learning, we start with a general view definition and specialize it until it covers all of the positive examples and none of the negative examples. In the inverse deduction approach, we start from the known facts and search for view definitions from which those facts could have been derived.
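
As a concrete (and deliberately tiny) illustration of the top-down approach, the following Python sketch learns a clause for father from the dataset above. The candidate body literals, the closed-world construction of negative examples, and the generate-and-test search are our own simplifications, not a full ILP system.

from itertools import combinations

dataset = {
    ("parent", "a", "b"), ("parent", "a", "c"), ("parent", "d", "b"),
    ("father", "a", "b"), ("father", "a", "c"), ("mother", "d", "b"),
    ("male", "a"), ("female", "c"), ("female", "d"),
}
constants = {"a", "b", "c", "d"}

positives = {t[1:] for t in dataset if t[0] == "father"}
# Closed-world negatives: every other pair of constants.
negatives = {(x, y) for x in constants for y in constants} - positives

def covers(body, x, y):
    """Does the candidate body hold in the dataset with X = x and Y = y?"""
    for pred, *args in body:
        grounded = tuple(x if a == "X" else y if a == "Y" else a for a in args)
        if (pred, *grounded) not in dataset:
            return False
    return True

# Candidate body literals over the variables X and Y.
candidates = [("parent", "X", "Y"), ("male", "X"), ("female", "X")]

def learn():
    for size in range(1, len(candidates) + 1):          # most general bodies first
        for body in combinations(candidates, size):
            if all(covers(body, x, y) for x, y in positives) and \
               not any(covers(body, x, y) for x, y in negatives):
                return body

body = learn()
print("father(X,Y) :- " + " & ".join(f"{p}({','.join(args)})" for p, *args in body))
# Prints: father(X,Y) :- parent(X,Y) & male(X)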