12.2. The Satisfiability Problem

This page introduces an NP-complete problem based on propositional logic.

Clausal logic

A literal is either a propositional variable or the negation of a propositional variable.

Variables, such as x and y, are called positive literals.

Negated variables, such as ¬x and ¬y, are called negative literals.

A clause is a disjunction (or) of one or more literals. For example,

(x ∨ ¬y ∨ z)

is a clause.

A clausal formula, which we will just call a formula, is a conjunction (and) of one or more clauses.

For example,

(¬x ∨ y) ∧
(¬y ∨ z) ∧
(x ∨ ¬z ∨ y)

is a (clausal) formula.
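For working with such formulas by machine, one common convention (an assumption here, borrowed from the DIMACS CNF format rather than from the text) numbers the variables and uses the sign of an integer for negation:

```python
# Encode x = 1, y = 2, z = 3; a negative integer is a negated literal.
# This represents (¬x ∨ y) ∧ (¬y ∨ z) ∧ (x ∨ ¬z ∨ y) from the example.
formula = [
    [-1, 2],     # clause (¬x ∨ y)
    [-2, 3],     # clause (¬y ∨ z)
    [1, -3, 2],  # clause (x ∨ ¬z ∨ y)
]
```

A formula is then a list of clauses and a clause a list of literals, which keeps later examples short.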

The satisfiability problem

Definition. A formula φ is satisfiable if there exists an assignment of values to its variables that makes φ true.

For example, formula

(¬x ∨ y) ∧
(¬y ∨ z) ∧
(x ∨ ¬z ∨ y)

is satisfiable. Choose

x = false
y = true
z = true
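This can be checked mechanically. A minimal sketch, using the signed-integer encoding x = 1, y = 2, z = 3 (an assumed convention, not the text's notation):

```python
def evaluate(formula, assignment):
    """True iff every clause contains at least one true literal.
    assignment maps variable numbers to booleans."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

# (¬x ∨ y) ∧ (¬y ∨ z) ∧ (x ∨ ¬z ∨ y), with x = 1, y = 2, z = 3:
formula = [[-1, 2], [-2, 3], [1, -3, 2]]

# The assignment from the text: x = false, y = true, z = true.
print(evaluate(formula, {1: False, 2: True, 3: True}))  # → True
```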

The satisfiability problem, usually called SAT, is the following language.

SAT = {φ | φ is a satisfiable clausal formula}.

Thought of as a computational problem, the input to SAT is a clausal formula φ and the problem is to determine whether φ is satisfiable.
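The obvious algorithm simply tries every assignment. A sketch (exponential time, which is exactly the cost the P versus NP question asks whether we can avoid):

```python
from itertools import product

def is_satisfiable(formula):
    """Decide SAT by brute force: test all 2^n truth assignments."""
    variables = sorted({abs(lit) for clause in formula for lit in clause})
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula):
            return True
    return False

print(is_satisfiable([[-1, 2], [-2, 3], [1, -3, 2]]))  # → True
print(is_satisfiable([[1], [-1]]))                     # → False, since x ∧ ¬x
```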

SAT is NP-complete

Cook and Levin independently proved that SAT is NP-complete.

(Both did so before the term NP-complete was even in use. Their work led to the study of NP.)

Cook/Levin Theorem. SAT is NP-complete.

Proof. There are two parts to the proof because there are two parts to the definition of NP-completeness.

First, you must show that SAT is in NP.

Then you must show that, for every problem X in NP, X ≤p SAT.

The first part is by far the easier. The satisfiability problem can be expressed as a test for existence.

φ is satisfiable ⇔ there exists an assignment A of truth values to the variables in φ so that assignment A makes φ true.

So a nondeterministic algorithm for SAT goes like this.

Evidence for φ. An assignment of truth values to the variables that occur in φ.

The evidence is accepted if: the assignment makes φ true.
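In code, only the acceptance test needs to be written down; guessing the evidence is the nondeterministic part. A sketch of the check, using the same signed-integer encoding as before (an assumed convention):

```python
def verify(formula, evidence):
    """The deterministic half of the NP algorithm: accept iff the
    guessed evidence (a truth assignment) makes every clause true.
    One pass over the literals of the formula, so polynomial time."""
    return all(any(evidence[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula)

# Accepted evidence for (¬x ∨ y) ∧ (¬y ∨ z) ∧ (x ∨ ¬z ∨ y):
print(verify([[-1, 2], [-2, 3], [1, -3, 2]],
             {1: False, 2: True, 3: True}))  # → True
```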

The second part (X ≤p SAT for every X in NP) is considerably more difficult.

The proof is a rather long generic reduction; Sipser describes it in full.

I do not duplicate the proof here, but here is a very rough sketch of how it goes.

Given an arbitrary language X in NP, select a polynomial-time nondeterministic Turing machine M for X.

Since M runs in polynomial time, there must be a k so that M runs in time n^k.

To show that X ≤p SAT, we need a polynomial-time computable function f so that, for every y,

y ∈ X ⇔ f(y) ∈ SAT

Suppose that y has length n.

Notice that f(y) produces a clausal formula.

f(y) begins by creating propositional variables e1, …, ez, where z = n^k.

Since M only runs for n^k steps on input y, M cannot look at more bits of evidence than that.

f(y) writes down a logical (clausal) formula φ that expresses, in logic,

"M accepts input (y, e) where e = e1…ez."

Formula φ does not say anything directly about the evidence variables. But once they are chosen, the remaining variables are constrained to describe a computation of M on input (y, e).

So φ is satisfiable if and only if there exist values of the evidence variables so that M accepts input (y, e). But, by the choice of M, that is equivalent to saying that yX. ◊

It should not come as a great surprise that you can use logic to describe computation. After all, computers are built from logic gates, such as and-gates, or-gates and not-gates (inverters).
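As a small illustration of that point (a standard encoding trick, not something the text develops): the statement c = a AND b can itself be written as three clauses, so a whole circuit of gates unrolls into a clausal formula.

```python
from itertools import product

def and_gate(a, b, c):
    """Clauses asserting c ↔ (a AND b), for variable numbers a, b, c.
    This is the usual Tseitin-style encoding of an and-gate."""
    return [[-a, -b, c], [a, -c], [b, -c]]

# Check: the clauses are satisfied exactly when c equals a AND b.
for a, b, c in product([False, True], repeat=3):
    assignment = {1: a, 2: b, 3: c}
    satisfied = all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                    for clause in and_gate(1, 2, 3))
    assert satisfied == (c == (a and b))
print("all 8 cases agree")
```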

Consequences of the Cook/Levin theorem

First, the Cook/Levin theorem tells us that, if the conjecture that P ≠ NP is correct, then SAT is not in P.

It also makes it unnecessary to do more generic reductions in proofs of NP-completeness.

Suppose that you want to show that some language, W, is NP-complete.

First, of course, you show that W is in NP. So you find a polynomial-time nondeterministic algorithm for W.

Then you must show that X ≤p W for every X in NP.

But that can be done by showing that SAT ≤p W.

Then, since X ≤p SAT and SAT ≤p W, we have X ≤p W for every X in NP.
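That last step is just composition of polynomial-time functions. Schematically (the reductions f and g here are hypothetical placeholders, not real reductions):

```python
def compose(g, f):
    """If f computes X ≤p SAT and g computes SAT ≤p W, then g∘f
    computes X ≤p W: f's output has size polynomial in |y|, and a
    polynomial of a polynomial is again a polynomial."""
    return lambda y: g(f(y))

# Toy stand-ins, only to show the shape:
f = lambda y: ("sat-instance", y)    # would be the generic reduction X ≤p SAT
g = lambda phi: ("w-instance", phi)  # would be the reduction SAT ≤p W
h = compose(g, f)                    # h then witnesses X ≤p W
```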