What Are Logic And Games, And How Do They Relate?

Logic and games intertwine deeply, offering valuable tools for understanding reasoning, strategy, and computation. This article traces the historical connection between the two and surveys the main kinds of games studied in logic: semantic games, back-and-forth games, and other model-theoretic games.

1. Exploring The Historical Connection Between Logic And Games

The relationship between logic and games dates back to ancient times: think of a debate as a kind of game. Aristotle already made the connection; his writings about syllogism are closely intertwined with his study of the aims and rules of debating. Aristotle’s viewpoint survived into the common medieval name for logic: dialectics. In the mid-twentieth century, Charles Hamblin revived the link between dialogue and the rules of sound reasoning, soon after Paul Lorenzen had connected dialogue to the constructive foundations of logic.

1.1. The Role Of Games In Teaching Logic

Games serve as effective pedagogical tools for logic. Writers throughout the medieval period talk of dialogues as a way of ‘teaching’ or ‘testing’ the use of sound reasoning. We have at least two textbooks of logic from the early sixteenth century that present it as a game for an individual student, and Lewis Carroll’s The Game of Logic (1887) is another example in the same genre.

1.2. Mathematical Game Theory And Logic

Mathematical game theory, founded in the early twentieth century, shares deep connections with logic. Although no mathematical links with logic emerged until the 1950s, it is striking how many of the early pioneers of game theory are also known for their contributions to logic: John Kemeny, J. C. C. McKinsey, John von Neumann, Willard Quine, Julia Robinson, Ernst Zermelo and others. In 1953 David Gale and Frank Stewart made fruitful connections between set theory and games. Shortly afterwards Leon Henkin suggested a way of using games to give semantics for infinitary languages.

1.3. Games Gain Acceptance In Logic

Games started to gain acceptance in logical research by the second half of the twentieth century. The first half of the twentieth century was an era of increasing rigour and professionalism in logic, and to most logicians of that period the use of games in logic would probably have seemed frivolous. Wittgenstein’s language games provoked little response from the logicians. But in the second half of the century the centre of gravity of logical research moved from foundations to techniques, and from about 1960 games were used more and more often in logical papers.

1.4. Integration Of Games And Logic In The 21st Century

By the beginning of the twenty-first century it had become widely accepted that games and logic go together. The result was a huge proliferation of new combinations of logic and games, particularly in areas where logic is applied. Many of these new developments sprang originally from work in pure logic, though today they follow their own agendas. One such area is argumentation theory, where games form a tool for analysing the structure of debates.

2. Defining Logical Games: Key Elements

Logical games, as studied by logicians, possess specific characteristics. From the point of view of game theory, the main games that logicians study are not at all typical. They normally involve just two players, they often have infinite length, the only outcomes are winning and losing, and no probabilities are attached to actions or outcomes. The barest essentials of a logical game are as follows.

2.1. Players In Logical Games

The two players of a logical game are in general called (forall) and (exists). The pronunciations ‘Abelard’ and ‘Eloise’ go back to the mid 1980s and usefully fix the players as male and female, making reference easier: her move, his move. Other names are in common use for the players in particular types of logical game.

2.2. Domain And Plays In Logical Games

The players play by choosing elements of a set (Omega), called the domain of the game. As they choose, they build up a sequence

[ a_0, a_1, a_2,ldots ]

of elements of (Omega). Infinite sequences of elements of (Omega) are called plays. Finite sequences of elements of (Omega) are called positions; they record where a play might have got to by a certain time.

2.3. The Turn Function In Logical Games

A turn function, denoted as (tau), determines which player’s turn it is based on the game’s current state. A function (tau) (the turn function or player function) takes each position (mathbf{a}) to either (exists) or (forall); if (tau(mathbf{a}) = exists), this means that when the game has reached (mathbf{a}), player (exists) makes the next choice (and likewise with (forall)).

2.4. Winning Conditions In Logical Games

The game rules define two sets (W_{forall}) and (W_{exists}) consisting of positions and plays, with the following properties: if a position (mathbf{a}) is in (W_{forall}) then so is any play or longer position that starts with (mathbf{a}) (and likewise with (W_{exists})); and no play is in both (W_{forall}) and (W_{exists}). We say that player (forall) wins a play (mathbf{b}), and that (mathbf{b}) is a win for (forall), if (mathbf{b}) is in (W_{forall}); if some position (mathbf{a}) that is an initial segment of (mathbf{b}) is in (W_{forall}), then we say that player (forall) wins already at (mathbf{a}). (And likewise with (exists) and (W_{exists}).) So to summarise, a logical game is a 4-tuple ((Omega, tau, W_{forall}, W_{exists})) with the properties just described.

2.5. Total Logical Games

We say that a logical game is total if every play is in either (W_{forall}) or (W_{exists}), so that there are no draws. Unless one makes an explicit exception, logical games are always assumed to be total. (Don’t confuse being total with the much stronger property of being determined—see below.)

2.6. Well-Founded And Finite Length Games

Logical games can be well-founded, or even of finite length. It is only for mathematical convenience that the definition above expects the game to continue to infinity even when a player has won at some finite position; there is no interest in anything that happens after a player has won. Many logical games have the property that in every play, one of the players has already won at some finite position; games of this sort are said to be well-founded. An even stronger condition is that there is some finite number (n) such that in every play, one of the players has already won by the (n)-th position; in this case we say that the game has finite length.

2.7. Strategies In Logical Games

A strategy is a set of rules that tells a player how to choose, depending on the earlier moves. Mathematically, a strategy for (forall) consists of a function which takes each position (mathbf{a}) with (tau(mathbf{a}) = forall) to an element (b) of (Omega); we think of it as an instruction to (forall) to choose (b) when the game has reached the position (mathbf{a}). (Likewise with a strategy for (exists).)

2.8. Winning Strategies In Logical Games

A winning strategy ensures that a player wins regardless of the opponent’s moves. A strategy for a player is said to be winning if that player wins every play in which he or she uses the strategy, regardless of what the other player does. At most one of the players has a winning strategy (since otherwise the players could play their winning strategies against each other, and both would win, contradicting that (W_{forall}) and (W_{exists}) have no plays in common).

2.9. Determined Games

A game is determined if one of the players has a winning strategy. A game is said to be determined if one or other of the players has a winning strategy. There are many examples of games that are not determined, as Gale and Stewart showed in 1953 using the axiom of choice. This discovery led to important applications of the notion of determinacy in the foundations of set theory (see entry on large cardinals and determinacy). Gale and Stewart also proved an important theorem that bears their name: Every well-founded game is determined. It follows that every game of finite length is determined—a fact already known to Zermelo in 1913.
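Zermelo’s fact that every game of finite length is determined can be checked directly by backward induction. The following minimal Python sketch is an illustration, not from the text: the encoding of positions as tuples of choices and the names `tau` and `winner` are my own assumptions. It computes which player has a winning strategy in a total game of fixed finite length.

```python
# Backward induction over a total logical game of finite length.
# A position is a tuple of chosen elements; tau(pos) says whose turn it is;
# winner(play) returns 'A' (Abelard) or 'E' (Eloise) for a completed play.

def has_winning_strategy(pos, length, domain, tau, winner):
    """Return the player ('A' or 'E') with a winning strategy from pos."""
    if len(pos) == length:                 # the play is complete
        return winner(pos)
    mover = tau(pos)
    results = [has_winning_strategy(pos + (a,), length, domain, tau, winner)
               for a in domain]
    # The mover wins iff some choice leads to a position he or she wins from.
    return mover if mover in results else ('E' if mover == 'A' else 'A')

# Matching pennies with perfect information: Abelard picks a bit, then
# Eloise picks a bit seeing his choice; she wins iff the bits agree.
tau = lambda pos: 'A' if len(pos) == 0 else 'E'
winner = lambda play: 'E' if play[0] == play[1] else 'A'
print(has_winning_strategy((), 2, [0, 1], tau, winner))  # -> E
```

Because the game is total, exactly one player wins each completed play, so the recursion always returns a well-defined winner; this is in effect the inductive step in the proof of determinacy for finite-length games.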

2.10. Modelling Rationality With Logical Games

Just as in classical game theory, the definition of logical games above serves as a clothes horse that we can hang other concepts onto, for example when modelling rationality and bounded rationality. It is common to have some laws that describe what elements of (Omega) are available for a player to choose at a particular move. Strictly this refinement is unnecessary, because the winning strategies are not affected if we decree instead that a player who breaks the law loses immediately; but for many games this way of viewing them seems unnatural. Below we will see some other extra features that can be added to games.

2.11. The Dawkins Question In Logical Games

If we want (exists)’s motivation in a game (G) to have any explanatory value, then we need to understand what is achieved if (exists) does win. In particular we should be able to tell a realistic story of a situation in which some agent called (exists) is trying to do something intelligible, and doing it is the same thing as winning in the game. As Richard Dawkins said, raising the corresponding question for the evolutionary games of Maynard Smith,

The whole purpose of our search … is to discover a suitable actor to play the leading role in our metaphors of purpose. We … want to say, ‘It is for the good of … ‘. Our quest in this chapter is for the right way to complete that sentence. (The Extended Phenotype, Oxford University Press, Oxford 1982, page 91.)

For future reference let us call this the Dawkins question. In many kinds of logical game it turns out to be distinctly harder to answer than the pioneers of these games realised. (Marion 2009 discusses the Dawkins question further.)

3. Exploring Semantic Games For Classical Logic

Semantic games offer an intuitive approach to understanding truth conditions in classical logic. In the early 1930s Alfred Tarski proposed a definition of truth. His definition consisted of a necessary and sufficient condition for a sentence in the language of a typical formal theory to be true; his necessary and sufficient condition used only notions from syntax and set theory, together with the primitive notions of the formal theory in question.

3.1. Tarski’s Definition Of Truth

Alfred Tarski defined truth in terms of necessary and sufficient conditions using syntax and set theory. In fact Tarski defined the more general relation ‘formula (phi(x_1 ,ldots ,x_n)) is true of the elements (a_1 ,ldots ,a_n)’; truth of a sentence is the special case where (n = 0). For example the question whether

‘For all (x) there is (y) such that R((x, y))’ is true

reduces to the question whether the following holds:

For every object (a) the sentence ‘There is (y) such that R((a, y))’ is true.

3.2. Henkin’s Extension To Infinitary Languages

Leon Henkin extended Tarski’s definition to handle infinitely long sentences using games. In the late 1950s Leon Henkin noticed that we can intuitively understand some sentences which can’t be handled by Tarski’s definition. Take for example the infinitely long sentence

For all (x_0) there is (y_0) such that for all (x_1) there is (y_1) such that … R((x_0, y_0, x_1, y_1,ldots)).

Tarski’s approach fails because the string of quantifiers at the beginning is infinite, and we would never reach an end of stripping them off. Instead, Henkin suggested, we should consider the game where a person (forall) chooses an object (a_0) for (x_0), then a second person (exists) chooses an object (b_0) for (y_0), then (forall) chooses (a_1) for (x_1), (exists) chooses (b_1) for (y_1), and so on.

3.3. Hintikka’s Semantic Games

Jaakko Hintikka adapted and expanded Henkin’s ideas. Soon after Henkin’s work, Jaakko Hintikka added that the same idea applies with conjunctions and disjunctions. We can regard a conjunction ‘(phi wedge psi)’ as a universally quantified statement expressing ‘Every one of the sentences (phi , psi) holds’, so it should be for the player (forall) to choose one of the sentences.

3.4. Game Rules For Quantifiers And Connectives

Hintikka defined game rules for quantifiers and connectives. To bring quantifiers into the same style, he proposed that the game (G(forall x phi(x))) proceeds thus: player (forall) chooses an object and provides a name (a) for it, and the game proceeds as (G(phi(a))). (And likewise with existential quantifiers, except that (exists) chooses.)

3.5. Negation In Semantic Games

Hintikka introduced negation by dualizing the game. Each game G has a dual game which is the same as G except that the players (forall) and (exists) are transposed in both the rules for playing and the rules for winning. The game (G(neg phi)) is the dual of (G(phi)).

3.6. Equivalence Of Semantic Games And Tarski’s Truth

For any first-order sentence (phi), interpreted in a fixed structure (A), player (exists) has a winning strategy for Hintikka’s game (G(phi)) if and only if (phi) is true in (A) in the sense of Tarski. One interesting feature of the proof of this equivalence is that if (phi) is any first-order sentence then the game (G(phi)) has finite length, and so the Gale-Stewart theorem tells us that it is determined.
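On finite structures this equivalence can be made concrete. Here is a hedged Python sketch (the tuple encoding of formulas and the example relation `R` are my own assumptions): it decides whether (exists) has a winning strategy in (G(phi)) by recursion on the formula, which over a finite domain coincides with Tarski truth.

```python
# Hintikka's game: Eloise has a winning strategy for G(phi) iff phi is
# true in the structure in Tarski's sense.  Formulas are nested tuples.

def eloise_wins(phi, A, env=None):
    """True iff Eloise has a winning strategy in the game G(phi) on A."""
    env = env or {}
    op = phi[0]
    if op == 'atom':                       # Eloise wins at once iff the atom holds
        _, rel, vars_ = phi
        return tuple(env[v] for v in vars_) in A[rel]
    if op == 'not':                        # dual game: the players swap roles
        return not eloise_wins(phi[1], A, env)
    if op == 'or':                         # Eloise chooses a disjunct
        return any(eloise_wins(p, A, env) for p in phi[1:])
    if op == 'and':                        # Abelard chooses a conjunct
        return all(eloise_wins(p, A, env) for p in phi[1:])
    if op == 'exists':                     # Eloise chooses a witness
        _, v, body = phi
        return any(eloise_wins(body, A, {**env, v: a}) for a in A['dom'])
    if op == 'forall':                     # Abelard chooses a challenge
        _, v, body = phi
        return all(eloise_wins(body, A, {**env, v: a}) for a in A['dom'])

# 'For all x there is y such that R(x, y)' on a two-element structure.
A = {'dom': [0, 1], 'R': {(0, 1), (1, 0)}}
phi = ('forall', 'x', ('exists', 'y', ('atom', 'R', ('x', 'y'))))
print(eloise_wins(phi, A))  # -> True
```

The clause for negation computes the dual game as ‘Eloise does not win (G(phi))’; this is legitimate because these games have finite length and hence are determined.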

3.7. Semantic Games As Teaching Tools

Computer implementations of Hintikka’s games proved to be a very effective way of teaching the meanings of first-order sentences. One such package was designed by Jon Barwise and John Etchemendy at Stanford, called ‘Tarski’s World’. Independently another team at the University of Omsk constructed a Russian version for use at schools for gifted children.

3.8. The Dawkins Question Revisited For Semantic Games

In the published version of his John Locke lectures at Oxford, Hintikka in 1973 raised the Dawkins question (see above) for these games. His answer was that one should look to Wittgenstein’s language games, and the language games for understanding quantifiers are those which revolve around seeking and finding.

3.9. Game-Theoretic Semantics (GTS)

Later Jaakko Hintikka extended the ideas of this section in two directions, namely to natural language semantics and to games of imperfect information (see the next section). The name Game-Theoretic Semantics, GTS for short, has come to be used to cover both of these extensions.

3.10. Adaptation To Many-Sorted Logic

The games described in this section adapt almost trivially to many-sorted logic: for example the quantifier (forall x_{sigma}), where (x_{sigma}) is a variable of sort (sigma), is an instruction for player (forall) to choose an element of sort (sigma). This immediately gives us the corresponding games for second-order logic, if we think of the elements of a structure as one sort, the sets of elements as a second sort, the binary relations as a third and so on.

4. Semantic Games With Imperfect Information

Semantic games can be adapted to handle scenarios where players have imperfect information. In this and the next section we look at some adaptations of the semantic games of the previous section to other logics. In our first example, the logic (the independence-friendly logic of Hintikka and Sandu 1997, or more briefly IF logic) was created in order to fit the game. See the entry on independence friendly logic and Mann, Sandu and Sevenster 2011 for fuller accounts of this logic.

4.1. Independence-Friendly (IF) Logic

In IF logic, players may not have complete knowledge of previous moves. The games here are the same as in the previous section, except that we drop the assumption that each player knows the previous history of the play. For example we can require a player to make a choice without knowing what choices the other player has made at certain earlier moves. The classical way to handle this within game theory is to make restrictions on the strategies of the players.

4.2. Restrictions On Strategies

Imperfect information is handled by imposing restrictions on players’ strategies. To make a logic that fits these games, we use the same first-order language as in the previous section, except that a notation is added to some quantifiers (and possibly also some connectives), to show that the Skolem functions for these quantifiers (or connectives) are independent of certain variables.

4.3. Example Of Imperfect Information

For example the sentence

[ (forall x)(exists y/ forall x)R(x, y) ]

is read as: “For every (x) there is (y), not depending on (x), such that (R(x, y))”.

4.4. Gale-Stewart Theorem And Imperfect Information

There are three important comments to make on the distinction between perfect and imperfect information. The first is that the Gale-Stewart theorem holds only for games of perfect information. Suppose for example that (forall) and (exists) play the following game on a structure with two elements 0 and 1: first (forall) chooses an element for (x), and then (exists), without being told what (forall) chose, chooses an element for (y); player (exists) wins if and only if the two chosen elements are equal. Neither player has a winning strategy, so the game is not determined.
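The standard non-determined game of imperfect information can be checked by brute force. In this minimal sketch (the encoding is an assumption of mine), a strategy for either player in the game behind ((forall x)(exists y/forall x)(x=y)) on the domain {0, 1} is just a constant choice, since neither move may use information the player does not have.

```python
# Abelard picks x; Eloise, not seeing x, picks y; Eloise wins iff x == y.
# With the information restriction, each player's strategy is a constant.

domain = [0, 1]

# Eloise has a winning strategy iff some constant y beats every x.
eloise_wins = any(all(x == y for x in domain) for y in domain)
# Abelard has a winning strategy iff some constant x beats every y.
abelard_wins = any(all(x != y for y in domain) for x in domain)

print(eloise_wins, abelard_wins)  # -> False False: the game is not determined
```

This is just matching pennies in logical dress: neither constant strategy wins against all replies, so determinacy fails.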

4.5. Winning Strategies And Available Information

Winning strategies in games of perfect information may not use all available information. One corollary is that Hintikka’s justification for reading negation as dualising (‘players swap places’), in his games for first-order logic, doesn’t carry over to IF logic. Hintikka’s response has been that dualising was the correct intuitive meaning of negation even in the first-order case, so no justification is needed.

4.6. Signaling In Imperfect Information Games

Signaling occurs in games of imperfect information when a player uses one of her own earlier moves to pass information to a later move. Hodges 1997 revised the notation so that, for example, ((exists y/x)) means: “There is (y) independent of (x), regardless of which player chose (x)”. Consider now the sentence

[ (forall x)(exists z)(exists y/x)(x=y), ] played again on a structure with two elements 0 and 1. Player (exists) can win as follows. For (z) she chooses the same as player (forall) chose for (x); then for (y) she chooses the same as she chose for (z).
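Eloise’s signaling strategy can be verified by enumerating her strategy pairs. In this sketch (the encoding is my own assumption), her choice of (z) may depend on (x), while her choice of (y) may depend only on (z):

```python
# Signaling in (forall x)(exists z)(exists y/x)(x = y) on {0, 1}:
# Eloise's z-move sees x, but her y-move sees only z.  Copying x into z
# and then z into y smuggles the forbidden information through.
from itertools import product

domain = [0, 1]
# Enumerate Eloise's strategies: f maps x to z, g maps z to y.
strategies = product(product(domain, repeat=2), product(domain, repeat=2))
eloise_wins = any(all(g[f[x]] == x for x in domain) for f, g in strategies)
print(eloise_wins)  # -> True: the identity maps f, g signal x through z
```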

4.7. Intuitiveness And Game-Theoretic Definition

There is a dislocation between the intuitive idea of imperfect information and the game-theoretic definition of it in terms of strategies. Intuitively, imperfect information is a fact about the circumstances in which the game is played, not about the strategies. This is a very tricky matter, and it continues to lead to misunderstandings about IF and similar logics.

4.8. Dependence Logic

Väänänen’s logics make it easy to see why one needs sets of assignments, i.e., teams. He has an atomic formula, called a dependence atom, expressing ‘(x) is dependent on (y)’, or more exactly, ‘(x) is totally determined by (y)’. How can we interpret this in a structure, for example the structure of natural numbers?

4.9. Team Semantics

In Väänänen’s logics, semantics are defined using teams. If we identify the team in the obvious sense with a database, these atoms appear in database theory as examples of database constraints. Väänänen 2007 made this idea the basis for a range of new logics for studying the notion of dependence (see entry on dependence logic). In these logics the semantics is defined without games, although the original inspiration comes from the work of Hintikka and Sandu.

5. Semantic Games For Other Logics

Semantic games extend to various other logics, providing versatile tools for analysis. Structures of the following kind give rise to interesting games. The structure (A) consists of a set (S) of elements (which we shall call states, adding that they are often called worlds), a binary relation (R) on (S) (we shall read (R) as arrow), and a family (P_1 ,ldots ,P_n) of subsets of (S). The two players (forall) and (exists) play a game G on (A), starting at a state (s) which is given them, by reading a suitable logical formula (phi) as a set of instructions for playing and for winning.

5.1. Game Rules For Modal Logic

Game rules for modal logic are defined based on states and relations. If (phi) is (P_i), then player (exists) wins at once if (s) is in (P_i), and otherwise player (forall) wins at once. The formulas (psi wedge theta , psi vee theta) and (neg psi) behave as in Hintikka’s games above; for example (psi wedge theta) instructs player (forall) to choose whether the game shall continue as for (psi) or for (theta).

5.2. Truth In Modal Logic

Truth in modal logic is determined by the existence of a winning strategy. Finally we say that the formula (phi) is true at (s) in (A) if player (exists) has a winning strategy for this game based on (phi) and starting at (s). These games stand to modal logic in very much the same way as Hintikka’s games stand to first-order logic. In particular they are one way of giving a semantics for modal logic, and they agree with the usual Kripke-type semantics.
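These game rules translate directly into a recursive model checker. The following Python sketch is an illustration under my own assumptions (the tuple encoding of formulas and the example frame are invented): it computes whether (exists) has a winning strategy from a state, i.e., whether the formula is true there.

```python
# Game semantics for modal logic on A = (S, R, P1,...,Pn): Eloise has a
# winning strategy from state s iff phi is true at s (Kripke semantics).

def eloise_wins(phi, s, R, P):
    op = phi[0]
    if op == 'atom':                 # Eloise wins at once iff s is in P_i
        return s in P[phi[1]]
    if op == 'not':                  # dual game: the players swap roles
        return not eloise_wins(phi[1], s, R, P)
    if op == 'and':                  # Abelard chooses a conjunct
        return all(eloise_wins(p, s, R, P) for p in phi[1:])
    if op == 'or':                   # Eloise chooses a disjunct
        return any(eloise_wins(p, s, R, P) for p in phi[1:])
    if op == 'box':                  # Abelard moves along an arrow
        return all(eloise_wins(phi[1], t, R, P) for t in R.get(s, []))
    if op == 'dia':                  # Eloise moves along an arrow
        return any(eloise_wins(phi[1], t, R, P) for t in R.get(s, []))

R = {0: [1, 2], 1: [], 2: [2]}       # arrows between states 0, 1, 2
P = {'p': {1, 2}}                    # the states where p holds
print(eloise_wins(('box', ('atom', 'p')), 0, R, P))  # -> True
```

Note that the recursion depends only on the current formula and state, never on the history of the play, which reflects the ‘memoryless’ character of winning strategies in these games.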

5.3. Memoryless Winning Strategies

In these games, winning strategies do not require memory of past moves. One interesting feature of these games is that if a player has a winning strategy from some position onwards, then that strategy never needs to refer to anything that happened earlier in the play. It’s irrelevant what choices were made earlier, or even how many steps have been played. So we have what the computer scientists sometimes call a ‘memoryless’ winning strategy.

5.4. Logic Of Games

In the related ‘logic of games’, proposed by Rohit Parikh, games that move us between the states are the subject matter rather than a way of giving a truth definition. These games have many interesting aspects. In 2003 the journal Studia Logica ran an issue devoted to them, edited by Marc Pauly and Parikh.

5.5. Logic For Analyzing Decision Making

Influences from economics and computer science have led a number of logicians to use logic for analysing decision making under conditions of partial ignorance. (See for example the article on epistemic logic.) There are several ways to represent states of knowledge.

6. Back-And-Forth Games

Back-and-forth games provide a structural condition for elementary equivalence. In 1930 Alfred Tarski formulated the notion of two structures (A) and (B) being elementarily equivalent, i.e., that exactly the same first-order sentences are true in (A) as are true in (B).

6.1. Tarski’s Vision Of Elementary Equivalence

Alfred Tarski aimed to develop a theory of elementary equivalence as deep as isomorphism. At a conference in Princeton in 1946 he described this notion and expressed the hope that it would be possible to develop a theory of it that would be ‘as deep as the notions of isomorphism, etc. now in use’ (Tarski 1946).

6.2. Ehrenfeucht-Fraïssé Games

Ehrenfeucht-Fraïssé games are used to determine elementary equivalence. The games are now known as Ehrenfeucht-Fraïssé games, or sometimes as back-and-forth games. They have turned out to be one of the most versatile ideas in twentieth-century logic. They adapt fruitfully to a wide range of logics and structures.

6.3. Game Rules And Players

The players are Spoiler and Duplicator. In a back-and-forth game there are two structures (A) and (B), and two players who are commonly called Spoiler and Duplicator. (The names are due to Joel Spencer in the early 1990s. More recently Neil Immerman suggested Samson and Delilah, using the same initials; this places Spoiler as the male player (forall) and Duplicator as the female (exists).) Each step in the game consists of a move of Spoiler, followed by a move of Duplicator.

6.4. Winning Conditions For Spoiler And Duplicator

Spoiler wins if some atomic formula distinguishes the chosen elements of the two structures; Duplicator wins by maintaining the similarity. After (n) steps the players have chosen elements (a_0 ,ldots ,a_{n-1}) of (A) and (b_0 ,ldots ,b_{n-1}) of (B). This position is a win for Spoiler if and only if some atomic formula (of one of the forms ‘(R(v_0 ,ldots ,v_{k-1}))’ or ‘(mathrm{F}(v_0 ,ldots ,v_{k-1}) = v_k)’ or ‘(v_0 =v_1)’, or one of these with different variables) is satisfied by ((a_0 ,ldots ,a_{n-1})) in (A) but not by ((b_0 ,ldots ,b_{n-1})) in (B), or vice versa.

6.5. Back-And-Forth Equivalence

Back-and-forth equivalence is achieved when Duplicator has a winning strategy. All these games are determined, by the Gale-Stewart Theorem. The two structures (A) and (B) are said to be back-and-forth equivalent if Duplicator has a winning strategy for (EF(A, B)), and m-equivalent if she has a winning strategy for (EF_m (A, B)).

6.6. Elementary Equivalence

If (A) and (B) are (m)-equivalent for every natural number (m), then they are elementarily equivalent. In fact, if Eloise has a winning strategy (tau) in the Hintikka game G((phi)) on (A), where the nesting of quantifier scopes of (phi) has at most m levels and Duplicator has a winning strategy (varrho) in the game (EF_m (A, B)), the two strategies (tau) and (varrho) can be composed into a winning strategy of Eloise in G((phi)) on (B).
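For finite structures the games (EF_m (A, B)) can be played out exhaustively. Here is a minimal sketch for structures with a single binary relation (the encoding and the cycle examples are my own assumptions): Duplicator wins from a position iff the chosen tuples form a partial isomorphism and she can answer every Spoiler move for the remaining steps.

```python
# Duplicator's winning condition in EF_m(A, B) for finite structures with
# one binary relation, computed by exhaustive search.

def partial_iso(abar, bbar, RA, RB):
    """Do the chosen tuples form a partial isomorphism?"""
    return all((abar[i] == abar[j]) == (bbar[i] == bbar[j]) and
               ((abar[i], abar[j]) in RA) == ((bbar[i], bbar[j]) in RB)
               for i in range(len(abar)) for j in range(len(abar)))

def duplicator_wins(m, abar, bbar, domA, domB, RA, RB):
    if not partial_iso(abar, bbar, RA, RB):
        return False                      # Spoiler has already won
    if m == 0:
        return True
    # Spoiler may move in A (Duplicator replies in B) or in B (reply in A).
    return (all(any(duplicator_wins(m - 1, abar + [a], bbar + [b],
                                    domA, domB, RA, RB) for b in domB)
                for a in domA) and
            all(any(duplicator_wins(m - 1, abar + [a], bbar + [b],
                                    domA, domB, RA, RB) for a in domA)
                for b in domB))

RA = {(0, 1), (1, 2), (2, 0)}             # a directed 3-cycle
RB = {(0, 1), (1, 2), (2, 3), (3, 0)}     # a directed 4-cycle
print(duplicator_wins(1, [], [], range(3), range(4), RA, RB))  # -> True
print(duplicator_wins(2, [], [], range(3), range(4), RA, RB))  # -> False
```

The directed 3-cycle and 4-cycle are 1-equivalent but not 2-equivalent: once Spoiler picks two opposite points of the 4-cycle, Duplicator cannot find two distinct points of the 3-cycle with no arrow between them.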

6.7. Lindström’s Theorem

The famous Lindström’s Theorem (see entry on model theory) uses fundamental properties of the game (EF_m (A, B)) to give a model-theoretic characterization of first-order logic: it is the maximal logic satisfying the Compactness Theorem and the Downward Löwenheim-Skolem Theorem.

6.8. Adjustments For Different Equivalences

The game can be adjusted to capture equivalence in other logics. For example Barwise, Immerman and Bruno Poizat independently described a game in which the two players have exactly (p) numbered pebbles each; each player has to label his or her choices with a pebble, and the two choices in the same step must be labelled with pebbles carrying the same number. As the game proceeds, the players will run out of pebbles and so they will have to re-use pebbles that were already used.

6.9. Applications In Computer Science And Semantics

These games are one of the few model-theoretic techniques that apply as well to finite structures as they do to infinite ones, and this makes them one of the cornerstones of theoretical computer science. One can use them to measure the expressive strength of formal languages, for example database query languages.

6.10. Nadel’s Investigation On Equivalence Relations

This raises the converse question: if an Ehrenfeucht-Fraïssé game is given, with rules specifying what the moves are and who wins each play, is it necessarily the elementary equivalence relation with respect to some logical language? In other words, can the binary similarity relation between models, offered by some variant of the Ehrenfeucht-Fraïssé game, always be turned into a relation between a sentence and a model? Mark Nadel 1980 investigates this question.

6.11. Shelah’s Infinitary Logic

Interestingly, Saharon Shelah 2012 defines a new infinitary logic by only giving its Ehrenfeucht-Fraïssé game.

6.12. Modal Semantics

There is also a kind of back-and-forth game that corresponds to our modal semantics above in the same way as Ehrenfeucht-Fraïssé games correspond to Hintikka’s game semantics for first-order logic. The players start with a state (s) in the structure (A) and a state (t) in the structure (B). Spoiler and Duplicator move alternately, as before.

6.13. Bisimulation

Let (Z) be a binary relation which relates states of (A) to states of (B). Then we call (Z) a bisimulation between (A) and (B) if Duplicator can use (Z) as a nondeterministic winning strategy in the back-and-forth game between (A) and (B) where the first pair of moves of the two players is to choose their starting states.
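The largest bisimulation between two finite structures can be computed as a greatest fixed point: start from all pairs of states that agree on the atomic propositions, then repeatedly delete pairs that fail the back-and-forth conditions. A hedged Python sketch, with example structures and encoding assumed by me:

```python
# Compute the largest bisimulation Z between two finite Kripke structures.
# PA and PB map each state to the set of propositions true at it.

def largest_bisimulation(SA, RA, PA, SB, RB, PB):
    Z = {(s, t) for s in SA for t in SB if PA[s] == PB[t]}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(Z):
            # 'forth': every arrow from s is matched by an arrow from t.
            forth = all(any((s2, t2) in Z for t2 in RB.get(t, []))
                        for s2 in RA.get(s, []))
            # 'back': every arrow from t is matched by an arrow from s.
            back = all(any((s2, t2) in Z for s2 in RA.get(s, []))
                       for t2 in RB.get(t, []))
            if not (forth and back):
                Z.discard((s, t))
                changed = True
    return Z

# A one-state loop and a two-state loop: bisimilar, hence they satisfy
# the same modal formulas at their related states.
SA, RA, PA = [0], {0: [0]}, {0: {'p'}}
SB, RB, PB = [0, 1], {0: [1], 1: [0]}, {0: {'p'}, 1: {'p'}}
print(sorted(largest_bisimulation(SA, RA, PA, SB, RB, PB)))  # -> [(0, 0), (0, 1)]
```

Viewed through the game, the surviving set (Z) is exactly a nondeterministic winning strategy for Duplicator in the back-and-forth game between the two structures.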

7. Other Model-Theoretic Games

Model-theoretic games provide mathematicians with tools to construct and analyze structures. The logical games in this section are mathematicians’ tools, but they have some conceptually interesting features.

7.1. Forcing Games

Forcing games build infinite structures with controlled properties. Forcing games are also known to descriptive set theorists as Banach-Mazur games; see the references by Kechris or Oxtoby below for more details of the mathematical background. Model theorists use them as a way of building infinite structures with controlled properties.

7.1.1. Model Existence Game

In the beginning a countably infinite set (C) of new constant symbols (a_0, a_1, a_2), etc., is fixed. (exists) defends a disjunction by choosing one disjunct, and an existential statement by choosing a constant from (C) as a witness. (forall) can challenge a conjunction by choosing either conjunct, and a universal statement by choosing an arbitrary witness from (C). (exists) wins if no contradictory atomic sentences are played.

7.1.2. General Forcing Game

To sketch the idea of the general Forcing Game, imagine that a countably infinite team of builders are building a house (A). Each builder has his or her own task to carry out: for example to install a bath or to wallpaper the entrance hall. Each builder has infinitely many chances to enter the site and add some finite amount of material to the house; these slots for the builders are interleaved so that the whole process takes place in a sequence of steps counted by the natural numbers.

7.1.3. Enforceable Properties

A possible property P of (A) is said to be enforceable if a builder who is given the task of making P true of (A) has a winning strategy. A central point (due essentially to Ehrenfeucht) is that the conjunction of a countably infinite set of enforceable properties is again enforceable.

7.1.4. Löwenheim-Skolem Theorems

Various Löwenheim-Skolem Theorems of model theory can be proved using variants of the Forcing Game. In these variants we do not construct a model but a submodel of a given model. We start with a big model (M) for a sentence (or a countable set of sentences) (phi). Then we list the subformulas of (phi) and each player has a subformula with a free variable to attend to.

7.1.5. Origin Of The Name “Forcing”

The name ‘forcing’ comes from an application of related ideas by Paul Cohen to construct models of set theory in the early 1960s. Abraham Robinson adapted it to make a general method for building countable structures, and Martin Ziegler introduced the game setting. Later Robin Hirsch and Ian Hodkinson used related games to settle some old questions about relation algebras.

7.2. Cut-And-Choose Games

Cut-and-choose games are fundamental in the theory of definitions. Suppose we have a collection (A) of objects and a family (S) of properties; each property cuts (A) into the set of those objects that have the property and the set of those that don’t. Let (exists) cut, starting with the whole set (A) and using a property in (S) as a knife; let (forall) choose one of the pieces (which are subsets of (A)) and give it back to (exists) to cut again, once more using a property in (S); and so on. Let (exists) lose as soon as (forall) chooses an empty piece.

7.2.1. Rank

We say that ((A, S)) has rank at most (m) if (forall) has a strategy which ensures that (exists
