
Maarten Maartensz: Philosophical Dictionary | Filosofisch Woordenboek - L - Logic - Basic

Basic Logic: A non-standard basic logic of arguments, statements, terms, abstracts and structures, that covers ordinary propositional logic, predicate logic, quantifier logic, identity logic, and set theory.

1. Fundamental assumptions

In some ways, logic is fundamental to everything else, but this does not mean that logic can be stated without prior assumptions. Here, unlike in standard treatments, logic is taken as a specific extension of natural language that arises when three things are done: the natural language is extended with variables; it is declared which expressions are part of logic; and some rules of inference are assumed for the logical expressions.

Thus one starts by assuming that one has some natural language - English, in the present case - that one knows fairly well, including its basic logical terms and their common usages, and then attempts to state rules for logical inference by adding variables to the natural language and by defining what count as the well-formed and meaningful expressions of the extended language.

Another fundamental assumption is that, from natural language, one knows and is familiar with the concept of a sequence, and in particular a sequence of terms, such as make up a statement in natural language. The terms in a sequence of terms are separated from each other by spaces or commas, and are themselves sequences of letters without the comma or the space.

2. Symbols

Variables for things:      lowercase letters with or without numerical suffix
Variables for statements:  uppercase letters with or without numerical suffix
Variables for structures:  uppercase letters with or without numerical suffix

A variable is a term that differs from all non-variables and may be replaced conditionally by other variables of the same kind, or by constants of that kind.
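The conventions above - expressions as sequences of terms, and kinds of variables told apart by letter case - can be sketched in code. The following is a minimal illustration only; the classifier names and the treatment of suffixes are my assumptions, not part of the text:

```python
import re

# The logical constants named in the section that follows;
# listed here explicitly as an illustrative assumption.
CONSTANTS = {'|-', '|=', '&', 'V', '~', '..', '|', '='}

def kind(term):
    """Classify a term as in section 2: lowercase letters (with an
    optional numerical suffix) are variables for things; uppercase
    letters are variables for statements or structures; listed
    logical constants are constants."""
    if term in CONSTANTS:
        return 'constant'
    if re.fullmatch(r'[a-z]+\d*', term):
        return 'thing-variable'
    if re.fullmatch(r'[A-Z][A-Za-z]*\d*', term):
        return 'statement-or-structure-variable'
    return 'other'

def terms(expression):
    """An expression is a sequence of terms separated by spaces."""
    return expression.split()
```

Note that 'V' is both an uppercase letter and a logical constant; checking the constants first resolves the ambiguity in the constant's favour.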
The kind of a variable depends on what sort of term it may be replaced with: terms for things or classes of things, which are name-like or noun-like in English; terms for possible facts, which are statements; or terms for what remains of a statement when one disregards one or more of its variables, for which we use the term structures, which covers predicates and relations.

Logical constants

Logical constants: '|-' '|=' '&' 'V' '~' '..' '|' '='

There are special symbols to be introduced, next to the variables, that are used for logical constants. Their readings are as follows, with the meanings associated to them by trained speakers of English, but to be rendered more precise by the rules that follow.

'T1 .. Tn'        for 'a sequence of T1 .. Tn'
'|- S'            for 'One may write that S'
'S1 .. Sn |= T'   for 'If one may write that S1 .. Sn, then one may write that T'
'S1&S2'           for 'S1 and S2'
'S1VS2'           for 'S1 or S2'
'~S1'             for 'not S1'
'Sa|Sb.S'         for 'S with Sa uniformly substituted for Sb in S'
'S1 --> S2'       for 'S1 implies S2'
'S1 IFF S2'       for 'S1 if and only if S2'

As just pointed out, the symbolic expressions on the left are made more precise by the rules that follow, but are to be taken as having a meaning that is adequately rendered on the right. The notation for sequences presupposes that one supplies what is missing on the dots, wherever necessary, from the suggestion made by the suffixes. Thus 'T1 .. T4' is short for 'T1 T2 T3 T4' and 'Sf .. Sj' is short for 'Sf Sg Sh Si Sj'.

Grammar of propositional logic

To obtain a logic for statements or propositions, we need to specify what counts as a proposition. Here is a convenient set of principles that says how to obtain more complex propositions from simpler ones, namely by introducing logical constants:

G1. If (P) is a proposition, (~(P)) is a proposition.
G2. If (P) and (Q) are propositions, (P&Q) and (PVQ) are propositions.
G3. Nothing else is a proposition unless by explicit definition.

Here it is not laid down what propositions are, but only how to obtain propositions with logical constants from given propositions.

Substitution rules

To formulate rules for substitution it is convenient to introduce some definitions. First, the term 'expression' stands for 'a sequence of terms', regardless of whether we can attribute a meaning to the expression. Next, if A and B are two expressions, then B is said to be a variant of A if and only if whenever the i-th and the j-th terms of A are the same, so are the i-th and the j-th terms of B, and whenever the i-th term of A is a constant, the i-th term of B is the same constant. Intuitively speaking, a variant of A is just the same as A except for variables, and the variables of a variant of A preserve the pattern of the variables in A.

|- If |-S and S' is a variant of S, then |-S'

This is the basic rule of substitution. To be valid it must have the property that in any case where S is true, S' is also true, and that depends on the rules that allow one to make substitutions according to the above general rule. There is also need for another rule, since variants are always of the same length:

|- If |-Sa then |-Sb|Sc.Sa, if Sb does not contain any variable contained in Sa.

This says that if Sa may be written, so may be the result of any uniform substitution in Sa. It enables the inference of e.g. (Sx&Sy)V~(Sx&Sy) from (SzV~Sz), substituting Sx&Sy for Sz. Below there will be a more refined treatment of substitution, when variables for terms and structures are also considered.

Argument rules

|- (Si |= Si)
|- ((Sk |= Sn) |= (~Sk V Sn))
|- ((~Sk V Sn) |= (Sk |= Sn))

These rules insist that one may always write that a statement can be inferred from itself, and give ways to restate what inferring amounts to in terms of other logical constants, namely not and or. Any statement of the form '|- (A1 .. An |= C)' is called an inference rule, with each of the A1 .. An a premiss, and C the conclusion.

Assumption rules

|- (Si*)
|- ((S Si* .. Sn) |= (S Si |= Sn)) - if Si is the last assumption in (S Si* .. Sn)

The first rule says one may always make an assumption; the star indicates that Si is in fact inferred by the rule that one may assume what one pleases, and is an assumption. This is called the rule of assumption introduction. The second rule states and shows how to get rid of assumptions in a sequence of inferences: if Si is the last assumption in the sequence, all conclusions after it are entailed by it (and by what precedes it, indicated by S). This is called the rule of assumption elimination.

Conjunction rules

|- (Si .. Sk) |= (Si&Sk)
|- (Si&Sk)    |= (Si)
|- (Si&Sk)    |= (Sk)

The rules for 'and' as used between statements: if one has already inferred both Si and Sk, one may also infer their conjunction, and if one has inferred a conjunction, one may infer each of the conjuncts.

Disjunction rules

|- (Si)                  |= (SiVSj)
|- (Sj)                  |= (SiVSj)
|- (SiVSj .. ~SiVSj)     |= (Sj)

The rules for 'or' as used between statements: if one has inferred a statement, one can infer it as part of a disjunction with an arbitrary statement, and if one has inferred two disjunctions of the forms SiVSj and ~SiVSj, then one may infer Sj directly.

Definitions of logical constants

|- 'Si --> Sj' for '~SiVSj'
|- 'Si IFF Sj' for '(Si --> Sj) & (Sj --> Si)'

The above rules are sufficient to derive the standard systems of propositional logic in terms of proof theory, i.e. the standard theorems of classical propositional logic can be proved from the above rules. (See: MT 2.)

Structures

Logical constants: '=' '≠' 'Z[t1 .. tn]' 'x|y.' '|x:' 'Ø' '(Ex)' '(x)' '#'

The above related to the logic of statements. Here we arrive at the logic of terms and structures, which needs more terminology, listed above and explained below.
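That rules like the above are valid - that their conclusions cannot be false when all their premisses are true - can be checked semantically by brute force over truth values. A minimal sketch; the helper name `entails` and the encoding of statements as Boolean functions are mine, not the text's:

```python
from itertools import product

def entails(n_vars, premisses, conclusion):
    """True iff, for every assignment of truth values to n_vars
    variables, the conclusion is true whenever all premisses are."""
    for vals in product([True, False], repeat=n_vars):
        if all(p(*vals) for p in premisses) and not conclusion(*vals):
            return False
    return True

# |- (SiVSj .. ~SiVSj) |= (Sj): from SiVSj and ~SiVSj one may infer Sj.
disjunction_rule = entails(
    2,
    [lambda si, sj: si or sj, lambda si, sj: (not si) or sj],
    lambda si, sj: sj)

# |- (Si&Sk) |= (Si): from a conjunction one may infer a conjunct.
conj_elim = entails(2, [lambda si, sk: si and sk], lambda si, sk: si)
```

By contrast, `entails(2, [lambda si, sj: si or sj], lambda si, sj: sj)` is False: from SiVSj alone one may not infer Sj, which is why the disjunction rule needs both disjunctions as premisses.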
Readings of logical constants

'a=b'                     for 'a equals b'
'a≠b'                     for 'a does not equal b'
'Z[t1 .. tn]'             for 'Z relates t1 .. tn' or 'Z t1 .. tn'
'x|y.Z[t1 .. y .. tn]'    for 'x substitutes for y in Z[t1 .. y .. tn]'
'|x:Z[y1 .. x .. yn]'     for 'the x such that Z[y1 .. x .. yn]'
'Ø'                       for '|x:(x≠x)'
'(Ex).S[y1 .. x .. yn]'   for 'there is an x such that S[y1 .. x .. yn]'
'(x).S[y1 .. x .. yn]'    for 'every x is such that S[y1 .. x .. yn]'
'#(Y)'                    for 'the number of Y'

The signs and readings for equality and inequality are standard. The point of the notation 'Z[t1 .. tn]' is that it obtains a structure-term from a statement by replacing some constant(s) by some variable(s) in the statement: the structure-term is the statement apart from the variable(s), and indeed it makes sense to say that the structure-term relates the other terms in the statement. An alternative name for this notation is formula: a statement with one or more variables as terms, or obtained from such a statement by substitution for the variables.

The point of 'x|y.Z[t1 .. y .. tn]' is to have an explicit symbolism for substitutions and replacements of terms in structure-terms or formulas. The notation '|x:S[y1 .. x .. yn]' codifies the idea of abstraction, and does so by using formulas and a prefix that is used in binary form for substitutions. The term 'Ø' is a standard mathematical symbol for 'nothing', and is here defined using abstraction and identity: the term nothing represents the same as what the things that are not equal to themselves represent. The terms '(Ex).S[y1 .. x .. yn]' and '(x).S[y1 .. x .. yn]' are the standard quantifiers 'for some x' and 'for every x', here rendered as prefixes of formulas in which 'x' occurs somewhere. Finally, the notation '#(Y)' abbreviates the phrase 'the number of Y' and is written using standard functional notation.
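Over a finite domain, the readings just given can be modelled directly with sets: '|x:S[x]' becomes the set of things satisfying S, 'Ø' the empty set, and the quantifiers become (in)equalities with Ø. A sketch under that assumption; the domain and predicates are invented for illustration:

```python
# A small finite domain, assumed only for illustration.
DOMAIN = {0, 1, 2, 3, 4, 5}

def abstract(S):
    """'|x:S[x]': the x (in DOMAIN) such that S[x]."""
    return {x for x in DOMAIN if S(x)}

# 'Ø' is '|x:(x≠x)' - nothing is unequal to itself.
EMPTY = abstract(lambda x: x != x)

def some(S):
    """'(Ex).S[x]': the things that are S do not equal nothing."""
    return abstract(S) != EMPTY

def every(S):
    """'(x).S[x]': the things that are not S equal nothing."""
    return abstract(lambda x: not S(x)) == EMPTY
```

For example, `every(lambda x: x >= 0)` is True on this domain while `every(lambda x: x > 0)` is False, since the things that are not greater than 0 - namely {0} - do not equal nothing.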
Grammar of predicate logic

G1. If P is an n-place predicate and s1 .. sn are n subjects, P(s1 .. sn) is a proposition.
G2. If P(s1 .. sn) is a proposition and one or more of s1 .. sn is replaced by a subject variable, the result is a formula.
G3. Formulas are propositions.
G4. If Z is a formula and v1 .. vn are subject-variables, then |v1 .. vn: Z is a subject-term.
G5. If s1 and s2 are subject-terms, s1=s2 is a formula.

Structural rule

|- (t1)..(tn)(v1)..(vk)(ET)((t1 .. v1 .. ti .. vk .. tn)=T[v1 .. vk])

Here the notion of a sequence of terms is in fact used to claim that every sequence of terms that contains some variables can be identified with a structure-term followed by the variables. Linguistically speaking, this amounts to the assumption that one can introduce some convenient abbreviation for any long formula.

Substitution rules for equalities

|- ('A' for 'B'                  IFF A=B)

This rule takes care of quite a number of occurrences of the phrase 'for' in the above, for it restates these in terms of equalities, which are governed by the following rules:

|- ( T[..x..] & x=y              IFF T[..y..] & x=y )
|- ( a|b.c = c                   IFF b≠c )
|- ( a|b.c = a                   IFF b=c )

The first rule lays down what an equality amounts to for a term in a formula, and the other two rules govern equalities for substitutions of terms for terms. All three of these rules are used together with the three that follow.

Substitution rules and variants

Variants were mentioned above, and can be defined here using 'CON' for 'constant', which includes logical terms:

|- ((b1 .. bn) e var(a1 .. an) IFF (i)(ai e CON --> bi=ai) & (i)(j)(ai=aj IFF bi=bj))

A sequence of terms B is a variant of a sequence of terms A precisely if whenever the i-th term of A is a constant the i-th term of B is the same constant, and whenever the i-th and the j-th terms of A are the same, so are the i-th and the j-th terms of B.
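The definition of a variant is concrete enough to check mechanically. A sketch that follows the two clauses of the rule directly; representing sequences of terms as Python lists, and giving 'CON' as an explicit set, are my assumptions for illustration:

```python
# An illustrative set of constants ('CON' in the rule above).
CON = {'|-', '|=', '&', 'V', '~', '..', '|', '=', '(', ')'}

def is_variant(b, a):
    """(b1 .. bn) e var(a1 .. an): b and a have the same length,
    every constant of a recurs in the same place in b, and a's
    pattern of repeated terms is exactly mirrored in b."""
    if len(a) != len(b):
        return False                      # variants have the same length
    for i, ai in enumerate(a):
        if ai in CON and b[i] != ai:      # (i)(ai e CON --> bi=ai)
            return False
    for i in range(len(a)):               # (i)(j)(ai=aj IFF bi=bj)
        for j in range(len(a)):
            if (a[i] == a[j]) != (b[i] == b[j]):
                return False
    return True
```

Thus `['Sx', 'V', '~', 'Sx']` is a variant of `['Sz', 'V', '~', 'Sz']`, but `['Sx', 'V', '~', 'Sy']` is not, because the repetition pattern of the variables is not preserved.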
The general rule of substitution of variants can now be stated thus:

|- (((A') e var(A)) & (A)) |= (A')

In words: a variant of a theorem is a theorem. It is in fact this rule that gives variables a general interpretation and that allows reasoning by substitutions in established formulas. It may also be noted that the definition of variant implies that where y is a variable and y|x.T[x] holds, and c is a constant, c|x.T[x] is a variant of y|x.T[x]. Thus, if some formula holds for every substitution, it holds also for every constant.

Substitution rules for terms

|- ( x|y.(t1 .. tn)                      IFF (x|y.t1 .. x|y.tn) )
|- ( (x1 .. xn)|(y1 .. yn).(t1 .. tn)    IFF x1|y1 .. xn|yn.(t1 .. tn) )
|- ( x1|y1 x2|y2 .. xn|yn.(t1 .. tn)     IFF x2|y2 .. xn|yn.(x1|y1.t1 .. x1|y1.tn) )

The first of these states what a substitution of a term in a sequence amounts to: a sequence where each term gets substituted when appropriate (as governed by the rules in the previous group). The second is a convenience that allows one to switch between a shorter and a longer formulation for sequences of substitutions. The third governs how a sequence of substitutions for a formula is to be done: the leftmost substitution is made first in the formula (and skips the other substituends).

Substitution rules for statements and structures

|- ( Sj |= Sk                            IFF a|b.Sj |= a|b.Sk )
|- ( a|b.S[..b..]                        IFF S[..b..] |= S[..a..] )

The first of these states for terms what was earlier stated for statements, and the second shows and states what is involved in valid substitutions in truths: one moves from truth to truth.

Abstractions and elements

|- ( |x:S[x] = |x:T[x]                   IFF y|x.(S[x] IFF T[x]) )
|- ( (a1 .. an) e |x1 .. xn: S[x1 .. xn] IFF a1|x1 .. an|xn. S[x1 .. xn] )

The first effectively explains abstraction in terms of equality and substitutions in formulas: two abstracts are equal precisely if the equivalence of their formulas is true under arbitrary substitutions (of the proper kinds of terms). It is well to notice here that y|x.(S[x] IFF T[x]) is general, given the rule of substitution: if one can prove this, where y is a variable, one can therefore prove z|x.(S[x] IFF T[x]) or any other alphabetical variant of variables. The second explains elementhood in terms of abstraction. Here it is restated for the case n=1: a1 is an element of the x1 that are S[x1] precisely if a1 can be validly substituted for x1 in S[x1], i.e. if and only if S[a1].

Quantified structures

|- ( (x).S(..x..)                        IFF |x:~S(..x..) = Ø )
|- ( (x).~S(..x..)                       IFF |x:S(..x..) = Ø )
|- ( (Ex).S(..x..)                       IFF |x:S(..x..) ≠ Ø )
|- ( (Ex).~S(..x..)                      IFF |x:~S(..x..) ≠ Ø )

The standard quantifiers - for every and for some - are in effect rendered by equalities or inequalities of structures with nothing: every thing is S precisely if the things that are not S equal nothing; every thing is not S precisely if the things that are S equal nothing; some thing is S precisely if the things that are S do not equal nothing, and some thing is not S precisely if the things that are not S do not equal nothing.

Structures, functions and mappings

|- Structure[Z X1 .. Xn] IFF (y1)..(yn)((y1 .. yn) e |x1..xn:Z[x1..xn] IFF
                                        y1eX1 & .. & yneXn & Z[y1..yn])
|- Function[F X1 .. Xn]  IFF Structure[F X1 .. Xn] &
                             (y1)..(ym)(yn)(yx)( ((y1..ym yn) e F &
                                                 (y1..ym yx) e F) |= yn=yx)
|- Mapping[F X1 X2]      IFF Function[F X1 X2] & Function[F X2 X1]
|- #(Si)=#(Sj)           IFF (EF)(Mapping[F Si Sj])
|- F(x1 .. xm)=y         IFF (EX1) .. (EXm)(EXn)(Function[F X1 .. Xm Xn] &
                                                 (x1 .. xm y) e F)

This concerns the tools and assumptions for the basics of mathematics. A Structure made up of Z and X1 .. Xn is in effect the sequences of things that satisfy a formula while being of the appropriate kinds X1 .. Xn. A Function is a structure-term with a special property: its last term is in any case unique. And a Mapping is a binary function between X1 and X2 that holds both ways. The number of Si equals the number of Sj precisely if there is a mapping between Si and Sj. The last assumption is a convenience that explains the standard functional notation F(x1 .. xm)=y.

Arguments

An argument in BL is a sequence of numbered statements with the property that each statement in it is inferred by assumption introduction, or is the conclusion in a variant of an assumed or proved inference rule whose premisses occur before the conclusion in the argument, or is an instance of an already proved formula.

Note that this implies every argument must start from assumptions. Also note that it does not imply, without further assumption, that what is correctly proved by the above criterion is also validly proved - though this does follow if one has proved that all the rules of inference are valid, i.e. have the property that their conclusions cannot fail to be true if their premisses are all true. A further point to notice is that, given a sequence of statements in which the assumptions are marked, and a sequence of rules of inference, one can mechanically decide by the above criterion whether or not the sequence of statements is a correct argument.
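The mechanical decidability of correctness can be illustrated with a small checker. This is a deliberately crude sketch: statements are plain strings and rules are (premisses, conclusion) pairs, with no variant matching - all of which is my simplifying assumption, not the text's system:

```python
def is_correct_argument(lines, rules):
    """Decide whether a sequence of statements is a correct argument:
    each line must be a marked assumption, or the conclusion of a
    rule all of whose premisses occur earlier in the argument.

    lines: list of (statement, is_assumption) pairs, in order.
    rules: list of (list-of-premiss-statements, conclusion) pairs.
    """
    proved = []
    for statement, is_assumption in lines:
        if is_assumption:                 # assumption introduction
            proved.append(statement)
            continue
        # otherwise some rule must yield it from earlier lines
        if not any(conclusion == statement and
                   all(p in proved for p in premisses)
                   for premisses, conclusion in rules):
            return False
        proved.append(statement)
    return True
```

For example, with the conjunction rule encoded as `rules = [(['P', 'Q'], 'P&Q')]`, the argument P (assumed), Q (assumed), P&Q is correct, while an argument that states P&Q before its premisses is not.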
This is important, because this is the basis for one's trust in and reliance on logical argument: One can prove that a purported argument is or is not a correct argument given a series of rules of inference, and one can prove that a purported argument is valid given a series of rules of inference by proving that it is correct and all the rules of inference that were used in the argument are valid. Two reasons why this is important are the following. First, it gives a clear idea of what a proof would be, and indeed a way of checking any purported proof mechanically for correctness. Second, it shifts all problems of deductive inference back to whatever rules of inference were assumed: If only these are valid, then everything that is proved using these rules must be valid too.

See also: Basic Logic - semantics, Logic, Logic Notation

Literature:

Bochenski, Carnap, Cartwright, Hamilton, Hasenjäger, Hilbert & Bernays, Quine, Shoenfield, Slupecki & Borkowski, Tarski, Tennant.

Original: Dec 28, 2004                                                Last edited: 12 December 2011.