Basic Logic: A nonstandard basic logic of arguments, statements, terms, abstracts and structures, that covers ordinary propositional logic, predicate logic, quantifier logic, identity logic, and set theory.
1. Fundamental assumptions
In some ways, logic is fundamental to everything else, but this does not mean that logic can be stated without prior assumptions.
Here, unlike in normal treatments, logic is treated as an extension of natural language that arises when three things are done:

The natural language is extended with variables;

it is declared which expressions are part of logic; and

some rules of inference are assumed for the logical expressions.
Thus one starts by assuming that one has some natural language (English, in the present case) that one knows fairly well, including its basic logical terms and their common usages, and then attempts to state rules for logical inference by adding variables to the natural language and by defining what count as the well-formed and meaningful expressions of the extended language.
Another fundamental assumption is that one knows and is familiar, e.g. from natural language, with the concept of a sequence, and in particular a sequence of terms, such as makes up a statement in natural language. The terms in a sequence of terms are separated from each other by spaces or commas, and are themselves sequences of letters without the comma or the space.
2. Symbols
Variables for things: lowercase letters, with or without a numerical suffix
Variables for statements: uppercase letters, with or without a numerical suffix
Variables for structures: uppercase letters, with or without a numerical suffix
A variable is a term that differs from all non-variables and may be replaced conditionally by other variables of the same kind, or by constants of that kind. The kind of a variable depends on what sort of term it may be replaced with: terms for things or classes of things, which are name-like or noun-like in English; terms for possible facts, which are statements; or terms for what remains of a statement when one disregards one or more of its variables, for which we use the term 'structures', which covers predicates and relations.
Interpunction
Logical constants
Logical constants: '⊢' '⇒' '&' 'V' '~' '..' '>' 'IFF'
There are special symbols to be introduced, next to the variables, that are used for logical constants. Their readings are as follows, with the meanings associated to them by trained speakers of English, but to be rendered more precise by the rules that follow.
'T_{1} .. T_{n}' for 'a sequence of T_{1} .. T_{n}'
'⊢ S' for 'One may write that S'
'S_{1} .. S_{n} ⇒ T' for 'If one may write that S_{1} .. S_{n}, then one may write that T'
'S_{1}&S_{2}' for 'S_{1} and S_{2}'
'S_{1}VS_{2}' for 'S_{1} or S_{2}'
'~S_{1}' for 'not S_{1}'
'S_{a}S_{b}.S' for 'S with S_{a} uniformly substituted for S_{b} in S'
'S_{1} > S_{2}' for 'S_{1} implies S_{2}'
'S_{1} IFF S_{2}' for 'S_{1} if and only if S_{2}'
As just pointed out, the symbolical expressions on the left are made more precise by the rules that follow, but are to be taken as having a meaning that is adequately rendered on the right.
The notation for sequences presupposes that one supplies what is missing at the dots, wherever necessary, from the suggestion made by the suffixes. Thus 'T_{1} .. T_{4}' is short for 'T_{1} T_{2} T_{3} T_{4}' and 'S_{f} .. S_{j}' is short for 'S_{f} S_{g} S_{h} S_{i} S_{j}'.
Grammar of propositional logic
To obtain a logic for statements or propositions, we need to specify what counts as a proposition. Here is a convenient set of principles that says how to obtain more complex propositions from simpler ones, namely by introducing logical constants:
G1. If (P) is a proposition, (~(P)) is a proposition.
G2. If (P) and (Q) are propositions, (P&Q) and (PVQ) are propositions.
G3. Nothing else is a proposition unless by explicit definition.
Here it is not laid down what propositions are, but only how to obtain propositions with logical constants from given propositions.
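The grammar clauses G1-G3 are mechanical, and can be illustrated by a small program. The following is a sketch only, under the assumption (not part of BL) that atomic propositions are represented by strings and compound propositions by nested tuples:

```python
def is_proposition(p, atoms):
    """Check whether p is built from the given atoms by clauses G1 and G2."""
    if isinstance(p, str):                      # a given (atomic) proposition
        return p in atoms
    if isinstance(p, tuple):
        if len(p) == 2 and p[0] == '~':         # G1: negation of a proposition
            return is_proposition(p[1], atoms)
        if len(p) == 3 and p[1] in ('&', 'V'):  # G2: conjunction, disjunction
            return is_proposition(p[0], atoms) and is_proposition(p[2], atoms)
    return False                                # G3: nothing else

atoms = {'P', 'Q'}
print(is_proposition(('~', ('P', '&', 'Q')), atoms))  # True
print(is_proposition(('P', '>', 'Q'), atoms))         # False: '>' is only defined later
```

Note that, as in the text, '>' is not yet a way of forming propositions at this stage; it is introduced by explicit definition further below.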
Substitution rules
To formulate rules for substitution it is convenient to introduce some definitions. First, the term 'expression' stands for 'a sequence of terms', regardless of whether we can attribute a meaning to an expression.
Next, if A and B are two expressions, then B is said to be a variant of A if and only if whenever the ith and the jth terms of A are the same so are the ith and the jth terms of B and whenever the ith term of A is a constant the ith term of B is the same constant.
Intuitively speaking, a variant of A is just the same as A except for variables, and the variables of a variant of A preserve the pattern of the variables in A.
⊢ If ⊢ S and if S' is a variant of S, then ⊢ S'
This is the basic rule of substitution. To be valid it must have the property that whenever S is true, S' is also true, and that depends on the rules that allow one to make substitutions according to the above general rule.
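Being a variant is a mechanically checkable relation, and a sketch of a checker may clarify the definition. The representation (sequences of terms as lists of strings, with a given set of constants) is an assumption for illustration only; the sameness-of-pattern condition is read in both directions, as the formal rule for variants later in the text has it:

```python
def is_variant(b, a, constants):
    """Check whether sequence b is a variant of sequence a."""
    if len(b) != len(a):
        return False  # variants are always of the same length
    for i in range(len(a)):
        if a[i] in constants and b[i] != a[i]:
            return False                        # constants must be preserved
        for j in range(len(a)):
            if (a[i] == a[j]) != (b[i] == b[j]):
                return False                    # the pattern of sameness must be preserved
    return True

constants = {'&', 'V', '~'}
print(is_variant(['Q', '&', 'Q'], ['P', '&', 'P'], constants))  # True
print(is_variant(['Q', '&', 'R'], ['P', '&', 'P'], constants))  # False: pattern broken
print(is_variant(['Q', 'V', 'Q'], ['P', '&', 'P'], constants))  # False: constant changed
```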
There is also need for another rule, since variants are always of the same length:
⊢ If ⊢ S_{a} then ⊢ S_{b}S_{c}.S_{a}, if S_{b} does not contain any variable contained in S_{a}.
This says that if S_{a} may be written, so may the result of any uniform substitution in S_{a}. It enables the inference of e.g. (S_{x}&S_{y})V~(S_{x}&S_{y}) from (S_{z}V~S_{z}) by substituting S_{x}&S_{y} for S_{z}.
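Uniform substitution itself can be sketched in a few lines, again under the illustrative assumption that statements are nested tuples (the names are not part of BL):

```python
def substitute(s_b, s_c, s_a):
    """Return s_a with s_b uniformly substituted for s_c."""
    if s_a == s_c:
        return s_b                                        # replace the substituend itself
    if isinstance(s_a, tuple):
        return tuple(substitute(s_b, s_c, part) for part in s_a)
    return s_a                                            # other terms are untouched

# The example from the text: from (S_z V ~S_z), substitute (S_x & S_y) for S_z.
law = ('Sz', 'V', ('~', 'Sz'))
inst = substitute(('Sx', '&', 'Sy'), 'Sz', law)
print(inst)  # (('Sx', '&', 'Sy'), 'V', ('~', ('Sx', '&', 'Sy')))
```

The substitution is uniform in that every occurrence of S_z is replaced, which is exactly what the condition on shared variables protects.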
Below there will be a more refined treatment of substitution when variables for terms and structures are also considered.
Argument rules
⊢ (S_{i} ⇒ S_{i})
⊢ (S_{k} ⇒ S_{n} ⇒ ~S_{k} V S_{n})
⊢ (~S_{k} V S_{n} ⇒ S_{k} ⇒ S_{n})
These rules insist that one may always write that a statement can be inferred from itself, and give ways to restate what 'inferring from' amounts to in terms of other logical constants, namely 'not' and 'or'.
Any statement of the form '⊢ (A_{1} .. A_{n} ⇒ C)' is called an inference rule; each of A_{1} .. A_{n} is called a premiss, and C the conclusion.
Assumption rules
⊢ (S_{i}*)
⊢ ((S S_{i}* .. S_{n}) ⇒ (S S_{i} ⇒ S_{n})), if S_{i} is the last assumption in (S S_{i}* .. S_{n})
The first rule says one always may make an assumption, and the star indicates that in fact S_{i} is inferred by the rule that one may assume what one pleases, and is an assumption. This is called the rule of assumption introduction.
The second rule states and shows how to get rid of assumptions in a sequence of inferences: If S_{i} is the last assumption in the sequence, all conclusions after it are entailed by it (and by what precedes it, indicated by S). This is called the rule of assumption elimination.
Conjunction rules
⊢ (S_{i} .. S_{k}) ⇒ (S_{i}&S_{k})
⊢ (S_{i}&S_{k}) ⇒ (S_{i})
⊢ (S_{i}&S_{k}) ⇒ (S_{k})
The rules for 'and' as used between statements: If one has already inferred both Si and Sk one may also infer their conjunction, and if one has inferred a conjunction, one may infer each of the conjuncts.
Disjunction rules
⊢ (S_{i}) ⇒ (S_{i}VS_{j})
⊢ (S_{j}) ⇒ (S_{i}VS_{j})
⊢ (S_{i}VS_{j} .. ~S_{i}VS_{j}) ⇒ (S_{j})
The rules for 'or' as used between statements: If one has inferred a statement, one can infer it as part of a disjunction with an arbitrary statement, and if one has inferred two disjunctions of the forms S_{i}VS_{j} and ~S_{i}VS_{j} then one may infer S_{j} directly.
Definitions of logical constants
⊢ 'S_{i} > S_{j}' for '~S_{i}VS_{j}'
⊢ 'S_{i} IFF S_{j}' for 'S_{i} > S_{j} & S_{j} > S_{i}'
The above rules are sufficient to derive the standard systems of propositional logic in terms of proof theory, i.e. the standard theorems of classical propositional logic can be proved from the above rules. (See: MT 2.)
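The classical behaviour of the defined constants, and the validity of the rules in the sense described later (true premisses force a true conclusion), can be checked mechanically by truth tables. A sketch, assuming the usual two classical truth values:

```python
from itertools import product

def implies(a, b):
    """The definition of '>': ~a V b."""
    return (not a) or b

for si, sj in product([True, False], repeat=2):
    # The defined '>' agrees with the usual classical conditional.
    assert implies(si, sj) == (sj if si else True)
    # Validity of the two-disjunction rule: if both premisses
    # S_i V S_j and ~S_i V S_j are true, the conclusion S_j is true.
    if (si or sj) and ((not si) or sj):
        assert sj
print("all cases check")
```

Checks like this one are the semantic counterpart of the proof-theoretic claim just made; they do not replace the derivations referred to in MT 2.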
Structures
Logical constants: '=' '≠' 'Z[t_{1} .. t_{n}]' 'xy.' 'x:' 'Ø' '(Ex)' '(x)' '#'
The above relates to the logic of statements. Here we arrive at the logic of terms and structures, which needs more terminology, listed above and explained below.
Readings of logical constants
'a=b' for 'a equals b'
'a≠b' for 'a does not equal b'
'Z[t_{1} .. t_{n}]' for 'Z relates t_{1} .. t_{n}' or 'Z t_{1} .. t_{n}'
'xy.Z[t_{1} .. y .. t_{n}]' for 'x substitutes for y in Z[t_{1} .. y .. t_{n}]'
'x:Z[y_{1} .. x .. y_{n}]' for 'the x such that Z[y_{1} .. x .. y_{n}]'
'Ø' for 'x:(x≠x)'
'(Ex).S[y_{1} .. x .. y_{n}]' for 'there is an x such that S[y_{1} .. x .. y_{n}]'
'(x).S[y_{1} .. x .. y_{n}]' for 'every x is such that S[y_{1} .. x .. y_{n}]'
'#(Y)' for 'the number of Y'
The signs and readings for equality and inequality are standard.
The point of the notation 'Z[t_{1} .. t_{n}]' is that it obtains a structure-term from a statement by replacing some constant(s) in the statement by some variable(s): The structure-term is the statement apart from the variable(s), and indeed it makes sense to say that the structure-term relates the other terms in a statement. An alternative name for the notation is formula: a statement with one or more variables as terms, or obtained from such a statement by substitution for the variables.
The point of 'xy.Z[t_{1} .. y .. t_{n}]' is to have an explicit symbolism for substitutions and replacements of terms in structure-terms or formulas.
The notation 'x:S[y_{1} .. x .. y_{n}]' codifies the idea of abstraction, and does so by using formulas and a prefix that is used in binary form for substitutions.
The term 'Ø' is a standard mathematical symbol for 'nothing', and is here defined using abstraction and identity: The term nothing represents the same as what the things that are not equal to themselves represent.
The terms '(Ex).S[y_{1} .. x .. y_{n}]' and '(x).S[y_{1} .. x .. y_{n}]' are the standard quantifiers 'for some x' and 'for every x', in the above rendered as prefixes of formulas in which 'x' occurs somewhere.
Finally, the notation '#(Y)' is used to abbreviate the phrase 'the number of Y' and is written using standard functional notation.
Grammar of predicate logic
G1. If P is an n-place predicate and s1 .. sn are n subjects, P(s1 .. sn) is a proposition.
G2. If P(s1 .. sn) is a proposition and one or more of s1 .. sn is replaced by a subject-variable, the result is a formula.
G3. Formulas are propositions.
G4. If Z is a formula and v1 .. vn are subject-variables, then v1 .. vn: Z is a subject-term.
G5. If s1 and s2 are subject-terms, s1=s2 is a formula.
Structural rule
⊢ (t_{1})..(t_{n})(v_{1})..(v_{k})(ET)((t_{1} .. v_{1} .. t_{i} .. v_{k} .. t_{n})=T[v_{1} .. v_{k}])
Here in fact the notion of a sequence of terms is used to claim that every sequence of terms that contains some variables can be identified with a structure-term followed by the variables.
Linguistically speaking, this amounts to the assumption that one can introduce some convenient abbreviation for any long formula.
Substitution rules for equalities
⊢ ('A' for 'B' IFF A=B)
This rule takes care of quite a number of occurrences of the phrase 'for' in the above, for it restates these in terms of equalities, which are governed by the following rules:
⊢ ( T[..x..] & x=y IFF T[..y..] & x=y )
⊢ ( ab.c = c IFF b≠c )
⊢ ( ab.c = a IFF b=c )
The first rule lays down what an equality amounts to for a term in a formula, and the other two rules govern equalities for substitutions of terms for terms. All three of these rules are used together with the three rules that follow.
Substitution rules and variants
Variants were mentioned above, and can be defined here using 'CON' for 'Constant', which includes logical terms:
⊢ (b_{1} .. b_{n}) e var(a_{1} .. a_{n}) IFF (i)(a_{i} e CON > b_{i}=a_{i}) & (i)(j)(a_{i}=a_{j} IFF b_{i}=b_{j})
A sequence of terms B is a variant of a sequence of terms A precisely if whenever the ith term of A is a constant the ith term of B is the same constant, and the ith and the jth terms of A are the same precisely if the ith and the jth terms of B are the same.
The general rule of substitution of variants now can be stated thus:
⊢ { ((A') e var(A)) & ⊢(A) } ⇒ ⊢(A')
In words: A variant of a theorem is a theorem. It is in fact this rule that gives variables a general interpretation and that allows reasoning by substitutions in established formulas.
It may also be noted that the definition of variant implies that where y is a variable and yx.T[x] holds, and c is a constant, cx.T[x] is a variant of yx.T[x]. Thus, if some formula holds for every substitution, it holds also for every constant.
Substitution rules for terms
⊢ ( xy.(t_{1} .. t_{n}) IFF (xy.t_{1} .. xy.t_{n}) )
⊢ ( (x_{1} .. x_{n})(y_{1} .. y_{n}).(t_{1} .. t_{n}) IFF x_{1}y_{1} .. x_{n}y_{n}.(t_{1} .. t_{n}) )
⊢ ( x_{1}y_{1} x_{2}y_{2} .. x_{n}y_{n}.(t_{1} .. t_{n}) IFF x_{2}y_{2} .. x_{n}y_{n}.(x_{1}y_{1}.t_{1} .. x_{1}y_{1}.t_{n}) )
The first of these states what a substitution of a term in a sequence amounts to: A sequence where each term gets substituted when appropriate (as governed by the rules in the previous group).
The second is a convenience that allows one to switch between a shorter and a longer formulation for sequences of substitutions.
The third governs how a sequence of substitutions for a formula is to be done: The leftmost substitution is first made in the formula (and skips the other substituends).
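The three rules just stated describe a small algorithm, which can be sketched as follows, assuming (for illustration only) that a sequence of terms is a list of strings and a single substitution is a pair (x, y) read as 'x for y':

```python
def subst_term(x, y, t):
    """Substitution on a single term, as in the ab.c rules: x for y in t."""
    return x if t == y else t

def subst_seq(x, y, terms):
    """First rule: xy.(t_1 .. t_n) is (xy.t_1 .. xy.t_n)."""
    return [subst_term(x, y, t) for t in terms]

def subst_chain(pairs, terms):
    """Third rule: the leftmost substitution is made first,
    and is not applied to the remaining substitution pairs."""
    for x, y in pairs:
        terms = subst_seq(x, y, terms)
    return terms

print(subst_chain([('a', 'x'), ('b', 'y')], ['x', 'y', 'z']))  # ['a', 'b', 'z']
```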
Substitution rules for statements and structures
⊢ ( S_{j} ⇒ S_{k} IFF ab.S_{j} ⇒ ab.S_{k} )
⊢ ( ab.S[..b..] IFF S[..b..] ⇒ S[..a..] )
The first of these states for terms what was earlier stated for statements, and the second shows and states what is involved in valid substitutions in truths: one moves from truth to truth.
Abstractions and elements
⊢ ( x:S[x] = x:T[x] IFF yx. S[x] IFF T[x] )
⊢ ( (a_{1} .. a_{n}) e x_{1} .. x_{n}. S[x_{1} .. x_{n}] IFF a_{1}x_{1} .. a_{n}x_{n}. S[x_{1} .. x_{n}] )
The first effectively explains abstraction in terms of equality and substitutions in formulas: Two abstracts are equal precisely if the equivalence of their formulas is true under arbitrary substitutions (of the proper kinds of terms).
It is well to notice here that yx. S[x] IFF T[x] is general given the rule of substitution: If one can prove this, where y is a variable, one therefore can prove zx. S[x] IFF T[x] or any other alphabetical variant of variables.
The second explains elementhood in terms of abstraction. Here it is restated for the case n=1: a_{1} is an element of the x_{1} that are S[x_{1}] precisely if a_{1} can be validly substituted for x_{1} in S[x_{1}] i.e. if and only if S[a_{1}].
Quantified structures
⊢ ( (x).S(..x..) IFF x: ~S(..x..) = Ø )
⊢ ( (x).~S(..x..) IFF x: S(..x..) = Ø )
⊢ ( (Ex).S(..x..) IFF x: S(..x..) ≠ Ø )
⊢ ( (Ex).~S(..x..) IFF x: ~S(..x..) ≠ Ø )
The standard quantifiers  for every and for some  get in effect rendered by equalities or inequalities of structures with nothing: Every thing is S precisely if the things that are not S equals nothing; every thing is not S precisely if the things that are S equals nothing; some thing is S precisely if the things that are S does not equal nothing, and some thing is not S precisely if the things that are not S does not equal nothing.
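Over a finite domain this rendering of the quantifiers can be checked directly. A sketch, assuming an explicitly given domain and with the abstracts realized as sets (the sample structures S and T are illustrative, not from the text):

```python
domain = {1, 2, 3, 4}
S = lambda x: x > 0          # a sample structure: holds of every thing here
T = lambda x: x > 2          # a sample structure: holds of some things only

def forall(pred):
    """(x).S(..x..)  IFF  x: ~S(..x..) = Ø"""
    return {x for x in domain if not pred(x)} == set()

def exists(pred):
    """(Ex).S(..x..) IFF  x: S(..x..) ≠ Ø"""
    return {x for x in domain if pred(x)} != set()

print(forall(S), exists(T), forall(T))  # True True False
```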
Structures, functions and mappings
⊢ Structure[Z X_{1} .. X_{n}] IFF (y_{1})..(y_{n})((y_{1} .. y_{n}) e x_{1}..x_{n}.Z[x_{1}..x_{n}] IFF
y_{1}eX_{1} & .. & y_{n}eX_{n} & Z[y_{1}..y_{n}])
⊢ Function[F X_{1} .. X_{n}] IFF Structure[F X_{1} .. X_{n}] &
(y_{1})..(y_{m})(y_{n})(y_{x})( ((y_{1}..y_{m} y_{n}) e F &
(y_{1}..y_{m} y_{x}) e F) ⇒ y_{n}=y_{x})
⊢ Mapping[F X_{1} X_{2}] IFF Function[F X_{1} X_{2}] & Function[F X_{2} X_{1}]
⊢ #(S_{i})=#(S_{j}) IFF (EF)(Mapping[F S_{i} S_{j}])
⊢ F(x_{1} .. x_{n})=y IFF (EX_{1}) .. (EX_{m})(EX_{n})(Function[F X_{1} .. X_{m} X_{n}] &
(x_{1} .. x_{m} y) e F).
This concerns the tools and assumptions for the basics of mathematics.
A Structure made up of Z and X_{1} .. X_{n} is in effect the set of sequences of things that satisfy a formula while being of the appropriate kinds X_{1} .. X_{n}. A Function is a structure-term with a special property: its last term is uniquely determined by the preceding terms. And a Mapping is a binary function between X_{1} and X_{2} that holds both ways. And the number of S_{i} equals the number of S_{j} precisely if there is a mapping between S_{i} and S_{j}. The last assumption is a convenience that explains the standard functional notation F(x_{1} .. x_{n})=y.
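For binary structures given extensionally as finite sets of pairs, the function, mapping and equal-number notions can be sketched as follows. This is a small finite fragment for illustration (totality on the kinds X_{1}, X_{2} is not checked, and for finite sets exhibiting a mapping is equivalent to comparing sizes):

```python
def is_function(pairs):
    """Last term unique: no first coordinate is paired with two different values."""
    return all(b1 == b2
               for (a1, b1) in pairs
               for (a2, b2) in pairs if a1 == a2)

def is_mapping(pairs):
    """A function both ways, i.e. a one-to-one correspondence."""
    return is_function(pairs) and is_function({(b, a) for (a, b) in pairs})

def same_number(xs, ys):
    """#(X)=#(Y): for finite sets, a mapping exists iff the sizes agree."""
    return len(xs) == len(ys)

f = {(1, 'a'), (2, 'b'), (3, 'c')}
print(is_function(f), is_mapping(f))         # True True
print(is_function(f | {(1, 'b')}))           # False: 1 gets two values
print(same_number({1, 2, 3}, {'a', 'b', 'c'}))  # True
```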
Arguments
An argument in BL is a sequence of numbered statements with the property that each statement in it is inferred by assumption introduction, or is the conclusion of a variant of an assumed or proved inference rule whose premisses occur before it in the argument, or is an instance of an already proved formula.
Note this implies every argument must start from assumptions. Also note that this does not imply, without further assumptions, that what is correctly proved by the above criterion is also validly proved, though this does follow if one has proved that all rules of inference are valid, i.e. have the property that their conclusions cannot fail to be true if their premisses are all true.
And a further point to notice is that, given a sequence of statements in which the assumptions are marked, and a sequence of rules of inference, one can mechanically decide by the above criterion whether or not the sequence of statements is a correct argument.
This is important, because it is the basis for one's trust in and reliance on logical argument: One can prove that a purported argument is or is not a correct argument given a series of rules of inference, and one can prove that a purported argument is valid by proving that it is correct and that all the rules of inference used in it are valid.
Two reasons why this is important are the following.
First, it gives a clear idea of what a proof would be, and indeed a way of checking any purported proof mechanically for correctness.
Second, it shifts all problems of deductive inference back to whatever rules of inference were assumed: If only these are valid, then everything that is proved using these rules must be valid too.
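The mechanical decidability of correctness described above can be sketched as a checker. The representation is an assumption for illustration: rules are (premisses, conclusion) pairs matched literally rather than up to variants, and assumptions are marked by a flag:

```python
def is_correct(argument, rules):
    """argument: list of (statement, is_assumption) pairs, in order.
    A non-assumption line must be licensed by some rule all of whose
    premisses occur earlier in the argument."""
    seen = []
    for statement, is_assumption in argument:
        if not is_assumption:
            licensed = any(conclusion == statement and
                           all(p in seen for p in premisses)
                           for premisses, conclusion in rules)
            if not licensed:
                return False
        seen.append(statement)
    return True

rules = [(['P', 'Q'], 'P&Q')]          # a literal instance of conjunction introduction
good = [('P', True), ('Q', True), ('P&Q', False)]
bad = [('P', True), ('P&Q', False)]    # the premiss Q never occurs
print(is_correct(good, rules), is_correct(bad, rules))  # True False
```

The checker decides only correctness; as the text stresses, validity is a further claim about the rules themselves.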
