Basic Logic - semantics:
The semantics of a formal system consists of the assumptions - and the theorems that can be proved with their help - that explain how the terms of the formal system represent, and how the statements of the formal system come to be true or not true.
This continues Basic Logic by explaining the fundamentals of formal semantics. There are various novelties in the present approach, notably - but not only - that Basic Logic is used to express its own semantics.
1. Truth-functional semantics
2. Denotational semantics
3. Probabilistic semantics
4. Modal semantics
1. Truth-functional semantics
In BL we have functions, and so we can use it to define a function to assign values to statements:
v maps the statements of BL to {0,1} i.e.
Function(v, statements of BL, {0,1})
Now we can write "v(P)=1", i.e. "the value of (the statement) P equals 1", and "v(P)=0", i.e. "the value of (the statement) P equals 0", etc. We take "0" as "not true" and "1" as "true", but could instead have used v to map to {T,F} or {true, false}. The reason to use {0,1} instead is that it makes more sense mathematically, and later, when probabilities are also used.
Now we can use v to write a number of assumptions and definitions:
A0.  1≠0
A1.  v(P)=1 IFF P
A2.  v(~P)=1 IFF v(P)=0
A3.  v(~P)=0 IFF v(P)=1
A4.  v(P&Q)=1 IFF v(P)=1 & v(Q)=1
A5.  v(P&Q)=0 IFF v(P)=0 V v(Q)=0
A6.  v(PVQ)=1 IFF v(P)=1 V v(Q)=1
A7.  v(PVQ)=0 IFF v(P)=0 & v(Q)=0
A0 is the explicit assumption that 1 is unequal to 0, and A1 is the explicit assumption that one may write P in BL precisely if the value of P equals 1.
The assumptions A2-A7 are assumed equivalences that may be taken as definitions: they lay down for "~", "&" and "V" what it is to be true and not true in terms of their component statements.
These are also all just what one would expect intuitively if one makes - as we do - the assumption of truth-functionality:
 Whether a statement with "~", "&" or "V" has the value true (or not) depends on the values of the statements that make up the statement.
This assumption is in many ways the simplest sort of assumption one can make about the truth-values of statements.
One can summarize the yield of A2 to A7 in English as
 A denial of a statement is true precisely if the statement is not true.
 A conjunction of two statements is true precisely if both conjuncts in the statement are true.
 A disjunction of two statements is true precisely if some disjunct in the statement is true.
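The clauses A2-A7 can also be sketched in code. The following is a minimal illustration (the function names are mine, not part of BL), using 0 and 1 as truth-values as above, and checking the English summaries over all assignments:

```python
def v_not(p):      # A2/A3: v(~P)=1 IFF v(P)=0
    return 1 - p

def v_and(p, q):   # A4/A5: v(P&Q)=1 IFF v(P)=1 & v(Q)=1
    return min(p, q)

def v_or(p, q):    # A6/A7: v(PVQ)=1 IFF v(P)=1 V v(Q)=1
    return max(p, q)

# Check the three English summaries over all truth-value assignments:
for p in (0, 1):
    # a denial is true precisely if the statement is not true
    assert v_not(p) == (1 if p == 0 else 0)
    for q in (0, 1):
        # a conjunction is true precisely if both conjuncts are true
        assert v_and(p, q) == (1 if p == 1 and q == 1 else 0)
        # a disjunction is true precisely if some disjunct is true
        assert v_or(p, q) == (1 if p == 1 or q == 1 else 0)
```

Note that with {0,1} as values the connectives become simple arithmetic: denial is subtraction from 1, conjunction is minimum, disjunction is maximum.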
In BL we also assumed explicitly two rewriting rules, and we need truth-valuational assumptions for these as well:
A8.  v(|- P)=1 IFF v(P)=1
A9.  v(|- P)=0 IFF v(P)=0
A10.  v(P => Q)=1 IFF v(~PVQ)=1
A11.  v(P => Q)=0 IFF v(~PVQ)=0
Thus, in terms of truth-values, being able to be written in BL coincides with being true. And in terms of truth-values, that one may write Q if one has written P coincides with its being true that not-P or Q.
There is more to rewriting rules than truth-values, namely that they are permissions to write and assert in BL, but in so far as truth-values are concerned the assumed equivalences make intuitive sense.
This may be illustrated for A10 and A11.
Suppose it is true that not-P or Q. Then if one may write that P is true, it follows from the properties assumed for "V" that Q is true, which one then accordingly may write. Thus the Right Hand Side - RHS - of A10 states a property that the LHS of A10 should have, if this is to be a rewriting rule that leads from truths to truths.
Suppose it is true that not-P or Q is not true. Then one can show, using other assumed rules, that in fact P is true and Q is not true. But in that case a rewriting rule that leads from truths to truths should not be true, and by A11 it indeed is not.
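The two cases just discussed can be checked mechanically. The sketch below (the function names are mine) gives a rewriting rule from P to Q the value of ~P V Q, in the spirit of A10 and A11, and verifies both observations over all assignments:

```python
def v_not(p):    # A2/A3
    return 1 - p

def v_or(p, q):  # A6/A7
    return max(p, q)

def v_rewrite(p, q):
    # A10/A11: the rewriting rule from P to Q has the value of ~P V Q
    return v_or(v_not(p), q)

for p in (0, 1):
    for q in (0, 1):
        # Case 1: if the rule is true and P is true, Q must be true,
        # i.e. the rule leads from truths to truths.
        if v_rewrite(p, q) == 1 and p == 1:
            assert q == 1
        # Case 2: if the rule is not true, then in fact P is true
        # and Q is not true.
        if v_rewrite(p, q) == 0:
            assert p == 1 and q == 0
```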
Now we note the following fundamental and interesting fact, which we in fact used in the last two paragraphs:
 We can use the rewriting rules for &, V and ~ that we assumed with BL to validate these rewriting rules, which are truth-functionally defined in terms of &, V and ~.
And indeed we can now also use BL and the assumed truth-values for the logical connectives to prove a number of statements about BL, such as the following:
MT1. The PL-part of BL is logically valid.
MT2. The PL-part of BL entails standard classical propositional logic.
By MT1 is meant that each of the assumptions for PL = Propositional Logic in BL is a statement that has the value 1, and each inference-rule of the form A => C that was assumed has the property that if v(A)=1 then v(C)=1.
By MT2 is meant that the PL-part can be used to derive the axioms of any standard classical propositional logic, such as that in Hilbert-Bernays, which consists of the following axioms (and I follow Wang, 'Logic, Computers and Sets' p. 308):
HB1a. P > (Q > P)
HB1b. (P > (P > Q)) > (P > Q)
HB1c. (P > Q) > ((Q > R) > (P > R))
HB2a. (P&Q) > P
HB2b. (P&Q) > Q
HB2c. (P > Q) > ( (P > R) > (P > (Q&R)) )
HB3a. P > (PVQ)
HB3b. Q > (PVQ)
HB3c. (P > R) > ( (Q > R) > ((PVQ) > R) )
HB4a. (P IFF Q) > (P > Q)
HB4b. (P IFF Q) > (Q > P)
HB4c. (P > Q) > ( (Q > P) > (P IFF Q) )
HB5a. (P > Q) > (~Q > ~P)
HB5b. P > ~~P
HB5c. ~~P > P
The above classical system has several virtues, but the reader who doesn't know about this (and also those who do) may try to derive the above statements in BL, to verify MT2.
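The semantic side of this can at least be checked mechanically: each HB axiom has the value 1 under every assignment, reading "P > Q" as ~P V Q. Below is a small truth-table sketch (a semantic check, not a derivation in BL; the helper names are mine), covering a representative axiom from each group:

```python
from itertools import product

def impl(p, q):  # P > Q, evaluated as ~P V Q
    return max(1 - p, q)

def iff(p, q):   # P IFF Q
    return 1 if p == q else 0

def tautology(f, arity):
    # a formula is a tautology if it has value 1 under all assignments
    return all(f(*vals) == 1 for vals in product((0, 1), repeat=arity))

# HB1a: P > (Q > P)
assert tautology(lambda p, q: impl(p, impl(q, p)), 2)
# HB2a: (P&Q) > P
assert tautology(lambda p, q: impl(min(p, q), p), 2)
# HB3c: (P > R) > ((Q > R) > ((PVQ) > R))
assert tautology(lambda p, q, r:
                 impl(impl(p, r), impl(impl(q, r), impl(max(p, q), r))), 3)
# HB4c: (P > Q) > ((Q > P) > (P IFF Q))
assert tautology(lambda p, q: impl(impl(p, q), impl(impl(q, p), iff(p, q))), 2)
# HB5a: (P > Q) > (~Q > ~P)
assert tautology(lambda p, q: impl(impl(p, q), impl(1 - q, 1 - p)), 2)
```

This verifies that the HB axioms are logically valid in the sense of MT1; deriving them inside BL, as MT2 asks, is a separate syntactic exercise.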
Now let us consider the truth-functional semantics we have.
One way to look upon it is as - given the truth-functional assumption we made - the analysis of logical connectives in terms of the truth-values of the statements they contain. And this is not so much a definition or analysis of the notion "true" or "truth" as a definition or analysis of the properties of logical connectives, given that one makes the truth-functional assumption.
More specifically: a truth-functional semantics does not analyse the notion "true" but takes it as given, and uses it to analyse the properties of the logical connectives given the assumption of the truth-functionality of statements.
2. Denotational semantics
But it is also possible to use the tools BL provides to state and explain another sort of semantics for the statements and terms of BL than a truth-functional semantics.
This is a denotational semantics, which explains whether a statement is true by considering whether what the statement represents is or is not in some supposed domain of things (facts, structures, events - whatever).
Such a denotational semantics can be added to a truth-functional semantics by adding, to start with, a function
d maps the statements and terms of BL to the structures and things in some supposed domain D
This function d has the additional benefit of also allowing the analysis of predicates, relations, nouns and names in an intuitive way.
Namely - as we shall see later in some detail - by associating with a predicate the set of things it is true of; by associating with a relation - a two-place predicate - the set of pairs it is true of; by associating with a noun the set of things the noun stands for; and by associating with a name the thing named.
However, here should be first interposed a warning:
Although we suppose a domain D, we are still confined to language, unless D itself is a language, and thus what d really does, in so far as we can write it out, is relate the statements and terms of BL to statements and terms that represent the structures and things in D.
The domain D and the language representing D are often confused, also in the logical literature, but it should be clear that one should avoid this confusion - or explicitly use it, when one assumes D is a language, such as BL itself, or some other language.
Furthermore, what makes all this feasible and intuitive is that one in fact uses the domain D in a way that is analogous to the truth-function that assigns 1. Indeed, for PL there is
A12.  v(P)=1 IFF d(P)≠Ø
Note that by earlier assumptions A12 comes with its associate v(P)=0 IFF d(P)=Ø, and by A2 v(~P)=1 IFF d(P)=Ø.
These assumptions do explain truth-functions to some extent:
 P is assigned the truth-value 1 under v precisely if P is not assigned the void set under d.
Note this amounts to using '≠Ø' as an analog of 'exists'.
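A12 can be illustrated with a small finite model. In the sketch below the domain D and the denotations are illustrative inventions of mine, not part of BL; the point is only that truth coincides with non-empty denotation:

```python
# A hypothetical mini-domain of "things" (facts, structures, events - whatever):
D = {"fact1", "fact2", "fact3"}

# A hypothetical denotation function d, mapping statements to subsets of D:
d = {
    "P": {"fact1"},   # P denotes something in D
    "Q": set(),       # Q denotes nothing in D
}

def v(statement):
    # A12: v(P)=1 IFF d(P) != Ø
    return 1 if d[statement] else 0

def v_neg(statement):
    # the associate noted above: v(~P)=1 IFF d(P) = Ø
    return 1 if not d[statement] else 0

assert v("P") == 1       # P is true: its denotation is non-empty
assert v("Q") == 0       # Q is not true: its denotation is void
assert v_neg("Q") == 1   # hence ~Q is true
```

Here '!= Ø' does the work of 'exists', exactly as noted above: a statement is true precisely if it denotes something.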
In fact, we have used this approach with d(.) already when formulating BL in the case of the quantifiers, though without appealing to a function d. Indeed, we can coordinate the notions of abstraction and the interpretation (or denotation) function d along the lines of A12, as follows:
A13.  v( (x: Z[..x..]) ≠ Ø )=1 IFF d(x: Z[..x..]) ≠ Ø
A14.  v( (x: ~Z[..x..]) ≠ Ø )=1 IFF d(x: ~Z[..x..]) ≠ Ø
This directly extends to the quantifiers by earlier assumptions. What it says in English is that the statement that the things that are Z are not nothing is true precisely if the denotation of the abstract for the things that are Z is not nothing.
However, there is a difference in d(.) for propositions and abstracts: Abstracts may be true of different things, and accordingly both x: Z[..x..] and x: ~Z[..x..] may represent some existing things.
For identity we have the following:
A15.  v(x=y)=1 IFF d(x)=d(y)
which has the merit of explaining identity in terms of identity: It is true that two terms are identical precisely if the denotations of the terms are identical.
One may at this point ask what is the use for a denotational semantics, apart from perhaps adding some clarification.
The point lies precisely in the addition of the domain D that contains whatever one's terms are supposed to be about, for once one has a domain D one may impose all manner of properties or restrictions on D, and thereby vary the interpretation of what one may or does mean by one's terms.
A good example - also historically correct - is that once one explicitly supposes that basic arithmetic concerns the positive integers, one rapidly meets the need to extend that domain of positive integers, first by adding the number 0, motivated by the properties of subtraction and expressions like 7-7, then negative integers, motivated again by the properties of subtraction and expressions like 7-9, and then also fractions, motivated by the properties of division and expressions like 2:4.
Having introduced the number 0 and negative integers and fractions, one may again ask which of one's assumptions remain true if one removes experimentally from one's domain D all even numbers or all nonprime numbers etc.
Thus introducing a domain and a function to the things in that domain adds rather a lot of sophistication to one's semantics and may clarify many things in it, essentially because it forces one to write out the assumptions one makes about the domain and the relations of one's terms to the things in the domain.
3. Probabilistic semantics
Indeed, one can further sophisticate and clarify semantics, logic and reasoning by adding to the above denotational semantics such assumptions as allow one to include probabilities.
This may be done as follows, using the d(.) introduced above, and presupposing at this point some clarifications about numbers and sets: namely, by introducing a relation called 'represents symbolically and numerically', abbreviated rsn, with the following definition, which I explain below:
 rsn[L,D] IFF L is a Language & D is a set & (Ed)(E#)
( d : Terms of L > D* &
# : D* > N &
D = d(T_{i}) U d(~T_{i}) &
d(T_{i}) = d(T_{i}&T_{j}) U d(T_{i}&~T_{j}) &
#(D) = #(D_{i}) + #(-D_{i}) &
#(D_{i}) = #(D_{i}OD_{j}) + #(D_{i}O-D_{j}) &
#(D_{i}) = #(D_{j}) IFF (Ef)(f : D_{i} 1-1 D_{j}) )
The star - as in D* - is used to indicate the powerset of a set. A set can be taken as what is described or represented by an abstract, and is a collection of things, which may be empty if there are no things as described by the abstract. A subset of a given set is any set that only has elements in the given set. The powerset of a set is the set of all its subsets.
Now in the above definition, d and # are two functions that may be read respectively as 'denotation of' and 'number of'. The denotation maps the terms of a language L to the powerset of a domain D, and the number maps the subsets of D to the real numbers N.
Next, D = d(T_{i}) U d(~T_{i}) stipulates that the domain consists of the denotation of any term together with the denotation of its complement, which also encodes a desirable property of denial and negation, namely that what is not so consists of anything that does not have the property of being so.
Likewise, d(T_{i}) = d(T_{i}&T_{j}) U d(T_{i}&~T_{j}) stipulates that any term's denotation is made up of the union of the denotation of the conjunction of the term with an arbitrary term, together with the denotation of the conjunction of the term with the denial of that arbitrary term. This also encodes a desirable property of denial and negation, and conforms to one's intuitively valid claims to the effect that e.g. the term 'woman' stands for the union of red-haired and non-red-haired women.
Turning to #, which encodes the notion of number, we can see that #(D) = #(D_{i}) + #(-D_{i}) encodes a desirable and intuitive property of the complement of a set: The number of any set is the sum of the number of any subset of the set and the number of the complement of that subset. Thus, the number of all the French equals the number of French men plus the number of all French who are not men.
Similarly, #(D_{i}) = #(D_{i}OD_{j}) + #(D_{i}O-D_{j}) encodes the desirable and intuitive property of sets that the number of things a set comprises is the sum of the number of the set's intersection with an arbitrary set plus the number of the set's intersection with the complement of that arbitrary set. Thus, the number of French women equals the number of French adult women plus the number of French non-adult women, and also the number of French blond women plus the number of French non-blond women.
The last assumed property of # in the above definition was already assumed earlier, and embodies Hume's Postulate: Two sets have the same number precisely if there is a 1-1 mapping between them. This postulate is sufficient - together with other assumptions - to derive the usual theorems about numbers (as Frege first showed in detail).
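For finite sets the rsn-properties can be checked directly. The sketch below uses Python sets as a stand-in (the particular domain and subsets are illustrative; # is cardinality, O is intersection, U is union, and -X is complement within D):

```python
D  = set(range(10))       # an assumed finite domain
Di = {0, 1, 2, 3, 4}      # the denotation of some term T_i, say
Dj = {2, 3, 4, 5, 6}      # the denotation of some term T_j, say

def comp(X):              # -X: the complement of X within D
    return D - X

def num(X):               # the number-function #
    return len(X)

# D = d(T_i) U d(~T_i): the domain is any denotation plus its complement
assert D == Di | comp(Di)
# #(D) = #(D_i) + #(-D_i)
assert num(D) == num(Di) + num(comp(Di))
# #(D_i) = #(D_i O D_j) + #(D_i O -D_j)
assert num(Di) == num(Di & Dj) + num(Di & comp(Dj))
# Hume's Postulate, trivially, for two equinumerous finite sets:
assert num({7, 8}) == num({"a", "b"})
```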
Having explained in some formal detail what one means by 'represents symbolically and numerically', one may add two assumptions that at the same time encode intuitions about proportion and enable one to set up probability theory in Basic Logic:
A16.  p(D_{i}) = #(D_{i}) : #(D)
A17.  p(D_{i}|D_{j}) = #(D_{i}OD_{j}) : #(D_{j})
The first of these lays down that the proportion of a subset equals the number of the subset divided by the number of the set it is a subset of. The second of these lays down that the proportion of a subset in another subset equals the number of the intersection of both subsets divided by the number of the other subset.
These assumptions enable the derivation of the standard Kolmogorov axioms for probability, which accordingly is taken as a kind of proportion, derived from the cardinal numbers of sets (as represented by #).
This is a new interpretation of probability, which I call the cardinal interpretation of probability, and which has the merit of giving room for both objective chance and personal probability. Namely as follows: Suppose one has a bag of beans. Then there is an objective proportion of white beans in the bag, which may be any number between 0 and 1, depending on the actual constitution of the bag, and you and I may have quite different ideas about this proportion, which can be formulated in terms of our personal guesses about the proportion.
It also allows us to say when a personal probability is true: If it equals the objective probability. Thus, if you guess that the probability of randomly drawing a white bean from the bag is 1/2 and my guess is 1/10 and in fact the number of white beans is 500 and the number of beans 1000, your guess is true and mine is not. And if in fact the actual number of white beans is 550, your guess, although not quite true, is more true than mine.
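The bean-bag example can be computed directly from A16, taking probability as a proportion of cardinal numbers (the counts below are those of the example; exact fractions are used to keep the proportions precise):

```python
from fractions import Fraction

beans_total = 1000
white_beans = 500

def p(subset_count, set_count):
    # A16: p(D_i) = #(D_i) : #(D), proportion as a ratio of cardinal numbers
    return Fraction(subset_count, set_count)

objective = p(white_beans, beans_total)
assert objective == Fraction(1, 2)

# Your guess of 1/2 is true; my guess of 1/10 is not:
assert Fraction(1, 2) == objective
assert Fraction(1, 10) != objective

# With 550 white beans instead, your guess of 1/2 is no longer exactly true,
# but it is closer to the objective proportion than my 1/10:
objective2 = p(550, beans_total)
assert abs(Fraction(1, 2) - objective2) < abs(Fraction(1, 10) - objective2)
```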
Note also that often we may not know whose personal probability is (more) true, but that we can also often quote a lot of evidence - such as random samples from the bag of beans - that allows us to conclude with high probability what the proportion of the sampled characteristic really is. (See: Rules of Probabilistic Reasoning.)
4. Modal semantics
Modalities of statements are attributes of statements like 'is necessary', 'is possible' and 'is contingent'. Modalities - and there are quite a few more, and quite a few terms that are like modalities - can be treated and incorporated into BL in several ways.
The simplest is as follows, and merely uses d(.) and Ø as above, while refining these a little:
A18.  v( Nec P)=1 IFF d(~P)=Ø
A19.  v( Pos P)=1 IFF d(P)≠Ø
A20.  v( Con P)=1 IFF v( Pos P)=1 & v( Pos ~P)=1
As stated, this considers modal terms only when used to qualify statements, but one can also consider modal terms as qualifying the predicates and relations in statements, and e.g. consider the - supposedly - necessary features a mammal must have to be a mammal.
The analysis of the modalities 'necessary', 'possible' and 'contingent' that was just proposed suffices for many purposes, and can be considered as merely truthfunctional.
There is a related and more sophisticated analysis, due to Saul Kripke, that takes into consideration the domains, and indeed the possibility that one's terms may range over several or many domains.
When this is done, the several domains that are considered often are called 'worlds' and one may easily find phrases in these treatments to the effect that a statement is necessary iff it is true in 'all possible worlds'.
This also allows for the possibility that some of these - so-called - worlds are related to some of these worlds in ways that others of these worlds are not. Thus, a world in which Genghis Khan has been born and is a happy healthy toddler holds different possibilities from the same world in which he was aborted, and a world in which Genghis Khan has wrought havoc as an adult must be in some sense 'accessible', as the term is, from a world in which he was a healthy toddler, whereas a world in which Genghis Khan was aborted should not give access to a world in which he later lived and wrought havoc on his contemporaries.
This usage of 'possible world' etc. has some justification, but is also sometimes misleading, in that the possibilities one considers are often not so much different possible worlds as different possible situations, actions, developments or expectations concerning the world one lives in, which may run different courses, i.a. dependent on one's own omissions and commissions in it.
But this usage, which involves relating modal terms to different possible domains, situations or worlds that one's terms may represent and may be true of, also has advantages. It explains some relativity of modal terms (namely: to domains) that cannot be brought out if one considers merely one domain, while it also enables one to consider how different domains may or may not relate to other domains - rather as a possible world with Sherlock Holmes involves some possible world with the parents of Sherlock Holmes, whereas a world without Sherlock Holmes can also do without his parents.
In any case, the analysis of modalities that also involves domains is along the following lines, where the denotation function d is made into one that depends not only on a statement but also on a world, domain or situation the statement is about:
A21.  v( Nec P)=1 IFF ~(ED_{i})( d(D_{i},P)=Ø )
A22.  v( Pos P)=1 IFF (ED_{i})( d(D_{i},P)≠Ø )
A23.  v( Con P)=1 IFF v( Pos P)=1 & v( Pos ~P)=1
Thus, in this approach a statement P is necessary precisely if it is true in all possible domains (worlds, situations); a statement P is possible precisely if it is true in some possible domains (worlds, situations); and a statement P is contingent precisely if it is true in some situations and false in others.
Another approach to modality, or a supplementary approach, is to use the probabilistic semantics introduced above, and stipulate that a statement is necessary precisely if its probability equals 1 and possible precisely if its probability is not 0; and contingent precisely if its probability is neither 1 nor 0.
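The domain-relative analysis A21-A23 can be sketched with a handful of finite "worlds". The data below is an illustrative invention of mine: d now takes a domain and a statement and returns the statement's denotation in that domain, and a statement counts as true in a world precisely if its denotation there is non-empty:

```python
# Two hypothetical worlds, each assigning denotations to the statements:
domains = {
    "w1": {"P": {"a"}, "Q": set()},
    "w2": {"P": {"b"}, "Q": {"b"}},
}

def possible(stmt):
    # A22: Pos P precisely if d(D_i, P) != Ø for some domain D_i
    return any(domains[w][stmt] for w in domains)

def necessary(stmt):
    # A21: Nec P precisely if there is no domain where d(D_i, P) = Ø
    return all(domains[w][stmt] for w in domains)

def contingent(stmt):
    # A23: Con P precisely if both P and ~P are possible; here ~P is
    # true in a world precisely if P's denotation there is void
    pos_not = any(not domains[w][stmt] for w in domains)
    return possible(stmt) and pos_not

assert necessary("P")                       # P is true in every world
assert possible("Q") and not necessary("Q") # Q is true in some world only
assert contingent("Q") and not contingent("P")
```

The supplementary probabilistic reading mentioned above would replace these clauses by conditions on p: necessary as p = 1, possible as p ≠ 0, contingent as p neither 1 nor 0.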
See further: Basic Logic - extended semantics
