Précis of epistemology

A reasoning is logical when all its affirmations, except its hypotheses, are obvious logical consequences of the hypotheses which precede them. In this way a logical reasoning proves that its conclusion is a logical consequence of its premises. Logical principles are fundamental rules that determine all obvious relations of logical consequence, and from there all relations of logical consequence.

Necessary consequence and logical possibility

The relation of logical consequence can be defined from logical possibility:

C is a logical consequence of the premises P when there is no logically possible world in which C is false and all the premises P are true.

A logical consequence cannot be false if its premises are true. The relation of logical consequence necessarily leads from truth to truth.

To define a logically possible world, we give ourselves fundamental properties and relations and a set of individuals to which we can attribute these properties and relations. A statement is atomic when it affirms a fundamental property of an individual or a fundamental relation between several individuals. An atomic statement cannot be decomposed into smaller statements. Any set of atomic statements determines a logically possible world in which they are all true and are the only true atomic statements (Keisler 1977). A set of atomic statements is never contradictory, because atomic statements do not contain negation.
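
As an illustration, here is a minimal Python sketch of this idea: a logically possible world is represented by the set of atomic statements that are true in it. The property man, the relation loves and the individuals a and b are only examples, not taken from the text.

```python
# A minimal sketch: a logically possible world represented as the set of
# atomic statements that are true in it. The names used here (man, loves,
# a, b) are illustrative only.

individuals = {"a", "b"}

# An atomic statement is encoded as a tuple: a fundamental property or
# relation followed by the individuals it is attributed to.
world = {
    ("man", "a"),
    ("loves", "a", "b"),
}

def true_in(world, atomic):
    """An atomic statement is true in a world exactly when it belongs to
    the set of atomic statements that defines that world."""
    return atomic in world

print(true_in(world, ("man", "a")))         # True
print(true_in(world, ("loves", "b", "a")))  # False: not in the defining set
```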

The truth of compound statements

Statements about a logically possible world are composed of atomic statements with logical connectors. The main logical connectors are the negation not, the disjunction or, the conjunction and, the conditional if then, the universal quantifier for all x, or every x is such that, and the existential quantifier there exists an x such that.

When a statement is composed from atomic statements with logical connectors, its truth depends only on the logically possible world considered, because the truth of a compound statement depends only on the truth of the statements from which it is composed.

The truth of statements composed with negation, disjunction, conjunction, and the conditional is determined with truth tables:

Negation
p      not p
True   False
False  True

Disjunction
p      q      p or q
True   True   True
True   False  True
False  True   True
False  False  False

Conjunction
p      q      p and q
True   True   True
True   False  False
False  True   False
False  False  False

Conditional
p      q      If p then q
True   True   True
True   False  False
False  True   True
False  False  True
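
These truth tables can also be written as functions on truth values. Here is a minimal Python sketch; it reprints the table of the conditional so that it can be checked against the one above.

```python
# A sketch of the four connectors above as functions on True and False.
from itertools import product

def Not(p):     return not p
def Or(p, q):   return p or q
def And(p, q):  return p and q
def Cond(p, q): return (not p) or q   # "if p then q": false only when p is true and q is false

# Reprint the truth table of the conditional, line by line.
for p, q in product([True, False], repeat=2):
    print(p, q, Cond(p, q))
```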

The truth of statements composed with the universal and existential quantifiers is determined by the following two rules:

For all x, p(x) is true when all the statements p(i) obtained from p(x) by substituting the name of an individual i for all occurrences of x in p(x) are true, and false otherwise.

There exists an x such that p(x) is true when at least one statement p(i) obtained from p(x) by substituting the name of an individual i for all occurrences of x in p(x) is true, and false otherwise.
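
For a finite set of individuals, these two rules amount to checking all the instances p(i). Here is a minimal Python sketch; the domain of individuals and the statements p(x) are only examples.

```python
# A sketch of the two quantifier rules over a finite domain of individuals.

individuals = [0, 1, 2, 3]   # an illustrative domain

def forall(p, domain):
    """True when every instance p(i) is true."""
    return all(p(i) for i in domain)

def exists(p, domain):
    """True when at least one instance p(i) is true."""
    return any(p(i) for i in domain)

print(forall(lambda x: x < 4, individuals))  # True: every instance is true
print(exists(lambda x: x > 2, individuals))  # True: the instance p(3) is true
print(forall(lambda x: x > 0, individuals))  # False: the instance p(0) is false
```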

The interdefinability of logical connectors

The logical connectors can be defined from each other. For example, the existential quantifier can be defined from the universal quantifier and the negation:

There exists an x such that p means that it is false that every x is such that not p; in other words, not (for all x, not p).

We can also adopt the opposite definition:

For all x, p means that it is false that there exists an x such that not p, that is, not (there exists an x such that not p).

In the same way we can define the disjunction starting from the conjunction, or the opposite:

p or q means not (not p and not q)

p and q means not (not p or not q)

The conditional can be defined from the conjunction or from the disjunction:

If p then q means not (p and not q)

If p then q also means q or not p

The biconditional if and only if can be defined from the conditional and the conjunction:

p if and only if q means (if p then q) and (if q then p)

It can also be defined from the other connectors:

p if and only if q means (p and q) or (not p and not q)

or:

p if and only if q means not ( (p and not q) or (not p and q) )

One could also introduce the logical connector neither nor and define all the other connectors from it:

not p means neither p nor p

p and q means neither not p nor not q

p or q means not (neither p nor q)

If p then q means not (neither not p nor q)

p if and only if q means neither (p and not q) nor (not p and q)
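
These definitions can be checked mechanically by comparing the two sides on every combination of truth values. Here is a minimal Python sketch that defines the other connectors from neither nor alone, following the definitions above, and verifies them against the truth tables.

```python
# A sketch: all the connectors defined from "neither ... nor ..." alone,
# checked against their truth tables on every combination of truth values.
from itertools import product

def nor(p, q):   return not (p or q)           # neither p nor q

def not_(p):     return nor(p, p)              # not p = neither p nor p
def and_(p, q):  return nor(not_(p), not_(q))  # p and q = neither not p nor not q
def or_(p, q):   return not_(nor(p, q))        # p or q = not (neither p nor q)
def cond(p, q):  return not_(nor(not_(p), q))  # if p then q = not (neither not p nor q)
def iff(p, q):   return nor(and_(p, not_(q)), and_(not_(p), q))

for p, q in product([True, False], repeat=2):
    assert not_(p) == (not p)
    assert and_(p, q) == (p and q)
    assert or_(p, q) == (p or q)
    assert cond(p, q) == ((not p) or q)
    assert iff(p, q) == (p == q)
print("all the definitions agree with the truth tables")
```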

The fundamental rules of deduction

All relations of logical consequence can be produced, with a small number of fundamental rules of deduction, from trivial and obviously tautological logical consequences, which are given by the rule of repetition:

Any premise included in a finite list P of premises is a logical consequence of the premises P.

For each logical connector there are two fundamental deduction rules, an elimination rule and an introduction rule (Gentzen 1934, Fitch 1952). Logic looks like a building game. One composes and decomposes statements by introducing and eliminating logical connectors.

The fundamental rules of deduction are intuitively obvious, as soon as one understands the concepts of logical consequence and possibility, and the determination of the truth of statements composed from logical connectors. One can rigorously prove the truth of these intuitions.

The rule of repetition and the fundamental rules of deduction can be considered as the principles of logical principles, because they are sufficient to justify all the other logical principles.

Since three (or even two) logical connectors are enough to define all the others, six (or even four) fundamental rules of deduction suffice to produce all relations of logical consequence, together with the repetition rule and the transitivity rule. One can choose, for example, negation, conjunction and the universal quantifier as the fundamental logical connectors. All the deduction rules for the other logical connectors can then be derived from the six rules of the three fundamental connectors and the rule of transitivity of logical consequences:

If C is a logical consequence of the premises Q and if all the premises Q are logical consequences of the premises P then C is a logical consequence of the premises P.

The rule of particularization

If i is an individual then S(i) is a logical consequence of for all x, S(x).

S(i) is the statement obtained from S(x) by substituting i for all occurrences of x in S(x).

This rule is the most important in all of logic, because the power of reasoning comes from the laws with which we reason. Whenever we apply a law to an individual, we learn what the law teaches us and reveal the power of reasoning it gives us.

The rule of generalization

If S(i) is a logical consequence of the premises P and if i is an individual which is not mentioned in these premises then For all x, S(x) is a logical consequence of the same premises.

S(x) is the formula obtained from S(i) by substituting x for all occurrences of i in S(i).

An example of the use of this rule is the philosophical, or Cartesian, I. One says I without making any particular hypothesis about the individual so named. Therefore all that is said about him or her can be applied to all individuals. If, for example, we have proved "I cannot think without knowing that I am", we can deduce "No individual can think without knowing that he or she is".

The detachment rule

B is a logical consequence of A and If A then B.

The rule of hypothesis incorporation

If B is a logical consequence of the premises P and A, then If A then B is a logical consequence of the premises P.

The principle of reduction to absurdity

If B and not B are both logical consequences of the premises P and A, then not A is a logical consequence of the premises P.

The rule of double negation suppression

A is a logical consequence of not not A.

The rule of analysis

A and B are both logical consequences of the single premise A and B.

The rule of synthesis

A and B is a logical consequence of the two premises A and B.

The rule of thesis weakening

A or B and B or A are both logical consequences of A.

The elimination rule for a disjunction

If A or B is a logical consequence of the premises P, if C is both a logical consequence of the premises P and A, and a logical consequence of the premises P and B, then C is a logical consequence of the premises P.

The rule of direct proof of existence

If i is an individual, then there exists an x such that S(x) is a logical consequence of S(i).

In the rule of direct proof of existence, S(x) is a formula obtained by substituting x for some, not necessarily all, occurrences of i in S(i).

The elimination rule for the existential quantifier

If there exists an x such that S(x) is a logical consequence of the premises P, if C is a logical consequence of the premises P and S(i), and if the individual i is mentioned neither in the premises P nor in C, then C is a logical consequence of the premises P.
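
For the propositional rules, one can check by brute force over all assignments of truth values that true premises never lead to a false conclusion. Here is a minimal Python sketch of such a check; the encoding of statements as functions of an assignment is only an illustration.

```python
# A sketch: checking some propositional deduction rules semantically, by
# examining every assignment of truth values to the variables.
from itertools import product

def consequence(premises, conclusion, variables):
    """The conclusion is a logical consequence of the premises when no
    assignment makes all the premises true and the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

A = lambda v: v["A"]
B = lambda v: v["B"]
if_A_then_B = lambda v: (not v["A"]) or v["B"]
A_and_B = lambda v: v["A"] and v["B"]

# The detachment rule: B is a consequence of A and "if A then B".
print(consequence([A, if_A_then_B], B, ["A", "B"]))   # True

# The rule of synthesis: "A and B" is a consequence of the premises A and B.
print(consequence([A, B], A_and_B, ["A", "B"]))       # True

# By contrast, B alone is not a consequence of "if A then B".
print(consequence([if_A_then_B], B, ["A", "B"]))      # False
```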

Reasoning without hypothesis and the logical laws

The fundamental rules of deduction can be applied even if P is an empty list of premises, that is, no assumptions were made at the outset. The rule of hypothesis incorporation and the principle of reduction to absurdity make it possible to pass from a reasoning under hypothesis to a reasoning without hypothesis.

The conclusions of a reasoning without hypothesis are universal logical truths, always true whatever the interpretation of the concepts they mention, except for the interpretation of logical connectors. They are called logical laws, or tautologies.

Some examples of logical laws:

Pure tautology: If p then p

Since p is a logical consequence of p according to the repetition rule, if p then p is a logical law according to the rule of hypothesis incorporation.

The principle of non-contradiction: not (p and not p)

p and not p are both logical consequences of the premise p and not p, according to the rule of analysis; not (p and not p) is therefore a logical law, according to the principle of reduction to absurdity.

The law of the excluded middle: p or not p

p is necessarily true or false. There is no third possibility.

Suppose that the law of the excluded middle is false:

(1) Hypothesis: not (p or not p)

  • (2) Hypothesis: p
  • (3) Consequence: p or not p according to (2) and the rule of thesis weakening.
  • (4) Consequence: not (p or not p) according to (1) and the rule of repetition.

(5) Consequence: not p according to (3), (4) and the principle of reduction to absurdity.

(6) Consequence: p or not p according to (5) and the rule of thesis weakening.

(7) Consequence: not (p or not p) according to (1) and the rule of repetition.

(8) Consequence: not not (p or not p) according to (6), (7) and the principle of reduction to absurdity.

Consequence: p or not p according to (8) and the rule of double negation suppression.

All the deduction rules, fundamental or derived, can be translated into logical laws, because if C is a logical consequence of the premises P, then if the conjunction of the premises P then C is a logical law. For example, If A and if A then B, then B is a logical law that translates the detachment rule.
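
In the propositional case, this translation can be checked by brute force: a logical law is a statement true under every assignment of truth values to its variables. Here is a minimal Python sketch that checks the laws mentioned above, including the law that translates the detachment rule; the encoding of statements as functions of an assignment is only an illustration.

```python
# A sketch: checking that some propositional logical laws are true under
# every assignment of truth values to their variables.
from itertools import product

def tautology(formula, variables):
    return all(formula(dict(zip(variables, values)))
               for values in product([True, False], repeat=len(variables)))

print(tautology(lambda v: (not v["p"]) or v["p"], ["p"]))       # if p then p
print(tautology(lambda v: not (v["p"] and not v["p"]), ["p"]))  # not (p and not p)
print(tautology(lambda v: v["p"] or not v["p"], ["p"]))         # p or not p

# The law that translates the detachment rule: if (A and (if A then B)) then B.
print(tautology(lambda v: (not (v["A"] and ((not v["A"]) or v["B"]))) or v["B"],
                ["A", "B"]))
```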

The derivation of logical consequences

The fundamental rules of deduction suffice to derive all relations of logical consequence and all logical laws. This is the completeness theorem of first-order logic, proved by Kurt Gödel in his doctoral dissertation (Gödel 1929, which reasons about a different but equivalent formal system). The fundamental rules of deduction are therefore a complete solution to the old problem, posed but not resolved by Aristotle, of finding a list of all the logical principles.

Let us show for example that If A then C is a logical consequence of If A then B and If B then C.

(1) Hypotheses: If A then B, If B then C

  • (2) Hypothesis: A
  • (3) Consequence: B according to (1), (2) and the detachment rule.
  • (4) Consequence: C according to (1), (3) and the detachment rule.

Consequence: If A then C according to (4) and the rule of hypothesis incorporation.

Another example is the contraposition rule: if not q then not p is a logical consequence of if p then q.

(1) Hypothesis: if p then q

  • (2) Hypothesis: not q
    • (3) Hypothesis: p
    • (4) Consequence: q according to (1), (3) and the detachment rule.
    • (5) Consequence: not q according to (2) and the repetition rule.
  • (6) Consequence: not p according to (4), (5) and the principle of reduction to absurdity.

Consequence: if not q then not p according to (6) and the rule of hypothesis incorporation.

Why does reasoning enable us to acquire knowledge?

When a reasoning is logical, the conclusion cannot provide more information than that already given by the premises. Otherwise the reasoning would not be logical, because the conclusion could be false when the premises are true. Logical conclusions are always reformulations of what is already said in the premises. In fact many arguments tell us nothing, because the conclusion is only a repetition of the premises in a slightly different form. We then say that they are tautological. They are variations on the theme "it is so because it is so."

In the precise sense defined by logicians, tautologies are logical laws, the laws which are always true regardless of the interpretation given to their words (logical connectors excepted). When a reasoning is logical, the statement 'if the premises then the conclusion' is always a tautology, as defined by logicians.

Conclusions only repeat what was already said in the premises. A reasoning must be tautological to be logical. But then why do we reason? It seems that reasoning has nothing to teach us.

The power of reasoning comes from the general principles on which it is based. If we reduce logic to the elementary propositional calculus (founded on all the logical principles except those involving the universal and existential quantifiers), a logic in which statements are never general because we do not have the universal quantifier, then yes, the tautological character of our reasoning is usually pretty obvious. When it is not, it is only because our logical intuitions are limited. The propositional calculus serves mainly to rephrase our assertions. This can be very useful, because understanding depends on formulation, but it does not explain why reasoning enables us to know what we do not already know.

A statement is a law when it can be applied to many particular cases. It can always be formulated in the following way:

For all x in D, S(x)

In other words :

For all x, if x is in D then S(x)

D is the scope of the law. S(x) is a statement about x.

All statements of the form S(i), where i is the name of an element of D and S(i) is the statement obtained from S(x) by substituting i for x everywhere, are obvious logical consequences of the law. S(i) is a special case of the law.

When we learn a law, we know at the beginning only one or a few special cases. We cannot think of all the special cases, because they are too numerous. Whenever we apply a known law to a special case which we have not thought of before, we learn something.

A law is like condensed information. In one statement it determines a wealth of information about all the special cases to which it can be applied. When we reason with laws, what we discover is not said explicitly in the premises; it is only implicit in them. Reasoning enables us to discover all that laws can teach us.

Justification of logic

We recognize a logical reasoning by verifying that it complies with logical principles. But how do we recognize the logical principles? How do we know they are good principles? How can we justify them? Are we really sure that they always lead to true conclusions from true premises?

With the principles that define the truth of compound statements, one can prove that our logical principles are true, in the sense that they always lead from truth to truth. For example, one only has to reason about the truth table of the conditional to prove the truth of the detachment rule.

A skeptic might object that these justifications of logical principles are worthless because they are circular. When we reason about logical principles in order to justify them, we use the very principles that we are trying to justify. If our principles were false, they could prove falsehoods, and so they could still prove their own truth. That logical principles enable us to prove their truth does not therefore really establish their truth, since false principles could do the same.

This objection is not conclusive. We just have to look at the allegedly circular proofs to be convinced of their validity, simply because they are excellent and irrefutable. No doubt remains, because everything is clearly defined and proven. A skeptic can correctly point out that such proofs can convince only those who are already converted. But in this case it is not difficult to be converted, because logical principles merely formulate what we already know when we reason correctly.

The circularity of logical principles is particularly apparent for the particularization rule:

For every statement S(x) and every individual i, S(i) is a logical consequence of for all x, S(x). (1)

For example, If Socrates is a man then Socrates is mortal is a logical consequence of For all x, if x is a man then x is mortal. (2)

To pass from (1) to (2), the particularization rule has been applied twice to itself: the statement S(x) is particularized to If x is a man then x is mortal, and the individual i is particularized to Socrates.

The paradox of Lewis Carroll

Thanks to the detachment rule, we can deduce B from A and if A then B. A more complete rule should therefore be that we can deduce B from A, if A then B and the detachment rule. But this rule is not yet complete. A more complete rule, but still incomplete, is that we can deduce B from A, if A then B, the detachment rule and the rule that tells us that we can deduce B from A, if A then B and the detachment rule. But there must be another rule that tells us that we can apply the previous rule, and so on to infinity (Carroll 1895).

If the detachment rule were itself a hypothesis that had to be mentioned in our proof, and from which our conclusions are deduced, then our reasoning could never begin, because a second rule would be needed to justify the deductions from the detachment rule, then a third to justify the deductions from the second, and so on to infinity. But logical laws are not hypotheses. We always have the right to adopt them as premises, without any justification other than that they are logical laws, because they cannot be false and therefore cannot lead us into error.

Mathematical knowledge

All mathematical knowledge can be considered as knowledge about the logically possible worlds.

A theory is consistent, or non-contradictory, or coherent, when the contradictions p and not p are not logical consequences of its axioms. Otherwise it is inconsistent, contradictory, incoherent, absurd.

A true theory of a logically possible world is necessarily consistent, since contradictions are false in all logically possible worlds.

A consistent theory is true of at least one logically possible world. This is Gödel's completeness theorem. If we found a theory that is necessarily false, that is to say false in all logically possible worlds, without it being possible to prove that its axioms lead to a contradiction, this would show that our logic is incomplete, that it is not sufficient to prove all the necessary logical truths.

We develop mathematical knowledge by reflecting on our own words. The logically possible worlds are defined by words, with sets of atomic statements. To know these worlds is to know the words that define them. Mathematical worlds are nothing more than what we define. Nothing is hidden because they are our work. We can know everything about them because we determine what they are.

Is mathematical truth invented or discovered?

Both, because inventing is always discovering a possibility.

When we invent, we change what is actual, but we do not change the space of all possibilities. What is possible is possible whatever we do. We often act to make accessible what was previously less accessible, but it is never a matter of making the impossible possible; we only change the possibilities relative to our current situation. When we make the possible impossible, these too are only relative possibilities. The space of absolute possibilities, whether logical or natural, does not depend on us.

It suffices to explain how we reason about our own words to show how we acquire mathematical knowledge about finite structures, because they are defined with finite sets of atomic statements.

Knowledge about infinite mathematical structures is more difficult to understand. They are defined with infinite sets of atomic statements. We know these infinite sets through their finite definitions. Two processes are fundamental for defining infinite sets:

  • Recursive constructions

We give ourselves initial elements and rules which make it possible to generate new elements from the initial elements or from already generated elements. For example, we can start from the single initial element 1 and use the rule of generating (x + y) from x and y. The infinite set is then defined by saying that it is the unique set that contains all the initial elements and all the elements generated by a finite number of applications of the rules, as illustrated in the sketch after this list.

  • The definition of the set of all subsets

As soon as a set x is defined, the power set axiom allows us to define the unique set that contains all the sets included in x.
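
Here is a minimal Python sketch of these two processes, restricted to finite approximations so that they can be computed; the bound on the number of generation steps is only an illustration.

```python
# A sketch: a recursive construction and a power set, restricted to finite
# approximations so that they can be computed.
from itertools import chain, combinations

def generated_set(initial, rule, steps):
    """The elements obtained from the initial elements by at most `steps`
    rounds of application of a binary generation rule."""
    elements = set(initial)
    for _ in range(steps):
        elements |= {rule(x, y) for x in elements for y in elements}
    return elements

# Starting from the single element 1 and closing under (x + y):
print(sorted(generated_set({1}, lambda x, y: x + y, steps=3)))
# [1, 2, 3, 4, 5, 6, 7, 8]

def power_set(x):
    """A list of all the sets included in the finite set x."""
    x = list(x)
    return [set(s) for s in chain.from_iterable(combinations(x, r)
                                                for r in range(len(x) + 1))]

print(power_set({1, 2}))   # [set(), {1}, {2}, {1, 2}]
```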

To explain mathematical knowledge it is necessary to explain how we are able to reason correctly about the infinite sets that we define.

This article is issued from Wikibooks. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.