Contributions of Extended Determinism
to Rational Thinking
Version date: November 4, 2008
This text summarizes, in 15 times fewer pages, the main ideas of the book:
"A Bridge between Science and Philosophy for Rational Thinking"
French versions of this text are available in PDF and HTML formats.
The book shows first that philosophical determinism does not keep its promise when it asserts that it is possible to predict the future and to mentally reconstruct the past.
It then shows how the principles of causality and of scientific determinism are natural consequences of fundamental properties of the Universe.
It then clarifies those two principles, and extends their definition so that they govern the evolution properties of all laws of nature. Those laws then follow extended determinism, whose constructive definition structures it like an axiomatic system; we then prove that it is the only principle that governs all physical laws of evolution.
The book then shows how randomness and chaos intervene only in specific situations of nature, and how extended determinism takes all those situations into account. It also shows how limits to predictability originate in various forms of imprecision, of complexity, and of nature's refusal of precision.
Since rational decisions require understanding and predicting, they require knowing extended determinism. The book uses recent scientific advances in quantum physics and genetics to show the limits of our ability to predict the results of evolutions and to obtain the required precision.
The book then draws the consequences of extended determinism on rational thinking: in spite of his free will, man remains enslaved by the desires originating in his genetic inheritance, his acquired culture and knowledge, and his living context. The book explains how he can, nevertheless, follow the precepts of critical rationalism to find scientific truths, and to what extent he can understand the world and himself.
Last, the book shows the absurdity of pseudo-scientific notions such as "The anthropic principle". It also describes the modern scientific solution of the old philosophical issue of the "First cause".
This easy-to-read book is therefore a contribution to rational thinking intended for intellectuals with modest scientific background who wish to bring it up-to-date in the fields of quantum physics, cosmology, information technology and genetics.
Given the length of the book's complete text, almost 500 pages [Book], it is recommended to read first this summary of its main ideas, which is 15 times shorter [Summary]. All scientific terms such as "eigenvalues" and "matter waves" are explained in the book; understanding them fully is not necessary in this introductory text.
[Summary] "Contributions of Extended Determinism to Rational Thinking" (35 pages) - http://www.danielmartin.eu/Philo/Summary.pdf
The traditional definition of determinism was published by the French mathematician, physicist and astronomer Pierre-Simon de Laplace in his 1814 book "A Philosophical Essay on Probabilities":
"We should consider the present state of the Universe as the effect of its previous state and the cause of the state that will follow. An intelligence which, at a given time, would know all of the forces that govern nature and the respective states of all its beings – assuming it is vast enough to analyze that data – would grasp in the same formula the movements of the largest bodies of the Universe and those of its lightest atom; nothing would be uncertain for it, the future and the past alike would stand before its eyes."
(That intelligence is often called "Laplace's demon").
According to this founding text, philosophical determinism asserts that:
§ The future is completely determined by the present;
§ The future is completely predictable given perfect knowledge of the present;
§ Perfect knowledge of the present suffices to mentally reconstruct all of the past;
§ For each present situation there is a single causal chain (of events or situations) that starts infinitely far in the past and extends infinitely far in the future.
Philosophical determinism, which promises the possibility to predict all of the future and to mentally reconstruct all of the past, is contradicted by several phenomena of nature quoted in the book. Since a single counter-example suffices to contradict an assertive statement, here is one.
The atoms of a sample of uranium 238 will decay (disintegrate) spontaneously, without any cause other than the passage of time; an atom of uranium will decay into an atom of helium and an atom of thorium. The number of atoms of uranium 238 that decay per unit of time follows a known law: 50% of the atoms of a sample of arbitrary size will decay in a fixed amount of time T called "the half-life of uranium 238"; then half of the rest (one quarter) will decay during the next period of time T; then half of the rest (one eighth) will decay during the next period of time T, etc.
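The statistical law above can be illustrated with a short simulation (an illustrative sketch of mine, not taken from the book): each atom is given, independently, a 50% chance of decaying during each half-life period, and the population then behaves exactly as the law predicts, even though nothing determines which particular atoms decay.

```python
import random

def simulate_decay(n_atoms: int, n_half_lives: int, seed: int = 42) -> list[int]:
    """Count the atoms surviving after each half-life period T.

    Each atom has, independently, a 1/2 probability of decaying
    during any period T -- nothing predicts *which* atoms decay.
    """
    rng = random.Random(seed)
    survivors = n_atoms
    history = [survivors]
    for _ in range(n_half_lives):
        # Each surviving atom decays with probability 1/2 during this period.
        survivors = sum(1 for _ in range(survivors) if rng.random() >= 0.5)
        history.append(survivors)
    return history

history = simulate_decay(1_000_000, 4)
for period, remaining in enumerate(history):
    print(f"after {period} half-lives: {remaining} atoms remain "
          f"(expected ~{1_000_000 // 2**period})")
```

The statistical law governs the population with great precision (the relative fluctuations shrink as the sample grows), while the fate of any single atom remains unpredictable.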
Natural (spontaneous) radioactive decay is attributed to the instability of the excitation energy of the neutrons and protons of a radioactive atom's nucleus. That energy varies spontaneously – a phenomenon deemed impossible in traditional deterministic physics, because it attributes an atom's decay to chance. Due to a tunnel effect, that excitation energy may sometimes exceed the potential energy that holds the nucleus together (known as the element's fission barrier), causing such a considerable deformation that the nucleus decays. The tunnel effect and its spontaneous nature can only be explained using the mathematical tools of quantum mechanics, which contradict traditional determinism by introducing spontaneous variations of energy levels and probabilities in the occurrence of an event.
Contrary to the promise of philosophical determinism to predict the future, it is impossible to know which atoms will decay during a given period of time, and when a given atom will decay. Radioactive decay follows a statistical law that applies to a population of atoms, but does not predict the evolution of a given atom.
In addition, when a sample contains decayed atoms, it is impossible to know for any one of them at what time it decayed, which contradicts philosophical determinism as a principle for mentally reconstructing past events knowing the current situation.
Therefore, philosophical determinism cannot keep its promises to predict the future and mentally reconstruct the past: this principle is false in the case of radioactive decay. And since, according to the critical rationalism explained in the book, a single counter-example suffices to disprove an assertion, we shall consider philosophical determinism erroneous, even though its definition still appears in some dictionaries.
Because man needs to understand the world around him and to predict the evolution of situations, knowing determinism is important for rational thinking. And since philosophical determinism does not keep its promise to predict, we will examine the issue of understanding and predicting on a less ambitious basis. We will start over from the causal postulate on which philosophical determinism is based, and ignore for the time being its promises to predict the future and reconstruct the past.
Ever since man has existed, he has noticed links between situations and phenomena: a given situation S is always followed by phenomenon P. A natural process of induction led man to assert a general postulate: "The same cause always produces the same effect". Reflecting on the conditions that governed the chains of events he observed, he inferred the causal postulate, stated below as a necessary and sufficient condition:
Definition of the causal postulate
§ Necessary condition: in the absence of the cause, the consequence does not happen. In addition, every observed situation or phenomenon was preceded by a cause, and nothing may exist without having been created.
§ Sufficient condition: if the cause exists, its consequence happens (it is certain).
However, that consequence is an evolution phenomenon, not a final outcome: we renounce the promise to predict the result of the evolution and retain only the postulate that it is initiated.
In some favorable cases, the causal postulate meets the need of rational thinking to understand and predict:
§ The necessary condition allows explaining a consequence by following the flow of time backwards up to its cause;
§ The sufficient condition allows predicting a consequence by following the flow of time forwards from its cause: the evolution is certainly initiated.
In order to better understand and predict, rational thinking requires an addition to the above causal postulate; it needs a rule that guarantees stability (reproducibility) in time and space.
The same cause always produces the same effect: the effect of a cause is reproducible. The physical evolution laws that follow from a given cause are stable; they are the same everywhere and at all times.
Consequently, a stable situation never evolved and never will; it is its own cause and its own consequence! Taking into account an evolution after time t requires changing the definition of the observed system. In fact, the flow of time can only be observed when something changes; if nothing changes, time seems to stop. The stability rule is not trivial; one of its consequences is Newton's first law of motion, the law of inertia:
"The velocity vector of a body which is motionless or moves in a straight line at constant velocity will remain constant as long as no force acts on the body."
As far as determinism is concerned, this law implies that motion in a straight line at a constant velocity is a stable situation that will not evolve until a force is applied to the body; such a stable situation is its own cause and its own consequence!
The stability rule allows inducing a physical law of nature from a collection of cause-consequence sequences: after seeing the same cause-consequence sequence many times, I postulate that the same cause always produces the same consequence. We may now group the causal postulate and the stability rule to form a principle that governs all laws of nature describing a time evolution, the postulate of scientific determinism.
The postulate of scientific determinism governs the time evolution of a situation due to laws of nature, in accordance with the causal postulate and the stability rule.
The deterministic nature of a law of the Universe does not entail the predictability of its results or their precision. Philosophers who believe the opposite are mistaken.
In the definitions of the causal postulate and of scientific determinism, we renounced predicting evolution results. Since we know that a cause initiates the application of a law of nature, predicting an evolution result requires predicting the result of such a law.
Nature recognizes situations that are causes and automatically initiates the applicable laws each time, but it does not know the concept of a result, a notion of interest only to humans. This remark allows us to eliminate right away a cause of unpredictability independent of nature: supernatural intervention. Obviously, if we admit that a supernatural intervention may initiate, prevent or alter an evolution, we renounce predicting its result. We will therefore postulate materialism; we will also assume that no intervention originating outside our Universe or independent of its laws is possible. The opposing doctrines of materialism and spiritualism are described and debated in Part 2 of this book, before Part 3, which is devoted to determinism.
Three types of reasons that prevent predicting the result of a deterministic law of evolution are imprecision, complexity and chance.
Since the causal postulate and scientific determinism do not promise to predict a result, they do not promise to predict its precision either, when it is predictable; and this is regrettable since man often needs precise results.
Here are cases where the precision of the calculated or measured result of an evolution law may be considered inadequate by man.
Imprecision of the initial values of an evolution, or of a result's measurement
An evolution law applies to variables. If those variables are known with insufficient precision, the calculated result may also be too imprecise. If a quantity is measured, that measurement's precision may be inadequate.
Imprecision or non-convergence of calculations
If the calculations required by a formula, or to solve an equation, are not sufficiently precise, the result may be imprecise. This problem is serious, for example, when solving a system of equations requires inverting a matrix with thousands of rows and columns: inadequate precision may make the matrix appear degenerate, which makes calculating the inverse impossible, or may simply produce a result that is insufficiently precise.
When a physical phenomenon has a mathematical model, a computing algorithm in the model may sometimes be unable to provide its result, for example because it converges too slowly. Sometimes, the algorithm stops because a calculation is impossible: the book shows such a case in wave propagation.
Sometimes a very small variation of a phenomenon's initial data, too small to be controlled, produces a considerable and unpredictable variation of the result, even though the phenomenon's law is precise. This happens, for example, for the direction in which a pencil standing vertically on its tip will fall. It also happens when predicting the position, thousands of years ahead of time, of an asteroid whose motion is perturbed by the attraction of planets.
Chaos is a phenomenon that amplifies effects enough to switch from one solution of a mathematical model to another. It occurs, for example, in turbulent flows of liquids and in genetic evolution of species, often producing solutions grouped near particular points of phase space termed attractors. In practice, chaos limits the predictability horizon.
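Sensitivity to initial conditions can be seen in a classic textbook example (my illustration, not the book's): the logistic map x → r·x·(1−x) in its chaotic regime (r = 4). Two trajectories starting 10⁻¹⁰ apart become completely different after a few dozen iterations, even though each step is perfectly deterministic.

```python
def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 60) -> list[float]:
    """Iterate the deterministic logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.4)
b = logistic_trajectory(0.4 + 1e-10)  # initially indistinguishable from a

for n in (0, 10, 30, 60):
    print(f"step {n:2d}: difference = {abs(a[n] - b[n]):.3e}")
```

The initial difference roughly doubles at each iteration, so an uncontrollable imprecision of 10⁻¹⁰ dominates the result long before step 60: this is the predictability horizon mentioned above.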
The book quotes several laws of physics where nature limits precision. Examples:
§ When a corpuscle moves in a field of electromagnetic force, its position and velocity cannot be determined with an uncertainty better than half the width of the accompanying wave packet. No matter how fast a photograph is taken (in a thought experiment), the corpuscle will always appear fuzzy.
Worse still, the more precisely the position is determined, the less precisely the velocity can be, and vice-versa.
§ Nature's refusal of precision may cause quantum fluctuations. Example: at a point of empty space between atoms, or even between galaxies, energy may vary suddenly without any cause other than nature's refusal of precision and stability. This energy variation ΔE may be all the greater as its duration Δt is short. On average, however, the energy at the fluctuation point remains constant: if nature "borrows" energy ΔE from surrounding empty space, it returns all of it less than Δt seconds later.
This phenomenon is far from negligible: a short while after the Big Bang when the Universe was born, it caused the formation of areas of high energy density that later became galaxies. From a predictability standpoint, it is impossible to predict where a fluctuation will occur, or when, or with what energy variation ΔE.
§ At atomic scale, nature allows superpositions of equation solutions. An atom may travel several trajectories simultaneously, producing interference fringes in Young's experiment, when it interferes with itself by going through two parallel slits several thousand atom diameters apart.
A molecule may be in several states at the same time. Example: quantum mechanics predicts that an ammonia molecule NH3, whose shape is a tetrahedron, may have its nitrogen atom vertex on one side or the other of the plane of its 3 hydrogen atoms. It predicts that this plane (whose 3 hydrogen atoms are light) may spontaneously switch to the other side of the (heavy) nitrogen atom vertex because of tunnel effect, without any intervening physical force or absorption of a photon's energy. The hydrogen triangle may oscillate between the two symmetrical positions with a frequency in the range of centimetric wavelengths. This prediction of quantum mechanics is confirmed by radio astronomy observations, both in light absorption and emission by ammonia molecules of interstellar space.
When an experiment determines the state of an NH3 molecule, nature chooses randomly which of the two symmetrical states it will reveal. Its choice is not completely random, it is an element of a predefined set of two elements called spectrum of eigenvalues of the experimental setup: natural randomness is limited to the choice of one of the values of the spectrum, all values of which are known precisely. In the case of the above ammonia molecule, nature chooses between two solutions, each with a certain predefined energy and shape.
§ Nature's refusal to satisfy man's need to know is spectacular in the non-separability phenomenon. The book quotes an experiment where two photons produced together (termed entangled photons) make up a single whole object even when the photons are 144 km apart: if one is absorbed, the other disappears immediately; the consequence is propagated from one to the other at infinite speed since they are part of the same initial object, which conserves its wholeness while it is deformed by the photons' motions.
In quantum physics, many human wishes of result prediction, precision or stability are denied by nature.
Relativity and causality
The book describes in detail a property of space-time, due to the speed of light, which compels one to reflect on the definition of the causality that governs the transition from one event to another. In certain specific cases, two events A and B may be seen by some observers in the order A then B, and by others in the order B then A! The first group of observers will know that A occurred before B, and will draw consequences different from observers of the second group, who will see B appear before A.
The overall effect of many perfectly deterministic phenomena may be unpredictable, even if each phenomenon is simple and its result is predictable. Example: consider a small closed container that holds billions of identical molecules of a liquid or a gas. Since these molecules have a temperature above absolute zero, they keep moving; their kinetic energy results from their temperature. Their agitation makes them bounce off each other and against the container's inner surface, their motion obeying well-known deterministic laws. In spite of their deterministic motions, it is impossible to know the position and velocity at a given time t of all molecules, because there are too many; therefore, it is impossible to calculate (predict) the position and velocity one second later of one particular molecule, because in the meantime it has bounced thousands of times against other moving molecules and against the container's inner surface.
That impossibility is very general: the combined effect of many deterministic phenomena with predictable individual evolutions is an unpredictable evolution, whether these phenomena are of the same type or not. From a philosophical point of view, we can assert that the complexity of a phenomenon with deterministic components generally produces an unpredictable evolution.
In theory that unpredictability does not exist, but in practice it does. It does not affect nature, which never hesitates or predicts the future, but it prevents man from predicting what nature will do. In addition, nature's unpredictability grows with the number of simultaneous phenomena, their diversity, and the number of their interactions.
Actually, interactions between phenomena also affect their determinism. An evolution whose result affects the initial conditions of another evolution affects its stability rule, therefore also the reproducibility of its determinism, which hinders even more the prediction of its result.
That is why, even though the most complex phenomena (the phenomena of living beings, of man's psyche, and of human society) are based only on predictable deterministic physical evolutions, their results are generally so unpredictable that man is under the impression that nature acts arbitrarily. We shall come back to this issue below, and the book goes much deeper into the subject of complexity.
From a philosophical point of view, we should stop believing in chance as a principle of unpredictable behavior of nature. The Schrödinger equation of evolution, whose results are probabilistic matter waves, is deterministic in the traditional sense, and so is Newton's second law of motion, which is also based on energy conservation: a given initial situation always produces the same result, which is sometimes a set of results instead of a single result. No unpredictability there, nature is never unorthodox; in a given situation, its reaction is always the same.
Man must get used to the fact that some situations produce multiple consequences: either several laws of evolution acting in parallel, each producing a single result; or a single law of evolution producing multiple results. When man wants to know the result of evolution (for example using a measuring device), nature chooses one randomly among those resulting from the initial situation and displays it.
Nature's choosing process follows a simple rule governed by a form of determinism that applies to a set of alternatives instead of applying to a single alternative: if a given experiment is iterated a large number of times, each possible alternative appears the same number of times. This set determinism also governs other phenomena; example: radioactive decay of uranium 238, where determinism governs the proportion of decaying atoms per unit of time, not the choice of a particular atom that will decay.
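This "set determinism" can be sketched in a few lines (an illustrative model of mine, not the book's): repeating a two-outcome experiment, such as the measurement of the two symmetrical NH3 states discussed above, many times. No single outcome is predictable, yet the frequencies of the alternatives are, assuming here that the two alternatives are equiprobable.

```python
import random

def measure_states(n_trials: int, seed: int = 7) -> dict[str, int]:
    """Repeat a two-outcome experiment; each outcome has probability 1/2.

    Models nature choosing one eigenvalue from a predefined
    two-element spectrum (e.g. the two symmetrical NH3 states).
    """
    rng = random.Random(seed)
    counts = {"state A": 0, "state B": 0}
    for _ in range(n_trials):
        counts[rng.choice(["state A", "state B"])] += 1
    return counts

counts = measure_states(100_000)
for state, n in counts.items():
    print(f"{state}: {n} times ({n / 100_000:.1%})")
```

Determinism governs the set (each alternative appears close to 50% of the time, ever more precisely as trials accumulate), not the individual choice.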
Similarly, there is no randomness in the position, the velocity or the energy of a corpuscle, there is indetermination, a refusal of nature to grant us the possibility of infinite precision that would make us feel comfortable; and this refusal is due to the wavelike nature of each corpuscle.
The unpredictability associated with local energy fluctuations is not due to chance, either. It is a consequence of Heisenberg's uncertainty principle, which states that during a short time interval Δt, an energy is defined only to within an uncertainty ΔE, where ΔE·Δt ≥ ℏ/2 (ℏ, the reduced Planck constant, being a constant of the Universe). Those fluctuations only embody a refusal of precision on the part of nature; a refusal that only lasts for a short while and does not alter the average local energy. We should accept the existence of those fluctuations as we accept the imprecision of the position of a moving corpuscle, located "somewhere" in its wave packet: in none of those cases does nature act arbitrarily. Other examples of nature's limited precision are given in the book in sections that describe chaotic phenomena.
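As a rough order-of-magnitude sketch (mine, not the book's), the relation ΔE·Δt ≥ ℏ/2 fixes the minimum energy uncertainty that goes with a fluctuation of a given duration:

```python
HBAR = 1.054_571_817e-34  # reduced Planck constant, in J*s (CODATA value)
EV = 1.602_176_634e-19    # one electron-volt, in J

def min_energy_uncertainty(dt_seconds: float) -> float:
    """Smallest Delta-E (in joules) compatible with a time interval dt,
    from Delta-E * Delta-t >= hbar / 2."""
    return HBAR / (2.0 * dt_seconds)

# The shorter the fluctuation, the larger the possible energy variation.
for dt in (1e-15, 1e-18, 1e-21):
    de = min_energy_uncertainty(dt)
    print(f"dt = {dt:.0e} s  ->  Delta-E >= {de / EV:.3g} eV")
```

This is why very brief fluctuations can involve considerable energies, as in the early-Universe fluctuations mentioned below, while leaving the average local energy unchanged.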
§ Randomness affects the predictability of consequences, not consequences proper (evolution laws or situations); predictability is a human wish nature ignores.
§ In nature's laws, randomness occurs only when an element is chosen in a predefined set of values of a measurable quantity or of applicable laws of evolution.
§ A random choice by nature always obeys one of its laws, nature choosing an alternative among the solutions allowed by that law. The choice never violates another law; in particular, it never violates thermodynamics or conservation of matter+energy.
§ Let us not confuse chance (unpredictability) with indetermination (nature's refusal to be precise).
§ More generally, determinism and predictability are different concepts: the latter does not necessarily result from the former (definition of scientific determinism).
We now know that there are three types of reasons that prevent or limit the prediction of consequences: imprecision, complexity and chance. The latter compels us to make the causal postulate clear: in the sentence "if the cause exists, its consequence happens", we must allow the consequence of a situation to be plural: multiple consequences.
Imprecision, complexity and chance reflect the intrinsic nature of the laws of the Universe, which man cannot circumvent and against which rebelling is out of the question. Therefore, predicting a result (or results) must be handled case by case, each law being a particular case.
Let us get to the bottom of this subject. We saw above, in the section about chance, that in some situations nature had multiple reactions:
§ Either by initiating several evolution laws simultaneously, each law acting independently and providing a single result.
This happens, for example, in quantum physics, when the trajectory of a corpuscle between a point A and a point B comprises an infinite number of simultaneous trajectories, each taking a different path with a different velocity vector, but all trajectories ending in B at the same time.
This also happens when a corpuscle's trajectory is defined, at each moment, by a packet of superposed waves. Those waves are matter waves that describe presence-probability amplitudes, which add up taking their phases into account. At a given time, if we could see the corpuscle, it would appear fuzzy near the center of the wave packet, as if it were composed of an infinite number of imprecisely superposed corpuscles.
But man never sees several consequences at the same time, he can only see their result (or results); and in the case of a corpuscle traveling with its wave packet, that result, at a given moment, is a fuzzy position and an imprecise velocity.
§ Or by initiating a single evolution law giving multiple superposed results that exist simultaneously.
That state superposition may persist for a while only at atomic scale. At macroscopic scale, the interaction between the state superposition and the environment (which occurs, for example, during a physical measurement) terminates the superposition and communicates to an observer only one of the superposed states, chosen randomly. The transition from the superposed states to the unique state is termed decoherence, and it is irreversible.
In each particular situation, in order to predict its evolution and the result (or results) of that evolution with the maximum precision allowed by nature, we shall now take into account all of the laws of nature, by redefining determinism in a constructive way and terming it extended determinism.
The book provides a detailed explanation of the Universe's properties that lead us to postulate causality. In this introduction, I will only state those properties.
Properties of the Universe from which causality is derived
§ By uniformity of the Universe, I mean its homogeneity (same properties everywhere) and isotropy (at each point, same properties in all directions).
§ By stability of the Universe's properties, I mean the stability rule (reproducibility through time and space) of scientific determinism.
§ By coherence of the Universe's laws, I mean that they complement each other without ever contradicting each other. To be precise, they respect the three fundamental principles of logic: non-contradiction, excluded middle and identity, enunciated in the book.
§ By completeness of the Universe's laws, I mean the fact that nature has all the laws required to react to all situations and account for all phenomena (this is Kant's postulate of complete determination).
In short, nature never improvises; it does not have occasional laws; randomness is limited to choosing between predetermined evolution laws or between predetermined results of a particular law.
In the rest of this text, we shall postulate the uniformity, stability, coherence and completeness of the Universe's laws, and we shall define extended determinism as follows:
Usually a definition describes a word's meaning. Since such a descriptive definition is not suitable for extended determinism, I use below a constructive definition that allows an infinite extension of this notion deduced from properties of the Universe's laws.
Construction: extended determinism first includes scientific determinism, defined above. Then it includes the evolution rules of all the laws of nature, incorporated as follows:
§ We consider all the laws of evolution of the Universe, one by one, in an arbitrary order.
§ Consider one of those laws. If its evolution rule is already included in extended determinism, we ignore it and consider the next law; if its evolution rule is not included yet, we incorporate it in the definition of extended determinism.
§ Whenever we incorporate the evolution rule of a new law, we verify its coherence with rules already included, in order to conform to nature where no evolution law contradicts another law in a given situation. In principle, this verification is useless if the wordings of the laws respect the coherence rule of the Universe's laws.
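The construction procedure above can be sketched as a tiny program (an illustrative toy model of mine, not the book's; the names and the notion of "rule" are simplifications): laws are incorporated one by one, duplicates are skipped, and a contradiction with an already-included rule is rejected.

```python
def build_extended_determinism(laws):
    """Incorporate each law's evolution rule once, rejecting contradictions.

    `laws` is an iterable of (name, (situation, evolution)) pairs. In this
    toy model, two rules "contradict" each other when they map the same
    situation to different evolutions.
    """
    included = {}  # situation -> evolution rule
    for name, (situation, evolution) in laws:
        if included.get(situation) == evolution:
            continue  # rule already included: ignore this law
        if situation in included:
            raise ValueError(f"law {name!r} contradicts an included rule")
        included[situation] = evolution  # incorporate the new rule
    return included

laws = [
    ("inertia", ("no net force", "constant velocity")),
    ("free fall", ("released near Earth", "uniform acceleration")),
    ("inertia restated", ("no net force", "constant velocity")),  # duplicate: skipped
]
print(build_extended_determinism(laws))
```

The coherence check mirrors the verification step of the construction: in principle it never fires if the laws respect the coherence property of the Universe's laws.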
As defined above, extended determinism is an axiomatic system, whose axioms of facts are the initial conditions of the various evolution laws, and whose deduction axioms are the corresponding evolution rules, according to the following semantics: if a situation satisfies a given set of conditions, then it evolves following a given rule – a rule that may correspond to one evolution law, or to several evolution laws initiated in parallel.
The theoretical validity of that approach was studied and justified by logicians. They showed how an axiomatic system may be complemented gradually, by adding new axioms whenever facts or deduction rules appear that may not be derived from existing axioms, but whose addition is suggested by the field's semantics.
The practical validity of that approach results from its respect of the scientific method, which adds new laws to existing laws or replaces them, as knowledge progresses. The construction of extended determinism adds new rules of evolution from causes to consequences as required by new laws, excluding redundancies and contradictions.
Universality of extended determinism results from its constructive definition, which takes into account all the laws of the Universe: all those that are known at a given moment, and all those that will be discovered subsequently, as they are being discovered.
Uniqueness of extended determinism may be proven as follows. Being an axiomatic system, extended determinism is a set of fact rules and deduction rules, each rule originating (by construction) from at least one law of the Universe. Now consider a second extended determinism, S, assumed distinct from the first, F. Each rule R of S comes from at least one law of nature, a law that was taken into account when building F, since F was constructed from all the laws of nature; therefore, this rule R of S also exists in F. By the same reasoning, each rule of F also exists in S. Therefore, S and F, having the same rules, are in fact the same set. QED.
This chapter proposes an update of the methods for mentally representing reality, consistent with modern science. It enables readers with a modest scientific background to think rationally beyond the simplistic concepts inherited from traditional determinism.
This chapter also clarifies the notion of causality on which nature's determinism is based, and the concept of randomness. It also restates some rules and constraints of rational thinking.
The concepts of cause and consequence relate to nature and its laws as follows.
§ A cause is defined as a situation at a given time, with all of its parameters.
This definition is not trivial. Consider a situation where there is a state superposition that evolves to become a single state. One could think that this is a case where several causes (the superposed states) evolved toward a single consequence, a type of causality different from the usual causality where a single cause evolves toward a set of consequences. With the above definition, the state superposition constitutes a situation to be considered as a whole, a single cause.
§ Nature draws the consequences of a cause automatically and instantaneously, by initiating a set of simultaneous evolutions that will affect some of the cause's parameters (if none were impacted, there would be no evolution).
· This set of evolutions may contain a single element, one evolution only: if I let go of a stone, it falls; that fall is the only evolution, and it affects the stone's position and velocity parameters.
· This set of evolutions may contain several simultaneous evolutions: a particle may travel an infinite number of trajectories at the same time, those trajectories being solutions of the Schrödinger equation; there is a most likely trajectory calculated from the individual trajectories weighted by their probabilities.
An interesting particular case: the particles of a group described by a global quantum state (such as a pair of entangled photons) may retain some of the state's properties even when the group's particles move away from one another. If an event affects one of the particles (example: one of the group's entangled photons is absorbed) the event's consequences are instantly propagated to all the group's other particles "at infinite velocity": this phenomenon is called non-separability.
Evolutions initiated simultaneously may last an indeterminate amount of time, forming a superposition. This superposition ends with decoherence, in general after a very short time. The stronger the interaction between the evolving system and its environment and the greater the difference between the various superposed states (which makes the system unstable), the shorter the time before decoherence occurs.
§ Natural instability may be a cause of evolution.
· The instability may result from kinetic energy due to temperature, and produce ceaseless agitation known as Brownian motion.
· The instability may produce energy fluctuations (see below).
· The instability of nonlinear systems (example: dissipative systems such as living beings) far from thermal equilibrium may cause evolution (such as the evolution of species).
· The instability may be a consequence of a dynamic system's nature; it is then reflected in its differential equations.
§ Nature ignores the concept of "result of an evolution", which is exclusively human, just like the wish to predict the result(s) of a set of simultaneous evolutions; nature ignores finality, contrary to the spiritualist doctrine.
By definition, a result is the set of values of all parameters we humans are interested in, considered at a given time. It may be produced by one or several evolutions initiated by a given cause.
The single evolution initiated by a given cause may produce several superposed results, of which decoherence ultimately retains only one, chosen randomly. Equivalent assertion: a given cause may produce several simultaneous evolutions, each with a single result, and decoherence will retain only one result, chosen randomly.
§ A given cause only initiates a set of evolutions; it does not guarantee their duration, or the predictability of their results, or the precision of a given result: these are human subjects of interest.
§ Each initiated evolution obeys a physical law, following the deterministic principle "The same cause always produces the same effect". This principle implies:
· The time and space stability of physical laws.
· The absence of randomness in the choice of the single law that applies when the situation is not chaotic (randomness may occur in the choice of the numerical result of a law that produces a predefined set of eigenvalues, i.e. numerical results).
When the situation is chaotic, the applicable law is chosen randomly among a predefined set of laws. Examples: turbulent fluid flows, long-term evolution of asteroid orbits perturbed by planetary attractions.
§ Nature has all the laws required to react to all situations. It never improvises a consequence and never forgets to draw one.
§ Physical laws make up a coherent collection: in a given situation, if nature initiates several evolution laws simultaneously, those laws complement each other (as the multiple trajectories of a given particle do) without ever conflicting with each other. Example: the various virtual trajectories of a single particle which travels them all at the same time (trajectories with different lengths) are covered at velocities such that the particle reaches its destination at a single time, not at times that depend on individual trajectories.
§ Physical laws obey a number of symmetries (i.e. invariances, conservations) due to the uniformity of the Universe (time and space homogeneity, isotropy), to the left-right symmetry of space, etc.
§ Nature ignores the concepts of scale applied to space and time, which are only convenient abstractions of the human mind. A physical law applies at all scales, but its effects may be negligible or too difficult to calculate at certain scales. Some phenomena are modeled by geometrical structures called fractals, which have the same shape regardless of scale (magnification).
§ Causality and determinism are affected by Relativity, whose laws are laws of nature:
· If two simultaneous events have distinct locations, one cannot be the cause of the other; but seen by a moving observer, they are no longer simultaneous and causality is still impossible.
· If two non-simultaneous events occur in locations that are too far apart for light to have enough time to travel from one to the other during the time interval between the events, then one cannot be the cause of the other.
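The light-travel criterion above can be sketched as a simple check (the speed of light is the standard value):

```python
# Sketch of the Relativity constraint on causality: event A can be a
# cause of event B only if light can cover the spatial distance between
# them within the time interval separating them.
c = 2.998e8  # speed of light in vacuum, m/s

def causally_connectable(distance_m, interval_s):
    """True if a signal at speed <= c can link the two events."""
    return distance_m <= c * interval_s

# Two events one second apart:
print(causally_connectable(2e8, 1.0))  # True: light covers ~3e8 m in 1 s
print(causally_connectable(4e8, 1.0))  # False: too far apart for light
```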
§ Causality and determinism apply to situations and evolutions of nature, not to human thinking, whose non-deterministic and unpredictable nature is explained below.
Many people divide nature's phenomena into two categories: those that may be explained by laws of nature (phenomena called deterministic in the traditional sense), and those due to chance (random phenomena). They believe that the latter cannot be predicted, that nature may do anything in their case, and that two identical situations may evolve differently.
Since those people too often describe as random a phenomenon they misunderstand, thus finding nature unpredictable, this section clarifies the role of randomness in physical laws using a few examples.
The book describes examples of cases where the description of nature is necessarily probabilistic (i.e. probabilities are indispensable for every representation of reality). This is the case, for example, of Louis de Broglie's matter waves, which associate a probability wave with every moving object: a moving object is accompanied by a packet of probability waves, and its position at a given time cannot be known except using a function that defines the probability density of presence at each point of space.
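As an illustration, the de Broglie wavelength λ = h/(m·v) associated with such a moving object can be computed directly; the electron velocity chosen here is purely illustrative:

```python
# De Broglie wavelength lambda = h / (m * v) of a moving electron.
h = 6.626e-34    # Planck's constant, J.s
m_e = 9.109e-31  # electron mass, kg
v = 1.0e6        # illustrative velocity, m/s (not from the text)

lam = h / (m_e * v)
print(f"de Broglie wavelength: {lam:.2e} m")  # ~7.3e-10 m, atomic scale
```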
The probability density (probability per unit of volume) d at point M is a positive number from which one can calculate the probability of presence p(M, ΔV) in a volume ΔV around M using the formula p(M, ΔV) = d · ΔV.
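The formula p(M, ΔV) = d · ΔV can be illustrated with an assumed Gaussian probability density; the spread σ and the small volume below are illustrative values, not taken from the text:

```python
import math

# Sketch: probability of presence in a small volume dV around a point,
# p = d * dV, for an assumed Gaussian density centered at the origin.
sigma = 1.0e-10  # assumed spread of the wave packet, m

def density(r):
    """3D Gaussian probability density (per unit volume) at distance r."""
    return (2 * math.pi * sigma**2) ** -1.5 * math.exp(-r**2 / (2 * sigma**2))

dV = (1.0e-12) ** 3    # a small volume, m^3
p = density(0.0) * dV  # p(M, dV) = d . dV at the point of maximum density
print(f"probability in dV: {p:.2e}")  # a small number between 0 and 1
```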
Matter waves can penetrate matter or fields of force much as sound waves penetrate solid walls: inside such a region, the probability of presence of a moving body is non-null. This penetration or crossing of matter or of an electric field is very frequent; it is called the tunnel effect; it is used, for example, in some transistors.
The experimental proof of the existence of matter waves was observed in 1922-1923, when the Compton Effect was discovered. As a consequence of this effect, the position or the dimension of a moving object with mass m cannot be defined with an accuracy better than h/mc, a quantity known as the Compton wavelength λc (where h is a constant of the Universe known as Planck's constant, and c is another constant of the Universe, the speed of light in a vacuum). The width of the packet of matter waves that accompanies a moving particle is approximately equal to λc, so it is illusory to attempt to define its position or dimension with an error less than λc. The existence of λc compelled physicists to abandon another certainty that seemed intuitively obvious, the certainty that an object's position or dimension may be arbitrarily precise: there is a minimum uncertainty.
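A quick sketch of the Compton wavelength h/mc for an electron, using standard values of the constants:

```python
# Compton wavelength lambda_c = h / (m * c) of an electron: the minimum
# meaningful precision on its position or dimension.
h = 6.626e-34    # Planck's constant, J.s
c = 2.998e8      # speed of light in vacuum, m/s
m_e = 9.109e-31  # electron mass, kg

lambda_c = h / (m_e * c)
print(f"electron Compton wavelength: {lambda_c:.3e} m")  # ~2.43e-12 m
```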
We should not interpret a moving object's probability wave at time t as a necessary ignorance of its position due to chance, but as the fact that a precise position is an oversimplifying abstraction of the human mind, an abstraction that misrepresents reality at atomic scale. The concept of precise position at time t is replaced with an area of fuzzy presence, where each point of space has a probability density of presence.
Nature appears unpredictable only if we demand that it behaves with a precision that it cannot provide. Instead of considering the moving object's position at time t as random, we should consider it as fuzzy, near a point of maximum probability of presence.
If we photographed the moving object in a thought experiment that allows an infinitely short exposure to prevent it from moving, its image would still appear fuzzy.
In short, instead of talking about randomness, we should alter our representation of reality; for a moving object, we should replace the notion of precise position with a notion of presence area whose limit is fuzzy, an area where the probability density of presence, maximum at a certain point, decreases rapidly with distance from it.
The fuzzy position of a moving object is not the only imprecision at atomic scale, where some contours are also imprecise, therefore having imprecise dimensions.
§ A proton's radius is imprecise, about 0.8 × 10⁻¹⁵ m. It is a sort of ball of positive electric charge, surrounded by a "skin" where this electric charge decreases very fast when the distance to the center increases.
§ The size of the nucleus of some heavy atoms is about 10⁻¹⁴ m. This size is imprecise because its shape keeps changing: such a nucleus is constantly deformed by the agitation of its protons and neutrons. Since that agitation has no cause external to the nucleus, we must admit that nature can cause ceaseless deformations without external cause, and that those deformations can continue forever because they have no friction!
This agitation may become so intense that it exceeds (through the tunnel effect) the resistance limit of the nucleus, which then breaks spontaneously, producing smaller nuclei. This happens, for example, when an atom of uranium 238 decays. Therefore, some types of atoms may decay spontaneously, without external influence.
This brings about a philosophical conclusion: nature allows mechanical or energy instability that may be permanent and constitute a cause of evolution.
We should therefore imagine the surface of some atomic particles like the surface of a star, a gaseous mass with a fuzzy external limit constantly deformed by coronal mass ejections. Huge stars and tiny atomic particles both have blurred contours, imprecise dimensions and ceaseless matter agitation.
In addition to the inaccuracies on a particle's shape and position, there is an inaccuracy on its velocity, a variable also defined by a wave packet. And last but not least, quantum mechanics shows that the simultaneous knowledge (for example when measuring) of the two variables of certain pairs, such as position x and momentum p, or energy E and time t, has a minimum uncertainty: the product Δx·Δp of the uncertainties on position and momentum, or ΔE·Δt on energy and time, cannot be less than ½ħ, where ħ = h/2π; so if the inaccuracy on a moving particle's position is small, the inaccuracy on its velocity is large, and vice-versa. This limitation is known as Heisenberg's uncertainty principle, and it shows nature's refusal to abide by our preconceptions about the precision of measurable variables. Whether we like it or not, nature's precision is limited when it concerns the knowledge of variables of motion.
(Heisenberg's uncertainty principle is in fact a theorem that is rather easy to prove. It is a consequence of a mathematical property of certain pairs of operators that represent physical variables, operators that do not commute.)
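The numerical impact of the relation Δx·Δp ≥ ½ħ can be sketched for an electron; the confinement size chosen here is an illustrative atomic dimension:

```python
# Heisenberg relation: dx * dp >= hbar / 2.  Given a position
# uncertainty dx, the minimum momentum and velocity uncertainties follow.
hbar = 1.0546e-34  # reduced Planck constant h / (2 pi), J.s
m_e = 9.109e-31    # electron mass, kg

dx = 1.0e-10              # confine an electron to ~1 angstrom (illustrative)
dp_min = hbar / (2 * dx)  # minimum momentum uncertainty, kg.m/s
dv_min = dp_min / m_e     # corresponding velocity uncertainty, m/s
print(f"dv_min: {dv_min:.2e} m/s")  # ~5.8e5 m/s: far from negligible
```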
The Heisenberg uncertainty principle also results in permanent energy instability. For example, at a given point of the empty space (void) between several stars or atoms, each small volume of space around that point contains energy; that energy may vary spontaneously and randomly, its variation ΔE being all the greater as its duration Δt is short. Nature behaves as if it could spontaneously "borrow" from void space an energy ΔE during a very short time not exceeding Δt seconds, provided that it returns that energy immediately afterwards.
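As an order-of-magnitude sketch, assuming the borrowed energy creates an electron-positron pair, ΔE·Δt ≈ ½ħ bounds the duration of the loan:

```python
# Order-of-magnitude sketch: how long the vacuum can "borrow" the energy
# needed to create an electron-positron pair, from dE * dt ~ hbar / 2.
hbar = 1.0546e-34  # reduced Planck constant, J.s
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s

dE = 2 * m_e * c**2   # rest energy of the pair, J
dt = hbar / (2 * dE)  # maximum lifetime of the fluctuation, s
print(f"dE = {dE:.2e} J, dt = {dt:.1e} s")  # dt ~ 3e-22 s: very short
```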
This energy fluctuation phenomenon contradicts traditional determinism, which asserts energy conservation for an isolated system, and excludes both fluctuations without an external cause and energy borrowing. Traditional determinism is also contradicted by the impossibility to predict where a fluctuation will occur, when it will occur, and how much energy will be borrowed.
Fluctuations also contradict the second law of thermodynamics during very short periods of time: during a fluctuation, entropy first decreases then increases to restore its initial value.
Physicists tried unsuccessfully for many years to find a law describing the emission of heat by a blackbody as a function of its temperature. They were trying to calculate the quantity of heat emitted as electromagnetic energy such as infrared light. As long as they tried to fit a continuous mathematical law to the physical phenomenon, they failed. In 1900, Max Planck showed that the quantity of heat exchanged by waves of frequency ν is always a multiple of a minimum, hν, where h is Planck's constant.
Planck's discovery implied representing the quantity of energy exchanged at frequency ν by a discrete value, an integer multiple of hν, instead of a continuous value that would have allowed each exchange to be arbitrarily small.
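Planck's quantization can be sketched numerically; the frequency chosen is an illustrative visible-light value:

```python
# Energy exchanged at frequency nu is an integer multiple of h * nu.
h = 6.626e-34  # Planck's constant, J.s
nu = 5.0e14    # illustrative frequency (visible light), Hz

quantum = h * nu
exchanges = [n * quantum for n in range(1, 4)]  # allowed exchange values
print(f"quantum h*nu = {quantum:.2e} J")
print(exchanges)  # only these discrete values; nothing in between
```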
Physicists had to abandon their usual continuous functions, so easy to understand intuitively and manipulate mathematically, in favor of a non-intuitive representation; that approach required the same openness of mind as replacing the concept of precise position with a concept of fuzzy presence area with blurred contour. A better understanding of reality was obtained at the cost of a slightly more mathematical representation of its phenomena.
It is useful to know that discontinuity also occurs at macroscopic scale. It affects, for example, all motions with friction, which can only proceed by small leaps. It also affects the energy of mechanical vibrations, which is always a multiple of a quantum called the phonon.
In classical mechanics, a force represented by vector F, acting on a body with mass m, imposes on it an acceleration represented by vector a such that F = ma. This is the best-known equation of Newton's physics, and it is a differential equation, since the acceleration is the second-order time derivative of the body's position. It is a consequence of the conservation of total energy (kinetic energy + potential energy) when a body moves in a field of force.
The same physical law of total energy conservation is valid at both atomic and macroscopic scales because it is a consequence of a property of the Universe: the homogeneity of time, which results in time-invariance of the laws of physics and in energy conservation. When combined with Louis de Broglie's matter waves, better suited to describe the position and motion of a tiny corpuscle, it predicts a trajectory described by the Schrödinger evolution equation, the differential equation that replaces Newton's at microscopic scale.
The Schrödinger equation uses more advanced mathematical tools than Newton's equation does: Hilbert vector spaces, operators and their eigenvalues. Its solutions have the same probabilistic nature as the matter waves they derive from; the notion of trajectory-succession-of-points of Newtonian mechanics is replaced with a moving wave packet: at any time t, the position of the moving particle is defined by the fuzzy contours of that wave packet.
The principle of energy conservation seems so obvious that it is accepted intuitively. However, we just saw that energy may fluctuate at each point of a vacuum. It then "borrows" from the surrounding empty space enough energy to create ephemeral particle-antiparticle pairs, whose annihilation returns the energy to the vacuum after a very short period of time. This phenomenon, frequent at atomic scale, also occurred in the primitive Universe, where it produced energy density variations that gave birth to galaxies; it continues to occur today when black holes "evaporate". We must accept it in spite of its counterintuitive nature, which leads us to believe that "it is impossible to create something from nothing"; equivalently, we must accept that vacuum contains energy.
We must also accept that some physical group properties of a pair of particles born together and described by a global quantum state (such as two photons with opposite polarizations, termed entangled) may remain unchanged even if these particles become separated by several kilometers. In other words, the notion of spatial separation of two particles does not apply to all of their properties; any action that alters one is immediately reflected by an alteration of the other, in zero seconds, regardless of their distance. This property of our Universe, called non-separability, is inconceivable in traditional physics; sometimes determinism applies to a group and not to each of its members considered separately, and its consequences are then propagated instantaneously without energy propagation, and not limited by the speed of light.
We must also accept that many molecular bonds of biochemistry have a probability of forming and a probability of breaking apart. Such probabilities, accounted for by quantum mechanics, cause genomic replication "accidents" responsible for mutations of species; they also explain that some populations have genes that adapt their digestion mechanisms to food available locally; they also explain the occasional presence on a single gene of one extra methyl (CH3) radical (4 atoms only!) that inhibits the gene's expression.
There are many physical phenomena that challenge our intuition. We must accept them without attributing them to chance, to a supernatural cause or to nature's unpredictability. Scientific reality is often more surprising than fiction.
When discussing complexity above, we saw that physiological and psychic phenomena of living beings are based on physical cellular phenomena, each of which is deterministic and evolves with a predictable result. However, the number of those physical phenomena and their countless interactions make the phenomena of living beings incredibly complex. That complexity accounted above for the unpredictability of human thinking. Now let us see some details.
According to a simplistic interpretation of the materialistic doctrine I believe in, thinking is merely an aspect of neural mechanisms: connecting and disconnecting neurons through synapses, and communications through those synapses. Those neural mechanisms are, in turn, based on chemical reactions governed by genetic software, the latter being based on molecular biology, a science which is exact and deterministic (in the extended sense), and based on quantum physics.
However, that genetic software coordinates thousands of chemical reactions, which also depend on countless parameters in fields such as perceptions that travel from the senses and the body toward the brain, body health, information memorized in neurons, etc. Those thousands of interdependent reactions make physiological mechanisms of thinking very complex. In addition to this physical complexity, thinking itself involves a complex software hierarchy, with its conscious or unconscious mechanisms that memorize and retrieve information, perform value judgments of each thought, initiate one thought after another using analogies, inductions, deductions and syntheses, etc.
The fantastic complexity of thinking mechanisms is the main reason why thinking is in general non-deterministic, in spite of its deterministic physical base. The number of interdependent mechanisms is such that the stability (reproducibility) condition of the definition of scientific determinism is seldom satisfied; for example, certain mechanisms that depend on others will not even be initiated when the results of those other mechanisms are altered. Depending on the quantity of neurotransmitters such as dopamine or acetylcholine in some areas of the brain, thinking is very different. Long-term memory, which is also subject to a chemical environment and stimulations that vary with circumstances, may forget or deform memories. The mind often makes up thoughts by intuition or analogy while the individual is not even conscious of the thinking process it went through, and some of those thoughts that are erroneous or undecidable seem acceptable to the automatic mechanism that assesses their value.
That is why the stages and the conclusions of human thinking are in general unpredictable. That is why an individual's thinking is so enslaved to his affects that his reason itself is only a tool at their disposal. That is why a man often favors decisions he knows are irrational or immoral over rational or moral decisions.
For a man with modest scientific background, the effort required to adapt his mental representations to the results of quantum mechanics is similar to the effort required to accept the results of Newtonian mechanics. He first needs to know:
§ That some physical properties such as the shape, position or speed of a moving particle are defined by nature with some fuzziness, just like the blurred image seen through binoculars that are out of focus, where objects' contours are imprecise.
In addition to that fuzziness, nature sometimes limits a measurement's precision, whether it concerns a single quantity or several quantities whose operators do not commute, such as energy and duration.
§ That some measurable quantities, which seem continuous at macroscopic scale, are in fact discontinuous (quantized) at atomic scale. An example is a motionless atom's energy.
§ That the principle of energy conservation of an isolated system may be violated during short time intervals by energy quantities that are borrowed then returned.
It is also necessary to take into account extended determinism in its aspect:
"One cause ® several consequences".
At atomic scale, nature replaces the single consequence-solution of Newtonian physics with a set of consequence-solutions, the points of a fuzzy region of space where each point has a probability density of presence. This results from a mathematical property of the fundamental evolution model of quantum mechanics, the Schrödinger equation, a model whose accuracy in representing reality has been confirmed by countless experiments.
The solutions of the Schrödinger equation happen to be probabilistic matter waves. The randomness introduced by nature at that scale is similar to that of a die throw: the result is deterministic in that it is always drawn from the same set of solutions (a number between 1 and 6 for a die), from which nature finally chooses a single element that cannot be known before the end of the experiment (when the die stops rolling).
Therefore, the second mental effort required is accepting that a single well-defined cause (such as throwing a die or launching an electron) may produce a set of consequences instead of one. The consequence-set is still always the same for a given cause, but it is a set that may have several elements, sometimes even an infinite number.
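This "fixed set, random pick" behavior can be sketched with an ordinary pseudo-random generator (seeded here only to make the sketch reproducible):

```python
import random

# Sketch: a single cause (a throw) always yields the same *set* of
# possible consequences; chance only picks one element from that set.
consequence_set = {1, 2, 3, 4, 5, 6}  # fixed for every throw of a die
faces = sorted(consequence_set)

rng = random.Random(0)  # seeded for reproducibility of the sketch
throws = [rng.choice(faces) for _ in range(1000)]

# Every observed result belongs to the predefined set:
assert set(throws) <= consequence_set
print(sorted(set(throws)))
```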
1st case: choosing a unique solution in the set of consequence-solutions
When a die stops rolling, a solution from the set of consequence-solutions appears, chosen by nature: a number from 1 to 6. When we analyze a photon's polarization with an analyzer that has two base directions, it provides a YES-NO answer for only one of those directions. In those two cases, a given cause (throwing a die or analyzing one photon) produced a single consequence.
The choice is made when the die stops: if we rotate for a long time a hollow sphere that mixes numbered balls, like a sphere that draws lottery numbers, chance chooses the ball that falls into the exit basket and stops.
This case occurs, for example, at atomic scale, when a measurement randomly selects one value within the set of eigenvalues of a measuring device. The measurement causes a transition from atomic scale to macroscopic scale. This transition is extremely abrupt, since the measuring device multiplies each measured energy billions of times to bring it to a level perceivable by a human; and it is an irreversible transformation.
2nd case: linear superposition of consequence-solutions of the Schrödinger equation
Feynman proved that a corpuscle with mass that moves in a field of force from point A to point B following the Schrödinger equation travels all possible trajectories from A to B simultaneously, each trajectory having a probability and the most likely trajectory being a linear combination of all possible trajectories. At each time, the corpuscle appears fuzzy on its most likely trajectory, at the center of its wave packet.
The possibility of traveling several trajectories simultaneously also produces interference phenomena between moving atoms similar to the interferences between photons in Young's double slit experiment: just like an electromagnetic photon, an atom associated with a packet of matter waves can interfere with itself by passing through both slits at the same time.
In each of the above cases, a single cause (a particle that starts moving: corpuscle with mass, photon or atom) produced several consequences which superpose in both amplitude and phase to produce a single result which is fuzzy:
§ Corpuscle that travels its most likely trajectory in the center of a packet of matter waves;
§ Photon whose electromagnetic wave passes through both slits simultaneously and ends up interfering with itself;
§ Atom whose matter wave passes through both slits at the same time and ends up interfering with itself.
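The self-interference described above produces the familiar double-slit fringe pattern; in the small-angle limit the relative intensity on the screen is cos²(πdy/λL). The slit separation, wavelength and screen distance below are illustrative values:

```python
import math

# Sketch of Young's double-slit intensity pattern: the two superposed
# paths interfere; I(y) is proportional to cos^2(pi * d * y / (lam * L)).
lam = 500e-9  # illustrative wavelength, m
d = 1.0e-4    # assumed slit separation, m
L = 1.0       # assumed slit-to-screen distance, m

def intensity(y):
    """Relative intensity at height y on the screen (small-angle limit)."""
    return math.cos(math.pi * d * y / (lam * L)) ** 2

fringe = lam * L / d  # fringe spacing on the screen: 5 mm here
print(f"fringe spacing: {fringe * 1e3:.1f} mm")
print(intensity(0.0))                    # 1.0 : bright central fringe
print(round(intensity(fringe / 2), 6))   # 0.0 : first dark fringe
```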
We should understand that randomness does not affect the value of a measurable quantity; it intervenes only by choosing a single solution from the set of possible solutions:
§ By abruptly choosing a value at random when a die stops rolling, or when a ball is extracted from a rotating sphere, or when a measuring device causes a change of energy scale; such a selection produces a unique and precise result.
§ By producing a solution that is a linear combination of solutions that defines a fuzzy region of space or a superposition of states.
Remark: if randomness determined the value of a physical quantity, it could in some circumstances violate principles such as energy conservation or entropy growth. That is why randomness intervenes only in choices of solutions that respect the other laws of physics.
3rd case: oscillation between two states
We saw an example above: the oscillation of an ammonia molecule between two states.
Multiple consequences of a single cause also occur for macroscopic phenomena. This is the case, for example, for turbulent flows in fluid mechanics. It is also the case for thermodynamically unstable systems that dissipate energy, such as living beings; those systems evolve towards points of their phase space called "strange attractors", a phenomenon that explains Darwinian evolution of living species towards increasing complexity. Randomness intervenes only when nature chooses one of the consequences among all those that are possible; this is a choice of a solution belonging to a predefined set, not of the numerical value itself. In both previous cases, the random choice is caused by a chaotic phenomenon that amplifies an existing instability.
We should stop expecting from nature a single consequence of each cause, a behavior it refuses in many cases; such a behavior has no relationship with fantasy or man's ignorance: nature sometimes refuses a single consequence (which would require a single solution of an evolution equation that has several solutions), exactly as it refuses to display a precise contour for tiny particles or giant stars. For some phenomena, we should also renounce precise results of measurements or predictions.
Nature is not obligated to adapt to the simple mental representations we prefer; it is up to us to adapt our representations to nature's reality.
Determinism governs the evolution resulting from the initial cause; it does not guarantee that the result of that evolution may be predicted beforehand, or measured afterward with arbitrary precision. Causality is always respected, but it guarantees neither the predictability of the result nor the precision of its measurement; causality only guarantees that a cause initiates an evolution.
We should accept that, depending on circumstances, the intrinsic unity of nature appears to us in different forms. This plurality results from our mental representations; it is not a property of nature. Two examples should suffice to clarify this point. Depending on the experiment or the circumstances:
§ A photon's nature may be wave-like or particle-like, and even change from one form to the other.
A photon is an electromagnetic wave of short duration that may cause interferences. It is also a particle without mass, whose shock with another particle such as an electron or an atom may transfer momentum and angular momentum – sometimes with enough force to displace it or even to shatter it.
§ A piece of matter with mass may also be a quantity of energy, and even change from one of these states to the other following Einstein's equation E = mc².
An atom is a corpuscle with non-null mass. The theory of matter waves shows that it may also behave like a wave when it is moving. Even though that wave is a probability wave, not an electromagnetic wave, it may also produce interference patterns with the matter wave of another moving corpuscle. In addition, the shock of a corpuscle with another or with a photon may transfer momentum and angular momentum.
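Einstein's equation E = mc² can be illustrated for one gram of matter:

```python
# Mass-energy equivalence E = m * c^2: the energy content of one gram.
c = 2.998e8  # speed of light in vacuum, m/s
m = 1.0e-3   # one gram, kg

E = m * c**2
print(f"E = {E:.2e} J")  # ~9e13 J locked in a single gram of matter
```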
Depending on circumstances or experiments, the perception of physical reality may change.
Nature has only four types of forces (interactions). Their action (attraction or repulsion) happens through fields of force and is quantized, except perhaps in the case of gravitation, where the quantum of gravity has not been found yet. In increasing order of effect, those forces are:
§ Gravitational attraction between two masses, with infinite range;
§ Electromagnetic force, about 10³⁶ times stronger than gravitation in the case of two protons, also with infinite range;
§ Weak interaction, about 10 times stronger than electromagnetic force, but with extremely short range: about 0.01 fermi (10⁻¹⁷ m);
§ Strong interaction, about 100 times stronger than weak interaction, and with 1000 times longer range: about 10 fermis (10⁻¹⁴ m).
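The 10³⁶ ratio quoted above for two protons can be recomputed from standard constants; distance cancels because both forces follow an inverse-square law:

```python
# Ratio of electromagnetic to gravitational force between two protons.
k = 8.988e9      # Coulomb constant, N.m^2/C^2
e = 1.602e-19    # elementary charge, C
G = 6.674e-11    # gravitational constant, N.m^2/kg^2
m_p = 1.673e-27  # proton mass, kg

# F_em = k e^2 / r^2 and F_grav = G m_p^2 / r^2, so r^2 cancels:
ratio = (k * e**2) / (G * m_p**2)
print(f"F_em / F_grav = {ratio:.2e}")  # ~1.2e36, the 10^36 quoted above
```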
Two more forces, as yet very poorly understood, should perhaps be added to the four forces above because their effects over astronomical distances are very important:
§ The dark matter force, which acts like gravitation;
§ The dark energy force, which acts like negative gravitation, causing cosmic expansion.
When I measure the angle between the direction of a celestial body and the horizon with a sextant, my measure does not perturb the measured angle.
However, when I measure a variable of quantum physics with a measuring device, I necessarily perturb the energy of the measured system: all quantum mechanics measures perturb their object; the result I obtain, after the measure, is therefore perturbed. The value of the measured variable before the measure exists, but I cannot know it; and after the measure, it is altered… Fortunately, there are experiments where multiple measures provide an average value, or the set of all possible values. And since the result of a measure is one of the eigenvalues of the measuring device, if the set of eigenvalues is finite and known with sufficient precision, the measure may be accurate.
At macroscopic scale, measurements sometimes also perturb the quantity they measure: a voltmeter perturbs the voltage it measures between its contacts, because it draws some current.
Special Relativity applies when two bodies or two observers move relative to one another with a constant velocity vector. Its effects grow with relative velocity and become very important when that velocity approaches the speed of light, c. Special Relativity affects the principle of additivity of velocity vectors, which is accurate only when the added velocities are both small relative to c. It also affects mass, which increases with velocity. Special Relativity, which is quite simple to understand and prove, demonstrates the equivalence between mass and energy, E = mc². It also affects the notions of position and length, of the rate at which time flows, and of simultaneity.
Therefore, Relativity affects causality and determinism: if two simultaneous events A and B are in distinct locations, one cannot be the cause of the other; but seen by an observer, C, in a location distinct from A and B, they are no longer simultaneous. If the observer in C does not know the positions of A and B and their simultaneity, he may believe that one of those two events caused the other.
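The failure of simple additivity of velocities mentioned above can be illustrated with the relativistic composition law for parallel velocities, u' = (u + v) / (1 + uv/c²). A minimal sketch, in units where c = 1:

```python
def add_velocities(u, v, c=1.0):
    """Relativistic composition of two parallel velocities."""
    return (u + v) / (1 + u * v / c**2)

# Two small velocities: the result is almost the Galilean sum
slow = add_velocities(0.001, 0.002)   # approx 0.003: additivity holds

# Two large velocities: the result stays strictly below c
fast = add_velocities(0.8, 0.8)       # 1.6 / 1.64, approx 0.976 c, not 1.6 c
```

However fast the two added velocities are, the composed velocity never reaches c, which is why the Galilean rule is only a low-speed approximation.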
General Relativity applies when there is acceleration or a gravitational force, which have equivalent effects. At astronomical scale, we should get accustomed to thinking of space and time as interdependent within a four-dimensional space-time continuum where each point represents an event. We should also admit that space itself is locally deformed by the presence of heavy masses such as a galaxy, a black hole or a star; this deformation makes light follow a shortest path (a geodesic) that is curved.
We should also realize that the order of precedence of two events A and B is not always the same: in some cases A may precede B for some observers, and B may precede A for others (this can happen only when A and B are too far apart in space and too close in time for one to influence the other). We should also accept the fact that the Universe expands, its radius increasing at a rate of about 1.8 times the speed of light, with galaxy clusters progressively moving apart from one another. 96% of the Universe's energy takes two forms that are invisible and as yet unexplained: the first is called "dark matter", and the second (which creates a negative attraction force responsible for the expansion), "dark energy".
The Relativity theory that governs all those phenomena is a representation of reality confirmed by many experiments, and we can no longer ignore its implications.
The human mind cannot manipulate more than half a dozen concepts at the same time. To comprehend a complex phenomenon, our mind needs to represent it using a multilevel hierarchy. Each level schematizes the concepts of the level below it, thus making its structural and functional organizations simple enough to understand.
Building a representation of a complex phenomenon may use a bottom-up approach, with a succession of syntheses and schematizations that hide details to highlight essential concepts. Alternatively, it may use a holistic approach, studying one of the levels – or one of the phenomena of a given level – as a whole, with the same goal of reducing its complexity to an acceptable level.
(A top-down approach – from general features to their details – is suitable for analyzing a complex phenomenon by subdividing it into simpler phenomena, according to Descartes' 2nd precept. It is also suited for describing and explaining it to someone else, when the person who explains has already understood it. It is also suited for specifying the functions of a complex computer application.)
The bottom-up approach corresponds to Descartes' 3rd precept. It may be preceded by an analytical decomposition and should be followed by verifications corresponding to the 4th precept.
Using schemas helps our mind understand a concept such as an imprecise dimension or a fuzzy contour, by associating it with an image such as a sphere of mist. Our mind can picture the probability distribution near a point as a bell-shaped or needle-shaped curve.
One of the best ways to represent a complex object is an analogy with the structure of a complex piece of software. The architecture of the software's modules models the logical relationships of the object's physical structure, and the behavior of the software's modules models the object's functions. This model is the best for understanding man, using a hierarchy of software modules from the genome level to the psyche level.
Another reason for adopting a hierarchical representation of natural phenomena is determinism itself. In addition to local determinism, which governs the time relationships between causes and consequences at individual cause level, there is a global determinism which governs global phenomena such as: the choice of an entire trajectory of a material object that moves between two points; the choice of the path of a light ray across several media; or the choice of the macroscopic thermodynamic evolution of a system that comprises billions of molecules. Global determinism never contradicts local determinism; it complements it, providing an elegant way of understanding a phenomenon's "big picture" (no law of nature contradicts another law).
Sample local determinism: the principle that governs the step-by-step choice of a moving body's next small displacement, following the differential equation of Newtonian dynamics F = ma. At global level, the same phenomenon follows Maupertuis' principle of least action, which "chooses the best trajectory" (the path with minimum action) between a starting point and an arrival point.
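The global "choice of the best trajectory" can be illustrated numerically: for a free particle, the straight path between two fixed endpoints has lower (kinetic) action than any wiggly path with the same endpoints. A minimal sketch, with an illustrative perturbed path:

```python
import math

def action(path, m=1.0, dt=0.01):
    # Discretized action of a free particle: S = sum of (1/2) m v^2 dt
    return sum(0.5 * m * ((path[i + 1] - path[i]) / dt)**2 * dt
               for i in range(len(path) - 1))

N = 100
# Uniform motion from x=0 to x=1 in 1 second
straight = [i / N for i in range(N + 1)]
# A wiggly path with the same endpoints (sin vanishes at both ends)
wiggly = [i / N + 0.05 * math.sin(math.pi * i / N) for i in range(N + 1)]

# Nature "chooses" the path of least action: the straight one
print(action(straight) < action(wiggly))   # True
```

Locally, each small displacement follows the differential law; globally, the whole trajectory minimizes the action — two descriptions of the same motion that never contradict each other.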
The possibility of multiple consequences for a given cause requires replacing the unique causal chain of Laplacian determinism with a hierarchical causal tree. At each stage of a system's evolution, represented by a node of the tree, nature "chooses" the next evolution process, represented by a branch starting from that node. The global representation of possible deterministic evolutions from an initial situation is a tree.
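The causal tree described above can be sketched as a tiny data structure; the state names and branching below are purely illustrative. Each node is a situation, each branch is a possible next evolution process, and each root-to-leaf path is one possible deterministic history:

```python
# A node is (state name, list of possible successor subtrees)
tree = ("initial state", [
    ("evolution A", [
        ("outcome A1", []),
        ("outcome A2", []),
    ]),
    ("evolution B", [
        ("outcome B1", []),
    ]),
])

def evolutions(node, prefix=()):
    """Enumerate every root-to-leaf path (one possible history each)."""
    name, children = node
    path = prefix + (name,)
    if not children:
        yield path
    else:
        for child in children:
            yield from evolutions(child, path)

paths = list(evolutions(tree))   # 3 possible histories from the same cause
```

A single initial cause thus yields several possible consequence chains, which the unique causal chain of Laplacian determinism cannot represent.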
Some philosophers reject the materialist explanation of the world because they find explaining concepts as rich as human personality impossible starting from molecular structures and functions. Indeed, such an explanation is impossible in a single stage, because a multilevel abstraction structure is indispensable. The issue is the same as explaining the architecture of a complex software application, which requires successive levels of detail descending from the highest level - oriented towards the user - to the lowest, which is also the most technical and suitable for execution by the computer.
In addition, in the field of living beings, acquiring and explaining knowledge requires from time to time a holistic approach that considers a specific object or function as a whole whose interaction with its environment is simple and well delimited.
It is therefore obvious that descriptions of psychic phenomena, or even biological phenomena, based directly on physical properties are impossible; but that does not prove that materialism should be rejected: a proof of the absurdity of a doctrine based on an erroneous approach does not prove anything about the doctrine.
The brain memorizes information as hierarchical structures where the details of an information element are stored at the level below this element. When an information element belongs to several hierarchies, the memory creates the links required to store it only once.
This multiple-tree information structure has two advantages: it avoids redundancies, thus saving storage space; and it enables quick retrieval of all "mother" information elements that correspond to a given "daughter". Example: if I see a bright red car, that color will remind me of the color of the dress worn by a woman I noticed yesterday.
In addition to its automatic handling of hierarchical structures to store information, the brain can automatically traverse them very fast to retrieve the details of a tree structure, or find an information element that satisfies several given criteria simultaneously, thus belonging to the corresponding hierarchies.
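The multiple-hierarchy storage described above can be sketched with a small graph in which one "daughter" element ("bright red") is stored once but shared by two "mother" hierarchies ("car" and "dress"); a reverse index makes the recall in the example instantaneous. The element names are illustrative:

```python
# Each daughter element is stored once; mothers link to it by name
parents = {
    "car":   ["bright red", "four wheels"],
    "dress": ["bright red", "silk"],
}

# Reverse index: daughter -> all its mothers, built once
mothers = {}
for mother, daughters in parents.items():
    for d in daughters:
        mothers.setdefault(d, []).append(mother)

# Seeing "bright red" recalls every hierarchy it belongs to
recall = sorted(mothers["bright red"])   # ['car', 'dress']
```

Storing the shared element once avoids redundancy, and the reverse links retrieve all "mothers" of a "daughter" without searching the whole memory.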
We should therefore organize our thinking to take advantage of these capabilities. Descartes' analytical problem-solving method is suitable, and so is structuring information by hierarchical levels in order to hide as many details as possible that are not required at a given time.
Here is a concise description of methods used to study phenomena by building a representation to understand them, and then anticipate their outcome; such methods have intrinsic limitations.
Mathematical modeling of physical phenomena
When a physical phenomenon has a mathematical model, solving its equations or measuring a physical quantity may give multiple solutions associated with the various eigenvalues of the operator that represents the quantity; and each eigenvalue has a probability if it is discrete and a probability density if it is continuous. This approach results in multiple potential consequences of an initial cause.
Modeling an evolution with the Schrödinger equation provides perfect results: it is impossible to discover another model that would produce fewer solutions or non-probabilistic solutions, since nature itself conforms to this model.
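The eigenvalue-and-probability picture described above can be sketched with the smallest quantum example: measuring a two-level system. The observable and state below are illustrative; the eigenvalues of a 2×2 real symmetric matrix follow from the quadratic formula, and the probability of each outcome follows the Born rule (squared projection of the state onto the eigenvector):

```python
import math

# Illustrative observable: sigma_x = [[0, 1], [1, 0]], measured on the state |0> = (1, 0)
a, b, d = 0.0, 1.0, 0.0

# Eigenvalues of the symmetric matrix [[a, b], [b, d]]
mean = (a + d) / 2
r = math.hypot((a - d) / 2, b)
eigenvalues = (mean + r, mean - r)           # (+1.0, -1.0): the only possible results

# Normalized eigenvectors of sigma_x
v_plus  = (1 / math.sqrt(2),  1 / math.sqrt(2))
v_minus = (1 / math.sqrt(2), -1 / math.sqrt(2))

# Born rule: probability of an outcome = |<eigenvector, state>|^2
state = (1.0, 0.0)
p_plus  = (v_plus[0] * state[0] + v_plus[1] * state[1]) ** 2    # approx 0.5
p_minus = (v_minus[0] * state[0] + v_minus[1] * state[1]) ** 2  # approx 0.5
```

Each individual measurement returns +1 or −1; only repeated measurements recover the probabilities and the average value, as the text notes.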
Sometimes a mathematical theory predicts new physical properties or phenomena whose existence was not suspected. This happened, for example, for General Relativity and for Dirac's prediction of the existence of antimatter.
However, a mathematical model sometimes brings about its own limits or constraints, such as:
§ Multiple solutions for an equation, some of which may not apply to the physical phenomenon it represents;
§ Functions that are not computable in some cases, etc.
Undecidability, incompleteness and incoherence of an axiomatic system
It is often worthwhile representing a scientific field by an axiomatic system, to subsequently study it by deducing logically some of its properties as predicates. However, this approach has limits: no matter how an axiomatic system is defined, it will have two serious limitations, undecidability and uncertainties about its coherence (non-contradiction).
§ A logical proposition is a statement that is always true or always false. Every sufficiently powerful axiomatic system (one rich enough to express arithmetic, per Gödel's first incompleteness theorem) allows enunciating logical propositions for which no proof of truth or falsity can exist: it is impossible to prove that such a proposition is true or false using the system's axioms and deduction rules.
Such a proposition is termed undecidable. (There are many undecidable thoughts in a human mind that it considers certain, and their presence cannot be explained in a deterministic manner: it cannot be attributed to any effective cause or stable set of circumstances.) Fortunately, if factual observations show that a given proposition of an axiomatic system is true and never disproved, the system may be completed with this proposition, admitted as a complementary axiom.
§ An axiomatic system is termed complete if every logical proposition that may be enunciated in it is decidable. Since every sufficiently powerful axiomatic system allows enunciating undecidable propositions, no such system is complete.
§ An axiomatic system is termed coherent if every theorem deduced from its axioms is non-contradictory, and does not contradict any other theorem or axiom of the system. Kurt Gödel proved the impossibility of demonstrating the coherence of an axiomatic system as a theorem of that system (without using axioms or deduction rules external to it). This impossibility is a special case of a more general one: no concept may define itself or compare itself to itself, since a definition or comparison requires an encompassing set.
Therefore, the coherence of an axiomatic system is certain only as long as no incoherence has been discovered in it!
Conclusions: in spite of its rigor and elegance, an axiomatic approach has intrinsic limitations. The human mind can make up undecidable thoughts without being aware of their undecidable nature. Obviously, an axiomatic system's rigor is not compatible with knowledge fields full of nuances.
Level of truth achievable using the scientific method
Today's scientific method, based on Karl Popper's critical rationalism, considers plausible every statement (formula, sentence or entire text) which:
§ Is falsifiable: it could be proven false by a theoretical demonstration or by a single experimental counterexample;
§ Has been examined by the scientific community and has not been refuted.
The level of truth achievable by this approach is not based on the number of experimental confirmations of a statement; it is based on a consensus of specialists on the impossibility of disproving it. As long as it has not been examined by other scientists, an author's proposal is only that: a suggestion, a conjecture, a hypothesis, a suggested proof of a theorem, or an account of experiments. A plausible statement becomes true once it has at least one theoretical or experimental proof that has been examined without being refuted.
A statement is always considered temporarily true, since it can be disproved tomorrow by a new discovery. A postulate (axiom) on which other statements have been based for some time without refutation has the same level of truth as an experimental (empirical) law which is falsifiable and has been examined by the scientific community without a single counterexample. The type of truth of a theorem, logically deduced from postulated axioms, is different since it is impossible to contradict a succession of logical deductions without calling into question its base axioms.
With all this in mind, the issue for the author of a discovery is obtaining the attention of the scientific community. This is often very difficult, because publication in respected media such as Physical Review, Science or Nature is filtered by reading committees whose open-mindedness and neutrality are not necessarily always perfect. In addition, the limited amount of space of each issue creates a competition between texts, and also publication delays. Publication is easier in less famous media, on the Internet or through a publishing house, but without certainty of attracting enough attention to obtain in-depth reviews and criticisms. This problem is not new: Ludwig Boltzmann, the great physicist who developed statistical mechanics and whose work enabled the discoveries of Planck and Einstein, committed suicide in 1906 in part because his work had not attracted enough attention.
An additional problem is that research funds are often allocated by politicians whose motives are anything but scientific; and sometimes fund allocation is subject to the approval of specialists who are competitors and would like to reserve the funds and potential fame for their own work…
We already saw phenomena (therefore also situations) whose complexity or chaotic nature makes knowing their evolution impossible in practice. We also know that every axiomatic system allows enunciating an infinite number of undecidable propositions. Here are other cases where man can ask questions or enunciate problems without answer.
§ In general, the result of executing an algorithm (a process that is perfectly well known, since a man wrote it) cannot be known in advance from its text written in a computer language.
· Its execution may stop due to a numerical impossibility: division by zero, indeterminate form such as infinity divided by infinity, non-convergence due to the algorithm itself or to the precision of calculations, etc.
· Execution may last forever, or longer than man is ready to wait for the result. This happens, for example, when comparing two numbers whose representations have an infinite number of digits: comparing the digits of the same rank pair by pair may, in theory, last as long as they remain equal.
Therefore, when an algorithm models a phenomenon of nature, it is in general impossible to predict that phenomenon's evolution, and therefore also its result; it is necessary to wait for the execution to stop… if enough time is available. That is why, when an execution may be lengthy, it is possible to try to predict an order of magnitude of its duration using numerical analysis, a science taught to some students. For many years, for example, the use of weather-evolution models was hindered by the requirement to predict the weather X days ahead with less than X days of computing time.
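The digit-by-digit comparison described above can be sketched with generators: within a given budget of digits, the comparison either decides or must give up, and for two genuinely equal infinite streams no finite budget ever decides. A minimal sketch with illustrative digit streams:

```python
from itertools import islice

def compare(digits_a, digits_b, budget):
    """Compare two infinite decimal-digit streams.
    Returns -1 or 1 once a differing digit is found within the budget,
    or None if all examined digits were equal (comparison undecided)."""
    for da, db in islice(zip(digits_a, digits_b), budget):
        if da != db:
            return -1 if da < db else 1
    return None

def repeat(d):            # 0.ddd... forever
    while True:
        yield d

def three_then_four():    # 0.3444...
    yield 3
    while True:
        yield 4

print(compare(repeat(3), three_then_four(), 10))   # -1: decided at the 2nd digit
print(compare(repeat(3), repeat(3), 10_000))       # None: still undecided
```

However large the budget, the second comparison never terminates with an answer: only waiting longer — forever, here — could decide it.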
From a philosophical point of view, the impossibility in general to predict the result of an algorithm's execution before its end is not one more contradiction of Laplace's philosophical determinism: the predictability promised by that determinism applies to nature's situations and evolutions, not to human thoughts such as algorithms.
§ There are numbers and functions whose existence is proven, but for which we also have a proof that no method to calculate or formulate them will ever exist. More generally, the deterministic nature of a phenomenon such as the propagation of certain waves, or the existence of mathematical concepts such as numbers, functions and equations, does not always suffice to calculate or predict all of their properties. It is often possible to predict some properties, but not all we may wish to predict.
§ There are very practical problems, such as "the tile setters' problem" (paving the plane with tiles that have a given set of shapes) for which we know that there are combinations of shapes for which no positioning algorithm may exist, but for which a man found a solution by trial and error; the book provides an example.
The existence of such questions without answers should not incite us to be fatalistic or become discouraged; to the contrary, we should do the theoretical research to better know the list of such questions, field of knowledge by field of knowledge, and the properties of each unknowable answer when some may be found.
Man's mind cannot refrain from speculating, asking questions such as those beginning with "What if…" or "Why…" which ignore the constraints due to limits of knowledge or physical possibilities, i.e. metaphysical questions. Here are two examples.
The first cause
The first cause issue is as old as rational thinking. Since each discovery brings about new questions, and each level of knowledge leaves some questions unanswered, there are only two possibilities: either the causal chain goes back to a first cause that explains everything without itself requiring an explanation – which is illogical – or the scope of the causal principle is limited.
Contrary to the first philosophers who raised this issue, we know today how to answer it.
§ First, since every rational deductive chain requires the framework of an axiomatic system, a first cause (if there is one) is the set of axioms of facts and deduction rules of this system; those axioms are necessarily admitted without proof. Attempting to demonstrate them, to explain them or to travel the causal chain backwards beyond them is illogical. Therefore, the issue of the first cause raised by some philosophers should not be raised.
§ Then, scientific thinking cannot go back in time beyond the Planck time, 0.54 × 10⁻⁴³ second after the Big Bang, the beginning of the Universe. Before the Planck time our physical laws do not apply, and it is possible that such fundamental concepts as time and space were very different from ours.
Any thought about a situation or an evolution before the Planck time is therefore speculative, metaphysical. The first physical cause of every subsequent evolution or situation is therefore the state of the Universe when the Big Bang occurred, a state we know almost nothing about. We should also admit that we do not know either what happened between the Big Bang and the Planck time.
§ Last, the notion of convergence of scientific knowledge, defined and justified earlier in the book, makes the issue of the first cause less and less interesting as our knowledge progresses.
The anthropic principle
Some scientists are astonished by the known values of many constants of the Universe. They assert that if the value of any one of them had been only slightly different, life would not have appeared on Earth and man would not exist. Given the number of such constants that have exactly "the right value", and the infinite number of different values each constant could have, they conclude that this cannot be due to chance, and therefore that there must be a finality that governs the Universe to make it evolve toward perfection, man.
This rationale is erroneous for two reasons:
§ First, because it violates the identity principle: a situation cannot be different from what it is. Man has a right to be astonished and to find the coincidences disturbing, but no constant of the Universe can be different from what it is; imagining a different value is pure speculation.
§ Second, the concept of probability of the value of a constant being other than the observed value is absurd. The probability of an outcome is the ratio of the number of favorable outcomes to the number of possible outcomes. Therefore:
The probability for a constant that is a real number to have a given exact value is the ratio of 1 favorable outcome (the given value) to an infinite number of possible outcomes (other values): that probability is zero, whatever the value.
To evaluate the probability of a difference between the value of a real constant and a given value, we would need to know the constant's probability density law. Since none of the proponents of the anthropic principle has ever been able to enunciate that law for a constant with an astonishing value (for good reason: a constant cannot have a value other than the one it has!), the probability for one of those constants to fall inside an interval of given length is meaningless.
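The two points above can be written as a short derivation: for a continuous random variable X with probability density f, the probability of any single exact value is zero, and only intervals have meaningful probabilities, which is why a density law would be indispensable to the anthropic argument:

```latex
P(X = x_0) \;=\; \int_{x_0}^{x_0} f(x)\,dx \;=\; 0,
\qquad
P(a \le X \le b) \;=\; \int_{a}^{b} f(x)\,dx .
```

Without a stated f, the second integral — the probability of the constant falling in a given interval — cannot even be defined.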
The anthropic principle is therefore a pseudo-scientific version of the teleological argument of God's existence, an argument we find illogical in Part 1 of the book.
Man's free will is his power to act as he wishes; it is his power to choose a course of action between alternatives he sees, without any external constraint or influence. This subject is of philosophical interest: a man who is not free to choose cannot be held responsible for his acts. It also has a practical interest: a man who wishes to make a rational choice should be aware of possible limitations of his freedom to choose.
Recent research confirms Sartre's opinion that each man's consciousness keeps finding reasons to be dissatisfied and things that are missing. Sartre terms this state of mind man's not-being.
Man's consciousness reacts to that dissatisfaction by generating physical needs and psychological desires all the time. Some of those needs and desires cross the threshold of consciousness, thus causing man to reflect, and others remain subconscious. A man always finds something to do to be happier, to satisfy his wishes.
Three categories of variables determine the context in which consciousness functions, i.e. its constraints:
§ Variables of genome interpretation by cellular mechanisms that define innate behavior;
§ Variables which define the contents of long term memory that stores all we have learned; they define acquired behavior;
§ Variables which define the circumstances, the context of a given moment; they determine what the context imposes or offers (perceptions, constraints, opportunities, etc.)
Both a man's values and his wishes always originate in at least one of the three origins above: the innate, the acquired and the context. Those origins are all external to a man, who does not control them at the time of a choice. Desires appear spontaneously in a man's consciousness; he cannot create them; he may only choose between alternatives present in his consciousness the one or the ones he will try to satisfy.
A man can only direct his reflections in a direction that satisfies his values, what he seeks, what he desires. Reason is not a value for a man; it is only a means to satisfy his wishes.
Indeed, man has free will, but his alternatives are limited to predefined wishes evaluated by reference to predefined values: therefore, his free will is illusory.
Spiritualists may believe that a man's choices may, like all of his thoughts, evade nature's deterministic laws, for example by being subject to a supernatural influence. Being a materialist, I cannot concur with them. I am surprised that Sartre, also a materialist, believed that man has an absolute free will due to his not-being: I consider that opinion incoherent.
In a coherent materialist's opinion, a man may only choose between desires he does not control, according to predefined values; therefore, he cannot be held responsible for his acts. If, in a spiritualist's view, a man chooses under a transcendent influence, he cannot be held responsible either. I consider the free will granted to man by God almighty (according to Saint Augustine and Pelagius) a way to show God to be innocent and to explain why He lets man do so much evil, and also a reason to make man feel responsible so as to avoid divine punishment.
When their interests conflict, society must hold man responsible for his acts, because public interest has priority over private interest. Society must therefore inculcate in each man how to behave toward others, and enforce those rules using laws and police.
We have now summarized, in 15 times fewer pages, the most important subjects of the book. The reader of this text now has the information required to decide whether he wants to read the book. I did my best to make that reading easy, but I do not think the book's structure allows reading just a section here and there in Part 3. So thank you for your patience!