Indeterminism is best understood as a failure of one or more of the many determinisms.
The term is most often used in connection with causal determinism and with limits on physical or mechanical determinism.
Logical philosophers describe indeterminism as simply the contrary of determinism. If even a single event is undetermined, then, they say, indeterminism is "true" and determinism is false, and this would undermine the very possibility of certain knowledge. 1
Some go to the extreme of saying that indeterminism makes the state of the world totally independent of any earlier states, which is nonsense, but it shows how anxious they are about indeterminism.
The core idea of indeterminism is closely related to the idea of causality. Indeterminism for some philosophers is an event without a cause (the ancient causa sui). But we can have an adequate causality without strict determinism, the "hard" determinism which implies complete predictability of events and only one possible future. We can call this "adequate determinism."
Causality does not entail determinism
An example of an event that is not strictly caused is one that depends on chance, like the flip of a coin. If the outcome is only probable, not certain, then the event can be said to have been caused by the coin flip, but the heads-or-tails result was not predictable. So this causality, which recognizes prior events as causes, is undetermined.
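The coin-flip picture can be sketched in a few lines of code (an illustrative toy, not part of the original text; a pseudorandom generator stands in for chance): each outcome has a cause, the flip, yet no individual result is predictable; only the statistics are lawful.

```python
import random

def flip():
    """One caused but unpredictable event: the flip causes an outcome,
    but which outcome occurs is a matter of chance."""
    return random.choice(["heads", "tails"])

random.seed(0)  # fixed seed only so the sketch is reproducible
flips = [flip() for _ in range(10_000)]
freq = flips.count("heads") / len(flips)
print(f"frequency of heads: {freq:.3f}")  # close to 0.5 over many flips
```

No single call to `flip()` can be predicted from outside, but the aggregate frequency settles near one half, which is exactly the sense in which chance events can still exhibit adequate, probabilistic causality.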
Indeterminism is also closely related to the ideas of uncertainty and indeterminacy. Uncertainty is best known from Werner Heisenberg's principle in quantum mechanics. It states that the exact position and momentum of an atomic particle cannot both be known beyond certain limits. The product of the uncertainty in position and the uncertainty in momentum is at least of the order of Planck's constant.
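In modern notation the relation is usually written with the reduced Planck constant:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad \hbar = \frac{h}{2\pi}
```

where \(\Delta x\) and \(\Delta p\) are the standard deviations of position and momentum measurements.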
Indeterminism is important for the question of free will because strict determinism implies just one possible future. Indeterminism means that the future is unpredictable. Indeterminism allows alternative futures and the question becomes how the one actual present is realized from these potential alternatives.
The departure from strict causality is very slight compared to the miraculous ideas associated with the "causa sui" (self-caused cause) of the ancients.
Despite David Hume's critical attack on the necessity of causes, many philosophers embrace causality strongly. Some even connect it to the very possibility of logic and reason.
Even in a world that contains quantum uncertainty, macroscopic objects are determined to an extraordinary degree. Newton's laws of motion are deterministic enough to send men to the moon and back. Our Cogito model of the Macro Mind is large enough to ignore quantum uncertainty for the purpose of the reasoning will. The neural system is robust enough to ensure that mental decisions are reliably transmitted to our limbs.
We call this determinism, limited as it is in extremely small structures, "adequate determinism." The world is adequately determined to send men to the moon. The presence of quantum uncertainty properly leads logical philosophers to call the world "indetermined." But indeterminism gives a misleading impression when most events are overwhelmingly "adequately determined."
There is no problem imagining that the three traditional mental faculties of reason (perception, conception, and comprehension) are all carried on deterministically in a physical brain where quantum events do not interfere with normal operations.
There is also no problem imagining a role for randomness in the brain in the form of quantum level noise. Noise can introduce random errors into stored memories. Noise could create random associations of ideas during memory recall. This randomness may be driven by microscopic fluctuations that are amplified to the macroscopic level.
Our Macro Mind needs the Micro Mind for the free action items and thoughts in an Agenda of alternative possibilities to be de-liberated by the will. The random Micro Mind is the "free" in free will and the source of human creativity. The adequately determined Macro Mind is the "will" in free will that de-liberates, choosing actions for which we can be morally responsible.
1. C. D. Broad, 1934: "Indeterminism is the doctrine that some, and it may be all, events are not completely determined."
"The view that some events have no causes." (sic) Simon Blackburn, Oxford Dictionary of Philosophy, 1994.
Indeterminism means: "Determinism is false. Its negation is true - so long as somewhere in the universe some occurrence violates the thesis of determinism."
"An extremely strong version of indeterminism is... The world at any time and in all its aspects is totally independent of its state at any earlier time." Ted Honderich, Oxford Companion to Philosophy, 1995.
I. Determinism and Indeterminism in Philosophical Thought
1. Definition. "Determinism" is commonly understood as the thesis that «the laws which govern the universe (or a subsystem), together with the appropriate initial conditions, uniquely determine the entire time evolution of the universe (or subsystem)». "Indeterminism" is the negation of this thesis.
Only recently has this term entered common usage. It dates back to 1927, the same period in which the "uncertainty principle" was discovered by Heisenberg (1901-1976), and it has been used throughout the 20th century by physicists and scientists. However, philosophical problems lie hidden in the term which, being of a philosophical nature, are much older and more far-reaching than the current question of the interpretation of quantum mechanics and physical theories: it involves the classical problems of necessity and contingency, of being and becoming, and of causality. As early as the 4th century B.C., Aristotle observed that "chance as well and spontaneity are reckoned among causes: many things are said both to be and to become as a result of chance and spontaneity" (Aristotle, Physics, II, 4, 195b). Moreover, "we observe that some things always happen in the same way, and others, for the most part. It is clearly of neither of these that chance is said to be the cause, nor can the 'effect of chance' be identified with any of the things that come to pass by necessity and always, or for the most part. But as there is a third class of events besides these two, events which all say are 'by chance', it is plain that there is such a thing as chance and spontaneity; for we know that things of this kind are due to chance and that things due to chance are of this kind" (Aristotle, Physics, II, 5, 196b). After Aristotle, this problem continued to be one of the central themes of the history of philosophy and can be found in the works of practically every author. For a correct understanding of the relationship between determinism and indeterminism, it is therefore necessary to adopt an interdisciplinary approach which formulates and relates the relevant terms from the scientific, philosophical, and theological points of view.
2. Determinism and Liberty. In their quest for knowledge, from antiquity to the present era, human beings have taken two different paths of inquiry: a) one involves knowledge "external to the individual" (the world, nature, the cosmos). b) the other involves knowledge within the individual (thought, emotions, perception of freedom, self-consciousness, etc.).
Often, the first path to knowledge is called the "cosmological approach" and the second the "anthropological approach". One may consider the problem of which path comes first and which really leads to the origin of knowledge. That is, one can ask whether knowledge originates from sensible experience of the external world (nihil est in intellectu quod prius non fuerit in sensu, as the Aristotelian-Thomistic tradition holds), with the "self" having an internal experience of itself only later, through the process of reflection; or whether, alternatively, knowledge originates from innate ideas (Plato, Descartes), from intuition (Bergson), or from divine illumination (Augustine, Bonaventure). Yet aside from the philosophical question as to which path comes first and which is really related to the origin of knowledge, there remains the problem of the necessity and contingency of beings and events, and of the chain of causes in the world external to us; a problem which has always involved that of the interior reality of human liberty. If a univocal (determined ad unum) effect followed necessarily from every cause, then the free choice of the will would be impossible because everything, including the choices one makes, would already be determined. And yet we experience our own free will. This problem has appeared throughout the history of thought and involves explaining the internal perception of liberty in a way which is compatible with a correct philosophical and scientific description of the external world.
The result of this intellectual effort has been manifold. Some thinkers have preferred to deny the existence of liberty in favor of another datum of experience, a preference most congenial to the rationalists. For example, Spinoza (1632-1677) went against the evidence of internal experience by declaring free will to be a pure illusion. Others have denied the causal relation between events (the "principle of causality") and have maintained that cause-effect relations are nothing other than an operation performed by the human mind out of habit, rather than a law written into the nature of things. This is the vision most congenial to empiricists such as David Hume (1711-1776). Other authors have sought a way to understand the coexistence of liberty and causality, so as to be consistent with what is given through experience, without maintaining that either of the two facts of experience, namely the existence of liberty and that of a causal order, is mere appearance. They acknowledge instead that both liberty and causality have a full metaphysical meaning. This approach has brought with it an element of "chance" in addition to the notion of "cause." One may think of the clinamen of Epicurus (341-270 B.C.), a sort of random and unpredictable deviation of atoms (entities already postulated by Democritus, c. 460-370 B.C.) from their causally determined trajectories.
Several contemporary thinkers having a scientific background have proposed to found the possibility of free-will on the "uncertainty principle" of quantum mechanics. This type of approach turns out to be too simplistic. It is a kind of ontological transposition of a physical theory, since it reduces the metaphysical horizon to the horizon of "quantity" (even if understood in a very wide sense, as in the Aristotelian definition of quantity, which recalls the definition in modern topology) and of "relation," both being categories which constitute the basis of the quantitative sciences. In this perspective, the notion of causality turns out to be too restrictive, and univocally reduced to the simple "mechanical" (or at most "physical") interaction of the four fundamental forces known today. This radical metaphysical interpretation of a physical principle could be viewed as the flip side of determinism. In fact, such an interpretation would not be qualitatively different from determinism with its "univocal" conception of cause.
Several consequences follow from this way of framing the problem, consequences which are paradoxical from both the anthropological and the theological point of view. The first is that the "free" choice of the will would turn out to be intrinsic to the behavior of the fundamental components of inanimate matter (quarks, elementary particles), which is identical to the matter in the human body. It would then follow that electrons freely choose one among all the permitted states, known to us only on the basis of probability theory, just as a man chooses freely from several possibilities placed before him. The second consequence is that «we would have to conclude that this inherent unpredictability also represents a limitation of the knowledge even an omniscient God could have» (A. Peacocke, God's Interaction with the World: The Implications of Deterministic Chaos and of Interconnected and Independent Complexity, in R. Russell et al., 1995, p. 279). This would be equivalent to stating that God could not know what traditional philosophy calls "future contingents." In the case in question, the future contingent is the evolution of the single particles which quantum mechanics cannot predict: "this limit on total predictability applies to God as well as to ourselves [...]. God, of course, knows maximally what is possible to know, namely the probabilities of the outcomes of these situations, the various possible trajectories of such systems" (ibidem, p. 281).
One may recall that as early as the Middle Ages, philosophers asked whether the divine intellect can know singulars, and in particular singulars which are also future contingents. Objections arose from various arguments: for example, that singulars, knowable only in their material determination, are not suited to a spiritual intellect; or that it is possible that they not exist, and that they are therefore contingent; that they depend in turn on a free will other than the divine will; that their number is practically infinite, etc. Thomas Aquinas confronted this theme directly, showing that God knows singulars, infinite things, and future contingents (cf. Summa Contra Gentiles, I, cc. 65-69).
Such paradoxes can be removed at the root, as we shall see later, if one makes recourse to the "analogic" conception of causality, thereby overcoming the completely univocal conception in mechanics. It seems that the necessary physical basis of intellectual and voluntary activity is to be found not in the context of deterministic and indeterministic processes but rather in the complexity of organization of a highly evolved living being and its brain. In this sense, the Aristotelian-Thomistic vision and recent research on the subject of complexity and on the mind-body relationship seem to agree (cf. Basti, 1991).
In the vision of Aristotle, and later of Thomas Aquinas, who continued and developed Aristotle's philosophy, the question of causality is approached using the classical theory of the four causes: material, formal, efficient, and final. In order that the theory of the four causes be understood in a non-equivocal way (recalling that modern language uses words differently from the way they were used in their original context), we must keep in mind two other metaphysical theories it presupposes, namely the "hylemorphic" theory and the theory of "potency-act." We review briefly their essential features. a) The "material cause" is that which furnishes the constitutive basis of a material object, making it possible for it to be what it is, with certain properties and not others, that is, to receive a certain "form" (in the Aristotelian sense of the term). b) The "formal cause" is that which makes an object receive one "form," i.e., the "nature" which characterizes it with its properties, and not another; it is that which makes it this object and not something else. c) The "efficient cause" causes a physical object which is now characterized by a certain form and/or accidental characteristics to assume another form or other accidental characteristics (quantitative, qualitative, of position, etc.), and is therefore responsible for change and for local motion (which is a type of change). d) The "final cause" lies in the final state reached at the end of a change. In this perspective, the final cause is the most important, since all the other causes in some way depend on it: the final state to be reached determines the material constitution of an object and its essential characteristics (form), and it requires an adequate efficient cause to produce the change from a given initial state to the final state.
With a similar conception of causality, the cause-effect relationship cannot be reduced to a simple mechanical, electromagnetic, or physical (in the modern sense) interaction. In the strong (metaphysical) sense, cause is rather that which causes a thing "to be" and causes it to be in a certain way, and not simply that which "moves" it locally. Causality is therefore conceived in an "analogic" sense. God, as First Cause can have among his effects the human being, or a being endowed with "free" will who is not univocally determined (cf. Summa Theologiae, I. q. 83, a. 1, ad 3um; De Veritate, q. 24, a. 1, ad 3um). In other words, according to an analogic, and therefore non-mechanistic, conception, there is room for (and the necessity of) a cause whose effects can be free acts of the will of a rational subject, like the human being, acts which God can know singularly and to which he confers being, so making possible any free human choice.
We also need to understand the way in which free will can act when it uses matter, which is governed by physical laws. This kind of investigation, which directly involves psychology, cognitive science, physiology, and biology, has yet to be fully developed. Today, an interesting angle from which to approach this subject seems to be the sciences of complexity, since they aim to overcome reductionism, to understand scientific reason in a new fashion, and to open it to the concept of analogy.
As John Polkinghorne has observed, "the causality which physics most readily describes is a bottom-up causality, generated by the energetic interaction of the constituent parts of a system. The experience of human agency seems totally different. It is the action of the whole person and so it would seem most appropriately to be described by a top-down causality, the influence of the whole bringing about coherent activity of the parts. May not similar forms of top-down causality be found elsewhere, including God's causal influence on the whole of creation?" (The Metaphysics of Divine Action, in R. Russell et al., 1995, p. 151).
II. Determinism and Indeterminism in the Sciences
In the context of the sciences, and especially in the physical and biological sciences, the question of determinism/indeterminism has been presented historically, in different ways, in the area of classical mechanics (and more generally in physics) and in that of quantum mechanics.
1. Mechanistic Determinism. It is well known that "classical mechanics" (that is, Newtonian together with Einsteinian, or relativistic, mechanics) predicts, on the basis of its laws, the possibility of determining exactly, at every instant of time, future or past, the position and velocity of a particle, conceived as a material point, as long as one knows the force law acting on the point and the initial conditions, that is, the position and velocity of the particle at a particular instant of time. In this sense, classical mechanics is said to be "deterministic." In this regard, one recalls Laplace's claim that it is in principle possible to know the future of the universe, as well as its past, if one knows the forces, positions, and velocities of all the particles of the universe at a certain instant of time. There remains only the technical difficulty that, in practice, the positions and velocities of all the particles of the universe cannot be known. And even if one did know this information, one would not be able to perform the great number of calculations needed to model the evolution of the universe.
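Laplacian determinism can be illustrated with a toy numerical integration (an illustrative sketch, not from the original text; the spring force and step sizes are arbitrary choices): given the force law and the initial conditions, the computed trajectory is fixed, and rerunning with the same inputs reproduces it exactly.

```python
def trajectory(x0, v0, force, mass=1.0, dt=1e-3, steps=1000):
    """Integrate Newton's second law with the semi-implicit Euler method.
    The state at every later time is fixed by the force law and (x0, v0)."""
    x, v = x0, v0
    for _ in range(steps):
        v += force(x) / mass * dt
        x += v * dt
    return x, v

hooke = lambda x: -4.0 * x  # a linear spring force, F = -kx with k = 4

# Same law + same initial conditions => exactly the same future state.
a = trajectory(1.0, 0.0, hooke)
b = trajectory(1.0, 0.0, hooke)
print(a == b)
```

This is the "one possible future" of mechanistic determinism in miniature: all the unpredictability of real systems must come from somewhere other than the law itself.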
2. Statistical Indeterminism. This last difficulty is already encountered when one seeks to track the behavior of molecules moving in a container of liquid or gas. What one can do, in this case, is to approach the problem statistically, that is, to study the "average behavior" of the particles of the system. "Statistical mechanics" is the tool with which one can give exact information about the "probability" that a particle is found in a certain region with a velocity in a certain range. Thus, in the context of a deterministic classical mechanics, a certain "uncertainty" arises in the knowledge of the positions and velocities of the single particles, an uncertainty of a statistical nature due to the practical impossibility of a complete investigation and calculation. In this case, one speaks of "statistical indeterminism." What we succeed in determining is only the probability that a particle is found in a certain region or has a velocity in a certain range. The uncertainty emerges on the macroscopic level, whereas on the microscopic level it is absent. In other words, there is a deterministic mechanics underlying the statistical uncertainty. It must be made clear that this type of uncertainty is not inherent in the laws of classical mechanics, which are deterministic, but stems from the intrinsic limits on what the observer can know. One could speak of a "subjective" rather than an "objective" uncertainty.
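The statistical viewpoint can be sketched as follows (a hypothetical illustration; the speed distribution is a stand-in, not a physical model): each simulated particle has a perfectly definite state, but since we cannot track them all, we report only ensemble probabilities, such as the fraction of particles whose speed falls in a given range.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Microscopically, each particle has a definite speed; we simply cannot
# track them all, so we summarize the ensemble statistically.
speeds = [abs(random.gauss(0.0, 1.0)) for _ in range(100_000)]

def prob_in_range(lo, hi):
    """Estimated probability that a particle's speed lies in [lo, hi)."""
    return sum(lo <= s < hi for s in speeds) / len(speeds)

print(f"P(0.5 <= speed < 1.5) ~ {prob_in_range(0.5, 1.5):.3f}")
```

The probability is exact information about the ensemble, while the state of any single particle remains, for the observer, a matter of "subjective" uncertainty.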
3. Quantum Uncertainty . The Copenhagen interpretation of quantum mechanics states that uncertainty is not due to the practical impossibility of accessing all the information necessary to predict exactly the motion of the particles, but that uncertainty is a "law of nature", that is, it is a "theoretical" impossibility to be found precisely at the microscopic level of the system. According to this interpretation, quantum mechanics is not statistical - contrary to what Einstein and the supporters of the "hidden-variable" theories hold. Uncertainty does not stem from ignorance, but from a theoretical impossibility. In this case, we have a kind of "indeterministic mechanics" at the basis of an uncertainty which is of non-statistical nature.
As Schrödinger observed in one of his essays in 1932, there was at first an attempt to overcome only "practical" determinism, and only later did we come to admit that such determinism was only theoretical. The opinion previously held was: if we knew exactly the initial velocity and position of every molecule, and we had the time to keep track of all the collisions, it would be possible to predict exactly everything that happens. Only the practical impossibility a) of determining exactly the initial conditions of the molecules, and b) of keeping track of all the single molecular events, led to the introduction of "average laws," which were deemed satisfactory because they involved quantities that can really be observed with the senses, even though such laws are not precise enough to allow sufficiently certain predictions about individual events. It was therefore thought that phenomena were determined in a strictly causal way when the atoms and molecules were considered individually, and that this formed the foundation of the statistical laws, which are the only ones accessible to experience. As Schrödinger pointed out, the majority of physicists held that a strictly deterministic theoretical framework was indispensable for the description of the physical world. They were convinced that an indeterministic universe was not even "conceivable." They assumed that, at least in an elementary process such as the collision of two atoms, the "final result" is implicitly contained, with complete certainty and full precision, in the initial conditions. It used to be said in the past, and is still sometimes said today, that an exact natural science is not possible on premises other than these, and that without a strictly deterministic premise science would be completely inconsistent: our "image" of nature would degenerate into chaos and would therefore not correspond to a nature which actually "exists," since, when all is said and done, nature is not complete chaos.
All of this is undoubtedly "false." It is without doubt permissible to modify the picture given by the kinetic theory of gases of what happens in nature: one may think that in the collision of two molecules the trajectory is determined not by the "known laws of collisions," but by an appropriate "roll of the dice" (cf. Schrödinger, 1932).
It must however be made clear that quantum mechanics retains deterministic features: the wave function ψ evolves deterministically in time according to Schrödinger's equation. Its physical meaning, however, is indeterministic, since it contains nothing other than the probabilities of finding a system in a certain state, at least according to the Copenhagen interpretation.
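In symbols, the deterministic evolution and its probabilistic reading sit side by side:

```latex
i\hbar \,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi
\quad\text{(deterministic evolution)},
\qquad
P(x,t) = \lvert \psi(x,t) \rvert^{2}
\quad\text{(probabilistic interpretation, Born rule)}
```

The equation on the left fixes ψ at all times from its initial value; the indeterminism enters only when ψ is read, via the Born rule, as a catalogue of probabilities.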
Max Born's observation that one should not equate "causality" with "determinism," as in the mechanistic viewpoint, is still worthy of consideration. According to Max Born, it is not causality properly speaking that has to be eliminated, but only its traditional interpretation which equates it with determinism. He emphasizes that "the statement, frequently made, that modern physics has given up causality is entirely unfounded. Modern physics, it is true, has given up or modified many traditional ideas; but it would cease to be a science if it had given up the search for the causes of phenomena" (M. Born, Natural Philosophy of Cause and Chance [New York: Dover, 1964], pp. 3-4).
4. Determinism and Indeterminism in Non-Linear Systems. Deterministic Chaos. A third situation in which uncertainty appears was already noted by Poincaré in 1890 in the field of "classical non-linear mechanics." However, it was set aside for a long while because of the "quantum mechanics boom," which supplanted classical mechanics once the latter proved inadequate to explain the microscopic world. Only in the 1960s were these studies resumed, and they enjoyed a wide diffusion in the scientific and, later, the popular literature.
It has been noted that the majority of differential equations that describe mechanical systems, even relatively simple ones, are "non-linear" equations, that is, equations in which the sum of two solutions is not itself a solution. For this class of equations, and in the majority of cases, the solutions are "unstable." This means that even a slight deviation of the initial conditions from the theoretically desired ones can lead, after a certain time, to an exponentially growing deviation from the theoretically predicted trajectory. Since we cannot know the initial conditions with infinite precision, we are unable to make reliable long-term predictions. In this situation, in the vicinity of the so-called "strange attractors," a dynamical behavior now known as "deterministic chaos" appears (cf. Gleick, 1989; Devaney, 1990). Only a minority of physical systems are stable, and therefore not chaotic. In these systems, the error in the initial conditions remains bounded. In the presence of dissipation, the error even disappears as time increases, and the actual trajectory tends asymptotically to the theoretical one.
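Sensitive dependence on initial conditions can be seen in the logistic map x → r·x·(1 − x), a standard textbook example of deterministic chaos (an illustrative sketch, not from the original text): two initial conditions differing by one part in ten billion soon follow completely different trajectories, even though every step of the iteration is strictly deterministic.

```python
def logistic_orbit(x0, r=4.0, steps=60):
    """Iterate the deterministic logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-10)  # a tiny error in the initial condition

# The map is strictly deterministic, yet the tiny initial error grows
# roughly exponentially until the trajectories are macroscopically apart.
print(f"initial gap: {abs(a[0] - b[0]):.1e}")
print(f"gap after 60 steps: {abs(a[-1] - b[-1]):.3f}")
```

This is precisely the "non-statistical uncertainty" discussed below: a single degree of freedom, a deterministic law, and nevertheless no reliable long-term prediction.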
In the past we did not realize that solutions could be highly sensitive to the initial conditions because we had not yet discovered how to treat non-linear differential equations in a general fashion (cf. F.T. and I. Arecchi, 1990, pp. 23-24).
In this case, a mechanical theory governed by "deterministic" laws gives rise to a "non-statistical" uncertainty (since it appears even in the presence of a single particle and does not require a great number of objects). Such uncertainty is a result of the high sensitivity to changes in the initial conditions. The uncertainty here is related to the intrinsic limits of a mathematical tool, namely the non-linear equations involved, and not to a physical law, as would be the case with Heisenberg's uncertainty principle. Mathematics, in this case, proves to be limited in making physical predictions because it is not possible to know, either experimentally or theoretically, the initial conditions of the motion of a physical system with infinite precision. We have here an example in which nature cannot be completely described with a mathematical approach (unpredictability).
This situation gives rise to various questions in modern science. The first concerns the adequacy of mathematical theories to describe the natural world. A simple mechanical system consisting of three bodies is already mathematically unpredictable, and biological systems, with their complex self-organizing structures, are all the more so. All of this seems to lead to one of the two following conclusions: either a) one should "widen" the scope and the methods of mathematical theories so that they can be applied to these new aspects of nature; or b) one should adopt a scientific mind-set in which a mathematical description is not absolutely essential or exhaustive, while still keeping a logical-demonstrative methodology. The first of the two alternatives has been adopted by various researchers in the field of mathematics and logic (cf. De Giorgi et al., 1995; Basti and Perrone, 1996) and developed along the lines of a modern formulation of the theory of analogy. The second alternative has been considered in biology and, to a certain extent, in chemistry.
III. Chance and Finality
What is chance? In scientific, philosophical, and common language, the term "chance" is used in opposition to the notion of "cause."
Following a known Thomistic classification, we call "random" ("by chance") any event which a) does not appear to have a controllable "direct cause" (per se ) and, as such, is unpredictable; b) appears without any goal or finality (cf. In I Sent. d. 39, q. 2, a. 2, ad 2um). We will look briefly at these two characteristics and later add some observations concerning the metaphysical and theological aspects of the problem.
1. Absence of a Direct Cause. A truly random event consists of two independent and accidentally concomitant events, in which each event is the effect of a proper direct cause without there being a direct cause of the concomitance itself of the two events. To use a common everyday example, the fact that two friends, without having arranged an appointment beforehand, come by different streets and meet in a square is "random" (cf. Aristotle, Physics, II, 4, 196a). Certainly, there is a cause of the fact that each of the two left home and proceeded to the square at that moment, but there is no direct cause of their meeting each other. One could say, at least, that there is no cause on the "same level" as that of leaving home: there could have been, for example, a third person who called each of them up independently to go to the square, so that the two friends would meet as a surprise. In this case, we are dealing with a "second-level" cause which makes use of the "first-level" causes (in this case, the free decision of each of the two friends to go out into the square).
In the course of the history of science, two situations which systematically appear before the researcher have been observed experimentally. The first consists in the constant association of two phenomena, in which if one occurs, the other always occurs (and not vice-versa), whereby one recognizes the first as the "cause" of the other. The second consists in the existence of two phenomena which seem to occur together without any clear direct cause linking them; these are considered "random." If in the sciences determinism appears closely related to a causal description of observed phenomena, uncertainty in its various forms always introduces an uncontrollable element whose origin the scientist regards as random. However, explaining randomness as a manifestation of the limits on what the observer can practically know is quite different from saying that randomness depends, theoretically, on the nature of things. Philosophically speaking, "randomness" in the strong sense of the word is only that which derives from a reason of the theoretical kind, and not from our ignorance.
2. Absence of a Finalism. The absence of a direct cause in the random event is related to the fact that the event has no purpose: what happens by chance is, by definition, "aimless."
The problem of the relationship between causality, chance, and finalism has played an important role in contemporary debates on scientific method, such as, for example, the discussion found in J. Monod's book, Chance and Necessity (1970): yet finality, which is systematically excluded from the physical sciences, has emerged in many cases as an adequate explanatory principle. For example, in the field of cosmology, it appears in the debate over the Anthropic Principle. In biology, it appears in the concept of "teleonomy," which overturns the notion of "initial conditions," typical of physics, replacing it with that of "final conditions." Such final conditions must be realized by a system according to a predetermined "program," as happens in biology with the DNA genetic code (cf. Cini, 1994, p. 236).
The possibility of interchange between initial and final conditions has always been present, in principle, in classical mechanics as well, because the theory of differential equations does not specify the time at which such conditions are imposed. However, in a dissipative system with a stable attractor, the symmetry between initial and final conditions is broken in favor of the final conditions. In this case, whatever the initial conditions may be, as long as they start from somewhere inside the basin of attraction, the evolution of the system tends to stabilize asymptotically into the attractor, which turns out to be the final state of the system. The most familiar example is given by the pendulum moving in a medium with friction, which tends towards the position of stable equilibrium regardless of its initial position and velocity; another example is that of the driven LCR circuit, which reaches a stable steady-state solution once the energy associated with the "transient solution" has been dissipated.
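The convergence described above can be illustrated numerically. The sketch below is illustrative only (the integrator, parameter values, and function name are assumptions for the example, not taken from the source): it integrates a damped pendulum from two very different initial states inside the basin of attraction, and both trajectories settle toward the same attractor, the rest position, which thus plays the role of a "final condition."

```python
import math

def simulate_damped_pendulum(theta0, omega0, damping=0.5,
                             g_over_l=9.81, dt=0.001, steps=60000):
    """Integrate theta'' = -(g/L) sin(theta) - damping * theta'
    with simple explicit Euler steps; returns the final (theta, omega)."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        alpha = -g_over_l * math.sin(theta) - damping * omega
        theta += omega * dt
        omega += alpha * dt
    return theta, omega

# Two very different initial conditions (angle in radians, angular velocity)...
final_a = simulate_damped_pendulum(theta0=1.2, omega0=0.0)
final_b = simulate_damped_pendulum(theta0=-0.8, omega0=2.0)
# ...both end up arbitrarily close to the same final state: rest at (0, 0).
```

Whatever admissible initial data are chosen, the friction term dissipates the energy difference, so the final state, not the initial one, characterizes the system's long-run behavior.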
3. God and Chance. In the metaphysical-theological perspective, problems related to determinism and indeterminism - as well as the question of free will alluded to earlier (see above, I.2) - lead to questions regarding the modes of divine action within the world. How may God or, in properly theological language, Divine Providence, act on the world, if we admit "random" events, that is, events without a finality, having no direct (per se) causes, and not simply events whose finalism we do not know?
If one does not admit the analogy of causality, that is, the existence of differentiated levels and modes according to which causality can and must operate and be understood, and if one supposes instead that the only possible mode of causality is the physical-mechanical mode, then one is led to attribute to "chance" the character of an efficient cause from which random events spring. Since chance has no direct cause in the physical-mechanical sense, it would then assume, in substance, the role of the "first cause," and everything would spring from chance. This position can be found, for instance, in Jacques Monod, who states that "chance alone is at the source of every innovation, of all creation in the biosphere. Pure chance, absolutely free but blind, at the very root of the stupendous edifice of evolution" (Chance and Necessity [London: Collins, 1972], p. 110). But this is contradictory, because of the very definition of "chance." In fact, such a definition assumes the existence of other (indirect) causes which precede chance and whose effects are accidentally concomitant. Therefore, chance cannot take the place of the first cause, since it requires the existence of previous causes in order that their effects may be accidentally concomitant; it turns out that chance needs the existence of a first cause, instead of eliminating it.
An intermediate solution, which is certainly interesting, was proposed by Arthur Peacocke. Opposing Monod's thesis, he acknowledges the existence of causality, and therefore of a first cause, but requires that the first cause be "self-limiting" so as to leave room for chance. The causal action of God differs from purely physical action because it is "informative": an immaterial action which interacts with the world as a whole and lets the laws of complexity govern the single events. God, acting as an "informer" over the "world as a whole," does not deal with singular events, and therefore cannot know the "future contingents" determined by the complexity of physical and biological systems (cf. God's Interaction with the World: The Implications of Deterministic Chaos and of Interconnected and Interdependent Complexity, in Russell et al., 1995).
This approach, which draws its inspiration from complex systems and information theory, succeeds in introducing a certain diversification in the modes of causality ("physical" action and "informative" action), but it seems to have the same limitation of conceiving divine causality and chance as two competitors who must divide the field of action between them. Such a conception does not use the analogy of divine causality, which would allow chance to play a role without diminishing the role of the first cause. Even in this perspective, the contradiction is not resolved: according to its very definition, chance subsists only if there are causes which precede it, producing, randomly, concomitant effects. It is hard to understand how a single random event could exist independently of the "first cause."
The fact that certain concomitances happen by chance, that is, without a "direct" or "secondary" cause, to use a philosophical term, does not mean that they have no cause in the absolute sense, even when considered separately: it is necessary to keep in mind the hierarchy of levels of causality. Metaphysically speaking, all that exists is caused and maintained in its being by the first cause (God), who is also the final end of all things. And the "first cause" acts through a chain of "secondary causes," down to those causes which are nearest to the single object under observation and act directly upon it. Thus, even events which are random due to their lack of a direct cause have a cause on a higher level in the chain of causes. And this is where the divine action lies, which even through the happening of random events orients all things to their final end. Thomas Aquinas adequately treated the problem of the relationship of chance to the existence of Divine Providence. He showed that Providence does not exclude contingency and does not deprive things of fortune and chance. Providence extends even to singular contingents and can, in principle, act either directly or through secondary causes (cf. Summa Contra Gentiles, book III, chaps. 72-77). An interesting way of summarizing the relationship between chance and God was proposed by D. Bartholomew, whose approach is closer to that of St. Thomas. Speaking of a "God of chance," he maintains that chance is a deliberate, if not necessary, part of God's creation.
Chance is not something which escapes God's control, nor something which opposes Him or contains within itself its final explanation: "If chance cannot be explained, the life of individuals would be submerged in disorder. On the other hand, if one admits that there can be a universal Cause of the world, this Cause must be responsible for everything that exists, including chance. 'And thus it turns out that everything that happens, if referred to the First Divine Cause, is ordered and does not exist accidentally, even if some can be called per accidens in their relation to other causes' (Thomas Aquinas, In VI Metaph., lect. 3)" (Sanguineti, 1986, p. 239). We may observe, finally, that the same idea is found in the Catechism of the Catholic Church, expressed in a more theological language: "God is the sovereign master of his plan. But to carry it out he also makes use of his creatures' cooperation. This use is not a sign of weakness, but rather a token of almighty God's greatness and goodness. For God grants his creatures not only their existence, but also the dignity of acting on their own, of being causes and principles for each other, and thus of cooperating in the accomplishment of his plan" (CCC 306).