The Realist Interpretation of Quantum Mechanics

Before the quantum revolution, the scientific depiction of the natural world was a deterministic one: once all the initial parameters of a physical system were known, its evolution could be predicted with exact precision. This ability to make exact predictions from empirical knowledge formed the backbone of science, with physics painting the deterministic picture of the world at the fundamental level. From the motions of the stars to the behavior of the atoms that make up our bodies and the materials around us, physics held an advantage over the other sciences, such as biology and chemistry, in that its precision was unmatched; the speed with which an object would hit the ground could be calculated exactly, while how the human body would respond to a certain chemical could not always be precisely predicted. Even statistical physics–thermodynamics–where the components of a system were too numerous to treat individually, suggested a deterministic view. Though an ensemble of particles may be vast, nothing in the nature of thermodynamic theory suggested that the trajectories of those particles were fundamentally unknowable; it was simply a practical matter to treat the system statistically rather than to follow each molecule individually, though, in principle, each molecule could be isolated and its properties measured. It was this line of reasoning that would inspire the realist position following the quantum revolution.
But once it was shown that Niels Bohr’s model of the atom was incorrect, as was Schrödinger’s picture of the electron as a continuous distribution of charge spread around the atom, physical models began to lose precedence in physics.1 Mathematical formalism took the stage in the atomic realm, because no physical model seemed able to describe what experiments were measuring when it came to sub-atomic particles. Electrons, treated probabilistically, were shown to obey wave equations, and their characteristics could be measured only within certain limits. This treatment introduced constraints at odds with previously established principles in physics, and much debate has gone into what the wave equation of a particle actually represents physically. What quantum theory suggested was that the location of a particle could not be predicted beyond the realm of probability (as a matter of principle, not just practicality), and that paired quantities, such as position and momentum, could not be simultaneously known with arbitrary precision, i.e., exact knowledge of one precluded exact knowledge of the other. This concept was mathematically formulated in Heisenberg’s uncertainty principle, originally published in the German physics journal Zeitschrift für Physik in 1927–and it has been a thorn in the philosopher’s side ever since.
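For reference, the principle’s standard modern form (Heisenberg’s 1927 paper stated it as an order-of-magnitude relation; the tidier bound below came shortly afterward) limits how sharply position and momentum can be simultaneously defined:

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2},
\qquad
\hbar = \frac{h}{2\pi} \approx 1.05 \times 10^{-34}\ \text{J·s}
```

No measurement strategy, however clever, can beat this bound; that is precisely the “in principle” character of the limit that the realists found so troubling.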
To the classical physicist (and to the deterministic philosophy described above), these ideas were anathema. It is one thing to say that it is impractical to measure a certain property of a particle, e.g., the trajectory of a specific air molecule, but it is another to say that, in principle, a particle’s property cannot be measured–that nature imposes limits on what it reveals about itself at the fundamental level. If particles’ trajectories were fundamentally random, and the uncertainty principle was a fundamental law of nature, then a deterministic view of the universe was an anachronism. In response to this new, stochastic view of the universe, Einstein made his famous “God does not play dice” remark,2 illustrating his view that the trajectory of a particle was not a matter of irreducible uncertainty but depended on the initial conditions of the system, and that if those conditions were known, the trajectory could be predicted and described precisely.
Yet, despite these classical and philosophical objections, quantum theory has remained supreme in its realm. Its predictions about the fundamental indeterminism of our universe on the atomic scale have been experimentally verified, and though we may not like it, it seems probability governs our world–not the simple, linear cause and effect previously assumed. Even quantum entanglement, a consequence of the mathematical formalism that Einstein singled out as paradoxical, has been physically demonstrated.3 Today, most physicists have capitulated to the counter-intuitive realities of nature suggested by the Copenhagen and other non-deterministic interpretations of quantum mechanics. It is widely accepted that measuring a quantum mechanical system disturbs the system, that “true” particle trajectories do not exist, that matter as well as light behaves as both particle and wave, and that an electron can, in a sense, be in two places at once. These phenomena are both what we observe and what the mathematics tells us, and so the physics community rolls with it.
But this hasn’t stopped a small minority of physicists from clinging to a deterministic universe; their view is known as the realist position, or the realist interpretation of quantum mechanics. It is not a denial of the results of quantum theory, which have been confirmed by numerous experiments–this can’t be emphasized enough–but an insistence that the picture of the quantum realm is incomplete, on the grounds that quantum mechanics, however useful and consistent, has yet to provide a physical model of the universe–or at least one that makes even a bit of sense.
Quantum mechanics is a theory without a clear publication date or single founder. It consists of the aggregated work of many early twentieth-century physicists, such as Niels Bohr, Enrico Fermi, Erwin Schrödinger, Wolfgang Pauli, and Richard Feynman.4 Even Albert Einstein, already noted as a later opponent of the theory, couldn’t help but contribute to its formation. His work on the photoelectric effect, for which he received the Nobel Prize in Physics in 1921, helped establish the quantization of light energy–the kind of discreteness quantum mechanics is built around–and the relationship between the energy and frequency of light.5
Another of the early physicists who helped construct quantum theory was Louis de Broglie. His initial work was the theoretical development of matter waves, presented in his 1924 PhD thesis. In this brave and groundbreaking doctoral defense, de Broglie predicted that all matter has an associated wavelength, one that becomes more salient as the scale of the matter involved decreases, i.e., it would not be apparent for cars and baseballs, but it would be for sub-atomic particles. This prediction was confirmed by the Davisson-Germer electron diffraction experiments at Bell Laboratories–a serendipitous discovery–and de Broglie was awarded the Nobel Prize in 1929 for his insight that wave-particle duality is exhibited not only by light but by matter as well.
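To make that scale dependence concrete, de Broglie’s relation ties the wavelength to Planck’s constant divided by momentum. The rough numbers below (a pitched baseball versus a modestly fast electron) are back-of-the-envelope estimates added for illustration, not figures from the original article:

```latex
\lambda = \frac{h}{p} = \frac{h}{mv}:
\qquad
\lambda_{\text{baseball}} \approx \frac{6.6\times10^{-34}}{0.15 \times 40} \approx 1\times10^{-34}\ \text{m},
\qquad
\lambda_{\text{electron}} \approx \frac{6.6\times10^{-34}}{9.1\times10^{-31}\times 10^{6}} \approx 7\times10^{-10}\ \text{m}
```

A wavelength of 10⁻³⁴ m will never show up in any experiment, while a few tenths of a nanometer is comparable to the spacing of atoms in a crystal, which is why electrons diffract in the Davisson-Germer experiment.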
If de Broglie’s ideas about the wave-particle duality of all matter were true, they posed a challenge not just for physics but for the philosophy of science as well. If an electron has a wavelength, then where is the electron, or better, where is the wave? The answer isn’t clear, because waves are spread out over a range of space. In defining a wavelength, one loses the ability to define a position, and vice versa. Yet an electron can still have a defined position, as demonstrated by experiments that reveal its particle-like nature; particles aren’t spread out in space. It was from these considerations that Werner Heisenberg developed the famous and already mentioned uncertainty principle: to pin down a position, an experimentalist must forfeit information about the wavelength (momentum), and vice versa. The development of this principle marked the downfall of determinism in science.
Yet de Broglie did not originally believe that the probabilistic wave treatment of matter warranted an indeterministic interpretation of the universe. In 1927, around the time the Davisson-Germer experiments were confirming his matter-wave hypothesis, he proposed pilot-wave theory, a suggestion that the wave equation of quantum mechanics could be interpreted deterministically. Though he eventually abandoned this interpretation for the mainstream one, the theoretical physicist David Bohm would later continue his work, and pilot-wave theory would also become known as the de Broglie-Bohm theory.6
In 1952, while employed at Princeton, Bohm published a paper espousing his realist interpretation of quantum mechanics. In it, he suggested that quantum theory was incomplete and that “hidden variables” had not been taken into account in its formulation. These hidden variables would explain why the theory was so far only probabilistic; if they were taken into account, its predictive capabilities would become exact. That is, he believed there were more parameters to consider in the wave equation, and that quantum theory had so far failed to predict exact results because not all of the pertinent variables were accounted for. (This is analogous to trying to measure the total kinetic energy of the Earth while considering only its linear kinetic energy and not its rotational energy: you won’t get the right answer until you account for both.)
Bohm suggested introducing a “quantum mechanical potential energy” as the starting point of a new mathematical treatment of the theory. The double-slit experiment, in which a single electron exhibits particle-like behavior when one slit is open and wave-like behavior when both are open, could be explained by postulating that the quantum potential of the system changes when the second slit is opened or closed. The realist’s goal was then to discover the hidden variables and physical phenomena that induce this change in the potential. In particular, Bohm pointed out that an expansion of the theory in this direction might be needed to address quantum mechanics on the nuclear scale, where its laws appeared to break down, and that developments toward a more complete formulation would expand its domain.7
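For readers who want to see where such a term lives in the mathematics, the sketch below is the standard textbook de Broglie-Bohm decomposition rather than a reconstruction of Bohm’s original 1952 notation: writing the wave function in polar form and substituting it into the Schrödinger equation yields a classical-looking equation of motion plus one extra term, the quantum potential.

```latex
\psi = R\,e^{iS/\hbar}
\;\;\Longrightarrow\;\;
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V + Q = 0,
\qquad
Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R},
\qquad
\frac{d\mathbf{x}}{dt} = \frac{\nabla S}{m}
```

Particles then follow perfectly definite trajectories; all of the quantum “strangeness” is carried by Q, which depends on the whole experimental arrangement: opening a second slit changes R everywhere, and with it the force guiding the particle.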
Bohm also held that though quantum mechanics was useful, consistent, and elegant, it did not justify the denouncement of determinism–the working philosophy behind every field of science, not just physics. To Bohm, nothing in the theory showed that the Copenhagen (mainstream) interpretation revealed the physical reality of nature; rather, the theory was still developing, and, in addition to the theoretical complications, the instruments used in its experimental verification were bound to interfere with the precision of the measurements. After all, this was the first time in history that objects of such small size were being measured for their exact location and properties. Renouncing a deterministic worldview that the rest of science reinforced did not seem justified simply because a practical theory suggesting otherwise had been developed. Bohm, like Einstein, was sure a more complete and physically sensible theory would one day supplant it.
In fact, Einstein didn’t wait for the future. Even after having developed his groundbreaking theory of relativity and winning the Nobel Prize for the photoelectric effect (it is widely, and wrongly, assumed he won it for the former rather than the latter), Einstein continued his work in theoretical physics, his eyes set on bringing absolute determinacy back into science. In 1935, Einstein, along with his colleagues Boris Podolsky and Nathan Rosen, published a paper arguing that the quantum mechanical description of reality by the wave function is incomplete.8 In the mathematics of quantum mechanics, Einstein and his colleagues found what they regarded as a paradox: the prediction that two or more particles can become “entangled,” meaning they must sometimes be described by a single quantum state even when separated by distances far larger than those usually dealt with on the quantum scale–so large that a signal traveling at the finite speed of light, which limits the transmission of information, could not carry news of the shared state between the two particles in time for them to respond accordingly.
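As a concrete illustration (this is the later spin version associated with Bohm, not the position-momentum state of the original 1935 paper), the simplest entangled state of two particles A and B is the singlet:

```latex
|\psi\rangle = \frac{1}{\sqrt{2}}\Big(|{\uparrow}\rangle_A|{\downarrow}\rangle_B \;-\; |{\downarrow}\rangle_A|{\uparrow}\rangle_B\Big)
```

Neither particle has a definite spin of its own, yet a measurement on A immediately fixes what a measurement on B will give, no matter how far apart the two are.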
This meant that, for entanglement to occur, action at a distance was required, a concept regarded as untenable in most fields of physics–and one that bothered the ancient Greek philosophers as well. It suggested that the physical system in question was non-local: for action at a distance to occur, the principle of locality must be violated. The importance of this principle rests on the assumption that, in order for information to be transmitted between two objects, something must do the transmitting; be it a particle, a field, or a wave, the information must be physically carried somehow.
In 1964, the physicist John Stewart Bell proposed a theorem addressing the possibility of hidden variables and quantum entanglement. Bell showed that no local hidden-variable theory could reproduce all the predictions of quantum mechanics, a result widely read as ruling out the kind of completion Einstein had hoped for.9 Going into the technical details of Bell’s theorem is beyond the scope of this article, but its predictions concerning the non-locality of the quantum world have been experimentally borne out–though proving the nonexistence of hidden variables outright would amount to proving a negative, something beyond the capabilities of science, at least in its current philosophical form. Quantum entanglement was experimentally verified, proving Einstein and his colleagues wrong and making their predicted paradox a physical reality.10
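For the curious, the form of Bell’s result most often tested in the laboratory is the CHSH inequality, a 1969 variant due to Clauser, Horne, Shimony, and Holt rather than Bell’s original 1964 expression. For measurement settings a, a′ on one particle and b, b′ on the other, with E denoting the correlation of the outcomes:

```latex
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'),
\qquad
|S| \le 2 \ \text{(any local hidden-variable model)},
\qquad
|S| \le 2\sqrt{2} \ \text{(quantum mechanics)}
```

Experiments on entangled pairs report values close to 2√2, which is why local hidden variables are considered ruled out while non-local schemes like Bohm’s survive.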
Today, there is an appreciable number of physicists who subscribe to the realist interpretation, an esoteric view within the already esoteric discipline of quantum physics. Dr. Emilio Santos of the University of Cantabria is one of the leading physicists holding this view. Not yet convinced that Bell’s theorem refutes the possibility of Bohm’s hidden variables, Dr. Santos posits that the apparent stochasticity of the quantum universe is due to the interference of measuring apparatuses with the systems they probe in quantum mechanical experiments, as well as to the presence of vacuum fluctuations in space-time.1
His reading of the uncertainty principle stems from the unavoidable reality that, in a quantum experiment, the researcher must examine a microscopic object–which obeys quantum mechanical laws–using a macroscopic measuring tool–which obeys Newtonian laws.3 So far, no known theory fully links the two realms. To work around this difficulty, Niels Bohr, one of the first developers of quantum mechanics, proposed the correspondence principle: as we pass from the quantum world to the classical, macroscopic one–formally, in the limit where Planck’s constant becomes negligible–quantum laws transition into classical ones.1,3 However, it is philosophically contradictory to claim that some aspects of our universe are deterministic and others are not, as determinism implies that all components of a system have predictable, causal trajectories; it seems odd to claim that predictable systems rest on unpredictable foundations. Though he does not state this explicitly in his papers, it is apparent that Dr. Santos does not lean on Bohr’s correspondence principle, and he believes the radically different natures of the experimental system and the measuring apparatus are more to blame.
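A standard textbook illustration of the correspondence principle (an example added here, not one drawn from Santos’s papers) is the harmonic oscillator: its energy levels are discrete, but their spacing is proportional to Planck’s constant, so at macroscopic energies the spectrum is effectively continuous and classical behavior is recovered.

```latex
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right),
\qquad
\Delta E = \hbar\omega \;\ll\; E \ \ \text{at macroscopic energies, so the levels blur into a continuum}
```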
In addition to the apparatus problem, Dr. Santos argues that the ostensible indeterminacy of quantum mechanics may arise from vacuum fluctuations.1 He ascribes the apparent probabilistic nature of quantum theory to the inherent difficulty of measuring particles on such small scales, where the space they inhabit itself affects the system. Even vacuums participate in quantum mechanical activity, and because there are no discontinuities in ordinary space, no system can be truly isolated or claim to be local. To Dr. Santos, non-locality must be accepted, but this does not preclude a realist interpretation of quantum theory, since it does not prove inherent, natural limits on the knowledge we may possess of any physical system; it simply suggests that the systems we study are full of too much background “noise” to precisely measure any individual particle–in the same way there is too much noise in a crowded room to cleanly record any one particular conversation. Dr. Santos suggests that, until a physical model is proposed or an advance in the mathematical formalism points toward a realist interpretation, quantum mechanics is incomplete. As he puts it, “I do not propose any modification of that core, but claim that the rest of the quantum formalism is dispensable.”1
It is worth noting the technological implications a realist interpretation would have for the modern field of quantum computing. Ordinary computers use binary, reducing all stored data to a collection of ones and zeroes arranged in a particular order. Relatively speaking, such computers are limited by the fact that all their processes are collections of yes-or-no, on-or-off statements, which the machine has to work through in order to perform any command.
Quantum computing would overhaul this limitation of binary by taking advantage of the quantum phenomena available to us. Instead of stepping through a collection of yes-or-no statements one at a time, a quantum processor can exploit superposition and entanglement to explore many computational branches at once–something impossible in classical binary logic. The counterintuitive features of quantum mechanics make these processes possible: instead of transistors converting circuit data into binary (current flowing here or there), quantum hardware relies on the fact that a quantum system can occupy several states at once, so a single register can, in effect, carry many computations in parallel. While no quantum computer of practical consequence has yet been built, such devices do exist, and many in the field expect their capabilities eventually to surpass those of digital computers for certain problems. Quantum data is stored in “qubits,” the quantum mechanical replacement for the classical “bit.” At the time of writing, quantum computers could handle only a measly 16 qubits, but most developers in the field were confident an expansion of quantum computing capabilities was on the horizon.
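To make the bit-versus-qubit contrast concrete, here is a minimal sketch that classically simulates a two-qubit register prepared in a Bell (entangled) state and samples measurements from it. It is only a state-vector simulation in Python using numpy (assumed to be available), not a program for real quantum hardware, and the gates and state are standard textbook choices rather than anything described in this article:

```python
# Minimal classical simulation of a two-qubit Bell state (|00> + |11>)/sqrt(2).
# A classical bit is definitely 0 or 1; this register is in a superposition of
# "00" and "11", and the two qubits' measurement outcomes are perfectly correlated.
import numpy as np

# Start in |00>, written as a 4-component state vector over the basis 00, 01, 10, 11.
state = np.array([1, 0, 0, 0], dtype=complex)

# Hadamard on qubit 0 (puts it into an equal superposition), identity on qubit 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
H0 = np.kron(H, I2)

# CNOT: flip qubit 1 whenever qubit 0 is 1. This is the step that entangles the pair.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

bell = CNOT @ H0 @ state        # (|00> + |11>)/sqrt(2)
probs = np.abs(bell) ** 2       # Born-rule probabilities: [0.5, 0, 0, 0.5]

# Sample 10,000 joint measurements in the computational basis.
rng = np.random.default_rng(seed=0)
outcomes = rng.choice(["00", "01", "10", "11"], size=10_000, p=probs.real)
counts = {k: int((outcomes == k).sum()) for k in ("00", "01", "10", "11")}
print(counts)  # roughly {'00': ~5000, '01': 0, '10': 0, '11': ~5000}
```

Each individual outcome is unpredictable, yet “01” and “10” never occur: the randomness and the correlation arrive together, which is the behavior quantum processors exploit and the behavior a realist would attribute to as-yet-hidden variables.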
It is somewhat unclear what a deterministic revolution in quantum theory would mean for quantum computing; it would depend on what exactly the hidden variables and their physical reality turned out to be. Would their discovery reveal that, in actuality, an electron cannot be in two places at once? This is unlikely, as experiment has revealed such superposition-like behavior, but then again, what if the hidden variables revealed that our measurements really did influence the measured physical systems beyond the limits of forgivable scientific error, and that our measurements produced the so far paradoxical results of electrons–and the rest of matter–having almost phantom-like properties? Alas, since the realist interpretation of quantum mechanics is not the focus of many researchers in quantum theory, its implications for quantum computing have not been fully considered. It could either expand or kill the field. Maybe the reason quantum computers cannot yet handle more than a handful of qubits is that we are asking nature to do something fundamentally against its mechanics, despite the fact that our tentative mathematical theories suggest it is possible.
But technological considerations aren’t the only ones raised by the realist interpretation of quantum mechanics. The philosophical and religious implications of the realist interpretation versus the mainstream Copenhagen interpretation are quite profound, and the debate between determinism and uncertainty in quantum mechanics has inspired many philosophers to consider what each interpretation means for the limits of human free will. If the laws of nature are completely deterministic, and every event in the history of the universe can be traced back through particle trajectories to the big bang, then it follows that all events, even human thoughts, wants, and actions, are simply the reactions of atoms and molecules to physical laws, leaving no room for non-natural agents to participate in the system. In this view of the universe, one doesn’t make choice A instead of B because one is a free agent in a universe of otherwise natural laws; one makes that choice because the information about those two options induced a certain chemical reaction in the mind of the chooser (the mind is made of atoms as well), and in the same way a rock falls under the influence of gravity, the matter that composes the human mind reacts under the influence of causal particle mechanics.
But if the universe is indeterministic, as the mainstream interpretations of quantum mechanics suggest, then human choices aren’t predetermined, and this indeterminacy ostensibly leaves room for human influence. Yet it remains to be shown how this position can be maintained. Even if human decisions and actions were not determined at the moment of the big bang, and all the events in the universe could be reduced to the unpredictable behavior of stochastic particles, that still leaves nowhere for a non-natural influence–free will–to come into the picture. Human choices are still the result of particle trajectories, whether or not those trajectories can be predicted, and whether they are deterministic or stochastic. Until some unnatural agent is introduced into the complex but natural configuration of the human mind–unnatural in that it would stand outside the laws of nature–the position that humans have free will cannot be maintained without appealing to notions of the supernatural. And nature does not approach the supernatural as its systems approach complexity, even the complexity of the human mind. To claim otherwise is to claim that the molecules which make up the brain follow different physical laws than the rest of the molecules in the universe. And if you disagree, I can’t blame you; it’s not like you had a choice in the matter anyways.
But philosophical debate aside, quantum mechanics, as one of the most successful and useful theories in all of theoretical physics, does seem to point toward the indeterministic realities of nature. Our understanding of semiconductors, the materials that power your smartphone, comes from quantum mechanics, and we can’t discard its probabilistic elements without discarding our understanding of the theory altogether. In physics, where experiment is king, and in science, where nature is under no obligation to make sense to us, it seems stubborn to ignore the continuing theoretical and experimental verification of the probabilistic nature of the universe. Yet the idea that this limit is one of practicality, not principle, is a hard one to give up. Human science has reduced every other aspect of the universe down to the simple but fascinating level of causal mechanics; it is tempting to say that quantum mechanics will one day reach this point as well.
References
1 E. Santos, Foundations of Science 20, 357–386 (2015); arXiv:1203.5688 [quant-ph].
2 W. Hermanns, Einstein and the Poet: In Search of the Cosmic Man, Branden Books, 1st ed. (2013), p. 58.

3 V. Singh, Materialism and Immaterialism in India and the West: Varying Vistas (New Delhi, 2010), pp. 833–851; arXiv:0805.1779 [quant-ph].

4 J. Mehra and H. Rechenberg, The Historical Development of Quantum Theory (New York, 1982).

5 “Albert Einstein – Facts”. N.p., 2017. Web. 24 Feb. 2017.

6 F. David Peat, Infinite Potential: The Life and Times of David Bohm (1997), pp. 125–133.

7 D. Bohm, Phys. Rev. 85, 166 (1952).

8 A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. 47, 777 (1935). 

9 H. P. Stapp, Nuovo Cimento B 29, 270 (1975).

10 A. Witze, “75 Years of Entanglement”. Science News, 2017. Web. 24 Feb. 2017.


8 Comments

  1. We observe that material objects behave differently according to their level of organization as follows:
    (1) Inanimate objects behave passively, responding to physical forces so reliably that it is as if they were following “unbreakable laws of Nature”. These natural laws are described by the physical sciences, like Physics and Chemistry. A ball on a slope will always roll downhill.
    (2) Living organisms are animated by a biological drive to survive, thrive, and reproduce. They behave purposefully according to natural laws described by the life sciences: Biology, Genetics, Physiology, and so on. A squirrel on a slope will either go uphill or downhill depending upon where he expects to find the next acorn.
    (3) Intelligent species have evolved a neurology capable of imagination, evaluation, and choosing. They can behave deliberately, by calculation and by choice, according to natural laws described by the social sciences, like Psychology and Sociology, as well as the social laws that they create for themselves. A child will ask permission of his mother, or his father, depending upon which is more likely to say “Yes”.
    A naïve Physics professor may suggest that “Physics explains everything”. But it doesn’t. A science discovers its natural laws by observation, and Physics does not observe living organisms, much less intelligent species.
    Physics cannot explain why a car stops at a red traffic light. This is because the laws governing that event are created by society. The red light is physical. The foot pressing the brake pedal is physical. But between these two physical events we find the biological need for survival and the calculation that the best way to survive is to stop at the red light.
    It is impossible to explain this event without addressing the purpose and the reasoning of the living object that is driving the car. This requires nothing that is supernatural. Both purpose and intelligence are processes running on the physical platform of the body’s neurology. But it is the process, not the platform, that causally determines what happens next.
    It is reasonable to presume causal determinism at all three levels: physical, biological, and rational. But we must include both purposeful causation and rational causation to get there. It would be natural, but nevertheless inaccurate to suggest that “in the same way a rock falls under the influence of gravity, the matter that composes the human mind reacts under the influence of causal particle mechanics”. Particle mechanics cannot explain why one group of water molecules flows downhill, while another group of water molecules hops into the car to go grocery shopping.
    And free will, when properly defined as our ability to decide for ourselves what we “will” do, when “free” of coercion or other undue influence, poses no problem for causal determinism, nor does causal determinism pose any threat to our autonomy, when our choices are reliably caused by our own purpose and our own reasons. Because both are simultaneously true.

    • “Particle mechanics cannot explain why one group of water molecules flows downhill, while another group of water molecules hops into the car to go grocery shopping.”
      I’ll have to disagree with you there. The being which transports the water to the car is made up of particles. If everything is made up of particles, then everything can, in theory, be described by particle physics. You have to invoke the inductive principle to come to this conclusion, but once it is applied, you’ll see all these instances of events seemingly not governed by physics fall under its jurisdiction. It’s very simple to reduce the biological processes of cells down to the level of mechanics; we already have the field of medical physics doing just that, and neuroscientists are doing the same thing.

      • “If everything is made up of particles, then everything can, in theory, be described by particle physics.”
        I think I just made the case that it cannot be done. Would you like to prove otherwise, by describing a car stopping at a red light, using only the concepts contained in a physics textbook? And will there be anything in that physics text that refers to traffic signals, and what these signals mean to physical particles, such that they would choose to stop when they see a red light?
        There’s a concept called “emergence”, which suggests that certain properties, which do not exist in the individual parts, nevertheless emerge naturally in complex systems, for example when matter is organized as living beings. And additional properties emerge with the evolution of intelligence.
        There also appears to be both top-down as well as bottom-up causation. It originally had to be built-up from the bottom, of course. But it is like the body has evolved a brain and delegated to it the responsibility for dealing with reality at the human macro-level. And all human concepts, including free will and determinism, exist at this level.

        • “I think I just made the case that it cannot be done. Would you like to prove otherwise, by describing a car stopping at a red light, using only the concepts contained in a physics textbook? And will there be anything in that physics text that refers to traffic signals, and what these signals mean to physical particles, such that they would choose to stop when they see a red light?”
          Yeah, it’s simple. When the driver of the car sees light emitted from the traffic light around the ~650 nm range, it triggers the firing of neurons in the driver’s brain, which causes the driver to hit the brakes. There are of course millions more steps in between all of that, but there is no step in that process that suggests it doesn’t boil down to biology/neuroscience, which boils down to chemistry, which boils down to physics.
          You’re neglecting that the human brain is made of particles, operates according to physical laws, and falls under the domain of physics, regardless of how long it takes to physically describe some event in which the brain is a participant.

          • We’re perfectly in sync that everything takes place using physical materials and energy. And once you open it up to biology, we have purposeful or goal-directed behavior, something which does not exist in the atoms themselves, but only exists in the living organism, literally an organization of matter as a life form. But we don’t get to free will until we add the computer, the hardware of the brain that enables imagination, evaluation, and choosing. These properties also are not found in the particles themselves, but only when organized into a neural network upon which the mental processes can run. These mental processes make decisions that causally determine what the physical body will do next. Thus we have physical, biological, and rational causation. And that is more than just physical causation.
            Again, let me repeat that we may presume perfectly reliable cause and effect within each level of causation. And we may reasonably believe that every event that occurs is the inevitable result of some combination of these three. Thus we may hold to determinism as the belief that there is always an answer to “why did this happen?”, even if we never find that answer.
            And we may also presume free will as a choice we make according to our own purpose and our own reasons, when free of external coercion. However you reduce a person to their parts, as long as you are asserting that the parts are controlling the choice, then the person is controlling the choice, because one is identical to the other.
            The mental errors we make in denying free will result from two things: (1) The notion that reliable cause and effect is some kind of constraint, some force of nature compelling us to do what we otherwise would not do. But it turns out that what we will inevitably do is exactly identical to us just being us, doing what we do, and choosing what we choose. (2) The irrational notion that “freedom from causation” is required to be “truly” free. Without reliable cause and effect, we cannot reliably cause any effect, and would thus have no freedom to do anything at all. All our freedoms subsume a world of reliable causation.
            Then, of course, we have QM throwing a wrench into the works. But it might be viewed as a 4th level of causation, beneath Newtonian physics, behaving reliably according to its own set of rules, which we have yet to discover. And that is what I believe your Realist view of QM was suggesting.

          • I think our disagreement stems from your belief that the combination of particles which makes up a neurological system contains some degree of holism. I fervently deny this idea because it, to me, implies too early a stopping point in the train of thought taken to analyze such a system. Just because imagination isn’t found in the particles themselves doesn’t mean it isn’t dependent on and governed by the physics of said particles. The property of a rock being a rock isn’t found in the particles themselves that make up a rock, but we don’t conclude that, because of this, the rock possesses a holistic attribute independent of physics.

          • Right, that is definitely where we would disagree. Consider a computer, a program, and a process. The computer provides the necessary hardware platform upon which a program can run as a process. Without the hardware, the process cannot run. Without the process, the hardware has nothing to do. The logic of the process is in the program. The program exists separately from the computer. In fact the same program can be run on any number of computer platforms (like a book of recipes that any number of chefs can use to prepare the same meals). But when running on the computer the process is in control of what the hardware does.
            So, that means that the process governs what the physical object does, rather than the physics of the particles governing anything. The process employs the physics to accomplish a purpose that does not reside in the physical particles.
            I’ll admit I’ve found it difficult to describe the nature of a “process”. Is it to be referred to as “metaphysical”? I don’t know, I only know that it seems to be distinct in some fashion from the nature of the hardware platform, but yet it cannot exist except on some such platform, plus the energy required to run the process.
            The neurology of the brain and its connections seem to provide a hardware platform for our mental processes. When we die it is like someone turned off the current, bringing the process to a stop.
            The subjects we contemplate arise from our biological purpose. I’m hungry, so I think about food and when I should get some. Biological purpose is also like a process, except it would be more like firmware, logic that is hard-coded into the physical platform, especially autonomic functions.
            Deliberate choices, on the other hand, involve things that the body as a whole intends to do. These are arrived at through calculation and reasoning. And this reasoning is not accessible to the physical particles that make up the brain, even though those particles are utilized to perform the process.
            So, yes, I think we would disagree on the “holism” thing, because the living object as a whole operates upon reasoning that relates entirely to the needs of the person, a need that is not shared with the physical particles.
