Does the ‘no miracles’ argument provide a convincing case against Kant’s transcendental idealism?
The ‘no miracles’ argument has been called the “ultimate argument for realism”, and holds that it would be miraculous for a theory to make as many “correct empirical predictions” as, say, Einstein’s theory of general relativity, without its view of the “fundamental structure of the universe” being “essentially” or “basically” correct. (‘Miracle’ here is understood to mean a “desirable but … highly improbable outcome”, without Hume’s requirement for the “volition of a deity”.) This has been read as a major problem for Kant, who held that the real world of ‘things in themselves’ is inherently “supersensible”, i.e. beyond the reach of direct human experience. Does the ‘no miracles’ argument provide convincing proof of the human capacity to touch the “blueprint of the universe”, and so convincingly dismantle the Kantian view? This essay will argue that, both in relation to Kant’s work and through its internal inconsistencies, the ‘no miracles’ argument fails to undo Kant’s transcendental idealism.
The argument raises some fairly simple initial questions. For example, how many “correct empirical predictions” does a theory have to make to be considered successful? Quantum theory correctly predicts the Lamb shift to seven decimal places, which scientific realists have argued is proof of the inherent ‘truthfulness’ of the theory. However, how can we independently define the appropriate level of accuracy? Seven decimal places seems an exceedingly high standard in an everyday world dominated by integer calculations, but the greater question is surely why a theory which correctly understands the fabric of the universe should not be correct to seventy-seven decimal places. This may simply be a failure of our current measuring devices, or a theory hitting the limit of its predictive power. Another initial question concerns the “semantic of approximate truth”, i.e. what does ‘approximate truth’ actually mean? A small signifier of this problem can be seen in the prevalence of “basically” in the papers of esteemed scientific realists, a word not generally associated with philosophical or scientific discourse. This can be read as lowering the philosophical bar for realism, seeking a “prima facie plausibility argument” rather than the surety of ‘classic’ realism.
A connected issue is whether the concern with ‘approximate truth’ is merely the re-formulation of an idea already found in Kant. Kant is not a Berkeleian idealist, for whom the entirety of existence is found in our minds. Instead, Kant argues that there is ‘something’ out there, the thing in itself (“Ding an sich selbst”). We cannot positively know anything about things in themselves, only that “empirically real” appearances (“Erscheinung”) must correspond to something which is not in itself “merely in us”, i.e. not mere appearance. Therefore, Kant accepts the value of realism, within the limits set by his transcendental idealism. Is the ‘approximate’ of scientific realism not setting similar, if differently intentioned, limits on the realist project? What is generally considered a fundamental division may in fact be an argument over degree, with both sides agreeing that ‘ultimate’ realism is impossible. Historically this has been believed for theological reasons, but a ‘God’ figure is not required to argue that humans may never perfectly grasp the fabric of the universe.
Another option is that the ‘no miracles’ argument and transcendental idealism are fundamentally opposed, but neither lands a fatal strike on the other. Despite searching for deep and foundational truth, both theories seem to require some dogmatic leap, towards (or away from) a belief in man’s ability to interact with the building blocks of the world. If I argue that I have ascertained the ultimate basis of the “noumenal world” to be ‘mice on bicycles’, both will argue against me from different directions, but neither can successfully invalidate the other’s position. Scientific realists would hold that their best microscopes can only find electrons, and their best theoretical physics can only find either quarks or strings; my assertion that there is a lower order of existence, in which strings are formed by ‘mice on bicycles’ of an inconceivably small dimension, would be rejected simply because scientists have not yet found any such thing. Kant would similarly disagree with me, but because my ‘mice on bicycles’ inherently cannot be the ‘true’ world. Either they are an appearance, and so only vaguely relational to the thing in itself (perhaps the transcendental world involves cats on mats), or, if the thing in itself really is a mouse on a bicycle, I have “no right to say anything” about it. In this way, both parties can argue that I am wrong, but neither can prove that the other is wrong. Scientific induction is not strong enough to ‘prove’ Kant wrong, and Kant’s transcendental existence inherently cannot be accessed, to be proven or otherwise. This relates to the “underdetermination of theories by evidence”, i.e. what do we do if multiple theories give similarly successful predictive results? Which should be used as part of the ‘no miracles’ argument? Unfortunately, short of ‘wait and see’, I see no strong realist answer to this question.
There are a number of more fundamental problems with the ‘no miracles’ argument. For example, the argument is based on induction, i.e. moving from specific observations about the world to general conclusions. But is this a sensible basis for surety? Kant agreed with Hume’s skepticism about induction, that it can only infer answers and not prove them. If we fire one probe into the planet Mercury and find no water, it would be silly to assume on that basis that Mercury is free of H2O. One thousand probes would let us make more accurate guesses, but it would be no less silly to be absolutely sure of our position on that basis. The scientific theories whose inherent value realists defend, such as the claim that life depends on water, are similarly dependent on factors for which we have little data. On a cosmic scale, our experience is so limited as to make induction a weak argument for approaching the fabric of the universe.
Induction poses a different problem for the ‘no miracles’ argument when reversed. This is the concept of “pessimistic meta-induction”, i.e. we have inductive grounds for believing that our currently successful theories will soon be replaced by entirely new ones. History contains a waste-bin of theories which had contemporary predictive success, but have since been proven wrong. In the 18th and 19th centuries, aether theories explained everything from gravity to light through the existence of a specific transmission medium. The vast majority of such theories have since been discarded, despite making numerous correct predictions about the world. This poses an obvious problem for scientific realists, who must explain why we can be more confident in today’s theories than Newton was in his belief that light was formed of particulate matter.
A specific question here regards how scientific theories progress. It seems that, for the ‘no miracles’ argument to have force, either science has to be essentially cumulative, or we would need reason to believe that we have already seen the last great revolution in science, on the scale of the shift in astronomy between Plato and Galileo, or in gravity between Newton and Einstein. The latter argument seems ahistorical and largely without merit, requiring a peculiarly contemporary-centric egoism. Scientific realists have largely attempted to follow the former course, arguing that so-called ‘revolutions’ have been somehow cumulative in nature. Unfortunately, being clear about the difference between evolution and revolution is difficult. If one were to take a photograph every second for the fifteen minutes it took for a book to burn to ashes, one would end up with 900 images of the process. Taking each successive picture in turn, one would have to accept that what was a book in picture [x] still held the same essence in picture [x+1]. However, it is also clear that the product viewed in the last image, i.e. a lump of ash, is not a book. Even if we can find continuing strands in scientific development, that is not enough to prove that the process as a whole is cumulative.
To move the debate forward, Laudan created his ‘historical gambit’, which included a list of previously successful but discarded theories. Some attempts have been made to slim down the list by attacking weaker outliers on specific grounds, but it seems likely that theories are being discarded faster than realists can explain away their importance on a case-by-case basis. The more fruitful direction has been to take Laudan’s list as a whole, and attempt to find structural problems that could devalue it at a stroke.
One such attempt is to differentiate between certain types of scientific endeavour. Some, like quantum theory, are clearly not moving on a cumulative basis, but a section of realists have contented themselves with explaining the ‘mature’ sciences, i.e. areas of study which have been without major revolution for decades or centuries. However, this is problematic. How can ‘maturity’ be defined? If it is simply the length of time since the last revolution, this seems to be an entirely “ad hoc” distinction, created to weed out the problematic developments realists cannot explain. This relates to van Fraassen’s Darwinian explanation, that the theories we now consider successful are simply those which have not yet been disproven. This simultaneously attacks realism and diminishes the usefulness of the ‘no miracles’ argument. The response from Boyd, a leading scientific realist, is that theories are more successful than a Darwinian random walk would predict, but this is hard to evaluate. It is clear that maturity is a difficult concept, both in its definition and in the large sections of science it forces realists to abandon.
More generally, scientific realists have attempted to negate Laudan’s gambit by exposing ‘threads’ of continuation through scientific development. There are a number of these theories, but most suffer from a very similar set of problems. For example, Boyd argues that we can identify “background theories” running through history, narrow concepts which have remained steady. This raises the question: where did the background theories originate? In attempting to avoid the ‘miraculous’ nature he finds in Kant’s work, Boyd seems to fall into the trap of requiring a fairly miraculous event himself, i.e. some foundational base of early theories which all happen to be correct. Where and when did this take-off point occur? Why were we so correct early on, only to get so much wrong since? Other such attempts include Psillos’ ‘theoretical constituents’, the ‘presuppositional’ and ‘working’ posits of Kitcher, and the ‘content’ and ‘structure’ of Worrall’s structural realism. All of these attempt to find continuity in a field of study where it does not seem to exist. As mentioned, Newton’s theory of universal gravity had a “stunning range of predictive success”. However, it was superseded by Einstein’s theory of general relativity, which views the world in a fundamentally different way. The two theories are so radically separate that realists can either accept the revolution (thereby denying the ‘no miracles’ argument’s value), or cling to a kernel of continuity so small as to become almost meaningless. What value is there in arguing that some small percentage of science is usefully timeless, if we cannot identify which parts those are?
My conclusion is that, despite numerous attempts, proponents of the ‘no miracles’ argument have failed to provide a convincing argument against Kant’s transcendental idealism. This is not to assert that Kant’s position is the true one, but merely that its nemesis is to be found elsewhere.