Levels of Pluralism
Conjecture
Jul 17, 2023
Preparing the Pluralism Question
When do we want all our research eggs in the same paradigm basket?
Although most people don't go as far as the extreme paradigmatism of Thomas Kuhn in The Structure of Scientific Revolutions, which only allows one paradigm at a time (for a given science), the preference for fewer rather than more options is still pervasive. Ideally, we'd converge to a single one, even if that's not always feasible. After all, there's only one correct answer, right?
Putting that debatable question aside, I've become more and more convinced that pluralism, the pursuit of multiple lines of research in parallel, is far more prevalent and integral to the process of science than Kuhn's naive paradigmatism. This realization has emerged from studying History and Philosophy of Science, especially outside of physics, which for many reasons is quite an unrepresentative science.[1]
But one crucial preliminary point is that pluralism can appear at multiple levels. And the value of pluralism also depends on the level at which it is applied.
So this post proposes a decomposition of the activity of research into four levels, and introduces the corresponding pluralism at each level. Although the point here is not (yet) to argue for pluralism, I offer some examples of pluralistic successes, as well as arguments for the epistemic circumstances where pluralism seems the most valuable. I also finish by proposing a geometric model for when each level of pluralism makes sense, based around considering bits of evidence as objects in high-dimensional space.
The four levels of pluralism I discuss are:
Individual pluralism: pluralism of the methods, ideas, and analogies used by a single researcher or a single research tradition.
Approach pluralism: pluralism of approaches to the same operationalization of the problem.
Operationalization pluralism: pluralism in the way that the problem itself is operationalized.
Problem pluralism: pluralism in the problem itself.
Thanks to Andrea Motti for feedback on a draft of this post.
Simplifying Assumption: Focus on Epistemic Circumstances
When investigating under which circumstances some epistemic strategy applies, there are many confusing and complicating factors coming from psychology and sociology. Taking pluralism as an example, the following non-exhaustive list comes to mind:
How easy/difficult is it for researchers to keep multiple approaches at different levels?
How confusing is it for researchers to keep multiple approaches at different levels?
Do we have enough resources for implementing the ideal level of pluralism?
How should we implement it, given the social structures and the psychological difficulties?
My stance here, and more generally, is to neglect these issues so I can focus on the ideal epistemic algorithm under the circumstances studied. The rationale is that the sociological and psychological factors can be better dealt with once we know the ideal strategy, and removing them gives us an idealization that is easier to work with. In some ways this emulates how most physics approximations remove the details (often ultimately important details) to get to the core insight.
Although I expect this to work, note that this is in tension with epistemological vigilance: there is some chance that the sociological and psychological factors matter so much that it makes more sense to include part of them in the ideal answer.
Levels: from Individuals to Problems
Individual Pluralism
If we zoom in on a particular research approach or tradition, we might expect to be too low-level for pluralism to appear. Yet pluralism isn't only about portfolios of approaches — a single tradition can be pluralist in its methods, ideas, and analogies.
The prime example of successful individual pluralism is what Adrian Currie calls the "methodological omnivory" of historical scientists: their opportunistic gathering and fitting of epistemic tools and strategies from all over the board.
(Rock, Bone, and Ruin p158, Adrian Currie, 2018)
Historical scientists are not methodological “obligates,” focused on one method or another. Rather, they are opportunistic—methodological “omnivores.” Just as the right method for finding my way about an unfamiliar city depends on both my epistemic features and the properties of the city in question, different methods for uncovering the past are more or less applicable in different contexts. That is, it depends on both the scientists—their background knowledge, technologies, and interests—and on their targets, as we will see.
Omnivory involves the frequent co-optation of epistemic tools and their being fit to local contexts and needs.
To see this in action concretely, let's turn to one of Currie's examples: the reconstruction of a Mayan sacrifice from a variety of sources of evidence.
(Rock, Bone, and Ruin p187, Adrian Currie, 2018)
I will focus on Mazariegos et al.’s (2015) reconstruction of a human sacrificial scene from this period.
Their target is the 2004 discovery of a partially cremated double burial of two males. Here’s Mazariegos et al.’s description of the basic event:
"Sometime in the fifth century AD, the inhabitants of the Lowland Maya city of Tikal witnessed an extraordinary sacrificial ritual. Two individuals were thrown into a pit, especially dug for the purpose and supplied with sweltering fuel. They may have been dead or nearly dead when thrown, but it is equally possible that they died from burning. Their charred remains were left inside the pit, which was filled shortly afterwards."
As should be par for the course by now, Mazariegos et al. are characteristic methodological omnivores: opportunistically drawing on a varied toolkit. Their reconstruction relies on (1) pathology and taphonomic work, which describes the circumstances of the individuals’ deaths and (2) isotopic studies, which allow them to infer geographic origin. They connect this information to Mayan ritual and spiritual practices using (3) textual evidence from the colonial period (that is, reports from missionaries, etc.), (4) ethnographic accounts of oral narratives, (5) architectural features of Mayan ruins, and (6) iconographic remains. Notice the use of regularities of varying fragility and locality in reconstruction: It is not merely that historical scientists care about fragile, but counterfactual-supporting regularities. Additionally, the integration of evidence from such regularities is essential for successfully reconstructing particular episodes from the past.
On the other hand, we can see successful progress without much individual pluralism in fields like physics: Albert Einstein and Ludwig Boltzmann are two names that come to mind for successful physicists with a focused and trimmed down methodological toolkit.[2] Note though that this isn't much evidence that individual pluralism wouldn't have helped — merely that it was not necessary for their success.
That being said, my current model of individual pluralism predicts that the epistemic circumstances where Einstein and Boltzmann found themselves were not particularly fitting for individual pluralism. What they had at their disposal were meta-principles (the shape of field theories and the nonsense of action at a distance for Einstein, the constructive mathematical building blocks of statistical mechanics for Boltzmann) guiding their productive mistakes to the heart of the problem. Whereas the historical scientists that Currie describes have highly underdetermined settings where they need to reach for as many epistemic tools and angles as possible to get anywhere.
We can think of it geometrically: a high-dimensional space where "bits of evidence" are taken literally as things in this space. Then for individual pluralism to be useful requires dispersed bits that need to be gathered by a host of techniques and tricks, rather than a rich vein that can be mined for many bits of evidence.
Approach Pluralism
Whether or not individual pluralism is present, we can take a portfolio perspective on research and have multiple approaches in parallel.[3] This is probably what comes to mind when most people think of pluralism: taking decorrelated stabs at the same problem.
To observe approach pluralism in practice, nothing beats the history of chemistry. Hasok Chang provides a great example with his history of electrolysis and the so-called "distance problem". As a reminder, electrolysis is the experimental technique of passing a current through an electrolyte (at the time mostly water), which leads to reactions at the two electrodes (in the case of water, the formation of hydrogen and oxygen gases). When electrolysis was discovered it was assumed to split water; yet this interpretation contradicted the experimental evidence in a massive way...
(Is Water H2O? p74, Hasok Chang, 2012)
But the very cleanness of Nicholson-Carlisle electrolysis also revealed a deep problem. If the action of electricity was to break down each molecule of water into a particle of oxygen and a particle of hydrogen, why did the two gases not issue from the same place, but in different locations separated by a macroscopic distance, easily a few inches? And why did oxygen always come from the wire connected with the positive pole of the battery, and hydrogen from the negative?
We know now that the question is ill-posed: electrolysis doesn't split water, it reacts with the ions present in water. Yet this understanding took a long time to emerge, after many unsatisfactory options had been tried.
(Is Water H2O? p79, Hasok Chang, 2012)
According to Ritter, what happened when electricity was passed through water was synthesis, not decomposition: at the positive pole of the battery, positive electricity combined with water and created oxygen; at the negative pole, negative electricity combined with water and created hydrogen. Then the two gases naturally came out at separate places, which were the locations for the supply of the two types of electricity. So water was seen again as an element, and oxygen and hydrogen as compounds.
(Is Water H2O? p83-84, Hasok Chang, 2012)
There were three options available to those who wished to defend the compound view of water. [...]
(a) Imbalance. [...] Monge’s view was that electrolysis resulted in an imbalance of substances around each electrode: “the galvanic action tends to abstract, in each of the waters, one of its constituent parts, leaving in it an excess of the other constituent part.” (quoted in Wilkinson 1804, 150)
(b) Invisible transport. According to this story, the electricity entering into the water grabs hold of one part of a water molecule, freeing up the other to be released there; then the electricity, along with its captive, rushes over to the other electrode, and releases the captive there; the electricity itself goes on back into the battery, completing the circuit. [...]
(c) Molecular chains. [...] This hypothesis did not involve an invisible transport of lone particles of hydrogen and oxygen through a body of water, but an invisible chain of molecules within the body of water connecting the two poles. In this picture, each water molecule is electrically polarized, with hydrogen positive and oxygen negative. Grotthuss (1806, 335) called the Voltaic pile “an electrical magnet”, and imagined that the molecules would connect up in a line, like a set of little bar magnets between the poles of a larger magnet, or like iron filings tracing the lines of magnetic force connecting the poles of a bar magnet. [...]
When the battery is switched on, the decomposing action begins. The negative electrode grabs the hydrogen particle (electro-positive) right next to it, neutralizes it, and releases it. Having been deprived of its partner, the oxygen particle in that water molecule then goes and grabs the hydrogen particle next to it, forming a new water molecule. This partner-swapping is propagated throughout the chain, and it is matched perfectly by the action originating from the positive electrode. And then each of the newly-formed water molecules flips around, due to the electrical repulsion/attraction from the electrodes, so the initial sort of configuration is restored.
As you might expect from the placement in this section, Chang argues convincingly that this approach pluralism was instrumental in finally resolving this problem (and creating ionic theory).
(Is Water H2O? p86, Hasok Chang, 2012)
Given this situation, it seems to me that the nineteenth-century scientists were wise when they decided not to decide—or rather, not to declare a clear winner amongst a group of imperfect contenders. Leaving the ultimate truth undecided, electrochemists got on with their work, experimental and theoretical, as we will see in some detail in Sect. 2.2.2. That seems to me like the right and mature thing to have done, rather than giving in to the temptation of a clear choice—as in the case of the Lavoisierian bandwagon that made the Chemical Revolution.
(Is Water H2O? p111, Hasok Chang, 2012)
There were two main ways in which the flourishing of various electrochemical systems was beneficial: encouragement of different strengths, and productive interactions. First, by tolerating different systems, science could benefit from their different strengths. Since there wasn’t any one system of electrochemistry that was strong in every way, multiple systems were needed if electrochemistry as a whole were to gain and retain sufficient empirical adequacy and explanatory power. Electrochemists in the nineteenth century recognized this fact, and organized their science in a suitably pluralistic way. Even though there were some dominant figures and strong personalities in the field, they advocated their own systems without the book-burning, name-calling destructive hostility of the kind displayed by Lavoisier and some of his associates, which was geared to annihilate the opposition instead of admitting that they could learn something from it. The only major exception in that regard was the suppression of Ritter’s synthesis view, which is linked to the Lavoisierian legacy.
(Is Water H2O? p112-113, Hasok Chang, 2012)
The simultaneous maintenance of multiple systems also created and maintained more conceptual possibilities for productive syntheses. Arrhenius’s ushering-in of twentieth-century electrochemistry rested on a productive interplay between the three systems of Berzelius, Faraday, and Clausius: Berzelius provided charged ions, Faraday broke the hold of electrostatic reasoning, and Clausius provided the idea of spontaneous dissociation arising from kinetic factors. Had there been a monopoly by one of these systems in the years leading up to Arrhenius’s work, his breakthrough would not have been possible, at least not in the form and not by the path that it took. Even if what one ultimately wants is one system that is comprehensively superior, it may only be possible to get there through a properly supported pluralistic phase of development; a premature enforcement of consensus would create obstacles in this process.
If we want to look at success without approach pluralism, most instances of what Kuhn calls normal science will do.[4] Newtonian mechanics, for example, got enormous mileage out of a single approach once the principles were laid down in the Principia: enough that we can send rockets to space using the result.
Where individual pluralism depended on whether there was a vein rich in bits of evidence, approach pluralism seems to me to depend more on whether there is more than one place where we can mine for bits. That might mean many rich veins of bits, a few veins plus some dispersed bits, or just a lot of dispersed bits that together constitute bigger clusters.
Operationalization Pluralism
We've covered individual research directions and portfolios of approaches; what else could there be? Operationalization is a crucial level, but also an almost invisible one. Originally introduced by Percy Bridgman to make sense of the extension of measurement concepts, operationalization covers all the decisions and choices we make to concretize the problem we're attacking.
To take examples from alignment, prosaic alignment and brain-like alignment are operationalizations in the most obvious sense, because they fix most details of what the instantiation of the problem will look like.
Formalizations of a problem also count as operationalizations. For example, there are three operationalizations of classical mechanics that shine in different contexts: Newtonian, Lagrangian, and Hamiltonian.
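For concreteness, here is the same toy system, a one-dimensional harmonic oscillator, written in each of the three formalisms (a standard textbook illustration, not drawn from any of this post's sources):

```latex
% Newtonian operationalization: forces and accelerations.
m\ddot{x} = -kx
% Lagrangian operationalization: a single scalar L = T - V, dynamics via the Euler-Lagrange equation.
L(x,\dot{x}) = \tfrac{1}{2}m\dot{x}^2 - \tfrac{1}{2}kx^2,
\qquad \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0
% Hamiltonian operationalization: phase-space coordinates (x, p) and H = T + V, dynamics via Hamilton's equations.
H(x,p) = \frac{p^2}{2m} + \tfrac{1}{2}kx^2,
\qquad \dot{x} = \frac{\partial H}{\partial p}, \quad \dot{p} = -\frac{\partial H}{\partial x}
```

All three give the same trajectories, but each makes different moves natural: forces and constraints in the Newtonian frame, symmetries and generalized coordinates in the Lagrangian one, phase space and conserved quantities in the Hamiltonian one.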
The trouble with operationalization is that it generally embeds such strong and detailed assumptions that it can just as well be a harmful mistake as a productive one. It also tends to be internalized so thoroughly that no one maintains epistemological vigilance about it. Last but not least, settling for an operationalization can lead to filling in too many details about a more general problem.
Operationalization pluralism avoids these issues by pushing for multiple parallel operationalizations, which produce both productive disagreement and potentially unification in the end.
Once again, the history of chemistry and Hasok Chang offer us a perfect instance of the benefits of operationalization pluralism: the determination of chemical formulas during the nineteenth century. The problem was that the data didn't completely determine the formulas of even simple molecules like water.
(Is Water H2O? p139-140, Hasok Chang, 2012)
And how do we know that the correct formula is H2O? Give this problem to a bright schoolchild of today, and this is the sort of answer we will receive: the atomic weights of hydrogen and oxygen are 1 and 16 (hydrogen being taken as the unit); when we break down water in the lab, we get 1g of hydrogen for 8g of oxygen; so there must be two atoms of hydrogen that combine with each atom of oxygen to make a water molecule. What the bright student usually can’t say, having just memorized these values from the textbooks, is how we know that the atomic weights of hydrogen and oxygen are 1:16. A clever answer would be that we know the molecular formula of water to be H2O, so if the gross (macroscopic) combining weights of hydrogen and oxygen are 1:8, then the ratio of their atomic weights must be 1:16. But then we have to ask how we know the formula of water is H2O, which is exactly where we started! Now this is the circularity that plagued Dalton and all of his contemporaries. All we can observe directly is the gross combining weights. If we know the molecular formula, we can infer the atomic weights from the combining weights; if we know the atomic weights, we can infer the molecular formula. But observation by itself gives us neither the atomic weights nor the molecular formula.
We can make up any self-consistent system of atomic weights and molecular formulas, and observation cannot refute our system. Without breaking this circularity, atomic chemistry could not really get off the ground.
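To make the circularity concrete, here is a tiny worked example of my own (not from Chang), showing how the same observed combining weights support different atomic weights depending on which formula you assume:

```latex
% Observed: water yields hydrogen and oxygen in a mass ratio of 1 : 8.
% If water is HO (Dalton's assumption), one H per O, so with A_H = 1 we get A_O = 8.
\mathrm{HO}: \quad \frac{A_\mathrm{H}}{A_\mathrm{O}} = \frac{1}{8} \;\Rightarrow\; A_\mathrm{O} = 8
% If water is H2O (the modern formula), two H per O, so with A_H = 1 we get A_O = 16.
\mathrm{H_2O}: \quad \frac{2A_\mathrm{H}}{A_\mathrm{O}} = \frac{1}{8} \;\Rightarrow\; A_\mathrm{O} = 16
```

Both systems fit the macroscopic data equally well; something beyond combining weights (gas volumes, valency, electrochemical behavior, ...) is needed to break the tie.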
Because chemists had no way of knowing what atoms actually were, nor of observing them, they resorted to various operationalizations of the concept of atom: based on weight measurements, volume measurements, what could be substituted in compounds, what could be separated by electrolysis, or geometric structure. Each made weird assumptions or needed some additional ingredient to break the circularity.
Yet by pursuing these in parallel, and by allowing productive disagreement, the formula conundrum was eventually resolved.
(Is Water H2O? p150, Hasok Chang, 2012)
It is very interesting to see how the nineteenth-century chemists made progress in this doubly frustrating situation: with lots of alternatives, none of them perfect. (And it seems to me that this sort of situation is actually quite typical in science, as in the rest of life.) What we know is that, somehow, by the 1860s, consensus was reached on basically the same system of atomic weights and molecular formulas as what we now accept.
[...]
Initial progress was made by having a plurality of systems, each “zooming in” on what it could handle particularly well. For example, the weight-only system focused on gravimetric analytical chemistry, flourishing into the middle of the nineteenth century. The electrochemical dualistic system focused on substances that were amenable to clean electrolysis. And so on. Each system delivered a different set of new facts and insights, and contributed to the progress of chemical knowledge in ways that other systems could not easily manage.
After much development, it was possible to “zoom out” to make a synthesis of some of the competing systems. Most crucially, the concept of valency enabled the synthesis of the last three systems mentioned above (with some suitable modification of each system). When the substitution-type system improved its operational success, this gradually encouraged chemists to attribute reality to the models of molecular structure that they had invented for the purpose of classification. So they began to think that an oxygen atom really did bind two hydrogen atoms together in a water molecule, that a carbon atom held four hydrogen atoms to make a molecule of marsh gas (methane), and so on. Happily for them, the molecular formulas worked out in that way matched up well enough with the formulas used in the physical volume-weight system. And the increasing confidence and realism in that synthesis also allowed a further synthesis with the geometric-structural system. The key there was to take the carbon atom as a tetrahedral structure in three dimensions.
You might expect me to pull another physics paradigm for my example of a success without operationalization pluralism. Not when I have a fascinating alternative: comparative linguistics!
Comparative linguistics aims at reconstructing models of the parent language of different families of languages (like Proto-Indo-European for the Indo-European family). Language families and lower-level subgroups are established based on extensive morphological (structure of words) similarities.
I'm still reading up on it, but as far as I can see, comparative linguistics has been wildly successful in reconstructing (phonological) features of PIE, with even some empirical confirmation of incredibly specific predictions. Yet its methods and operationalizations all follow one paradigm: the comparative method, the operationalization of similarity as the result of common descent[5], and of differences as mostly coming from sound change.
For operationalization pluralism, my mental picture is a frame/perspective on the space where the bits of evidence lie. Since the initial space is high-dimensional, these frames perform some dimensionality reduction, projection, and other transformations to make the space more humanly understandable. A productive operationalization makes it easier to reveal more bits of evidence, while a less productive one obscures things. Then the question of when to push operationalization pluralism amounts to whether there's only one properly productive frame, or rather a group of them.
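As a toy illustration of frames-as-projections (my own construction; the setup and numbers are made up for the example), consider two veins of evidence that differ only along one direction of a high-dimensional space. A frame that keeps that direction reveals the structure, while a frame that drops it hides it:

```python
# Toy model of operationalizations as projections of a high-dimensional evidence space.
# Two "veins" of evidence are separated only along dimension 0; everything else is noise.
import numpy as np

rng = np.random.default_rng(0)
dim = 100
vein_a = rng.normal(0.0, 1.0, size=(50, dim))
vein_a[:, 0] += 10.0                      # the direction that actually separates the veins
vein_b = rng.normal(0.0, 1.0, size=(50, dim))

def separation_along(axis, a=vein_a, b=vein_b):
    """How visible the two veins are after projecting onto a single axis."""
    return abs(a[:, axis].mean() - b[:, axis].mean())

print(separation_along(0))  # ~10: a productive frame, the two veins are clearly distinct
print(separation_along(1))  # ~0: an unproductive frame, the structure is washed out
```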
Problem Pluralism
At this level, pluralism becomes tricky, because looking at a variety of problems instead of just the original one encourages "Avoiding The Hard Bit", as John Wentworth puts it. With many problems on the table, some will end up easier than others, and those will become the focus of research. That's fine when our aim is general epistemic progress, but not so great when we need to solve a specific problem or die.
Alignment is such a field where you actually have to solve the problem, not just publish papers about related ideas.
Yet I can definitely imagine that such pluralism could sometimes be useful; I just have trouble finding a concrete example in the History of Science and Technology. So problem pluralism feels tricky to implement correctly, and I expect it not to be the correct choice most of the time.
In terms of our geometric model, the problem is the space itself as well as the placement of bits of evidence, and changing the problem amounts to moving to a different space. Without strong ties between the two spaces, this sounds like a distraction.
When is Pluralism Useful?
Let's summarize the geometric model of bits of evidence that I've been building piece by piece in the previous section.
We have a high-dimensional space with objects in it. The space is the problem and the objects are bits of evidence.
Because we suck at high-dimensional geometry, we use frames/perspectives that reduce the dimensionality and highlight some aspects of the space. These are operationalizations.
There are clusters of bits of evidence in the space (whether they are rich or poor). These clusters are veins of evidence.
Then my tentative answer to when pluralism is valuable at each level is as follows:
At the individual level, pluralism is worth it if the veins under consideration are poor — bits are dispersed in the space.
At the approach level, pluralism is worth it if there are multiple rich veins and/or dispersed areas — not all bits are clustered in the same place.
At the operationalization level, pluralism is worth it if there are multiple operationalizations that are decently productive (whether or not one is significantly more productive than the others).
At the problem level, pluralism is worth it if there are other spaces where the bits of evidence relate to the ones in the original space.
I feel that this model gives a good intuition of where I expect pluralism to generally yield the most benefits: approach and operationalization. On the individual level, I expect rich veins to exist for a lot of problems; and the problem level seems tricky to get right at all.
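To make these criteria a bit more tangible, here is a minimal sketch of the geometric model (my own construction, with hypothetical names and arbitrary thresholds): scatter bits of evidence in a high-dimensional space, group them into veins, and read off which levels of pluralism the criteria above would recommend.

```python
# Minimal sketch of the geometric model: bits of evidence as points, veins as clusters.
# The clustering is a crude greedy pass; all thresholds and sizes are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

def make_problem_space(n_veins=3, bits_per_vein=30, n_dispersed=40, dim=50):
    """A problem space with a few tight clusters (veins) plus dispersed bits."""
    veins = [rng.normal(loc=rng.uniform(-5, 5, dim), scale=0.3, size=(bits_per_vein, dim))
             for _ in range(n_veins)]
    dispersed = rng.uniform(-5, 5, size=(n_dispersed, dim))
    return np.vstack(veins + [dispersed])

def read_off_pluralism(bits, vein_radius=4.0, min_vein_size=10):
    """Greedily group bits by distance to earlier seeds, then apply the criteria."""
    seeds, labels = [], []
    for b in bits:
        dists = [np.linalg.norm(b - s) for s in seeds]
        if dists and min(dists) < vein_radius:
            labels.append(int(np.argmin(dists)))
        else:
            labels.append(len(seeds))
            seeds.append(b)
    sizes = np.bincount(labels)
    rich_veins = int(np.sum(sizes >= min_vein_size))
    dispersed_bits = int(np.sum(sizes[sizes < min_vein_size]))
    return {
        "rich_veins": rich_veins,
        "dispersed_bits": dispersed_bits,
        # Individual pluralism: worth it when there is no rich vein to mine.
        "individual_pluralism": rich_veins == 0,
        # Approach pluralism: worth it when the bits live in more than one place.
        "approach_pluralism": rich_veins > 1 or (rich_veins >= 1 and dispersed_bits > 0),
    }

print(read_off_pluralism(make_problem_space()))
```

Operationalization and problem pluralism don't reduce as neatly to cluster counts, since they concern the frames on the space and the space itself, so they are left out of this sketch.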
The next (massive) steps to leverage pluralism for accelerating alignment are to operationalize this model and the criteria, so that we can actually apply them to a concrete epistemic circumstance like alignment. I will also publish later this week a post proposing an abstraction of the alignment problem that makes it easier to instantiate operationalization pluralism without missing the hard part of the problem.
Here are some works that will be quoted and mentioned in this post: Is Water H2O? by Hasok Chang and Rock, Bone, and Ruin by Adrian Currie.
The case of Einstein is subtle because he described himself as a methodological opportunist; that being said, I currently agree with this SEP page that he acted in a far more methodologically focused way than he sometimes claimed.
One subtlety here is that the operationalization is fixed; that is, the way the problem is concretized and made precise doesn't change at this level.
Although I place the pluralism vs paradigmatism question more at the level of operationalization pluralism.
The assumption of common descent is a hilarious example of a productive mistake: it apparently comes from a biblical explanation of the spread of language by William Jones, the first linguist to point out Indo-European as a family.
Preparing the Pluralism Question
When do we want all our research eggs in the same paradigm basket?
Although most people don't go as far as the extreme paradigmatism of Thomas Kuhn in The Structure of Scientific Revolutions, which only allows one paradigm at time (for a given science), the preference for less rather than more options is still pervasive. In the ideal, a convergence to one, even if it's not always feasible. After all, there's only one correct answer, right?
Putting that debatable question aside, I've become more and more convinced that pluralism, the pursuit of multiple lines of research in parallel, is far more prevalent and integral to the process of science than Kuhn's naive paradigmatism. This realization has emerged from studying History and Philosophy of Science, especially outside of physics which for many reasons is quite an unrepresentative science.[1]
But one crucial preliminary point is that pluralism can appear at multiple levels. And the value of pluralism also depends on the level at which it is applied.
So this post proposes a decomposition of the activity of research into four levels, and introduces the corresponding pluralism at each level. Although the point here is not (yet) to argue for pluralism, I offer some examples of pluralistic successes, as well as arguments for the epistemic circumstances where pluralism seems the most valuable. I also finish by proposing a geometric model for when each level of pluralism makes sense, based around considering bits of evidence as objects in high-dimensional space.
The four levels of pluralism I discuss are:
Individual pluralism: pluralism of the methods, ideas, and analogies used by a single researcher or a single research tradition.
Approach pluralism: pluralism of approaches to the same operationalization of the problem.
Operationalization pluralism: pluralism in the way that the problem itself is operationalized.
Problem pluralism: pluralism in the problem itself.
Thanks to Andrea Motti for feedback on a draft of this post.
Simplifying Assumption: Focus on Epistemic Circumstances
When investigating under which circumstances some epistemic strategy applies, there are many confusing and complicating factors coming from psychology and sociology. Taking pluralism as an example, the following non-exhaustive list comes to mind:
How easy/difficult is it for researchers to keep multiple approaches at different levels?
How confusing is it for researchers to keep multiple approaches at different levels?
Do we have enough resources for implementing the ideal level of pluralism?
How should we implement it, given the social structures and the psychological difficulties?
My stance here, and more generally, is to neglect these issues so I can focus on the ideal epistemic algorithm under the circumstances studied. The rationale is that the sociological and psychological factors can be better dealt with once we know the ideal strategy, and removing them gives us an idealization that is easier to work with. In some this emulates how most physics approximations remove the details (often ultimately important details) to get to the core insight.
Although I expect this to work, note that this is in tension with epistemological vigilance: there is some chance that the sociological and psychological factors matter so much that it makes more sense to include part of them in the ideal answer.
Levels: from Individuals to Problems
Individual Pluralism
If we zoom in on a particular research approach or tradition, we might expect to be too low-level for pluralism to appear. Yet pluralism isn't only about portfolios of approaches — a single tradition can be pluralist in its methods, ideas, and analogies.
The prime example of successful individual pluralism is what Adrian Currie calls the "methodological omnivory" of historical scientists: their opportunistic gathering and fitting of epistemic tools and strategies from all over the board.
(Rock, Bone, and Ruin p158, Adrian Currie, 2018)
Historical scientists are not methodological “obligates,” focused on one method or another. Rather, they are opportunistic—methodological “omnivores.” Just as the right method for finding my way about an unfamiliar city depends on both my epistemic features and the properties of the city in question, different methods for uncovering the past are more or less applicable in different contexts. That is, it depends on both the scientists—their background knowledge, technologies, and interests—and on their targets, as we will see.
Omnivory involves the frequent co-optation of epistemic tools and their being fit to local contexts and needs.
To see this in action concretely, let's turn to one of Currie's examples: the reconstruction of a Mayan sacrifice from a variety of sources of evidence.
(Rock, Bone, and Ruin p187, Adrian Currie, 2018)
I will focus on Mazariegos et al.’s (2015) reconstruction of a human sacrificial scene from this period.
Their target is the 2004 discovery of a partially cremated double burial of two males. Here’s Mazariegos et al.’s description of the basic event:
"Sometime in the fifth century AD, the inhabitants of the Lowland Maya city of Tikal witnessed an extraordinary sacrificial ritual. Two individuals were thrown into a pit, especially dug for the purpose and supplied with sweltering fuel. They may have been dead or nearly dead when thrown, but it is equally possible that they died from burning. Their charred remains were left inside the pit, which was filled shortly afterwards."
As should be par for the course by now, Mazariegos et al. are characteristic methodological omnivores: opportunistically drawing on a varied toolkit. Their reconstruction relies on (1) pathology and taphonomic work, which describes the circumstances of the individuals’ deaths and (2) isotopic studies, which allow them to infer geographic origin. They connect this information to Mayan ritual and spiritual practices using (3) textual evidence from the colonial period (that is, reports from missionaries, etc.), (4) ethnographic accounts of oral narratives, (5) architectural features of Mayan ruins, and (6) iconographic remains. Notice the use of regularities of varying fragility and locality in reconstruction: It is not merely that historical scientists care about fragile, but counterfactual-supporting regularities. Additionally, the integration of evidence from such regularities is essential for successfully reconstructing particular episodes from the past.
On the other hand, we can see successful progress without much individual pluralism in fields like physics: Albert Einstein and Ludwig Boltzmann are two names that come to mind for successful physicists with a focused and trimmed down methodological toolkit.[2] Note though that this isn't much evidence that individual pluralism wouldn't have helped — merely that it was not necessary for their success.
That being said, my current model of individual pluralism predicts that the epistemic circumstances where Einstein and Boltzmann found themselves were not particularly fitting for individual pluralism. What they had at their disposal were meta-principles (the shape of field theories and the nonsense of action at a distance for Einstein, the constructive mathematical building blocks of statistical mechanics for Boltzmann) guiding their productive mistakes to the heart of the problem. Whereas the historical scientists that Currie describes have highly underdetermined settings where they need to reach for as many epistemic tools and angles as possible to get anywhere.
We can think of it geometrically: a high-dimensional space where "bits of evidence" are taken literally as things in this space. Then for individual pluralism to be useful requires dispersed bits that need to be gathered by a host of techniques and tricks, rather than a rich vein that can be mined for many bits of evidence.
Approach Pluralism
Whether or not individual pluralism is present, we can take a portfolio perspective on research and have multiple approaches in parallel.[3] This is probably what comes to mind when most people think of pluralism: taking decorrelated stabs at the same problem.
To observe approach pluralism in practice, nothing beats the history of chemistry. Hasok Chang provides a great example with his history of electrolysis and the so-called "distance problem". For reminder, electrolysis is the experimental technique of passing a current through an electrolyte (at the time mostly water), which leads to reactions at the two electrodes (in the case of water the formation of hydrogen and oxygen gases). When electrolysis was discovered it was assumed to split water; yet this interpretation contradicted the experimental evidence in a massive way...
(Is Water H2O? p74, Hasok Chang, 2012)
But the very cleanness of Nicholson—Carlisle electrolysis also revealed a deep problem. If the action of electricity was to break down each molecule of water into a particle of oxygen and a particle of hydrogen, why did the two gases not issue from the same place, but in different locations separated by a macroscopic distance, easily a few inches? And why did oxygen always come from the wire connected with the positive pole of the battery, and hydrogen from the negative.
We know now that the question is ill-posed: electrolysis doesn't split water, it reacts with the ions present in water. Yet this took a lot of time to emerge, after many unsatisfactory options.
(Is Water H2O? p79, Hasok Chang, 2012)
According to Ritter, what happened when electricity was passed through water was synthesis, not decomposition: at the positive pole of the battery, positive electricity combined with water and created oxygen; at the negative pole, negative electricity combined with water and created hydrogen. Then the two gases naturally came out at separate places, which were the locations for the supply of the two types of electricity. So water was seen again as an element, and oxygen and hydrogen as compounds.
(Is Water H2O? p83-84, Hasok Chang, 2012)
There were three options available to those who wished to defend the compound view of water. [...]
(a) Imbalance. [...] Monge’s view was that electrolysis resulted in an imbalance of substances around each electrode: “the galvanic action tends to abstract, in each of the waters, one of its constituent parts, leaving in it an excess of the other constituent part.” (quoted in Wilkinson 1804, 150)
(b) Invisible transport. According to this story, the electricity entering into the water grabs hold of one part of a water molecule, freeing up the other to be released there; then the electricity, along with its captive, rushes over to the other electrode, and releases the captive there; the electricity itself goes on back into the battery, completing the circuit. [...]
(c) Molecular chains. [...] This hypothesis did not involve an invisible transport of lone particles of hydrogen and oxygen through a body of water, but an invisible chain of molecules within the body of water connecting the two poles. In this picture, each water molecule is electrically polarized, with hydrogen positive and oxygen negative. Grotthuss (1806, 335) called the Voltaic pile “an electrical magnet”, and imagined that the molecules would connect up in a line, like a set of little bar magnets between the poles of a larger magnet, or like iron filings tracing the lines of magnetic force connecting the poles of a bar magnet. [...]
When the battery is switched on, the decomposing action begins. The negative electrode grabs the hydrogen particle (electro-positive) right next to it, neutralizes it, and releases it. Having been deprived of its partner, the oxygen particle in that water molecule then goes and grabs the hydrogen particle next to it, forming a new water molecule. This partner-swapping is propagated throughout the chain, and it is matched perfectly by the action originating from the positive electrode. And then each of the newly-formed water molecules flips around, due to the electrical repulsion/attraction from the electrodes, so the initial sort of configuration is restored.
As you might expect from the placement in this section, Chang argues convincingly that this approach pluralism was instrumental in finally resolving this problem (and creating ionic theory).
(Is Water H2O? p86, Hasok Chang, 2012)
Given this situation, it seems to me that the nineteenth-century scientists were wise when they decided not to decide—or rather, not to declare a clear winner amongst a group of imperfect contenders. Leaving the ultimate truth undecided, electrochemists got on with their work, experimental and theoretical, as we will see in some detail in Sect. 2.2.2. That seems to me like the right and mature thing to have done, rather than giving in to the temptation of a clear choice—as in the case of the Lavoisierian bandwagon that made the Chemical Revolution.
(Is Water H2O? p111, Hasok Chang, 2012)
There were two main ways in which the flourishing of various electrochemical systems was beneficial: encouragement of different strengths, and productive interactions. First, by tolerating different systems, science could benefit from their different strengths. Since there wasn’t any one system of electrochemistry that was strong in every way, multiple systems were needed if electrochemistry as a whole were to gain and retain sufficient empirical adequacy and explanatory power. Electrochemists in the nineteenth century recognized this fact, and organized their science in a suitably pluralistic way. Even though there were some dominant figures and strong personalities in the field, they advocated their own systems without the book-burning, name-calling destructive hostility of the kind displayed by Lavoisier and some of his associates, which was geared to annihilate the opposition instead of admitting that they could learn something from it. The only major exception in that regard was the suppression of Ritter’s synthesis view, which is linked to the Lavoisierian legacy.
(Is Water H2O? p112-113, Hasok Chang, 2012)
The simultaneous maintenance of multiple systems also created and maintained more conceptual possibilities for productive syntheses. Arrhenius’s ushering-in of twentieth-century electrochemistry rested on a productive interplay between the three systems of Berzelius, Faraday, and Clausius: Berzelius provided charged ions, Faraday broke the hold of electrostatic reasoning, and Clausius provided the idea of spontaneous dissociation arising from kinetic factors. Had there been a monopoly by one of these systems in the years leading up to Arrhenius’s work, his breakthrough would not have been possible, at least not in the form and not by the path that it took. Even if what one ultimately wants is one system that is comprehensively superior, it may only be possible to get there through a properly supported pluralistic phase of development; a premature enforcement of consensus would create obstacles in this process.
If we want to look at success without approach pluralism, most instances of what Kuhn calls normal science will do.[4] So newtonian mechanics for example got an enormous mileage out of one approach, once the principles were laid in the Principia. Enough that we can send rockets to space using the result.
Where individual pluralism depended on whether there was a vein rich in bits of evidence, approach pluralism seems more dependent to me on whether there is more than one place where we can mine for bits. That might mean many rich veins of bits, some veins of bits and some dispersed bits, or just a lot of dispersed bits that constitute bigger clusters.
Operationalization Pluralism
We did individual research directions and a portfolio of approaches; what else could there be? Operationalization is a crucial level, but it's also an almost invisible one. Originally designed to make sense of the extension of measurements by Percy Bridgman, operationalization includes all the decisions and choices we make to concretize the problem we're attacking.
To take examples from alignment, prosaic alignment and brain-like alignment are operationalizations in the most obvious sense, because they fix most details of what the instantiation of the problem will look like.
Formalizations of a problem also count as operationalization. For example, there are three operationalizations of classical mechanics that shine in different contexts: newtonian, lagrangian, and hamiltonian.
The trouble with operationalization is that it generally embeds such strong and detailed assumptions that it can just as well be a harmful mistake than a productive one. It also tends to be internalized so thoroughly that no one maintains epistemological vigilance. Last but not least, settling for an operationalization can lead to filling in too many details about a more general problem.
Operationalization pluralism avoids these issues by pushing for multiple parallel operationalizations, which produce both productive disagreement and potentially unification in the end.
Once again, the history of chemistry and Hasok Chang offer us with a perfect instance of the benefits of operationalization pluralism: the determination of chemical formulas during the nineteenth-century. The problem was that the data didn't completely determine the formulas of even simple molecules like water.
(Is Water H2O? p139-140, Hasok Chang, 2012)
And how do we know that the correct formula is H2O? Give this problem to a bright schoolchild of today, and this is the sort of answer we will receive: the atomic weights of hydrogen and oxygen are 1 and 16 (hydrogen being taken as the unit); when we break down water in the lab, we get 1g of hydrogen for 8g of oxygen; so there must be two atoms of hydrogen that combine with each atom of oxygen to make a water molecule. What the bright student usually can’t say, having just memorized these values from the textbooks, is how we know that the atomic weights of hydrogen and oxygen are 1:16. A clever answer would be that we know the molecular formula of water to be H2O, so if the gross (macroscopic) combining weights of hydrogen and oxygen are 1:8, then the ratio of their atomic weights must be 1:16. But then we have to ask how we know the formula of water is H2O, which is exactly where we started! Now this is the circularity that plagued Dalton and all of his contemporaries. All we can observe directly is the gross combining weights. If we know the molecular formula, we can infer the atomic weights from the combining weights; if we know the atomic weights, we can infer the molecular formula. But observation by itself gives us neither the atomic weights nor the molecular formula.
We can make up any self-consistent system of atomic weights and molecular formulas, and observation cannot refute our system. Without breaking this circularity, atomic chemistry could not really get off the ground.
Because chemists had no way of knowing what atoms actually were and to observe them, they resorted to various operationalizations of the concept of atoms: based on weight-measurements, volume-measurements, what could be substituted in compounds, what could be separated by electrolysis, geometric structure. Each made weird assumptions or needed some additional ingredients to break the circularity.
Yet by pursuing these in parallel, and by allowing productive disagreement, the formula conundrum was eventually resolved.
(Is Water H2O? p150, Hasok Chang, 2012)
It is very interesting to see how the nineteenth-century chemists made progress in this doubly frustrating situation: with lots of alternatives, none of them perfect. (And it seems to me that this sort of situation is actually quite typical in science, as in the rest of life.) What we know is that, somehow, by the 1860s, consensus was reached on basically the same system of atomic weights and molecular formulas as what we now accept.
[...]
Initial progress was made by having a plurality of systems, each “zooming in” on what it could handle particularly well. For example, the weight-only system focused on gravimetric analytical chemistry, flourishing into the middle of the nineteenth century. The electrochemical dualistic system focused on substances that were amenable to clean electrolysis. And so on. Each system delivered a different set of new facts and insights, and contributed to the progress of chemical knowledge in ways that other systems could not easily manage.
After much development, it was possible to “zoom out” to make a synthesis of some of the competing systems. Most crucially, the concept of valency enabled the synthesis of the last three systems mentioned above (with some suitable modification of each system). When the substitution—type system improved its operational success, this gradually encouraged chemists to attribute reality to the models of molecular structure that they had invented for the purpose of classification. So they began to think that an oxygen atom really did bind two hydrogen atoms together in a water molecule, that a carbon atom held four hydrogen atoms to make a molecule of marsh gas (methane), and so on. Happily for them, the molecular formulas worked out in that way matched up well enough with the formulas used in the physical volume—weight system. And the increasing confidence and realism in that synthesis also allowed a further synthesis with the geometric-structural system. The key there was to take the carbon atom as a tetrahedral structure in three dimensions.
You might expect me to pull another physics paradigm for my example of a success without operationalization pluralism. Not when I have a fascinating alternative: comparative linguistics!
Comparative linguistics aims at reconstructing models of the parent language of different families of languages (like Proto-Indo-European for the Indo-European family). Language families and lower-level subgroups are established based on extensive morphological (structure of words) similarities.
I'm still reading up on it, but as far as I can see, comparative linguistics has been wildly successful in reconstructing (phonological) features of PIE, with even some empirical confirmation of incredibly specific predictions. Yet their methods and operationalizations all follow one paradigm: the comparative method, the operationalization of similarity as the result of common descent[5], and of differences as mostly coming from sound change.
For operationalization pluralism, my mental picture is a frame/perspective on the space where bits of evidence lies. Since the initial space is high-dimensional, these frames operate some dimensionality-reduction, projection, and other transformations to make the space more humanly understandable. A productive operationalization will ease the reveal of more bits of evidence, while a less productive one will obscure things. Then the question of when to push operationalization pluralism amounts to whether there's only one proper productive frame, or more a group of them.
Problem Pluralism
At this level, pluralism becomes tricky. Because looking at a variety of problems instead of just the original one encourages "Avoiding The Hard Bit", as John Wentworth says. Because with many problems on the table, some will end up easier than others, and these will be the focus of research. Which is fine when our aim is general epistemic progress, but not so great when we need to solve a specific problem or die.
Alignment is such a field where you actually have to solve the problem, not just publish papers about related ideas.
Yet I can definitely imagine that such pluralism could serve sometimes. I just have trouble finding a concrete example in the History of Science and Technology. So problem pluralism feels tricky to implement correctly, and I expect it to not be the correct choice most of the time.
In terms of our geometric model, the problem is the space itself as well as the placement of bits of evidence, and changing the problem amounts to moving to a different space. Without strong ties between the two spaces, this sounds like a distraction.
When is Pluralism Useful?
Let's summarize the geometric model of bits of evidence that I've been building piece by piece in the previous section.
We have a high-dimensional space with objects in it. The space is the problem and the objects are bits of evidence.
Because we suck at high-dimensional geometry, we use frames/perspectives that reduce the dimensionality and highlight some aspects of the space. These are operationalizations.
There are clusters of bits of evidence in the space (whether they are rich or poor). These clusters are veins of evidence.
Then my tentative answer to when pluralism is valuable at each levels is as follows:
At the individual level, pluralism is worth it if the veins under consideration are poor — bits are dispersed in the space.
At the approach level, pluralism is worth it if there are multiple rich veins and/or dispersed areas — all bits are not clustered in the same place.
At the operationalization level, pluralism is worth it if there are multiple operationalizations that are decently productive (whether or not one is significantly more productive than the others).
At the problem level, pluralism is worth it if there are other spaces where the bits of evidence relate to the ones in the original space.
I feel that this model gives a good intuition of where I expect pluralism to generally yield the most benefits: approach and operationalization. On the individual level, I expect some rich streams to exist for a lot of problems; and the problem level seems tricky to get right at all.
The next (massive) steps to leverage pluralism for accelerating alignment are to operationalise this model and the criteria, such that we can actually apply them to a concrete epistemic circumstance like alignment. I will also publish later this week a post proposing an abstraction of the alignment problem that makes it easier to instantiate operationalization pluralism without missing the hard part of the problem.
Here are some works that will be quoted and mentioned in this post: Is Water H2O? by Hasok Chang and Rock, Bone, and Ruin by Adrian Currie.
The case of Einstein is subtle because he described himself as a methodological opportunist; that being said, I currently agree with this SEP page that he acted far more methodologically focused than he sometimes claimed.
The prime example of successful individual pluralism is what Adrian Currie calls the "methodological omnivory" of historical scientists: their opportunistic gathering and fitting of epistemic tools and strategies from across the board.
(Rock, Bone, and Ruin p158, Adrian Currie, 2018)
Historical scientists are not methodological “obligates,” focused on one method or another. Rather, they are opportunistic—methodological “omnivores.” Just as the right method for finding my way about an unfamiliar city depends on both my epistemic features and the properties of the city in question, different methods for uncovering the past are more or less applicable in different contexts. That is, it depends on both the scientists—their background knowledge, technologies, and interests—and on their targets, as we will see.
Omnivory involves the frequent co-optation of epistemic tools and their being fit to local contexts and needs.
To see this in action concretely, let's turn to one of Currie's examples: the reconstruction of a Mayan sacrifice from a variety of sources of evidence.
(Rock, Bone, and Ruin p187, Adrian Currie, 2018)
I will focus on Mazariegos et al.’s (2015) reconstruction of a human sacrificial scene from this period.
Their target is the 2004 discovery of a partially cremated double burial of two males. Here’s Mazariegos et al.’s description of the basic event:
"Sometime in the fifth century AD, the inhabitants of the Lowland Maya city of Tikal witnessed an extraordinary sacrificial ritual. Two individuals were thrown into a pit, especially dug for the purpose and supplied with sweltering fuel. They may have been dead or nearly dead when thrown, but it is equally possible that they died from burning. Their charred remains were left inside the pit, which was filled shortly afterwards."
As should be par for the course by now, Mazariegos et al. are characteristic methodological omnivores: opportunistically drawing on a varied toolkit. Their reconstruction relies on (1) pathology and taphonomic work, which describes the circumstances of the individuals’ deaths and (2) isotopic studies, which allow them to infer geographic origin. They connect this information to Mayan ritual and spiritual practices using (3) textual evidence from the colonial period (that is, reports from missionaries, etc.), (4) ethnographic accounts of oral narratives, (5) architectural features of Mayan ruins, and (6) iconographic remains. Notice the use of regularities of varying fragility and locality in reconstruction: It is not merely that historical scientists care about fragile, but counterfactual-supporting regularities. Additionally, the integration of evidence from such regularities is essential for successfully reconstructing particular episodes from the past.
On the other hand, we can see successful progress without much individual pluralism in fields like physics: Albert Einstein and Ludwig Boltzmann are two names that come to mind for successful physicists with a focused and trimmed-down methodological toolkit.[2] Note though that this isn't much evidence that individual pluralism wouldn't have helped — merely that it was not necessary for their success.
That being said, my current model of individual pluralism predicts that the epistemic circumstances in which Einstein and Boltzmann found themselves were not particularly suited to individual pluralism. What they had at their disposal were meta-principles (the shape of field theories and the nonsense of action at a distance for Einstein, the constructive mathematical building blocks of statistical mechanics for Boltzmann) guiding their productive mistakes to the heart of the problem. The historical scientists that Currie describes, by contrast, face highly underdetermined settings where they need to reach for as many epistemic tools and angles as possible to get anywhere.
We can think of it geometrically: a high-dimensional space in which "bits of evidence" are taken literally as objects in that space. Individual pluralism is then useful when the bits are dispersed and need to be gathered by a host of techniques and tricks, rather than concentrated in a rich vein that can be mined for many bits of evidence.
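To make this picture a bit more tangible, here is a minimal toy sketch (Python with numpy; the synthetic data, the dispersion score, and its reading are my own illustrative inventions, not anything taken from Currie) contrasting a rich vein of evidence with dispersed bits:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 50  # stand-in for the high-dimensional "problem space"

# A "rich vein": most bits of evidence sit in one tight cluster.
vein = rng.normal(loc=0.0, scale=0.1, size=(40, DIM))

# Dispersed bits: evidence scattered all over the space.
dispersed = rng.uniform(low=-1.0, high=1.0, size=(40, DIM))

def dispersion(bits: np.ndarray) -> float:
    """Mean pairwise distance between bits of evidence (a crude spread measure)."""
    dists = np.linalg.norm(bits[:, None, :] - bits[None, :, :], axis=-1)
    n = len(bits)
    return dists.sum() / (n * (n - 1))

for name, bits in [("rich vein", vein), ("dispersed bits", dispersed)]:
    # Heuristic reading: high dispersion means gathering the bits takes many
    # different tools (methodological omnivory); low dispersion means one
    # focused method can keep mining the single vein.
    print(f"{name}: dispersion = {dispersion(bits):.2f}")
```

On this toy data the dispersed configuration scores far higher, which is exactly the regime where, on the model above, omnivory pays off.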
Approach Pluralism
Whether or not individual pluralism is present, we can take a portfolio perspective on research and have multiple approaches in parallel.[3] This is probably what comes to mind when most people think of pluralism: taking decorrelated stabs at the same problem.
To observe approach pluralism in practice, nothing beats the history of chemistry. Hasok Chang provides a great example with his history of electrolysis and the so-called "distance problem". As a reminder, electrolysis is the experimental technique of passing a current through an electrolyte (at the time mostly water), which leads to reactions at the two electrodes (in the case of water, the formation of hydrogen and oxygen gases). When electrolysis was discovered it was assumed to split water; yet this interpretation contradicted the experimental evidence in a massive way...
(Is Water H2O? p74, Hasok Chang, 2012)
But the very cleanness of Nicholson–Carlisle electrolysis also revealed a deep problem. If the action of electricity was to break down each molecule of water into a particle of oxygen and a particle of hydrogen, why did the two gases not issue from the same place, but in different locations separated by a macroscopic distance, easily a few inches? And why did oxygen always come from the wire connected with the positive pole of the battery, and hydrogen from the negative?
We know now that the question is ill-posed: electrolysis doesn't split water, it reacts with the ions present in water. Yet this understanding took a long time to emerge, after many unsatisfactory proposals.
(Is Water H2O? p79, Hasok Chang, 2012)
According to Ritter, what happened when electricity was passed through water was synthesis, not decomposition: at the positive pole of the battery, positive electricity combined with water and created oxygen; at the negative pole, negative electricity combined with water and created hydrogen. Then the two gases naturally came out at separate places, which were the locations for the supply of the two types of electricity. So water was seen again as an element, and oxygen and hydrogen as compounds.
(Is Water H2O? p83-84, Hasok Chang, 2012)
There were three options available to those who wished to defend the compound view of water. [...]
(a) Imbalance. [...] Monge’s view was that electrolysis resulted in an imbalance of substances around each electrode: “the galvanic action tends to abstract, in each of the waters, one of its constituent parts, leaving in it an excess of the other constituent part.” (quoted in Wilkinson 1804, 150)
(b) Invisible transport. According to this story, the electricity entering into the water grabs hold of one part of a water molecule, freeing up the other to be released there; then the electricity, along with its captive, rushes over to the other electrode, and releases the captive there; the electricity itself goes on back into the battery, completing the circuit. [...]
(c) Molecular chains. [...] This hypothesis did not involve an invisible transport of lone particles of hydrogen and oxygen through a body of water, but an invisible chain of molecules within the body of water connecting the two poles. In this picture, each water molecule is electrically polarized, with hydrogen positive and oxygen negative. Grotthuss (1806, 335) called the Voltaic pile “an electrical magnet”, and imagined that the molecules would connect up in a line, like a set of little bar magnets between the poles of a larger magnet, or like iron filings tracing the lines of magnetic force connecting the poles of a bar magnet. [...]
When the battery is switched on, the decomposing action begins. The negative electrode grabs the hydrogen particle (electro-positive) right next to it, neutralizes it, and releases it. Having been deprived of its partner, the oxygen particle in that water molecule then goes and grabs the hydrogen particle next to it, forming a new water molecule. This partner-swapping is propagated throughout the chain, and it is matched perfectly by the action originating from the positive electrode. And then each of the newly-formed water molecules flips around, due to the electrical repulsion/attraction from the electrodes, so the initial sort of configuration is restored.
As you might expect from the placement in this section, Chang argues convincingly that this approach pluralism was instrumental in finally resolving this problem (and creating ionic theory).
(Is Water H2O? p86, Hasok Chang, 2012)
Given this situation, it seems to me that the nineteenth-century scientists were wise when they decided not to decide—or rather, not to declare a clear winner amongst a group of imperfect contenders. Leaving the ultimate truth undecided, electrochemists got on with their work, experimental and theoretical, as we will see in some detail in Sect. 2.2.2. That seems to me like the right and mature thing to have done, rather than giving in to the temptation of a clear choice—as in the case of the Lavoisierian bandwagon that made the Chemical Revolution.
(Is Water H2O? p111, Hasok Chang, 2012)
There were two main ways in which the flourishing of various electrochemical systems was beneficial: encouragement of different strengths, and productive interactions. First, by tolerating different systems, science could benefit from their different strengths. Since there wasn’t any one system of electrochemistry that was strong in every way, multiple systems were needed if electrochemistry as a whole were to gain and retain sufficient empirical adequacy and explanatory power. Electrochemists in the nineteenth century recognized this fact, and organized their science in a suitably pluralistic way. Even though there were some dominant figures and strong personalities in the field, they advocated their own systems without the book-burning, name-calling destructive hostility of the kind displayed by Lavoisier and some of his associates, which was geared to annihilate the opposition instead of admitting that they could learn something from it. The only major exception in that regard was the suppression of Ritter’s synthesis view, which is linked to the Lavoisierian legacy.
(Is Water H2O? p112-113, Hasok Chang, 2012)
The simultaneous maintenance of multiple systems also created and maintained more conceptual possibilities for productive syntheses. Arrhenius’s ushering-in of twentieth-century electrochemistry rested on a productive interplay between the three systems of Berzelius, Faraday, and Clausius: Berzelius provided charged ions, Faraday broke the hold of electrostatic reasoning, and Clausius provided the idea of spontaneous dissociation arising from kinetic factors. Had there been a monopoly by one of these systems in the years leading up to Arrhenius’s work, his breakthrough would not have been possible, at least not in the form and not by the path that it took. Even if what one ultimately wants is one system that is comprehensively superior, it may only be possible to get there through a properly supported pluralistic phase of development; a premature enforcement of consensus would create obstacles in this process.
If we want to look at success without approach pluralism, most instances of what Kuhn calls normal science will do.[4] Newtonian mechanics, for example, got enormous mileage out of one approach, once the principles were laid down in the Principia. Enough that we can send rockets to space using the result.
Where individual pluralism depended on whether there was a single vein rich in bits of evidence, approach pluralism seems to me to depend more on whether there is more than one place where we can mine for bits. That might mean many rich veins, some veins plus some dispersed bits, or just a lot of dispersed bits forming larger clusters.
Operationalization Pluralism
We did individual research directions and a portfolio of approaches; what else could there be? Operationalization is a crucial level, but it's also an almost invisible one. Originally introduced by Percy Bridgman to make sense of the extension of measurement concepts, operationalization includes all the decisions and choices we make to concretize the problem we're attacking.
To take examples from alignment, prosaic alignment and brain-like alignment are operationalizations in the most obvious sense, because they fix most details of what the instantiation of the problem will look like.
Formalizations of a problem also count as operationalizations. For example, there are three operationalizations of classical mechanics that shine in different contexts: Newtonian, Lagrangian, and Hamiltonian.
The trouble with operationalization is that it generally embeds such strong and detailed assumptions that it can just as well be a harmful mistake as a productive one. It also tends to be internalized so thoroughly that no one maintains epistemological vigilance. Last but not least, settling for an operationalization can lead to filling in too many details about a more general problem.
Operationalization pluralism avoids these issues by pushing for multiple parallel operationalizations, which produce productive disagreement and, potentially, unification in the end.
Once again, the history of chemistry and Hasok Chang offer us a perfect instance of the benefits of operationalization pluralism: the determination of chemical formulas during the nineteenth century. The problem was that the data didn't completely determine the formulas of even simple molecules like water.
(Is Water H2O? p139-140, Hasok Chang, 2012)
And how do we know that the correct formula is H2O? Give this problem to a bright schoolchild of today, and this is the sort of answer we will receive: the atomic weights of hydrogen and oxygen are 1 and 16 (hydrogen being taken as the unit); when we break down water in the lab, we get 1g of hydrogen for 8g of oxygen; so there must be two atoms of hydrogen that combine with each atom of oxygen to make a water molecule. What the bright student usually can’t say, having just memorized these values from the textbooks, is how we know that the atomic weights of hydrogen and oxygen are 1:16. A clever answer would be that we know the molecular formula of water to be H2O, so if the gross (macroscopic) combining weights of hydrogen and oxygen are 1:8, then the ratio of their atomic weights must be 1:16. But then we have to ask how we know the formula of water is H2O, which is exactly where we started! Now this is the circularity that plagued Dalton and all of his contemporaries. All we can observe directly is the gross combining weights. If we know the molecular formula, we can infer the atomic weights from the combining weights; if we know the atomic weights, we can infer the molecular formula. But observation by itself gives us neither the atomic weights nor the molecular formula.
We can make up any self-consistent system of atomic weights and molecular formulas, and observation cannot refute our system. Without breaking this circularity, atomic chemistry could not really get off the ground.
Because chemists had no way of knowing what atoms actually were or of observing them, they resorted to various operationalizations of the concept of atom: based on weight measurements, volume measurements, what could be substituted in compounds, what could be separated by electrolysis, and geometric structure. Each made weird assumptions or needed some additional ingredients to break the circularity.
Yet by pursuing these in parallel, and by allowing productive disagreement, the formula conundrum was eventually resolved.
(Is Water H2O? p150, Hasok Chang, 2012)
It is very interesting to see how the nineteenth-century chemists made progress in this doubly frustrating situation: with lots of alternatives, none of them perfect. (And it seems to me that this sort of situation is actually quite typical in science, as in the rest of life.) What we know is that, somehow, by the 1860s, consensus was reached on basically the same system of atomic weights and molecular formulas as what we now accept.
[...]
Initial progress was made by having a plurality of systems, each “zooming in” on what it could handle particularly well. For example, the weight-only system focused on gravimetric analytical chemistry, flourishing into the middle of the nineteenth century. The electrochemical dualistic system focused on substances that were amenable to clean electrolysis. And so on. Each system delivered a different set of new facts and insights, and contributed to the progress of chemical knowledge in ways that other systems could not easily manage.
After much development, it was possible to “zoom out” to make a synthesis of some of the competing systems. Most crucially, the concept of valency enabled the synthesis of the last three systems mentioned above (with some suitable modification of each system). When the substitution-type system improved its operational success, this gradually encouraged chemists to attribute reality to the models of molecular structure that they had invented for the purpose of classification. So they began to think that an oxygen atom really did bind two hydrogen atoms together in a water molecule, that a carbon atom held four hydrogen atoms to make a molecule of marsh gas (methane), and so on. Happily for them, the molecular formulas worked out in that way matched up well enough with the formulas used in the physical volume-weight system. And the increasing confidence and realism in that synthesis also allowed a further synthesis with the geometric-structural system. The key there was to take the carbon atom as a tetrahedral structure in three dimensions.
You might expect me to pull another physics paradigm for my example of a success without operationalization pluralism. Not when I have a fascinating alternative: comparative linguistics!
Comparative linguistics aims at reconstructing models of the parent language of different families of languages (like Proto-Indo-European for the Indo-European family). Language families and lower-level subgroups are established based on extensive morphological (structure of words) similarities.
I'm still reading up on it, but as far as I can see, comparative linguistics has been wildly successful in reconstructing (phonological) features of PIE, with even some empirical confirmation of incredibly specific predictions. Yet its methods and operationalizations all follow one paradigm: the comparative method, the operationalization of similarity as the result of common descent[5], and of differences as mostly coming from sound change.
For operationalization pluralism, my mental picture is a frame or perspective on the space where the bits of evidence lie. Since the initial space is high-dimensional, these frames perform dimensionality reduction, projection, and other transformations to make the space more humanly understandable. A productive operationalization makes it easier to reveal more bits of evidence, while a less productive one obscures them. The question of when to push operationalization pluralism then amounts to whether there is only one properly productive frame, or a group of them.
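As a loose illustration of that picture, here is a toy numpy sketch (the synthetic data and the crude "frame quality" score are my own inventions for illustration, not a real method) comparing two frames on the same evidence: one that projects onto the dimensions where the structure actually lives, and one that projects onto noise:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, N = 50, 60

# Two "veins" of evidence whose separation only shows up in dimensions 0 and 1.
labels = np.repeat([0, 1], N // 2)
bits = rng.normal(scale=0.3, size=(N, DIM))
bits[labels == 1, :2] += 3.0  # the real structure lives in the first two dims

def frame_quality(projected: np.ndarray, labels: np.ndarray) -> float:
    """Between-cluster distance over within-cluster spread, under a given frame."""
    c0, c1 = projected[labels == 0], projected[labels == 1]
    between = np.linalg.norm(c0.mean(axis=0) - c1.mean(axis=0))
    within = 0.5 * (c0.std() + c1.std())
    return between / within

productive = bits[:, :2]     # a frame that keeps the informative dimensions
unproductive = bits[:, -2:]  # a frame that projects onto pure noise

print("productive frame:  ", round(frame_quality(productive, labels), 2))
print("unproductive frame:", round(frame_quality(unproductive, labels), 2))
```

The first frame makes the two veins jump out; the second hides them entirely. That contrast is what I mean by a productive versus an unproductive operationalization, and operationalization pluralism bets that more than one frame of the first kind exists.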
Problem Pluralism
At this level, pluralism becomes tricky, because looking at a variety of problems instead of just the original one encourages "Avoiding The Hard Bit", as John Wentworth says. With many problems on the table, some will end up easier than others, and those will become the focus of research. That is fine when our aim is general epistemic progress, but not so great when we need to solve a specific problem or die.
Alignment is one such field: you actually have to solve the problem, not just publish papers about related ideas.
Yet I can definitely imagine that such pluralism could sometimes be useful. I just have trouble finding a concrete example in the History of Science and Technology. So problem pluralism feels tricky to implement correctly, and I expect it not to be the correct choice most of the time.
In terms of our geometric model, the problem is the space itself as well as the placement of bits of evidence, and changing the problem amounts to moving to a different space. Without strong ties between the two spaces, this sounds like a distraction.
When is Pluralism Useful?
Let's summarize the geometric model of bits of evidence that I've been building piece by piece in the previous section.
We have a high-dimensional space with objects in it. The space is the problem and the objects are bits of evidence.
Because we suck at high-dimensional geometry, we use frames/perspectives that reduce the dimensionality and highlight some aspects of the space. These are operationalizations.
There are clusters of bits of evidence in the space (whether they are rich or poor). These clusters are veins of evidence.
Then my tentative answer to when pluralism is valuable at each level is as follows (the toy sketch after this list makes the first two criteria concrete):
At the individual level, pluralism is worth it if the veins under consideration are poor — bits are dispersed in the space.
At the approach level, pluralism is worth it if there are multiple rich veins and/or dispersed areas — not all bits are clustered in the same place.
At the operationalization level, pluralism is worth it if there are multiple operationalizations that are decently productive (whether or not one is significantly more productive than the others).
At the problem level, pluralism is worth it if there are other spaces where the bits of evidence relate to the ones in the original space.
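Here is that toy sketch (Python with numpy; the neighbourhood radius, the thresholds, and the greedy vein count are all arbitrary choices of mine, meant only to make the geometric criteria concrete, not to be a real decision procedure):

```python
import numpy as np

def pluralism_levels(bits: np.ndarray, radius: float = 1.0) -> list[str]:
    """Toy reading of the first two criteria above: count crude "veins" (dense
    neighbourhoods) and the fraction of dispersed bits, then name the levels
    where pluralism looks worthwhile. All thresholds are arbitrary."""
    dists = np.linalg.norm(bits[:, None, :] - bits[None, :, :], axis=-1)
    neighbours = (dists < radius).sum(axis=1) - 1   # other bits within `radius`
    in_vein = neighbours >= 3                       # bit sits in a dense cluster
    dispersed_fraction = 1.0 - in_vein.mean()

    # Greedy vein count: pick dense bits whose neighbourhoods don't overlap.
    vein_centers: list[int] = []
    for i in np.argsort(-neighbours):
        if in_vein[i] and all(dists[i, j] >= 2 * radius for j in vein_centers):
            vein_centers.append(int(i))

    levels = []
    if dispersed_fraction > 0.5:
        levels.append("individual pluralism: bits are mostly dispersed")
    if len(vein_centers) > 1 or (vein_centers and dispersed_fraction > 0.2):
        levels.append("approach pluralism: more than one place to mine for bits")
    return levels

# Example: two tight veins plus a scattering of isolated bits.
rng = np.random.default_rng(2)
bits = np.vstack([
    rng.normal(loc=0.0, scale=0.15, size=(15, 10)),
    rng.normal(loc=5.0, scale=0.15, size=(15, 10)),
    rng.uniform(low=-10.0, high=10.0, size=(10, 10)),
])
print(pluralism_levels(bits))
```

On this example the sketch recommends approach pluralism but not individual pluralism: there is more than one vein to mine, but the bits are not mostly dispersed. The operationalization and problem criteria are about frames and whole spaces, so they don't reduce to cluster-counting in the same way.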
I feel that this model gives a good intuition of where I expect pluralism to generally yield the most benefits: the approach and operationalization levels. At the individual level, I expect rich veins to exist for a lot of problems; and the problem level seems tricky to get right at all.
The next (massive) steps to leverage pluralism for accelerating alignment are to operationalize this model and the criteria, such that we can actually apply them to a concrete epistemic circumstance like alignment. Later this week I will also publish a post proposing an abstraction of the alignment problem that makes it easier to instantiate operationalization pluralism without missing the hard part of the problem.
Here are some works that will be quoted and mentioned in this post: Is Water H2O? by Hasok Chang and Rock, Bone, and Ruin by Adrian Currie.
The case of Einstein is subtle because he described himself as a methodological opportunist; that being said, I currently agree with this SEP page that he was far more methodologically focused in practice than he sometimes claimed.
One subtlety here is that operationalization is fixed; that is, the way the problem is concretised and made precise doesn't change at this level.
Although I place the pluralism vs paradigmatism question more at the level of operationalization pluralism.
The assumption of common descent is a hilarious example of a productive mistake: it apparently comes from a biblical explanation of the spread of languages given by William Jones, the first linguist to point out Indo-European as a family.