Let’s take stock of where we are. We’ve seen that if Inference to the Best Explanation is to be defensible, then there must be some link between loveliness and likeliness. In other words, we infer the explanation that, if true, would provide the greatest understanding. Furthermore, we’ve argued that understanding might be a species of explanatory knowledge. If so, we infer the potential explanation that, if true, would provide the greatest explanatory knowledge. Given what we know about knowledge, this means that IBE amounts to the following:
All else being equal, an explanation provides the greatest understanding if and only if it:
1. Provides the greatest number of true beliefs about explanations;
2. Provides the best justification for these beliefs; and
3. Renders these beliefs as luck-free or ‘safe’ as possible.
(A passing observation: There will be circumstances where these three conditions pull us in different directions. In such cases, IBE will not provide clear counsel, or we will need some way to rank conditions (1)-(3)’s inferential importance.)
We’ve already had a go at condition (3), when we discussed Morris vs. Grimm. This week, we’ll look at (1). After giving a brief gloss on Khalifa vs. Lipton, I’ll devote a chunk of this to Elgin’s paper. We can discuss further details about how Lipton and I disagree in class.
(1) Lipton vs. Khalifa
Inferentially speaking, the first of these conditions is the most important, since an inference is typically assessed by how well the truth of the premises secures the truth of the conclusion. However, it is also the most contentious claim in this week’s readings. Moreover, challenging the role of truth in understanding forces a corresponding adjustment in the role of belief, for to believe that p is to take p to be true. While Lipton finds nothing problematic in these points, Elgin and I demur. I’ll quickly go through my schtick vs. Lipton.
One challenge to the idea that understanding entails true beliefs is that we have no means of making sure that a true explanation is to be found among the potential explanations that we’re considering. This is called the underconsideration or bad lot argument. Historically, it gets lots of purchase from the fact that many of our current best explanations (coming from relativity, quantum mechanics, evolution, plate tectonics, etc.) were not considered for long stretches in the history of science.
Lipton tries to dispel this worry on more or less a priori grounds, by setting certain kinds of ground rules, e.g. the Ranking Premise, the claim that scientists reliably rank explanations. He attempts to show that this cannot be held consistently with the No Privilege assumption, which holds that no true explanation is to be found in the ranks of considered explanations. As I show, Lipton’s argument fails to account for the non-monotonicity of inductive inference. Non-monotonicity is the idea that the goodness of an inference is affected by the addition of new information, such that A is a good reason to believe B, but A & E is not a good reason to believe B, A & E & F is once again a good reason to believe B, and so on. (The stock example: that Tweety is a bird is a good reason to believe that Tweety flies; that Tweety is a bird and a penguin is not.)
Realists also are fond of invoking background theories to make their case. Roughly, the idea is that some aspect of explanatory reasoning would be very unlikely if the background theories used in such reasoning were radically false. As I argue, these appeals generally don’t live up to their billing, in large part because something less than true background theories could do the trick.
Since we understand a good deal even though, per the bad lot worry, we cannot be sure that our best explanations are true, I don’t think that the first condition is necessary for understanding: a good explanation might not provide very many true beliefs about explanations. Note that this denial can be heard in at least three ways:
(A) A good explanation can provide very few true beliefs about explanations;
(B) A good explanation can provide many untrue beliefs about explanations;
(C) A good explanation can provide many true non-beliefs about explanations.
These can, but need not, intersect. My paper says nothing about the quantity of true explanatory beliefs a good explanation provides, though since I think explanatory virtues/considerations can be justificatory, and consilience is a virtue, let’s drop (A) from further consideration: a good explanation has to explain a lot.
That leaves us with (B) and (C). Furthermore, I’ve put them in negative terms, but of course they also admit of positive glosses. Specifically, I had in mind the following:
(BK) A good explanation can entitle people to beliefs about explanations.
Essentially, this says that we should give up on the truth that good explanations provide, and focus on the justification. (I do make some concessions to veriphiles, but the kind of concessions that aren’t likely to make them very happy.)
Questions
1. Adopt the position of Lipton, and read my paper. Am I uncharitable to him at any juncture?
2. Even if my criticisms are right, my antirealist proposal may be wrong. Give me hell.
(2) Elgin’s “True Enough” (2004)
Elgin provides a distinct set of challenges to the idea that understanding provides true explanatory beliefs. (She doesn’t say too much about explanations, so let’s bracket that issue for now.) For the next few weeks or so, we’ll be focusing quite a bit on her work, as she provides a really interesting challenge to my view. Roughly, Elgin argues that understanding is not factive (i.e. doesn’t entail truth). Since knowledge is factive, this would appear to imply that understanding is not a species of knowledge. Bad news for my view, right?
I’ll explain why I’m not worried (yet) by Elgin’s challenge, perhaps in class. But first, I want you to see how I go about readying myself to address work with which I aim to disagree. The first crucial step is getting the person’s arguments down. So that’s where I’m starting:
Elgin’s Main Argument:
(1) There are some phenomena p such that for some propositions x:
a. x is acknowledged to be untrue, and
b. x figures in scientific understanding of p.
(2) For all propositions x, if x figures in scientific understanding of some phenomenon, then x is epistemically acceptable (120).
(3) Thus, there are some propositions x such that x is acknowledged to be untrue and x is epistemically acceptable.
(NB: The aim is to always reconstruct the author’s argument so as to make it valid. This is an important principle of charity, and also frees you up to focus on the veracity of the premises.)
Arguments for Premise (1)
A. Curve-Smoothing
(CS) There are some phenomena p, represented by a smooth curve C, such that the proposition that all of the data points lie on C:
a. Is acknowledged to be untrue, and
b. Figures in scientific understanding of p.
Thus, (1) is true.
Argument for (CS.b):
In the relevant examples, consider an extremely jagged curve C* such that all of the data points lie on C*.
Because C* is jagged, C* is not an intelligible pattern.
But if C* is not an intelligible pattern, then C* does not provide scientific understanding of p.
So it is the smooth curve C, on which the data points are acknowledged not to lie, that figures in our scientific understanding of p.
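Here is a minimal numerical sketch of the curve-smoothing case (my illustration, not Elgin’s; the data are invented):

```python
import numpy as np

# Six noisy observations of a roughly linear phenomenon p.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 10.8])

# C: the smooth curve (a least-squares line). No data point lies exactly
# on C, and we know this; yet C is what makes the pattern intelligible.
line = np.polyfit(x, y, deg=1)

# C*: the degree-5 polynomial that interpolates every data point exactly.
# It is strictly true of the data, but jagged and unilluminating.
interp = np.polyfit(x, y, deg=5)

print("points lying exactly on C: ", np.isclose(np.polyval(line, x), y).sum())    # 0
print("points lying exactly on C*:", np.isclose(np.polyval(interp, x), y).sum())  # 6
```

The acknowledged falsehood (all the data points lie on C) is what figures in our understanding of p; the strictly true alternative (all the data points lie on C*) does not.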
B. Ceteris Paribus
(CP) There are some phenomena p, explained by some law of the form all else being equal, all Fs are Gs, such that the proposition that all else is equal:
a. Is acknowledged to be untrue, and
b. Figures in scientific understanding of p.
Thus, (1) is true.
C. Idealizations
(I) There are some phenomena p, explained by some idealized law L, such that the proposition L:
a. Is acknowledged to be untrue, and
b. Figures in scientific understanding of p.
Thus, (1) is true.
Example: The ideal gas law posits molecular structures and interactions that never obtain.
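For reference, the standard textbook statement (this gloss is mine, not Elgin’s):

\[ PV = nRT \]

The law treats molecules as point particles that exert no forces on one another except during perfectly elastic collisions. No actual gas satisfies these assumptions, yet the law figures centrally in our understanding of, e.g., why heating a gas at fixed volume raises its pressure.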
D. Stylized Facts
(SF) There are some phenomena p, such that the proposition representing p:
a. Is acknowledged to be untrue, and
b. Figures in scientific understanding of p.
Thus, (1) is true.
Example: Economic models aim to explain why profit rates are constant, though profit rates are not constant.
E. A fortiori arguments from limiting cases
(AF) There are some phenomena p, explained on analogy with a limiting case C, such that the proposition that p is identical to C:
a. Is acknowledged to be untrue, and
b. Figures in scientific understanding of p.
Thus, (1) is true.
Elaboration of Premise (2)
Elgin doesn’t really argue for (2). Instead, she provides a plausible just-so story about how epistemic acceptance could diverge from belief:
(1) To cognitively accept that p is to take it that p’s divergence from truth, if any, does not matter cognitively.
(2) If something figures in an understanding of how things are, then it matters cognitively.
(3) Thus, to cognitively accept that p is to take it that p’s divergence from truth, if any, does not figure in an understanding of how things are, i.e. p is “true enough.” [See 120]
Elgin continues this elaboration (A), and then rebuts a potential objection (B).
A. Under what conditions does divergence from the truth cognitively matter?
Let p be a candidate for acceptability. Acceptable propositions “belong to and perform function in [i.e. figure in] larger bodies of discourse, such as arguments, explanations, or theories that have purposes.” (121) Whether or not p is true enough depends on several contextual factors:
(Context) The larger body of discourse D to which p belongs. For example, other elements in D, such as background assumptions, affect p’s acceptability.
(Purpose) p is true enough for D’s purpose, i.e. true enough to serve certain of D’s ends.
(Function) If p performs no role in achieving D’s purpose, then p is not true enough.
B. Veritistic Alternative, with Reply
Those who hold that inquiry is always truth-driven (veritists) might well argue that the function of all acceptable propositions is to approximate the truth.
Elgin’s critiques of this proposal:
(1) Not all approximations perform the same function (122). Some approximations are less accurate, but better “serve as evidence or constraints on future theorizing” than their more accurate counterparts, e.g. a first-order approximation that admits of an analytical solution vs. a second-order equation that does not. In this case, “A felicitous falsehood thus is not always accepted only in default of the truth. Nor is its acceptance always ‘second best.’ It may make cognitive contributions that the unvarnished truth cannot match.” (122) (A worked illustration of this point follows below.)
(2) Not all felicitous falsehoods are approximations. The value of an idealization is not always directly proportional to how closely it approximates the truth.
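To make critique (1) concrete, here is a standard physics illustration (my example, not Elgin’s): the simple pendulum. The exact equation of motion,

\[ \ddot{\theta} + (g/\ell)\sin\theta = 0, \]

has no elementary closed-form solution. The less accurate small-angle approximation sin θ ≈ θ linearizes it to

\[ \ddot{\theta} + (g/\ell)\theta = 0, \]

which has the analytical solution θ(t) = θ0 cos(√(g/ℓ) t) for release from rest at θ0. The cruder approximation thus serves as evidence for, and a constraint on, further theorizing in a way that the more accurate, analytically intractable equation cannot.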
Fictions
If you're still on Elgin's bandwagon at this point, note that she's still elaborating her very interesting picture of understanding. In particular, how is it that an acceptable proposition can be false (strictly speaking) yet provide understanding that is somehow constrained by the facts? Elgin suggests that acceptable propositions (what she sometimes calls "felicitous falsehoods") are better construed as fictions than as mistaken or inaccurate statements of fact. She then elaborates on how fictions shed light on reality through a cognitive process called exemplification (A), and rebuts objections to this proposal (B, C).
A. Exemplification
Elgin argues that various proposals by Lewis, Walton, Yablo, and Kitcher about fictions all fail because they cannot tell us why something fictitious can shed light on how we understand real things.
She offers exemplification as a more promising alternative.
1. A simple example of exemplification: Paint samples exemplify the colors of different paints.
B. Possible Objection: Exemplifying Fictions are simply a way station to the truth
We often begin with a highly idealized model, and then make corrections to get closer to the truth. Hence, it’s the corrections, rather than the fictions that they correct, which are more epistemically valuable. As before, this would play into the veritist's hands.
Elgin offers three replies to this objection:
(1) Some corrections are unnecessary or unproductive. A fortiori arguments from limiting cases are one such example.
(2) In practice, corrections are not always aimed at truth, but only at more refined models. “They replace one falsehood with another.”
(3) Even where corrections yield truths, fictions can “structure our understanding in a way that makes available information we would not otherwise have access to.”
C. Possible Objection: Replacing Truth with Truth-Enough Leads to Intractable Relativism
There is still another objection to Elgin's view. Suppose that a theory attesting to the healing powers of crystals is offered, and the inquirer in question aims to sell more healing crystals. Won’t the claim that some crystals heal be true enough for that purpose?
Elgin’s Reply: “A theory can claim to make sense of a range of facts only if it is factually defeasible—only if, that is, there is some reasonably determinate, epistemically accessible factual arrangement which, if it were found to obtain, would discredit the theory.” (129)
Furthermore, “an acceptable theory must be at least as good as any available alternative, when judged in terms of currently available standards of cognitive goodness.” (129)
This helps to pinpoint truth’s role in understanding: “A factually defeasible theory has epistemically accessible implications which, if found to be false, discredit the theory. So a defeasible theory, by preserving a commitment to testable consequences retains a commitment to truth.” Such a view is still compatible with there being other propositions in the theory which are acknowledged to be false.
Questions
1. Look at Elgin’s various arguments (the Main Argument, the Arguments for Premise (1), and the Argument for Premise (2)). What strikes you as a dubious piece of reasoning? Why? Can you provide an argument to that effect?
I found Professor Khalifa’s critique particularly incisive, though I found his argument against Lipton’s privilege account (pp. 93-94) confusing. In class, could someone explain the contrary/contradictory distinction and use a somewhat less confusing example? Responsible reasoning fits well with Khalifa’s previous thoughts on the nature of explanations and, if we accept such anti-realist reports, provides a basic, positive framework for justifying and elaborating on the current practice of science (and perhaps other modes of reasoning).
I am also fond of Elgin’s discussion of truth. To be honest, I have long been suspicious of claims to “truth”, whether Plato’s idealizations or modern scientific accounts, and Elgin more than adequately articulated some of my concerns (e.g., our practices, especially non-scientific ones, often appear not to involve direct claims of truth). Additionally, I want to explore the possibility of an accountabilist augmentation of Elgin’s theory. That is, to avoid the postmodern trap, we define or acknowledge acceptability in terms of Khalifa’s account of responsible epistemic reasoning within a community with rules and procedures.
I'll take a stab at explaining the difference between contrary and contradictory as Khalifa/Lipton use the terms. A contradictory is, well, a formal logical negation: not-p is the contradictory of p. A contrary is just an incompatible proposition. Take two positive theories, r and s, which are incompatible with one another. Then s is a contrary of r. That is, the truth of s entails the truth of not-r, though not-r does not entail s (both theories could be false).
With a firm understanding of this distinction in hand, I want to try to reformulate Lipton's argument in an easier-to-digest form. The ranking premise holds that among a set T of contrary theories t1, t2... tn, scientists can reliably rank the elements of T in order of likelihood. But if this holds without restriction, then scientists can also rank any given theory tk against its contradictory not-tk, that is, determine whether tk is more likely than not. And with that ability, a scientist can rule out all contrary theories in one fell swoop, because every contrary of tk entails not-tk.
First of all, this would almost never work, because if you have a lot of mutually contrary theories, the likelihood of any one of them being true will be less than 50%. But this is not a knockdown argument.
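To spell out the arithmetic behind that first point (standard probability, my gloss): mutually contrary theories are mutually exclusive, so their probabilities sum to at most 1:

\[ \sum_{i=1}^{n} P(t_i) \le 1. \]

Hence at most one theory can have probability greater than 1/2, and if the n rivals are roughly equally credible, each gets P(tk) ≈ 1/n < 1/2 once n ≥ 3. So ranking a theory above its contradictory is possible for at most one member of the lot.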
At the risk of falling into his trap, I want to take up the banner of the potential objection he raises on 156: that the ranking premise only concedes the ability to rank contraries, not contradictories. This is because, for a given theory t, its contradictory not-t is almost certainly not a full theory of the same depth.
Bracketing something I don't know what to do with: [Sometimes, as with the existence or non-existence of entities, a contradictory is a contrary, but in those kinds of cases I might want to rethink the ranking premise entirely.]
Now it's possible that no one will believe me, but I had the exact same argument in my head as Khalifa p. 93 when I read Lipton's argument for why the ability to rank contraries entails the ability to rank contradictories.
I am usually on board with Lipton, and have been for much of class, but this week I was definitely not.
I'm going to attempt to plant my banner in defiance of underconsideration. The general strain of my argument is perhaps at risk from some of the moves in Khalifa, but I'd enjoy fleshing this out. Instead of relying on relative ranking of contradictories or on the truth of background theories, is it possible to defend the idea that the methods by which we generate competing hypotheses follow certain virtues that are, in a similar manner to the virtues we employ in IBE, useful in generating a set of likely hypotheses from which we more finely filter the loveliest explanation through IBE?
My thinking here is that, although we take background beliefs to be approximately true in generating new hypotheses, this is not so much a concern if the processes used to generate these background beliefs from observations are themselves based upon virtues of consideration that tend to generate hypotheses likely to be approximately true. This also seems to me to account for the examples in which background beliefs are rejected in favor of a new hypothesis that takes into account new data. I suppose these would be abductive virtues of a sort.
Clearly, with the number of theories that have, in the past, been rejected, we cannot say that all of the theories we generate are true. Still, I feel we might be able to make a move similar to the one Lipton makes in asserting that scientists' rankings of hypotheses are generally reliable, and hence that we are at least moving toward more true beliefs. If our abductive virtues are generally truth-tropic, and our inferential virtues are generally truth-tropic, then it seems plausible that, as we gather more data, our theories will over time begin to approach the truth.
Aside from trying to define a set of abductive principles and defending why we should consider them as generally producing a set of hypotheses likely to contain approximately true accounts (a topic I also feel is worth considering), is there some linchpin, or linchpins, in my reasoning that does not hold?
As seems to happen a lot, I second Josh on this, which means I'm also seconding Khalifa. Surely it's pretty obvious that the sort of ranking Lipton thinks we can do will lead to incoherencies?
Obviously there's not a lot of point in me talking at length this close to class-time... but I'm going to want to talk about Elgin. Exemplification seems to me to be hitting on something good and important that we do, and that does grant understanding, but I don't know if it's as important as she makes it out to be. I'm particularly sympathetic to Khalifa's point B (which is more or less what I came to while reading). The possible objections don't seem to me particularly salient. But I feel like there are interesting things here to which I'm perhaps not giving due consideration.
I'd like to discuss Lipton's way of collapsing the distinction between relative and absolute evaluation. More precisely, I disagree with Lipton's argument in regard to contradictories and contraries. The problem with his argument that "it is not clear how to ban the ranking of contradictories while allowing the ranking of contraries (156)" is that he fails to recognize that the domain is defined differently in the case of contradictories than in that of contraries. Think of a pie chart that is equally divided into two parts, T and not-T. This kind of domain applies to cases of contradictories. However, for contraries, the domain is more like a straight line made up of dots that represent T, T2, T&T2, T2&T3, T-T3, and so on. Lipton is correct in pointing out that all the dots other than T are technically dots of not-T. However, these dots of not-T are not the same as the not-T in the pie chart.

I understand that this might be partially similar to Professor Khalifa's criticism in regard to Lipton's use of those two terms on page 93. However, I want to further argue that Lipton's argument not only entails the obviously false claim that "T can be simultaneously ranked ahead of and below not-T (93)", but also fails to close the gap between relative and absolute evaluation, in the sense that it falsely assumes that the domain of all possible hypotheses is a definite and closed group, just like the pie chart. Once this key distinction between contradictories and contraries is recognized, I think it weakens, or at least complicates, a later argument Lipton makes about how "the level of reliability a background confers depends on its content, not just on the method by which it was generated, and that what matters about the content is, among other things, how close it is to the truth (159)." Hopefully more on all this in class!