Lately, I've been thinking about how to pitch my project in a relatively succinct way. Here's a stab:
I basically have three questions that animate my research:
1) What is understanding? Here, I'm interested in a particular kind of understanding, as discussed in my previous post. My answer is basically that understanding is the ability to provide a good and correct explanation. This naturally leads to my second question...
2) What is explanation? This is really the question I've wrestled with the longest, and I'm savvy enough to know that it can be parsed in two important ways:
a) Suppose there are two propositions A and B. What must their relationship be in order for A to explain B? I call this a question about explanation simpliciter, and have offered my own account of it here. The core idea is this. There's been a long tradition of thinking of explanation as involving the fitting of an explanandum into a larger inferential network (Hempel & Kitcher are probably the most celebrated advocates of this view). The general problem with this is overpermissiveness: certain kinds of "inferential fittings" aren't explanatory (e.g. the flagpole and the shadow: a shadow's length can be used to infer the height of the flagpole casting it, yet it doesn't explain that height). My own view puts social and pragmatic constraints on this general "inferentialist" picture of explanation that block the pernicious cases. The interesting bit is that it achieves this in a seemingly non-ad-hoc way. Specifically, I begin with the recent social-psychological literature according to which explanations are accounts, i.e. social devices used to restore one's social standing when one is charged with performing an objectionable action, and then treat the explanations that interest epistemologists and philosophers of science as a species of these accounts, specifically ones in which the objectionable actions are inferences (or better yet, "inferrings").
There are two things to note about this "accountabilist" model of explanation:
(i) it is broad, as it must be given my reduction of understanding to explanation, and
(ii) it is a social and pragmatic analysis of explanation.
As I noted above, there is a second way of interpreting the question, "What is an explanation?" This presupposes an answer to the question about explanation simpliciter:
b) Suppose that A1 explains B simpliciter, and that A2 also explains B simpliciter. Under what conditions would A1 be a better explanation of B than A2? I call this a question about explanatory goodness, and have not yet offered an account of it. I'd like this analysis, too, to be social, pragmatic, and broad. The general idea is this: explanation simpliciter is a social-epistemic practice, and social epistemology provides us with resources for evaluating good and bad social-epistemic practices, so good explanations lie at the intersection of these social-epistemological ideas and my account of explanation simpliciter.
Finally, there is a third question that I'm only starting to get clear about:
3) What is the value of explanation/understanding? Folks like Kvanvig and Pritchard have argued that understanding is distinctively valuable, and hint at times that it should supplant knowledge as the keystone to our inquiries. Part of their gambit involves segregating understanding from explanatory knowledge. Since I'm staunchly opposed to that kind of segregation, my challenge is to recoup some of the distinctively valuable aspects of understanding without making that concession. I have a faint sense that this can be done by forging a connection between the debates over scientific realism and epistemic value. In short, if the kind of understanding cum explanatory knowledge that I'm developing plays a pivotal role in explaining the success of science, then I think we've discharged our responsibilities. I've gestured at these ideas elsewhere, but I'm eager to hammer out the details.