- Each Friday, 4:00-5:45 p.m.
- 308 Louise Pound Hall (the philosophy seminar room)
- Undergraduates are welcome!
- For more information, contact Trevor Adams: tadams23@huskers.unl.edu
- Due to UNL's guidelines to prevent the spread of COVID-19, colloquia will meet via Zoom (please email Trevor Adams for the Zoom link)
Fall 2020 |
|
November 6, 2020 | |
Presenter: Eunhong Lee |
Abstract: In theories of partiality, relationships views, including Kolodny's and Scheffler's, are criticized by opponents on the grounds that the relationship itself cannot play an essential role in partiality. The opponents of relationships views say "even if the relationship itself can become a necessary normative reason, it seems not to be a motivational reason." However, I argue that the other theories of partiality face problems similar to those facing relationships views. I clarify the problem for standard relationships views, and criticize the other theories of partiality, including projects views, individuals views, and the history view (the view that appeals to a history of so relating to someone). Finally, I argue for a modified version of relationships views. |
November 6, 2020 | |
Presenter: Zach Wrublewski Title: "A Nozickean Argument for Universal Healthcare" |
Abstract: In general, left-libertarian arguments in support of universal healthcare programs tend to rely on a specific reading of the "Lockean Proviso," which is, roughly, the idea that appropriation of property or resources is permissible so long as "enough and as good" is left for those not appropriating the specific property or resources in question. Specifically, they tend to argue that leaving "enough and as good" for others should be understood such that others are entitled to a share of the benefits that result from the acquisition (though the specific reasons for this vary). Right-libertarians, on the other hand, tend to accept weaker readings of the Lockean Proviso, or reject it outright, which leads them to reject arguments for universal healthcare programs. I contend that, while it might be the case that right-libertarians should reject arguments in support of universal healthcare programs based on the strong reading of the Lockean Proviso outlined above, justification for such programs can be found in more humble assumptions (and entailments) of other things they would or do accept. While I think this is true in general, in this presentation I will narrow my focus to one particular right-libertarian system: the one outlined in Nozick's Anarchy, State, and Utopia. I will argue, generally, that Nozick's arguments justify certain types of universal healthcare programs; in particular, I will argue that Nozick's justification for the "minimal state" would justify such programs, and that his arguments against the justification of states more robust than the "minimal state" would not rule out such programs. |
October 30, 2020 | |
Presenter: Zack Garrett Title: "If Fermat's Last Theorem is False, then this is a Colloquium Talk" |
Abstract: Mark Jago argues that Ted Sider's world sentences cannot be used for an ersatz theory of impossible worlds. He claims that regardless of whether the world sentences represent their contents implicitly, through implication, or explicitly, by including a conjunct for every sentence, they are not usable for providing a semantics for counterpossibles — one of the main goals of creating a theory of impossible worlds. In this paper, I argue that world sentences can represent impossible worlds implicitly through implication and still be usable for providing a semantics for counterpossibles. This is accomplished by including in each world sentence a conjunct specifying what inferences are correct in that world. For logically impossible worlds, it need not be the case that ex falso quodlibet is valid, and so world sentences that implicitly represent impossible worlds need not be indistinguishable. |
October 23, 2020 | |
Presenter: Seungchul Yang |
|
October 16, 2020 | |
Presenter: Janelle Gormley Title: "Prohairesis and Philia: Aristotle on the Genesis of Friendship" |
Abstract: "In friendships based on excellence on the other hand, complaints do not arise, but the choice of the doer is a sort of measure; for in choice lies the essential element of excellence and character." –EN 1165a20-24 Aristotle accepts psychological eudaimonism—the position that human beings desire to flourish. In the Nicomachean Ethics, Aristotle claims that in order to flourish, one needs friends. And if one needs friends, then getting clear on what it is to be a friend is paramount. Many authors have taken up this project of articulating what it is to be a friend. Scholars tend to argue that Aristotle thinks friendship is a shared activity, and each friend is reciprocating well-wishing and having good will toward the other. However, this articulation explains only what it is to be a friend once one is in a friendship. This account does not provide an understanding of how one becomes a friend. In "Aristotle on Friendship," John Cooper claims that this question goes unanswered by Aristotle. He writes that Aristotle "does not, except incidentally, have anything to say about how friendships are formed in the first place." Further, "Aristotle's theory does not imply any stronger connection than this [initial desire for pleasure, profit, or good] between these motives and the formation of the corresponding types of friendship." According to Cooper, the silence on Aristotle's part is unproblematic because it fits with our ordinary intuitions about friendships in that the initial meetings of those that will be friends are highly accidental and contingent. In this paper, I will show that Aristotle does have the resources to provide an account of the genesis of friendship, and that, contrary to Cooper, this account will better capture our intuitions about how we come to the friendships we have. |
October 9, 2020 | |
Presenter: John Del Rosario |
Abstract: William Alston argues that it is “ill-advised” to think of epistemic justification in terms of deontology. He thinks that deontology “does not hook up in the right way with an adequate truth-conducive grounds.” Instead, Alston maintains that truth conduciveness is what guarantees that p is true and not false for believer S. In the process, Alston dismisses the most defensible form of indirect doxastic voluntarism, and then argues that even that is not enough to put S in an epistemic position such that her belief in p is true on adequate objective grounds.
My paper on Kant will seek to address Alston's two-fold worry. The provisional thesis of my paper is that S's will is:
(EA1) the ultimate causal and imputable source of doxastic attitudes; and
(EA2) always, a potential explanation for why S is able to hold such doxastic attitudes.
The ultimate goal of this paper is to show that, to the extent that S's will is both (EA1) and (EA2), it would be infelicitous for Alston (and Pojman) to diminish the plausibility of justifying belief in terms of deontology or epistemic responsibility. As it stands, I feel that my thesis addresses the worry about indirect doxastic voluntarism and epistemic responsibility (which is the main point). But it is not yet explicit how my thesis would address Alston's truth-conduciveness challenge (which is a legitimate point). |
October 2, 2020 | |
Presenter: Chen Xia |
Abstract: In "Freedom of the Will and the Concept of a Person," Frankfurt claims that the ability to form second-order desires distinguishes humans from other creatures. But Watson points out that one can be a wanton with respect to one's second-order desires and volitions, so assent is not necessary (1975). I will defend second-order desires by arguing that to form second-order desires is not simply to choose one of two conflicting first-order desires to be the effective desire that moves someone to act. What works here is not choosing, but reflective self-evaluation. We will be alienated from first-order desires because we regard them as objects not belonging to us when we evaluate them. But we will not be alienated from second-order desires, or the higher-order desires at which we stop, because we acknowledge them as motivating our actions after deliberate reflection and evaluation based on what we are concerned with. Also, I do not think infinite regress is a problem here. |
September 25, 2020 | |
Presenter: Trevor Adams |
Abstract: Hope is a very common attitude. Our family and friends express hopes for one another, businesses hope for their own future growth, and both politicians and religious leaders call for hope in the face of hard times. In philosophy, much discussion of hope has been in the field of ethics, but this paper will explore the epistemic aspects of hope. Recently there has been a lot of literature on the nature and rationality of hope (e.g., Matthew Benton (2019), Adrienne Martin (2013), Michael Milona (2018), Katie Stockdale (2017), Luc Bovens (1999), Ariel Meirav (2009), etc.), which has many epistemic insights. However, the relationship between hope and knowledge has been explored much less. In this paper I will explore this relationship and some of the unique ways in which hope interacts with knowledge. It has mostly been taken for granted that people do not hope for things to occur that they know will occur. However, I will give an argument that hope and knowledge are compatible, and I will defend that argument against objections. More specifically, I will argue that someone can hope that p is true and know that p without being irrational. |
September 18, 2020 | |
Presenter: Talhah Mustafa Title: "The White Oppressor" |
Abstract: The experiences of White men have molded this society into what it is through the subordination of Black Americans. That subordination has evolved into other forms by invalidating and aggravating the experiences of the Black community. White supremacy ideologies continue to influence today’s society and are now more dangerous than ever. This paper will explore how the evolution of White supremacy invalidates and aggravates all Black experiences. This paper will explore how Black experiences are not politically meaningful in the context of White supremacy because in political discourse, meaning-making is in the purview of Whites. |
September 11, 2020 | |
Presenter: Mark Selzer Title: "Reasons from Higher Order Abilities" |
Abstract: In his influential article, "Internal and External Reasons" (1979), Bernard Williams introduces the Explanatory Constraint: EC: The fact that p is a normative reason for A to ϕ only if A can ϕ because p. There is a problem with EC: if 'can' means that there is some possible world where A can ϕ because p, then almost anything would count as a normative reason for A to ϕ. Therefore, a plausible interpretation of EC must avoid such a 'bare possibility' interpretation of 'can'. Building on Hille Paakkunainen's interpretation of EC (2018), I shall argue for an interpretation of EC that avoids the bare possibility problem and provides a plausible account of reasons based on higher-order abilities. A key advantage of this account is that it explains a wide range of cases better than first-order accounts. |
September 4, 2020 | |
Presenter: Christopher Stratman Title: "On the Incoherence of Phenomenal Mental States" |
Abstract: In previous chapters we discussed how, according to the Phenomenal Intentionality Theory (PIT), all genuine intentional mental states are either identical to or in some sense partly grounded in phenomenal mental states. In chapter two, we discussed several motivations and arguments that have been advanced in support of PIT, which assume the existence of phenomenal mental states. In chapter three, I showed that reductive versions of PIT falsely predict that the phenomenal content involved in a subject's perceptual experience never outstrips the phenomenal content involved in a subject's directly perceived visual perception. But, when we consider cases of what Kind (2018) calls "Imaginative Presence," a subject's perceptual experience can have absent phenomenal content that outstrips or lingers beyond the phenomenal content of a subject's directly perceived visual perception. Thus, reductive accounts of PIT are empirically inadequate. I then argued that, if one restricts their account of PIT in order to avoid this problem, it will not count as a fully general theory of intentionality, and, therefore, it will not count as a theory of the deep, metaphysical nature of what intentionality is. There is an alternative solution that needs to be explored. If we adopt a view of intentional content understood in terms of experiential, first-personal mental events rather than phenomenal mental states, then, perhaps, we can avoid the problem of imaginative presence raised in the previous chapter. In this chapter, I shall begin to investigate this strategy, and argue that there is no coherent way to make sense of what a phenomenal mental state is. That is, our concept of a phenomenal mental state, as it is typically deployed in theories of mental content, is confused. If this is correct, it will have significant consequences for numerous theories in philosophy of mind that posit the existence of phenomenal mental states, since they seem to be ineliminable posits in many of the views espoused in current debates in the philosophy of mind. These mental states occupy a fundamental place in contemporary philosophy of mind for, according to many accounts, they are a crucial part of our understanding of the mind. But phenomenal mental states are typically assumed, not demonstrated, and the existence and nature of such states are rarely explored. The time is right for submitting this issue to closer examination. The goal of this chapter is to present and defend an argument that purports to show that the concept of a phenomenal mental state is incoherent. I shall call the argument to be defended in what follows the "Incoherence Argument" (IA), which can be stated as follows:
(P1) All phenomenal mental states are instantiations of experiential properties.
(P2) Experiential properties are either (a) events of some form or (b) facts about experiences construed in terms of events.
(P3) If (a), then we make a metaphysical category mistake by conflating events with states.
(P4) If (b), then phenomenal mental states are not phenomenal since there is nothing-it-is-like to be a fact (i.e., facts lack a phenomenal feel).
(P5) Therefore, there is no coherent way to make sense of what a phenomenal mental state is.
The primary concern in this chapter is to leverage this argument against reductive versions of PIT that posit phenomenal mental states as the fundamental constituents of intentional content. Much of the chapter will be devoted to defending (P2)-(P4), since these are the controversial premises. |
Summer 2020 |
|
July 17, 2020 | |
Title: UNL Philosophy Graduate Student Fall Teaching Session |
Abstract: This week, instead of meeting to discuss the work of a scheduled presenter, we will meet to brainstorm and discuss issues surrounding teaching and TAing our Fall 2020 courses. |
July 10, 2020 | |
Presenter: Christopher Stratman Title: "Phenomenal Intentionality and the Problem of Imaginative Presence" |
Abstract: The Phenomenal Intentionality Theory (PIT) claims that all intentional mental states are either identical to phenomenal intentional mental states, or are partly grounded in phenomenal intentional mental states. I shall explore cases of what Kind (2018) calls "imaginative presence" in order to develop a novel challenge to PIT. In cases of imaginative presence, one's perceptual experience outstrips what is immediately perceived. For example, if Alex sees a cup on the edge of the table, Alex's perceptual experience has phenomenal intentional content C. But Alex can also imagine the cup falling off the table, and this imagining has the phenomenal intentional content C+. In such cases, the phenomenal intentional content C+ lingers beyond its reductive base of Alex's visual perception and, therefore, resists reduction. PIT falsely predicts that a subject's perceptual experience can never have phenomenal intentional content C+ that lingers beyond its reductive base. So, PIT cannot give an adequate account of imaginative presence. |
July 3, 2020 | |
Presenter: Talhah Mustafa |
Abstract: One of the great chasms in philosophy of mind has to do with perception, which is too great in and of itself to remain a subfield of mind. A contemporary debate amongst perceptionists has to do with perceiving and experiencing, and which of the two is more fundamental. Those who claim experience comes prior to an agent's perception (experience-firsters) argue that experiencing constitutes other perceptual states, whereas perception-firsters claim that perceiving things as they are is metaphysically and explanatorily prior to other perceptual states. Perception-firsters, such as Lisa Miracchi in her Perception First, provide an alternative account in order to avoid an objection directed towards those who claim perception comes first. This paper will attack Miracchi's Competence View (the alternative account she proposes). |
June 26, 2020 | |
Presenter: Adam Thompson Title: "Moral Judgment Skepticism and Blame" |
Abstract: The approach to moral responsibility inspired by P.F. Strawson (1962) attempts to demonstrate that being morally responsible is based on our common practices: that is, that our attitudinal responses to (alleged) morally evaluable behavior are explicable in terms of appropriateness standards that apply directly to those attitudes and their expression. This is appealing due in part to the fact that the approach promises to avoid certain sources of skepticism. Roughly, if praise or blame of an individual’s act is appropriate by standards that develop(ed) through our interpersonal affairs as the Strawsonian approach alleges, there’s neither (a) a need to elaborate a type of freedom sufficient for being praiseworthy or blameworthy nor (b) a need to show how that freedom can be possessed by persons in a causally (in)deterministic world. Interestingly, however, the most prominent accounts of the attitudes constitutively linked to praise and blame harbor another skepticism at their core—namely, skepticism about moral judgment. I show that their skepticism about moral judgment is unwarranted. |
June 19, 2020 | |
Presenter: Zack Garrett Title: "Glitchless Marathon" |
Abstract: There are many proposals for what constitutes the moral wrongness of performance enhancement in sports. In this paper, I argue that there are cases of performance enhancement that are controversial and that avoid all of the moral worries so far put forward. One such example is the use of Nike's Vaporfly shoe, which gives an advantage to any runner wearing it without any risk of harm. I argue for a different approach to the morality of performance enhancement that can adjudicate on any form of enhancement. The moral wrongness of some forms of enhancement comes from the moral wrongness of tyranny by the minority. When a minority of athletes decide to use a new form of performance enhancement, they force others to follow suit to stay competitive. If most athletes in a sport do not want to use the new form of performance enhancement, they have been forced to do something they do not want to do by a minority of athletes. Forms of performance enhancement can be acceptable when a large majority of athletes in a sport are in favor of them. They are unacceptable when only a small minority is in favor of them. In the case where the athletes in a sport are divided on the use of a new form of performance enhancement, the best option is to split the sport into new divisions, one that accepts the new form of enhancement and one that does not. |
June 12, 2020 | |
Presenter: Trevor Adams Title: "Egalitarianism and the Separation Problem" |
Abstract: Liberal egalitarianism in our contemporary western societies endorses the thesis that, with a few exceptions, all human beings are each other's moral equals. In the work Challenges to Human Equality, Jeff McMahan argues that liberal egalitarians have an unsolvable problem due to this thesis they endorse. This is because one component of this view is that all wrongful killings of human beings are equally wrong (with some exceptions) (81). A more refined way of putting this view is what McMahan calls the "equal wrongness thesis," which says that all wrongful killings of persons—which are individuals with "psychological capacities beyond a certain threshold of self-consciousness and minimal rationality"—are equally wrong (82). McMahan says that almost all egalitarians believe that nonhuman animals are not our moral equals (83). This view is commonly defended by referring to certain capacities that only humans are thought to have (83). McMahan thinks that if we attempt to base this thesis on certain capacities, then we will end up both excluding a number of humans who lack some of these capacities and elevating those who have these capacities to a greater degree (83). Both of these results are in conflict with egalitarianism. |
June 5, 2020 | |
Presenter: Alfred Tu Title: "Puzzles of Gastronomic Expertise" |
Abstract: In our culinary culture, some people, such as food critics and sommeliers, are commonly regarded as experts about food. Their judgments and suggestions seem to command more respect than those of ordinary people, much as the judgments of experts in other areas do. There are various accounts of "expert," and according to Goldman (1991, 2001, 2011), a cognitive expert is someone who (1) possesses a substantial body of true beliefs in a certain epistemological domain, (2) has a capacity to deliver true answers to new questions posed in the domain, and (3) has an extensive body of knowledge on both primary and secondary questions in the domain. It is a common belief that gastronomic experts have more sensitive taste than ordinary people. Since they are capable of forming more true beliefs through tasting than ordinary people, their expert status seems to fit Goldman's account of an expert. However, there remain some puzzles in this picture. For instance, tasting seems to be based on taste sensation. But on some people's view, taste is constituted by both objective and subjective factors, if it is not purely subjective. Tasting results usually seem to have various subjective aspects, such as interpretation, metaphor, and the use of figurative terms. And these subjective aspects seem to be the interesting, if not the most important, part of the judgment of food. Therefore, it seems that proponents of the veritistic account of expertise need either to tell a story that covers expertise involving various subjective aspects or to deny that such expertise exists, which is against our practice. In this paper, I am going to argue that Goldman's veritistic account of the expert cannot be made compatible with this understanding of taste sensation. Therefore, either proponents of veritistic accounts of the expert need to revise their theory in order to cover gastronomic expertise or we need an alternative account to replace it. |
May 29, 2020 | |
Presenter: Christopher Stratman Title: "Revisiting Moore's Anti-Skeptical Argument in 'Proof of an External World'" |
Abstract: External world skepticism is often associated with questions about what can be known with certainty. This way of looking at the issue is mistaken. The reason why the external world skeptic's challenge is so jarring is that it questions the nature of reality itself, not merely our ability to know things. G. E. Moore responded to the skeptic's challenge by arguing that we can "prove" that the external world exists simply by holding up our hands and saying, "look, here is a human hand, and here is another one." Philosophers have largely disagreed about how to understand Moore's anti-skeptical argument, but most have assumed that it targets an epistemic version of the skeptic's challenge. However, this assumption is false. In order to properly understand Moore's argument, we need to think of it as targeting an ontological form of skepticism. If this is correct, then it follows that the conclusion of Moore's argument is an ontological claim about the nature of reality. Thus, the conclusion outstrips the scope of the epistemic premises of the argument and proves too much. |
Spring 2020 |
|
May 1, 2020 | |
Presenter: Talhah Mustafa Title: "Language in Culture" |
Abstract: What do the following words mean: "boot," "flat," and "football"? Well, it depends on the country we're in and the culture in that country. Culture is a constituent of language. If culture changes, the language associated with that culture changes. There are three factors that define language and how it's used: political, societal, and religious. This paper explores a possible fourth factor, colonialism. Without this fourth factor, the first three don't seem to work when we're looking at language in Pakistan and India. The languages used in these countries are similar, and it seems as if only my fourth factor explains the similarities. Political, societal, and religious explanations all fail, and I'm going to explore that. |
Presenter: John Del Rosario |
Abstract: Professor Louis Pojman maintains that belief is not necessary for faith. More specifically, he argues that belief that God exists is not a necessary condition for an embrace of some theistic lifestyle. This is anchored in his contention that (Bi) belief in God does not entail (Bt) belief that God exists; put simply, belief-in does not entail belief-that. (Hope, instead, for Pojman, is sufficient for faith. Hope does not rely on the warrant of evidence which belief seems to require; it needs only to latch on to the possibility that God exists.) My argument in this paper is an attempt to address the alleged non-entailment between belief-in and belief-that. (1) On a more pragmatic tenor, it is rational for S to have an attitudinal (Bi) which accepts the propositional (Bt) (Cohen, Alston). The attitude may not be as spontaneous; still, it is truth-oriented and revelatory of some cognitive commitment to a certain God-exists policy (Schellenberg). (2) It is harder, but not impossible, to show the logical relation between (Bi) and (Bt). A route to take is to argue that (Bi), as a religious belief, is both factive and evaluative (Price). If the conjunct obtains, then it might be possible to say that the "trust-in" pro-attitude exemplified in (Bi) must, at the very least, entertain the plausibility of the propositional (Bt). For what it is worth, this entertaining of some propositions about God goes deep into the heart of poignant narratives on the beginnings of what can be called faith in God. The entailment could be spelled out, thus, as: (Bi) presupposes, at the very least, entertaining (Bt). |
April 17, 2020 | |
Presenter: Trevor Adams Title: "The Compatibility of Hope and Knowledge" |
Abstract: In "Epistemological Aspects of Hope," Benton argues that hope is incompatible with knowledge (1). Put more precisely, if someone were to hope that p, then they wouldn't have knowledge that p. This is because, "when one has such propositional hope, one hopes that the world is (or will turn out) a certain way" (1). Since we hope for current or future outcomes, our hopes are fulfilled when we come to know that the outcome has happened, and our hopes are dashed when we find out that the outcome does not occur (1). Thus, we do not hope for propositions we take ourselves to know (1). While our conceptual and linguistic judgments suggest that knowledge and hope are inconsistent, I think things are more complicated than they appear. First, it may be possible that an agent would ascribe knowledge of p to themselves in one context, while in another context hope that p. Thus, they may be compatible across contexts but not at the same time in the same context. Second, if one accepts both the "chances license hope" principle and infallibilism, then perhaps hope and knowledge are in fact compatible. Lastly, when one tries to articulate what about hope is inconsistent with knowledge, I think we come to the realization that the inconsistency actually lies in an agent's believing themselves to know p while hoping p, not their actually knowing p and hoping p. |
March 13, 2020 | |
Presenter: Bjorn Flanagan Title: "Tragedy as the Highest Art: Does Representing Tragic Action Involve Intellectual Virtue?" |
PRESENTATION CANCELLED DUE TO UNL'S MEASURES TO PREVENT THE SPREAD OF COVID-19
Abstract: The Poetics defines tragedy as the representation of a certain kind of action; however, there remains a question of how one should interpret "action" in this context. Elizabeth Belfiore presents an interpretation of action as an event that is not subject to moral considerations; if true, it calls into question those interpretations that take tragedy to be ethically educative. However, I contend that this misses a crucial qualification that Aristotle places on tragedy. If Aristotle presents tragedy as an expression of the highest art, then an account is required of how this is so, which would, by extension, require a moral qualification of the action. I agree with Belfiore's assessment of tragedy on which "action" does not involve moral considerations but only those of the practical virtues; I will present a case that the purpose of representing a tragic action is contemplative and concerns intellectual virtue, which qualifies tragedy as the highest art form. |
March 6, 2020 | |
Presenter: Christopher Stratman Title: "Agentive Phenomenology as the Experiential Basis of Cognitive Phenomenology" |
Abstract: Recently, many philosophers (e.g., Horgan and Tienson (2002), Loar (2003), Kriegel (2013), Mendelovici (2018)) have endorsed "Phenomenal Intentionality" (PIT). This view asserts that there is a kind of intentionality that is, in some important sense, constitutively determined by phenomenology alone. Additionally, PIT claims that this sort of "original intentionality" is distinct from and prior to all other forms of intentionality. A central thesis involved in PIT claims that there is a kind of cognitive phenomenology. As Horgan and Tienson have suggested: "mental states of the sort commonly cited as paradigmatically intentional (e.g., cognitive states such as beliefs, and conative states such as desires), when conscious, have phenomenal character that is inseparable from their intentional content" (p. 520). However, there has emerged a stalemate between those who accept the cognitive phenomenology thesis and those who deny it. In this paper, I attempt to diagnose the stalemate and offer some ways in which it can be avoided by appealing to agentive phenomenology as the experiential basis of cognitive phenomenology. |
February 28, 2020 | |
Presenter: Alfred Tu Title: "What Is Gourmet: A Social Epistemological Perspective" |
Abstract: Issues concerning experts and expertise have become important topics in recent social epistemology. Goldman (1991, 2001, 2011) proposed a characterization of "expert": according to him, a cognitive expert is someone who (1) possesses a substantial body of true beliefs in a certain epistemological domain, (2) has a capacity to deliver true answers to new questions posed in the domain, and (3) has an extensive body of knowledge on both primary and secondary questions in the domain. Nevertheless, Goldman's veritistic account of the expert seems to run into trouble if we apply it to forms of expertise that are based on sensations. For instance, the expertise of gastronomic experts, such as food critics and sommeliers, is tasting, which seems to be based on taste sensation. But on some people's view, taste is constituted by both objective and subjective factors, if it is not purely subjective. On one hand, modern physiology tells us there are five basic tastes, which can be considered objective factors of taste. On the other hand, tasting results usually seem to have various subjective aspects, such as feeling and opinion. And these subjective aspects seem to be the interesting, if not the most important, part of the judgment of food. Therefore, it seems that proponents of the veritistic account of expertise need either to tell a story that covers expertise involving various subjective aspects or to deny that such expertise exists, which is against our practice. In this paper, I am going to argue that Goldman's veritistic account of expertise cannot be made compatible with this understanding of taste sensation, and I suggest that we need a non-veritistic account of expertise to cover more types of expertise. |
February 21, 2020 | |
Presenter: Brant Barnes Title: "Skeptical Problems with the Dual Aspect Theory" |
Abstract: In recent years, the discussion of property individuation — providing identity conditions for properties — has overtaken the discussion of a property’s nature. Questions such as "is a property a cluster of causal powers" have been overtaken by questions such as "can a property be individuated by its causal role." Of course, how one answers the former sort of question will have a bearing on how one answers the latter. In this paper, I will discuss a specific theory on the nature of properties, namely, the dual aspect theory. I will argue that the dual aspect theory of properties does not provide a viable answer to the related question of property individuation. |
February 14, 2020 | |
Presenter: Zack Garrett Title: "Plurivaluationism" |
Abstract: A sentence is supertrue if and only if it is true on all complete precisifications. A complete precisification sets a precise cutoff for the application of every word. Some recent theories of vagueness have forgone the move from precisifications to supertruth, opting instead to relativize truth to precisifications. These theories are sometimes called plurivaluationist. Some plurivaluationist theories include Diana Raffman's multi-range theory, John MacFarlane's expressivism about vagueness, and Nicholas J.J. Smith's plurivaluationist degree theory. I will argue that a complete precisification is not always possible, and so there may be sentences that, relative to an incomplete precisification, receive non-classical values. I will also provide a variety of arguments that are targeted at the idiosyncrasies of the different kinds of plurivaluationism. |
Fall 2019 |
|
December 13, 2019 | |
Presenter: Zach Wrublewski Title: "Reason and Being Rational" |
Abstract: In his most recent article, "Normativity versus Rationality," John Broome analyzes some of the most recent work on the connection between reasons and rationality. In the course of this analysis, Broome offers two things: the intuitive idea that the faculty of reason and the property of being rational are linked, and a formulation of this connection that I will call the "additive" conception of the link between the faculty of reason and the property of rationality. Broome formulates the connection as follows: "[...][S]atisfying requirements of rationality is sufficient for possessing the property of rationality: if you satisfy all the requirements of rationality you are fully rational. Moreover, the degree to which you satisfy requirements of rationality is the degree to which you have the property of rationality. These degrees are partially ordered. For example, if in one possible situation you satisfy all the requirements of rationality that you satisfy in another, and you satisfy at least one more, you are more rational in the first situation than in the second." My project has two major components: First, I'll agree that there does seem to be an intuitive link between reason and the property of being rational, but object to Broome's formulation of the link. Second, I'll outline a few reasons to think that a theory of rationality like Fogal's "pressure view," one which rejects rational requirements, better explains the connection between reason and the property of being rational. |
November 22, 2019 | |
Presenter: Jason Lemmon Title: "On Weakness of Will" |
Abstract: On the standard (contemporary) account of weakness of will, the phenomenon occurs when one acts, intentionally and freely, contrary to what one judges to be the overall better option. Recently, the standard account has been challenged. On the rival account, weakness of will actually occurs when one fails to act on one's prior intentions – when, that is, one loses the resolve to see one's intentions through. Richard Holton and Alison McIntyre are two of the main proponents of this novel account (and their work forms the basis for my defense of this view). Despite the new account's having garnered serious attention, the standard account has remained the dominant view. I examine why this is so — in particular, I examine (what I have found to be) two of the more prevalent arguments in the literature against the new account. I argue that the new account has the resources to handle these arguments and that, more generally, it constitutes a plausible alternative to the standard account. |
November 15, 2019 | |
Presenter: Ryan Turner Title: "Men First? — Asking for Directions to Egalitarian Gender" |
Abstract: Critiques of masculinity's implication in systems of oppression tend to offer hope in egalitarian redefinitions of masculinity. Gender abolitionism recommends, for similar reasons, that we eliminate gender categories altogether. Each view has difficulties. It is not obvious that we can replace masculinity with something resembling masculinity but better, even leaving aside essentialist claims that gender is an immutable characteristic. Properly substantive accounts of what features could distinguish such a progressive masculinity from toxic or hegemonic masculinity have been thin on the ground. The necessity and utility of the project for wider egalitarian aims are just tacitly assumed. Abolitionism, for its part, neglects two redeeming qualities of gendered identities. The first is the obvious fact that gender identities are for many people a cherished source of self-understanding, including many trans identities that have been subject to harmful — and, charitably, inadvertent — hostility from abolitionist critics of gender. The second is that identifying as a member of a subordinated group such as a gender, race, sexual orientation, and so forth has itself historically been, for countless people, an enabling precondition of resistance to oppression. Here I seek to reconcile these two views by defending abolitionism in a limited form. Under this "strategic abolitionism" I argue for two major claims. Surveying literature in masculinities studies, I first show that progressive revisions such as Black and gay masculinities are superfluous to resisting the respective oppressions to which they are responses. In each case, solidarity within the subordinated group as such is enough to understand the success or potential of resistance to its oppression. Masculinity qua masculinity can be divided through, leaving no significant remainder; progressive masculinities are an empty concept. Finally, I argue that the liberatory potential of gender abolitionism can be salvaged if we recognize an asymmetry in its practical demands of different, actually existing genders. |
October 25, 2019 | |
Presenter: Mark Selzer Title: "Reason-Implies-Can" |
Abstract: If one ought to do something, does it follow that one can do it? To answer yes is to affirm what is known as the ought-implies-can principle (OIC). Recently, opponents of OIC have raised strong counterarguments against the principle. Since we can plausibly construe what one ought to do as derived from what one has reason to do, one would expect an analogous principle to OIC, a reason-implies-can principle (RIC), to fall prey to the same objections. I shall argue that RIC evades the strong objections made against OIC and, in fact, provides the basis for a version of OIC that escapes the same objections. |
October 18, 2019 | |
Presenter: Zack Garrett Title: "Vagueness and Luminosity" |
Abstract: In Knowledge and its Limits, Timothy Williamson argues that being in a mental state does not entail that one is in a position to know that one is in that state; in Williamson's terminology, most mental states are not luminous. Some have claimed that Williamson's argument relies on the vagueness of our mental states or the vagueness of belief. These responses to Williamson fail for the very reasons that Williamson cites in Knowledge and its Limits. However, there are interesting points to be made about the connection between vagueness and luminosity. In this paper, I argue that if contextualists about vagueness are correct about the way that context shifts in sorites arguments, then Williamson's argument as it is written is unsound. Some changes to the argument are then sufficient to avoid this worry regardless of which theory of vagueness is correct. Finally, I argue that some theories of vagueness allow for some mental states to avoid this modified anti-luminosity argument. So, the generalizability of anti-luminosity is brought into question. |
October 4, 2019 | |
Presenter: Trevor Adams Title: "Does Factivity Imply Certainty?" |
Abstract: The paper I will be addressing is Moti Mizrahi's "You Can't Handle the Truth: Knowledge = Epistemic Certainty." The primary thesis of Mizrahi's paper is that if his argument succeeds, then "epistemologists who think that knowledge is factive are thereby also committed to the view that knowledge is epistemic certainty" (p. 225). The leading argument in the paper is the following: |
September 27, 2019 | |
Presenter: Adam Thompson Title: "On Keeping the Blame in Blame" |
Abstract: Moral judgments play an integral role in the active and reactive phenomena that animate practical life. For instance, as constituents of the active they feature in deliberation about what to do and thereby aid in the development of intentional action. On the reactive end, moral judgments structure responses to intentional action and its agential sources. However, many hold that moral judgment cannot function as blame proper without aid from an emotion like righteous anger, resentment, or indignation. One way of articulating that idea is to argue that moral judgment alone lacks the opprobrium characteristic of blame — that is, as some put it, moral judgment alone cannot keep the blame in blame. I explore three ways of making that point and reject each. |
September 20, 2019 | |
Presenter: Christopher Stratman Title: "In Defense of Inflationism" |
Abstract: The Phenomenal Intentionality Theory (PIT) claims that there are phenomenally intentional mental states and that all other forms of intentional mental states are either grounded in or in some way arise from these more basic phenomenal mental states. However, this view of intentionality faces an apparently obvious problem: it seems as though there are intentional mental states, such as standing beliefs and desires, that are not phenomenally conscious. Proponents of PIT must give some plausible explanation of how these nonconscious intentional mental states get their intentional content. In The Phenomenal Basis of Intentionality (2018), Angela Mendelovici considers and rejects several attempts to show that nonconscious intentional mental states such as beliefs and desires get their intentional content derivatively. Mendelovici then argues in support of eliminativism about genuinely intentional standing states (169). While it might be the case that a thinker can be in a mental state such that they have a disposition to have an occurrent belief or desire, according to Mendelovici, such states are not genuinely intentional (99).
|
February 13, 2019 | |
Presenter: Steve Byerly Steve is trying to flesh out a view he calls Knowledge as (Robust) True Belief. |
Abstract: Despite the popularity of either strengthening the justificatory condition, adding an additional condition, or modifying the parameters of justification, I will put forth 'Knowledge as (Robust) True Belief' (RTB), which is a theoretical account of knowledge.
|
February 6, 2019 | |
No Colloquium |
Philosophy Welcome Back |
Spring 2019 |
|
April 19, 2019 | |
Presenter: Genessa Eddy Title: "Conventionality of Math." |
Abstract: The numbers zero through nine are the basic symbols of our number system. You can make an infinite number of unique numbers just by combining these ten symbols in different ways. Therefore, a finite number of symbols can make up an infinite number of complex numbers in our number system.
|
April 12, 2019 | |
Presenter: Adam Thompson |
|
March 29, 2019 | |
Presenter: Jeffrey Schade Title: "The Simplicity of the One." |
Abstract: The paper argues that we should take seriously Neo-Platonist assumptions such as the ontological primacy of mindful consciousness over matter, and the "Principle of Simplicity", which states that that which is simplest is that which is most perfect. This principle is consistent in many ways with Proclan metaphysics; however, the Proclan Rule creates problems for the Principle of Simplicity. These problems might be resolved by a conception of causation as "grounding by subsumption", as well as by making a distinction between perfect and imperfect on the one hand, and better and worse on the other. The paper then argues that Proclan metaphysics is consistent with an immaterial or non-corporeal conception of matter upon which form is imposed by Intellect, or Consciousness. |
March 29, 2019 | |
Presenter: Kevin Patton |
Abstract: Georgi Gardiner has argued that all modal conditions on knowledge ultimately get 'swamped' -- that is, they fail to provide any additional value to what knowledge is beyond what is already contributed by mere true belief. She offers three brief arguments in support of this thesis. Despite the force of her arguments, I argue that her thesis is too broad. I motivate this by exploring a belief that is false, but almost true (i.e., the basis for the false belief very nearly produced a true belief). These kinds of beliefs, though false, seem to be quite valuable. Why? Because the process that formed them almost got us to the truth. But wait, Gardiner will exclaim, that just proves that those modal conditions, even ones that almost provide true beliefs, are only valuable insofar as we care about truth! While it is true that they may be instrumental to truth, I claim that they are still quite valuable. There are a number of ways to motivate this position. Here is one example.
|
March 1, 2019 | |
Presenter: Zach Wrublewski |
Abstract: Many philosophers are skeptical of the claim that the "ought" of rationality is normative in the sense that the requirements involved are necessarily accompanied by reasons to conform to them. Some believe that requirements of rationality are no more normative than the requirements of chess or the requirements of etiquette. Others, such as John Broome, accept that rationality is normative, but also hold that there are no good arguments to establish this conclusion. In The Normativity of Rationality, Benjamin Kiesewetter takes on the ambitious project of defending the normativity of rational requirements with an interesting, novel solution to the so-called "normativity problem." In short, Kiesewetter argues for a view that holds that reasons are evidence-relative facts, and that rational requirements are "non-structural" in the sense that they do not concern combinations of attitudes that an agent holds, but rather the reason(s) one has for (or against) holding particular attitudes. Crucially, Kiesewetter's project depends on the necessary link between the requirements of rationality and one's reasons, such that one always has a reason to do what rationality requires of her. To make this connection, Kiesewetter argues for a "backup view" of reasons.
|
February 22, 2019 | |
Presenter: Andrew Christmas
|
|
February 15, 2019 | |
Presenter: Zack Garrett Title: "Precisifications" |
Abstract: Semantic nihilism is the position that sentences containing vague components like "Winston is bald" are not truth-apt. The theory has been thought to be, at best, too revisionary, and, at worst, self-undermining. Since natural language is riddled with vagueness, only a small portion of sentences will count as truth-apt. Even the sentences used to express semantic nihilism will not be truth-apt since they contain vague words like "vague." These objections appear to be devastating to semantic nihilism, but David Braun and Theodore Sider, as well as John MacFarlane, have recently attempted to rehabilitate the theory.
|
February 8, 2019 | |
Presenter: Trevor Adams |
Abstract: In his paper "Logic for Equivocators," David Lewis gave an example of something he called "fragmentation". Lewis was attempting to describe what is going on when someone holds two contradictory beliefs simultaneously, and how this is possible. In that work, Lewis gave an example of how he himself once had contradictory beliefs, saying "I used to think that Nassau Street ran roughly east-west; that the railroad nearby ran roughly north-south; and that the two were roughly parallel" (p. 436). The problem for Lewis was that the different fragments of this triple would come into use and guide his behavior at different times, but that "the whole system of beliefs never manifested itself at once" (p. 436). But, "once the fragmentation was healed, straightaway my beliefs changed" (p. 436). This example has now become the classic example of a phenomenon called fragmentation. What I want to do in this paper is give some clarity to the fragmentation discussion and also defend the fragmentation thesis from objections. First, I will propose one interpretation of belief fragmentation. Next, I will state and explain Aaron Norby's objections from his paper "Against Fragmentation" and respond by giving some evidence of fragmentation from cognitive science. Lastly, I will consider another objection to fragmentation that Norby offers and respond by showing how fragmentation is in fact a substantive thesis about belief. |
February 1, 2019 | |
|
The Graduate Student Research Colloquium will not be held. Instead, we will have the Spring 2019 Faculty and Graduate Student Colloquium. Joey Dante will present. |
January 25, 2019 | |
Presenter: Mark Selzer Title: "Importing Reasons from Other Worlds: the Latent Capacity Interpretation of the Explanatory Constraint." |
Abstract: This is a heavily revised version of a paper I presented last semester. I've modified it to provide a stronger intuitive appeal for my explanation of motivating reasons, and I've also added some features that protect my view against several objections. The revisions required me to develop my account in three important directions. The view is now committed (or further committed) to moral rationalism and diachronicity and globalism about reasons. For those who are interested, below is a copy of my abstract from last time. In his influential article, "Internal and External Reasons" (1979), Bernard Williams argues for the Explanatory Constraint: EC: The fact that p is a normative reason for A to Φ only if A can Φ because p. There is a problem with EC: if 'can' means that there is some possible world where A can Φ because p, then almost anything would count as a normative reason for A to Φ. Therefore, a plausible interpretation of EC must avoid such a 'bare possibility' interpretation of 'can'.
One candidate is the Actual Capacity interpretation: AC: The fact that p is a normative reason for A to Φ only if A has an actual present capacity to Φ because p. First, I argue that AC is an unsatisfactory interpretation of EC because it conflicts with the normative reasons that the akratic or the person with a poorly developed character has. Second, to address these shortcomings, I argue for the Latent Capacity interpretation of 'can' in EC: LC: The fact that p is a normative reason for A to Φ only if A has a latent capacity to Φ because p. LC is an account that is not trivialized by a bare possibility interpretation of EC—yet, contra AC, LC remains in harmony with the normative reasons the akratic or the person with a poorly developed character has. |
January 18, 2019 | |
Presenter: Adam Thompson Title: "On Balance and Teaching Philosophy" |
Abstract: As with many things that lend meaning, support, and significance, balance in nearly any context where it is called for is difficult to realize, let alone recognize or understand. This essay focuses on these difficulties as they pertain to the design and implementation of philosophy courses. It offers a strategy for finding the right distribution of content coverage and skill development. The strategy begins with the observation that to develop an evaluative grasp of philosophical material is to understand a complex of, among other things, subtle distinctions, analyses, relations of support, and normative implicature, as well as interrogative statements and declarative ones. Further, it is to understand elements like those through a dialogical narrative wrapped in difficult prose.
|
Fall 2018 |
|
December 7, 2018 | |
Presenter: Chelsea Richardson Title: "Ancestry Without Race" |
Abstract: Given my past assessments of ancestry as it pertains to race -- that it naturalizes race, giving race an unjustified but science-like credence that shores race up as a heinous social juggernaut -- one might wonder if there's anything redeeming in ancestry at all. Additionally, one might wonder how to draw a distinction between ancestry as it pertains to race and ancestry as it pertains to anything else. In this essay I'll aim to develop substantive answers to some of these questions. Primarily, I'll use what I take to be a few meaningful categories of cases to develop a clear(er) distinction between ancestry as it pertains to race -- which I'll refer to as racial ancestry -- and ancestry as it pertains to something other than race -- which I'll call non-racial ancestry. The categories of cases I'll focus on are those of finding one's biological parents, drawing one's family tree, and testing one's DNA. I'm choosing to focus on these categories not because each falls neatly within the bounds of non-racial ancestry or racial ancestry, but because they help to illuminate the distinctions between racial and non-racial ancestry. I will also discuss the normative conclusions that can be drawn about both racial and non-racial ancestry. Racial ancestry is something we would do well to eliminate, but to eliminate non-racial ancestry might be to throw out the baby with the bathwater. |
November 16, 2018 | |
Presenter: Zach Wrublewski Title: "Subjunctives, Dispositions, and Rationality." |
Abstract: There is an ongoing debate about how rational requirements should be understood when those requirements involve a conditional. Wide-scope theorists believe that the "ought" of rationality ranges over the entirety of the conditional -- or, in other words, the ought has "wide scope." Their opponents, the narrow-scope theorists, believe that the "ought" of rationality ranges over just the consequent of such conditionals -- or, that it has "narrow scope." As might be expected, there are advantages and disadvantages associated with each of these views. The wide-scope theorists generally have to grapple with what I call "easy satisfaction problems," while narrow-scope theorists must contend with a slightly more varied set of problems that I will call "bootstrapping problems." In this presentation, I will argue for a view which I'm currently calling the "subjunctive dispositional view" (though this is subject to change if I find a flashier name for it). In general, I will argue that understanding the relevant conditionals as subjunctive conditionals rather than material conditionals leads to a view that can avoid both types of problems mentioned above. Then, I will show that my specific version of this view, which relies on agents' dispositions in determining the truth values for the relevant subjunctive conditionals, is intuitively plausible and has the advantages of both the wide-scope and narrow-scope views. |
November 9, 2018 | |
Presenter: Jason Lemmon Title: "Robust Moral Realism, Evidence, and the Evolutionary Debunking of Morality." |
Abstract: I will explore some of the central threads of the recent debate between evolutionary debunkers of morality and their opponents. One of the most pervasive responses to debunkers is that their main line of argument proves too much. The main debunking line, in short, is that since evolutionary forces have shaped our "moral faculties"/capacities to produce beliefs that were advantageous (rather than truth-tracking), it would be, probabilistically, basically miraculous if -- supposing, for argument's sake, that there really are mind-independent moral truths -- the aforementioned beliefs just happened to line up with the mind-independent moral truths. This would be a coincidence of monumental, and unacceptable, proportions; thus, we should accept that our moral capacities are (very, very likely) UNRELIABLE. Now, the response that the debunkers' main argument proves too much goes, roughly, as follows: The main debunking line can be applied to practically any mental capacity or faculty; e.g., swap out 'moral' with 'perceptual', and you get the result that our perceptual faculties are UNRELIABLE. Thus, the main debunking line leads to global skepticism. This response is given by Shafer-Landau, among many others. I argue that the response fails, on empirical grounds. The evolutionary accounts we have regarding the etiology of perception, or of our mathematical capacities, are of a quite different kind (perception) -- or non-existent (mathematics) -- compared with the account of the etiology of our moral capacities. For the response to work, it cannot be given from the armchair -- instead, one must show, as with our moral capacities, that it really is likely that the deliverances of capacity X are beliefs that are either likely unrelated to the possible truths, or are such that we must simply Withhold. |
November 2, 2018 | |
Presenter: Katerina Psaroudaki
|
Abstract: Do white people owe compensation to black people in the context of race-based affirmative action? I will discuss various arguments and show that the most tenable version of affirmative action, which best explains why white people owe compensation and why black people deserve compensation, has the following shape: white people would not have developed the skills they currently possess if they had not been benefited by their membership in a socially privileged group, and, analogously, black people would not have suffered their present competitive disadvantages if they had not been subjected to racial discrimination. I will then defend a hybrid interpretation of the "principle of fair play" according to which: a) white people have not knowingly and willingly accepted the benefits of racial injustice, b) the benefits conferred upon white people do not clearly outweigh the cost they have to pay, c) the distribution of the compensatory benefits is not fair, since the black people who have suffered the most from racial discrimination will not be the ones obtaining the affirmative opportunities, and d) the distribution of the compensatory burdens cannot be proven fair, since the best qualified white people who will be paying the price have not necessarily gained the most from racial injustice. Through a series of thought experiments, I will conclude that only a very weak compensatory duty can be established in the context of affirmative action. |
October 26, 2018 | |
Presenter: Mark Selzer Title: "Importing Reasons from Other Worlds (without Tariffs!): or the Latent Capacity Interpretation of the Explanatory Constraint." |
Abstract: In his influential article, "Internal and External Reasons" (1979), Bernard Williams argues for the Explanatory Constraint: EC: The fact that p is a normative reason for A to Φ only if A can Φ because p. There is a problem with EC: if 'can' means that there is some possible world where A can Φ because p, then almost anything would count as a normative reason for A to Φ. Therefore, a plausible interpretation of EC must avoid such a 'bare possibility' interpretation of 'can'.
One candidate is an actual-capacity interpretation: AC: The fact that p is a normative reason for A to Φ only if A has an actual present capacity to Φ because p. First, I argue that AC is an unsatisfactory interpretation of EC because it conflicts with the normative reasons that the akratic or the person with a poorly developed character has. Second, to address these shortcomings, I argue for the Latent Capacity interpretation of 'can' in EC: LC: The fact that p is a normative reason for A to Φ only if A has a latent capacity to Φ because p. LC is an account that is not trivialized by a bare possibility interpretation of EC -- yet, contra AC, LC remains in harmony with the normative reasons the akratic or the person with a poorly developed character has. This new and improved version of the talk features the latest and most advanced theoretical machinery, making it better, faster, and stronger than all previous models. It is undoubtedly the best iPhone yet. |
October 19, 2018 | |
Presenter: Andrew Spaid
|
Abstract: Some have recently argued that, given how different pleasure and pain are from one another, hedonism cannot lay claim to the theoretical advantages monistic theories of value are thought to have over pluralistic ones, such as explanatory adequacy and commensurability. Some have also suggested that this is a reason to prefer the desire satisfaction theory over hedonism (since the former has these theoretical advantages.) As I try to show, however, the argument reveals only that the extent to which hedonism has these advantages over standard pluralist views is smaller than previously thought, not that hedonism lacks the advantages altogether. I also argue that the desire satisfaction theory faces a similar challenge. |
October 12, 2018 | |
Presenter: Adam Thompson Title: "Challenging Hybrid Accounts of Race." |
Abstract: So-called hybrid accounts of race aim to offer a compromise between social constructivism about race and views that hold race to be biologically real. The idea is that race has a dual nature insofar as it is a socially constructed, biological reality. Of course the view that there are some genetic divisions in the human population corresponding to our racial categories that might interest scientists or aid in the identification of skeletal remains is not completely unreasonable. But the inferences that underwrite a move from that fact to the claim that race is biologically real are fallacious. To show this, I'll look at the three most prominent types of hybrid accounts to make the case that they cannot establish the dual nature of race. As it stands then, it appears that, ontologically speaking, race is at best merely a social construction. |
October 5, 2018 | |
|
The Graduate Student Research Colloquium will not be held. Instead, we will have the Fall 2018 Faculty and Graduate Student Colloquium. Christopher Stratman will present. |
September 28, 2018 | |
Presenter: Aaron Elliott Title: "Non-Naturalist Moral Perception: An Exploration" |
Abstract: Non-Naturalists have an epistemological problem. Their metaphysical commitments make them particularly susceptible to genealogical debunking challenges, which charge that there is something about the etiology of our normative beliefs that prevents them from being in good epistemic standing. I will explain what I take to be the strongest version of the challenge (as a Gettier challenge), and then explore some options for non-naturalists to get out of it. Many standard Gettier cases (e.g. the sheep in the field case) depend on a deviant explanation of the justified true belief -- the truth explains neither the belief nor its justification. This seems to be the non-naturalist's position with regard to moral beliefs, as they hold that normative facts don't causally explain natural facts, and facts about beliefs are natural facts. Third-factor explanations, where the belief and the fact that it is about are both explained by the same third fact, aren't viable either. Even if non-naturalists can have a natural fact explain our beliefs and the normative facts they're about without undermining their non-naturalist commitments (they can), the structure of explanation they give would have to resolve Gettier cases (it doesn't).
|
September 21, 2018 | |
Presenter: Joey Dante
|
Abstract: I will be attempting to provide an abductive argument for the thesis that all value(-systems) is created via a process of social interaction. I take it that this implies, at the least, that there are no necessarily existent values and further, that no moral facts hold necessarily (at least in so far as moral facts supervene on or are grounded in moral value.)
|
September 14, 2018 | |
Presenter: Adam Thompson Title: "On Keeping the Blame in Blame: Anti-Humeans Do What Humeans Cannot" |
Abstract: Recently, several accounts have emerged on which blame is neither confined to the emotional nor always affectless. Rather than focus on the emotional aspects of blame, these theories focus on its motivational features. Some construe blame along Humean lines insofar as they adopt the Humean idea that cognitive states cannot motivate absent aid from independent desire. Others allow that a moral judgment can motivate despite the fact that no independent desire lends a helping hand. A primary concern for those non-emotion-based accounts is that they take the blame out of blame. This essay looks at three different ways to understand that objection -- as taking emotion out, as taking implicit demands out, and as taking the deservingness out. I argue that all and only those accounts of blame's nature that appeal to the Humean idea fall to all three versions. Thus, only anti-Humean accounts of blame remain viable. |
September 7, 2018 | |
Presenter: Zack Garrett Title: "The Logic of States of Affairs" |
Abstract: In this chapter, I describe a logic built from states of affairs that resolves the sorites paradox when vagueness is treated as a metaphysical phenomenon and provides a general account of metaphysical vagueness. The logic takes states of affairs as its atomic elements. States of affairs are made up of objects and properties. A state of affairs can obtain or fail to obtain, depending on whether or not its object instantiates its property. However, it may indeterminately obtain if its object indeterminately instantiates its property.
|
August 31, 2018 | |
Presenter: Christopher Stratman Title: "Ectogenesis and the Moral Status of Abortion" |
Abstract: Ectogenesis involves the gestation of a fetus in an ex utero environment. While we tend to think of such technology as mere science fiction, in the future this will likely change. Indeed, given the plausibility of ectogenesis, a number of morally significant questions arise. One such question concerns the moral status of abortion. The aim of this paper is to show that ectogenesis, which makes it possible to perform an abortion without the destruction of the fetus, provides a good reason to believe that it is nearly always morally impermissible to kill the fetus. |
Spring 2018 |
|
April 27, 2018 | |
Presenter: Andrew Christmas
|
Abstract: I will discuss the role that a community's portrayal of history plays in framing the way that members of the community view the world. In particular, I will focus on the common assumption that views the history of humanity as one of social, cultural, moral, etc. progress and argue that this assumption is not well supported and is only plausible when viewed within a particular framework. |
April 6, 2018 | |
|
The Graduate Student Research Colloquium will not be held. Instead the Faculty / Graduate Student Colloquium will take place. Zachary Garrett will present. |
March 30, 2018 | |
Presenter: Mark Selzer
|
|
March 16, 2018 | |
Presenter: Andrew Spaid
|
|
March 9, 2018 | |
Presenter: Chelsea Richardson Title: "Ancestry in Visual Experience" |
Abstract: Both the folk and Critical Race theorists appeal to ancestry as something that is present in visual experience. If they are correct, then acquaintance, a relational concept popular in philosophy of mind, seems like the most likely relation by which one could come to have a visual experience of ancestry. Literature on acquaintance tells us that it is epistemically rewarding. Assuming this is correct, being in an acquaintance relation with some person should make us more likely to form true beliefs about their ancestry. However, this is problematic, since acquaintance often provides false beliefs about the location or visual appearance of someone's ancestors. So, either acquaintance is not virtuous (because it is likely to provide false beliefs in these cases), or we are wrong about the sense in which acquaintance makes us more likely to form true beliefs about ancestry. If we conceive of ancestry in a social way, as an understanding of the present social hierarchy, then acquaintance is epistemically rewarding because it tends to provide true beliefs about people's positions in this hierarchy. This account of ancestry preserves the virtues of acquaintance but requires revising our common understanding of what ancestry is. |
March 2, 2018 | |
Presenter: Aaron Elliott Title: "Grounding the Duty of Non-Maleficence: Why doctors should do-no-harm, and what this tells us about public policy." |
Abstract: The folk conception of physicians' duty to do no harm considers the Hippocratic Oath as its basis. Standard medical ethics textbooks do not address the grounds for the duty of non-maleficence (henceforth "the Duty"). Both are mistakes. First, I'll argue that, because the Hippocratic Oath is at best a promise, it is an inadequate basis for the Duty. Second, I'll support an alternative two-part account of the basis: the badness of harm and healthcare practitioners' special role in society together ground the Duty. Third, I'll show how this alternative account has wider implications for the morality of individual care choices and for the morality of certain public policy positions. Even if my proposed basis for the Duty is wrong, this shows that alternative proposals can have concrete normative implications, and so medical ethics education needs to include discussion of the bases for standard duties of medical ethics. |
February 23, 2018 | |
Presenter: Adam Thompson Title: "On Balance and Course-Design: A Balance-Primitivist Strategy for Squaring Content Coverage with Skill Building." |
Abstract: Balancing content coverage with philosophical skill/disposition building is a particularly pernicious course-design problem. By delineating a strategy for approaching the balance-challenge, as I'll call it, this essay aims to help philosophy teachers overcome it. One key to overcoming the challenge is to push against the orthodox view that treats the balance-challenge as subsidiary to adopting learning objectives and aligning them with educative assessments. Furthermore, the paper demonstrates how executing the strategy has the following three payoffs: (1) It helps us build rigor into the heart of the course; (2) It aids in the development of assessments that draw more on intrinsic motivators as opposed to extrinsic motivators; and (3) It straightforwardly connects grades to learning-objective mastery. For illustration, I focus on how I used the strategy to build my upper-level course on ethical theory. I follow up by generalizing the strategy. In particular, I show how to use the strategy to design an intro-level course on applied ethics and how to apply it to design a graduate course in philosophy. |
February 16, 2018 | |
Presenter: Lauren Sweetland Title: "Interpreting and Evaluating Legal Practices" |
Abstract: Hart argues that whether a rule is obeyed or accepted, broken or rejected, is determined by "good reasons" from an internal perspective (The Concept of Law 55). Ronald Dworkin objects that Hart's view of rules gives an internal participant's point of view, as well as the role of interpretation in legal theory, too little treatment. Hart replies that his secondary rules, especially rules of recognition, are the basis of reason from an internal perspective. As for the role of interpretation in legal theory, Hart insists that from an external observer's point of view, no moral judgment need be made with respect to some rule when describing that rule. I think that while Hart's secondary rules do indeed figure into reasons from an internal perspective, what is necessary for the legal theorist (external perspective) to describe those reasons is a value judgment. How much interpretation, if any, on the part of the legal theorist is sufficient for describing the practice of law, according to Hart's view? I argue that without interpretive and evaluative judgments about rules of recognition on the part of the legal theorist, the legal theorist could not understand reasons to regard rules in certain ways from an internal perspective. Hart's legal theorist utilizes value judgments in describing the internal perspective on reasons for rules more than he envisions. |
February 9, 2018 | |
Presenter: Shane George Title: "APR" |
Abstract: Traditional accounts of autonomy have assumed autonomy stems from a connection to an authentic self. However, in my dissertation I argue that not only is this position untenable, this understanding of the relationship is backwards. Authenticity is a property which is generated by autonomy which is itself not a simple relation but a recursive process. In this presentation I will explain the ab initio Problem and the Value Formation Problem which undermine traditional explanations of autonomy. I will then argue for a Coherentist Psychosocial account of the self which is necessary for the autonomous process, and explain how this account contributes to solving both the ab initio Problem and the Value Formation Problem. |
February 2, 2018 | |
Presenter: Joseph Dante
|
|
January 26, 2018 | |
Presenter: Adam Thompson Title: "On Balance and Course-Design: A Balance-Primitivist Strategy for Squaring Content Coverage with Skill Building" |
Abstract: Balancing content coverage with philosophical skill/disposition building is a particularly pernicious course-design problem. By delineating a strategy for approaching the balance-challenge, as I'll call it, this essay aims to help philosophy teachers overcome it. One key to overcoming the challenge is to push against the orthodox view that treats the balance-challenge as subsidiary to adopting learning objectives and aligning them with educative assessments. Furthermore, the paper demonstrates how executing the strategy has the following three payoffs: (1) It helps us build rigor into the heart of the course; (2) It aids in the development of assessments that draw more on intrinsic motivators as opposed to extrinsic motivators; and (3) It straightforwardly connects grades to learning-objective mastery. For illustration, I focus on how I used the strategy to build my upper-level course on ethical theory. I follow up by generalizing the strategy. In particular, I show how to use the strategy to design an intro-level course on applied ethics and how to apply it to design a graduate course in philosophy. |
January 12, 2018 | |
Presenter: Kevin Patton Title: "Safety and Swamping." |
Abstract: It is common for epistemologists to use the value problem as a kind of litmus test for a theory of knowledge. The value problem is, roughly, the problem philosophers run into when they attempt to explain how knowledge is more valuable than its components. If knowledge is not more valuable than its components, then our intuitions about its value are left unexplained. If knowledge is more valuable than its components, it has proven controversial how to articulate why. Either way, many feel that if a theory of knowledge cannot address this problem, then the theory is not worth considering. Georgi Gardiner has recently argued that any theory of knowledge which is even partially explicated in modal terms cannot, in principle, provide an answer to the value problem, and hence is not worth considering. Gardiner specifically targets Duncan Pritchard's modal condition, safety. Safety is, again roughly, a condition on knowledge which invokes possible worlds to assess whether the truth of a given belief was a matter of luck. Gardiner modifies the swamping problem and uses it to argue that safety adds no value to a true belief, and so safety cannot answer the value problem. She then claims that this kind of argument can be generalized and applied to any theory which uses a modal condition on knowledge. In this paper, I will demonstrate that the structure of Gardiner's argument relies on an assumption about what is epistemically valuable and what is not. Once made explicit, this assumption actually serves to undermine a great many more theories than Gardiner acknowledges. This assumption, however, is problematic. The core issue is the choice between epistemic value monism and epistemic value pluralism. Replacing monism with pluralism avoids all of the standard reasons to adopt the swamping problem. As such, Gardiner's argument against safety fails. |
Fall 2017 |
|
December 8, 2017 | |
|
The Graduate Student Research Colloquium will not be held this week due to the Faculty-Grad Colloquium. Andy Spaid will present. |
December 1, 2017 | |
Presenters: Joseph Dante, C. L. Richardson, and Adam Thompson.
|
Abstracts:
|
November 17, 2017 | |
Presenter: Christopher Stratman Title: "Fundamentality and Significance." |
Abstract: I believe that there are metaphysical and theoretical pressures to accept an austere ontology. One perplexing metaphysical pressure is discussed by Peter Unger in "I Do Not Exist" (1979), where he argues that any complex object or entity (i.e., anything that has parts) is vulnerable to a sorites paradox argument and, therefore, does not exist. Indeed, Unger argues that minds, thinkers, and their thoughts do not exist. And so, I do not exist. Of course, this intuitively seems disastrous. Recently, in their book Austere Realism: Contextual Semantics Meets Minimal Ontology (2008), Terence E. Horgan and Matjaz Potrc have made similar arguments. They agree with Unger's conclusion, but argue that one can still be a realist about minds, thinkers, and their thoughts if one adopts a distinction between a Direct Correspondence theory of truth (DC) and an Indirect Correspondence theory of truth (IC).
|
November 10, 2017 | |
Presenter: Alfred Tu Title: "Wielenberg on Egoism in the Nicomachean Ethics." |
Abstract: In the Nicomachean Ethics, Aristotle gives a full account of what a virtuous person is and what they would do in various situations. According to Aristotle, a virtuous person would choose a course of action that promotes eudaimonia. The problem is: would a virtuous person always promote his own eudaimonia? If we can give an interpretation of the Nicomachean Ethics on which a virtuous person would always maximize his own eudaimonia, then it seems that Aristotle's account would be a kind of egoism. In "Egoism and Eudaimonia-Maximization in the Nicomachean Ethics" (2004), Erik Wielenberg argues for his egoistic interpretation of the Nicomachean Ethics and against Richard Kraut's anti-egoistic interpretation. In this paper, I am going to argue that Wielenberg's interpretation would have some peculiar results in some situations and that Kraut's interpretation can handle these situations better. |
November 3, 2017 | |
Presenter: Jason Lemmon |
Abstract: Some have recently argued that belief/desire psychology is not fundamental to practical reason. This work is fairly limited in range, especially among philosophers (though less so among psychologists.) I will examine the main philosophical proposals and argue that they are quite implausible. The main thrust of these proposals is that results from empirical psychology, such as 'framing effects' and 'ongoing unrelated experiences,' affect our decisions in ways that purportedly cannot be explained by the belief/desire model. As an example of an ongoing unrelated experience, holding a teddy bear affects the way a subject judges other people in social settings; but the teddy bear, it is claimed, has no bearing on the subject's beliefs and desires. Opponents of the belief/desire model admit that proponents have responses to make here, but opponents go on to argue that all plausible responses boil down to either positing an extra, unnecessary mental state, or else admitting that a nice, warm teddy-bear-caused mood, say, influences our beliefs; we must reject the former by parsimony and reject the latter because it is purely ad hoc. In response, I will show that the latter is not only not ad hoc but that it is just what we would expect from a reasonable belief/desire model. |
October 27, 2017 | |
Presenter: Lauren Sweetland Title: "Descriptive Mental Files?" |
Abstract: What is the relationship between external objects and mental representation? How can we think of an object as one and the same even as it changes through time? Some people offer the notion of a mental file to explain how we track objects through changes in time and think of them as individual objects and not merely possessors of certain properties. A mental file is an acquaintance-relation based mental representation of some object. Mental files are typically construed as having essentially relational contents rather than descriptive contents. But are there some descriptive mental files? That is, can we think singular thoughts of an object via its relation to ourselves or via some description of that object? Or do we think of an object primarily via its (non-descriptive) modes of presentation? According to Recanati, singular thoughts are non-descriptive. According to Goodman, one can think a singular thought about an object not necessarily by accessing singular content, but by understanding the description conveyed by its name. Some, but not all, descriptive files are singular.
|
October 20, 2017 | |
Presenter: Samuel Hobbs Title: "Augustine and the non-existence of the past." |
Abstract: Augustine argues that the present depends upon non-being, since the present depends upon passing into the past and the past doesn't exist. Since, for Augustine, past, present, and future all depend upon non-being, they must exist only simultaneously with perception. On Augustine's theory of time, the past exists in present memories, the present exists in present perception, and the future exists in present expectation. This paper argues that if the past merely exists in memory, then Augustine must accept metaphysically absurd events. To avoid this result, Augustine has to accept that the past depends upon existence. Since the past depends upon existence, and the present depends upon the past, the present depends upon existence. This puts Augustine in a dilemma: either he must accept metaphysical absurdities, or else he has to reject his psychological view of time. |
October 13, 2017 | |
Presenter: Adam Thompson Title: "(Un)Marginalizing Interests: Correcting Profession-Wise, Unjust Treatment of Those Interested in Studying Teaching and Learning in Philosophy." |
Abstract: The search for truth is best carried out by those well-equipped to critically interrogate and evaluate propositions, their perceptions, their memory, the testimony of others, etc. This should give pride of place to those primarily interested in effectively facilitating the development of those capacities. However, as is well-known, professional philosophy by-and-large marginalizes those interested in the study of teaching and learning. This essay argues that that marginalization is unjust and offers suggestions for correcting this wrong. Further, since this essay argues that, in most settings in higher education, it is a mistake to value interest in the search for truth above interest in how best to develop students' critical faculties, it explores explanations for the fact that many intelligent, well-meaning people make the mistake. |
September 29, 2017 | |
Presenter: Chelsea Richardson Title: "Where is your family from originally?: A conceptual analysis of the role of ancestry in the philosophy of race." |
Abstract: Linda Martin Alcoff, Charles Mills, and Sally Haslanger each appeal to a notion of ancestry in their accounts of race. I will examine these appeals and argue that as a collective they face two key problems: the regress problem and the inference problem. The regress problem shows that the scope of ancestry as it is used for racial membership is ill-defined. Further, what can be inferred about racial membership on the basis of ancestry and its complex relationship with visible properties of the body is similarly ill-defined — the inference problem illuminates this. These two problems ultimately show that while ancestry plays a key role in our concept of race, both folk views and appeals by the philosophers I analyze do too little to converge on a clear account of what ancestry actually is. The way Alcoff, Mills, and Haslanger treat ancestry sometimes obscures our understanding of race and racial membership and potentially reinforces a folk view of ancestry (and, as it pertains, race) that, all of these authors agree, is morally dubious. In the end, I present a view of ancestry that seeks to avoid the key problems I identify. |
September 15, 2017 | |
Presenter: Zack Garrett
|
|
September 8, 2017 | |
Presenter: Joey Dante
|
Abstract: As we all may be aware, J. L. Mackie famously argues that there are no objective values. I want to investigate Mackie's interpretation of what objective values indeed ARE, and then consider and attempt to understand (one of) his arguments against such entities. Specifically, I want to see whether his arguments apply to Kantian objective values. As such, this talk is as much an interpretation of Kant as it is of Mackie (at least in so far as I understand them.)
|
September 1, 2017 | |
Presenter: Kevin Patton Title: "Safety and Skepticism." |
Abstract: Duncan Pritchard has advocated for a necessary condition on knowledge known as safety. Pritchard's definition of safety is an explicitly modal one on which a true belief is safe if and only if in all, or nearly all, close possible worlds to ours, the belief is also true and we believe it on the same basis. Since Pritchard's 2005 book, there has been a flurry of powerful criticisms, some of which have resulted in Pritchard modifying his general framework. One criticism that Pritchard has not responded to is raised by Dylan Dodd. Dodd argues for the following conditional: if safety, as Pritchard contends, explains the lottery intuition, then skepticism follows. In this paper I will argue that Dodd's formulation of the problem can be easily addressed by a safety theorist such as Pritchard. In so replying, however, a dilemma will result for Pritchard: either he must reject common-sense cases of knowledge, or he must reject his stated motivations for safety. |
Spring 2017 |
|
April 28, 2017 | |
Presenters: Zachary Garrett, Andrew Christmas, and Adam Thompson |
Zachary Garrett, "Semantic Nihilism and Supervaluationism"
ABSTRACT: In recent years, David Braun and Ted Sider, as well as John MacFarlane, have attempted to revitalize semantic nihilism, a theory of vagueness that rejects the truth-evaluability of sentences containing vague words. Both views make use of things resembling supervaluationism's admissible precisifications, but they reject the identification of truth with supertruth. In this paper, I argue that Braun and Sider and MacFarlane have not given a sufficient reason for avoiding the identification of truth with supertruth. Supervaluationism fits the story for how vague communication works just as well as these new forms of nihilism, but with the added bonus that it can account for our everyday intuitions about truth. I begin by objecting to arguments for semantic nihilism and then respond to the objections Braun, Sider, and MacFarlane level at supervaluationism.
Andrew Christmas, "Kant's Theory of the Good and the Justification for Agent-centered Constraints"
ABSTRACT: David Cummiskey argues that Kant's ethical theory is normatively consequentialist. Cummiskey focuses most of his effort on showing that the second formulation of the categorical imperative could allow for the sacrifice of some rational beings if it meant promoting the existence of more rational beings, and that Kant provides no justification for agent-centered constraints on our actions. I argue that Cummiskey's argument presupposes an agent-neutral theory of the good that fails to provide an adequate account of Kant's ethical system. I also argue that Kant's conception of the good does provide justification for agent-centered constraints and does not allow for the sacrifice of innocent people even if that sacrifice would save more lives.
Adam Thompson, "Against GTA Restraint: Why GTAs Should Practice Learner-Centered Pedagogy (and How to Do So)"
ABSTRACT: Graduate student teaching assistants (GTAs) typically begin their assistantships playing a supporting role in a course prepared by someone else. It is typical for GTAs to believe that they can only use a very limited subset of the full range of teaching strategies. Call that belief the GTA Restraint Belief. Though this belief is often supported through (explicit or implicit) advocacy by the discipline and professionals in it, the GTA Restraint Belief stands as a major obstacle to student learning and GTA pedagogical growth. For one, it is used by many in academia as an excuse for GTAs to essentially ignore the literature on effective pedagogy. For another, the GTA Restraint Belief is leaned on as a reason for GTAs to forgo trainings focused on improving their pedagogy. Thus, typically, the students served by GTAs are unacceptably underserved, as are the GTAs with respect to their professional development. On those grounds and others, this essay (a) argues that the GTA Restraint Belief should be rejected and (b) shows how GTAs can discharge the obligation to reject that belief and permissibly practice effective pedagogical strategies. |
April 21, 2017 | |
Presenter: Aaron Elliott Title: "What Naturalism Is, What Non-Naturalism Isn't" |
Abstract: Non-Naturalists need to be able to explain exactly what their view is. While perhaps an obvious requirement, the need is made salient in two ways. The first is objections from non-reductive naturalists, like Sturgeon, who challenge non-naturalists to say what it is that excludes the normative from the broader class of the natural. Since, he says, we have good reason to think normative properties are causally efficacious, we have good reason to include them along with physical, biological, and chemical properties. Until we are given an explanation for what excludes them from this group, we have a presumption of naturalism.
|
April 14, 2017 | |
Presenter: Joseph Dante Title: "Panpsychism?" |
Abstract: I will be 'arguing' that if all entities are fundamental then panpsychism is plausible.
|
April 07, 2017 | |
|
The Graduate Student Research Colloquium will not be held this week due to the Faculty / Graduate Student Colloquium. Professor Jennifer McKitrick will present. Her title is "Whites, Women, and Witches: Analogies and Disanalogies among Social Kinds". |
March 31, 2017 | |
Presenter: Kiki Yuan Title: "Perception and Perceptual Inference" |
Abstract: Is perception a bottom-up or a top-down process? The top-down account seems the more plausible theory. If so, what is the mechanism of the top-down approach? Psychologists and philosophers have offered many models of the top-down mechanism of perception. Gregory's "Charlie Chaplin" optical illusion case offers a good demonstration of perception as a cognitive mechanism. Similarly, Helmholtz suggests that perception is mediated by unconscious inference and that this inference is of the same kind as ordinary reasoning and scientific inference. With the development of modern psychology, scholars such as Rock and Fodor have offered models of perception that separate perceptual inference from the cognitive inference associated with reasoning or knowledge. Drawing on modern psychological research, I will argue that the separation of these two kinds of inference is the more plausible view. In addition, I will discuss the role of perceptual inference in moral perception and how it distinguishes moral perception from other moral cognitive activities, such as moral reasoning or moral knowledge. |
March 17, 2017 | |
Presenter: C. L. Richardson Title: "Aliefs and Singularity" |
Abstract: Non-doxastic attitudes are a subject of little concern in most of the singular thought literature. However, it's becoming clear to theorists who work in this area that specific accounts of singular thought for a variety of non-doxastic attitudes are crucial for motivating the view that we can have singular thoughts at all. One interesting candidate for such an account is Tamar Gendler's notion of alief. Aliefs are non-doxastic, sometimes-propositional mental states that appear to go a long way in explaining social biases and marginalizing treatment of certain groups of people. In an influential analysis, Robin Jeshion claims that providing an account of de re belief (i.e. singular thought) compels us to answer the questions of what it is to believe, and what the conditions are on believing a singular proposition. [1] Insofar as providing an account of de re alief requires one to address similar questions, I'll aim to meet Jeshion's challenge. This paper will be concerned with two major questions with respect to the concepts of alief and singular thought: 1) Can aliefs refer singularly to their objects? 2) Can aliefs be explained in terms of mental files? I'll argue that aliefs do refer singularly and that they can be explained in terms of mental files. I'll provide an account of file dynamics for alief-type mental files. Additionally, I'll explain how my account serves to inform and potentially motivate more general views of singular thought and implicit bias. [1] Robin Jeshion, New Essays on Singular Thought (Oxford: Oxford University Press, 2010), 54. |
March 10, 2017 | |
Presenter: Mark Diep Title: "The Implausibility of Bratman's No-Regret Condition" |
Abstract: In his 1999 book, Michael Bratman argues that his No-Regret condition solves problems of rationality that he argues sophistication theory and resolution theory can't. He argues that the main problem for both theories is that they fail to account for the fact that we are temporal and causal rational agents. His No-Regret condition, he claims, solves the problems by accounting for these features of rational agents. In this paper, I argue that Bratman's No-Regret condition is not a plausible condition of rational agency. I shall show that there are cases where the No-Regret condition offers recommendations that we normally find difficult to follow. The primary problem, I shall argue, is the regret feature of the No-Regret condition. There are times when an agent is faced with a future regret that does not factor into the agent's decision on whether or not to follow through with prior plans. In the cases that I will discuss, if there are future regrets that don't align with the agent's current preferences, she is still rational to act contrary to those future regrets that the No-Regret condition recommends she take seriously. Since the No-Regret condition always recommends that an agent act on the basis of the preferences of these future selves in these cases, the No-Regret condition is implausible.
|
March 3, 2017 | |
Presenter: Alfred Tu Title: "Modal Skepticism and Similarity" |
Abstract: In Modal Epistemology, Peter Van Inwagen argues that anyone who accepts Yablo's theory should become a modal skeptic. Modal Skepticism, in Van Inwagen's sense, is a conservative epistemological position on which we can have "basic" modal knowledge, such as "It is possible that Lincoln has an earthquake" or "It is possible that I will have pasta for dinner tonight," but we cannot have "remote" modal knowledge, such as "It is possible that transparent iron exists" or "It is possible that purple cows exist." Modal skepticism has one obvious theoretical advantage: we can keep most of our ordinary, commonsensical modal knowledge while refuting some puzzling philosophical arguments that build on remote possibilities (Van Inwagen 1998, Hawke 2016). Van Inwagen's modal skepticism was later defended and developed by Peter Hawke (Hawke 2010, 2016). Nevertheless, I think the crucial point is that modal skeptics must give us some principles that adequately differentiate remote possibilities from basic or uncontroversial possibilities, and then deny that we can have knowledge of the remote ones. I am going to argue that Hawke's work does not adequately differentiate remote possibilities from ordinary possibilities. |
February 24, 2017 | |
|
The Graduate Student Research Colloquium will not be held this week due to the Speakers' Series. The guest speaker is Lucy Allais.
February 17, 2017 | |
Presenter: Lauren Sweetland Title: "Cooperation: From Joint Intention to Evolutionary Explanation." |
Abstract: How can the notion of joint or we-mode intentions be incorporated into theories of social science? In particular, how can joint intentions fit with Tooby and Cosmides' Integrated Causal Model, according to which social behavior is a product of evolved information-processing systems? How can joint intentions fit with Henrich and Henrich's Dual Inheritance Theory? According to this view, human biological/psychological adaptations produce prosocial cultural behavior.
|
February 10, 2017 | |
Presenter: Zach Wrublewski Title: "Conceivability, Abduction, and Modal Knowledge." |
Abstract: In his paper "Is Conceivability a Guide to Possibility?" Stephen Yablo analyzes several existing conceptions of conceivability before ultimately offering a positive account of conceivability that he believes better gives us prima facie knowledge of possibility. In this paper, I will argue that while Yablo might be successful in ruling out the relevant conceptions of conceivability as methods for reliably gaining modal knowledge, he is unsuccessful in offering his positive account of such a method. To show that this is the case, I will offer an objection to Yablo's account of conceivability that centers on a problem with the completeness of conceived worlds that plagues his account. Furthermore, I will argue that one of the conceptions of conceivability that Yablo rules out can still be useful if applied in the generative step of a two-step theory of abductive inference about modal knowledge, while his preferred positive account of conceivability cannot be used in such a way. |
February 3, 2017 | |
Presenter: Mark Albert Selzer Title: "The Latent Capacity Interpretation of the Explanatory Constraint." |
Abstract: In his influential article, "Internal and External Reasons" (1979), Bernard Williams argues for the Explanatory Constraint: EC: The fact that p is a normative reason for A to Φ only if A can Φ because p.
|
January 20, 2017 | |
Presenter: Christopher Stratman Title: "Heilian Truthmaking." |
Abstract: John Heil's account of truthmaking, what I call "Heilian Truthmaking" (HTM), fails to avoid several significant objections. This claim depends on a controversial interpretation of Heil's view of truthmaking, one that interprets it as a primitive concept. I will consider whether or not it is fair to interpret HTM as primitive and the consequences that follow from such an interpretation. It will be shown that the proponent of HTM faces an important dilemma, which is stated below:
The structure of the paper will be divided into two parts: In part one I will consider Heil's general approach to ontology and his appeal to truthmaking. In order to get a better grip on why one might interpret HTM as a primitive concept I will consider the second horn of the dilemma first. It will be argued that the only way the second horn of the dilemma can be avoided is by interpreting HTM as a primitive concept. In part two I will consider the first horn, arguing that interpreting HTM as primitive undermines Heil's realist ontology because we will not be in a position to know what makes our sentences true. I will consider how Heil might respond to this objection prior to concluding. |
January 13, 2017 | |
Presenter: Joey Dante Title: "Controversial?!" |
Abstract: I will be discussing Sarah McGrath's "Moral Disagreement and Moral Expertise." McGrath argues that at least many of our moral beliefs do not amount to knowledge. McGrath argues that "CONTROVERSIAL beliefs do not amount to knowledge" (page 92), where "CONTROVERSIAL" is understood as follows: "Thus your belief that p is CONTROVERSIAL if and only if it is denied by another person of whom it is true that: you have no more reason to think that he or she is in error than you are." (page 91). She then argues that many of our moral beliefs are indeed CONTROVERSIAL. As such, many of our moral beliefs do not amount to knowledge.
|
Fall 2016 |
|
December 2, 2016 | |
Presenter: Andrew Spaid Title: "Desire Theory: How to Measure Well-being." |
Abstract: Desire theorists about well-being accept the following view about how to measure a person's level of well-being: a person is well-off to the extent that they are getting what they want. There are two options for making this view more precise. According to the first, your level of well-being is represented as a ratio of satisfied desires over total desires. According to the second, your level of well-being is represented as an integer equal to the number of your satisfied desires minus the number of your frustrated desires. I believe more turns on which of these two options the desire theorist accepts than desire theorists have tended to notice. I explore some of the advantages and disadvantages of each option and argue that, for most desire theorists, the first option should appear the most attractive. |
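A minimal worked illustration of how the two measures defined in the abstract above can come apart (the numbers are my own, chosen for illustration, and are not drawn from the talk): suppose Life A contains 10 desires, 9 satisfied and 1 frustrated, while Life B contains 100 desires, 80 satisfied and 20 frustrated. On the ratio measure, A scores 9/10 = 0.9 and B scores 80/100 = 0.8, so A is better off. On the difference measure, A scores 9 - 1 = 8 and B scores 80 - 20 = 60, so B is better off. The two options can thus reverse the ranking of the same pair of lives, which is one way in which, as the abstract notes, more turns on the choice between them than desire theorists have tended to notice. |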
November 18, 2016 | |
|
Due to the Speaker's Series, the Graduate Student Research Colloquium is cancelled this week. The guest speaker is Karen Bennett; her topic is "Causing".
November 18, 2016 | |
|
Due to the Speaker's Series, the Graduate Student Research Colloquium is cancelled this week. The guest speaker is Neil Sinhababu; his topic is "Empathic Hedonists Escape Moral Twin Earth".
November 4, 2016 | |
Presenter: Christopher Stratman Title: "Why I Am Not A Color Realist." |
Abstract: I am not a color realist. I do not believe that colors exist independently of one's phenomenal experience of them; they are not a part of the correct ontology; they are not a part of the way the cosmos looks from the perspective of the ontology room; they are not in the book of the world; they are not fundamental. Our commonsense perception of the world around us errs when it tells us that ordinary objects are colored. I don't think that ordinary objects are colored, because I don't think that ordinary objects exist. In this paper I will consider the Sorites Paradox and several issues concerning material constitution in order to demonstrate that there is good reason to believe that mereological nihilism is true. The aim of this paper is to argue that color realism is false by showing that mereological nihilism is true. Consequently, mind-independent color properties that are possessed by ordinary objects don't exist. Hence, there is good reason to deny color realism. The astute reader will notice, however, that it is technically incorrect to say that "I" am not a color realist. The correct way to make this point would be to say that simples arranged "I-wise" are not a color realist. After presenting the main argument against color realism, I will consider similar challenges and argue that such worries are benign. |
October 21, 2016 | |
Presenter: Joseph Dante Title: "Contra Sound as Disturbances." |
Abstract: I will be offering various considerations that call into question Casey O'Callaghan's recent work on Sounds. O'Callaghan argues that sounds are disturbance events. I will argue that O'Callaghan fails to meet his own desiderata for any adequate theory of sounds, and, further, that his arguments that sounds cannot occur in a vacuum fail. He wants his theory to be neutral with respect to the proper metaphysics of events, yet he also wants sounds to be causally powerful. I will argue that either his theory allows sounds to be epiphenomenal or else he must be committed to the thesis that every event has only one cause (namely, he would have to be committed to a controversial theory of event individuation). Either way, his own desiderata are not met. Further, he argues that because there are no audible qualities in a vacuum, there is no sound in a vacuum. I will point out flaws in this way of arguing. Namely, the premise "If there are no audible qualities in X then there are no sounds in X" will either commit O'Callaghan to saying that there are no sounds in places where his theory should allow them, or else leave room for sounds existing in vacuums. |
October 7, 2016 | |
|
Due to the Speaker's Series, the Graduate Student Research Colloquium is cancelled this week. The guest speaker is Alex Rosenberg; his topic is "The Program of Strong Scientism and its Challenges".
September 30, 2016 | |
Presenter: Zachary Garrett Title: "The Epistemic Theory of Vagueness." |
Abstract: Epistemicism is a theory of vagueness that makes two claims: (i) all vague predicates have sharp cutoff points and (ii) for any borderline case we cannot know whether the predicate applies or not. The primary proponents of epistemicism are Timothy Williamson and Roy Sorensen. Williamson, unlike Sorensen, attempts to give an account of how predicates get their sharp borders. He claims that our uses of predicates set the cutoff via some very complicated procedure. He does not spell out the procedure itself. The plausibility of our uses setting unique cutoff points for vague predicates has been questioned extensively. Even attempts to set the cutoffs via other means have come up short. As for Williamson's version of (ii), he explains our limited knowledge by appeal to a margin for error principle. Essentially, we lack knowledge in borderline cases because we could have easily been wrong. Sorensen thinks that Williamson is misguided in his efforts to give an explanation of the procedure that sets sharp borders. Instead, Sorensen argues that we are forced to accept the existence of sharp borders by virtue of the fact that the sorites argument is invalid. He claims that this is enough reason to accept sharp borders. Sorensen explains our ignorance as a result of a lack of truthmakers for propositions about borderline cases. Since the propositions are not linked to the world in any way, we cannot come to know them. I argue that Sorensen's explanation of our ignorance is worse than Williamson's. Since Williamson's explanation of how vague predicates get their borders is problematic, I consider a hybrid view that utilizes both Sorensen's optimism about the existence of cutoffs and Williamson's account of our ignorance. I finally argue that this account also fails because it gets the direction of explanation between logic and the phenomenon of vagueness wrong. |
September 23, 2016 | |
Presenter: Jason Lemmon Title: "Aristotle on Non-Contradiction: Logical, not Metaphysical" |
Abstract: In book IV, chapter 4 of the Metaphysics, Aristotle defends the principle of non-contradiction (PNC) -- namely, that it is impossible for something both to have some feature and to not have that feature, at the same time and in the same respect. Edward Halper, among others, denies that Aristotle is concerned to defend PNC. He complains that many philosophers have treated Met. IV 4 "as an island of logic in a sea of metaphysics." He argues that Aristotle's project in IV 4 isn't to defend PNC; rather, PNC is being used by Aristotle as an assumption in an argument for the conclusion that beings have essential definitions. Thus, says Halper, Met. IV 4 consists of metaphysics, not logic, and should be seen as continuous with Aristotle's main concerns in subsequent books of the Metaphysics. I argue that Met. IV 4 is indeed an island of logic in a sea of metaphysics. I offer four main arguments against the metaphysical reading (and the claim that PNC functions as a premise). One of my arguments, for example, includes the claim -- borrowing from Jonathan Lear, and ultimately Carroll -- that PNC, as a principle of inference (acceptance of Fx is all it takes to reject ~Fx), can no more constitute a premise in an argument than Modus Ponens can. |
September 16, 2016 | |
Presenter: Aaron Elliott Title: "Avoiding Bruteness Revenge." |
Abstract: In this paper I examine Tristram McPherson's presentation of the supervenience challenge for non-naturalism in metaethics, and focus on the issue of brute necessary connections. The Challenge is that non-naturalists must either: i. allow that the supervenience of the normative on the natural entails an unexplained necessary connection between distinct existences; ii. accept an explanation for the necessary connections that is committed to naturalism; or, iii. reject supervenience. I explain why McPherson concludes that any non-naturalist explanation for supervenience must rely on positing some further brute necessary connection, and therefore makes no progress towards discharging the explanatory burden. I then argue that McPherson's account conflates two kinds of bruteness, and show that in light of this distinction explanatory progress is possible. There are brute necessary connections and there are brute absences, and commitment to each is a cost to a view. Due to this distinction, I replace Hume's Dictum with a principle against positing brute impossibility, because this better captures the concern over both kinds of bruteness. We can decrease the amount of bruteness a non-naturalist is committed to by eliminating brute connections without positing further brute connections or additional brute absences. This reduces the cost of supervenience to non-naturalism, and makes explanatory progress, even if some commitment to brute absence remains. |
September 9, 2016 | |
Presenter: Kevin Patton Title: "The Psychology of Skepticism." |
Abstract: The thesis of this presentation will be mostly fragmented. The idea behind this presentation will be to frame the first (eventual) chapter of my (eventual) dissertation. The goal/project for the first chapter will be to recast the challenge of epistemic skepticism in positive, rather than negative terms. Nearly every author I have surveyed addresses epistemic skepticism as a challenge that needs answering. This has the effect of producing combative attitudes toward both skepticism and the core issues the skeptic is focused on. I want to view skepticism as a welcomed friend, not as a foe to be vanquished. The skeptic is someone who desires certainty and any theory of knowledge which does not produce such certainty is faulty - or so says the skeptic. This reframing of the debate in positive terms will help me address some naturalized epistemologists who have attempted to refute the skeptic (probably chapter 2 of the dissertation?) |
September 2, 2016 | |
Presenter: Adam Thompson Title: "How Anti-Humeans Keep the Blame in Blame (and Why Humeans Cannot)." |
Abstract: Recently, several accounts have emerged on which blame is neither confined to the emotional nor always affectless. These accounts seem antithetical to popular Reactive Attitudes Accounts on which blame is constitutively tied to certain emotions. This essay shows that only a subset of those emerging accounts run contrary to Reactive Attitudes Accounts. It primarily argues that those accounts of blame's nature that appeal to the Humean idea that cognitive states cannot motivate absent aid from independent desire should be rejected. Thus, if we reject Reactive Attitudes Accounts of blame, we should adopt an anti-Humean construal of blame. The key idea is that only anti-Humean accounts capture the essence of the reactive emotions. Hence, only anti-Humean accounts keep intact the aspects of the reactive attitudes that render them paradigmatic of blame. In other words, it shows how anti-Humeans keep the blame in blame and explains why Humeans cannot. |
Spring 2016 |
|
April 29, 2016 | |
Presenter: Christopher Richards Title: "Grounding and Univocity." |
Abstract: In this paper, I argue against what I call the Koslicki-Wilson objection to grounding. |
April 15, 2016 | |
Presenter: Joey Dante
|
|
April 8, 2016 | |
Presenter: Alfred Tu Title: "Grounding and Primitiveness." |
Abstract: Grounding has become a central topic of metaphysics in recent years. According to Daly, various theories of grounding share some general features: grounding is intelligible, grounding is primitive, and grounding is useful. Some grounding skeptics argue against grounding theories, claiming that these general features do not in fact apply to grounding. This paper has two parts. The first part presents my concerns about the grounding skeptics' strategies and explains why some grounding theorists claim grounding is primitive. In the second part, I argue that Jonathan Schaffer's novel contrastive account of grounding is not well integrated with his general claims about the grounding relation. It seems implausible to accept contrastivity -- an unorthodox and counter-intuitive formal property -- as a formal property of a primitive notion. Therefore, either Schaffer cannot take grounding as a primitive notion or grounding does not have contrastivity. |
April 1, 2016 | |
Presenter: Lauren Sweetland Title: "A Question of Justice for Indigenous People and Environment." |
Abstract: Principles of climate mitigation in environmental ethics often draw on either considerations of fairness and forward-looking concerns, or on justice and backward-looking concerns. That is, according to some theorists, considerations of the current distribution of climate benefits and burdens are foremost, while others take repairing historic wrongs as paramount. Some theorists integrate considerations of fairness and justice to formulate hybrid climate principles. Such an integrative approach is promising particularly in the context of environmental harm to indigenous subsistence peoples, who are among those suffering the most from climate change. I argue that existing integrative climate principles tend not to sufficiently emphasize considerations of backward-looking justice. This is a problem for indigenous peoples seeking reparations for environmental harm and violations of their human rights, according to Rebecca Tsosie. I argue that the current climate situation facing some Native people is unfair according to Rawls' second principle of justice. In addition, the situation is unjust as indigenous people suffer from emissions by others and few attempts are made at reparations. Thus, Rawlsian fairness combined with reparative justice provides a befitting theoretical framework. I conclude that an acceptable climate principle will adequately integrate considerations of both fairness and justice, both forward-looking and backward-looking considerations. |
March 18, 2016 | |
Presenter: Zachary Garrett Title: "The Structure of Higher Order Vagueness." |
Abstract: Take a sample of native English speakers and a line of men with differing numbers of hairs on their heads. Arrange the men in order by the number of hairs they have. Start with the man with 0 hairs and end with a man with 150,000 hairs (50% more than average). Now, as you walk the English speakers along the line of men, ask them to determine whether or not the man they are currently looking at is bald. At the two extremes we should expect the participants to answer confidently, either bald or not. As we get closer to the middle, though, we expect that it will become more and more difficult for the participants to answer. There are some cases that are neither clearly bald nor clearly not bald. We call these borderline cases. Now, instead of asking whether some man x is bald, we ask whether or not it is clear that he is bald. Just as it is hard to tell when we move from bald to not bald, it is hard to tell when we move from clearly bald to borderline bald. The same thing can be repeated for the move from clearly clearly bald to borderline borderline bald. This procedure has led many to believe that there is a hierarchy of higher order vagueness. |
March 11, 2016 | |
Presenters: Chris Gibilisco and Adam Thompson Title: "Quiddistic Individuation Without Tears" |
Abstract: In a recent paper, Deborah C. Smith notes that quidditism comes in two varieties, I-quidditism and R-quidditism. I-quidditism is a view about how properties are individuated, while R-quidditism is a view about how properties may be recombined. In this paper we present and defend a novel version of I-quidditism that dodges the traditional problems with other versions of I-quidditism on the market. Further, our view is extremely ecumenical: it is consistent with R-quidditism of all varieties, causal structuralism, immanent realism, and trope theory. |
March 4, 2016 | |
Presenter: Aaron Elliott Title: "Why Non-naturalists Need Something Else to Explain Supervenience" |
Abstract: Non-naturalists are supposed to explain why the normative supervenes on the natural. Since supervenience is a phenomenon of property distributions, an explanation of supervenience requires an explanation of the distribution of normative properties. I provide a taxonomy of frameworks for such an explanation, in terms of the elements the explanation appeals to. I then rule out options that employ only natural elements, on the grounds that they lose the non-naturalist commitment, and options that fail to employ natural elements, on the grounds that they cannot explain supervenience. This leaves a core set of non-naturalist frameworks. I argue that the framework that employs only features of normative properties and features of natural properties fails, leaving only frameworks that employ, in addition, some element external to both normative properties and natural properties. I show this by considering two prima facie plausible models, identifying why they fail, and explaining why these reasons generalize to other models in this framework. |
February 26, 2016 | |
Presenter: Shane George
|
Abstract: Adaptive preferences arise when one changes one's goals based on events in one's life or on realizations of limitations (unexpected or otherwise). These adaptations potentially pose a problem for accounts of autonomy. While my inability to be a Jedi is likely an innocuous feature that urges me to change my life-goal preferences, my being enslaved is not. Structuralist accounts of autonomy argue that the key to distinguishing innocuous from problematic instances of adaptation lies in the relational properties and contexts that cause these adaptations. Such accounts cannot, then, be value-neutral, since they must evaluate those contexts. Proceduralists disagree and present value-neutral accounts of autonomy that center the issue squarely on the processes one undergoes to maintain or achieve autonomy. Christman argues that, due to asymmetries in the social context of selected cases, structuralist solutions do not provide the correct answer to these problems. I will argue that Christman's account also fails, and that perhaps a stronger structuralist-inspired response is required. |
February 12, 2016 | |
Presenter: Christopher Stratman Title: "A Critique of Moore's Anti-Skeptical Argument in 'Proof of an External World.'" |
Abstract: In this paper I will argue that a philosophically important distinction needs to be made between skeptical arguments that involve our everyday empirical knowledge of ordinary objects and those that involve ontological explanations of such knowledge. Once this distinction is established, I argue that the proponent of G. E. Moore's anti-skeptical argument, as it is presented in "Proof of an External World", faces the following dilemma: she can either (i) accept that the argument misses its intended target of ontological skepticism, or (ii) accept that it does engage with that target. If she chooses the first horn, then the argument fails to answer the skeptic's challenge insofar as the argument focuses on epistemic skepticism. If the second horn is chosen, then the conclusion goes far beyond the scope of its premises insofar as the conclusion is ontological in nature while the premises are epistemic in nature. In both cases, I argue, Moore's response to the skeptic's challenge is inadequate.
|
January 15, 2016 | |
Presenter: Adam Thompson Title: "Reasonable Fear Is Killing Justly and What You Should Do About It." |
Abstract: The primary justification offered for not indicting a police officer for murder in connection with their involvement in killing an individual while on duty is that the officer reasonably feared for their life. Recently, the failure to charge on-duty police officers who kill non-white individuals with murder has sparked nationwide and worldwide outrage. Much of that outrage targets and rejects the claim that it was reasonable for the officer to fear that they, a fellow officer, or an innocent bystander would likely suffer serious bodily harm. This paper argues that the problem is not the truth-value of a claim of reasonable fear. Rather, the problem is that so many are apt to fear non-white individuals and that so many others are disposed to accept as true the claim that the fear was reasonable. Here, I diagnose why, in the current U.S. climate, it makes sense (a) for so many to feel fear when encountering non-white individuals and (b) for so many to accept the reasonable-fear defense when cops kill. I go on to offer strategies for positive change. |