Graduate Student Research Colloquia

Fall 2019

November 22, 2019

Presenter: Jason Lemmon

Title: "On Weakness of Will"

Abstract:

On the standard (contemporary) account of weakness of will, the phenomenon occurs when one acts, intentionally and freely, contrary to what one judges to be the overall better option. Recently, the standard account has been challenged: on the rival view, weakness of will occurs when one fails to act on one's prior intentions, that is, when one loses the resolve to see those intentions through. Richard Holton and Alison McIntyre are two of the main proponents of this novel account (and their work forms the basis for my defense of this view). Despite the new account's having garnered serious attention, the standard account has remained the dominant view. I examine why this is so; in particular, I examine (what I have found to be) two of the more prevalent arguments in the literature against the new account. I argue that the new account has the resources to handle these arguments and that, more generally, it constitutes a plausible alternative to the standard account.

November 15, 2019

Presenter: Ryan Turner

Title: "Men First? — Asking for Directions to Egalitarian Gender"

Abstract:

Critiques of masculinity's implication in systems of oppression tend to offer hope in egalitarian redefinitions of masculinity. Gender abolitionism recommends, for similar reasons, that we eliminate gender categories altogether. Each view has difficulties. It is not obvious that we can replace masculinity with something resembling masculinity but better, even leaving aside essentialist claims that gender is an immutable characteristic. Properly substantive accounts of what features could distinguish such a progressive masculinity from toxic or hegemonic masculinity have been thin on the ground. The necessity and utility of the project for wider egalitarian aims are tacitly assumed. Abolitionism, for its part, neglects two redeeming qualities of gendered identities. The first is the obvious fact that gender identities are for many people a cherished source of self-understanding, including many trans identities that have been subject to harmful, and, charitably, inadvertent, hostility from abolitionist critics of gender. The second is that identifying as a member of a subordinated group such as a gender, race, sexual orientation, and so forth has itself historically been, for countless people, an enabling precondition of resistance to oppression. Here I seek to reconcile these two views by defending abolitionism in a limited form. Under this "strategic abolitionism" I argue for two major claims. Surveying the literature in masculinities studies, I first show that progressive revisions such as Black and gay masculinities are superfluous to resistance to the respective oppressions to which they respond. In each case, solidarity within the subordinated group as such is enough to understand the success or potential of resistance to its oppression. Masculinity qua masculinity can be divided through, leaving no significant remainder; progressive masculinities are an empty concept. Second, I argue that the liberatory potential of gender abolitionism can be salvaged if we recognize an asymmetry in its practical demands of different, actually existing genders.

October 25, 2019

Presenter: Mark Selzer

Title: "Reason-Implies-Can"

Abstract:

If one ought to do something, does it follow that one can do it? To answer yes is to affirm what is known as the ought-implies-can principle (OIC). Recently, opponents of OIC have raised strong counterarguments against the principle. Since we can plausibly construe what one ought to do as derived from what one has reason to do, one would expect a principle analogous to OIC, a reason-implies-can principle (RIC), to fall prey to the same objections. I shall argue that RIC evades the strong objections made against OIC and, in fact, provides the basis for a version of OIC that escapes those objections.

October 18, 2019

Presenter: Zack Garrett

Title: "Vagueness and Luminosity"

Abstract:

In Knowledge and its Limits, Timothy Williamson argues that being in a mental state does not entail that one is in a position to know that one is in that state; in Williamson's terminology, most mental states are not luminous. Some have claimed that Williamson's argument relies on the vagueness of our mental states or the vagueness of belief. These responses to Williamson fail for the very reasons that Williamson cites in Knowledge and its Limits. However, there are interesting points to be made about the connection between vagueness and luminosity. In this paper, I argue that if contextualists about vagueness are correct about the way that context shifts in sorites arguments, then Williamson's argument as it is written is unsound. Some changes to the argument are then sufficient to avoid this worry regardless of which theory of vagueness is correct. Finally, I argue that some theories of vagueness allow for some mental states to avoid this modified anti-luminosity argument. So, the generalizability of anti-luminosity is brought into question.

October 4, 2019

Presenter: Trevor Adams

Title: "Does Factivity Imply Certainty?"

Abstract:

The paper I will be addressing is Moti Mizrahi's "You Can't Handle the Truth: Knowledge = Epistemic Certainty". The primary thesis of Mizrahi's paper is that, if his argument succeeds, then "epistemologists who think that knowledge is factive are thereby also committed to the view that knowledge is epistemic certainty" (p. 225). The leading argument in the paper is the following:

1. If S knows that p on the grounds that e, then p cannot be false given e.
2. If p cannot be false given e, then e makes p epistemically certain.
3. Therefore, if S knows that p on the grounds that e, then e makes p epistemically certain. (p. 225)

My thesis is that Mizrahi's definition of factivity is too strong, and that only with that definition in play does his argument work. If we replace it with a more standard and less demanding definition, his conclusion doesn't follow. Thus, without an independent argument for why we should assume his analysis of factivity, we need not accept that knowledge is epistemic certainty. My argument is that the factivity of knowledge only requires that p be true; it doesn't require that it is necessary that p given e. Mizrahi's definition of factivity excludes fallibilism from the start, and thus a fallibilist need not accept it. Mizrahi takes his first premise to require no defense, since he says it is "simply a statement of the thesis that knowledge is factive, which contemporary epistemologists generally accept" (p. 226). This is the exact point I wish to dispute, and thus I will try to show that premise (1) as stated is too strong. If we replace premise (1) with a more moderate form of factivity, then we in fact get an argument that knowledge doesn't imply epistemic certainty:

a.) If S knows that p on the grounds that e, then p is true but not guaranteed given e.
b.) If p is true but not guaranteed given e, then e does not make p epistemically certain.
c.) Therefore, if S knows that p on the grounds that e, then e does not make p epistemically certain.

September 27, 2019

Presenter: Adam Thompson

Title: "On Keeping the Blame in Blame"

Abstract:

Moral judgments play an integral role in the active and reactive phenomena that animate practical life. For instance, as constituents of the active they feature in deliberation about what to do and thereby aid in the development of intentional action. On the reactive end, moral judgments structure responses to intentional action and its agential sources. However, many hold that moral judgment cannot function as blame proper without aid from an emotion like righteous anger, resentment, or indignation. One way of articulating that idea is to argue that moral judgment alone lacks the opprobrium characteristic of blame — that is, as some put it, moral judgment alone cannot keep the blame in blame. I explore three ways of making that point and reject each.

September 20, 2019

Presenter: Christopher Stratman

Title: "In Defense of Inflationism "

Abstract:

The Phenomenal Intentionality Theory (PIT) claims that there are phenomenally intentional mental states and that all other forms of intentional mental states are either grounded in or in some way arise from these more basic phenomenal mental states. However, this view of intentionality faces an apparently obvious problem: it seems as though there are intentional mental states, such as standing beliefs and desires, that are not phenomenally conscious. Proponents of PIT must give some plausible explanation of how these nonconscious intentional mental states get their intentional content. In The Phenomenal Basis of Intentionality (2018), Angela Mendelovici considers and rejects several attempts to show that nonconscious intentional mental states such as beliefs and desires get their intentional content derivatively. Mendelovici then argues in support of eliminativism about genuinely intentional standing states (169). While it might be the case that a thinker can be in a mental state such that they have a disposition to have an occurrent belief or desire, according to Mendelovici, such states are not genuinely intentional (99).

I argue that Mendelovici’s eliminativism about nonconscious intentional mental states should be rejected in favor of inflationism. On the view of inflationism proposed in this paper, some standing states, such as the disposition to be in a diachronic episode of thinking, can be both genuinely intentional and faintly phenomenally conscious. Moreover, I argue that there are good reasons to think that there are no occurrent phenomenally conscious beliefs, but only diachronic phenomenally conscious episodes of thinking (See Crane, 2013, “Unconscious Belief and Conscious Thought”). These considerations put into focus the primary purpose of this paper: to demonstrate that Mendelovici assumes that there is a genuine distinction between standing intentional mental states that are not phenomenally conscious, and occurrent intentional mental states that are phenomenally conscious. I argue that this distinction should be rejected, because, at least in the case of beliefs, there are no occurrent beliefs, and in the case of standing intentional mental states, the claim that these cannot be phenomenally conscious is false.

Mendelovici thinks that once the derivativist solutions have been ruled out, there are only two live options left: eliminativism and inflationism. And since, according to Mendelovici, "it is implausible to maintain that allegedly nonconscious intentional states are in fact phenomenally conscious", the only solution available to the proponent of PIT is to articulate a salient version of eliminativism (169). But why must we think that an inflationist solution is simply implausible?

The apparent implausibility of inflationism is really just an artifact of accepting that there is a genuine distinction between unconscious standing intentional mental states and phenomenally conscious occurrent intentional mental states. Mendelovici never provides philosophical considerations against the inflationist solution. But it will be shown that there are reasons to be skeptical of this distinction, and that the inflationist solution is not implausible. Therefore, proponents of PIT, such as Mendelovici, cannot simply reject the plausibility of the inflationist solution without any argument.

The paper is structured in the following way: In part one I offer several reasons to think that Mendelovici's eliminativist solution collapses into self-ascriptivism, which is supposed to be a default position and which, I argue, is consistent with the inflationist solution argued for in this paper. Part two argues for the claim that, at least in the case of beliefs, there are no phenomenally conscious occurrent beliefs, only phenomenally conscious diachronic episodes of thinking. This is because, given the plausible claim that a crucial (though not definitional) feature of beliefs is the role they are supposed to play in our folk-psychological explanations and predictions of behavior, a necessary condition for P to count as a belief is arguably that P must be able to persist beyond the initial moment at which P was acquired. If P fails to meet this condition, then P will not count as a belief. Since occurrent beliefs seem to fail this condition, occurrent beliefs do not count as genuine beliefs. Finally, I argue that dispositional mental states are both intentional and not necessarily unconscious. That is to say, it is not the case that there is nothing it is like to be in an intentional mental state of being disposed to experience a diachronic episode of thinking that such and such is the case. In conjunction with the claim that there are no occurrent beliefs, this is sufficient to show that the distinction between unconscious standing intentional mental states and phenomenally conscious occurrent intentional mental states should either be rejected or, at the very least, that one would need to provide a positive argument in support of the distinction.

In support of this claim, the following reasons will be offered: (i) I argue for a distinction between what I call "Acute Phenomenology" and "Shadow Phenomenology" in order to show that the inflationist solution is not implausible. An example of shadow phenomenology: when one looks at the front of an object such as a couch, right there in one's acute phenomenology of the front side of the couch there is also a kind of shadow phenomenology of the back side of the couch. While this is typically understood as a kind of perceptual phenomenology, I believe it is intrinsically tied to our cognitive ability to imagine ourselves walking around to the other side of the couch to observe it. As such, there is room to argue that the shadow phenomenology involved in such cases can be extended to cases of standing intentional mental states.

(ii) I then appeal to work being done by philosophers such as Carruthers and Tye to show that, at least in the case of perception, there is a kind of shadow phenomenology associated with dispositional states. If this is correct, then (iii) it would provide proponents of PIT the resources for a novel response to challenging cases that appear to involve mental states that are intentional but are not phenomenally conscious. That is, it will provide the basis for an inflationist solution to the problem of apparently unconscious standing states.

February 13, 2019

Presenter: Steve Byerly

Steve is trying to flesh out a view he calls Knowledge as (Robust) True Belief.

Abstract:

Despite the popularity of strengthening the justificatory condition, adding an additional condition, or modifying the parameters of justification, I will put forth 'Knowledge as (Robust) True Belief' (RTB), a theoretical account of knowledge.

I will reject what I'll call 'bare-propositional beliefs' (BPR): a BPR will be defined as a belief that is independent of justificatory provenance (I'm working on refining this definition; it needs work).
Robust belief def: A belief is a robust belief iff (i) the propositional statement/conclusion that is the locus of our potential knowledge is held together with the inferentially prior propositions, through which, and in virtue of which, the conclusion is held to be true, and (ii) it is held in conjunction with the implicit presuppositions necessary to make the given argument make sense (stability, lack of deception, etc.). RTB, then, is a holistic and interconnected doctrine. So, an article of 'knowledge' is in fact knowledge iff it is a soundly concluded argument, held together as a complex state of affairs, with all of its necessary presuppositions holding true.

Echoing the initial responses to Gettier cases, I hold that false premises in the inferential/dependent chain exclude a proposition from being the basis of a robust belief.

A distinction will be drawn between the pure theoretical account of knowledge and practical desiderata. The pure theoretical account does not offer a satisfactory way to guarantee that a given propositional statement is knowledge, but only an account of the meaning of a knower knowing something about a facet of reality. In this light, an article of propositional truth may be abstracted in the mind and represented symbolically through a proposition, BUT to know something in a robust way is to apprehend a facet of reality through the mind, held within an interconnected nexus of other facets. The whole gem, so to speak, is the intelligible landscape of reality. This view excludes 'feelings' of certainty, against the gambler; deception, against the barns; presuppositional errors, against Grabit's twin/delusions.

It is true that, practically, this is a very high bar to clear, and that we may not be able to grant our justificatory imprimatur to many of our alleged articles of propositional knowledge. Most of our so-called knowledge would be reduced to educated guesses. This view distinguishes what knowledge is per se, in general, from our practical ability to know which articles of belief are grasped, in particular (which I suspect most people are working on). A distinction will be maintained between 'what knowledge is' and 'how to know what you know, and whether you "really" know something' (though these are obviously related things). Since humans are clearly fallible, subject to both external and internal deception, only sound arguments devoid of deception, instability, and false presuppositions would qualify as knowledge. We enjoy speaking loosely of knowledge out of practical necessity, but it would be a mistake to confuse the weaker sense of knowledge (admixed with practical methods) with the robust account of knowledge.

This is a work in progress. As an auxiliary, for those who become frustrated with how 'impractical' this view is, I am considering the merits of different types of knowledge, their varying justificatory thresholds, and lightweight knowledge, which, I hope, will fit with RTB.

February 6, 2019

No Colloquium

Philosophy Welcome Back

Spring 2019

April 19, 2019

Presenter: Genessa Eddy

Title: "Conventionality of Math."

Abstract:

The numbers zero through nine are the basic symbols of our number system. You can make infinitely many unique numbers just by combining these ten symbols in different ways. Therefore, a finite number of symbols can make up an infinite number of complex numbers in our number system.

What if we make this finite number of basic symbols infinite? Let's replace the symbols "1" and "one" with a new symbol " ⚀ ", "2" and "two" with " ⚁ ", "3" and "three" with " ⚂ ", and so on to infinity. Thus, we would no longer have an infinite number of numbers made up of a finite number of symbols but instead an infinite number of numbers made up of an infinite number of symbols.

By doing this we are able to assign new symbols to numbers and can see that there must be a separation between numbers and their symbols. Does this separation show us anything about the existence of numbers or of mathematics itself? I claim that it does: what it shows us is that numbers, and other mathematical objects, may be metaphysical objects, but also that the symbols of mathematics are merely conventional objects that happen to track these metaphysical objects quite well. The proof of this is in our ability to separate numbers from their symbols, which we can assign arbitrarily.

April 12, 2019

Presenter: Adam Thompson

March 29, 2019

Presenter: Jeffrey Schade

Title: "The Simplicity of the One."

Abstract:

The paper argues that we should take seriously Neo-Platonist assumptions such as the ontological primacy of mindful consciousness over matter, and the "Principle of Simplicity", which states that that which is simplest is that which is most perfect. This principle is consistent in many ways with Proclan metaphysics; however, the Proclan Rule creates problems for the Principle of Simplicity. These problems might be resolved by a conception of causation as "grounding by subsumption", as well as by making a distinction between perfect and imperfect on the one hand, and better and worse on the other. The paper then argues that Proclan metaphysics is consistent with an immaterial or non-corporeal conception of matter upon which form is imposed by Intellect, or Consciousness.

March 29, 2019

Presenter: Kevin Patton

Abstract:

Georgi Gardiner has argued that all modal conditions on knowledge ultimately get 'swamped' -- that is, they fail to add any value to knowledge beyond what is already contributed by mere true belief. She offers three brief arguments in support of this thesis. Despite the force of her arguments, I argue that her thesis is too broad. I motivate this by exploring a belief that is false but almost true (i.e. the basis for the false belief came close to producing a true belief). These kinds of beliefs, though false, seem to be quite valuable. Why? Because the process that formed them almost got us to the truth. But wait, Gardiner will exclaim, that just proves that those modal conditions, even ones that almost provide true beliefs, are only valuable insofar as we care about truth! While it is true that they may be instrumental to truth, I claim that they are still quite valuable. There are a number of ways to motivate this position. Here is one example.

Motivation: let's consider a reliable drill used for a job. The safety/reliability of the drill leads to the building of a solid shelf. Now, it's certainly true that once I have the shelf, I no longer need the drill for that job. In Gardiner's terms, the value of the shelf swamps the safety/reliability of the drill. The next time that I need to build something, however, I'm going to pick up the safe/reliable drill again and again, anytime I need to build something. This tells me something important that the swamping problem gets wrong. Namely, that I do value safety/reliability even when it's instrumental. Why? Because if I used a Harbor Freight drill (usually an inferior product), my shelves would be less well built, would take longer to build, and would be more frustrating to construct. In other words, I value the errors that the safe/reliable drill avoids. Sure, this is a practical consideration. But certainly those are valuable when building shelves, or constructing beliefs! When my beliefs are formed by a safe belief-forming process, I will arrive at truth while avoiding numerous pitfalls. Does this mean that the value which a safe process provides is swamped? I think not. I want to be an epistemic agent that not only arrives at the truth, but does so efficiently. I am, after all, limited in the time that I have to be a belief-forming agent. A true belief's value cannot swamp this because efficiency is independent of truth and belief -- even if it's instrumental.

Or so I currently think.

March 1, 2019

Presenter: Zach Wrublewski

Abstract:

Many philosophers are skeptical of the claim that the "ought" of rationality is normative in the sense that the requirements involved are necessarily accompanied by reasons to conform to them. Some believe that requirements of rationality are no more normative than the requirements of chess or the requirements of etiquette. Others, such as John Broome, accept that rationality is normative, but also hold that there are no good arguments to establish this conclusion. In The Normativity of Rationality, Benjamin Kiesewetter takes on the ambitious project of defending the normativity of rational requirements with an interesting, novel solution to the so-called "normativity problem." In short, Kiesewetter argues for a view that holds that reasons are evidence-relative facts, and that rational requirements are "non-structural" in the sense that they do not concern combinations of attitudes that an agent holds, but rather the reason(s) one has for (or against) holding particular attitudes. Crucially, Kiesewetter's project depends on the necessary link between the requirements of rationality and one's reasons, such that one always has a reason to do what rationality requires of her. To make this connection, Kiesewetter argues for a "backup view" of reasons.

In this presentation, I will argue that Kiesewetter's backup view, as sketched, is problematic, and should be rejected. After sketching the basics of Kiesewetter's view, I will offer a counterexample to it. Also, I will consider potential ways in which Kiesewetter might amend his view to avoid the problems associated with my counterexample, and conclude by suggesting that Kiesewetter should either amend his view of evidence, or give up the view that reasons are evidence-relative facts.

February 22, 2019

Presenter: Andrew Christmas

February 15, 2019

Presenter: Zack Garrett

Title: "Precisifications"

Abstract:

Semantic nihilism is the position that sentences containing vague components like "Winston is bald" are not truth-apt. The theory has been thought to be, at best, too revisionary, and, at worst, self-undermining. Since natural language is riddled with vagueness, only a small portion of sentences will count as truth-apt. Even the sentences used to express semantic nihilism will not be truth-apt since they contain vague words like "vague." These objections appear to be devastating to semantic nihilism, but David Braun and Theodore Sider, as well as John MacFarlane, have recently attempted to rehabilitate the theory.

In this chapter, I argue that semantic nihilists (i) fail to avoid some serious intuitive costs, even if they can avoid some of the worst objections, and (ii) do not have sufficient reason for rejecting the truth-evaluability of sentences containing vague components.

February 8, 2019

Presenter: Trevor Adams

Abstract:

In his paper "Logic for Equivocators," David Lewis gives an example of something he calls "fragmentation". Lewis was attempting to describe what is going on when someone holds two contradictory beliefs simultaneously, and how this is possible. In that work, Lewis gave an example of how he himself once had contradictory beliefs, saying "I used to think that Nassau Street ran roughly east-west; that the railroad nearby ran roughly north-south; and that the two were roughly parallel" (p. 436). The problem for Lewis was that the different fragments of this triple would come into use and guide his behavior at different times, but "the whole system of beliefs never manifested itself at once" (p. 436). But, "once the fragmentation was healed, straightaway my beliefs changed" (p. 436). This example has now become the classic example of a phenomenon called fragmentation.

What I want to do in this paper is give some clarity to the fragmentation discussion and also defend the fragmentation thesis from objections. First, I will propose one interpretation of belief fragmentation. Next, I will state and explain Aaron Norby's objections from his paper "Against Fragmentation," and respond by giving some evidence of fragmentation from cognitive science. Lastly, I will consider another objection to fragmentation that Norby offers and respond by showing how fragmentation is in fact a substantive thesis about belief.

February 1, 2019

The Graduate Student Research Colloquium will not be held. Instead, we will have the Spring 2019 Faculty and Graduate Student Colloquium. Joey Dante will present.

January 25, 2019

Presenter: Mark Selzer

Title: "Importing Reasons from Other Worlds: the Latent Capacity Interpretation of the Explanatory Constraint."

Abstract:

This is a heavily revised version of a paper I presented last semester. I've modified it to provide a stronger intuitive appeal for my explanation of motivating reasons, and I've also added some features that protect my view against several objections. The revisions required me to develop my account in three important directions: the view is now committed (or further committed) to moral rationalism, to diachronicity, and to globalism about reasons. For those who are interested, below is a copy of my abstract from last time.

In his influential article, "Internal and External Reasons" (1979), Bernard Williams argues for the Explanatory Constraint:

EC: The fact that p is a normative reason for A to Φ only if A can Φ because p.

There is a problem with EC: if 'can' means that there is some possible world where A can Φ because p, then almost anything would count as a normative reason for A to Φ. Therefore, a plausible interpretation of EC must avoid such a 'bare possibility' interpretation of 'can'.
In "Internalism and Externalism about Reasons" (forthcoming), Hille Paakunainen argues for the Actual Capacity interpretation of EC:

AC: The fact that p is a normative reason for A to Φ only if A has an actual present capacity to Φ because p.

First, I argue that AC is an unsatisfactory interpretation of EC because it conflicts with the normative reasons that the akratic or the person with a poorly developed character has. Second, to address these shortcomings, I argue for the Latent Capacity interpretation of 'can' in EC:

LC: The fact that p is a normative reason for A to Φ only if A has a latent capacity to Φ because p.

LC is an account that is not trivialized by a bare possibility interpretation of EC—yet, contra AC, LC remains in harmony with the normative reasons the akratic or the person with a poorly developed character has.

January 18, 2019

Presenter: Adam Thompson

Title: "On Balance and Teaching Philosophy"

Abstract:

As with many things that lend meaning, support, and significance, balance in nearly any context where it is called for is difficult to realize, let alone recognize or understand. This essay focuses on these difficulties as they pertain to the design and implementation of philosophy courses. It offers a strategy for finding the right distribution of content coverage and skill development. The strategy begins with the observation that to develop an evaluative grasp of philosophical material is to understand a complex of, among other things, subtle distinctions, analyses, relations of support, and normative implicature, as well as interrogative statements and declarative ones. Further, it is to understand elements like those through a dialogical narrative wrapped in difficult prose.
Hence, first we must dissect this complex and helpfully arrange its elements. This essay organizes the elements intertwined in philosophical material along a multifaceted continuum. At one extreme is pure first-order content. There we find things like the most austere declarative and interrogative sentences. At the other is pure second-order content, which comprises forms of, say, inference, fine distinction, and evidential support. Striking the right balance involves focusing attention on the right blend of first-order and second-order content at the right time and for the right reasons. This essay shows that learner-centered, integrated pedagogy and other best pedagogical practices can secure a proper blend and application schedule.
An interesting upshot is that the appropriate blend for most, if not all, undergraduate courses and in some places early graduate courses is second-order heavy. This should immediately raise the worry that the majority of our undergraduate courses must focus exclusively on things like natural deduction, the vagaries of induction, or the multitude of informal fallacies.
To assuage this concern, I show how to design and implement a second-order heavy introductory course as well as a junior-level course in ethical theory. The essay uses these examples to help us understand how to develop higher level courses with second-order heavy content-blends. However, my primary aim is to advance our understanding of cultivating a pedagogically effective, well-balanced course. The essay concludes with a discussion of concerns for the strategy as well as the fringe benefits that result from using it.

Fall 2018

December 7, 2018

Presenter: Chelsea Richardson

Title: "Ancestry Without Race"

Abstract:

Given my past assessments of ancestry as it pertains to race -- that it naturalizes race, giving race an unjustified but science-like credence that shores race up as a heinous social juggernaut -- one might wonder if there's anything redeeming in ancestry at all. Additionally, one might wonder how to draw a distinction between ancestry as it pertains to race and ancestry as it pertains to anything else. In this essay I'll aim to develop substantive answers to some of these questions. Primarily, I'll use what I take to be a few meaningful categories of cases to develop a clear(er) distinction between ancestry as it pertains to race -- which I'll refer to as racial ancestry, and ancestry as it pertains to something other than race -- which I'll call non-racial ancestry. The categories of cases I'll focus on are those of finding one's biological parents, drawing one's family tree, and testing one's DNA. I'm choosing to focus on these categories not because each falls neatly within the bounds of non-racial ancestry or racial ancestry, but because they help to illuminate the distinctions between racial and non-racial ancestry. I will also discuss the normative conclusions that can be drawn about both racial and non-racial ancestry. Racial ancestry is something we would do well to eliminate, but to eliminate non-racial ancestry might be to throw out the baby with the bathwater.

November 16, 2018

Presenter: Zach Wrublewski

Title: "Subjunctives, Dispositions, and Rationality."

Abstract:

There is an ongoing debate about how rational requirements should be understood when those requirements involve a conditional. Wide-scope theorists believe that the "ought" of rationality ranges over the entirety of the conditional -- or, in other words, the ought has "wide scope." Their opponents, the narrow-scope theorists, believe that the "ought" of rationality ranges over just the consequent of such conditionals -- or, that it has "narrow scope." As might be expected, there are advantages and disadvantages associated with each of these views. The wide-scope theorists generally have to grapple with what I call "easy satisfaction problems," while narrow-scope theorists must contend with a slightly more varied set of problems that I will call "bootstrapping problems." In this presentation, I will argue for a view which I'm currently calling the "subjunctive dispositional view" (though this is subject to change if I find a flashier name for it). In general, I will argue that understanding the relevant conditionals as subjunctive conditionals rather than material conditionals leads to a view that can avoid both types of problems mentioned above. Then, I will show that my specific version of this view, which relies on agents' dispositions in determining the truth values for the relevant subjunctive conditionals, is intuitively plausible and has the advantages of both the wide-scope and narrow-scope views.

November 9, 2018

Presenter: Jason Lemmon

Title: "Robust Moral Realism, Evidence, and the Evolutionary Debunking of Morality."

Abstract:

I will explore some of the central threads of the recent debate between evolutionary debunkers of morality and their opponents. One of the most pervasive responses to debunkers is that their main line of argument proves too much. The main debunking line, in short, is that since evolutionary forces have shaped our "moral faculties"/capacities to produce beliefs that were advantageous (rather than truth-tracking), it would be, probabilistically, basically miraculous if -- supposing, for argument's sake, that there really are mind-independent moral truths -- the aforementioned beliefs just happened to line up with the mind-independent moral truths. This would be a coincidence of monumental, and unacceptable, proportions; thus, we should accept that our moral capacities are (very, very likely) UNRELIABLE. Now, the response that the debunkers' main argument proves too much goes, roughly, as follows: the main debunking line can be applied to practically any mental capacity or faculty; e.g. swap out 'moral' with 'perceptual', and you get the result that our perceptual faculties are UNRELIABLE. Thus, the main debunking line leads to global skepticism. This response is given by Shafer-Landau, among many others. I argue that the response fails, on empirical grounds. The evolutionary accounts we have regarding the etiology of perception, or of our mathematical capacities, are of a quite different kind (perception), or non-existent (mathematics), compared with the account of the etiology of our moral capacities. For the response to work, it cannot be given from the armchair -- instead, one must show, as with our moral capacities, that it really is likely that the deliverances of capacity X are beliefs that are either likely unrelated to the possible truths, or are such that we must simply withhold judgment.

November 2, 2018

Presenter: Katerina Psaroudaki

Abstract:

Do white people owe compensation to black people in the context of race-based affirmative action? I will discuss various arguments and show that the most tenable version of affirmative action, which best explains why white people owe compensation and why black people deserve compensation, is of the following shape: white people would not have developed the skills they currently possess if they had not been benefited by their membership in a socially privileged group, and, analogously, black people would not have suffered their present competitive disadvantages if they had not been subjected to racial discrimination. I will then defend a hybrid interpretation of the "principle of fair play" according to which: a) white people have not knowingly and willingly accepted the benefits of racial injustice, b) the benefits conferred upon white people do not clearly outweigh the cost they have to pay, c) the distribution of the compensatory benefits is not fair, since the black people who have suffered the most from racial discrimination will not be the ones obtaining the affirmative opportunities, and d) the distribution of the compensatory burdens cannot be proven fair, since the best qualified white people who will be paying the price have not necessarily gained the most from racial injustice. Through a series of thought experiments, I will conclude that only a very weak compensatory duty can be established in the context of affirmative action.

October 26, 2018

Presenter: Mark Selzer

Title: "Importing Reasons from Other Worlds (without Tariffs!): or the Latent Capacity Interpretation of the Explanatory Constraint."

Abstract:

In his influential article, "Internal and External Reasons" (1979), Bernard Williams argues for the Explanatory Constraint:

EC: The fact that p is a normative reason for A to Φ only if A can Φ because p.

There is a problem with EC: if 'can' means that there is some possible world where A can Φ because p, then almost anything would count as a normative reason for A to Φ. Therefore, a plausible interpretation of EC must avoid such a 'bare possibility' interpretation of 'can'.
In "Internalism and Externalism about Reasons" (forthcoming), Hille Paakunainen argues for the Actual Capacity interpretation of EC:

AC: The fact that p is a normative reason for A to Φ only if A has an actual present capacity to Φ because p.

First, I argue that AC is an unsatisfactory interpretation of EC because it conflicts with the normative reasons that the akratic or the person with a poorly developed character has. Second, to address these shortcomings, I argue for the Latent Capacity interpretation of 'can' in EC:

LC: The fact that p is a normative reason for A to Φ only if A has a latent capacity to Φ because p.

LC is an account that is not trivialized by a bare possibility interpretation of EC—yet, contra AC, LC remains in harmony with the normative reasons the akratic or the person with a poorly developed character has.

This new and improved version of the talk features the latest and most advanced theoretical machinery, making it better, faster, and stronger than all previous models. It is undoubtedly the best iPhone yet.

October 19, 2018

Presenter: Andrew Spaid

Abstract:

Some have recently argued that, given how different pleasure and pain are from one another, hedonism cannot lay claim to the theoretical advantages monistic theories of value are thought to have over pluralistic ones, such as explanatory adequacy and commensurability. Some have also suggested that this is a reason to prefer the desire satisfaction theory over hedonism (since the former has these theoretical advantages). As I try to show, however, the argument reveals only that the extent to which hedonism has these advantages over standard pluralist views is smaller than previously thought, not that hedonism lacks the advantages altogether. I also argue that the desire satisfaction theory faces a similar challenge.

October 12, 2018

Presenter: Adam Thompson

Title: "Challenging Hybrid Accounts of Race."

Abstract:

So-called hybrid accounts of race aim to offer a compromise between social constructivism about race and views that hold race to be biologically real. The idea is that race has a dual nature insofar as it is a socially constructed, biological reality. Of course the view that there are some genetic divisions in the human population corresponding to our racial categories that might interest scientists or aid in the identification of skeletal remains is not completely unreasonable. But the inferences that underwrite a move from that fact to the claim that race is biologically real are fallacious. To show this, I'll look at the three most prominent types of hybrid accounts to make the case that they cannot establish the dual nature of race. As it stands then, it appears that, ontologically speaking, race is at best merely a social construction.

October 5, 2018

The Graduate Student Research Colloquium will not be held. Instead, we will have the Fall 2018 Faculty and Graduate Student Colloquium. Christopher Stratman will present.

September 28, 2018

Presenter: Aaron Elliott

Title: "Non-Naturalist Moral Perception: An Exploration"

Abstract:

Non-Naturalists have an epistemological problem. Their metaphysical commitments make them particularly susceptible to genealogical debunking challenges, which charge that there is something about the etiology of our normative beliefs that prevents them from being in good epistemic standing. I will explain what I take to be the strongest version of the challenge (as a Gettier challenge), and then explore some options for non-naturalists to get out of it. Many standard Gettier cases (e.g. the sheep in the field case) depend on a deviant explanation of the justified true belief -- the truth explains neither the belief nor its justification. This seems to be the non-naturalist's position with regard to moral beliefs, as they hold that normative facts don't causally explain natural facts, and facts about beliefs are natural facts. Third-factor explanations, where the belief and the fact that it is about are both explained by the same third fact, aren't viable either. Even if non-naturalists can have a natural fact explain our beliefs and the normative facts they're about without undermining their non-naturalist commitments (they can), the structure of explanation they give would have to resolve Gettier cases (it doesn't).
My research will be on whether non-naturalists can offer an account where the normative facts explain our beliefs, but without violating causal inertness. Fortunately, there is philosophical precedent for non-causal perception. Cognitive penetration is the purported phenomenon of our natural kind concepts "penetrating" our perceptual content in an (often) epistemically acceptable way. A standard example is our ability to see the property 'pine tree' (or the fact that there is a pine tree), despite 'pine tree' not causally affecting our visual systems.
So, I will be exploring the viability of this avenue. There are several problems to resolve to make this work (and I hope that you'll point out more). The first is the problem of requiring our concepts to accurately refer, as would be required for cognitive penetration to do the work I want. The second is an increased worry about accommodating moral disagreement. The third is that cognitive penetration isn't always epistemically benign.

September 21, 2018

Presenter: Joey Dante

Abstract:

I will be attempting to provide an abductive argument for the thesis that all value(-systems) are created via a process of social interaction. I take it that this implies, at the least, that there are no necessarily existent values and, further, that no moral facts hold necessarily (at least insofar as moral facts supervene on or are grounded in moral value).
P1: Some value-systems (like dog-show systems) are such that they arise as a result of social interactions (i.e. these value systems and the VALUES THEMSELVES would NOT exist unless said interaction occurred), and any theory concerning these values must be one such that, according to that theory, these values were created as a result of social interaction.
P2: A theory that offers a unified or homogeneous account of the means of generation (possible creation conditions) of some kind of entity, K, is preferable to a theory that offers a radically dis-unified account of the possible creation conditions for K.
SUPPORT OF INTUITIVE PRINCIPLE BY ANALOGY: Suppose we find (biological/organic) life on some planet in the next galaxy over (Andromeda I think.) And suppose theory T offers an account as follows: (i) life on earth arose out of evolutionary processes and (ii) life on Andromeda arose by divine creation (or arose by some other means than evolutionary processes.) That theory would be LESS preferable, ceteris paribus, than a theory that gave the same explanation as to the creation of living things (i.e. said either that God created all life or that life results from evolutionary processes.)
Therefore, a theory that has it that all value-systems and the values thereof LITERALLY arise out of social interaction is preferable to one that has it that some values (moral values) are necessary existences and others (like etiquette-values) are contingent existences.
(A few hidden premises are hanging about but I hope we can see that the argument is more or less valid as stated.)

September 14, 2018

Presenter: Adam Thompson

Title: "On Keeping the Blame in Blame: Anti-Humeans Do What Humeans Cannot"

Abstract:

Recently, several accounts have emerged on which blame is neither confined to the emotional nor always affectless. Rather than focus on the emotional aspects of blame, these theories focus on its motivational features. Some construe blame along Humean lines insofar as they adopt the Humean idea that cognitive states cannot motivate absent aid from independent desire. Others allow that a moral judgment can motivate despite the fact that no independent desire lends a helping hand. A primary concern for those non-emotion-based accounts is that they take the blame out of blame. This essay looks at three different ways to understand that objection -- as taking emotion out, as taking implicit demands out, and as taking the deservingness out. I argue that all and only those accounts of blame's nature that appeal to the Humean idea fall to all three versions. Thus, only anti-Humean accounts of blame remain viable.

September 7, 2018

Presenter: Zack Garrett

Title: "The Logic of States of Affairs"

Abstract:

In this chapter, I describe a logic built from states of affairs that resolves the sorites paradox when vagueness is treated as a metaphysical phenomenon and provides a general account of metaphysical vagueness. The logic takes states of affairs as its atomic elements. States of affairs are made up of objects and properties. A state of affairs can obtain or fail to obtain, depending on whether or not its object instantiates its property. However, it may indeterminately obtain if its object indeterminately instantiates its property.
The standard logical connectives are treated as higher-order states of affairs that have as components other states of affairs. For example, a conjunction is a state of affairs that contains two states of affairs and the relation of co-obtaining.
The chapter also discusses how the logic of states of affairs handles higher-order vagueness. Higher-order vagueness in the logic of states of affairs involves states of affairs about the obtaining of other states of affairs that either determinately obtain, indeterminately obtain, or fail to obtain. For example, the state of affairs of the state of affairs of Mikhail Gorbachev being bald indeterminately obtaining may indeterminately obtain. That is to say, it might be indeterminate whether or not it is indeterminate that Gorbachev is bald. Finally, I discuss how the logic of states of affairs can handle higher-order vagueness better than supervaluationism can.

August 31, 2018

Presenter: Christopher Stratman

Title: "Ectogenesis and the Moral Status of Abortion"

Abstract:

Ectogenesis involves the gestation of a fetus in an ex utero environment. While we tend to think of such technology as mere science fiction, in the future this will likely change. Indeed, given the plausibility of ectogenesis, a number of morally significant questions arise. One such question concerns the moral status of abortion. The aim of this paper is to show that ectogenesis, which makes it possible to perform an abortion without the destruction of the fetus, provides a good reason to believe that it is nearly always morally impermissible to kill the fetus.

Spring 2018

April 27, 2018

Presenter: Andrew Christmas

Abstract:

I will discuss the role that a community's portrayal of history plays in framing the way that members of the community view the world. In particular, I will focus on the common assumption that views the history of humanity as one of social, cultural, moral, etc. progress and argue that this assumption is not well supported and is only plausible when viewed within a particular framework.

April 6, 2018

The Graduate Student Research Colloquium will not be held. Instead the Faculty / Graduate Student Colloquium will take place. Zachary Garrett will present.

March 30, 2018

Presenter: Mark Selzer

March 16, 2018

Presenter: Andrew Spaid

March 9, 2018

Presenter: Chelsea Richardson

Title: "Ancestry in Visual Experience"

Abstract:

Both the folk and Critical Race theorists appeal to ancestry as something that is present in visual experience. If they are correct, then acquaintance, a relational concept popular in philosophy of mind, seems like the most likely relation by which one could come to have a visual experience of ancestry. The literature on acquaintance tells us that it is epistemically rewarding. Assuming this is correct, being in an acquaintance relation with some person should make us more likely to form true beliefs about their ancestry. However, this is problematic, since acquaintance often provides false beliefs about the location or visual appearance of someone's ancestors. So, either acquaintance is not virtuous (because it is likely to provide false beliefs in these cases), or we are wrong about the sense in which acquaintance makes us more likely to form true beliefs about ancestry. If we conceive of ancestry in a social way, as an understanding of the present social hierarchy, then acquaintance is epistemically rewarding because it tends to provide true beliefs about people's positions in this hierarchy. This account of ancestry preserves the virtues of acquaintance but requires revising our common understanding of what ancestry is.

March 2, 2018

Presenter: Aaron Elliott

Title: "Grounding the Duty of Non-Maleficence: Why doctors should do-no-harm, and what this tells us about public policy."

Abstract:

The folk conception of physicians' duty to do no harm considers the Hippocratic Oath as its basis. Standard medical ethics textbooks do not address the grounds for the duty of non-maleficence (henceforth "the Duty"). Both are mistakes. First I'll argue that, because the Hippocratic Oath is at best a promise, it is an inadequate basis for the Duty. Second, I'll support an alternative two-part account of the basis: the badness of harm and healthcare practitioners' special role in society together ground the Duty. Third, I'll show how this alternative account has wider implications for the morality of individual care choices, and for the morality of certain public policy positions. Even if my proposed basis for the Duty is wrong, this shows that alternative proposals can have concrete normative implications, and so medical ethics education needs to include discussion of the bases for standard duties of medical ethics.

February 23, 2018

Presenter: Adam Thompson

Title: "On Balance and Course-Design: A Balance-Primitivist Strategy for Squaring Content Coverage with Skill Building."

Abstract:

Balancing content coverage with philosophical skill/disposition building is a particularly pernicious course-design problem. By delineating a strategy for approaching the balance-challenge, as I'll call it, this essay aims to help philosophy teachers overcome it. One key to overcoming the challenge is to push against the orthodox view that treats the balance-challenge as subsidiary to adopting learning objectives and aligning them with educative assessments. Furthermore, the paper demonstrates how executing the strategy has the following three payoffs: (1) It helps us build rigor into the heart of the course; (2) It aids in the development of assessments that draw more on intrinsic motivators as opposed to extrinsic motivators; and (3) It straightforwardly connects grades to learning-objective mastery. For illustration, I focus on how I used the strategy to build my upper-level course on ethical theory. I follow up by generalizing the strategy. In particular, I show how to use the strategy to design an intro-level course on applied ethics and how to apply it to design a graduate course in philosophy.

February 16, 2018

Presenter: Lauren Sweetland

Title: "Interpreting and Evaluating Legal Practices"

Abstract:

Hart argues that whether a rule is obeyed or accepted, broken or rejected, is determined by "good reasons" from an internal perspective (The Concept of Law 55). Ronald Dworkin objects that Hart's view of rules gives an internal participant's point of view, as well as the role of interpretation in legal theory, too little treatment. Hart replies that his secondary rules, especially rules of recognition, are the basis of reason from an internal perspective. As for the role of interpretation in legal theory, Hart insists that from an external observer's point of view, no moral judgment need be made with respect to some rule when describing that rule. I think that while Hart's secondary rules do indeed figure into reasons from an internal perspective, what is necessary for the legal theorist (external perspective) to describe those reasons is a value judgment. How much interpretation, if any, on the part of the legal theorist is sufficient for describing the practice of law, according to Hart's view? I argue that without interpretive and evaluative judgments about rules of recognition on the part of the legal theorist, the legal theorist could not understand reasons to regard rules in certain ways from an internal perspective. Hart's legal theorist utilizes value judgments in describing the internal perspective on reasons for rules more than he envisions.

February 9, 2018

Presenter: Shane George

Title: "APR"

Abstract:

Traditional accounts of autonomy have assumed that autonomy stems from a connection to an authentic self. However, in my dissertation I argue that not only is this position untenable, but this understanding of the relationship is backwards. Authenticity is a property generated by autonomy, which is itself not a simple relation but a recursive process. In this presentation I will explain the ab initio Problem and the Value Formation Problem, which undermine traditional explanations of autonomy. I will then argue for a Coherentist Psychosocial account of the self, which is necessary for the autonomous process, and explain how this account contributes to solving both the ab initio Problem and the Value Formation Problem.

February 2, 2018

Presenter: Joseph Dante

January 26, 2018

Presenter: Adam Thompson

Title: "On Balance and Course-Design: A Balance-Primitivist Strategy for Squaring Content Coverage with Skill Building"

Abstract:

Balancing content coverage with philosophical skill/disposition building is a particularly pernicious course-design problem. By delineating a strategy for approaching the balance-challenge, as I'll call it, this essay aims to help philosophy teachers overcome it. One key to overcoming the challenge is to push against the orthodox view that treats the balance-challenge as subsidiary to adopting learning objectives and aligning them with educative assessments. Furthermore, the paper demonstrates how executing the strategy has the following three payoffs: (1) It helps us build rigor into the heart of the course; (2) It aids in the development of assessments that draw more on intrinsic motivators as opposed to extrinsic motivators; and (3) It straightforwardly connects grades to learning-objective mastery. For illustration, I focus on how I used the strategy to build my upper-level course on ethical theory. I follow up by generalizing the strategy. In particular, I show how to use the strategy to design an intro-level course on applied ethics and how to apply it to design a graduate course in philosophy.

January 12, 2018

Presenter: Kevin Patton

Title: "Safety and Swamping."

Abstract:

It is common for epistemologists to use the value problem as a kind of litmus test for a theory of knowledge. The value problem is, roughly, the problem philosophers run into when they attempt to explain how knowledge is more valuable than its components. If knowledge is not more valuable than its components, then our intuitions about its value are left unexplained. If knowledge is more valuable than its components, it has proven controversial how to articulate why. Either way, many feel that if a theory of knowledge cannot address this problem, then the theory is not worth considering. Georgi Gardiner has recently argued that any theory of knowledge which is even partially explicated in modal terms cannot, in principle, provide an answer to the value problem, and hence is not worth considering. Gardiner specifically targets Duncan Pritchard's modal condition, safety. Safety is, again roughly, a condition on knowledge which invokes possible worlds to assess whether the truth of a given belief was a matter of luck. Gardiner modifies the swamping problem and uses it to argue that safety adds no value to a true belief, and so safety cannot answer the value problem. She then claims that this kind of argument can be generalized and applied to any theory which uses a modal condition on knowledge. In this paper, I will demonstrate that the structure of Gardiner's argument relies on an assumption about what is epistemically valuable and what is not. Once made explicit, this assumption actually serves to undermine a great many more theories than Gardiner acknowledges. This assumption, however, is problematic. The core issue is the choice between epistemic value monism and epistemic value pluralism. Replacing monism with pluralism avoids all of the standard reasons to adopt the swamping problem. As such, Gardiner's argument against safety fails.

Fall 2017

December 8, 2017

The Graduate Student Research Colloquium will not be held this week due to the Faculty-Grad Colloquium. Andy Spaid will present.

December 1, 2017

Presenters: Joseph Dante, C. L. Richardson, and Adam Thompson.

Abstracts:


Presenter: Joseph Dante
ABSTRACT: To be announced.

Presenter: C. L. Richardson
ABSTRACT: In previous chapters I've argued that there are two types of ancestry. First, there's the type we think we're using when we define racial categories. I call this Biological Ancestry or BA for short. Second, there's the type we're actually using when we define racial categories. I'll call this Social Ancestry or SA. BA is the location and appearance of one's past genetic relatives. SA is the position in a social hierarchy that one inhabits as a result of events in human and natural history which assign social meanings to markings on one's body. These markings are taken to manifest as a result of the location and appearance of one's past genetic relatives. This chapter aims to show that the moral value of Ancestral Pride (feeling some sense of empowerment, connectedness, or self-satisfaction associated with the appearance and/or location of one's past genetic relatives) can be distinguished by appealing to the distinction between BA and SA. I argue that Ancestral Pride is morally permissible (in the sense that it is beneficial to some individuals or at least not harmful to them) when it concerns BA, or SA that involves overcoming or enduring oppression done by the hierarchy. Ancestral Pride is not morally permissible (it may even be blameworthy) when it celebrates SA that involves oppression done by the hierarchy either directly or indirectly. I'll offer a variety of cases in order to make this argument.

Presenter: Adam Thompson
Title: "Children of Men: Dissolving the Paradox of Self-Ownership"
ABSTRACT: Many moral libertarians adhere to Self-Ownership: Every person is initially a self-owner; and Fruits-of-Labor-Ownership: Every person morally owns the fruits of her labor. Coupled with the fact that everyone is the fruit of someone's labor, the theses of Self-Ownership and Fruits-of-Labor-Ownership entail that no one owns herself. Hence, there appears to be a paradox at the heart of moral libertarianism. After getting clear on the so-called Paradox of Self-Ownership, and showing that the most prominent attempts at a solution fail, I dissolve it.

November 17, 2017

Presenter: Christopher Stratman

Title: "Fundamentality and Significance."

Abstract:

I believe that there are metaphysical and theoretical pressures to accept an austere ontology. One perplexing metaphysical pressure is discussed by Peter Unger in "I Do Not Exist" (1979), where he argues that any complex object or entity (i.e., that which has parts) is vulnerable to a sorites paradox argument and, therefore, does not exist. Indeed, Unger argues that minds, thinkers, and their thoughts do not exist. And so, I do not exist. Of course, this intuitively seems disastrous. Recently, in their book Austere Realism: Contextual Semantics Meets Minimal Ontology (2008), Terence E. Horgan and Matjaz Potrc have made similar arguments. They agree with Unger's conclusion, but argue that one can still be a realist about minds, thinkers, and their thoughts, if one adopts a distinction between a Direct Correspondence theory of truth (DC) and an Indirect Correspondence theory of truth (IC).
I will argue that they fail to show how sentences or judgments about minds, thinkers, and their thoughts can be true if these do not exist as the needed relata for any correspondence theory of truth. Additionally, I argue that Horgan and Potrc fail to give an adequate account of how we can think about non-existent objects that does not assume the existence of minds, thinkers, and their thoughts. As such, I believe that if one accepts Unger's conclusion, as Horgan and Potrc do, then one should abandon the project of attempting to give an adequate theory of truth in favor of a distinction between fundamentality and significance. Once this distinction is adopted, one can accept that minds, thinkers, and their thoughts do not exist, insofar as they are not ontologically fundamental, while still accepting that minds, thinkers, and their thoughts "exist" insofar as they are ontologically significant.

November 10, 2017

Presenter: Alfred Tu

Title: "Wielenberg on Egoism in the Nichomachean Ethics."

Abstract:

In the Nicomachean Ethics, Aristotle gives a full account of what a virtuous person is and what such a person would do in various situations. According to Aristotle, a virtuous person would choose a course of action that promotes eudaimonia. The problem is, would a virtuous person always promote his own eudaimonia? If we can give an interpretation of the Nicomachean Ethics on which a virtuous person would always maximize his own eudaimonia, then it seems that Aristotle's account would be a kind of egoism. In "Egoism and Eudaimonia-Maximization in the Nicomachean Ethics" (2004), Erik Wielenberg argues for his egoistic interpretation of the Nicomachean Ethics and against Richard Kraut's anti-egoistic interpretation. In this paper, I argue that Wielenberg's interpretation yields peculiar results in some situations and that Kraut's interpretation handles those situations better.

November 3, 2017

Presenter: Jason Lemmon

Abstract:

Some have recently argued that belief/desire psychology is not fundamental to practical reason. This work is fairly limited in range, especially among philosophers (though less so among psychologists). I will examine the main philosophical proposals and argue that they are quite implausible. The main thrust of these proposals is that results from empirical psychology, such as 'framing effects' and 'ongoing unrelated experiences,' affect our decisions in ways that purportedly cannot be explained by the belief/desire model. As an example of an ongoing unrelated experience, holding a teddy bear affects the way a subject judges other people in social settings; but the teddy bear, it is claimed, has no bearing on the subject's beliefs and desires. Opponents of the belief/desire model admit that proponents have responses to make here, but they go on to argue that all plausible responses boil down to either positing an extra, unnecessary mental state or else admitting that a nice, warm teddy-bear-caused mood, say, influences our beliefs; we must reject the former on grounds of parsimony and reject the latter because it is purely ad hoc. In response, I will show that the latter is not only not ad hoc but is just what we would expect from a reasonable belief/desire model.

October 27, 2017

Presenter: Lauren Sweetland

Title: "Descriptive Mental Files?"

Abstract:

What is the relationship between external objects and mental representation? How can we think of an object as one and the same even as it changes through time? Some people offer the notion of a mental file to explain how we track objects through changes in time and think of them as individual objects and not merely possessors of certain properties. A mental file is an acquaintance-relation based mental representation of some object. Mental files are typically construed as having essentially relational contents rather than descriptive contents. But are there some descriptive mental files? That is, can we think singular thoughts of an object via its relation to ourselves or via some description of that object? Or do we think of an object primarily via its (non-descriptive) modes of presentation? According to Recanati, singular thoughts are non-descriptive. According to Goodman, one can think a singular thought about an object not necessarily by accessing singular content, but by understanding the description conveyed by its name. Some, but not all, descriptive files are singular.
I will argue that if there are no descriptive mental files, then we will have a hard time explaining some cases of functional and communicative success in reference when one thinks with mistaken information about the object. If mental files are exclusively relational, and yet one successfully refers to an object on the basis of mistaken relational information, it seems that a natural way to account for the mistake is to appeal to some description, at least in some cases. An answer to whether there are some descriptive mental files will help answer the larger question: Are all mental files singular thoughts?

October 20, 2017

Presenter: Samuel Hobbs

Title: "Augustine and the non-existence of the past."

Abstract:

Augustine argues that the present depends upon non-being, since the present depends upon passing into the past and the past does not exist. Since, for Augustine, past, present, and future all depend upon non-being, they can only exist simultaneously with perception. On Augustine's theory of time, the past exists in present memories, the present exists in present perception, and the future exists in present expectation. This paper argues that if the past merely exists in memory, then Augustine must accept metaphysically absurd events. To avoid this result, Augustine has to accept that the past depends upon existence. Since the past depends upon existence, and the present depends upon the past, the present depends upon existence. This puts Augustine in a dilemma: either he must accept metaphysical absurdities, or else he has to reject his psychological view of time.

October 13, 2017

Presenter: Adam Thompson

Title: "(Un)Marginalizing Interests: Correcting Profession-Wise, Unjust Treatment of Those Interested in Studying Teaching and Learning in Philosophy."

Abstract:

The search for truth is best carried out by those well-equipped to critically interrogate and evaluate propositions, their perceptions, their memory, the testimony of others, etc. This suggests that pride of place should go to those primarily interested in effectively facilitating the development of those capacities. However, as is well known, professional philosophy by and large marginalizes those interested in the study of teaching and learning. This essay argues that that marginalization is unjust and offers suggestions for correcting the wrong. Further, since the essay argues that, in most settings in higher education, it is a mistake to value interest in the search for truth above interest in how best to develop students' critical faculties, it explores explanations for the fact that many intelligent, well-meaning people make that mistake.

September 29, 2017

Presenter: Chelsea Richardson

Title: "Where is your family from originally?: A conceptual analysis of the role of ancestry in the philosophy of race."

Abstract:

Linda Martin Alcoff, Charles Mills, and Sally Haslanger each appeal to a notion of ancestry in their accounts of race. I will examine these appeals and argue that, taken collectively, they face two key problems: the regress problem and the inference problem. The regress problem shows that the scope of ancestry as it is used for racial membership is ill-defined. Further, what can be inferred about racial membership on the basis of ancestry and its complex relationship with visible properties of the body is similarly ill-defined; the inference problem illuminates this. These two problems ultimately show that while ancestry plays a key role in our concept of race, both folk views and the appeals made by the philosophers I analyze do too little to converge on a clear account of what ancestry actually is. The way Alcoff, Mills, and Haslanger treat ancestry sometimes obscures our understanding of race and racial membership and potentially reinforces a folk view of ancestry (and, as it pertains, race) that all of these authors agree is morally dubious. In the end, I present a view of ancestry that seeks to avoid the key problems I identify.

September 15, 2017

Presenter: Zack Garrett

September 8, 2017

Presenter: Joey Dante

Abstract:

As we all may be aware, J. L. Mackie famously argues that there are no objective values. I want to investigate Mackie's interpretation of what objective values indeed ARE, and then consider and attempt to understand (one of) his arguments against such entities. Specifically, I want to see whether his arguments apply to Kantian objective values. As such, this talk is as much an interpretation of Kant as it is of Mackie (at least insofar as I understand them).
It seems that Mackie thinks objective values, if they exist, have strange causal profiles. Specifically, these entities have the POWER to cause, in a subject who is aware of them, a certain motivation (to comply with their recommendations, whatever that means). Does Kant think that objectively valuable entities have such POWER? Is such power metaphysically strange?

September 1, 2017

Presenter: Kevin Patton

Title: "Safety and Skepticism."

Abstract:

Duncan Pritchard has advocated for a necessary condition on knowledge known as safety. Pritchard's definition of safety is an explicitly modal one on which a true belief is safe if and only if in all, or nearly all, close possible worlds to ours, the belief is also true and we believe it on the same basis. Since Pritchard's 2005 book, there has been a flurry of powerful criticisms, some of which have resulted in Pritchard modifying his general framework. One criticism that Pritchard has not responded to is raised by Dylan Dodd. Dodd argues for the following conditional: if safety, as Pritchard contends, explains the lottery intuition, then skepticism follows. In this paper I will argue that Dodd's formulation of the problem can be easily addressed by a safety theorist such as Pritchard. In so replying, however, a dilemma results for Pritchard: either he must reject commonsense cases of knowledge, or he must reject his stated motivations for safety.

Spring 2017

April 28, 2017

Presenters: Zachary Garrett, Andrew Christmas, and Adam Thompson

Zachary Garrett, "Semantic Nihilism and Supervaluationism"
ABSTRACT: In recent years, David Braun and Ted Sider as well as John MacFarlane have attempted to revitalize semantic nihilism, a theory of vagueness that rejects the truth-evaluability of sentences containing vague words. Both views make use of things resembling supervaluationism's admissible precisifications, but they reject the identification of truth with supertruth. In this paper, I argue that Braun and Sider and MacFarlane have not given a sufficient reason for avoiding the identification of truth with supertruth. Supervaluationism fits the story for how vague communication works just as well as these new forms of nihilism, but with the added bonus that it can account for our everyday intuitions about truth. I begin by objecting to arguments for semantic nihilism and then respond to the objections Braun, Sider and MacFarlane level at supervaluationism.

Andrew Christmas, "Kant's Theory of the Good and the Justification for Agent-centered Constraints"
ABSTRACT: David Cummiskey argues that Kant's ethical theory is normatively consequentialist. Cummiskey focuses most of his effort on showing that the second formulation of the categorical imperative could allow for the sacrifice of some rational beings if it meant promoting the existence of more rational beings, and that Kant provides no justification for agent-centered constraints on our actions. I argue that Cummiskey's argument presupposes an agent-neutral theory of the good that fails to provide an adequate account of Kant's ethical system. I also argue that Kant's conception of the good does provide justification for agent-centered constraints and does not allow for the sacrifice of innocent people even if that sacrifice would save more lives.

Adam Thompson, "Against GTA Restraint: Why GTAs Should Practice Learner-Centered Pedagogy (and How to Do So)"
ABSTRACT: Graduate student teaching assistants (GTAs) typically begin their assistantships playing a supporting role in a course prepared by someone else. It is typical for GTAs to believe that they can only use a very limited subset of the full range of teaching strategies. Call that belief the GTA Restraint Belief. Though this belief is often supported through (explicit or implicit) advocacy by the discipline and professionals in it, the GTA Restraint Belief stands as a major obstacle to student learning and GTA pedagogical growth. For one, it is used by many in academia as an excuse for GTAs to essentially ignore the literature on effective pedagogy. For another, the GTA Restraint Belief is leaned on as a reason for GTAs to forgo trainings focused on improving their pedagogy. Thus, typically, the students served by GTAs are unacceptably underserved, as are the GTAs with respect to their professional development. On those grounds and others, this essay (a) argues that the GTA Restraint Belief should be rejected and (b) shows how GTAs can discharge the obligation to reject that belief and permissibly practice effective pedagogical strategies.

April 21, 2017

Presenter: Aaron Elliott

Title: "What Naturalism Is, What Non-Naturalism Isn't"

Abstract:

Non-Naturalists need to be able to explain exactly what their view is. While perhaps an obvious requirement, the need is made salient in two ways. The first is objections from non-reductive naturalists, like Sturgeon, who challenge non-naturalists to say what it is that excludes the normative from the broader class of the natural. Since, he says, we have good reason to think normative properties are causally efficacious, we have good reason to include them along with physical, biological, and chemical properties. Until we are given an explanation for what excludes them from this group, we have a presumption of naturalism.
The second is a dialectical concern inside of debates about the viability of non-naturalism. There are two overarching objections to non-naturalism (that I care about—queerness is either unmotivated or collapses into one of the two), the metaphysical and the epistemic. Both are basically "we need an account of X, but non-naturalism can't deliver." I think the better way of framing this objection is "the minimal commitments it takes to offer an account of X entail naturalism." So, in order to assess this in-principle claim, we have to be clearer on what marks the difference between these families of views, and which commitments are strictly incompatible with each. Allowing no connections between the normative and the natural means non-naturalism cannot respond to the metaphysical and epistemological challenges. So, the non-naturalist is on the hook to show which connections between the normative and the natural come with a presumption of naturalism and which don't. Clearly, identity between normative properties and natural properties would revoke a theory's non-naturalist credentials. But we need to consider other relations, and whether something could defeat a presumption of naturalism.
In light of these two related concerns, I will develop an account of non-naturalism that respects the distinctness of the non-natural without foreclosing the possibility of responding to the metaphysical and epistemological challenges.

April 14, 2017

Presenter: Joseph Dante

Title: "Panpsychism?"

Abstract:

I will be 'arguing' that if all entities are fundamental then panpsychism is plausible.

Flat-worldism: All entities that exist are fundamental; there are no entities whose existence depends on more fundamental entities.

Panpsychism: All entities are minded.

Argument:
P1: Some entities are minded (assumption)
P2: All entities are fundamental (assumption)
P3: If some entities are minded and all entities are fundamental then all entities are minded.
Therefore: All entities are minded.

I assume P1 and P2. Argument for P3:
Assume for reductio that all entities are fundamental and that some but not all of the fundamental entities are minded.
Then, there must be a non-arbitrary criterion to distinguish the non-minded from the minded entities.
There is no non-arbitrary criterion to distinguish the minded from the non-minded entities (as we cannot employ any notion of emergence or grounding, arguing that some are minded due to their relation/interaction with other entities).
Therefore, if some entities are minded and all entities are fundamental then all entities are minded.
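The step from P1, P2, and P3 to the conclusion can be checked mechanically. Below is a minimal sketch in Lean of that propositional skeleton, assuming an arbitrary type of entities E and the placeholder predicate names Minded and Fundamental (names chosen here for illustration, not the presenter's):

    -- Minimal sketch: P1, P2, and P3 jointly entail the conclusion,
    -- by applying P3 to the conjunction of P1 and P2.
    theorem panpsychism {E : Type} (Minded Fundamental : E → Prop)
        (P1 : ∃ x, Minded x)                                   -- some entities are minded
        (P2 : ∀ x, Fundamental x)                              -- all entities are fundamental
        (P3 : (∃ x, Minded x) ∧ (∀ x, Fundamental x) → ∀ x, Minded x)
        : ∀ x, Minded x :=
      P3 ⟨P1, P2⟩

As the abstract makes clear, the substantive work lies in the reductio offered for P3, not in this final step.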

April 07, 2017

The Graduate Student Research Colloquium will not be held this week due to the Faculty / Graduate Student Colloquium. Professor Jennifer McKitrick will present. Her title is "Whites, Women, and Witches: Analogies and Disanalogies among Social Kinds".

March 31, 2017

Presenter: Kiki Yuan

Title: "Perception and Perceptual Inference"

Abstract:

Is perception a bottom-up or a top-down process? It seems that top-down processing is the more plausible theory. If so, what is the mechanism of the top-down approach? Psychologists and philosophers have offered many models of the top-down mechanism of perception. Gregory's "Charlie Chaplin" optical illusion offers a good demonstration of perception as a cognitive mechanism. Similarly, Helmholtz suggests that perception is mediated by unconscious inference and that this inference is of the same kind as ordinary reasoning and scientific inference. With the development of modern psychology, many scholars, such as Rock and Fodor, have offered models of perception that separate perceptual inference from the cognitive inference associated with reasoning or knowledge. Based on modern psychological research, I will argue that the separation of the two kinds of inference is more plausible. In addition, I will discuss the role of perceptual inference in moral perception and how it distinguishes moral perception from other moral cognitive activities, such as moral reasoning or moral knowledge.

March 17, 2017

Presenter: C. L. Richardson

Title: "Ailefs and Singularity"

Abstract:

Non-doxastic attitudes are a subject of little concern in most of the singular thought literature. However, it's becoming clear to theorists who work in this area that specific accounts of singular thought for a variety of non-doxastic attitudes are crucial for motivating the view that we can have singular thoughts at all. One interesting candidate for such an account is Tamar Gendler's notion of alief. Aliefs are non-doxastic, sometimes-propositional mental states that appear to go a long way in explaining social biases and marginalizing treatment of certain groups of people. In an influential analysis, Robin Jeshion claims that providing an account of de re belief (i.e., singular thought) compels us to answer the questions of what it is to believe, and what the conditions are on believing a singular proposition. [1] Insofar as providing an account of de re alief requires one to address similar questions, I'll aim to meet Jeshion's challenge. This paper will be concerned with two major questions with respect to the concepts of alief and singular thought: 1) Can aliefs refer singularly to their objects? 2) Can aliefs be explained in terms of mental files? I'll argue that aliefs do refer singularly and that they can be explained in terms of mental files. I'll provide an account of file dynamics for alief-type mental files. Additionally, I'll explain how my account serves to inform and potentially motivate more general views of singular thought and implicit bias.

[1] Robin Jeshion, New Essays on Singular Thought (Oxford: Oxford University Press, 2010), 54.

March 10, 2017

Presenter: Mark Diep

Title: "The Implausibility of Bratman's No-Regret Condition"

Abstract:

In his 1999 book, Michael Bratman argues that his No-Regret condition solves problems of rationality that, he argues, sophistication theory and resolution theory cannot. He argues that the main problem for both theories is that they fail to account for the fact that we are temporal and causal rational agents. His No-Regret condition, he claims, solves the problems by accounting for these features of rational agents. In this paper, I argue that Bratman's No-Regret condition is not a plausible condition of rational agency. I shall show that there are cases where the No-Regret condition offers recommendations that we normally find difficult to follow. The primary problem, I shall argue, is the regret feature of the No-Regret condition. There are times when an agent is faced with a future regret that does not factor into the agent's decision on whether or not to follow through with prior plans. In the cases that I will discuss, if there are future regrets that do not align with the agent's current preferences, she is still rational to act contrary to those future regrets that the No-Regret condition recommends she take seriously. Since the No-Regret condition always recommends, in these cases, that the agent act on the basis of the preferences of these future selves, the No-Regret condition is implausible.
Lastly, I present a case where an agent's possible future choices are in conflict. Despite the conflicts in these choices, Bratman's No-Regret condition claims that the agent is rational if she forms a plan about these conflicting choices and acts on the plan. I argue that the No-Regret condition's recommendations in this case reveal it to be incoherent.

March 3, 2017

Presenter: Alfred Tu

Title: "Modal Skepticism and Similarity"

Abstract:

In "Modal Epistemology," Peter Van Inwagen argues that anyone who accepts Yablo's theory should become a modal skeptic. Modal skepticism, in Van Inwagen's terms, is a conservative epistemological position according to which we can have "basic" modal knowledge, such as "It is possible that Lincoln has an earthquake" or "It is possible that I will have pasta for dinner tonight," but not "remote" modal knowledge, such as "It is possible that transparent iron exists" or "It is possible that purple cows exist." Modal skepticism has one obvious theoretical advantage: we can keep most of our ordinary, commonsensical modal knowledge, while refuting some puzzling philosophical arguments that build on remote possibilities (Van Inwagen 1998, Hawke 2016). Van Inwagen's modal skepticism was later defended and developed by Peter Hawke (Hawke 2010, 2016). Nevertheless, I think the crucial point of modal skepticism is that modal skeptics must give us some principles that can adequately differentiate remote possibilities from basic or uncontroversial possibilities, and deny that we can have knowledge of remote possibilities. I am going to argue that Hawke's work does not adequately differentiate remote possibilities from ordinary possibilities.

February 24, 2017

The Graduate Student Research Colloquium will not be held this week due to the Speakers' Series. The guest speaker is Lucy Allais.

February 17, 2017

Presenter: Lauren Sweetland

Title: "Cooperation: From Joint Intention to Evolutionary Explanation."

Abstract:

How can the notion of joint or we-mode intentions be incorporated into theories of social science? In particular, how can joint intentions fit with Tooby and Cosmides' Integrated Causal Model, according to which social behavior is a product of evolved information-processing systems? How can joint intentions fit with Henrich and Henrich's Dual Inheritance Theory? According to this view, human biological/psychological adaptations produce prosocial cultural behavior.
I argue that the notion of joint intention, with modifications, may be incorporated into an analysis of Dual Inheritance Theory and the Integrated Causal Model. Both Searle and Tuomela argue that individual (not group) intentions are crucial for understanding joint action. For Searle, individuals have intentions "in the we-mode": psychological states with the content of we-actions. For Tuomela, joint action involves we-mode intentions in the form of team reasoning, where each agrees to promote the ends of the group by cooperating and coordinating with other members. To accommodate joint intentions within these broader models, we should forgo two assumptions contained in the notion of joint intentions. First, we should give up the assumption that there is a large divide between the social and the natural sciences (akin to a divide between the mental and the physical). Second, we might give up the assumption that the social level cannot be reduced to the individual level. In addition, a notion of joint intentions should meet several criteria: i) Henrich and Henrich's Core Principle, ii) Tooby and Cosmides' description of the frame problem, iii) the evolvability criterion, and iv) Hamilton's rule.

February 10, 2017

Presenter: Zach Wrublewski

Title: "Conceivability, Abduction, and Modal Knowledge."

Abstract:

In his paper "Is Conceivability a Guide to Possibility?" Stephen Yablo analyzes several existing conceptions of conceivability before ultimately offering a positive account of conceivability that he believes better gives us prima facie knowledge of possibility. In this paper, I will argue that while Yablo might be successful in ruling out the relevant conceptions of conceivability as methods for reliably gaining modal knowledge, he is unsuccessful in offering his positive account of such a method. To show that this is the case, I will offer an objection to Yablo's account of conceivability that centers on a problem with the completeness of conceived worlds that plagues his account. Furthermore, I will argue that one of the conceptions of conceivability that Yablo rules out can still be useful if applied in the generative step of a two-step theory of abductive inference about modal knowledge, while his preferred positive account of conceivability cannot be used in such a way.

February 3, 2017

Presenter: Mark Albert Selzer

Title: "The Latent Capacity Interpretation of the Explanatory Constraint."

Abstract:

In his influential article, "Internal and External Reasons" (1979), Bernard Williams argues for the Explanatory Constraint:
       EC   The fact that 𝑝 is a normative reason for A to Φ only if A can Φ because 𝑝.
There is a problem with EC: if 'can' means that there is some possible world where A can Φ because 𝑝, then almost anything would count as a normative reason for A to Φ. Therefore, a plausible interpretation of EC must avoid such a 'bare possibility' interpretation of 'can'.

In "Internalism and Externalism about Reasons" (forthcoming), Hille Paakkunainen argues for the Actual Capacity interpretation of EC:
       AC   The fact that 𝑝 is a normative reason for A to Φ only if A has an actual present capacity to Φ because 𝑝.

First, I argue that AC is an unsatisfactory interpretation of EC because it conflicts with the reasons that the akratic or the person with a poorly developed character has. Second, to address these shortcomings, I argue for the Latent Capacity interpretation of 'can' in EC:
       LC   The fact that 𝑝 is a normative reason for A to Φ only if A has a latent capacity to Φ because 𝑝.
LC is an account that is not trivialized by a bare possibility interpretation of EC—yet, contra AC, LC remains in harmony with the reasons the akratic or the person with a poorly developed character has.

January 20, 2017

Presenter: Christopher Stratman

Title: "Heilian Truthmaking."

Abstract:

John Heil's account of truthmaking, what I call "Heilian Truthmaking" (HTM), fails to avoid several significant objections. This claim depends on a controversial interpretation of Heil's view of truthmaking, one that interprets it as a primitive concept. I will consider whether or not it is fair to interpret HTM as primitive and the consequences that follow from such an interpretation. It will be shown that the proponent of HTM faces an important dilemma, which is stated below:

  1.   Either HTM is a primitive concept or not.
  2.   If HTM is a primitive concept, then there are missing truthmakers.
  3.   If HTM is not a primitive concept, then it fails to avoid Kit Fine's three objections.
  4.   Therefore, whether or not one interprets HTM as a primitive concept, one has good reason to reject it.
This shows that Heil owes us an account of truthmaking that can adequately address the objections in each horn. Moreover, if such an account cannot be given, then it is not clear how Heil can maintain that he holds a realist ontology. It will be argued presently that such an account is not in the offing. Therefore, it is reasonable to reject Heil's account of truthmaking.

The structure of the paper will be divided into two parts: In part one I will consider Heil's general approach to ontology and his appeal to truthmaking. In order to get a better grip on why one might interpret HTM as a primitive concept I will consider the second horn of the dilemma first. It will be argued that the only way the second horn of the dilemma can be avoided is by interpreting HTM as a primitive concept. In part two I will consider the first horn, arguing that interpreting HTM as primitive undermines Heil's realist ontology because we will not be in a position to know what makes our sentences true. I will consider how Heil might respond to this objection prior to concluding.
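Premises 1 through 3 have the shape of a constructive dilemma, and that skeleton can be checked mechanically. Below is a minimal sketch in Lean with placeholder proposition names (Primitive, MissingTruthmakers, FailsFinesObjections are labels chosen here for illustration, not Heil's or the presenter's); the further move from either disjunct to conclusion 4 rests on the informal assumption that each horn gives good reason to reject HTM:

    -- Minimal sketch of the constructive dilemma in premises 1-3:
    -- a case split (1) whose horns (2) and (3) each yield a cost for HTM.
    theorem htm_dilemma (Primitive MissingTruthmakers FailsFinesObjections : Prop)
        (h1 : Primitive ∨ ¬Primitive)                 -- premise 1
        (h2 : Primitive → MissingTruthmakers)         -- premise 2
        (h3 : ¬Primitive → FailsFinesObjections)      -- premise 3
        : MissingTruthmakers ∨ FailsFinesObjections :=
      h1.elim (fun hp => Or.inl (h2 hp)) (fun hnp => Or.inr (h3 hnp))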

January 13, 2017

Presenter: Joey Dante

Title: "Controversial?!"

Abstract:

I will be discussing Sarah McGrath's "Moral Disagreement and Moral Expertise." McGrath argues that at least many of our moral beliefs do not amount to knowledge. McGrath argues that "CONTROVERSIAL beliefs do not amount to knowledge" (page 92), where "CONTROVERSIAL" is understood as follows: "Thus your belief that p is CONTROVERSIAL if and only if it is denied by another person of whom it is true that: you have no more reason to think that he or she is in error than you are" (page 91). She then argues that many of our moral beliefs are indeed CONTROVERSIAL. As such, many of our moral beliefs do not amount to knowledge.

I will discuss whether or not her argument overgeneralizes. McGrath maintains that it does not; I will argue that she has not given compelling reason to suppose that it does not.

Let us consider one of the beliefs that she focuses on; namely, the belief that there is an external world. McGrath maintains that that belief is not CONTROVERSIAL.

She supposes that one or a few (highly intelligent) dissenters should not tip the scales in such a way that we should be skeptical about that belief. Lack of unanimity should not render a belief CONTROVERSIAL. She supposes that there should be substantial disagreement before a belief is rendered CONTROVERSIAL.

Now, she does not give criteria for what substantial disagreement would be, but she seems to rely on the thought that if a vast number of more or less equally epistemically situated people disagree with a very small number of equally epistemically situated people, then that kind of disagreement does not render the relevant belief (of the vast number of people) CONTROVERSIAL.

She gives a case: if you and I disagree about when the train departs (and we are peers in the relevant sense), then my belief concerning the departure is rendered CONTROVERSIAL. However, if 10 people all agree that the train will depart at time X and I (and only I) believe that the train will depart at time Y, then the 10 people's beliefs are not rendered CONTROVERSIAL (although my belief is rendered CONTROVERSIAL).

This is supposed to illustrate that numbers can make a difference as to whether or not a belief is rendered CONTROVERSIAL due to disagreement among peers. She is thinking that because so many people disagree with me concerning the train departure, I should regard myself as more likely to be in error and, correspondingly, the others as less likely to be in error.

My Worries:

  • (i) The example she gives may not generalize. Maybe ordinary beliefs, such as beliefs about when a train is departing, are such that numbers matter more for them than for other kinds of beliefs.
  • (ii) Who is the relevant class of people that should matter when considering whether or not disagreement renders a belief CONTROVERSIAL? One could feasibly think that the average everyday person's beliefs, say concerning religion or souls or the external world, shouldn't matter as much. If we can coherently limit the relevant class of peers, then it seems that it is not merely a very small minority (a few geniuses) that denies that we have knowledge of the external world.
  • (iii) What people believe is contingent. As such, it is perfectly possible that everyone (aside from a few geniuses) believes that there is no external world. In this case the relevant CONTROVERSIAL belief would be that there IS an external world. Of course, this is not really an objection, since it will be contingent whether or not I have knowledge concerning a particular belief. But it seems that a vast majority of people could believe something false. This isn't saying much, since what is at issue here is whether or not the mere fact that you know someone disagrees should alter your credence in your disparate beliefs; but it seems that someone (like some eminent thinker going against the status quo) who denies the status quo position loses knowledge about their revolutionary theory merely because the vast majority of people disagree. (Take geocentric versus heliocentric theories and the thinkers who wanted to push for heliocentrism; is it really the case that some of those thinkers in fact never had knowledge in their lives concerning the truth of heliocentrism, merely because they were aware that the vast majority disagreed?)
  • (iv) Let us grant that the belief in the external world is not rendered CONTROVERSIAL. That does not mean that her argument does not overgeneralize. I imagine that McGrath herself would NOT want her argument to extend to this belief: that perception is a reliable indicator of the truth. However, her argument does indeed extend to such a belief. That is because many people from different ways of life and different cultures across time have denied it (think of religious traditions that hold everything to be an illusion, or certain philosophical traditions). Or at least, she cannot explain why that belief is not CONTROVERSIAL by appealing to numbers in the way she does with the belief about the external world.

Fall 2016

December 2, 2016

Presenter: Andrew Spaid

Title: "Desire Theory: How to Measure Well-being."

Abstract:

Desire theorists about well-being accept the following view about how to measure a person's level of well-being: a person is well-off to the extent that they are getting what they want. There are two options for making this view more precise. According to the first, your level of well-being is represented as a ratio of satisfied desires over total desires. According to the second, your level of well-being is represented as an integer equal to the number of your satisfied desires minus the number of your frustrated desires. I believe more turns on which of these two options the desire theorist accepts than desire theorists have tended to notice. I explore some of the advantages and disadvantages of each option and argue that, for most desire theorists, the first option should appear the more attractive.

November 18, 2016

Due to the Speaker's Series, the Graduate Student Research Colloquium is cancelled this week. The guest speaker is Karen Bennett; her topic is "Causing".

November 18, 2016

Due to the Speaker's Series, the Graduate Student Research Colloquium is cancelled this week. The guest speaker is Neil Sinhababu; his topic is "Empathic Hedonists Escape Moral Twin Earth".

November 4, 2016

Presenter: Christopher Stratman

Title: "Why I Am Not A Color Realist."

Abstract:

I am not a color realist. I do not believe that colors exist independent of one's phenomenal experience of them; they are not a part of the correct ontology; they are not a part of the way the cosmos looks from the perspective of the ontology room; they are not in the book of the world; they are not fundamental. Our commonsense perception of the world around us errs when it tells us that ordinary objects are colored. I don't think that ordinary objects are colored, because I don't think that ordinary objects exist. In this paper I will consider the Sorites Paradox and several issues concerning material constitution in order to demonstrate that there is good reason to believe that mereological nihilism is true. The aim of this paper is to argue that color realism is false by showing that mereological nihilism is true. Consequently, mind-independent color properties possessed by ordinary objects don't exist. Hence, there is good reason to deny color realism. The astute reader will notice, however, that it is technically incorrect to say that "I" am not a color realist. The correct way to make this point would be to say that the simples arranged "I-wise" are not a color realist. After presenting the main argument against color realism I will consider similar challenges and argue that such worries are benign.

October 21, 2016

Presenter: Joseph Dante

Title: "Contra Sound as Disturbances."

Abstract:

I will be offering various considerations that call into question Casey O'Callaghan's recent work on Sounds. O'Callaghan argues that sounds are disturbance events. I will argue that O'Callaghan fails to meet his own desiderata for any adequate theory of sounds, and further that his arguments that sounds cannot occur in a vacuum fail. He wants his theory to be neutral with respect to the proper metaphysics of events, yet he also wants sounds to be causally powerful. I will argue that either his theory allows for sounds to be epiphenomenal or else he must be committed to the thesis that every event has only one cause (namely, he would have to be committed to a controversial theory of event individuation). Either way, his own desiderata are not met. Further, he argues that because there are no audible qualities in a vacuum, there is no sound in a vacuum. I will point out flaws in this way of arguing. Namely, the premise "If there are no audible qualities in X then there are no sounds in X" will either commit O'Callaghan to saying that there are no sounds in places where his theory should allow them, or else leave room for sounds existing in vacuums.

October 7, 2016

Due to the Speaker's Series, the Graduate Student Research Colloquium is cancelled this week. The guest speaker is Alex Rosenberg, his topic is "The Program of Strong Scientism and its Challenges". Click here for more information

September 30, 2016

Presenter: Zachary Garrett

Title: "The Epistemic Theory of Vagueness."

Abstract:

Epistemicism is a theory of vagueness that makes two claims: (i) all vague predicates have sharp cutoff points and (ii) for any borderline case we cannot know whether the predicate applies or not. The primary proponents of epistemicism are Timothy Williamson and Roy Sorensen. Williamson, unlike Sorensen, attempts to give an account of how predicates get their sharp borders. He claims that our uses of predicates set the cutoff via some very complicated procedure. He does not spell out the procedure itself. The plausibility of our uses setting unique cutoff points for vague predicates has been questioned extensively. Even attempts to set the cutoffs via other means have come up short. As for Williamson's version of (ii), he explains our limited knowledge by appeal to a margin for error principle. Essentially, we lack knowledge in borderline cases because we could easily have been wrong. Sorensen thinks that Williamson is misguided in his efforts to give an explanation of the procedure that sets sharp borders. Instead, Sorensen argues that we are forced to accept the existence of sharp borders by virtue of the fact that the sorites argument is invalid. He claims that this is enough reason to accept sharp borders. Sorensen explains our ignorance as a result of a lack of truthmakers for propositions about borderline cases. Since the propositions are not linked to the world in any way, we cannot come to know them. I argue that Sorensen's explanation of our ignorance is worse than Williamson's. Since Williamson's explanation of how vague predicates get their borders is problematic, I consider a hybrid view that utilizes both Sorensen's optimism about the existence of cutoffs and Williamson's account of our ignorance. I finally argue that this account also fails because it gets the direction of explanation between logic and the phenomenon of vagueness wrong.

September 23, 2016

Presenter: Jason Lemmon

Title: "Aristotle on Non-Contradiction: Logical, not Metaphysical"

Abstract:

In book IV, chapter 4 of the Metaphysics, Aristotle defends the principle of non-contradiction (PNC) -- namely, that it is impossible for something both to have some feature and to not have that feature, at the same time and in the same respect. Edward Halper, among others, denies that Aristotle is concerned to defend PNC. He complains that many philosophers have treated Met. IV 4 "as an island of logic in a sea of metaphysics." He argues that Aristotle's project in IV 4 isn't to defend PNC; rather, PNC is being used by Aristotle as an assumption in an argument for the conclusion that beings have essential definitions. Thus, says Halper, Met. IV 4 consists of metaphysics, not logic, and should be seen as continuous with Aristotle's main concerns in subsequent books of the Metaphysics. I argue that Met. IV 4 is indeed an island of logic in a sea of metaphysics. I offer four main arguments against the metaphysical reading (and the claim that PNC functions as a premise). One of my arguments, for example, includes the claim -- borrowing from Jonathan Lear, and ultimately Carroll -- that PNC, as a principle of inference (acceptance of Fx is all it takes to reject ~Fx), can no more constitute a premise in an argument than Modus Ponens can.

September 16, 2016

Presenter: Aaron Elliott

Title: "Avoiding Bruteness Revenge."

Abstract:

In this paper I examine Tristram McPherson's presentation of the supervenience challenge for non-naturalism in metaethics, and focus on the issue of brute necessary connections. The Challenge is that non-naturalists must either: i. allow that the supervenience of the normative on the natural entails an unexplained necessary connection between distinct existences; ii. accept an explanation for the necessary connections that is committed to naturalism; or iii. reject supervenience. I explain why McPherson concludes that any non-naturalist explanation for supervenience must rely on positing some further brute necessary connection, and therefore makes no progress towards discharging the explanatory burden. I then argue that McPherson's account conflates two kinds of bruteness, and show that in light of this distinction explanatory progress is possible. There are brute necessary connections and there are brute absences, and commitment to each is a cost to a view. Due to this distinction, I replace Hume's Dictum with a principle against positing brute impossibility, because this better captures the concern over both kinds of bruteness. We can decrease the amount of bruteness a non-naturalist is committed to by eliminating brute connections without positing further brute connections or additional brute absences. This reduces the cost of supervenience to non-naturalism, and makes explanatory progress, even if some commitment to brute absence remains.

September 9, 2016

Presenter: Kevin Patton

Title: "The Psychology of Skepticism."

Abstract:

The thesis of this presentation will be mostly fragmented. The idea behind this presentation will be to frame the first (eventual) chapter of my (eventual) dissertation. The goal/project for the first chapter will be to recast the challenge of epistemic skepticism in positive, rather than negative terms. Nearly every author I have surveyed addresses epistemic skepticism as a challenge that needs answering. This has the effect of producing combative attitudes toward both skepticism and the core issues the skeptic is focused on. I want to view skepticism as a welcomed friend, not as a foe to be vanquished. The skeptic is someone who desires certainty and any theory of knowledge which does not produce such certainty is faulty - or so says the skeptic. This reframing of the debate in positive terms will help me address some naturalized epistemologists who have attempted to refute the skeptic (probably chapter 2 of the dissertation?)

September 2, 2016

Presenter: Adam Thompson

Title: "How Anti-Humeans Keep the Blame in Blame (and Why Humeans Cannot)."

Abstract:

Recently, several accounts have emerged on which blame is neither confined to the emotional nor always affectless. These accounts seem antithetical to popular Reactive Attitudes Accounts on which blame is constitutively tied to certain emotions. This essay shows that only a subset of those emerging accounts run contrary to Reactive Attitudes Accounts. It primarily argues that those accounts of blame's nature that appeal to the Humean idea that cognitive states cannot motivate absent aid from independent desire should be rejected. Thus, if we reject Reactive Attitudes Accounts of blame, we should adopt an anti-Humean construal of blame. The key idea is that only anti-Humean accounts capture the essence of the reactive emotions. Hence, only anti-Humean accounts keep intact the aspects of the reactive attitudes that render them paradigmatic of blame. In other words, it shows how anti-Humeans keep the blame in blame and explains why Humeans cannot.

Spring 2016

April 29, 2016

Presenter: Christopher Richards

Title: "Grounding and Univocity."

Abstract:

In this paper, I argue against what I call the Koslicki-Wilson objection to grounding.

April 15, 2016

Presenter: Joey Dante

April 8, 2016

Presenter: Alfred Tu

Title: "Grounding and Primitiveness."

Abstract:

Grounding has become a central topic of metaphysics in recent years. According to Daly, various theories of grounding share some general features: grounding is intelligible, grounding is primitive, and grounding is useful. Some grounding skeptics argue against grounding theories, claiming that these general features do not in fact apply to grounding. This paper has two parts. The first part raises my concerns about the grounding skeptics' strategies and explains why some grounding theorists claim grounding is primitive. In the second part, I argue that Jonathan Schaffer's novel contrastive account of grounding is not well integrated with his general claims about the grounding relation. It seems implausible to accept contrastivity – an unorthodox and counterintuitive formal property – as a formal property of a primitive notion. Therefore, either Schaffer cannot take grounding as a primitive notion or grounding does not have contrastivity.

April 1, 2016

Presenter: Lauren Sweetland

Title: "A Question of Justice for Indigenous People and Environment."

Abstract:

Principles of climate mitigation in environmental ethics often draw either on considerations of fairness and forward-looking concerns, or on justice and backward-looking concerns. That is, according to some theorists, considerations of the current distribution of climate benefits and burdens are foremost, while others take repairing historic wrongs as paramount. Some theorists integrate considerations of fairness and justice to formulate hybrid climate principles. Such an integrative approach is promising particularly in the context of environmental harm to indigenous subsistence peoples, who are among those suffering the most from climate change. I argue that existing integrative climate principles tend not to sufficiently emphasize considerations of backward-looking justice. This is a problem for indigenous peoples seeking reparations for environmental harm and violations of their human rights, according to Rebecca Tsosie. I argue that the current climate situation facing some Native people is unfair according to Rawls' second principle of justice. In addition, the situation is unjust, as indigenous people suffer from emissions by others and few attempts are made at reparations. Thus, Rawlsian fairness combined with reparative justice provides a fitting theoretical framework. I conclude that an acceptable climate principle will adequately integrate considerations of both fairness and justice, both forward-looking and backward-looking considerations.

March 18, 2016

Presenter: Zachary Garrett

Title: "The Structure of Higher Order Vagueness."

Abstract:

Take a sample of native English speakers and a line of men with differing numbers of hairs on their heads. Arrange the men in order by the number of hairs they have. Start with the man with 0 hairs and end with a man of 150,000 (50% more than average). Now, as you walk the English speakers along the line of men ask them to determine whether or not the man they are currently looking at is bald. At the two extremes we should expect the participants to confidently answer either bald or not. As we get closer to the middle, though, we expect that it will become more and more difficult for the participants to answer. There are some cases that are neither clearly bald nor clearly not bald. We call these borderline cases. Now, instead of asking whether some man x is bald we ask whether or not it is clear that he is bald. Just like how it is hard to tell when we move from bald to not bald, it is hard to tell when we move from clearly bald to borderline bald. The same thing can be repeated for the move from clearly clearly bald to borderline borderline bald. This procedure has led many to believe that there is a hierarchy of higher order vagueness.
This hierarchy has been objected to in two different ways. First, Mark Sainsbury and Diana Raffman have argued that higher order vagueness is incoherent. Sainsbury has argued that the hierarchy collapses into three clear sets, contradicting accepted features of vagueness, and that this failure of higher order vagueness reveals a problem with classical approaches to vagueness; he goes on to provide a non-classical analysis of vagueness. Raffman offers similar arguments and largely agrees with Sainsbury's analysis, though she goes on to argue for a way of making sense of higher order vagueness that is only uninterestingly hierarchical. Second, Susanne Bobzien argues for higher order vagueness but rejects the hierarchical view. Bobzien claims that vagueness just is higher order vagueness, so that if it is clear that A, then it is clear that it is clear that A, and if it is borderline that A, then it is borderline that it is borderline that A. She calls this view columnar higher order vagueness.
In this presentation I will defend hierarchical higher order vagueness. Sainsbury's and Raffman's arguments against higher order vagueness rely on what Bobzien calls in-between borderlineness, the understanding of borderline cases as classes falling in between the clear classes. I will follow Bobzien in arguing for undecidability borderlineness, on which borderline cases are ones that cannot be decided one way or the other. Under undecidability borderlineness, Sainsbury's and Raffman's objections can be handled. Next, I will argue that Bobzien's columnar higher order vagueness is built on the wrong understanding of clearness. Whereas Bobzien's understanding of clearness derives from how individual rational agents would respond to sorites paradoxes, she should instead look at how large numbers of rational agents would respond.

March 11, 2016

Presenters: Chris Gibilisco and Adam Thompson

Title: "Quiddistic Individuation Without Tears"

Abstract:

In a recent paper, Deborah C. Smith notes that quidditism comes in two varieties, I-quidditism and R-quidditism. I-quidditism is a view about how properties are individuated, while R-quidditism is a view about how properties may be recombined. In this paper we present and defend a novel version of I-quidditism that dodges the traditional problems facing other versions of I-quidditism on the market. Further, our view is extremely ecumenical: it is consistent with R-quidditism of all varieties, causal structuralism, immanent realism, and trope theory.

March 4, 2016

Presenter: Aaron Elliott

Title: "Why Non-naturalists Need Something Else to Explain Supervenience"

Abstract:

Non-naturalists are supposed to explain why the normative supervenes on the natural. Since supervenience is a phenomenon of property distributions, an explanation of supervenience requires an explanation of the distribution of normative properties. I provide a taxonomy of frameworks for such an explanation, organized by the elements the explanation appeals to. I then rule out options that employ only natural elements, on the grounds that they abandon the non-naturalist commitment, and options that fail to employ natural elements, on the grounds that they cannot explain supervenience. This leaves a core set of non-naturalist frameworks. I argue that the framework that employs only features of normative properties and features of natural properties fails, leaving only frameworks that also employ some element external to both normative and natural properties. I show this by considering two prima facie plausible models, identifying why they fail, and showing why these reasons generalize to other models in this framework.

February 26, 2016

Presenter: Shane George

Abstract:

Adaptive preferences arise when one changes one's goals based on events in one's life or on realizations of limitations (unexpected or otherwise). These adaptations potentially pose a problem for accounts of autonomy. While my inability to be a Jedi is likely an innocuous limitation that urges me to change my life-goal preferences, my being enslaved is not. Structuralist accounts of autonomy hold that the key to distinguishing innocuous from problematic instances of adaptation lies in the relational properties and contexts that cause these adaptations. Such accounts therefore cannot be value neutral, since they must evaluate those contexts. Proceduralists disagree and present value-neutral accounts of autonomy that locate the issue squarely in the processes one undergoes to achieve or maintain autonomy. Christman argues that, due to asymmetries in the social context of selected cases, structuralist solutions do not provide the correct answer to these problems. I will argue that Christman's account also fails, and that perhaps a stronger structuralist-inspired response is required.

February 12, 2016

Presenter: Christopher Stratman

Title: "A Critique of Moore's Anti-Skeptical Argument in 'Proof of an External World.'"

Abstract:

In this paper I will argue that a philosophically important distinction needs to be made between skeptical arguments that involve our everyday empirical knowledge of ordinary objects and those that involve ontological explanations of such knowledge. Once this distinction is established, I argue that the proponent of G. E. Moore's anti-skeptical argument, as it is presented in "Proof of an External World", faces the following dilemma: she can either (i) accept that the argument misses its intended target of ontological skepticism; or (ii) accept that it does engage with its intended target of ontological skepticism. If she chooses the first horn, then the argument fails to answer the skeptic's challenge insofar as the argument focuses on epistemic skepticism. If the second horn is chosen, then the conclusion goes far beyond the scope of its premises insofar as the conclusion is ontological in nature while the premises are epistemic in nature. In both cases, I argue, Moore's response to the skeptic's challenge is inadequate.
In Part 1, an exposition of G. E. Moore's anti-skeptical argument will be offered. This will be followed by a consideration of Anthony Rudd's distinction between epistemic and ontological skepticism, as well as its relevance for the traditional skeptic's challenge. Then, in Part 2, some common objections to Moore's anti-skeptical argument will be considered, as well as Peter Baumann's responses. There I examine Baumann's argument that the best response to these objections involves interpreting Moore as being focused on ontological skepticism rather than epistemic skepticism. Prior to concluding in Part 3, I offer a final objection grounded in the above dilemma, consider several potential problems with the dilemma, and defend a version of ontological skepticism regarding the external world in support of the above dilemma.

January 15, 2016

Presenter: Adam Thompson

Title: "Reasonable Fear Is Killing Justly and What You Should Do About It."

Abstract:

The primary justification offered for not indicting a police officer for murder in connection with an on-duty killing is that the officer reasonably feared for their life. Recently, the failure to charge on-duty police officers who kill non-white individuals with murder has sparked nationwide and worldwide outrage. Much of that outrage targets and rejects the claim that it was reasonable for the officer to fear that they, a fellow officer, or an innocent bystander would likely suffer serious bodily harm. This paper argues that the problem is not the truth-value of a claim of reasonable fear. Rather, the problem is that so many are apt to fear non-white individuals and so many others are disposed to accept as true the claim that the fear was reasonable. Here, I diagnose why, in the current U.S. climate, it makes sense (a) for so many to feel fear when encountering non-white individuals and (b) for so many to accept the reasonable-fear defense when police officers kill. I go on to offer strategies for positive change.