December 22, 2007

"God bless us, every one!"

Berit says I should get my ass back on my blog. So here's the update. Christmas in Jersey with the family. There are rumors of a reunion with the Brooklyn side. They usually like to add a little coffee to their sambuca, so I expect it to be colorful. Giving commentary about intuitionistic truth at the APA. Will post on that soon. Looking forward to catching up with friends in Baltimore, although I'll be pretty busy with my department's metaphysics hire. May spend some time in NY and then off to Tucson for what I expect to be a very intense ontology conference. Berit and I will ride horses across the desert in the rain and discuss our counterfactual theory of essential properties.

December 06, 2007

ANU offers to Schellenberg and Southwood

ANU recently delivered offers to Susanna Schellenberg (Pittsburgh) and Nicholas Southwood (ANU).

Schellenberg currently holds a postdoc at ANU. Her paper "The Situation-Dependency of Perception" is forthcoming in JPhil and "Action and Self-location in Perception" appeared in Mind. Her book in progress is titled Perception in Perspective.

Southwood currently holds a postdoc at ANU and was a Fulbright Scholar at Princeton University. His book Contractualism and the Foundations of Morality is forthcoming with OUP.

With the recent hire of Jonathan Schaffer, acceptances from Schellenberg and Southwood would really add to the strength of the department.

UPDATE: Schellenberg and Southwood have accepted.

November 18, 2007

Mindpapers


Dave Chalmers and his assistant editor, David Bourget, have launched Mindpapers, which is a bibliographic database of over 13,000 papers in the philosophy of mind. Dave discusses the new tools over at Fragments---among them, highly flexible search options and statistical information. For instance, based on Google Scholar data, we find the top 100 most cited works in Mindpapers.

Here are the top 10 most cited by philosophers:

1. [3060] Jerry Fodor (1983). The Modularity of Mind. [7.2b. Modularity]

2. [2608] Daniel Dennett (1991). Consciousness Explained. [1.4c. Dennett's Functionalism]

3. [2469] Gilbert Ryle (1949). The Concept of Mind. [4.3a. Logical Behaviorism]

4. [1974] Saul Kripke (1972). Naming and Necessity. [1.3c. Kripke's Modal Argument]

5. [1750] David Chalmers (1996). The Conscious Mind: In Search of a Fundamental Theory. [1.1a. Consciousness, General Works]

6. [1612] Daniel Dennett (1987). The Intentional Stance. [2.1b. The Intentional Stance]

7. [1451] Jerry Fodor (1975). The Language of Thought. [2.1a. The Language of Thought]

8. [1388] Jon Barwise & John Perry (1983). Situations and Attitudes. [2.3a. Information-Based Accounts]

9. [1266] Jerry Fodor & Zenon Pylyshyn (1988). Connectionism and cognitive architecture. [6.3a. Connectionism and Compositionality]

10. [1257] John Searle (1980). Minds, brains and programs. [6.1c. The Chinese Room]


Here are the top 10 most cited overall:

1. David Marr (1982). Vision. Freeman. (Cited by 5170) [7.4d. Levels of Analysis]
Defines computational, algorithmic and implementational levels.

2. James Gibson (1979). The Ecological Approach to Visual Perception. Houghton Mifflin. (Cited by 4886) [7.2e. Embodiment and Situated Cognition]

3. Antonio Damasio (1994). Descartes' Error: Emotion, Reason, and the Human Brain. Putnam. (Cited by 4670) [5.1c. Emotions]

4. Daniel Kahneman, Paul Slovic & Amos Tversky (eds.) (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press. (Cited by 3476) [7.2d. Rationality]

5. Jerry Fodor (1983). The Modularity of Mind. MIT Press. (Cited by 3060) [7.2b. Modularity]
Perception happens in informationally encapsulated, domain-specific modules. Central systems aren't encapsulated, and so may be impossible to understand.

6. Edwin Hutchins (1995). Cognition in the Wild. MIT Press. (Cited by 2764) [2.4h. Collective Intentionality]

7. Terry Winograd & Fernando Flores (1987). Understanding Computers and Cognition. Addison-Wesley. (Cited by 2756) [6.6. Philosophy of AI, Misc]

8. Daniel Dennett (1991). Consciousness Explained. Penguin. (Cited by 2608) [1.4c. Dennett's Functionalism]
Argues against the "Cartesian Theatre", advocating a "multiple drafts" model of consciousness. Presents a detailed model of processes underlying verbal report, and argues that there is nothing else (e.g. qualia) to explain.

9. Mihaly Csikszentmihalyi (1990). Flow: The Psychology of Optimal Experience. Harper & Row. (Cited by 2522) [8.3j. The Stream of Consciousness]

10. Gilbert Ryle (1949). The Concept of Mind. Hutchinson and Co. (Cited by 2469) [4.3a. Logical Behaviorism]

How do they compile such a massive resource, you ask? Apparently, the software harvests papers from JSTOR, other archives, and personal websites, and recognizes and parses bibliographic information. Then, if the program determines that an entry is highly likely to be relevant to Mindpapers, the entry is automatically categorized using further Bayesian methods. The statistical information is automated as well. Pretty cool!
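Out of curiosity, here is a minimal sketch of the kind of Bayesian relevance filter described above. It is purely illustrative (written by me in Python, with made-up training data) and is not the actual Mindpapers code: a naive Bayes classifier is trained on hand-labeled entries and then used to judge whether a newly harvested entry is likely relevant to the philosophy of mind.

import math
from collections import Counter

def train(labeled_docs):
    # labeled_docs: list of (text, label) pairs, label in {'relevant', 'irrelevant'}
    word_counts = {'relevant': Counter(), 'irrelevant': Counter()}
    doc_counts = Counter()
    for text, label in labeled_docs:
        doc_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, doc_counts

def classify(text, word_counts, doc_counts):
    # Return the label with the higher log posterior, using add-one smoothing.
    vocab = set(word_counts['relevant']) | set(word_counts['irrelevant'])
    total_docs = sum(doc_counts.values())
    best_label, best_score = None, float('-inf')
    for label in ('relevant', 'irrelevant'):
        score = math.log(doc_counts[label] / total_docs)  # prior
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            count = word_counts[label][word] + 1          # Laplace smoothing
            score += math.log(count / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

if __name__ == '__main__':
    training = [
        ("consciousness qualia perception intentionality", 'relevant'),
        ("modularity of mind connectionism cognitive architecture", 'relevant'),
        ("monetary policy inflation interest rates", 'irrelevant'),
        ("protein folding molecular dynamics simulation", 'irrelevant'),
    ]
    wc, dc = train(training)
    print(classify("the contents of perception and qualia", wc, dc))  # prints: relevant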

November 13, 2007

Ruth Barcan Marcus Wins Lauener Prize


The Lauener-Stiftung announces that the Lauener Prize for an Outstanding Oeuvre in Analytical Philosophy 2008 goes to Ruth Barcan Marcus (Reuben Post Halleck Professor of Philosophy, Emeritus, at Yale University).

The 3rd International Lauener Symposium on Analytical Philosophy held in honour of Ruth Barcan Marcus will take place in 2008 in Bern, Switzerland.

HT: Leiter

October 24, 2007

Arizona Ontology Conference 2008


Speakers Include
Berit Brogaard, ANU
Andy Egan, Michigan
Adam Elga, Princeton
Hilary Greaves, Rutgers/Oxford
Thomas Hofweber, UNC-Chapel Hill
Jenann Ismael, Arizona/Sydney
Robin Jeshion, UC-Riverside
John MacFarlane, UC-Berkeley
Daniel Nolan, Nottingham
Jill North, Yale
Josh Parsons, Otago
Joe Salerno, ANU/Saint Louis
Brian Weatherson, Rutgers

October 21, 2007

Conference Announcements


The Epistemology and Methodology of Jaakko Hintikka
a symposium
Roskilde University, Denmark
November 16-17, 2007

Speakers include
* Adam Didrichsen
* Vincent F. Hendricks
* Jaakko Hintikka
* Stig Andur Pedersen
* Ahti-Veikko Pietarinen
* Robert Stalnaker
* Frederik Stjernfelt
* Tim Williamson


Ontological Commitment Conference
Hosted by the Centre for Time
University of Sydney
November 30 - Dec 1, 2007

Speakers include
* Berit Brogaard (Missouri/ANU)
* Mark Colyvan (Sydney)
* Uriah Kriegel (Arizona/Sydney)
* Kristie Miller (Sydney)
* Luca Moretti (Sydney)
* Jonathan Schaffer (ANU)
* Amie Thomasson (Miami)


The first annual Midwest Epistemology Workshop
Northwestern University
November 30-December 1, 2007.

Speakers include
* Ernie Sosa
* Robert Audi
* Al Casullo
* Richard Fumerton
* Sandy Goldberg
* John Greco
* David Henderson
* Jennifer Lackey
* Matt McGrath
* Baron Reed.


Btw, the latest issue of The Reasoner is now online.

October 18, 2007

Conditionals at Konstanz




First Formal Epistemology Festival
Conditionals and Ranking Functions

at Konstanz University, July 28-30, 2008
Call for Papers here.

Special Issue of Erkenntnis
Conditionals and Ranking Functions
Guest editors: Franz Huber, Eric Swanson, and Jonathan Weisberg.
Call for submissions here.

Formal Epistemology Postdoc Positions
Project: Belief and Its Revision
1/2008 to 12/2012

October 13, 2007

Copenhagen Conference Pictures

1st Synthese Annual Conference


Pics by Brit Brogaard, Vincent Hendricks, Fenrong Liu and Joe Salerno

October 12, 2007

Lewis Conference (Day 3)

Day 3 of the conference began with an animated talk by Alan Hájek on whether formal methods are a boon or bane for philosophy. He also discussed whether we should be doing traditional epistemology or Bayesian epistemology. He argued both sides of both issues, and then painted Lewis' intellectual history as a synthesis of the two sides of these two disputes. It was quite a treat.

Great talks were also given by Ulrich Meyer, Vladan Djordjevic, and Rohit Parikh.

Photos to come, but a small sample appears over at Lemmings.

October 04, 2007

Lewis Conference (Day 2)

The talks today were on the semantics for conditionals. John Cantwell proposed a branching-time framework that aimed to unify our understanding of indicative and subjunctive conditionals. The variation in truth-value of corresponding indicative and subjunctive "Oswald sentences" is, on John's view, to be explained without positing a plurality of conditionals. The job can be done by tense and our understanding of open futures.

Hannes Leitgeb offered a probabilistic semantics for subjunctive conditionals. His very precise proposal (which I won't go into here) is a version of the thought that subjunctives are true just in case the consequent is sufficiently likely (in some objective sense) given the antecedent. By default Hannes rejects the strong and weak centering assumptions---respectively,

(A & B) --> (A []--> B), and

(A []--> B) --> (A --> B)

What this means is that, unlike on the standard semantics, we get the desirable outcome that the truth of A and B is not sufficient to imply a counterfactual dependence between A and B, and that the truth of A and ~B is not sufficient to undermine a counterfactual dependence between A and B. The actual world can be one of the exceptional worlds where what does occur is not highly likely to occur (and where what is highly likely to occur does not occur).

Hannes replaces the centering assumptions with weaker centering-like assumptions---viz.,
(T []--> (A & B)) --> (A []--> B), and

(A []--> B) --> (T []--> (A --> B))

I believe T is meant to be a tautology, and so, the following rough paraphrase can be given: the truth of A & B does entail A []--> B, when A & B is sufficiently likely on its own, and the truth of A & ~B entails the negation of A []--> B, when A & ~B is sufficiently likely on its own. Perhaps we can put it in something like Lewisian terms. The stronger of the two says that no world is as close to the actual world as are the very likely worlds; and the weaker thesis is that no world is closer to the actual world than are the very likely worlds.
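To make the centering point concrete, here is a toy model of my own (a crude sketch in Python, not Leitgeb's actual ranking-theoretic proposal) on which 'A []--> B' counts as true just in case the objective chance of B given A meets a threshold. At an unlikely actual world where A & ~B holds, the counterfactual still comes out true, so weak centering fails there, just as described above.

THRESHOLD = 0.9

# A toy chance distribution over three worlds, with the facts true at each.
worlds = {
    'w1': {'chance': 0.90, 'A': True,  'B': True},
    'w2': {'chance': 0.05, 'A': True,  'B': False},   # an unlikely A & ~B world
    'w3': {'chance': 0.05, 'A': False, 'B': False},
}

def prob(pred):
    return sum(w['chance'] for w in worlds.values() if pred(w))

def would(antecedent, consequent):
    # "antecedent []--> consequent" is true iff P(consequent | antecedent) >= THRESHOLD.
    p_a = prob(antecedent)
    return p_a > 0 and prob(lambda w: antecedent(w) and consequent(w)) / p_a >= THRESHOLD

A = lambda w: w['A']
B = lambda w: w['B']

actual = worlds['w2']                   # suppose the actual world is the exceptional one
print(actual['A'] and not actual['B'])  # True: A & ~B holds at the actual world
print(would(A, B))                      # True: P(B|A) = 0.90/0.95, above the threshold

So the truth of A & ~B at the actual world does not, by itself, undermine the counterfactual, which is the point of giving up weak centering.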



Photos:

1. Statue

2. John Cantwell

3. Hannes Leitgeb

4. Niels Bohr Mansion

Lewis Conference (Day 1)

Today began the 1st Synthese Annual Conference, Between Intuition and Logic: David Lewis and the Future of Formal Philosophy, which was hosted at the Honorary Niels Bohr Mansion in Copenhagen and organized by Johan van Benthem, Vincent Hendricks and John Symons.

John Collins started things off with his paper "Formal and Informal Models of Belief", in which he embraced a Lewisian theory of knowledge:

if S knows that X, then there is no uneliminated possibility that is very close to actuality and in which X is false.
He argued, among other things, that the threat of skepticism is not as ubiquitous as Vogel and Hawthorne suggest. The statistical information that some number n of otherwise healthy people die of a heart attack is not enough to make the world in which I die of a heart attack very close to the actual world. That's because people don't just die of heart attacks if absolutely nothing is medically wrong with them. Since the actual world is not one where, unbeknownst to me, there is something wrong with me, there is no very close world where I am one of the unlucky few to die in this way. So, my original ordinary knowledge claim about where I'll be tomorrow still stands. Of course, if it turns out that, unbeknownst to me and my doctors, there is something medically wrong with my heart, then my ordinary knowledge claim falters.

Alessandro Torza gave the most formal of the talks thus far, titled "How to Lewis a Kripke-Hintikka". He argued that BL (i.e., [quantifier] independence friendly modal logic) is more expressive than QML (quantified modal logic); there are modal notions (e.g., the notion of rigidity for general terms) that can be expressed by BL but not by QML. However, this portion of BL cannot be translated into counterpart theory, and so there is reason to doubt that counterpart theory is adequate to model our modal intuitions.


Brit and I gave a version of our paper "Remarks on Counterpossibles", in which we motivate and defend a modified version of Daniel Nolan's impossible worlds account of counterpossible conditionals.


Laurie Paul argued that the trumping examples, which forced Lewis to give up his old theory of causation (see Schaffer's famous paper), do not obviously show what they were intended to show. The thrust of the objection was that until we clarify what it is to "interrupt a causal process", it is unclear how to interpret the trumping examples. The military handbook tells us that a Major's orders trump the Sergeant's orders, but how do we get from there to a case of *causal* trumping? Merlin's (and not Morgana's) spell is stipulated to be the consequential one of the two spells. But how do we get from there to the claim that Merlin's spell, and not Morgana's, caused the outcome?


September 22, 2007

David Lewis Conference

SYNTHESE ANNUAL CONFERENCE

Title / Between Logic and Intuition: David Lewis and the Future of Formal Methods in Philosophy

Synthese hosts its first annual conference at the Carlsberg Academy in Copenhagen, October 3-5, 2007. The conference is sponsored by PHIS (the Danish Research School in Philosophy, History of Ideas and History of Science) and Springer.

Abstract / David Lewis is one of the most important figures in contemporary philosophy. His approach balances elegantly between the use of rigorous formal methods and sound philosophical intuitions. The benefit of such an approach is reflected in the substantial impact his philosophical insights have had not only in many core areas of philosophy, but also in neighboring disciplines ranging from computer science to game theory and linguistics. The interplay between logic and intuition to obtain results of both philosophical and interdisciplinary importance makes Lewis' work a prime example of formal philosophy. Lewis' work exemplifies the fruitful interplay between logic and intuition that is central to contemporary philosophy. This conference serves as a tribute to Lewis and as a venue for addressing questions concerning the relationship between logic and philosophical intuition.

This first Synthese Annual Conference is the venue for discussing the future of formal methods in philosophy.

Invited Speakers
John Collins, Alan Hájek, Hannes Leitgeb, Rohit Parikh and L.A. Paul

Speakers
Berit Brogaard and Joe Salerno, John Cantwell, Vladan Djordjevic, Ulrich Meyer, Neil Tennant

Program Committee and Conference Chairs
Johan van Benthem, Vincent F. Hendricks, John Symons (SYNTHESE) and Stig Andur Pedersen (PHIS)


Conference Manager
Pelle Guldborg Hansen

Registration
Please write conference manager Pelle Guldborg Hansen to register:

Department of Philosophy and Science Studies
Roskilde University, P6
P.O. Box 260
DK4000 Roskilde, Denmark
Phone: (+45) 4674 2540
Cell: (+45) 2334 2175
Fax: (+45) 4674 3012
Email: pgh@ruc.dk

A conference fee is to be paid in cash upon final registration (Wednesday, October 3, 2007). The conference fee is 150,00 Danish kroner a day, so participation for the entire duration of the conference (Wednesday, October 3 – Friday, October 5, 2007) is 450,00 Danish kroner. The conference fee covers the lunches with free beverages, the conference booklet, and tea and coffee during the breaks. NOTICE: Please remember the exact amount. Deadline for registration: Monday, October 1, 2007. If email is used, include ‘SAC 2007’ in the subject line. All questions pertaining to registration and accommodations should be directed to Pelle Guldborg Hansen.

Conference Website:

http://www.springer.com/west/home/philosophy?SGWID=4-40385-70-35761018-0

Academic Satire by Emily Beck Cogburn

The Breaking of Things is a novel by Emily Beck Cogburn. The story is told from the point of view of an overweight male philosophy professor stuck in a dead-end job in Louisiana. Below is an excerpt from Chapter 1, which is available in its entirety here.

... The rest of the philosophy faculty ignored Brady’s tantrum. Near the door, two men whispered together, their desks almost touching. One had a long Santa Claus beard, but his mad-scientist eyebrows and skeletal frame would have frightened the bravest of children. The other man’s distorted face looked like it had been shaped from modeling clay by a careless child. I wondered if the obstetrician had used forceps to pull him out of his mother. He leaned forward and said something in a French accent. Santa Claus stroked his beard and answered inaudibly.

Directly in front of the whisperers, a short, trim Indian man stood and walked across the room to the windows. He grabbed the sash and pulled up. The window didn’t budge, but he kept trying, grunting each time he yanked on it. Even from across the room, I saw his forehead vein bulge from the exertion. I expected him to break the glass with his fist, or maybe throw a chair through. “Sack of shit janitors,” he muttered, returning to his seat.

After glancing around to make sure no one was watching, I slipped off my tie and hid it under the campus newspaper. Just as I unfastened the top button of my shirt, Richard Matthews, the Chair of the department, looked over at me. I feigned a stretch and smiled, and he quickly went back to shuffling through a folder of papers. My first day on campus, Matthews had introduced himself and then stood there, as though waiting to take my order. Restraining myself from saying, “I’ll have a cheeseburger and a Coke, please,” I’d talked inanely about how I loved Louisiana so far. He’d nodded, finally shuffling off when the secretary called him.

The only other person I could positively identify was Jane Campbell, the lone woman in the department. Sitting behind Matthews, she tapped a Birkenstock on the tiled floor and read Harper’s Magazine. Her gray hair was cut short, and she wore a dark cotton dress that reached her ankles. She glanced at her watch, shaking her head as though she’d long ago become resigned to this kind of irritation.

I leaned forward and pulled at the back of my shirt, trying to let in some air. Sweat dripped from my scalp and ran down my face like tears. I was trying to remember if I’d ever felt hotter when someone said, “Don’t panic until they start handing out the soap.”

I laughed, the noise echoing in the quiet room. Here we were, effectively trapped in a steaming room, waiting for something. Was the university administration planning to gas the whole philosophy department? Get rid of the undesirables?

The silence grew more pronounced as Santa Claus and the Frenchman stopped whispering. Six philosophy professors looked at me as though I’d broken a sacred pact. I slid down in my chair as much as my bulk would allow, trying to disappear.

The joker lifted his desk by its arms and turned it until he faced me. I wondered why I hadn’t noticed him before. He wore wire-rimmed glasses and a yellowing dress shirt. He wiggled the bare toe sticking out of his canvas sneaker and assessed me for a few seconds. “After class, Miller.” Smiling conspiratorially, he returned his desk to its original position.

The other faculty gradually returned to their diversions. The last one to stop staring at me was the Indian man. If he’d been a cartoon character, smoke would have been coming out of his ears.

Forcing myself to look away, I picked up The Daily Crawfish. The main article was a list of tips for incoming freshman: wear both straps of your backpack, leave your tube tops at home, and take notes in every class. Good advice all around, I thought. In this heat, I could almost understand the tube tops, but the thought of exposing that much of my flabby midriff was frightening. This summer, I’d started wearing a T-shirt while swimming. Diet starts tomorrow, I reminded myself. No more cheeseburgers.

The Indian man grumbled and fanned himself with a yellow folder. I was sure the sweat stains under my arms had met in the middle of my back. As I contemplated unfastening another button on my shirt, two overweight women in cheap suits entered the classroom. Holding cardboard boxes, they flanked the door while a tall, lanky man entered. Everyone stood up. I tried, but only managed to crush my gut against the desktop.

The man wore an obviously expensive suit that provided an arresting contrast to his polka dot bow tie and goofy smile. He waved his hand to indicate that we should sit. As he strode toward the front of the room, he nodded to Matthews, but said nothing. I wondered if he remembered our Chair’s name. Behind the lectern, he surveyed the sweating philosophers.

“My friends,” he began, “as President of Louisiana A&M, I wanted to come and show my gratitude in person for all the great work you do here. It is very important to me to meet you in small groups like this. To meet you all and know those who work so hard to make this institution what it is.

“Too often administrators don’t show their appreciation to the faculty. We think we are too busy to make the little gestures that mean so much. You all work hard. Today I want to give credit where credit is due. I want each of you to realize how important you are. The classes you teach enlighten our youth— our future! Your research ennobles us all. You are what this is all about. Without faculty, what is a university? Just a bunch of buildings.

“Nothing I can do for you is too much. Whatever I do will be too little. But today I just want to make a small gesture of my appreciation. Psychology is a very important subject.” At this point one of the women set down her box, came a few steps closer and staged-whispered, “Philosophy.”

“Philosophy. Of course,” the president continued. “Always been one of my favorites. And we have a first-class department here. You are all doing a bang-up job. I say, keep up the good work! Keep thinking! And here’s to that.”

He nodded and watched, smiling, as the women distributed a shiny apple to each of us. “May they give you energy to pursue the truth,” he said as he left. His entourage trailed behind like two dowdy bridesmaids. ...

September 12, 2007

Links and Photos

Duncan Pritchard has posted most of the papers and commentaries from the Social Epistemology Conference held at the University of Stirling.

Still waiting on Declan Smithies' photos from the Value of Knowledge Conference held at the Vrije Universiteit Amsterdam.

PHIBOOK - The Yearbook for Philosophical Logic now has a web presence.

Among my favorite philosophy blogs is Jon Cogburn's. Here's a recent dark post about the philosophy job market for newbies.

In case you haven't seen the Philosophy Journal Information wiki, go here. It includes info on editorial practices, response times, backlogs on publishing, policies on providing comments to authors, etc.

...and last but not least, some photos from a party at Declan Smithies' bitchin' crib.

September 09, 2007

Philosophical Progress (Bryan Frances)

Intelligent, smart-assed freshmen will often say in class that philosophy never settles anything or answers any questions. This is a good challenge, I think. I tell them that it’s easy to prove that there is tremendous philosophical progress. Moreover, I claim that they will experience it for themselves over the next fortnight, for that is all the time it takes for the proof.

I start my freshman course by going over epistemological concepts. After several centuries of experience, I know that in their minds ‘knowledge’, ‘belief’, ‘evidence’, ‘justified’, ‘true’ and related concepts form one big messy conceptual structure. They form one big blob. After two weeks, the blob has taken form: there are now several blobs, and they have some idea how the blobs may be related. It’s as though things are starting to come into focus; there are many more pixels per square inch. I don’t care what anyone else says: this is genuine and personally significant philosophical progress.

I tell them about John Madden. He was the head coach of the Oakland Raiders (American) football team for many years. Then he became a “color” commentator. He started out using terms like ‘force’, ‘momentum’, and ‘energy’ pretty much as synonyms. Then some incredibly geeky physics types wrote him letters and tried to straighten him out on how those are different concepts and are related in certain interesting ways. (This was back when I actually watched TV, especially sports.) He tried to mend his ways, presenting his new understanding on the air like a brave soul.

Something similar happens in students in a philosophy course. I don’t care what anyone else says: this is genuine and personally significant philosophical progress.

I remember talking with Adam Pautz when he was an undergraduate at Minnesota (he’s now an excellent philosopher at Texas). I was a graduate student and he wanted to diss David Lewis a bit. He accosted me in the hallway and started to talk about the feebleness of David Lewis’s system of laws, counterfactuals, possibilia, etc. I was blown away of course; how can an undergraduate at Minnesota be so fucking smart and well read? I tried to defend Lewis by saying that he had the virtue of showing how many seemingly disparate concepts are really related in fascinating ways. If I remember right, Adam thought this was an interesting response.

There are other forms of philosophical progress. For one thing, there are good questions that only philosophical reflection reveals. Who would have thought of all the good questions regarding truth, vagueness, and material constitution without the liar, sorites, tibbles, and statue-clay paradoxes? In these cases philosophy reveals problems that science and other intellectual pursuits miss entirely.

I personally think that these forms of progress are tremendously valuable. But what other forms of philosophical progress are there? Maybe things of the form: if you buy into theses X, Y, and Z, then the arguments and evidence say you should also accept theses A and B.

But is there more than this?

September 02, 2007

Expressivism Conference Photos

Expressivism, Pragmatism and Representationalism Conference 2007

Update: Chalmers has thoughts about the conference here, and more pics here, and Brit has pics here.

August 29, 2007

Expressivism, Pragmatism, and Representationalism

Busy conference season in Australia. Today ends the Evolution and Cooperation Conference at ANU, organized by Richard Joyce & Kim Sterelny.

Speakers:
Steve Downes (University of Utah); Christian List (LSE); Ben Kerr (University of Washington); Timothy Ketelaar (University of New Mexico); Matteo Mameli (Cambridge); Fiery Cushman (Harvard); Peter Godfrey-Smith (Harvard); Kai Spiekermann (LSE); Brett Calcott (ANU)

Day 1 included Christian List, who had neat empirical data on the Condorcet jury theorem as applied to deliberative democracy. Another great talk was Fiery Cushman's, which included loads of data about our judgments regarding moral consequences and intentions. And there were others.

Ah, I took my first spin around the block on the left side of the road. It was at night, and some street signs were apparently missing, but overall the episode went down without a hitch. Driving on the left is easier than I thought. Just take everything you do with your left hemisphere and replace it with everything you do with your right hemisphere, and vice versa. The drive was in part preparation for my trip to Sydney for the Expressivism, Pragmatism and Representationalism conference, hosted by the Centre for Time. I didn't drive to Sydney (Chalmers and Fish took care of business), but I did get stuck with the honor of driving around to find a parking spot in Potts Point. No problemo, but I think I'll stay on my bicycle for a while.

Arrived during Simon Blackburn's talk. He waxed broadly about the ins and outs of expressivism. Also caught Jamie Dreier's talk on the difference between irrealism and realism, and the pitfalls of trying to articulate said difference. This was also our three-and-a-half-year-old daughter's first time attending philosophy talks. She had a blast. She wanted to know what order-of-explanation has to do with the reality of the subject matter. Will check out more talks tomorrow. Still don't know how long we are staying in the fabulous city, but won't be upset if we leave later rather than sooner.

August 28, 2007

Most Counterfactuals are False

That's the title of Alan Hájek's very interesting paper, which can be found on his website. I mentioned the main argument in a post last January:

1. 'Might' and 'Would' are dual operators. So "If A were the case, B might not be the case" entails "It's false that if A were the case, B would be the case".

2. Indeterminism (in particular, chanciness) and indeterminacy (in particular, vagueness) in all the interesting cases underwrite a 'might not' claim. For instance, any chance of both A and not-B (no matter how small) underwrites the 'might not' claim, viz., "If A were the case, B might not be the case".

Hence, in all the interesting cases, it's false that if A were the case, B would be the case. So most (uttered) counterfactuals are false.
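For readers who like the argument in symbols: using the box-arrow notation from other posts here, and writing '<>-->' for the 'might'-counterfactual (this is my gloss, not Hájek's own formulation), the two premises and the conclusion are:

(A <>--> ~B) entails ~(A []--> B)   [duality of 'might' and 'would']

any nonzero chance of (A & ~B) underwrites (A <>--> ~B)   [chanciness/vagueness premise]

Hence, wherever there is any such chanciness or vagueness, ~(A []--> B).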

Alan rejects the contextualist response, but I won't develop his arguments. I'll just mention here what I think the contextualist should say. Counterfactuals are context sensitive. Whether apparently bizarre {A, not-B} possibilities are sufficiently close depends on the context. In conversational contexts that involve discussion of quantum indeterminacy, etc., such possibilities are relevant and close. Hence, the would-claim, "If A were the case, B would be the case", is false. But in ordinary conversational contexts, the apparently bizarre {A, not-B} possibilities are irrelevant and remote. Hence, the might-claim, "If A were the case, B might not be the case", is false. In neither context are both the would-claim and the might-claim true. Luckily for us and the would-claims we ordinarily use, quantum indeterminacy is rarely seriously entertained.

August 23, 2007

Reasons, Reasoning and Rationality

Today at ANU concluded the conference Reasons, Reasoning and Rationality: Themes from the Work of John Broome. Speakers: Jamie Dreier (Brown), Nic Southwood (RSSS), Andrew Reisner (McGill), Geoffrey Brennan (RSSS), Garrett Cullity (Adelaide), Daniel Star (CAPPE), Wlodek Rabinowicz (Lund), John Broome (Oxford). I won't try to do justice to all of the interesting papers, but instead will touch on a couple.

Jamie Dreier examined tensions between various formulations of two principles, which he aimed to reformulate and vindicate:

(Buck-Passing) For something to be good is for it to have properties that provide sufficient reason to choose, prefer, ... or admire it.

(Subjectivism) R is a reason for S to Phi iff R explains why Phi-ing promotes something S wants.

We had the most fun thinking about the Narcissus Bomb Example:

Philosopher chemists at Washington University have invented a Narcissus Bomb. Once triggered, this bomb will explode unless it is in the presence of someone who admires it enormously. You are now in the presence of a triggered Narcissus Bomb and nobody else is in the room.

This part of the talk was designed to show the limitation of Buck-Passing. The explosive potential of the bomb provides sufficient reason to admire it, but the bomb is evil. Jamie relied here on a fix by Nomy Arpaly, which hypothesizes that the bomb provides sufficient reason to make it the case that you admire it, but not sufficient reason to admire it.

I don't know yet how to think about reasons, but arguably a reason for Phi-ing is a reason for Psi-ing when Phi-ing entails Psi-ing. But then, since 'makes it the case' is factive, 'making it the case that you admire the bomb' entails 'you admire the bomb'. Hence, by the above closure principle, a reason for the former is a reason for the latter, and we're back to the original problem.

Another exciting paper was by Wlodek Rabinowicz, in which he argued that incommensurability is possible if there is vagueness. Incommensurability obtains when two things x and y are such that neither is better than the other, yet they are not equally good. Wlodek was responding to an argument that incommensurability is not possible. The argument depended on the following symmetry claim: if it is indeterminate that x is better than y, then it's indeterminate that y is better than x.

Berit had a great counterexample to symmetry. Consider: x seems to have a temperature of absolute zero. It's determinate that y doesn't seem colder than x, because on all sharpenings of the vague predicate 'seems absolutely cold' it is true that y doesn't seem colder than x. After all, y can't seem colder than absolute zero. But it's indeterminate that x seems colder than y, because on some sharpenings of the predicate 'seems absolutely cold' y doesn't seem absolutely cold (and on those sharpenings x, which does, seems colder than y), while on other sharpenings y does. Hence, symmetry fails.

August 19, 2007

Objects and Arrows

Objects and Arrows is a new blog by John Symons, editor-in-chief of Synthese. The blog provides updates from the frontier on emergence, epistemology, philosophy of psychology and related matters.

August 14, 2007

On to Oz


Today Berit and I enjoyed our first full day at the Australian National University. We got a little work done in the morning and listened to Wlodek Rabinowicz's very interesting talk, "Dutch Books Against Groups and Jury Voting", which, as you might surmise, was about exploitation strategies against rational subjects in betting and jury situations. Lessons were drawn about the limitations of betting interpretations of probability. Then we met more of the department and its visitors over the traditional afternoon tea on the outside balcony. The sounds of the local birds are pretty amazing. I expect soon to walk with my daughter to the nearby hills where the kangaroos are rumored to run rampant.

July 28, 2007

Sieg's Survey

As Gregory Wheeler notes at Certain Doubts, Wilfried Sieg, Christian Schunn and Melissa Nelson are conducting a survey for those teaching introductory logic courses. It's part of a project investigating the effectiveness of different ways of teaching such a course. I just took the survey and it consumed only a few minutes of my time.

July 21, 2007

Oswald was Indeed CIA!


I just stumbled on a couple of interesting declassified documents floating around the internet, and I couldn't resist sharing them with my readers. The first was written by the director of the CIA in 1964 and asserts that Lee (Harvey) Oswald did recon for the agency in the late 50s and reveals in the last paragraph that Oswald's famous 1959 "defection" to Russia was an agency assignment. Oh, that's nice. So when exactly was Oswald NOT on assignment?

The second document includes an attached 1947 memo that links Oswald's assassin Jack Ruby (formerly, Jack Rubenstein) to Congressman Richard Nixon as an informant of some sort. WTF?

Don't know when these docs were declassified or even if they are authentic, but here they are.

UPDATE: Here's a blog devoted to questions about the authenticity of the first document, also known as the McCone-Rowley Document.

Reasoner 1(4)

The fourth issue of the Reasoner is now available online. Here are the contents:


  • Interview with Brendan Larvor
    David Corfield

  • Conceivability, Possibility and Counterexamples
    Anand Jayprakash Vaidya

  • A Counterfactual Account of Essence
    Berit Brogaard and Joe Salerno

  • Knowledge, Truth and Justification in Legal Fact Finding
    Déirdre M. Dwyer

  • The Principle of Agreement
    John L. Pollock

  • A Note on Kripke's Puzzle about Belief
    Christian Constantinescu

July 12, 2007

Oneness


Today Knowability turns one. We've had just under 20,000 visits, 14,000 uniques (first-timers or those returning after an hour), and 8,000 first-time visitors (based purely on a cookie). Contributions have been made by guest authors Bryan Frances and John Greco, and much support was received from Brit over at Lemmings and from other weblog administrators. Thanks, guys.

Epistemology and Methodology of Jaakko Hintikka

The Danish Research School in Philosophy, History of Ideas and History of Science is sponsoring a Hintikka Symposium
November 16-17, 2007

Roskilde University, Denmark

Invited Speakers

* Adam Didrichsen
* Vincent F. Hendricks
* Jaakko Hintikka
* Stig Andur Pedersen
* Ahti-Veikko Pietarinen
* Robert Stalnaker
* Frederik Stjernfelt
* Tim Williamson

Program and Organizing Committee

* Vincent F. Hendricks
* Frederik Stjernfelt
* Stig Andur Pedersen

July 08, 2007

Remembering Justin D'Arcy Isom

I met Justin when we were both graduate students at the Ohio State University. He received his B.A. in philosophy at the University of Texas. He traveled much of the world alone and his ordinary descriptions of the world around him were often worth more than a thousand words. This was sort of remarkable since Justin had lost his sight at the age of one to a retinal nerve cancer. What I didn't realize at the time was that Justin's overall condition would degenerate. He died on April 23, 2007.

Justin and Neil Tennant devised a braille-like proof system so that Justin could more fully participate in discussions about (intuitionistic) proofs. Neil shares his memory of Justin in the department newsletter, Logos.

July 06, 2007

Second-order logic and Gödel’s first incompleteness theorem (Frances)

That’s a very impressive title for a blog entry, but I promise that things will not get very complicated in what follows. For I am no logician. I’m not even a philosopher of logic. So, I don’t know much about logic. But I know just enough to be confused about something having to do with second-order logic and Gödel’s first incompleteness theorem.

Let L be the formal first-order language with a name for zero and function symbols for the successor function, addition, and multiplication (and no other non-logical symbols). So L is a pretty simple formal language. Let N be the interpretation of L that has as its domain the set of natural numbers {0, 1, 2, …} and assigns zero to the name ‘0’, addition to the function symbol ‘+’, etc. So N is the natural way to interpret L. Arithmetic is the set of sentences of L that are true under N. So, arithmetic is a set of sentences of a formal language, where each sentence is most naturally interpreted as an arithmetic truth. This will be an infinite set.

Arithmetic counts as a theory of L because it is “closed under logical consequence”, which means that any sentence of L that is entailed by the sentences in arithmetic is in arithmetic. Arithmetic contains all its logical consequences; that’s what makes it a “theory”.

Now a theory T in language L is decidable when there is an algorithm for deciding whether any given sentence of L is a theorem (member) of the theory. So a theory T in L is decidable when there is some algorithm that when fed a sentence S of L will tell you in a finite number of steps whether or not S is in T.

Roughly put, a theory T is axiomatizable just in case everything in T is a logical consequence of some nice subset of T. The nice subset of T generates, so to speak, everything in T (just like how the central Euclidean claims generate all the truths of plane geometry). The member sentences of the nice subset are axioms of T.

The subset could be infinite! But even so it has to be nice. That means: there has to be an algorithm that when fed a sentence of L will tell you in a finite number of steps whether or not the sentence is one of the axioms. That is, the set of axioms has to be decidable.

The famous consequence of Gödel’s first incompleteness theorem is that arithmetic isn’t axiomatizable. This is surprising! You’d think it wouldn’t be that hard to write down a nice set of axioms (say, the Peano axioms) for generating all the truths of arithmetic. But it can’t be done.

Now suppose we add second-order quantifiers and variables to L. Now let arithmetic* be the set of first-order and second-order sentences of L that are true under N. Arithmetic* contains everything in arithmetic plus some more arithmetic truths (the second-order ones).

The cool thing, in my opinion, is that arithmetic* is axiomatizable. You can write down one long sentence that generates, through logical consequence, everything in arithmetic*. So when you hear someone say ‘Gödel famously showed that arithmetic isn’t axiomatizable’, you should say to them ‘Wait a damn minute buddy. First-order arithmetic isn’t axiomatizable, but second-order arithmetic is’. So you can write down just one relatively simple sentence that generates all the (first-order) truths of arithmetic—but it will generate a whole bunch of other arithmetic truths as well, the second-order ones.
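For the curious, and purely as my own gloss on the above (writing ‘s’ for the successor symbol of L), the "one long sentence" can be taken to be the conjunction of the usual successor, addition, and multiplication axioms with the second-order induction axiom:

∀x (s(x) ≠ 0)
∀x ∀y (s(x) = s(y) → x = y)
∀x (x + 0 = x) and ∀x ∀y (x + s(y) = s(x + y))
∀x (x · 0 = 0) and ∀x ∀y (x · s(y) = (x · y) + x)
∀X [(X(0) & ∀x (X(x) → X(s(x)))) → ∀x X(x)]

By Dedekind's categoricity theorem, every model of this conjunction under full second-order semantics is isomorphic to N, so its second-order consequences are exactly the sentences true under N, i.e., exactly arithmetic*.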

I take it that not that many non-logicians and non-philosophers of logic are aware of that result. Hopefully I’m right about it and as a consequence this blog entry is worthwhile.

But now we come to my question: why is it thought to be such a big honking deal that arithmetic isn’t axiomatizable? I take it that people who know about these things think the second-order axiomatization is not terribly significant. But why?

I guess some people are allergic to second-order logic, but I don’t know anything about that issue. What if you think second-order logic is a-okay?

Is the problem connected to this: there is an algorithm that, when fed a first-order sentence S of L that's logically true, will tell you in a finite number of steps that S is indeed logically true, but there is no algorithm that, when fed a second-order sentence S of L that's logically true, will tell you in a finite number of steps that S is indeed logically true? If so, why does that matter so much?

June 28, 2007

Reasoner 1(3)

The third issue of the Reasoner just hit the e-stands. Here's the TOC:

"Pierre May be Ignorant but He's Not Irrational"
Jesse Steinberg (UC-Riverside)

"Free Will and Lucky Decisions"
Gerald Harrison (University of Bath)

"United States v. Shonubi: Statistical Evidence and 'The Same Course of Conduct' Rule"
Amit Pundik (Law, University of Oxford)

"Williamson on Counterpossibles"
Brit Brogaard (University of Missouri)
Joe Salerno (St Louis University)

"Mathematical Blogging"
David Corfield (Max Planck Institute, Tübingen)

"Does Direct Inference Require Pollock's Principle of Agreement?"
Stephen Fogdall (Schnader Harrison Segal and Lewis LLP)

"The Pirahã Language, the Language Template, and the Mind"
William Abler (Geology, The Field Museum)

June 13, 2007

Williamson on Counterpossibles


Lewis/Stalnaker semantics has it that all counterpossibles are vacuously true. Non-vacuism, by contrast, says that the truth-values of counterpossibles are affected by the truth-values of the consequents. Williamson, in his Hempel Lectures, objects to non-vacuism. He asks us to consider someone who answered `11' to `What is 5 + 7?' but who mistakenly believes that he answered `13'. For the non-vacuist, (1) is false, (2) true:


(1) If 5 + 7 were 13, x would have got that sum right
(2) If 5 + 7 were 13, x would have got that sum wrong

Williamson is not persuaded by the initial intuitiveness of such examples:

... they tend to fall apart when thought through. For example, if 5 + 7 were 13 then 5 + 6 would be 12, and so (by another eleven steps) 0 would be 1, so if the number of right answers I gave were 0, the number of right answers I gave would be 1. (Lecture 3)

That's the whole argument. It isn't initially clear what the full argument is, but Brogaard and I commented on our abbreviated version of it in "Why Counterpossibles are Non-Trivial" (The Reasoner v.1, no. 1). Alan Baker's critique (The Reasoner v.1, no. 2) of our paper has prompted us to say more. Here's what we think.

Williamson's conclusion is this:

(3) If the number of right answers I gave were 0, then the number of right answers I gave would be 1.

The implicit reductio must be this: If (3) is true, then (1) and (2) are true --- contrary to what the non-vacuist supposes. For if I gave 0 right answers (in close worlds where 0=1), then I also gave 1 right answer (in those worlds). Hence, I got the sum right and wrong (in those worlds).

Williamson's abbreviated eleven-plus-one steps must be these:

(i) If 5 + 7 were 13, then 5 + 6 would be 12
(ii) If 5 + 7 were 13, then 5 + 5 would be 11
...
(xi) If 5 + 7 were 13, then 5 + -4 would be 2.
(xii) If 5 + 7 were 13, then 5 + -5 would be 1.

Getting to (3) from here, however, is trickier than Williamson supposes. The argument must be that any world where 5 + -5 = 1 is one where 0 = 1, substituting `0' for `5 + -5'. Hence,
(xiii) If 5+7 were 13 then 0 would be 1.

Therefore, if 5+7 were 13 (and I gave 0 right answers), then (since 0 would be 1) the number of right answers I gave would be 1.

The argument is unsuccessful. First, substituting `0' for `5 + -5' is illicit, since, as Williamson himself notes, the non-vacuous counterfactual is hyperintensional. Hyperintensional operators do not permit substitutions of co-referring terms salva veritate.

Incidentally, Williamson takes the hyperintensionality to be a mark against non-vacuism, because substitution is valid in more ordinary counterfactual contexts. However, we need not throw out the baby with the logically ill-behaved bath water. This sort of substitution preserves truth at every world on every standard model. Only counterpossible contexts (i.e., counterfactual contexts whose accessibility relation invokes impossible worlds) are hyperintensional. Our logical principles can be restricted accordingly.

A second problem for Williamson emerges in steps (i) through (xiii). These conclusions hold if the game is to evaluate the consequent of each at deductively closed worlds where 5 + 7 = 13. But if there are non-trivial counterpossibles, the relevant worlds of evaluation must not be deductively closed---lest they collapse into the trivial world where everything is true.

Once we deny deductive closure, Williamson's reasoning fails. Let the following world, (W), be non-deductively closed:

(W) {5 + 7 = 13, the number of right answers I gave wasn't 1, the number of right answers I gave was 0, ... }

In contexts where W-worlds are closest, (2) is true and (1) false, as the non-vacuist predicts. For Williamson's argument to succeed, the relevant impossible worlds in which I gave 0 right answers and 1 right answer must always be closer than the relevant impossible W-worlds. This hasn't been shown. Indeed, (W) is closer to the actual world than Williamson's envisaged impossible worlds, since (W) is constituted by fewer implicit contradictions.

UPDATE: changed 'explicit' to 'implicit'

June 12, 2007

New Essays on Knowability


I just updated the page for the volume that I'm editing, New Essays on the Knowability Paradox. The introduction and bibliography have been substantially revised. A contributor list and contents page have been added. I also included a link to the Appendix to Alonzo Church's "Referee Reports on Fitch's 'A Definition of Value'", which I put together with Julien Murzi. Updated my own contribution as well (Essay 3). There are links to other contributions by authors who have made their papers electronically available. Will be glad when this thing is finally published.

June 09, 2007

Philosophy and Common Sense (Frances)

Not every philosophy professor takes philosophy seriously in the sense that she thinks that some purely philosophical theories that go against common sense have a good chance of being true. These philosophers respect anti-commonsensical theories, in that they admit such theories are very important in the pursuit of philosophical understanding. But they also think that there is no real chance that they are true. If you have a valid argument based not on scientific but on purely philosophical reasoning, and that argument concludes with something against cross-cultural and timeless common sense, then at least one of the premises isn’t true, or so they say. It might be tremendously difficult to identify the mistaken premise, but we can start our investigation off with the safe assumption that the conclusion is false. These philosophers take philosophising seriously, of course, but they don’t take seriously the idea that purely philosophical (so not empirical, not mathematical) theories have a good chance at overthrowing parts of common sense. Here is a good sample of anti-commonsensical philosophical theories.

1. 2 + 2 doesn't equal 4. (No positive mathematical truth.)
2. No vague claims are true. (Sider and Braun 2007.)
3. There are no people. (Peter Unger.)
4. Thermometers have beliefs. (Certain information-fanatic philosophers.)
5. There are no chairs. (No non-living composite physical objects exist.)
6. Stones are not solid objects. (Inspired by Sir Arthur Eddington.)
7. No one has ever had a dream. (Norman Malcolm at one point.)
8. Cats don’t feel any pain when their paws are cut off. (Descartes.)
9. The world could not have turned out even a bit better than it actually is. (Leibnizians.)
10. It isn’t wrong to torture young children purely for fun. (No moral truths.)
11. Kant didn’t live after Descartes died. Alternatively: Nothing ever happened in the past. (Time doesn’t exist; isn’t “real”.)
12. No one has ever done anything because they wanted to do it. (Various reasons.)
13. Rocks have mental characteristics. (Idealists.)
14. There could be two wholly physical objects that during their entire existence occupied the very same space and were composed of the very same particles in the very same manner. (Some contemporary metaphysicians.)
15. Other statue-clay claims.
16. Supervaluationism stuff about true disjunctions without true disjuncts.
17. Dialetheism; true contradictions.
18. Taking one cent from a rich person can make them no longer rich. (Epistemic theory of vagueness.)
19. No one is free to do anything. (No one is free, period.)
20. No one knows anything, or much of anything. (Radical sceptics.)

I hate that attitude. I wonder: what percentage of contemporary philosophers are allergic to anti-commonsensical theories?

May 10, 2007

Talk in Aberdeen


I just finished a draft of my paper "The Most General Factive Mental State Operator", which I will present on Saturday at the International Conference on Linguistics and Epistemology in Aberdeen, Scotland. Helpful comments will likely not save me in time for the talk, but may save me some future embarrassment.

Then I'm off to the University of Edinburgh (May 15) for an Epistemology Workshop, where Brit and I will present a paper on counterpossible conditionals. I also expect to crash the Adjectives Conference at St. Andrews (May 18-19).

May 09, 2007

Essential but Contingent Properties

My continued existence depends on lots of things. It depends on my breathing oxygen. It depends on food, water, and my not getting run over by a bus. We might say that these things are essential for my existence, since I would not exist without them. Moreover, my relation to other people may be essential to my existence. If it weren't for my uncle Mike and his power and influence in the "family", my enemies would have iced me by now. These enemies still wait in the wings. If anything were to happen to Uncle Mike, I'd be a goner. One other example is from Paul McCartney, who wrote, "I won't live in a world without love". There are many contingent features of the world without which one would not exist.

In philosophical circles, by contrast, we require that any essential property of a thing be a necessary property of the thing. We say that something is an essential property of x iff x has the property in every possible world in which x exists. But 'essence' is said in less strict ways in the vernacular. If the doctors need to operate immediately in order to save your life, then time literally is of the essence. This suggests that the philosophical account of essence is too strong to capture ordinary use. I noted in the previous post that Berit and I agree with Fine that the strict philosophical account is too weak, for the reason that on that account any necessity is an essential property of everything. I believe that the alternative account, proposed in the last post, handles both problems nicely. The account was this:

(1) x is essentially F iff if nothing were F then x wouldn't exist,

or semantically,

(2) all the closest worlds (whether possible or impossible) where nothing is F are worlds where x doesn't exist.
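Put in the box-arrow notation used in the neighboring posts (this is my own rendering of (1) and (2)):

x is essentially F iff ~(∃y)Fy []--> ~(∃y)(y = x)

where the closeness ordering ranges over possible and impossible worlds alike.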

Thanks to Mike Almeida, who pointed to an objection to this account (in the comments thread of the previous post). His worry boils down to this: it follows from (2) that some essential properties are contingent properties. However, as Mike has inspired me to argue here, I think this is a virtue of the account, and not an objection. The account expressed by (2) handles both the strict philosophical, and a predominant ordinary, use of 'essence'.

The key is to note that the closeness relation widens or narrows with the conversational context; the closest worlds are ones that preserve the highest proportion of the relevant background facts at the actual world. In the typical strict philosophical context where essential properties (for whatever reason) are expected to be necessary properties, the relevant background facts are the facts about what is metaphysically possible. So, the closest worlds will be all the metaphysically possible worlds. (The exceptions are philosophical contexts in which impossibilities are sincerely entertained. See previous post.) In the ordinary, non-philosophical contexts alluded to at the beginning of this post, the relevant background facts include, for instance, that McCartney was suicidal after the loss of a girlfriend. Hence, the closest worlds without love are worlds where McCartney kills himself. Therefore, if there were no love, McCartney would cease to exist. By (2), it follows that being loved is essential to McCartney's existence. Or consider that the background facts include that I just don't run from, or rat out, other mobsters (because I'm just that brand of Sicilian American wise guy). Then the closest worlds without protection from Uncle Mike are worlds where I'm dead meat. Yes, I could go into hiding or witness protection, but then I would no longer be the wise guy that I am. So, by the above account, it is essential to what I am that I am protected by Uncle Mike.

In sum, the traditional account of essence (in terms of what properties a thing has in every metaphysically possible world in which it exists) is both too strong and too weak. Account (2) seems to take care of these problems.

April 25, 2007

Essences and Impossible Antecedents

The first issue of The Reasoner has just appeared online. Berit and I contributed a paper, "Why Counterpossibles are Non-Trivial", in which we give three reasons for rejecting a vacuous reading of counterpossibles. One reason is that a non-trivial reading facilitates an analysis of essences. While Kripke's wooden table, Tabby, is necessarily a member of the set {Tabby}, it is not essential to Tabby that it be a member of that set. Neither is it essential to Tabby that seven is prime. It is tempting to offer the following explanation. If there hadn't been sets, Tabby might still have existed; and if seven hadn't been prime, Tabby might still have existed. But this sort of explanation requires, for its non-triviality and informativeness, that counterpossibles be non-trivial and informative. At some of the closest (impossible) worlds where there are no sets (or numbers), Tabby exists. By contrast, Tabby fails to exist at all the closest worlds where there is no wood. It is essential to Tabby that he be wood, but not essential to Tabby that seven is prime. Generally, x is essentially F iff x would not have existed if nothing had been F.

April 17, 2007

Lewis on Non-Trivial Counterpossibles

Lewis offers a number of reasons for his now-standard reading of counterpossibles---viz., that they are all vacuously true. He does not take his reasons to be decisive. Let's revisit the reasons. Lewis writes,

There is some intuitive justification for the decision to make a 'would' counterfactual with an impossible antecedent come out vacuously true. Confronted by an antecedent that is not really an entertainable supposition, one may react by saying, with a shrug: If that were so, anything you like would be true! Further, it seems that a counterfactual in which the antecedent logically implies the consequent ought always to be true; and one sort of impossible antecedent, a self-contradictory one, logically implies any consequent. [Counterfactuals 24]

There are two reasons given here. The first is difficult to disagree with. An antecedent that is not really an entertainable supposition invokes triviality. If Socrates were a potato, then, sure, anything goes! I'm down with that intuition. But this is not an objection to the position that I'm more inclined to, which says that counterpossibles are sometimes non-vacuously true (and sometimes false). Lewis' point incorrectly suggests that all counterpossibles involve an antecedent that is not entertainable. Counterlogicals, such as 'if excluded middle were invalid, then double-negation elimination would be invalid too', sometimes express non-trivial consequences of alternative logics. The sincere entertaining of such logics is something that we really can, and often do, do. That said, I share Lewis' view about non-entertainable suppositions. In sum, the position that allows for a non-vacuous reading of some counterpossibles should also allow for the vacuous reading of others.

The second reason that Lewis articulates above says that a counterfactual whose antecedent logically implies the consequent ought always to be true. That is, it ought to be that logically strict implication implies counterfactual implication. Perhaps there is a pre-semantic-theoretic intuition here. But counterpossibles are strange animals from any point of view. It shouldn't be too much of a concern if they run contrary to a pre-theoretic intuition. Moreover, putting too much weight on the intuition would be question-begging. Anyone who is convinced that there are non-trivial counterpossibles will not share this intuition about the relation between strict and counterfactual implication. Lewis seems to have ruled out in advance that there might be impossible antecedents that we sincerely entertain. For this reason, I don't think that he was considering the more interesting candidates for the non-trivial reading. The bottom line is that the Lewis intuition just isn't shared by those who take seriously the idea that some impossibilities may be sincerely entertained counterfactually.

Lewis offers an explanation for why some of us believe in non-trivial counterpossibles. He hypothesizes that we feel a mistaken need to explain why some counterpossibles that we do want to assert are true and others that we do not want to assert are false. He writes,
I do not think, however, that we need to discriminate in truth value among such counterfactuals. Of course there are some we would assert and some we would not:
If there were a largest prime p, p!+1 would be prime.
If there were a largest prime p, p!+1 would be composite

are both sensible things to say, but

If there were a largest prime p, there would be six regular solids.
If there were a largest prime p, pigs would have wings.

are not. But what does that prove? We have to explain why things we do want to assert are true (or at least why we take them to be true, or at least why we take them to approximate to truth), but we do not have to explain why things we do not want to assert are false. We have plenty of cases in which we do not want to assert counterfactuals with impossible antecedents, but so far as I know we do not want to assert their negations either. Therefore, they do not have to be made false by a correct account of truth conditions; they can be truths which (for good conversational reasons) it would always be pointless to assert. [24-25]

I agree that a willingness to assert the former, but not the latter, examples is not a good reason to hypothesize distinct truth-values. Indeed, it appears to me that all four examples have the same truth-value. The context of the latter two presumably invokes the trivial world where anything goes, and the context of the former two presumably invokes non-trivial reasons for thinking that the consequents obtain in the counterfactual circumstances described. What Lewis leaves out are cases where, in fact, we do want to assert the negation of a counterpossible. For instance, it is false that if intuitionistic logic were the correct logic, then excluded middle would still be valid. This counterpossible is not merely something that we wouldn't want to assert; it is something we should want to deny! Lewis' pragmatic explanation of the desire for non-trivial truth conditions does not appear to generalize.
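For what it's worth, here is the arithmetic that makes Lewis' first two examples "sensible things to say". This is a hedged reconstruction of the background reasoning, not anything Lewis spells out in the passage:

\[
\text{for every prime } q \le p:\quad p! \equiv 0 \ (\mathrm{mod}\ q), \text{ and so } p!+1 \equiv 1 \ (\mathrm{mod}\ q).
\]

So no prime less than or equal to p divides p!+1, and hence the smallest prime factor of p!+1 (every number greater than 1 has one, possibly itself) exceeds p. Under the supposition that p is the largest prime, one can then develop the case either way: that prime factor could only be p!+1 itself, so p!+1 would be prime; alternatively, p!+1 is greater than p and p is the largest prime, so p!+1 would not be prime and, being greater than 1, would be composite. The impossible antecedent supports both developments, which is presumably why both conditionals sound assertible.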

April 15, 2007

Synthese Deadline Approaching

This is a reminder about the May 1 deadline for paper submission for the special issue of Synthese, "Knowability and Beyond", which aims to cover modal epistemic issues relevant to knowability, broadly construed. The issue will contain invited papers by Jonathan Kvanvig, Gabriel Sandu and Neil Tennant, but will also provide the opportunity for other authors to make original contributions. Submissions will be double-blind reviewed. If you have something but aren't sure whether it would be appropriate for consideration, email me.

knowability@gmail.com

April 12, 2007

Vonnegut 1922-2007

Kurt Vonnegut on the purpose of life: "to be the eyes and ears and conscience of the creator of the universe, you fool"

March 26, 2007

Recent Colloquia at SLU

We're having a great colloquium series at SLU this semester. On Friday Liz Harman delivered a super-interesting talk, "Does Blameless Ignorance Exculpate?", in which she argued that an otherwise epistemically blameless subject may nevertheless be morally blameworthy. Other speakers this semester include Ernie Sosa, Ernie Lepore, Berit Brogaard, Rob Koons and Steve Stich.

The Reasoner

The deadline for the first issue of The Reasoner is April 15. The Reasoner is edited by Jon Williamson and "is a monthly digest highlighting exciting new research on reasoning and interesting new arguments. It is interdisciplinary, covering research in, e.g., philosophy, logic, AI, statistics, cognitive science, law, psychology, mathematics and the sciences."

March 15, 2007

Greco on Skepticism

My colleague and guest author, John Greco, will be discussing skepticism live on KALW 91.7 FM (San Francisco) on Sunday, March 25, at 10 a.m. on Philosophy Talk. The show can be heard on other stations as well. Will attempt to post a link to the podcast when it becomes available.

[Update: the audio is now available here]

March 07, 2007

ANU Appointment

I'm very pleased to announce that I've just accepted a visiting fellow appointment at the RSSS Philosophy Program at Australian National University. Will be there from August 2007 to August 2008, and am looking forward to spending time in that fantastic department.

March 04, 2007

Copenhagen Riots

Photo by: Kristian Linnemann
A third day of rioting in Copenhagen after police rappelled from helicopters and stormed a youth center to remove its occupants. The occupants were squatters (or "left-wing activists", to use the BBC's term of endearment) who had been using the building as a youth center (called Ungdomshuset) since 1982.

Photo by: Kristian Linnemann

The Christian group (Faderhuset) purchased the building in 2000 and has a court order for the eviction. The occupants vow never to leave and claim that the city had no right to sell the building while it was in use.

Protests continue throughout the city. Sympathizers protested in Germany, Sweden, Norway and Austria. The violence began in Copenhagen last December during a protest against the eviction plan.

Websites like the Anarchist Black Cross inform protesters of their rights. (Some sections in English.) An article by Indymedia about the December violence can be found here.

Safia reports from Copenhagen. Outstanding photography here.


Photo by: Safia


(Photo: Danish embassy in Budapest, March 2)

[UPDATE: Videos moved here.]

[UPDATE: Protests Continued Sunday, some peaceful.]
[UPDATE March 10: Anarchist sightings and arrests, but things seem to have calmed down.]

The Scientific Revolution in Linguistics

I received this from Michael Hand:


Greetings all! This year marks the fiftieth anniversary of the publication of Chomsky's *Syntactic Structures*, the founding document of "generative-transformational" linguistics (in a broad sense). This is arguably the defining moment of the scientific revolution that replaced the prescientific, crassly empiricist Saussurean-Bloomfieldian linguistics with scientific linguistics as it is even today (though admittedly linguistic theory doesn't look much like the Chomskian theory of the late fifties). Chomsky's 1959 review of Skinner's *Verbal Behavior* is usually cited as the founding document of today's "cognitive science", but a case can be made that *Structures* should be accorded a share of that honor as well (though the review is more explicit in relevant ways). For an accessible and brief account of the revolution, see the early chapters of Newmeyer's *Linguistic Theory in America*; for an extensive and sometimes hilarious account of the empiricist attempt at counterrevolution, see Harris's *The Linguistic Wars*.

A moment of silent, appreciative reflection and awe right about now would not be inappropriate.

March 01, 2007

Context of Evaluating Counterfactuals

If we called a tail 'a leg', how many legs would a horse have? It would still have four. The concepts we use to evaluate the question are our actual concepts, even if the truth of the antecedent (at a closest world w) involves (at w) a revision of our actual concepts. A principle is suggested:


(*) when we evaluate the truth of a counterfactual, the concepts employed (in drawing a path in a close world w from the antecedent to the consequent) are determined by the context of evaluation and not by the semantic facts at w.

An analogous principle is this (paraphrased from Jenkins in the comments thread to the last post):

(**) when we evaluate the truth of a counterfactual about logical principles, the logic employed (in drawing a path from the antecedent to the consequent) is determined by the context of evaluation and not by the logical facts at the relevantly close worlds.

The idea is that the logic we use to evaluate the truth of a counterfactual is our default logic (ex hypothesi, classical logic), even if the truth of the antecedent (at close worlds) requires an alternative to our default logic. (*) and (**) would seem to stand or fall together.

However, (*) is not unrestrictedly true. Consider:

(1) If 'one' meant two, then 'one plus one' would mean two plus two.

If the concepts of evaluation are held fixed, then we should insist that the consequent (i.e., that 'one plus one' means two plus two) is false in the closest worlds where the antecedent is true. And so we should insist that (1) is false. But (1) is prima facie true. Therefore, (*) is prima facie false as a general principle.

We should then expect that (**) likewise fails as a general principle. It had better fail. Otherwise we cannot non-vacuously consider what (logical) principles would hold if others did not. The reason we can in fact reason non-vacuously about such matters is this: when we evaluate counterfactuals about logical principles, the logic at the closest worlds is relevant to---indeed, is essential for---a proper evaluation of the consequent.

February 24, 2007

How is Modal Knowledge Possible?

If They Had No Tails...
In Chapter 5 of Williamson's manuscript The Philosophy of Philosophy we find a proposed answer. To know that it is metaphysically necessary that A, one needs no more than whatever it takes to know that not-A counterfactually implies a contradiction. And this requires nothing over and above the cognitive faculties employed to acquire ordinary knowledge about the empirical world. Therefore, if (metaphysical) modal knowledge---indeed, if philosophy---is possible at all (and we suppose it is), then it requires no special cognitive faculties over and above what is required for ordinary knowledge of the world. Let's focus on the purported connection between modal and counterfactual knowledge.

Williamson demonstrates that modal claims are logically equivalent to certain counterfactual claims. A necessary proposition is one whose negation counterfactually implies a contradiction, and a possible proposition is one that does not counterfactually imply a contradiction. Respectively,


(1) □A ↔ (¬A □→ ⊥)

(2) ◇A ↔ ¬(A □→ ⊥)

The lessons, according to Williamson, are these: (i) having what it takes to understand □→ and ¬ implies having what it takes to understand ◇ and □, and (ii) modal thinking is a special case of counterfactual thinking.

I take it that (ii) is meant to entail (i)---or is equivalent to it. So there is a natural objection here. Understanding A (or grasping its meaning) is not closed under logical consequence. If A is logically equivalent to B, s may grasp A without grasping B, because B embeds concepts that s doesn't understand---e.g., 'A' is equivalent to 'A or A', and though s understands the former, he doesn't understand the latter because he doesn't grasp the meaning of 'or'. Hence, (i), and so (ii), is false, and we lose the argument for thinking that an epistemology of counterfactuals will give us everything needed for an epistemology of modality. Carrie Jenkins expresses a version of the concern over at Long Words.

I don't think the objection hits Williamson. We need to read him as offering an account of how modal knowledge is possible for beings like us, and not as defending the empirical thesis that our modal and counterfactual knowledge covary. Whether or not our knowledge of necessity and possibility does (or must) take a detour through counterfactual knowledge is not the question. The epistemological claim is that beings like us, who are in a position to know the right-hand side of the equivalence, are also in a position to know the modal claim on the left. After all, we might infer the left from the right.

Here's a different objection. It is one that I think is more serious than Williamson suggests. Williamson's proof of the first equivalence requires a certain theory about how to handle counterpossibles (i.e., counterfactuals with impossible antecedents). He sides with Lewis that we should treat them as vacuously true. If this is right, then necessary implications entail counterfactual implications:

(3) □(A → B) → (A □→ B)

(3) underwrites Williamson's proof of (1). If this choice about how to handle counterpossibles is a mistake, then things unravel for Williamson's epistemology of modality. For a genuine counterexample to (3) can be turned into a counterexample to (1).
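To see why, here is a minimal sketch (my reconstruction, assuming only that normal modal reasoning is available) of the direction of (1) that (3) underwrites, which is the direction the example below targets:

\[
\Box A \;\Rightarrow\; \Box(\neg A \to \bot) \;\Rightarrow\; (\neg A \;\Box\!\!\rightarrow\; \bot),
\]

where the second step is the instance of (3) with ¬A in place of A and ⊥ in place of B. So if (3) fails in the way described below, the proof of the left-to-right half of (1) is undercut, and the same considerations yield a direct counterexample to (1) itself.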

Here's one way to do it:

Let "LEM" be the universal formulation of the law of excluded middle, "∀A(A v ¬A)". And let "DNE" be the classical theorem "∀A(¬¬ A → A)". Suppose classical logic is unrestrictedly valid, so that LEM is metaphysically necessary (since logically necessary). Then the strict conditional, "□(¬ LEM → DNE)", is true. It's vacuously true. However, the corresponding counterfactual appears to be false: "¬LEM □→ DNE". This is false because if LEM were false then intuitionistic logic (and not classical logic) would be unrestrictedly valid. And in intuitionistic logic, ¬LEM does not imply DNE (but rather entails ¬DNE).

If this is right, then we also have a counterexample to (1). Although LEM is in fact metaphysically necessary, "□LEM" is not equivalent to "¬LEM □→ ⊥": the former is true while the latter is false. For in a counterfactual circumstance where ¬LEM is true, intuitionistic logic (but not classical logic) obtains. And ¬LEM does not entail an intuitionistic contradiction.

Williamson evaluates other purported counterexamples to (3). I don't see that his replies touch the above example. Williamson attributes to his opponents a confusion akin to the one made by philosophers for centuries prior to the realization that a generalization, "All S are P", is vacuously true when nothing is an S. But I'm not guilty of this confusion. If our counterfactual supposition is the denial of LEM, then not all of our knowledge of classical logical truth is available in the description of the counterfactual circumstances from which we are to develop a path to the consequent. In particular, no logical truth intuitionistically equivalent to LEM is available in the description of the counterfactual circumstances. But then, in such counterfactual (intuitionistic) circumstances, ¬LEM does not materially imply DNE. The reason is that, in those circumstances, ¬LEM is not a logical falsehood. Vacuity vanishes!

[UPDATE: Berit locates a vacuous assertion of Williamson's that he did not intend to be vacuous, and draws a dark lesson for the prospects of doing metaphysics.]

February 14, 2007

Are You Competent in Your Research Areas? (Frances)

Here’s a theory about how most philosophers are working in the wrong areas. I came up with it when I was a graduate student. I don’t believe it, but I wonder how much truth there is to it.

As a graduate student one attends a bunch of classes, encounters a bunch of philosophical problems and questions, and encounters a bunch of favored responses to those problems and questions. For instance, in epistemology one hears about skepticism and the favored responses; in metaphysics one hears about the statue-clay problem and the favored responses. The students with “good sense” regarding a given topic will see that none of the offered responses is very good. These students don’t have anything to offer themselves, but they see that the favored responses stink. As a result, they won’t do research in that topic, as they will find the literature a turn-off (since (a) nearly everyone in the literature is working on the favored responses, which they think stink, and (b) there are no known responses to work through that they think have a prayer of being right).

Who are the students who do end up researching the topic? Answer: the ones who got excited by it. And why did they get excited by it? Answer: in many cases, they got excited by it because they found one of the favored responses quite plausible. “Here is the solution to a philosophical issue that’s been around for centuries!” Due to this excitement, they research that topic. But they didn’t have the good sense to see that the response was lousy.

The upshot is that the people doing research on topic X are the ones who didn’t have good sense regarding X. Pretty depressing.

Clearly, this theory, competence pessimism, is overblown. For instance, one might research a topic merely because one thinks it’s a great topic, and not because one is enamored with any favored response to it. And of course one could argue that many of the favored responses are really quite plausible but only appear, at first sight, to be defective. Only the people with insight into the topic can see the misleading nature of the initially apparent deficiencies of some of the favored responses. So it actually goes this way: the pessimist is right that the people who end up researching the topic are the ones who got excited about it, but the ones who got excited are the ones who correctly realized that one of the favored responses is on the right track despite its misleading appearance. That’s the opposite extreme, competence optimism, which says that the people working on topic X are the rare ones who saw that the apparent deficiencies were merely apparent. Is the truth closer to competence pessimism or competence optimism? Do you want victory for us or the terrorists?