Click here for impressive animation of David Sosa talking shop.
December 16, 2006
December 15, 2006
Dave Chalmers points out that Timothy Williamson's book manuscript, The Philosophy of Philosophy is online. Chapter 2 is about the methodology of the vagueness debate and of analytic philosophy more generally. Chapter 5, Knowledge of Metaphysical Modality, also looks particularly interesting. Its conclusion is that the epistemology of metaphysical modality is a special case of counterfactual thinking about the spatio-temporal world.
Posted by Joe Salerno at 5:18 PM
December 13, 2006
If one hasn’t worked hard on the topic of vagueness, it can be hard to take epistemicism seriously. You wonder: everyone SAYS that Tim Williamson is unbelievably smart, but since he believes in cutoffs doesn’t that mean there is something seriously wrong with him? I mean, really: how much good sense could he have if he believes that my remark to a visiting speaker ‘The auditorium is a short walk from here’ is true if it’s X inches away and false if it’s X + 1 inches away? Williamson just doesn’t know when to give up on predicate calculus!
Before I thought hard about vagueness I didn’t actually have that attitude, but I had some attraction to it. Now that I’ve thought hard about vagueness I think epistemicism is one of the two most plausible theories of vagueness (the other being the semantic nihilism of Sider & Braun, which says that no vague sentence is true).
Shortly before Halloween you are walking to Farmer Fred’s farm. Your children want to see the pumpkins that they will carve. You say to them, in an obviously apt and relevant circumstance, ‘There is a pumpkin by the tree’. Call the situation you were in S1; so ‘There is a pumpkin by the tree’ is true when evaluated with respect to S1. Now an atom or subatomic particle inside the pumpkin moves out of the pumpkin. Call the resulting situation S2. Consider the claim you made earlier, in S1, with your use of ‘There is a pumpkin by the tree’: is that claim true when evaluated with respect to S2 instead of S1? Obviously, the answer is ‘yes’, assuming there are any pumpkins and trees at all. When we consider the ordinary, everyday meaning of ‘There is a pumpkin by the tree’, given that it was true and not false in S1 it must be true when evaluated with respect to S2 as well. Continue the process and you get a series like this (pumpkin claim = the claim you made in S1 when you uttered 'There is a pumpkin by the tree'), where the first column has the situations and the second column has the alethic status of the pumpkin claim:
S1 ----------- true
S2 ----------- true
S3 ----------- true
...
Sn ----------- ??
...
Sbig – 2 ----- false
Sbig – 1 ----- false
Sbig --------- false
It sure seems as though the ‘true’ entries in the second column have to stop somewhere. Perhaps the entries in the second column of our table don’t go from ‘true’ to ‘false’. That is, maybe the claim made by your use in S1 of ‘There is a pumpkin by the tree’, when applied to situations Sn – 1 and Sn goes from true to indeterminate—or maybe to indeterminately indeterminate (or indeterminately indeterminately indeterminately … indeterminate). Or maybe to just plain meaningless. Or maybe to both true and false (so it keeps being true but just adds falsity for some strange reason). Or maybe its status with respect to Sn changes with the wind, or my hair color, or some more likely factor. Or maybe it has no satisfaction status whatsoever with respect to Sn (not even meaningless). Or, what might not be any different, there might be no fact of the matter as to the satisfaction status with respect to Sn (whatever that idea comes to). Or perhaps it becomes incoherent to even apply the pumpkin claim to Sn. Or maybe it isn’t true, it isn’t false, it isn’t neither true nor false, and it isn’t neither true, false, nor neither true nor false (got that?). Finally, maybe the truth about the pumpkin claim with respect to Sn is best captured by a Zen master’s reaction to ‘What is the sound of one hand clapping?’
The great strength of epistemicism is just this: IT DOESN'T MATTER which of these many options one takes. Be as clever or as simple as you like with your theory regarding the status of the pumpkin claim, it still seems inevitable that its truth-value is ridiculously dependent on the minuscule movement of a single electron (a nanometer, say). After all, we know that the pumpkin claim applied to Sn – 1 is just plain true and not false: surely, S1, S2, S3, S4, and another trillion or so situations involved perfectly good healthy pumpkins, if pumpkins exist at all (the sum total of a trillion of these changes wouldn’t even be visible to the naked eye and wouldn’t affect the functioning of the pumpkin), and ‘There is a pumpkin by the tree’, understood to have its perfectly ordinary meaning expressed in S1, was nothing other than just plain true with respect to those trillion or so situations. However, we also know that it’s not the case that the pumpkin claim applied to Sn is just plain true (because it’s meaningless, indeterminate, indeterminately indeterminate, alethically unstable, alethically overdetermined or inconsistent, lacks any satisfaction status, [insert Zen master’s response], etc). So, something happened as a result of that ridiculously tiny change from Sn – 1 to Sn; it marks some very sharp cutoff that did not happen in the change from Sn – 2 to Sn – 1. It makes no difference (for the existence of satisfaction cutoffs) as to what descriptions of the situation after the change are correct (if any). The point is that ‘There’s a pumpkin by the tree’, understood in the perfectly normal way, is true, meaningful, and not false when evaluated with respect to the first trillion or so situations, but at some point in the series of situations it stops having that exact status.
It’s no good to protest that the table given above can’t be completed, or that it’s indeterminate whether it can be completed, or that it’s indeterminate whether it’s indeterminate whether it can be completed, or…. The first trillion or so slots in the second column CAN be completed: they all have nothing other than ‘true’ in them. Now you tell me: starting from the top, what is the last row we can correctly complete with just ‘true’? The trillionth row? Then that’s our satisfaction cutoff, and I couldn’t care less what you want to say about the trillionth row, no matter how sophisticated it is. You might want to say, ‘We might as well stop at this point, although we could have stopped earlier’. But in the trillionth row you could NOT have stopped putting in ‘true’; that would have been just as much of a mistake as if you had stopped after the first row or the thousandth row.
You might think that I’m illicitly assuming that for any pair of consecutive rows the question ‘Do they have the same alethic status?’ has an answer. Sadly, no! Most everyone will agree that ‘true’ goes in the first row, and they’ll agree that ‘Do the first and second rows have the same alethic status?’ has an answer: ‘yes, they do have the same status’. And most everyone will agree that ‘Do the second and third rows have the same alethic status?’ has an answer: ‘yes, they do have the same status’. It doesn’t take a genius to see where this is going. If one is like Michael Tye, for instance, one will agree with what I just said about the first three rows, but one will hold that ‘Do the nth and (n + 1)st rows have the same alethic status?’ sometimes has an answer but sometimes it doesn’t. Fine: when does it first not have an answer? We know it has an answer for the first three rows. Does it first fail to have an answer for rows 10,000 and 10,001? Then that’s our sharp cutoff. That is, whereas the pumpkin claim was true and not false when evaluated with respect to S10,000, there is no answer to whether it’s true and not false when evaluated with respect to S10,001.
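The inevitability being urged here is, in effect, the least-number principle. A minimal sketch (in Python, my own illustration rather than anything in the post) makes it vivid: however exotic the statuses assigned to later rows, as long as the first row is 'true' and some later row is not, there is a first row at which plain truth stops.

```python
# A toy model of the pumpkin series: each row of the table gets an
# alethic status. We assume only that row 1 is 'true' and that the
# last row is something else.
def first_cutoff(statuses):
    """Return the 1-based index of the first row whose status is not
    'true' -- the sharp cutoff the argument insists must exist."""
    for i, status in enumerate(statuses, start=1):
        if status != 'true':
            return i
    return None  # no cutoff: every row is plain 'true'

# Whatever exotic status we pick for the middle rows -- 'indeterminate',
# 'meaningless', 'no answer' -- a first non-'true' row exists.
series = ['true'] * 5 + ['indeterminate'] * 3 + ['false'] * 4
assert first_cutoff(series) == 6
```

The point of the sketch is that nothing about the exotic middle statuses matters: the bare fact that the series starts plain 'true' and ends otherwise guarantees a first deviation, and that first deviation is the sharp cutoff.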
Eventually, then, we must take seriously both epistemicism and nihilism!
Posted by Bryan Frances at 1:56 PM
December 07, 2006
The problem of temporary intrinsics, as is well known, has its modal analog---the problem of accidental intrinsics. But each of these problems cuts much deeper than its name suggests. I query whether the deeper problems have a response in the literature.
Here's the initial temporal problem. At one time Brit is bent (because sitting). Later she is straight (because standing). Is this a violation of Leibniz's Law? A well-rehearsed endurantist answer is NO. We simply index to times. Brit has both the property being bent at time t and the property being straight at t+1. No contradiction here. Lewis' objection to the fix is that so-called intrinsic properties (such as being straight or bent) are now treated as relational, since indexed to times. In sum, there are no temporary intrinsics.
The modal analog, the problem of accidental intrinsics, arises in critique of transworld identity---the view that objects may exist in more than one possible world. In the actual world Brit is 5'10''. In other merely possible worlds she is taller. The transworlder will insist that there is no violation of Leibniz's Law. Brit has both the property having height 5'10'' in the actual world and the property having height n, where n > 5'10'', in world w2. No contradiction here. Again, the reply to the fix is that our intrinsic properties have suddenly become relational.
But isn't the problem with this fix to transworld identity much more general than stated? Indexing to worlds robs objects of their accidental properties (intrinsic and relational). For if an object's properties (and relations) are indexed to worlds, then the object has them necessarily. In every possible world it is true that in w1 Brit is 5'10'' (is a philosopher, lives in St Louis, etc.). The problem with indexing to worlds is then not simply a problem for accidental intrinsics, but a problem for accidental properties and relations more generally. (Analogously, the problem of temporary intrinsics underwrites a problem for temporary properties and relations more generally. No properties are temporary! A fortiori none of them are temporary intrinsics.)
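The collapse can be put schematically. The notation below is my own shorthand, not the post's; P@w1 abbreviates the world-indexed property of having P at w1.

```latex
% The indexing fix replaces the contingent claim P(a)
% with the world-indexed claim P@w_1(a).
% But world-indexed claims do not vary across worlds:
\[
  P@w_1(a) \;\rightarrow\; \Box\, P@w_1(a).
\]
% So if Brit is 5'10'' at w_1, then at every world she has the
% property being-5'10''-at-w_1: every world-indexed property she
% has, she has necessarily, so none of them is accidental.
```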
Posted by Joe Salerno at 4:22 PM
December 06, 2006
December 03, 2006
The previous post on spiritual and visual experience has generated comments on several blogs. Some of the comments are based on misunderstandings of the original idea (since I did a lousy job in the initial post). Since the comments are well thought out, I thought it would be worth another post to elaborate on the socks-spirituality comparison so that the misunderstandings go away and I can be refuted properly.
In the socks case, I believe that the socks are blue, and I believe it based on my experience of them. In the spiritual case, I believe that God exists, and I believe it based on my experience of Him.
In the socks case, the scientists in question say to me ‘Yes, I agree that your experience seemed to be of blue socks. Many, perhaps most, of us had pretty much the same experience as you did from the general perspective you took. But more careful empirical examination will show that your experiences were misleading in that they were not of blue socks but really of some weird green socks. The experiences you had were genuine visual perceptions, but were somewhat crude. Further visual experience will show you your error!’
In the spiritual case, the naysayers in question say to me ‘Yes, I agree that your experience seemed to be of God. Many, perhaps most, of us had pretty much the same experience as you did from the general perspective you took. But more careful empirical examination will show that your experiences were misleading in that they were not of God but instead were the beginnings of some levels of consciousness that are more advanced than those we have in most situations (and that merit the title ‘spiritual’) but don’t call out for the existence of a god. The experiences you had were genuine spiritual perceptions, but were somewhat crude. Further spiritual experience will show you your error!’
In the socks case, as far as I can determine the scientists in question are about as knowledgeable about color, funny color illusions, etc. as anyone. Never mind whether there are other color experts much more knowledgeable about color.
In the spiritual case, as far as I can determine the naysayers in question are about as knowledgeable about spiritual experience as anyone. Never mind whether there are other spiritual experts much more knowledgeable about spirituality.
Finally: in the socks case the color scientists are mistaken; in the spiritual case the naysayers are mistaken. But this stipulation really isn’t very important.
That's all stipulation. Now I claim: in the socks case one is epistemically blameworthy if one retains one’s blue socks belief. Of course, one could easily avoid the blame: the main color scientist could be joking, say, and you find out that she’s joking.
I also claim that if there’s blameworthiness in the socks story, then there’s blameworthiness in the spiritual story.
Most people have misunderstood what the theistic naysayers are saying. They aren’t saying that you (the spiritual theist) have gone insane, or that you are temporarily deranged or having a seizure or anything like that. They aren’t saying you are “screwed up”, just as the color scientists aren’t saying you’re screwed up. Your perceptual and spiritual faculties are working fine; it’s just that circumstances are odd and you’ve erred in interpreting them. The naysayer isn’t disrespectful, so to speak, of spiritual experience. Take the socks-spirituality analogy seriously!
In the socks case, you had some utterly typical visual experiences and immediately formed the belief that the socks are blue. We can stipulate, if you like, that the same happened in the spiritual case. I’m not saying that the move from the spiritual experience to the theistic belief amounts to any more of an “interpretation” or “inference” than in the socks case. In the spiritual case there is no more of an argument to the best explanation than in the socks case.
Posted by Bryan Frances at 12:37 PM
December 02, 2006
You see a sock in the usual excellent viewing conditions: just four feet away, in perfect light, etc. It looks, and is, blue. But it’s your colleague’s sock; his wife is a color scientist, and he insists that he is wearing some of the “trick” socks she uses in her experiments: although they look blue and normal, they’re actually very weird and really green. We can suppose that he’s made an innocent mistake in that the socks he is wearing are entirely normal and blue. You mistakenly think he is trying to fool you even though he’s actually a pillar of honesty, so you persist in your belief that the socks are blue. Suppose his wife comes in and says ‘Well there are those trick socks! We were looking for them all morning in the lab! What are you doing with them on?’ Other people concur with her (her lab assistants and children, say). She and other color theorists have created various other strange objects, strange in ways having to do with their color appearances. You are somewhat aware of these objects, involving rapidly rotating disks with special holes in them, unusual materials, and the like. So you know of the existence of such objects.
Your blue-socks belief is true and reliably produced in the entirely ordinary way, but is this belief epistemically upstanding once you’ve encountered the weird-socks story, especially given that you’ve heard and understood loads of intelligent, sincere, and honest experts saying that the socks are really green—not just his wife, but her assistants, other professors, etc.? Don’t you have to rule out, at least to some significant extent (to ask for proof seems to be asking too much) the weird-socks hypothesis to retain the upstanding status of your belief that the socks are blue? I think you would be committing some significant epistemic crimes if you retained your belief.
I just described a case that seems to have the following features: one acquires a true belief under virtually the best and most reliable circumstances possible, the belief initially amounts to knowledge, and yet the awareness of some information that is ultimately misleading but endorsed by relevant professionals and plausible given other information ruins the epistemic upstandingness of the belief (when the belief is retained after the additional information has been encountered).
I find this story interesting. First, I wonder whether it’s really the case that after encountering the ultimately misleading evidence against the blue-socks belief your blue-socks belief is epistemically blameworthy. Second, does the alleged lesson carry over to the belief that God exists? That is, assuming for the sake of argument that one can know that God exists through some kind of quasi-perceptual spiritual experiences of Him, does the presence of alternative, expertly endorsed explanations of that experience render that theistic belief blameworthy—even though the explanations are ultimately misleading?
In the theistic case I assume that one is in the position of the person in the color case: one encounters the alternative explanations and can do nothing to suggest that they’re wrong. I don’t think one can just say, “Well, the alternative explanations must be wrong, as I already know through experience that God exists”. After all, the corresponding explanation in the perceptual case doesn’t seem to work: “Well, the trick-socks explanation being offered by the color scientists must be wrong, as I already know from visual perception that the socks are blue”.
Posted by Bryan Frances at 3:29 AM
November 26, 2006
Click here for a nice "Recent Comments" add-on for your Blogger service. It's free and includes a comment feed. Just installed one on Knowability in about 15 minutes. It promises to be compatible with your eventual free upgrade to Blogger Beta.
Incidentally, my feed URLs are at the bottom of the sidebar. Consider signing in for an email subscription to posts and/or comments.
Posted by Joe Salerno at 5:35 PM
November 22, 2006
Thanks to Joe for the invitation to communicate.
Nowadays philosophers rarely call one another idiots in print (things tend to get a little nastier in private). But this doesn’t mean that they don’t insult one another in their publications. I thought it might be fun to catalogue some of the insults from recent literature. I’d like people to share their favorites in the comments.
My favorites are the more subtle ones. Suppose Jones publishes a criticism of Smith and then Smith responds in print. Smith can insult Jones in many ways. One common way is to use the phrase ‘It is curious that’, as in ‘It is curious that Jones thinks that my view includes the claim that P’. What is often (not always) meant is this: ‘Jones is a f**king idiot. He thinks I said that P, when any fool can see that I said no such thing’. Similar points hold for ‘It is interesting that Jones thinks that I said that P’ and ‘The proposal that Jones makes on my behalf is very strange, even borderline incoherent’ and ‘I am surprised that Jones says that P’.
Here’s a rather different way to insult: ‘In a useful article’, as in ‘In a useful article, Jones considers the claim that P’. What is often (not always) meant is something like this: ‘Jones wrote a largely boneheaded article on the claim that P; however, by working through his confusions we will be able to see the important points more clearly.’ I hope it’s legitimate of me to point out that Davidson did this in one of his classic articles, although I can’t remember which one. When I was a beginning graduate student at Minnesota, and full of myself, I did it as well in a paper written for a class. My professor, Joseph Owens, drew a line through the phrase and wrote in the margin ‘Out’. It was clear that I wasn’t going to get away with such nonsense.
I don’t want to imply that these insults are never deserved. On the contrary, on many occasions the author is responding to some jerk. And even when one isn’t responding to a jerk, the insult can be non-personal in the sense that the author is such a lover of the truth and hater of the false that he or she hurls invectives not at people but merely at ideas that strike her as false. I once had a colleague who often publicly destroyed visiting speakers, but it was plain to most of us—and often enough the visiting speaker—that his target was the ideas he thought were false. It was never the person advocating the ideas. This made the behavior more tolerable, even admirable.
The insults noted above seem a bit subtle. But maybe that’s the wrong predicate. Perhaps not subtle but restrained?
Baker & Hacker insulted Kripke’s interpretation of Wittgenstein with real gusto. Hacker also insulted the recent history books by Soames, who like Kripke is a rather mediocre philosopher. But their insults are usually unrestrained. Over the years Dennett and Searle have traded many insults, or at least remarks that looked pretty insulting. But I’m not sure that all those were real insults. Dennett has a good sense of humor, and it comes out in his writing. So perhaps he wasn’t really insulting Searle, although his Journal of Philosophy review of Searle’s Rediscovery of the Mind seemed pretty tough to me. I can’t speak with any authority about Searle. Of course, in that book Searle seemed to be telling us that just about everyone in the philosophy of mind had been making terribly elementary and boneheaded mistakes for many years.
As any child will tell you, being ignored can be quite an insult. In that vein, I recall a footnote in an article on the nature of belief by some moderately famous person. He noted that perhaps he should consider what Dennett’s theory would do with the example being considered in the body of the essay. But he declined to make the probe, saying that in reality Dennett’s exceedingly vague remarks could hardly amount to anything like a view worth considering. Ouch!
So: what are your favorite examples—restrained or not? Please don’t reveal the identity of the insulter, at least if he or she is still alive and isn’t already well known as one who insults opponents. I realize that it is easy to thumb through Nietzsche, say, and find some pretty potent insults. But I’m more interested in contemporary writers, not least because I think it might be fun to try to figure out the identity of the insulter!
It also might be interesting to note which famous contemporaries never insulted any of their many critics. For instance, did Rawls or Lewis ever insult any of their critics?
Posted by Bryan Frances at 4:34 AM
November 21, 2006
SYNTHESE ANNUAL CONFERENCE
Synthese - An International Journal for Epistemology, Logic and Philosophy of Science hosts its first annual conference at the Carlsberg Academy in Copenhagen, October 3–5, 2007. The conference is sponsored by PHIS - The Danish Research School in Philosophy, History of Ideas and History of Science and Springer.
Title / Between Logic and Intuition: David Lewis and the Future of Formal Methods in Philosophy
Abstract / David Lewis is one of the most important figures in contemporary philosophy. His approach balances elegantly between the use of rigorous formal methods and sound philosophical intuitions. The benefit of such an approach is reflected in the substantial impact his philosophical insights have had not only in many core areas of philosophy, but also in neighboring disciplines ranging from computer science to game theory and linguistics. The interplay between logic and intuition to obtain results of both philosophical and interdisciplinary importance makes Lewis' work a prime example of formal philosophy. This conference serves as a tribute to Lewis and as a venue for addressing questions concerning the relationship between logic and philosophical intuition. This first Synthese Annual Conference is the venue for discussing the future of formal methods in philosophy.
Invited Speakers / John Collins, Hannes Leitgeb, Rohit Parikh, L.A. Paul, Brian Weatherson
Program Committee and Conference Chairs / Johan van Benthem, Vincent F. Hendricks, John Symons (SYNTHESE) , Stig Andur Pedersen (PHIS)
Conference Manager / Pelle Guldborg Hansen
Call for papers / Synthese invites papers on the work of David Lewis and formal philosophy in accordance with the conference abstract. The final papers should be sent electronically to Editor-in-Chief, Vincent F. Hendricks at firstname.lastname@example.org, with "SAC submission" in the subject line. The deadline for submitting a paper for consideration is April 1, 2007. Notification of acceptance for presentation at the conference is August 1, 2007.
Publication / A selection of the best papers will be published as an anthology in the Synthese Library book series.
Posted by Joe Salerno at 9:25 AM
November 02, 2006
Thanks to Joe for the invitation to post.
I want to raise some considerations about attributor contextualism and interest-dependent invariantism. (These considerations come from a paper called “What’s Wrong with Contextualism” that I am working on for Philosophical Quarterly.) First, I want to raise some methodological considerations about how we ought to adjudicate between the two. Second, I want to argue that these considerations at least point us in the direction of attributor contextualism for knowledge attributions. Here is the gist of the argument: John Hawthorne and Jason Stanley have made a lot of the idea that knowledge is the norm of practical reasoning, using this plausible thesis to argue in favor of invariantism and against contextualism. But it seems to me that the argument can be turned on its head. In other words, the idea that knowledge is for practical reasoning seems to me to count in favor of contextualism. The main idea is this: If knowledge is for practical reasoning, then it is the relevant practical reasoning context that should govern. Sometimes this is the subject context-- we make a knowledge attribution in the context of discussion about what S should do. But other times it is the attributor context-- we make the attribution while wondering what we should do. Still other times it is a third party's context-- we are discussing what some third guy should do. Technically this turns out to be a version of attributor contextualism, since the truth value of knowledge attributions varies across attributor context. The governing idea, though, is that truth values are relative to practical reasoning contexts.
Below this idea is developed more fully.
The usual methodology for adjudicating between contextualism and interest-dependent invariantism in epistemology is two-fold: consult our intuitions about possible cases, and consult the linguistic data regarding actual language use. We may include in the latter descriptions about how certain grammatical kinds in our language in fact behave. So, for example, it is common to describe the ways that indexicals behave, and to describe analogies or disanalogies with “knows.” This methodology invites different ways to explain the relevant data. For example, contextualists and invariantists will offer competing explanations for why a knowledge claim seems true or seems false in a possible case, with one side explaining the intuition in terms of semantic competence, the other in terms of pragmatics, perhaps together with “semantic blindness.”
I think we can supplement this two-fold methodology in a fruitful way. Specifically, we can ask what our concept of knowledge and our knowledge language are for. We can ask what roles they play in our conceptual economy and our linguistic practices. By doing so, I suggest, we gain further insight about how our concepts and language can be expected to behave. This same methodology has recently been defended by Edward Craig.
There seems to be no known language in which sentences using “know” do not find a comfortable and colloquial equivalent. The implication is that it answers to some very general needs of human life and thought, and it would surely be interesting to know which and how . . . .
Instead of beginning with ordinary usage, we begin with an ordinary situation. We take some prima facie plausible hypothesis about what the concept of knowledge does for us, what its role in our life might be, and then ask what a concept having that role would be like, what conditions would govern its application. (2)
What might such a “prima facie plausible hypothesis” be? Here are two ideas that have received a lot of play lately: that knowledge is the norm of practical reasoning, and that an important function of our knowledge language is to flag good information and good sources of information. The first idea is emphasized by, among others, Hawthorne and Stanley. The latter idea has been defended in detail by Craig.
Human beings need true beliefs about their environment, beliefs that can serve to guide their actions to a successful outcome. That being so, they need sources of information that will lead them to believe truths . . . So any community may be presumed to have an interest in evaluating sources of information; and in connection with that interest certain concepts will be in use. The hypothesis I wish to try out is that the concept of knowledge is one of them. To put it briefly and roughly, the concept of knowledge is used to flag approved sources of information. (11)
Putting the two ideas together, we get the following plausible thesis: that an important function of our concept of knowledge and our knowledge language, perhaps its primary function, is to flag information and sources of information for use in practical reasoning.
Now suppose this is right. Does that speak in favor of attributor contextualism or interest-dependent invariantism? To my mind, it speaks in favor of attributor contextualism. More specifically, it speaks in favor of the version of attributor contextualism that allows the attributor context to be sensitive to the interests and purposes operative in the subject context. My thinking is this: if the function of knowledge is to serve practical reasoning, it should be tied to the interests and purposes that are relevant to the practical reasoner.
To make the point more clearly we may use Hawthorne’s notion of a “practical environment.” One’s practical environment is constituted by those aspects of one’s environment that are relevant to practical reasoning. Often enough, the practical reasoner with whom we are concerned will be in the attributor’s practical environment. Often enough, that is, one attributes knowledge for the purpose of practical reasoning in one’s own practical environment. But sometimes the practical reasoner will be outside the attributor’s practical environment. For example, sometimes we attribute knowledge for the purpose of practical reasoning in the subject’s practical environment. In that case, it would seem, it is the interests and purposes operative in the subject’s practical environment that are relevant.
These considerations suggest the following general rule: the truth-value of knowledge attributions (and the like) depends on the interests and purposes operative in the relevant practical reasoning context. Sometimes this will be the practical environment of attributor, sometimes that of the subject, and sometimes that of some third party. The position that results, however, will be a version of attributor contextualism, since it entails that the truth-value of knowledge claims is variable over attributor contexts. More exactly, the position is a version of interest-dependent, subject-sensitive contextualism.
As I said above, these remarks are at best suggestive. I don’t pretend to have established the present version of contextualism over its competitors. It is worth noting, however, that the proposed view does very well in relation to Hawthorne’s scorecard for evaluating contextualist and invariantist positions. In fact, it does better than any position that Hawthorne considers. Not pretending to have argued for these claims, I will simply assert the following: subject-sensitive contextualism respects the Moorean constraint that most of our knowledge claims are true, respects plausible closure principles, preserves the intuitive connections between knowledge, assertion and practical reasoning, and can (near enough) respect disquotational schemas for ‘knows’. We get this last result because all knowledge attributions must satisfy fairly high minimal standards, and so a knowledge claim in one context can normally be imported into another. I say “near enough” because there will be exceptions to this general rule. Specifically, we cannot disquote into contexts where stakes drive relevant standards unusually high. That there are such exceptions, however, seems correct. That is, we do not expect disquotation to go in that direction. (Hawthorne correctly notes that no anti-skeptical view can respect both the “Epistemic Possibility Constraint” (If the probability for S that p is not zero, then S does not know that not-p) and the “Objective Chance Principle” (that epistemic probability follows knowledge of objective probability). (94))
Finally, the proposed view deals nicely with a kind of counter-example that gets posed against contextualism: cases involving attributions of knowledge to a high-stakes subject from a low-stakes attributor context. Consider, for example, a case where we ask whether S “knows” that the bank is open on Saturday, based on the evidence that he was at the bank two weeks ago and it was open on Saturday then. Nothing much depends on his being right for us, but a lot depends on it for him. Intuitively, we should judge that S’s claim to “know” is false, even though we are evaluating his claim from a low-stakes context. As Hawthorne and Stanley point out, it seems that it is the interests and purposes operative in the subject’s practical environment that should govern the standards for “knowledge” here. But so long as the attributor context can be properly sensitive to the interests operative in the subject’s practical environment, attributor contextualism can accommodate this point. More specifically, insofar as it is the practical reasoning of the subject that is at issue in the case, the present view rules that it is the interests and purposes operative in the subject’s practical environment that ought to govern our evaluation of the knowledge claim. On the other hand, if the knowledge claim is being evaluated for use in our own practical reasoning, then it is the interests and purposes operative in our own practical environment that should govern. All that seems intuitively correct to me. That is, the proposed position seems to me to yield the right results in each case.
Posted by John Greco at 2:18 PM
October 22, 2006
In Chapter 3 of Knowledge and Its Limits, Timothy Williamson offers recombination arguments for the primeness of knowledge (and other mental states). To say that knowledge is prime is to deny that it is a composite of narrow (internal) and broad (external) conditions. Each argument begins with two cases of knowledge that are alike with respect to their internal condition but different with respect to their external condition. If knowledge is composite, then recombining the internal condition of one case with the external condition of the other case will produce a third case of knowledge. Williamson needs one such recombination that fails to produce a case of knowledge to undermine the thesis that knowledge is composite. Here's Williamson:
...in [case 1] there is water on the right and gin (which looks just like water) on the left, and a brain lesion causes one visually to register only what is on the right. In [case 2] there is gin on the right and water on the left, and a brain lesion causes one visually to register only what is on the left; in [case 3], internally like [case 1] and externally like [case 2], there is gin on the right and water on the left (as in [case 2]), and the brain lesion causes one visually to register only what is on the right (as in [case 1]). Thus, given appropriate background conditions one sees water in [case 1] and [case 2] but not in [case 3]. (70)
My criticism of Williamson's argument is this. It is far from clear that cases 1 and 2 are in fact cases of seeing. After all, for Williamson seeing entails knowing (Chapter 1), and he is sympathetic to the thesis that one can know only if she could not easily have gotten it wrong (Chapter 4). But it would seem that in cases 1 and 2 the subject could very easily have gotten it wrong, since she could very easily have looked at the gin when forming her water-belief. The worries here are the same as for barn-beliefs in Barn County. Just as it seems strange to say that I know that there is a barn when there are fake barns in the vicinity, it seems strange to say that the subject in cases 1 and 2 knows that there is water when there is gin (indistinguishable from water) in the vicinity. Additionally, the subject in each of the two cases has a brain lesion blocking half of her visual information! One would not be remiss to pause and question the respectability of the partially disabled visual process. In sum, two worries arise. There are Ginet-Goldman style barn considerations to worry about, and there are BonJour-Plantinga clairvoyance-brain lesion considerations about the epistemic respectability of strange but reliable belief-forming processes. Both worries go against a claim to knowledge in cases 1 and 2. And so, if seeing entails knowing, then arguably the subjects in cases 1 and 2 fail to see. Related worries surround Williamson's other arguments. Here is Williamson arguing more directly for the primeness of knowledge.
Let [case 1] be a case in which one knows by testimony that the election was rigged; Smith tells one that the election was rigged, he is trustworthy, and one trusts him; Brown also tells one that the election was rigged, but he is not trustworthy, and one does not trust him. Let [case 2] be a case which differs from [case 1] by reversing the roles of Smith and Brown.... Now consider a case [case 3] internally like [case 1] and externally like [case 2]. In [case 3], one does not trust Brown, because one does not trust him in [case 1], and [case 3] is internally like [case 1]. Equally in [case 3], Smith is not trustworthy, because he is not trustworthy in [case 2], and [case 3] is externally like [case 2]. Thus, in [case 3], neither Smith nor Brown is both trustworthy and trusted. Consequently, in [case 3], one does not know that the election was rigged. Thus the condition that one knows that the election was rigged is prime. (72)
My criticism here is this. In recombining the internal condition from case 1 and the external condition from case 2, Williamson fails to include the entire external condition from case 2. Part of the external story in case 2 is that the belief in question was produced by reliable testimony. In case 3, the belief was not produced by reliable testimony. But if in case 3 the belief was not produced by reliable testimony, then Williamson has not properly recombined the cases. He has not included in case 3 the full external condition from case 2. Therefore, the recombination is incomplete.
A general criticism that I am making is that Williamson, in all of his recombination arguments, fails to include some causal or counterfactual conditions as part of the broad (external) condition of the subject in cases 1 and 2. The result is a mistaken attribution of knowledge in the first argument and an incomplete recombination in the second argument. Therefore, the question about the primeness of knowledge remains open.
Posted by Joe Salerno at 4:27 PM
October 04, 2006
Last week I re-read the first chapter of Timothy Williamson's Knowledge and Its Limits. TW argues that 'knows' is the most general factive mental state operator. To be a factive mental state operator (FMSO) is to be a factive semantically unanalyzable expression that attributes a propositional attitude to a subject. The semantic unanalyzability claim is that, by definition, an FMSO is never synonymous with a complex expression whose meaning is composed of the meanings of its parts. So, for instance, 'could hear' is an FMSO. There is a reading of it such that the presumption of truth is not cancelable, as is revealed by the deviance of
(1) She could hear that the volcano was erupting, but it was not erupting.
Moreover, the meaning of 'could hear' is not composed of the meanings of 'could' and 'hear', for that would assimilate 'could hear' to something like 'it is merely possible that s heard that p', which is not factive.
Additionally, 'could hear' is further evidence that 'knows' is the most general FMSO, since 's could hear that p' implies 's knows that p'.
Let's explore the properties of other FMSOs. I want to argue that there is a more general FMSO than 'knows'.
Consider the ambiguity in each of the following expressions:
'could see that'
'could hear that'
'could feel that',
'can't believe that'
'is not happy that'
'is not surprised that'
'failed to realize that'
'is not impressed that'
'is not able to taste that'
Each of these has a factive and a non-factive reading. For instance, 'cannot believe' is factive in
(2) I cannot believe that you are smoking again,
but is not factive in
(3) I cannot believe that I don't have any beliefs.
Now the non-factive readings of the above list items are semantically decomposable. They may be paraphrased roughly as
'it is false that s believes/is happy/is surprised/realizes/is impressed/is able to taste that'
'it is (merely) possible that the subject s sees/hears/feels that'.
Exactly analogous remarks may be made about knowability- and ignorance-attributions. More carefully, 'could have known that' and 'does not know that' both have a factive and a non-factive reading. Let's discuss 'could have known that'. The non-factive reading, perhaps not common in ordinary English, is that it is merely possible that s knows that p. The other reading carries a presumption of truth as in 's was in a position to know that p'. Notice that the presumption of truth is not cancelable. This is demonstrated by the deviance of the following claims:
(4) Andy could have known that grandmother was ill, even though she was not ill.
(5) Sally was in a position to know that Andy was cheating, but he was not cheating.
The deviance of the claims suggests that the presumption of truth is semantic and not cancelable.
As with the factive readings of the items on the above list, we should expect that the factive readings of 'could have known that' and 'is not known that' are not analyzable. My hypothesis is that they are not analyzable. And I suggest that the burden is on one who thinks otherwise to show that 'could have known' is different from all of our other factive operators of the form 'could have ___ed'.
Incidentally, the non-factive readings of all of the aforementioned expressions fail to attribute a propositional attitude to a subject. They either outright deny the presence of the attitude or affirm merely its possibility of obtaining. The factive readings of the above operators, on the other hand, all attribute a propositional attitude to a subject (with the exception of 'does not know that').
According to Williamson, when a propositional attitude that p is attributed, so is a grasp of the concepts in p. Since the factive reading of 'does not know' fails to attribute grasp of meaning, we may conclude that it is not a mental state operator. A fortiori it is not an FMSO. Importantly, 'could have known' does attribute grasp of meaning. Consider,
(6) Andy doesn't understand high-energy physics, yet he could have known that there are top quarks in pp collisions.
(7) Andy doesn't grasp any of the rules of Chess. He was nevertheless in a position to know that his King was about to be mated.
The oddities of (6) and (7) suggest that knowability is a mental state---that 'to s it is knowable that p' implies 's has an attitude that p'---minimally, it implies 's grasps the meaning that p'. Similar things can be said about 's failed to realize that p'. It wouldn't be a failure to realize that p, if the subject didn't have a grasp of the concepts in p.
It would seem then that 'knowable' or 'could have known' is an FMSO. The problem for Williamson's account is that 's could have known that p' does not entail 's knows that p'. Hence, 'knows' is not the most general FMSO. Instead, the entailment goes the other way. Are we to conclude that 'could have known' is the most general FMSO?
Posted by Joe Salerno at 6:51 AM
September 21, 2006
Richard Zach's Log Blog has brought to my attention the recent death of Martin Löb. It is fitting that we think here about his work. The famous Löb Theorem (in "Solution of a Problem of Leon Henkin", JSL, 1955) generates Löb's paradox (ibid.), which goes something like this. Consider the self-referential sentence (*): if (*) is true, then A.
Notice that (*) is provable for an arbitrary proposition A. Here's the proof. Suppose (*) is true. Then it satisfies its own antecedent. So it follows that A. By conditional proof, if (*) is true then so is A. And that is just to say that (*) is true.
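The proof just sketched can be set out step by step (my own reconstruction in a natural-deduction style, writing $C$ for the sentence (*) and conflating $C$ with "$C$ is true", as the informal reasoning does):

```latex
\begin{align*}
&1.\ C \leftrightarrow (C \to A) && \text{definition of the self-referential sentence}\\
&2.\ \quad C && \text{assume, for conditional proof}\\
&3.\ \quad C \to A && \text{2, left-to-right direction of 1}\\
&4.\ \quad A && \text{2, 3, modus ponens}\\
&5.\ C \to A && \text{2--4, conditional proof}\\
&6.\ C && \text{5, right-to-left direction of 1}\\
&7.\ A && \text{5, 6, modus ponens}
\end{align*}
```

Nothing in lines 1-7 appeals to negation, which is what makes the proof usable as an inconsistency test for negation-free languages.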
Notice also that the provability of (*) underwrites the truth of any proposition A. For this reason, and since both (*) and its proof are negation-free, Löb offers (*) as a test for the inconsistency of negation-free languages (that allow self-reference).
Löb credits an anonymous referee for extracting the paradox and the insight about how to test for inconsistency without negation. Curry (1942), and not Löb (1955), usually gets credit for the above insights. Johan van Benthem ("Four Paradoxes", JPL 1978), however, argues that the Löb+Referee insights were developed independently of Curry's work. Moreover, the Curry paradox is treated by Curry and his students as a feature of formal systems only, whereas Löb's paradox is a natural language paradox.
An interesting loose-end is the identity of the 1955 anonymous referee that extracted the paradox from Löb's Theorem.
Posted by Joe Salerno at 7:55 AM
September 18, 2006
Knowability & Beyond
Special issue of Synthese
- Can there be non-actual knowledge of what is actually the case?
- Is the concept of knowability basic or is it semantically decomposable into knowledge and (alethic) possibility?
- Should an intuitionist find a way to express an existential commitment to some ignorance and undecidedness?
- Are there more truths than knowables?
Posted by Joe Salerno at 2:23 PM
September 11, 2006
September 06, 2006
Modest modal epistemic reasoning reveals the equivalence of the following two principles:
(1) Any truth can be known.
(2) All truths are known.
Jon Kvanvig, in his latest book The Knowability Paradox (2006), poses a challenge to anyone who accepts the validity of the reasoning---viz., explain the loss of the apparent logical distinction between these two principles. Here's one way to go.
It is logically necessary that each of the above propositions is false. Necessarily equivalent propositions often appear to express different thoughts, especially if one involves a concept not involved in the other.
Why think that the above propositions are necessarily false? An appendix to Nicholas Rescher's Epistemic Logic (2005) inspires an answer. It is a logical fact that there are more truths than knowables. Knowledge requires thought, and we could at most think a countable number of propositions. However, the true propositions themselves are uncountable. A diagonalization argument is required to make this stick, but it shouldn't be difficult to construct one for, say, a class of truths about the rational numbers. So if (i) there are more truths about the rational numbers than things that can be known about the rational numbers and (ii) the proof of this rests on no non-logical facts, then it is logically necessary that there is an unknown truth. (2) is logically false. And by the same reasoning so is (1).
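The diagonal construction gestured at above can be illustrated with a toy sketch (Python; the enumeration and the three sample "truths" are my own illustrative choices). Each expressible truth about the rationals is modeled as the set of rationals it is true of; the diagonal set differs from the i-th listed set at the i-th rational, so it appears on nobody's countable list:

```python
from fractions import Fraction

def enumerate_rationals(n):
    """Return the first n positive rationals in a zig-zag enumeration (duplicates skipped)."""
    seen, out = set(), []
    s = 2  # numerator + denominator
    while len(out) < n:
        for p in range(1, s):
            q = Fraction(p, s - p)
            if q not in seen:
                seen.add(q)
                out.append(q)
                if len(out) == n:
                    break
        s += 1
    return out

# A finite stand-in for a countable list of expressible "truths",
# each modeled as the set of rationals it is true of.
listed_sets = [
    lambda q: q > 1,               # "q is greater than 1"
    lambda q: q.denominator == 1,  # "q is an integer"
    lambda q: q < Fraction(1, 2),  # "q is less than 1/2"
]

qs = enumerate_rationals(len(listed_sets))

# Diagonal set D: put q_i in D exactly when q_i is NOT in the i-th listed set.
D = {q for i, q in enumerate(qs) if not listed_sets[i](q)}

# D disagrees with the i-th listed set at q_i, so D is on nobody's list.
for i, q in enumerate(qs):
    assert (q in D) != listed_sets[i](q)
print("diagonal set:", sorted(D))
```

The same move works against any countable enumeration, finite prefixes aside: whatever countable list of truths one can think, the diagonal describes a truth left off the list.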
The apparent logical distinction is explained by the fact that (2) seems on first glance to be stronger than (1). But the appearance is the result of not immediately recognizing that both propositions are logical falsehoods.
Posted by Joe Salerno at 5:12 AM
August 30, 2006
August 26, 2006
Fantl and McGrath have an excellent new paper on pragmatic encroachment.
The latest version of the Fantl and McGrath argument against epistemic purism goes like this. Suppose that possible subjects S1 and S2 are alike with respect to strength of epistemic position. For instance, we suppose that they share precisely the same evidence regarding a true proposition p. And we suppose that S1 knows that p. S2 is just like S1 in every respect except stakes: the matter is much more important to S2, and so she is not rational to act as if p. But then by the pragmatic condition on knowledge, S2 fails to know p. And since, ex hypothesi, S1 does know p, it follows that whether one is in a position to know does not supervene on strength of epistemic position. Epistemic purism is false.
The argument harbors some fundamental assumptions. First, subjects S1 and S2 are thought to be the same with respect to strength of epistemic position because they are said to be evidential twins. One implicit fundamental assumption, then, is this: only evidence affects strength of epistemic position (Assumption 1). Second, S1 and S2 are thought to be evidential twins because it is thought that practical interests do not affect how much evidence one has (Assumption 2).
The notion of evidence, for F and M, is meant to be "a broad intuitive concept, that internalists and externalists might analyze in different ways." And in defense of their position, they remark that "it ought to be common ground between theories of evidence that having a lot at stake in whether p is true does not, by itself, provide evidence for or against p." Further, they explain that evidence for p, but not stakes in whether p, affect the probability of p (in some appropriate sense of 'probability').
Here is a possible reply that I discussed at the Epistemic Value Conference. Having a lot at stake does affect evidence. Consider: when stakes are high, previously ignored evidence becomes salient. Such "new" evidence may reduce the probability that p is true. For instance, S1 knows that the train is the Express train based on another traveler's testimony, but had it meant more to him to be right he might have recalled that a small number of travelers are clueless. Weighing in that a small number of travelers are clueless suddenly reduces the probability that the train is the Express. If practical interests can make salient previously ignored evidence, then Assumption 2 is false. Practical interests do affect the amount of evidence one has, and so, by Assumption 1, practical interests (at least indirectly) affect the strength of one's epistemic position.
Posted by Joe Salerno at 6:03 AM
August 25, 2006
The Danish Epistemology Network, Namicona, and the Department of Philosophy at the University of Copenhagen hosted an epistemology workshop on August 22. Speakers included Lars Gundersen (Aarhus), Jesper Kallestrup (Edinburgh), Berit Brogaard and yours truly.
Gundersen developed an account of why neither disjunctivism nor contextualism has the resources for dealing adequately with “abominable conjunctions”. The natural way for these theories to deal with such conjunctions leaves them vulnerable when we reformulate the conjunctions in terms of claimability/assertibility: (1) it is claimable that I know that p (where p is some ordinary proposition); (2) if it’s claimable that p and claimable that p entails q, then it is claimable that q; and (3) it is not claimable that I know that q (where q is the negation of the skeptical hypothesis).
Kallestrup’s paper, “Reliabilist Justification: Basic, Easy and Brute”, offered a way of blocking track-record versions of the easy knowledge objection. The key is to motivate a Wrightian restriction on the transmission of justification across valid deduction. Doing so blocks the very first inferential step in the track-record/bootstrapping arguments.
Brogaard in her presentation “In Defense of a Perspectival Semantics for ‘Knows’” first defends relativism against objections (including Stanley’s objections that it cannot accommodate the factivity of ‘knows’ and that it entails that circumstances of evaluation have features that cannot be shifted by any intensional operator), but then shows that a perspectivalist semantics can do all the same work without relativizing sentence truth to contexts of assessment.
I presented “Knowability Noir: 1945-1963”, which evaluates an unpublished debate between Fitch and Church in 1945. Their debate was primarily over the effectiveness of the proof we today call the “knowability paradox”. My primary concern was to offer an account of what Fitch perceived to be the significance of the proof in his 1963 paper. I argued that the significance was to draw general and special lessons about how to avoid conditional fallacies in philosophical analysis.
Posted by Joe Salerno at 1:52 AM
The final day of the epistemic value conference included a paper by Ward Jones. He developed some ideas about the nature of doxastic goods in his attempt to say what it is that makes knowledge valuable. Pascal Engel’s position was that none of the arguments for pragmatic encroachment on truth, evidence, justification or knowledge work. Christian Piller argued that our interest in truth is not captured by the idea that we desire to believe all and only truths. In particular, he argued that we do not wish to believe only truths. And Martin Kusch developed an account of the social value of knowledge partially in terms of a very interesting fictional genealogy of a proto-concept of knowledge.
Posted by Joe Salerno at 1:40 AM
August 20, 2006
Here are some highlights from a full day of interesting talks at the epistemic value conference. Wayne Riggs developed a conception of epistemic luck to complement his credit approach to the value problem. Much use was made of Jennifer Lackey's published criticisms of the credit approach and her criticisms of Pritchard's theory of luck.
Matt Weiner explained that knowledge is like a Swiss army knife. Its value is derivative of the value of its components. Moreover, knowledge is not more valuable than any of its proper parts. Matt's positions hinged on the connections between knowledge and practical rationality.
Berit Brogaard argued that a perspectivalist semantics supports epistemic value monism better than does contextualism or relativism. Along the way she denies that there are any genuinely relative truths.
Mark Kaplan came to terms with human fallibility by arguing that a determination of one's confidence that p does not determine her opinion regarding p; "being confident" is different from "being willing to say". Without paradox I can take it to be highly likely, say, that there are errors in my book, even though I endorse all of the claims therein.
There were other interesting talks as well. Must get some sleep before tomorrow's marathon.
Posted by Joe Salerno at 11:09 AM
August 19, 2006
Today was the pre-conference workshop on epistemic value at the University of Stirling. Stephen Grimm set up a dilemma for epistemic value monism. Either epistemic appraisals apply only to "interesting truths" (Alston, Goldman) or they apply to all truths equally (Lynch). If the former, then, absurdly, epistemic appraisals such as 'is justified' do not apply to uninteresting true beliefs. If the latter, then, absurdly, believing that there are n blades of grass in the yard is as valuable as any other belief.
Jason Baehr argued that the guiding intuition behind the value problem does not warrant the standard formality and generality constraints on a solution. That is, the intuition that knowledge is more valuable than mere true belief does not motivate the traditional thought that one or more components of knowledge must have "truth-independent" value or the thought that knowledge is always more valuable than true belief.
Jay Wood discussed a wide spectrum of epistemic values and argued against a sharp distinction between epistemic and moral value.
Posted by Joe Salerno at 2:59 PM
August 14, 2006
July 30, 2006
In Church's "First Anonymous Referee Report on Fitch's 'A Definition of Value'" from 1945 we find not only the first formulation of the proof today known as the knowability paradox, but we find the first proposed solution. Church rejects any principle stating that propositional attitudes necessitate (other) propositional attitudes. For instance, Church rejects plausible closure principles for belief. He explains,
To be sure, one who believes a proposition without believing its more obvious logical consequences is a fool; but it is an empirical fact that there are fools. It is even possible there might be so great a fool as to believe the conjunction of two propositions without believing either of the two propositions; at least, an empirical law to the contrary would seem to be open to doubt. On this ground it is empirically possible that a might believe k' at time t without believing k at time t (although k' is a conjunction one of whose terms is k).
Church here denies that belief is closed under conjunction elimination. The context reveals that he is also denying that knowledge is so closed. More generally the idea is that if a propositional attitude is the result of another, it is so contingently. Notice that we have today departed dramatically from Church's thought. In all of the literature on the knowability paradox, for instance, it is granted that it is at least metaphysically necessary that one knows the conjuncts of known conjunctions. An exception is found in a position articulated in Kelp and Pritchard's very interesting forthcoming "Anti-realism, Factivity and Fitch".
To pick another example,
...there is no valid law of psychology according to which anything whatsoever about my desires may be inferred from the fact that I know so-and-so.
It should be noted that Church (in the second referee report) allows Russell's theory of types to avoid such obvious counterexamples as "Necessarily, if a knows that she desires that p then a desires that p". Accordingly, the approach blocks the instance of the factivity of knowledge that is employed in the knowability paradox---viz., K(p & ~Kp) => (p & ~Kp). In this regard Church foreshadows Bernie Linsky's independent thoughts on the matter, forthcoming as "Logical Types in Arguments about Knowability and Belief".
In sum, Church's position is that his knowability paradox is indeed an invalid proof. On his view knowledge is neither closed under conjunction elimination nor unrestrictedly factive.
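For reference, here is the proof under discussion in a standard modern reconstruction (presentations differ in exactly where factivity is applied); line 2 uses the distribution of knowledge over conjunction that Church denies, and line 3 uses an instance of factivity of the sort his type restrictions block:

```latex
\begin{align*}
&1.\ K(p \land \lnot Kp) && \text{assumption, for reductio}\\
&2.\ Kp \land K\lnot Kp && \text{1, distribution of } K \text{ over } \land\\
&3.\ Kp \land \lnot Kp && \text{2, factivity of } K\\
&4.\ \lnot K(p \land \lnot Kp) && \text{1--3, reductio}\\
&5.\ \Box\lnot K(p \land \lnot Kp) && \text{4, necessitation}\\
&6.\ \lnot\Diamond K(p \land \lnot Kp) && \text{5, modal duality}\\
&7.\ (p \land \lnot Kp) \to \Diamond K(p \land \lnot Kp) && \text{instance of ``all truths are knowable''}\\
&8.\ \lnot(p \land \lnot Kp) && \text{6, 7, modus tollens}\\
&9.\ p \to Kp && \text{8, classical logic}
\end{align*}
```

Rejecting either line 2 or line 3 stops the reductio before the knowability principle is ever touched, which is exactly Church's strategy.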
Posted by Joe Salerno at 6:40 AM
July 28, 2006
The following is a chronological catalog of archival documentation pertaining to the early history of Fitch's knowability paradox. The items were identified for the first time by myself or those aiding my research. I discuss their content in "Knowability Noir: 1945-1963", which will appear in New Essays on the Knowability Paradox.
The documents can be found in one of three archives:
(FFP) Frederic B. Fitch Papers: Manuscripts and Archives. Yale University Library.
(ACP) Alonzo Church Papers: Department of Rare Books and Special Collections. Princeton University Library.
(ENP) Ernest Nagel Papers: Rare Book and Manuscript Library. Columbia University.
* Church, Alonzo. "First Anonymous Referee Report on Fitch's 'A Definition of Value'". January or February 1945. Handwritten by Alonzo Church to Ernest Nagel, coeditor of JSL. Contains the first proof of the modal epistemic result, today known as the knowability paradox. (ENP: Box 1)
An edition of this and the second referee report (listed below) is being prepared by Julien Murzi and myself for publication in New Essays on the Knowability Paradox.
* Nagel, E. "Letter to Church: March 6, 1945". Explains that Fitch has returned the manuscript and offered replies to the first report. (ACP: Box 20)
* "Second Anonymous Referee Report on Fitch's 'A Definition of Value'". Late March or early April 1945. Includes a more formal characterization of the knowability result in Lewis and Langford's proof theory. Includes replies to Fitch's discussion of the first report. (ENP: Box 1)
* Nagel, E. "Letter to Church: April 13, 1945". Announces that Fitch has withdrawn his paper owing to a defect in his definition of value. (ACP: Box **)
* Fitch, F. "A Logical Analysis of Some Value Concepts". Fitch's December 23, 1961 Presidential Address to the Association for Symbolic Logic. (FFP: Box 33)
* Fitch, F. "A Logical Analysis of Some Value Concepts". Penultimate draft. (FFP: Box 33)
* Postcard to Fitch: January 18, 1963. Regarding remaining typographical edits to be made prior to the printing of "A Logical Analysis of Some Value Concepts" in JSL. (FFP: Box 33)
Posted by Joe Salerno at 3:34 PM
July 19, 2006
July 14, 2006
I’ve come up with a draft of a program for testing my Fibonacci betting strategy for statistical reliability. It applies to a basic game of craps. Informal discussion of the strategy appears in the previous post.
The program defined below has three possible outcomes:
1. it yields a bankroll value B that is 500 or greater, and halts. FORMAL WIN
2. it yields a bankroll value B that is too low to continue, and halts. FORMAL LOSS
3. it yields neither a formal WIN nor a formal LOSS after exceedingly long play and stops itself.
B = bankroll value. (I use BB to calculate a change in the bankroll value (e.g., BB = (B - x)), and then reassign B to that number.)
R = outcome of the roll of two fair six-sided dice = 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 or 12.
P = point made = 4, 5, 6, 8, 9, or 10.
x = pass line bet = 1, 2, 3, 5, 8, 13, or 21 (defined by the Fibonacci series).
y = free odds bet = 10(x).
C = Counter #. (Shuts program down at 25. I use CC to calculate a change in the counter (e.g., CC = (C+1)), and then reassign C to the new number.)
S0: Let B = 400. C=0. Go to S1.
[Notes: Initial bankroll and counter values]
S1: Let CC=(C + 1). Let C=CC.
If C=26, then END.
If C<26, then Go to S2.
[Counter to stop play after 25 starts.]
S2: Let x = 1. Go to S3.
[Notes: Begin Fibonacci series]
S3: Let y = 10(x). Go to S4.
S4: If (y + x) ≤ B, then Roll dice. Outcome=R. Go to S5.
If (y + x) > B, then END.
[Notes: on come out roll, game shuts down if bankroll is lower than the required bet]
S5: If R=7 or R=11, then Let BB = (x + B). Let B = BB. Go to S11.
If R=2, 3, or 12, then Let BB = (B - x). Let B = BB. Go to S4.
If R=6, then let P = 6 and Go to S6.
If R=8, then let P = 8 and Go to S6.
If R=5, then let P = 5 and Go to S7.
If R=9, then let P = 9 and Go to S7.
If R=4, then let P = 4 and Go to S8.
If R=10, then let P = 10 and Go to S8.
S6: Roll dice. Outcome = R.
If R=P, then Let BB = ((x + y + .2(y)) + B). Let B = BB. Go to S10.
If R=7, then Let BB = (B - (x + y)). Let B = BB. Go to S9.
If R≠P and R≠7, then Go to S6.
S7: Roll dice. Outcome = R.
If R=P, then Let BB = ((x + y + .5(y)) + B). Let B = BB. Go to S10.
If R=7, then Let BB = (B - (x + y)). Let B = BB. Go to S9.
If R≠P and R≠7, then Go to S7.
S8: Roll dice. Outcome = R.
If R=P, then Let BB = ((x + 2(y)) + B). Let B = BB. Go to S10.
If R=7, then Let BB = (B - (x + y)). Let B = BB. Go to S9.
If R≠P and R≠7, then Go to S8.
S9: If x = 1, then let x = 2 and Go to S3.
If x = 2, then let x = 3 and Go to S3.
If x = 3, then let x = 5 and Go to S3.
If x = 5, then let x = 8 and Go to S3.
If x = 8, then let x = 13 and Go to S3.
If x = 13, then let x = 21 and Go to S3.
If x = 21, then END.
[Notes: after a seven-out, advance to the next Fibonacci bet; after a loss at x = 21, play ends]
S10: If B ≥ 500, then END.
If B < 500, then Go to S1.
S11: If B ≥ 500, then END.
If B < 500, then Go to S4.
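Though stated as a GOTO-style flowchart, the S0-S11 procedure is mechanical enough to run. Here is one way it might be rendered in Python for repeated trials (a sketch; the function and variable names, the seed, and the trial count are my own choices, and the state labels in the comments record my reading of the steps above):

```python
import random

FIB = [1, 2, 3, 5, 8, 13, 21]  # pass line bets, in order
ODDS = {4: 2.0, 10: 2.0, 5: 1.5, 9: 1.5, 6: 1.2, 8: 1.2}  # free-odds payout multipliers

def roll():
    """Roll two fair six-sided dice (R in S4-S8)."""
    return random.randint(1, 6) + random.randint(1, 6)

def session(bankroll=400, goal=500, max_starts=25):
    """Play one session of the S0-S11 procedure; return the final bankroll."""
    starts = 0
    while starts < max_starts:          # S1: counter stops play after 25 starts
        starts += 1
        i = 0                           # S2: restart the Fibonacci series
        made_point = False
        while not made_point:
            x = FIB[i]                  # pass line bet
            y = 10 * x                  # S3: 10x free odds
            if x + y > bankroll:        # S4: bankroll too low to cover the bet
                return bankroll
            r = roll()                  # come-out roll (S5)
            if r in (7, 11):            # natural: win the pass line bet
                bankroll += x
                if bankroll >= goal:    # S11
                    return bankroll
                continue                # S11 -> S4: same bet again
            if r in (2, 3, 12):         # craps: lose the pass line bet
                bankroll -= x
                continue                # back to S4 with the same x
            point = r                   # point established (S6/S7/S8)
            while True:
                r = roll()
                if r == point:          # point made: line pays even, odds pay ODDS[point]
                    bankroll += x + ODDS[point] * y
                    made_point = True
                    break
                if r == 7:              # seven-out -> S9
                    bankroll -= x + y
                    i += 1
                    if i == len(FIB):   # lost at x = 21: play ends
                        return bankroll
                    break               # S9 -> S3: next Fibonacci bet
        if bankroll >= goal:            # S10
            return bankroll
    return bankroll

# Run many sessions to estimate the strategy's reliability.
random.seed(0)
trials = [session() for _ in range(2000)]
wins = sum(b >= 500 for b in trials)
print(f"formal wins: {wins}/2000, mean final bankroll: {sum(trials)/2000:.1f}")
```

Each session returns the final bankroll, so a "formal win" is any return value of 500 or more; running a few thousand sessions gives a rough statistical answer to the question posed in the previous post.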
Posted by Joe Salerno at 5:30 AM
July 13, 2006
Here is a Fibonacci-based betting strategy that I have been developing. It is the simplest and the safest of my strategies for beating a standard house at craps. It seems effective, but I wonder whether its reliability can be decided more precisely.
The basic game of craps is to match your point on the dice before you "seven-out". The probability that you will seven-out (and fail to match your point) depends on the point: 2/3 for a point of 4 or 10, 3/5 for 5 or 9, and 6/11 (about .55) for 6 or 8. Weighted by how often each point is established, the average is about .59.
Let us define a loss as sevening-out six times in a row. The probability that you will lose is then about .04, so the probability of a win is about .96. The probability of winning 5 times in a row is about .80, and of winning 8 times in a row about .70. The strategy outlined below yields a 25% profit with roughly 5 to 8 wins. So it is likely on this betting algorithm to come out ahead by 25%.
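The per-point seven-out probabilities, and their weighted average, can be computed exactly from the standard two-dice distribution (a quick sketch of my own):

```python
from fractions import Fraction

# Ways to roll each point with two fair dice
ways = {4: 3, 5: 4, 6: 5, 8: 5, 9: 4, 10: 3}

# There are six ways to roll a 7, so P(seven-out | point) = 6 / (6 + ways[point])
seven_out = {p: Fraction(6, 6 + w) for p, w in ways.items()}

# Average, weighted by how often each point gets established
total = sum(ways.values())  # 24 point-establishing outcomes
avg = sum(Fraction(w, total) * seven_out[p] for p, w in ways.items())
print(float(avg))
```

With six ways to roll a 7 against only three to five ways to repeat the point, the weighted average comes out just under .6.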
Here's the strategy.
Begin with $400. Remember to quit when you have earned $100 (or go bankrupt, which is about 6 losses in a row).
Bet only Fibonacci numbers (in order) on the "pass line":
1, 2, 3, 5, 8, 13 ...
So, always begin with $1 on the pass line. If you fail to match your point, then bet $2 on the pass line. If you fail to match your next point, then bet $3, and so on, traversing the Fibonacci series. If at any stage you do match your point, then begin the sequence again.
Most importantly, always take 10x odds behind the line. Completely ignore wins and losses on the come out roll, since the gains and losses there will be negligible (i.e., the game play that matters begins only after the point is established and 10x odds are placed behind the line). The game is over when you're up $100 (and walk with $500 total) or you go bankrupt (and walk with $0 total).
Give the method a try here. Click on "options" to set your virtual bankroll and to set the game to 10X odds.
What is the precise reliability of the strategy described above? If the problem is undecidable (as I expect it is), then might one nonetheless be able to design a computer program that would run the strategy a sufficient number of times to estimate its reliability statistically?
Posted by Joe Salerno at 7:44 AM