Monday 29 January 2018

Update on my Necessity and Propositions account (and my haste to declare it false)

In some recent posts here I have discussed, in relation to the account of necessity defended in my thesis, propositions like 'Air is airy' (due to Jens Kipper), which we know to be necessarily true, but only because we know empirically that air is not a natural kind, and hence that all there is to being air is being airy, and 'Eminem is not taller than Marshall Mathers' (due to Strohminger and Yli-Vakkuri), which we know to be necessarily true, but only because we know empirically that Eminem is Marshall Mathers. That account says that a proposition is necessarily true iff it is in the deductive closure of the set of true inherently counterfactually invariant (ICI) propositions. (Roughly, a proposition is ICI if it does not vary across counterfactual scenarios when held true. For more detail see Chapter 5 of my thesis.)
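
To make the shape of the account vivid, it can be put schematically as follows (a rough formalisation in my own notation, not a quotation from the thesis; 'Cl' is just my label for deductive closure):

```latex
% A rough schematic rendering of the ICI account (my notation, not verbatim from the thesis).
% Cl(S) stands for the deductive closure of the set S.
\[
\Box p \iff p \in \mathrm{Cl}\bigl(\{\, q : q \text{ is true and } q \text{ is ICI} \,\}\bigr)
\]
```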

At first, I reacted by thinking that such propositions show that account to be false. I then came up with another account, based on the idea of a counterfactual invariance (CI) decider. I still find this new account more elegant, but I soon came to have doubts about just how threatening these examples really are to the ICI-based account in my thesis.

I have recently realised that the ICI account fares even better in the face of these examples than I suggested in the post mentioned above. There, I suggested in effect that 'All there is to being air is being airy' could be argued to imply 'Air is airy' on a suitably rich notion of implication, thus saving the ICI account, and similarly that 'Eminem is Marshall Mathers' could be argued to imply 'Eminem is not taller than Marshall Mathers'.

But, I have realised, no such rich notion of implication is required! We just need to conjoin the empirical proposition which decides the modal matter with the proposition whose modal status is in question. 'Air is not a natural kind and air is airy' and 'All there is to being air is being airy and air is airy' are both true and ICI, and each - very straightforwardly, by conjunction elimination - implies the desired proposition. For the Eminem case we have 'Eminem is Marshall Mathers and Eminem is not taller than Marshall Mathers'. So there was never a serious problem for the ICI account after all!
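
For what it's worth, the move here is about as simple as derivations get. Using A for the empirical decider and B for the target proposition (the schematic labels are mine), it is a single application of conjunction elimination:

```latex
% Schematic form of the move (the labels A and B are mine):
% A: the empirical decider, e.g. 'Air is not a natural kind'
% B: the target proposition, e.g. 'Air is airy'
% The conjunction is true and ICI, and the target follows by conjunction elimination:
\[
A \wedge B \vdash B
\]
```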

Admittedly, these impliers do perhaps seem a bit "clever", a bit artificial in some way, and this - together with the fact that the CI decider account requires no appeal to implication at all - is why I still find that account more elegant.

One thing that I think went wrong in my thought process around this is that I got a kind of kick out of concluding that my original account was false. Doing so made me feel like a virtuous philosopher, open to changing their views. But I am glad that I now have a more elegant account, and the notion of a CI decider. (I wonder: Would the CI decider account still have come to me if I had not overreacted and thought my original account falsified? Or did my foolishness here cause me to come up with the CI decider account?)

Tuesday 9 January 2018

Robin Hanson Responds

I recently posted criticisms of Robin Hanson and Kevin Simler's excellent new social science book The Elephant in the Brain. Hanson responds here. The response is short so I will reproduce it here:

The fourth blog review was 1500 words, and is the one on a 4-rank blog, by philosopher Tristan Haze. He starts with praise:

A fantastic synthesis of subversive social scientific insight into hidden (or less apparent) motives of human behaviour, and hidden (or less apparent) functions of institutions. Just understanding these matters is an intellectual thrill, and helpful in thinking about how the world works. Furthermore – and I didn’t sufficiently appreciate this point until reading the book, … better understanding the real function of our institutions can help us improve them and prevent us from screwing them up. Lots of reform efforts, I have been convinced (especially for the case of schooling), are likely to make a hash of things due to taking orthodox views of institutions’ functions too seriously.
But as you might expect from a philosopher, he has two nits to pick regarding our exact use of words.
I want to point out what I think are two conceptual shortcomings in the book. … The authors seem to conflate the concept of common knowledge with the idea of being “out in the open” or “on the record”. … This seems wrong to me. Something may satisfy the conditions for being common knowledge, but people may still not be OK talking about it openly. … They write: ‘Common knowledge is the difference between (…) a lesbian who’s still in the closet (though everyone suspects her of being a lesbian), and one who’s open about her sexuality; between an awkward moment that everyone tries to pretend didn’t happen and one that everyone acknowledges’ (p. 55). If we stick to the proper recursive explanation of ‘common knowledge’, these claims just seem wrong.
We agree that the two concepts are in principle distinct. In practice the official definition of common knowledge almost never applies, though a related concept of common belief does often apply. But we claim that in practice a lack of common belief is the main reason for widely known things not being treated as “out in the open”. While the two concepts are not co-extensive, one is the main cause of the other. Tristan’s other nit:
Classical decision theory has it right: there’s no value in sabotaging yourself per se. The value lies in convincing other players that you’ve sabotaged yourself. (p. 67).
This fits the game of chicken example pretty well. But it doesn’t really fit the turning-your-phone-off example: what matters there is that your phone is off – it doesn’t matter if the person wanting the favour thinks that your phone malfunctioned and turned itself off, rather than you turning it off. … It doesn’t really matter how the kidnapper thinks it came about that you failed to see them – they don’t need to believe you brought the failure on yourself for the strategy to be good.
Yes, yes, in the quote above we were sloppy, and should have instead said “The value lies in convincing other players that you’ve been sabotaged.” It matters less who exactly caused you to be sabotaged.
So Hanson paints me as a nitpicky philosopher, but nevertheless takes the points. He didn't mention the second point under the second heading, about theory of mind, which I think is perhaps the most important one. This omission makes it easier for him to paint me as a nitpicky philosopher. But I am happy to see the response, and will not be daunted in making conceptual points that may seem like mere nitpicks when one is in fast-and-loose mode.

What may seem like mere nitpicks at the stage of airing these ideas and getting them a hearing can turn into important substantive points in the context of actually trying to develop them further and make them more robust. 

Wednesday 3 January 2018

Two Critical Remarks on The Elephant in the Brain

UPDATE: See my response to Robin Hanson's response.

The Elephant in the Brain, the new book by Robin Hanson and Kevin Simler, is a fantastic synthesis of subversive social scientific insight into hidden (or less apparent) motives of human behaviour, and hidden (or less apparent) functions of institutions. Just understanding these matters is an intellectual thrill, and helpful in thinking about how the world works. Furthermore - and I didn't sufficiently appreciate this point until reading the book, despite being exposed to some of the ideas on Hanson's blog and elsewhere - better understanding the real function of our institutions can help us improve them and prevent us from screwing them up. Lots of reform efforts, I have been convinced (especially for the case of schooling), are likely to make a hash of things due to taking orthodox views of institutions' functions too seriously.

Without trying to summarise the book here, I want to point out what I think are two conceptual shortcomings in the book. This is friendly criticism. Straightening these confusions out will, I think, help us make the most of the insights contained in this book. Also, avoiding these errors in future or revised presentations of these insights may aid their dissemination, since the errors may cause some readers to be unduly hostile.

I'm not sure how important the first shortcoming is. It may be fairly trifling, so I'll be quick. The second one I suspect might be more important.

1. Being Common Knowledge Confused With Being Out in the Open

One conceptual issue came up for me in Chapter 4, 'Cheating'. Here, around pp. 55-57, the authors seem to conflate the concept of common knowledge with the idea of being "out in the open" or "on the record".

A group of people have common knowledge of P if everyone in the group knows that P, and knows that everyone in the group knows that P, and knows that everyone in the group knows that everyone in the group knows that P, and so on.
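
Spelled out a little more formally (this is just the standard iterated rendering, in my notation, not the book's):

```latex
% Standard iterated rendering of common knowledge (my notation, not the book's).
% E_G(p): everyone in group G knows that p.
\[
C_G(p) \iff E_G(p) \wedge E_G(E_G(p)) \wedge E_G(E_G(E_G(p))) \wedge \dots
\]
```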

On the other hand, a bit of knowledge is on the record or out in the open if it is 'available for everyone to see and discuss openly' (p. 55). 

The authors conflate these ideas, asserting that 'Common knowledge is information that's fully "on the record," available for everyone to see and discuss openly' (p. 55). (This comes shortly after the proper recursive explanation of 'common knowledge'.)

This seems wrong to me. Something may satisfy the conditions for being common knowledge, but people may still not be OK talking about it openly. The popular notion of an open secret gets at this point (somewhat confusingly for present purposes, since here the word 'open' gets used on the other side of the distinction). Something may be widely known, indeed even commonly known in the special recursive sense, while being taboo or otherwise unavailable for free discussion.

In addition to muddying the proper recursive explanation by asserting that common knowledge is that which is on the record and out in the open, the authors give supplementary example-based explanations of 'common knowledge' which seem to pull this expression further towards being unhelpfully synonymous with 'out in the open' and 'on the record'. For instance, they write: 'Common knowledge is the difference between (...) a lesbian who's still in the closet (though everyone suspects her of being a lesbian), and one who's open about her sexuality; between an awkward moment that everyone tries to pretend didn't happen and one that everyone acknowledges' (p. 55). If we stick to the proper recursive explanation of 'common knowledge', these claims just seem wrong. There could be cases where a lesbian is not open about being a lesbian, yet the hierarchy of conditions for common knowledge is fulfilled. Likewise for the awkward moment that everyone wants swept under the rug.

2. Excessive Preconditions Posited for Adaptive 'Self-Sabotage'

The authors give fascinating, instructive explanations of how what they call 'self-sabotage' can be adaptive in some situations (pp. 66-67). One example they give is visibly removing and throwing out your steering wheel in a game of chicken (provided you do it first, this is a good strategy, since your opponent then knows that their only hope of avoiding collision is to swerve themselves, losing the game of chicken). Another is closing or degrading a line of communication, e.g. turning your phone off when you think you might be asked a favour you don't want to grant. Another is avoiding seeing your kidnapper's face so that they don't kill you in order to prevent you identifying them to authorities. Another example is a general believing, despite contrary evidence, that they are in a good position to win a battle - while epistemically bad, this may cause the general (and in turn the troops) to be more confident and intimidating, and could even change the outcome in the general's favour.
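
As an aside, a toy payoff matrix brings out why throwing out the steering wheel works in the game of chicken (the numbers are purely illustrative and mine, not the book's):

```latex
% Illustrative game of chicken payoffs (Row, Column); the numbers are mine, not the book's.
\[
\begin{array}{c|cc}
                & \text{Swerve} & \text{Straight} \\ \hline
\text{Swerve}   & (0,\,0)       & (-1,\,1)        \\
\text{Straight} & (1,\,-1)      & (-10,\,-10)
\end{array}
\]
% If Row visibly throws out the steering wheel, Row can only play Straight;
% Column's best response to Straight is Swerve (since -1 beats -10), so Row wins.
```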

But some of the things they then say about this sort of thing seem confused or wrong to me. The underlying problem, I think, is hasty generalisation. For instance:
Classical decision theory has it right: there's no value in sabotaging yourself per se. The value lies in convincing other players that you've sabotaged yourself. (p. 67).
This fits the game of chicken example pretty well.

But it doesn't really fit the turning-your-phone-off example: what matters there is that your phone is off - it doesn't matter if the person wanting the favour thinks that your phone malfunctioned and turned itself off, rather than that you turned it off. Indeed, having them think the phone malfunctioned may be even better. But still, it might be right in this case that it's important that the person calling believes that you were uncontactable. If you have your phone off but they somehow nevertheless believe they succeeded in speaking to you and asking the favour, you may not have gained anything by turning it off.

It similarly doesn't fit the example of the kidnapper. It doesn't really matter how the kidnapper thinks it came about that you failed to see them - they don't need to believe you brought the failure on yourself for the strategy to be good. But still, it seems right in this case that it's important that they believe you didn't see their face.

Now it really doesn't fit the example of the general, and here the failure of fit is worse than in the previous two cases. If the point is that the epistemically dodgy belief of the general makes them more confident and intimidating, potentially causing them to win, then it doesn't matter how the general got the belief. The "sabotage" could just as well be due to an elaborate ruse carried out by a small cadre of the general's subordinates. And here there's not even a 'but still' of the sort in the two previous cases. The enemy does not have to know that the general's belief is epistemically dodgy in order for that belief to intimidate them and cause them to lose. Indeed, their knowing that would undermine the effectiveness of the strategy!

So, things are not as simple as the above quote suggests. Realising this and appreciating the nuances here could pay dividends.

Another claim made about this sort of thing which may at first seem striking and insightful, but which I think does not hold up, is this:
Sabotaging yourself works only when you're playing against an opponent with a theory-of-mind (p. 68).
(Theory-of-mind is the ability to attribute mental states to oneself and others.)

This doesn't really fit the game of chicken example, or at least it doesn't fit possible cases with a similar structure. It may be that to truly have a game of chicken, you need theory-of-mind on both sides, but you could have a situation where you're up against a robotic car with no theory-of-mind, and it may still be best to throw out your steering wheel. (As to why you wouldn't just forfeit the "game of chicken": there may be (theory-of-mind-less) systems monitoring you which will bring about your death if you swerve.)

I don't think it really fits the kidnapper case in a deep way. It may be a contingent fact that this sort of thing only works in our world with kidnappers who have theory-of-mind, but one can easily imagine theory-of-mind-less animals which have evolved, rather than worked out by thinking, the behaviour of killing captives who have seen them.

I think it quite clearly doesn't fit the general example. Imagine the general and their army were fighting beasts with no theory-of-mind. All that matters is that the beasts can be intimidated by the confident behaviour caused by the general's dodgy belief. No theory-of-mind in the opponent required.

This seems like more than a quibble, for going along with this mistaken overgeneralisation may stop us from seeing this kind of mechanism at work in lots of situations where there is no theory-of-mind on the other end of the adaptive sabotage.