Comments on A Neighborhood of Infinity: Death>Dishonour =/=> ~Dishonour>~Death
Blog author: Dan Piponi (https://plus.google.com/107913314994758123748)

Fred Ross (https://www.blogger.com/profile/14346595409269776793) — 2007-07-08 08:55

Or you can take the next step and make it into a full inferential structure: a set S which is the range of some random variable, representing the actual experimental outcome; a class of distributions \Omega which could have produced this outcome; a space of decisions D, which could be an action to take or a position to assert ("epsilon has value 3.5"); and a cost function W :: \Omega -> D -> R. Then your utility matrix just becomes a cost function, and the task is to construct a sensible statistical procedure, that is, a map t :: S -> D, given that cost function. See Kiefer's lovely statistics book for details.

Rahul — 2007-07-06 16:20

I don't think it's a red herring. The utilities simply describe your decision process -- or rather, how desirable you find the different possible outcomes, which then dictates what course of action you should take. If I, for example, prefer dishonor over death, you can model my behavior by having U_life_dishonor > U_death_honor in my utility matrix. Decision-making is then a simple matter of estimating the probabilities of the results of different courses of action, computing and comparing e.g. E(utility|run_to_the_hills) and E(utility|never_retreat_never_surrender), and picking the highest value.
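The decision procedure Rahul describes — estimate P(outcome | action), compute the expected utility of each action, pick the maximum — can be sketched in Haskell. All of the outcome names, utilities, and probabilities below are invented for illustration; only the two action names come from the comment.

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Hypothetical outcomes (names and numbers are mine, not the post's).
data Outcome = LifeHonour | LifeDishonour | DeathHonour | DeathDishonour
  deriving (Eq, Show)

-- One agent's subjective utilities; an agent who prefers dishonor
-- over death would simply put different numbers here.
utility :: Outcome -> Double
utility LifeHonour     = 3
utility DeathHonour    = 2
utility LifeDishonour  = 1
utility DeathDishonour = 0

-- Assumed conditional probabilities P(outcome | action).
probs :: String -> [(Outcome, Double)]
probs "run_to_the_hills" =
  [(LifeDishonour, 0.9), (DeathDishonour, 0.1)]
probs "never_retreat_never_surrender" =
  [(LifeHonour, 0.4), (DeathHonour, 0.6)]
probs _ = []

-- E(utility | action)
expectedUtility :: String -> Double
expectedUtility a = sum [p * utility o | (o, p) <- probs a]

-- Choose the action with the highest expected utility.
bestAction :: [String] -> String
bestAction = maximumBy (comparing expectedUtility)
```

With these made-up numbers, standing and fighting has expected utility 2.4 against 0.9 for running, so bestAction picks the former; raise the utility of LifeDishonour enough and the choice flips, exactly as Rahul says, without changing any probability estimate.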
In various situations, then, you and I may choose different courses of action even if we estimate the same probabilities for the outcomes.

Of course, whether subjective utility is linear in probability is another question entirely... But hey, it's a reasonable first approximation. :)

sigfpe (https://www.blogger.com/profile/08096190433222340957) — 2007-07-06 15:50

The assignment of utilities seems reasonable. We have:

Life and Honour > Death and Honour > Death and Dishonour

The probability assignment simply says that dying dishonourably never happens (and it would work the same if it just happened fairly rarely). One can imagine some people with an ethic that says that any death in battle is honourable.

What I think isn't natural about this setup is that I don't think it models any kind of reasonable decision process. There's nothing in that table that depends on any kind of choice I might make. Normally when people say "death before dishonour" they are making some statement about how they would act in certain situations. So this death-before-dishonour may be a bit of a distraction. Nonetheless, it is a true statement about probability theory that A>B =/=> ~B>~A.

Rahul — 2007-07-06 14:58

That's funny: when people say "death before dishonor", they usually mean that the options are death+honor vs. life+dishonor, not death+dishonor vs. death+honor vs. life+honor. Of course nobody would want dishonor if it also guarantees you death! :)

To be honest, I'm having a hard time getting intuition about the utility matrix you've set up, particularly because you've made a life of dishonor impossible.
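sigfpe's point that A>B =/=> ~B>~A can be checked with a small counterexample. The joint probabilities and utilities below are my own construction, not the post's table; they are chosen so that E(U|death) > E(U|dishonour) and yet E(U|life) > E(U|honour), i.e. preferring death to dishonour without preferring ~dishonour (honour) to ~death (life).

```haskell
-- Outcomes encoded as (dead, honoured); each carries an assumed
-- joint probability and utility (numbers invented for this check).
cells :: [((Bool, Bool), (Double, Double))]
cells =
  [ ((True , False), (0.00,  0))  -- death + dishonour: impossible
  , ((True , True ), (0.50,  2))  -- death + honour
  , ((False, False), (0.01,  1))  -- life  + dishonour
  , ((False, True ), (0.49, 10))  -- life  + honour
  ]

-- E(utility | event), for an event given as a predicate on outcomes.
condUtility :: ((Bool, Bool) -> Bool) -> Double
condUtility ev = sum [p * u | (p, u) <- chosen] / sum (map fst chosen)
  where chosen = [pu | (o, pu) <- cells, ev o]

uDeath, uDishonour, uLife, uHonour :: Double
uDeath     = condUtility fst          -- 2.0
uDishonour = condUtility (not . snd)  -- 1.0
uLife      = condUtility (not . fst)  -- 9.82
uHonour    = condUtility snd          -- ~5.96
```

Here uDeath > uDishonour (2 > 1), yet uLife > uHonour (9.82 > ~5.96): "death before dishonour" holds while "honour before life" fails, as the post's title asserts.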
Since that is actually one of the horns of the original dilemma, let's keep it in there and see if we can get the same result with more sensible probabilities.

Dying dishonorably: never
An honorable death: 2
A life of dishonor: 1
Honor and survival: 3

(Serendipitously, they all line up in monospace!)

* 1
2 3

So... U(death) = 2, U(dishonor) = 1, U(life) = 2, U(honor) = 2.5! That doesn't get us what we want -- and with these probabilities, no tweaking of U(life+honor) will change that as long as U(death+honor) > U(life+dishonor) <=> U(death) > U(dishonor).

So maybe the counter-intuitive result only holds when the probabilities are counter-intuitive? I bet it's got to do with the fact that you took out an off-diagonal entry while I took out a diagonal one, but I've never played with utility matrices before so I don't have a strong intuition. It might be interesting to figure out the general conditions on the utility values and probabilities under which this sort of thing happens.
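One reading that reproduces all four of Rahul's numbers: treat the three possible outcomes as equally likely, treat the matrix entries as utilities, and take U(event) to be the average utility over the outcomes consistent with that event. The (dead, honoured) encoding is mine; the utilities 2, 1, 3 are from the comment.

```haskell
-- The three possible outcomes as (dead, honoured) with Rahul's utilities;
-- death + dishonour is left out, since it is assigned probability zero.
outcomes :: [((Bool, Bool), Double)]
outcomes =
  [ ((True , True ), 2)  -- an honorable death
  , ((False, False), 1)  -- a life of dishonor
  , ((False, True ), 3)  -- honor and survival
  ]

-- E(utility | event) under a uniform distribution on these outcomes.
condU :: ((Bool, Bool) -> Bool) -> Double
condU ev = sum us / fromIntegral (length us)
  where us = [u | (o, u) <- outcomes, ev o]

uDeath', uDishonor', uLife', uHonor' :: Double
uDeath'    = condU fst          -- U(death)    = 2
uDishonor' = condU (not . snd)  -- U(dishonor) = 1
uLife'     = condU (not . fst)  -- U(life)     = 2
uHonor'    = condU snd          -- U(honor)    = 2.5
```

Under this reading, U(death) > U(dishonor) and yet U(honor) > U(life) as well, so with these probabilities the counter-intuitive result of the original post disappears, just as Rahul observes.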