Comments on "Lossless decompression and the generation of random samples" (A Neighborhood of Infinity, by Dan Piponi)

PBonthemove (2013-07-26):
How about the distribution of the prime numbers? What's its entropy, and how could the primes be stored in a minimal number of bits? (The question might seem simple, but it's quite fundamental mathematics.)

Thomas (2012-02-13):
Very interesting insight. This is essentially an application of the source coding theorem, which says that one can find a code whose expected length is arbitrarily close to the entropy (which is also the lower bound). Computing the entropy of p (1.54 bits in this case) then tells us that, using this method, we have to make about 1.5 evaluations on the tree, on average, to generate one sample.

Thomas (2012-02-13):
(This comment has been removed by the author.)

Yang Zhang (2012-02-06):
I don't think Huffman coding yields suitable trees for this problem, since trees for random variate generation have the additional requirement of being ordered. Optimal alphabetic binary trees, or optimal BSTs, require a different algorithm that is not a variation of Huffman coding: "This is also known as the Hu-Tucker problem, after the authors of the paper presenting the first linearithmic solution to this optimal binary alphabetic problem, which has some similarities to Huffman algorithm, but is not a variation of [Huffman coding]. These optimal alphabetic binary trees are often used as binary search trees."

caustic (2012-01-23):
As I read the problem statement I immediately wanted to mention optimal binary search trees (http://en.wikipedia.org/wiki/Binary_search_tree#Optimal_binary_search_trees). As it turns out, the write-up "Darts, Dice, and Coins" already does it. Argh!

Anonymous (2012-01-20):
Hi, I hate to jump to a different subject, but a while back you had some wonderful notes on Target Enumeration and a Haskell implementation. My question is: how could you extend what you did so that the boundary points lie on real-number coordinates? Referring to the pixel examples, the area of coverage would then contain the fully counted 'holes' plus the partially contained 'holes', or pixel areas. How could we get the exact coverage based on the way you implemented it? I appreciate any information; I know this was a while back, but I just found the blog. If you have any resources, please feel free to contact me at jcarrola@swri.org. Thanks for any time you can spare. Regards.
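Thomas's point above about the source coding theorem can be checked numerically: the expected Huffman code length (equivalently, the expected number of branch decisions when walking the tree) always lies within one bit of the entropy, and equals it exactly for dyadic distributions. A small sketch in Python; the distributions here are made-up examples, not the post's:

```python
import heapq
import math

def entropy(p):
    # Shannon entropy in bits
    return -sum(q * math.log2(q) for q in p if q > 0)

def huffman_expected_length(p):
    # The expected code length equals the total weight of all merged
    # (internal) nodes created while building the Huffman tree.
    heap = list(p)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total

# Dyadic distribution: expected length equals the entropy exactly
p = [0.5, 0.25, 0.125, 0.125]
print(entropy(p), huffman_expected_length(p))  # both print 1.75
```

For a non-dyadic distribution such as {0.4, 0.6} the expected length (1 bit) exceeds the entropy (about 0.97 bits), which is exactly the slack jkff's question further down is probing.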
Victor (2012-01-16):
I came to this same realization when I started studying data compression seriously (almost 30 years ago -- yikes!). Build a decoder for the appropriate arithmetic code and feed it random bits. You get output distributed according to the probability model used to build the arithmetic code.

Pseudonym (2012-01-14):
Bear in mind that if this code is a bottleneck, branch instructions whose conditions depend on random numbers are effectively unpredictable, and hence expensive. Here's a neat little trick you should know. If the number of symbols is a power of 2, there's also this solution, with the internal-node thresholds stored in heap order in a (root at index 1):

    double x = random_number();
    unsigned s = 1;
    for (unsigned i = 0; i < log2(numberOfSymbols); ++i)
    {
        s = (s << 1) | (a[s] < x);
    }
    return s - numberOfSymbols;

Iain Murray (2012-01-09):
What if we are sampling/compressing observations from a latent variable model? The bits-back method makes a connection. See p. 353 of MacKay's textbook for a sketch and references.
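Pseudonym's indexing trick can be spelled out in full. The sketch below is mine, not the commenter's exact code: it assumes the cumulative-probability thresholds sit in a heap-ordered array with the root at index 1, so that after log2(n) doubling steps the leaf index minus n is the sampled symbol.

```python
def build_thresholds(p):
    # p: probabilities for n symbols, n a power of 2.
    # Returns array a where a[s] is the cumulative-probability split
    # point of internal node s (heap order, root at index 1).
    n = len(p)
    cum = [0.0]
    for q in p:
        cum.append(cum[-1] + q)
    a = [0.0] * n

    def fill(s, lo, hi):  # node s covers leaves lo..hi-1
        if hi - lo < 2:
            return
        mid = (lo + hi) // 2
        a[s] = cum[mid]
        fill(2 * s, lo, mid)
        fill(2 * s + 1, mid, hi)

    fill(1, 0, n)
    return a

def sample(a, n, x):
    # Walk from the root; each step appends one bit of the leaf index.
    s = 1
    while s < n:
        s = (s << 1) | (a[s] < x)
    return s - n

a = build_thresholds([0.1, 0.2, 0.3, 0.4])
print(sample(a, 4, 0.05), sample(a, 4, 0.95))  # prints: 0 3
```

Note that the comparisons still happen, but the loop itself is branch-free in the sense Pseudonym means: the comparison result is folded into the index arithmetic rather than steering an unpredictable jump.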
bw (2012-01-08):
Neat! I had this problem before and wasn't sure how to solve it efficiently. Thanks for the explanation.

Anonymous (2012-01-08):
Another elegant solution, Walker's method, is described by Mihai here: http://infoweekly.blogspot.com/2011/09/follow-up-sampling-discrete.html

sigfpe (2012-01-08):
jkff,

You use the tree given by the Huffman algorithm, but you don't have a 50/50 decision at each branch. You derive the probability of going each way from the sum of the probabilities in each branch below.

jkff (2012-01-07):
I see how this tells us about the optimal number of decisions you have to make while generating a random number, but we still cannot generate the numbers by just taking a random bit string and decompressing it with the Huffman tree over the desired distribution. E.g. consider the distribution {a: 0.4, b: 0.6}: it will have a Huffman tree with just two leaves, and decompression will give {a: 0.5, b: 0.5}. Am I missing something?

Justin (2012-01-07):
@sigfpe, I don't believe so; it's a rather clever way of doing random sampling. That said, I haven't looked at the literature beyond the article linked.
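sigfpe's answer to jkff can be made concrete: keep the Huffman tree's shape, but at each internal node flip a coin biased by the probability mass under each subtree rather than a fair one. A minimal sketch (the tuple encoding of the tree is mine):

```python
import random

# A node is ('leaf', mass, symbol) or ('node', mass, left, right),
# where mass is the total probability under the node.
def sample(node, rng):
    while node[0] == 'node':
        _, mass, left, right = node
        # Go left with probability mass(left)/mass(node), as sigfpe describes,
        # rather than with probability 1/2.
        node = left if rng.random() * mass < left[1] else right
    return node[2]

# jkff's example {a: 0.4, b: 0.6}: one internal node, but a 40/60 coin
tree = ('node', 1.0, ('leaf', 0.4, 'a'), ('leaf', 0.6, 'b'))
rng = random.Random(0)
draws = [sample(tree, rng) for _ in range(100000)]
print(draws.count('a') / len(draws))  # close to 0.4
```

Feeding raw fair bits into this tree would indeed give jkff's {0.5, 0.5}; the weighted coin is what restores the target distribution.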
sigfpe (2012-01-07):
@Justin,

I've had links to that article on both G+ and Twitter now! Looks like I missed out on a nice article a few weeks ago. But does the alias method imply anything interesting about compression algorithms?

Justin (2012-01-07):
There's a faster algorithm for selecting elements at random from a finite distribution, known as the alias method, that doesn't suffer from the O(log n) worst case. Keith Schwarz has a good description written up, "Darts, Dice, and Coins: Sampling from a Discrete Distribution" (http://www.keithschwarz.com/darts-dice-coins/).
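The alias method Justin mentions can be sketched briefly. This follows the Vose-style construction described in the "Darts, Dice, and Coins" write-up (variable names are mine): after an O(n) setup, each sample costs one random index, one random real, and at most one table lookup, independent of n.

```python
import random

def build_alias(p):
    # Vose-style setup: each slot i keeps probability prob[i] for symbol i
    # and donates its leftover mass to symbol alias[i].
    n = len(p)
    scaled = [x * n for x in p]
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    prob = [0.0] * n
    alias = [0] * n
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s] = scaled[s]
        alias[s] = l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in large + small:  # leftovers are numerically full slots
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng):
    # O(1): pick a slot uniformly, then keep it or take its alias.
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
rng = random.Random(0)
draws = [alias_sample(prob, alias, rng) for _ in range(100000)]
```

Compared with the tree walk, the alias table scrambles the symbol order, which is one reason it says less about compression: there is no prefix-code structure left to read bits out of.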
sigfpe (2012-01-07):
@lvps1000vm

I really need to read that book!

(PS: I was a student at the same time and place as MacKay. It was interesting suddenly seeing his name everywhere years later.)

lvps1000vm (2012-01-07):
This concept of a decoder as a random sample generator is explained in MacKay's book, my personal reference on the subject. You can download it for free at http://www.inference.phy.cam.ac.uk/mackay/itila/book.html. It appears in section 6.3, page 118 of the book (page 130 of the PDF file).

Chad Scherrer (2012-01-07):
Nice! This is similar to a Galois tech talk I gave in May: http://corp.galois.com/blog/2011/5/16/tech-talk-video-empirical-sampling-with-haskell.html

Nick (2012-01-07):
That's a really great insight! It goes to show the value of asking the right question :) At last I know when a raven is like a writing desk.
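The decoder-as-sampler idea the thread keeps returning to (Victor's arithmetic-coder version, the MacKay reference) is easiest to see with a prefix code: feed a stream of fair random bits into the decoder and the emitted symbols follow the distribution the code was built for. A toy sketch with an assumed dyadic code of my own choosing:

```python
import random

# Prefix code for the dyadic distribution {a: 1/2, b: 1/4, c: 1/4}
code_tree = {'0': 'a', '10': 'b', '11': 'c'}

def decode_stream(bits):
    # Walk the bit stream, emitting a symbol whenever a codeword completes;
    # any trailing partial codeword is discarded.
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in code_tree:
            out.append(code_tree[buf])
            buf = ''
    return out

print(decode_stream('010110'))  # prints: ['a', 'b', 'c', 'a']

rng = random.Random(0)
bits = ''.join(rng.choice('01') for _ in range(200000))
symbols = decode_stream(bits)
print(symbols.count('a') / len(symbols))  # close to 1/2
```

Because the code is dyadic, fair bits reproduce the distribution exactly in expectation; for non-dyadic distributions an arithmetic decoder, or weighted branch probabilities as discussed above, plays the same role.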