Thursday, April 27, 2006

The Solar Corona Problem

It's all been a bit abstract lately. So I thought I'd bring things down to Earth a bit. But only metaphorically speaking.

I've decided to volunteer some time at the Chabot Space and Science Center. I'll mainly be manning their 20" reflector pictured here:



Basically I'll be something like a tour guide for visitors, answering any questions they may have. Only during daylight hours for now. It's a good excuse to read up a little on astronomy and if I stick at it then one day I'll be qualified enough to work the telescope at night, and maybe even catch a few photos through it with my own camera.

Anyway, one of the visitors was asking about the temperature of the Sun. As any photographer can tell you, the surface of the Sun has a temperature of 5800K. But observations during eclipses show that the corona has some pretty weird spectral emission lines that correspond to atoms at very high temperatures, of the order of a few million K. Strangely, this is a bit of a mystery. At first you might think: what's the problem? There's a gigantic fusion reactor with something like 2×10³⁰ kg of fuel burning right next to it. But the problem is this - how is that energy getting to the corona when the surface of the Sun is at 5800K? As everyone knows, heat doesn't generally flow from a cooler region to a hotter one. To me this is much more interesting than the solar neutrino problem. Neutrinos don't play a large part in anyone's lives and the evidence that there's even a problem is indirect and theoretical. But the solar corona problem is pretty immediate. You can literally see the temperature of the Sun's surface and the anomalously hot emission lines require nothing more abstract than spectroscopy. When a visitor asked about this I had no answer so I decided to read up on the issue.

Turns out there are a few possible explanations, though none of the corresponding predicted physical processes have been directly observed yet. My money is on Alfvén waves. These are essentially waves that form solutions to the combined Maxwell and Navier-Stokes equations. If you imagine a wave travelling down a whip when it is cracked, all that energy gets to the end of the whip and gets concentrated in the tip, resulting in it moving at supersonic speed. Something similar may be happening with these waves. They proceed out from the Sun but when they get to the corona there's nowhere for the energy to go except into heating it to incredible temperatures.

Anyway, next time someone asks about the corona I won't be telling them this. Most people don't want this much detail and even if they did, they have impatient kids with them. But it's good to have people prodding me to read up on interesting physics.

Labels: ,

Tuesday, April 25, 2006

Zog, The Devil and Me.

Zog runs a casino on the planet of Vega Sals. Like his competitors on the galaxy-famous Landing Strip he's been using probability to estimate his earnings based on the odds that he offers in his games. It needs quite a bit of care - offer odds that are too good and he'll lose money on the games, offer stingy odds and his competitors undercut him.

Anyway, Zog decided to attend some adult education classes at the local university and learnt about the concept of falsifiable propositions. The effect was disastrous, he found that he couldn't attach a meaning to the statement "this coin has a 50% probability of turning up left-head". (Note that the inhabitants of Vega Sals have two heads.) Whether it turns up left-head or right-head it's consistent with the original statement which can never be falsified. Yet surely it has a meaning of some sort.

Zog wrestled and wrestled with this problem, suffering nights of insomnia, when suddenly in a flash of smoke the Devil appeared in front of him.

"I'll do a deal with you," he said. "You're struggling because you can't get a falsifiable proposition from a probabilistic one. So here's what I'll do for you. Whenever you want, I'll let you convert a probabilistic statement into a falsifiable one with probability one.". "

"Sounds great," replied Zog, "but what's the catch?"

"Hmmm...I see my reputation precedes me. Anyway, it's just a small catch. For every proposition that I convert you'll have to suffer the fires of Hell. If it's an unlikely proposition you might suffer for trillions of years. But if it's 90% likely you'll only have to suffer, oh, let's say a billion years. If you can make it 99.99% likely you might only suffer for a million years but if you make it something like 99.9999999% likely you'll only have to suffer a short while. I have a brochure here that explains the details."

"Sign me up right away," said Zog, and he slept well that night.

But the next day when Zog was doing the weekly profit and loss forecast he found he had a problem. If he had a thousand people play space-roulette that day then it'd cost him trillions of years of suffering to make the outcomes of these games certain. In the cold light of day he seemed no better off than before. But then he had an idea. Why not batch together the day's takings? Instead of predicting the outcome of one game he would predict the outcome for the entire day. By using a suitable variant of the central limit theorem he found that he could predict the day's winnings to within 10 galactic credits with 99.9999% certainty. "Great!" he thought, looking this up in the brochure. "99.9999% - 3 days of Hellfire." As the Astro-Priest had told him that even one second in Hell was worse than a lifetime of suffering, it didn't seem such a good idea after all.
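For what it's worth, here's a toy Haskell illustration of the scaling behind Zog's batching trick; the per-game numbers are entirely made up by me and have nothing to do with space-roulette:

```haskell
-- Toy illustration of the batching trick (numbers made up by me).
-- A single game has a small expected profit but a large spread, so its
-- outcome is anybody's guess.  Over n independent games the mean grows
-- like n while the standard deviation only grows like sqrt n, so the
-- total takings become predictable in relative terms.

perGameMean, perGameSD :: Double
perGameMean = 5.0    -- hypothetical expected profit per game, in credits
perGameSD   = 10.0   -- hypothetical standard deviation of a single game

batch :: Double -> (Double, Double)
batch n = (n * perGameMean, sqrt n * perGameSD)

main :: IO ()
main = do
  print (batch 1)      -- (5.0, 10.0): the noise swamps the mean
  print (batch 1000)   -- (5000.0, ~316.2): the mean dwarfs the noise
```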

So Zog decided to consult a mathematician to buy a theorem. "I can construct just the thing you need," the mathematician said, "for only 1000 galactic credits." So the mathematician sold him a theorem a bit like this: "if, over the course of your lifetime, you make your decisions based on probability theory, then you have a 99.999999999% chance that you won't make a total loss at your casino tables and that nobody else will undercut you." The cunning mathematician had taken Zog's batching trick and batched together the entirety of Zog's future transactions to make this prediction. It was a perfectly valid theorem and Zog thought it was money well spent. He also no longer needed to worry about the meaning of probabilistic statements. He knew that if he just used probability theory, instead of thinking about its meaning, he would be guaranteed to succeed in the casino business.

And Zog lived happily ever after. When he died, the hundredth of a second that he suffered in Hell was pretty bad, but the everlasting happiness that came afterwards more than made up for it.

Of course you're probably getting suspicious by now. There isn't really such a place as Vega Sals. And I just made up Zog. This is of course a fable. The truth of the matter is that it was me who made the deal with the Devil, not Zog. And it wasn't exactly this deal. The deal I made is based on the theorem that I got for free from here. The Devil said that any time there was a system whose wavefunction was ψ he could make it φ where the amount of suffering I would have to endure was computed from |ψ-φ|. Using that theorem I was able to get the devil to remove from the universal wavefunction that part in which the probabilities of the Copenhagen interpretation didn't work. It was a pretty small part so I should suffer even less than Zog did. And the Devil offered me a 50% discount if I blogged about how good my deal was.

But I'm not the only person who's made such a deal with the Devil. Another person is David Deutsch in this paper. But I feel a little sorry for David. You see, he claims "we have proved that a rational decision maker will maximise the expectation value of his utility." Well, yes, this is true. But only because of how he's defined 'rational'. But by using sleight of hand he's made it appear as if this statement is true for people who are rational in the usual sense of the word, and of course people who read this paper will be less inclined to deal with the Devil. The Devil really doesn't like it when people deny they have had dealings with him; it makes him feel sordid and shady. So even as we speak the Devil is warming a very special place in Hell for David.

Oh...and maybe you can now see why I've been thinking about "almost certainty" a bit recently. I should also add that S4 doesn't quite do the job I want. I think ◊¬(◊p∧◊¬p) really ought to be a theorem of my logic but it's provably not a theorem of S4. (Where I now use ◊ to mean "almost certainly".) I think I need some more axioms in addition to S4...

Labels:

Wednesday, April 19, 2006

S4 and Partial Evaluation

I never cease to be amazed by the ubiquity of mathematical ideas. I'm still not entirely certain that S4 captures what I mean by "almost certain" but it's neat that S4 does a half-decent job of representing what we mean when we say "I know that X". It also has another application that relates back to partial evaluation.

Under the Curry-Howard isomorphism we have an isomorphism between (intuitionistic) propositional calculus and 'types' in a programming language. If A and B are types then A∧B is the cartesian product type, A∨B is the disjoint union type and A→B is the type of functions that take an A as input and return a B. That's all standard stuff. But now we can augment these types with a new type □A which means a piece of data whose value can be worked out by the compiler. For example we normally think of 1.0 as an object of type Float, but it's really of type □Float because the compiler can do stuff with 1.0 during compilation. The meaning of that should become apparent in the next paragraph.

The function sin is of type □(Float->Float) because, like 1.0, sin is just a constant so the compiler knows its value. If the compiler knows the value of something that is a function, and it knows the value of its argument, then the compiler is in a position to perform the computation at compile time. In this case, the compiler could compute sin 1.0 during compilation rather than defer the computation to the final executable. In other words we have a function □(a→b)→(□a→□b). But this is just the axiom K of S4. It's also, essentially, the same as partial evaluation as described here.

If the compiler knows the value of something, it's still at liberty to delay the evaluation until runtime. In other words, we have a map □a→a. This is the axiom of S4 known as T. Notice that we don't have the reverse. A value of type a might be computed from an input from the user. As the compiler is presumed to have run before the user gets to input anything to the program there's no way the compiler can have a general way to promote an object of type a to □a. This axiom might be used if we chose to write code to add a number of type □Float to one of type Float. The compiler could cast the one of type □Float to type Float in order to allow the addition be performed in the executable at runtime.

Though we don't have a→□a we do have □a→□□a. This is more or less by fiat. It just says the trivial thing that if you say it's known at compile time twice over then that's the same as knowing it once. So we have axiom 4 of S4.
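Here's a minimal Haskell sketch of these three axioms (my own toy encoding; a plain newtype can't genuinely rule out the missing a→□a direction the way a real staged type system would, so take it as illustrating the shapes of the maps only):

```haskell
-- 'Box a' stands for a value of type a that the compiler already knows.
newtype Box a = Box a

-- Axiom K: a known function applied to a known argument gives a known result.
applyK :: Box (a -> b) -> Box a -> Box b
applyK (Box f) (Box x) = Box (f x)

-- Axiom T: anything known at compile time may still be used at runtime.
runT :: Box a -> a
runT (Box x) = x

-- Axiom 4: knowing something at compile time twice over adds nothing new.
dup4 :: Box a -> Box (Box a)
dup4 (Box x) = Box (Box x)

-- The compiler is free to reduce this all the way down to a constant.
sinOfOne :: Box Float
sinOfOne = applyK (Box sin) (Box 1.0)
```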

So intuitionistic S4 can be viewed as the logic we need to annotate types with information that tells the compiler what it can compute at compile time and what should be left until runtime. In a sense it's a form of macro evaluation providing some of the features that the C/C++ preprocessor provides when it runs just before compilation. But what's nice is that it's expressed in the language itself by extending the type system rather than as a separate language (which the C/C++ preprocessor essentially is.)

Anyway, this description is very rough and there are a few gotchas with this interpretation. So I suggest reading a real paper on the subject. My summary doesn't correspond exactly to any one paper I looked at so rather than reference one I can use the great universal quantifier known as Google to express them all in one go. Apparently there does exist an implementation of a subset of ML that supports these modal types.

By the way, I thought I'd revisit self-reproducing code armed with this modal logic to see if I could produce a typed version of the theorem that a language with partial evaluation always has a fixed point. But the axioms of S4 seem to rule out this kind of self-reference because I can't construct a function whose type allows it to accept inputs of its own type.

There seems to also be a connection with temporal logic with □ saying something about the time when a piece of data is available.

Labels:

Thursday, April 13, 2006

A Modal Logic of Near Certainty?

If this "□" isn't a square then the following will be unreadable for you. For some reason □ is less standard than ◊ and may be absent from a font when the latter is present, especially when printing.

So having read some stuff on modal logic (basically just what the Stanford Encyclopedia of Philosophy has to say) I was trying to make up a modal logic of my own. So what follows is just brainstorming. It might not make any sense at all if you try to work out the details. But as I was stuck in trying to figure out those details I thought I'd mention it here.

Usually □ is used to mean things like "it is necessarily the case that", "I know that" or "it ought to be the case that" but I thought I'd see if it could be used to mean "it is almost certainly the case that". Here's the intuition I'm trying to capture: □p would be roughly like saying p holds with probability 1-O(ε). So □□p would mean that we expect it holds with probability (1-O(ε))² which is the same as 1-O(ε). So we'd have □□p⇒□p. I intend "almost certainty" to mean so close to certainty that if A almost certainly tells the truth, and so do B, C, D up to Z, then if Z tells you that Y says that X claimed....that A said p then you could still be nearly certain that p is true.

Other examples of things that I'd expect to hold would be "continuity" conditions like

□A∧□B⇒□(A∧B)⇒□A∧□B

and I'm nearly certain that

□A⇒◊A (where ◊A=¬□¬A by definition).


Anyway, it seems like a perfectly good modal logic to me. But when I tried to work out the possible world semantics I got nowhere, mainly because p⇒□p upsets things. I was also toying with the idea that "almost certain" is a bit like the strict mathematical sense of "almost always" and that there might be some kind of semantics that involves sets dense in some topology. But I couldn't work out the details of that either.

So can anyone who is knowledgeable about these things tell me if what I'm trying to do is sensible? If I fill in more details can I use possible worlds to prove completeness and soundness?

The really wacky idea I had (and remember, I'm just brainstorming) is that you might be able to set up a differential calculus of logic. Just as you compute derivatives of a function w.r.t. x by perturbing x slightly and looking at how f(x) varies, □A might be seen as a slight deformation of A, hence my use of the term "continuity" above. I'd then expect to see a version of the Leibniz rule appear somewhere in the theorems of this modal logic.

Anyway, this might be going nowhere, so brainstorming over.

Update

Ordinary propositional calculus can be viewed as being set theory in disguise. Pick a set X. Now translate a logical proposition to a statement of set theory by sending 'atomic' propositions to subsets of X, ∧ to ∩, ∨ to ∪, ⊤ to X, ⊥ to ∅ and ¬ to the complementation-in-X operator. A proposition is true if and only if its translation always equals X. For example ¬p∨p translates to P∪(X\P) which is clearly X.

For my proposed modal logic put a topology on X. Then □ translates to the Cl operator, where Cl(A) is the smallest closed set containing A. The things I want to be true again correspond to propositions whose translations are all of X.

I think this modal logic captures exactly what I want. You can get a feel for it by thinking about propositions like x=3 on the real line. This is false "almost everywhere" (ie. on a dense subset of the real line, ie. a subset whose closure is the entire real line) so it makes some kind of sense to want to say □¬(x=3).
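To get a feel for the machinery, here's a minimal Haskell sketch on a made-up three-point space; the space, the topology and the proposition p are my own toy choices:

```haskell
import Data.List (minimumBy, nub, sort)
import Data.Ord (comparing)

-- The points of the space and a topology on it (the open sets).
type Subset = [Int]

space :: Subset
space = [1, 2, 3]

opens :: [Subset]
opens = [[], [1], [1, 2], [1, 2, 3]]

complement :: Subset -> Subset
complement a = filter (`notElem` a) space

-- Closed sets are the complements of open sets.
closeds :: [Subset]
closeds = map complement opens

subsetOf :: Subset -> Subset -> Bool
subsetOf a b = all (`elem` b) a

-- Cl(A): the smallest closed set containing A.  This interprets □.
box :: Subset -> Subset
box a = minimumBy (comparing length) [c | c <- closeds, a `subsetOf` c]

-- ∧ as ∩, ∨ as ∪, ⇒ as ¬A∪B.
wedge, vee, arrow :: Subset -> Subset -> Subset
wedge a b = filter (`elem` b) a
vee   a b = nub (a ++ b)
arrow a b = complement a `vee` b

-- A proposition holds when its translation is all of X.
valid :: Subset -> Bool
valid a = sort (nub a) == space

p :: Subset
p = [2]

main :: IO ()
main = do
  print (valid (p `vee` complement p))   -- ¬p∨p: True
  print (valid (p `arrow` box p))        -- p⇒□p: True, since P ⊆ Cl(P)
```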

With some keywords to search on I was now able to find this paper. Page 29 (following the document's own page numbering) essentially describes what I have except that they have swapped □ and ◊. Essentially they're saying that this is the standard modal logic known as S4. This topological interpretation of S4 goes back to work by Tarski and McKinsey in 1944.

And now I can reason about almost-certainty with certainty.

Labels:

Monday, April 10, 2006

Mongruences

Before saying anything I'd be interested in feedback on whether the mathematical symbols below are visible in your browser. Things like right arrow ⇒, infinity ∞ and union ∪.

Anyway, I've been trying to understand exactly what coinduction is all about. In particular I've been trying to figure out exactly what the duality is between induction and coinduction. The problem I was having was that induction and coinduction seemed not to be dual to each other at all, and weirder still, almost every discussion of coinduction that I looked at seemed to completely ignore the issue.

So here's my problem: in order to carry out an induction argument over the naturals, say, we find a predicate on the naturals, P, such that

P(0) and P(n) ⇒ P(s(n))

We can then conclude that P holds for all naturals.

In order to carry out a coinduction argument over the set of streams over A, say, we find a binary relation R (called a bisimulation) such that

R(a,b) ⇒ head(a)=head(b) and R(tail(a),tail(b)).

We can then conclude that whenever R(a,b) holds, a=b.

I can point out a certain amount of duality: The naturals form an initial F-algebra which is a fancy way of saying that they form a least fixed point to some equation, in this case N≅1+N. Here '1' means the type with one element called '*' and '+' is the disjoint union. We have a map s:1+N→N where s(*) = 0 and s(n) = n+1. s can be thought of as a 'constructor'. Given something of type 1+N it shows how to construct an object of type N from it. Loosely speaking we can now see that induction is about forming a predicate that still holds after we apply a constructor.


Now consider the coinduction over all streams. A stream is a final F-coalgebra which is a fancy way of saying it's a greatest fixed point of the equation S≅A×S. We have a map (head,tail):S→A×S which maps a stream to its head (an element of A) and its tail (another stream). We can think of these as 'destructors' which reduce a stream back down to its component parts. And now we can see that coinduction is essentially about finding a binary predicate that still holds after applying a destructor.
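For concreteness, here's the stream case written out in Haskell, together with a tiny bisimulation (my own example, not taken from any particular paper):

```haskell
-- Streams over a: the greatest fixed point of S ≅ a × S.
data Stream a = Cons a (Stream a)

hd :: Stream a -> a
hd (Cons x _) = x

tl :: Stream a -> Stream a
tl (Cons _ xs) = xs

-- Two corecursive definitions of the all-ones stream.
ones, ones' :: Stream Integer
ones  = Cons 1 ones
ones' = Cons 1 (Cons 1 ones')

-- A bisimulation relating them needs only two pairs:
--   R = { (ones, ones'), (ones, tl ones') }
-- Check the clauses: the heads agree (both are 1) and the tails of each
-- pair are again related by R.  Coinduction then lets us conclude that
-- ones = ones', an equality of infinite objects established by a finite check.
```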


But to my eyes, despite the duality there is a glaring difference: induction is about a unary predicate and coinduction is about a binary predicate. And no amount of messing about with categories and their opposites enabled me to see a unary predicate as dual to a binary predicate.


So I gave in and tried to find something on the web about this subject and eventually was led to Bart Jacobs' paper Mongruences and Cofree Coalgebras. Inductive predicates aren't dual to bisimulations at all, rather inductive predicates are dual to what Jacobs calls mongruences. Unsurprisingly, this horrible neologism was dropped in favour of the term τ-invariant or just invariant in later papers. But I'm going to use the term out of sheer perversity.


But this paper still doesn't explicitly talk about coinduction over the naturals, leaving me to work out the details. After chasing round the diagrams, including a nice demonstration of left and right adjoint functors (about which I seem to have managed to develop a crude intuition; who said you can't teach an old dog new tricks?), I managed to figure out what I think is the fairly simple definition of a coinductive predicate for the naturals.


So instead of working over the naturals we actually work over N'=N∪{∞}, the set of 'conaturals'. The reason is that we are now looking at the final F-coalgebra that is the greatest fixed point to the equation X≅1+X, not the least fixed point. N' is equipped with a map p:N'→1+N' where p(0)=*, p(∞)=∞ and p(n)=n-1 otherwise. With this in mind a coinductive predicate for the conaturals (ie. a mongruence or invariant) turns out to be a P such that


P(n) ⇒ n=0 or n=∞ or P(n-1).

At least I think so. (This could be embarrassing.) And now I can see why coinduction talks about bisimulations - mongruences aren't very interesting for the conaturals. Bit of a letdown really!
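Still, for concreteness, here's how I'd write the conaturals and their destructor down in Haskell (my own encoding, leaning on laziness to get the infinite element):

```haskell
-- The conaturals: because Haskell data is lazy this type contains an
-- infinite value, so it behaves like the greatest fixed point of X ≅ 1+X
-- rather than the least one.
data Conat = Zero | Succ Conat

infinity :: Conat
infinity = Succ infinity

-- The destructor p : N' → 1+N', with p(0) = *, p(∞) = ∞ and p(n) = n-1.
p :: Conat -> Maybe Conat
p Zero     = Nothing
p (Succ n) = Just n
```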


On the other hand there are some mongruences that are of mild interest in a more general setting. For example consider a finite state automaton with some terminal states. You can define a state to be safe if at no future point it's possible to reach a terminal state. So a state is safe if it isn't terminal and if all of its successor states are safe. If you think of a finite state automaton as a function f:X→(X×O)^I, where X is the set of states, I is the set of inputs and O is the set of outputs, then f is a 'destructor' and you can see that this definition fits into the same general pattern for coinductive definitions. So safety is a mongruence.
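Here's a minimal Haskell sketch of that safety computation on a made-up four-state automaton (the transition table is my own toy example, and I've dropped the outputs since they play no role in safety):

```haskell
import qualified Data.Set as Set

-- A state is safe if it isn't terminal and all of its successors are safe.
-- On a finite automaton the set of safe states is the greatest fixed point
-- of that condition: start from "every state is safe" and repeatedly throw
-- out states that violate it.

type State = Int

states :: [State]
states = [0, 1, 2, 3]

terminal :: Set.Set State
terminal = Set.fromList [3]

-- Successor states over all inputs (outputs omitted).
step :: State -> [State]
step 0 = [0, 1]
step 1 = [0]
step 2 = [3]
step _ = [3]

safe :: Set.Set State
safe = go (Set.fromList states)
  where
    ok s q = not (q `Set.member` terminal) && all (`Set.member` s) (step q)
    go s   = let s' = Set.filter (ok s) s
             in if s' == s then s else go s'

main :: IO ()
main = print safe   -- fromList [0,1]: state 2 is doomed, it can reach 3
```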


The obvious next question is: what is the dual of a bisimulation? These are called congruences but I haven't yet worked through the details of what these are.


My next question is this: given an inductive predicate you get to apply "proof by induction". What is the corresponding "proof by coinduction" for the conaturals? I'm not sure there is one. I have a vague sense of what proof by coinduction might look like for a more general mongruence but haven't figured out a non-trivial example yet.


But despite having learned about a concept that was uninteresting, and a proof method that I can't figure out how to apply, I have picked up something useful. The standard notions of proof by induction and proof by coinduction aren't straightforwardly dual to each other. So now I can stop worrying about it and get on with reading Vicious Circles.

Saturday, April 08, 2006

The Minority Game

The El Farol restaurant in Santa Fe has "one of the best bars on earth" according to the New York Times. Unfortunately, on the days when it becomes too crowded it's more fun to stay at home. This can be modelled by a game known as the El Farol Bar Game or its simpler cousin, the Minority Game.

In the Minority Game, N players (with N odd) get to cast a secret vote either for A or for B. When all the votes have been cast the winners are the members of the minority group and they receive a payoff. But what is surprising about this game isn't just the game itself but the people who are studying it - physicists. The idea is that for large N the game looks like certain types of system from statistical thermodynamics. For example, consider the Ising model. In this we have an array of atoms, each of which can be spin up or spin down. Depending on the parameters of the model, atoms either want to be aligned with their neighbours or aligned in the opposite direction. In the latter case we have a situation similar to the Minority Game. The Minority Game looks a lot like a 'frustrated' physical system with no obvious ground state. Just as a group starts moving towards what seems like the optimum it ceases to be an optimum.

The game has been studied by many people using computer simulations. Typically the simulations result in the number of winners in each round hovering around one value. But the feature that many researchers have found interesting is the way that over multiple plays, independent players seem to spontaneously cooperate, despite the lack of communication. The result is that they reduce the variance, and hence risk, of the outcome below what you would expect from a purely randomised strategy.
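For comparison, here's a minimal Haskell sketch of the purely randomised baseline (my own toy simulation with made-up parameters; the adaptive-strategy versions studied in the literature push the variance below this):

```haskell
import Control.Monad (replicateM)
import System.Random (randomRIO)

-- Each of N players flips a fair coin; the minority side wins.  With pure
-- coin flipping the attendance on side A hovers around N/2 with variance
-- around N/4.  Spontaneous cooperation shows up as a smaller variance.

players :: Int
players = 101   -- N must be odd so there is always a strict minority

oneRound :: IO Int
oneRound = sum <$> replicateM players (randomRIO (0, 1 :: Int))

main :: IO ()
main = do
  rounds <- replicateM 1000 oneRound
  let xs   = map fromIntegral rounds :: [Double]
      n    = fromIntegral (length xs)
      mean = sum xs / n
      var  = sum [(x - mean) ^ (2 :: Int) | x <- xs] / n
  putStrLn ("mean attendance on A: " ++ show mean)  -- roughly 50.5
  putStrLn ("variance:             " ++ show var)   -- roughly 25
```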

The Minority Game can be used to model a wide variety of phenomena besides Saturday night drinking - including financial markets, traffic congestion, network congestion and even the ebb and flow of fashion.

The original paper by Challet and Zhang may be found here but a web search will reveal that there are in fact hundreds of papers on the subject. It seems to be the in thing.

Labels:

Tuesday, April 04, 2006

Anti-Foundation and Strong Extensionality

Barwise and Moss's Vicious Circles is about the Anti-Foundation axiom, an alternative to the usual axiom of foundation in set theory. I had a look at various online introductions to this subject and they all seem a bit long winded. So here's my own quick summary pared down to the bare minimum. (I sacrifice a bit of accuracy along the way but it should be enough to give a feel for the subject.)

Firstly: the whole point of the Axiom of Foundation is to eliminate infinite chains of membership. So the Axiom of Foundation says that there are no infinite descending chains:

...∈A3∈A2∈A1∈A

(I hope you can read that, I'm using this list.)

The kinds of things that the Axiom of Foundation eliminates are sets x such that x∈x. It means we can define functions inductively over sets as we're always guaranteed to have a base case to start the induction. More generally, imagine a system of equations such as

x = {x,y,z,4}
y = {x,z,2}
z = {{3},x,y}

The Axiom of Foundation rules out any solution to this. The Axiom of Anti-Foundation, on the other hand, says that any set of equations like this, with unknowns on the left, and sets containing those unknowns on the right, has a unique solution. (Note that the word 'containing' in that sentence is essential: x = x doesn't have a unique solution.)

And that's it, the Axiom of Anti-Foundation.

But there's a slight problem. Consider the equation x = {x} and the equation pair y = {z}, z = {y}. The solution to the first equation seems like it ought to be the solution to the second as well. But we have no way of telling this. In fact, there are issues just telling whether or not the sets x = {x} and y = {y} are equal. Normally in set theory we'd use the Axiom of Extension. Two sets are equal iff their elements are equal. So in this case we have x = y iff their elements are equal, ie. iff x = y. So the Axiom of Extension tells us nothing here. We need a new kind of Extension.

The idea is to find something like an isomorphism between two sets. For example, define f(x) = y for our last example above. Notice that for every element x' of x, there is an element y' of y such that f(x') = y'. f also has an inverse. It's a bit trivial in this case but it's a start. But suppose we compare the set x defined by x={x} with y defined by y={z} and z = {y}. We want some kind of isomorphism that maps x to both y and z. And with more complex sets of equations there might be any number of elements of one set that we need to map to any number of elements in the other. So what we need isn't a function but a relation connecting elements in one set to the other. The relation R is said to be a bisimulation between sets a and b if

aRb
for all a'∈a there exists a b'∈b such that a'Rb'
for all b'∈b there exists an a'∈a such that a'Rb'

If two sets are related by a bisimulation then it can be shown that they satisfy the same equation and hence, by the uniqueness of solutions, they must be the same set. This property is known as Strong Extension. (I've paraphrased this from the original but I think it's essentially correct.)
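Here's a minimal Haskell sketch of this check for the x = {x} versus y = {z}, z = {y} example above (my own toy code; the right-hand sides here contain only unknowns, so a relation on the unknowns is all we need):

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set

-- Each unknown maps to the list of unknowns appearing on its right-hand side.
type System = Map.Map String [String]

-- x = {x}   together with   y = {z}, z = {y}
system :: System
system = Map.fromList [("x", ["x"]), ("y", ["z"]), ("z", ["y"])]

elts :: String -> [String]
elts a = Map.findWithDefault [] a system

-- The largest bisimulation: start from "everything is related" and throw
-- out pairs violating the back-and-forth conditions until nothing changes.
bisim :: Set.Set (String, String)
bisim = go (Set.fromList [(a, b) | a <- names, b <- names])
  where
    names = Map.keys system
    ok r (a, b) =
      all (\a' -> any (\b' -> (a', b') `Set.member` r) (elts b)) (elts a) &&
      all (\b' -> any (\a' -> (a', b') `Set.member` r) (elts a)) (elts b)
    go r = let r' = Set.filter (ok r) r
           in if r' == r then r else go r'

main :: IO ()
main = print (("x", "y") `Set.member` bisim)
-- True: x, y and z are all the same set, the unique solution of v = {v}.
```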

And that's it - you can now go off and make your own non-well founded sets and compare them for equality.

I should add some interesting connections with computer science. Bisimulations are more familiar as relations holding between automata. The usual Axiom of Extension gives a recursive definition of equality. Recursive definitions require a base case, and with a base case we can get induction proofs started. Without base cases we have corecursion instead and have to use a different proof technique, coinduction, whose main tool is bisimilarity.

Another interesting link with computer science is through F-algebras and F-coalgebras. I mentioned that F-algebras and F-coalgebras may have initial and final objects so that there is an isomorphism X≅F(X). The Axiom of Anti-Foundation often allows us to make the isomorphism equality. For example it's not hard to see that for a finite state automaton we can make a state simply equal to the set of transitions available from that state. In fact, that gives a nice way to answer the question I was originally asking in that earlier posting - how can I remove the labelling of the states and leave just the bare information about what transitions take me where?

And one important thing I should add - you can make a model in ZF for set theory with Anti-Foundation and Strong Extensionality. So it's consistent if ZF is. Additionally, Barwise and Moss also work with urelements, elements of sets that are not the empty set and yet contain no elements. I'm not sure, but I don't think these are essential to making sense of Anti-Foundation. Anti-Foundation, as described by Barwise and Moss, does make use of urelements in the definition of an equation (in particular to label the 'unknowns') but I think Anti-Foundation can be rewritten so as not to use urelements.

Update: I originally stated that Strong Extension was an axiom which it isn't. Just to add to the confusion, the Axiom of Anti-Foundation that I described above is called "The Solution Lemma" in Vicious Circles, even though it is an axiom.

Labels:

Saturday, April 01, 2006

SIGFPE's Law

I've decided it's time to stake my claim to the law that will forever be remembered and linked with my name. If Moore can get a law named after him, or even Murphy, then so can I.



SIGFPE's Law

This law is the prediction that the number of bits on the best available quantum computers is going to grow linearly with time.


My reason for saying this is essentially the same as the reason that many people are sceptical of quantum computers - decoherence. It seems obvious to me that even with the cleverest of error correction schemes we're going to see the coherence of states decay exponentially fast. I'm bored of reading papers on quantum error correction and seeing that every one makes some assumption or other about the Hamiltonian that is not going to hold in practice. And any little deviation from that assumption becomes the coefficient in an exponential describing that decay.


On the other hand people are ingenious. As time goes by people will find ever more ingenious ways to fix the errors caused by decoherence. But the difficulty is going to grow exponentially and the solutions won't scale. So just as classical computing has seen an exponential growth in power I expect to see the logarithm of this and hence a linear growth. Maybe, just maybe, there'll be polynomial growth. But definitely not exponential growth. In fact, I expect it to be linear with a pretty shallow gradient, just a couple of qubits a year at most, and probably less. Humans will, one day in the distant future, crack large RSA keys using quantum computers - but when that day is reached humans will be well beyond caring about such trivial matters.


I also have nothing to lose by saying this. If I turn out to be wrong, and I sincerely hope I am, I can look forward to a world full of wonderful computing machines whose interest value will far outweigh the ignominy of being forever linked with an incorrect law.


BTW, when I say an n-bit quantum computer I mean a computer whose total memory is n qubits, not a computer with the qubits addressable by an n-bit word.

:-)