Suzanne Gildert is a scientist at D-Wave, which claims[1] to have created the first quantum computer. They have sold one to Lockheed Martin and, at the very least, rent time on one to Google, even though the scientific community can't say whether they're really selling a quantum computer or not.

One of the reasons scientists have such trouble saying whether the machine is doing quantum computing is that it could be doing either of two different things and getting very similar results: quantum annealing or simulated annealing, which are closely related techniques. There's no reason the computer shouldn't work regardless of which one it's doing, and it should be quite good at solving any problem that can be expressed by the equation $$U = \sum_i h_i S_i - \sum_{\langle i,j \rangle} J_{ij} S_i S_j, \quad (1)$$

and such problems are much more numerous than you might think[2]. The big prize at stake, though, is being able to claim that you made the first reusable quantum computer, which is naturally a huge claim to fame in the realm of quantum computing research.
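
To make Eq. 1 concrete, here's a minimal Python sketch of the energy function for a hypothetical three-spin system. The field values h and couplings J below are invented purely for illustration (chosen so that state 100 comes out as the ground state, matching the figure later on); a real problem would have hundreds of them.

```python
# A toy version of Eq. 1. Spins take values +1 (up, bit '0') or -1 (down,
# bit '1'); h and J are invented values for a hypothetical 3-spin system.
h = [1.0, -0.6, -0.4]                # local fields h_i (made up)
J = {(0, 1): -0.5, (1, 2): 0.3}      # couplings J_ij on neighbouring pairs <i,j>

def ising_energy(spins):
    """U = sum_i h_i*S_i - sum_<i,j> J_ij*S_i*S_j."""
    field = sum(h[i] * s for i, s in enumerate(spins))
    coupling = sum(J[(i, j)] * spins[i] * spins[j] for (i, j) in J)
    return field - coupling

print(ising_energy([-1, -1, -1]))    # energy of state 111 (all spins down)
```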

Both quantum and simulated annealing work by minimizing Eq. 1, in which the $$S_i$$'s represent what physicists like to call "two-level systems."  That is, something with two main states -- e.g., spin, which is up or down, or polarization, which can be reduced to being parallel or perpendicular to a plane -- which can form a quantum equivalent of the binary 1 and 0 we're all used to by now.

An example set of states and energies for annealing runs.

In annealing, the overall group of two-state systems has a different total energy depending on which states its parts are in, so Eq. 1 is minimized when the parts are in the right combination of 1's and 0's.  Suzanne wrote up a nice blog post about that for D-Wave, so I'll just send you off to that.

The important difference between quantum and simulated annealing lies in how the group of objects goes from a starting state, say all spins pointing down (we'll call it 111 in binary), to whatever the lowest-energy state is.  In the picture, I made up a system where the x-axis shows which state the three two-state systems are in (i.e., '010' means particle one is up, particle two is down, and particle three is up) and the y-axis shows the energy of the group of three in that state.  As is common, we start in state 111, at the very rightmost point of the graph.  In both cases we hope the system will end in 100, the lowest-energy state (see? It's the lowest point on the graph) -- something you can verify by brute force, as in the sketch below.
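
At this size, checking every state is easy: three two-state systems give only 2³ = 8 configurations. This sketch reuses the made-up h and J values from above, with bit '1' meaning spin down (S = -1) and '0' meaning up (S = +1), as in the text.

```python
# Enumerate all 2^3 spin configurations of the toy system and print their
# energies; the lowest one is the state annealing is supposed to find.
from itertools import product

h = [1.0, -0.6, -0.4]                # same invented values as before
J = {(0, 1): -0.5, (1, 2): 0.3}

def ising_energy(spins):
    field = sum(h[i] * s for i, s in enumerate(spins))
    coupling = sum(J[(i, j)] * spins[i] * spins[j] for (i, j) in J)
    return field - coupling

for bits in product('01', repeat=3):
    spins = [-1 if b == '1' else +1 for b in bits]
    print(''.join(bits), round(ising_energy(spins), 2))
# With these invented values, state 100 comes out lowest, as in the figure.
```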

Quantum Annealing


In quantum annealing, we use the phenomenon called quantum tunnelling. Tunnelling is the quantum property by which a system can move from a low-energy state to a lower one by passing through an intermediate, higher-energy barrier rather than climbing over it.  In macroscopic terms, it would be like a ball on the ground next to a well suddenly being inside the well.  We don't expect it to end up in the well because it would need to raise itself up the side of the well first, something that takes energy, and since the ball was just sitting there, nicely, we wouldn't expect that at all.  Quantum objects do this all the time; the effects just don't add up to much on the scale of our ball.  Nevertheless, there's a calculable probability that all of the atoms in the Sun will jump simultaneously to right where Earth is, but it's so small that its chances of happening before the universe ceases to exist are on the order of you winning the lottery a few times while getting hit by lightning and dying in a plane crash simultaneously (please don't check that statement mathematically).

In the quantum world, we can take that graph and turn it upside down in our heads.  Now we're imagining a sort of probability-distribution representation of the system.  The quantum way of thinking about it is that if we measure the state of the three particles over and over, we'll find them in state 100 most of the time, because that is the lowest-energy state, but we'll also find them in states 011 and 111 sometimes, since those are more energetic than 100, but not by much, and so are the next most likely states.
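
As a rough way to see "lower energy means more likely," here's a classical Boltzmann-style weighting of a few states. To be clear, this is an analogy I'm adding, not what the quantum system actually computes -- real measurement probabilities come from the wavefunction -- and the energies below are invented to match the figure's story (100 lowest, 011 and 111 close behind).

```python
# Illustration only: weight each state by exp(-E/T) so that lower-energy
# states come out more probable. These energies are invented; real quantum
# measurement statistics come from amplitudes, not this thermal formula.
import math

energies = {'100': -2.0, '011': -1.6, '111': -1.5, '010': 0.5}  # made up
T = 1.0
weights = {s: math.exp(-E / T) for s, E in energies.items()}
total = sum(weights.values())
for state in sorted(weights, key=weights.get, reverse=True):
    print(state, round(weights[state] / total, 2))
```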

This method depends on maintaining coherence among the particles -- a special quantum property in which the states of the particles stay interrelated.  One of the hallmark measurements of quantum computing is the coherence time of the qubits, which determines how long you have to do quantum-style work with them before they turn back into regular old pumpkin particles.  This is one of the big criticisms of D-Wave -- they haven't published these numbers, and so nobody can say for certain that they're doing quantum annealing as opposed to, say...

Simulated Annealing


Which is not quantum at all!  Simulated annealing serves the same purpose, though, and, by analogy, uses a balloon instead of a ball, hoping it deflates over the well and falls in.  In this case, the system is exposed to some energy, so it can travel up and down the slopes of the graph as it likes, and is slowly cooled (or subjected to some equivalent, energy-removing operation) in the hope that, as it cools, it will fall into that low-energy state.
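
Here's a minimal simulated-annealing sketch for the toy three-spin system from earlier, using the standard Metropolis rule: flip a random spin, always accept moves that lower the energy, sometimes accept ones that raise it, and slowly lower the temperature. The starting state, cooling schedule, and h, J values are all invented for illustration.

```python
# Simulated annealing on the toy 3-spin Ising system: random spin flips,
# Metropolis acceptance, geometric cooling. Parameters are illustrative.
import math
import random

h = [1.0, -0.6, -0.4]                # same invented values as before
J = {(0, 1): -0.5, (1, 2): 0.3}

def ising_energy(spins):
    field = sum(h[i] * s for i, s in enumerate(spins))
    coupling = sum(J[(i, j)] * spins[i] * spins[j] for (i, j) in J)
    return field - coupling

spins = [-1, -1, -1]                 # start in state 111 (all spins down)
T = 5.0
while T > 0.01:
    i = random.randrange(3)
    before = ising_energy(spins)
    spins[i] *= -1                   # trial flip
    dE = ising_energy(spins) - before
    if dE > 0 and random.random() >= math.exp(-dE / T):
        spins[i] *= -1               # too costly this time: undo the flip
    T *= 0.99                        # cool down a little each step

bits = ''.join('1' if s == -1 else '0' for s in spins)
print(bits, round(ising_energy(spins), 2))   # should usually land on 100
```

On this tiny landscape it almost always ends in 100; the slow cooling is what gives the system a chance to climb back out of shallow local minima along the way.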

Behind Closed Doors Does Not a Community Make


While I certainly don't like the approach of keeping all of your cool research to yourself, I do think that D-Wave is doing cool research.  That said, I think their marketing department is probably stretching the truth, if not outright lying, about what they're actually selling.  As I was attempting to say, though, simulated annealing will get the job done in any event.  I think most academics are just upset that D-Wave is building something even remotely quantum while most of the rest of us spend hours adjusting delicate equipment in labs, trying to get a handful of qubits to do anything, while D-Wave claims hundreds.  If you're interested in some of the controversy, you can spend a while reading Scott Aaronson's very interesting blog.

[1] D-Wave publishes only select information to the scientific community at large, and, as of the last time I really looked into it (a few months ago), they hadn't published anything that anyone had taken as definitive results.
[2] Google seems to be using this for training image recognition software.