Our understanding of complex quantum mechanical processes is limited by our inability to solve the equations that govern them, except in simple cases. Numerical simulation therefore appears to be our best option to understand, design, and improve quantum systems. It turns out, however, that computational problems in quantum mechanics are notoriously difficult to treat numerically: the computational time required often scales exponentially with the size of the problem.
One of the most radical approaches to quantum problems was proposed by Feynman in 1982: he suggested that quantum mechanics itself offered a promising way to simulate quantum physics. This idea, the so-called quantum computer, demonstrated its potential convincingly in one important regime with the development of Shor's integer factorization algorithm, which improves exponentially on the best known classical algorithm.
In this thesis we explore six different computational problems from quantum mechanics, study their computational complexity, and look for ways to overcome their difficulty. In the first problem we investigate the reasons behind the improved performance of Shor's and similar algorithms. We show that the key quantum part of Shor's algorithm, the quantum phase estimation algorithm, achieves its good performance through the use of power queries, and we prove lower bounds, matching the known upper bounds, for all phase estimation algorithms that use power queries. Our research indicates that problems that admit power queries will achieve similar exponential improvements over classical algorithms. We then apply our lower bound technique for power queries to the Sturm-Liouville eigenvalue problem and prove lower bounds matching the upper bounds of Papageorgiou and Woźniakowski. It seems to be very difficult, though, to find nontrivial instances of the Sturm-Liouville problem for which power queries can be simulated efficiently.
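The role of power queries can be illustrated with a small classical simulation. The sketch below (a hypothetical illustration, not the construction analyzed in the thesis) simulates Kitaev-style iterative phase estimation on a single eigenstate, where the controlled power query U^(2^(j-1)) reduces to a phase kick: an m-bit phase is recovered exactly with only m power queries, even though the largest power used is 2^(m-1).

```python
import math

def iterative_phase_estimation(phi, m):
    """Recover an m-bit phase phi of U = exp(2*pi*i*phi), classically
    simulating Kitaev-style iterative phase estimation on a single
    eigenstate.  Each round uses one power query U^(2^(j-1)), so only
    m queries are needed in total."""
    bits = [0] * (m + 1)   # bits[j] = j-th binary digit of phi
    omega = 0.0            # feedback phase from already-measured digits
    for j in range(m, 0, -1):              # least significant digit first
        theta = (2 ** (j - 1)) * phi - omega
        # ancilla measurement statistics: P(outcome 0) = cos^2(pi*theta)
        p0 = math.cos(math.pi * theta) ** 2
        bits[j] = 0 if p0 > 0.5 else 1
        omega = omega / 2.0 + bits[j] / 4.0
    return sum(bits[j] * 2.0 ** (-j) for j in range(1, m + 1))
```

For example, `iterative_phase_estimation(13/32, 5)` returns 13/32 exactly; a bit-query algorithm would need on the order of 2^m evaluations of U to reach the same precision, which is the exponential gap the power-query model captures.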
A quantum computer differs from a classical computer that uses randomness in that it allows "negative probabilities" that can cancel each other (destructive interference). Ideally we would like to transfer classical randomized algorithms to the quantum computer and obtain speed improvements. One of the simplest classical randomized algorithms is the random walk, and we study the behavior of its quantum counterpart, the continuous-time quantum random walk. We analyze this walk in one dimension and give analytical formulas for its behavior that demonstrate its interference properties. Is interference, or cancellation, really the most important advantage that a quantum computer has over a classical computer? To answer this question we study the class StoqMA of "stochastic quantum" algorithms that only use classical gates, but are given a quantum "witness", i.e. an arbitrary quantum state that can guide the algorithm toward the correct answer but should not be able to "fool" it. We show that there exists a complete problem for this class, which we call the stoquastic local Hamiltonian problem: computing the lowest eigenvalue of a Hamiltonian whose interaction terms each act on only a fixed number of particles and have non-positive off-diagonal matrix elements. With the help of this problem we prove that MA ⊆ StoqMA ⊆ SBP ∩ QMA. This shows that interference is one of the most important ingredients of quantum computation.
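The interference behavior of the continuous-time walk can be seen in a direct numerical integration. The sketch below (illustrative only; the function name and parameters are ours) integrates the Schrödinger equation dψ/dt = −iHψ for H the adjacency matrix of a path, starting at the center site.

```python
def ctqw_line(n_sites, t, steps=2000):
    """Continuous-time quantum walk on a path of n_sites vertices:
    integrate d/dt psi = -i H psi (H = adjacency matrix, i.e. hopping
    to nearest neighbours) with classical RK4, starting at the centre.
    Returns the list of site probabilities |psi_x|^2 at time t."""
    n = n_sites
    psi = [0j] * n
    psi[n // 2] = 1.0 + 0j

    def deriv(v):
        # (-i H v)_x = -i (v_{x-1} + v_{x+1})
        return [-1j * ((v[x - 1] if x > 0 else 0)
                       + (v[x + 1] if x < n - 1 else 0))
                for x in range(n)]

    dt = t / steps
    for _ in range(steps):
        k1 = deriv(psi)
        k2 = deriv([psi[x] + 0.5 * dt * k1[x] for x in range(n)])
        k3 = deriv([psi[x] + 0.5 * dt * k2[x] for x in range(n)])
        k4 = deriv([psi[x] + dt * k3[x] for x in range(n)])
        psi = [psi[x] + dt / 6.0 * (k1[x] + 2 * k2[x] + 2 * k3[x] + k4[x])
               for x in range(n)]
    return [abs(a) ** 2 for a in psi]
```

On the infinite line the amplitude at site x is known in closed form in terms of the Bessel function J_x(2t), which is the source of the oscillatory interference pattern one sees in the output: unlike the classical walk, whose standard deviation grows like √t, the quantum walk spreads ballistically, with most of the probability piling up near positions ±2t.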
The simulation of the time evolution of a general quantum system requires a computational time that is exponential in the dimension of the system. But perhaps the problem as posed is too general, and special systems can be simulated in polynomial time. In particular, it is interesting to study quantum systems of "limited energy", i.e. whose initial state consists mainly of components with small energy. We model this in the theory of weighted reproducing kernel Hilbert spaces with two different sets of weights: product weights and finite-order weights. We show that the information cost of computing the evolution for initial states from these spaces is tractable, i.e. the cost does not grow exponentially with the dimension of the problem.
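The two weight families can be made concrete with a small sketch (notation ours, following the standard weighted-space setup): product weights assign a subset u of coordinates the weight γ_u = Π_{j∈u} γ_j, while finite-order weights of order ω set γ_u = 0 whenever |u| > ω. For a summable sequence γ_j the total weight Σ_u γ_u = Π_j (1 + γ_j) stays bounded as the dimension d grows, which is the mechanism by which such weights tame the curse of dimensionality.

```python
import itertools

def product_weight(u, gamma):
    """gamma_u = prod of gamma(j) over j in u (empty product = 1)."""
    w = 1.0
    for j in u:
        w *= gamma(j)
    return w

def total_weight(d, gamma, order=None):
    """Sum of gamma_u over all subsets u of {1, ..., d}.  With
    finite-order weights of the given order, subsets with more than
    `order` coordinates get weight 0 and are skipped."""
    max_size = d if order is None else min(order, d)
    return sum(product_weight(u, gamma)
               for size in range(max_size + 1)
               for u in itertools.combinations(range(1, d + 1), size))
```

With γ_j = 1/j², for instance, `total_weight(d, gamma)` equals Π_{j=1}^{d} (1 + 1/j²) and converges to a finite limit as d grows, whereas unweighted spaces (all γ_j = 1) give total weight 2^d.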
Finally we study a computational problem from lattice quantum chromodynamics (QCD). In the most popular algorithms for problems in QCD, the (gauged) Dirac matrix has to be inverted numerous times. Since this matrix is large, sparse, and ill-conditioned, iterative methods have to be used. Unfortunately, a direct application of methods like conjugate gradient (CG) or minimal residual algorithms gives poor performance in practice. We study a newly proposed multigrid method, adaptive smoothed aggregation, that promises to overcome these difficulties. We show that while the convergence of classical CG deteriorates as the matrix becomes nearly singular, adaptive smoothed aggregation still performs well.
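The conditioning issue can be reproduced on the standard model problem. The sketch below (a generic textbook CG, not the thesis's lattice setup) runs unpreconditioned conjugate gradient on the 1D discrete Laplacian, whose condition number grows like n²; the iteration count grows with n accordingly, which is exactly the regime where multigrid methods such as smoothed aggregation are designed to help.

```python
def laplacian(v):
    """Matrix-vector product with the 1D discrete Laplacian (2 on the
    diagonal, -1 on the off-diagonals); condition number ~ n^2."""
    n = len(v)
    return [2.0 * v[i]
            - (v[i - 1] if i > 0 else 0.0)
            - (v[i + 1] if i < n - 1 else 0.0)
            for i in range(n)]

def cg(matvec, b, tol=1e-8, maxit=10_000):
    """Plain conjugate gradient for SPD systems, starting from x0 = 0.
    Returns (approximate solution, number of iterations used)."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual b - A x0 with x0 = 0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    target = tol * rs ** 0.5         # relative residual threshold
    it = 0
    while rs ** 0.5 > target and it < maxit:
        ap = matvec(p)
        alpha = rs / sum(p[i] * ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
        it += 1
    return x, it
```

Solving with a constant right-hand side, the iteration count for n = 100 is markedly larger than for n = 10; for the nearly singular Dirac matrices of lattice QCD this growth is what makes unpreconditioned Krylov methods impractical.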