I. Introduction
The Riemann hypothesis occupies a position in mathematics that no other open conjecture quite matches. It is not the oldest unsolved problem in number theory — questions about the distribution of twin primes, perfect numbers, and odd perfect numbers all predate it by centuries or millennia — and it is not the most elementary to state. It is, however, the conjecture around which a substantial portion of modern analytic number theory has been organized, and the conjecture whose resolution would, in a single stroke, sharpen hundreds of conditional theorems into unconditional ones.
The hypothesis was published in 1859 in a brief memoir by Bernhard Riemann, where it appeared less as a centerpiece than as a working remark in the course of a wider investigation into the distribution of prime numbers. Riemann conjectured, in effect, that all the nontrivial zeros of a particular complex-analytic function — the function now bearing his name, ζ(s) — lie on a single vertical line in the complex plane: the line where the real part of s equals one-half. The hypothesis has resisted proof for more than 165 years. It has been verified computationally for the first ten trillion or so zeros, with no exception found. It has been generalized in directions Riemann could not have anticipated, with some of those generalizations proved (in the function field setting) and others standing open alongside it. It has been the subject of multiple announced proofs that did not survive scrutiny. And it has acquired a cultural standing — within mathematics and to a lesser extent outside it — as the paradigmatic hard problem.
This white paper is concerned with the historical record. It traces the conceptual and technical antecedents of the hypothesis from Euler in the eighteenth century through the work of Gauss, Legendre, Dirichlet, and Chebyshev in the nineteenth; examines Riemann’s 1859 memoir and the place of the hypothesis within it; follows the proof of the prime number theorem in 1896 and the elementary proof in 1949; surveys the twentieth-century work on zeros of ζ on the critical line, the computational history beginning with Siegel’s recovery of Riemann’s unpublished formula in 1932, the broader family of generalized hypotheses that have grown around the original, and the institutional history culminating in the Clay Millennium Prize. The paper closes with a reflection on what one and two-thirds centuries of resistance suggests about the texture of the problem itself.
The aim is not to advocate for any particular line of attack or to predict resolution. The aim is to set down, with as much accuracy as a survey of this scope permits, the historical record of how the Riemann hypothesis came to occupy the place it does.
II. Pre-Riemann Foundations
Euler and the Zeta Function as Bridge Between Analysis and Arithmetic
The single most consequential antecedent of the Riemann hypothesis is Leonhard Euler’s discovery, published in 1737 in Variae observationes circa series infinitas, of what is now called the Euler product formula. For real values of s greater than one, Euler showed that the infinite series
ζ(s) = 1 + 1/2^s + 1/3^s + 1/4^s + …
is equal to the infinite product taken over all prime numbers p,
ζ(s) = ∏_p (1 − 1/p^s)^{−1}.
The proof is a direct application of the unique factorization of integers into primes, combined with the geometric series expansion. Its consequence is that the analytic behavior of the function on the left encodes, in compressed form, all the information about the multiplicative structure of the integers. Euler used this identity to give a new proof, distinct from Euclid’s, that there are infinitely many primes: if the product on the right were finite, ζ(s) would remain bounded as s descends to one, but the harmonic series ζ(1) diverges.
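The identity lends itself to direct numerical illustration. The following sketch (Python, standard library only; the cutoff of 200,000 is an arbitrary choice) compares a truncation of the Dirichlet series at s = 2 with a truncation of the Euler product over the primes below the same bound; both should approach Euler's value ζ(2) = π²/6.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [p for p in range(2, n + 1) if sieve[p]]

N, s = 200_000, 2.0
series = sum(1 / n**s for n in range(1, N + 1))   # Dirichlet series, truncated
product = 1.0
for p in primes_up_to(N):                          # Euler product, truncated
    product /= 1 - p**-s

print(series, product, math.pi**2 / 6)
```

That the two truncations agree to several decimal places is exactly the content of unique factorization: every term 1/n² of the series is generated once, and only once, by expanding the product.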
Euler also computed ζ(2) = π²/6, ζ(4) = π⁴/90, and the values of ζ at all positive even integers. His treatment of the function at negative integers, by way of a heuristic functional equation derived through divergent series methods, anticipated the rigorous functional equation Riemann would establish a century later. Euler did not, however, treat ζ as a function of a complex variable. The extension of ζ into the complex plane, and the recognition that this extension is the proper context for studying primes, was Riemann’s contribution.
Gauss’s Conjecture and Legendre’s Form
The empirical study of how primes are distributed among the integers was initiated, in something resembling its modern form, by Carl Friedrich Gauss in his teenage years. Gauss tabulated primes up to several million and observed that the density of primes near a large integer x appeared to be approximately 1/log x. Integrating, he conjectured that the number of primes less than or equal to x — denoted π(x) — is asymptotically given by the logarithmic integral,
Li(x) = ∫₂^x dt/log t.
Gauss did not publish this conjecture in any prominent form during his most productive years; it appears in correspondence and notebooks, and was communicated more publicly only later. Independently, Adrien-Marie Legendre, working from his own tables, conjectured in 1798 (and refined in 1808) that π(x) is approximately x/(log x − A) for some constant A, which he estimated empirically as approximately 1.08366.
Both forms predict the same leading behavior — π(x) ~ x/log x — but Gauss’s logarithmic integral provides a more accurate approximation at the next order. The numerical superiority of Li(x) over Legendre’s form became clear as tables extended further, but neither conjecture was proved during the lifetimes of their authors. The proof of what came to be called the prime number theorem would wait until 1896 and would proceed by methods neither Gauss nor Legendre had at their disposal.
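The comparison can be reproduced directly. The sketch below (Python, standard library; the heights 10⁶ and 10⁷ are arbitrary) sieves π(x) exactly, evaluates Li(x) by Simpson's rule after the substitution u = log t, and evaluates Legendre's formula with his constant A = 1.08366. At x = 10⁶, Legendre's fitted constant still gives the closer approximation; by x = 10⁷, Li(x) has overtaken it — the pattern that emerged as the nineteenth-century tables grew.

```python
import math

def prime_sieve(n):
    """Bytearray sieve: sieve[k] == 1 iff k is prime."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return sieve

def li(x, steps=20_000):
    """Li(x) = integral from 2 to x of dt/log t, computed by Simpson's rule
    after the substitution u = log t (the integrand becomes e^u / u)."""
    a, b = math.log(2), math.log(x)
    h = (b - a) / steps
    f = lambda u: math.exp(u) / u
    total = f(a) + f(b)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

legendre = lambda x: x / (math.log(x) - 1.08366)  # Legendre's empirical constant

sieve = prime_sieve(10**7)
pi6 = sum(sieve[: 10**6 + 1])   # pi(10^6)
pi7 = sum(sieve)                # pi(10^7)

for x, pix in [(10**6, pi6), (10**7, pi7)]:
    print(x, pix, round(li(x), 1), round(legendre(x), 1))
```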
Chebyshev’s Bounds
The first substantial progress toward the prime number theorem came from Pafnuty Chebyshev in two memoirs of 1849 and 1852. Chebyshev introduced two functions that have remained central to the subject:
ϑ(x) = ∑_{p ≤ x} log p,  ψ(x) = ∑_{p^k ≤ x} log p.
These are weighted prime-counting functions: ϑ counts each prime p with weight log p, while ψ counts each prime power p^k with weight log p. The asymptotic statement π(x) ~ x/log x is equivalent to ψ(x) ~ x, and the latter form proves more tractable analytically.
Chebyshev proved by elementary means that there exist positive constants c₁ and c₂, with c₁ < 1 < c₂, such that
c₁ x ≤ ψ(x) ≤ c₂ x
for all sufficiently large x. He gave explicit values close to the optimum, with c₁ ≈ 0.92 and c₂ ≈ 1.11. Chebyshev’s argument did not establish that the limit ψ(x)/x exists, only that if it did, it would have to equal one. As a corollary, he proved Bertrand’s postulate: for every integer n greater than one, there is a prime between n and 2n.
Chebyshev’s bounds were the first quantitative result on the distribution of primes that improved on Euclid’s infinitude argument. They demonstrated that the conjectured asymptotic was at least within a constant factor of correct. They did not, however, supply the analytic machinery that would be needed to close the gap to a precise asymptotic.
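The behavior Chebyshev bounded can be observed numerically. The following sketch (Python, standard library; the sample heights are arbitrary) computes ψ(x)/x by direct sieving; the ratios sit comfortably inside Chebyshev's window [0.92, 1.11] and visibly drift toward 1, which is the content of the prime number theorem.

```python
import math

def chebyshev_psi(x):
    """psi(x) = sum of log p over all prime powers p^k <= x."""
    n = int(x)
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    total = 0.0
    for p in range(2, n + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
            pk = p
            while pk <= n:
                total += math.log(p)  # weight log p for every power of p
                pk *= p
    return total

for x in (10**3, 10**4, 10**5, 10**6):
    print(x, chebyshev_psi(x) / x)
```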
Dirichlet, Characters, and L-Functions
The other essential precursor to Riemann’s framework was Peter Gustav Lejeune Dirichlet’s 1837 proof of the infinitude of primes in arithmetic progressions. Given coprime positive integers a and q, Dirichlet proved that the arithmetic progression a, a + q, a + 2q, … contains infinitely many primes. The strategy was to introduce, for each residue class modulo q, a multiplicative character χ — a homomorphism from the multiplicative group (Z/qZ)* to the unit circle — and to form what are now called Dirichlet L-functions:
L(s, χ) = ∑_{n=1}^∞ χ(n)/n^s = ∏_p (1 − χ(p)/p^s)^{−1}.
The principal character produces a function closely related to ζ; the nonprincipal characters produce L-functions whose behavior at s = 1 is the key technical input to the proof. Dirichlet’s argument required showing that L(1, χ) is nonzero for every nonprincipal character χ, a fact that is not obvious and whose proof in the case of real characters uses the analytic class number formula.
Dirichlet’s work introduced two structural ideas that would prove central to all subsequent analytic number theory. First, the L-function for a character generalizes the zeta function in a way that respects arithmetic structure: the zeros and poles of L(s, χ) encode information about primes in specific residue classes. Second, the technique of detecting arithmetic conditions through orthogonality of characters — averaging over χ to pick out a single residue class — became the workhorse method for studying primes in progressions. The Dirichlet L-functions are the most elementary nontrivial members of the family of L-functions for which a generalized Riemann hypothesis is conjectured today.
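The crucial nonvanishing L(1, χ) ≠ 0 can be seen concretely in the smallest case. For the nonprincipal character modulo 4, the value L(1, χ) = 1 − 1/3 + 1/5 − 1/7 + … equals π/4, a classical closed form (Leibniz's series). The sketch below (Python, standard library) sums the series far enough that the alternating-series error bound guarantees six correct decimals.

```python
import math

def chi(n):
    """The nonprincipal Dirichlet character modulo 4."""
    if n % 2 == 0:
        return 0
    return 1 if n % 4 == 1 else -1

# Partial sum of L(1, chi) = 1 - 1/3 + 1/5 - 1/7 + ...
# Alternating series: the truncation error is below the first omitted term.
L1 = sum(chi(n) / n for n in range(1, 2_000_000))
print(L1, math.pi / 4)
```

That the limit is π/4 rather than 0 is precisely what Dirichlet's argument needs: were L(1, χ) to vanish, the proof of infinitely many primes in the progressions 4k + 1 and 4k + 3 would collapse.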
III. Riemann’s 1859 Memoir
Institutional Context
Bernhard Riemann was elected a corresponding member of the Berlin Academy of Sciences in 1859. The convention of the Academy was that newly elected members would submit a memoir on a topic of their choosing as a kind of inaugural offering. Riemann, then thirty-three years old and recently appointed full professor at Göttingen following the death of Dirichlet, submitted Über die Anzahl der Primzahlen unter einer gegebenen Größe — “On the Number of Primes Less Than a Given Magnitude” — in November of that year.
The memoir is approximately eight printed pages. It is the only paper Riemann ever published on number theory. Its style is characteristic of him: dense, allusive, suggestive rather than fully proved at every step, with several substantial claims left as remarks or asserted as evident. Subsequent generations of analysts, beginning with Hadamard and von Mangoldt in the 1890s, devoted considerable effort to supplying rigorous proofs of statements Riemann had treated as routine.
The Analytic Continuation and Functional Equation
Riemann’s first substantive contribution was to establish that ζ(s), defined initially by the Dirichlet series for Re(s) > 1, extends to a meromorphic function on the entire complex plane, with a simple pole at s = 1 and no other singularities. The continuation is achieved through a contour integral representation involving the gamma function. Two distinct proofs of the continuation are sketched in the memoir.
Riemann then established the functional equation. Defining the completed zeta function
ξ(s) = (1/2) s(s−1) π^{−s/2} Γ(s/2) ζ(s),
Riemann proved that ξ is an entire function and satisfies
ξ(s) = ξ(1 − s).
The functional equation exhibits a symmetry of ζ about the line Re(s) = 1/2 — the so-called critical line. The trivial zeros of ζ — at s = −2, −4, −6, … — are forced by the gamma factor: since ξ is entire, ζ must vanish at the poles of Γ(s/2) in the left half-plane (the pole of Γ(s/2) at s = 0 is instead cancelled by the factor s, and the pole of ζ at s = 1 by the factor s − 1). Any zero of ζ that is not trivial must lie in the strip 0 ≤ Re(s) ≤ 1, called the critical strip: the Euler product shows that ζ has no zeros with Re(s) > 1, and the functional equation then rules out nontrivial zeros with Re(s) < 0.
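The symmetry can be checked at a concrete pair of points. The sketch below (Python, standard library) evaluates ξ at s = 2 and at s = 1 − 2 = −1, using the classical values ζ(2) = π²/6 and ζ(−1) = −1/12 — the latter supplied by the analytic continuation, since the Dirichlet series itself diverges there. Both sides come out to π/6.

```python
import math

def xi(s, zeta_at_s):
    """Completed zeta: xi(s) = (1/2) s (s-1) pi^(-s/2) Gamma(s/2) zeta(s).
    The zeta value is passed in explicitly, since the Dirichlet series
    only converges for Re(s) > 1."""
    return 0.5 * s * (s - 1) * math.pi ** (-s / 2) * math.gamma(s / 2) * zeta_at_s

left = xi(2.0, math.pi**2 / 6)   # zeta(2) = pi^2/6  (Euler)
right = xi(-1.0, -1.0 / 12)      # zeta(-1) = -1/12  (analytic continuation)
print(left, right)               # both equal pi/6
```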
The Product Over Zeros and the Explicit Formula
Riemann then asserted, without complete proof, that ξ(s) admits a product expansion over its zeros, of the form
ξ(s) = ξ(0) ∏_ρ (1 − s/ρ),
where the product is taken over the nontrivial zeros ρ of ζ, with appropriate convergence conventions. Hadamard’s theory of entire functions of finite order, developed in the 1890s, supplied the rigorous foundation for this kind of product representation; Riemann’s assertion was vindicated but not by means available to him.
From this product, Riemann derived what is now called the Riemann–von Mangoldt explicit formula, which expresses the prime-counting function (in a smoothed form) directly in terms of the zeros of ζ:
ψ(x) = x − ∑_ρ x^ρ/ρ − log(2π) − (1/2) log(1 − x^{−2}),
where the sum is over the nontrivial zeros ρ. This formula is, in a sense, the central insight of analytic number theory: it converts the question of how primes are distributed into the question of where the zeros of ζ lie. Each zero ρ contributes an oscillatory term to ψ(x) of magnitude x^{Re(ρ)}/|ρ|. If all nontrivial zeros have real part 1/2, then the cumulative deviation of ψ(x) from its main term x is bounded by O(x^{1/2} (log x)²), which is the strongest possible bound and would yield correspondingly strong estimates for π(x).
The Hypothesis Itself
The Riemann hypothesis appears in the memoir as a remark in the course of Riemann’s discussion of the distribution of zeros. Riemann observes that the number of zeros in the critical strip up to height T is asymptotically (T/2π) log(T/2π) − T/2π — a count later proved rigorously by von Mangoldt — and notes that “it is very probable” that all of these zeros have real part exactly 1/2. He acknowledges that he has not been able to prove this and remarks that the question, while of interest, is not essential for the immediate purposes of his investigation, which is to understand the deviation of π(x) from Li(x) on average.
This framing — RH as a probable but unverified conjecture, peripheral to the explicit purposes of the memoir — is striking in retrospect. Riemann was not staking his memoir on the hypothesis. He was setting it down as a working remark in a paper whose primary aim was to develop the analytic apparatus through which the question of prime distribution could be studied. That the remark would become, within a generation, the central open problem of an entire mathematical discipline was not a result he could have foreseen.
Riemann died in 1866 at the age of thirty-nine, of tuberculosis, while traveling in Italy for his health. He left behind a substantial body of unpublished notes, much of which his widow consigned to the housekeeper, who burned a portion before the rest was salvaged by colleagues at Göttingen. What remained of the Nachlass passed eventually to the Göttingen library, where it would lie largely unexamined for nearly seventy years.
IV. The Prime Number Theorem and Its Proof
Hadamard and de la Vallée Poussin
The prime number theorem — the asymptotic π(x) ~ x/log x conjectured by Gauss and Legendre — was proved independently in 1896 by Jacques Hadamard and Charles-Jean de la Vallée Poussin. Both proofs used Riemann’s framework. Both established, as the central technical input, that ζ(s) has no zeros on the line Re(s) = 1.
The argument from no zeros on Re(s) = 1 to the prime number theorem proceeds through the explicit formula or a variant of it. If ζ has a zero ρ with Re(ρ) = 1, the explicit formula contains a term of order x^{Re(ρ)} = x, which would obstruct the asymptotic ψ(x) ~ x. Conversely, if ζ has no zeros on Re(s) = 1, then the contributions from the critical strip are of strictly lower order than x, and ψ(x) ~ x follows.
The proof that ζ(1 + it) ≠ 0 for nonzero real t was the crux. Hadamard's argument used his theory of entire functions of finite order, which he had developed for the purpose; de la Vallée Poussin's was more elementary in its analytic content but more intricate. The now-standard route to the nonvanishing, introduced by Mertens in 1898, rests on the trigonometric inequality
3 + 4 cos θ + cos 2θ ≥ 0,
applied, through the Euler product, to the logarithm of |ζ(σ)³ ζ(σ + it)⁴ ζ(σ + 2it)| for σ slightly greater than one: taking θ = kt log p term by term shows this logarithm is nonnegative, so the product of the three factors has absolute value at least one. If ζ had a zero at 1 + it, the fourth-order vanishing of the middle factor would overwhelm the third-order pole of ζ(σ) as σ descends to one, forcing the product to zero and contradicting the bound.
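The inequality is in fact an exact square, not an estimate: 3 + 4 cos θ + cos 2θ = 2(1 + cos θ)², by the double-angle identity cos 2θ = 2cos²θ − 1. A quick numerical check (Python, standard library):

```python
import math

# 3 + 4cos(t) + cos(2t) = 2(1 + cos(t))^2 >= 0, via cos(2t) = 2cos(t)^2 - 1.
for k in range(-6283, 6284):          # t sweeps roughly [-2*pi, 2*pi]
    t = k / 1000
    value = 3 + 4 * math.cos(t) + math.cos(2 * t)
    square = 2 * (1 + math.cos(t)) ** 2
    assert abs(value - square) < 1e-12  # the identity
    assert value > -1e-12               # the nonnegativity
```

The curious asymmetry of the exponents 3, 4, 1 in the zeta combination mirrors the coefficients 3, 4, 1 here; it is this lopsidedness that makes the fourth power at σ + it dominate the third-order pole at σ.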
De la Vallée Poussin extended his argument in 1899 to obtain a zero-free region — a region of the form Re(s) > 1 − c/log(|t| + 2) inside which ζ has no zeros — and from this derived an effective error term for the prime number theorem. Hadamard’s proof did not give an effective error term in its original form, though it could be modified to do so. The effective form of the prime number theorem proved by de la Vallée Poussin is
π(x) = Li(x) + O(x exp(−c√log x))
for some positive constant c. The error term has since been improved — the strongest unconditional zero-free region, due to Vinogradov and Korobov in 1958, yields an exponent of essentially (log x)^{3/5} in place of √log x — but nothing approaching the square-root-of-x error term that RH would deliver has ever been established unconditionally.
What the Prime Number Theorem Requires
It is worth stating explicitly that the prime number theorem requires substantially less than the Riemann hypothesis. PNT is equivalent, in a precise sense, to the absence of zeros of ζ on the line Re(s) = 1. RH is the much stronger statement that there are no zeros with real part greater than 1/2 (equivalently, by the functional equation, no zeros with real part less than 1/2 within the critical strip). The historical fact that PNT was provable in 1896 while RH remains unproved more than a century later reflects the gap between these two assertions: ruling out zeros on a single line is achievable through clever inequalities; ruling out zeros throughout an open half-strip is, on the present evidence, a problem of an entirely different order.
The Erdős–Selberg Elementary Proof
For half a century after 1896, it was widely believed that any proof of the prime number theorem must use complex analysis — specifically, must use the analytic continuation of ζ to the line Re(s) = 1 and an argument that ζ has no zero there. G. H. Hardy stated this view explicitly, suggesting that an elementary proof would require a fundamental change of perspective.
In 1948–1949, Atle Selberg and Paul Erdős, working at first in collaboration and later separately, produced an elementary proof of PNT — elementary in the technical sense that it avoided complex analysis and used only real-variable methods. The proof rests on what is now called Selberg’s symmetry formula:
∑_{p ≤ x} (log p)² + ∑_{pq ≤ x} log p · log q = 2 x log x + O(x).
From this identity, by an intricate but elementary argument, the prime number theorem follows.
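The symmetry formula can be checked numerically, with the caveat that the O(x) error term makes convergence of the ratio to 1 slow — at x = 10⁵ the left side falls short of 2x log x by a bounded multiple of x, which is exactly what the formula permits. A sketch (Python, standard library; the cutoff and the looseness of the comparison are illustrative choices):

```python
import math
from bisect import bisect_right

X = 10**5

# Sieve primes up to X.
sieve = bytearray([1]) * (X + 1)
sieve[0] = sieve[1] = 0
for p in range(2, int(X**0.5) + 1):
    if sieve[p]:
        sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
primes = [p for p in range(2, X + 1) if sieve[p]]

logs = [math.log(p) for p in primes]
theta = [0.0]                     # theta[j] = sum of log q over the first j primes
for L in logs:
    theta.append(theta[-1] + L)

single = sum(L * L for L in logs)  # sum over p <= X of (log p)^2
double = 0.0                       # sum over ordered pairs with pq <= X of log p log q
for p, L in zip(primes, logs):
    if 2 * p > X:
        break
    double += L * theta[bisect_right(primes, X // p)]

lhs = single + double
rhs = 2 * X * math.log(X)
print(lhs / rhs, (rhs - lhs) / X)  # ratio near 1; deficit a bounded multiple of X
```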
The proof produced a priority dispute that has been documented in considerable detail by historians of mathematics. Selberg discovered the symmetry formula and saw how to use it; Erdős, after Selberg communicated the formula to him, found the path from the formula to PNT before Selberg did, using an estimate on prime gaps. The two then collaborated briefly before disagreement led them to publish separately, with Selberg eventually receiving the Fields Medal in 1950 for the result (and other work). The dispute was bitter and personally costly, and the historical literature on it is substantial.
The elementary proof did not eliminate the place of complex analysis in the study of primes — analytic methods remain the source of all sharper results — but it did show that the dependence of PNT on the analytic continuation of ζ was not absolute. What an elementary proof of RH would look like, or whether one is possible, remains a separate open question.
V. Hilbert’s 1900 Address and the Eighth Problem
The Address and Its Context
David Hilbert delivered his address “Mathematische Probleme” at the Second International Congress of Mathematicians in Paris in August 1900. The address listed twenty-three problems (ten in the spoken version, twenty-three in the printed version) that Hilbert considered central to the future of the discipline. The list was not intended as exhaustive, nor as an authoritative ranking; it was a proposal for the directions in which mathematics, in Hilbert’s judgment, should be pressed in the new century.
The eighth problem on Hilbert’s list was titled “Problems of prime numbers.” It contained three principal components: the Riemann hypothesis, the Goldbach conjecture, and the question of the infinitude of twin primes. Hilbert presented the Riemann hypothesis first and at greatest length, and his framing of it has shaped subsequent reception. He described the hypothesis as “of the greatest importance for the theory of numbers as well as for many other branches of mathematics,” and he emphasized the wide range of arithmetic consequences that would follow from its proof.
Hilbert’s address served, in effect, as the canonization of the hypothesis. After 1900, RH was no longer simply a remark in an 1859 memoir; it was a designated central problem of the discipline, with the institutional weight of Hilbert’s reputation behind it.
Hilbert’s Reported Remarks
Several remarks attributed to Hilbert about the Riemann hypothesis have entered mathematical folklore, with varying degrees of documentation. The most frequently cited is his reported statement that, if he were to awaken after sleeping for five hundred years, his first question would be: has the Riemann hypothesis been proved? The remark survives in secondhand recollections and is consistent with Hilbert’s general attitude toward the problem; whether he uttered it in precisely this form is uncertain.
A second remark, also frequently cited, is Hilbert’s reported answer to a question about which problem from his list he expected to see solved first. He is said to have replied that the easiest of the twenty-three would prove to be the Riemann hypothesis, the hardest the seventh (concerning the transcendence of numbers such as 2^√2), and that he expected to see neither solved in his lifetime. The seventh problem was substantially resolved by Gelfond and Schneider in the 1930s, while RH remained open at Hilbert’s death in 1943. The anecdote, if accurate, illustrates the difficulty of predicting which problems will yield to which techniques, and it has been cited in this connection many times since.
Reception in the Twentieth Century
Hilbert’s framing established RH as the standing open problem of analytic number theory. It also established a particular style of relating to the problem: the conviction that progress on RH should be one of the main organizing aims of the discipline, even when direct attack proves infeasible. Through the first half of the twentieth century, work on RH took the form of partial results — bounds on the number of zeros off the line, bounds on the proportion of zeros on the line — rather than direct attempts at proof. The accumulation of partial results gave the problem its characteristic shape: a hypothesis around which a substantial conditional theory had been constructed, but whose central claim remained inaccessible.
VI. Twentieth Century Developments on the Critical Line
Hardy’s Theorem
The first major result on zeros of ζ on the critical line itself was proved by G. H. Hardy in 1914. Hardy showed that ζ has infinitely many zeros on the line Re(s) = 1/2.
The modern presentation of the proof uses Hardy’s function Z(t) — later also called the Riemann–Siegel Z-function — defined so that Z(t) is real for real t and |Z(t)| = |ζ(1/2 + it)|. Hardy considered the integral
∫_T^{2T} Z(t) dt
and obtained estimates that forced Z to change sign infinitely often. Each sign change corresponds to a zero of ζ on the critical line.
Hardy’s theorem did not establish that all zeros are on the line, nor did it establish that a positive proportion of zeros are on the line; it established only that infinitely many zeros are. Given that there are infinitely many zeros in total, infinitely many on the line is a substantially weaker assertion than the conjecture.
Hardy and Littlewood’s Lower Bound
In 1921, Hardy and J. E. Littlewood improved Hardy’s theorem by showing that the number of zeros of ζ on the critical line up to height T is at least cT for some positive constant c. This is a substantial strengthening in absolute count. But the total number of zeros up to height T grows like (T/2π) log(T/2π), by the von Mangoldt formula, so a lower bound of order T still accounts for a vanishing fraction of all zeros as T grows. A positive proportion remained out of reach.
Selberg’s Positive Proportion
The decisive step toward proportional results was taken by Atle Selberg in 1942. Selberg proved that a positive proportion of the nontrivial zeros of ζ lie on the critical line. That is, there exists a positive constant κ such that the number of zeros on the critical line up to height T is at least κ times the total number of zeros up to height T.
Selberg’s proof introduced what are now called Selberg’s mollifiers — auxiliary functions designed to dampen the variability of |ζ(1/2 + it)| in a controlled way, so that integral estimates could be obtained. The constant κ produced by Selberg’s argument was explicit but small. Nonetheless, the qualitative conclusion was a substantial advance over Hardy–Littlewood: a definite, if small, proportion of zeros are confirmed to lie where the hypothesis predicts.
Levinson’s One-Third
For more than three decades after Selberg’s result, the constant κ was improved only modestly. In 1974, Norman Levinson achieved a substantial breakthrough: he proved that at least one-third of the nontrivial zeros of ζ lie on the critical line.
Levinson’s method differed from Selberg’s in detail. He used a different mollifier and a different way of counting zeros — counting zeros of a related auxiliary function rather than zeros of ζ directly. The argument also yielded, as a byproduct, that at least one-third of the zeros are simple (have multiplicity one), a separate statement of independent interest.
Levinson’s one-third was widely viewed at the time as a striking advance, and his methods provided the template for subsequent improvements.
Conrey’s Two-Fifths
In 1989, J. Brian Conrey improved the proportion to two-fifths. Conrey’s method refined Levinson’s mollifier through more delicate analytic estimates and used a longer mollifier — one that captures more of the variability of ζ on the line — at the cost of substantially heavier computation.
Conrey’s two-fifths stood as the benchmark for two decades, and subsequent improvements have been incremental. Bui, Conrey, and Young raised the proportion to slightly above 41 percent in 2011; Pratt, Robles, Zaharescu, and Zeindler pushed it above 5/12 (approximately 41.7 percent) in 2019. These gains reflect technical refinement rather than conceptual breakthrough; the underlying method remains Levinson’s, suitably extended.
What These Results Mean
It is essential to be clear about the relationship between proportional results and the Riemann hypothesis itself. RH asserts that one hundred percent of nontrivial zeros lie on the critical line. The best published unconditional results give roughly forty-one or forty-two percent. The gap between these is not narrow. Furthermore, the Levinson–Conrey method, by its nature, appears to face an asymptotic ceiling: pushing the proportion past some threshold below one hundred percent appears to require methods qualitatively different from those that have produced the steady incremental improvements of the past fifty years. Whether such a method exists is one of the open meta-questions of the subject.
VII. Computational History
Riemann’s Unpublished Formula
Among the materials in Riemann’s Nachlass at Göttingen were notebooks containing computations of zeros of ζ. In the 1920s and early 1930s, Carl Ludwig Siegel, then at Frankfurt, undertook a careful examination of these papers. Siegel discovered, buried in Riemann’s calculations, an asymptotic formula for the function Z(t) — a formula vastly more efficient than any then known for high-precision computation of zeros at large height.
Siegel published the formula in 1932, in a paper that established the formula as a recovery from Riemann’s work rather than as Siegel’s own discovery, although the rigorous justification of the formula’s error term was Siegel’s contribution. The formula is now called the Riemann–Siegel formula. It expresses Z(t) as a finite main sum of approximately √(t/2π) terms, plus a correction series, with explicit estimates on the remainder.
The historical significance of the discovery is twofold. First, it showed that Riemann had computed zeros far beyond what his published memoir suggested — that the eight published pages substantially understated the depth of his investigation. Second, it provided the practical instrument for all subsequent computational verification of RH. Every large-scale computation of zeros from the 1930s onward has used the Riemann–Siegel formula or a refinement of it.
Turing and Computational Verification
Alan Turing took up the problem of computing zeros of ζ in the 1930s and continued the work after the war. Turing made two contributions of lasting importance.
First, he developed an improved error analysis for the Riemann–Siegel formula, giving rigorous bounds on the remainder term that were tighter than those Siegel had supplied. Turing’s bounds remain the basis for rigorous verification of RH at finite heights.
Second, Turing devised what is now called Turing’s method for verifying that all zeros up to a given height lie on the critical line. The method does not require directly locating each zero; instead, it uses the von Mangoldt zero-counting formula combined with sign changes of Z(t) to verify that the number of zeros found on the line equals the total number of zeros expected up to that height. If the counts match, all zeros up to that height are on the line.
Turing performed computations on the Manchester computer in the early 1950s — among the first substantial mathematical computations on a stored-program electronic computer — and verified RH for the first 1,104 zeros. The computations were limited by the available machinery but established the methodology that all subsequent verifications have refined.
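The core of the method — counting sign changes of Z(t) and comparing against the expected zero count — can be sketched in a few lines. The code below (Python, standard library) is not Turing's machinery: it evaluates ζ(1/2 + it) via Borwein's acceleration of the alternating eta series η(s) = (1 − 2^{1−s}) ζ(s), takes θ(t) from its standard asymptotic expansion, and counts sign changes of Z on 10 ≤ t ≤ 50, a range in which exactly ten zeros are known to lie.

```python
import cmath
import math
from fractions import Fraction

N = 60  # depth of Borwein's eta-series acceleration; ample accuracy for t <= 50

# Coefficients for Borwein's algorithm for the Dirichlet eta function,
# computed once in exact rational arithmetic, then converted to floats.
_d = [N * sum(Fraction(math.factorial(N + i - 1) * 4**i,
                       math.factorial(N - i) * math.factorial(2 * i))
              for i in range(k + 1))
      for k in range(N + 1)]
_c = [float((_d[N] - _d[k]) / _d[N]) for k in range(N)]

def zeta_half(t):
    """zeta(1/2 + it) via eta(s) = (1 - 2^(1-s)) zeta(s), accelerated."""
    s = complex(0.5, t)
    eta = sum((-1) ** k * _c[k] / (k + 1) ** s for k in range(N))
    return eta / (1 - 2 ** (1 - s))

def Z(t):
    """Hardy's Z-function: real-valued, with |Z(t)| = |zeta(1/2 + it)|.
    theta(t) from its asymptotic expansion, adequate for t >= 10."""
    theta = t / 2 * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8 + 1 / (48 * t)
    return (cmath.exp(1j * theta) * zeta_half(t)).real

# Count sign changes of Z on [10, 50]; each certifies a zero on the critical line.
ts = [10 + 0.05 * i for i in range(801)]
signs = [Z(t) > 0 for t in ts]
changes = sum(a != b for a, b in zip(signs, signs[1:]))
print(changes)
```

Turing's point is that this count, checked against the von Mangoldt formula's prediction for the total number of zeros up to the same height, verifies RH in the range without locating any zero precisely.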
Lehmer’s Phenomenon
In 1956, Derrick Lehmer was conducting computations of zeros of ζ when he discovered an unusual configuration: two zeros so close together that the function Z(t) almost — but not quite — failed to change sign between them. Specifically, near t ≈ 7005, a pair of zeros lies extremely close together, and the local extremum of Z(t) between them comes remarkably close to zero without touching it: the two sign changes are barely achieved.
The configuration, now called Lehmer’s phenomenon, has the following significance. If at some height the local extrema of Z(t) ever failed to straddle zero properly, the count of sign changes would fall short of the expected number of zeros, indicating that some zero must lie off the critical line. Lehmer’s pair came close enough to such a failure that it served as a vivid demonstration that RH, while well supported numerically, is not numerically guaranteed by the kind of margin that makes failure inconceivable.
Subsequent computations have found additional Lehmer-type pairs at greater heights, with similar near-failures of the sign-change criterion. None has actually failed. But the phenomenon has stood as a caution against complacency: the numerical evidence for RH is overwhelming, but the function does not behave with the kind of rigid regularity that would make a violation, at some sufficiently large height, beyond imagination.
Modern Verification
The computational verification of RH has been pushed to extraordinary heights. Andrew Odlyzko, beginning in the 1980s, computed millions and then billions of zeros at very large heights — including zeros around the 10^{20}-th zero — in part to test Montgomery’s pair correlation conjecture, which is taken up in Paper 3 of this series.
Xavier Gourdon, in 2004, verified RH for the first 10^{13} zeros using a refined version of the Odlyzko–Schönhage algorithm. David Platt and others have produced rigorous verifications at lower heights using methods that produce computer-checked proofs rather than merely numerical confirmations.
The current state of the verification is, in round figures, that all of the first ten trillion zeros (and isolated samples at much greater heights) have been confirmed to lie on the critical line. No counterexample has been found at any height to which computation has been carried.
The numerical evidence is, by any ordinary standard of evidence, overwhelming. It does not, of course, constitute a proof, and the history of mathematics contains conjectures supported by extensive numerical evidence that have nonetheless turned out to be false. (The most famous example concerns precisely the comparison of π(x) with Li(x): the difference π(x) − Li(x) is negative for every x to which direct computation has been extended, yet Littlewood proved in 1914 that it changes sign infinitely often; Skewes’s number is an early upper bound on the first crossing.) The numerical evidence for RH is suggestive; it is not conclusive.
VIII. Generalizations as Historical Tributaries
The Generalized Riemann Hypothesis
The first natural generalization of RH replaces ζ by a Dirichlet L-function L(s, χ) for a nontrivial character χ. The Generalized Riemann Hypothesis, GRH, asserts that all nontrivial zeros of L(s, χ) — for every Dirichlet character χ — lie on the critical line Re(s) = 1/2.
GRH is strictly stronger than RH (the principal character recovers ζ up to a finite Euler product) and has substantially more arithmetic content. Among its conditional consequences are: a deterministic polynomial-time primality test (Miller's test), strong forms of the Chebotarev density theorem with effective error terms, bounds on the least prime in an arithmetic progression sharper than Linnik's unconditional theorem provides, and various results on class numbers of imaginary quadratic fields. The number of theorems stated in the form "if GRH, then…" runs into the hundreds.
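The first of these consequences is concrete enough to sketch. Under GRH, Bach's explicit bound guarantees that every odd composite n has a strong-pseudoprime witness below 2(ln n)²; testing all bases up to that bound therefore gives a deterministic polynomial-time test. The following is a minimal sketch under that assumption, not an optimized implementation:

```python
# Miller's test, conditional on GRH: an odd n > 2 is prime iff it passes the
# strong-pseudoprime test for every base a <= 2*(ln n)^2 (Bach's witness
# bound, assumed here). Unconditionally this is only a probabilistic test.
import math

def is_strong_probable_prime(n, a):
    """Strong-pseudoprime test of n to base a (n odd, n > 2)."""
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    x = pow(a, d, n)          # a^d mod n
    if x == 1 or x == n - 1:
        return True
    for _ in range(r - 1):    # square up to r-1 times, looking for -1
        x = x * x % n
        if x == n - 1:
            return True
    return False

def miller_test(n):
    """Deterministic primality test, correct if GRH holds."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    bound = min(n - 2, int(2 * math.log(n) ** 2))
    return all(is_strong_probable_prime(n, a) for a in range(2, bound + 1))

print([n for n in range(2, 60) if miller_test(n)])
```

The point is the shape of the dependence: the correctness proof, not the code, is what rests on GRH.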
The Extended Riemann Hypothesis
A further generalization replaces Dirichlet L-functions with the Dedekind zeta function ζ_K(s) of a number field K. Recall that for K a finite extension of Q, the Dedekind zeta function is
ζ_K(s) = ∑_{a} 1/N(a)^s = ∏_{p} (1 − 1/N(p)^s)^{−1},
where the sum is over nonzero ideals a of the ring of integers O_K and the product is over prime ideals p, with N denoting the absolute norm. The Extended Riemann Hypothesis, ERH, asserts that all nontrivial zeros of ζ_K lie on the critical line for every number field K.
ERH is, in a sense, the natural setting for the hypothesis: the Dedekind zeta function captures the prime factorization theory of O_K just as ζ captures that of Z. The arithmetic consequences of ERH extend GRH’s consequences to the setting of arbitrary number fields and provide the conditional foundations for substantial portions of algebraic number theory.
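A small numerical check makes the definition concrete in the simplest nontrivial case, K = Q(i). This is an illustrative computation, not drawn from the paper: Z[i] is a principal ideal domain with four units, so its ideals of norm n are counted by r₂(n)/4, and the classical factorization ζ_{Q(i)}(s) = ζ(s)·β(s) (with β the Dirichlet beta function) can be tested at s = 2 by a brute-force lattice sum.

```python
# Sanity check for K = Q(i): every nonzero ideal of Z[i] is principal with
# exactly 4 generators (the units ±1, ±i), so
#   zeta_K(s) = (1/4) * sum over (a,b) != (0,0) of (a^2 + b^2)^(-s).
# The factorization zeta_{Q(i)}(2) = zeta(2) * beta(2) = (pi^2/6) * G,
# with G Catalan's constant, gives an independent target value.
import math

R = 300  # lattice cutoff; the tail beyond the square |a|,|b| <= R is O(1/R^2)
s = 2
total = 0.0
for a in range(-R, R + 1):
    for b in range(-R, R + 1):
        if a == 0 and b == 0:
            continue
        total += (a * a + b * b) ** (-s)
zK2 = total / 4  # divide by the 4 units of Z[i]

catalan = 0.915965594177219  # Catalan's constant G
target = (math.pi ** 2 / 6) * catalan
print(zK2, target)  # agree to ~1e-5
```

The same brute-force pattern works for any imaginary quadratic field of class number one; in general one must sum over ideal classes, which is exactly the extra arithmetic content ζ_K carries.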
The Grand Riemann Hypothesis and the Selberg Class
The most general Riemann hypothesis is formulated within the Selberg class S, an axiomatically defined family of L-functions introduced by Selberg in 1989 to capture the structural features common to all “L-functions arising from arithmetic.” A function F is in the Selberg class if it satisfies a Dirichlet series representation, an Euler product, an analytic continuation, a functional equation of a prescribed form, and a Ramanujan-type bound on coefficients.
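For reference, the functional-equation axiom has the following standard shape in the literature (the notation below is the usual axiomatic one, recorded here for orientation rather than established in this paper). One requires a completed function

Φ(s) = Q^s ∏_{j=1}^{k} Γ(λ_j s + μ_j) F(s),    with Q > 0, λ_j > 0, Re(μ_j) ≥ 0,

satisfying Φ(s) = ε Φ̄(1 − s̄) for some constant ε with |ε| = 1, where Φ̄ denotes the function obtained by conjugating the Dirichlet coefficients. The data Q, λ_j, μ_j, ε are part of the definition, and the critical line Re(s) = 1/2 is the symmetry line of this equation.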
The Grand Riemann Hypothesis asserts that every L-function in the Selberg class has all its nontrivial zeros on the critical line Re(s) = 1/2. The class includes ζ, Dirichlet L-functions, Dedekind zeta functions, Hecke L-functions, automorphic L-functions for GL(n), and various other L-functions associated with arithmetic objects. Whether the class is closed under the natural operations (Rankin–Selberg convolution, symmetric powers) is itself a family of open conjectures.
Selberg also proposed an orthogonality conjecture for the class, governing the correlations of Dirichlet coefficients of distinct primitive L-functions. The orthogonality conjecture, combined with the analytic structure, would imply substantial portions of the Langlands program.
The Function Field Analog
The most consequential development in the broader Riemann hypothesis story was the proof of the function field analog. The setting transposes the integers Z to the polynomial ring F_q[T] over a finite field F_q with q elements, and number fields to function fields — finite extensions of F_q(T). For each such function field K, or equivalently for each smooth projective curve C over F_q, one defines a zeta function
Z_C(s) = exp(∑_{n=1}^∞ (N_n/n) q^{-ns}),
where N_n is the number of points of C over F_{q^n}. The function Z_C(s) has the form P(q^{-s})/((1−q^{-s})(1−q^{1-s})), where P is a polynomial of degree 2g (with g the genus of C). The Riemann hypothesis for C is the assertion that all zeros of P, viewed as a polynomial in q^{-s}, satisfy |q^{-s}| = q^{-1/2} — equivalently, that all zeros of Z_C(s) have Re(s) = 1/2.
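For genus one the statement is concrete enough to test directly. The sketch below is a toy example: the curve y² = x³ + x + 1 is an arbitrary choice (its discriminant −496 = −16·31 is nonzero modulo every prime used; p = 31 would give a singular curve and is avoided). Here P(T) = 1 − aT + pT² with a = p + 1 − N₁, and RH for the curve is equivalent to Hasse's bound |a| ≤ 2√p, which forces both roots of P to have absolute value p^{−1/2}.

```python
# Toy genus-one check: naive point count for E: y^2 = x^3 + x + 1 over F_p,
# followed by the Hasse bound |a| <= 2*sqrt(p), which is RH for the curve.
# (Discriminant -496 = -16*31, so p = 31 is singular and must be avoided.)
import math

def count_points(p):
    """N_1 = #E(F_p) for y^2 = x^3 + x + 1, including the point at infinity."""
    squares = {(y * y) % p for y in range(p)}
    n = 1  # the point at infinity
    for x in range(p):
        f = (x * x * x + x + 1) % p
        if f == 0:
            n += 1          # one point with y = 0
        elif f in squares:
            n += 2          # two points, y and -y
    return n

for p in [5, 7, 11, 101, 1009]:
    a = p + 1 - count_points(p)
    assert abs(a) <= 2 * math.sqrt(p)  # Hasse bound = RH in genus 1
    print(p, a)
```

For higher genus the analogous check requires the full numerator polynomial P of degree 2g, but the logical structure is the same: a point count determines P, and RH is a statement about the absolute values of its roots.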
The function field RH was proved by Helmut Hasse for elliptic curves (genus one) in 1934. André Weil proved it for curves of arbitrary genus in 1948, using a substantial new theory of intersection on algebraic surfaces. Pierre Deligne, in 1974, proved the Riemann hypothesis component of the Weil conjectures for arbitrary smooth projective varieties over finite fields, using Grothendieck’s machinery of étale cohomology together with novel ideas on monodromy.
The function field case is, in a strict sense, an analog of RH that has been proved. The methods that prove it — geometric, cohomological, ultimately resting on a positivity statement (the Hodge index theorem in Weil’s case, a deeper monodromy argument in Deligne’s) — have no current analog over Q. The disparity between the two settings, where one has yielded to geometric methods and the other has not, is one of the central facts shaping current thinking about possible proofs. Paper 3 in this suite treats this disparity in detail.
IX. Institutional and Cultural History
The Clay Millennium Prize
In May 2000, the Clay Mathematics Institute, founded the previous year by businessman Landon Clay and mathematician Arthur Jaffe, announced seven Millennium Prize Problems, each carrying a prize of one million United States dollars for the first published and verified solution. The Riemann hypothesis was the second problem on the list, following the P versus NP question.
The Clay list was a deliberate echo of Hilbert's 1900 address — seven problems for the new millennium, mirroring the twenty-three problems Hilbert had posed for the new century. One problem was carried over directly: the Riemann hypothesis, which forms part of Hilbert's eighth problem. The Poincaré conjecture, the other Clay problem with a long pedigree, postdates Hilbert's list; Poincaré posed it in 1904. It was solved by Grigori Perelman within seven years of the Clay announcement; he declined the prize. As of the present writing, the Riemann hypothesis remains the most prominent of the unresolved Clay problems.
The institutional formalization of unsolved problems through prize structures has a complicated effect on a discipline. It directs attention, supplies a quantum of public visibility, and places a particular kind of weight on the announced problems. It also generates a steady stream of incorrect submissions — the Clay Institute in general, and the Riemann hypothesis in particular, have drawn many announced proofs that have not survived examination.
Failed Proofs and the Sociology of Attempts
The Riemann hypothesis has attracted a steady flow of announced proofs, both from established mathematicians and from amateurs. Most have not survived peer review. Some have made it through informal review, generated press coverage, and then been retracted or quietly abandoned.
The most prominent ongoing case is Louis de Branges's series of announced proofs over a span of more than two decades. De Branges, a distinguished mathematician at Purdue who proved the Bieberbach conjecture in 1984, has posted multiple manuscripts claiming proofs of RH. The proofs employ his theory of Hilbert spaces of entire functions, a substantial and useful body of work. In 1998, Brian Conrey and Xian-Jin Li identified a specific obstruction to the strategy in a particular form de Branges had pursued: they showed that a positivity condition required by the strategy is in fact violated. De Branges has continued to refine his approach, but the wider community has not accepted any version as a proof.
The de Branges case illustrates several features of the cultural history of RH. It illustrates the difficulty of definitively closing a proof attempt: each manuscript can be modified, and identifying a specific irrecoverable error requires substantial expert engagement. It illustrates the toll on attention: each new manuscript requires that experts decide whether it merits the time of a careful reading. And it illustrates the way the hypothesis exerts a gravitational pull on mathematicians of substantial accomplishment, who pursue it across years or decades despite the absence of clear progress.
Other cases include Hans Rademacher’s announced disproof in 1945, which Time magazine reported before the error was found; Matthew Watkins’s catalogued list of announced proofs and disproofs, which documented dozens of attempts; and Michael Atiyah’s brief 2018 announcement, which was met with skepticism and did not survive scrutiny. The recurring pattern is that the hypothesis attracts attempts in proportion to its prominence, and the prominence is reinforced by the prize and the institutional history.
The Hypothesis in Broader Mathematical Culture
Beyond the specific community of analytic number theorists, RH has acquired a cultural standing as the paradigmatic deep mathematical problem. It is referenced in popular accounts of mathematics as the question whose resolution would constitute the most significant single development in pure mathematics. It has been the subject of a substantial popular literature — books by John Derbyshire, Marcus du Sautoy, Karl Sabbagh, and others — directed at readers without specialized training.
This cultural standing has consequences within the discipline. It shapes how graduate students choose problems, how funding agencies frame analytic number theory, and how the discipline relates to neighboring fields. It also produces a certain pressure toward conservatism: the hypothesis has resisted so many efforts by so many capable mathematicians that the working assumption among most experts is that no current line of attack is close to success, and that announcements of imminent proof should be treated with substantial skepticism.
X. Conclusion
One hundred sixty-six years after Riemann's memoir, the hypothesis is in a particular condition. It is verified to ten trillion zeros. It is connected to so many parts of mathematics — number theory, harmonic analysis, random matrix theory, mathematical physics, the theory of L-functions, the Langlands program, arithmetic geometry — that its resolution would propagate consequences across the whole discipline. It has resisted every line of direct attack that has been tried. Its function field analog has been proved, by methods that have no current counterpart in the number field setting. And there are structural reasons, deeper than mere numerical experiment, for taking it to be true: the random matrix predictions on zero spacings, confirmed numerically with great accuracy, fit the hypothesis and would be difficult to make sense of if it failed.
The historical record points to several features of the problem. The first is that progress on RH has not been linear. The proof of the prime number theorem in 1896 used the framework Riemann had set up; the proportional results from 1942 onward refined a technique whose ceiling appears to be below one hundred percent; the function field proofs introduced methods that, despite their power, have not crossed back into the number field setting. Each major advance has come from a structural reframing rather than from incremental sharpening of existing methods.
The second feature is that the hypothesis has acquired a wide circle of generalizations and consequences without itself yielding. This is unusual. Most central conjectures in mathematics either fall to direct attack within a generation or two of their formulation, or else are gradually whittled down through partial cases. RH has remained roughly where Riemann left it, while the surrounding theory has grown vastly more sophisticated.
The third feature is that the function field success constitutes both a model and a puzzle. It is a model in that it shows the kind of structural ingredients — a geometric setting, a cohomology theory, a Frobenius operator, a positivity statement — that suffice for a proof. It is a puzzle in that the absence of a corresponding geometric setting for Spec(Z) is, on present evidence, exactly the obstacle that makes RH over Q intractable. The “field with one element” program and Connes’s noncommutative geometric approach are both attempts to supply the missing geometry. Neither has produced a proof. Whether either or some third approach will eventually do so is the live open question of the subject.
The historical record, taken as a whole, suggests that the Riemann hypothesis is the kind of problem that yields, when it yields, to a structural reconception rather than to an ingenious combination of existing techniques. It also suggests that one hundred sixty-six years of resistance is not, by the standards of mathematical history, an unreasonable interval for a problem of this depth: the prime number theorem itself was conjectured in the 1790s and proved in 1896, an interval of about a century, and RH is, by every available measure, a substantially deeper problem than PNT.
What can be said with confidence is that Riemann’s eight pages, written as the inaugural memoir of a newly elected academician, contained a remark that has organized a substantial portion of pure mathematics for more than a century and a half, and that the remark continues to do so. The hypothesis was offered as probable; the probability has been confirmed at every height where it has been tested; the proof has not arrived. The discipline waits, and works.
