A Conjecture on Stratified Zero–Prime Resonance: Pair Correlation Refinements under the Riemann Hypothesis

I. Introduction

The pair correlation conjecture of Hugh Montgomery, formulated in 1973 and discussed in Paper 3 of this suite, predicts that the local statistics of the imaginary parts of the nontrivial zeros of ζ — under the Riemann hypothesis — match the local statistics of eigenvalues of large random Hermitian matrices drawn from the Gaussian Unitary Ensemble. The prediction has been confirmed numerically with great accuracy, and it has motivated substantial subsequent work on moments of L-functions, on extreme values of ζ, and on the broader Keating–Snaith framework. Pair correlation, in its standard form, treats the zeros of ζ as a single statistical population, computing average correlations across the full set without regard to additional structure that might be imposed by conditioning on arithmetic information.

This paper proposes a refinement. The refinement is motivated by a structural observation: while the zeros of ζ globally satisfy GUE statistics (under RH and the pair correlation conjecture), the prime numbers themselves are not without structure — they distribute, modulo any fixed integer q, into residue classes coprime to q with proportions governed by Dirichlet’s theorem. This residue-class structure of the primes is encoded analytically in the Dirichlet L-functions L(s, χ) for characters χ modulo q, and the zeros of these L-functions are conjectured (under the Generalized Riemann Hypothesis) to lie on the same critical line as the zeros of ζ. The question this paper takes up is whether the residue-class structure of primes leaves a quantifiable signature on the local correlation statistics of ζ-zeros when one conditions on prime subsets defined by Dirichlet characters.

The conjecture proposed here, in compressed form, is that under RH the local pair correlation of zeros of ζ, weighted by character-restricted prime counting functions, exhibits a stratified deviation from the unconditional GUE prediction. The deviation is governed quantitatively by a weighted sum over low-lying zeros of the associated Dirichlet L-functions, with explicit constants depending on the modulus q. The conjecture is intended to be sharper than current pair correlation predictions, falsifiable computationally with present resources, and to entail concrete arithmetic consequences for prime gaps in arithmetic progressions, refined Bombieri–Vinogradov-type estimates, and possibly Linnik-type bounds.

This paper is structured to make the conjecture precise, to motivate it heuristically through random matrix considerations and through the analytic structure of the explicit formula, to outline its potential consequences if true, to position it within the existing conjectural ecosystem (so that it is clear what is novel and what is borrowed), to specify a computational program that would test it within a reasonable budget, and to be candid about its limitations and possible failure modes.

The conjecture is offered in the spirit of forward-looking hypothesis. It is not a proof of RH or a path to one. It is a sharper version of pair correlation that, if confirmed numerically and if eventually proved (conditionally on RH), would constitute a quantitative addition to the theory of L-function zeros. If it is falsified by computation, the falsification itself would be informative — it would indicate that the analytic structure of L-functions does not, in fact, leave the kind of fingerprint on ζ-zero statistics that the heuristics suggest, and would prompt revision of those heuristics.

II. Preliminaries

This section establishes notation and recalls the relevant analytic and number-theoretic background. The treatment is brief; the reader is referred to the standard references (Davenport, Iwaniec–Kowalski, Montgomery–Vaughan) for fuller exposition.

The Explicit Formula

The Riemann–von Mangoldt explicit formula relates the prime-counting function ψ(x) = ∑_{p^k ≤ x} log p to the nontrivial zeros of ζ:

ψ(x) = x − ∑_ρ x^ρ/ρ − log(2π) − (1/2) log(1 − x^{−2}),

where the sum is over the nontrivial zeros ρ of ζ, taken in symmetric pairs (ρ, 1 − ρ̄). For test functions f satisfying appropriate conditions, a weighted form of the explicit formula reads

∑_ρ f(γ) = (1/2π) ∫_{−∞}^{∞} f(t) Ω(t) dt − ∑_p ∑_{k≥1} (log p)/p^{k/2} · F(k log p),

where γ denotes the imaginary part of a zero ρ = 1/2 + iγ (under RH), F is the Fourier transform of f, and Ω(t) is the (logarithmic) main density of zeros. The right-hand side decomposes the sum over zeros into a smooth main term and a sum over prime powers; the prime side and the zero side of this identity are the two faces of the explicit formula.
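As a numerical sanity check (not part of the argument), the classical form of the explicit formula can be evaluated directly with mpmath: truncating the zero sum at the first 50 nontrivial zeros already reproduces ψ(x) to within a few units at small x. The evaluation point x = 50.5 (chosen between prime powers, so ψ is unambiguous) and the truncation at 50 zeros are illustrative choices.

```python
# Numerical check of the truncated explicit formula
#   psi(x) ~ x - sum_rho x^rho/rho - log(2*pi) - (1/2) log(1 - x^-2)
# using the first 50 nontrivial zeros of zeta (an illustrative truncation).
from mpmath import mp, zetazero, log, pi, mpf

mp.dps = 20
x = mpf("50.5")  # taken strictly between prime powers

# Direct computation of psi(x) = sum over prime powers p^k <= x of log p.
def psi(x):
    total = mpf(0)
    for p in range(2, int(x) + 1):
        if all(p % d for d in range(2, int(p**0.5) + 1)):  # p prime
            pk = p
            while pk <= x:
                total += log(p)
                pk *= p
    return total

# Truncated zero sum: zeros come in conjugate pairs, so take 2*Re of each term.
zero_sum = mpf(0)
for n in range(1, 51):
    rho = zetazero(n)              # rho = 1/2 + i*gamma_n
    zero_sum += 2 * (x**rho / rho).real

direct = psi(x)
approx = x - zero_sum - log(2 * pi) - log(1 - x**-2) / 2
print(float(direct), float(approx))
```

With only 50 zeros the oscillating remainder is visible but small; adding further zeros sharpens the agreement toward the jump discontinuities of ψ.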

Dirichlet Characters and L-Functions

For a positive integer q (the modulus), a Dirichlet character χ mod q is a homomorphism (Z/qZ)* → C*, extended by zero to integers not coprime to q. There are φ(q) characters modulo q, including the principal character χ_0 (which sends every n coprime to q to 1). A character is primitive if it is not induced by a character modulo any proper divisor q’ of q.

The Dirichlet L-function attached to χ is

L(s, χ) = ∑_{n=1}^∞ χ(n)/n^s = ∏_p (1 − χ(p)/p^s)^{−1}.

For χ ≠ χ_0 primitive, L(s, χ) is entire, satisfies a functional equation relating L(s, χ) and L(1 − s, χ̄), and (under GRH) has all nontrivial zeros on the critical line Re(s) = 1/2. Denote these zeros by 1/2 + iγ_n(χ), with γ_n(χ) ordered by absolute value. The lowest zero of L(s, χ) is 1/2 + iγ_1(χ); for most characters of small modulus, γ_1(χ) is of order 1 to 10, with explicit values computable.
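To the right of the critical strip the Euler product converges rapidly, which can be illustrated for the character χ_4 mod 4 (values +1, 0, −1, 0 at n ≡ 1, 2, 3, 0): at s = 2 the product approaches L(2, χ_4), which is Catalan's constant. The truncation at primes below 10^5 is an illustrative choice.

```python
# Truncated Euler product for L(s, chi_4) at s = 2, where chi_4 is the
# nontrivial character mod 4; L(2, chi_4) is Catalan's constant.
from mpmath import mp, catalan, mpf

mp.dps = 20

def chi4(n):
    return {1: 1, 3: -1}.get(n % 4, 0)

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, n + 1) if sieve[i]]

prod = mpf(1)
for p in primes_up_to(10**5):
    c = chi4(p)
    if c:
        prod /= 1 - mpf(c) / p**2          # Euler factor (1 - chi(p)/p^2)^(-1)

print(float(prod), float(catalan))
```

The tail over primes above 10^5 perturbs the product by less than ∑_{n>10^5} n^{−2} ≈ 10^{−5}, so the truncation already matches Catalan's constant to four decimal places.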

Pair Correlation

The standard pair correlation function for ζ-zeros is, in Montgomery’s normalization,

F(α, T) = (1/N(T)) ∑_{0 < γ, γ’ ≤ T} T^{iα(γ − γ’)} w(γ − γ’),

where N(T) is the number of zeros up to height T, and w is a weight function (typically w(u) = 4/(4 + u²)). Montgomery’s conjecture, in this notation, is that as T → ∞,

F(α, T) → F_GUE(α)

uniformly on compact sets, where F_GUE is the GUE pair correlation function. Montgomery proved this for |α| ≤ 1 under RH; the full conjecture for arbitrary α remains open.
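For reference, the two-point density underlying F_GUE is R_2(x) = 1 − (sin πx / πx)², which vanishes quadratically at x = 0 (level repulsion) and tends to 1 at large separation; Montgomery's F(α, T) is, up to diagonal contributions, the Fourier dual of this density. A minimal sketch:

```python
# GUE pair correlation density R_2(x) = 1 - (sin(pi x)/(pi x))^2:
# zero at x = 0 (level repulsion), approaching 1 for well-separated levels.
import math

def r2_gue(x: float) -> float:
    if x == 0.0:
        return 0.0                      # limit of 1 - sinc^2 at the origin
    s = math.sin(math.pi * x) / (math.pi * x)
    return 1.0 - s * s

for x in (0.0, 0.25, 0.5, 1.0, 3.0):
    print(f"R_2({x}) = {r2_gue(x):.6f}")
```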

Bombieri–Vinogradov and Friends

The Bombieri–Vinogradov theorem is one of the central unconditional results in analytic number theory. It asserts that for any A > 0, there exists B = B(A) such that

∑_{q ≤ x^{1/2} (log x)^{−B}} max_{(a,q)=1} |ψ(x; q, a) − x/φ(q)| ≪ x (log x)^{−A},

where ψ(x; q, a) is the analog of ψ(x) restricted to integers congruent to a mod q. The theorem says that, on average over q up to about √x, the primes distribute uniformly among the residue classes coprime to q with the strength one would expect from GRH for individual q. It is, in this sense, a “GRH on average” theorem.
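The equidistribution that Bombieri–Vinogradov averages over can be observed directly at small scale; a minimal numerical sketch (the choices q = 7 and x = 10^5 are illustrative, and this inspects a single modulus rather than the average over q):

```python
# psi(x; q, a) = sum of Lambda(n) over n <= x with n = a (mod q),
# compared against the expected main term x/phi(q), for q = 7.
import math

def psi_in_classes(x: int, q: int):
    totals = {a: 0.0 for a in range(q) if math.gcd(a, q) == 1}
    for p in range(2, x + 1):
        if all(p % d for d in range(2, int(p**0.5) + 1)):  # p prime
            logp = math.log(p)
            pk = p
            while pk <= x:
                if math.gcd(pk, q) == 1:
                    totals[pk % q] += logp
                pk *= p
    return totals

x, q = 10**5, 7
phi_q = sum(1 for a in range(1, q) if math.gcd(a, q) == 1)
totals = psi_in_classes(x, q)
main = x / phi_q
worst = max(abs(v - main) for v in totals.values())
print(f"max_a |psi(x;q,a) - x/phi(q)| = {worst:.1f}  (main term {main:.1f})")
```

Even at this modest height, the worst class deviates from x/φ(q) by well under one percent, consistent with square-root cancellation.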

A famous open question, the Elliott–Halberstam conjecture, asserts that one can replace x^{1/2} by x^{1−ε} in Bombieri–Vinogradov. This conjecture would follow from quite sharp control on Dirichlet L-function zeros and is widely believed but unproven.

Selberg Orthogonality

The Selberg orthogonality conjectures, formulated for the Selberg class S, predict that distinct primitive L-functions in the class have orthogonal Dirichlet coefficients in a precise sense. For F, G primitive in S with Dirichlet coefficients a_n(F), a_n(G), the conjecture asserts

∑_{p ≤ x} a_p(F) ā_p(G) / p = δ_{F,G} log log x + O(1),

where δ_{F,G} = 1 if F = G and 0 otherwise. Selberg orthogonality has been proved for various subclasses, including for Dirichlet L-functions of distinct primitive characters, where it follows from the orthogonality of the characters themselves.

For our purposes, the relevant case is Dirichlet L-functions: for distinct primitive characters χ_1, χ_2 modulo q,

∑_{p ≤ x} χ_1(p) χ̄_2(p) / p = O(1),

which is a consequence of Dirichlet’s theorem and partial summation.
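The dichotomy can be illustrated numerically with two characters mod 5: the diagonal sum grows like log log x, while the cross sum stays bounded. The cutoff x = 10^5 is an illustrative choice.

```python
# Orthogonality of Dirichlet characters mod 5 on primes:
# the diagonal sum grows like log log x, the cross sum stays bounded.
import math

# Quartic character chi1 determined by chi1(2) = i (2 generates (Z/5Z)*):
# chi1(4) = i^2 = -1, chi1(3) = chi1(2^3) = i^3 = -i.
CHI1 = {1: 1, 2: 1j, 3: -1j, 4: -1}
# Quadratic character chi2 = chi1^2.
CHI2 = {1: 1, 2: -1, 3: -1, 4: 1}

def primes_up_to(n):
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p**0.5) + 1))]

x = 10**5
diag = 0.0
cross = 0 + 0j
for p in primes_up_to(x):
    if p % 5 == 0:
        continue
    diag += abs(CHI1[p % 5]) ** 2 / p              # = sum of 1/p, p != 5
    cross += CHI1[p % 5] * CHI2[p % 5].conjugate() / p

print(f"diagonal sum = {diag:.4f}   (log log x = {math.log(math.log(x)):.4f})")
print(f"|cross sum|  = {abs(cross):.4f}")
```

By Mertens' theorem the diagonal sum is log log x + M − 1/5 + o(1) ≈ 2.5 at this cutoff, while the cross sum remains a small bounded quantity, exactly as the orthogonality relation predicts.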

III. Statement of the Conjecture

This section presents the conjecture in successive levels of precision.

Setup

Fix a modulus q ≥ 3 and a non-principal primitive Dirichlet character χ modulo q. Consider a smooth, compactly supported test function f: R → R, with Fourier transform f̂, both rapidly decreasing. For T large, define the character-weighted prime sum

S_χ(T; f) = ∑_p (χ(p) log p)/√p · f̂(log p / log T).

This is a smoothed version of the Dirichlet L-function’s contribution to its own explicit formula: the sum is supported on primes, weighted by the character, with the weight (log p)/√p providing the natural normalization for the critical line, and f̂(log p/log T) providing a smooth cutoff at primes up to roughly T.
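A toy evaluation of S_χ(T; f) makes the character-induced cancellation visible. The choices below — χ = χ_4, f̂(u) = e^{−u²}, T = 10^3, and a truncation of the prime sum at 10^5 (by which point the Gaussian weight has made the terms small) — are illustrative, not canonical.

```python
# Toy evaluation of S_chi(T; f) = sum_p chi(p) log(p)/sqrt(p) * fhat(log p / log T)
# for chi = chi_4, with fhat(u) = exp(-u^2) as an illustrative test function.
import math

def chi4(n):
    return {1: 1, 3: -1}.get(n % 4, 0)

def fhat(u):
    return math.exp(-u * u)

T = 10**3
cutoff = 10**5          # truncation point for the sketch; the tail is small
s_chi, s_abs = 0.0, 0.0
for p in range(2, cutoff + 1):
    if all(p % d for d in range(2, int(p**0.5) + 1)):  # p prime
        term = math.log(p) / math.sqrt(p) * fhat(math.log(p) / math.log(T))
        s_chi += chi4(p) * term
        s_abs += term

print(f"S_chi = {s_chi:.4f},  sum of |terms| = {s_abs:.4f}")
```

The signed sum comes out far smaller than the sum of absolute values: the primes in the classes 1 and 3 mod 4 cancel against each other, which is the residue-class structure the conjecture seeks to exploit.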

Define correspondingly the character-conditional pair correlation of ζ-zeros:

F_χ(α, T; f) = (1/N(T)) ∑_{0 < γ, γ’ ≤ T} f(γ − γ’) T^{iα(γ − γ’)} S_χ(T; f)^* S_χ(T; f),

where N(T) is the number of nontrivial zeros of ζ up to height T (counted with multiplicity; the zeros are presumed simple under the additional simplicity hypothesis) and the asterisk denotes complex conjugation. The function F_χ thus measures the pair correlation of ζ-zeros, weighted by the character-restricted prime data through the factor S_χ.

The Conjecture, First Form

Conjecture (Stratified Zero–Prime Resonance, weak form). Under RH and GRH for L(s, χ), as T → ∞,

F_χ(α, T; f) − F(α, T; f) · |S_χ(T; f)|² → C(α, χ; f),

where C(α, χ; f) is a function of α, χ, and f, depending on the low-lying zeros of L(s, χ) but not on additional information from ζ.

The weak form asserts only that there is a deviation from the product F · |S_χ|² (which would be the “independent” expectation, where the zero correlations are independent of the character-weighted prime data) and that the deviation depends on L(s, χ).

The Conjecture, Strong Form

The strong form gives the leading term of C(α, χ; f) explicitly.

Conjecture (Stratified Zero–Prime Resonance, strong form). Under RH and GRH for L(s, χ), as T → ∞,

C(α, χ; f) = κ(q) · f̂(α γ_1(χ) / log T) · |L(1, χ)|² · (1 + ε(α, χ; f, T)),

where γ_1(χ) is the imaginary part of the lowest nontrivial zero of L(s, χ), L(1, χ) is the value of L at s = 1, κ(q) is an explicit constant of the form

κ(q) = c · log q / φ(q)

for an absolute constant c > 0, and ε(α, χ; f, T) → 0 as T → ∞ at a rate of (log T)^{−1/2 + ε}.

The strong form predicts a specific functional form: the leading deviation is a Fourier transform of f evaluated at a point determined by the lowest zero of L(s, χ), scaled by L(1, χ) (the value at the edge of the critical strip), with a modulus-dependent constant.

The choice of γ_1(χ) as the relevant zero rather than a sum over all zeros is the key prediction. The heuristic argument in Section IV motivates this: at the relevant scale (set by f̂ at heights of order log T), only the lowest L-function zero contributes leading-order signal; higher zeros contribute terms that are smaller by factors of (log T)^{−1}.

Remarks on the Statement

The conjecture is conditional on RH for ζ and on GRH for L(s, χ). The conditioning is essential: without RH, the imaginary parts γ are not real, and the pair correlation is not even defined in the form given; without GRH for L(s, χ), the value γ_1(χ) is not well-defined (the lowest zero might not lie on the critical line). The conjecture takes both hypotheses as given and predicts a sharper structural fact downstream of them.

The dependence on f is through f̂, which is a standard feature of pair correlation results. The dependence on χ enters through three quantities: γ_1(χ) (which sets the scale of the deviation), L(1, χ) (which sets its size), and q (which sets the constant prefactor). All three are computable for any specific character.

The error term ε(α, χ; f, T) → 0 at rate (log T)^{−1/2+ε} is a heuristic prediction. The rate corresponds to what one would expect from random matrix theory analogs and from the structure of error terms in standard pair correlation. It is not derived rigorously here.

IV. Heuristic Justification

The conjecture is motivated by two complementary lines of reasoning: a random matrix analog and an analytic argument from the explicit formula. Each suggests the same functional form.

The Random Matrix Analog

Random matrix theory predicts that the zeros of ζ behave statistically like eigenvalues of large random Hermitian matrices from the Gaussian Unitary Ensemble. The pair correlation function F_GUE captures the leading-order joint statistics of pairs of eigenvalues.

In the Keating–Snaith framework, finer structures of L-functions correspond to finer random matrix ensembles. The L-function L(s, χ) for a primitive character χ corresponds, in this framework, to a different ensemble — typically interpreted as a unitary symmetry class with additional symmetry constraints from the character. The zeros of L(s, χ) are predicted to follow the corresponding ensemble’s statistics.

The question then is: when one conditions ζ-zero correlations on the prime data weighted by χ, does the resulting conditional statistics deviate from the unconditional GUE prediction? The random matrix analog suggests yes: conditioning on character-weighted data corresponds, in the random matrix translation, to projecting onto a subspace defined by the character-symmetry. Pair correlation in such a subspace deviates from the full GUE pair correlation, with a deviation governed by the structure of the subspace.

The lowest zero γ_1(χ) of L(s, χ) plays a special role in this framework: in the random matrix analog, the lowest eigenvalue of the corresponding ensemble sets the scale of the lowest “mode” of the projection. Higher modes contribute at lower amplitude. This translates, on the L-function side, into the prediction that the leading deviation in F_χ is governed by γ_1(χ), with higher zeros contributing subleading corrections.

The Analytic Argument

The same prediction can be motivated from the explicit formula directly. The explicit formula for L(s, χ), in a form analogous to that for ζ, reads

ψ(x; χ) = ∑_{n ≤ x} χ(n) Λ(n) = − ∑_{ρ_χ} x^{ρ_χ}/ρ_χ + (lower order),

where ρ_χ runs over the nontrivial zeros of L(s, χ) and Λ is the von Mangoldt function. Under GRH, ρ_χ = 1/2 + iγ_n(χ).

When one computes F_χ(α, T; f) by expanding S_χ(T; f) in terms of primes and then applying the explicit formula in reverse, one obtains an expression involving sums over ζ-zeros and L(s, χ)-zeros simultaneously. The cross-terms — where a ζ-zero pairs with an L(s, χ)-zero — produce the deviation from the product F · |S_χ|².

At the relevant scale (test functions f̂ supported on intervals of order log T), the dominant cross-term comes from the lowest L-function zero: γ_1(χ), being the smallest in absolute value, produces the largest exponential T^{iα γ_1(χ)/log T}, and this exponential dominates the Fourier transform f̂. Higher zeros γ_n(χ) for n ≥ 2 produce smaller exponentials and contribute terms suppressed by factors of (log T)^{−1} relative to the leading term.

The factor L(1, χ) appears naturally as the residue-like quantity at the edge of the critical strip — it captures the “global density” of the character-weighted prime data. The constant κ(q) = c log q/φ(q) reflects the modulus dependence: the character-weighted sum has natural scale log q (the conductor), normalized by the number of relevant residue classes φ(q).

Comparison of the Two Lines

The random matrix analog and the explicit formula argument arrive at the same functional form by different routes. This convergence is encouraging — it suggests that the conjectured form captures something structural about the joint behavior of ζ and L(s, χ), rather than being an artifact of one particular framework.

The convergence is not a proof. Both arguments make assumptions: the random matrix argument assumes that the L-function-conditioned statistics of ζ-zeros really do correspond to a subspace projection in the random matrix analog; the explicit formula argument assumes that the cross-terms can be controlled by the lowest L-function zero alone, with higher zeros suppressed. Each of these assumptions is plausible but not derived from first principles.

V. Consequences If True

If the conjecture is correct, several arithmetic consequences follow. This section sketches the most important.

Sharper Bombieri–Vinogradov Estimates

The Bombieri–Vinogradov theorem gives strong-on-average control of primes in arithmetic progressions. The conjecture, applied to derive bounds on prime sums weighted by characters, predicts an explicit error term whose size depends on γ_1(χ) and L(1, χ) for each character χ.

Specifically, for q in a range where γ_1(χ) and L(1, χ) are computable, the conjecture predicts

ψ(x; q, a) = x/φ(q) + O_χ(x^{1/2} (log x)^{C(γ_1(χ))}),

where the implied constant depends on χ through γ_1(χ) and L(1, χ) in an explicit way derived from the conjecture’s leading-term formula. The exponent C(γ_1(χ)) is conjecturally smaller than the standard exponent of 2 in known forms of GRH-conditional bounds.

The improvement is modest in absolute terms but structurally significant: it represents a refinement of GRH-conditional bounds using information about individual L-function zeros, rather than just the assumption that all zeros lie on the critical line.

Refined Linnik-Type Bounds

Linnik’s theorem asserts that for coprime integers a and q, the least prime p ≡ a (mod q) is bounded by q^L for some absolute constant L. The unconditional value of L is approximately 5 (with various improvements over the years). Under GRH, L = 2 + ε is achievable.

The conjecture predicts a refinement: the constant in Linnik’s theorem under GRH should depend on L(1, χ) and γ_1(χ) for the relevant characters mod q, in a way that is sharper than the L = 2 + ε bound for moduli q where these quantities behave favorably. Specifically, for moduli q where γ_1(χ) is bounded below by a constant for all χ mod q (a property that can be checked computationally for any specific q), the conjecture predicts L = 2 + δ(q) for an explicit δ(q) → 0.
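The behavior of least primes at small moduli can be inspected directly. The check below computes, for every q up to 50 and every a coprime to q, the least prime p ≡ a (mod q); the comparison against 2q² is a loose empirical safety margin for the demonstration, not a theorem.

```python
# Least prime p = a (mod q) over all residues a coprime to q, for q <= 50,
# compared against q^2 (the GRH-scale benchmark in Linnik's problem).
import math

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def least_prime(q, a):
    p = a
    while p < 2 or not is_prime(p):
        p += q                  # Dirichlet's theorem guarantees termination
    return p

worst = {}
for q in range(3, 51):
    worst[q] = max(least_prime(q, a)
                   for a in range(1, q) if math.gcd(a, q) == 1)

ratio = max(worst[q] / q**2 for q in worst)
print(f"max over 3 <= q <= 50 of (least prime)/q^2 = {ratio:.3f}")
```

For this range the worst ratio stays below 1 (it is attained at very small q, e.g. the least prime ≡ 1 mod 3 is 7), illustrating how much room the exponent 2 leaves at small moduli.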

Implications for Chowla’s Conjecture

Chowla’s conjecture concerns the correlations of the Möbius function: it predicts that

∑_{n ≤ x} μ(n) μ(n + h_1) μ(n + h_2) … μ(n + h_k) = o(x)

for any fixed distinct nonzero shifts h_1, …, h_k. The conjecture is a “non-correlation” statement: the Möbius function should look statistically independent on shifted versions of itself.
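The k = 1 shifted case can be probed numerically; a sketch with x = 10^5 and shift h = 2 (both illustrative choices), using a standard sieve for the Möbius function:

```python
# Normalized Mobius autocorrelation (1/x) * sum_{n <= x} mu(n) mu(n+2),
# which Chowla's conjecture predicts tends to 0 as x grows.
def mobius_up_to(n):
    mu = [1] * (n + 1)
    is_p = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_p[p]:
            for m in range(p, n + 1, p):
                if m > p:
                    is_p[m] = False
                mu[m] *= -1            # one sign flip per prime factor
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0              # kill non-squarefree n
    return mu

x = 10**5
mu = mobius_up_to(x + 2)
corr = sum(mu[n] * mu[n + 2] for n in range(1, x + 1)) / x
mean = sum(mu[n] for n in range(1, x + 1)) / x
print(f"(1/x) sum mu(n)mu(n+2) = {corr:.5f},  (1/x) sum mu(n) = {mean:.5f}")
```

Both the autocorrelation and the plain Möbius mean are already tiny at this height, consistent with the non-correlation statement (though of course no finite computation decides it).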

Chowla’s conjecture is closely connected to the distribution of L-function zeros: under suitable orthogonality and zero-spacing assumptions, the conjecture follows. The Stratified Zero–Prime Resonance Conjecture, by giving explicit information about how character-weighted prime data interact with ζ-zero statistics, contributes to the body of conditional results that would, taken together, imply Chowla. Specifically, the conjecture would refine the error terms in known partial cases of Chowla due to Tao and others, where progress has been made under various analytic assumptions.

Implications for Twin Primes and Prime Gaps

The Hardy–Littlewood twin prime conjecture predicts that the number of twin primes (p, p + 2) up to x is asymptotically C_2 · x/(log x)², where C_2 is the twin prime constant. This conjecture lies beyond what is currently provable even under GRH.

The conjecture proposed here does not directly imply the twin prime conjecture. But it would refine partial results on prime gaps: under the conjecture, the variance of ψ(x + h) − ψ(x) for short intervals h of order x^{1/2 + ε} is governed by an explicit formula involving γ_1(χ) for characters χ of small modulus. This is a quantitative sharpening of variance results due to Saffari, Vaughan, Goldston–Montgomery, and others, with the new content being the explicit dependence on individual L-function zeros.

A Caution on Consequences

These consequences are conditional on the strong form of the conjecture. The weak form (which only asserts that some deviation exists) yields qualitative analogs but not quantitative refinements with explicit constants. The arithmetic consequences thus depend on the more speculative strong form, with its specific functional dependence on γ_1(χ) and L(1, χ).

If only the weak form turns out to be correct, the qualitative consequences (existence of deviations, structural connections to L-function zeros) would still hold, but the explicit refinements of Bombieri–Vinogradov, Linnik, Chowla, and prime gap variance would not be derivable in the form sketched.

VI. Logical Position Relative to RH and GRH

The conjecture is conditional on RH and GRH. This section addresses the logical relationships more carefully.

Conditional on RH for ζ

The conjecture concerns pair correlation of imaginary parts of ζ-zeros, which presupposes that the zeros are of the form 1/2 + iγ with γ real — that is, RH for ζ. Without RH, the conjecture is not well-defined in the form stated.

A reformulation without RH would replace the imaginary parts with the relevant projections of the zeros onto the critical line. Such a reformulation is possible but cumbersome. The cleaner statement assumes RH.

Conditional on GRH for L(s, χ)

The conjecture references γ_1(χ), the imaginary part of the lowest zero of L(s, χ), under the assumption that this zero lies on the critical line. Without GRH for L(s, χ), the lowest zero might lie off the critical line, and the conjecture’s statement would need revision.

A natural variant: the conjecture could be stated as predicting deviations governed by the lowest zero of L(s, χ) wherever that zero is, real or off-line, with the corresponding modification of the formulas. This variant is potentially more interesting in that it would be testable against L-functions whose RH analog is unproved, but it loses the clean form of the strong conjecture and is harder to motivate from the random matrix analog (which presumes self-adjoint structure and hence real eigenvalues).

Does the Conjecture Imply GRH for Some Subclass?

A natural question: if the conjecture is true (in its strong form) for all primitive Dirichlet characters χ, does it imply GRH for Dirichlet L-functions?

The answer is: not directly. The conjecture takes GRH as input. A strong form holding for all χ would constrain the joint behavior of ζ-zeros and L(s, χ)-zeros, but it would not force the L(s, χ)-zeros onto the critical line: the conjecture’s statement involves only the lowest such zero, not the entirety of the zero set.

A potential implication might be obtained by a different route: if the conjecture’s strong form is so sharp that any failure of GRH for L(s, χ) (e.g., a Siegel zero) would produce a violation, then conditional on the strong form, GRH for L(s, χ) follows. This kind of implication would require working out, in detail, what a Siegel zero would do to F_χ(α, T; f), and showing that it is incompatible with the predicted asymptotic. Whether this can be made rigorous is open.

Could the Conjecture Be True If RH Failed?

If RH fails for ζ — that is, if some ζ-zeros lie off the critical line — the conjecture as stated does not apply. A modified version, treating ζ-zeros as complex numbers with possibly nontrivial real parts, could be formulated. In such a modified version, the predicted deviations from “independent” pair correlation would still arise from the analytic structure of L-functions, but the formulas would be more complex.

The question of what the conjecture would look like in a counterfactual world without RH is not a frivolous one. It is closely related to the question of how robust the heuristics behind the conjecture are. If the heuristics rely essentially on RH (e.g., on the random matrix analog presupposing self-adjointness), then the conjecture is properly understood as a refinement of pair correlation conditional on RH. If the heuristics are more robust, then the conjecture admits a more general form, with RH as a special case.

VII. Computational Program

A central virtue of the conjecture is that it is testable computationally with present resources. This section outlines the testing program.

Methodology

The test proceeds as follows:

  1. Compute zeros of ζ to sufficient height. Existing computational efforts (Odlyzko, Platt, and others) have produced ζ-zeros to heights well into the trillions. For purposes of testing the conjecture, heights of order T = 10^6 to 10^8 are sufficient — a regime accessible to modest computational resources (high-end personal computer with multi-day computation, or modest cluster time).
  2. Compute the lowest zeros of L(s, χ) for primitive characters χ of small modulus. For q in the range 3 ≤ q ≤ 50 or so, all primitive characters have computable lowest zeros. The values γ_1(χ) for such χ have been tabulated or are obtainable through standard L-function computation packages (the LMFDB database is one source).
  3. Compute L(1, χ) for the same characters. These values are likewise available through standard tools.
  4. For test functions f of standard form (e.g., Gaussian, or compactly supported smoothings), compute F_χ(α, T; f) directly from the data of step 1, weighted using the prime data implicit in step 1.
  5. Compute the predicted right-hand side: F(α, T; f) · |S_χ(T; f)|² + C(α, χ; f), with C(α, χ; f) given by the strong-form formula.
  6. Compare. The conjecture predicts that the difference is small (order (log T)^{−1/2}) and has the predicted functional form.
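Step 2 can be sketched for χ_4 without an L-function package, using the Hurwitz-zeta identity L(s, χ_4) = 4^{−s}(ζ(s, 1/4) − ζ(s, 3/4)) and the fact that the completed function Λ(s, χ_4) = (4/π)^{(s+1)/2} Γ((s+1)/2) L(s, χ_4) is real on the critical line (χ_4 is odd and real with root number 1), so γ_1(χ_4) can be located by bisection on a sign change. The bracketing interval [5.5, 6.5] is chosen with the known location of the zero in mind.

```python
# Locate the lowest zero gamma_1(chi_4) of L(s, chi_4) on the critical line
# by bisecting a sign change of the completed L-function, which is
# real-valued there (chi_4 is odd and real, with root number 1).
from mpmath import mp, zeta, gamma, pi, mpc, mpf

mp.dps = 25

def L_chi4(s):
    # Hurwitz-zeta identity: L(s, chi_4) = 4^{-s} (zeta(s,1/4) - zeta(s,3/4))
    return mpf(4)**(-s) * (zeta(s, mpf(1) / 4) - zeta(s, mpf(3) / 4))

def completed(t):
    s = mpc(mpf(1) / 2, t)
    val = (4 / pi)**((s + 1) / 2) * gamma((s + 1) / 2) * L_chi4(s)
    return val.real            # imaginary part vanishes up to rounding

lo, hi = mpf("5.5"), mpf("6.5")       # brackets the first sign change
assert completed(lo) * completed(hi) < 0
for _ in range(60):
    mid = (lo + hi) / 2
    if completed(lo) * completed(mid) <= 0:
        hi = mid
    else:
        lo = mid

gamma_1 = (lo + hi) / 2
print(f"gamma_1(chi_4) ~ {float(gamma_1):.6f}")
```

The same recipe (Hurwitz-zeta decomposition plus the appropriate Gamma factor) applies to any primitive character of small modulus; in practice, tabulated values from the LMFDB serve as a cross-check.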

Predicted Values for Small q

For q = 3, the unique non-principal primitive character is the real character χ_3 (the Legendre symbol mod 3), with L(s, χ_3) the L-function of the Dirichlet series 1 − 1/2^s + 1/4^s − 1/5^s + …. The lowest zero γ_1(χ_3) is approximately 8.039. The value L(1, χ_3) = π/(3√3) ≈ 0.6046.

For q = 4, the unique non-principal primitive character is χ_4 (the non-trivial character mod 4). The lowest zero γ_1(χ_4) is approximately 6.020. The value L(1, χ_4) = π/4 ≈ 0.7854.
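Both closed-form values above can be verified numerically by grouping the conditionally convergent Dirichlet series into blocks of positive terms:

```python
# Numerical check of L(1, chi_3) = pi/(3*sqrt(3)) and L(1, chi_4) = pi/4,
# with each conditionally convergent series grouped into positive terms.
from mpmath import mp, nsum, pi, sqrt, inf

mp.dps = 25

# L(1, chi_3) = sum over k >= 0 of (1/(3k+1) - 1/(3k+2))
L1_chi3 = nsum(lambda k: 1 / (3 * k + 1) - 1 / (3 * k + 2), [0, inf])
# L(1, chi_4) = sum over k >= 0 of (1/(4k+1) - 1/(4k+3))
L1_chi4 = nsum(lambda k: 1 / (4 * k + 1) - 1 / (4 * k + 3), [0, inf])

print(float(L1_chi3), float(pi / (3 * sqrt(3))))   # ~0.6046
print(float(L1_chi4), float(pi / 4))               # ~0.7854
```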

For q = 5, there are three non-principal characters, all primitive (5 is prime, so no non-principal character mod 5 is induced from a smaller modulus). The lowest zeros are of order 6 to 9, with explicit values computable.

For q = 7, there are five non-principal characters, all primitive. The lowest zeros are of order 4 to 8.

For each of these characters, the strong-form conjecture makes a specific quantitative prediction for the deviation of F_χ(α, T; f) from the independent expectation. The prediction can be compared against the computed value.

Statistical Methodology

The deviation predicted by the conjecture is small relative to the unconditional pair correlation: it is of relative size κ(q) · |L(1, χ)|² / |S_χ|², which for small q is on the order of 10^{−3} to 10^{−2}. Detecting a signal of this size requires computing F_χ at sufficient height that the statistical noise is smaller than the predicted signal.

The standard variance of pair correlation estimates at height T is of order (log T)^{−1}. To distinguish a signal of order 10^{−2} from noise, one needs (log T)^{−1} ≪ 10^{−2}, i.e., log T ≫ 100, i.e., T ≫ exp(100). This is not a feasible computation directly.

However, by averaging over a range of T values (effectively bootstrapping) and over multiple test functions f, one can reduce the effective noise by a factor of √k, where k is the number of independent samples. At heights around 10^6 to 10^7 the raw noise (log T)^{−1} is of order 10^{−1}; averaging with k of order 10^4 (feasible by averaging across the spectrum of zeros at these heights) divides this by √k = 100, bringing the effective noise to order 10^{−3}, below the predicted signal of order 10^{−2}.
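The √k noise-reduction arithmetic can be illustrated with a toy Monte Carlo, with Gaussian noise standing in for the pair correlation fluctuations (the signal size, noise level, and sample count are the illustrative orders of magnitude discussed above; the seed is fixed for reproducibility):

```python
# Averaging k independent noisy samples reduces the noise by ~sqrt(k):
# with per-sample noise of order 10^-1 and k = 10^4, the averaged estimate
# resolves a signal of order 10^-2.
import math, random

random.seed(0)
signal, sigma, k = 0.01, 0.1, 10_000   # toy stand-ins for the quantities above

samples = [signal + random.gauss(0.0, sigma) for _ in range(k)]
estimate = sum(samples) / k
stderr = sigma / math.sqrt(k)          # = 10^-3, below the 10^-2 signal

print(f"estimate = {estimate:.5f}, standard error = {stderr:.5f}")
```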

The computational program is thus feasible but not trivial. A careful statistical design — choosing test functions, averaging schemes, and characters — is necessary. A rough estimate of computational requirements: roughly 10^4 to 10^5 CPU-hours, distributed over multiple machines, would suffice for a definitive test of the strong form for q ≤ 10. This is within reach of academic computational resources.

Falsifiability

The conjecture is falsifiable in a precise sense. If, for some specific character χ of small modulus, the computed F_χ(α, T; f) differs from the predicted value by more than the statistical noise allows, the strong form is refuted. The computational program can produce such a refutation within a definite budget.

This is a virtue. Many conjectures in analytic number theory are stated in forms (asymptotic relations as T → ∞) that are not directly testable: any finite computation is consistent with the asymptotic claim. The conjecture proposed here, by predicting an explicit functional form with specific constants, is testable at finite T, with quantifiable confidence.

VIII. Limitations and Cautions

This section is candid about the conjecture’s limitations.

Speculative Status

The conjecture is offered as a working hypothesis, not as a claim of priority or completion. It is motivated by heuristic arguments, not derived from first principles. It is consistent with existing results on pair correlation but goes beyond them. It is testable but has not been tested.

The author of this paper is not staking the suite of papers on the conjecture’s truth. The conjecture is offered as an example of the kind of forward-looking hypothesis whose investigation is the natural successor to the survey of the first three papers. If it turns out to be false, the falsification will itself be informative.

Potential Failure Modes

Several specific failure modes deserve consideration.

Hidden cancellation. The leading-term prediction in the strong form assumes that the contribution from γ_1(χ) dominates higher zeros γ_n(χ) for n ≥ 2. If, by some unexpected cancellation, the contributions from the first few low-lying zeros add up to something substantially different from the γ_1(χ) prediction alone, the strong form would fail in its specific functional form, even if some weaker version (involving multiple low zeros) holds.

Misidentified scale. The conjecture predicts that the deviation is governed by a function of α γ_1(χ)/log T. This specific scaling is motivated by the heuristics but is not derived rigorously. If the actual scaling involves a different combination of γ_1(χ) and log T, the strong form would fail in its detailed form. Computationally, this would manifest as a deviation that has the right order of magnitude but a different functional shape.

Dependence on uniformity hypotheses. The heuristic arguments use random matrix analogies that are themselves conjectural (the Montgomery–Odlyzko law has been verified only in restricted ranges of test functions, even under RH). If the random matrix predictions break down at some level of detail, the conjecture’s heuristic foundation is weaker than supposed.

Modulus dependence. The constant κ(q) = c log q/φ(q) is asserted as an absolute form, but the heuristic arguments determine only the order of magnitude, not the precise constant c. If the actual modulus dependence has a different functional form (e.g., involves the discriminant of the cyclotomic field Q(ζ_q) in a non-trivial way), the strong form would fail in its detailed prefactor.

For each of these failure modes, the computational program can detect the failure. A test that disagrees with the strong-form prediction in its specific form, but agrees with a modified form, would be informative about which aspect of the heuristic argument failed.

Sharp Falsification Within Reach

The strong virtue of the conjecture is that it can be falsified within a reasonable computational budget. Many conjectures in number theory are not falsifiable in this sense: they are stated as asymptotic claims that any finite computation leaves consistent. The conjecture proposed here makes specific, finitely checkable predictions, with the predicted signal of size detectable by averaging at heights of order 10^6 to 10^7.

If, after a careful test, the predicted signal is not found, the conjecture is wrong as stated. If a different signal is found — one consistent with a modified version of the conjecture — the modification is informative and may point toward a corrected hypothesis. If the predicted signal is found, the conjecture is supported (though not proven), and further investigation of its consequences and possible proof becomes warranted.

IX. Relation to Existing Conjectures

The conjecture sits within an existing ecosystem of conjectures about L-function zeros. This section places it in relation to the most relevant of those.

Hybrid Euler–Hadamard Product Approach

Gonek, Hughes, and Keating in 2007 proposed a “hybrid” model of ζ in which the function is approximated by a product of two factors: a “primes” factor (a finite Euler product) and a “zeros” factor (a Hadamard product over zeros up to some height). The hybrid model has been used to derive predictions for moments and for extreme values of ζ on the critical line.

The conjecture proposed here is consistent with the hybrid model: the character-weighted prime sum S_χ(T; f) corresponds to a character-weighted version of the “primes” factor, and the deviation in F_χ corresponds to the cross-terms between the “primes” and “zeros” factors when the prime side is weighted by χ.

What is novel in the conjecture is the specific functional form of the deviation, governed by the lowest L-function zero γ_1(χ). The hybrid model alone does not single out γ_1(χ); it provides a framework in which various character-weighted statistics can be computed, but the explicit prediction that γ_1(χ) (and not some other combination of L-function data) governs the leading deviation is the new content.
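As a concrete illustration of the quantity γ_1(χ) that the conjecture singles out, the lowest zero for the nonprincipal character χ_4 modulo 4 can be located numerically. The sketch below (mpmath; the helper name L_chi4 is illustrative) builds L(s, χ_4) from Hurwitz zeta values via L(s, χ_4) = 4^{-s}(ζ(s, 1/4) − ζ(s, 3/4)) and grid-searches |L(1/2 + it)| for its minimum near the known value γ_1(χ_4) ≈ 6.0209.

```python
from mpmath import mp, mpc, mpf, zeta, power

mp.dps = 15

def L_chi4(s):
    # L(s, chi_4) expressed through the Hurwitz zeta function:
    # the nonprincipal character mod 4 is +1 on residues 1, -1 on residues 3.
    return power(4, -s) * (zeta(s, mpf(1) / 4) - zeta(s, mpf(3) / 4))

# Grid search for the minimum of |L(1/2 + it)| on t in [5.5, 6.5];
# the dip locates the lowest zero gamma_1(chi_4).
ts = [5.5 + 0.01 * k for k in range(101)]
vals = [abs(L_chi4(mpc(0.5, t))) for t in ts]
t_min = ts[vals.index(min(vals))]
print(round(t_min, 2))  # ≈ 6.02
```

For a general modulus q the same construction applies, with L(s, χ) = q^{-s} Σ_a χ(a) ζ(s, a/q), so the inputs γ_1(χ) to the conjectured deviation are cheaply computable for the small moduli the computational program would test.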

Refinements of Pair Correlation

Goldston, Gonek, and others have developed refined pair correlation estimates that go beyond Montgomery’s original conjecture. These refinements include explicit dependence on test function choices, sharpened error terms in restricted ranges, and connections to other zero statistics (triple correlations, n-level correlations).

The conjecture proposed here is, in this taxonomy, a character-stratified refinement: it conditions pair correlation on character-weighted prime data, where the existing refinements have not. The novelty is the explicit dependence on individual low-lying L-function zeros rather than on aggregate statistics.

Selberg Orthogonality Conjectures

The Selberg orthogonality conjectures, treated in Paper 2, predict that distinct primitive L-functions in the Selberg class have orthogonal Dirichlet coefficients. For Dirichlet L-functions, this orthogonality is known. The conjecture proposed here can be interpreted as adding a quantitative layer on top of orthogonality: not only are the L-functions orthogonal, but their interaction with ζ-zero statistics is governed quantitatively by their lowest zeros and their values at s = 1.

The Random Matrix Conjectures of Keating–Snaith

Keating and Snaith’s random matrix conjectures predict moments of |L(1/2 + it, χ)|^{2k} for fixed χ. The conjecture proposed here addresses a different statistic — pair correlation of ζ-zeros conditioned on χ-weighted prime data — but is consistent with the broader Keating–Snaith framework.

A potentially fruitful direction is to combine the two: use the Keating–Snaith framework to compute moments of S_χ(T; f), then use the conjecture proposed here to relate those moments to pair correlation data for ζ. This combination has not been worked out and could be a target of subsequent investigation.

X. Open Questions Generated by the Conjecture

The conjecture, if confirmed numerically and proved rigorously, would generate further questions. This section sketches the most important.

A Function Field Analog

The function field analog of RH and GRH is proved (Weil, Deligne). Within the function field setting, one can ask whether an analog of the Stratified Zero–Prime Resonance Conjecture holds. It would assert that the character-stratified pair correlation of the analogous “ζ-zeros” (i.e., Frobenius eigenvalues on cohomology) deviates from the unconditional GUE-style statistics in the corresponding way.

In the function field setting, the analog would be checkable rigorously, since the relevant zeros are eigenvalues of finite-dimensional operators with computable spectral data. If the function field analog of the conjecture holds, this would be substantial evidence for the number field version. If it fails, the failure would indicate that the conjecture is specifically a number field phenomenon, with properties not present in the function field setting.

This investigation has not been carried out. It would constitute a natural follow-up project.
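Short of working with actual curve data, the random matrix side of the function field picture can be sketched cheaply: eigenvalue angles of Haar-random unitary matrices (the CUE) stand in for Frobenius eigenvalue angles, and their unfolded spacings exhibit the GUE-style repulsion against which any stratified deviation would be measured. This is a toy model under that analogy, not a computation with cohomological data.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    # Sample from Haar measure on U(n): QR of a complex Ginibre matrix,
    # with the phases of R's diagonal absorbed into Q (Mezzadri's recipe).
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

# Collect unfolded eigenangle spacings over many CUE samples.
n, trials = 50, 200
spacings = []
for _ in range(trials):
    ang = np.sort(np.angle(np.linalg.eigvals(haar_unitary(n, rng))))
    gaps = np.diff(np.concatenate([ang, [ang[0] + 2 * np.pi]]))  # wrap around the circle
    spacings.extend(gaps * n / (2 * np.pi))  # unfold so the mean spacing is 1
spacings = np.array(spacings)
print(round(spacings.mean(), 3))  # exactly 1 by construction of the unfolding
print(round(spacings.var(), 3))   # ≈ 0.18 for CUE, far below the Poisson value 1
```

A rigorous function field test would replace the Haar samples with Frobenius conjugacy classes of actual curves over finite fields, stratified by character data.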

Higher-Rank Generalizations

The conjecture concerns pair correlation of ζ-zeros conditioned on Dirichlet L-function data. Dirichlet L-functions are degree-1 L-functions in the automorphic sense. A natural generalization is to higher-rank L-functions: degree-2 L-functions of modular forms (or, more generally, of automorphic representations of GL(2)), degree-n L-functions of automorphic representations of GL(n).

Each higher-rank L-function has its own zeros, and a generalization of the conjecture would predict deviations in F-statistics conditioned on the corresponding prime sums. The functional form of the deviation would presumably involve the lowest zero of the higher-rank L-function and its value at s = 1, in analogy with the Dirichlet case.

Working out the higher-rank generalization, and testing it against modular-form data, is a substantial project. The relevant L-function data are available (modular forms of small weight and level have been extensively computed), and the test would proceed by methods analogous to the Dirichlet case.

A Noncommutative Geometric Interpretation

The Connes program (treated in Paper 3) provides a noncommutative geometric framework in which ζ-zeros are interpreted spectrally. A natural question is whether the conjecture proposed here has a noncommutative geometric interpretation: does the character-stratification correspond to a decomposition of the noncommutative space into pieces indexed by characters?

If such an interpretation exists, it would provide structural insight into why the conjecture takes the form it does. The character χ would correspond to a “sector” of the noncommutative space; the lowest L-function zero γ_1(χ) would correspond to a lowest spectral mode in that sector; the deviation in F_χ would correspond to an interaction between sectors.

This interpretation is speculative. The Connes program has not been developed in this direction, and the connection between Dirichlet L-functions and the noncommutative adèle class space is not fully worked out at the level of detail required. But the question of whether such a connection exists is natural and could be productive.

Relations to the Langlands Program

The Langlands program predicts that automorphic L-functions form a coherent family with deep symmetries. Dirichlet L-functions are the simplest members of this family. The conjecture proposed here predicts a specific quantitative relationship between Dirichlet L-functions and ζ — through the joint statistics of their zeros — that is not, on its face, a Langlands-style prediction.

It is possible that a Langlands-program lens would reveal the conjecture as a special case of a much more general phenomenon. The general phenomenon would be that L-functions in a Langlands family do not merely have orthogonal Dirichlet coefficients; their zero statistics are jointly correlated with ζ-zero statistics, with explicit constants governed by the L-function data.

Working out this Langlands-style generalization is speculative. The Dirichlet case is tractable because Dirichlet L-functions are well understood; higher-rank cases would require substantial Langlands machinery and computational L-function data that are still being developed. The question of whether the conjecture is a “shadow” of a much larger Langlands-style phenomenon is open.

XI. Conclusion

The Stratified Zero–Prime Resonance Conjecture is offered as a forward-looking hypothesis about the joint behavior of zeros of ζ and zeros of Dirichlet L-functions. The conjecture is sharp (it predicts explicit constants), falsifiable (it can be tested computationally with present resources), and structurally connected to existing frameworks (pair correlation, Keating–Snaith moments, the hybrid Euler–Hadamard model, Selberg orthogonality).

It is not a proof of RH. It does not provide a path to a proof. It assumes RH and GRH as input, and it predicts a refinement of pair correlation that, if true, sits inside the existing conditional framework rather than transcending it.

Why offer such a conjecture, given that it does not bring the proof of RH any closer? The answer is structural. The body of mathematics surrounding RH is vast, and progress on RH itself has been incremental at best for decades. Forward progress on the broader framework of L-function statistics — what zeros do, how they correlate, what their finer structures look like — is, on present evidence, where genuine new mathematics is being produced. The Stratified Zero–Prime Resonance Conjecture is offered as one specific contribution to that body of forward progress: a conjecture sharp enough to be tested, structured enough to be either confirmed or instructively refuted, and connected enough to existing frameworks to fit into the ongoing conversation.

If the conjecture is confirmed numerically, the next step is to attempt a proof, conditional on RH and GRH. The methods would presumably involve careful analysis of the explicit formula for ζ and L(s, χ) jointly, with character-orthogonality reductions and random matrix theory inputs. A proof in the conditional sense would not resolve RH but would constitute a substantial addition to the conditional theory.

If the conjecture is refuted, the refutation will indicate either that the heuristics motivating it are flawed, or that the relationship between ζ and L(s, χ) is more subtle than the conjecture supposes. Either outcome is informative. The space of possible refinements of pair correlation is vast, and falsification of one specific candidate narrows the space and points toward correct candidates.

What can be said with confidence is that the joint statistics of ζ-zeros and L-function-zeros constitute a domain in which substantial mathematics remains to be done. The Riemann hypothesis itself may yield slowly or not at all in the coming decades; the surrounding theory of L-function statistics, by contrast, is actively developing, with new conjectures, new computational data, and new structural insights appearing regularly. The conjecture offered here is a contribution to that active domain — one specific hypothesis among many possible, advanced not as final truth but as a working proposition to be tested and either confirmed or improved upon.

The four papers of this suite have traced the Riemann hypothesis from its historical origins (Paper 1), through the field-theoretic framework that situates it within the broader landscape of L-functions and arithmetic geometry (Paper 2), through the survey of strategies that have been developed for its proof and the structural reasons for their success or stagnation (Paper 3), and now to a forward-looking conjecture that aims to add quantitative structure to the conditional theory (Paper 4). The hypothesis itself remains where Riemann left it: probable, supported, central, and unproved. The mathematics around it continues to grow, and contributions to that mathematics — including modest contributions like the conjecture proposed here — are how the discipline carries forward in the absence of a proof.
