The last few days have been exciting ones in mathematics, as a claimed proof of the Riemann Hypothesis was posted on the arXiv.
The Riemann Hypothesis, now 149 years old, is probably the most famous unsolved problem in mathematics now that Fermat's Last Theorem has been proven. (Had the RH appeared on a roll of toilet paper or something, with a flippant remark or a curse word beside it, it might well have eclipsed FLT in fame.)
The author is Xian-Jin Li of Brigham Young University, one of a group of mathematicians who have been making a serious push to settle the RH for several years. Li posted his proof on Tuesday at 12:43 pm MDT. Peter Woit blogged about it yesterday, and yesterday evening at 7:28 pm MDT Terry Tao of UCLA claimed to have found a problem in the proof.
But a new version of the proof (v3) was posted by Li last night at 8:44 pm MDT, so perhaps the story is not yet finished.
Li's proof is not long and does not look overly complex (of course, I'm not an expert in analytic number theory) -- you'd almost think you could understand it. There are even some integrals, which I did not realize were still allowed in modern mathematics.
Yang said there are two kinds of mathematical papers: the kind you can't understand past the first page and the kind you can't understand past the first sentence. Wouldn't it be amazing if the proof of the RH were the former!
PS: There is no shortage of purported proofs of the RH....
12 comments:
There is now a v4 of Li's proof
It would indeed be lovely. Of course, Li isn't the first one to post an absurdly short proof of RH on the arXiv.
Also of possible interest: here is another (compact) summary of some recent failed RH-proofs. The most entertaining reaction to any such failed proof is undoubtedly Brian Conrad's. Seems to be one of those times when duty calls. :)
This year, exactly one hundred and fifty years since Riemann's celebrated conjecture first appeared, someone has produced a plausible, ingenious proof that the hypothesis is true, one that is not too difficult to follow if you can remember complex integration by the residue method.
The trick turns out to be an extra change of variable, complex exponentiation, which produces a whole new world of poles and residues to count up. The spacing of poles is logarithmic in one direction (real) and constant in the other (imaginary) direction. I’d still prefer a few diagrams, but that just proves I’m still a physicist underneath…
The media seem not to have caught on yet (being somewhat wary of Yet Another Purported Proof, no doubt). Let’s hope the Clay Institute didn’t invest in Bernard L. Madoff Investment Securities!
Proof of Riemann's zeta-hypothesis
Arne Bergstrom
http://arxiv.org/abs/0809.5120
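An aside on the pole spacing mentioned above: here is a rough Mathematica sketch of my own (not from the paper), using the corrected solution form of Exp[Exp[u]] + 1 == 0 discussed further down this thread, u = Log[Pi*(2*n+1)] + I*Pi*(1/2 + 2*m). The real parts grow like a logarithm, while the imaginary parts are spaced by a constant 2*Pi:

(* A sketch of my own, assuming the corrected pole locations quoted later in the thread *)
u[m_, n_] := Log[Pi (2 n + 1)] + I Pi (1/2 + 2 m)
realGaps = N[Differences[Table[Re[u[0, n]], {n, 0, 5}]]]  (* shrinking gaps, equal to Log[(2 n + 3)/(2 n + 1)] *)
imagGaps = N[Differences[Table[Im[u[m, 0]], {m, 0, 5}]]]  (* constant gaps, equal to 2 Pi *)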
11:51 pm?
Yes.
Jul 3, 2008?
No way.
Arne Bergstrom's proof is a joke. The quotient of zeta(s)+O(1/N^(3-sigma)) and zeta(s)+O(1/N^(2+sigma)) is not 1.
When zeta(s)=0, this quotient is of O(N^(-1+2*sigma)).
Of course, if you assume this to be 1, which is of O(N^0), then sigma must be 1/2. But the quotient is not 1.
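For readers who want the arithmetic behind that comment spelled out, here is my reading of it (the next commenter disputes the reasoning, so take this as a record of the claim rather than an endorsement). With \( s = \sigma + it \) and \( \zeta(s) = 0 \), the comment treats the quotient as
\[
  \frac{\zeta(s) + O(N^{\sigma - 3})}{\zeta(s) + O(N^{-\sigma - 2})}
  = \frac{O(N^{\sigma - 3})}{O(N^{-\sigma - 2})}
  = O(N^{2\sigma - 1}),
\]
and argues that equating this with \( 1 = O(N^{0}) \) forces \( 2\sigma - 1 = 0 \), that is \( \sigma = 1/2 \).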
Hung, I think you don't understand asymptotic notation. That said, I have no idea whether his proof is correct, but your comment is surely wrong based on what I do know.
In section "3. Poles and residues" in Arne Bergstrom's paper "Proof of Riemann's zeta-hypothesis there is the claim that:
"Exp[Exp[u]] + 1 == 0
which can be verified to have the following solutions (m and n are integers, n > 1)
u = Log[Pi*(2*n-1)]+I*Pi*(1/2+m)"
That is incorrect.
The correct solution is given by Mathematica as:
u == Log[I*Pi*(2*n + 1)] + 2*I*Pi*m
where m and n are integers
To try this yourself enter the following line into Wolfram Alpha:
Reduce[Exp[Exp[u]] + 1 == 0, u]
Mats, you merely show that you are not at all familiar with complex numbers. Wolfram Alpha did not simplify the answer. So you are simply wrong about there being a mistake in section 3. Anyone who knows complex analysis can easily check what I say for himself.
I don't know any complex analysis, and I don't know how to simplify the expression with a complex number inside the logarithm.
However by looking at the decimal numbers when trying some integer values in Mathematica, I make the guess that the correct simplified solution for n>=0 and m>=0 could be something like:
Log[Pi*(2*n + 1)] + I*Pi*(1/2 + m*2)
which would amount to only two minor corrections: a plus instead of a minus next to 2*n, and the variable "m" multiplied by 2. Arne Bergstrom himself pointed out the "m" times 2 correction. For n<0 and m<0, this guess still disagrees by a term of 2*Pi*I. But since the paper considers only m and n greater than or equal to 1, I don't know whether it matters.
The Wolfram Alpha code for verifying this guess is:
N[Table[Table[2 I \[Pi]*m + Log[I (\[Pi] + 2 \[Pi]*n)], {m, -6, 6}], {n, -6, 6}]] - N[Table[Table[Log[Pi*(2*n + 1)] + I*Pi*(1/2 + m*2), {m, -6, 6}], {n, -6, 6}]]
Hopefully I am not offending anyone.
I realize now that for n>=0 and m>=0, Mathematica gives a subset of the solutions given by Arne Bergstrom; hence the difference by a factor of 2 in m.
If you don't even know any complex analysis, don't go around telling people what you think are errors in someone's paper. All I will tell you is that if you actually go and learn about complex numbers (and it's very easy if you want to), you can verify the formula given in the paper, and you will also know why it is exactly equivalent to the solution given by Wolfram Alpha.
As I said before, don't be silly and spout all kinds of things, while not knowing what you're talking about. The complex log is multi-valued, so both Wolfram Alpha and the paper's solutions are actually identical.
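For completeness, here is the kind of spot check alluded to above, done in Mathematica with the formula as quoted from the paper earlier in this thread. Note that it only confirms that the quoted points solve the equation, not that they exhaust the solutions:

(* Spot check of my own: plug the formula quoted from the paper back into Exp[Exp[u]] + 1 *)
uPaper[m_, n_] := Log[Pi (2 n - 1)] + I Pi (1/2 + m)
Max[Abs[N[Table[Exp[Exp[uPaper[m, n]]] + 1, {m, -4, 4}, {n, 2, 6}]]]]  (* expected to be zero up to machine rounding *)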