The Sum of the Totient Function, and Montgomery’s Lower Bound.

1 History
In 1874, Mertens proved that

\displaystyle\sum_{n\leq x}\phi(n)=\frac{3}{\pi^{2}}x^{2}+O\left(x\log x\right),

where \phi(n) is Euler’s totient function. A natural question we can ask is whether or not this error term can be improved, and what we should expect it to look like. Throughout we use the notation \Phi(x):=\sum_{n\leq x}\phi(n) for the totient summatory function, and E(x):=\Phi(x)-\frac{3}{\pi^{2}}x^{2} for the error function. Our goal is to explore the known results for this error term, and in particular the proof of the best known lower bound due to Montgomery. The best unconditional upper bound for E(x) was given by Walfisz in 1963 using exponential sums:

\displaystyle E(x)\ll x\left(\log x\right)^{\frac{2}{3}}\left(\log\log x\right)^{\frac{4}{3}}.
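As a quick illustration (not part of any proof), here is a short Python sketch that computes \Phi(x) with a totient sieve and compares it against the main term \frac{3}{\pi^{2}}x^{2}; the cutoff x=10^{4} is an arbitrary choice:

```python
import math

def totient_sieve(n):
    """phi[k] = Euler's totient of k, for 0 <= k <= n."""
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:                  # p is prime: nothing removed yet
            for m in range(p, n + 1, p):
                phi[m] -= phi[m] // p    # multiply by (1 - 1/p)
    return phi

x = 10**4
phi = totient_sieve(x)
Phi = sum(phi[1:])                       # Phi(x) = sum_{n <= x} phi(n)
E = Phi - 3 / math.pi**2 * x**2          # error term E(x)
print(Phi, E)                            # |E| is tiny compared to Phi
```

The observed error is far smaller than the main term, consistent with Mertens's O\left(x\log x\right).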

Prior to this, in 1930, Chowla and Pillai showed that the error term cannot be improved too far: E(x) is not o\left(x\log\log\log x\right). In particular, they looked at the average of E(n) and proved that

\displaystyle \sum_{n\leq x}E(n)\sim\frac{3}{2\pi^{2}}x^{2}.

This tells us that E(n)\asymp n on average, and some more work can be done to gain the factor of \log\log\log x. In 1950, Erdős and Shapiro proved that E(x) changes sign infinitely often: there exists c>0 such that for infinitely many integers N,M we have

\displaystyle E(N)>cN\log\log\log\log N\ \ \text{and}\ \ E(M)<-cM\log\log\log\log M,

or more concisely, using big-\Omega notation,

\displaystyle E(x)=\Omega_{\pm}\left(x\log\log\log\log x\right).

In 1987 Montgomery improved this to

\displaystyle E(x)=\Omega_{\pm}\left(x\sqrt{\log\log x}\right).

Moreover, Montgomery conjectured that both E(x)=\Omega_{\pm}\left(x\log\log x\right), and E(x)=O\left(x\log\log x\right) hold. In what follows, we will examine Montgomery’s proof in more detail.

2 Preliminaries: Mertens Result and Restating the Question

We can use the multiplicative properties of the \phi function to obtain the identity \frac{\phi\left(n\right)}{n}=\sum_{d|n}\frac{\mu(d)}{d}, where \mu(n) is the Möbius function. Using this and rearranging the order of summation, we may write

\displaystyle \Phi(x)=\sum_{n\leq x}n\sum_{d|n}\frac{\mu(d)}{d}=\sum_{d\leq x}\frac{\mu(d)}{d}\sum_{\substack{n\leq x\\ d|n}}n.

Taking out the factor of d and using the identity \sum_{n\leq m}n=\frac{m(m+1)}{2}, the right-hand side becomes

\displaystyle \sum_{d\leq x}\mu(d)\left(\frac{\left[\frac{x}{d}\right]^{2}+\left[\frac{x}{d}\right]}{2}\right)=\frac{1}{2}\sum_{d\leq x}\mu(d)\left[\frac{x}{d}\right]^{2}+\frac{1}{2}\sum_{d\leq x}\mu(d)\left[\frac{x}{d}\right].

The rightmost sum contributes exactly \frac{1}{2}: since \sum_{d|n}\mu(d)=0 for n>1, we have \sum_{d\leq x}\mu(d)\left[\frac{x}{d}\right]=\sum_{n\leq x}\sum_{d|n}\mu(d)=1. Hence

\displaystyle 2\Phi(x)-1=\sum_{d\leq x}\mu(d)\left[\frac{x}{d}\right]^{2}.
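Since this identity is exact, it makes a good unit test; a short Python sketch verifying it for all integers x\leq500 (the sieve bound is an arbitrary choice):

```python
def totient_sieve(n):
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:                      # p is prime
            for m in range(p, n + 1, p):
                phi[m] -= phi[m] // p
    return phi

def mobius_sieve(n):
    """mu[k] = Moebius function of k, for 1 <= k <= n."""
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(2 * p, n + 1, p):
                is_prime[m] = False
            for m in range(p, n + 1, p):
                mu[m] *= -1
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0                    # not squarefree
    return mu

N = 500
phi, mu = totient_sieve(N), mobius_sieve(N)
Phi = 0
for x in range(1, N + 1):
    Phi += phi[x]
    # exact identity: 2*Phi(x) - 1 = sum_{d <= x} mu(d) * floor(x/d)^2
    assert 2 * Phi - 1 == sum(mu[d] * (x // d) ** 2 for d in range(1, x + 1))
```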

Writing the floor function as \left[x\right]=x-\left\{ x\right\} where \left\{ x\right\} is the fractional part, we see that

\displaystyle 2\Phi(x)-1=x^{2}\sum_{d\leq x}\frac{\mu(d)}{d^{2}}-2x\sum_{d\leq x}\frac{\mu(d)}{d}\left\{ \frac{x}{d}\right\} +\sum_{d\leq x}\mu(d)\left\{ \frac{x}{d}\right\} ^{2}.

Using the facts that |\mu(d)|\leq1 and \left\{ \frac{x}{d}\right\} ^{2}\leq1, the first and last terms may be rewritten as

\displaystyle x^{2}\sum_{d\leq x}\frac{\mu(d)}{d^{2}}+\sum_{d\leq x}\mu(d)\left\{ \frac{x}{d}\right\} ^{2}=x^{2}\sum_{d=1}^{\infty}\frac{\mu(d)}{d^{2}}+O\left(x\right)=\frac{x^{2}}{\zeta(2)}+O\left(x\right).

(Remark: with more work we can actually get an error term of O\left(xe^{-c\sqrt{\log x}}\right) here; since it will not affect anything, we continue with O(x).) Putting this together with the definition of E(x), we see that

\displaystyle  E(x)=-x\sum_{d\leq x}\frac{\mu(d)}{d}\left\{ \frac{x}{d}\right\} +O(x),

and the bound |\mu(d)\left\{ \frac{x}{d}\right\} |\leq1 implies Mertens's result

\displaystyle E(x)=O\left(x\sum_{d\leq x}\frac{1}{d}\right)=O\left(x\log x\right).

The main goal of this section was to show that we can completely understand E(x) by understanding the correlation between the Möbius function and the fractional part in the sum \sum_{d\leq x}\frac{\mu(d)}{d}\left\{ \frac{x}{d}\right\}. Using the prime number theorem, in the form \sum_{n\leq x}\mu(n)=O\left(xe^{-c\sqrt{\log x}}\right), partial summation tells us that \sum_{d\leq x}\frac{\mu(d)}{d}\ll1, which allows us to rewrite our identity as

\displaystyle E(x)=-x\sum_{d\leq x}\frac{\mu(d)}{d}s\left(\frac{x}{d}\right)+O\left(x\right)

where s(x)=\left\{ x\right\} -\frac{1}{2} is the sawtooth function. We also define

\displaystyle R(x):=-\frac{E(x)}{x}=\sum_{d\leq x}\frac{\mu(d)}{d}s\left(\frac{x}{d}\right)+O(1),

and will deal with this function to avoid the extraneous factor of x. (The minus sign comes from the identity above; it is immaterial for \Omega_{\pm} statements.)
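As a numerical sanity check on this representation (not part of the argument), the following Python sketch computes E(x)/x and the sawtooth sum side by side and confirms that E(x)/x+\sum_{d\leq x}\frac{\mu(d)}{d}s\left(\frac{x}{d}\right) stays bounded; the cutoff N=2000 is an arbitrary choice:

```python
import math

def totient_sieve(n):
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:
            for m in range(p, n + 1, p):
                phi[m] -= phi[m] // p
    return phi

def mobius_sieve(n):
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(2 * p, n + 1, p):
                is_prime[m] = False
            for m in range(p, n + 1, p):
                mu[m] *= -1
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0
    return mu

def s(t):
    return t % 1.0 - 0.5          # sawtooth {t} - 1/2

N = 2000
phi, mu = totient_sieve(N), mobius_sieve(N)
Phi, worst = 0, 0.0
for n in range(1, N + 1):
    Phi += phi[n]
    x = n + 0.5                   # keep x off the integers
    E = Phi - 3 / math.pi**2 * x**2
    S = sum(mu[d] / d * s(x / d) for d in range(1, n + 1))
    worst = max(worst, abs(E / x + S))
print(worst)                      # stays bounded: the O(1) in the identity
```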

3 Montgomery’s Lower Bound

Chowla and Pillai's proof that E(x)\neq o\left(x\log\log\log x\right) was centered around evaluating the average value of E(x). In a similar way, Montgomery's proof is based on looking at the average of E(x) over an arithmetic progression. That is, the goal will be to show that, given x, for some suitable q,\alpha we have

\displaystyle  \sum_{k\leq x}R\left(qk+\alpha\right)\approx x\sqrt{\log\log x}

and similarly for -x\sqrt{\log\log x}, the negative value. From this it follows that for some k, the value R\left(qk+\alpha\right) will be greater than or less than the average, implying the desired result about E(x).

An intuitive reason why we expect to be able to manipulate the average of R(n) over an arithmetic progression is the identity for a sum of the sawtooth function over a full period: for (b,r)=1,

\displaystyle \sum_{n=1}^{r}s\left(\frac{nb}{r}+x\right)=s\left(rx\right).

If we sum R(n) over an arithmetic progression and then switch the order of summation, this identity gives us hope of handling the inner sum comfortably. Exactly this approach is facilitated by the following lemma, which allows us to shift and line up the ends of the inner sums before switching the order, without creating a large error.
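The full-period identity above is easy to confirm numerically; a small Python sketch (the test pairs (b,r) and the shift x are arbitrary choices):

```python
import math

def s(t):
    return t % 1.0 - 0.5          # sawtooth {t} - 1/2

x = 0.137                          # a generic (non-integer) shift
for b, r in [(1, 3), (2, 5), (3, 7), (5, 12)]:
    assert math.gcd(b, r) == 1     # identity needs (b, r) = 1
    lhs = sum(s(n * b / r + x) for n in range(1, r + 1))
    assert math.isclose(lhs, s(r * x), abs_tol=1e-9)
print("identity verified")
```

The point is that n b/r runs over all residues k/r exactly once, so the sum telescopes down to a single sawtooth value at r x.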

Lemma We have

\displaystyle R(x)=\sum_{d\leq y}\frac{\mu\left(d\right)}{d}s\left(\frac{x}{d}\right)+O\left(1\right)

uniformly for x\geq2, x\notin\mathbb{Z}, and y\geq xe^{-c\sqrt{\log x}}.

Proof Sketch. We largely skip the analytic details: the estimate follows from the bound \sum_{n\leq x}\mu(n)=O\left(xe^{-c\sqrt{\log x}}\right) and from bounding the various parts of the sum.

We now switch the order and simplify the inner sum. By the lemma, we see that for y\geq qxe^{-c\sqrt{\log qx}},

\displaystyle \sum_{n\leq x}R(nq+\alpha)=\sum_{n\leq x}\left(\sum_{d\leq y}\frac{\mu\left(d\right)}{d}s\left(\frac{nq+\alpha}{d}\right)+O\left(1\right)\right)

\displaystyle =\sum_{d\leq y}\frac{\mu\left(d\right)}{d}\sum_{n\leq x}s\left(\frac{nq+\alpha}{d}\right)+O\left(x\right).

Using the periodicity of the sawtooth function and the identity \sum_{n=1}^{r}s\left(\frac{nb}{r}+x\right)=s\left(rx\right) for (b,r)=1, we have that

\displaystyle \sum_{n\leq x}s\left(\frac{nq+\alpha}{d}\right)=\frac{x}{d}\sum_{n=0}^{d-1}s\left(\frac{nq+\alpha}{d}\right)+O(d)=(d,q)s\left(\frac{\alpha}{(d,q)}\right)\frac{x}{d}+O(d).

Hence

\displaystyle \sum_{n\leq x}R(nq+\alpha)=x\sum_{d\leq y}\frac{\mu(d)(d,q)}{d^{2}}s\left(\frac{\alpha}{(d,q)}\right)+O(x).
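The full-period evaluation \sum_{n=0}^{d-1}s\left(\frac{nq+\alpha}{d}\right)=(d,q)s\left(\frac{\alpha}{(d,q)}\right) used in the last step can also be sanity-checked numerically; the particular pairs (d,q) and the shift \alpha below are arbitrary choices:

```python
import math

def s(t):
    return t % 1.0 - 0.5          # sawtooth {t} - 1/2

alpha = 0.71                       # a generic shift
for d, q in [(6, 4), (9, 12), (10, 4), (7, 3)]:
    g = math.gcd(d, q)
    lhs = sum(s((n * q + alpha) / d) for n in range(d))
    assert math.isclose(lhs, g * s(alpha / g), abs_tol=1e-9)
print("full-period identity verified")
```

Here n q/d mod 1 cycles through the residues with denominator d/(d,q), each hit (d,q) times, which is where the factor (d,q) comes from.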

In what follows the modulus q cannot be too large if we are to control the error term, so we take q\leq e^{c\sqrt{\log x}}, and then specify y=qxe^{-c\sqrt{\log x}}. For the sum in the main term, we partition based on the values of (d,q). Since we have a factor of \mu(d), we may assume that d is squarefree and write d=ef with e|q and (f,q)=1, so that (d,q)=e. Then

\displaystyle \sum_{d\leq y}\frac{\mu(d)(d,q)}{d^{2}}s\left(\frac{\alpha}{(d,q)}\right)=\sum_{e|q}\frac{\mu(e)}{e}s\left(\frac{\alpha}{e}\right)\sum_{\substack{f\leq\frac{y}{e}\\ \left(f,q\right)=1}}\frac{\mu(f)}{f^{2}}.

Removing the condition f\leq\frac{y}{e} introduces an error term

\displaystyle \ll\sum_{e|q}\frac{\mu(e)^{2}}{e}\sum_{f>\frac{y}{e}}\frac{1}{f^{2}}\ll\sum_{e|q}\frac{\mu(e)^{2}}{e}\cdot\frac{e}{y}\ll\frac{2^{\omega(q)}}{y}\ll1,

where the last \ll follows from the upper bound on q and the choice of y; multiplied through by x, this contributes only O(x). Thus we have proven

Theorem Uniformly for q\leq e^{c\sqrt{\log x}} we have

\displaystyle \sum_{n\leq x}R(nq+\alpha)=C(q,\alpha)x+O(x)

where

\displaystyle C(q,\alpha)=\frac{6}{\pi^{2}}\prod_{p|q}\left(1-p^{-2}\right)^{-1}\sum_{d|q}\frac{\mu(d)}{d}s\left(\frac{\alpha}{d}\right).

Note that this estimate is nontrivial, since q, and hence C(q,\alpha), can grow with x. It is also worth mentioning that the error term can be improved from O(x) to O\left(xe^{-c\sqrt{\log x}}\right) by modifying various steps throughout; however, that will not play a role in the computations which follow.

Making the correct choices for q and \alpha will lead to a proof of the result. The goal is then to choose q and \alpha such that |C(q,\alpha)| is maximized. Montgomery does this by exploiting the primes which are \equiv3\text{ mod }4: a product of an odd number of such primes is \equiv3\text{ mod }4, while a product of an even number of them is \equiv1\text{ mod }4. This gives a correlation with the Möbius function, since \mu(d) returns -1 when d has an odd number of prime factors and 1 when it has an even number. Working modulo 4, let

\displaystyle q=\prod_{\substack{p\leq z\\ p\equiv3\text{ mod }4}}p.

We choose z=\sqrt{\log x} so that q\approx e^{c\sqrt{\log x}}, and we remove the prime 3 from the product if necessary so as to force \omega(q) to be even (hence q\equiv1\text{ mod }4). Then let \alpha=\frac{3q}{4}, so that we are trying to understand the sum

\displaystyle \sum_{d|q}\frac{\mu(d)}{d}s\left(\frac{3q}{4d}\right).

If \omega(d) is odd, then d\equiv3\text{ mod }4 and \frac{3q}{d}\equiv1\text{ mod }4, so s\left(\frac{3q}{4d}\right)=-\frac{1}{4}; since \mu(d)=-1, this gives \frac{\mu(d)}{d}s\left(\frac{3q}{4d}\right)=\frac{\mu^{2}(d)}{4d}. Similarly, if \omega(d) is even, then \frac{3q}{d}\equiv3\text{ mod }4 and s\left(\frac{3q}{4d}\right)=\frac{1}{4}, so again \frac{\mu(d)}{d}s\left(\frac{3q}{4d}\right)=\frac{\mu^{2}(d)}{4d}. Thus, up to the factor \frac{1}{4}, we are looking at

\displaystyle  \sum_{d|q}\frac{\mu^{2}(d)}{d}\approx\sqrt{\log z}\sim\sqrt{\log\log x}.

The sum can be evaluated to be approximately \sqrt{\log z} using facts about numbers composed only of primes in a particular arithmetic progression; by Mertens's theorem for primes \equiv3\text{ mod }4, \sum_{d|q}\frac{\mu^{2}(d)}{d}=\prod_{p|q}\left(1+\frac{1}{p}\right)=e^{\frac{1}{2}\log\log z+O(1)}. If we instead choose \alpha=\frac{q}{4}, every term reverses sign, so that C\left(q,\frac{q}{4}\right)=-C\left(q,\frac{3q}{4}\right). From this, we conclude that there exist n,m\leq qx with

\displaystyle R(n)\gg\sqrt{\log\log x}\ \text{ and }\ R(m)\ll-\sqrt{\log\log x}.

Since q\leq e^{c\sqrt{\log x}}, we have \log\log(qx)\sim\log\log x, so this factor can be discarded, and we have proven that

\displaystyle R(x)=\Omega_{\pm}\left(\sqrt{\log\log x}\right)

which implies the desired result

\displaystyle E(x)=\Omega_{\pm}\left(x\sqrt{\log\log x}\right).
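As a last sanity check on the choice of q and \alpha (not part of Montgomery's argument), the following Python sketch builds q from a few primes \equiv3\text{ mod }4 — a small stand-in for the primes up to z — evaluates \sum_{d|q}\frac{\mu(d)}{d}s\left(\frac{\alpha}{d}\right) exactly with rational arithmetic, and confirms both that the total equals \frac{1}{4}\sum_{d|q}\frac{\mu^{2}(d)}{d} and that \alpha=\frac{q}{4} reverses the sign:

```python
from fractions import Fraction

def s(t):
    """Sawtooth {t} - 1/2, exactly, for a positive Fraction t."""
    return t - (t.numerator // t.denominator) - Fraction(1, 2)

# primes = 3 (mod 4): a small stand-in for the primes up to z
primes = [3, 7, 11, 19, 23]
if len(primes) % 2 == 1:
    primes = primes[1:]              # drop 3 so that omega(q) is even
q = 1
for p in primes:
    q *= p

def divisor_sum(alpha):
    """sum_{d | q} mu(d)/d * s(alpha/d); q is squarefree, so d runs over
    products of subsets of its prime factors."""
    divisors = [1]
    for p in primes:
        divisors += [d * p for d in divisors]
    total = Fraction(0)
    for d in divisors:
        mu = (-1) ** sum(1 for p in primes if d % p == 0)
        total += Fraction(mu, d) * s(alpha / d)
    return total

plus = divisor_sum(Fraction(3 * q, 4))   # alpha = 3q/4
minus = divisor_sum(Fraction(q, 4))      # alpha = q/4

expected = Fraction(1, 4)
for p in primes:
    expected *= Fraction(p + 1, p)       # (1/4) * sum_{d|q} mu(d)^2 / d
assert plus == expected and minus == -expected
print(plus)
```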


About Eric Naslund

I am an undergraduate mathematics student at the University of British Columbia.
This entry was posted in Analytic Number Theory.

3 Responses to The Sum of the Totient Function, and Montgomery’s Lower Bound.

  1. Marvis says:

    I haven’t seen the notation \Omega_{\pm}. Could you kindly let me know what it means?

    • Eric Naslund says:

      We write f(x)=\Omega(g(x)) for a positive function g(x) to mean that

      \displaystyle \limsup_{x\rightarrow \infty} \frac{|f(x)|}{g(x)} >0.

      This is exactly the negation of the statement f(x)=o(g(x)). We write f(x)=\Omega_{\pm}(g(x)) to mean both

      \displaystyle  \limsup_{x\rightarrow \infty} \frac{f(x)}{g(x)} >0

      and

      \displaystyle  \liminf_{x\rightarrow \infty} \frac{f(x)}{g(x)} <0,

      which means that f(x) oscillates with size g(x).

