## Wednesday 27 April 2016

### Reformulation of Clay Navier-Stokes Problem Needed 5

Fefferman concludes the official formulation of the Clay Navier-Stokes Problem with:
• Let me end with a few words about the significance of the problems posed here.
• Fluids are important and hard to understand... our understanding is at a very primitive level.
• Standard methods from PDE appear inadequate to settle the problem.
• Instead, we probably need some deep, new ideas.
Yes, fluids are hard for a pure mathematician to understand, and the understanding does appear to be at a very primitive level. This has led to an unfortunate formulation of the problem, directing a fruitless search for either (i) blowup into infinite fluid velocities in finite time as non-smoothness, or (ii) absence of blowup as smoothness of solutions.

But for a fluid the distinction between smoothness and non-smoothness concerns the size of velocity gradients. It has long been well known that compressible flow may exhibit non-smoothness in the form of shocks with large velocity gradients but without large velocities.

The Clay Problem concerns incompressible flow (at unbounded Reynolds numbers), which does not form shocks but instead becomes turbulent at large Reynolds numbers, again with large velocity gradients as the expression of non-smoothness, and (most likely) without large velocities.

The clue offered in the official formulation for solving the Clay Problem, namely searching for infinite velocities in fluid flow, which Tao has picked up in recent attempts to solve the problem, thus appears to be misleading, and as such is not helpful to mathematics as science.

Fefferman is asking for some new ideas, but closes the door to any form of communication with computational turbulence as a new idea towards understanding and resolution of the problem.

## Tuesday 26 April 2016

### Reformulation of the Clay Navier-Stokes Problem 4

The official formulation of the Clay Navier-Stokes Problem by Fefferman includes the following statements (with (A) and (B) global existence+regularity and (1)-(3) Navier-Stokes equations):
• For initial data $u^0(x)$ not assumed to be small, it is known that (A) and (B) hold (also for $\nu = 0$) if the time interval $[0,∞)$ is replaced by a small time interval $[0,T)$, with $T$ depending on the initial data.
• For a given initial $u^0(x)$, the maximum allowable $T$ is called the “blowup time.” Either (A) and (B) hold, or else there is a smooth, divergence-free $u^0(x)$ for which (1), (2), (3) have a solution with a finite blowup time.
• For the Navier-Stokes equations ($\nu > 0$), if there is a solution with a finite blowup time $T$, then the velocity $u_i(x,t)$, $1\le i\le 3$, becomes unbounded near the blowup time.
We read that Fefferman claims that the distinction between (i) YES or (ii) NO to the question of existence+regularity for the Navier-Stokes equations is between (i) bounded flow velocity for all time and (ii) unbounded velocity at some "blowup time" $T$.

Fefferman here uses the same distinction as in the classical theory of ordinary differential equations (ODEs), based on a (correct) mathematical analysis showing that the only way a solution trajectory can cease to exist is by tending to infinity in finite time.

But this argument cannot be generalised to partial differential equations (PDEs), because a smooth solution to a PDE can cease to exist as a smooth solution through unbounded derivatives, without the solution itself becoming infinite (as required in the ODE case).

The basic distinction for Navier-Stokes is instead between (i) laminar/smooth flow and (ii) turbulent/non-smooth flow for all time without blowup of the velocity to infinity, where non-smooth means large velocity gradients.

The official formulation of the problem is unfortunate by (incorrectly) claiming that the question can be reduced to a question of infinite velocities at finite blowup time. The Clay problem thus needs to be reformulated, since an incorrectly formulated problem can only lead in a wrong direction.

In a lecture about the problem, Caffarelli falls into the trap of Fefferman.

## Monday 25 April 2016

### Reformulation of Clay Navier-Stokes Problem Needed 3

The official formulation of the Clay Navier-Stokes Problem does not include any reference to the Reynolds number $Re =\frac{UL}{\nu}$, with $U$ a typical flow speed, $L$ a length scale, and $\nu$ the viscosity, and thereby makes no distinction between laminar/smooth flow at small Reynolds numbers and turbulent/non-smooth flow at large Reynolds numbers.

Since no bound on the Reynolds number is given, it can only mean that the Reynolds number can have any size, in particular be arbitrarily large. By normalizing $U$ and $L$ to unity, the viscosity can thus be arbitrarily small (or, normalizing viscosity and length scale to unity, $U$ can be arbitrarily large), which means that the Clay problem includes the incompressible Euler equations as the incompressible Navier-Stokes equations with vanishingly small viscosity.

This is precisely what the book Computational Turbulent Incompressible Flow is about! As a basic example from the book, let us consider flow around a sphere under vanishing viscosity depicted in the following pictures:

We see a distinct large-scale separation pattern developing, consisting of 4 tubes of counter-rotating flow attaching to the rear of the sphere, which are dissolved into turbulent flow further downstream. We see that the length of the tubes increases with decreasing viscosity, which is consistent with Kolmogorov's conjecture that the total amount of turbulent dissipation stays roughly constant under vanishing viscosity (along with total drag), requiring the surfaces of intense dissipation of the 4-tube pattern to extend further downstream.

We thus discover a vanishing-viscosity solution to the incompressible Euler equations which is fundamentally different from the formal exact solution in the form of potential flow: potential flow is symmetric in the flow direction, with symmetric attachment and separation and without the gross 4-tube pattern, and as a formal exact solution it is unstable at separation and thus not a limit of Navier-Stokes solutions.

The official problem formulation does not fit well with this situation, since stability aspects are left out and the presence of vanishing viscosity solutions is hidden, and the Clay Navier-Stokes problem thus asks for a reformulation away from the deadlock of the present (meaningless) formulation.

### Reformulation of Clay Navier-Stokes Problem Needed 2

The official formulation of the Clay Navier-Stokes Problem is unfortunate in not mentioning the Reynolds number $Re =\frac{UL}{\nu}$, with $U$ a typical flow speed, $L$ a length scale, and $\nu$ the viscosity, and thereby making no distinction between laminar/smooth flow at small Reynolds numbers and turbulent/non-smooth flow at large Reynolds numbers.

Instead the problem is formulated so as to lead people to seek a solution in the form of blow-up (or not blow-up) to infinity of flow speeds in finite time, following a simple methodology borrowed from ordinary differential equations, saying that as long as a solution trajectory is finite it can be continued for some time. But this is not the real issue for a partial differential equation like Navier-Stokes, where the essential distinction is instead between laminar/smooth and turbulent/non-smooth flow.

The result is a problem formulation which is meaningless because it is both unphysical and unmathematical, and as such cannot be given a meaningful answer.

The problem is owned by a small group of pure mathematicians including Fefferman and Tao, who refuse to participate in any form of discussion about the problem and its formulation. This is not in the interest of anybody outside this group and thus not in the general interest of mathematics as science, which must be the interest of Mr Clay in particular...

## Thursday 21 April 2016

### Velocity Blow-up to Infinity for Incompressible Euler?

In an effort to solve the Clay Navier-Stokes problem as formulated by Fefferman, Terence Tao in recent work seeks to construct a solution to the incompressible Euler equations with velocities becoming infinite in finite time, but does "not quite achieve" the goal.

Let me present some evidence indicating that the goal cannot be achieved. To this end we compare the incompressible Euler equations:
• $\frac{\partial u}{\partial t}+u\cdot\nabla u+\nabla p =0$
• $\nabla\cdot u=0$
with (i) vector-Burgers as a model of very compressible flow:
• $\frac{\partial u}{\partial t}+u\cdot\nabla u=0$
and (ii):
• $\frac{\partial u}{\partial t}+u\cdot\nabla u+\nabla p =0$,
• $\delta\Delta p=\nabla\cdot u$
with $\delta >0$ a small constant, as a model of slightly compressible flow.

For the Burgers equation, and so for (i), velocities may become discontinuous, corresponding to the development of shocks over time, but velocities do not tend to infinity.
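This boundedness can be illustrated numerically. The following sketch (my own illustration, not part of the official formulation) solves the 1D inviscid Burgers equation from smooth data with a standard Godunov scheme and records that the maximal velocity stays bounded while the maximal velocity gradient grows strongly as the shock forms:

```python
import numpy as np

# 1D inviscid Burgers u_t + (u^2/2)_x = 0 on a periodic grid,
# solved with a first-order Godunov (upwind) finite-volume scheme.
N = 400
x = np.linspace(0, 2*np.pi, N, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)                        # smooth initial data
grad0 = np.abs(np.gradient(u, dx)).max()
dt = 0.4*dx/np.abs(u).max()          # CFL-limited step; max|u| is non-increasing

def godunov(ul, ur):
    # exact Godunov flux for the convex flux f(u) = u^2/2
    fl, fr = 0.5*ul**2, 0.5*ur**2
    fmin = np.where((ul <= 0) & (0 <= ur), 0.0, np.minimum(fl, fr))
    return np.where(ul <= ur, fmin, np.maximum(fl, fr))

t = 0.0
while t < 2.0:                       # the shock forms at t = 1 for sin(x) data
    F = godunov(u, np.roll(u, -1))   # flux at interface i+1/2
    u = u - dt/dx*(F - np.roll(F, 1))
    t += dt

print("max|u| at t=2         :", np.abs(u).max())   # stays bounded by 1
print("max|u_x| growth factor:", np.abs(np.gradient(u, dx)).max()/grad0)
```

The velocity obeys a maximum principle while the gradient at the shock is limited only by the mesh width, which is exactly the "large gradients without large velocities" behaviour described above.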

In case (ii) solving for the pressure p gives the following equation along a streamline $x(t)$:
• $\frac{du(x(t))}{dt} + \frac{1}{\delta}\nabla\Delta^{-1}\nabla\cdot u(x(t),t)=0$
which formally gives a bound on the possible growth of velocity in terms of $\frac{1}{\delta}$ preventing blow-up to infinity.

We conclude that neither very compressible nor slightly compressible flow appears to accommodate blow-up to infinite velocity. Is it then the incompressibility which will squeeze the flow to infinite pressure driving flow velocity to infinity? Far-fetched in my view.

On the other hand, we have strong evidence that Euler solutions become turbulent with substantial turbulent dissipation from large velocity gradients, while the velocity does not spike to infinity. Again, the formulation of the Clay Navier-Stokes problem without reference to turbulence apparently leads mathematicians into meaningless dead ends.

### Reformulation of Clay Navier-Stokes Problem Needed 1

The formulation of the Clay Millennium Problem about global smoothness of solutions to the incompressible Navier-Stokes equations by Charles Fefferman, circumvents the phenomenon of turbulence as the most important aspect of fluid flow from both mathematical and physical point of view. The result is a problem which is both meaningless and without solution, and thus cannot serve well as a Clay Millennium Problem.

The unfortunate formulation by Fefferman comes out in the recent attempts by Terence Tao to construct a solution with local blow-up of fluid speed to infinity in finite time. Tao thus seeks a negative answer to global smoothness by constructing solutions with flow speed going to infinity locally. But he does not succeed, and there is no reason to expect that he ever will, because the viscous term in Navier-Stokes dominates the convective term on small scales. Tao, working in conjunction with Fefferman, is thus led in a fruitless direction.

The question of global smoothness in Fefferman's formulation would better be replaced by a question of turbulence, with turbulent flow defined as flow with velocity $u(x,t)$, for given initial data $u(x,0)$, such that for a positive constant $C$ (which is not small)
• $\frac{\nu \int\vert\nabla u(x,t)\vert^2 dx}{\int \vert u(x,t)\vert^2 dx} > C$ for $t>0$
for all small viscosities $\nu > 0$. Note that by renormalizing initial data, the effective value of the viscosity can always be made as small as desired, see PS below. Non-turbulent = laminar flow then has a constant $C$ which tends to zero with $\nu$.
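As a sanity check on this criterion, the quotient can be evaluated for single Fourier modes (a hypothetical illustration of mine, not part of the proposal itself): a fixed-wavenumber laminar field makes the quotient vanish with $\nu$, while a field with gradients of size $\nu^{-1/2}$ keeps it of order one.

```python
import numpy as np

# The quotient nu*||grad u||^2 / ||u||^2 for a single Fourier mode
# u = sin(k*x) on [0, 2*pi] equals nu*k^2 (up to discretisation error).
def quotient(nu, k, N=1 << 16):
    x = np.linspace(0, 2*np.pi, N, endpoint=False)
    dx = x[1] - x[0]
    u = np.sin(k*x)
    du = np.gradient(u, dx)
    return nu*np.sum(du**2)/np.sum(u**2)

# A laminar field (fixed k = 1) makes the quotient vanish with nu,
# while gradients of size nu**-0.5 (k ~ nu**-0.5) keep it of order one.
for nu in [1e-2, 1e-4, 1e-6]:
    k = int(round(nu**-0.5))
    print(f"nu={nu:.0e}: laminar {quotient(nu, 1):.2e}, k~nu^-1/2 {quotient(nu, k):.3f}")
```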

A turbulent solution would then correspond to a non-smooth solution in Fefferman's formulation, and then a laminar = non-turbulent solution to a smooth solution, and a Clay problem about global existence of laminar solutions would have a negative answer: For any given positive viscosity, there is data such that the corresponding Navier-Stokes solution becomes turbulent in finite time. Or turned the other way: For many initial data there is a viscosity such that the corresponding solution becomes turbulent in finite time.

Another aspect of Fefferman's unfortunate formulation is that the flow is supposed to fill all of space, or be periodic in space, which means that the completely crucial presence of a flow boundary, and the choice of boundary condition, is neglected. There can be no rational reason to formulate a mathematical problem, presented as being connected to a physical reality of importance to humanity, in a way that makes any such connection meaningless.

PS1 The renormalisation goes as follows: if $u(x,t)$ satisfies
• $\frac{\partial u}{\partial t}+u\cdot\nabla u-\nu\Delta u = 0$,
then $\bar u=\frac{u}{\alpha}$ with $\alpha >0$ satisfies
• $\frac{1}{\alpha}\frac{\partial\bar u}{\partial t}+\bar u\cdot\nabla\bar u-\frac{\nu}{\alpha}\Delta\bar u = 0$
and thus $\bar u$ with renormalisation of time $\bar t=\alpha t$ satisfies:
• $\frac{\partial\bar u}{\partial\bar t}+\bar u\cdot\nabla\bar u-\bar\nu\Delta \bar u = 0$
with $\bar\nu =\frac{\nu}{\alpha}$ arbitrarily small for $\alpha$ large.
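The scaling can be verified symbolically. The following sketch uses the 1D viscous Burgers equation as a stand-in for Navier-Stokes (an assumption for illustration only), with an exact travelling-wave solution, to confirm that rescaling velocity by $1/\alpha$ and time by $\alpha$ rescales the viscosity to $\nu/\alpha$:

```python
import sympy as sp

x, t, nu, alpha = sp.symbols('x t nu alpha', positive=True)
A, B = sp.symbols('A B', positive=True)

def residual(v, visc, time):
    # 1D viscous Burgers: v_t + v*v_x - visc*v_xx
    return sp.diff(v, time) + v*sp.diff(v, x) - visc*sp.diff(v, x, 2)

# an exact travelling-wave solution of viscous Burgers with viscosity nu
u = A - B*sp.tanh(B*(x - A*t)/(2*nu))
assert sp.simplify(residual(u, nu, t)) == 0

# renormalised field: ubar(x, tbar) = u(x, tbar/alpha)/alpha
tbar = sp.Symbol('tbar', positive=True)
ubar = u.subs(t, tbar/alpha)/alpha
# ubar solves the same equation with viscosity nu/alpha
assert sp.simplify(residual(ubar, nu/alpha, tbar)) == 0
print("rescaling velocity by 1/alpha and time by alpha gives viscosity nu/alpha")
```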

PS2 The Clay Navier Stokes problem is presented by the Clay Mathematical Institute as follows:
• Waves follow our boat as we meander across the lake, and turbulent air currents follow our flight in a modern jet. Mathematicians and physicists believe that an explanation for and the prediction of both the breeze and the turbulence can be found through an understanding of solutions to the Navier-Stokes equations. Although these equations were written down in the 19th Century, our understanding of them remains minimal. The challenge is to make substantial progress toward a mathematical theory which will unlock the secrets hidden in the Navier-Stokes equations.
But in the official formulation of the problem by Fefferman there is nothing about turbulence! Instead, it appears that the problem is deliberately and cleverly formulated so as to completely exclude this fundamental aspect from the discussion, by focussing on blow-up instead of turbulence as the expression of non-smoothness. I have many times tried to get this across to people spending time on seeking a solution, and to Mr Clay ready to spend money on a solution, so far with little success. But for every year without solution, the lack of meaning of the present problem formulation may become more widely understood. Maybe the time is now ripe for a revision of the formulation of the problem, Mr Clay?
In any case, waving with turbulence and then excluding turbulence is not correct science.

PS3 Here is a copy of a letter to Tao:

Hi Terence

I see that you seek to construct solutions with blow-up to the Euler/Navier-Stokes equations in an effort to solve the Clay Navier-Stokes problem as formulated by Fefferman. I have already tried to get across to you and Fefferman that the present formulation is not correct from a scientific point of view (as I see it), since turbulence is named as the main unresolved mystery of the Navier-Stokes equations in the presentation of the problem by the Clay Institute, yet the formulation by Fefferman is made so as to exclude turbulence from the discussion.

I would appreciate it if you could give your view on this apparent contradiction. I would also be happy if you could comment on the reformulation of the problem, including turbulence, suggested here:

http://claesjohnson.blogspot.se/2016/04/reformulation-of-clay-navier-stokes.html

I understand that you may want to discard my proposal because your time is limited, but working on a problem without meaningful answer may represent even more loss of time.

Best regards

Claes

## Wednesday 20 April 2016

### Turbulent Euler Solutions and the Clay Navier-Stokes Problem 2

This is a continuation from the previous post:

We compute an approximate turbulent solution $U_h$ to the Euler equations using G2 on a given mesh with mesh size $h$, characterised by substantial turbulent dissipation. We ask: with $U_h$ given, is it possible to construct a function $\hat U_h$ which solves the Navier-Stokes equations for some viscosity $\nu_h$?

This can be answered by pointwise computing the Euler residual $E(\hat U_h)$ of a regularisation $\hat U_h$ of $U_h$, together with the Laplacian $\Delta\hat U_h$, and then defining
• $\hat\nu_h =\frac{E(\hat U_h)}{\Delta\hat U_h}$.
If it turns out that $\hat\nu_h >0$, then we have a function $\hat U_h$ which exactly solves the Navier-Stokes equations with viscosity $\hat\nu_h$, and if $U_h$ is turbulent, so is $\hat U_h$.

Depending on the variation of $\hat\nu_h$, we could argue that we have constructed an exact solution to a modified Navier-Stokes equation (with constant viscosity), with the modification depending on the variation of the computed $\hat\nu_h$, a solution which is turbulent and thus non-smooth.
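A hypothetical 1D analog (viscous Burgers in place of Navier-Stokes, and a hand-made exact solution in place of a G2 computation, so only an illustration of the mechanism) shows the construction at work: dividing the pointwise Euler residual by the Laplacian recovers an effective viscosity.

```python
import numpy as np

# An exact travelling-wave solution of viscous Burgers with viscosity 0.05,
# playing the role of the regularised field (an analogy, not G2 itself).
nu_true, A, B = 0.05, 1.0, 0.5
U = lambda xx, tt: A - B*np.tanh(B*(xx - A*tt)/(2*nu_true))

x = np.linspace(-2, 2, 801)
dx, t0, dt = x[1] - x[0], 0.3, 1e-5
u = U(x, t0)
Ut = (U(x, t0 + dt) - U(x, t0 - dt))/(2*dt)    # time derivative
Ux = np.gradient(u, dx)
Uxx = np.gradient(Ux, dx)                      # the "Laplacian" in 1D

E = Ut + u*Ux                                  # pointwise Euler residual E(U)
mask = np.abs(Uxx) > 1e-3                      # avoid division by near-zero
nu_eff = E[mask]/Uxx[mask]                     # recovered effective viscosity
print("mean effective viscosity:", nu_eff.mean())   # close to 0.05
```

The quotient is nearly constant here because the test field happens to solve the viscous equation exactly; for a computed turbulent field the variation of the quotient is exactly the point at issue above.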

This argument has a connection to that presented by Terence Tao in a setting of modified Euler/Navier-Stokes equations. The difference is that we use a computed solution of great complexity instead of the analytical solution of lesser complexity constructed by hand by Tao.

There is strong evidence from experimental observation and computation that solutions to the Navier-Stokes equations with small viscosity are always turbulent, and thus that the Clay problem about global existence of smooth solutions has a negative answer. It thus seems pretty clear that computational evidence can settle the Clay problem, but this may not be accepted by a jury of mathematicians trained in analytical mathematics developed before the computer. It may be that without computational evidence the problem stays unsolved forever, or that an answer by analytical mathematics becomes so particular that the very meaning of the problem is lost.

### Turbulent Euler Solutions and the Clay Navier-Stokes Problem 1

Turbulent solutions of the incompressible Navier-Stokes equations with viscosity $\nu >0$ can be characterised as having substantial turbulent dissipation, that is, satisfying for all sufficiently small positive $\nu$ with normalisation of the velocity $u$ (for $t>0$ say):
• $\int\nu\vert \nabla u(x,t)\vert^2\, dx > C$
where $C$ is a positive constant.

Dimension analysis suggests that turbulent solutions are non-smooth, Hölder continuous with exponent 1/3, on a smallest scale in space of size $\nu^{\frac{3}{4}}$, with $\vert\nabla u\vert\sim \nu^{-1/2}$.
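The dimension analysis can be spelled out in Kolmogorov's scaling, with the turbulent dissipation rate $\varepsilon$ normalised to one:

```latex
% Kolmogorov scaling with dissipation rate \varepsilon \sim 1:
\begin{align*}
  \lambda &\sim \left(\frac{\nu^3}{\varepsilon}\right)^{1/4} \sim \nu^{3/4}
    && \text{smallest spatial scale,}\\
  \vert\nabla u\vert &\sim \left(\frac{\varepsilon}{\nu}\right)^{1/2} \sim \nu^{-1/2}
    && \text{velocity gradient on that scale,}\\
  \delta u(\lambda) &\sim (\varepsilon\lambda)^{1/3} \sim \lambda^{1/3}
    && \text{H\"older continuity with exponent } \tfrac13,\\
  \nu\vert\nabla u\vert^2 &\sim \nu\cdot\nu^{-1} \sim 1
    && \text{dissipation stays substantial as } \nu\to 0.
\end{align*}
```

The last line is the point: the gradient scaling is exactly what keeps the turbulent dissipation bounded away from zero in the vanishing-viscosity limit.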

We view such solutions as approximate weak solutions of the Euler equations (formally corresponding to $\nu =0$), or turbulent Euler solutions, thus characterised by substantial turbulent dissipation. Stability analysis and computation strongly suggest that all smooth solutions to the Navier-Stokes equations with small $\nu$, and to the Euler equations, become turbulent over time, see Computational Turbulent Incompressible Flow.

Terence Tao struggles to analytically construct solutions to the incompressible Euler equations with blow-up in finite time, which could possibly show blow-up also for Navier-Stokes, but does "not fully achieve" the goal, which is to answer the Clay Navier-Stokes problem.

Let us compare our approach based on stability analysis/computation with that of Tao based on analytical construction of solution with blow-up. We thus give evidence that (i) turbulent solutions can be computed over global time, (ii) all smooth solutions become turbulent because of inherent instability, while Tao seeks to (iii) construct a very specific solution with blow-up for Euler and Navier-Stokes.

We see that our approach is complementary to that of Tao, or the other way around: (i)-(ii) concerns the general problem and gives life to solutions after blow-up as turbulent solutions, while (iii) concerns a very specific problem without life after blow-up.

The evidence for (i)-(ii) consists of stability analysis plus high-performance computation, while (iii) is based on analytical computation by hand. It may be that (i)-(iii) together capture the core aspects of the Clay Navier-Stokes problem using different forms of mathematics and "proofs".

## Tuesday 19 April 2016

### P=NP? vs Escher's Descending-Ascending Monks vs 2nd Law

There is a connection between turbulent Euler solutions as a counterexample to P=NP and Escher's descending-ascending monks, who walk around either descending or ascending all the time, yet come back to where they started again and again. If you believe this is an illusion, then you have a good chance of understanding the counterexample.

Recall that turbulent dissipation, forward or backward, always means losing energy and thus descending all the time. And then you cannot get back to where you started, unless you believe in the illusion of Escher's monks... Thus you cannot recover the initial state from a later state.

### P=NP? vs 2nd Law vs Turbulent Dissipation

This is a continuation of the argument from recent posts that the 2nd Law expressed by turbulent solutions to the incompressible Euler equations represents a counterexample to P=NP. The basic ingredients are the following:
1. Exact smooth solutions of the Euler equations such as potential flow are unstable and thus are not computable globally in time under finite precision.
2. Turbulent weak solutions can be computed globally in time with polynomial work. Such solutions have substantial turbulent dissipation independent of resolution and thus are substantially irreversible in time. More precisely, the irreversibility with substantial turbulent dissipation independent of resolution makes recovery of an initial state u(0) at time 0 from a computed solution u(T) at later time T impossible with polynomial work.
3. The problem Q of recovering u(0) from u(T) thus appears to be notP, while it is possible to check with polynomial work whether a recovery candidate v(0) produces u(T) upon solution forward in time.
4. It is the unavoidable substantial turbulent dissipation which creates substantial irreversibility under polynomial work and thus shows that Q is notP, while forward solution with polynomial work makes Q NP and thus a counterexample to P=NP.
5. It is the unavoidable substantial turbulent dissipation which is the reality of the irreversibility of the 2nd Law and makes it into a dictate, which cannot be escaped with polynomial computational work and thus neither in physical reality, with physics as a form of analog computation with finite precision, which by the necessary limitations of physical reality can only involve polynomial work.
6. The connection to physics as analog computation, and to the 2nd Law as a law of physics as computation, gives the P=NP? problem a clearer meaning by offering a sharp distinction between P=possible=physical and notP=impossible=unphysical, sharper than in the standard setting of polynomial vs exponential growth, which may be impossible to separate clearly in any form of real computation.
7. P=NP? can be formulated as the question: if it is easy to go forward from a position u(0) and end up at a position u(T), is it then possible to back-track to an initial position v(0), possibly different from u(0), such that going forward from v(0) also ends up at u(T)? For example, going down a tree branching one way or the other to X, it is possible to back-track to a possibly different initial position forward-connected to X. In any case the P=NP? problem has a definite flavour of irreversibility, as forward-easy and backward-not-easy.

## Friday 15 April 2016

### Spilled Ink of Quantum Mechanics

Leading physics spokesman Leonard Susskind starts his recent contemplation of quantum mechanics, Everett vs Copenhagen, with the following quotes:
• If quantum mechanics hasn’t profoundly shocked you, you haven’t understood it yet. (Niels Bohr)
• We have always had a great deal of difficulty understanding the world view that quantum mechanics represents. (Richard Feynman)
• Quantum mechanics is so confusing that I can’t even tell if there is a problem, but maybe it’s all ok because it works. There is probably not much profit in thinking about “interpretations” and even less in arguing about them.  (Dirac)
Susskind then contributes his own doubts and confession:
• Is there a problem, as Feynman suggests—and then suggests there isn’t—and then suggests that maybe there is? What is it that has caused so much angst and spilled ink?
• It is obvious that the Copenhagen Interpretation cannot be the last word.
• Now I feel that our current views of quantum mechanics are provisional.
We understand that what Susskind tells us is that, still after 100 years of intense intellectual work by several generations of most brilliant physicists, the state of quantum mechanics today is more confusing and speculative than ever (lots of spilled ink). No progress, only angst.

The game seems to be that the more you stress that quantum mechanics is not understood, the more insightful and knowledgeable you appear yourself.

### Counterexample to P = NP with Relation to 2nd Law

Let me here present the counterexample to P = NP from the previous post in more concise form.

We consider the Algorithm of solving the incompressible Euler equations over a time interval (0,T) by the stabilized finite element method G2. We observe that solutions are turbulent, with turbulent dissipation which remains substantial (positive) as the discretisation is refined.

We know that exact laminar solutions (such as potential solutions) of the Euler equations are unstable and as a result all solutions turn turbulent over time. We know that given an initial state u(0), the Algorithm delivers a final state u(T) at later time T > 0 with local mean-values of u(T) of certain size computable to a certain precision with polynomial work.

We pose the problem Q of computing the initial state u(0) from a computed final state u(T). We ask if this can be done with polynomial work, that is, we ask if Q is P?

For any reconstruction candidate v(0), the corresponding v(T) can be computed with polynomial work, which allows a check whether v(T) is sufficiently close to u(T) to say that v(0) is an acceptable reconstruction of u(0). In other words, Q is NP.

We know that solving the Euler equations backward in time cannot lead to an acceptable reconstruction of u(0) from u(T) if the tolerance is small enough, because the substantial turbulent dissipation necessarily introduced in computing u(T) from u(0) in forward time cannot be reversed. Instead, additional substantial turbulent dissipation is introduced in backward time, independent of the work invested.

This means that a substantial gap will remain between the original image u(0) and any reconstruction v(0) computed backward in time from u(T), independent of the work invested in the backward process.
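A toy computation (1D viscous Burgers, not Navier-Stokes itself, so only an illustration of the mechanism) shows the gap: forward stepping is cheap and stable, stepping the same scheme backward from u(T) amplifies small scales and fails to recover u(0), while checking a candidate forward is again cheap.

```python
import numpy as np

# Periodic 1D viscous Burgers with central differences and forward Euler.
N = 256
x = np.linspace(0, 2*np.pi, N, endpoint=False)
dx = x[1] - x[0]
nu, dt, nsteps = 0.05, 0.2*(x[1] - x[0]), 400

def step(u, dt):
    ux = (np.roll(u, -1) - np.roll(u, 1))/(2*dx)
    uxx = (np.roll(u, -1) - 2*u + np.roll(u, 1))/dx**2
    return u + dt*(-u*ux + nu*uxx)

u0 = np.sin(x)
u = u0.copy()
for _ in range(nsteps):              # forward: u(0) -> u(T), polynomial work
    u = step(u, dt)
uT = u

v = uT.copy()
with np.errstate(over='ignore', invalid='ignore'):
    for _ in range(nsteps):          # naive backward stepping from u(T):
        v = step(v, -dt)             # anti-diffusion amplifies every small scale
err_back = np.linalg.norm(v - u0)/np.linalg.norm(u0)
print("backward recovery error:", err_back)       # huge (or NaN)

w = u0.copy()                        # checking a candidate v(0) forward is cheap;
for _ in range(nsteps):              # here the candidate is the true u(0)
    w = step(w, dt)
print("forward check residual :", np.linalg.norm(w - uT))
```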

Q is thus notP if restricted to backward time-stepping. Is it possible to reconstruct u(0) in some other way?
Simply testing all u(0) is certainly exponential, and the question is then whether it is possible to restrict the testing or shooting. The complexity of turbulent flow, with each preimage u(0) giving a computed image u(T) of great local mean-value variability, means that substantial restriction seems impossible, and so Q appears to be notP by simply testing all possibilities or shooting in all directions.

The remaining possibility would be some iterative method with further restriction of the testing by successive improvement of the shooting. We thus ask if there is some thinkable iterative method for the identification problem of solving an equation of the form Eu(0) = u(T), with Eu(0) the solution of the Euler equations with initial data u(0) evaluated at time T.

Here E represents an operator which is locally exponentially both unstable and stable, and any attempt to solve such an equation iteratively would seem to fail because the spectrum (of a linearization) of E will be spread over the whole complex plane.

We conclude that solving Q by (a) backward time stepping, (b) restricted guessing, or (c) iterative shooting appears to be clearly notP. Are there any other possibilities? If not, then Q would be notP.

We summarise: Turbulent Euler solutions have the very special property of:
1. Unavoidable substantial irreversibility from unavoidable substantial turbulent dissipation, independent of resolution.
2. Great output variability resulting from local exponential instability-stability.
3. Forward problem P up to local mean values.
4. Backward time stepping notP because of substantial irreversibility.
5. Iterative shooting notP because of spectrum with no restriction.
Altogether, turbulent Euler appears to give a counterexample to P = NP.

At the same time it gives meaning to the 2nd Law as an easy-forward and difficult-backward problem, which is not based on ad hoc introduction of diffusion or probability (which is the common way of giving "meaning" to the 2nd Law).

At the same time turbulent Euler gives a counterexample to the Clay Problem of global smoothness of Navier-Stokes.

The Euler equations thus represent a most remarkable mathematical problem, which cannot be solved exactly, but in computational form gives both meaning to the 2nd law and shows limits of computation and mathematics.

It is perhaps not so surprising that several seemingly unrelated and seemingly very difficult problems all have a common answer relating to turbulence as an observable phenomenon of reality, which cannot simply be an unresolvable mystery forever hidden from human understanding.

I have tried over a long time to get some understanding of the issues presenting themselves in the Euler equations, without too much success:
• It appears that mathematicians are not keen to bring in stability or well-posedness, although the mathematician Hadamard told them to do so, and the result is deadlock.
• It appears that computer scientists prefer integers to real numbers and physics, and the result is a deadlock with man-made algorithms.
• It appears that physicists prefer to speculate about multiverses and quantum foam beyond rationale.
• It appears that fluid mechanicians are stuck in infinitely thin Prandtl boundary layers beyond computability.
But there is hope: reality is there to discover and understand.

## Saturday 9 April 2016

### Counterexample to P = NP: Turbulent Irreversible Solutions of the Euler Equations

Let me here sketch a counterexample to the Clay Millennium P = NP problem (also connecting to the Clay Navier-Stokes problem) based on the computational theory and practice of turbulent solutions to the incompressible Euler equations developed in my book Computational Turbulent Incompressible Flow with Johan Hoffman.

We recall that the incompressible Euler equations are formally reversible in time, but the only solutions that exist over time are weak turbulent near-solutions, and any such solution is irreversible in time, because pointwise exponential instability transforms smooth solutions into non-smooth turbulent solutions in finite time, with (positive) turbulent dissipation bounded away from zero as the resolution in space and time tends to zero.

We know from the book how to compute turbulent solutions u(t) to the incompressible Euler equations from given initial data u(0) at time t = 0 over a period of time (0,T) with polynomial work with respect to space-time resolution. We now consider the problem Q of recovering a given u(0) from observed u(T). We ask if this is possible with polynomial work, that is, we ask if Q is P?

In other words, we ask if it is possible to solve the incompressible Euler equations backward in time with polynomial work?

By the above argument backward solution from u(T) necessarily produces substantial turbulent dissipation backward in time,  which makes recovery of initial data u(0) impossible, since u(T) results from u(0) with substantial turbulent dissipation in forward time. It is thus impossible to recover u(0) from u(T) by computing backward in time, if a sufficiently small tolerance (which does not need to be very small) for the recovery is chosen.

We conclude that it is impossible to recover u(0) from u(T) by solving the Euler equations backward in time, independent of the computational work invested, thus that Q is notP.

On the other hand, if we have a candidate for u(0), we can check by solving Euler in forward time if that candidate fits with u(T), that is, we can check Q by polynomial work, that is, Q is NP.

We thus have an example which is NP and notP, that is, a counterexample to P = NP.

Incidentally, the above argument includes a solution to the Clay Navier-Stokes problem: globally smooth solutions do not exist for small viscosity; only turbulent non-smooth solutions do.

We now add more details to the argument:

The nature of the Euler equations as formally reversible in time makes the above argument fundamentally different from a similar argument for the heat equation, which represents an easy-forward, difficult-backward problem that can be solved without too much work by Tikhonov regularisation and iteration/shooting forward in time. For the heat equation solution complexity decreases with time (exponential stability), while for the Euler equations solution complexity increases with time (exponential instability-stability, from laminar to turbulent), which (most likely) prevents solution by Tikhonov regularisation and forward shooting.
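The forward/backward asymmetry of the heat equation can be made concrete in a toy computation of my own (not part of any official problem formulation), by looking at the damping and amplification factors of Fourier modes:

```python
import math

# For the 1D heat equation u_t = u_xx, the Fourier mode sin(k*x)
# evolves by the factor exp(-k^2*t): damped forward in time,
# amplified backward in time.
T = 1.0
for k in [1, 4, 16]:
    forward = math.exp(-k**2 * T)   # forward damping factor
    backward = math.exp(k**2 * T)   # backward amplification factor
    print(k, forward, backward)

# A perturbation of size eps in mode k of u(T) corresponds to a
# perturbation eps*exp(k^2*T) in u(0): naive backward time stepping
# is exponentially ill-posed in the frequency k.
eps = 1e-8
print(eps * backward)  # noise in mode k = 16 swamps any signal in u(0)
```

The numbers make the point of the text quantitative: a perturbation below any measurement precision in u(T) corresponds to an astronomically large perturbation in u(0).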

It is thus the very specific nature of the Euler equations with both pointwise exponential increase and decrease of perturbations generating turbulent solutions existing over long time without blow-up, which presents to us a computational problem which is easy to check (NP) but difficult to solve (notP) as a counterexample to P = NP.

Computational Euler can be formulated as a discrete n-bit problem which is not "man-made", in the sense that it expresses the formidable complexity of physical turbulent flow. This is to be compared with the attempts to design man-made n-bit counterexamples to P = NP (searching, traveling salesman, prime number factorisation), which have not been successful so far, probably because they lack some of the richness of Computational Euler.

My conjecture is thus that Computational Euler Backward is a problem which is NP (easy to check) but notP because (i) time-stepping backward is impossible with any amount of work, (ii) iterative forward shooting requires exponential work because of the richness/complexity of turbulent Euler solutions, and there are no thinkable algorithms beyond (i) and (ii) which can be P.

PS1 It is natural to compare the Euler equations with (a) the heat equation and (b) the Lorenz equations. The heat equation forward/backward is exponentially stable/unstable and so notP by (i) but P by (ii), and thus does not offer a counterexample to P = NP. The Lorenz equations are exponentially unstable both forward and backward and thus do not offer a counterexample to P = NP. The Euler equations are both stable and unstable, both forward and backward, and thus seem to offer a counterexample to P = NP.
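The forward instability of the Lorenz equations mentioned above can be seen in a small sketch of my own: two trajectories started 1e-9 apart diverge to a macroscopic gap, so fine details of the initial state cannot be stably recovered from a later state.

```python
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit Euler step of the Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)  # same state, perturbed by 1e-9
maxgap = 0.0
for _ in range(6000):       # integrate to t = 30
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    maxgap = max(maxgap, max(abs(p - q) for p, q in zip(a, b)))

print(maxgap)  # the 1e-9 perturbation has grown by many orders of magnitude
```

The crude explicit Euler discretisation is only for illustration; any standard integrator shows the same exponential separation of nearby trajectories.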

PS2 Let me illustrate that the Euler equations, though formally time reversible, in reality are strongly time irreversible. Consider the flow around a circular cylinder with inflow from the left, depicted in blue below, and observe that the solution is non-symmetric in the flow direction, the flow being laminar before attachment to the left of the cylinder and turbulent after separation to the right. Consider now reversing the velocities at a final time T with the intention of recovering the solution at an earlier time 0 < T, which is the same as reversing time to recover the state at time 0 from the state at time T:

A corresponding recovered solution obtained by time stepping the Euler equations is depicted in red below; it is analogous to the forward solution in blue, but qualitatively different from it. The necessary creation of turbulence at separation thus makes solutions with opposite inflow velocities qualitatively different. We conclude that backward time stepping cannot solve our problem Q of recovering an earlier state from a later one.

What remains is then iteration by shooting, and the idea is that the richness of solutions to the Euler equations makes this impossible with polynomial work.

PS3 The P = NP? problem has the form of a recovery/reconstruction problem with a forward problem which is NP (easy to check) and a backward problem which, if notP, would give a counterexample to P = NP. This is the setting in cryptography based on problems which are easy to solve in forward form (check if a given prime factorisation of a given natural number is correct) but difficult in backward form (given a natural number, find its factorisation into prime numbers).

The difficulty in the setting of P = NP? with man-made problems of this nature is to show that there is no algorithm for prime factorisation which is P, and there seems to be no way to show that this is impossible, since the number of possible algorithms is limitless because the factorisation problem itself has such a simple nature, following the logic that the simpler the problem, the more algorithms are possible.

In the setting of the Euler equations we turn this around and consider a problem which is very complex, with the idea that the number of possible algorithms then may be bounded (by the two algorithms above, say), thus making an exhaustive test possible.

PS4 Let me detail PS1 a bit. Consider the following characteristics of a recovery problem Q of identifying initial data u(0) from the solution u(T) at a later time T, in the settings (a) heat equation, (b) Lorenz equations and (c) incompressible Euler equations:
• (a): Forward stable - backward unstable: Gross features of u(0) can be recovered from u(T) by solving backward in time by time stepping or shooting and thus Q is P. Damping of all fine details in forward time. Many u(0) give almost the same u(T), or, u(0) is not very specific.
• (b): Forward stable-unstable and backward stable-unstable: Both checking and recovery of gross features require exponential work, and thus Q is not NP. No damping of some fine details in forward time.
• (c): Forward stable-unstable and backward unstable-stable as in (b), but with gross features of u(T) computable from u(0) with polynomial work, thus with Q as NP. The question is now if recovery of u(0) from u(T) is possible with polynomial work. We know that backward time stepping is notP, and we now ask if shooting is P as in (a). The difference from (a) is that some fine details of u(0) are not damped in forward time and thus may influence gross features of u(T). Recovery of u(0) thus appears to demand recovery of not only gross features but also fine details of u(0), and recovering fine details by shooting may well be so difficult that it is notP. In this case few u(0) give almost the same u(T), and so recovery is much more difficult than in (a).
We have thus nailed the problem of P = NP to a question of recovery of initial data u(0) for incompressible Euler from solution data u(T) at a later time T. We have noticed that fine details of u(0) may influence gross features of u(T), and recovery of u(0) thus demands recovery of fine details of u(0), that is, a very specific u(0); the conjecture is then that this problem is notP.

PS5 In a setting of authentication the problem Q may take the following form: Choose an initial state u(0) of complex nature and solve the Euler equations forward with a specified discretisation in space-time (G2 with a certain mesh) to obtain a final state u(T). Use u(0) as the key to a lock consisting of G2 with mesh and u(T). Knowledge of u(0) allows opening the lock by solving Euler with the G2-mesh to get u(T). Here the G2-mesh and u(T) may be public, and the idea is that finding the key u(0) from public data will be exponentially difficult, while testing whether a key candidate works is polynomially easy.
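A hypothetical sketch of this lock/key scheme, with a simple chaotic map standing in for "solve Euler with G2 on a fixed mesh" (G2 itself is not implemented here; the names `evolve`, `try_key` and the numerical values are my own):

```python
def evolve(u0, steps=1000):
    """Forward 'solver': iterate a sensitive map from initial state u0."""
    u = u0
    for _ in range(steps):
        u = 3.9999 * u * (1.0 - u)  # logistic map in its chaotic regime
    return u

secret_key = 0.123456789            # u(0), kept private
public_state = evolve(secret_key)   # u(T), may be public

def try_key(candidate, tol=1e-12):
    """Checking a candidate key is one cheap forward solve (the 'NP' part)."""
    return abs(evolve(candidate) - public_state) < tol

assert try_key(secret_key)
assert not try_key(0.123456790)     # a tiny perturbation of the key fails
```

The point of the sketch is only the asymmetry: one forward run verifies a key, while sensitivity to the initial state is meant to make finding the key from `public_state` hard.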

PS6 Alternatively, G2-mesh can be seen as a cryptographic hash function with the property of computing from a given input or message u(0) an output or hash value u(T), such that (i) u(T) is easy to compute from u(0), (ii) it is difficult to recover u(0) from u(T), (iii) a small change of message gives a large change of hash value, and (iv) it is difficult to find two messages with the same hash value.
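The four properties can be illustrated with an established cryptographic hash from the Python standard library (SHA-256 here is only a concrete stand-in for the abstract properties; G2 itself is not implemented):

```python
import hashlib

def h(message: str) -> str:
    """SHA-256 digest of a message, as a 64-character hex string."""
    return hashlib.sha256(message.encode()).hexdigest()

a = h("u0: some complex initial state")
b = h("u0: some complex initial statf")  # one character changed

# (i) easy to compute; (iii) avalanche effect: a one-character change
# of the message makes the digests look completely unrelated.
differing = sum(x != y for x, y in zip(a, b))
print(a)
print(b)
print(differing, "of", len(a), "hex digits differ")
```

Properties (ii) and (iv) are exactly what is believed, but not proved, for such functions, which is what connects this PS to the P = NP question.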

PS7 Turbulent solutions of the Euler equations express the 2nd law of thermodynamics as describing a process irreversible in time, where the irreversibility is the result of turbulent dissipation which is substantial and unavoidable. Exact solutions of the Euler equations (such as potential solutions) are reversible in time, but are unstable and thus both unphysical and uncomputable under finite precision. Turbulent Euler solutions represent a computational process/algorithm with substantial and unavoidable irreversibility in time, and thus give the 2nd law a rationale without the ad hoc assumptions of diffusion, tendency to disorder or stochastics which make the 2nd law in standard form a mystery:

The irreversibility of Euler solutions is the result of the impossibility of computing exact solutions and the necessary presence of turbulent dissipation in computable weak solutions. It is natural that a counterexample to P = NP can be found in this setting, with the impossibility expressing notP and the possibility NP.

## Tuesday 5 April 2016

### New blog: NewMath: Mathematics-IT

I have started a new blog in the form of an English version of the reform program Matematik-IT, under the name NewMath: Mathematics-IT.

Apps supporting the program are now available on the App Store for download: search for "newmath" and you will find 10 apps to try.

## Monday 4 April 2016

### A Curriculum for Mathematics on the Swedish Model?

This is what the curriculum for mathematics would look like if the Swedish language is replaced by mathematics in the current compulsory-school curriculum for Swedish:

Mathematics is humanity's foremost tool for thinking, communicating and learning. Through mathematics, people develop their identity, express thoughts and understand how others think. Possessing a rich and varied mathematics is important for being able to understand and act in a society where cultures, world views, generations and different kinds of mathematics meet.

Teaching in the subject of mathematics shall aim at pupils developing knowledge in and about mathematics. Through the teaching, pupils shall be given the opportunity to develop their mathematical language so that they gain confidence in their mathematical ability and can express themselves in different contexts and for different purposes. This means that pupils, through the teaching, shall be given the opportunity to develop mathematics for thinking, communicating and learning.

The teaching shall stimulate pupils' interest in mathematics. Through the teaching, pupils shall be given the opportunity to develop knowledge of how to formulate their own mathematics in different media. The teaching shall also aim at pupils developing the ability to create and work with mathematics, individually and together with others. Pupils shall also be stimulated to express themselves through various aesthetic mathematical forms of expression. Furthermore, the teaching shall contribute to pupils developing knowledge of how to search for and critically evaluate mathematical information from different sources.

In the teaching, pupils shall encounter and gain knowledge of pure mathematics from different eras and different parts of the world. The teaching shall also contribute to pupils developing knowledge of various forms of applied mathematics. In the encounter with different types of mathematics, pupils shall be given the opportunity to develop their own mathematics and their understanding of the surrounding world.

Through the teaching, pupils shall be given the opportunity to develop their knowledge of mathematics: its norms, structure, history and development, and how mathematical language varies depending on social context and media. In this way the teaching shall help strengthen pupils' awareness of, and confidence in, their own mathematical and communicative ability. The teaching shall also contribute to pupils understanding that the way mathematics is communicated can have consequences for other people. Thereby pupils shall be given the opportunity to take responsibility for their own mathematics.

The teaching shall also contribute to pupils encountering and becoming acquainted with the mathematics of the neighbouring Nordic countries as well as national minority mathematics.

Through teaching in the subject of mathematics, pupils shall, in summary, be given the opportunity to develop their ability to
• express themselves and communicate mathematically
• read and analyse pure mathematics and other mathematics for different purposes,
• adapt mathematics to different purposes, recipients and contexts
• distinguish mathematical structures and follow mathematical norms
• search for information from different sources and evaluate these.

### P = NP?

The IT guru Donald Knuth has spoken and said that P = NP:
• I've come to believe that P=NP, namely that there does exist an integer M and an algorithm A that will solve every n-bit problem belonging to the class NP in $n^M$ elementary steps.
• Some of my reasoning is admittedly naïve: It's hard to believe that P≠NP and that so many brilliant people have failed to discover why.
Knuth then continues by saying that the question itself, P = NP?, as one of the Clay problems, is not very interesting:
• My main point, however, is that I don't believe that the equality P=NP will turn out to be helpful even if it is proved, because such a proof will almost surely be nonconstructive. Although I think M probably exists, I also think human beings will never know such a value. I even suspect that nobody will even know an upper bound on M.
Knuth here expresses one of the main points I have been preaching to students, that quality without quantitative measure is meaningless:
• to distinguish between exponential and polynomial growth without a careful quantitative measure may be meaningless;
• to define a limit in terms of $\epsilon$ and $\delta$ without specifying the dependence of $\delta$ upon $\epsilon$ in quantitative form may be meaningless.
But Knuth's belief that P = NP holds anyway, whatever that means, appears to result from a belief that the number of possible algorithms is so incredibly large that it is impossible to say that there is no algorithm solving a given problem in polynomial time, whatever that means.

Another approach is to consider a specific problem, for example solving the heat equation backward in time, which is believed to be exponentially difficult, compared to solving the heat equation forward in time, which is known to require only polynomial work.

The number of known algorithms for the backward heat equation is two:
1. Backward time stepping (exponential)
2. Iterative solution based on forward time stepping (requires stabilisation).
The instability of the backward heat equation means that 1 is exponential, and depending on how the solution tolerance/stabilisation is specified, also 2 may come out to be exponential, I guess.
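The contrast between naive backward time stepping and a stabilised recovery can be illustrated in a toy spectral setting of my own (a frequency cut-off standing in for the regularisation mentioned above):

```python
import math

# Forward heat flow damps Fourier mode k by exp(-k^2*T). Naive backward
# solution divides by this factor and amplifies measurement noise in
# high modes; the regularised variant recovers only the gross
# (low-mode) features of the initial data.
T = 0.1
modes = range(1, 21)
u0 = {k: 1.0 / k for k in modes}                      # "true" initial data

uT = {k: u0[k] * math.exp(-k**2 * T) for k in modes}  # forward solve
noisy = {k: uT[k] + 1e-10 for k in modes}             # measurement noise

naive = {k: noisy[k] * math.exp(k**2 * T) for k in modes}
regularised = {k: (noisy[k] * math.exp(k**2 * T) if k**2 * T < 10 else 0.0)
               for k in modes}                        # cut-off regularisation

err_naive = max(abs(naive[k] - u0[k]) for k in modes)
err_reg = max(abs(regularised[k] - u0[k]) for k in modes)
print(err_naive, err_reg)  # huge error vs a modest error in fine details only
```

This matches the text: backward stepping (algorithm 1) is hopeless, while the stabilised variant (algorithm 2) recovers gross features at the price of losing the fine details.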

The question is then whether there are other thinkable methods for solving the backward heat equation. How to prove such a thing? What is a thinkable method? What says that the number of possible algorithms is large? Isn't it the other way around, that the possible algorithms are few?

The P = NP problem is similar to another Clay problem, that on the Navier-Stokes equations, in the sense that the quantitative measure needed to make the problem meaningful in the sense of Knuth is lacking. Thus at least two of the Clay mathematical problems appear to be meaningless as formulated, and one may ask what that says about contemporary mathematics, or about Clay.