Thursday 28 February 2019

Planck's Desperate Mad Ugly Ad Hoc Trick: The Quantum

Planck's reasoning was mad, but his madness has that divine quality that only the greatest transitional figures can bring to science. (Abraham Pais in The Science and Life of Albert Einstein)

...the whole procedure was an act of despair because a theoretical interpretation had to be found at any price, no matter how high that might be... (Planck on the statistical mechanics basis of his radiation law)

Sabine Hossenfelder on Backreaction praises the new book Breakfast with Einstein by Chad Orzel:
  • Physics is everywhere, that is the message of Chad Orzel’s new book “Breakfast with Einstein,” and he delivers his message masterfully.
  • In contrast to many recent books about physics, Orzel stays away from speculation, and focuses instead on the many remarkable achievements that last century's physics led to.
Planck was not happy with his desperate mad ugly ad hoc trick of the quantum

Chapter 2 of the book has the title The Heating Element: Planck's Desperate Trick, with the objective of describing the birth of quantum mechanics attributed to Planck's (ugly ad hoc) trick of avoiding the apparent ultraviolet catastrophe of classical wave mechanics by introducing the concept of a smallest package of energy named the quantum: 
  • This “quantum hypothesis” does the necessary trick of cutting off the amount of light at high frequencies—exactly where the ultraviolet catastrophe happens. 
  • Planck initially introduced the quantum hypothesis thinking it was a “desperate mathematical trick.” 
  • Despite the many successes of his formula and the personal fame it brought him, Max Planck himself was never particularly satisfied with his quantum theory.
  • He regarded the quantum hypothesis as an ugly ad hoc trick, and he hoped that someone would find a way to get from basic physical principles to his formula for the spectrum without resorting to that quantum business. 
  • Once the idea was out there, though, other physicists picked it up and ran with it, most notably a certain patent clerk in Switzerland—leading to a complete and radical transformation of all of physics.
I have presented an alternative theory based on finite precision computation, presented on Computational BlackBody Radiation, which meets Planck's wish of explaining the blackbody spectrum from basic classical wave mechanics. Why not take a look and see if you get enlightened, by a physical theory of blackbody radiation?

The idea of finite precision computation is the same as that used in a new explanation of the 2nd law of thermodynamics discussed in the previous post on Boltzmann and his explanation based on (ugly ad hoc) statistics.

The master of ugly ad hoc tricks is Roger Stone as documented in his new book Stone's Rules. Such tricks can take you to the top of both science and politics! They can give you fame, but evidently not happiness. Another master of this game was the patent clerk in Switzerland, who also was unhappy with his theories, in particular the theory of the quantum he picked up from Planck, which gave him such immense fame:
  • If I would be a young man again and had to decide how to make my living, I would not try to become a scientist or scholar or teacher. I would rather choose to be a plumber or a peddler in the hope to find that modest degree of independence still available under present circumstances.
  • All these fifty years of conscious brooding have brought me no nearer to the answer to the question, “What are light quanta?”. Nowadays every Tom, Dick and Harry thinks he knows it, but he is mistaken.
  • For the most part I do the thing which my own nature drives me to do. It is embarrassing to earn so much respect and love for it. 
  • Why is it that nobody understands me, and everybody likes me? (Einstein in New York Times, March 12, 1944) 
PS1 Often a truth about science, or rather a truth about a shortcoming of some scientific theory, is expressed more honestly in popular science (because the audience is supposed to be ignorant), as with the truth about the ugly ad hoc science of the quantum in Orzel's book, than in some professional scientific context hiding the shortcoming in a cover-up (because the audience is supposed to be knowledgeable and critical). It is therefore interesting also for a scientist to read popular science.

PS2 Planck's desperate ugly ad hoc trick (which originates from Boltzmann) has caused a lot of confusion among physicists. For example, quantum mechanics, which is not understood by any serious honest physicist, is supposed to have some mysterious connection to Planck's quantum of energy, but the fact is that quantum mechanics is based on Schrödinger's equation, which is a continuum mechanical model and not a discrete model built from small packets of energy. The confusion is confronted in Real Quantum Mechanics, offering a new form of, and new view on, Schrödinger's equation with the common confusion eliminated. But it is not easy to get a discussion going on the fundamentals of quantum mechanics, since the confusion resulting from a desperate mad ugly ad hoc trick, supposed to be the foundation of modern physics, is so monumental. No wonder that physics is in crisis. See also Dr Faustus of Modern Physics.

PS3 Recall that it was Einstein who introduced the idea that light is made of discrete chunks of energy $h\nu$ as photons, with $h$ Planck's constant in Joule-seconds and $\nu$ frequency, in his heuristic Law of the photoelectric effect $h\nu = W + eU$, with $W$ the work required to release an electron and $eU$ the electron energy in electron volts (eV), where $U$ is the stopping potential in Volt and $e$ the charge of an electron. I argue in Mathematical Physics of BlackBody Radiation and related blog posts that the Law is to be viewed as a frequency threshold condition, which has no relation to any idea of light as consisting of discrete photons or light quanta, which according to the above quote was also the view of the late Einstein.

The Law shows that Planck's constant $h$ acts as a conversion factor between energy related to light frequency $\nu$ (in Joule) and electron energy (in eV). For this Law Einstein received the Nobel Prize in Physics in 1921, with explicit mention that he did not get the Prize for his theories of relativity. 
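
To make the conversion concrete, here is a small numerical illustration in Python (a sketch of mine; the frequency and the work function are assumed example values, not taken from the post):

# Planck's constant h as conversion factor in the photoelectric law h*nu = W + e*U
h = 6.626e-34    # Planck's constant in Joule-seconds
e = 1.602e-19    # electron charge in Coulomb, so 1 eV = 1.602e-19 Joule

nu = 1.0e15      # light frequency in Hz (assumed example value)
W_eV = 2.3       # work to release an electron, in eV (assumed example value)

E_J = h * nu     # photon energy h*nu in Joule
E_eV = E_J / e   # the same energy expressed in eV
U = E_eV - W_eV  # stopping potential in Volt, from h*nu = W + e*U

print(f"h*nu = {E_J:.3e} J = {E_eV:.2f} eV, stopping potential U = {U:.2f} V")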

PS4 Schrödinger's equation connects energy related to light frequency and electron energy, and it is thus no wonder that the Planck constant appearing in Schrödinger's equation is the same as that in the Law of the photoelectric effect. Mathematical Physics of BlackBody Radiation also gives evidence that the Law of the photoelectric effect is a consequence of Schrödinger's equation, within a continuum model without photon particles and without reference to Einstein's heuristic argument that a photon of sufficient energy can kick out an electron. 

 

Friday 22 February 2019

Boltzmann 175 vs 2nd Law by Finite Precision Computation

Ludwig Boltzmann 1844-1906

Lubos on the Reference Frame recalls the 175th birthday of Ludwig Boltzmann:
  • Yesterday, Ludwig Eduard Boltzmann would have had a chance to celebrate his 175th birthday if he hadn't killed that chance by hanging himself at age of 62...
  • Boltzmann's reasons powering the suicide were intellectually driven frustrations.
  • If he were resurrected and if he were around, he would probably ask me whether there's a reasonable chance that the people will get more reasonable when it comes to the ideas required for his new statistical picture of thermodynamics and physics in general. I would probably answer "No" and he would hang himself again. 
Lubos then enters into a defence of Boltzmann's 2nd law based on statistics and the related Copenhagen interpretation of quantum mechanics, with electrons randomly jumping around atomic nuclei, something which Einstein and Schrödinger never accepted. 

The problem Boltzmann tried to solve, with its tragic ending, is how formally reversible systems can turn out to have irreversible solutions. Boltzmann showed that you can hang yourself but cannot un-hang yourself, and he sought the explanation in statistics. Unsuccessfully, according to Lubos, because still today people cannot understand what he was saying about entropy and a 2nd law based on nonsensical statistics saying that something with a higher probability is more likely to happen. I think this is not because people/scientists are stupid, as Lubos claims, but because what Boltzmann says makes sense only to Lubos.

I have presented a different explanation based on finite precision computation. It says that the reason you cannot undo things is lack of precision, and that all physics, like digital computation, is realised in finite precision. This means that you can enter a labyrinth (the woods/world) with finite precision, like taking a step forward in time, but you cannot find your way out of the labyrinth or retrace your path through the woods, going back in time, because you are limited by finite precision. The arrow of time is an expression of finite precision computational physics. This is meaningful physics, different from Boltzmann's empty idea that the world moves from less probable to more probable states, or from more ordered to less ordered states (with the start/big bang as the most ordered state left inexplicable). 
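
The point can be illustrated with a few lines of Python, using the Chirikov standard map as a stand-in for a formally reversible system (my choice of example, not from the post). The map has an exact analytical inverse, yet in double precision a chaotic orbit cannot be retraced: stepping forward and then backward fails to return to the starting point once rounding errors have been amplified.

import math

K = 5.0  # kick strength; this large value makes the map chaotic (assumed example)

def forward(x, p):
    # Chirikov standard map: formally invertible and area preserving
    p = p + K * math.sin(x)
    x = (x + p) % (2 * math.pi)
    return x, p

def backward(x, p):
    # exact analytical inverse of the forward map
    x = (x - p) % (2 * math.pi)
    p = p - K * math.sin(x)
    return x, p

x0, p0 = 1.0, 0.5
for n in [10, 30, 50, 100]:
    x, p = x0, p0
    for _ in range(n):
        x, p = forward(x, p)
    for _ in range(n):
        x, p = backward(x, p)
    # with exact arithmetic the orbit would return exactly; in finite
    # precision the return error grows exponentially with n
    print(f"{n:4d} steps there and back: |x - x0| = {abs(x - x0):.2e}")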

I thus offer an explanation of the 2nd law of thermodynamics, presented in many earlier blog posts and in the book Computational Thermodynamics, explaining that finite precision solutions of a formally reversible system like the Euler equations of fluid mechanics can turn out to be irreversible, e.g. by the emergence of turbulence. This directly connects to a resolution of the Clay Navier-Stokes Problem reported in previous posts.

The catch is that formally reversible systems can have irreversible solutions if precision is finite, and of course precision cannot be infinite, neither in digital computation nor in the physical world.

Thursday 21 February 2019

Hamming and Tartar on the Clay Navier-Stokes Problem

Richard Hamming (1915-98)

The mathematician Richard Hamming said:
  • Mathematics is an interesting intellectual sport but it should not be allowed to stand in the way of obtaining sensible information about physical processes.
An example is given by the official formal formulation of the Clay Navier-Stokes Problem by Fefferman, which does not mention the word turbulence, although it is central in the informal presentation:
  • Waves follow our boat as we meander across the lake, and turbulent air currents follow our flight in a modern jet. Mathematicians and physicists believe that an explanation for and the prediction of both the breeze and the turbulence can be found through an understanding of solutions to the Navier-Stokes equations. Although these equations were written down in the 19th Century, our understanding of them remains minimal. The challenge is to make substantial progress toward a mathematical theory which will unlock the secrets hidden in the Navier-Stokes equations.
Everybody, except Fefferman, understands that turbulence is the secret hidden in Navier-Stokes, and that the Clay problem, to be more than an intellectual sport standing in the way of sensible information, should ask about a mathematical theory unlocking the secret of turbulence. 

The informal presentation is sensible, while the formal presentation is nothing but an intellectual sport, and one without practitioners, since no progress towards a resolution has been made since 2000, or rather since 1934 when Leray proved existence of weak solutions, leaving uniqueness and wellposedness completely open.

We have presented a resolution to a reformulated Clay Problem offering sensible information about the physical process of turbulence, by computation. We hope there are some sensible people who will react to our resolution. We show by computation that weak solutions exist, are non-smooth/turbulent, and have wellposed mean-values such as drag and lift. 

Hamming also said:
  • The purpose of computing is insight, not numbers. … 
  • [But] sometimes … the purpose of computing numbers is not yet in sight.
Yes, we find that being able to compute (turbulent) solutions to the Navier-Stokes equations opens the way to insight into the nature and manifestation of turbulence. DFS is in sight and gives insight! 

So, what insight has DFS brought? Here is one major revelation:
  • bluff body flow = potential flow + turbulent 3d rotational slip separation.
Bluff body flow is thus computable by DFS, which offers a revolutionary new capacity to CFD with a vast field of applications for all sorts of vehicles or life moving through air and water, and the fluid mechanics is understandable!

Also note what the mathematician Luc Tartar says in the presentation of his book on Navier-Stokes:
  • To an uninformed observer, it may seem that there is more interest in the Navier-Stokes equation nowadays, but many who claim to be interested show such a lack of knowledge about continuum mechanics that one may wonder about such a superficial attraction. 
  • Could one of the Clay Millennium Prizes be the reason behind this renewed  interest?
  • Reading the text of the conjectures to be solved for winning that particular prize leaves the impression that the subject was not chosen by people interested in continuum mechanics, as the selected questions have almost no physical content.
  • The problems seem to have been chosen in the hope that they will be solved by specialists of harmonic analysis...
  • I  hope that this particular set of lecture notes...may help the readers understand a little more about the physical content of the equation, and also its limitations, which many do not seem to be aware of.
And as before: the pure mathematicians Fefferman, Constantin and Tao in charge of the problem formulation refuse to participate in any form of discussion.  Why? Lack of knowledge about continuum mechanics, with focus instead on harmonic analysis?

And remember:
  • What is computable is understandable. (Pythagoras)
Luc Tartar

   

Wednesday 20 February 2019

From Equation to Solution

This is a continuation of the previous post on the role of functional analysis, more precisely the role of the finite element method as a form of computational functional analysis.

We start with the basic partial differential equation of physics and mechanics, Poisson's equation:
  • $-\Delta u(x) = f(x)$ for $x\in\Omega$,
  • $u(x)=0$ for  $x\in\Gamma$, 
where $\Omega$ is a domain in space with boundary $\Gamma$, $f(x)$ is a given function defined on $\Omega$ and $u(x)$ is the solution to the equation defined on $\Omega$ and $\Gamma$. The game is: Given $f(x)$ find $u(x)$ satisfying Poisson's equation.

We can think of the differential equation $-\Delta u(x)=f(x)$ as expressing force balance at the point $x$ with $u(x)$ the deflection of an elastic membrane under a transversal force or load $f(x)$, in case $\Omega$ is two-dimensional.  There are endless other interpretations.

So far so good: the partial differential equation $-\Delta u=f$ captures complex physics in very compact, beautiful mathematical form, and so is marvellous, but there is one caveat: the formulation of the equation gives no clue to how to determine the solution $u(x)$. The equation is like a rebus without any hint of its solution.

It is here that functional analysis enters by offering a reformulation of the differential equation $-\Delta u =f$ into variational form: Find $u\in V$ such that
  • $\int_\Omega \nabla u\cdot\nabla v\, dx = \int_\Omega fv\, dx$ for all $v\in V$,       (1) 
where $V$ is a collection (function space) of possible solutions, from which a best possible solution $u(x)$ is determined by the relation (1). Formally (1) is obtained by multiplying the differential equation $-\Delta u=f$ on both sides by an arbitrary function $v\in V$ and integrating over $\Omega$, using integration by parts and that $v=0$ on $\Gamma$ to see that  
  • $-\int_\Omega\Delta u\, v\, dx =\int_\Omega\nabla u\cdot\nabla v\, dx$. 
In the finite element method the space $V$ consists of piecewise polynomial functions over a triangulation of $\Omega$ and (1) is a linear system of algebraic equations, which can be solved by Jacobi iteration or Gaussian elimination. 
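
As a concrete illustration (a minimal sketch of my own, not from the post): for $-u''=f$ on $(0,1)$ with $u(0)=u(1)=0$ and piecewise linear hat functions on a uniform mesh, (1) becomes a tridiagonal linear system, solvable by Gaussian elimination:

import numpy as np

# 1D finite element sketch for -u'' = f on (0,1), u(0) = u(1) = 0,
# with piecewise linear hat functions on a uniform mesh
n = 99                         # number of interior nodes
h = 1.0 / (n + 1)              # mesh size
x = np.linspace(h, 1 - h, n)   # interior node coordinates

f = lambda y: np.pi**2 * np.sin(np.pi * y)   # load with exact solution sin(pi*y)

# stiffness matrix A[i,j] = integral of phi_i' phi_j' dx = (1/h) tridiag(-1, 2, -1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
# load vector b[i] = integral of f phi_i dx, here with lumped (midpoint) quadrature
b = h * f(x)

u = np.linalg.solve(A, b)      # Gaussian elimination

print("max nodal error:", np.abs(u - np.sin(np.pi * x)).max())  # O(h^2)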

The differential equation as unsolvable rebus has thus been reformulated into variational form which allows a best possible solution to be computed by standard linear algebra software.  Here functional analysis enters in the variational formulation and the construction of the finite element space $V$.

The great thing is now that the same method works for virtually any (partial) differential equation, in particular the differential equations of science and technology: Reformulating the differential equation into variational form allows computation of best possible (approximate) solution. 

This is realised in the FEniCS Project, software automating the whole process (sketched below) consisting of 
  • reformulation into variational form, 
  • construction of finite element space $V$,
  • computation of solution by linear algebra. 
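
For Poisson's equation the whole pipeline is a few lines. Below is a minimal sketch using the legacy FEniCS (dolfin) Python interface; the mesh resolution and the load are assumed example choices:

from fenics import *

# domain and finite element space V of piecewise linear functions
mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "P", 1)

# variational form (1): find u in V with a(u, v) = L(v) for all v in V
u = TrialFunction(V)
v = TestFunction(V)
f = Constant(1.0)               # load f(x)
a = dot(grad(u), grad(v)) * dx  # integral of grad u . grad v over Omega
L = f * v * dx                  # integral of f v over Omega

bc = DirichletBC(V, Constant(0.0), "on_boundary")  # u = 0 on Gamma

# computation of the solution by linear algebra
u_h = Function(V)
solve(a == L, u_h, bc)
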
The crown jewel is automated computation of best possible solutions of the Navier-Stokes equations, which we claim resolves the Clay Navier-Stokes Problem and makes turbulent flow computable, and thus understandable, for the first time. And this is only the beginning of a FEniCS revolution.

We understand that the differential equation $-\Delta u(x)=f(x)$ expresses local force balance (at the point $x$), while the solution $u(x)$ comes out as a global effect depending on $f(y)$ for all $y$, and not just on $f(x)$. This means that determining $u(x)$ requires computation collecting many local inputs into one global output.

The mathematics of Jacobi iteration then corresponds to the physics of relaxation where the system reacts to reduce force imbalance. Gaussian elimination (or even better multi-grid) is more efficient than Jacobi iteration, which allows mathematics to take a short-cut to solution compared to physical relaxation.
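
A sketch of Jacobi relaxation on the 1D Poisson system assembled in the sketch above (again my own illustration): each sweep reduces the local force imbalance a little, and thousands of sweeps are needed where elimination or multi-grid takes a short-cut.

import numpy as np

# Jacobi iteration as mathematical relaxation on the 1D Poisson system A u = b
n = 49
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
b = h * np.pi**2 * np.sin(np.pi * x)

D = np.diag(A)              # diagonal of A
u = np.zeros(n)             # start from rest
for k in range(20000):
    r = b - A @ u           # residual = remaining force imbalance
    if np.linalg.norm(r) < 1e-8:
        break
    u = u + r / D           # each node relaxes toward local equilibrium

print(f"Jacobi relaxation converged after {k} iterations")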

PS The Navier-Stokes-Euler equations for incompressible flow contain the equation
  • $\nabla\cdot u=0$ 
expressing the incompressibility, together with an equation expressing force balance according to Newton's 2nd law. The equation $\nabla\cdot u=0$ does not express force balance, and appears more like a regulation stipulating a certain property of the solution (incompressibility) than a true law of physics like Newton's 2nd law. In DFS, (near) incompressibility is instead expressed as a pressure law of basic form
  • $\Delta p=\frac{\nabla\cdot u}{\delta}$  
where $\delta > 0$ is a small parameter, with the effect of forcing $\nabla\cdot u$ to be small by pressure, as an expression of some physics. The lesson is that a differential equation without a solution procedure is only half of the story. Stating laws without means of enforcing them may be empty.
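
One way to see how this pressure law enforces near incompressibility (a sketch of my own, keeping only the pressure term in the momentum balance $\dot u + \nabla p = f$): taking the divergence and inserting the pressure law gives
  • $\frac{d}{dt}(\nabla\cdot u) + \frac{1}{\delta}\nabla\cdot u = \nabla\cdot f$,
so $\nabla\cdot u$ relaxes exponentially, on the fast time scale $\delta$, toward the small value $\delta\,\nabla\cdot f$: the pressure acts as the mechanism enforcing the incompressibility regulation.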

Tuesday 19 February 2019

Banach and DFS and Clay Navier-Stokes Problem


This is an exercise in preparation for participation in a film about the Polish mathematician Stefan Banach who advanced functional analysis as mathematics describing relations between functions or analogies between analogies. My punch line is that the finite element method, as the subject of my work, is (nothing but) computational functional analysis following the spirit of Banach.

The crown of my work, together with Johan Hoffman and Johan Jansson, is Direct Finite Element Simulation DFS as solution of the Navier-Stokes-Euler equations without turbulence model or complicated wall model from a principle of best possible solution, in a situation where there is no exact solution. DFS brings revolutionary new capacity to Computational Fluid Dynamics CFD, which we (as a show case) claim resolves the Clay Navier-Stokes Problem by computation.

Functional analysis was formed by the mathematician Hilbert at the switch to modernity around 1900, with contributions from the Swedish mathematician Fredholm, and was further developed by Banach starting in 1920. A prime objective was to justify the mathematical models in the form of partial differential equations of solid and fluid mechanics and electromagnetics, formulated during the 19th century by Laplace, Fourier, Navier, Stokes and Maxwell, by answering basic questions concerning existence and uniqueness of solutions, as well as construction of solutions by computation.

The basic element of functional analysis is a collection of functions named a Hilbert space or Banach space, equipped with a structure or geometry generalising that of ordinary three-dimensional space. The solution of a given partial differential equation is then an element of a suitably chosen Hilbert or Banach space, in basic cases determined by a principle of energy minimisation. The differential equation, which is impossible to solve directly by symbolic computation with pen and paper, is thus reformulated into a minimisation problem over a function space, which allows construction of solutions as limits of functions with decreasing energy, computed according to the Banach Contraction Mapping Theorem.
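
As a toy illustration of the Contraction Mapping Theorem at work (my example, not from the post): iterating the contraction $g(x)=\cos x$ converges geometrically to its unique fixed point.

import math

# Banach fixed point iteration for g(x) = cos(x): |g'(x)| = |sin(x)| < 1
# near the fixed point, so the iteration contracts and converges
# geometrically to the unique x* with x* = cos(x*)
x = 1.0
for k in range(200):
    x_new = math.cos(x)
    if abs(x_new - x) < 1e-12:
        break
    x = x_new

print(f"fixed point x* = {x_new:.12f} after {k} iterations")  # about 0.739085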

Starting in the 1950s this form of computational functional analysis has been developed, under the name of the finite element method, into a universal method for computing solutions of the differential equations of science and engineering, bringing revolutionary new capacities. This success story was darkened only by the Navier-Stokes-Euler equations of fluid mechanics, which were believed to demand computational power beyond anything that could be envisioned, the reason being the phenomena of turbulence and thin boundary layers involving small scales too costly to resolve computationally; these are the impossibilities presented in NASA CFD Vision 2030.

We show that with DFS the NASA CFD Vision 2030 is realised already today. By computational functional analysis in the spirit of Banach.

DFS and functional analysis thus give a new perspective: differential equations represent ideal physics, however with uncomputable or non-existing exact solutions as in the case of Navier-Stokes-Euler, while their reformulations in terms of functional analysis have computable approximate solutions representing real physics.



Tuesday 12 February 2019

Kolmogorov/Onsager: Turbulent Velocity 1/3 Hölder Continuous

Let me here recall the derivation by a scaling argument of the law of Kolmogorov/Onsager stating that fully developed turbulent velocities are Hölder continuous with exponent 1/3.

If $dx$ is the smallest scale in space and $du$ the corresponding variation of the velocity $u$, then we have, with $\nu >0$ the (small) viscosity:
  • $\nu (du/dx)^2 \sim 1$ (finite rate of turbulent dissipation)
  • $\frac{du\, dx}{\nu}\sim 1$ (Reynolds number on smallest scale $\sim 1$).
We solve to get $dx\sim \nu^{\frac{3}{4}}$ and $du\sim \nu^{\frac{1}{4}}$, and so $du\sim dx^{\frac{1}{3}}$, showing Hölder continuity with exponent 1/3.
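
For the record, the algebra behind these scalings: the second relation gives $du\sim\nu/dx$, which inserted into the first gives $\nu^3/dx^4\sim 1$, that is $dx\sim\nu^{\frac{3}{4}}$, and then $du\sim\nu/\nu^{\frac{3}{4}}=\nu^{\frac{1}{4}}=(\nu^{\frac{3}{4}})^{\frac{1}{3}}\sim dx^{\frac{1}{3}}$.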

The idea is that the flow will by instability develop smaller and smaller structures until the local Reynolds number becomes so small ($\approx 1000$) that this cascade stops on a smallest scale generating the bulk of the turbulent dissipation.

We see that velocity gradients $\frac{du}{dx}\sim \nu^{-\frac{1}{2}}$ are large, since $\nu$ is small, and so velocities are non-smooth.

The official formulation of the Clay Navier-Stokes Prize Problem by Fefferman asks about existence of smooth solutions. By the above argument this question cannot have a positive answer and so the question does not serve well as a Prize Problem.

A pure mathematician may counter this argument by claiming that a velocity with very large gradients can still be smooth, just with very large derivatives. And so even a turbulent solution of the Navier-Stokes equations can be viewed as smooth, just with very large derivatives, and so asking for existence of smooth solutions can in fact be meaningful, and so the Prize Problem in fact is meaningful. I think this means twisting the logic and terminology, which is not in the spirit of meaningful mathematics, pure or applied.

Saturday 9 February 2019

Is Digital Computation a Form of Mathematics?

In the last two posts a resolution of the Clay Navier-Stokes Prize Problem is presented, a resolution based on digital computation. I have tried to get some comment on our proposed resolution from the group of pure mathematicians in charge of the problem including in particular its official formulation: Charles Fefferman, Terence Tao and Peter Constantin, to whom I refer as the Problem Committee.

Sorry to say, I can only report silence from the Problem Committee: no comment whatsoever!

How can we understand this state of affairs? Is it so that our resolution lacks scientific substance? No, it represents a true breakthrough unlocking the main difficulties of mathematical modeling and simulation of fluid flow, and it is world-leading. No doubt about that!

The reason behind the silence is thus not lack of scientific interest, but probably rather the opposite: Our resolution being based on digital computation brings in a new kind of mathematics, which is different from that envisioned in the official formulation expressed in the frame of classical analytical theory of partial differential equations. It appears that the Problem Committee does not know how to react to this new kind of mathematics in the form of digital computation, and so silence is the only possible reaction, so far at least.

This connects to a wider question of the role of mathematics in physics including fluid mechanics with particular focus on the new role of digital computation.

Now, mathematics can be seen as different forms of computation, with classical pde-theory expressed as symbolic computation by pen and paper, and the new kind expressed by a computer executing the symbolic computation represented in the computer code.

So I again ask about the view of the Problem Committee on the possibility of resolving the Clay Problem by digital computation. Is it thinkable?  Or can only a resolution in the form of symbolic computation with pen and paper be accepted?  Is digital computation a form of mathematics?

Tao does not give any hope that solution by symbolic computation with pen and paper is possible!

Apparently Fefferman would be willing to give the Prize to Tao for a proof of blow-up towards infinite velocities, but so far Tao has not succeeded. But even if one day he were to succeed, that would only mean that the mathematical model is no good as a model of real fluid flow, since no observation of infinite velocities has been made, and why give a Prize for a discovery that a model is no good? More meaningful, maybe, to give the Prize for a result about a mathematical model of physical significance, like the one we give?

PS1 A pure mathematician might say that digital computation cannot deliver an answer for all (smooth) data and so would lack the generality of an answer by symbolic computation valid for any (smooth) data. To meet this criticism we can add that our resolution exhibits a different form of universality: We show that lift and drag of a body only depends on the shape of the body for high Reynolds number flow beyond the drag crisis at Reynolds number around $5\times 10^5$, that is for a very wide range of flows. Lift and drag depending only on shape is a form of universality. And we can compute lift and drag of any given body, case by case, but of course we cannot get a result for all bodies in one computation.

PS2 The official problem formulation by Fefferman takes as a fact that a smooth unique solution can cease to exist only if velocities become unbounded (referred to as blow-up at some specific finite time). But this is probably a misconception, since smooth solutions may turn into non-smooth solutions because velocity gradients become unbounded, which is what happens as a shock forms in compressible flow and turbulence develops in incompressible flow, while velocities stay bounded.

The official problem formulation is thus filled with misconceptions, and requires reformulation to become meaningful as a Mathematics Prize Problem.

Wednesday 6 February 2019

Wellposedness of Navier-Stokes/Euler: Clay Problem

This is a continuation of the previous post proposing a resolution of the Clay Navier-Stokes Millennium Problem, with further remarks on the aspect of wellposedness, identified by Hadamard in 1902 as necessary for a mathematical model to have physical meaning and relevance. The Navier-Stokes equations serve as the basic mathematical model of fluid mechanics, and the Clay Problem can be viewed to reduce to the question of wellposedness, since the existence of (weak) solutions was established by Leray in 1934.

And this is the question we answer: We show that weak solutions are computable (exist) and are non-smooth/turbulent with wellposed mean-value outputs. We do this by solving a (dual) linearized problem with certain data and showing a bound of the dual solution (here for the lift of a jumbo jet) in terms of the data, which we refer to as an assessment of stability. This bound translates into an error bound on an output of a computed solution in terms of its Navier-Stokes residual, showing that the output is well determined in the presence of small disturbances.
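
In compact form (my paraphrase of the duality argument, with notation not from the post): if $U$ is a computed velocity with Navier-Stokes residual $R(U)$, $M(\cdot)$ a mean-value output such as lift or drag, and $\varphi$ the solution of the dual linearized problem with data defined by $M$, then
  • $M(u)-M(U) = (R(U),\varphi)$, so that $|M(u)-M(U)|\le S\,\Vert R(U)\Vert$ with stability factor $S=\Vert\varphi\Vert$,
and the output is wellposed precisely when the computed stability factor $S$ is of moderate size.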

The dual linearized problem has a reaction term with coefficient $\nabla u$, with $u$ a computed velocity. The reaction term drives both exponential growth and decay, its trace being zero by incompressibility. The wellposedness of computed turbulent solutions is reflected in cancellation effects from the reaction term, with exponential growth balanced by exponential decay from oscillations of turbulent solutions.

We thus argue that we have resolved the Clay Problem by showing that weak solutions are computable/exist and turn out to be non-smooth/turbulent with wellposed mean-value outputs. In particular we show that lift and drag are wellposed, and thus reveal the secret of flight. 

It remains to be seen if our resolution will be accepted by the group of pure mathematicians owning the problem, including Charles Fefferman, responsible for the official problem formulation, Peter Constantin and Terence Tao. One thing is notable: Fefferman's formulation does not involve the aspect of wellposedness and so misses the heart of the problem, if Navier-Stokes is viewed as a mathematical model of fluid mechanics, which is clearly emphasized in the official problem presentation:
  • Waves follow our boat as we meander across the lake, and turbulent air currents follow our flight in a modern jet. Mathematicians and physicists believe that an explanation for and the prediction of both the breeze and the turbulence can be found through an understanding of solutions to the Navier-Stokes equations. Although these equations were written down in the 19th Century, our understanding of them remains minimal. The challenge is to make substantial progress toward a mathematical theory which will unlock the secrets hidden in the Navier-Stokes equations.
All of this is presented in detail in this book, supplied as evidence to the Clay problem committee with complementary material listed in the previous post. In particular the book contains a study of the (dual) linearized Navier-Stokes/Euler equations, a topic which for some reason has not attracted the attention of mathematicians despite its fundamental importance from a mathematical point of view. In short, we feel that we have made substantial progress toward a mathematical theory which unlocks the secrets hidden in the Navier-Stokes equations, including the Secret of Flight.

Concerning the view of the problem committee, recall the opening statement of the opening article Euler Equations, Navier-Stokes Equations and Turbulence by Peter Constantin (in this book):
  • In 2004 the mathematical world will mark 120 years since the advent of turbulence theory. In his 1884 paper Reynolds introduced the decomposition of turbulent flow into mean and fluctuation and derived the equations that describe the interaction between them. The Reynolds equations are still a riddle. They are based on the Navier-Stokes equations, which are a still a mystery. The Navier-Stokes equations are a viscous regularization of the Euler equations, which are still an enigma. Turbulence is a riddle wrapped in a mystery inside an enigma.
In other words, total confusion in the committee in charge of problem formulation and evaluation of proposed resolutions. In particular, Fefferman formulates the problem as the questions of existence and smoothness, forgetting wellposedness, claims that his problem was solved by standard pde-theory long ago in the case of two space dimensions, and evidently has in mind a similar resolution in three dimensions by some ingenious new estimate derived by a clever pure mathematician. But wellposedness is essential also in two space dimensions, and so Fefferman exposes the gulf between pure mathematics and the mathematics of fluid mechanics, which is not helpful to science.

Fefferman would probably say that wellposedness is a consequence of smoothness, but this is not necessarily so, since assessment of smoothness may involve stability factors of arbitrary size and so may say nothing about wellposedness. But of course questions like this have to remain in the mist, since the problem committee is not open to any form of discussion.