Thursday 22 May 2014

Mr Clay and a Meaningless Navier-Stokes Prize Problem

Turbulent flow around a landing gear as a non-smooth solution of the 3d incompressible Navier-Stokes equations, by CTLab KTH. Watch also the turbulent flow around an airplane in landing configuration. To argue that these flows are smooth would be a meaningless abuse of mathematical language.

The Clay Mathematics Institute (CMI), founded by Landon T. Clay, celebrated the new Millennium by setting up 7 Prize Problems, each worth $1 million, presented in beautiful words:
  • The Clay Mathematics Institute (CMI) grew out of the longstanding belief of its founder, Mr. Landon T. Clay, in the value of mathematical knowledge and its centrality to human progress, culture, and intellectual life....
  • further the beauty, power and universality of mathematical thinking...deepest, most difficult problems... achievement in mathematics of historical dimension
  • elevate in the consciousness of the general public the fact that, in mathematics, the frontier is still open and abounds in important unsolved problems...
  • Problems have long been regarded as the life of mathematics.  A good problem is one that defies existing methods...whose solution promises a real advance in our knowledge. 
I have long argued that since the Navier-Stokes Prize Problem is formulated without including the fundamental aspects of wellposedness and turbulence, it misses these values and thus is not a good Prize Problem. Here is my argument again:

Consider the incompressible Navier-Stokes equations with viscosity $\nu >0$ in the case of (very) large Reynolds number $Re =\frac{UL}{\nu}$ with $U$ global flow speed and $L$ global length scale. Assume $U=L=1$ and thus $\nu$ (very) small. Such flows are observed physically and computationally to be turbulent, with substantial velocity fluctuations $u\sim \nu^\frac{1}{4}$ on a smallest spatial scale $\epsilon\sim\nu^\frac{3}{4}$ and corresponding substantial viscous dissipation $\sim 1$. For the jumbo jet in the above simulation $Re\approx 10^8$ and the smallest scale is a fraction of a millimeter. The heuristic argument to this effect goes as follows:

A: Breakdown to smaller scales only takes place for sufficiently large local Reynolds number (of size 100 or more), which gives the following relation for the fluctuations $u$ on the smallest scale $\epsilon$:
  • $\frac{u\epsilon}{\nu}\sim 1$.
B: Substantial dissipation on smallest scale $\epsilon$ means 
  • $\nu (\frac{u}{\epsilon})^2\sim 1$.
Combination of A and B gives $u\sim \nu^\frac{1}{4}$ and $\epsilon\sim\nu^\frac{3}{4}$ as stated. This can be viewed as expressing Lipschitz-Hölder continuity with exponent $\frac{1}{3}$, and thus that turbulent solutions for (very) small $\nu$ are non-smooth, because they are $Lip^{\frac{1}{3}}$ on (very) small scales.
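The arithmetic is easy to check; here is a minimal Python sketch with $U=L=1$ so that $\nu = 1/Re$. The value $Re = 10^8$ is quoted above; the 60 m reference length for the jumbo jet is my assumed value for illustration, not taken from the simulation.

```python
# Minimal check of the scaling relations A and B, with U = L = 1 so nu = 1/Re.
# Re = 1e8 as quoted above; the 60 m reference length is an assumed value.
Re = 1.0e8
nu = 1.0 / Re

u = nu ** 0.25     # velocity fluctuation on the smallest scale
eps = nu ** 0.75   # smallest spatial scale

assert abs(u * eps / nu - 1.0) < 1e-9            # A: local Reynolds number ~ 1
assert abs(nu * (u / eps) ** 2 - 1.0) < 1e-9     # B: dissipation nu*(u/eps)^2 ~ 1
assert abs(u / eps ** (1.0 / 3.0) - 1.0) < 1e-9  # Lip^(1/3): u ~ eps^(1/3)

smallest_scale_m = eps * 60.0  # about 6e-5 m, i.e. a fraction of a millimeter
```

With $\nu = 10^{-8}$ this gives $u\sim 10^{-2}$ and $\epsilon\sim 10^{-6}$, consistent with the claim of a smallest scale of a fraction of a millimeter at jumbo-jet size.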

The existence of such turbulent solutions can be proved mathematically by standard methods after regularization on scales much smaller than $\epsilon$, which changes the NS equations but not the solution.

For smooth data, such solutions to regularized NS could formally be proved to be smooth in the sense of the formulation of the NS Prize Problem by Fefferman, but this would conflict with the observation that solutions are non-smooth ($Lip^{\frac{1}{3}}$) on (very) small scales $\sim\nu^\frac{3}{4}$.

The only mathematically and physically reasonable way to resolve this conflict of definitions would be to view turbulent solutions as non-smooth ($Lip^{\frac{1}{3}}$ on very small scales), and thus as weak solutions, with weakly small but strongly large Euler residuals; the aspect of wellposedness would then be of focal interest.

Computational sensitivity (stability) analysis shows that turbulent weak solutions are weakly wellposed in the sense that solution mean-values are not highly sensitive to perturbations of data (while point-values are).

Stability analysis further shows that globally smooth solutions with derivatives of unit size for smooth data of unit size, are unstable and thus are not physical solutions. 

The net result is that the present formulation of the NS Prize Problem is meaningless from both a mathematical and a physical point of view. A meaningful formulation must include wellposedness and turbulence as key issues, with existence settled by standard techniques, and a meaningful resolution would have to offer mathematical evidence of weak wellposedness and features of turbulence.

I have asked Terence Tao, as a world-leading mathematician working on the Prize Problem, about his views on the aspects I have brought up, and will report his response. I have earlier asked Fefferman the same thing many times, but the only response I get is "To me my formulation is meaningful".

What, then, would Mr Clay say if he understood that the NS Prize Problem is not meaningful outside a small group of mathematicians (which may contain just one person), when compared to the mission to which he donated his Prize:
  •  the value of mathematical knowledge and its centrality to human progress, culture, and intellectual life....
  • further the beauty, power and universality of mathematical thinking...deepest, most difficult problems... achievement in mathematics of historical dimension. 
PS It is remarkable (or deplorable) that my repeated requests to start a discussion about the formulation of the Prize Problem are met with complete silence from those in charge of the problem. If my viewpoints are silly, that could be said by those who know better. If they are not silly, maybe even relevant, then it would be silly (or deplorable) not to say anything. In either case, silence is not reasonable, and it is tiresome to keep silent under increasing pressure from the outside world to say something...

Wednesday 21 May 2014

Tao on Clay Navier-Stokes and Turbulence?

Terence Tao is working on the Clay Navier-Stokes Prize Problem and in a recent post considers Kolmogorov's power law for turbulence. A heuristic derivation goes as follows: the smallest spatial scale $\epsilon$ of a fluctuation $u$ of turbulent incompressible flow with small viscosity $\nu >0$ is determined by a local Reynolds number condition
  • $\frac{u\epsilon}{\nu}\sim 1$.
Assuming the smallest scale carries a substantial part of the total dissipation gives 
  • $\nu(\frac{u}{\epsilon})^2\sim 1$.
Combination gives 
  • $u\sim \nu^{\frac{1}{4}}$
  • $\epsilon\sim\nu^{\frac{3}{4}}$ 
suggesting that the turbulent solution is Lipschitz-Hölder continuous with exponent $\frac{1}{3}$.
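As a cross-check, the two relations can also be solved symbolically; here is a small Python sketch using sympy (the choice of tool is mine, not part of the derivation above):

```python
import sympy as sp

# Solve the two heuristic relations symbolically: local Reynolds number ~ 1
# and substantial dissipation on the smallest scale.
nu = sp.symbols('nu', positive=True)
u, eps = sp.symbols('u epsilon', positive=True)

sol = sp.solve([sp.Eq(u * eps / nu, 1),
                sp.Eq(nu * (u / eps) ** 2, 1)], [u, eps], dict=True)[0]

assert sp.simplify(sol[u] - nu ** sp.Rational(1, 4)) == 0    # u ~ nu^(1/4)
assert sp.simplify(sol[eps] - nu ** sp.Rational(3, 4)) == 0  # eps ~ nu^(3/4)
# and u ~ eps^(1/3), the Lip^(1/3) scaling
assert sp.simplify(sol[u] - sol[eps] ** sp.Rational(1, 3)) == 0
```

The positivity assumptions on the symbols select the single physically relevant branch $u=\nu^{1/4}$, $\epsilon=\nu^{3/4}$.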

My question to Tao, posed as a comment on the post, is whether, according to the Clay problem formulation, such a $Lip^\frac{1}{3}$ turbulent solution with smallest scale $\nu^\frac{3}{4}$ is to be viewed as a smooth solution for any small $\nu >0$.

Tuesday 20 May 2014

Answer to My Question about Formulation of Clay Navier-Stokes Prize Problem

Here is the response from the Clay Mathematics Institute to my message that the formulation of the Navier-Stokes Prize Problem does not include the fundamental aspect of wellposedness required for a mathematical model of a physical phenomenon to be meaningful:

Dear Dr Johnson,

Thank you for your interest in the Millennium Prize Problems. Complete details can be found at

As a matter of policy, the Clay Mathematics Institute does not join in discussion of the formulation of the Millennium Prize Problems, nor does it comment on potential solutions.  I am afraid that we have nothing to add to what is said on the CMI's website.

Best wishes,

Anne Pearsall (Mrs)
Administrative Assistant
Office of the President, Clay Mathematics Institute
Andrew Wiles Building
Radcliffe Observatory Quarter
Woodstock Road
Oxford OX2 6GG, UK

OK, so we learn that the Administrative Assistant of the President of the Clay Mathematics Institute, not the President himself,  "is afraid that we have nothing to add" and that the Institute "does not join in discussion of the formulation of the Millennium Prize Problems". 

Yes, this is indeed something to be afraid of, in particular if Mr Clay himself understands that the formulation of the NS problem is unfortunate in the sense of lacking meaning for physics, and that a meaningless problem cannot have a meaningful solution.

The fact that my question about the meaningfulness of the NS Problem in its present formulation is met by compact silence may be interpreted as a silent acknowledgement that the formulation indeed is meaningless, and that it is purposely so in order to reserve the problem for meaningless mathematics and guarantee that, in Newton's words, "little smatterers" are kept out.

Wellposedness vs the Clay Navier-Stokes Problem?

In a sequence of posts I have argued that the omission of wellposedness in the Official Description of the Clay Navier-Stokes Prize Problem by Charles Fefferman makes the problem meaningless. To support this I quote from Wellposedness and Physical Possibility by B. Gyenis:

Well posedness is widely held to be an essential feature of physical theories. Consider the following remarks of Mikhail M. Lavrentiev, Alan Rendall, and Robert M. Wald – leading experts in their respective fields of physics – intended as motivations for the continuous dependence condition:
  • One should remember that the main goal of solving mathematical problems is to describe certain physical processes in mathematical terms. In this case the initial data are obtained experimentally; and since measurements cannot be absolutely precise, the data contain measurement errors. For a mathematical model to describe a real physical process, the problem should be supplemented with some additional requirements reflecting, in a physical sense, the fact that the solution should have only small variations under slight changes of initial data or, to put it conventionally, the stability of the solution under small perturbations in the data. (Lavrentiev et al.; 2003, p. 6) 
  • The condition of continuity is sometimes called Cauchy stability. The reason for including it is as follows. If PDE are to be applied to model phenomena in the natural world it must be remembered that measurements are never exact but always associated with some error. As a consequence it is impossible to know initial data for a problem exactly and so if solutions depend on the initial data in an uncontrollable way the model cannot make useful predictions. Cauchy stability guarantees that this does not happen and thus represents a necessary condition for the application of PDE to the real world. (Rendall; 2008, p. 134) 
  • If a theory can be formulated so that “appropriate initial data” may be specified (possibly subject to constraints) such that the subsequent dynamical evolution of the system is uniquely determined, we say that the theory possesses an initial value formulation. However, even if such a formulation exists, there remain further properties that a physically viable theory should satisfy. First, in an appropriate sense, “small changes” in initial data should produce only correspondingly “small changes” in the solution over any fixed compact region of spacetime. If this property were not satisfied, the theory would lose essentially all predictive power, since initial conditions can be measured only to a finite accuracy. It is generally assumed that the pathological behavior which would result from the failure of this property does not occur in physics. [...] (Wald; 1984, p. 224) 
These remarks express a sentiment widely shared among physicists: wellposedness is a necessary condition for models to describe real physical processes. Lack of wellposedness would be pathological and it “does not occur in physics,” at least not in describing forward time propagation of physical processes.

OK, so leading experts of physics consider wellposedness to be a necessary requirement for a mathematical model of a physical phenomenon to be meaningful. The Navier-Stokes equations are the basic model of fluid mechanics, and as such require some form of wellposedness to be meaningful.

The leading mathematical expert Charles Fefferman formulates the Clay Navier-Stokes problem without reference to wellposedness and thus apparently considers wellposedness not to be a central aspect. But in doing so Fefferman separates the mathematics of the Navier-Stokes equations from physics, which goes against the reason for formulating a Prize Problem about a mathematical model of fundamental importance in physics.

When I ask the Clay Institute and Fefferman to comment on these facts, I get zero response. I think my viewpoints are reasonable and essential, and thus worthy of some form of answer.

Monday 19 May 2014

Wellposedness and Turbulence Not Part of Clay Navier-Stokes Problem!

A central aspect of the mathematical theory of partial differential equations, such as the incompressible Navier-Stokes equations, concerns wellposedness: the sensitivity of solutions to perturbations of data, in suitable quantitative form. Without wellposedness in some form, solutions have no permanence and no meaning, since they can change arbitrarily under virtually no perturbation.

But the Official Description of Clay Navier-Stokes Prize Problem does not include the aspect of wellposedness.

A central aspect of incompressible flow described by the Navier-Stokes equations, is turbulence. 

But the Official Description of the Clay Navier-Stokes Prize Problem does not include any aspect of turbulence.

The Official Description is thus questionable, to say the least, from both a mathematical and a physical point of view, by leaving out what is fundamental.

When I point this out to Charles Fefferman, who has formulated the Official Description of the problem, to Luis Caffarelli, who gives a video presentation thereof, to Peter Constantin, who acts as referee to evaluate proposed solutions, to Terence Tao, who works to solve the problem, and to the President of the Clay Institute, I get no reaction but silence.

This is not reasonable, since the Navier-Stokes equations and the mathematics thereof belong to us all and thus must be open to public discussion, in particular when they have been elevated to a Millennium Prize Problem of importance to humanity.

I sent the following renewed request to the people involved to reveal their cards:

Dear Colleagues:

I try to get a response from you concerning my questioning of the Official Description of Clay Navier-Stokes Prize Problem expressed here

I get no response but compact silence. I don't think this is in the interest of a Clay Prize Problem, which is of concern to a wide mathematical and scientific community and should not be secluded within a very small closed circle.

The omission of both wellposedness and turbulence in the Official Description lacks rationality from both mathematical and physical point of view, and irrationality is against the principles of mathematics and physics.

I hope you can see that my questioning requires a response from you in your respective roles.

Sincerely, Claes Johnson

PS I raised the same question a couple of years ago, and the only response then to my question of how the Prize Problem could be meaningful without including the aspect of wellposedness was Fefferman's short reply: "It is meaningful to me". I think this answer misses the fact that science is not only a private thing.

Sunday 18 May 2014

Crisis in Mathematics Education in France like in Sweden

Mathematics education is in free fall also in France, as reported on Images des Maths (in my translation):
  • The many problems present in mathematics education today are of concern to everybody. Results have been falling since 1990.
  • Many people speak thereof but few do anything about it.
  • The debate is troublesome in the community of mathematicians, and even more so for the general public.
  • The phenomenon has several causes: 
  • One is the training of mathematics teachers. It was better before.
This is the same analysis as in Sweden based on the following postulates:
  1. The training of math teachers was good before, and math education was then working.
  2. Today math education does not work anymore and the reason can only be that the training of math teachers is not as good as before.
  3. Hence what is needed is re-training of math teachers to the old standard.
Billions in taxpayer money are now being spent in Sweden on re-training in collegial form, where teachers without "good" education "lift" each other to the old level of training.

Of course, the result is small and much money and effort is lost. 

What is forgotten, both in France and Sweden, is that in our computer age, math education has a new role to play; the old role is outdated and cannot be resurrected. Very few people in the math community are willing to face this reality, and the result is that the fall of math education continues to new lows each year, in France and Sweden alike.

Saturday 17 May 2014

BodySoul Mathematical Simulation Technology Translated to Chinese

Today I received the following letter from Zhimin Zhang (with copy to Qun Lin as a leading Chinese applied mathematician):

Dear Professor Johnson,

First, I would like to apologize for taking almost 4 years to get back to you about your book. The reason was that Professor Lin wanted to understand and "digest" your book more before talking with you.

To make a long story short, he likes your book very much and has organized a group of Ph.D. students to translate your book into Chinese. It is a book with more than 1600 pages and that is why it takes almost 4 years to complete. Now Professor Lin wants me to ask for your permission to publish the Chinese translation of your book. In addition, if you have updated your book, we would like to have the new version and update our translation.

We look forward to your favorable response, Zhimin.

I replied that I was glad to hear this and suggested to set up a formal agreement about the use of BodySoul Mathematical Simulation Technology in China. I will report what comes out of this.

I recall that the book is censored at KTH, and so apparently Sweden has stricter censorship than China.

The number of fresh engineering students each year is 1,000 at KTH, while it is 10,000,000 in China.

Almost Dictatorial Consensus in Germany

An internal memo On the situation in the field of meteorology-climatology of the German Meteorological Society reveals a growing and widespread worry over the suppression of scientific views under almost dictatorial consensus:
  • ….how certain developments are becoming cemented into their scientific fields (foremost climatology) which from a scientific point of view simply cannot be accepted and do not comply to their professional ethics.
  • In meteorology-climatology every one includes a highly visible army of organized, little known persons; in Germany this is almost the entire public! 
  • The changes that have taken place in science as a result have in our opinion (and that of others) led to very negative impacts on the quality standards of science. 
  • For example expressed and disseminated meteorological flaws can hardly be contained and cannot be corrected publicly at all. Yet our meteorological scientists do not speak up.
  • And it is hardly perceived that behind these developments – admittedly – there is also a political objective for the transformation of society, whether one wants it or not. Currently global sustainable change is the same thing.
  • Meteorology-climatology is playing a decisive role in this political action. The – alleged – CO2 consensus here is serving as a lever within the group that consists of known colleagues who deal with climate, but also consists of a large number of climate bureaucrats coming from every imaginable social field. Together both groups consensually have introduced a binding dogma into this science (which is something that is totally alien to the notion of science).
  • This is not the first time such a thing has happened in the history of science. Here although this dogma came about through democratic paths (through consensus vote?), in the end it is almost dictatorial. 
  • Doubting the dogma is de facto forbidden and is punished? In climatology the doubt is about datasets or results taken over from hardly verifiable model simulations from other parties. Until recently this kind of science was considered conquered – thanks to our much celebrated liberty/democratic foundation!
  • The constant claim of consensus among so-called climatologists, who relentlessly claim man-made climate change has been established, attempts to impose by authority an end to the debate on fundamental questions. 
  • Thus a large number of scientist colleagues end up being ostracized, and thus could lead to the prompting of actions that would have considerable burdens on the well-intended society. Such a regulation and the resulting incalculable consequence it would have for all people would in our view – and that of many meteorological specialists we know - be irresponsible with respect to our real level of knowledge in this field.
  • We must desire in general, and also in our scientific field, a return to an international scientific practice that is free of pre-conceptions and cemented biased opinions. 
  • This must include the freedom of presenting (naturally well-founded) scientific results, even when these do not correspond to the mainstream (e.g. the IPCC requirements).
The bullying of Lennart Bengtsson is a recent example of violation of scientific/democratic principles  in the name of "almost dictatorial consensus". Another is KTH-gate. Where is Western society heading?

Friday 16 May 2014

Towards Computational Solution of Clay Navier-Stokes Problem 3

The formulation of the Clay Navier-Stokes Prize problem is unfortunate, or more precisely both mathematically and physically meaningless, because the following two completely fundamental aspects are not included:
  1. wellposedness
  2. turbulence.
To see the effect, consider exterior flow with a slip boundary condition, which allows a unique stationary smooth near-solution in the form of potential flow, with a Navier-Stokes residual that scales with the viscosity $\epsilon$. Smooth potential flow thus offers a solution to the NS equations with a vanishingly small residual under vanishingly small viscosity. But potential flow is not stable, since under small perturbation it develops into a completely different turbulent solution. In other words, potential flow is not wellposed in any sense and thus is not a physical solution.
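To make the smooth near-solution concrete, here is a small Python sketch of the classical textbook case of potential flow past a unit cylinder (my assumed stand-in for generic exterior flow): the slip velocity and pressure on the boundary are analytic, and the fore-aft symmetric pressure gives zero drag (d'Alembert's paradox), which is exactly what real turbulent flow does not do.

```python
import numpy as np

# Potential flow past a unit circular cylinder with free-stream speed 1:
# on the boundary r = 1 the flow slips with tangential speed q = 2 sin(theta)
# and zero normal speed, a perfectly smooth stationary Euler solution.
n = 1000
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
q = 2.0 * np.sin(theta)

# Bernoulli: pressure coefficient cp = 1 - q^2, symmetric fore and aft
cp = 1.0 - q ** 2

# the pressure-drag integral over the surface vanishes (d'Alembert)
drag = np.sum(cp * np.cos(theta)) * (2 * np.pi / n)  # numerically zero
```

The suction peak $c_p=-3$ at the shoulders and the stagnation value $c_p=1$ are exact; the vanishing drag illustrates why this smooth solution cannot be the physical one.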

The present problem formulation, without 1 and 2, does not allow unphysical smooth potential flow to be distinguished from physical turbulent flow. The result is that the Clay NS problem has no meaningful solution and does not serve the purpose of a Prize Problem.

Note that the Clay NS problem is introduced with the following description of the essence of the problem and its importance to humanity:
  • Waves follow our boat as we meander across the lake, and turbulent air currents follow our flight in a modern jet. 
  • Mathematicians and physicists believe that an explanation for and the prediction of both the breeze and the turbulence can be found through an understanding of solutions to the Navier-Stokes equations. 
  • Although these equations were written down in the 19th Century, our understanding of them remains minimal. 
  • The challenge is to make substantial progress toward a mathematical theory which will unlock the secrets hidden in the Navier-Stokes equations.
But turbulence is not an issue in the official formulation. The secret to unlock is turbulence, yet it is not part of the problem formulation. Something is weird here. I have pointed this out to the President of the Clay Mathematics Institute and will report the reaction. Here is the letter:

Clay Mathematics Institute

I want to convey the information that the formulation of the Clay Navier-Stokes problem is incorrect both mathematically and physically, because the fundamental aspects of (i) wellposedness and (ii) turbulence, are not included, as exposed in detail in the following sequence of blog posts:

The result is that the problem cannot be given a meaningful solution and thus does not serve well as a Prize problem. Evidence is given by the fact that no progress towards a solution has been made.

I have tried to engage Charles Fefferman, who has formulated the problem, Peter Constantin, who acts as a referee, and Terence Tao, who is working on the problem, into a discussion, but I get no response.

I hope this way to stimulate discussion, which I think would be more constructive than no discussion.

Sincerely, Claes Johnson 

Towards Computational Solution of Clay Navier-Stokes Problem 2

This is a continuation of a previous post. The basic energy estimate, which is easily proved analytically by multiplying the momentum equation by the velocity $u_\epsilon$ and integrating, reads for $T>0$ with $Q =\Omega\times (0,T)$:
  • $\frac{1}{2}\int_\Omega\vert u_\epsilon (x,T)\vert^2\, dx +\int_{Q}\epsilon\vert\nabla u_\epsilon (x,t)\vert^2\, dxdt =\frac{1}{2}\int_\Omega\vert u^0(x)\vert^2\, dx$
or in short notation with obvious meaning:
  • $U(T) + D_\epsilon (U) = U(0)$,
which expresses a balance of the kinetic energy $U(T)$ at time $T$ and the dissipation $D_\epsilon (U)$ over the time interval $(0,T)$, summing up to the initial kinetic energy $U(0)$.
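The balance can be observed in the simplest nontrivial setting. The following Python sketch is my own 1d stand-in, using the periodic viscous Burgers equation instead of Navier-Stokes: multiplying by the velocity and integrating gives the same kind of balance, since the convective term is energy-neutral, and here it is discretized in skew-symmetric form so that the discrete balance mirrors the estimate. All parameter values are assumed demo values.

```python
import numpy as np

# 1d periodic viscous Burgers equation u_t + u u_x = eps u_xx as a stand-in:
# the balance U(T) + D_eps(U) = U(0) holds with U(t) = (1/2)||u(t)||^2 and
# D_eps(U) = int_0^T eps ||u_x||^2 dt.
N = 512
dx = 2 * np.pi / N
x = np.arange(N) * dx
eps, dt, T = 0.1, 1.0e-4, 1.0

def D0(v):                     # central difference, periodic
    return (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)

u = np.sin(x)                  # smooth initial data of unit size
U0 = 0.5 * np.sum(u ** 2) * dx
D = 0.0                        # accumulated dissipation
for _ in range(int(round(T / dt))):
    ux = D0(u)
    D += dt * eps * np.sum(ux ** 2) * dx
    conv = (u * ux + D0(u ** 2)) / 3.0   # skew-symmetric form of u u_x,
                                         # exactly energy-neutral in space
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2
    u = u + dt * (-conv + eps * uxx)
UT = 0.5 * np.sum(u ** 2) * dx

alpha = D / U0                 # dissipated fraction of the initial energy
balance = (UT + D - U0) / U0   # balance error: time discretization only
```

Running this, the balance error stays at the level of the time discretization, while the dissipated fraction $\alpha$ comes out well above zero: substantial dissipation even for smooth data, the 1d analogue of the turbulent dissipation discussed below.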

Computations with small $\epsilon$ (compared to the data, i.e. $\Omega$ and $U(0)$) produce turbulent solutions characterized by 
  •  $D_\epsilon (U) =\alpha U(0)$ where $\alpha$ is not small,
that is, solutions with substantial (turbulent) dissipation. For turbulent solutions $\vert \nabla u\vert$ is large, typically scaling as $\epsilon^{-\frac{1}{2}}$, even if the initial data are smooth, which can be viewed as an expression of non-smoothness.

The basic energy estimate can thus be used to signify non-smoothness through substantial turbulent dissipation. The Clay problem can thereby be reduced to the question of proving that the dissipation term in the basic energy estimate is substantial.

Evidence to this effect is given by computation. Analytical evidence can be given by the following argument: smooth laminar solutions have small dissipation, but they are all unstable. If the dissipation remained small, it would mean that an unstable solution remained smooth and unstable, which is not possible under perturbation.

The dissipation must therefore be substantial in the basic energy estimate, and only a non-smooth solution can exist (and does exist, by computation). An answer to the Clay problem may thus be possible along the following lines, assuming the viscosity is small and the data are smooth:
  1. Solutions exist for all time and do not cease to exist by blow-up.
  2. Solutions become non-smooth (turbulent) in finite time. 
  3. Solutions cannot stay smooth for all time, because any smooth solution is unstable. 
  4. Solutions are weakly well-posed in the sense that solution mean-values are stable to perturbations, because of a cancellation effect in turbulent solutions which is not present for smooth solutions.  
The group of mathematicians in charge of the problem (Fefferman, Constantin and Tao) do not answer my repeated requests to open a discussion about the formulation of the problem and possible approaches to a solution. This is not helpful to progress. Mathematicians apparently want a heaven of their own, where they can explain phenomena of no scientific relevance, but this is a dangerous strategy in the long run, because without a connection to science, funding may cease.

Thursday 15 May 2014

Lennart Bengtsson vs Royal Swedish Academy on Swedish Climate Science and Politics

Lennart Bengtsson indicates that the 2009 statement by the Royal Swedish Academy of Sciences on the Scientific Basis of Climate Change, authored mainly by himself as leading Swedish climate scientist and expressing (cautious) support of the CO2 alarmism propagated by the IPCC, is due for a revision.

Since 2009 LB has turned from supporter to skeptic of IPCC CO2 alarmism, which he has made very clear in media outside Sweden. The question now is whether LB will participate in forming the revision or not.

If the standpoint of LB as skeptic dominates the revision, which would be reasonable since he is the leading climate scientist in the Academy, then the new statement will express skepticism towards CO2 alarmism and there will be no scientific foundation for current Swedish climate politics.

If the standpoint of LB turns out to be incompatible with that of the Academy, then the revision will be formed without the participation of the leading climate scientist in Sweden, and will then carry no weight, and the result will be the same.

It seems that interesting times are awaiting the Academy and Swedish climate science and politics.

For an account of the related GWPF story see Climate Depot.

LB's recent article pointing to small climate sensitivity has been rejected for publication on political grounds, since it questions the dogma of climate alarmism. See the article in The Times and Roy Spencer.

Towards Computational Solution of Clay Navier-Stokes Problem 1

The Clay Navier-Stokes problem as formulated by Fefferman asks for a mathematical proof of (i) existence, for smooth initial data, of smooth solutions for all time to the incompressible Navier-Stokes equations, or (ii) blow-up of a solution in finite time. No progress towards an answer has been made since the problem was announced in 2000. It appears that the available tools of mathematical analysis by pen and paper are too crude to give an answer.

Let me here sketch (see also earlier posts) an approach based on digital computation which may give an answer. We then consider the incompressible Navier-Stokes equations in velocity $u=u_\epsilon (t,x)$ and pressure $p=p_\epsilon (t,x)$:
  • $\frac{\partial u}{\partial t}+u\cdot\nabla u +\nabla p =\epsilon\Delta u$ 
  • $\nabla\cdot u =0$
for time $t > 0$ and $x\in\Omega$, with $\Omega$ a three-dimensional domain, subject to smooth initial data $u_\epsilon (0,x)=u^0(x)$ and slip or no-slip boundary conditions. Here $\epsilon > 0$ is a constant viscosity, which we assume to be small compared to the data ($\Omega$ and $u^0$).

Computed solutions show the following dependence on $\epsilon$ under constant data:
  1. $\Vert\epsilon^{\frac{1}{2}}\nabla u_\epsilon\Vert_{L_2(L_2)} \sim 1$
  2. $\Vert\epsilon\Delta u_\epsilon\Vert_{L_2(H^{-1})}\sim \epsilon^{\frac{1}{2}}$
  3. $\Vert\epsilon\Delta u_\epsilon\Vert_{L_2(L_2)}\sim \epsilon^{-\frac{1}{2}}$.
Here 1 attains the upper bound of the standard energy estimate, which can be proved analytically, and shows that $\nabla u_\epsilon$ becomes large with decreasing $\epsilon$, as a quantitative expression of non-smoothness, with 2 a variant thereof. Also 3 expresses non-smoothness in quantitative form, with $\epsilon\Delta u_\epsilon$ being small in a weak norm but large in a strong norm.

Computation is thus observed to produce solutions to the Navier-Stokes equations with an increasing degree of non-smoothness as $\epsilon$ tends to zero, which can be seen as an answer to the Clay question in the direction of (ii), but not quite, since the solution does not cease to exist by "blow-up" but continues as a non-smooth weak solution.

Computed solutions satisfying 3 are turbulent. Mean-value outputs of turbulent solutions show small variation as the viscosity becomes small, in particular with slip. This can be seen as expressing weak wellposedness under variation of small viscosity, which may allow the conclusion to be carried from computationally resolvable small viscosity to vanishingly small viscosity beyond computation.

We may compare with the attempt by Terence Tao to construct a non-smooth solution by pen and paper in a thought experiment, where the computation is left to the reader of a dense 70-page "computer program" expressed in analytical mathematics. We instead let the computer compute the solution following a standard (freely accessible) computer program, which allows the reader to do the same, inspect the solution, verify 3, and thus get an answer to the Clay problem.

Wednesday, May 14, 2014

Shocking Message from Lennart Bengtsson Muted by Climate Alarmists

Die Klimazwiebel publishes the following shocking letter from Lennart Bengtsson, who was forced to resign from the advisory board of GWPF under group pressure from politically correct CO2 alarmists:
  • I have been put under such an enormous group pressure in recent days from all over the world that has become virtually unbearable to me. If this is going to continue I will be unable to conduct my normal work and will even start to worry about my health and safety. I see therefore no other way out therefore than resigning from GWPF. I had not expecting such an enormous world-wide pressure put at me from a community that I have been close to all my active life. Colleagues are withdrawing their support, other colleagues are withdrawing from joint authorship etc. I see no limit and end to what will happen. It is a situation that reminds me about the time of McCarthy. I would never have expecting anything similar in such an original peaceful community as meteorology. Apparently it has been transformed in recent years.
  • Under these situation I will be unable to contribute positively to the work of GWPF and consequently therefore I believe it is the best for me to reverse my decision to join its Board at the earliest possible time.
I have recently communicated with LB and expressed my great admiration for his courageous questioning of CO2 alarmism in the media, as lacking scientific justification in his view. He then said that he would continue to fight for scientific truth, following his responsibility as a leading scientist. But LB has now been muted by naked power, and the order of climate alarmism is re-established. What a terribly sad story this is! For Sweden, Science and the World!

See also Climate Depot and Tallbloke and Bishophill and JoNova and Climate Audit and the reaction from David Henderson, Chairman, GWPF’s Academic Advisory Council:
  • Your resignation is not only a sad event for us in the Foundation: it is also a matter of profound and much wider concern. The reactions that you speak of, and which have forced you to reconsider the decision to join us, reveal a degree of intolerance, and a rejection of the principle of open scientific inquiry, which are truly shocking. They are evidence of a situation which the Global Warming Policy Foundation was created to remedy.
  • In your recent published interview with Marcel Crok, you said that ‘if I cannot stand my own opinions, life will become completely unbearable’. All of us on the Council will feel deep sympathy with you in an ordeal which you should never have had to endure.
  • With great regret, and all good wishes for the future.
No wonder that the reaction is so strong: big values are at stake. The whole alarmist ship is sinking and desperation is spreading… One day not too far away, LB will be glorified as a scientist ready to follow his conviction no matter what the cost may be, now only temporarily overpowered…

PS1 As noted by Lubos, the event may drive LB into a true skeptic position, rather than back to alarmism, a position now taken by many scientists and thus, if not maximally comfortable, probably livable.

PS2 More on the Swedish blogs Klimatupplysningen and Antropocene, and in MSM outside Sweden.

Towards a Solution of the Clay Navier-Stokes Problem 2

The Clay Millennium Navier-Stokes problem concerns properties of solutions of the incompressible Navier-Stokes equations as the basic model of fluid mechanics of fundamental importance in both science and mathematics.

The Official Problem Description by Charles Fefferman poses the following alternatives:
  1. Existence of smooth solutions for all time from smooth initial data?
  2. Loss of existence ("break-down" or "blow-up") of a solution from smooth initial data?
No progress towards a solution has been made since the formulation in 2000. Existence of smooth solutions for all time seems impossible, since the viscosity term is not strong enough. All efforts to construct a solution with blow-up have failed, because the viscosity term is too strong. No answer thus seems possible, and a scientific deadlock has been reached.

Over the years I have, without success, tried to convey the message that the reason for the deadlock is that Fefferman's problem formulation is both mathematically and physically meaningless, because the fundamental aspect of (Hadamard) well-posedness, or stability of solutions to perturbations, is not included.

Including well-posedness leads to the following possible answer, which is neither 1 nor 2, and which deals with the case of small viscosity (compared to initial data):
  • Turbulent solutions always develop in finite time from smooth initial data.
  • A turbulent (non-smooth) solution is characterized by having a Navier-Stokes residual which is small in a weak $H^{-1}$-norm and large in a strong $H^1$-norm.
  • Turbulent solutions are weakly wellposed by having stable mean-value outputs. 
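The second point can be made concrete with a hypothetical model residual: an oscillation of amplitude $k^{-1/2}$ and wavelength $\sim 1/k$, whose Sobolev norms on the circle follow directly from its Fourier coefficients (the function below is an illustration, not a computed Navier-Stokes residual):

```python
import numpy as np

def sobolev_norms(k, N=4096):
    """H^{-1}, L^2 and H^1 norms on [0, 2*pi) of f(x) = k**-0.5 * sin(k*x),
    from Fourier coefficients: ||f||_{H^s}^2 = 2*pi * sum |c_n|^2 |n|^(2s)."""
    x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    f = k**-0.5 * np.sin(k * x)
    c = np.fft.fft(f) / N                      # Fourier coefficients c_n
    n = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers
    nz = n != 0                                # drop the mean mode
    def hs(s):
        return np.sqrt(2.0 * np.pi *
                       np.sum(np.abs(c[nz])**2 * np.abs(n[nz])**(2 * s)))
    return hs(-1), hs(0), hs(1)

for k in (16, 64, 256):
    hm1, l2, h1 = sobolev_norms(k)
    # the H^{-1} norm decays like k**-1.5 while the H^1 norm grows like k**0.5
    print(k, hm1, l2, h1)
```

As $k$ grows, the $H^{-1}$ norm decays like $k^{-3/2}$ while the $H^1$ norm grows like $k^{1/2}$: small in the weak norm, large in the strong norm.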
I have tried to get some comment from Terence Tao, Charles Fefferman and Peter Constantin, who are in charge of the problem formulation and serve as referees to evaluate proposed solutions. The response I get is that the problem formulation without well-posedness by Fefferman is fine as a mathematical problem, even if it does not make sense from a physics point of view, and that it may well be that a solution will never be reached, but if so, let it be.

But why not include well-posedness and make the Clay Navier-Stokes problem meaningful from a physics point of view, and thus meaningful as a challenge to the development of mathematics? Why not open to possibility instead of impossibility? Why spend major efforts on a meaningless question without an answer?

I pose this question to Fefferman, Constantin and Tao, with the hope of getting some response, to be reported.

PS1 We may compare with the lack of global warming since 2000: no progress of the temperature whatsoever. With this evidence one may ask if there is some fundamental flaw in the idea of global warming.

PS2 Terence Tao sets out to "construct" a self-replicating solution of the Navier-Stokes equations which "blows up", in a 70-page pen-and-paper exercise, which appears to be impossible. We instead let the computer construct solutions, which turns out to be possible, and we observe that the constructed solutions become turbulent and thus show a form of blow-up.

PS3 It does not seem that Fefferman et al are interested in communicating outside their own group, and so they respond by silence, whatever that means. Is this a sign of healthy, strong science, which Mr. Clay presumably would prefer to support? The consequences are far-reaching: if the Clay problem formulation is wrong, then something bigger is wrong.

Tuesday, May 13, 2014

Parameter-Free Fluid Models: How to Make Einstein Happy

In recent work (here and here) we have shown that mean-value outputs of computed turbulent solutions of the incompressible Navier-Stokes equations with small viscosity vary little with the absolute value of the viscosity. This makes this mathematical model an example of Einstein's ideal of a physics model without parameters or coefficients requiring experimental determination.

For example, we show in New Theory of Flight that the lift and drag of an airplane can be accurately computed using this model, thus ab initio without input of any experimental measurement. This is very remarkable and would have made Einstein very happy.

Augmenting this model to include self-gravitation as in the blog post Equivalence of Inertial and Heavy Mass, gives a parameter-free cosmological model, by choosing the unit of mass so that
  • $\Delta\phi (x,t) = \rho (x,t)$, 
where $\phi (x,t)$ is the gravitational potential and $\rho (x,t)$ mass density for a given unit of length specified by the $x$ coordinate, and the unit of time $t$ so that Newton's 2nd law takes the form
  • $\ddot x = - \nabla\phi (x,t)$,
connecting particle acceleration $\ddot x(t)$ to the potential gradient $\nabla\phi (x,t)$, where $x(t)$ is the trajectory of a particle of unit mass.

Such a  model can describe galactic scales after galaxy and star formation from interstellar dust under compression, as an incompressible fluid of small viscosity under self-gravitation, without any parameter to determine experimentally. This could have made Einstein even happier.
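A minimal sketch of the normalized dynamics $\ddot x = -\nabla\phi$, assuming a fixed point-mass potential $\phi = -1/\vert x\vert$ for a single test particle (an illustrative special case with all constants set to one, not the self-gravitating fluid model itself) and a standard leapfrog integrator:

```python
import numpy as np

def orbit(x0, v0, dt=1e-3, steps=20000):
    """Leapfrog (kick-drift-kick) integration of x'' = -grad(phi) with
    phi = -1/|x|, i.e. acceleration a(x) = -x/|x|^3, for a unit-mass particle."""
    def acc(x):
        return -x / np.linalg.norm(x)**3
    energy = lambda x, v: 0.5 * v @ v - 1.0 / np.linalg.norm(x)
    x, v = np.array(x0, float), np.array(v0, float)
    E0 = energy(x, v)
    v = v + 0.5 * dt * acc(x)          # initial half kick
    for _ in range(steps):
        x = x + dt * v                 # drift
        v = v + dt * acc(x)            # kick
    v = v - 0.5 * dt * acc(x)          # synchronize velocity with position
    return x, v, E0, energy(x, v)

# circular orbit of radius 1 and speed 1 (period 2*pi), followed for ~3 periods
x, v, E0, E1 = orbit([1.0, 0.0], [0.0, 1.0])
print(abs(E1 - E0))                    # energy drift stays tiny for leapfrog
```

The symplectic leapfrog scheme keeps the total energy $\frac12\vert\dot x\vert^2+\phi$ nearly constant over many periods, which is why it is a standard choice for such gravitational test integrations.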

Towards Solution of the Clay Navier-Stokes Problem?

              Watch movie of turbulent flow as solution of the Navier-Stokes equations. 

Quanta Magazine reports in A Fluid New Path in Grand Math Challenge (Febr 24):
  • In a paper posted online on February 3, Terence Tao of the University of California, Los Angeles, a winner of the Fields Medal, mathematics’ highest honor, offers a possible way to break the impasse. 
  • He has shown that in an alternative abstract universe closely related to the one described by the Navier-Stokes equations, it is possible for a body of fluid to form a sort of computer, which can build a self-replicating fluid robot that, like the Cat in the Hat, keeps transferring its energy to smaller and smaller copies of itself until the fluid “blows up.” As strange as it sounds, it may be possible, Tao proposes, to construct the same kind of self-replicator in the case of the true Navier-Stokes equations. 
  • If so, this fluid computer would settle a question that the Clay Mathematics Institute in 2000 dubbed one of the seven most important problems in modern mathematics, and for which it offered a million-dollar prize. 
  • Is a fluid governed by the Navier-Stokes equations guaranteed to flow smoothly for all time, the problem asks, or could it eventually hit a “blowup” in which something physically impossible happens, such as a non-zero amount of energy concentrated into a single point in space?
  • Tao’s proposal is “a tall order,” said Charles Fefferman of Princeton University.
We read that the Grand Math Challenge of the Clay Navier-Stokes Problem is taken on by one of the world's sharpest mathematicians with the plan to construct a solution with smooth initial data which "blows up" in finite time, thus giving a negative answer to the Clay problem. 

Tao thus seeks to construct a "fluid computer" capable of answering a mathematical question concerning the Navier-Stokes equations.

Let us compare with our own approach to the Clay problem based on using a digital computer to solve the Navier-Stokes equations computationally, which offers the following answer for the case of small viscosity as presented in New Theory of Flight (see also blogpost):
  1. Computations produce from smooth initial data functions with Navier-Stokes residuals small in $H^{-1}$ and large in $H^1$; these non-smooth solutions are found to have stable mean-value outputs and thus represent physical turbulent states.
  2. Smooth solutions are unstable and thus do not represent physical states.       
In this analysis the aspect of stability is fundamental, identified by Hadamard as well-posedness. Unfortunately, the Clay problem formulation does not include the aspect of well-posedness and thus is meaningless. Including well-posedness gives a new Clay problem, which can be answered in a meaningful way, and this is what we seek to do.

Computations thus produce non-smooth approximate solutions which are well-posed in a mean-value sense and thus are physical solutions, while smooth solutions prove to be unstable and thus are not physical solutions. Our answer differs from Tao's in that computed solutions initiated from smooth initial data do not "blow up" but instead turn turbulent, with residuals becoming large in $H^1$ but with stable mean-value outputs.

I have asked Tao for a comment on the message of this post and will report.

More on the Clay problem here and here.

PS1 The fact that there has been no advance towards a solution of the Clay problem as formulated by Charles Fefferman in 2000, without reference to well-posedness, can be seen as evidence that the Clay question is ill-posed and thus cannot be answered. The problem thus requires reformulation, but the mathematicians in charge of the problem formulation do not seem to be open to such a thing.

Hadamard's 1933 paper on the necessity of well-posedness seems to be forgotten. Strange. Very strange. The Navier-Stokes solution does not "blow up" but becomes non-smooth (turbulent), and this alternative is not contained in the present formulation.

PS2 Quanta reports:
  • The real ocean doesn’t spontaneously blow up, of course, and perhaps for that reason, most mathematicians have concentrated their energy on trying to prove that the solutions to the Navier-Stokes equations remain smooth and well-behaved forever, a property called global regularity. 
  • Purported proofs of global regularity surface every few months, but so far each one has had a fatal flaw. (The most recent attempt to garner serious attention, by Mukhtarbay Otelbaev of the Eurasian National University in Astana, Kazakhstan, is still under review, but mathematicians have already uncovered significant problems with the proof, which Otelbaev is trying to solve.)
Amazing: it is observed that the ocean does not blow up spontaneously, but ocean motion is partly turbulent, thus not smooth and well-behaved, and thus falls outside the allowed categories of the Clay problem, as either staying smooth or blowing up. No wonder that the problem as formulated has no solution. See also the following post.

Monday, May 12, 2014

Modern Physics as a Mess

Alexander Unzicker presents in The Higgs Fake a relentless criticism of modern physics, also presented on Youtube here and here. Take a look, and think for yourself!

Correspondence with Lennart Bengtsson (Who Promises to Fight for Science)

Letter from me to Lennart Bengtsson, May 11:

Hello Lennart,
As you have probably noticed, I have in repeated comments on your contributions in the media
urged you to work for a rewriting of KVA's climate statement, from politically correct support of the IPCC into a correct scientific analysis of the dogma of CO2 alarmism.

You have not replied to my comments, but I hope that you will reply to this direct mail and say how you view KVA's statement and whether you consider that it must now be rewritten. KVA's statement forms the basis of Swedish climate policy, and as an author and leading scientist you carry a great responsibility.

Best regards, Claes

Reply from LB:

I can only inform you that I have just been seriously criticized by an academy colleague for KVA's climate statement being watered down, with me to blame for this. At the same time you accuse me of having written an alarmist statement. These can hardly both be true? I suggest that you contact professor Olle Häggström, so that the two of you can arrive at a new formulation that you both can support. In any case KVA will rewrite the statement, but this will hardly be with my participation.

My reply to LB:

Thank you for your reply, Lennart. Why will you not take part in writing KVA's new climate statement, which you say is under way? Have you been pushed aside, or do you voluntarily choose to hand over the responsibility to people who know less than you? If so, how do you carry your responsibility as a scientist?

Reply from LB:

Together with several members of the academy's 5th class, I was responsible for the statement that was completed in September 2009 and then approved by KVA with two reservations, as far as I recall. Olle Häggström was not one of them as far as I know; he was on the whole positive. He was consulted at some point during the work. If the academy now chooses to write a new statement, I am perhaps not the right person to do so, after all the personal attacks I have been subjected to. KVA may want a less controversial person than me to lead this, one who is also more in line with the view preferred by the political side. I can understand this, but I do not share such a view. My responsibility as a scientist is a personal matter, and I will of course keep it. That I am therefore, as now, subjected, and surely will continue to be subjected, to all kinds of criticism from both the "left" and the "right" is something I naturally have to live with. Whether the criticism is justified or not is hardly for me to decide. Here it is my impression that you share the critical view with Olle Häggström.

My reply to LB:

I consider that you have a responsibility, wider than merely personal, for ensuring that KVA's new statement will be based on science and not politics. You have courageously voiced your conviction as a scientist in the media, and that is admirable. I hope that you will not now give in because of malicious attacks, but will do what you can so that KVA's new statement becomes one worthy of a scientific academy and not a new soup of political correctness. Can Sweden count on this?

PS Please do not lump me together with OH, who likes me as little as he likes you. I essentially share your view, as being a scientifically based position as far as science has now reached. I only hope that you continue to assert your insights. My only criticism would arise if you were to refrain from doing so.

Reply from LB:

Thank you for your encouraging words. I promise to fight back...

Why Feynman Said: Nobody Understands Quantum Mechanics

We have always had a great deal of difficulty understanding the world view that quantum mechanics represents. At least I do, because I'm an old enough man that I haven't got to the point that this stuff is obvious to me. Okay, I still get nervous with it.... I cannot define the real problem, therefore I suspect there's no real problem, but I'm not sure there's no real problem.

The (trivial) commutator relation  
  • $xp - px = ih$,
where $x$ is the position (operator) and $p=\frac{h}{i}\frac{\partial}{\partial x}$ is the momentum (operator), is supposed to play a fundamental role in quantum mechanics,  in particular as the origin of Heisenberg's Uncertainty Principle:
  • $\sigma_x\sigma_p\ge \frac{h}{2}$,
where $\sigma_x$ is the standard deviation in measurements of position $x$, and $\sigma_p$ that of momentum. 
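For a Gaussian wave-function the product attains the lower bound $\frac{h}{2}$ exactly, which can be checked numerically; in the sketch below $h$ is set to 1 and the grid parameters are illustrative choices:

```python
import numpy as np

# Gaussian wavepacket psi(x) ~ exp(-x^2/(4*s^2)) with h (hbar) set to 1:
# analytically sigma_x = s and sigma_p = 1/(2*s), so sigma_x*sigma_p = 1/2.
N, L, s = 4096, 80.0, 1.7
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4.0 * s**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)              # normalize in x

sigma_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)    # <x> = 0 by symmetry

p = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)                # momentum grid, h = 1
dp = 2.0 * np.pi / L
phi = np.fft.fft(psi)
phi /= np.sqrt(np.sum(np.abs(phi)**2) * dp)              # normalize in p
sigma_p = np.sqrt(np.sum(p**2 * np.abs(phi)**2) * dp)

print(sigma_x * sigma_p)   # -> 0.5 up to discretization error
```

The momentum density is obtained from the Fourier transform of $\psi$, and the product of standard deviations lands on the bound $\frac12$ for this and only this (Gaussian) shape of wave-function.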

We see that both the commutator relation and Heisenberg's Uncertainty Principle concern the product of position and momentum. But such a product lacks physical meaning. Momentum $p$ has physical meaning and so has position $x$, but their product has no physical meaning.  

Momentum multiplied by velocity has a physical meaning as (twice the) kinetic energy, but momentum multiplied by position does not. Force multiplied by velocity has a meaning as power, the rate of work.

Quantum mechanics is however obsessed with the product of momentum and position, with the message that because of the commutator relation they cannot both be determined at the same time and spot. The message is that this makes quantum mechanics fundamentally different from classical mechanics where supposedly momentum and position can both be determined.

There are two approaches to physics:
  1. Make it as simple and understandable as possible. 
  2. Make it as complicated and mysterious as possible.
Quantum mechanics has developed according to 2 as evidenced by Richard Feynman:
  • I think I can safely say that nobody understands quantum mechanics.
One reason is that the product of momentum and position is given a fundamental role, in contradiction to the fact that it has no physical meaning.

Sunday, May 11, 2014

How to Win Any Debate: Claim You Understand Entropy!

John von Neumann (1903-1957) was a very clever mathematician who offered the following advice:
  • No one really knows what entropy really is, so in a debate you will always have the advantage (by pretending that you know).
This is still true, and causes a lot of confusion. If you want to improve your understanding, you could consult Computational Thermodynamics, which presents the 2nd Law of Thermodynamics as resulting from the Euler equations for a compressible gas subject to finite precision computation, in the following integrated form, with the dot signifying time differentiation (see the previous post):
  • $\dot K+\dot P = W-D$
  • $\dot E = -W + D$,  
where $K$ is kinetic energy, $P$ potential energy, $W$ work, $E$ heat energy and $D\ge 0$ is turbulent dissipation, with $W > 0$ under expansion and $W < 0$ under compression. The sign of $D$ sets the direction of time, with energy always transferred from $K+P$ to $E$ by turbulent dissipation.
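The bookkeeping of these two equations can be checked in a toy time integration with prescribed illustrative rates $W(t)$ and $D(t)\ge 0$ (not taken from any flow computation): the total $K+P+E$ is conserved to rounding error, while $D$ transfers energy one way into $E$:

```python
import math

# toy bookkeeping for d(K+P)/dt = W - D and dE/dt = -W + D,
# with illustrative prescribed rates W(t) and D(t) >= 0 (not from a flow)
def integrate(T=10.0, n=100000, KP0=1.0, E0=0.0):
    dt = T / n
    KP, E, Dtot = KP0, E0, 0.0
    for i in range(n):
        t = i * dt
        W = 0.2 * math.sin(t)                # rate of work
        D = 0.1 * (1.0 + math.cos(t))**2     # turbulent dissipation, >= 0
        flux = dt * (W - D)
        KP += flux                           # d(K+P) = (W - D) dt
        E -= flux                            # dE = (-W + D) dt
        Dtot += dt * D                       # energy transferred by dissipation
    return KP, E, Dtot

KP, E, Dtot = integrate()
print(KP + E)    # total K + P + E is conserved: stays at KP0 + E0 = 1
print(E, Dtot)   # E has received Dtot > 0 from K + P, minus the net work
```

Since the two right-hand sides are exact negatives, any time integration conserves the total, while the one-signed $D$ makes the transfer into heat irreversible: the direction of time.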

Here turbulent dissipation is the same as entropy production or the other way around:
  • Entropy production is the same as turbulent dissipation. 
This removes the mystery from entropy and you can now win any debate, by really knowing what entropy is! 

Saturday, May 10, 2014

Lennart Bengtsson on the Burning of Books

Lennart Bengtsson comments on an opinion piece in today's DN:
  • With the large number of academics among the signatories, one might perhaps have expected a little more critical and open thinking, and not just this fabulous mumbo-jumbo. That the world today depends on fossil energy for more than 80%, with 1.4 billion people lacking access to electricity and half of the Earth's population undersupplied with energy, hardly seems to worry these knights of the light in the least.
  • The next step will presumably be to ban the incorrect thinking, or to ban or even burn unsuitable books, such as the newly published book by the prominent Belgian energy expert Samuele Furfari: "Vive les énergies fossiles!", with the subtitle "La contre-révolution énergétique". The only hopeful thing is that these signatories, or rather their climate-fighting students, do not normally read books in French. In the final stage we must expect that various unsuitable persons will also be banned, in this new-Swedish inverted age of enlightenment.
Against this stands the fact that LB took part in the public burning at KTH on December 4, 2010 (post with 4051 page views) of my mathematics book, because the mathematics of simple climate models questioned the then (and still) prevailing dogma of CO2 alarmism.

Can we read LB's comment as an expression that LB would not do the same thing today?

Could one say that book burning is not good, because it leads to increased CO2 emissions?

Strange Laws by Strangest Man: Dirac

               Paul Dirac, The Strangest Man, who conjured (strange) laws of nature from pure thought.

Paul Dirac coined the name fermion, after Enrico Fermi, for an elementary particle with an antisymmetric wave-function $\psi (x_1,…,x_N)$ as a function of $N$ three-dimensional space variables $x_1,…,x_N$, and the name boson, after Satyendra Nath Bose, for a particle with a symmetric wave-function.

Dirac conjectured that Nature is so constructed that only wave-functions which are either anti-symmetric or symmetric can occur, but could give no reason other than mathematical beauty. Dirac was encouraged by the property of an antisymmetric wave-function to change sign under permutation of two particles, which forbids two particles (with the same spin) to be at the same spot, which he happily recognized as Pauli's exclusion principle.
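The sign change and the exclusion of coincident particles can be checked directly for a two-particle Slater-type construction, with arbitrary illustrative orbitals (a generic sketch, not a model of any specific atom):

```python
import numpy as np

# two illustrative one-particle orbitals on the real line (hypothetical choice)
def phi_a(x): return np.exp(-x**2)
def phi_b(x): return x * np.exp(-x**2)

# antisymmetrized two-particle wave-function (2x2 Slater-type determinant)
def psi(x1, x2):
    return phi_a(x1) * phi_b(x2) - phi_b(x1) * phi_a(x2)

print(psi(0.7, -1.3) + psi(-1.3, 0.7))   # -> 0.0: sign flips under permutation
print(psi(0.5, 0.5))                     # -> 0.0: no two particles at the same spot
```

Antisymmetry $\psi(x_1,x_2)=-\psi(x_2,x_1)$ forces $\psi$ to vanish whenever $x_1=x_2$, which is the exclusion principle in this formalism.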

Since then it has become an incontrovertible fact, impossible to question, that Nature only accepts either anti-symmetric or symmetric wave-functions, but no underlying reason has ever been presented, other than mathematical beauty (for people who rightly can admire such a thing).

But if it has no physical reason, Dirac's conjecture may be wrong. The first evidence to this effect is that the wave-function for Helium appears to be neither symmetric nor anti-symmetric, representing a configuration with the two electrons separated into two opposite half-spheres. If Dirac's conjecture is wrong for $N=2$, it may well be wrong also for $N>2$, and then standard quantum mechanics collapses…

Basic Atmospheric Thermodynamics as 2nd Law

The debate on the temperature distribution in the atmosphere is going around in never-ending circulation just like the air in the atmosphere. Let us here recall the basic statements of my chapter Climate Thermodynamics in a famous book, which is condensed as the 2nd law of thermodynamics expressed in the following form with the dot signifying time differentiation:
  • $\dot K+\dot P = W-D$
  • $\dot E = -W + D$,  
where $K$ is kinetic energy, $P$ potential energy, $W$ work, $E$ heat energy and $D\ge 0$ is turbulent dissipation, with $W > 0$ under expansion and $W < 0$ under compression. The sign of $D$ sets the direction of time, with energy always transferred from $K+P$ to $E$.

There are two limiting temperature distributions, characterized by their lapse rate (the linear decrease of temperature with height), assuming zero heat conductivity:
  • Isothermal atmosphere with zero lapse rate: $D$ maximal with $W=D$.
  • Maximal (dry adiabatic) lapse rate $=9.8\, C/km$ with $D=0$ minimal.
The observed lapse rate (of about 6.5 C/km) is somewhere between maximal and minimal. We note:
  1. Lapse rate may increase by slow laminar vertical circulation with ascending air cooling and descending air warming with $D=0$.
  2. Lapse rate may decrease by turbulent dissipation $D>0$ heating upper layers.
  3. A (partially) transparent atmosphere (like on Earth) heated from below may naturally develop a positive lapse rate by 1. 
  4. An opaque atmosphere (like on Venus) heated from above may become isothermal by heat conduction and may then develop a positive lapse rate by 1.  
The lapse rate is basic to planetary climate since it determines the surface temperature from the temperature at the top of the troposphere, and its dependence on the radiative properties of the atmosphere is a key question in global climate science.
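The dry adiabatic bound and the resulting surface temperature can be computed directly; the tropopause height and temperature below are typical illustrative values, not data from the chapter:

```python
# dry adiabatic lapse rate Gamma = g/c_p (standard values for dry air)
g = 9.81          # m/s^2, gravitational acceleration
cp = 1004.0       # J/(kg K), specific heat at constant pressure
gamma_dry = g / cp * 1000.0        # K per km
print(gamma_dry)                   # about 9.8 C/km, the stated maximal lapse rate

# surface temperature from the top of the troposphere with the observed
# lapse rate of about 6.5 C/km (tropopause height and temperature are
# typical illustrative values)
T_top = 216.0     # K, temperature at the top of the troposphere
h_trop = 11.0     # km, tropopause height
T_surface = T_top + 6.5 * h_trop
print(T_surface)                   # 287.5 K, about 14 C
```

This shows directly how the surface temperature follows from the temperature at the top of the troposphere once the lapse rate is fixed, which is why the lapse rate is the key quantity.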

Compare with the previous post Lapse Rate by Gravitation: Loschmidt or Boltzmann/Maxwell?

Friday, May 9, 2014

Why Insist on Quantum Mechanics Based on Magic and Contradiction?

The ground state of Helium is postulated to be $1s^2$, with two overlaid electrons of opposite spin and identical spherically symmetric spatial wave-functions in the first shell, which however cannot be the true ground state because its energy is too large. This is the starting point for the Schrödinger equation for many-electron atoms.

Here is further motivation of why it may be of interest to consider wave-functions for an atom with $N$ electrons as a sum of $N$ functions $\psi_1(x)$,…,$\psi_N(x)$, all depending on a common three-dimensional space coordinate $x$ (plus time), as suggested in a previous post:
  • $\psi (x)=\psi_1(x)+\psi_2(x)+…+\psi_N(x)$.
We recall that Schrödinger's equation for the Hydrogen atom as the basis of quantum mechanics, takes the form:
  • $ih\frac{\partial\psi}{\partial t}=-\frac{h^2}{2m}\Delta\psi +V\psi$ for all $x$ and $t>0$,
with kernel potential $V(x)=-\frac{1}{\vert x\vert}$, $x$ a three-dimensional space coordinate, $t>0$ time, $h$ Planck's constant, $m$ the mass of an electron and corresponding one-electron wave-function $\psi (x,t)$ as solution. This equation is magically pulled out of a hat from the relation
  • $E =\frac{p^2}{2m} + V(x)$
expressing conservation of energy $E$ of a body of mass $m$ with position $x(t)$ moving in a potential $V(x)$ with momentum $p=m\frac{dx}{dt}$, by the following formal substitutions:
  • $E\rightarrow ih\frac{\partial}{\partial t}$,
  • $p\rightarrow\frac{h}{i}\nabla$,
followed by formal multiplication by $\psi$. Energy conservation for the Hydrogen atom then takes the form:
  • $E=K(t)+P(t)$ for all $t>0$, where
  • $K(t) =\frac{h^2}{2m}\int\vert\nabla\psi (x,t)\vert^2\, dx$ is the kinetic energy, 
  • $P(t)=-\int \frac{\vert\psi (x, t)\vert^2}{\vert x\vert}dx$ is the potential energy
of the electron, under the normalization
  • $\int\vert\psi (x,t)\vert^2\, dx=1$.
So far so good: The different energy levels $E$ of time-periodic solutions to Schrödinger's equation give the observed spectrum of the Hydrogen atom with corresponding wave-functions describing the distribution of the electron around the kernel. We see that the Laplace term gives rise to the kinetic energy as an effect of gradient regularization.  
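In atomic units ($h=m=1$) the energy functionals above can be evaluated for the classical trial wave-function $\psi_\alpha\sim e^{-\alpha\vert x\vert}$, for which $K=\alpha^2/2$ and $P=-\alpha$; minimizing over $\alpha$ recovers the hydrogen ground state energy $-1/2$. The check below is a standard textbook variational computation, with grid parameters as illustrative choices:

```python
import numpy as np

def hydrogen_energy(alpha, rmax=40.0, n=20000):
    """E(alpha) = K + P for the trial state psi ~ exp(-alpha*r) in atomic
    units, by radial quadrature; analytically E(alpha) = alpha**2/2 - alpha."""
    r = np.linspace(1e-8, rmax, n)
    dr = r[1] - r[0]
    w = np.exp(-2.0 * alpha * r)        # |psi|^2 (unnormalized)
    norm = np.sum(w * r**2) * dr        # ~ int |psi|^2 r^2 dr
    # since |grad psi| = alpha*|psi|, the normalized kinetic energy
    # (1/2) int |grad psi|^2 / int |psi|^2 is exactly alpha**2/2
    K = 0.5 * alpha**2
    P = -np.sum(w * r) * dr / norm      # - int |psi|^2 / r, normalized
    return K + P

alphas = np.linspace(0.5, 1.5, 101)
energies = [hydrogen_energy(a) for a in alphas]
best = alphas[int(np.argmin(energies))]
print(best, min(energies))   # minimum near alpha = 1 with E close to -0.5 Hartree
```

The balance between gradient (kinetic) energy pushing the electron outward and the attractive potential pulling it in selects $\alpha=1$, the observed ground state energy $-0.5$ Hartree.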

But consider now the accepted standard text-book generalization of Schrödinger's equation to an atom with $N$ electrons:
  • $ih\frac{\partial\psi}{\partial t}=\sum_{j=1}^N(-\frac{h^2}{2m}\Delta_j -\frac{N}{\vert x_j\vert})\psi + \sum_{k < j}\frac{1}{\vert x_j-x_k\vert}\psi$,
where $\psi (x_1,…,x_N,t)$ depends on $N$ three-dimensional space coordinates $x_1,…, x_N$ and time $t$, and $\Delta_j$ is the Laplace operator with respect to coordinate $x_j$, under the normalization
  • $\int\vert\psi\vert^2\, dx_1….dx_N=1$.
We see the appearance of the one-electron operators
  • $-\frac{h^2}{2m}\Delta_j - \frac{N}{\vert x_j\vert}$
with corresponding one-electron kinetic energies:
  • $K_j(t) =\frac{h^2}{2m}\int\vert\nabla_j\psi\vert^2\, dx_1…dx_N$, 
and electron-electron repulsion expressed by the coupling potential
  • $\sum_{k < j}\frac{1}{\vert x_j-x_k\vert}$.
We see that in this model each electron $j$ is equipped with its own three-dimensional space with coordinate $x_j$ and its own kinetic energy $K_j$, with interaction between the electrons only through the coupling potential.

The electron individuality and high dimensionality of the wave-function $\psi (x_1,…,x_N)$ is reduced by restriction to wave-functions in product form $\psi_1(x_1)\cdots\psi_N(x_N)$, built from three-dimensional wave-functions $\psi_1,…,\psi_N$ and combined with symmetry or antisymmetry under permutations of the coordinates $x_1,…,x_N$, which eliminates all individuality of the electrons.

Extreme electron individuality is thus countered by permutations removing all individuality, but the individual one-electron kinetic energies $K_j$ are kept as if each electron keeps its individuality. This is strange.

To see the result, recall that the ground state of minimal energy of Helium with two electrons is supposed to be given by a symmetric wave function $\psi (x_1,x_2)$
  • $\psi (x_1,x_2)=\phi (x_1)\phi (x_2)$,      
where $\phi (x_1)\sim \exp(-2\vert x_1\vert )$ is spherically symmetric, the same for both electrons. The two electrons of the ground state of Helium are thus supposed to have identical spherically symmetric distributions, denoted $1s^2$; see the periodic table above. The trouble is now that this configuration has energy (in Hartree units) $-2.75$, while the observed energy is $-2.903$.
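The numbers $-2.75$ and $-2.903$ can be checked against the standard variational formula for a product of exponentials $\phi\sim e^{-a\vert x\vert}$, with energy in Hartree units $E(a)=a^2-4a+\frac{5}{8}a$; taking $a=2$ gives $-2.75$, while minimizing over $a$ gives the screened value $a=27/16$ with $E\approx -2.848$, still above the observed $-2.903$:

```python
# standard textbook variational energy (Hartree units) for the Helium trial
# state psi(x1, x2) = phi(x1)*phi(x2) with phi ~ exp(-a*r):
#   E(a) = a**2 - 4*a + (5/8)*a
# (kinetic 2*(a**2/2), nuclear attraction -2*Z*a with Z = 2,
#  electron-electron repulsion (5/8)*a)
def helium_energy(a):
    return a**2 - 4.0 * a + 0.625 * a

E_unscreened = helium_energy(2.0)     # phi hydrogen-like with full charge Z = 2
a_opt = 2.0 - 5.0 / 16.0              # minimizer a = 27/16 (screened charge)
E_opt = helium_energy(a_opt)

print(E_unscreened)   # -> -2.75, the energy of the symmetric 1s^2 product state
print(E_opt)          # -> about -2.8477, still above the observed -2.903
```

No spherically symmetric product state closes the remaining gap to $-2.903$, which is the point of the discussion that follows.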

The true ground state is thus different from $1s^2$. To handle this situation, while insisting that the ground state still is $1s^2$ as in the table above, a so-called corrective perturbation is made, introducing a dependence of $\psi (x_1,x_2)$ on $\vert x_1-x_2\vert$ in a Rayleigh-Ritz minimization procedure. This way a better correspondence with observation is reached, because separation of the electrons is now possible: if one electron is on one side of the kernel, then the other electron is on the other side. But the standard message is contradictory:
  • The ground state configuration for Helium is $1s^2$, which however is not the ground state because its energy is too large ($-2.75$ instead of $-2.903$).
  • Smaller energy can be obtained by a perturbation computation but the corresponding electron configuration is hidden to readers of physics books, because the ground state is still postulated to be $1s^2$.     
If we minimize energy over wave functions of product form
  • $\psi (x_1,x_2)=\psi_1(x_1)\psi_2(x_2)$, 
without asking for symmetry, we find that the minimum is achieved with spherically symmetric $\psi_1=\psi_2$, with too large energy as just noted. However, if we instead compute the kinetic energy based on the sum with common space coordinate $x$
  • $\psi_1(x) +\psi_2(x)$ 
as suggested in the previous post, then separation of the electrons is advantageous allowing discontinuous electron distributions (joining smoothly) without cost of kinetic energy and better correspondence with observation is achieved.

  • The standard attribution of individual kinetic energy appears to make the individual electron distributions "too stiff" and thus favors overlaying electrons rather than separated electrons, requiring Pauli's exclusion principle to prevent overlaying of more than two electrons. 
  • If kinetic energy is instead computed from the sum of individual electron distributions, electron "stiffness" is reduced and separation favored. 
  • Since the standard individual one-electron attribution of kinetic energy is ad hoc,  there is little  reason to insist that kinetic energy must be computed this way, in particular when it leads to an incorrect ground state already for Helium. 
  • Attributing kinetic energy to a sum of electron wave-functions allows discontinuous electron distributions joining smoothly without cost of kinetic energy. Electron individuality is here kept as an individual distribution in space, while kinetic energy is collectively computed from the assembly. This is how individuality is handled in a collective macroscopic setting, and there is no reason why it would not be operational also for microscopics.
  • Since the stated ground state $1s^2$ for Helium is incorrect, there is no reason to believe that any of the other ground states listed in the standard periodic table is correct.
  • If so, then the claim that the standard Schrödinger's equation explains the periodic table has little reason.
PS1 The standard argument is that the standard multi-d Schrödinger equation must be correct since there is no known case in which the multi-d wave-function solution does not agree exactly with what is observed! But this is not a correct argument, because (i) the multi-d Schrödinger equation cannot be solved, and (ii) even if the wave-function could be determined, its physical meaning is unclear, so comparison with reality is impossible. The standard argument thus turns (i) and (ii) from scientific disaster into monumental success by claiming that since the wave-function is impossible to determine, there is no way to prove that it is not correct.

Realizing that arguing this way does not follow basic scientific principle may open the way to different forms of Schrödinger's equation, as non-linear systems of equations in three space dimensions instead of linear multi-d scalar equations, which are computable and have physical meaning, as suggested.

PS2 The standard way to handle the fact that the standard linear multi-d Schrödinger equation is uncomputable is Density Functional Theory (DFT), awarded the 1998 Nobel Prize in Chemistry, as a non-linear 3d system in electron density. DFT results from averaging in the standard linear multi-d Schrödinger equation, producing exchange-correlation potentials which are impossible to determine. If the standard multi-d linear Schrödinger equation is questionable, then so is DFT.

torsdag 8 maj 2014

Quantum Statistics as Salvation from Catastrophe?

Planck awarding the Planck Medal to Einstein in 1929 for his elaboration of Planck's idea of discrete quanta of energy into quanta of light, an idea which Planck viewed as a "hypothetical attempt" resulting from an "act of desperation".

To understand a theory of physics it is helpful to seek the reason the theory was developed. In The Conceptual Development of Quantum Mechanics by Max Jammer we read:
  • Quantum theory had its origin in the inability of classical physics to account for the experimentally observed energy distribution in the continuous spectrum of black-body radiation.
  • It is convenient to define as the first phase in the development of quantum theory the period in which all quantum conceptions and principles proposed referred exclusively to black-body radiation or harmonic vibrations.  
  • …the study of the single physical phenomenon of blackbody radiation led to the conceptions of quanta and to quantum statistics of the harmonic oscillator, and thus to results which defied the principles of classical mechanics and, in particular, the equipartition theorem.
  • It was generally agreed that classical physics was incapable of accounting for atomic and molecular processes.
  • Planck obviously regarded the use of the law of chance… merely as a provisional device… in his own opinion his new theory was but a "hypothetical attempt" to reconcile the law of radiation with foundations of Maxwell's doctrine, and not a final solution to the problem.
Quantum mechanics thus developed from Planck's hypothetical attempt to save the classical Rayleigh-Jeans radiation law, with radiance at frequency $\nu$ scaling like $T\nu^2$ with $T$ temperature, from an ultraviolet catastrophe with the radiance apparently tending to infinity without any bound on the frequency $\nu$.

To save the world from this catastrophe, Planck, against his basic convictions as a scientist and seeing no other way out, gave up causality as the essence of science by corrupting his deterministic harmonic oscillators with statistics. On this shaky ground quantum mechanics was formed. No wonder that quantum mechanics in its present form is a catastrophe (with uncomputable wave-functions without physical meaning), although depicted as an imposing intellectual structure of great beauty. 

But can statistics really save us from catastrophe? A catastrophe may be the result of an unfortunate throw of dice by fate, but you don't avoid a catastrophe by letting throws of dice decide how to steer your car.  

Computational Blackbody Radiation describes a different way of avoiding the ultraviolet catastrophe, with statistics replaced by a constructive version of classical mechanics based on finite precision computation. From this starting point a quantum mechanics without statistics may be possible to formulate. If so, the present catastrophe of quantum mechanics can (perhaps) be avoided.


onsdag 7 maj 2014

Is Blackbody Radiation Universal?

In a recent series of articles Pierre-Marie Robitaille questions the idea of universality of blackbody radiation. Let us see what the analysis of the model studied at Computational Blackbody Radiation can say.

The model consists of a wave equation for a vibrating atomic lattice augmented with small damping modeling outgoing radiation. The model is characterized by a lattice temperature $T$ assumed to be the same for all frequencies $\nu$ and a radiative damping coefficient $\gamma$ with corresponding radiance $R(\nu ,T)$ depending on frequency and temperature according to Planck's law (with simplified high-frequency cut-off):
  • $R(\nu ,T)=\gamma T\nu^2$ for $\nu\leq\frac{T}{h}$,
  • $R(\nu ,T)=0$ for $\nu > \frac{T}{h}$,
where the parameter $h$ defines the high-frequency cut-off. Subject to frequency dependent forcing $f_\nu$, the model will reach equilibrium with incoming = outgoing radiation:
  • $R(\nu ,T) =\epsilon f_\nu^2$ for $\nu\leq\frac{T}{h}$,
  • assuming for simplicity that frequencies $\nu >\frac{T}{h}$ are reflected,
where $\epsilon\le 1$ is a coefficient of absorptivity = emissivity. The radiative qualities of a lattice can thus be described by the coefficients $\gamma$, $\epsilon$ and $h$ and the temperature scale $T$. 
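As a check on the model, the total radiance follows by integrating $R(\nu ,T)$ over frequency: $\int_0^{T/h}\gamma T\nu^2d\nu =\frac{\gamma T^4}{3h^3}$, a Stefan-Boltzmann-like $T^4$ law produced by the cut-off. A small numerical sketch (with $\gamma = h = 1$ for illustration; the function names are mine):

```python
import numpy as np

# Radiance with the simplified high-frequency cut-off at nu = T/h.
def R(nu, T, gamma=1.0, h=1.0):
    return np.where(nu <= T / h, gamma * T * nu**2, 0.0)

# Midpoint-rule integral of R over frequency, on [0, 2T/h] to cover the cut-off.
def total_radiance(T, gamma=1.0, h=1.0, n=200000):
    edges = np.linspace(0.0, 2 * T / h, n + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])
    return np.sum(R(mid, T, gamma, h)) * (edges[1] - edges[0])
```

With $\gamma = h = 1$ this gives $\frac{1}{3}$ at $T=1$ and $\frac{16}{3}$ at $T=2$, i.e. the $T^4$ scaling.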

Assume now that we have two lattices 1 and 2 with different characteristics $(\gamma_1,\epsilon_1, h_1, T_1)$ and $(\gamma_2,\epsilon_2, h_2, T_2)$ which are brought into radiative equilibrium. We will then have 
  • $\epsilon_1\gamma_1T_1\nu^2 = \epsilon_2\gamma_2T_2\nu^2$ for $\nu\leq\frac{T_2}{h_2}$ 
  • assuming $\frac{T_2}{h_2}\leq \frac{T_1}{h_1}$ 
  • and for simplicity that 2 reflects frequencies $\nu > \frac{T_2}{h_2}$.    
If we choose lattice 1 as reference, to serve as an ideal reference blackbody, defining a reference temperature scale $T_1$, we can then calibrate the temperature scale $T_2$ for lattice 2 so that 
  • $\epsilon_1\gamma_1T_1= \epsilon_2\gamma_2T_2$,
thus effectively assigning temperature $T_1$ to lattice 2 by radiative equilibrium with lattice 1 acting as ideal blackbody with maximal cut-off, that is, using lattice 1 as a reference thermometer. Any lattice 2 will then mimic the radiation of lattice 1 in radiative equilibrium and a form of universality will be achieved. 
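The calibration relation can be written as a one-line sketch (the function name is mine):

```python
# Radiative equilibrium eps1*gamma1*T1 = eps2*gamma2*T2 calibrates the
# temperature scale of lattice 2 against reference lattice 1.
def calibrate_T2(eps1, gamma1, T1, eps2, gamma2):
    return eps1 * gamma1 * T1 / (eps2 * gamma2)
```

Only the products $\epsilon_1\gamma_1$ and $\epsilon_2\gamma_2$ enter, which is the point made below about the effective parameter.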

In practice lattice 1 is represented by a small piece of graphite inside a cavity with walls represented by lattice 2, with the effect that the cavity will radiate like graphite independently of its form or wall material. Universality is thus reached by mimicking a reference viewed as an ideal blackbody, which is perfectly understandable, and not by some mysterious deep inherent quality of blackbody radiation. Without the piece of graphite the cavity will possibly radiate with different characteristics and universality may be lost.

The analysis indicates that the critical quality of the reference blackbody is maximal cut-off and equal temperature of all frequencies, and not maximal absorptivity = emissivity = 1, since the effective parameter is the product $\epsilon\gamma$. 

  • All dancers which mimic Fred Astaire, dance like Fred Astaire, but all dancers do not dance like Fred Astaire.     

söndag 4 maj 2014

A Three-dimensional Multi-Electron Wave Function

Consider a wavefunction $\psi$  for an atom with $N$ electrons as a sum of $N$ functions $\psi_1(x)$,…,$\psi_N(x)$, all depending on a common three-dimensional space coordinate $x$ (plus time):
  • $\psi (x)=\psi_1(x)+\psi_2(x)+…+\psi_N(x)$,
with associated energy as the sum of kinetic energy, attractive kernel potential energy and repulsive interelectron energy:
  • $E(\psi )= \frac{1}{2}\int\vert\nabla\psi\vert^2dx - \int\frac{N\psi^2}{\vert x\vert}dx+\sum_{j\neq k}\int\int\frac{\psi_j^2(x)\psi_k^2(y)}{2\vert x-y\vert}dxdy$,
under the normalization
  • $\int\psi_j^2dx =1$ for $j=1,...,N$,
where $\psi_j(x)$ represents the distribution of electron $j$.

The ground state is determined as the state of minimal energy, obtained as the solution of a non-linear system of equations in three space dimensions expressing minimality. We see that minimization favors atomistic wave functions $\psi (x)=\sum_j\psi_j(x)$ built from electronic wave functions $\psi_j$ with disjoint supports, which makes the interelectronic repulsion energy small without cost of kinetic energy.

The ground state of Helium thus will have its two electrons separated into two half-spheres with corresponding wave functions $\psi_1(x)$ and $\psi_2(x)$ meeting smoothly at a common separation surface. It is possible that this is the origin of the Zweideutigkeit or two-valuedness expressed in Pauli's exclusion principle, which Pauli did not like because it was ad hoc without rationale.

The  sequence of posts on Quantum Contradictions explores atomic ground states based on the above wave function with surprisingly good correspondence with observations, see also Many-Minds Quantum Mechanics.

We compare with standard quantum mechanics with multi-dimensional wave functions $\psi (x_1,…,x_N)$ depending on $N$ three-dimensional space coordinates $x_1$,…,$x_N$, typically in the form of a Slater determinant as a linear combination of products of $N$ functions $\psi_1$,…,$\psi_N$, each function separately depending on three space coordinates,  thus based on wavefunctions depending on altogether $3N$ space coordinates. Such multi-dimensional wave functions defy direct physical interpretation and are also impossible to compute for atoms with several electrons and thus do not belong to science. Yet they are supposed to be fundamental to atomistic physics.

The standard view is that macroscopic and microscopic (atomistic) physics are fundamentally different, because microscopic physics demands a multi-dimensional wave function, while macroscopic physics is described by systems of three-dimensional functions. If microscopic physics can also be described by systems of three-dimensional functions, as indicated above, then there is no fundamental difference between macroscopic and microscopic physics and a major obstacle to progress can be eliminated.

Computations based on wave functions of the above form are under way and will be presented when available. For simple hand calculations see here and here.

PS1 For Helium with two electrons at distance $\frac{1}{2}$ from the kernel and mutual distance $1$ as an approximate ground state configuration in the above model, we get $E = -3$, to be compared with the observed $-2.903$.

For Lithium with two electrons at distance $\frac{1}{3}$ from the kernel and mutual distance $\frac{2}{3}$, together with a third electron at distance 1 from an effective kernel of charge +1, we get $E = -8$, to be compared with the observed $-7.5$. For three electrons at distance $\frac{1}{3}$ from the kernel and mutual distance $\frac{1}{2}$, we instead get $E = -7.5$, indicating that the configuration with two electrons in an inner shell and one in an outer shell has smaller energy and thus is the actual ground state configuration for Lithium, obtained without reference to Pauli's exclusion principle.
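The hand calculations above can be reproduced by a point-configuration sketch. Note the key assumption, reverse-engineered to match the quoted figures and not stated explicitly in the post: each electron of width $r$ (taken as its distance to the kernel) contributes kinetic energy $\frac{1}{2r^2}$, the hydrogenic value.

```python
# Point-configuration energy estimate in Hartree units.
# Each electron is given as (effective kernel charge Z, distance/width r);
# pair_distances lists the interelectron distances counted explicitly
# (screened interactions are instead folded into the effective charge).
def config_energy(electrons, pair_distances):
    kinetic = sum(1 / (2 * r**2) for _, r in electrons)  # assumed 1/(2r^2) rule
    attraction = sum(-Z / r for Z, r in electrons)
    repulsion = sum(1 / d for d in pair_distances)
    return kinetic + attraction + repulsion

# Helium: two electrons at r = 1/2 from charge +2, mutual distance 1.
E_He = config_energy([(2, 1/2), (2, 1/2)], [1])
# Lithium, two shells: inner pair at r = 1/3 (mutual distance 2/3),
# outer electron at r = 1 from effective charge +1.
E_Li = config_energy([(3, 1/3), (3, 1/3), (1, 1)], [2/3])
# Lithium, one shell: three electrons at r = 1/3, mutual distance 1/2.
E_Li_one_shell = config_energy([(3, 1/3)] * 3, [1/2] * 3)
print(E_He, E_Li, E_Li_one_shell)  # close to -3, -8, -7.5 as quoted above
```

Whether this kinetic rule is exactly the one intended, the three quoted energies come out as stated under it.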

PS2 Recall that standard quantum mechanics is formulated in terms of a multi-dimensional wave function $\psi (x_1,x_2,…,x_N)$ depending on $N$ three-dimensional space coordinates $x_1$,…,$x_N$, altogether on $3N$ space coordinates, which is devastating because both physical interpretation and computational determination are impossible. To reduce the dimensionality, typically an Ansatz is made as Slater determinants of three-dimensional wave functions $\psi_i$, as linear combinations of products of the form (subject to permutations of the coordinates):
  • $\psi (x_1,…,x_N)=\psi_1(x_1)\psi_2(x_2)….\psi_N(x_N)$,
leading to a set of one-electron wave equations coupled by complex exchange-correlation terms which are very difficult to determine. The above Ansatz with a sum instead of products of three-dimensional wave functions may offer more computationally manageable and thus more useful models.

PS3 For Beryllium with 4 electrons, we get $E=-14$ from 2 electrons at distance $\frac{1}{4}$ from the kernel with mutual distance $\frac{1}{2}$, together with $E = -\frac{2}{3}$ from 2 electrons of width $\frac{1}{2}$ at distance $\frac{1}{4} + \frac{1}{2}$ from an effective charge of +2, which gives altogether $E = -14.667$, which is exactly what is observed!!

PS4 For N electrons distributed over one shell at distance $\frac{1}{N}$ to the kernel assuming the average distance between any pair of electrons is $\frac{1}{N}$, we get $E = -\frac{N^2}{2}$, which is much larger than the observed $E \approx - N^2$ and thus is not the ground state configuration.  A multi-shell distribution in the model gives better agreement with observations and so the model may capture the real shell structure (without resort to any Pauli exclusion principle).
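The arithmetic behind $-\frac{N^2}{2}$, under the same assumed kinetic rule $\frac{1}{2r^2}$ per electron of width $r$ as in the hand calculations above ($N$ electrons at distance $\frac{1}{N}$ from a kernel of charge $N$, with $\frac{N(N-1)}{2}$ pairs at distance $\frac{1}{N}$), is:
  • $E = N\cdot\frac{N^2}{2} - N\cdot\frac{N}{1/N} + \frac{N(N-1)}{2}\cdot\frac{1}{1/N} = \frac{N^3}{2} - N^3 + \frac{N^3-N^2}{2} = -\frac{N^2}{2}$.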

PS5 Note that the above model allows discontinuous electron distributions (joining smoothly) without cost of kinetic energy, which favors electron separation. Compare with Hartree models as systems of one-electron models with continuous electron distributions, for which separation requires a kinetic energy cost and a resort to Pauli's exclusion principle is necessary to prevent more than two electron distributions from overlaying.

PS6 To find the ground state, we can use time-stepping of the parabolic system
  • $\frac{\partial\psi_j(x,t)}{\partial t} = \Delta\psi (x,t) + \frac{N\psi (x,t)}{\vert x\vert}-\sum_{k\neq j}\int\frac{\psi_k^2(y,t)}{2\vert x-y\vert}dy\,\psi_j(x,t)$ for $t > 0$, $j=1,…,N$,
with successive normalization to $\int\psi_j^2(x,t)\, dx=1$ after each time step and $\psi =\sum_{k=1}^N\psi_k$.  Further
  • $V_k\equiv\int\frac{\psi_k^2(y,t)}{2\vert x-y\vert}dy$,
can be computed by solving $-\Delta V_k = 2\pi\psi_k^2$.
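For $N=1$ (no repulsion) the time-stepping reduces to a radial problem whose minimizer is the hydrogen ground state with energy $-\frac{1}{2}$ in Hartree units, which gives a simple test of the scheme. A sketch (my own discretization, with $u(r)=r\psi (r)$ so that the flow reads $u_t = \frac{1}{2}u'' + \frac{u}{r}$ under the normalization $\int u^2dr = 1$):

```python
import numpy as np

# Radial gradient flow for hydrogen: u(r) = r*psi(r),
# u_t = u''/2 + u/r, normalized to int u^2 dr = 1, with u = 0 at r = 0, R_max.
R_max, n = 20.0, 400
r = np.linspace(0.0, R_max, n + 2)[1:-1]  # interior grid points
dr = r[1] - r[0]
u = r * np.exp(-r / 2)                    # deliberately too-wide initial guess
u /= np.sqrt(np.sum(u**2) * dr)
dt = 0.4 * dr**2                          # explicit-step stability
for _ in range(40000):
    upp = np.roll(u, -1) - 2 * u + np.roll(u, 1)
    upp[0] = u[1] - 2 * u[0]              # boundary u(0) = 0
    upp[-1] = u[-2] - 2 * u[-1]           # boundary u(R_max) = 0
    u = u + dt * (0.5 * upp / dr**2 + u / r)
    u /= np.sqrt(np.sum(u**2) * dr)       # successive normalization
du = np.diff(np.concatenate(([0.0], u, [0.0])))  # include boundary segments
E = 0.5 * np.sum(du**2) / dr - np.sum(u**2 / r) * dr
print(E)  # close to -0.5, the hydrogen ground state energy
```

The flow drives the normalized state toward the minimizer $u=2re^{-r}$, so the computed energy approaches $-\frac{1}{2}$ up to discretization error; for $N>1$ the same loop would carry one array per $\psi_j$ and the potentials $V_k$ as in PS6.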