An important part of 20th century mathematics has been devoted to the analysis of partial differential equations (PDEs) as concerns (i) existence and (ii) regularity of solutions. A PDE is a continuum model with infinitely many degrees of freedom.
Proofs of existence typically start from a priori bounds on solutions to regularised equations, for which existence is already settled, and then obtain solutions of the original equation through a limit process.
The main components of an existence proof are the a priori bounds, which can require complicated and lengthy mathematical analysis.
Once existence of solutions is proved, further mathematical analysis can establish properties of solutions, typically in the form of bounds on derivatives showing regularity. Again the analysis can be complicated and lengthy.
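As a minimal illustration of the kind of a priori bounds involved (a textbook example of my own, not taken from the discussion above), consider the heat equation $u_t - \Delta u = 0$ in a domain $\Omega$ with $u = 0$ on the boundary and initial data $u(0) = u_0$. Multiplying by $u$ and integrating in space and time gives an energy bound supporting existence, while multiplying by $-\Delta u$ gives a derivative bound expressing regularity:

$$\|u(t)\|^2 + 2\int_0^t \|\nabla u\|^2\,ds = \|u_0\|^2, \qquad \|\nabla u(t)\|^2 + 2\int_0^t \|\Delta u\|^2\,ds = \|\nabla u_0\|^2.$$

For nonlinear equations like Navier-Stokes, bounds beyond the first energy estimate are much harder to come by, which is where the analytical difficulty lies.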
A famous challenge in the form of a Clay Millennium Prize Problem is to give an analytical proof of existence and regularity of solutions to the Navier-Stokes equations for incompressible fluid flow. No progress on this open problem has been reported since 2000.
But there is a different approach to (i) and (ii) in terms of computation, where in each given case an approximate solution to the equations is computed in a step-by-step manner after discretisation of the PDE into a finite number of degrees of freedom, which can be processed by numerical linear algebra. The computational process either halts or delivers, after a finite number of steps of choice, an approximate solution which can thus be inspected a posteriori as to its qualities. It is thus possible to evaluate in what sense the approximate solution satisfies the PDE and to accept it or recompute with a better discretisation.
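As a sketch of this workflow (a minimal example of my own, not code from the post), the following Python script computes an approximate solution of the 1d heat equation by explicit finite differences and accepts it or recomputes on a finer mesh based on an a posteriori comparison:

```python
import numpy as np

def solve_heat_1d(n, T=0.1):
    """Explicit finite differences for u_t = u_xx on (0,1) with
    u(0,t) = u(1,t) = 0 and u(x,0) = sin(pi*x), n interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    u = np.sin(np.pi * x)
    dt = 0.4 * h**2                          # stability needs dt <= h^2/2
    t = 0.0
    while t < T:
        step = min(dt, T - t)
        lap = np.zeros(n)
        lap[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / h**2
        lap[0] = (u[1] - 2*u[0]) / h**2      # zero boundary values
        lap[-1] = (u[-2] - 2*u[-1]) / h**2
        u = u + step * lap
        t += step
    return u

# A posteriori acceptance: recompute on a finer mesh until two successive
# approximate solutions agree at the common nodes to a chosen tolerance.
tol, n = 1e-3, 20
u = solve_heat_1d(n)
while True:
    u_fine = solve_heat_1d(2 * n + 1)        # finer mesh contains old nodes
    diff = np.max(np.abs(u - u_fine[1::2]))  # compare at common nodes
    if diff < tol:
        break
    n, u = 2 * n + 1, u_fine
print(f"accepted n = {n} interior points, successive difference {diff:.1e}")
```

The acceptance criterion used here is agreement between two successive meshes; residual-based criteria serve the same purpose of a posteriori evaluation.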
We thus meet a fundamental difference between two approaches:
- (A) Analytical mathematics: proving properties of solutions of a PDE, for many possible data, a priori before/without computation.
- (C) Computational mathematics: producing, for given data, an approximate solution for inspection.
With suitable regularisation/discretisation, (C) will always deliver, while (A) delivers only in simple cases. In the case of the Navier-Stokes equations, (A) has not delivered anything, while (C) has delivered turbulent solutions for inspection.
The fundamental equation of Standard Quantum Mechanics (StdQM) is Schrödinger's Equation (SE), a linear partial differential equation in $3N$ spatial dimensions for an atomic system with $N$ electrons. Because of the linearity, existence of a solution can be proved as in (A), but the high dimensionality defies closer analysis of solutions. Neither can (C) deliver, because the computational cost is exponential in $N$. The result is that both (A) and (C) meet serious difficulties in StdQM.
In RealQM the situation is different as concerns (C), because the computational complexity is linear or quadratic in $N$, and the computation does not break down thanks to the presence of the Laplacian in SE acting as regularisation. (C) can thus in principle reveal everything in RealQM. For (A) the task is more challenging, since RealQM is a non-linear model and only an a priori bound on total energy is directly available.
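To make the complexity contrast concrete, here is a back-of-the-envelope count of my own (assuming $M$ grid points per spatial coordinate, and reading RealQM as describing the state by $N$ one-electron densities in three dimensions with pairwise Coulomb interactions):

```python
# Rough degree-of-freedom counts; M grid points per coordinate is an
# illustrative assumption, not a statement about any particular solver.
M = 10                                   # very coarse grid per coordinate

for N in (1, 2, 5, 10):                  # number of electrons
    stdqm = float(M) ** (3 * N)          # one wavefunction on R^(3N)
    realqm = N * M**3                    # N one-electron densities in 3d
    pairs = N * (N - 1) // 2             # pairwise Coulomb interactions
    print(f"N = {N:2d}: StdQM ~ {stdqm:.1e} values, "
          f"RealQM ~ {realqm} values plus {pairs} pair interactions")
```

Already for $N = 10$ the StdQM count is of the order $10^{30}$, while the RealQM count stays in the thousands, which is the sense in which (C) is blocked for StdQM but not for RealQM.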
To sum up: (C) delivers for Navier-Stokes and RealQM, while (A) meets very big difficulties.
Successful computation of an approximate solution can be seen as a mathematical proof of existence of that particular approximate solution, a computational proof. A priori analysis can be important for designing the computational process, but is not needed for existence of the computed solution or for its a posteriori evaluation.
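In the simplest setting such a computational proof is just a residual check. A minimal sketch of my own (the equation $x = \cos x$ is chosen only for illustration): once an approximation has been computed, the claim that it satisfies the equation to a stated tolerance is verified directly, with no a priori analysis of existence.

```python
import math

# Fixed-point iteration for x = cos(x); the a posteriori residual check
# is what certifies the computed approximation, not any a priori theory.
x = 1.0
for _ in range(100):
    x = math.cos(x)

residual = abs(x - math.cos(x))
assert residual < 1e-12, "not accepted: recompute with more iterations"
print(f"x = {x:.12f} satisfies x = cos(x) with residual {residual:.1e}")
```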
With increasing computer power (C) gains more momentum and combines with AI. (A) has to struggle with limited human brain power, which does not really grow, and it is not clear what help AI can give.
In particular, (C) can deliver massive training data for AI in a case-by-case manner to learn about the world, including turbulent flow and molecules. (A) offers training in analytical proofs, but less about the world. What can AI learn from the 100-page proof of Stability of Matter by Dyson-Lenard discussed in recent posts?