Tuesday 21 June 2016

New Quantum Mechanics 3: Why?

Modern physics is based on (i) relativity theory and (ii) quantum mechanics, both viewed as correct beyond any conceivable doubt, yet nevertheless (unfortunately) incompatible. The result is a modern physics resting on the shaky ground of contradictory theories from which anything can emerge, and so has emerged, in the form of string theory and multiverses beyond any thinkable experimental verification.

The basic model of quantum mechanics is Schrödinger's equation, a linear equation in a wave function depending on $3N$ spatial dimensions for an atom with $N$ electrons. Schrödinger's equation is an ad hoc model, arrived at by a purely formal extension of classical mechanics, without direct physical meaning and rationale. Schrödinger's equation is thus viewed as given by God, with the job of physical interpretation left to humanity in endless quarrels. In this sense quantum mechanics is more religion than science, and the present state of physics is maybe a fully logical result.

Incontestable experimental support for Schrödinger's equation is available only in the case of Hydrogen with $N=1$, since for larger $N$ the multi-dimensionality prevents both analytical and computational solution. The message of books on quantum mechanics, that solutions of Schrödinger's equation always (have to) agree with observations, rather reflects a belief that a God-given equation cannot be wrong than actual human experience.

But if we as scientists do not welcome the idea of an equation given by God beyond human comprehension, then we may find motivation to search for an alternative atomic model which is computable and thus possible to compare with physical experiment. This is my motivation anyway.

And God said:

And then there were Atoms!

New Quantum Mechanics 2: Computational Results

I have now tested the atomic model of the previous post for an atom with $N$ electrons, formulated as a classical free boundary problem in $N$ single-electron charge densities with non-overlapping supports filling 3d space, with the joint charge density, as the sum of the electron densities, continuously differentiable across inter-electron boundaries.

I have computed in spherical symmetry on an increasing sequence of radii dividing 3d space into a sequence of shells, each filled by a collection of electrons smeared into a spherically symmetric shell charge distribution. The electron-electron repulsion energy is computed with a reduction factor of $\frac{n-1}{n}$ for the electrons in a shell with $n$ electrons, to account for the absence of self-repulsion.
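As a back-of-the-envelope check (my own Python sketch, not part of the actual computation), the factor $\frac{n-1}{n}$ is simply the ratio of physical pair interactions to the naive count hidden in the smeared shell density: a shell density of total charge $n$ interacting with itself counts $n^2$ ordered pair terms, of which $n$ are spurious self-terms.

```python
# Hypothetical illustration of the reduction factor (n-1)/n: for a shell of
# n unit-charge electrons smeared into one shell density of total charge n,
# the naive self-interaction counts n^2 ordered pair terms, while physically
# only n*(n-1) ordered pairs repel (an electron does not repel itself).
def reduction_factor(n):
    return (n - 1) / n

for n in [2, 8, 18]:
    naive_pairs = n * n           # includes n spurious self-terms
    physical_pairs = n * (n - 1)  # distinct ordered pairs only
    assert physical_pairs / naive_pairs == reduction_factor(n)
    print(n, reduction_factor(n))
```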

Below is a typical result for Xenon with 54 electrons organised in shells with 2, 8, 18, 18 and 8 electrons, with ground state energy -7413 to be compared with the measured -7232, and with the energy distribution in the 5 shells displayed in the order of total energy, kinetic energy, kernel potential energy and inter-electron energy. Here the blue curve represents electron charge density, green the kernel potential and red the inter-electron potential. The inter-shell boundaries are adaptively computed so as to represent a preset 2-8-18-18-8 configuration in iterative relaxation towards a ground state of minimal energy.

In general computed ground state energies agree with measured energies within a few percent for all atoms up to Radon with 86 electrons.

The computations indicate that it may well be possible to build an atomic model based on non-overlapping electronic charge densities as a classical continuum mechanical model, with electrons keeping their individuality by occupying different regions of space, which agrees reasonably well with observations. The model is an $N$-species free boundary problem in three space dimensions and as such is readily computable for any $N$, for ground states, excited states and dynamic transitions between states.

We recall that the standard model, in the form of Schrödinger's equation for a wave function depending on $3N$ space dimensions, is computationally demanding already for $N=2$ and completely beyond reach for larger $N$. As a result the full $3N$-dimensional Schrödinger equation is always replaced by some radically reduced model, such as Hartree-Fock with optimization over a "clever choice" of a few "atomic orbitals", or Thomas-Fermi and Density Functional Theory with different forms of electron densities.

The present model is an electron density model, which as a free boundary problem with electron individuality is different from Thomas-Fermi and DFT.

We further recall that the standard Schrödinger equation is an ad hoc model with only formal justification as a physical model, in particular concerning the kinetic energy and the time dependence, and as such should perhaps not be taken as a given ready-made model, perfect and therefore canonical (as is the standard view).

Since this standard model is uncomputable, it is impossible to show that its results agree with observations, and thus claims of perfection made in books on quantum mechanics rather represent a preconceived idea of unquestionable ultimate perfection than true experience.

onsdag 1 juni 2016

New Quantum Mechanics as Classical Free Boundary Problem

Let me (as a continuation of the sequence of posts on Finite Element Quantum Mechanics 1-5) present an alternative formulation of the eigenvalue problem for Schrödinger's equation for an atom with $N$ electrons starting from an Ansatz for the wave function
  • $\psi (x) = \sum_{j=1}^N\psi_j(x)$      (1)
as a sum of $N$ electronic real-valued wave functions $\psi_j(x)$, depending on a common 3d space coordinate $x\in R^3$ with non-overlapping spatial supports $\Omega_1$,...,$\Omega_N$, filling 3d space, satisfying
  • $H\psi = E\psi $ in $R^3$,       (2)
where $E$ is an eigenvalue of the (normalised) Hamiltonian $H$ given by
  • $H(x) = -\frac{1}{2}\Delta - \frac{N}{\vert x\vert}+\sum_{k\neq j}V_k(x)$ for $x\in\Omega_j$,
where $V_k(x)$ is the potential corresponding to electron $k$ defined by 
  • $V_k(x)=\int\frac{\psi_k^2(y)}{2\vert x-y\vert}dy$, for $x\in R^3$,
and the wave functions are normalised to correspond to unit charge of each electron:
  • $\int_{\Omega_j}\psi_j^2(x) dx=1$ for $j=1,..,N$.
One can view (2), starting from the Ansatz (1) for the total wave function as a sum of electronic wave functions, as a classical free boundary problem in $R^3$: the electron configuration is represented by a partition of $R^3$ into non-overlapping domains forming the supports of the electronic wave functions $\psi_j$, with the total wave function $\psi$ continuously differentiable.

Defining $\rho_j = \psi_j^2$, so that $\rho =\sum_j\rho_j =\psi^2$ by the non-overlapping supports, we have
  • $\psi\Delta\psi = \frac{1}{2}\Delta\rho-\frac{1}{4\rho}\vert\nabla\rho\vert^2$, 
and thus (2) upon multiplication by $\psi$ takes the form
  • $-\frac{1}{4}\Delta\rho+\frac{1}{8\rho}\vert\nabla\rho\vert^2-\frac{N\rho}{\vert x\vert}+V\rho = E\rho$ in $R^3$,                   (3)
  • $\rho_j\ge 0$, $support(\rho_j)=\Omega_j$ and $\rho_j=0$ else, 
  • $\int_{\Omega_j}\rho_jdx =1$,
  • $\rho =\sum_j\rho_j$,
  • $V\rho=\sum_{k\neq j}V_k\rho_j$ in $\Omega_j$,
  • $\Delta V_j=2\pi\rho_j$  in $R^3$.
The model (3) (or equivalently (2)) is computable as a system in 3d and will be tested against observations. In particular the ground state of smallest eigenvalue/energy $E$ is computable by parabolic relaxation of (3) in $\rho$. Continuity of $\psi$ then corresponds to continuity of $\rho$.
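To show what parabolic relaxation means in the simplest possible setting, here is a minimal Python sketch of my own (a hypothetical illustration, not the actual code behind these posts), restricted to Hydrogen with $N=1$, where the free boundary is absent: imaginary-time relaxation of the radial eigenvalue problem towards the ground state, whose energy in these units is $-1/2$.

```python
import numpy as np

# Parabolic (imaginary-time) relaxation towards the hydrogen ground state,
# a 1d radial analogue of relaxing (3) towards minimal energy.
# With u(r) = r*psi(r), the radial problem reads -1/2 u'' - u/r = E u,
# with exact ground state energy E = -0.5 in these normalized units.
R, n = 20.0, 400
h = R / n
r = h * np.arange(1, n + 1)          # radial grid, avoiding r = 0
u = r * np.exp(-0.5 * r)             # generic initial guess
dt = 1e-3                            # stable for explicit stepping (dt < h^2)

def H(u):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[:-2] - 2*u[1:-1] + u[2:]) / h**2
    lap[0] = (-2*u[0] + u[1]) / h**2    # boundary condition u(0) = 0
    lap[-1] = (u[-2] - 2*u[-1]) / h**2  # boundary condition u(R) = 0
    return -0.5 * lap - u / r

for _ in range(20000):
    u = u - dt * H(u)                # one step of parabolic relaxation
    u /= np.sqrt(h * np.sum(u**2))   # renormalize to unit charge

E = h * np.sum(u * H(u))             # Rayleigh quotient (u is normalized)
print(E)                             # close to the exact value -0.5
```

The same projected gradient flow, applied to the densities $\rho_j$ of (3) with the inter-electron boundaries adjusted during relaxation, is the kind of iteration the post refers to.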

We can view the formulation (3) in the same way as that explored for gravitation, with the potential $V_j$ primordial and the electronic density $\rho_j$ defined by $\rho_j =\frac{1}{2\pi}\Delta V_j$ as a derived quantity, with in particular total electron-electron repulsion energy given by the neat formula
  • $\frac{1}{2\pi}\sum_{k\neq j}\int V_k\Delta V_jdx=-\frac{1}{2\pi}\sum_{k\neq j}\int\nabla V_k\cdot\nabla V_jdx$
in terms of potentials with an analogous expression for the kernel-electron attraction energy. 

Many Big Bangs: Universe Bigger Than You Think

Astronomer Royal Lord Rees has made a statement:
  • There may have been more than one Big Bang, the Astronomer Royal has said and claims the world could be on the brink of a revolution as profound as Copernicus discovering the Earth revolved around the Sun.
  • Many people suspect that our Big Bang was not the only one, but there’s a whole ensemble of Big Bangs, a whole archipelago of Big Bangs.
  • The theory is still highly controversial, but Lord Rees said he would ‘bet his dog’ on the theory being true.
This fits with the view I have presented in posts on a new view on gravitation, dark matter and dark energy, with gravitational potential $\phi$ viewed as primordial from which matter density $\rho$, which may be both positive and negative, is generated by 
  • $\rho = \Delta\phi$, 
through local action in space of the Laplacian $\Delta$. In this model a Big Bang corresponds to a small local fluctuation of $\phi$ around zero, which generates a much bigger fluctuation of matter density by the action of the Laplacian.

In this model substantial matter may be generated locally from small fluctuations of gravitational potential opening the possibility of an endless number of Big Bangs seemingly created out of nothing. 
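The amplification is easy to see in one dimension (a Python sketch of my own, purely illustrative): a bump in $\phi$ of amplitude $\epsilon$ and width $\delta$ has second derivative, and hence density $\rho$, of size $\epsilon/\delta^2$, which is huge when the bump is narrow.

```python
import numpy as np

# Illustration (not the author's code): a small, narrow fluctuation of the
# potential phi produces a much larger density rho = Delta(phi), amplified
# by a factor ~1/delta^2 for a bump of width delta.
x = np.linspace(-1.0, 1.0, 2001)
h = x[1] - x[0]
eps, delta = 1e-3, 1e-2                       # tiny amplitude, narrow width
phi = eps * np.exp(-(x / delta)**2)
rho = np.gradient(np.gradient(phi, h), h)     # 1d Laplacian of phi
print(phi.max(), np.abs(rho).max())           # rho is far larger than phi
```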

You can test the model in the app Dark Energy on App Store.

Tuesday 31 May 2016

Conversation with Kodcentrum about Matematik-IT

Today I had a constructive and very pleasant conversation with Jessica Berglund and Lisa Söderlund at Kodcentrum about possible cooperation on spreading the gospel of programming to Swedish pupils and Swedish schools.

Kodcentrum has so far focused on Scratch as an introduction to programming and seems to need to be able to deliver continued education, and perhaps Matematik-IT can be an alternative there. We will see whether Kodcentrum wants to take this opportunity during the coming school year. The chance is there...

As for programming platform, I have thus used Codea (programming on iPad for iPad), but there are many alternatives, e.g. Corona for PC/Mac, which uses the same language as Codea (Lua). Codea costs a few kronor, while Corona is free.

Then there are of course many other possibilities, such as Xcode, Swift, Python, JavaScript, Perl... In the end one must choose some specific language/platform if one wants to say/do something concrete with some meaning... just as with love, which is eternal while only its objects change...

PS There is an attitude, which seems to have many advocates, of meeting the Government's directive to Skolverket (the National Agency for Education) to introduce programming in school not by simply following the directive and doing so, but by replacing concrete programming with considerably less concrete slogans such as "digital competence" and "computational thinking". The idea is thus to walk around the hot porridge without partaking of it, and instead consume possibly watered-down derivatives of the good and strengthening porridge.

Not to learn to program (which one supposedly does not need, since there are so many programmers), but instead to learn that there is something called programming. Not to actually learn the multiplication table and how to use it, but instead to learn that it suffices to know that it exists and that some people know it (it is not really needed anyway, since there are so many pocket calculators).

The idea should instead be that if one eats the programming porridge and digests it, one becomes better at developing both "digital competence" and "computational thinking", if that is the main point, than if one just walks around the porridge.

It is better to know the multiplication table and be able to use it than not to know it and not know how to use it, even if there are pocket calculators. Why? Because the human being is a thinking creature, and thinking builds on understanding.

Monday 30 May 2016

New Theory of Flight: Time Line

Potential flow around circular cylinder with zero drag and lift (left). 
Real flow with non-stationary turbulent 3d rotational slip separation and non-zero drag (right). 

The New Theory of Flight, published in the Journal of Mathematical Fluid Mechanics, can be put into the following time line:

1750 formulation by Euler of the Euler equations describing incompressible flow with vanishing viscosity expressing Newton's 2nd law and incompressibility in Euler coordinates of a fixed Euclidean coordinate system.

1752 d'Alembert's Paradox as zero drag and lift of potential flow around a wing defined as stationary flow which is
  1. incompressible
  2. irrotational
  3. of vanishing viscosity
  4. satisfies slip boundary condition 
as exact solution of the Euler equations.

1904 resolution of d'Alembert's paradox of zero drag by Prandtl stating that potential flow is unphysical, because 4. violates a requirement that real flow must satisfy
  • no slip boundary condition.
1904 resolution of d'Alembert's paradox of zero lift by Kutta-Zhukovsky stating that potential flow is unphysical, because 2. ignores that a sharp trailing edge in real flow creates
  • rotational flow.
2008 resolution of d'Alembert's paradox of zero drag and lift by Hoffman-Johnson stating that potential flow is unphysical, because 
  • potential flow is unstable at separation and develops into non-stationary turbulent 3d rotational slip separation as a viscosity solution of the Euler equations with substantial drag and lift.
Recall that d'Alembert's paradox had to be resolved, in one way or the other, to save theoretical fluid mechanics from complete collapse, when the Wright brothers managed to get their Flyer off ground into sustained flight in 1903 with a 10 hp engine. 

Prandtl, named the Father of Modern Fluid Mechanics, discriminated the potential solution by an ad hoc postulate that 4. was unphysical (without touching 2.) and obtained drag without lift.

Kutta-Zhukovsky, named Fathers of Modern Aero Dynamics, discriminated the potential solution by an ad hoc postulate that 2. was unphysical (without touching 4.) and obtained lift without drag. 

Hoffman-Johnson showed without ad hoc postulate that the potential solution is unstable at separation and develops into non-stationary turbulent 3d rotational slip separation causing drag and lift. 

The length of the time line 1750-1752-1904-2008 is remarkable from a scientific point of view. Little happened between 1752 and 1904, and between 1904 and 2008, and what happened in 1904 was not in touch with reality. For detailed information, see The Secret of Flight.

1946 Nobel Laureate Hinshelwood made the following devastating analysis:
  • D’Alembert’s paradox separated fluid mechanics from its start into theoretical fluid mechanics explaining phenomena which cannot be observed and practical fluid mechanics or hydraulics observing phenomena which cannot be explained.
The only glimpse of light in the darkness was offered by the mathematician Garrett Birkhoff in his 1950 book Hydrodynamics, asking if any potential flow is stable, a glimpse that was directly blown out by a devastating critique of the book from the fluid dynamics community, which made Birkhoff remove his question in the 2nd edition of the book and never return to hydrodynamics.

The 2008 resolution of d'Alembert's Paradox leading into the New Theory of Flight by Hoffman-Johnson has been met with complete silence/full oppression by the fluid mechanics community, still operating under the paradigm of Hinshelwood's analysis.

Sunday 29 May 2016

Restart of Quantum Mechanics: From Observable/Measurable to Computable

                Schrödinger and Heisenberg receiving the Nobel Prize in Physics in 1933/32.

If modern physics was to start today instead of as it did 100 years ago with the development of quantum mechanics as atomistic mechanics by Bohr-Heisenberg and Schrödinger, what would be the difference?

Bohr-Heisenberg were obsessed with the question:
  • What can be observed?
motivated by Bohr's Law:
  • We are allowed to speak only about what can be observed.
Today, with the computer at the service of atomic physics, a better question may be:
  • What can be computed?
possibly based on an idea that
  • It may be meaningful to speak about what can be computed. 
Schrödinger, the inventor of the Schrödinger equation as the basic mathematical model of quantum mechanics, never accepted the Bohr-Heisenberg Copenhagen Interpretation of quantum mechanics, in which the wave function solving the Schrödinger equation is interpreted as a probability of particle configurations, with collapse of the wave function into an actual particle configuration under observation/measurement.

Schrödinger sought an interpretation of the wave function as a physical wave in a classical continuum mechanical sense, but had to give in to Bohr-Heisenberg, because the multi-dimensionality of the Schrödinger equation did not allow a direct physical interpretation, only a probabilistic particle interpretation. The Schrödinger equation thus became, to Schrödinger, a monster out of control, as expressed in the following famous quote:
  • If we have to go on with these damned quantum jumps, then I'm sorry that I ever got involved.
And Schrödinger's equation is a monster also from a computational point of view, because solution work scales exponentially with the number of electrons $N$ and thus is beyond reach already for moderate $N$.

But the Schrödinger equation is an ad hoc model with only a weak, formal and unphysical rationale for its basic ingredients of (i) linearity and (ii) multi-dimensionality.

Copenhagen quantum mechanics is thus based on a Schrödinger equation which is an ad hoc model, and which cannot be solved with any assessment of accuracy because of its multi-dimensionality, and thus cannot really deliver predictions that can be tested against observations, except in very simple cases.

The Copenhagen dogma is then that predictions of the standard Schrödinger equation are always in perfect agreement with observation, a dogma which cannot be challenged because the predictions cannot be computed ab initio.

In this situation it is natural to ask, in the spirit of Schrödinger, for a new Schrödinger equation which has a direct physical meaning and to which solutions can be computed ab initio, and this is what I have been exploring in many blog posts and in the book (draft) Many-Minds Quantum Mechanics.

The basic idea is to replace the linear multi-d standard Schrödinger equation with a computable non-linear system in 3d as a basis of a new form of physical quantum mechanics. I will return with more evidence of the functionality of this approach, which is very promising...

Note that a wonderful thing with computation is that it can be viewed as a form of non-destructive testing: the evolution of a physical system can be followed in full minute detail without any form of interference from an observer. This makes Bohr's Law a meaningless limitation of scientific thinking and work from a pre-computer era, preventing progress today.

PS It may be wise to be a little skeptical of assessments of agreement between theory and experiment to extremely high precision. It may be that things are arranged or rigged so as to give exact agreement, by adjusting computation/theory or experiment.

Saturday 28 May 2016

Aristotle's Logical Fallacy of Affirming the Consequent in Physics

One can find many examples in physics, both classical and modern, of Aristotle's logical fallacy of Affirming the Consequent (confirming an assumption by observing a consequence of the assumption):
  1. Assume the Earth rests on 4 turtles, which keeps the Earth from "falling down". Observe that the Earth does not "fall down". Conclude that the Earth rests on 4 turtles.
  2. Observe a photoelectric effect in accordance with a simple (in Einstein's terminology "heuristic") argument assuming light can be thought of as a stream of particles named "photons". Conclude that light is a stream of particles named photons.
  3. Assume light is affected by gravitation according to the general theory of relativity as described by Einstein's equations. Observe apparent slight bending of light as it passes near the Sun, in accordance with an extremely simplified use of Einstein's equations. Conclude universal validity of Einstein's equations.
  4. Observe lift of a wing profile in accordance with a prediction from potential flow modified by large scale circulation around the wing. Conclude that there is large scale circulation around the wing. 
  5. Assume that predictions from solving Schrödinger's equation always are in perfect agreement with observation. Observe good agreement in some special cases for which the Schrödinger equation happens to be solvable, like in the case of Hydrogen with one electron. Conclude universal validity of Schrödinger's equation, in particular for atoms with many electrons for which solutions cannot be computed with assessment of accuracy.
  6. Assume there was a Big Bang and observe a distribution of galaxy positions/velocities, which is very very roughly in accordance with the assumption of a Big Bang. Conclude that there was a Big Bang.
  7. Assume that doubled CO2 in the atmosphere from burning of fossil fuel will cause catastrophic global warming of 2.5 - 6 C. Observe global warming of 1 C since 1870. Conclude that doubled CO2 in the atmosphere from burning of fossil fuel will cause catastrophic global warming of 4 - 8 C.
  8. Assume that two massive black holes merged about 1.3 billion years ago and thereby sent a shudder through the universe as ripples in the fabric of space and time called gravitational waves and five months ago washed past Earth and stretched space making the entire Earth expand and contract by 1/100,000 of a nanometer, about the width of an atomic nucleus. Observe a wiggle of an atom in an instrument and conclude that two massive black holes merged about 1.3 billion years ago which sent a shudder through the universe as ripples in the fabric of space and time called gravitational waves...
  9. Observe experimental agreement of the anomalous magnetic dipole moment of the electron within 10 decimals to a prediction by Quantum Electro Dynamics (QED). Conclude that QED is universally valid for any number of electrons as the most accurate theory of physics. Note that the extremely high accuracy for the specific case of the anomalous magnetic dipole moment of the electron, compensates for the impossibility of testing in more general cases,  because the equations of QED are even more impossible to solve with assessment of accuracy than Schrödinger's equation.
The logical fallacy is so widely practiced that for many it may be difficult to see the arguments as fallacies. Test yourself!

PS1. Observe that if a theoretical prediction agrees with observation to a very high precision, as in the case of the Equivalence Principle stating equality of inertial and gravitational (heavy) mass, then it is possible that what you are testing experimentally is in fact the validity of a definition, like testing experimentally whether there are 100 centimeters in a meter (which would be absurd).

PS2 Books on quantum mechanics usually claim that there is no experiment showing any discrepancy whatsoever with solutions of the Schrödinger equation (in the proper setting), which is taken as strong evidence that the Schrödinger equation gives an exact description of all of atomic physics. The credibility of this argument is weakened by the fact that solutions can be computed only in very simple cases.

Friday 27 May 2016

Emergence by Smart Integration of Physical Law as Differential Equation

Perfect Harmony of European Parliament: Level curves of political potential generated by an empty spot in the middle.

This is a continuation of previous posts on a new view of Newton's law of gravitation. We here connect to the Fundamental Theorem of Calculus of the previous post, which allows one to bypass computing an integral by tedious, laborious summation, using instead a primitive function of the integrand:
  • $\int_0^t f(s)ds = F(t) - F(0)$  if  $\frac{dF}{dt} = f$.
This magical trick of Calculus, computing an integral as a sum without doing the summation, is commonly viewed as having triggered the scientific revolution that shaped the modern world.

The magic is here computing an integral $\int_0^t f(s)ds$ in a smart way, rather than computing a derivative $\frac{dF}{dt}$ in a standard way.

The need of computing integrals comes from the fact that physical laws are usually expressed in terms of derivatives, for example as an initial value problem: Given a function $f(t)$, determine a function $F(t)$ such that
  • $DF(t) = f(t)$ for $t\ge 0$ and $F(0) = 0$,
where $DF =\frac{dF}{dt}$ is the derivative of $F$. In other words, given a function $f(t)$, determine a primitive function $F(t)$ to $f(t)$ with $F(0)=0$, that is, determine/compute the integral
by the formula
  • $\int_0^t f(s)ds = F(t)$ for $t\ge 0$. 
Using the Fundamental Theorem to compute the integral would then correspond to solving the initial value problem by simply picking a primitive function $F(t)$ satisfying $DF = f$ and $F(0)=0$ from a catalog of primitive functions, allowing one to jump in one leap from $t=0$ to any later time $t$. Not very magical perhaps, but certainly smart!
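The two routes are easy to contrast in Python (my own toy example): integrate $f(t)=3t^2$ over $[0,1]$ by tedious Riemann summation, and in one leap via the primitive function $F(t)=t^3$.

```python
# The tedious way vs the smart way to compute an integral: Riemann summation
# of f(t) = 3t^2 over [0, 1], against the one-leap formula F(1) - F(0) with
# primitive function F(t) = t^3 (Fundamental Theorem of Calculus).
def f(t):
    return 3 * t**2

def F(t):
    return t**3

n = 100000
h = 1.0 / n
riemann = sum(f(i * h) * h for i in range(n))  # summation of little pieces
leap = F(1.0) - F(0.0)                         # one smart leap
print(riemann, leap)                           # both close to the exact value 1
```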

The basic initial value problem of mechanics is expressed in Newton's 2nd Law $f=ma$ where $f$ is force, $m$ mass and $a(t)=\frac{dv}{dt}=\frac{d^2x}{dt^2}$ is acceleration, $v(t)=\frac{dx}{dt}$ velocity and $x(t)$ position, that is,
  • $f(t) = m \frac{d^2x}{dt^2}$.           (1)
Note that in the formulation of the 2nd Law, it is natural to view position $x(t)$ with acceleration $\frac{d^2x}{dt^2}$ as given, from which force $f(t)$ is derived by (1). Why? Because position $x(t)$ and acceleration $\frac{d^2x}{dt^2}$ can be observed, from which the presence of force $f(t)$ can be inferred or derived or concluded, while direct observation of force may not really be possible. In this setting the 2nd Law acts simply to define force in terms of mass and acceleration, rather than to make a connection with some other definition of force.

Writing Newton's 2nd law in the form $f=ma$, thus defining force in terms of mass and acceleration, is the same as writing Newton's Law of Gravitation:
  • $\rho = \Delta\phi$,                          (2)
thereby defining mass density $\rho (x)$ in terms of gravitational potential $\phi (x)$ by a differential equation. 

With this view, Newton's both laws (1) and (2) would have the same form as differential equation, and the solutions $x(t)$ and $\phi (x)$ would result from solving differential equations by integration or summation as a form of emergence. 

In particular, this reasoning gives support to an idea of viewing the physics of Newton's Law of Gravitation to express that mass density somehow is "produced from" gravitational potential by the differential equation $\rho =\Delta\phi$. 

To solve the differential equation $\Delta\phi =\rho$ by direct integration or summation in the form
  • $\phi (x) = \frac{1}{4\pi}\int\frac{\rho (y)}{\vert x-y\vert}dy$,
would then in physical terms require instant action at distance, which is difficult to explain. 

On the other hand, if there was a "smart" way of doing the integration by using some form of Fundamental Theorem of Calculus as above, for example by having a catalog of potentials from which to choose a potential satisfying $\Delta\phi =\rho$ for any given $\rho$, then maybe the requirement of instant action at distance could be avoided.

A smart way of solving $\Delta\phi =\rho$ would be to use the knowledge of the solution $\phi (x)$ in the case of a unit point mass at $x=0$ as
  • $\phi (x)=\frac{1}{4\pi}\frac{1}{\vert x\vert}$ 
which gives Newton's inverse square law for the force $\nabla\phi$, which is smart in case $\rho$ is a sum of not too many point masses. But the physics would still seem to involve instant action at distance.
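As a direct check (my addition, not in the original post), this point-mass potential is harmonic away from the origin, with inverse-square gradient:
  • $\partial_{x_i}\frac{1}{\vert x\vert}=-\frac{x_i}{\vert x\vert^3}$ and $\partial_{x_i}^2\frac{1}{\vert x\vert}=-\frac{1}{\vert x\vert^3}+\frac{3x_i^2}{\vert x\vert^5}$,
  • so $\Delta\frac{1}{\vert x\vert}=-\frac{3}{\vert x\vert^3}+\frac{3\vert x\vert^2}{\vert x\vert^5}=0$ for $x\neq 0$,
  • while $\vert\nabla\phi\vert =\frac{1}{4\pi\vert x\vert^2}$, the inverse square law.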

In any case, from the analogy with the 2nd Law we have gathered an argument supporting the idea of viewing the physics of gravitation as expressed by the differential equation $\rho =\Delta\phi$, with mass density $\rho$ derived from gravitational potential $\phi$, rather than the opposite standard view with the potential $\phi$ resulting from mass density $\rho$ by integration or summation corresponding to instant action at distance.

The differential equation $\Delta\phi =\rho$ would thus be valid by an interplay "in perfect harmony" in the spirit of Leibniz, where on the one hand "gravitational potential tells matter where to be and how to move" and on the other "matter tells gravitational potential what to be".

This would be like a Perfect Parliamentary System, where the "Parliament tells the People where to be and what to do" and the "People tell the Parliament what to be".

PS There is a fundamental difference between (1) and (2): (1) is an initial value problem in time, while (2) is formally a static problem in space. It is natural to solve an initial value problem by time stepping, which represents integration by summation. A static problem like (2) can be solved iteratively by some form of (pseudo) time stepping towards a stationary solution, which in physical terms could correspond to successive propagation of effects with finite speed of propagation.
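Pseudo time stepping is easy to demonstrate on a hypothetical 1d model problem (a Python sketch of my own): relax $\dot\phi =\Delta\phi -\rho$ until the right-hand side vanishes, so that the stationary limit satisfies $\Delta\phi =\rho$, with effects propagating step by step rather than instantly.

```python
import numpy as np

# Sketch: solving the static problem phi'' = rho on (0,1), phi(0) = phi(1) = 0,
# by pseudo time stepping, i.e. marching d(phi)/dt = phi'' - rho towards a
# stationary state. With rho = -pi^2 sin(pi x) the exact limit is sin(pi x).
n = 64
x = np.linspace(0.0, 1.0, n + 1)
h = 1.0 / n
rho = -np.pi**2 * np.sin(np.pi * x)
phi = np.zeros(n + 1)                 # initial guess with correct boundary values
dt = 0.4 * h**2                       # within the explicit stability limit ~h^2/2

for _ in range(20000):
    lap = np.zeros(n + 1)
    lap[1:-1] = (phi[:-2] - 2*phi[1:-1] + phi[2:]) / h**2
    phi[1:-1] += dt * (lap[1:-1] - rho[1:-1])   # local updates, finite speed

err = np.max(np.abs(phi - np.sin(np.pi * x)))
print(err)                            # small: the stationary limit is reached
```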

Thursday 26 May 2016

Fatal Attraction of Fundamental Theorem of Calculus?

Calculus books proudly present the Fundamental Theorem of Calculus as the trick of computing an integral
  • I=$\int_a^b f(x)dx$,
not by tedious summation of little pieces as a Riemann sum
  • $\sum_i f(x_i)h_i$
on a partition $\{x_i\}$ of the interval $(a,b)$ with step size $h_i = x_{i+1} - x_i$, but by the formula
  • $I = F(b) - F(a)$, 
where $F(x)$ is a primitive function to $f(x)$ satisfying $\frac{dF}{dx} = f$.

The trick is thus to compute an integral, which by construction is a sum of very many terms, not by doing the summation following the construction, but instead taking just one big leap using a primitive function.

On the other hand, to compute a derivative no trick is needed according to the book; you just compute the derivative using simple rules and a catalog of already computed derivatives.

In a world of analytical mathematics, computing integrals is thus valued higher than computing derivatives, and this is therefore what fills Calculus books.

In a world of computational mathematics, the roles are switched. To compute an integral as a sum can be viewed as computationally trivial, while computing a derivative $\frac{dF}{dx}$ is a bit more tricky, because it involves dividing small increments $dF$ by small increments $dx$.
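The switch of roles shows up clearly with noisy data (my own Python illustration, not from the post): summation smooths noise away, while a difference quotient divides small increments and amplifies it by a factor $\sim 1/dx$.

```python
import numpy as np

# Summation vs differencing of noisy samples of f = cos: the cumulative sum
# (integral) stays accurate, while the difference quotient (derivative)
# amplifies the noise by a factor ~1/h.
rng = np.random.default_rng(0)
n = 1000
x = np.linspace(0.0, 2 * np.pi, n + 1)
h = x[1] - x[0]
f = np.cos(x) + 1e-3 * rng.standard_normal(n + 1)   # cos with small noise

F = h * np.cumsum(f)                  # integration by summation: ~ sin
dF = (f[1:] - f[:-1]) / h             # difference quotient: ~ -sin

int_err = np.max(np.abs(F[:-1] - np.sin(x[1:])))
diff_err = np.max(np.abs(dF + np.sin(x[:-1])))
print(int_err, diff_err)              # the differencing error is far larger
```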

This connects to Poisson's equation $\Delta\phi =\rho$ of Newton's theory of gravitation discussed in recent posts. What is here to be viewed as given and what is derived? The standard view is that the mass density $\rho$ is given and the gravitational potential $\phi$ is derived from $\rho$ as an integral
  • $\phi (x) = \frac{1}{4\pi}\int\frac{\rho (y)}{\vert x-y\vert}dy$,
seemingly by instant action at distance. 

In alternative Newtonian gravitation, as discussed in recent posts, we view instead $\phi$ as primordial and $\rho =\Delta\phi$ as being derived by differentiation, with the advantage of requiring only local action.

We thus have two opposing views:
  • putting together = integration requiring (instant) action at distance with dull tool.
  • splitting apart = differentiation involving local action with sharp tool. 
It is not clear which to prefer.