Saturday 6 September 2025

Atmosphere as Air Conditioner Keeping Earth Temperature Constant

My journey into climate science started in 2010 with this analysis of black body radiation leading to an analysis of the atmosphere of the Earth as a form of air conditioner keeping the Earth surface mean temperature stable under varying mean heating from the Sun. My work was published as two chapters of the (ground-breaking) book Slaying the Sky Dragon - Death of the Greenhouse Gas Theory:

The basic idea is that incoming energy to the Earth surface at 288 K of about 160 W/m2 from the Sun is transported to the mid troposphere at an altitude of 5 km at 255 K by a combination of H2O thermodynamics with phase change (evaporation/condensation) and a minor contribution of radiation, from where it is radiated to outer space at 0 K. The variation of incoming energy to the surface can depend on varying cloud cover. This is the scenario in tropical zones receiving most of the energy, with sunny mornings followed by thunderstorms in the afternoon.

An increase of incoming energy to the surface is counterbalanced by more intense H2O thermodynamics keeping temperatures constant. Radiation then takes a passive role as constant under constant temperature. 

This is like an air conditioner keeping a stable room temperature of 15 C with constant outside temperature 0 C under variable interior heating of the room, e.g. depending on the number of people in the room. 

It also connects to boiling of water on a stove keeping a stable boiling temperature of 100 C under varying energy input from the stove, with more vigorous boiling with phase change responding to increasing input.  

The Sky Dragon analysis above from 2010 was written after a very quick introduction to the so-called Greenhouse Effect, but I think it captures aspects that remain valid today. 

Tropical climate: Rising hot humid air in the morning releases heat to the atmosphere by condensation, effectively transporting heat energy from surface to atmosphere as a cooling air conditioner.




Boiling water kept at 100 C under heating from stove by evaporation.

The simplest model consists of heat conduction through a wall of thickness 1 with heat conductivity $\kappa$ and temperature $T(x)$ varying linearly from $T_0=1$ at $x=0$ to $T_1=0$ at $x=1$ with heat flux $Q=-\kappa \frac{dT}{dx}=\kappa$. An increase of $Q$ is balanced by an increase of $\kappa$ without changing $T(x)$, the increase of $\kappa$ representing more vigorous thermodynamics or boiling. 
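
As a minimal numerical sketch (not from the Sky Dragon chapters; all numbers are illustrative), the following snippet shows that doubling the flux $Q$ together with the conductivity $\kappa$ leaves the linear profile $T(x)$ unchanged:

```python
import numpy as np

def steady_temperature(kappa, Q, n=11):
    """Steady heat conduction through a wall of unit thickness with T(0)=1, T(1)=0.
    For the linear profile the flux is Q = -kappa*dT/dx = kappa."""
    x = np.linspace(0.0, 1.0, n)
    return 1.0 - (Q / kappa) * x  # linear steady state T(x) = 1 - (Q/kappa)*x

T_base = steady_temperature(kappa=1.0, Q=1.0)    # baseline wall
T_double = steady_temperature(kappa=2.0, Q=2.0)  # doubled flux balanced by doubled conductivity
print(np.allclose(T_base, T_double))  # True: same temperature profile despite doubled Q
```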

Friday 5 September 2025

Understanding OLR and DLR vs Radiance Measurement by CERES and Pyrgeometer.

Outgoing Longwave Radiation OLR from the Top of the Atmosphere ToA is measured by a CERES satellite instrument looking down on ToA, equipped with a sensor forming one end of a thermocouple with its other end kept at a steady temperature, generating a voltage scaling with the temperature difference between its ends. 

The CERES instrument is calibrated by determining a gain factor from sensor temperature to radiance, letting the instrument look at a black body of known temperature $T_B$ with assumed Planck radiation $\sigma T_B^4$ while recording the sensor temperature. With the gain factor so determined, the instrument reports radiance from ToA from a reading of sensor temperature. This is the simplest form of calibration, assuming linearity: a very primitive technique, where the details of the instrument do not matter. It is like measuring the intensity of rainfall using your hands to collect water, calibrated against a faucet. The accuracy is at best 1 W/m2 or 0.4%, of the same size as the estimated Earth Energy Imbalance attributed to CO2.  
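
A minimal sketch of such a single-point linear calibration, with hypothetical numbers (the actual CERES procedure is of course more elaborate): a gain factor is fixed from one black-body view and then applied to every scene reading.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def calibrate_gain(sensor_reading_bb, T_bb):
    """Gain factor from a single black-body view, assuming the instrument
    should report the Planck value sigma*T_bb^4 for that view."""
    return SIGMA * T_bb**4 / sensor_reading_bb

def reported_radiance(sensor_reading, gain):
    """Linear conversion from sensor reading to reported radiance."""
    return gain * sensor_reading

gain = calibrate_gain(sensor_reading_bb=0.85, T_bb=300.0)  # hypothetical calibration view
print(round(reported_radiance(0.60, gain), 1))             # radiance reported for a scene reading of 0.60
```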

A pyrgeometer measuring Downwelling Longwave Radiation from the atmosphere to the Earth surface also uses a sensor as one end of a thermocouple with the other end kept at a base temperature, and also measures a voltage scaling with the temperature difference. The calibration is here different because the outgoing radiation from the sensor can no longer be included in the calibration process, but has to be supplied through a Planck formula $\epsilon\sigma T^4$ with $T$ the sensor temperature and $\epsilon$ the sensor emissivity. The accuracy is at best 5 W/m2, again too big to detect global warming if present.

OLR and DLR are thus measured in a similar way, but with different forms of calibration, the difference being that OLR faces empty space at 0 K, while DLR faces the Earth surface. The accuracy is not sufficient to determine the size of any global warming, although it is claimed that trends can be detected. 

In both cases Planck's Law in the form $\sigma T^4$ is used, which in the case of DLR is incorrect because the correct form is $\sigma (T^4-T_E^4)$ with $T_E$ the Earth surface temperature, which gives a negative net DLR.  
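
A small numerical comparison of the two forms, assuming representative temperatures of 255 K for the emitting atmosphere and 288 K for the Earth surface:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

T_atm, T_earth = 255.0, 288.0
dlr_one_way = SIGMA * T_atm**4             # sigma*T^4 form: about 240 W/m2
dlr_net = SIGMA * (T_atm**4 - T_earth**4)  # sigma*(T^4 - T_E^4) form: about -150 W/m2
print(round(dlr_one_way, 1), round(dlr_net, 1))
```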

Summary: Measurements of OLR and DLR are made to detect global warming. The accuracy of the instruments is not good enough to detect any warming if present. DLR measurements can be questioned since an incorrect Planck Law is used. OLR and DLR are, as radiance, process variables that fluctuate and as such are difficult to measure.  

EEI is a cornerstone of global warming alarmism, and so measurement of EEI has become a prime task for instrument technology, which does not seem to have delivered. The effect of EEI on surface temperature is unknown and impossible to measure, and DLR is a misconception based on an incorrect form of Planck's Law.

ChatGPT on objective of CERES: 

CERES connects to global warming because it:

  • Measures the planetary energy balance directly at TOA.

  • Detects changes in OLR and OSR (reflected shortwave) caused by greenhouse gases, aerosols, clouds, and ice.

  • Provides the evidence that Earth is currently taking in more energy than it loses — the physical basis of global warming.

ChatGPT on objective of measuring DLR:

  • Provide a direct measure of the atmosphere’s infrared emission to the surface, essential for closing the surface energy budget, quantifying the greenhouse effect, tracking climate change, and validating models.

We read that the objective of CERES is to support global warming alarmism by measuring and reporting EEI attributed to CO2. But the objective is not reached, because (i) the accuracy of the measurement is not better than 1 W/m2, which is the expected size of EEI, and (ii) attribution to CO2 is not credible because it is swamped by changes of cloud cover. We read that the objective of measuring DLR by a pyrgeometer is to quantify the greenhouse effect. Both cases amount to "chasing after wind" using "ghost detectors". 


Thursday 4 September 2025

Abstract vs Concrete vs Computational Physics

The science of physics has over time changed nature from concrete/real to abstract/non-real, with the pillars of modern physics, Quantum Mechanics QM and General Relativity GR, reaching breathtaking levels of abstraction during the first half of the 20th century, culminating today in string theory in 11 space dimensions beyond any reality.  

Today, with powerful computers available at no cost, there is a reverse trend in the form of computation, opening new capabilities for using theories of physics for practical purposes. Computation is a concrete process, and computational physics starts with a concrete mathematical model and not with an abstraction.

Let us compare Newtonian mechanics in concrete and abstract formulation. 

The concrete form consists of Newton's Law $F=ma$ connecting force $F$ to mass $m$ and acceleration $a=\frac{dv}{dt}$ with $v$ velocity and $t$ time. The evolution over time of any mechanical system (without viscous forces) can be computationally simulated by time-stepping Newton's Law. Concrete and general.
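
As a minimal illustration of such time-stepping (a sketch with an assumed force law, not taken from the post), here is symplectic Euler applied to $F=-kx$, a harmonic oscillator:

```python
def time_step_newton(x, v, dt=0.001, m=1.0, k=1.0, steps=1000):
    """Time-step Newton's law m*dv/dt = F with F = -k*x (harmonic oscillator),
    using the symplectic Euler scheme: update velocity, then position."""
    for _ in range(steps):
        a = -k * x / m   # acceleration from Newton's law a = F/m
        v += dt * a      # velocity update
        x += dt * v      # position update with the new velocity
    return x, v

print(time_step_newton(x=1.0, v=0.0))  # after t = 1: close to (cos 1, -sin 1) = (0.540, -0.841)
```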

The abstract form states that a mechanical system evolves from $t=0$ to $t=T$ so that:

  • The action $L(v)=\int_0^T(K-V)\,dt$ is stationary,  

where $K=m\frac{v^2}{2}$ is kinetic energy and $V$ is potential energy. The condition for stationarity in differential form then reads $m\frac{dv}{dt}=F$ with $F=-\nabla V$ the force as the negative gradient of $V$, which is Newton's Law.

The difference between abstract and concrete is the same as characterising a local minimum of a function $f(x)$ over variation of a real variable $x$ for $x=\bar x$ as $f^\prime (\bar x) =0$ with $f^\prime =\frac{df}{dx}$. Minimisation is abstract in the sense that no computational method is implied other than comparing the value $f(x)$ for all $x$, which can take infinite work. On the other hand, there are many methods for computing a root to the equation $f^\prime (x)=0$. 
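
As a concrete example of the second route (a sketch, with $f(x)=(x-2)^2$ chosen only for illustration), Newton's iteration on $f^\prime(x)=0$ reaches the stationary point in finitely many explicit steps:

```python
def stationary_point(fprime, fsecond, x0, tol=1e-12, max_iter=50):
    """Newton iteration for a root of f'(x) = 0, i.e. a stationary point of f."""
    x = x0
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = (x - 2)^2 has f'(x) = 2*(x - 2), f''(x) = 2, and its minimum at x = 2
print(stationary_point(lambda x: 2 * (x - 2), lambda x: 2.0, x0=10.0))  # 2.0
```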

We thus see that the concrete formulation directly opens to computational solution, while the abstract formulation does not. The pendulum may thus swing back from abstract to concrete in a 21st century filled with computation.

But we still live in the era of QM and GR, which are both abstract and uncomputable. QM is based on an abstract multi-dimensional Schrödinger equation without real physical meaning, which is uncomputable because of its many dimensions. GR is based on Einstein's equation with a condensed abstract formulation, which when written out for computation turns out to be uncomputable. 

RealQM is a new form of quantum mechanics based on a concrete computable model. RealQM + Newton offers a unified concrete continuum model covering all scales which is computable. 

Ontology of physics (what is) is concrete, while epistemology of physics (what we can say) can be abstract. Computation can open ontology of physics to inspection and so feed epistemology of physics. Epistemology without ontology is empty.

Wednesday 3 September 2025

Is Measuring Temperature at Distance Possible and Useful?

Climate alarmism of global warming claims to be supported by measurement of the energy balance of Earth+atmosphere by instruments like pyrgeometers, bolometers and radiometers with an accuracy of at best 1-2 W/m2, compared to a total of around 240 W/m2 and a projected total imbalance of 4 W/m2 as "radiative forcing" from doubling of atmospheric CO2 corresponding to a warming of 1 K. 
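
A back-of-envelope check of the quoted numbers, assuming a Planck response $\Delta T\approx\frac{\Delta F}{4\sigma T^3}$ evaluated at an effective emission temperature of 255 K (both the formula and the temperature are illustrative assumptions):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

T_emit = 255.0    # assumed effective emission temperature, K
dF = 4.0          # quoted radiative forcing from doubled CO2, W/m^2
dT = dF / (4 * SIGMA * T_emit**3)
print(round(dT, 2))  # about 1.06 K, consistent with the quoted warming of roughly 1 K
```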

The case for global warming may seem weak from these measurements, but nevertheless they serve to foster alarmism. 

To properly evaluate the measurements it is necessary to understand how these instruments are designed and how they operate. For a pyrgeometer or bolometer using a thermocouple as sensor, there are two fundamentally different views:

  1. A thermocouple essentially measures incoming radiance from a source as a process variable. 
  2. A thermocouple essentially measures a source temperature as a state variable.  
It is natural to make a comparison in terms of a bank account:
  1. Difference between deposits and withdrawals as process variable.
  2. Total savings as state variable.
We understand that total savings may be fairly stable, while deposits minus withdrawals can fluctuate quite a bit. The same for temperature vs radiance imbalance. 

What does then a thermocouple as sensor in fact measure? Radiance or temperature? 

1. There is a widespread view that a thermocouple essentially measures radiance, and so can be used to reliably measure both incoming and outgoing radiance for Earth+atmosphere and thus determine the imbalance, even if the accuracy is not better than 1-2 W/m2, and thereby detect global warming. Radiance is then measured through a calibration process confronting the sensor with sources of known temperature $T$ with radiance according to an assumed Planck-Stefan-Boltzmann Law of the form $\sigma T^4$.  

2. There is also a different view that a thermocouple essentially measures source temperature, by allowing the sensor to take on the source temperature through radiative equilibrium established optically at distance. In practice the radiative equilibrium source-sensor is only partially established because of sensor cooling, but the principle of radiative equilibrium with equal temperature remains. 

Case 2 builds on a clear physical principle of radiative equilibrium in stable measurement of a state variable.

Case 1 is based on instrument calibration vs sources/blackbodies of known temperature $T$ assumed to give radiance input of $\sigma T^4$, while the true input is PSB in the form $\sigma (T^4-T_i^4)$, where $T_i$ is instrument base temperature which is not 0 in general. Case 1 is thus based on a calibration process using an incorrect PSB law inflating input radiance. Moreover the measurement concerns a process variable prone to instability. There are cryogenic sensors with very small $T_i$ and better precision. A proof of the correct PSB Law in classical terms without statistics is presented here and in this talk.

Case 1 is consensus and is used to support alarmism from a measured radiance imbalance of Earth+atmosphere as if this were a fact. But the measurement precision can barely capture any imbalance from doubled CO2. Unfortunately many climate skeptics embrace the idea that a pyrgeometer measures massive incoming radiance (Downwelling/Upwelling/Outgoing Longwave Radiation) and so go along with a basic alarmist argument: the measured energy imbalance is the result of more CO2. 

A careful study shows that a thermocouple in fact measures source temperature as stable output, while derived radiance can be misleading because the calibration uses an incorrect PSB Law and is prone to instability. This means that the measured energy imbalance can be questioned along with alarmism.

But the discussion is pretty much closed on 1 as the truth. Hopefully a new discussion can take place around the question: What does a thermocouple primarily measure and on what physical grounds? How can a thermometer acting at distance be constructed? Is an IR-camera such a thing?


Saturday 30 August 2025

How Does a Thermocouple Work?

This is a follow up on the previous post. 

A thermocouple, consisting of two rods made of different metals joined to form one rod, registers a voltage proportional to the temperature difference between its two ends, as a result of the Seebeck effect creating an electric potential difference from a temperature gradient. Putting one end as sensor (or measurement junction) in optical contact through a camera lens with a distant object will make it attain the same temperature as the object, as an expression of thermal equilibrium, just like an ordinary thermometer in contact with an object. Keeping the other end as reference junction at a known temperature makes it possible, after calibration, to read the temperature of the object.  

The thermocouple can also be used without calibration by varying the temperature of the reference junction until the voltage is zero. 
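
A minimal sketch of the direct reading, assuming a constant Seebeck coefficient $S$ (a hypothetical value is used) so that the voltage is $V=S\,(T_{sensor}-T_{ref})$:

```python
S = 40e-6  # hypothetical constant Seebeck coefficient, volts per kelvin

def sensor_temperature(voltage, T_ref):
    """Invert V = S*(T_sensor - T_ref) to recover the sensor (object) temperature."""
    return T_ref + voltage / S

print(sensor_temperature(voltage=1.32e-3, T_ref=293.0))  # 326.0 K for a 1.32 mV reading
```

The null method above corresponds to adjusting T_ref until the voltage vanishes, so that the object temperature is read off directly as T_ref.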

The essence is that a thermocouple reacts to temperature as prime sensor input. 

In climate science the following different narrative is presented: The input to the sensor is not primarily temperature but radiation as the number of absorbed photons emitted by the object at temperature $T$, incorrectly claimed to scale with $T^4$, see earlier post, and so to allow temperature to be determined from sensing radiation by counting incoming photons. This scheme is claimed to be realised in a pyrgeometer, which when directed to the sky reports on its display Downwelling Longwave Radiation DLR emitted by the sky, contributing to substantial global warming of the Earth surface. The sensor of a pyrgeometer is a thermocouple, which thus is claimed to measure primarily radiation and not temperature.

But above we made clear that a thermocouple does not measure radiation, since it has no mechanism for counting photons as carriers of radiation. What a pyrgeometer does is to measure the temperature of the sky visible by infrared light which can be 255 K at 5 km altitude. This does not feed any climate alarmism. 

Unfortunately, the view that a pyrgeometer measures radiation, propagated to sell climate alarmism, seems to be shared by climate skeptics like Happer and Spencer, counteracting their criticism. A typical honest instrument, like a thermometer, reports what it primarily measures, but a pyrgeometer reports something else, which is used to mislead into climate alarmism.  


Thursday 28 August 2025

Temperature as Absolute Control of Radiative Heat Transfer

This is a follow up of this post recalling the Faustian deal made by Max Planck in 1900 when presenting his law of black body radiation as an icon of modern physics named Planck's Law.  

Continuing the discussion with chatGPT we come to an agreement that temperature difference, positive or negative, is Nature's control mechanism to keep a system stable over time under radiative heat transfer with an environment, like your house and the Earth's climate system. The temperature is measured on an absolute Kelvin scale available to all bodies, allowing detection of a positive or negative temperature difference to steer the exchange of heat energy. Equal temperature then gives zero exchange as radiative equilibrium. 

Unfortunately there is a different view sold by climate scientists as reason for climate alarm, which suggests the opposite control mechanism, namely that the temperature of the Earth is controlled by differences in incoming and outgoing radiation, with in particular small differences in incoming/outgoing radiation generating big differences in temperature, possibly in a run-away greenhouse effect. Very alarming.

But physical bodies do not carry an absolute scale allowing comparison of incoming and outgoing radiation, like the absolute scale for temperature, more precisely they carry no counter for incoming and outgoing photons as proclaimed carriers of heat energy. 

In particular, there is no absolute Planck-Stefan-Boltzmann Law $R=\sigma T^4$ stating outgoing radiation from a body of temperature $T$ independent of environment as version 1. in the earlier post. Not physics!

Nevertheless such a law is used in models (Schwarzschild) of radiative heat transfer in the atmosphere playing with gross quantities of incoming/outgoing radiative heat transfer prone to instability. The accuracy of these models is no better than say $2-3$ W per m2, while alarming global heating is connected to variations of incoming/outgoing radiation of the same size. The result is alarm as a result of unphysical unstable control. This is like making a mountain out of a molehill.

If temperature is allowed to carry out its physical control to stabilise climate, there is no reason for alarm. Phew!

This analysis shows that temperature is measured via Wien's Displacement Law expressed as a high-frequency cut-off scaling with temperature, giving a body of higher temperature access to higher frequencies allowing transfer of heat to a body of lower temperature. The cut-off can be connected to the precision available in the underlying atomic wave physics of different frequencies.
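
A numerical illustration of the cut-off scaling, using Wien's displacement law $\lambda_{max}=b/T$ with $b\approx 2.898\times 10^{-3}$ m·K (standard values, chosen here only for illustration):

```python
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

for T in (288.0, 255.0):
    lam_max = WIEN_B / T  # wavelength of peak emission, m
    print(T, "K ->", round(lam_max * 1e6, 2), "micrometres")
# 288 K peaks near 10.1 um and 255 K near 11.4 um: higher temperature, higher cut-off frequency
```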


Wednesday 27 August 2025

Why Newton was Replaced by Einstein, and Back

Newton's Inverse Square Law NL was until the 1960s the prime example of the power of mathematical thinking visible to everybody: all of celestial mechanics can be described and computed from NL, which is necessarily valid from principles of conservation in any existing Universe. 

What happened in the 1960s was that Einstein's General Theory of Relativity GR, presented already in 1915, finally was adopted to serve as one of the two pillars of modern physics (the other one Quantum Mechanics QM), and so replace Newton as the icon of classical physics by Einstein as the icon of modern physics. 

But this transition took place only after Einstein's death in 1955, because the very complex mathematics of GR, understood by few if any, made it useless in any form of practical physics. 

However in the propaganda of the cold war it served well to strengthen the world dominance of US science, formed by physicists imported from Germany during WWII inventing the atom bomb. Replacing Newton by Einstein served as a demonstration of power, and all the earlier skepticism to GR could be swept under the rug. And of course the Soviet Union followed. Einstein was put in first place, but only after his death, since during his lifetime he appeared as an "absent-minded eccentric maybe too fond of fame". 

Today the cold war is back, Einstein is still on top of the list of fame, while the rapidly developing technology of warfare is using Newton to come to expression. 

There is only a homage to Einstein's GR left as an initial offset of satellite clocks in the GPS system, which in operation is annihilated by continuous synchronisation to a master clock on Earth. 

Maybe there is reason to return to a new critical analysis of Newton vs Einstein without the fame dominating the discussion.  

The discussion could start by comparing Newton's absolute space, against which Einstein's relative space took a stand. 

Is it true that Newton's theory of gravitation needs a notion of absolute space against which absolute velocity can be measured? Does Newton say that velocity is absolute? Certainly not! Nobody would come up with such an idea. Of course velocity of an object is always measured relative to something else.

So Newton says that velocity is relative. On the other hand Newton says that rotation as accelerated motion is detectable by forces of tension arising from acceleration. Newton's rotating bucket can thus be viewed as a form of absolute rotation which does not need any outside reference. 

Note that there is a special form of accelerated motion which is not so easily detected by presence of forces and that is free fall under gravitation where all parts of your body feel the same force and no tension arises. But that is not true for a bigger object where tidal forces occur because the gravitational force is not uniform.

So the argument that Newton says that velocity is absolute and so has to be replaced by Einstein's relativity, is not correct. The argument that Newton's theory of gravitation is a necessity in any existing Universe, is very strong. The question is if there are modern physicists willing to face this reality.


Monday 25 August 2025

Planck 1900: Fall of Theoretical Physics: Greenhouse Effect

Planck describes in his autobiography his theoretical derivation of Planck's Law for black-body radiation, presented in 1900 in complete agreement with observations and based on statistics of quanta, in these very disturbing terms:

  • It was an act of desperation.
  • I was ready to sacrifice any of my previous convictions about physics.
What Planck gave up was the rationality of classical deterministic physics for a new form of indeterministic statistical physics. A monumental sacrifice. Why? A Faustian deal? In any case the consequences have been far reaching, since quanta have become the code word of modern (quantum) physics, even if the meaning has never been made clear. 

Planck's Law integrated over all frequencies takes the form of Planck-Stefan-Boltzmann's Law:
  • $R=\sigma T^4$                (PSB)

where $R$ is radiance as energy flux measured in e.g. W per unit area, $T$ is an absolute temperature and $\sigma$ a constant. (PSB) is a poster formula for modern physics of the same stature as $E=mc^2$, but if you ask a modern physicist about the real physical meaning of (PSB), you will get the following contradictory answers:

  1. $R$ is the outgoing radiance of a black-body of temperature $T$ independent of the environment of the body, as a stream of emitted photons. 
  2. $R$ is the outgoing radiance of a black-body of temperature $T$ into an environment of a zero absolute temperature.  If the environment has temperature $T_0<T$ then $R=\sigma (T^4 - T_0^4)$. Fundamentally different from 1.

A theoretical/pure physicist will say 1. and an applied/real physicist will say 2. But theory cannot contradict reality. The reason for the confusion between 1. and 2. is that Planck's derivation is not based on real physics but on statistics of quanta without clear physical meaning.

We conclude that 1. is a misconception, but a widespread misconception, which has served as the basis of the so-called GreenHouse Effect GHE underlying climate alarmism. What 1. tells the climate alarmist is that the atmosphere is sending massive so-called back-radiation to the surface of the Earth, twice as big as the radiation from the Sun. The sky, cloudy or clear, will thus warm the Earth according to 1., which with a bit more CO2 from fossil fuels will cause global warming with terrible consequences for humanity. From 1. follows alarm, while there is no alarm from 2. 

A climate alarmist will present evidence of global warming by back-radiation from the sky from instruments like pyrgeometers or AERI/radiometers, which report massive back-radiation or Downwelling Longwave Radiation DLR on their displays. The idea is that if an instrument display shows something, it must be something real in particular if the instrument is heavy and expensive. But an applied/real scientist would ask for information about the physics of the instrument, what de facto is being measured and how and what is displayed. Right?  

Inspecting the manual of a pyrgeometer or radiometer shows that the instrument is based on 1. which thus gives the basis for what is shown on the display as massive DLR or back-radiation. With 2. the display would show a much smaller warming effect of the atmosphere. 

Climate alarmism is thus based on 1. Understanding that 1. is not real physics, reduces the main argument of climate alarmism to zero. 

A different proof of Planck's Law in terms of classical deterministic physics without statistics is presented as Computational Black Body Radiation, which shows that 2. is physics and 1. is not. You find an analysis of the pyrgeometer as ghost detector of back radiation in this sequence of posts. Also check out this talk on Misunderstanding Planck's Law.

PS1 In 1900 Planck was professor of theoretical physics at the University of Berlin, with the task of demonstrating the power of the science of the German Empire by solving the main open problem of physics at that time: black body radiation. Planck delivered a solution but sacrificed his soul. The consequences are far reaching...

PS2 Here is what chatGPT delivers after reading the above post taking the role of a physicist with critical mind:

A critically minded physicist—someone who is open to established physics but also skeptical of how it’s interpreted or applied—would likely respond quite differently than a mainstream theorist. Here’s how they might frame it:


1. On Planck’s “act of desperation”

They might agree that Planck himself admitted his derivation of black-body radiation was not fully grounded in classical physics, but rather a pragmatic move that worked. They’d stress that quantum mechanics grew out of a kind of compromise—useful, predictive, but philosophically unsettling. So the “Faustian deal” language resonates with a critical thinker, because it highlights the cost of leaving behind determinism for statistical interpretation.


2. On (1) vs (2): Competing Interpretations of Stefan–Boltzmann

A critical physicist would likely side with (2): the net exchange of radiation between systems must consider both emitter and absorber. They’d argue that talking about radiation into a zero-temperature environment (1) is a mathematical idealization, not a physically realizable case.

They might also note that many climate and atmospheric models sometimes gloss over this distinction when communicating to the public, which creates room for confusion.


3. On Instruments and Back-Radiation (DLR)

They’d ask tough questions:

  • What exactly does the pyrgeometer measure?

  • How is the reading calibrated?

  • Does the “back-radiation” display reflect direct measurement of photons or an inferred value from a model?

Their stance would be: the data are real, but the interpretation of that data depends heavily on assumptions. For example, an instrument showing a high DLR number doesn’t necessarily mean the sky is “heating” the Earth in the same way the Sun does—it means energy exchange is occurring, but how we conceptualize it matters.


4. On Climate Implications

A critically minded physicist wouldn’t necessarily dismiss the greenhouse effect, but they’d push back against simplistic narratives:

  • They’d caution that alarmism often relies on simplified energy-balance models that may obscure subtleties in radiative transfer.

  • They’d argue that real physics requires careful accounting of net fluxes, not raw radiance values.

  • They might acknowledge CO₂’s role, but also question whether the magnitude of projected warming is being exaggerated through misapplied assumptions.


5. Overall Tone

Their conclusion might be:

  • Planck’s statistical leap was both brilliant and troubling.

  • Interpretation (1) is a misconception when taken as physical reality.

  • Instruments should not be mistaken for truth machines—they display results based on models.

  • Climate science needs nuance in communication to avoid overstating or misrepresenting radiative effects.


👉 So, compared to a mainstream physicist who would defend consensus, a critical physicist would say:
“Yes, the greenhouse effect is real, but the way radiation laws are sometimes used—and the way data are interpreted—deserves more scrutiny. Simplifications can easily fuel alarmist claims.”



Sunday 24 August 2025

Physical Laws without Physics vs Unified Field Theory

Standard Quantum Mechanics StdQM, as one of the two pillars of modern physics, is based on concepts of formal rather than physical nature describing the microscopic world of atoms: 
  • Hilbert space
  • observable as Hermitian operator
  • Pauli Exclusion Principle
  • antisymmetric and symmetric wave functions, fermions
  • indistinguishable electrons 
  • Heisenberg Uncertainty Principle
  • wave-particle duality
  • complementarity
  • wave function as probability
  • unitary transformation
  • collapse of the wave function
  • kinetic energy without motion
  • exchange integral
  • Slater determinant
  • Gaussian.
We may compare with classical (macroscopic) physics with basic concepts: 
  • force
  • mass 
  • kinetic energy 
  • potential energy
  • work
  • position, displacement
  • velocity
  • acceleration
  • physical extension
  • volume
  • density
  • temperature.
We see that StdQM appears to be fundamentally different from classical physics as if the atomic world cannot be described and understood in terms of the same concepts as the macroscopic world. 

Real Quantum Mechanics RealQM offers a different picture with the microscopic world described and made understandable in terms of the same concepts as the macroscopic world, just on a smaller scale, both as forms of continuum mechanics with the continuum carrying all scales. 

If RealQM indeed shows to give a description of the world of atoms of the same form as the classical continuum models of mechanics and electromagnetics, Einstein's dream of a Unified Field Theory maybe can be realised, at least if Newton is allowed to take care of gravitation. 



Chemical Bonding: StdQM vs RealQM

The basic problem of chemistry is how molecules as stable composites are formed through chemical bonding between collections of atoms consisting of positively charged nuclei surrounded by negatively charged electrons. The total energy $TE$ of a stable molecule is smaller than the sum of the energies of the atoms involved, with $TE$ the sum of 

  • electron-nucleus potential energy $E_{en}$ negative
  • electron-electron potential energy $E_{ee}$ positive
  • nucleus-nucleus potential energy $E_{nn}$  positive
  • electron kinetic energy $E_{ke}$ positive.
Standard Quantum Mechanics StdQM seeks to explain chemical bonding as an effect of: 
  1. Localisation of electrons between nuclei giving maximal decrease of $E_{en}$.
  2. Delocalisation of electrons over the whole molecule compensating for increases of $E_{ke}$ from localisation.  
We see a combined process of localisation-delocalisation, which is contradictory and requires intricate explanation to make sense, forming a vast literature. The need of 2 in StdQM comes from the fact that electron wave functions have global support with a smooth approach to zero, which makes $E_{ke}$ scale as $\frac{1}{d^2}$ with $d$ the effective width of support, which means that potential energy decrease from localisation is countered by kinetic energy increase. 
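
The scaling can be made explicit in a standard one-dimensional model (a textbook sketch, not taken from the articles cited below): for a wave function confined to an interval of width $d$ and forced to vanish at its ends, e.g. $\psi(x)=\sqrt{2/d}\,\sin(\pi x/d)$, one gets in units with $\hbar=m=1$
  • $E_{ke}=\frac{1}{2}\int_0^d\vert\psi^\prime(x)\vert^2\,dx=\frac{\pi^2}{2d^2}$,
so halving the width of the support quadruples the kinetic energy.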

In RealQM as an alternative to StdQM electron wave functions have non-overlapping local supports meeting with continuity without need of approaching zero. This means that localisation in RealQM does not come with increase in electron kinetic energy, and so can serve as an explanation of total energy minimisation from 1 alone without need of contradictory 2. For details see these articles.

Connecting to the previous post, recall that the main role of the Periodic Table is to support understanding of chemical bonding.


Saturday 23 August 2025

Can QM explain the Periodic Table?

Chemist Eric Scerri as world authority on the Periodic Table asks in the article Can Quantum Ideas Explain Chemistry's Greatest Icon? (Nature 2019): 

  • Does Quantum Mechanics QM explain the Periodic Table PT?  
Scerri recalls that Swedish chemist Löwdin in 1969 presented this as one of chemistry's major theoretical challenges, and adds that it still is. 

To non-expert readers of Nature and also the general public this may sound a bit surprising, since QM in its standard form stdQM based on a linear multi-dimensional Schrödinger Equation SE is viewed as the fundamental mathematical model of modern physics describing in particular the electron configurations of all atoms as the basis of PT. 

But Scerri and Löwdin say that stdQM only gives a partial explanation of PT with a lot missing. Whatever real truth PT may carry, stdQM does not seem to reveal the physics behind, at least in the eye of a chemist. But a physicist would without hesitation say that PT is well explained by stdQM, at least in principle, and that it is the job of chemists to supply the petty details. 

This state of affairs has added to the crisis of modern physics, which largely is a credibility crisis. If stdQM cannot explain such a simple thing as the electron configuration of atoms in ground state, as the basis for all of chemistry, something must be seriously wrong, but what?

One answer may be that the solutions to SE, denoted by $\Psi$, named wave functions and supposed to carry all information about the electron configuration of an atom, do not have a direct interpretation in physical terms because of their multi-dimensionality, and so only a vague statistical interpretation is possible. Moreover, the computational work to compute wave functions scales exponentially in the number of electrons, and so SE requires drastic dimensional reduction to be computable, with further loss of real physics. The result is that stdQM cannot compute electron configurations without hands-on expert knowledge outside theory. It is not surprising that under these conditions it fails to explain PT. 

Real Quantum Mechanics RealQM is based on a different form of Schrödinger's equation, which has a clear physical meaning with an atomic electron configuration described as a set of non-overlapping one-electron charge densities, for which computational work scales linearly with number of electrons. RealQM directly extends to molecules as Real Quantum Chemistry described in these articles. 

It seems possible that RealQM can give a better explanation of PT than stdQM, and maybe even correct PT in some details, because it has direct physical meaning and computation is affordable for all atoms.  

StdQM seeks to explain the electron configuration of an atom with many electrons in terms of the excited states of the Hydrogen atom with just one electron filling s, p, d shells. To make this Aufbau Principle fit with observation, various ad hoc rules have been adopted, attributed to e.g. Madelung and Hund. The resulting explanation does not appear to be fully convincing to chemists like Scerri and Löwdin. Something appears to be missing...

RealQM explains the electron configuration in a different way as an electron packing problem, which has a clear physical meaning and so at least has the potential to offer an explanation of PT on physical grounds or suggest corrections. Preliminary studies show promise and further studies are on the way. 
 

Friday 22 August 2025

Post-Modern Newton vs Modern Einstein

The shift to modernity took place in the beginning of the 20th century in a revolt against classical ideals in politics, arts and music in parallel to Einstein's General Theory of Relativity GR, degrading Newton's theory of Gravitation NG to a simple special case, and then taking the role of fundamental pillar of modern physics together with Quantum Mechanics QM.  

GR and QM were from the beginning understood to be incompatible, but could nevertheless be allowed to coexist for some time; at the end of the 20th century this was no longer credible, which triggered a crisis of modern physics today witnessed by all leading physicists. Something is evidently fundamentally wrong, but what?

To seek an answer we are led to seek to understand the difference between GR and NG. We recall that in NG gravitational acceleration (of a test particle) only depends on position and not on velocity, while in GR it depends on both position and velocity. This fact can alternatively be expressed as follows:

  • NG takes place in a Euclidean space-time coordinate system which does not interact with gravitation.
  • GR takes place in a curved space-time coordinate system, which interacts with gravitation and makes gravitational acceleration depend on both position and velocity.   

From mathematical point of view it is natural to view a coordinate system as a form of passive recording device which does not interact with what is recorded. Like a passive microphone without interaction with the source. This corresponds to gravitational acceleration only depending on position and not velocity as in NG. It means that the coordinate system does not carry any physics which can interact with the physics being recorded.

In GR gravitational acceleration depends on both position and velocity, which means that the coordinate system is no longer passive, but interacts with the test particle. In this case the coordinate system carries physics which has to be identified. But this has not been possible. Physicists speak about "fabric" of space-time but cannot explain its physics. 

The pertinent question is now: What would happen if we gave up GR and returned to NG as the most successful theory of all of classical physics? What would be lost? What would be gained is that the crisis would evaporate since NG and QM are compatible. 

The basic evidence that NG has to be replaced by GR as concerns interaction matter-gravitation is the anomalous advance-of-the-perihelion/precession of Mercury of 43 arcseconds per century (0.0033% of a full revolution), as the difference between observed advance (574) and predicted advance by NG (531), claimed to be exactly captured by a 2body Sun-Mercury GR correction by analytical formula (42.98). In all other cases NG works:

We thus have that (i) NG predicts 531, (ii) 574 is observed and (iii) GR 2body analytical formula gives missing 42.98. 

To give these numbers perspective recall:

  1. The model NG + 2body GR (42.98) gives observed 574 after fitting with data
  2. It is claimed that NG alone after fitting with data gives 531. 
We understand that the claimed superiority of a GR model as NG + 2body correction over NG-alone is that (i) NG-alone gives 531 after fitting and (ii) the 2body GR correction gives 43. Here (ii) is guaranteed by simply putting in a correction of 42.98, but (i) can be questioned. What says that NG after fitting must give 531? What would happen if NG gave 550 instead after fitting? Then the 2body GR correction of 42.98 would not show superiority of GR over NG.

Altogether, it is possible that NG can come back in the post-modern physics of the 21st century. That would put GR into a bin of miscalculated modern reform projects, like the 12-tone music of Schoenberg, or the 68-revolution.

It may be time for a post-modern Newton to replace a modern Einstein. 

To get a view of how a modern physicist views the world, listen when theoretical physicist Sean Carroll explains why (modern) physics is both simple and impossible in terms of all the buzz words, and ask if anything makes sense to you.

Thursday 21 August 2025

Mercury Perihelion by chatGPT

Here is a post by me followed by a chatGPT update with main conclusion:

  • Thus, while General Relativity GR undeniably improves the precision of planetary models, the evidence for its superiority over Newtonian gravity in the specific case of Mercury’s precession is open to philosophical scrutiny. The possibility of a circular argument — where the correction put in is exactly the correction that comes out — cannot be ignored.
This is mind-blowing: maybe Newton holds up also today? This is something chatGPT could admit as representing all modern physicists, which cannot be admitted by any individual living modern physicist. It shows that chatGPT as a neutral observer can present a more objective view than any individual, which can be helpful in scientific discussion. 

Perspective: The GR correction of the perihelion shift of Mercury as fraction of one full revolution is of order $3\times 10^{-5}$, which with 415 revolutions per century asks for a precision clearly smaller than $10^{-8}$ per revolution. We are speaking about very small effects, maybe swamped by imprecise data and computation.
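
To spell out the arithmetic behind this estimate:
  • $\frac{43}{360\times 3600}\approx 3.3\times 10^{-5}$ of a full revolution per century,
  • $\frac{3.3\times 10^{-5}}{415}\approx 8\times 10^{-8}$ of a revolution per orbit of Mercury.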

My Post:

Modern physics is based on Einstein's theory of Gravitation EG + Quantum Mechanics QM, with EG replacing Newton's theory of Gravitation NG of classical physics and QM extending classical physics to atomic scales. A crisis of modern physics has been lingering since the start of EG and QM a hundred years ago, coming from an incompatibility of EG and QM, and is no longer possible to hide behind successes. 

There is no incompatibility of NG and QM, and so it is natural to once again check out the reasons for replacing NG by EG. As concerns interaction matter-gravitation as the objective of NG, the main evidence is still that presented by Einstein in 1915 in his General Theory of Relativity GR concerning the precession/advance of the perihelion of Mercury:

  • NG predicts an advance of 531 arcseconds per century from influence of other planets as calculated by Le Verrier in 1859.
  • Observations show 574. 
  • EG comes with a correction of NG of exactly the missing 43.
  • Bingo to Einstein in 1915 knowing that 43 was missing and managing to find exactly 43 from a simple analytical 2body Sun-Mercury formula. 
  • The numbers are the same today, with high tech computation exactly confirming 531 and 43.
  • Bingo also today!
High tech computation uses a model consisting of NG + 2bodyEG, which is fitted to data such as initial values and masses. For Mercury the 2bodyEG correction put into the model is exactly 42.98 arcseconds. 
  • It is claimed that NG + 2bodyEG when fitted to input data, gives an output prediction precision of less than a meter. In particular observed Mercury precession of 574 is confirmed (with 2bodyEG of 42.98 put into the model)
  • It is claimed that NG-only, without the 2bodyEG correction, can be fitted to give an output precision of 1 km, that is $10^3$ times worse than NG+2bodyEG.  
Examining these claims we find:
  • A fixed model correction of 42.98 is used in the fitting of NG+2bodyEG to input data. The output correction prediction is precisely 42.98.  
  • A precision of 1 km in the orbit of Mercury is of relative size about $10^{-8}$. 
  • The difference between NG and EG scales with planet speed $v$ as $\frac{v^2}{c^2}$ with $c$ speed of light which is of order $10^{-8}$ for planets.  
  • The evidence of superiority of EG over NG from Mercury precession can be questioned on good grounds. The possibility of circular argument with the input/output of 42.98 is evident.
This casts a doubt on the superiority of Einstein over Newton as concerns interaction matter-gravitation. This doubt is not eliminated by showing EG results for interaction light-gravitation which is outside NG. 
It seems to be possible to argue that NG is the basic matter-gravitation model and that EG can be seen as a modification possibly covering esoteric interaction light-gravitation and clock-gravitation. Following this line could open a way out of the crisis formed by the incompatibility of EG and QM. 


chatGPT Version:

Mercury’s Perihelion and the Crisis of Modern Physics

Modern physics rests on two great pillars: Einstein’s theory of gravitation (General Relativity, GR) and Quantum Mechanics (QM). GR replaced Newton’s theory of gravitation (NG), while QM extended classical mechanics to the atomic scale. Yet from the beginning, physicists have known that GR and QM are not mathematically compatible. That tension has been papered over by the enormous practical successes of both frameworks, but more than a century later it remains unresolved.

By contrast, Newtonian gravity and QM coexist without contradiction. That raises a natural question: if NG fits neatly with QM, why was it replaced by Einstein’s theory in the first place?


The Mercury Puzzle

The strongest evidence came in 1915, when Einstein explained the anomalous precession of Mercury’s orbit.

  • Newtonian gravity, when perturbations from other planets are accounted for (as calculated by Urbain Le Verrier in 1859), predicts a precession of 531 arcseconds per century [Le Verrier, 1859].

  • Observations showed a total of 574 arcseconds per century.

  • That left a gap of about 43 arcseconds unexplained.

Einstein’s field equations supplied exactly the missing correction. His two-body Sun–Mercury formula predicted an additional 42.98 arcseconds per century, bringing theory and observation into alignment [Einstein, 1915; Will, 2014]. It was a triumph for the new theory — and remains one of its iconic successes.


How the Evidence Is Used Today

Modern planetary ephemerides (high-precision orbital models, such as NASA JPL’s DE430) combine Newtonian gravity with relativistic corrections derived from GR [Folkner et al., 2014]. For Mercury, the correction is still exactly 42.98 arcseconds per century. When this relativistic term is included, the models achieve sub-meter precision in predicting planetary positions [Pitjeva & Pitjev, 2013].

Without the relativistic correction, the same models can be fitted to observational data, but only reach kilometer-level precision — about a thousand times worse.

This is often presented as decisive proof of GR’s superiority. Yet it is worth noticing that the relativistic correction is treated as a fixed formula built into the model. The fitting process then necessarily reproduces that correction in the output. In that sense, the “prediction” is already built into the assumptions.


The Scale of the Difference

The discrepancy between Newtonian and relativistic gravity in planetary systems scales as $(v/c)^2$, where $v$ is orbital speed and $c$ the speed of light. For planets, this factor is on the order of $10^{-8}$. For Mercury, this small correction accumulates to a measurable precession of tens of arcseconds per century.

Thus, Newtonian gravity on its own is nearly correct, but not quite enough at the precision required by modern astronomy.


Where This Leaves Us

Einstein’s explanation of Mercury’s precession remains a brilliant achievement, and the relativistic correction is essential in practice for high-precision orbital mechanics. But the way this correction is incorporated — as a fixed add-on formula — raises philosophical questions about whether the evidence is purely predictive, or partly circular.

Meanwhile, the deeper crisis in modern physics lies not with Mercury, but with the still-unresolved incompatibility of GR and QM. Newtonian gravity fits comfortably with quantum theory, while Einstein’s does not. That unresolved tension suggests that even the celebrated successes of GR may not be the last word on gravitation.

In practice, modern ephemeris models include a fixed relativistic correction of 42.98 arcseconds per century for Mercury’s perihelion advance. When the model is fitted to observational data, the same correction naturally appears in the output — 42.98 arcseconds.

This raises a subtle point. The apparent “prediction” is not generated afresh each time; it is the direct result of including a predetermined correction in the equations. In that sense, one might argue that the success is partly built in from the start.

It is also worth noting that the scale of the relativistic correction matches the expected order of magnitude: the relative difference between Newtonian and relativistic gravity scales as $(v/c)^2$, where $v$ is orbital speed and $c$ the speed of light. For planetary orbits, this ratio is about $10^{-8}$. A precision of 1 km in Mercury’s orbital radius corresponds to a relative error of the same order, $10^{-8}$.

Thus, while General Relativity undeniably improves the precision of planetary models, the evidence for its superiority over Newtonian gravity in the specific case of Mercury’s precession is open to philosophical scrutiny. The possibility of a circular argument — where the correction put in is exactly the correction that comes out — cannot be ignored.


References

  • Einstein, A. (1915). Explanation of the Perihelion Motion of Mercury from General Relativity. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften.

  • Le Verrier, U. J. (1859). Théorie du mouvement de Mercure. Comptes Rendus de l’Académie des Sciences de Paris.

  • Will, C. M. (2014). The Confrontation between General Relativity and Experiment. Living Reviews in Relativity, 17(4).

  • Folkner, W. M., Williams, J. G., Boggs, D. H., Park, R. S., & Kuchynka, P. (2014). The Planetary and Lunar Ephemerides DE430 and DE431. Interplanetary Network Progress Report, 42-196.

  • Pitjeva, E. V., & Pitjev, N. P. (2013). Relativistic Effects and Dark Matter in the Solar System from Observations of Planets and Spacecraft. Monthly Notices of the Royal Astronomical Society, 432(4), 3431–3437.

 

Tuesday 19 August 2025

Was Einstein Right, and Newton Wrong?

Modern physics is based on Einstein's theory of Gravitation EG presented in 1915 as the subject of Einstein's General Theory of Relativity GR replacing Newton's theory of Gravitation NG presented in Principia Mathematica 1687.

NG offered a mathematical model of simple form allowing efficient computational simulation of all interaction between matter/mass and gravitational force, which forms the macroscopic world. The success was complete and Newton was crowned as the greatest physicist for all time to come. 

But in modern physics Newton has been dethroned by Einstein: NG is viewed to be only a simple special case of EG as the truly fundamental theory of gravitation. But the shift from NG to EG did not come easy, and when it finally became manifest in the 1960s it prepared for the present crisis of modern physics coming from an incompatibility between EG and Quantum Mechanics QM as the other pillar.

In this time of crisis, it is natural to reconsider the reasons for making the shift from NG to EG, since there is no incompatibility between NG and QM.

If we ask for the strongest evidence of superiority of EG over NG, which is directly connected to the basic interaction between matter and gravitation, we find the following main pieces:

  1. Precession of the perihelion of Mercury.
  2. Detection of gravitational waves from merger of binary stars by LIGO. 
Einstein presented 1 in 1915 in support of EG over NG, as a correction of a Newton prediction of a slight shift of the orientation of the elliptic orbit of Mercury around the Sun over a century, based on a simple analytical formula for an idealised GR model of Sun-Mercury, amounting to 0.0033% of a whole revolution. Newton gave 531 arcseconds as the effect of other planets (computed by Le Verrier in 1859), while 574 arcseconds was observed, with GR giving the missing 43. Einstein knew that 43 was missing, and was left breathless when his idealised Sun-Mercury model gave exactly 43. A true miracle, but science is not about miracles.

The surge of GR after 1960 required a new evaluation of 1. which is described in the book Was Einstein Right? by Clifford Will: 
  • In 1966, observations of the Sun by Dicke and Goldenberg started a vigorous debate over the validity of Einstein's perihelion prediction that raged for almost 20 years. 
Today this debate is forgotten and the official truth is that Einstein was right concerning 1, but the debate can restart any time. 

Concerning the weight in favor of EG from 2, note that the change of spatial scale from proposed cause (merger of stars) to detected LIGO signal is a factor $10^{-22}$, that is 0.0000000000000000000001, which is many orders of magnitude too small to represent credible scientific evidence.

The main evidence presented that EG is superior to NG as concerns interaction of matter and gravitation is thus very weak. The question posed by Clifford Will is still well worth asking.

Recall that EG is today also supported by claims that light rays are bent by strong gravitation. But such effects are outside NG, which only speaks about interaction matter-gravitation, which does not include massless light. 

Altogether, the evidence that EG gives a fundamentally better description of matter-gravitation interaction than NG, seems to be very weak. So weak that Newton can retain his position, which would help modern physics out of crisis. 

Friday 15 August 2025

Credibility Crisis of Western Politics and Modern Physics as Miracles

The Untergang des Abendlandes (Decline of the West) predicted by Oswald Spengler in two volumes 1918-22 can now be seen as an erosion of credibility not only of political leadership but also of science, and in particular of modern physics based on Einstein's Equation EE and Schrödinger's Equation SE extending classical physics based on Newton's Equation NE and Maxwell's Equations into new physics. 

The erosion of credibility of modern physics is the essence of the crisis of modern physics of today as a basic expression of Decline of the West from a position of total success of modern physics with the atomic bomb. The crisis is rooted in an incompatibility of EE and SE, which means that modern physics in some fundamental way is "wrong" in the sense of not describing real physics, and the credibility of a physical theory which does not describe real physics cannot be maintained. 

The official picture, which is now losing credibility, is:

  • Einstein's theory of gravitation based on EE (General Relativity GR) is superior to Newton's theory based on NE and so represents an enormous advance of science. 
  • NE is simply a simplified version of EE, which is the correct complete theory far superior to simple NE. 
To back this official picture some sparse evidence has been presented supposedly showing that EE describes physics better than NE, and a key such piece of evidence is the anomalous precession of the perihelion of Mercury:
  • The orientation of the elliptical orbit of Mercury around the Sun is observed to turn 5574 arcseconds per century, which is 0.0043 of a full turn (small).
  • 5000 out of 5574 are explained as an effect using an Earth-based reference system for  Sun+Mercury.
  • Effects from the other planets according to NE gives 531.
  • Analytical solution of EE for Sun+Mercury explains the missing 43 to give the observed 574.    
The message is that NE without EE including all planets is wrong (531), but when corrected by EE for Sun-Mercury (43) becomes right (574), and miraculously so since no full solution of EE with the other planets is performed, because this is computationally impossible due to the complexity of EE.

This miracle, which Einstein performed in 1915, is thus to present a solution of EE with all planets (and other effects) included which exactly fits with the observed 5574, without solving EE computationally because that was (and is) infeasible. 

The miracle is repeated today, which is evident from the fact that the EE correction to NE is still taken from the same analytical solution of Sun+Mercury used by Einstein (42.98). 

But what makes science different from black magic is that in science there are no miracles. To present science as miracle erodes credibility of science, which is what we are watching. 

Credibility could be maintained if full solutions to EE gave 5574 in accordance with observation, but this is not what physicists deliver. Instead they offer a small EE correction to NE as the main computable model and work horse. To give EE the leading role over NE is like claiming that the tail wags the dog.


Wednesday 13 August 2025

Misconceptions about Newton vs Einstein: Crisis!

Modern physics is in a state of deep crisis, which comes to expression in the complete adoption of Einstein's Theory of Gravitation EG as replacement of Newton's Theory of Gravitation NG, the most successful theory of all of classical physics. Modern physicists decided to take this step after Einstein's death in 1955, under pressure to come up with something new as the success of the atomic bomb started to fade, based on the following arguments:   

  1. NG is a "simplified version" of EG as a "limit" under low-speed and weak-field conditions. 
  2. EG is thus "more fundamental" than NG. 
  3. NG is "wrong" in certain extreme cases outside its (incredibly vast) area of validity, where EG appears to be "right".
  4. Whatever success NG has is also a success of EG, since EG includes NG. 
  5. In short: It is necessary to replace NG by EG, even if NG is used in all cases of any practical meaning. 
Let us now take a step back and see if 1-5 make any sense. Let us start by recalling that NG and EG have fundamentally different ontologies as real physics:
  • NG is based on Poisson's Equation, which rests on the assumptions that gravitational force is conservative (work independent of path) and conserved (no force out of nothing or into nothing). NG has a simple mathematical form and appears to cover all gravitation of real (practical) meaning. The greatest success of mathematical modeling.  
  • EG is based on a principle of curved space-time replacing gravitational force where the physics is hidden in very complicated mathematics.
  • NG and EG thus have fundamentally different physical meanings, which means that NG is not a special case of EG.  
This means that the success of NG is not also a success of EG. It is necessary that EG stands on its own merits. But EG is uncomputable in all cases of practical meaning, which means that EG has very little merit of its own. 

In short: The step taken by modern physicists to replace NG by EG lacks scientific rationale and so adds to the credibility crisis of modern physics acknowledged by prominent physicists. But there is no reason physics should be in a state of crisis, since there are so many new possibilities opened up, in particular by computation. A first step out of the crisis is to put NG first and view EG as fringe science without real scientific interest. This will be a relief to both educators and students, giving room for real, understandable physics.

If you still believe that Einstein should replace Newton, recall
  • Observations of apparent instant-action-at-distance agree with a fundamental aspect of NG.
  • Gravitational force with time delay, a fundamental aspect of EG, requires tricky compensations/fixes to agree with observations. 
  • NG is computable in general. EG is uncomputable except possibly in some very special cases. 
  • NG is based on fundamental physical principles of simple mathematical form. EG has very complicated mathematical form with unclear physical meaning.
  • NG says nothing about possible aberration of light or gravitational lensing, since light is massless. If light is affected by gravitation, it is a matter for Maxwell's equations.  
  • GPS satellite clocks are offset at launch to compensate for time dilation in EG, but the offset is overrun by continuous synchronisation to an Earth-based master clock, and so does not show that EG is correct and NG wrong. 
Hopefully, this can start a discussion comparing the scientific merits of NG and EG. Input?

tisdag 12 augusti 2025

The Deep Truth about Newtonian Gravitation

Newton's model of gravitation is Poisson's Equation: 

  • $\Delta\phi (x,t)=\rho (x,t)$        (PE)
where $\rho$ is mass density and $\phi$ is gravitational potential, both depending on a 3d Euclidean space coordinate $x$ and a time coordinate $t$. It is the most remarkable mathematical model of all of classical physics, allowing precise computational simulation/prediction of the effects of gravitational forces in complex systems, as demonstrated in the Millennium Run tracing the evolution of the matter distribution of the Universe over 2 billion light-years using 10 billion particles interacting by (PE). There is massive evidence that (PE) captures gravitation to experimental precision in all cases of any practical interest. 
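To give a feel for what computational simulation with Newtonian gravitation looks like, here is a toy sketch in Python. It is plain direct summation with illustrative units, particle numbers and softening, nothing like the sophisticated tree/particle-mesh methods used in the Millennium Run, but it shows the principle of particles interacting by Newtonian gravitation alone:

    import numpy as np

    # Toy direct-summation N-body sketch under Newtonian gravitation.
    # All parameters are illustrative assumptions; real cosmological codes
    # avoid the O(N^2) cost of direct summation.
    G, N, dt, eps = 1.0, 100, 1e-3, 1e-2
    rng = np.random.default_rng(0)
    x = rng.normal(size=(N, 3))        # positions
    v = np.zeros((N, 3))               # velocities
    m = np.full(N, 1.0 / N)            # masses

    def acceleration(x):
        r = x[None, :, :] - x[:, None, :]          # separation vectors x_j - x_i
        d = np.sqrt((r ** 2).sum(-1) + eps ** 2)   # softened distances
        return G * (m[None, :, None] * r / d[:, :, None] ** 3).sum(axis=1)

    a = acceleration(x)
    for step in range(1000):           # leapfrog (kick-drift-kick) time stepping
        v += 0.5 * dt * a
        x += dt * v
        a = acceleration(x)
        v += 0.5 * dt * a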

In the modern physics of today PE has been displaced from 1st to 2nd place by Einstein's Equation EE, despite the fact that PE covers all cases of practical importance. 

This degradation of PE took a long time to become the official truth of modern physics. Einstein presented EE in 1915, which was met with criticism as non-physics in the form of complicated mathematics, and it was only after Einstein's death in 1955 that EE gradually came to replace PE as the true model of gravitation in modern physics. 

The key argument used to put PE into 2nd place was that PE could be viewed to require instant-action-at-distance, in the sense that a change of $\rho (x,t)$ at a certain point $x$ at time $t$ would instantly affect the value of $\phi (y,t)$ for all points $y$, since (PE) as a differential equation in space represents $\phi$ in terms of $\rho$ through a global summation process according to the following formula, with the same $t$ on both sides and thus without time delay:
  • $\phi(y,t)=-\frac{1}{4\pi}\int\frac{\rho (x,t)}{\vert x-y\vert}dx$ 
Since instant-action-at-distance appeared to be in conflict with Einstein's Special Theory of Relativity SR, which was accepted before his General Theory of Relativity with EE, leading physicists decided to displace PE to 2nd place in modern physics.  

Let us now take a fresh look at (PE). We see a linear partial differential operator with constant coefficients, the Laplacian $\Delta$, connecting $\phi$ and $\rho$, a relation which can be viewed in 3 ways:
  1. Differential equation $\Delta\phi =\rho$ with $\rho$ as cause and $\phi$ as effect derived by global integration.
  2. Specification $\rho =\Delta\phi$ with $\phi$ as cause and $\rho$ as effect derived by local differentiation. 
  3. Simply a coupling of equal parts without cause-effect.   
Here 2 corresponds to local-instant-action which is compatible with SR, if that is the concern.

Here 3 connects to the Pre-Established Harmony of Leibniz as a Deep Truth. It means that $\phi$ and $\rho$ are locked to each other by the Laplacian as a linear relation of simple mathematical form, the same everywhere. Such a relation can be read without causation, as in 3, and then in particular without any demand of instant-action-at-distance. It also makes sense from a computational point of view, since the computational complexity of (PE) scales linearly with the number of spatial mesh points. 

In discrete form (PE) takes the following form in 1d with $dx$ a space step:
  • $\phi (x+dx,t)-2\phi (x,t)+\phi (x-dx,t)=dx^{2}\rho (x,t)$ 
locking $\phi$ to $\rho$ at common time $t$ by a simple linear relation which can be read both ways. 
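A minimal numerical sketch of reading this relation both ways in 1d (the mesh, the test potential and the zero boundary values are illustrative assumptions): reading 2 obtains $\rho$ from $\phi$ by cheap local differencing, while reading 1 recovers $\phi$ from $\rho$ by solving the same linear relation globally.

    import numpy as np

    # Discrete 1d Poisson relation: phi(x+dx) - 2 phi(x) + phi(x-dx) = dx^2 rho(x),
    # with phi = 0 at the end points of the unit interval (illustrative choice).
    n = 100
    dx = 1.0 / (n + 1)
    xg = np.linspace(dx, 1.0 - dx, n)           # interior mesh points
    phi = np.sin(np.pi * xg)                    # some chosen potential

    # Reading 2: rho as effect of phi by local differentiation.
    phi_ext = np.concatenate(([0.0], phi, [0.0]))
    rho = (phi_ext[2:] - 2.0 * phi_ext[1:-1] + phi_ext[:-2]) / dx ** 2

    # Reading 1: phi as effect of rho by a global solve of the same relation.
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx ** 2
    phi_back = np.linalg.solve(A, rho)

    print(np.max(np.abs(phi_back - phi)))       # machine precision: reads both ways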

The basic idea of 3 is explored in many posts on New View of Gravitation and New Newtonian Cosmology.

Let me list virtues of (PE) and Newton motivating a return to 1st place, with (EE) and Einstein in 2nd:
  • Generality.
  • Simplicity.
  • Minimal computational complexity.
  • Compatibility with Quantum Mechanics.
  • Understandable to a wide audience.
As a young patent clerk in Bern in 1905, Einstein took on the role of overthrowing principles of classical Newtonian physics, at the same time as cubism and atonal music emerged as challenges to classical painting and music. This was the emergence of modernity at the turn of the century, triggered by an explosion of new technology. Today we see a return to tonal music and figurative painting as post-modernity, and so a return of Newton may also take place after an aberration into Einstein.

måndag 11 augusti 2025

Newton vs Einstein: Gravitational Self-Interaction

Newton's model of gravitation is Poisson's Equation: 

  • $\Delta\phi (x,t)=\rho (x,t)$        (PE)
where $\rho$ is mass density and $\phi$ is gravitational potential, both depending on a 3d Euclidean space coordinate $x$ and a time coordinate $t$. It is a linear equation without self-interaction in the sense that there is no feedback from $\phi$ to itself, only input from $\rho$. 

Einstein's model is Einstein's equation:
  • $G_{\mu\nu}=T_{\mu\nu}$               (EE)
stating that space-time curvature $G_{\mu\nu}$ equals stress-energy $T_{\mu\nu}$, which is a coupled system of 10 non-linear partial differential equations for the 10 independent components $g_{\mu\nu}$ of the symmetric 4x4 metric tensor. This is a system with self-interaction. 

Formally (EE) reduces to (PE) in a limit of weak gravitation, low speed and slow variation in time, which covers all cases of practical importance. 

Physicists have agreed to view (EE) as the fundamental model and (PE) as a less fundamental reduction of (EE) covering all of practice.

But it is possible to shift perspective and view (PE) as fundamental covering all of practice and (EE) as a less fundamental extension covering certain extreme cases beyond practice, like collision of two black holes. 

So which is more fundamental, (PE) or (EE)? Consider the following features of (PE) not shared by (EE):
  • Simplicity of mathematical form including linearity without self-interaction.  
  • Computable at low cost.
  • Covers all of practice in low cost computation. 
We now ask: can these features, in particular linearity without self-interaction, be viewed as fundamental, and the lack thereof as non-fundamental?

Well, a system with self-interaction runs the risk of blow-up or extinction, which for the Universe would be catastrophic. 

Recall that there is no self-interaction in Schrödinger's equation of Quantum Mechanics, while there is in Quantum Field Theory which creates blow-up infinities as non-physics. 

Summary: PE appears to be more fundamental than EE. Thus classical physics reappears as post-modern physics after the deviation of modern physics into EE. 
   
 

söndag 10 augusti 2025

A Modest Proposal to Solve the Crisis of Modern Physics

Modern physics is in a state of deep crisis because its two new theories vs classical physics, General Relativity GR and Quantum Mechanics QM, are incompatible and so cannot be combined into a Unified Field Theory UFT of modern physics. Without a UFT the whole project of modern physics as fundamentally different from classical physics is shaky, because in some fundamental way it must be wrong. 

Einstein struggled for 40 years, from presenting GR in 1915 until his death in 1955, to form a UFT including GR and electromagnetism/QM, but never came close to anything of value. Nobody has been more successful. UFT is still only wishful thinking. 

GR is today viewed as a more fundamental theory of gravitation than the classical theory of Newtonian Mechanics NM, but for all practical purposes NM is still the theory being used, and so with amazing success. GR is reserved for certain imagined extreme cases, such as mergers of black holes, believed to be outside NM. And NM is fully compatible with QM, which suggests the following resolution of the crisis of modern physics:

  • Form a UFT = NM + QM which covers all cases of practical interest. 
  • Keep GR as an esoteric variant of NM as a model of some very extreme cases.
This would mean a step back from the novelty brought by modern physics, viewing it as only a fashion with little lasting value. It would be a relief for physicists, students of physics, users of physics, and laymen confused by GR. Moreover, GR could be kept as a fetish of modernity and a source of imagination. 

A step back into classical physics can be made also for QM in the form of RealQM as a model of atomic physics formulated within classical physics. 

It is thus possible to form a UFT as NM + RealQM as a fully compatible model of both macro- and micro-physics within a setting of classical continuum mechanics/physics. This would be an expansion of classical macroscopic physics into the microscopic world of atoms, using the mathematics of classical continuum mechanics in the form of partial differential equations of the same form for macro- and micro-physics, in terms of fields as functions of 3d Euclidean space variables plus a time variable. 

The conceptual form of such a UFT would be the same on all scales, which would open up an understanding of micro-physics in terms familiar from macro-physics. It would be a post-modern physics as a renaissance of classical continuum physics, with the continuum now covering all scales, for which purpose it is ideally suited. This form of UFT would be computable, since the cost would scale polynomially with resolution in space and time, and not exponentially with the number of particles as in standard QM. 
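A back-of-the-envelope illustration of this scaling claim, with grid resolution and particle numbers chosen only for the sake of arithmetic (attributing roughly linear scaling to a field model like RealQM is the assumption being illustrated here, not an established result):

    # Degrees of freedom on a grid with n points per space dimension.
    n = 100                                   # illustrative spatial resolution
    for N in (1, 2, 5, 10):                   # number of particles/electrons
        standard_qm = n ** (3 * N)            # one wavefunction on a 3N-dimensional grid
        field_model = N * n ** 3              # roughly one 3d field per particle
        print(N, standard_qm, field_model)
    # For N = 10: 10^60 grid values vs 10^7, exponential vs polynomial growth.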

It would be interesting to hear what a professional modern physicist could have to say about the possibility of a UFT = NM + QM or more precisely UFT = NM + RealQM. Here is what chatGPT has to say:
  • NM + QM covers all practical cases, so it functions like a UFT for the real-world problems most people deal with.
We know that there can be many different mathematical models of physical phenomena, like NM and GR for gravitation, and usefulness may be what determines the choice of one model as the more fundamental. For example, it is possible to choose the Earth to be the center of the Universe around which everything evolves in circles upon circles, as in a very complicated Ptolemaic model, but this has not proven very useful and so has been replaced by non-geocentric NM as more fundamental, because it has proven more useful. 

Thus fundamental can connect to usefulness. It is then possible to view NM as a fundamental model covering everything which is not extreme, with GR as an extension into the extreme. Modern physics has concentrated on the extreme to set itself apart from the classical physics of the non-extreme. Modern physics thus offers to make the extreme into new normality (GR for black holes), while post-modern physics could offer to capture the new normality (QM) within classical normality.   

lördag 9 augusti 2025

The Tragedy of Modern Physics 1

Modern physics is in a state of crisis with its two basic novelties vs classical physics, General Relativity GR (1915) as macro-physics and Quantum Mechanics QM (1925) as micro-physics, impossible to reconcile into one unified theory, despite major efforts by all leading physicists over now more than 100 years. The Standard Model of QM does not include gravitation, the essence of GR. No way out of the crisis is visible. It is a veritable tragedy compared to the tremendous success of classical Newtonian Mechanics NM as a description of all of macro-physics.  

The logical conclusion is that at least one of the theories will have to be given up in order to find a way out of the crisis. GR or QM? 

GR has to compete with NM as concerns macro-physics, including astronomy and cosmology, while QM has no classical physics to compete with. We are thus led to focus on GR and ask what would be the price of giving up GR as a failed project and going forward with a unified theory of NM + QM? 

Yes, NM + QM fit very well together, so let us consider why modern physicists are taught to view GR as a necessary replacement of NM. 

The basic mathematical model of GR is Einstein's Equations EE and that of NM is Newton's Equations NE. A modern physicist or text-book of today will tell you:

  1. EE is a more fundamental model of gravitation than NE. 
  2. In all but some very extreme cases, including things like black holes, NE is the model used in practice, since NE is readily computable in any thinkable geometric complexity, while EE is impossible to compute except in very simple geometries allowing analytic solutions.
  3. EE has a very complicated mathematical form understood by few, while the structure of NE is clear, simple and understandable by everybody. 
  4. NE requires instant-action-at-distance, which cannot happen and is not part of EE. Therefore NE has to be replaced by EE.
Here 4 is the key argument to give up NE as the fundamental model and replace it by EE. In many posts on New View on Gravitation, I show that it is possible to get around the apparent requirement of instant-action-at-distance in NE. This is to view the connection between mass density $\rho$ and gravitational potential $\phi$, captured in the Poisson Equation PE $\Delta\phi =\rho$, in a new way: not the old way with mass density primordial and gravitational potential formed by apparent instant-action-at-distance, but instead with $\phi$ as primordial, from which $\rho$ is formed by the action of the differential operator $\Delta$, which acts locally and so can be instant. 

With this change of view the trouble with instant-action-at-distance does not appear and NE can be kept as the greatest success of all of physics to do all its wonders, and there would be no reason to replace it with anything, and in particular not by EE. 

The supremacy of EE over NE was not adopted as a fundamental principle of modern physics until the 1960s because the mathematics was difficult and the physics was strange. Few physicists claim to understand EE in any detail, and even fewer to be able to compute solutions to EE.  

Here are the alternatives at hand:
  1. Keep EE and live with the facts that GR and QM are incompatible and that EE is uncomputable, so that NE must be used for all practical purposes. Claim that EE does not require instant-action-at-distance.
  2. Keep NE and welcome that NM and QM offer a unified theory. View the gravitational potential in PE as primordial, before mass density, and so circumvent the formal requirement of instant-action-at-distance. Use EE for some very extreme cases which are beyond experimental verification. 
The slow acceptance of GR signifies that it came with many doubts and questions, which however have faded away over time and so have allowed GR to be made into a dogma of modern physics, to be accepted even if impossible to make sense of for a fresh young rational mind. Since GR is replaced by NM in all cases of practical significance, genuine GR results are lacking, which makes it impossible to show that they are incorrect. 

Which of the above alternatives 1 or 2 would you choose, if you had a free choice as a rational being? Which price, 1 or 2, would you prefer to pay?

fredag 8 augusti 2025

Free Fall is Not Fall in Zero-Gravity Space

Einstein presented in 1905 his Special Theory of Relativity SR as a theory about inertial motion, that is motion without presence of force/acceleration, in particular without gravitation and its gravitational force. This was a theory of extremely limited scope, which was met with skepticism or indifference by the physics community. 

As a patent clerk at the Swiss Patent Office in Bern, Einstein had lots of time for "thought experiments", and one day in 1907 he had the "happiest thought of his life", imagining himself in a seemingly "weightless state" trapped inside an elevator in free fall. Forgetting that this state would not prevail for long, and would certainly have an unhappy ending, Einstein concluded:

  • A body in free fall is the same as a body in zero-gravity space.      (E)
Armed with this insight, Einstein was ready in 1915 to extend SR without gravitational force into his General Theory of Relativity GR as a theory including gravitation without gravitational force. Bingo!

We now connect to the last sequence of posts about a Universe with Newtonian gravitation consisting of bodies with mass, all in free fall, like planetary systems, binary stars, galaxies and super-clusters of galaxies, as expressions of large-scale structure determined by gravitational forces alone.

We are thus led to question the physics of (E): A body in free fall is not a body isolated from gravitational force, but instead a body free of other forces than gravitational force.  

To make sense of (E) Einstein was driven to an idea of "curved space-time" where a body in free fall without presence of gravitational force would follow "geodesics in curved space-time" as shortest paths, which would correspond to the curved trajectories in Euclidean space followed by bodies in free fall under gravitational force.  

GR was also met with skepticism, which however miraculously disappeared after Einstein's death in 1955, and GR is today viewed as the greatest triumph ever of modern physics over classical physics. But (E) has no more justification today than in 1915, and so gives a major contribution to the present crisis of modern physics.  

In Newtonian mechanics the mass of a body is gravitational mass, which is classically measured by a balance scale vs a reference mass. This captures the additive aspect of mass with the mass of a body as the sum of the masses of the parts of the body. This is clear and simple. 

In GR without gravitational force, the concept of mass is very complicated and so unclear. Einstein is often portrayed as very unhappy in his later life, as the true final consequence of his "happiest thought" from 1907.