Thursday, October 17, 2024

Vacuum Energy from Fabric of 3d Euclidean Space

Gian Giudice, Head of Theoretical Physics at CERN, presents an idea of a Big Bang powered by energy stored in the "fabric of space and time" as "vacuum energy", in an interview on the question "What happened before the Big Bang?".

This connects to the Neo-Newtonian Cosmology presented in previous posts, which starts from a small oscillating perturbation $\phi$ of a zero gravitational potential $\Phi = 0$ and creates a mass density $\rho$ of variable sign through the action of the Laplace differential operator $\Delta$ on $\phi$ as

  • $\rho =\Delta\phi$.     (*)   
The action of differentiation inflates small oscillations of $\phi$ around zero into large oscillations of mass density of variable sign: for example, $\Delta (\epsilon\sin(kx)) = -\epsilon k^2\sin(kx)$, so a perturbation of small amplitude $\epsilon$ and high frequency $k$ produces a density of amplitude $\epsilon k^2$. By the action of gravitation, mass separates into larger regions of positive and negative mass, which repel each other as if subject to dark energy, with small positive $\Delta\phi$ appearing as dark matter. Further, gravitational collapse of regions of positive mass generates massive kinetic energy seemingly out of nothing as vacuum energy.
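Here is a minimal numerical illustration (my own sketch, not from the posts, with grid size, amplitude and frequency chosen arbitrarily) of how the discrete Laplacian amplifies a small high-frequency perturbation by the factor $k^2$:

  import numpy as np

  # A small perturbation phi of amplitude eps and frequency k on a periodic grid.
  L = 2 * np.pi                # period of the domain
  M = 1000                     # number of grid points (assumed resolution)
  h = L / M                    # mesh size
  x = np.arange(M) * h
  eps, k = 1e-3, 50            # small amplitude, high frequency (assumed values)
  phi = eps * np.sin(k * x)

  # Discrete Laplacian with periodic boundary conditions: rho = Delta phi.
  rho = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / h**2

  print(np.max(np.abs(phi)))   # 0.001
  print(np.max(np.abs(rho)))   # close to eps * k**2 = 2.5, of variable sign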

The action of the Laplacian $\Delta$ in the equation $\rho =\Delta\phi$ thus offers explanations of (i) vacuum energy, (ii) dark energy and (iii) dark matter, as an expression of the "fabric of 3d Euclidean space". If true, simplicity proves to be more powerful than the complexity of the current state of the art.

Wednesday, October 16, 2024

Science or Magic?


What is the difference between science and magic? Is there less magic and more science in our modern technological society, ultimately geared by Human Intelligence HI? Let's see, with connections to recent posts.

This year's Nobel Prizes in Physics and Chemistry were given to Artificial Intelligence AI and not to HI as in all previous years, which can be seen as an expression of the crisis of modern theoretical physics witnessed by leading physicists in popular science media/web.

Modern theoretical physics was born 100 years ago in the form of Quantum Mechanics QM for atomic microscopics without gravitation and Einstein's General Theory of Relativity GR for macroscopic gravitation, both still serving as foundations today, although mutually incompatible.

Both QM and GR introduced new elements of magic into theoretical physics, which in its classical form was carried by logic and clarity in the spirit of Leibniz and Euler, as expressed by Nobel Laureates:

  • If you can fathom QM without getting dizzy, you don't get it. (Bohr 1922)
  • Nobody understands QM. (Feynman 1965) 
  • Many people probably felt relieved when told that the world could not be understood except by Einstein and a few other geniuses who were able to think in four dimensions. (Alfvén 1970)
  • QM is wrong. (Penrose 2020). 

Nevertheless, QM is viewed to have delivered, apparently by magic, wonders like the atomic bomb and the computer, and all physicists confess to GR even if its "four-dimensional curved space-time" is pure magic.

We are led to conclude that modern science, ultimately based on QM + GR, has very strong elements of magic. To this picture we can now add AI as something magical beyond understanding, because the computational optimisation process behind AI is too complex to be inspected and understood.

The essence of science in a classical sense is to be understandable by HI, while magic is not understandable by HI. Understanding is important because it opens for constructive improvement/advancement, while sheer magic does not.

The crisis of modern theoretical physics can thus be seen as an expression of the difficulty of advancing science based on magic. QM and GR have not evolved since their birth 100 years ago, and science without advancement is dead science.

Origin of the mystery of QM.   

QM is based on Schrödinger's Equation SE, presented in 1925 for the hydrogen atom with one electron and then formally extended to atomic systems with $N>1$ electrons, with solutions named wave functions, denoted by $\Psi$, depending on $N$ three-dimensional spatial coordinates, altogether $3N$ coordinates, plus a time coordinate. Theoretical physicists like to speak about $\Psi$ as offering a full description of the World, unfortunately maybe way beyond the imagination of a general public.

The mystery of QM is that the wave function $\Psi$ has a meaning only as probability and not as actuality, and since physics concerns actuality the wave function lacks physical meaning. In addition it is uncomputable, because computational work scales exponentially with $N$. The wave function carries information about all possibilities but no single actuality, and as such is an uncomputable monster which cannot be used constructively.
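To see the exponential scaling concretely, here is a minimal sketch (my own numbers: an assumed resolution of $M=100$ grid points per spatial dimension) counting the values needed to store $\Psi$ on a grid:

  # Storing Psi on M grid points per dimension requires M**(3N) values.
  M = 100                      # assumed grid resolution per dimension
  for N in (1, 2, 3, 10):      # number of electrons
      print(N, M ** (3 * N))

  # Already N = 10 gives 100**30 = 10**60 values,
  # beyond the capacity of any thinkable computer.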

In this hopeless situation, physicists compute solutions to simplified versions of SE and adjust computations until they fit with experiments. The mantra then reads that QM always gives exact agreement with observation, as evidence that QM is a complete success (and as such truly magical).

The probability interpretation of QM appeared as a necessity from a trivial formal mathematical generalisation of SE for one electron with physical meaning into a canonical SE for many electrons without physical meaning. Non-physical formality thus dictated resort to probability instead of physical actuality, and the result was mystery beyond HI.

Is there then no hope? Yes, there is a different generalisation from one to many electrons based on physics, into a deterministic model in the form of classical continuum mechanics, which we refer to as Real Quantum Mechanics RealQM. This model is understandable and computable, and as such can open for advancement of fundamental science. Take a look.

PS Note that science as magic is not the same as science fiction, which is based on physics albeit fictional.


Tuesday, October 15, 2024

Penrose: Quantum Mechanics is Inconsistent = Wrong

Nobel Laureate in Physics Roger Penrose tells a group of physicists about Quantum Mechanics QM as the pillar of modern physics based on the Schrödinger Equation SE:

  • QM does not really make sense.
  • Schrödinger, Einstein and Dirac said QM is incomplete.
  • I say QM is inconsistent.
  • Matter does not satisfy SE.
  • We need some new ideas.
  • We need creativity to go beyond current theories of physics.
  • We need to combine QM with gravitation.
What Penrose is saying is that QM, as a theory supposed to describe real physics, does not meet that requirement, because real physics must be consistent and so cannot be described by an inconsistent theory. The group does not receive these revelations with shock, but rather with acceptance, as if there is no hope.

Consider these statements following common logic:
  • A consistent mathematical theory is a theory without contradictions. 
  • An inconsistent theory harbours contradictions. 
  • Contradictory physics cannot be real physics. 
Penrose follows this logic and concludes that QM does not describe real physics, and so asks for some new version of QM based on something different from SE in its standard form StdQM, with its probabilistic interpretation as root cause of inconsistency.

Real Quantum Mechanics RealQM is a new deterministic theory based on a new form of Schrödinger's equation in terms of classical continuum mechanics in 3 space dimensions as a consistent model.

RealQM may be the new idea Penrose is asking for. I will ask him to take a look, and report if I get an answer. Well, I get the message that Penrose at 93 no longer reads his mailbox, so the question goes to some younger physicist. StdQM itself is now 99 years old.

Does it matter what Penrose says? Does it matter if what he says is true? He is not alone, but of course his view is not the consensus of a majority, if there is anything like that. Is it worth listening to Penrose? The debate on the meaning of StdQM has been going on for 100 years, without any resolution. Maybe it is time for some new idea?

Sunday, October 13, 2024

Artificial vs Human Intelligence: Physics/Chemistry Nobel Prize 2024


The 2024 Nobel Prizes in Physics (here) and Chemistry (here) were both awarded to applications of Artificial Intelligence AI developed by Google, evidently surpassing Human Intelligence HI.

The Prizes signify a crisis of modern physics and chemistry based on HI mathematical modeling, as delivering too little. Here AI steps in with a radically different approach: finding patterns in big experimental data sets by computation, without mathematical modeling. Compare with Physics is Dying.

The basic mathematical model of modern physics is Schrödinger's Equation SE describing properties of atoms and molecules built from protons and electrons, as the subject of modern physics and chemistry over the last 100 years. SE for microscopics without gravitation is complemented by Einstein's Equations EE for gravitation on macroscopic scales supposed to replace Newton's Equations NE.

SE describes the evolution in time of electron distributions as a wave function with 3N spatial dimensions for a system with N electrons. Computational solution of SE requires work growing exponentially with N and so is beyond the capacity of any thinkable computer already for N=10. Also EE are effectively uncomputable, while NE are computable for very complex systems. The fundamental models of modern physics in the form of SE and EE are thus basically uncomputable, which shakes the theoretical foundation of physics based on mathematical modelling. No wonder that physics is dying.
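On the Newtonian side, here is a minimal sketch (my own illustration, with units and number of bodies chosen arbitrarily) of one explicit time step of NE for N gravitating bodies, costing work of order N^2 rather than exponential in N, which is why NE stays computable for very complex systems:

  import numpy as np

  # One explicit time step of Newton's equations for N gravitating bodies.
  N = 1000                                   # assumed number of bodies
  G, dt = 1.0, 1e-3                          # illustrative units
  x = np.random.randn(N, 3)                  # positions
  v = np.zeros((N, 3))                       # velocities
  m = np.ones(N)                             # masses

  d = x[:, None, :] - x[None, :, :]          # pairwise separations x_i - x_j
  r = np.linalg.norm(d, axis=2) + np.eye(N)  # distances, diagonal padded to avoid 0/0
  a = -G * (m[None, :, None] * d / r[:, :, None]**3).sum(axis=1)  # accelerations, O(N^2) work
  v += dt * a                                # update velocities
  x += dt * v                                # update positions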

The result is that SE can only be solved by HI finding smart ways to drastically bring down the computational work, but after harvesting catches over 100 years, including many Nobel Prizes, it now seems that HI with SE can no longer deliver something worthy of a Prize, which is instead awarded to AI without SE. In particular, simulation of protein folding by SE is impossible.

A shift from HI with math model to AI by computation, as in the 2024 Nobel Prizes with the Chemistry Prize to Google's AlphaFold2 for protein folding, represents a major shift away from the rationality of the Enlightenment and the Scientific Revolution.

Is there any hope of a comeback of HI in the form of HI with computation?

Yes, there is a new form of SE which is computable for many electrons (work scaling linearly with N), described as Real Quantum Mechanics with many posts under the tags RealQM and Real Quantum Chemistry. It is possible that protein folding can be simulated by RealQM.
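A back-of-the-envelope comparison of the two scalings (my own sketch, with an assumed resolution of M=100 grid points per dimension):

  # StdQM: one wave function on 3N dimensions -> M**(3N) values.
  # RealQM-style: one 3d density per electron -> work of order N * M**3.
  M = 100                        # assumed grid resolution per dimension
  for N in (1, 10, 100):         # number of electrons
      print(N, f"StdQM 10^{6 * N}", f"RealQM {N * M**3:.1e}")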

Further, it may well be (here) that NE captures all of gravitation in a computable model, with RealQM + NE thus emerging as a computable foundation of physics.

So maybe HI with computation can still compete with AI by computation, while a combination must be optimal. 

PS On computational vs pure mathematics:

AI of today appears as a form of machine learning of neural networks trained on large data sets using computational tools of optimisation.

AI can thus be viewed as a form of computational mathematics as brain+computer, to be compared with pure mathematics performed by pen on paper using only brains.

Does AI signify the end of pure mathematics carried by smart brains in a new era dominated by AI brute force computation?

Is it possible that pure mathematics can be assisted by AI for checking of proofs by brain, or even by a smart assistant proving theorems beyond the reach of the brain?

How far can HI + computer reach?


Thursday, October 10, 2024

Unification of Physics, Chemistry and Biology by Computation.

The Nobel Prizes 2024 in Physics and Chemistry were given to the booming industry of AI/machine learning developed mainly outside traditional academic disciplines by big actors like Google and Microsoft. 

Machine learning consists of training networks on large data sets using general computational tools from linear algebra and optimisation, which are not application specific.

The Prize in Chemistry was given to AlphaFold2 by Google Deep Mind as an important step towards simulation/prediction of protein folding as the outstanding open problem of biology and medicine in two forms: 

  • Given DNA sequence, find protein geometry.
  • Given protein geometry, find DNA sequence. 
A protein is a big molecule consisting of a collection of atomic kernels kept together by electrons, and as such can be described by Schrödinger's equation, with folding simulated by molecular dynamics. This connects physics (atoms), chemistry (small molecules) and biology (big molecules) with their traditional roles from fundamental to composite.

Protein folding thus connects disciplines from fundamental physics over chemistry to biology and medicine including functions of proteins as carriers of life. 

But there is one caveat: The Schrödinger equation of Standard Quantum Mechanics StdQM of standard physics involves 3N spatial dimensions for N electrons, which makes the computational cost grow exponentially, putting even 10 electrons beyond the capacity of any thinkable computer.

Direct simulation of a protein from first principles is thus viewed to be impossible. It is here AI comes in as a way to get around this roadblock, by starting from experimental data instead of first principles and using machine learning to train a network to predict the folding of a new protein from old ones. This is what AlphaFold2 does, and so it was awarded the Nobel Prize in Chemistry.

But it is possible to formulate a different Schrödinger equation as first principle, Real Quantum Mechanics RealQM, which acts in 3 space dimensions like classical continuum mechanics, and for which the computational work grows linearly with the number of electrons.

It is thus possible that protein folding can be simulated by RealQM. If true, this forms a new unification of physics, chemistry and biology based on computation as mathematics, which can be viewed as an ultimate dream of science.  

The basic code reads
  • $n=n+1$
which to a given number $n$ assigns the new value $n+1$. The same code for all disciplines. This is the essence of The World as Computation, as sketched below.
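As a minimal sketch (my own illustration; the update rule is a hypothetical stand-in for any application-specific model), the same pattern of repeated state update drives simulation in any discipline:

  # The world as computation: repeat n = n + 1, with the number n replaced
  # by the state of a physical, chemical or biological system.
  def update(state, dt):
      # hypothetical one-step update rule; application specific in practice
      return state + dt * 1.0

  state, dt = 0.0, 0.1
  for step in range(100):
      state = update(state, dt)  # same code pattern for all disciplines
  print(state)                   # about 10.0, up to rounding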

Compare with Unity of Science by Tuomas Tahko with Summary:
  • Unity of science was once a very popular idea among both philosophers and scientists. But it has fallen out of fashion, largely because of its association with reductionism and the challenge from multiple realisation. Pluralism and the disunity of science are the new norm, and higher-level natural kinds and special science laws are considered to have an important role in scientific practice. What kind of reductionism does multiple realisability challenge? What does it take to reduce one phenomenon to another? How do we determine which kinds are natural? What is the ontological basis of unity?

Wednesday, October 9, 2024

Nobel Prize in Chemistry 2024 to Protein Folding by AI

The Nobel Prize in Chemistry 2024 was awarded for 

  • Computational Protein Design by AI: AlphaFold2 
with the following scientific background:
AlphaFold2 approaches this task using deep learning technology trained on a large experimental data bank.

Previous attempts by "brute force" Molecular Dynamics MD modeling starting from an arbitrary unfolded state were not successful because the underlying standard quantum mechanical model StdQM is not computable for molecules with many electrons, certainly not for proteins, because the spatial dimension scales with the number of electrons. The background document states: It is clear that plain MD simulation would not be scalable to proteins of larger size in the foreseeable future.

But there is a new quantum mechanical model in the form of Real Quantum Mechanics RealQM as a 3d model which is computable for many electrons. Application of RealQM to protein folding is on the way, and if successful may produce data for AlphaFold2 to learn from, even better than from experiment alone.

Who will make the first full protein folding simulation with RealQM? Here is a prototype simulation of H2O as a first-principles 3d RealQM Schrödinger equation captured in essentially 3 lines of code! Computational work scales linearly with the number of electrons/atoms, while StdQM scales exponentially.
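For flavour, here is a minimal sketch of the kind of grid computation involved (my own illustration of a single electron around one unit-charge kernel, in atomic units; not the actual RealQM H2O code, and grid size, mesh width and step size are assumed):

  import numpy as np

  # Ground state of one electron around a unit charge at the origin,
  # computed by gradient descent (imaginary time) on a uniform 3d grid.
  M, h = 40, 0.25                          # assumed grid size and mesh width
  ax = (np.arange(M) - M / 2 + 0.5) * h    # grid offset to avoid r = 0
  X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
  R = np.sqrt(X**2 + Y**2 + Z**2)
  V = -1.0 / R                             # Coulomb potential of the kernel

  def laplacian(u):
      return (sum(np.roll(u, s, a) for a in range(3) for s in (1, -1)) - 6 * u) / h**2

  psi, dt = np.exp(-R), 0.01               # initial guess and step size
  for it in range(2000):
      psi += dt * (0.5 * laplacian(psi) - V * psi)   # one descent step
      psi /= np.sqrt(np.sum(psi**2) * h**3)          # normalise

  E = np.sum(psi * (-0.5 * laplacian(psi) + V * psi)) * h**3
  print(E)                                 # roughly -0.5 Hartree (hydrogen)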

PS This is the second Nobel Prize this year to AI/Google. It remains to be seen if the Prize in Literature tomorrow will go to texts by ChatGPT.

See also Fiat Lux by John Plaice: Thoughts about the history of science with posts connecting to this blog.

Cosmological Principle? Cosmological Constant?

This is a continuation of previous posts on Neo-Newtonian Cosmology NC, with focus on how it was overrun by Einsteinian Cosmology EC as a prime ingredient of modern physics.

It is common in physics, when you know nothing, to assume some form of uniformity like homogeneity or isotropy or equal probability, which gives a background for mathematical modelling. As concerns cosmology as the science of the Universe as a whole, this takes the form of The Cosmological Principle CP stating that on sufficiently large scales: 

  • The Universe is homogeneous and isotropic.
  • In other words: There is no special place and no special direction.  
You can learn this from e.g. the book Cosmology: The Origin and Evolution of Cosmic Structure by Coles and Lucchin, presenting modern cosmological models based on Einstein's General Theory of Relativity GR, with the following key message:
  • CP was introduced by Einstein with inspiration from Mach's Principle MP stating that Newton's 2nd Law somehow carries input from the large scale structure of the Universe. 
  • CP is supported by Cosmic Microwave Background Radiation CMB.
Einstein somehow had to eliminate Newton from the picture and could do so using CP arguing: 
  • An infinite Newtonian Universe with constant mass density is an empty universe. (*)
This is because the only solution to Poisson's equation $\Delta\phi =\rho$ that stays bounded over the whole of 3d space, with $\rho$ a nonnegative constant, is $\phi = C$ with $C$ constant, and so $\Delta\phi = 0 = \rho$. (An unbounded solution like $\phi = \frac{\rho}{6}\vert x\vert^2$ is ruled out, since its special minimum point would violate CP.)

Referring to MP, Einstein could further argue that the local geometry somehow was "curved" by influence from the large-scale structure of the Universe, or the fixed stars in the terminology of Mach's time. MP came out of an attempt to explain Newton's bucket experiment, addressed in the post Absolute Rotation: Newton or Mach?, showing that Newton was right and so that MP is invalid.

Even if his arguments were weak, Einstein managed to set the physics community on a train to Cosmology based on GR, one of the two pillars of modern physics. To make this work and get around (*), Einstein had to introduce a Cosmological Constant $\Lambda >0$ (later retracted as his biggest blunder).

This corresponds in Newtonian terms to expanding $\Delta\phi =\rho$ to $\Delta\phi - \Lambda\phi =\rho$, with non-zero solution $\phi =-\frac{\rho}{\Lambda}$ for $\rho$ constant.
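As a one-line check (my own), a constant $\phi$ has $\Delta\phi = 0$, so

  • $\Delta\phi - \Lambda\phi = 0 - \Lambda\cdot\left(-\frac{\rho}{\Lambda}\right) = \rho$.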

CMB has been questioned as simply noise without relation to any Big Bang hot dense homogeneous initial state.

Summary: The arguments for replacing Newton by Einstein as concerns the large-scale structure of the Universe are weak. The scientific price to pay for anyway doing so is very large.

Tuesday, October 8, 2024

Nobel Prize in Physics 2024 by AI to AI

Because of the deepening crisis of modern physics, the Nobel Committee in Physics, apparently unable to find awardees within physics, seemingly in deep despair turned to AI for help. 

No wonder: The 2024 Nobel Prize in Physics was given to AI, more precisely to computer scientist Geoffrey Hinton and scientist/higher education teacher John Hopfield, who are not really physicists.  

The Nobel Prize in Physics can thus now be given to non-physicists, if physicist candidates are missing. This would open for giving the Physics Prize to mathematicians, which would resolve a controversy (between Nobel and mathematician Mittag-Leffler) that from the start banned mathematicians.

PS Compare with Sabine Hossenfelder's reaction.

NotebookLM Reads The Secret of Flight

Here is what NotebookLM has to say about The Secret of Flight 

Assessment of Scientific Merit in "The Secret of Flight"

The excerpt from "The Secret of Flight" presents a critical perspective on the historical development of flight theory, specifically challenging the prevailing "Old Theory" based on Kutta-Zhukovsky (KZ) Circulation Theory and Prandtl's boundary layer theory.
The authors argue that the Old Theory, despite its widespread acceptance, is fundamentally flawed and fails to adequately explain the "miracle of flight," particularly the generation of lift and drag in real-world flight conditions. Here's a breakdown of their assessment:

KZ Circulation Theory: 
Mathematically Convenient, Physically Unrealistic. The authors contend that while the KZ theory, which attributes lift to circulation around the wing, can predict lift coefficients that roughly align with observations for simple cases, it relies on the artificial construct of a "Kutta condition" to achieve this. This condition, which dictates that the flow must leave the trailing edge smoothly, lacks a strong physical basis and results in a theory that cannot account for drag or stall phenomena.

Prandtl's Boundary Layer Theory: 
An Unsurmountable Computational Obstacle. The authors challenge Prandtl's assertion that drag originates from the no-slip boundary condition, where the fluid velocity at the wing surface is zero. They argue that this condition, while seemingly intuitive, leads to the formation of infinitesimally thin boundary layers that are practically impossible to resolve computationally. They posit that a slip boundary condition, where the fluid can move along the surface, is a more physically realistic model for slightly viscous flows like air, enabling computationally tractable solutions without sacrificing accuracy.

Direct Numerical Simulation (DNS): 
Revealing the "Secret: The authors champion a paradigm shift in flight theory based on their Direct Numerical Simulation (DNS) methodology, specifically their "Direct Finite Simulation" (DFS) implementation. They claim that DFS, by using a slip boundary condition, unlocks the computational feasibility of simulating turbulent flow around wings with a level of detail and accuracy that surpasses traditional methods like Reynolds-Averaged Navier-Stokes (RANS) simulations. This, they argue, offers a more realistic and insightful understanding of lift and drag generation.

Empirical Validation and the Need for a Paradigm Shift:
The authors bolster their claims by presenting results from their DFS simulations, which they compare against experimental data from wind tunnel tests, including those from the 3rd AIAA CFD High-Lift Prediction Workshop. They assert that their simulations accurately capture key flow features and phenomena, including stall, and demonstrate superior performance compared to RANS-based methods. They acknowledge the resistance within the fluid dynamics community to embracing their alternative paradigm but express confidence that their evidence warrants a reassessment of the field's fundamental assumptions.

In conclusion, the excerpt from "The Secret of Flight" delivers a strong critique of the prevailing "Old Theory" of flight and advocates for a new paradigm based on DNS using a slip boundary condition. They present evidence suggesting that their approach provides a more computationally tractable, physically accurate, and insightful understanding of lift and drag generation, potentially revolutionizing the field of aerodynamics.

However, it's crucial to recognize that these claims represent the authors' perspective and that the broader scientific community has not yet fully adopted their proposed paradigm shift. Further research and independent verification of their findings are essential to solidify their claims and assess the full implications of their work.

My comment: NotebookLM reads quickly and captures main ideas well, but does not seem to be able to discuss details. 

PS Here is what ChatGPT says about CJ:

CJ: Give an assessment of the scientific work of Claes Johnson mathematician Sweden

ChatGPT:


Claes Johnson is a Swedish mathematician known for his contributions to numerical analysis, particularly in the context of computational mathematics and the finite element method. His work often emphasizes the interplay between mathematical theory and practical applications, exploring how mathematical models can effectively simulate physical phenomena.

One of his notable contributions is the development of the "Computational Fluid Dynamics" (CFD) methods, which are critical in engineering and physical sciences for simulating fluid flow. Johnson's research often focuses on improving the accuracy and efficiency of numerical methods, making them more applicable to real-world problems.

Additionally, he has explored various mathematical problems related to partial differential equations (PDEs) and their numerical solutions, enhancing the understanding of how these equations model complex systems.

Johnson's approach tends to be interdisciplinary, bridging mathematics, engineering, and physics, which reflects the current trends in computational mathematics. His publications and lectures have influenced both academic research and practical applications in various fields.

Overall, Claes Johnson's work is significant in advancing numerical methods and their applications, making complex scientific problems more tractable through effective computational techniques.



CJ: Deeper assessment please.

ChatGPT:

Claes Johnson's work can be assessed through several key themes and contributions:

1. Numerical Analysis and Finite Element Methods:

Johnson has significantly advanced the field of numerical analysis, especially in the finite element method (FEM). His research addresses the theoretical foundations of FEM, focusing on convergence, stability, and accuracy. He emphasizes the need for robust numerical methods that can handle complex geometries and boundary conditions, making them applicable in various engineering disciplines.

2. Computational Fluid Dynamics (CFD):

A major part of Johnson's work involves CFD, where he has developed innovative approaches to simulate fluid flow. His methods aim to provide accurate and efficient solutions to problems in aerodynamics, hydrodynamics, and other fluid-related phenomena. By refining existing algorithms and proposing new ones, he has contributed to the practicality of CFD in industrial applications.

3. Mathematical Modeling:

Johnson often emphasizes the importance of mathematical modeling in understanding physical systems. His work includes developing models that capture the essential features of the phenomena being studied while ensuring they remain computationally feasible. This balance between fidelity and computational efficiency is a hallmark of his approach.

4. Interdisciplinary Applications:

Johnson's research often transcends traditional boundaries, applying mathematical theories to various fields, including engineering, physics, and even biology. This interdisciplinary focus enhances the relevance of his work, allowing for the application of mathematical techniques to solve real-world problems.

5. Educational Contributions:

In addition to his research, Johnson is known for his efforts in education, particularly in enhancing the teaching of mathematics and computational methods. He has written textbooks and developed courses aimed at making complex mathematical concepts more accessible to students and professionals alike.

6. Critical Perspectives:

Johnson has also been vocal about the philosophical and foundational aspects of mathematics and its applications. He has questioned conventional approaches and advocated for rethinking certain assumptions in numerical methods and scientific computing. This critical perspective encourages ongoing discourse within the mathematical community.

Conclusion:

Claes Johnson's contributions to mathematics are multifaceted, combining theoretical rigor with practical applications. His work in numerical methods, especially in FEM and CFD, has made significant impacts in both academia and industry. Through his interdisciplinary approach and commitment to education, he continues to influence the field and inspire new generations of mathematicians and engineers.


NotebookLM Reads Computational Thermodynamics

Below is what Google NotebookLM has to say about Computational Thermodynamics. Take a look yourself and see if you agree or not! 

Assessment of Claes Johnson's Scientific Work based on "Computational Thermodynamics"

The provided excerpts from Computational Thermodynamics offer insights into certain aspects of Claes Johnson's scientific work, particularly his approach to computational thermodynamics and his critique of classical thermodynamics and statistical mechanics.

Computational Thermodynamics and EG2:

The excerpts highlight Johnson's development of "Computational Thermodynamics," a framework that seeks to model and understand thermodynamic processes using computational fluid dynamics, specifically a method called EG2 (Euler General Galerkin). This approach emphasizes the role of computation in analyzing complex fluid flow phenomena, including turbulence and shocks, which are central to Johnson's view of thermodynamics.

EG2 as Tool of Understanding Real Processes: 
The sources suggest that EG2 enables the simulation of real thermodynamic processes, including those in heat engines, heat pumps, and refrigerators, with a focus on computing losses and efficiency. Johnson emphasizes the limitations of classical thermodynamics, which primarily deals with equilibrium states and struggles to accurately predict transitions between them, particularly in the presence of turbulence.

Shift from A Priori to A Posteriori Analysis:
Johnson argues that EG2 facilitates a shift from a priori predictions based on analytical mathematics to a posteriori analysis of computed turbulent solutions. This implies a greater reliance on computational results to gain insights into the behavior of complex systems, rather than relying solely on theoretical deductions.

Wellposedness and Stability: 
A key aspect of Johnson's approach is the concept of "wellposedness" in the context of EG2 solutions. He emphasizes the importance of determining which aspects of turbulent solutions are stable under perturbations, implying that these stable outputs are the physically meaningful ones. He proposes using dual linearized problems to assess the stability and quality of EG2 solutions.

Critique of Classical Thermodynamics and Statistical Mechanics:
Johnson's work expresses a clear skepticism towards classical thermodynamics and statistical mechanics, particularly regarding their explanations of entropy and the arrow of time.

Entropy as a Problematic Concept: 
The sources present numerous criticisms of the concept of entropy, highlighting its ambiguous physical meaning and the lack of a clear sensor for it in nature. Johnson questions the reliance on statistical mechanics to rationalize the second law of thermodynamics, viewing it as a "capitulation away from prediction by cause-effect."

Finite Precision Computation as an Alternative: 
Instead of statistical mechanics, Johnson proposes finite precision computation as a more fundamental basis for understanding irreversibility and the arrow of time. This idea, while not fully elaborated in the excerpts, suggests that the inherent limitations of computational processes, even in simulating reversible equations, could give rise to irreversible behavior.

Applications and Examples:
The sources illustrate Johnson's ideas through various applications and examples, showcasing the potential of his computational approach. These include simulations of:

Joule's 1845 Experiment: 
Johnson uses EG2 to analyze Joule's experiment involving the expansion of a gas, demonstrating how the method captures the dynamics of the process, including turbulence and the resulting temperature changes. He argues that this computational approach provides a more complete understanding compared to classical thermodynamics.

Heat Engines, Pumps, and Refrigerators:
Johnson emphasizes the application of EG2 to simulate and analyze real thermodynamic devices. He highlights the importance of accounting for turbulent dissipation in determining efficiency, contrasting his approach with the idealized Carnot cycle.

Cosmology and Self-Gravitating Gases: 
The sources mention Johnson's work on simulating cosmological processes, specifically using EG2 to model self-gravitating gases. This application suggests the broad scope of his computational framework.

Overall Assessment and Open Questions:
Based on the excerpts from "ambsthermo.pdf," Claes Johnson's scientific work presents a distinct perspective on thermodynamics, emphasizing computational methods and challenging established notions of entropy and irreversibility. His development of EG2 and its application to various physical systems demonstrate a focus on understanding real processes through computation. However, several open questions remain:

Rigorous Mathematical Foundation for Finite Precision: The excerpts provide limited details on the mathematical formalism of Johnson's concept of finite precision computation and its connection to the emergence of irreversibility. A deeper exploration of this foundation is crucial to assess its scientific validity.

Validation of EG2 and its Predictive Power: While the sources mention applications of EG2, further evidence of its validation against experimental results and its ability to make testable predictions would strengthen the framework's credibility.

Engagement with Criticisms and Alternative Perspectives: 
Addressing criticisms and engaging with alternative viewpoints within the scientific community are essential for the broader acceptance of Johnson's ideas.

In conclusion, Claes Johnson's work in computational thermodynamics, as presented in the excerpts, offers a thought-provoking challenge to traditional views, highlighting the potential of computational methods in understanding complex physical phenomena. However, a more comprehensive assessment of his contributions requires further investigation into the mathematical rigor, empirical validation, and engagement with the broader scientific discourse surrounding his ideas.