torsdag 31 oktober 2024

Testimony by Leading Physicist Leonard Susskind


Leading theoretical physicist Leonard Susskind sums up his experience at the end of his career as follows:

  • I can tell you with absolute certainty that String Theory is not about the world we live in.
  • We have to describe our world. That is our purpose.
  • So we need to start over. We have a lot of work to do. I do not know of any young people doing that. 
  • String Theory combines quantum mechanics and relativity in a very beautiful way. There is no other theory that reconciles quantum mechanics and relativity. 
  • String Theory does not describe the real world. 
  • Is there anything else? Not to my knowledge. Wolfram's hypergraph is a failure. I do not know who Eric Weinstein is. Penrose believes in all sorts of stuff that I do not believe in.
  • So we need to start over again.
  • To students: Think for yourself and do not listen to old people. What should you work on? I don't know and if I knew I would be working on it myself. Don't be afraid, follow your curiosity. 
  • If you do not think that you can do that, you are probably in the wrong field.
  • So we need to start over...
What can we get out of this? Well, yet another confession that modern theoretical physics is in deep crisis.

Anything new? Yes, Real Quantum Mechanics is in perfect harmony with Neo-Newtonian gravitation/cosmology. Don't be afraid to check them out. Do not listen to old physicists. 

måndag 28 oktober 2024

The Hard Problem of Consciousness: Feelings?

Continuation on recent posts on consciousness.

David Chalmers in The Character of Consciousness identifies Easy Problems of Consciousness EPC (see PS below) as those that seem directly susceptible to the standard methods of cognitive science, whereby a phenomenon is explained in terms of computational or neural mechanisms. The Hard Problems of Consciousness HPC are those that seem to resist those methods, more precisely:

  • The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. 
  • This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. 
  • Then there are bodily sensations from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion; and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.
  • What makes the hard problem hard and almost unique is that it goes beyond problems about the performance of functions. To see this, note that even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?
Chalmers evidently connects feelings to experience: An organism able to observe/experience things while completely lacking any form of feelings would not be considered conscious, but more like a robot. 

We may have the idea that a fish is not conscious and in particular has no feeling of pain, and so can be killed without any remorse. On the other hand, a human being, often overwhelmed by feelings, would be the prime example of a conscious being. 

What to say about this? Well, a simple idea is that we are equipped with feelings to help us survive in a variable environment. We feel pain when hit by an enemy arrow and so seek protection behind a shield, and we feel love to reproduce, or more generally seek pleasure of both body and soul. The role of feelings is to make us focus on something important for survival, not merely as a note from the bank that our account is in the red, but as a hit in the solar plexus.

Of course any living organism could be expected to benefit from feelings and so from this perspective even a simple flatworm can feel pain and so be conscious in this sense.

It seems that HPC does not select human consciousness as special; what is special about human consciousness instead connects to the EPC of cognitive capacities. 

I have been led to an idea of consciousness as a brain representation/model of an exterior reality, constructed from sensory input/experience from the body, which ultimately is connected to sensations of pain or pleasure as lack of pain. The mind model then has not only an exterior source, but also a bodily representation as (stomach) feeling.

Many books have been written on both EPC and HPC without any clear agreement. 

Again: Is self-consciousness an emergent cultural phenomenon? If you are fully occupied with survival of the group, do you have time/need for self-introspection?

PS The easy problems of consciousness, according to Chalmers, include those of explaining the following phenomena:
  •  ability to discriminate, categorize, and react to environmental stimuli
  •  integration of information by a cognitive system
  •  reportability of mental states
  •  ability of a system to access its own internal states
  •  focus of attention
  •  deliberate control of behavior
  • difference between wakefulness and sleep

lördag 26 oktober 2024

Consciousness: From Unity to Diversity to Unity

We read in the book A Universe of Consciousness by Edelman and Tononi as intro to Chapter 3: Everyman's Private Theatre: Ongoing Unity, Endless Variety:

  • Our strategy for explaining the neural basis of consciousness is to focus on the properties of conscious experience that are the most general, that is, that are shared by every conscious state. 
  • One of the most important of these properties is integration or unity. Integration refers to the fact that a conscious state cannot be subdivided at any one time into independent components by its experiencer. This property is related to our inability consciously to do more than two things at once, such as adding up a check while carrying on a heated argument. 
  • Another key, and apparently contrastive, property of conscious experience is its extraordinary differentiation or informativeness: At any moment, one out of billions of possible conscious states can be selected in a fraction of a second. We thus have the apparent paradox that unity embeds complexity—the brain must deal with plethora without losing its unity or coherence. Our task is to show how it does so.
We learn that unity and differentiation are (apparently contrastive) important aspects of consciousness. 

We find precisely these aspects in the connection between gravitational potential $\phi (x,t)$ and mass density $\rho (x,t)$ as the basic relation of Neo-Newtonian Cosmology in mathematical terms expressed as follows, with $x$ a Euclidean space variable and $t$ a time variable:
  • $\Delta\phi (x,t)=\rho (x,t)$     for all $x$ and $t$         (1)
  • $\rho (x,t) := \Delta\phi (x,t)$  for all $x$ and $t$        (2)
where $\Delta$ is the Laplacian differential operator acting on the space variable $x$ and $:=$ is computer code for assignment. We thus express the connection between $\phi$ and $\rho$ in two different forms: 

In (1) $\phi (x,t)$ appears as a global solution for all $x$ and given $t$ of the Poisson-Laplace equation $\Delta\phi =\rho$ with $\rho$ given, which can be expressed as an integral over all of space:
  • $\phi (x,t) =-\frac{1}{4\pi}\int\frac{\rho (y,t)}{\vert x-y\vert}dy$       (3)
In (2) $\rho (x)$ for a given $x$ is assigned the value $\Delta\phi (x)$ involving given local values of $\phi (y)$ for $y$ close to $x$. 

We understand that (1) represents a global summation process as integration, while (2) is a local process of extraordinary differentiation or informativeness. 
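The contrast between the global form (1)/(3) and the local form (2) can be checked numerically. A minimal NumPy sketch, assuming a smooth trial potential $\phi = e^{-\vert x\vert^2}$ (my choice, not from the text), whose Laplacian is known by hand; the local stencil evaluation of (2) and the global sum approximating the integral (3) at $x=0$ should both reproduce the analytic values:

```python
import numpy as np

# Trial potential phi(x) = exp(-|x|^2) with analytic rho = Laplacian(phi)
L, n = 6.0, 48                      # half-width of box, points per dimension
xs = np.linspace(-L, L, n)          # even n: grid avoids the origin r = 0
h = xs[1] - xs[0]
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
r2 = X**2 + Y**2 + Z**2
phi = np.exp(-r2)
rho = (4*r2 - 6)*np.exp(-r2)        # Laplacian of exp(-r^2), computed by hand

# (2): local assignment rho := Laplacian(phi) via a 7-point stencil
i = n // 2                          # an interior grid point near the center
lap = (phi[i+1, i, i] + phi[i-1, i, i] + phi[i, i+1, i] + phi[i, i-1, i]
       + phi[i, i, i+1] + phi[i, i, i-1] - 6*phi[i, i, i]) / h**2

# (1)/(3): global integral phi(0) = -1/(4 pi) * sum of rho(y)/|y| dV
phi0 = -np.sum(rho / np.sqrt(r2)) * h**3 / (4*np.pi)
# exact value is phi(0) = 1; the sum agrees to a few percent on this grid
```

The stencil touches only six neighbours of a point (local, fast), while the sum for phi0 runs over the whole grid (global, slow), which is exactly the contrast between (2) and (1).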

We thus find a mind-body relation in terms of $\phi$-$\rho$ expressed in (1) + (2) with $\phi$ representing global unity/mind/logos and $\rho$ local diversity/body/spirit. 

Having the same $t$ on both sides of (3) formally requires instant action at a distance, but since the kernel $\frac{1}{\vert x-y\vert}$ in (3) decays quickly with increasing $\vert x-y\vert$, instant action at a distance is not required. In other words, (1) can be slow, while (2) as a local process can be fast, and must be to fit observation. 

Altogether, we see that the general features of consciousness presented by Edelman and Tononi can be expressed in precise mathematical form in a cosmology context. 

Unity of consciousness restricts attention to only one thing at a time, which can be seen as a limitation to performing (1) + (2) for only one mass density distribution/thing at a time, because of limited brain processing power.

Incompressible Turbulent Fluid Mechanics vs Mind-Body

     Da Vinci contemplating incompressible fluid flow to solve the hard problem of consciousness.

Recall the previous post and the basic model of Neo-Newtonian Cosmology in terms of gravitational potential $\phi$ as primordial, mass density $\rho$, momentum $m$ and material velocity $u=\frac{m}{\rho}$, all depending on a Euclidean space coordinate $x$ and time coordinate $t$, with the dot on top denoting differentiation with respect to time, as a model describing all of celestial mechanics:

  • $\rho = \Delta\phi$                                    (inverse square law in differential form)
  • $\dot\rho +\nabla\cdot m=0$                       (conservation of mass)
  • $\dot m +\nabla\cdot (um)+\rho\nabla\phi=0$    (conservation of momentum: Newton's 2nd Law)  

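The three equations can be stepped forward in time. A minimal 1d periodic sketch, where the spectral derivatives and the forward-Euler time stepping are my own discretization choices, not from the text; on a periodic domain the solve of $\Delta\phi=\rho$ needs a mean-free source, so the mean of $\rho$ is subtracted:

```python
import numpy as np

n, Lx, dt = 128, 2*np.pi, 1e-3
x = np.linspace(0, Lx, n, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(n, Lx/n)
k2 = np.where(k == 0, 1.0, k**2)      # guard the k = 0 mode in the solve

def ddx(f):                           # spectral d/dx on the periodic domain
    return np.real(np.fft.ifft(1j*k*np.fft.fft(f)))

rho = 1.0 + 0.1*np.sin(x)             # mass density
m = 0.05*np.cos(x)                    # momentum
mass0 = rho.sum()*Lx/n                # total mass at t = 0

for _ in range(200):
    # rho = Laplacian(phi): solve for phi spectrally from a mean-free rho
    phi = np.real(np.fft.ifft(-np.fft.fft(rho - rho.mean())/k2))
    u = m/rho                         # material velocity
    drho = -ddx(m)                    # conservation of mass
    dm = -(ddx(u*m) + rho*ddx(phi))   # conservation of momentum
    rho, m = rho + dt*drho, m + dt*dm

mass = rho.sum()*Lx/n                 # conserved by the continuity equation
```

Because the spectral derivative of a periodic function has zero mean, the continuity equation conserves total mass to machine precision here, whatever the force term does.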
We compare with the basic model of all of (inviscid) incompressible turbulent fluid flow as Euler's equations:

  • $\nabla\cdot u=0$                                        (incompressibility)
  • $\dot\rho +\nabla\cdot m=0$                       (conservation of mass)
  • $\dot m +\nabla\cdot (um)+\nabla p=0$    (conservation of momentum: Newton's 2nd Law)  

where $p$ is pressure. Here incompressibility is imposed as a stipulation rather than a law of physics, but it can be realised through the physical law:

  • $\Delta p = -C\nabla\cdot u$                        (pressure law)

where $C$ is a large positive constant forcing $\nabla\cdot u$ to be small. We see that the pressure $p$ and the gravitational potential $\phi$ serve similar roles in Newton's 2nd Law steering the dynamics, and that both appear as solutions to a Poisson equation with infinite speed of propagation of effects from source to solution. We also see that the pressure has the double role of driving the flow while securing incompressibility.  
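The pressure law can be checked numerically: if $\Delta p = -C\nabla\cdot u$, then $\nabla\cdot (u+\nabla p/C)=\nabla\cdot u+\Delta p/C=0$, so the corrected velocity is exactly divergence-free. A 2d periodic sketch, where the spectral discretization and the sample velocity field are my own choices:

```python
import numpy as np

n, L = 64, 2*np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2*np.pi*np.fft.fftfreq(n, L/n)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = np.where(KX**2 + KY**2 == 0, 1.0, KX**2 + KY**2)

def d(f, K):                          # spectral partial derivative
    return np.real(np.fft.ifft2(1j*K*np.fft.fft2(f)))

# a velocity field (u, v) with substantial divergence
u = np.sin(X)*np.cos(Y) + 0.3*np.sin(2*X)
v = -np.cos(X)*np.sin(Y)
div0 = d(u, KX) + d(v, KY)

# pressure law: Laplacian(p) = -C div(u), solved spectrally
C = 1e3
p = np.real(np.fft.ifft2(np.fft.fft2(-C*div0)/(-K2)))

# the corrected velocity u + grad(p)/C is divergence-free
u2, v2 = u + d(p, KX)/C, v + d(p, KY)/C
div1 = d(u2, KX) + d(v2, KY)          # zero to machine precision
```

The correction step is the standard projection method in disguise, with the pressure law supplying the projection.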

We thus see that both $p$ and $\phi$ act as omnipresent immaterial controls of dynamics with instant connection to a whole material universe, as solutions to Poisson's equation with source of material form, thus as connection immaterial-material or mind-body in the setting of consciousness in recent posts.

fredag 25 oktober 2024

Gravitational Potential - Matter vs Mind - Body

This is a continuation of previous posts on the mind-body problem of consciousness.



In Neo-Newtonian Cosmology the gravitational potential $\phi (x,t)$ is immaterial, omnipresent and primordial assigning mass density $\rho (x,t)$ to matter by the action of the Laplacian differential operator $\Delta$ acting without delay on the Euclidean space coordinate $x$ for all time $t$:

  • $\rho (x,t) = \Delta\phi (x,t)$              (1)
Immaterial potential thus meets material mass density in (1) as instant top-down local assignment. Material matter moves by the gravitational force field $\nabla\phi$ according to Newton's laws of motion
  • $\dot u = -\nabla\phi$            (2a)
  • $\dot x =u$,                                        (2b)
where $u$ is material velocity, $x$ is position and the dot signifies differentiation with respect to time. There is also a feedback from the mass density distribution $\rho$ to the gravitational potential as solution of the equation $\Delta\phi =\rho$, as a slow bottom-up global process.
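Equations (2a)-(2b) can be integrated directly. A minimal sketch for motion around a point mass, taking $\phi = -1/\vert x\vert$ with constants dropped (my normalisation, not from the text), stepped with symplectic Euler:

```python
import numpy as np

# integrate du/dt = -grad(phi), dx/dt = u for phi(x) = -1/|x|
x = np.array([1.0, 0.0])          # position: circular-orbit initial data
u = np.array([0.0, 1.0])          # velocity
dt = 1e-3
for _ in range(6283):             # roughly one orbital period, 2*pi/dt steps
    r = np.linalg.norm(x)
    u = u - dt*x/r**3             # (2a): -grad(phi) = -x/|x|^3
    x = x + dt*u                  # (2b): symplectic Euler uses the new u

radius = np.linalg.norm(x)        # stays near 1 on the circular orbit
```

Updating $x$ with the already-updated $u$ makes the scheme symplectic, so the orbit stays bounded instead of spiralling as with plain forward Euler.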

We can find a parallel in the connection between the immaterial omnipresent mind/soul and the material body of a human being. The mind top-down assigns meaning/life to all the cells making up the body. There is also a bottom-up process giving sensory feedback from the body to the mind. The mind is then different from the material brain and embraces the whole body, like the gravitational field embraces the Universe. 

The material brain would serve a crucial role of processing sensory input in an analog of (2) into planning and action, but the mind would stretch outside the brain.

It thus seems possible to relate ideas about morphic fields presented by Rupert Sheldrake to physics as a synthesis of immaterial and material or mind and body.  

It is natural to compare with Leibniz's idea of pre-established harmony as a form of parallelism between gravitational potential and mass density in the form $\Delta\phi =\rho$, where neither $\phi$ nor $\rho$ has clear primordial priority. They just go together in a perfect way without discussion. The above argument can also accommodate Leibniz's view.    

Picasso and His Model: Feelings

Let us see if the relation between Picasso/Painter and his Model can tell us something about the nature of consciousness, connecting to the previous post with a brain model of the exterior world as central concept (cf Nobel Prize in Medicine 2014).

We see Picasso looking at the Model to make a Painting, or model of the Model, on the canvas, which we can assume agrees more or less with some form of mental model in Picasso's brain. We see Picasso watching both the Model and the Painting to see if they agree and, if not, updating the Painting. 

We see that Picasso is very aware of the whole situation, with his mind switching between (i) looking at the Model, (ii) looking at the Painting, (iii) making a comparison by projecting the Painting on the Model and (iv) deciding the next brush stroke.  

Here (i) and (ii) connect to direct perception, while (iii) seems to depend on some higher-level evaluation and (iv) represents action after evaluation. 

More generally, we may think of watching a Landscape as making a model landscape in the brain which is projected on the Landscape and updated until fit, to make us experience the Landscape as something real outside us and not as its brain model.

The brain model of your body is projected back onto the body, so that you experience the pain from a wound at its physical location and not in the brain. This makes it possible to experience pain in a limb even if the limb has been amputated, thus as a phantom limb.

Where does consciousness come in? Well, of course in the form of awareness as readiness for input from (i) and (ii), but also in the evaluation (iii) and action (iv). 

What is missing, as the so-called hard problem of consciousness, is here Picasso's feelings for both Model and Painting. Picasso may be in love with the Model (pleasure) but may not yet be happy with the Painting (pain), and this is not captured in (i)-(iv). 

Does it matter? Probably so, because this gives Picasso the motivation to decide to paint the Model and then actually do it, under pain, to eventually reach a state of pleasure with a finished Painting. 

Of course, motivation is central to get anything done and so may be an important aspect of consciousness as a goal and road to reach the goal. This seems to connect to self-consciousness offering a mental model where the capacities of the Self can be tested to set goals.

A robot painter with AI can be envisioned to carry out (i)-(iv) in a mechanical way, and then without the burden of feelings, which often appear as bodily sensations even when not coming from the body, like love or fear felt in your chest. 

Ultimately, we may connect feelings to (early) sensory input of pleasure and pain, which would serve as a basis for motivation to seek pleasure. This is something a robot would miss, and so the goals for a robot would have to be set by some consciousness equipped with feelings. 

Is it possible to equip a robot with feelings?

 

torsdag 24 oktober 2024

What is Consciousness?



The present hype of AI brings up questions about consciousness:

  • Does ChatGPT express some form of consciousness?
  • What are basic features of consciousness?
  • What is the connection to intelligence?  
A simple intuitive idea of consciousness of an organism might include, all in the service of survival: 
  1. model of the external world formed in the brain and projected onto the world
  2. updated by sensory organs and kept in a memory
  3. guiding actions 
  4. to reach certain goals
  5. in a variable environment
  6. self-consciousness allowing self-evaluation.

If the goals are reached one could say that the organism has some form of intelligence. The organism could be biological from flatworm to human being or some mechanical robot with AI. 

The model would be used to predict changes in the environment requiring actions by the organism. The prediction would involve testing different actions in the model as simulations and choosing the best to meet the goal.

A self-driving car continuously updates a 3d model of a variable environment and takes actions to reach a certain destination, and so may be viewed as fulfilling 1-5, while a human being would also meet 6. ChatGPT would come near only to 1. 

A goal of seeking pleasure rather than pain would require sensory organs allowing the organism to feel pleasure and pain. This is so far not possible for mechanical robots (as far as we know), but maybe flatworms can. Pain is experienced in the brain and projected to the perceived origin of the sensory input. 

One may ask if self-consciousness is a cultural phenomenon, which is not fundamental in primitive cultures where group-consciousness may be more useful. Maybe the Greek gods served as some form of exterior self-consciousness.



Can Protein Folding be Computed?

Protein folding is a spontaneous process where a given unique string of amino acids folds itself in a solvent (water) into a macromolecule of unique 3d geometry, guided by 

  • formation of hydrogen bonds
  • hydrophobic interactions
  • van der Waals forces. 
Modern physics in the form of quantum mechanics comes with the message that protein folding as a form of molecular dynamics can be described by a Schrödinger Equation SE modeling a collection of atomic kernels held together by a collection of electrons. 

Computing solutions over time of SE starting with a given string would then produce a simulation of the folding process allowing the folded protein to be predicted from given string, and so also reversely a string to be predicted from given folded geometry. This would be immensely helpful for understanding of biological processes and drug design. 

But there is a big problem with such a grand scheme: Computational solution of SE is impossible because the work grows exponentially with the number of electrons, and so is beyond the capacity of any thinkable computer already for 10 electrons, while the true number may be 100,000. The reason is that SE involves $3N$ spatial dimensions for $N$ electrons. 
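The scaling argument can be made concrete by counting grid points, with g = 10 points per spatial dimension as my illustrative choice: a wave function in $3N$ dimensions lives on a grid of $g^{3N}$ points, while a model carrying one 3d density per electron needs only $N\cdot g^3$:

```python
g = 10                            # grid points per spatial dimension (coarse)

def se_gridpoints(N):             # Schroedinger equation: one 3N-dimensional grid
    return g**(3*N)

def per_electron_gridpoints(N):   # one 3d density per electron: linear in N
    return N * g**3

for N in (1, 2, 10, 100):
    print(N, se_gridpoints(N), per_electron_gridpoints(N))
# already N = 10 gives 10^30 points for SE, far beyond any computer,
# while the per-electron count is a modest 10^4
```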

Modern physics/chemistry thus has nothing to deliver as concerns computational protein folding, which is illustrated by the fact that the 2024 Nobel Prize in Chemistry was awarded to Artificial Intelligence AI for protein folding trained on large experimental data sets without use of SE theory. 

Instead of letting AI take over completely, let us give human intelligence another chance and ask if maybe there is some other mathematical model than SE that can describe the folding?

And yes, there is a candidate in the form of a different atomic model named Real Quantum Mechanics RealQM in the spirit of Schrödinger, as a system of non-overlapping electronic charge densities in 3d space geared by Coulombic interaction, for which the computational work grows linearly with the number of electrons. 

So there we are: It may be that protein folding is computable by RealQM. The computational process proceeds in time, where for a given configuration of atom kernels and electronic charge distributions in 3d, a new configuration is computed from Coulombic interactions through electric potentials, with work scaling linearly with the number of electrons. 

RealQM connects to ad hoc simplified versions of SE based on Electron Orbitals or Density Functional Theory, but RealQM is fundamentally different since it is based on the single new principle of non-overlapping charge densities in a parameter-free model. 

You find laptop computations with RealQM for atoms and molecules on RealQM and on this blog under tags RealQM and Real Quantum Chemistry including computer codes essentially consisting of three lines for update of kernel positions, charge densities and potentials.  

Who will do the first RealQM simulation of protein folding?

Recall that the crisis of modern physics is the result of not delivering anything new, in particular nothing for protein folding. We have identified the non-computability of SE as the big trouble. But this is just one aspect of the basic troubling feature of SE, namely that it is a non-physical model for which no convincing physical interpretation has been found, despite intense efforts by thousands of highly intelligent physicists ever since SE was formulated 100 years ago. 

The attractive aspect of SE is that it is quick to formulate, allowing physicists to speak with a loud voice about wave functions denoted by $\Psi$ as solutions to SE, while covering up that they are uncomputable and lack physical meaning. 

The result is that today the foundation of quantum mechanics is no longer a topic of study at physics departments, and is pursued only by some isolated enthusiasts at philosophy departments. This is clearly not an optimal situation. 

There is no reason for atomic physics models to lack either physical meaning or computability.

onsdag 23 oktober 2024

Can You Feel that Somebody is Looking at You?


This is a continuation of the previous post, connecting to the possibility put forward by Rupert Sheldrake that our minds are field-like, extend beyond our brains and can have effects beyond our brains, a very heretical thought from the point of view of conventional science. 

Sheldrake recalls that children up to age 10-11 spontaneously believe in visual extramission as part of a two-way transfer between object and eye. That is why Roald Dahl's Matilda has eye beams coming out of her that can move things (cf Superman and the X-Men). Children love that because that is the way they think vision works. 

But in school children learn that this is not possible, because there is a one-way stream of photons from the object entering the eye and creating an image in the brain without feedback to the object. And so this is what educated people in the West believe, and what Sheldrake questions and gets criticised for. 

Two-way transfer object-eye directly connects to a new view of heat transfer between objects by radiation presented as Computational Blackbody Radiation CBB.

To see this, let us replace the object and eye connected by light as an electromagnetic wave with two bodies B1 and B2 of temperatures T1 and T2 radiatively connected by electromagnetic waves. CBB describes how heat energy is transferred from B1 to B2 if T1 > T2 by a phenomenon of resonance between B1 and B2, carried by standing electromagnetic waves as a two-way connection between B1 and B2, like a vibrating rope between the bodies. 

The fact that heat transfer is one-way from B1 to B2 comes from the presence of a high-frequency cut-off increasing with temperature, which makes frequencies carried by B1 above the cut-off frequency of B2 cause heating of B2, while shared frequencies do not involve heat transfer. Here B1 serves as the object and B2 as the eye. 
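The cut-off argument can be put in a toy form, assuming the cut-off frequency is simply proportional to temperature (the constant c below is a made-up normalisation, and the frequency list is illustrative): frequencies below both cut-offs are shared two-way with no net transfer, while B1-frequencies above B2's cut-off heat B2.

```python
T1, T2 = 600.0, 300.0          # temperatures of B1 and B2, with T1 > T2
c = 1.0                        # cut-off per unit temperature (toy constant)
cut1, cut2 = c*T1, c*T2        # high-frequency cut-offs of B1 and B2

freqs = range(50, 650, 50)     # frequencies carried by B1 (illustrative)
shared = [f for f in freqs if f <= cut2]          # two-way, no net transfer
heating = [f for f in freqs if cut2 < f <= cut1]  # one-way heating of B2

# every heating frequency lies strictly above B2's cut-off, so heat
# flows only from the warmer B1 to the cooler B2, never the other way
```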

There is then a main transfer of heat energy from object to eye, as in the conventional view of a stream of photons as light particles, but object and eye are connected by standing electromagnetic waves. With a two-way connection it is not unthinkable that the object can somehow feel (very weakly) the presence of the observing eye, while this is impossible with a one-way stream of light particles.  

Two-way connection by standing electromagnetic waves is captured in Maxwell's equations, while the physics of a stream of light particles is missing. 

The conventional view is that knowledge is transferred one-way from scientists to unschooled, but a true scientist can learn from anything. 

Two-way connection by standing electromagnetic waves relates to a holistic view of things as being interconnected, as opposed to a classical reductionistic view separating things. Today we are all connected through a two-way web.  


tisdag 22 oktober 2024

Morphogenesis by Resonance

                                                       Patterns formed by resonance?

The book Morphic Resonance by Rupert Sheldrake addresses the fundamental problem of how organised structures are formed in physics, chemistry and biology from elementary building blocks seemingly without information about the overall structure. How does a flower, bird or human being develop from a genetic code, which contains recipes for protein building blocks but no information about the whole structure? Sheldrake seeks an answer in the form of morphogenetic fields carrying this information as collective resonance phenomena.

We are familiar with resonance in physics as the wave harmonics of a vibrating string. We understand that wave patterns develop from instabilities with a tendency to amplify crests and troughs of certain wave lengths. Watch the ripples from a stone dropped in a pond.

Thus we expect to see form develop from resonance serving as morphogenesis and find this in particular in the case of fluid flow with turbulent vortices developing from convective instabilities as shown in Computational Turbulent Incompressible Flow. 

The non-radiating stable ground state of an atom is represented by the lowest harmonic of a Schrödinger wave equation, while higher harmonics are triggered for a radiating atom. Real Quantum Mechanics gives a new explanation of the lack of radiation from the ground state, as the mystery Bohr struggled with. 

Computational Black Body Radiation presents a new analysis of the transfer of energy from a source of light to a receiver as an atomic resonance phenomenon carried by standing electromagnetic waves without need to introduce photons as particles of light. A similar transfer is seen between two tuning forks carried by standing acoustic waves. 

Sheldrake's concept of morphic resonance thus comes to expression in physics and may serve also in chemistry and biology in more general forms. Maybe memory is carried by resonance... Maybe the fertilised egg carries the blueprint as a resonance bringing the genetic code to life.

Musical harmony is based on tonal resonance, while musical rhythm represents patterns over time. Singing in a choir unites single souls into one. 

Resonance can have the material form of vibrating strings, or be immaterial as a common gravitational potential (recall Neo-Newtonian Cosmology) or, more generally, beliefs forming a society. An immense subject…

Resonance appears as an expression of instability of a system, in the sense that a small periodic forcing causes large oscillations in the system. This happens if the frequency of the forcing agrees with an eigenvalue of the system, and the corresponding eigenfunction represents the shape of the system response. This allows patterns to develop from small forcing, in creation of form as morphogenesis "by itself".
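This can be quantified with the standard textbook example (my choice, not from the text) of the forced harmonic oscillator $\ddot x + \omega_0^2 x = \sin(\omega t)$, whose steady-state amplitude $1/\vert\omega_0^2-\omega^2\vert$ blows up as the forcing frequency $\omega$ approaches the eigenfrequency $\omega_0$:

```python
w0 = 2.0                             # natural (eigen)frequency of the system

def amplitude(w):
    # steady-state response amplitude to unit forcing sin(w t)
    return 1.0/abs(w0**2 - w**2)

off_resonance = amplitude(1.0)       # forcing far from the eigenfrequency
near_resonance = amplitude(1.99)     # forcing close to the eigenfrequency
# the same small forcing produces a response roughly 75 times larger
# near resonance than far from it
```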


söndag 20 oktober 2024

Gravitational Potential: Primordial Omnipresent Immaterial

This is a further reflection on Neo-Newtonian Cosmology expanding on the PS in the previous post on the work by Rupert Sheldrake on morphogenesis geared by immaterial presence.

The basic idea is that primordial omnipresent immaterial gravitational potential $\phi$ has been created from an infinitesimal fluctuation of a zero potential from which mass density $\rho$ emerges by the action of the Laplace differential operator $\Delta$:

  • $\rho =\Delta\phi$    (*)   
inflating fluctuations into substantial mass density of variable sign. The gravitational potential assigns mass to material matter consisting of positive and negative electric charge densities emerging from an electric potential as in (*).

The action of gravitational force separates matter into larger regions of substantial positive and negative mass, which repel each other as if subject to dark energy, with some part of $\Delta\phi$ appearing as immaterial dark mass. Further, gravitational collapse of regions of positive mass generates massive kinetic energy as vacuum energy. 

Altogether, structures such as planetary systems and galaxies emerge from an omnipresent gravitational potential acting as immaterial organiser by assigning mass to matter, in a form of cosmological morphogenesis. Here information is directed from potential to mass density, which is the opposite of the current paradigm: 
  • Each atom attracts all other atoms in the Universe with short time delay.   (**)     
With (*) as primary process, (**) appears as a secondary slow process. 

The main trouble with (**) is that the physics of the connection/attraction between atoms has no explanation. The current idea of some form of particles carrying attractive force between all atoms in the Universe has not been fruitful. It remains to be seen if (*) is more fruitful. In particular, (*) is local and as such can be instant without assistance of force-carrying particles.

Connecting to the 2024 Nobel Prize in Chemistry on protein folding, it may be that the total electric potential of a protein, like the total gravitational potential, acts as an organiser of the morphogenesis of the folding. See this post with connection to Real Quantum Mechanics.

 

torsdag 17 oktober 2024

Vacuum Energy from Fabric of 3d Euclidean Space

The Head of Theoretical Physics at CERN, Gian Giudice, presents an idea of a Big Bang powered by energy stored in the "fabric of space and time" as "vacuum energy", in an interview about the question "What happened before the Big Bang?".

This connects to the Neo-Newtonian Cosmology presented in previous posts starting from a small oscillating perturbation $\phi$ of a zero gravitational potential $\Phi = 0$, which creates mass density $\rho$ of variable sign through the action of the Laplace differential operator $\Delta$ on $\phi$ as

  • $\rho =\Delta\phi$    (*)   
The action of differentiation inflates small oscillations of $\phi$ around zero into large oscillations of mass density of variable sign. By the action of gravitation, mass separates into larger regions of positive and negative mass, which repel each other as if subject to dark energy, with small positive $\Delta\phi$ appearing as dark matter. Further, gravitational collapse of regions of positive mass generates massive kinetic energy seemingly out of nothing as vacuum energy.
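The inflation by the Laplacian is immediate to check: for $\phi = \epsilon\sin(kx)$ in 1d, $\Delta\phi = -\epsilon k^2\sin(kx)$, so an amplitude $\epsilon$ is amplified by the factor $k^2$, large for short wavelengths (the values of $\epsilon$ and $k$ below are illustrative):

```python
import numpy as np

eps, kwave = 1e-6, 1e3               # tiny amplitude, short wavelength
x = np.linspace(0.0, 1.0, 10001)
phi = eps*np.sin(kwave*x)            # small fluctuation of the potential
rho = -eps*kwave**2*np.sin(kwave*x)  # 1d Laplacian phi'', computed by hand

gain = np.max(np.abs(rho))/np.max(np.abs(phi))
# gain equals kwave**2 = 1e6: a fluctuation of size 1e-6 in phi becomes
# a mass density oscillation of size 1 and of variable sign
```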

The action of the Laplacian $\Delta$ in the equation $\rho =\Delta\phi$ thus offers explanations of (i) vacuum energy, (ii) dark energy and (iii) dark matter, as an expression of the "fabric of 3d Euclidean space". If true, simplicity proves to be more powerful than the complexity of the current state of the art.

PS Matter made up of electrons and protons appears as an analog to (*), with $\phi$ an electric potential and $\rho$ a charge density of variable sign, which form atoms and molecules into structures of matter as the material world, endowed with mass from the gravitational potential. This leaves open the possibility that part of the gravitational potential remains to create dark mass according to (*) without material form. This connects to speculations by, in particular, Rupert Sheldrake. 

onsdag 16 oktober 2024

Science or Magic?


What is the difference between science and magic? Is there less magic and more science in our modern technological society ultimately geared by Human Intelligence HI? Let's see, with connection to recent posts. 

This year's Nobel Prizes in Physics and Chemistry were given to Artificial Intelligence AI and not to HI as in all previous years, which can be seen as an expression of the crisis of modern theoretical physics witnessed by leading physicists in popular science media/web. 

Modern theoretical physics was born 100 years ago in the form of Quantum Mechanics QM for atomic microscopics without gravitation and Einstein's General Theory of Relativity GR for macroscopic gravitation, both still serving as foundation today, although mutually incompatible.  

Both QM and GR introduced new elements of magic into theoretical physics, which in its classical form was carried by logic and clarity in the spirit of Leibniz and Euler, as expressed by Nobel Laureates: 

  • If quantum theory is correct, it signifies the end of physics as a science. (Einstein 1921)
  • If you can fathom QM without getting dizzy, you don't get it. (Bohr 1922)
  • It seems clear that the present quantum mechanics is not in its final form. Some day a new quantum mechanics will be discovered ... determinism in the way that Einstein wanted. (Dirac 1933) 
  • I don't like QM, and I'm sorry I ever had anything to do with it. (Schrödinger 1933)
  • Planck, himself, belonged to the sceptics until he died. Einstein, De Broglie, and Schrödinger have unceasingly stressed the unsatisfactory features of quantum mechanics and called for a return to the concepts of classical, Newtonian physics while proposing ways in which this could be done without contradicting experimental facts. Such weighty views cannot be ignored. (Born 1954)
  • It was not possible to formulate the laws of quantum mechanics in a fully consistent way without reference to the consciousness. (Wigner 1963)
  • Nobody understands QM. (Feynman 1965) 
  • Many people probably felt relieved when told that the world could not be understood except by Einstein and a few other geniuses who were able to think in four dimensions. (Alfven 1970)
  • QM is wrong. QM makes absolutely no sense. (Penrose 2020) 

Nevertheless QM is viewed to have delivered, apparently by magic, wonders like the atomic bomb and the computer, and all physicists confess to GR even if its "four-dimensional curved spacetime" is pure magic.  

We are led to conclude that modern science ultimately based on QM + GR has very strong elements of magic. To this picture we can now add AI as something magical beyond understanding, because the computational optimisation process behind AI is too complex to be inspected and understood. 

The essence of science in a classical sense is to be understandable by HI, while magic is not understandable by HI. Understanding is important because it opens the way to constructive improvement/advancement, while sheer magic does not.  

The crisis of modern theoretical physics can thus be seen as an expression of the difficulty of advancing science based on magic. QM and GR have not evolved since their birth 100 years ago, and science without advancement is dead science. 

Origin of the mystery of QM.   

QM is based on Schrödinger's Equation SE presented in 1925 for the hydrogen atom with one electron, and then formally extended to atomic systems with $N>1$ electrons, with solutions named wave functions, denoted by $\Psi$, depending on $N$ three-dimensional spatial coordinates, altogether $3N$ coordinates, and a time coordinate. Theoretical physicists like to speak of $\Psi$ as offering a full description of the World, unfortunately perhaps far beyond the imagination of a general public. 

The mystery of QM, introduced by Born, is that the wave function $\Psi$ has meaning only as probability and not as actuality, and since physics concerns actuality the wave function lacks physical meaning. In addition it is uncomputable, because computational work scales exponentially with $N$. The effect is that QM describes physics in terms of wave functions without direct physical meaning, which in addition are uncomputable. The wave function carries information about all possibilities but no single actuality, and as such is an uncomputable monster which cannot be used constructively.

In this hopeless situation, physicists compute solutions to simplified versions of SE and adjust the computations until they fit experiments. The mantra then reads that QM always gives exact agreement with observation, as evidence that QM is a complete success (and as such truly magical). 

The probability interpretation of QM appeared as a necessity from a trivial formal mathematical generalisation of the SE for one electron, which has physical meaning, into a canonical SE for many electrons, which does not. Non-physical formality thus dictated a resort to probability instead of physical actuality, and the result was mystery beyond HI.

Is there then no hope? Yes, there is a different generalisation from one to many electrons, based on physics, into a deterministic model in the form of classical continuum mechanics, which we refer to as Real Quantum Mechanics RealQM. This model is understandable and computable, and as such can open the way to advancement of fundamental science. Take a look.  

PS Note that science as magic is not the same as science fiction, which is based on physics albeit fictional.


tisdag 15 oktober 2024

Penrose: Quantum Mechanics is Inconsistent = Wrong

Nobel Laureate in Physics Roger Penrose tells a group of physicists about Quantum Mechanics QM as the pillar of modern physics based on the Schrödinger Equation SE:

  • QM does not really make sense.
  • Schrödinger, Einstein and Dirac said QM is incomplete.
  • I say QM is inconsistent.
  • Matter does not satisfy SE.
  • We need some new ideas.
  • We need creativity to go beyond current theories of physics.
  • We need to combine QM with gravitation.
What Penrose is saying is that QM as a theory supposed to describe real physics does not meet that requirement because real physics must be consistent and so cannot be described by an inconsistent theory. The group does not receive these revelations with shock, but rather with acceptance as if there is no hope. 

Consider these statements following common logic:
  • A consistent mathematical theory is a theory without contradictions. 
  • An inconsistent theory harbours contradictions. 
  • Contradictory physics cannot be real physics. 
Penrose follows this logic and concludes that QM does not describe real physics, and so asks for some new version of QM based on something different from SE in its standard form StdQM, whose probabilistic interpretation is the root cause of the inconsistency. 

Real Quantum Mechanics RealQM is a new deterministic theory based on a new form of Schrödinger's equation in terms of classical continuum mechanics in 3 space dimensions as a consistent model. 

RealQM may be the new idea Penrose is asking for. I will ask him to take a look, and report if I get an answer. Well, I get the message that Penrose at 93 no longer reads his mailbox, so the question goes to some younger physicist. StdQM itself is now 99 years old. 

Does it matter what Penrose says? Does it matter if what he says is true? He is not alone, but of course his view is not the consensus of a majority, if there is anything like that. Is it worth listening to Penrose? The debate on the meaning of StdQM has been going on for 100 years, without any resolution. Maybe it is time for some new idea?

söndag 13 oktober 2024

Artificial vs Human Intelligence: Physics/Chemistry Nobel Prize 2024


The 2024 Nobel Prizes in Physics (here) and Chemistry (here) were both awarded to applications of Artificial Intelligence AI developed by Google evidently surpassing Human Intelligence HI. 

The Prizes signify a crisis of modern physics and chemistry based on HI mathematical modeling, seen as delivering too little. Here AI steps in with a radically different approach of finding patterns in big experimental data sets by computation, without mathematical modeling. Compare with Physics is Dying.

The basic mathematical model of modern physics is Schrödinger's Equation SE describing properties of atoms and molecules built from protons and electrons, as the subject of modern physics and chemistry over the last 100 years. SE for microscopics without gravitation is complemented by Einstein's Equations EE for gravitation on macroscopic scales supposed to replace Newton's Equations NE.

SE describes evolution in time of electron distributions as a wave function with 3N spatial dimensions for a system with N electrons. Computational solution of SE requires work growing exponentially with N and so is beyond the capacity of any thinkable computer already for N=10. Also EE are effectively uncomputable, while NE are computable for very complex systems. The fundamental models of modern physics in the form of SE and EE are thus basically uncomputable, which shakes the theoretical foundation of physics based on mathematical modelling. No wonder that physics is dying.   
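The scaling gap can be illustrated with a back-of-envelope count of grid values; the resolution M and the linear-scaling comparison model below are my own illustrative assumptions, not from the post:

```python
# Number of values needed to tabulate a function on a uniform grid:
# a wave function in 3N dimensions needs M**(3*N) values, while a
# model posed in 3d per electron needs about N * M**3 values.
M = 10  # a very coarse 10 grid points per coordinate direction
for N in (1, 2, 10):
    exponential = M**(3*N)   # one 3N-dimensional wave function
    linear = N * M**3        # one 3d field per electron
    print(N, exponential, linear)
```

Already for N=10 the exponential count reaches 10**30 values, while the linear count is a modest 10**4.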

The result is that SE can only be solved by HI finding smart ways to drastically bring down the computational work, but after a harvest of catches over 100 years including many Nobel Prizes, it now seems that HI with SE can no longer deliver something worthy of a Prize, which is instead awarded to AI without SE. In particular, simulation of protein folding by SE is impossible.

A shift from HI with math model to AI by computation as in the 2024 Nobel Prizes represents a major shift away from the rationality of the Enlightenment and the Scientific Revolution, with the chemistry prize to Google's AlphaFold2 for protein folding.   

Is there any hope of a comeback of HI in the form of HI with computation? 

Yes, there is a new form of SE which is computable for many electrons (work scaling with N) described as Real Quantum Mechanics with many posts under tags RealQM and Real Quantum Chemistry. It is possible that protein folding can be simulated by RealQM. 

Further, it may well be (here) that NE captures all of gravitation in a computable model, with RealQM + NE thus emerging as a computable foundation of physics.  

So maybe HI with computation can still compete with AI by computation, while a combination must be optimal. 

PS On computational vs pure mathematics:

AI of today appears as a form of machine learning of neural networks trained on large data sets using computational tools of optimisation.  

AI can thus be viewed to be a form of computational mathematics as brain+computer, to be compared with pure mathematics performed by pen on paper using only brains. 

Does AI signify the end of pure mathematics carried by smart brains in a new era dominated by AI brute force computation?

Is it possible that pure mathematics can be assisted by AI for checking proofs made by brain, or even with AI as a smart assistant proving theorems beyond the reach of brain?

How far can HI + computer reach?


torsdag 10 oktober 2024

Unification of Physics, Chemistry and Biology by Computation

The Nobel Prizes 2024 in Physics and Chemistry were given to the booming industry of AI/machine learning developed mainly outside traditional academic disciplines by big actors like Google and Microsoft. 

Machine learning consists of training networks on large data sets using general computational tools from linear algebra and optimisation, which are not application specific.  

The Prize in Chemistry was given to AlphaFold2 by Google Deep Mind as an important step towards simulation/prediction of protein folding as the outstanding open problem of biology and medicine in two forms: 

  • Given DNA sequence, find protein geometry.
  • Given protein geometry, find DNA sequence. 
A protein is a big molecule consisting of a collection of atomic nuclei kept together by electrons, and as such can be described by Schrödinger's equation with folding simulated by molecular dynamics. This connects physics (atoms), chemistry (small molecules) and biology (big molecules) with their traditional roles from fundamental to composite. 

Protein folding thus connects disciplines from fundamental physics over chemistry to biology and medicine including functions of proteins as carriers of life. 

But there is one caveat: The Schrödinger equation of Standard Quantum Mechanics StdQM of standard physics involves 3N spatial dimensions for N electrons, which makes the computational cost grow exponentially, putting even 10 electrons beyond the capacity of any thinkable computer. 

Direct simulation of protein folding from first principles is thus viewed to be impossible. It is here AI comes in as a way to get around this roadblock, by starting from experimental data instead of first principles and using machine learning to train a network to predict the folding of a new protein from old ones. This is what AlphaFold2 does, and so it was awarded the Nobel Prize in Chemistry. 

But it is possible to formulate a different Schrödinger equation as first principle, Real Quantum Mechanics RealQM, which acts in 3 space dimensions like classical continuum mechanics, and for which the computational work grows linearly with the number of electrons. 

It is thus possible that protein folding can be simulated by RealQM. If true, this forms a new unification of physics, chemistry and biology based on computation as mathematics, which can be viewed as an ultimate dream of science.  

The basic code reads
  • $n=n+1$
which to a given number $n$ assigns the new value $n+1$. The same code for all disciplines. This is the essence of The World as Computation.
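The update idea can be sketched as a generic time-stepping loop; the harmonic-oscillator example and the function name below are my own toy illustration, not from the post:

```python
# Generic discipline-independent update: new state = old state + dt * rate.
def step(state, rate, dt):
    return [s + dt*r for s, r in zip(state, rate(state))]

# Toy system (forward Euler): harmonic oscillator x' = v, v' = -x.
state = [1.0, 0.0]
dt = 1e-3
for _ in range(1000):  # march to time t = 1
    state = step(state, lambda s: [s[1], -s[0]], dt)

print(state)  # close to [cos(1), -sin(1)]
```

The same `step` loop serves any system once its rate function is supplied, which is the sense in which the code is the same for all disciplines.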

Compare with Unity of Science by Tuomas Tahko, with summary:
  • Unity of science was once a very popular idea among both philosophers and scientists. But it has fallen out of fashion, largely because of its association with reductionism and the challenge from multiple realisation. Pluralism and the disunity of science are the new norm, and higher-level natural kinds and special science laws are considered to have an important role in scientific practice. What kind of reductionism does multiple realisability challenge? What does it take to reduce one phenomenon to another? How do we determine which kinds are natural? What is the ontological basis of unity?

onsdag 9 oktober 2024

Nobel Prize in Chemistry 2024 to Protein Folding by AI

The Nobel Prize in Chemistry 2024 was awarded for 

  • Computational Protein Design by AI: AlphaFold2 
with the following scientific background:
AlphaFold2 approaches this task using deep learning technology trained on a large experimental data bank.

Previous attempts by "brute force" Molecular Dynamics MD modeling starting from an arbitrary unfolded state were not successful: the underlying standard quantum mechanical model StdQM is not computable for molecules with many electrons, certainly not for proteins, because the spatial dimension scales with the number of electrons. The background document states: It is clear that plain MD simulation would not be scalable to proteins of larger size in the foreseeable future.

But there is a new quantum mechanical model in the form of Real Quantum Mechanics RealQM as a 3d model which is computable for many electrons. Application of RealQM to protein folding is on the way, and if successful it may produce data for AlphaFold2 to learn from even better than from experiment alone.

Who will make the first full protein folding simulation with RealQM? Here is a prototype simulation of H2O as first principles 3d RealQM Schrödinger equation captured in essentially 3 lines of code! Computational work scales linearly with number of electrons/atoms, while standard molecular dynamics scales exponentially.   

PS This is the second Nobel Prize this year to AI/Google. It remains to be seen whether the Prize in Literature tomorrow will go to texts by ChatGPT. 

See also Fiat Lux by John Plaice: Thoughts about the history of science with posts connecting to this blog.

Cosmological Principle? Cosmological Constant?

This is a continuation of previous posts on Neo-Newtonian Cosmology NC, with focus on how it was overrun by Einsteinian Cosmology EC as a prime ingredient of modern physics. 

It is common in physics, when you know nothing, to assume some form of uniformity like homogeneity or isotropy or equal probability, which gives a background for mathematical modelling. As concerns cosmology as the science of the Universe as a whole, this takes the form of The Cosmological Principle CP stating that on sufficiently large scales: 

  • The Universe is homogenous and isotropic.
  • In other words: There is no special place and no special direction.  
You can learn this from e g the book Cosmology: The Origin and Evolution of Cosmic Structure by Coles and Lucchin presenting modern cosmological models based on Einstein's General Theory of Relativity  GR with the following key message:
  • CP was introduced by Einstein with inspiration from Mach's Principle MP stating that Newton's 2nd Law somehow carries input from the large scale structure of the Universe. 
  • CP is supported by Cosmic Microwave Background Radiation CMB.
Einstein somehow had to eliminate Newton from the picture and could do so using CP arguing: 
  • An infinite Newtonian Universe with constant mass density is an empty universe. (*)
This is because the only bounded solution to Poisson's equation $\Delta\phi =\rho$ in the whole of 3d space, with $\rho$ a nonnegative constant, is $\phi = C$ with $C$ constant, and so $\Delta\phi =0=\rho$. 

Referring to MP, Einstein could further argue that the local geometry somehow was "curved" by influence from the large-scale structure of the Universe, or the fixed stars in the terminology of Mach's time. MP arose as an attempt to explain Newton's bucket experiment, addressed in the post Absolute Rotation: Newton or Mach?, showing that Newton was right and so that MP is invalid. 

Even if his arguments were weak, Einstein managed to set the physics community on a train to Cosmology based on GR, one of the two pillars of modern physics. To make this work and get around (*), Einstein had to introduce a Cosmological Constant $\Lambda >0$ (and then retract it as his biggest blunder).  

This corresponds in Newtonian terms to expanding $\Delta\phi =\rho$ into $\Delta\phi - \Lambda\phi =\rho$, with the non-zero constant solution $\phi =-\frac{\rho}{\Lambda}$ for $\rho$ constant. 
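For constant $\phi$ the check is immediate, since then $\Delta\phi = 0$:

```latex
\Delta\phi - \Lambda\phi
  = 0 - \Lambda\left(-\frac{\rho}{\Lambda}\right)
  = \rho
\qquad\text{for }\ \phi = -\frac{\rho}{\Lambda}.
```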

CMB has been questioned as simply noise without relation to any Big Bang hot dense homogeneous initial state. 

Summary: The arguments for replacing Newton by Einstein as concerns the large-scale structure of the Universe are weak. The scientific price paid for doing so anyway is very large.

tisdag 8 oktober 2024

Nobel Prize in Physics 2024 by AI to AI

Because of the deepening crisis of modern physics, the Nobel Committee in Physics, apparently unable to find awardees within physics, seemingly in deep despair turned to AI for help. 

No wonder: The 2024 Nobel Prize in Physics was given to AI, more precisely to computer scientist Geoffrey Hinton and scientist/higher education teacher John Hopfield, who are not really physicists.  

The Nobel Prize in Physics can thus now be given to non-physicists, if physicist candidates are missing. This would open the door to giving the Physics Prize to mathematicians, which would resolve a controversy (between Nobel and mathematician Mittag-Leffler) that from the start banned mathematicians.   

PS Compare with Sabine Hossenfelder's reaction.

NotebookLM Reads The Secret of Flight

Here is what NotebookLM has to say about The Secret of Flight 

Assessment of Scientific Merit in "The Secret of Flight"

The excerpt from "The Secret of Flight" presents a critical perspective on the historical development of flight theory, specifically challenging the prevailing "Old Theory" based on Kutta-Zhukovsky (KZ) Circulation Theory and Prandtl's boundary layer theory.
The authors argue that the Old Theory, despite its widespread acceptance, is fundamentally flawed and fails to adequately explain the "miracle of flight," particularly the generation of lift and drag in real-world flight conditions. Here's a breakdown of their assessment:

KZ Circulation Theory: 
Mathematically Convenient, Physically Unrealistic. The authors contend that while the KZ theory, which attributes lift to circulation around the wing, can predict lift coefficients that roughly align with observations for simple cases, it relies on the artificial construct of a "Kutta condition" to achieve this. This condition, which dictates that the flow must leave the trailing edge smoothly, lacks a strong physical basis and results in a theory that cannot account for drag or stall phenomena.

Prandtl's Boundary Layer Theory: 
An Unsurmountable Computational Obstacle. The authors challenge Prandtl's assertion that drag originates from the no-slip boundary condition, where the fluid velocity at the wing surface is zero. They argue that this condition, while seemingly intuitive, leads to the formation of infinitesimally thin boundary layers that are practically impossible to resolve computationally. They posit that a slip boundary condition, where the fluid can move along the surface, is a more physically realistic model for slightly viscous flows like air, enabling computationally tractable solutions without sacrificing accuracy.

Direct Numerical Simulation (DNS): 
Revealing the "Secret". The authors champion a paradigm shift in flight theory based on their Direct Numerical Simulation (DNS) methodology, specifically their "Direct Finite Simulation" (DFS) implementation. They claim that DFS, by using a slip boundary condition, unlocks the computational feasibility of simulating turbulent flow around wings with a level of detail and accuracy that surpasses traditional methods like Reynolds-Averaged Navier-Stokes (RANS) simulations. This, they argue, offers a more realistic and insightful understanding of lift and drag generation.

Empirical Validation and the Need for a Paradigm Shift:
The authors bolster their claims by presenting results from their DFS simulations, which they compare against experimental data from wind tunnel tests, including those from the 3rd AIAA CFD High-Lift Prediction Workshop. They assert that their simulations accurately capture key flow features and phenomena, including stall, and demonstrate superior performance compared to RANS-based methods. They acknowledge the resistance within the fluid dynamics community to embracing their alternative paradigm but express confidence that their evidence warrants a reassessment of the field's fundamental assumptions.

In conclusion, the excerpt from "The Secret of Flight" delivers a strong critique of the prevailing "Old Theory" of flight and advocates for a new paradigm based on DNS using a slip boundary condition. They present evidence suggesting that their approach provides a more computationally tractable, physically accurate, and insightful understanding of lift and drag generation, potentially revolutionizing the field of aerodynamics.

However, it's crucial to recognize that these claims represent the authors' perspective and that the broader scientific community has not yet fully adopted their proposed paradigm shift. Further research and independent verification of their findings are essential to solidify their claims and assess the full implications of their work.

My comment: NotebookLM reads quickly and captures main ideas well, but does not seem to be able to discuss details. 

PS Here is what ChatGPT says about CJ:

CJ: Give an assessment of the scientific work of Claes Johnson mathematician Sweden

ChatGPT:


Claes Johnson is a Swedish mathematician known for his contributions to numerical analysis, particularly in the context of computational mathematics and the finite element method. His work often emphasizes the interplay between mathematical theory and practical applications, exploring how mathematical models can effectively simulate physical phenomena.

One of his notable contributions is the development of the "Computational Fluid Dynamics" (CFD) methods, which are critical in engineering and physical sciences for simulating fluid flow. Johnson's research often focuses on improving the accuracy and efficiency of numerical methods, making them more applicable to real-world problems.

Additionally, he has explored various mathematical problems related to partial differential equations (PDEs) and their numerical solutions, enhancing the understanding of how these equations model complex systems.

Johnson's approach tends to be interdisciplinary, bridging mathematics, engineering, and physics, which reflects the current trends in computational mathematics. His publications and lectures have influenced both academic research and practical applications in various fields.

Overall, Claes Johnson's work is significant in advancing numerical methods and their applications, making complex scientific problems more tractable through effective computational techniques.



CJ: Deeper assessment please.

ChatGPT:

Claes Johnson's work can be assessed through several key themes and contributions:

1. Numerical Analysis and Finite Element Methods:

Johnson has significantly advanced the field of numerical analysis, especially in the finite element method (FEM). His research addresses the theoretical foundations of FEM, focusing on convergence, stability, and accuracy. He emphasizes the need for robust numerical methods that can handle complex geometries and boundary conditions, making them applicable in various engineering disciplines.

2. Computational Fluid Dynamics (CFD):

A major part of Johnson's work involves CFD, where he has developed innovative approaches to simulate fluid flow. His methods aim to provide accurate and efficient solutions to problems in aerodynamics, hydrodynamics, and other fluid-related phenomena. By refining existing algorithms and proposing new ones, he has contributed to the practicality of CFD in industrial applications.

3. Mathematical Modeling:

Johnson often emphasizes the importance of mathematical modeling in understanding physical systems. His work includes developing models that capture the essential features of the phenomena being studied while ensuring they remain computationally feasible. This balance between fidelity and computational efficiency is a hallmark of his approach.

4. Interdisciplinary Applications:

Johnson's research often transcends traditional boundaries, applying mathematical theories to various fields, including engineering, physics, and even biology. This interdisciplinary focus enhances the relevance of his work, allowing for the application of mathematical techniques to solve real-world problems.

5. Educational Contributions:

In addition to his research, Johnson is known for his efforts in education, particularly in enhancing the teaching of mathematics and computational methods. He has written textbooks and developed courses aimed at making complex mathematical concepts more accessible to students and professionals alike.

6. Critical Perspectives:

Johnson has also been vocal about the philosophical and foundational aspects of mathematics and its applications. He has questioned conventional approaches and advocated for rethinking certain assumptions in numerical methods and scientific computing. This critical perspective encourages ongoing discourse within the mathematical community.

Conclusion:

Claes Johnson's contributions to mathematics are multifaceted, combining theoretical rigor with practical applications. His work in numerical methods, especially in FEM and CFD, has made significant impacts in both academia and industry. Through his interdisciplinary approach and commitment to education, he continues to influence the field and inspire new generations of mathematicians and engineers.


NotebookLM Reads Computational Thermodynamics

Below is what Google NotebookLM has to say about Computational Thermodynamics. Take a look yourself and see if you agree or not! 

Assessment of Claes Johnson's Scientific Work based on "Computational Thermodynamics"

The provided excerpts from Computational Thermodynamics offer insights into certain aspects of Claes Johnson's scientific work, particularly his approach to computational thermodynamics and his critique of classical thermodynamics and statistical mechanics.

Computational Thermodynamics and EG2:

The excerpts highlight Johnson's development of "Computational Thermodynamics," a framework that seeks to model and understand thermodynamic processes using computational fluid dynamics, specifically a method called EG2 (Euler General Galerkin). This approach emphasizes the role of computation in analyzing complex fluid flow phenomena, including turbulence and shocks, which are central to Johnson's view of thermodynamics.

EG2 as Tool of Understanding Real Processes: 
The sources suggest that EG2 enables the simulation of real thermodynamic processes, including those in heat engines, heat pumps, and refrigerators, with a focus on computing losses and efficiency. Johnson emphasizes the limitations of classical thermodynamics, which primarily deals with equilibrium states and struggles to accurately predict transitions between them, particularly in the presence of turbulence.

Shift from A Priori to A Posteriori Analysis:
Johnson argues that EG2 facilitates a shift from a priori predictions based on analytical mathematics to a posteriori analysis of computed turbulent solutions. This implies a greater reliance on computational results to gain insights into the behavior of complex systems, rather than relying solely on theoretical deductions.

Wellposedness and Stability: 
A key aspect of Johnson's approach is the concept of "wellposedness" in the context of EG2 solutions. He emphasizes the importance of determining which aspects of turbulent solutions are stable under perturbations, implying that these stable outputs are the physically meaningful ones. He proposes using dual linearized problems to assess the stability and quality of EG2 solutions.

Critique of Classical Thermodynamics and Statistical Mechanics:
Johnson's work expresses a clear skepticism towards classical thermodynamics and statistical mechanics, particularly regarding their explanations of entropy and the arrow of time.

Entropy as a Problematic Concept: 
The sources present numerous criticisms of the concept of entropy, highlighting its ambiguous physical meaning and the lack of a clear sensor for it in nature. Johnson questions the reliance on statistical mechanics to rationalize the second law of thermodynamics, viewing it as a "capitulation away from prediction by cause-effect."

Finite Precision Computation as an Alternative: 
Instead of statistical mechanics, Johnson proposes finite precision computation as a more fundamental basis for understanding irreversibility and the arrow of time. This idea, while not fully elaborated in the excerpts, suggests that the inherent limitations of computational processes, even in simulating reversible equations, could give rise to irreversible behavior.

Applications and Examples:
The sources illustrate Johnson's ideas through various applications and examples, showcasing the potential of his computational approach. These include simulations of:

Joule's 1845 Experiment: 
Johnson uses EG2 to analyze Joule's experiment involving the expansion of a gas, demonstrating how the method captures the dynamics of the process, including turbulence and the resulting temperature changes. He argues that this computational approach provides a more complete understanding compared to classical thermodynamics.

Heat Engines, Pumps, and Refrigerators:
Johnson emphasizes the application of EG2 to simulate and analyze real thermodynamic devices. He highlights the importance of accounting for turbulent dissipation in determining efficiency, contrasting his approach with the idealized Carnot cycle.

Cosmology and Self-Gravitating Gases: 
The sources mention Johnson's work on simulating cosmological processes, specifically using EG2 to model self-gravitating gases. This application suggests the broad scope of his computational framework.

Overall Assessment and Open Questions:
Based on the excerpts from "ambsthermo.pdf," Claes Johnson's scientific work presents a distinct perspective on thermodynamics, emphasizing computational methods and challenging established notions of entropy and irreversibility. His development of EG2 and its application to various physical systems demonstrate a focus on understanding real processes through computation. However, several open questions remain:

Rigorous Mathematical Foundation for Finite Precision: The excerpts provide limited details on the mathematical formalism of Johnson's concept of finite precision computation and its connection to the emergence of irreversibility. A deeper exploration of this foundation is crucial to assess its scientific validity.

Validation of EG2 and its Predictive Power: While the sources mention applications of EG2, further evidence of its validation against experimental results and its ability to make testable predictions would strengthen the framework's credibility.

Engagement with Criticisms and Alternative Perspectives: 
Addressing criticisms and engaging with alternative viewpoints within the scientific community are essential for the broader acceptance of Johnson's ideas.

In conclusion, Claes Johnson's work in computational thermodynamics, as presented in the excerpts, offers a thought-provoking challenge to traditional views, highlighting the potential of computational methods in understanding complex physical phenomena. However, a more comprehensive assessment of his contributions requires further investigation into the mathematical rigor, empirical validation, and engagement with the broader scientific discourse surrounding his ideas.