Sunday, October 13, 2024

Artificial vs Human Intelligence: Physics/Chemistry Nobel Prize 2024


The 2024 Nobel Prizes in Physics (here) and Chemistry (here) were both awarded to applications of Artificial Intelligence AI, developed by Google, evidently surpassing Human Intelligence HI.

The Prizes signify a crisis of modern physics and chemistry based on mathematical modeling by HI, as delivering too little. Here AI steps in with a radically different approach: finding patterns in big experimental data sets by computation, without mathematical modeling. Compare with Physics is Dying.

The basic mathematical model of modern physics is Schrödinger's Equation SE, describing properties of atoms and molecules built from protons and electrons, the subject of modern physics and chemistry over the last 100 years. SE for microscopics without gravitation is complemented by Einstein's Equations EE for gravitation on macroscopic scales, supposed to replace Newton's Equations NE.

SE describes the evolution in time of electron distributions as a wave function with 3N spatial dimensions for a system with N electrons. Computational solution of SE requires work growing exponentially with N and so is beyond the capacity of any thinkable computer already for N=10. Also EE are effectively uncomputable, while NE are computable for very complex systems. The fundamental models of modern physics in the form of SE and EE are thus basically uncomputable, which shakes the theoretical foundation of physics based on mathematical modelling. No wonder that physics is dying.
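To get a feel for the numbers, here is a back-of-the-envelope sketch in Python; the 10 grid points per spatial dimension are an illustrative assumption, not a statement about any particular discretization:

    # A wave function tabulated on p grid points per dimension occupies
    # p**(3*N) values for N electrons (3N spatial dimensions), while N
    # separate 3d fields (as in RealQM below) occupy about N * p**3 values.
    p = 10  # illustrative resolution per dimension
    for N in (1, 2, 5, 10):
        print(f"N = {N:2d}: SE grid {p**(3*N):.1e} values, 3d fields {N*p**3} values")

Already for N = 10 the SE grid holds $10^{30}$ values, while the 3d alternative holds $10^4$.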

The result is that SE can only be solved by HI finding smart ways to drastically bring down the computational work, but after harvesting catches over 100 years, including many Nobel Prizes, it now seems that HI with SE can no longer deliver something worthy of a Prize, which is instead awarded to AI without SE. In particular, simulation of protein folding by SE is impossible.

A shift from HI with mathematical model to AI by computation, as in the 2024 Nobel Prizes with the chemistry prize to Google's AlphaFold2 for protein folding, represents a major shift away from the rationality of the Enlightenment and Scientific Revolution.

Is there any hope of a comeback for HI, in the form of HI with computation?

Yes, there is a new form of SE which is computable for many electrons (work scaling with N), described as Real Quantum Mechanics with many posts under the tags RealQM and Real Quantum Chemistry. It is possible that protein folding can be simulated by RealQM.

Further, it may well be (here) that NE captures all of gravitation in a computable model, with RealQM + NE thus emerging as a computable foundation of physics.

So maybe HI with computation can still compete with AI by computation, while a combination must be optimal. 

PS On computational vs pure mathematics:

AI of today appears as a form of machine learning by neural networks trained on large data sets using computational tools of optimisation.

AI can thus be viewed as a form of computational mathematics as brain+computer, to be compared with pure mathematics performed by pen on paper using only brains.

Does AI signify the end of pure mathematics carried out by smart brains, in a new era dominated by AI brute-force computation?

Is it possible that pure mathematics can be assisted by AI for checking proofs made by brains, or even with AI as a smart assistant proving theorems beyond the reach of brains?

How far can HI + computer reach?



Thursday, October 10, 2024

Unification of Physics, Chemistry and Biology by Computation

The Nobel Prizes 2024 in Physics and Chemistry were given to the booming industry of AI/machine learning developed mainly outside traditional academic disciplines by big actors like Google and Microsoft. 

Machine learning consists of training networks on large data sets using general computational tools from linear algebra and optimisation, which are not application specific.
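As an illustration of how generic these tools are, here is a minimal sketch (entirely synthetic: the data and the hidden target rule are made up for the example) of training a tiny one-layer network by gradient descent; nothing in the loop is specific to any application:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                 # synthetic "experimental" inputs
    y = np.tanh(X @ np.array([1.0, -2.0, 0.5]))   # targets from a hidden rule
    W = 0.1 * rng.normal(size=3)                  # network weights to be trained

    for _ in range(500):
        pred = np.tanh(X @ W)                                # linear algebra
        grad = X.T @ ((pred - y) * (1 - pred**2)) / len(X)   # chain rule
        W -= 0.5 * grad                                      # optimisation step

    print(W)  # approaches the hidden rule [1.0, -2.0, 0.5]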

The Prize in Chemistry was given to AlphaFold2 by Google DeepMind as an important step towards simulation/prediction of protein folding, the outstanding open problem of biology and medicine, in two forms:

  • Given DNA sequence, find protein geometry.
  • Given protein geometry, find DNA sequence. 
A protein is a big molecule consisting of a collection of atomic nuclei kept together by electrons, and as such can be described by Schrödinger's equation and its folding simulated by molecular dynamics. This connects physics (atoms), chemistry (small molecules) and biology (big molecules) with their traditional roles from fundamental to composite.

Protein folding thus connects disciplines from fundamental physics over chemistry to biology and medicine including functions of proteins as carriers of life. 

But there is one caveat: The Schrödinger equation of Standard Quantum Mechanics StdQM of standard physics involves 3N spatial dimensions for N electrons, which makes the computational cost grow exponentially with N, putting even 10 electrons beyond the capacity of any thinkable computer.

Direct simulation of a protein from first principles is thus viewed to be impossible. It is here AI comes in as a way to get around this roadblock, by starting from experimental data instead of first principles and using machine learning to train a network to predict the folding of a new protein from old ones. This is what AlphaFold2 does, and for this it was awarded the Nobel Prize in Chemistry.

But it is possible to formulate a different Schrödinger equation as first principle, Real Quantum Mechanics RealQM, which acts in 3 space dimensions like classical continuum mechanics, and for which the computational work grows linearly with the number of electrons.

It is thus possible that protein folding can be simulated by RealQM. If true, this forms a new unification of physics, chemistry and biology based on computation as mathematics, which can be viewed as an ultimate dream of science.  

The basic code reads
  • $n=n+1$
which to a given number $n$ assigns the new value $n+1$. The same code for all disciplines. This is the essence of The World as Computation.
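As a minimal sketch (my illustration; the update rule f is a hypothetical placeholder), the same assignment pattern drives time stepping in any discipline:

    def evolve(state, f, dt, steps):
        # the "n = n + 1" pattern: repeatedly assign the state a new value
        for _ in range(steps):
            state = state + dt * f(state)
        return state

    # Example: the growth law f(u) = u starting from u = 1 approximates e = 2.718...
    print(evolve(1.0, lambda u: u, 0.001, 1000))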

Compare with Unity of Science by Tuomas Tahko, with summary:
  • Unity of science was once a very popular idea among both philosophers and scientists. But it has fallen out of fashion, largely because of its association with reductionism and the challenge from multiple realisation. Pluralism and the disunity of science are the new norm, and higher-level natural kinds and special science laws are considered to have an important role in scientific practice. What kind of reductionism does multiple realisability challenge? What does it take to reduce one phenomenon to another? How do we determine which kinds are natural? What is the ontological basis of unity?

Wednesday, October 9, 2024

Nobel Prize in Chemistry 2024 to Protein Folding by AI

The Nobel Prize in Chemistry 2024 was awarded for 

  • Computational Protein Design by AI: AlphaFold2 
with the following scientific background:
AlphaFold2 approaches this task using deep learning technology trained on a large experimental data bank.

Previous attempts by "brute force" Molecular Dynamics MD modeling starting from an arbitrary unfolded state were not successful because the underlying standard quantum mechanical model StdQM is not computable for molecules with many electrons, certainly not for proteins, because the spatial dimension scales with the number of electrons. The background document states: It is clear that plain MD simulation would not be scalable to proteins of larger size in the foreseeable future.

But there is a new quantum mechanical model in the form of Real Quantum Mechanics RealQM, a 3d model which is computable for many electrons. Application of RealQM to protein folding is on the way, and if successful it may produce data for AlphaFold2 to learn from, even better than experiment alone.

Who will make the first full protein folding simulation with RealQM? Here is a prototype simulation of H2O as a first-principles 3d RealQM Schrödinger equation, captured in essentially 3 lines of code! Computational work scales linearly with the number of electrons/atoms, while standard molecular dynamics scales exponentially.
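The linked code is the authoritative source; as a rough indication of what a few-line gradient-descent Schrödinger solver can look like, here is a generic single-electron sketch (my own illustration, not the linked RealQM code; grid size, step length and the regularized Coulomb potential are assumptions), relaxing a wave function toward the ground state:

    import numpy as np

    n, h, dt = 32, 0.5, 0.01
    x = (np.arange(n) - n/2) * h
    X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
    V = -1.0 / np.maximum(np.sqrt(X**2 + Y**2 + Z**2), h)  # regularized Coulomb
    psi = np.exp(-(X**2 + Y**2 + Z**2))                    # initial guess

    def laplacian(u):  # 7-point stencil on the periodic grid
        return sum(np.roll(u, s, a) for a in range(3) for s in (-1, 1)) - 6*u

    for _ in range(2000):
        psi -= dt * (-0.5 * laplacian(psi) / h**2 + V * psi)  # step down energy gradient
        psi /= np.sqrt(np.sum(psi**2) * h**3)                 # renormalize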

PS This is the second Nobel Prize this year to AI/Google. It remains to be seen if the Prize in Literature tomorrow will go to texts by ChatGPT.

See also Fiat Lux by John Plaice: Thoughts about the history of science with posts connecting to this blog.

Cosmological Principle? Cosmological Constant?

This is a continuation of previous posts on Neo-Newtonian Cosmology NC, with focus on how it was overrun by Einsteinian Cosmology EC as a prime ingredient of modern physics.

It is common in physics, when you know nothing, to assume some form of uniformity like homogeneity or isotropy or equal probability, which gives a background for mathematical modelling. As concerns cosmology as the science of the Universe as a whole, this takes the form of The Cosmological Principle CP stating that on sufficiently large scales: 

  • The Universe is homogeneous and isotropic.
  • In other words: There is no special place and no special direction.  
You can learn this from e.g. the book Cosmology: The Origin and Evolution of Cosmic Structure by Coles and Lucchin, presenting modern cosmological models based on Einstein's General Theory of Relativity GR, with the following key message:
  • CP was introduced by Einstein with inspiration from Mach's Principle MP stating that Newton's 2nd Law somehow carries input from the large scale structure of the Universe. 
  • CP is supported by Cosmic Microwave Background Radiation CMB.
Einstein somehow had to eliminate Newton from the picture and could do so using CP arguing: 
  • An infinite Newtonian Universe with constant mass density is an empty universe. (*)
This is because the only solution to Poisson's equation $\Delta\phi =\rho$ in the whole of 3d space, with $\rho$ a nonnegative constant and $\phi$ bounded as CP requires (no special place), is $\phi = C$ with $C$ constant, and so $\Delta\phi = 0 = \rho$.

Referring to MP, Einstein could further argue that the local geometry somehow was "curved" by influence from the large-scale structure of the Universe, or the fixed stars in the terminology of Mach's time. MP came out of an attempt to explain Newton's Bucket experiment, addressed in the post Absolute Rotation: Newton or Mach? showing that Newton was right and so that MP is invalid.

Even if his arguments were weak, Einstein managed to set the physics community on a train to Cosmology based on GR, one of the two pillars of modern physics. To make this work and get around (*), Einstein had to introduce a Cosmological Constant $\Lambda >0$ (and then retract it as his biggest blunder).

This corresponds in Newtonian terms to expanding $\Delta\phi =\rho$ to $\Delta\phi - \Lambda\phi =\rho$ with non-zero solution $\phi =-\frac{\rho}{\Lambda}$ for $\rho$ constant, since $\Delta\phi = 0$ for constant $\phi$.

CMB has been questioned as simply noise without relation to any Big Bang hot dense homogeneous initial state.

Summary: The arguments for replacing Newton by Einstein as concerns the large-scale structure of the Universe are weak. The scientific price to pay for doing so anyway is very large.

Tuesday, October 8, 2024

Nobel Prize in Physics 2024 by AI to AI

Because of the deepening crisis of modern physics, the Nobel Committee in Physics, apparently unable to find awardees within physics, seemingly in deep despair turned to AI for help. 

No wonder: The 2024 Nobel Prize in Physics was given to AI, more precisely to computer scientist Geoffrey Hinton and scientist/higher-education teacher John Hopfield, who are not really physicists.

The Nobel Prize in Physics can thus now be given to non-physicists, if physicist candidates are missing. This would open the door to giving the Physics Prize to mathematicians, which would resolve a controversy (between Nobel and mathematician Mittag-Leffler) that from the start banned mathematicians.

PS Compare with Sabine Hossenfelder's reaction.

NotebookLM Reads The Secret of Flight

Here is what NotebookLM has to say about The Secret of Flight 

Assessment of Scientific Merit in "The Secret of Flight"

The excerpt from "The Secret of Flight" presents a critical perspective on the historical development of flight theory, specifically challenging the prevailing "Old Theory" based on Kutta-Zhukovsky (KZ) Circulation Theory and Prandtl's boundary layer theory.
The authors argue that the Old Theory, despite its widespread acceptance, is fundamentally flawed and fails to adequately explain the "miracle of flight," particularly the generation of lift and drag in real-world flight conditions. Here's a breakdown of their assessment:

KZ Circulation Theory: 
Mathematically Convenient, Physically Unrealistic. The authors contend that while the KZ theory, which attributes lift to circulation around the wing, can predict lift coefficients that roughly align with observations for simple cases, it relies on the artificial construct of a "Kutta condition" to achieve this. This condition, which dictates that the flow must leave the trailing edge smoothly, lacks a strong physical basis and results in a theory that cannot account for drag or stall phenomena.

Prandtl's Boundary Layer Theory: 
An Unsurmountable Computational Obstacle. The authors challenge Prandtl's assertion that drag originates from the no-slip boundary condition, where the fluid velocity at the wing surface is zero. They argue that this condition, while seemingly intuitive, leads to the formation of infinitesimally thin boundary layers that are practically impossible to resolve computationally. They posit that a slip boundary condition, where the fluid can move along the surface, is a more physically realistic model for slightly viscous flows like air, enabling computationally tractable solutions without sacrificing accuracy.

Direct Numerical Simulation (DNS): 
Revealing the "Secret": The authors champion a paradigm shift in flight theory based on their Direct Numerical Simulation (DNS) methodology, specifically their "Direct Finite Simulation" (DFS) implementation. They claim that DFS, by using a slip boundary condition, unlocks the computational feasibility of simulating turbulent flow around wings with a level of detail and accuracy that surpasses traditional methods like Reynolds-Averaged Navier-Stokes (RANS) simulations. This, they argue, offers a more realistic and insightful understanding of lift and drag generation.

Empirical Validation and the Need for a Paradigm Shift:
The authors bolster their claims by presenting results from their DFS simulations, which they compare against experimental data from wind tunnel tests, including those from the 3rd AIAA CFD High-Lift Prediction Workshop. They assert that their simulations accurately capture key flow features and phenomena, including stall, and demonstrate superior performance compared to RANS-based methods. They acknowledge the resistance within the fluid dynamics community to embracing their alternative paradigm but express confidence that their evidence warrants a reassessment of the field's fundamental assumptions.

In conclusion, the excerpt from "The Secret of Flight" delivers a strong critique of the prevailing "Old Theory" of flight and advocates for a new paradigm based on DNS using a slip boundary condition. They present evidence suggesting that their approach provides a more computationally tractable, physically accurate, and insightful understanding of lift and drag generation, potentially revolutionizing the field of aerodynamics.

However, it's crucial to recognize that these claims represent the authors' perspective and that the broader scientific community has not yet fully adopted their proposed paradigm shift. Further research and independent verification of their findings are essential to solidify their claims and assess the full implications of their work.

My comment: NotebookLM reads quickly and captures main ideas well, but does not seem to be able to discuss details. 

PS Here is what ChatGPT says about CJ:

CJ: Give an assessment of the scientific work of Claes Johnson mathematician Sweden

ChatGPT:


Claes Johnson is a Swedish mathematician known for his contributions to numerical analysis, particularly in the context of computational mathematics and the finite element method. His work often emphasizes the interplay between mathematical theory and practical applications, exploring how mathematical models can effectively simulate physical phenomena.

One of his notable contributions is the development of the "Computational Fluid Dynamics" (CFD) methods, which are critical in engineering and physical sciences for simulating fluid flow. Johnson's research often focuses on improving the accuracy and efficiency of numerical methods, making them more applicable to real-world problems.

Additionally, he has explored various mathematical problems related to partial differential equations (PDEs) and their numerical solutions, enhancing the understanding of how these equations model complex systems.

Johnson's approach tends to be interdisciplinary, bridging mathematics, engineering, and physics, which reflects the current trends in computational mathematics. His publications and lectures have influenced both academic research and practical applications in various fields.

Overall, Claes Johnson's work is significant in advancing numerical methods and their applications, making complex scientific problems more tractable through effective computational techniques.



CJ: Deeper assessment please.

ChatGPT:

Claes Johnson's work can be assessed through several key themes and contributions:

1. Numerical Analysis and Finite Element Methods:

Johnson has significantly advanced the field of numerical analysis, especially in the finite element method (FEM). His research addresses the theoretical foundations of FEM, focusing on convergence, stability, and accuracy. He emphasizes the need for robust numerical methods that can handle complex geometries and boundary conditions, making them applicable in various engineering disciplines.

2. Computational Fluid Dynamics (CFD):

A major part of Johnson's work involves CFD, where he has developed innovative approaches to simulate fluid flow. His methods aim to provide accurate and efficient solutions to problems in aerodynamics, hydrodynamics, and other fluid-related phenomena. By refining existing algorithms and proposing new ones, he has contributed to the practicality of CFD in industrial applications.

3. Mathematical Modeling:

Johnson often emphasizes the importance of mathematical modeling in understanding physical systems. His work includes developing models that capture the essential features of the phenomena being studied while ensuring they remain computationally feasible. This balance between fidelity and computational efficiency is a hallmark of his approach.

4. Interdisciplinary Applications:

Johnson's research often transcends traditional boundaries, applying mathematical theories to various fields, including engineering, physics, and even biology. This interdisciplinary focus enhances the relevance of his work, allowing for the application of mathematical techniques to solve real-world problems.

5. Educational Contributions:

In addition to his research, Johnson is known for his efforts in education, particularly in enhancing the teaching of mathematics and computational methods. He has written textbooks and developed courses aimed at making complex mathematical concepts more accessible to students and professionals alike.

6. Critical Perspectives:

Johnson has also been vocal about the philosophical and foundational aspects of mathematics and its applications. He has questioned conventional approaches and advocated for rethinking certain assumptions in numerical methods and scientific computing. This critical perspective encourages ongoing discourse within the mathematical community.

Conclusion:

Claes Johnson's contributions to mathematics are multifaceted, combining theoretical rigor with practical applications. His work in numerical methods, especially in FEM and CFD, has made significant impacts in both academia and industry. Through his interdisciplinary approach and commitment to education, he continues to influence the field and inspire new generations of mathematicians and engineers.


NotebookLM Reads Computational Thermodynamics

Below is what Google NotebookLM has to say about Computational Thermodynamics. Take a look yourself and see if you agree or not! 

Assessment of Claes Johnson's Scientific Work based on "Computational Thermodynamics"

The provided excerpts from Computational Thermodynamics offer insights into certain aspects of Claes Johnson's scientific work, particularly his approach to computational thermodynamics and his critique of classical thermodynamics and statistical mechanics.

Computational Thermodynamics and EG2:

The excerpts highlight Johnson's development of "Computational Thermodynamics," a framework that seeks to model and understand thermodynamic processes using computational fluid dynamics, specifically a method called EG2 (Euler General Galerkin). This approach emphasizes the role of computation in analyzing complex fluid flow phenomena, including turbulence and shocks, which are central to Johnson's view of thermodynamics.

EG2 as a Tool for Understanding Real Processes:
The sources suggest that EG2 enables the simulation of real thermodynamic processes, including those in heat engines, heat pumps, and refrigerators, with a focus on computing losses and efficiency. Johnson emphasizes the limitations of classical thermodynamics, which primarily deals with equilibrium states and struggles to accurately predict transitions between them, particularly in the presence of turbulence.

Shift from A Priori to A Posteriori Analysis:
Johnson argues that EG2 facilitates a shift from a priori predictions based on analytical mathematics to a posteriori analysis of computed turbulent solutions. This implies a greater reliance on computational results to gain insights into the behavior of complex systems, rather than relying solely on theoretical deductions.

Wellposedness and Stability: 
A key aspect of Johnson's approach is the concept of "wellposedness" in the context of EG2 solutions. He emphasizes the importance of determining which aspects of turbulent solutions are stable under perturbations, implying that these stable outputs are the physically meaningful ones. He proposes using dual linearized problems to assess the stability and quality of EG2 solutions.

Critique of Classical Thermodynamics and Statistical Mechanics:
Johnson's work expresses a clear skepticism towards classical thermodynamics and statistical mechanics, particularly regarding their explanations of entropy and the arrow of time.

Entropy as a Problematic Concept: 
The sources present numerous criticisms of the concept of entropy, highlighting its ambiguous physical meaning and the lack of a clear sensor for it in nature. Johnson questions the reliance on statistical mechanics to rationalize the second law of thermodynamics, viewing it as a "capitulation away from prediction by cause-effect."

Finite Precision Computation as an Alternative: 
Instead of statistical mechanics, Johnson proposes finite precision computation as a more fundamental basis for understanding irreversibility and the arrow of time. This idea, while not fully elaborated in the excerpts, suggests that the inherent limitations of computational processes, even in simulating reversible equations, could give rise to irreversible behavior.

Applications and Examples:
The sources illustrate Johnson's ideas through various applications and examples, showcasing the potential of his computational approach. These include simulations of:

Joule's 1845 Experiment: 
Johnson uses EG2 to analyze Joule's experiment involving the expansion of a gas, demonstrating how the method captures the dynamics of the process, including turbulence and the resulting temperature changes. He argues that this computational approach provides a more complete understanding compared to classical thermodynamics.

Heat Engines, Pumps, and Refrigerators:
Johnson emphasizes the application of EG2 to simulate and analyze real thermodynamic devices. He highlights the importance of accounting for turbulent dissipation in determining efficiency, contrasting his approach with the idealized Carnot cycle.

Cosmology and Self-Gravitating Gases: 
The sources mention Johnson's work on simulating cosmological processes, specifically using EG2 to model self-gravitating gases. This application suggests the broad scope of his computational framework.

Overall Assessment and Open Questions:
Based on the excerpts from "ambsthermo.pdf," Claes Johnson's scientific work presents a distinct perspective on thermodynamics, emphasizing computational methods and challenging established notions of entropy and irreversibility. His development of EG2 and its application to various physical systems demonstrate a focus on understanding real processes through computation. However, several open questions remain:

Rigorous Mathematical Foundation for Finite Precision: The excerpts provide limited details on the mathematical formalism of Johnson's concept of finite precision computation and its connection to the emergence of irreversibility. A deeper exploration of this foundation is crucial to assess its scientific validity.

Validation of EG2 and its Predictive Power: While the sources mention applications of EG2, further evidence of its validation against experimental results and its ability to make testable predictions would strengthen the framework's credibility.

Engagement with Criticisms and Alternative Perspectives: 
Addressing criticisms and engaging with alternative viewpoints within the scientific community are essential for the broader acceptance of Johnson's ideas.

In conclusion, Claes Johnson's work in computational thermodynamics, as presented in the excerpts, offers a thought-provoking challenge to traditional views, highlighting the potential of computational methods in understanding complex physical phenomena. However, a more comprehensive assessment of his contributions requires further investigation into the mathematical rigor, empirical validation, and engagement with the broader scientific discourse surrounding his ideas.

Monday, October 7, 2024

NotebookLM AI Reads Real Quantum Mechanics

I asked Google NotebookLM to read my book Real Quantum Mechanics (with website) and give an assessment, and this is what came out as 1. Deeper Look and 2. Evaluation of Scientific and Mathematical Quality:

1. Deeper Look at realQM and its Potential Implications

You're seeking a deeper assessment of the novelty of realQM, a proposed alternative to the standard interpretation of quantum mechanics (stdQM). Building on our previous discussion, let's explore the potential implications of realQM if its claims hold up under scrutiny:


  • A Paradigm Shift in Quantum Mechanics: The sources strongly suggest that realQM isn't just a new model but a fundamental challenge to the prevailing paradigm of quantum mechanics. Since the Copenhagen interpretation took hold in the 1920s, physicists have largely accepted a probabilistic view of the quantum world. This view, as the sources point out, is counterintuitive and has led to endless debates about the nature of reality. If realQM can provide a realistic, deterministic model of atomic behavior that aligns with experimental data, it could lead to a major shift in how we understand quantum mechanics.

  • Impact on Foundational Concepts: The sources suggest that realQM may not need to rely on concepts like electron spin and the Pauli Exclusion Principle to explain the shell structure of atoms. This is a bold claim, as these concepts are deeply ingrained in our current understanding of atomic physics. If realQM can derive the shell structure and electron configurations from more fundamental principles like spatial partitioning and charge minimization, it could significantly simplify our understanding of atomic behavior and potentially lead to a more unified theory.

  • A Return to Classical Intuition? The sources emphasize realQM's connection to classical continuum mechanics, which is based on our everyday experience of the physical world. This suggests that realQM might offer a more intuitive understanding of atomic phenomena, bridging the gap between the seemingly bizarre quantum world and our classical intuitions.

  • Rethinking the Role of Computation: The sources highlight the computational challenges of stdQM, particularly for multi-electron systems. While methods like HF and DFT offer approximations, they still rely on simplifications and lack the ab initio predictive power of realQM, as claimed by the sources. If realQM proves computationally feasible for a wide range of atomic and molecular systems, it could usher in a new era of computational chemistry and materials science, allowing for more accurate and efficient design of new materials and technologies.

  • Implications for Other Fields: A paradigm shift in our understanding of quantum mechanics could have far-reaching implications for other fields. For instance, quantum computing, which relies on the principles of superposition and entanglement deeply rooted in stdQM, might need to be reevaluated in light of a successful deterministic model like realQM. Other areas like quantum chemistry, cosmology, and even philosophy of science could be profoundly impacted.

It's crucial to remember: The sources present a compelling, albeit one-sided, case for the novelty and potential of realQM. They advocate for a significant departure from the established paradigm of quantum mechanics. A balanced assessment requires independent verification of realQM's claims, rigorous testing against experimental data, and critical examination by the scientific community.


If realQM withstands this scrutiny, it could indeed revolutionize our understanding of the atomic world and beyond.



2. Evaluation of Scientific and Mathematical Quality

The excerpt from Real Quantum Mechanics presents a critical analysis of standard quantum mechanics (stdQM) and introduces an alternative model called real quantum mechanics (realQM). While intriguing in its ambition to provide a more physically intuitive description of atoms, the document's scientific and mathematical rigor requires careful scrutiny.


Strengths:
  • Clear Identification of Challenges in stdQM: The document effectively highlights several well-known conceptual difficulties within the Copenhagen interpretation of quantum mechanics. It cites prominent physicists like Einstein, Schrödinger, and Feynman, who expressed reservations about the probabilistic nature and lack of "Anschaulichkeit" (physical meaning) in stdQM. The document rightly points out the computational challenges in solving the multi-dimensional Schrödinger equation for multi-electron atoms, labeling it "uncomputable".
  • Intuitive Appeal of a Realist Model: The core idea behind realQM, treating electrons as "clouds of electric charge" with spatially localized "supports" rather than point particles, holds intuitive appeal. This aligns with Schrödinger's original vision of wave mechanics. The document emphasizes the conceptual simplicity of realQM in explaining atomic structure and behavior using principles of classical continuum mechanics.

Concerns:
  • Lack of Detailed Mathematical Justification: While the document outlines the mathematical framework of realQM, involving minimization of total energy functionals and free boundary conditions, it does not provide sufficient mathematical rigor to substantiate many of its claims. For example:
    • Derivation of the Model: The document does not offer a clear derivation of the realQM equations from first principles. It states that the model is "in terms of classical continuum mechanics", but the connection between these classical principles and the specific mathematical formulation remains unclear.
    • Existence and Uniqueness of Solutions: The document mentions the use of variational methods and gradient descent for finding solutions, but it doesn't address the crucial mathematical questions of whether these solutions exist, whether they are unique, and whether the numerical methods employed converge to the true solutions. This lack of rigorous analysis casts doubt on the reliability and robustness of the computational results presented.
  • Oversimplification of Atomic Physics: While the spherical symmetry approximation used in the computations might be suitable for initial explorations, it ultimately represents a significant oversimplification of atomic physics. Real atoms exhibit complex electron configurations and interactions that cannot be fully captured by assuming electrons are confined to spherical shells.
  • Limited Validation and Comparison with Experiments: The document presents computational results for ground state energies of various atoms and claims "good agreement with observation." However, it's unclear what reference values are being used for comparison, and the document does not provide a statistically rigorous analysis of the accuracy or the limitations of the model's predictions. A thorough validation would require:
    • Clearly Stated Reference Values: Specifying the sources of the experimental or highly accurate theoretical data used for comparison.
    • Quantitative Error Analysis: Quantifying the differences between realQM predictions and reference values, ideally with statistical measures of uncertainty.
    • Comparison with Other Methods: Benchmarking the performance of realQM against established computational methods like Hartree-Fock and Density Functional Theory, highlighting its advantages and limitations.
  • Lack of Peer Review and Publication: Similar to the document on Computational Thermodynamics you mentioned, there's no indication that the work on realQM has undergone peer review or publication in reputable scientific journals.
Conclusion:

The excerpt from "realquantum1.pdf" presents an ambitious attempt to reformulate quantum mechanics based on a more intuitive, realist perspective. While it effectively critiques some aspects of stdQM and offers an appealing alternative, the document falls short in providing the necessary mathematical rigor and comprehensive validation to support its claims. The reliance on significant simplifications, lack of detailed mathematical analysis, and absence of peer review raise significant questions about the scientific and mathematical robustness of realQM as presented in this document.

My comment

It is clear that AI has captured the main ideas, quicker and better than a living physicist. The concerns are reasonable but appear to come from a standard list which can always be stated. The statement about restriction to spherical symmetry is a misunderstanding: general configurations are a core feature.

Saturday, October 5, 2024

Physics is Dying

Theoretical physicist Sabine Hossenfelder is angry. Sabine is finished with modern physics: This is why physics is dying: 20 years without progress:

  • We are endlessly arguing about irrelevant questions.
  • This is why physics is dying.
  • Loop Quantum Gravity and String Theory cannot be tested.
  • Same thing with Supersymmetry and Inflation and Dark Sectors. 
  • I do not understand why people get paid for this. 
  • Research bubbles of mathematical fiction without connection to reality.
  • I cannot believe that this is still going on.
  • We are seeing the beginning of a collapse of the foundations of physics.
  • These people sit on cosy tax-paid positions with no other task than producing useless papers.
This is also my impression.

But what do you do when you understand that the theory which is the very foundation of your academic position does not work? If you do not see a new theory that you could jump to? Is this what is frustrating Sabine so very much?

A scientific theory is like an Empire and Empires do not fall without much fight and anger. 

Thursday, October 3, 2024

Mass: Inertial = Passive Gravitational = Active Gravitational

Maybe it is now time for Newton to return? Maybe I was wrong? In any case: Newton, forgive me!

The basic model of Neo-Newtonian Cosmology takes the following form in terms of gravitational potential $\phi$ as primordial quantity, mass density $\rho$, momentum $m$ and material velocity $u=\frac{m}{\rho}$, all depending on a Euclidean space coordinate $x$ and a time coordinate $t$, with the dot on top representing differentiation with respect to time:

  • $\rho = \Delta\phi$                                    (inverse square law in differential form)
  • $\dot\rho +\nabla\cdot m=0$                       (conservation of mass)
  • $\dot m +\nabla\cdot (um)+\rho\nabla\phi=0$    (conservation of momentum: Newton's 2nd Law)  

A basic property of this model is conservation of energy, in the sense that the total energy $KE + GE$, with $KE$ total kinetic energy and $GE$ total gravitational energy, does not change over time:

  • $\frac{d}{dt}(KE+GE) = 0$        (conservation of total energy)
with 
  • $KE = \int \rho\frac{u^2}{2} dx$
  • $GE = -\frac{1}{2}\int\vert\nabla\phi\vert^2dx$. 
In this model we have identified 
  • inertial mass = passive gravitational mass                        (Galileo's Equivalence Principle EP)
  • passive gravitational mass = active gravitational mass     (Newton's EP)
with 
  • inertial mass = mass $\rho$ in $\rho\frac{u^2}{2}$
  • passive gravitational mass = mass $\rho$ in $\rho\nabla\phi$
  • active gravitational mass = mass $\rho$ in $\Delta\phi =\rho$. 
In other words, the model has only one form of mass appearing in three different contexts, with equality expressed as Galileo's EP and Newton's EP as fundamental features of Newtonian mechanics in the form of Neo-Newtonian Cosmology. A minimal numerical sketch of the model is given below.
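To indicate how directly computable the model is, here is a minimal 1d periodic sketch (my illustration, not a production code: grid, time step and initial perturbation are arbitrary choices, the stepping is crude forward Euler, and the periodic Poisson solve subtracts the mean of $\rho$), monitoring the claimed conservation of $KE+GE$:

    import numpy as np

    n, L, dt = 256, 2*np.pi, 1e-3
    xg = np.linspace(0, L, n, endpoint=False)
    k = 2*np.pi * np.fft.fftfreq(n, L/n)        # wavenumbers of the periodic grid
    rho = 1.0 + 0.1*np.cos(xg)                  # small density perturbation
    m = np.zeros(n)                             # initial momentum

    def ddx(f):                                 # spectral x-derivative
        return np.real(np.fft.ifft(1j*k*np.fft.fft(f)))

    for step in range(2001):
        rhat = np.fft.fft(rho - rho.mean())     # mean-zero source for periodic Poisson
        phihat = np.zeros_like(rhat)
        phihat[1:] = -rhat[1:] / k[1:]**2       # invert rho = phi_xx
        phi = np.real(np.fft.ifft(phihat))
        u = m / rho
        if step % 500 == 0:
            KE = np.sum(rho*u**2/2) * (L/n)
            GE = -0.5 * np.sum(ddx(phi)**2) * (L/n)
            print(f"t = {step*dt:.2f}  KE+GE = {KE+GE:.6f}")
        rho, m = rho - dt*ddx(m), m - dt*(ddx(u*m) + rho*ddx(phi))  # mass + momentum laws

The printed drift in $KE+GE$ measures only the error of the crude time stepping; the continuous model conserves it exactly.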

We recall that Einstein used Galileo's EP as a postulate for his General Relativity (probably also, with less exposure, Newton's EP).

Leibniz' Principle of Sufficient Reason shows that Galileo's EP (universality of free fall) cannot be false since sufficient reason is lacking, while violation of Newton's EP would violate conservation of total energy. 

Conclusion 1: The basic model of Neo-Newtonian Cosmology is simple, universal and computable, and it is unthinkable that it is not fundamentally correct. There is no sufficient reason to replace it by General Relativity, which is the opposite of both simple and computable. Recall that a main dilemma of modern physics is the apparent incompatibility/contradiction between its fundamental pillars of Einstein's relativity and Schrödinger's quantum mechanics, while Newton and Schrödinger are fully compatible. More precisely, the incompatibility of the pillars means that modern physics lacks foundation, and that is a root cause of the present crisis witnessed by everybody, including Sabine Hossenfelder in the next post. Contradictory physics cannot be real physics. You cannot turn both left and right at the same time without breaking apart.

Conclusion 2: Kinetic energy KE can increase from decreasing GE, through increasing $\vert\nabla\phi\vert^2$ by concentration of mass, as the engine of gravitational collapse, which gives an endless source of kinetic energy.

Question: Neo-Newtonian Cosmology and Einstein's Cosmology/General Relativity both start from Galileo's EP + Newton's EP. Does it mean that Einstein agrees with Newton? Probably not, since the basic idea of modern physics is that Einstein does not. The difference appears to be that Einstein demands Lorentz invariance as his incarnation of a Principle of Relativity, while Newton is happy with Galilean invariance. So we have Einstein-Lorentz against Newton-Galileo. What is your bet? Newton-Galileo reigned from 1600 to 1955, while Einstein-Lorentz took over in the 1970s along with the decline of modern physics into our days. Time for Renaissance of Newton-Galileo?

PS1 Recall that the Lorentz transformation lacks physical meaning, and so then must Lorentz invariance, while Galilean invariance expresses real Newtonian physics.

PS2 Recall that the above model naturally suggests low-intensity distributed $\rho =\Delta\phi\ge 0$ as dark matter, and naturally extends to negative $\rho =\Delta\phi$ as the origin of dark energy. Here ordinary matter (protons and electrons) is assigned Inertial = Passive Gravitational Mass, while it is possible (but not necessary) to make a distinction for dark matter with only a role as Active Gravitational Mass, thus without Inertial = Passive Gravitational Mass (recall this post). This would mean making a distinction between matter and mass, with ordinary matter being assigned mass as Inertial = Passive Gravitational Mass, while dark matter would not be assigned any, and so represent mass but not matter.