Saturday 30 July 2022

The Catch of CO2 Alarmism vs Homeopathy

The catch of CO2 alarmism is that CO2, an atmospheric trace gas, is claimed to have a considerable impact on global temperature as a "greenhouse gas", estimated by IPCC to 3-5 C of warming upon doubling of the concentration from 0.03% in preindustrial time, with 0.04% today. It is also claimed that this reflects a near-saturation (logarithmic) effect of CO2, with the additional warming from another doubling being very small. The catch is then that CO2 as a greenhouse gas of very small concentration is claimed to have a big effect up to 0.03%, as a very substantial part of an estimated total atmospheric effect of +33 C, but a much smaller effect above 0.03% by saturation.

Now, a physical system in which a small cause has a big effect is an unstable system, and as such cannot persist over time. It is like steering your car with a very small joystick sensitive to every little shake of your hand. You would not get far with such a control mechanism.

So, if global temperature is largely controlled by CO2 as a trace gas, acting as a very small cause, the Earth faces the possibility of a "runaway greenhouse effect" with huge warming upon doubled CO2. But this is not what we see: the observed warming is less than 1 C, with CO2 rising by 0.01% from the preindustrial level. The proposed explanation is saturation of the warming effect of CO2 as a greenhouse gas, which then must already have been reached today.

But there is another, more natural explanation of the little observed warming from increased CO2, namely that CO2 as a trace gas has no (observable) influence on global temperature at all. This is like moving the control of your car from the sensitive joystick to a conventional steering wheel. You can then let your kids play with the joystick as much as they want, under the illusion that they are steering the car.

This connects to the homeopathic principle of dilution, where the concentration of a supposedly beneficial substance can be reduced to a mere trace while keeping or even improving the effect. You can then be a believer, or you can as a non-believer say that the substance has no beneficial effect whatsoever, independent of dose.

The catch of CO2 alarmism is thus to claim a big effect from a small cause, yet without runaway effect, and then eliminate the possibility that the small cause de facto has only a small effect. This dilemma has not been resolved.

The big effect of CO2 as a trace gas is claimed to be demonstrated by its blocking effect (causing warming) on Outgoing Longwave Radiation OLR, seen as the big ditch in the radiation intensity spectrum around wavenumber 600 as reported by AIRS spectrometers on satellites, with the green curve representing OLR from an atmosphere without CO2, as presented by William Happer (time 25.00) (see also article):

The effect of doubling CO2 is represented by a slight broadening of the ditch, in red, as a small effect due to saturation. The ditch in OLR causing global warming is thus presented as a big effect up to 0.03% and a small effect upon doubling, as discussed above in connection with homeopathy.

But there is a catch in the above spectrum, since what AIRS de facto measures is the temperature of atmospheric CO2 at the highest level where it can be detected (220 K at the tropopause), which is possible even at trace concentrations; what is reported, however, is radiation intensity from Planck's Law, as if CO2 as a trace gas were blocking a considerable amount of radiation. The above diagram is presented as the main evidence that CO2 as a trace gas causes considerable global warming, yet without runaway effect. But the diagram can be questioned, since it reports something which is not directly measured. Maybe a true OLR has no ditch and so no (observable) warming from CO2. Maybe the spectrometer acts as a ghost detector, like the one detecting Downwelling Long Wave Radiation DLWR discussed in this post. What do you think?
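To make the questioned step concrete, here is a small sketch (my own illustration, not from Happer or any AIRS documentation) of the standard conversion from a measured brightness temperature to a reported spectral radiance via Planck's Law; the temperatures 220 K and 288 K are from the text, while the wavenumber 667 cm$^{-1}$ (centre of the CO2 band) is an assumption for the example:

```python
import numpy as np

# Planck spectral radiance per wavenumber (SI units: W m^-2 sr^-1 (m^-1)^-1).
H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck_radiance(wavenumber, T):
    """B(nu,T) = 2 h c^2 nu^3 / (exp(h c nu / k T) - 1), nu in 1/m."""
    x = H * C * wavenumber / (K * T)
    return 2 * H * C**2 * wavenumber**3 / np.expm1(x)

nu = 667e2  # 667 cm^-1 in 1/m, centre of the CO2 absorption band (assumed)
for T in (220.0, 288.0):  # tropopause vs mean surface temperature
    print(f"T={T} K: radiance {planck_radiance(nu, T):.2e}")
# The instrument records a brightness temperature (~220 K in the band);
# the roughly 3x lower radiance then reported there is the "ditch".
```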

Happer is a skeptic claiming little warming from doubled CO2, represented by the slight broadening of the ditch. But Happer seems to believe in its present big effect, represented by the green curve without ditch. It would be interesting to hear what Happer says about the possibility that the diagram is the result of ghost detection, so I have asked him and am waiting for a response.

PS1 The fact that an AIRS spectrometer directly measures temperature and not radiation intensity (like a pyrgeometer) is supported by the fact that it contains both coolers and radiators at 150 K and 190 K. Also recall this post.

PS2 Communication with Will Happer: (see invitation to TNT Radio)

Claes: 

Does a pyrgeometer directly record temperature or radiation intensity? 

Will: 

RADIATION INTENSITY AND TEMPERATURE ARE TWO DIFFERENT THINGS. PYROMETERS MEASURE RADIATION INTENSITY (OR FLUX), NOT TEMPERATURE. IN EARTH'S ATMOSPHERE RADIATION IS SELDOM EVEN CLOSE TO A THERMAL EQUILIBRIUM STATE (THE PLANCK STATE) WHICH CAN BE DESCRIBED BY A TEMPERATURE. THE CLOSEST THING TO THIS SITUATION IS THE INTERIOR OF OPTICALLY THICK CLOUDS AT NIGHT, WHEN SHORT MEAN FREE PATHS FOR THERMAL RADIATION AND THE ABSENCE OF SUNLIGHT PRODUCE NEAR EQUILIBRIUM THERMAL RADIATION AT THE TEMPERATURE OF THE LOCAL CLOUD PARTICULATES.

Claes: 

Thanks for the response, which I much appreciate. We are both skeptics (of CO2 alarmism), and so a main question is which arguments are best for debunking CO2 alarmism. You seem to say that a pyrgeometer directly measures radiation flux and not temperature, whereas I have the opposite impression.

Let us seek an answer by looking into the design of a pyrgeometer, which consists of (1) a thermopile reading/measuring a voltage U scaling with the temperature difference dT between its ends, (2) a silicon dome/window and (3) a temperature sensor measuring the temperature T_dome of the dome. The source temperature T_source can then (after calibration) be determined as T_source = T_dome + dT. Again, what is de facto measured is (1) the voltage U scaling with dT and (3) the dome temperature T_dome.

There is, as far as I understand, no sensor measuring incoming radiation flux into the dome. Incoming radiation is instead calculated by postulating outgoing radiation according to Planck-Stefan-Boltzmann of magnitude sigma T_dome^4, but this is a fictional quantity, as if the dome were in radiative contact with outer space at 0 K, while in fact it is in radiative contact with a source of higher temperature.

A. Can we agree that what de facto is measured is (1) thermopile voltage scaling with end temperature difference and (3) dome temperature?

B. Can we agree that there is no sensor directly measuring incoming radiation?  

C. You compute a climate sensitivity (without feedback) of about 1 C, about the same as that presented by CO2 alarmists. Is that a complication for a skeptic? Would it help us if climate sensitivity could be estimated to be less than 0.1 C?

Hope you can sort out A-C for me. 

My attempt to understand blackbody radiation is here https://computationalblackbody.wordpress.com

Will:

If I have understood you correctly, I don't agree with what you say about temperature and intensity or flux measurements. But I don't have time for extensive correspondence on this topic. Spectral intensity measurements are often expressed as equivalent temperatures. But the basic measurement is of energy fluxes which produce voltages or currents in sensor elements. I attach a paper that describes one such instrument. If you are not already familiar with how high-resolution spectrometers work, it would be worth your time to study the paper. A black-body calibrating source is needed to convert intensity (voltage) measurements to equivalent temperature.

Temperature is both obvious from our sensations of hot and cold, and at the same time profoundly subtle from a thermodynamic point of view. Here it is intimately associated with the idea of systems in thermal equilibrium, and with the abstract concept of entropy. The most fundamental definition of the temperature of a body in thermal equilibrium is the rate of change of its internal energy with entropy, under conditions that no work is done. This definition is usually too abstract to be of much use.

Ideal Planck radiation has a temperature and has the maximum possible entropy for radiation of a fixed total energy. But it is unusual to find real Planck radiation in nature. Besides having the Planck distribution of intensity over frequency, the radiation must also be completely isotropic. Sunlight can be roughly isotropic inside an optically thick terrestrial cloud, but the spectral intensity is orders of magnitude too weak for the sunlight to be in thermal equilibrium. Except in the interior of optically thick clouds, Earth's thermal radiation is highly anisotropic, and it usually does not have a Planck spectral distribution because of the complicated absorption spectra of greenhouse gases. But cloud particulates are pretty good grey bodies with no sharp spectral features. The particulates absorb, emit and scatter much more powerfully than greenhouse gases. So multiple scattering, emission and absorption of thermal radiation inside optically thick, isothermal clouds can produce radiation which is close to the Planck limit for the temperature of the particulates in the cloud.

Claes:

Thanks for the response! I think you can settle the following issues quickly, understanding that your time is limited.

1. It seems we agree that what a pyrgeometer de facto measures, by its design, is temperature. Is this your opinion?

2. The instrument you refer to uses bolometers, which by definition are sensitive thermometers whose electrical resistance varies with temperature. Can we then agree that bolometers de facto by design measure temperature?

3. You say that "Spectral intensity measurements are often expressed as equivalent temperatures". Yes, there is a connection between the temperature of a (black/grey) body and the radiative transfer/flux of heat (in short, radiation) from the body, but it depends on the temperature of the surroundings of the body and thus cannot be concluded from temperature alone. Can we agree on that?

4. A bit more precisely: The idea that a body at temperature T radiates according to Planck's law with radiative flux scaling with T^4 independently of the surrounding/background temperature, is a misinterpretation of Planck's Law, which in correct form states that the radiation scales with (T^4 - T_b^4), with T_b the background temperature. Can we agree on this?

(This is analysed in detail at http://computationalblackbody.wordpress.com, including a new proof of the correct Planck law without resort to statistics.)

Will:





Friday 29 July 2022

Unphysical Schwarzschild Equation for Heat Transfer by Conduction

Schwarzschild's equation for heat transfer by radiation through the atmosphere is a two-stream upwelling-downwelling heat transfer model with the net heat transfer expressed as the difference between upwelling and downwelling streams. It is the basic model for radiative heat transfer used in climate models.

To get perspective on this two-stream model, let us see what a two-stream model for heat transfer by conduction instead of radiation would look like. We shall then compare with the standard one-stream model for vertical heat conduction through a horizontal layer, which is Fourier's Law:

  • $q(x) = -\gamma\frac{T(x+dx)-T(x)}{dx}$   or $q(x) = -\gamma\frac{dT}{dx}$,    (1)

where $q(x)$ is net heat transfer/flow, $\gamma$ is a heat conductivity coefficient, $T(x)$ is temperature, $x$ is a vertical coordinate and $dx$ a small increment.

A two-way version of Fourier's Law takes the form

  • $q(x) =\gamma\frac{T(x)}{dx} - \gamma\frac{T(x+dx)}{dx}$   (2) 

where the net heat transfer $q(x)$ is expressed as the difference of two gross streams $\gamma\frac{T(x)}{dx}$ and $\gamma\frac{T(x+dx)}{dx}$ in opposite directions. But this two-way version is unstable, because the temperatures $T(x)$ and $T(x+dx)$ are divided by the small quantity $dx$: it is both uncomputable (small errors in $T$ are amplified by the factor $\frac{1}{dx}$) and unphysical, since an unstable system does not have permanence over time. Note that the derivative $\frac{dT}{dx}$ does not suffer from the same instability, since $dT$ is small.
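To see the instability concretely, here is a small numerical sketch (my illustration, with an assumed temperature and gradient): the two gross streams are stored in single precision before being subtracted, while the one-stream form stores the small difference $dT$ directly.

```python
import numpy as np

# Compare Fourier's one-stream form (1), q = -gamma*dT/dx, with the
# two-stream form (2), q = gamma*T(x)/dx - gamma*T(x+dx)/dx, when the
# stored quantities carry float32 rounding errors. Exact answer: q = 0.0065.
gamma = 1.0
T0 = 288.0        # temperature at x, in K (assumed)
dT_dx = -0.0065   # fixed gradient, in K/m (assumed)

for dx in (1.0, 1e-3, 1e-6):
    dT = dT_dx * dx
    # one-stream: the small difference dT is rounded, then divided by dx
    q_one = -gamma * float(np.float32(dT)) / dx
    # two-stream: each large gross stream is rounded, then subtracted
    up = np.float32(gamma * T0 / dx)
    down = np.float32(gamma * (T0 + dT) / dx)
    q_two = float(up - down)
    print(f"dx={dx:g}: one-stream q={q_one:.6f}, two-stream q={q_two:.6f}")
# As dx shrinks, the gross streams blow up like T/dx and their rounded
# difference drowns in rounding error, while the one-stream form stays accurate.
```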

We conclude that a Schwarzschild two-stream model for heat transfer by conduction is unstable, uncomputable and unphysical. Can we expect that Schwarzschild's two-stream model for heat transfer by radiation does not suffer from the same deficiency?  

It remains to formulate a stable one-stream model for heat transfer by radiation through an absorbing/emitting gas. An interesting such model is presented here, to which I will return.  

We can take the argument one step further by expressing conservation of heat energy in the form 

  • $\frac{q(x)-q(x-dx)}{dx} = f(x)$   (3)

where $f(x)$ is a heat source. Combined with (2) this gives the model (with $\gamma =1$)

  • $\frac{2T(x)}{dx^2} - \frac{T(x+dx)}{dx^2} - \frac{T(x-dx)}{dx^2} = f(x)$   (4)

which, in the spirit of Schwarzschild splitting the source $f(x)$ into contributions to an "upwelling heat flux" $q_{up}$ and a "downwelling heat flux" $q_{down}$, we can write as

  • $q_{up}(x)\equiv\frac{2T(x)}{dx^2} =\frac{f(x)}{2}$
  • $q_{down}(x)\equiv\frac{T(x+dx)}{dx^2} + \frac{T(x-dx)}{dx^2}=-\frac{f(x)}{2}$

with net flow $q(x)=q_{up}(x)-q_{down}(x)$ balancing $f(x)$ according to (4). These are unstable unphysical equations that do not make sense. We compare with combining (3) with (1) into the standard heat equation

  • $-\frac{d^2T}{dx^2} \approx -\frac{T(x+dx)-2T(x)+T(x-dx)}{dx^2}=f(x)$

which makes perfect sense as a differential equation allowing stable solution. We understand that Schwarzschild's two-stream model for heat transfer by conduction is no good. Is it then no good also for heat transfer by radiation?
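As a contrast, here is a minimal sketch (my own, with an assumed source $f(x)=1$ and zero boundary values) of solving the stable one-stream heat equation above with the standard three-point stencil:

```python
import numpy as np

# Solve -T'' = f on (0,1) with T(0) = T(1) = 0 using the stencil
# -(T[i+1] - 2*T[i] + T[i-1])/dx^2 = f[i] from the text.
n = 100                      # number of interior mesh points
dx = 1.0 / (n + 1)
f = np.ones(n)               # assumed heat source f(x) = 1

# Tridiagonal stencil matrix, scaled by 1/dx^2: only the small
# differences of T enter, never T itself divided by dx.
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / dx**2
T = np.linalg.solve(A, f)

print(T.max())               # ~0.125, matching the exact solution x(1-x)/2
```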

Thursday 28 July 2022

Measuring Downwelling Long Wave Radiation by Ghost Detector

In previous posts I have shown that CO2 alarmism is based on the idea that the atmosphere warms the Earth surface by Downwelling Long Wave Radiation DLWR, or Back Radiation, as the physics of the Greenhouse Effect, whose existence is supposed to be documented through measurements with instruments such as pyrgeometers and radiometers reporting data of the typical form (with clear skies at the beginning and end of the period and cloudy in between):

We see here, depending on cloudiness, measured DLWR ranging from 280 to 400 Watts/m2, to be compared with around 200 Watts/m2 of Short Wave Radiation from the Sun; thus very substantial, about two extra Suns. The conclusion from measurement is thus that the atmosphere, depending on cloudiness, warms the Earth surface more than the Sun does, and from that discovery an alarm message is sent that just a tiny change of the atmosphere, like a doubling of the concentration of the trace gas CO2, can cause catastrophic global warming of up to 3-5 C. The argument is that if you can measure something (DLWR) with some instrument (pyrgeometer), then that something which you are measuring must exist. But is this a valid scientific argument?

No, it is not necessarily so, without looking into the functioning of the instrument: In earlier posts (compare with Wikipedia on the pyrgeometer) I have shown that a pyrgeometer is a ghost detector, which reports massive DLWR from a formula of the form

  • DLWR = pyrgeometer measurement + OLWR   (1)

where OLWR is assumed to be Outgoing Long Wave Radiation from the instrument into a background at zero Kelvin according to Planck's Law. But OLWR is a fictional massive ghost radiation, since the instrument is communicating with the atmosphere and not with outer space at zero Kelvin. What the pyrgeometer actually measures is in fact the temperature difference between the warmer Earth surface and the somewhat colder atmosphere (not outer space at zero Kelvin), which is of moderate size and according to Stefan-Boltzmann's law scales with the heat transfer from the Earth surface to the colder atmosphere.

The pyrgeometer thus measures a moderate heat transfer from the Earth surface to the atmosphere, which together with the massive fictional non-physical OLWR of size 390 Watts/m2 at a mean Earth surface temperature of around 15 C, becomes the massive fictional non-physical DLWR of size 280-400 Watts/m2 reported by (1) as the postulated physics of a fictional non-physical Greenhouse Effect.
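A minimal sketch (my illustration, with made-up numbers) of the reporting formula (1): the reported DLWR is dominated by the postulated sigma*T^4 term, not by the moderate measured net signal.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def reported_dlwr(net_signal, T_instrument):
    """Formula (1): DLWR = pyrgeometer measurement + postulated OLWR."""
    olwr = SIGMA * T_instrument**4   # postulated emission into a 0 K background
    return net_signal + olwr

# Illustrative values (assumptions, not measurements): instrument at 15 C,
# a moderate negative net signal with the surface warmer than the sky.
print(reported_dlwr(-60.0, 288.0))  # ~330 W/m^2; the sigma*T^4 term alone is ~390
```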

Recall that a pyrgeometer measures (as a voltage) the difference between the end temperatures of a thermopile, with one end in contact with a dome/window directed to the atmosphere and the other with the instrument itself, together with a thermometer measuring the instrument temperature. Although a temperature difference is thus de facto what is measured, a calculated DLWR according to (1) is reported, with the OLWR postulated (but unphysical).

We may compare with a situation where you have hired a plumber or lawyer to do a certain job, and after completion the net work accomplished is recorded and you are presented with a bill of the form

  • total cost = net recorded result  + claimed invested effort by plumber/lawyer    (2)

and where the claimed invested effort can be anything and the net recorded result can even be negative (which is the analog of (1)), while the total cost to pay is big. This can be the case if you have not agreed on a total cost for the job ahead of time, and just hope that you will not get ripped off in an open-ended contract (current account) like (2).

But in climate alarmism you do get ripped off by the use of a ghost detector reporting a DLWR which does not exist, because it is the Earth surface that warms the colder atmosphere and not the other way around.

Why should you allow yourself to get ripped off, if it is not necessary? By getting fooled about a fictional non-physical Greenhouse Effect recorded by a ghost detector? But the cheating is so simple that it may be difficult to discover. More details in this presentation from Climate Sense 2018.

Note that the above picture reports more warming from cloudy skies at night, which fits with experience and so can deceptively be used to sell the idea that DLWR is real.

For physically correct versions of Planck-Stefan-Boltzmann's Law see Computational BlackBody Radiation. 

If you are not convinced about the role of DLWR/Back Radiation in selling CO2 alarmism, take a look at the following energy budget diagram presented by NASA:


See (with support from slides and a hand-out to innocent students) that the Earth surface absorbs 48.0% of incoming sunlight (yellow) and an additional 100% (two extra Suns) from Back Radiation (brown), while the Earth surface emits 117.0%, more than twice what is absorbed from the Sun. How do you react when you understand that you are being fooled?

Sunday 24 July 2022

The Sky Dragon Slayers Moving On

The book Slaying the Sky Dragon: Death of the Greenhouse Gas Theory (2011) was produced by a group of "Slayers", in which I participated with two chapters, headed by John O'Sullivan, who founded Principia Scientific to take the mission further.

More than ten years have passed since the book was published, with a steadily increasing momentum for the Slayers and their message, with John O'Sullivan, Joe Olson and Joe Postma now running Sky Dragon Slaying as one of the shows on the new channel TNT Radio Live. Take time off and listen to the shows and you will be enlightened. I appeared on the Jason show last week. See the report on Principia Scientific.

Finite Precision Computation/Physics and Heat Energy

Euler CFD as a parameter-free Theory of Everything ToE for slightly viscous incompressible fluid flow is a prime example of the idea of finite precision digital computation being capable of simulating physics as a form of finite precision analog computation. Euler CFD captures turbulent flow from a principle of best possible digital solution of Euler's equations, in a situation where exact (laminar) solutions are all unstable, without permanence over time, and so are unphysical and cannot be observed.

The essence of turbulent flow captured by Euler CFD is the production of heat energy in turbulent dissipation from residual stabilisation, in a situation where Euler residuals can be made small only in a weak mean-value sense, but blow up in a strong pointwise sense. The dissipative mechanism thus expresses an impossibility of computationally resolving the flow because of finite precision, which in physical terms means production of heat energy as small-scale unordered motion.

Radiative heat transfer also involves an aspect of finite precision, in the sense that a body viewed as a set of oscillators is capable of radiating only frequencies below a certain cut-off frequency scaling with temperature, because the synchronisation of the oscillators necessary for radiation is, in finite precision, impossible above cut-off.

There is a connection between turbulent flow and radiative heat transfer in that the heat energy generated in turbulent dissipation ultimately is released as radiation, which gives a meaning to heat energy as unordered motion, as unsynchronised oscillatory motion.

Recall from the blog post 2nd Coming of the 2nd Law that finite precision computation/physics explains why certain processes are irreversible, as processes where large-scale kinetic energy/ordered motion is transformed into small-scale kinetic energy/disordered motion, which cannot be reversed because the precision required to restore large-scale order from small-scale disorder is not there. This is like restoring all your manuscripts after a tornado has swept them into little pieces, or after your hard disk has crashed.

Finite precision computation opens an approach to the 2nd Law which is different from the standard one based on statistics. Small-scale disorder is the result of turbulent dissipation as a finite precision resolution of increasingly complex flow arising from flow instability, a resolution which cannot be reversed in finite precision. It is like chopping digits/details out of necessity (because storage is limited), which then cannot be retrieved.

Heat energy as internal energy, as small-scale disordered motion, is low-quality energy in the sense that transformation to other forms of energy, such as large-scale motion, comes with severe losses. This puts limits on the efficiency of steam and combustion engines transforming heat energy into piston motion. Electric energy, on the other hand, is high-quality energy, typically generated from large-scale motion in generators, allowing efficient electrical motors returning large-scale motion. Heating by electricity thus involves a form of quality degradation, which can be expensive, while heating by burning fossil fuels is efficient and cheap.

Saturday 23 July 2022

What Is Heat Energy?

Heat energy is a central element in both thermodynamics and radiative heat transfer. But what is in fact heat energy?

The 1st Law of Thermodynamics states that the total energy as kinetic energy plus (internal) heat energy remains constant in a system (without chemistry/fission/fusion) with no energy exchange with its surrounding. The 2nd Law of Thermodynamics states that transformation of kinetic energy into heat energy is irreversible. 

The Planck-Stefan-Boltzmann Law (PSB Law) expresses that the transfer of heat energy by electromagnetic radiation from a warmer body of temperature $T_w$ to a colder body of temperature $T_c<T_w$ scales with $T_w^4 -T_c^4$.
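A one-line numeric sketch of this scaling (my illustration; black bodies and round temperatures assumed):

```python
SIGMA = 5.67e-8               # Stefan-Boltzmann constant, W/m^2/K^4
Tw, Tc = 288.0, 255.0         # assumed warm and cold body temperatures, K
q = SIGMA * (Tw**4 - Tc**4)   # net radiative heat transfer, W/m^2
print(q)                      # ~150 W/m^2, one-way from warm to cold
```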

Computational Thermodynamics and Computational BlackBody Radiation present a new approach to uncovering the mysteries of both the 2nd Law of Thermodynamics and the PSB Law, based on a principle of finite precision computation/physics. In this setting heat energy takes the form of small-scale unordered kinetic motion.

In thermodynamics, kinetic energy thus takes the form of large-scale ordered motion and small-scale unordered motion, the latter being the result of turbulent dissipation into heat energy. The 2nd Law expresses that the process of turbulent dissipation is irreversible, because in finite precision unordered small-scale motion cannot be coordinated into large-scale ordered motion. Heat energy here appears as "internal energy", with limits set by the 2nd Law concerning transformation into "external energy" as large-scale kinetic motion.

In radiative heat transfer the temperature of a body determines a cut-off frequency scaling with temperature, with heat energy as atomic vibrations, where only frequencies below cut-off appear in synchronized ordered form capable of generating outgoing radiation. Here the finite precision limit thus scales with the inverse of the temperature, and the heat transfer from a warm to a cold body consists only of the frequencies above cut-off for the colder body and below cut-off for the warmer.

In both cases heat energy is the result of an impossibility arising from finite precision computation. In thermodynamics heat energy is unresolvable unordered small-scale kinetic motion. In radiative heat transfer a body absorbs heat energy as unordered kinetic motion for frequencies above cut-off.

In short, heat energy emerges as a rest product of finite precision computation/physics meeting unresolvable scales of motion. 

Even if heat energy is thus a form of rest product, this does not mean that it cannot be recycled into useful energy to some extent. In thermodynamics a gas expanding into a larger volume creates turbulence which is turned into heat energy, which can be used to do work when expanding into an even bigger volume. In radiative heat transfer a colder body, when heating up by absorbing heat in unordered form from a warmer body, increases its cut-off and so can radiate higher frequencies in synchronised ordered form.


Friday 22 July 2022

Computability of Turbulent vs Laminar Flow

Euler Computational Fluid Dynamics CFD shows that mean values such as lift and drag of the turbulent flow around all sorts of vehicles/bodies moving through air or water are computable at low computational cost, while point values of the fluid flow and body forces in space and time are uncomputable. Euler CFD shows that the mean values are stable quantities, insensitive to mesh resolution and small changes of geometry, while point values are very sensitive.

In Euler CFD this is captured by a dual linearised solution, which in the case of an underlying turbulent oscillating base flow can, through cancellation, be of moderate size as an expression of mean-value stability. This comes out as independence of lift and drag for flow with large Reynolds number beyond the drag crisis.

Laminar flow, on the other hand, may be less stable because of a base flow without oscillation and cancellation, and may thus require large computational cost to capture correctly. This comes out as a possible dependence of lift and drag on smaller Reynolds numbers before the drag crisis. An example of laminar flow is potential flow, which is unstable/unphysical and thus uncomputable as a solution of the Euler equations.

So, in a certain (mean value) sense, turbulent flow can be easier to compute/predict than laminar flow, which can be viewed as paradoxical, but in fact is not.


Thursday 21 July 2022

Similarity Between Fluid Turbulence and Radiative Energy Transfer

In the TNT Radio interview in the previous post I suggest a similarity between fluid turbulence and radiative heat transfer, connecting to a phenomenon of high-frequency "cut-off".

Fluid motion transfers large-scale ordered kinetic motion/energy over a cascade of successively smaller scales into a smallest scale, depending on viscosity, of unordered kinetic motion/energy as turbulence perceived as heat energy, with the smallest scale representing the cut-off. The total energy transfer from largest to smallest scale shows to depend mainly on the largest scales, thus with little dependence on the smallest scales set by viscosity, and so with little dependence on viscosity once small enough. This is Kolmogorov's law of finite rate of turbulent dissipation.

Computational BlackBody Radiation describes radiative heat transfer between two bodies/oscillators $B1$ and $B2$ of temperatures $T1$ and $T2$ with $T1>T2$ as an ordered resonance phenomenon with a frequency "cut-off" increasing with temperature, with energy balance for frequencies below the lower cut-off for $B2$, while frequencies above the cut-off for $B2$ and below the cut-off for $B1$ are absorbed by $B2$ in the form of unordered high-frequency oscillations perceived as heat energy by $B2$. The result is a transfer of energy from the warmer body to the colder body.

In both cases there is thus a split between ordered large-scale motion and unordered small-scale motion beyond cut-off perceived as heat energy. In both cases the small-scale unordered motion perceived as heat energy is the consequence of an impossibility of sustained ordered motion: in fluid motion an impossibility of transferring energy to smaller scales in an ordered fashion, and in radiative transfer an impossibility of balancing frequencies above cut-off for the colder body, as an effect of finite precision.

There is a connection to the 2nd law of thermodynamics in that the transformation of large-scale motion into heat energy as small-scale unordered motion is irreversible. In radiation it means that heat energy transfer is one-way, from warm to cold.

Interview on TNT Radio Live

Yesterday I was interviewed on TNT Radio Live on themes of mathematics and science, including radiative energy transfer connecting to CO2 alarmism and turbulent fluid flow, with remarks on the current energy crisis caused by the ban on fossil energy driving hyperinflation:

To get more substance on the CO2 alarmism story from TNT the same day, please listen to Tony Heller with his remarkable blog Real Climate Science, presenting lots of unique, illuminating historic facts:


Tuesday 19 July 2022

Control of Inflation by Interest Rate?

Hyperinflation is now hitting Western society after a long period of very low interest rates and money printing combined with low inflation. State banks are now raising interest rates to curb the inflation. But is this possible and advisable? Let us see what common sense can say:

1. Interest rates can be seen as a cost of capital balancing risk. With very low interest rates anybody can borrow big money for any investment, many of which will not deliver, and that is not good. On the other hand, with high interest rates even clever people will not dare to invest, and that is not good either. So a moderate interest rate, maybe around 2-4%, may be optimal. Instead we have had a period since 2008 with essentially zero interest rate, raising stock and housing markets to record levels.

2. We now see a hyperinflation caused by the increasing cost of energy, because of a massive turn to solar and wind energy away from fossil and nuclear energy. But this is not true inflation, simply increased costs for energy production.

3. But you cannot lower energy costs by raising interest rates, only increase them, and so the attempted policy of raising interest rates to curb inflation caused by an energy crisis cannot work. But it seems sound to give up the zero interest rate policy, which has caused a stock market and housing bubble, a form of hyperinflation, although it is not included in the consumer price index used to measure zero inflation.

4. On the other hand, turning to more efficient fossil and nuclear energy production would lower the consumer price index towards deflation, which would give people more for their money, and it would be insane to seek to balance this by letting interest rates go negative, as has been the current policy.

5. So the way out of the coming crisis is to return to fossil and nuclear energy production, and that is what we now see coming. This is not something controlled by a very low or very high interest rate; a moderate one would be fine. The obsession of the Swedish State Bank boss Stefan Ingves with seeking to control inflation up and down by moving the interest rate up and down has been based on ideas in direct opposition to common sense and has thus led to very negative consequences for people.

Here is the variation of US inflation and the interest rate over the period 1998-2022, showing that the interest rate does not control the inflation:

Monday 18 July 2022

Euler CFD vs Navier's Friction Boundary Condition

Navier's friction boundary condition in the Navier-Stokes equations for fluid flow takes the form

  •  $\nu\frac{\partial u}{\partial n}=\beta u$,      (1)
where $\nu > 0$ is a small viscosity, $u$ tangential velocity, $n$ the unit normal into the fluid and $\beta \ge 0$ a skin friction coefficient. For $\beta =0$ this is a zero-friction slip condition, while for large $\beta$ it is a no-slip condition $u=0$.

For given $\nu >0$ there is a break-even between balancing fluid and friction forces when $\beta\approx\sqrt{\nu}$: $\beta >\sqrt{\nu}$ causes substantial reduction of $u$, creating a (laminar) boundary layer of width $\approx\sqrt{\nu}$ with $\frac{\partial u}{\partial n}\approx \frac{1}{\sqrt{\nu}}$, while $\beta <\sqrt{\nu}$ causes little reduction of $u$, approaching a slip condition without boundary layer.

The drag crisis at $\nu \approx 10^{-6}$ with Reynolds number $Re\approx 10^6$ signifies a change from a laminar no-slip boundary layer to an effective slip condition of a turbulent boundary layer, as explained here. This corresponds to a friction coefficient $\beta\approx 10^{-3}$, indicating a switch to slip, in accordance with measured skin friction coefficients:



So, for $Re\approx 10^6$ an observed skin friction $\beta \approx 10^{-3}$ gives experimental support that slip can be used as an effective boundary condition, since $\beta \approx 10^{-3}$ causes little reduction of tangential velocity and gives a small contribution to total drag in bluff body flow.
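A small arithmetic sketch (my illustration) of the break-even estimate above: at the drag crisis, $\sqrt{\nu}$ and the measured skin friction coefficient are of the same size, which is the borderline where slip becomes an effective boundary condition.

```python
import math

nu = 1e-6                        # viscosity at drag crisis, Re ~ 10^6
beta_breakeven = math.sqrt(nu)   # break-even friction coefficient sqrt(nu)
beta_measured = 1e-3             # measured skin friction coefficient (from the text)

print(beta_breakeven, beta_measured)   # both ~ 1e-3
# beta at or below break-even: little reduction of tangential velocity,
# so slip (beta = 0) can serve as effective boundary condition.
```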

A connection with potential flow with slip and formally $\nu =0$ can be made by observing that potential flow satisfies on the boundary
  • $\frac{\partial u}{\partial n}=-u$,
and so for any $\nu >0$

  • $\nu\frac{\partial u}{\partial n}=-\nu u$,

thus formally with a very small negative friction coefficient very close to 0, that is, slip in (1). Navier-Stokes with slip thus formally tends to potential flow as the viscosity tends to zero, while in reality interior turbulent flow develops because of instability, a turbulent flow which is computable by Euler CFD with millions of mesh points, without the need to resolve thin boundary layers because of effective slip.

Euler CFD can be supplemented with positive skin friction, and independence of e.g. drag for small skin friction can be observed.

Note that (1) with $\beta =0$ forms a weak boundary layer to satisfy $\frac{\partial u}{\partial n}=0$, while residual stabilisation in Euler CFD does not create any boundary layer.


Thursday 14 July 2022

Nature of Turbulence vs Euler CFD

What makes Euler CFD into a parameter-free model or Theory of Everything ToE for slightly viscous incompressible flow is:
1. Finite rate of turbulent dissipation, effectively independent of the Reynolds number $Re$ after transition to turbulence.
2. Slip serving as effective boundary condition for $Re$ beyond the drag crisis at $Re\approx 500.000$.
Here 1 reflects the nature of turbulence as an energy cascade from large to small scales, developing from shear or opposing-flow instability in slightly viscous incompressible flow, down to a smallest scale $h \sim Re^{-0.75}$ where the energy is dissipated into heat in an amount determined by large-scale features. This is the essence of classical qualitative turbulence theory, stating independence of the rate of turbulent dissipation for $Re$ beyond transition to turbulence, which could be at $Re\approx 10^3$ with $h\approx 10^{-2}$. This opens to capturing mean values such as drag in bluff body flow, balancing turbulent dissipation, in computations with $10^6$ mesh points, to be compared with an unreachable $10^{14}$ to resolve the smallest scales if $Re =10^6$, as is common in the aerodynamics of vehicles (the largest DNS reaches $10^{11}$).
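A small sketch (my illustration) of the mesh-count arithmetic above: the smallest scale $h\sim Re^{-0.75}$ gives $\sim h^{-3}$ mesh points to resolve it in 3d.

```python
# Mesh points ~ (1/h)^3 with smallest scale h ~ Re^(-0.75).
for Re in (1e3, 1e6):
    h = Re ** -0.75
    points = h ** -3
    print(f"Re={Re:g}: h ~ {h:.1e}, mesh points to resolve ~ {points:.0e}")
# Re=1e6 gives ~3e13, the unreachable 10^14 of the text, while mean values
# are captured with ~1e6 points when no boundary layers need resolution.
```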

Turbulence is thus uncomputable down to its smallest physical scales for $Re >10^6$ in any foreseeable future, but mean-value quantities such as drag and lift are computable today with desktop power, because of the nature of turbulent flow. Turbulent flow is thus in its details uncomputable, yet readily computable as concerns mean values for a large variety of problems of great practical interest. This is very remarkable as an answer to a basic open problem in fluid mechanics.

Further, 2 reflects that it is not necessary to resolve thin no-slip boundary layers scaling with $Re^{-0.5}$, which would ask for $10^{13}$ mesh points, since slip does not generate any boundary layers, and thus the above millions of mesh points suffice. Specifically, main-stream turbulence is not mainly generated from no-slip boundary layers, contrary to the core element of Prandtl's boundary layer theory, which dominates modern fluid mechanics and makes CFD into an impossibility.

Altogether, 1 and 2 express the basic nature of turbulent flow, which makes it possible for Euler CFD, as best possible solution to the Euler equations, to serve as a quantitative ToE for slightly viscous flow meeting the NASA 2030 CFD Vision. Listen to the breakthroughs Parviz Moin believes (time 18.21) are needed to reach the Vision, which have now been reached by Euler CFD.

Note that Euler CFD can be seen as a form of Large Eddy Simulation LES with a turbulence model which is automatically generated from a principle of best possible solution to the Euler equations, and with slip as wall model.

Note also that slip can be seen as the most basic and simple wall model, expressing zero friction between fluid and wall as an expression of observed very small skin friction, to be compared with complex wall models supposed to capture thin turbulent boundary layers arising from no-slip, which ask for computational resolution beyond reach.

     

Wednesday 13 July 2022

Euler CFD vs Cornerstone of Turbulence Theory

The cornerstone of classical turbulence theory is that the rate of turbulent dissipation $\epsilon$ is independent of the Reynolds number $Re =\frac{UL}{\nu}$ once big enough, where $U$ is a typical large-scale flow speed, $L$ a typical large spatial scale and $\nu$ the viscosity. The rationale is that turbulent dissipation occurs at the smallest spatial scale, independently of the absolute size of that scale. But the size of the turbulent dissipation varies with the large-scale features of the flow.

More precisely, a smallest Kolmogorov spatial scale $dx\sim \nu^{0.75}$ with velocity fluctuation $du\sim \nu^{0.25}$ results from $\frac{du\,dx}{\nu}\sim 1$ and $\nu(\frac{du}{dx})^2\sim 1$, with thus $du\sim dx^{\frac{1}{3}}$ expressing Hölder continuity of turbulent flow with exponent $\frac{1}{3}$, as in the Onsager conjecture.
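Filling in the small algebra (a check of the claim in the notation above): $\nu(\frac{du}{dx})^2\sim 1$ gives $du\sim\frac{dx}{\sqrt{\nu}}$, and inserting this into $\frac{du\,dx}{\nu}\sim 1$ gives

  • $\frac{dx^2}{\nu^{1.5}}\sim 1$, thus $dx\sim \nu^{0.75}$ and $du\sim \nu^{0.25}$,

consistent with $du\sim dx^{\frac{1}{3}}$ since $\nu^{0.25}=(\nu^{0.75})^{\frac{1}{3}}$.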

Euler CFD fits into this picture with turbulent dissipation from residual stabilisation of the form $C\frac{h}{\vert u\vert }R(u)^2$, with $R(u)$ the Euler equation residual, $h$ the mesh size and $C$ a stabilisation constant. Euler CFD can be seen as a best possible solution of the Euler equations in the balanced sense that $R(u)\sim h^{0.5}$ is small in a weak sense and also $hR(u)\sim h^{0.5}$ is small in a strong sense, with the same scaling of $du$ and $dx$ as above with $\nu = h$. The resort to best possible solution reflects that it is impossible to find physical solutions with residual $R(u)$ small in a strong sense (because all exact solutions are unstable and thus non-physical).

Euler CFD shows to deliver mean-value quantities such as drag and lift in bluff body flow which are independent of the mesh size (once small enough) and of the stabilisation parameter. This gives evidence of turbulent dissipation being independent of the effective Reynolds number, that is, evidence of the cornerstone of turbulence theory. But Euler CFD also delivers the size of the turbulent dissipation from case to case, depending on large-scale flow features. Euler CFD can thus be seen as a quantitative parameter-free turbulence model, while classical turbulence theory is only qualitative.

Recall from previous posts that the observed independence of drag of Reynolds number, once large enough, directly connects to independence of turbulent dissipation.

An overview of the support of the cornerstone is given here. For DNS support see this.

Recall that the Taylor turbulent length scale $\sim Re^{-0.5}$ is the scale with significant turbulent dissipation, while the Kolmogorov smallest scale $\sim Re^{-0.75}$, which with 3 smallest-scale modes per Taylor mode gives $Re^{0.25}\sim 3$, that is $Re\sim 100$ (in accordance with observation), as the requirement for turbulence, with Taylor microscale $\sim 10^{-1}$ resolved by mesh size $\sim 10^{-2}$. This indicates that Euler CFD with millions of mesh points can deliver drag and lift which do not change under mesh refinement (as observed). It connects to DNS for isotropic turbulence in a cube starting with a $32^3$ mesh, with a $128^3$ mesh reached by Kerr in 1985 and $1024^3$ by Gotoh et al in 2000.

Recall that residual stabilisation focusses dissipation on the smallest scales and thus fits into the Kolmogorov picture of constancy of turbulent dissipation for large enough Reynolds numbers. The scale invariance of turbulent dissipation reflects that the Euler equations are scale invariant.

Monday 11 July 2022

Why Euler CFD is a Zero-Cost Parameter-Free ToE

The fact that for slightly viscous incompressible bluff body flow with Reynolds number $Re>500.000$, Euler CFD with slip boundary condition (without boundary layers) serves as a parameter-free, essentially zero-cost computational model without dependence on the Reynolds number, as a Theory of Everything ToE in the spirit of Einstein, depends on two key circumstances:

1. Finite rate of turbulent dissipation, effectively independent of $Re$ after transition to turbulence (e.g. $Re >200$ in isotropic turbulence) (reference).
2. Slip serving as effective boundary condition for $Re > 500.000$ (NACA0012, previous post).

Here 1 reflects Kolmogorov's conjecture and means that a mesh size of around 1/200 of the gross dimension is sufficient to capture turbulence. Further, 2 reflects that Euler CFD with slip can capture drag and lift with little dependence on $Re$ beyond the drag crisis around $Re=500.000$.

Altogether, Euler CFD can capture drag and lift beyond the drag crisis with a mesh size of around 1/200 of the gross dimension, thus at essentially zero computational cost (because no thin boundary layers have to be resolved).

The fact that the drag and lift coefficients do not include dependence on $Re$ (with $Re =\frac{UL}{\nu}$, $U$ a typical flow speed, $L$ a typical length scale and $\nu$ a typical viscosity), and yet can serve as measures of drag and lift for $Re$ beyond the drag crisis, gives observational evidence that drag and lift indeed have a very weak dependence on $Re$ beyond the drag crisis, as shown in the previous post (see reference showing drag independence for $Re>10000$ for a collection of blunt bodies). Also recall that drag and the rate of turbulent dissipation balance, so the observed independence of drag beyond the drag crisis supports 1.

Monday 4 July 2022

Euler CFD as Parameter Free CFD as ToE

The Euler equations in velocity-pressure $(u,p)$ and $(x,t)$-coordinates are invariant under a rescaling of velocity $u$ into $\bar u =\frac{u}{U}$, with $U$ a reference speed such as the free-stream speed in bluff body flow, with corresponding rescaling of pressure $p$ into $\bar p=\frac{p}{U^2}$ and time $t$ into $\bar t =Ut$, without rescaling of space, thus with $\bar x = x$. The scaling of pressure with $U^2$ conforms with Bernoulli's Law and the scaling of drag force $\sim C_DU^2$ with a drag coefficient $C_D$. The propulsion power to balance drag thus scales with $U^3$. The Euler equations are thus formally invariant under change of velocity scale, as an expression of formally zero viscosity or infinite Reynolds number.
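To spell out the invariance (a short check in the notation above): with $\bar u =\frac{u}{U}$, $\bar p =\frac{p}{U^2}$, $\bar t =Ut$ and $\bar x =x$, the terms of the momentum equation

  • $\frac{\partial u}{\partial t} + u\cdot\nabla u + \nabla p = 0$

transform as

  • $\frac{\partial u}{\partial t} = U^2\frac{\partial \bar u}{\partial \bar t}$, $u\cdot\nabla u = U^2\bar u\cdot\nabla \bar u$, $\nabla p = U^2\nabla \bar p$,

so that division by $U^2$ returns the same equation in barred variables (with $\nabla\cdot\bar u =0$ unchanged), confirming invariance under change of velocity scale.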

The basic energy estimate of Euler CFD expresses a balance between the rate of loss of kinetic energy and the computational residual-based turbulent dissipation, of the basic simplified form $C\frac{h}{\vert u\vert}\vert u\cdot\nabla u\vert^2$, both scaling with $u^3$. The propulsion power is balanced by the rate of loss of kinetic energy and so by turbulent dissipation. The drag coefficient can thus alternatively be computed from total turbulent dissipation.


The (remarkable) fact that the drag coefficient $C_D$ does not include dependence on the Reynolds number $Re$ expresses the observation that drag depends little on $Re$ beyond the drag crisis, which connects to Kolmogorov's conjecture of a finite limit of turbulent dissipation, as well as to mesh and stabilisation independence in computation. The functionality of the drag coefficient supports Euler's Dream that Euler CFD offers a Theory of Everything ToE for slightly viscous incompressible flow, with independence of $Re$ beyond the drag crisis. Since total drag shows little dependence on $Re$, while in principle it has a contribution from skin friction with a skin friction coefficient (scaling with $U^2$) decreasing with $Re$, the skin friction contribution appears to be small, in contradiction to a common conception of a major contribution: if major drag indeed came from skin friction, then drag would decrease with increasing $Re$, but it does not beyond the drag crisis.


Notice that the Navier-Stokes equations with constant viscosity $\nu$, with turbulent dissipation intensity $\nu\vert\nabla u\vert^2$ scaling with $u^2$, are not velocity scale invariant and thus carry a dependence on $Re$, possibly making computational solution impossible for large $Re$.


Recall that the definition of $Re =\frac{UL}{\nu}$, with $U$ a reference speed, $L$ a reference length and $\nu$ a viscosity, is not well determined, and so independence of mean-value quantities such as drag, lift and pitch moment from the precise value of $Re$ is a necessary requirement to make CFD predictable.


Here is experimental evidence that $C_D$ for NACA0012 at zero angle of attack does not depend on $Re$ beyond the drag crisis:



Notice the reduction of $C_D$ by a factor 2 from $Re \approx 100.000$ to $Re > 500.000$ as an expression of the drag crisis.