- A(U) = F
where F is a given forcing (input), A is a differential operator and U the corresponding system state (output).
In physics the input generates the output by some physical mechanism, which can be viewed as a form of analog computation. Mathematically this is represented by digital computation, with the output U computed from the input F by computational solution of the differential equation.
The stability of the computational process, in analog physical or digital mathematical form, reflects how perturbations of the input are transformed by the solution process into perturbations of the output. If small perturbations of the input give only small perturbations of the output, the process is stable; if small perturbations of the input may result in large perturbations of the output, the problem is unstable.
As a basic example let us consider Newton's 2nd Law
- M dV/dt = F
where M is the mass of a particle moving with velocity V(t) under the action of the force F(t)
as functions of time t. With F(t) input and V(t) output this is a stable problem, since the solution process corresponds to integrating F(t), which is a stable summation process.
However, if we turn the data around and view V(t) as input and F = M dV/dt as output, then the process is unstable, since differentiation involves division by the small quantity dt: small perturbations in V(t) are magnified by the large factor 1/dt in the computation of dV/dt.
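As a minimal numerical sketch of this contrast (with hypothetical values of M, dt and the perturbation size, not taken from the text), consider integrating a slightly perturbed force to obtain the velocity, versus differentiating a slightly perturbed velocity to obtain the force:

```python
import numpy as np

# Hypothetical values for illustration: M = 1, time step dt = 1e-3, noise level 1e-3.
M, dt = 1.0, 1e-3
t = np.arange(0.0, 1.0, dt)
noise = 1e-3 * np.random.randn(t.size)          # small input perturbation

# Stable direction: F(t) as input, V(t) as output (summation/integration).
F = np.sin(2 * np.pi * t)
V = np.cumsum(F / M) * dt
V_pert = np.cumsum((F + noise) / M) * dt
print("perturbation of V:", np.max(np.abs(V_pert - V)))   # remains small

# Unstable direction: V(t) as input, F = M dV/dt as output (differentiation).
V_in = np.sin(2 * np.pi * t)
F_out = M * np.diff(V_in) / dt
F_out_pert = M * np.diff(V_in + noise) / dt
print("perturbation of F:", np.max(np.abs(F_out_pert - F_out)))   # magnified by 1/dt
```

The integrated output changes by no more than the accumulated noise, of the order of the input perturbation or smaller, while the differentiated output changes by roughly the input perturbation divided by dt.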
We now connect to the old question of cause-effect in a relation between two entities F and
U connected by an equation A(U) = F. The question is: which is the cause and which is the effect, F or U?
If the problem A(U) = F with F as input and U as output is a stable problem, then it is natural to view F as the cause and U as the effect: the effect will then be essentially the same under small perturbations of the cause. On the other hand, if the problem is unstable, then it is not natural to view F as the cause and U as the effect, since small variations in F may give vastly different effects.
This argument has led us to identify cause-effect from a stability point of view, and thereby to a new analysis of the physics and mathematics of the 2nd law of thermodynamics and blackbody radiation, as developed in
- Computational Turbulent Incompressible Flow
- Computational Thermodynamics
- Computational Blackbody Radiation.
In the setting of Newton's 2nd law M dV/dt = F, we may thus view the force F as the cause and the velocity V as the effect, but not the other way around.
In climate science a basic problem concerns the relation between global temperature and the level of atmospheric CO2: with temperature as input and CO2 as output the relation may very well be stable, while with CO2 as input and temperature as output it may very well be unstable.
A diffusion process like heat conduction is stable with heat source and initial temperature as input and temperature at a later time as output. On the other hand, "backdiffusion", with temperature at a later time as input and temperature at an earlier time as output, is unstable, lacks a cause-effect relation and is thus unphysical. Softening a sharp image in Photoshop into a blurred image by diffusion is a stable process, while sharpening by "backdiffusion" is unstable.
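A minimal sketch of this contrast, assuming for illustration an explicit finite difference scheme for the heat equation on a periodic grid (grid size, time step and perturbation level are hypothetical choices):

```python
import numpy as np

# Explicit scheme for u_t = u_xx on a periodic grid with dx = 1.
# Forward diffusion (blurring) damps perturbations; running the same
# scheme backwards in time (sharpening by "backdiffusion") amplifies them.
N, dt, steps = 100, 0.4, 100        # dt/dx^2 = 0.4 < 0.5: stable forward
u0 = np.zeros(N)
u0[N // 2] = 1.0                    # sharp initial profile
u0 += 1e-6 * np.random.randn(N)     # tiny perturbation

def step(u, sign):
    # one explicit step of u_t = sign * u_xx (sign = +1 forward, -1 backward)
    return u + sign * dt * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))

forward, backward = u0.copy(), u0.copy()
for _ in range(steps):
    forward = step(forward, +1.0)   # stays bounded, profile is smoothed
    backward = step(backward, -1.0) # high-frequency perturbations blow up

print("forward (blurring) max:", np.max(np.abs(forward)))
print("backward (sharpening) max:", np.max(np.abs(backward)))
```

The forward run ends with a smooth, bounded profile, while the backward run is dominated by the amplified perturbation, which is the instability of backdiffusion in computational form.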
The same holds for "backradiation" proposed as a mechanism for global warming by "greenhouse" gases, as explained in Slaying the Sky Dragon: Death of the Greenhouse Gas Theory.
dV and dt are not unrelated. Your argument is a purely mathematical argument with no relation to reality. You cannot have a finite change of V while dt goes to zero.
From what I understand you fail to see the implications of your models for reality and back, and focus only on computational methods.
Let me try to explain my point.
A famous paradox is that of Achilles and the tortoise. One way of resolving the paradox could be the analogue of your finite computational precision: at some point the difference in position between Achilles and the tortoise is smaller than the precision and they are in the same position. Or you could say that space is quantised, coming to the same conclusion.
But you could also observe that the sum of all the increments is actually finite, and you can give a physical explanation of that: the time for Achilles to cover the decreasingly long space intervals decreases quickly enough.
The last explanation is the best one, because it explains reality, it is simple and does not demand leaps of faith. Moreover it explains a plethora of other phenomena. There is no need to introduce quantisation of space on the basis of the Achilles-tortoise experiment.
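To make the finite sum explicit with illustrative numbers (not part of the original paradox): if Achilles runs at speed 1 and the tortoise at speed 1/2 with a head start of 1, the successive time increments form a geometric series
- 1 + 1/2 + 1/4 + 1/8 + ... = 2
so the infinitely many increments add up to the finite catch-up time t = 2.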
When you try to apply the same procedure to blackbody radiation, no attempt to preserve continuity succeeds in explaining reality (in fact the result is wrong even in principle, the ultraviolet catastrophe being obviously wrong without the need of an experimental check).
Planck's model implies a change of our view of the world, which is hard to accept, but it accounts for experiments and is relatively simple mathematically.
Your model with finite computation does the same thing. You claim it makes no ad hoc assumptions, but in fact I'd say that the opposite is true. Your precision limit is extremely ad hoc: you introduce h as a completely arbitrary parameter and give no motivation whatsoever for its existence.
Planck's model takes the consequences of introducing quantisation and guesses what that would imply in other cases. It turns out that quantisation of em-waves can explain a number of other phenomena. It implies a change of our view of em-radiation but, once we have made that change, we can explain a number of phenomena with a consistent model.
You say: no no no, light is a wave, everybody knows that. To explain blackbody radiation you just have to make the calculation in a way that hides quantisation under another name, let's call it finite precision computation. And everybody is happy! The meaning of the precision parameter? None! Just accept that.
For that reason your model is inferior to Planck's.