What is Biosimulation?
Biosimulation as a discipline, or rather a modelling philosophy, is 100 years old. Its start can be dated to the predator-prey model published independently by Alfred J. Lotka (1925) and Vito Volterra (1926), now called the Lotka-Volterra model. Later, Andrew Hodgkin and Alan Huxley published their famous model of the nerve impulse (1952), based on numerous and accurate experimental data. In the years after Edward N. Lorenz's discovery of chaos (1961), a paradigm shift took place during the 1970s and 1980s, with a focus on nonlinear dynamics in physics, chemistry, and biology, spreading to many other disciplines. The result was a dramatic change in the precision and usefulness of models of biological systems. At the same time, computing moved from big institutional machines with punch cards and very restricted capacity to more and more personal computers, so suddenly every scientist could perform even very complex calculations. The result was a scientific "Klondike" period, where all kinds of nonlinearity were in play, no matter how weird they looked. Added to this, the individual scientist usually worked alone or in isolated groups, without knowing what other scientists had found. Slowly, during the late 1980s and early 1990s, meetings and journals began to coordinate the different approaches. Out of this Babel-like situation grew several types of biological modelling. Some were still linear or only slightly nonlinear, representing the old methods; but many new types adopted the nonlinear view. Three types stand out:

Top-down modelling. This is a simple model type, only a little different from a black-box approach. It is tailored to the actual application, it does not need all the details, and the modelling is fast and straightforward. Top-down modelling is used especially in industry, e.g. to fit experimental or trial data. The goal is not so much to find the mechanisms behind the data, which the black-box approach makes almost impossible, but to measure the result of external stimuli, relate them to the experimental setup, and use them in further development. A disadvantage is that top-down models are difficult to validate, and important details may be missing. Many models may fit the data equally well, especially if the uncertainty of the data is large. Moreover, new applications of the model require new tailoring, a point that is often forgotten. A simple example of a top-down model is the Lotka-Volterra model, which contains only two equations and four constants.
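The two Lotka-Volterra equations and four constants mentioned above can be sketched as follows. The parameter values and initial populations here are purely illustrative, not taken from any particular dataset.

```python
# Minimal sketch of the Lotka-Volterra predator-prey model:
#   dx/dt =  a*x - b*x*y   (prey x reproduces, is eaten by predators)
#   dy/dt = -c*y + d*x*y   (predator y starves, grows by eating prey)
# The four constants a, b, c, d and the starting populations are
# illustrative choices, not fitted to data.

def simulate_lotka_volterra(x0, y0, a=1.0, b=0.1, c=1.5, d=0.075,
                            dt=0.001, steps=20000):
    """Integrate the two equations with explicit Euler steps and
    return the trajectory as a list of (x, y) pairs."""
    x, y = x0, y0
    trajectory = [(x, y)]
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (-c * y + d * x * y) * dt
        x, y = x + dx, y + dy
        trajectory.append((x, y))
    return trajectory

traj = simulate_lotka_volterra(10.0, 5.0)
# Both populations cycle around the equilibrium (c/d, a/b) = (20, 10).
```

Despite the black-box simplicity, the model already produces the characteristic sustained predator-prey oscillations.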

Bottom-up modelling. In contrast to the top-down model, the bottom-up model is very detailed. The aim is to take "all" details into account, so the model becomes as complete as possible. The advantage is that the model is more generally applicable and very useful for research, because it opens up a detailed study of the individual elements. But nonlinear bottom-up models in particular require large IT resources, and the computer programming is complex and extensive. This may result in instabilities and errors in the calculations. On the other hand, extending the model is relatively simple when new details are discovered. The problem here is that it is difficult to predict the effect of the new details, let alone the possible effect of unknown, but significant, underlying mechanisms. Academic models are often of the bottom-up type, often seen in PhD theses and the like. Due to their complexity and many parameters, they are of limited value when fitting to data. On the other hand, the Hodgkin-Huxley model can be regarded as a well-constructed bottom-up model, where each function and parameter is based upon a large number of experimental data. In this way much of the uncertainty is removed.
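To give a feel for the bottom-up level of detail, the Hodgkin-Huxley model can be sketched as below, using the standard textbook parameter set (voltages in mV, time in ms). Each of the six rate functions and every conductance was fitted by Hodgkin and Huxley to voltage-clamp data; the injected current and simulation length here are illustrative choices.

```python
import math

# Sketch of the classic Hodgkin-Huxley squid-axon model with the
# standard published parameters. Units: mV, ms, mS/cm^2, uA/cm^2.
C_M = 1.0                               # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3       # maximal conductances
E_NA, E_K, E_L = 50.0, -77.0, -54.387   # reversal potentials

# Experimentally fitted opening/closing rates for the three gates.
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate_hh(i_inject=10.0, dt=0.01, steps=5000):
    """Integrate the four HH equations with explicit Euler steps and
    return the membrane-potential trace."""
    v = -65.0
    # start each gate at its steady state for the resting potential
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    trace = [v]
    for _ in range(steps):
        i_na = G_NA * m**3 * h * (v - E_NA)  # fast sodium current
        i_k = G_K * n**4 * (v - E_K)         # delayed-rectifier potassium
        i_l = G_L * (v - E_L)                # leak current
        v += dt * (i_inject - i_na - i_k - i_l) / C_M
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        trace.append(v)
    return trace

trace = simulate_hh()  # repetitive firing under constant current
```

Even this single-compartment sketch already has four coupled nonlinear equations and over a dozen experimentally grounded constants, which is exactly what makes the model both trustworthy and computationally demanding at scale.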

Biosimulation or “Both ends against the middle”.
Biosimulation utilizes the best of the two methods. Typically, the modelling starts with a complex bottom-up model, including all known mechanisms. Thereafter, the number and complexity of the elements are reduced as much as possible, while preserving the nonlinearities. This makes it possible to study the effect of the nonlinearities in a more detailed setup. Since it is mechanism-based, biosimulation is able to validate the mechanisms used and reveal new, unknown mechanisms. This makes it useful for both academia and industry. While the models may be complex, they require limited IT resources and may run rather fast. The disadvantage is that in balancing the model between the two approaches, there is a razor-sharp edge between including too much and too little. Constructing a proper biosimulation model therefore requires extensive biochemical, physiological and/or clinical expertise. Even using pre-fabricated models may be problematic.
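A classic illustration of this reduction philosophy (not mentioned in the text, but well known) is the FitzHugh-Nagumo model, which collapses the four Hodgkin-Huxley equations to two while keeping the essential cubic nonlinearity, so the reduced model still fires and oscillates. The parameters below are the commonly quoted values; the injected current is an illustrative choice.

```python
# Sketch of the FitzHugh-Nagumo model, a two-variable reduction of
# Hodgkin-Huxley that preserves the nonlinearity:
#   dv/dt = v - v^3/3 - w + I   (fast "voltage-like" variable)
#   dw/dt = eps*(v + a - b*w)   (slow recovery variable)

def simulate_fhn(i_inject=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=30000):
    """Integrate the two equations with explicit Euler steps and
    return the trace of the fast variable v."""
    v, w = -1.2, -0.6   # near the resting state for zero input
    trace = [v]
    for _ in range(steps):
        dv = (v - v**3 / 3.0 - w + i_inject) * dt  # cubic nonlinearity kept
        dw = eps * (v + a - b * w) * dt            # slow linear recovery
        v, w = v + dv, w + dw
        trace.append(v)
    return trace

trace = simulate_fhn()  # sustained relaxation oscillations
```

Two equations and four constants now reproduce the qualitative excitable behaviour of the full four-equation model, at a fraction of the computational cost: the core of the "both ends against the middle" trade-off.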
Biosimulation now covers many facets. Apart from being a theoretical method or philosophy, it is now also associated with specific IT programs etc., but in Biosimulation Letters it is always the above method that is relevant, and the trade-off between top-down and bottom-up in the model is central to the modelling. In this way, it should be possible to bring the two ends together.

*Author: Morten Colding-Jørgensen (March 2025)
*Candle cartoons by Paul M. Diderichsen