- there is a relationship between the volatility-stability axis and the slow-fast axis
- why does business favor stability?
- because stability affords the option to be deliberate
- options are so interesting because time is a central parameter that determines value
- there are formal frameworks for valuing options, in which time-to-expiration is an explicit parameter
- an option's time value approaches zero as it approaches exercise (expiration)
- so long as there is lots of time-to-expiration, there is still a possibility for the option to be valuable
- but as we approach the time horizon - and as option-time grows scarce - it becomes increasingly certain what the option will be worth at exercise
- certainty is inversely related to the value of the option
- the more certain the outcome, the less valuable the option
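The time-value intuition above can be sketched with the standard Black-Scholes formula for a European call. The parameters below are illustrative, not from the notes: as time-to-expiration shrinks, the option's value collapses toward its intrinsic value, which for an at-the-money option is zero.

```python
# Sketch: Black-Scholes value of a European call, showing option value
# decaying toward intrinsic value as time-to-expiration shrinks.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_value(spot: float, strike: float, vol: float, rate: float, t: float) -> float:
    """Black-Scholes price of a European call with t years to expiry."""
    if t <= 0:
        # at expiry, only intrinsic value remains
        return max(spot - strike, 0.0)
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

# An at-the-money call: all of its value is time value, and it
# shrinks monotonically as expiration approaches.
for t in (1.0, 0.5, 0.25, 0.05, 0.0):
    print(f"t={t:.2f}  value={call_value(100, 100, 0.2, 0.0, t):.4f}")
```

With lots of time remaining the option carries real value despite being at-the-money; at expiry the uncertainty is gone and so is the value.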
- acting fast is altogether different from being deliberate
- fast reactions match the time-scale of high volatility
- when things are stable, there is less value in responding quickly
- volatility is sometimes associated with risk
- businesses may be "uncomfortable" with volatility, but they are not immune to it
- to the contrary, they are subject to volatility
- many assets lose value due to volatility
- some assets are considered risky ("risk-on"); others are considered safe havens ("risk-off")
- risk is operationalized as a formula
- but it is more fundamental than that
- likelihood of outcomes, combined with magnitude of those outcomes, produces a simple model of risk
- time is rarely a component of risk calculation
- in part because risk models are just approximations of risk
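The likelihood-times-magnitude model above fits in a one-liner. The scenario numbers here are hypothetical placeholders; note that, as the notes observe, time appears nowhere in the calculation.

```python
# Sketch of the "likelihood x magnitude" risk model: risk as the
# probability-weighted magnitude of outcomes (an expected loss).

def expected_risk(outcomes):
    """outcomes: iterable of (probability, loss_magnitude) pairs."""
    return sum(p * magnitude for p, magnitude in outcomes)

# Hypothetical scenarios for a single decision:
scenarios = [
    (0.70, 0.0),      # likely: nothing bad happens
    (0.25, 10_000),   # moderate loss
    (0.05, 200_000),  # rare but severe loss
]
print(expected_risk(scenarios))  # roughly 12500
```

This is why such models are only approximations: two decisions with identical expected loss can differ enormously in when the loss can land.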
- one of the strongest constraints on business decisions is time
- this is also a constraint on language models
- when processing time is constrained, language models are pressured the same way people are
- the time-scale at which language model output is interpreted is the human time-scale, which is grounded in the real-world passage of time as we humans perceive it
- unlike many other simulations, in which time can be compressed to run at the speed of computation, language model results are evaluated in the context of the real world
- and when an LLM has less time to think, its answers are worse
- so language models are subject to time constraints similar to the way humans are