We have the impact of algorithmic progress, with estimates suggesting that algorithmic improvements cut the compute required to reach a given level of performance in half every 16 months. However, this was measured on ImageNet, where researchers are directly optimizing for reduced computation costs.
It seems less likely that researchers are doing as good a job of reducing computation costs for "training a transformative model", so we can increase the halving time to 2-3 years and cap the total reduction at somewhere between 1 and 5 orders of magnitude.
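To make the arithmetic concrete, here is a minimal sketch (not from the report itself) of how a halving time combines with a cap on the total reduction; the function name and the particular parameter values are illustrative assumptions:

```python
import math

def algorithmic_progress_factor(years_elapsed: float,
                                halving_time_years: float = 2.5,
                                max_oom_reduction: float = 3.0) -> float:
    """Multiplier on compute requirements due to algorithmic progress.

    Compute requirements halve every `halving_time_years`, but the
    cumulative reduction is capped at `max_oom_reduction` orders of
    magnitude. Defaults are illustrative, not the report's values.
    """
    # Uncapped reduction, expressed in orders of magnitude:
    # (number of halvings) * log10(2).
    oom_reduction = (years_elapsed / halving_time_years) * math.log10(2)
    # Apply the cap on total reduction.
    oom_reduction = min(oom_reduction, max_oom_reduction)
    return 10 ** (-oom_reduction)

# After 20 years with a 2.5-year halving time, compute requirements
# shrink by 2^8 = 256x, i.e. ~2.4 orders of magnitude (under the
# 3-OOM cap used here).
print(algorithmic_progress_factor(20))  # ~0.0039
```

With a 2-3 year halving time, the cap only binds over multi-decade horizons: at a 2.5-year halving time, even a 1-OOM cap is reached only after about 8 years, and a 5-OOM cap after roughly 40 years.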