Who would disagree that the run-time of mergesort is O(n log n) and that it's
asymptotically optimal? Not many programmers, I reckon, except perhaps to
question whether the claim assumes a model of computation that's not
sufficiently close to a real computer, for example a quantum computer or one
that performs arbitrary operations in parallel (possibly involving sticks of
spaghetti).
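
For concreteness, here's a minimal Python sketch of the textbook top-down
mergesort whose bound is at issue; the merge step does O(n) comparisons per
level, giving the recurrence T(n) = 2T(n/2) + O(n) = O(n log n):

```python
def mergesort(xs):
    # Base case: lists of length <= 1 are already sorted.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = mergesort(xs[:mid])
    right = mergesort(xs[mid:])
    # Merge the two sorted halves using O(n) comparisons.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(mergesort([5, 2, 4, 7, 1, 3, 2, 6]))  # prints [1, 2, 2, 3, 4, 5, 6, 7]
```

Counting only comparisons, this is the analysis everyone has in mind; the
question below is what else a model has to account for.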

However, if you try to formalize what it means for a sort to run in
O(n log n) and for it to be optimal, it's surprisingly difficult to find a
suitable computational model, that is, an abstraction of a computer which
elides all but the important details: the operations the computer can
perform, and how its memory works.

In this post, I'll look at some of the most common computational models used
in both practice and theory, and find that they're all flawed in one way or
another: in every one of them, either mergesort doesn't run in O(n log n) or
there are asymptotically faster sorts.