After reading this post by Julia Evans (“Computers are Fast”), which considers CPU speeds somewhat more deeply than its title implies, a fragment from a recent conversation with one of my computer science professors came to mind.
Simply, and somewhat paraphrased: “almost all processes are rapidly becoming I/O-bound.”
Not so long ago, in OS Design class, a homework assignment and several exam questions asked us to carefully identify whether a process would be I/O-bound or CPU-bound based on its behavior and properties. Would “I/O-bound” have consistently been the correct answer?
Not according to the professor of that class, at least, since I remember a few answers to the contrary. And I’d be willing to wager that there remain enough computationally intensive tasks that OSs must take CPU-bound costs into consideration when scheduling processes, at least in some areas of work.
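If you’d rather make that call empirically than by inspection, one rough heuristic is to compare a task’s CPU time against its wall-clock time: a CPU-bound task burns CPU for nearly the whole interval, while an I/O-bound one mostly waits. Here’s a minimal Python sketch (the 0.9 threshold and the toy tasks are my own illustration, not anything from the class):

```python
import time

def classify(task):
    """Run task and guess whether it was CPU-bound or I/O-bound."""
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    task()
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    # CPU time tracks wall time during computation; it stalls during waits.
    return "CPU-bound" if cpu / wall > 0.9 else "I/O-bound"

def cpu_heavy():
    sum(i * i for i in range(10_000_000))  # pure arithmetic

def io_heavy():
    time.sleep(0.5)  # stands in for waiting on disk or network

print(classify(cpu_heavy))  # CPU-bound
print(classify(io_heavy))   # I/O-bound
```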
But might gains in speed, parallelism, and optimization eventually sway the balance?
My guess is yes, but only for personal-computing tasks. For example, I’ve never personally run a highly complex physics particle simulator on a time-slotted supercomputer, but I’d bet that workload isn’t too memory-heavy, especially compared to the insane number of calculations required (interesting note about reducing calculation cost).
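To make that concrete, here’s a toy version of the kind of kernel such a simulator runs (nothing like a real code; unit masses and G = 1 are my simplifying assumptions): n bodies occupy O(n) memory, yet every timestep performs on the order of n² pair interactions, so the arithmetic dwarfs the data.

```python
import random

n = 500
positions = [(random.random(), random.random()) for _ in range(n)]  # O(n) memory

def accelerations(pos):
    """One timestep of pairwise gravitational accelerations: O(n^2) work."""
    acc = []
    for i, (xi, yi) in enumerate(pos):
        ax = ay = 0.0
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r2 = dx * dx + dy * dy + 1e-9  # softening term avoids division by zero
            inv_r3 = r2 ** -1.5
            ax += dx * inv_r3
            ay += dy * inv_r3
        acc.append((ax, ay))
    return acc  # n(n-1) interactions computed from just n stored positions

accelerations(positions)
```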
And imagine how that plays out for a computationally harder problem, like prime factorization (well, I guess it isn’t officially known to require superpolynomial time at this point). The time required to compute can be enormous, but the space complexity doesn’t need to be bad at all (they’re just integers, after all).
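Even the most naive method makes that asymmetry vivid. Trial division keeps only a handful of integers in memory, yet its worst-case running time grows like √n, which is exponential in the number of digits. A sketch of the time/space gap, not a serious factoring algorithm:

```python
def factor(n):
    """Return the prime factorization of n by trial division."""
    factors = []
    d = 2
    while d * d <= n:        # up to sqrt(n) candidate divisors in the worst case
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)    # whatever remains is prime
    return factors           # space: just a few integers

print(factor(600_851_475_143))  # [71, 839, 1471, 6857]
```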
I’m curious to see how things turn out over the next few years. It’s an exciting time to be computing! Then again, when isn’t it?