I have a tendency to prefer the kind of "inside-out" control that Kent is advocating in this piece, though I'd never heard it called "inversion of control" before. Giving your users the ability to manipulate the output of a given function/API is a great way to future-proof your work, and something I think is generally worth doing earlier rather than later.
The problem comes, as he identifies, when you overdo it:
> What if that's all we ever needed `filter` to do and we never ran into a situation where we needed to filter on anything but `null` and `undefined`? In that case, adding inversion of control for a single use case would just make the code more complicated and not provide much value.
So how do you walk that fine line? Like most things, there's a lot of grey in the decision. Personally, I like to do things with no inversion first, sticking to the principle of "doing one thing well". Then, if I (or others) find we need to extend that functionality, I begin adding overrides. Overrides are great for maintaining existing code while still allowing for inversion of control. For example, in the pseudo-array-filter example Kent uses, I would cede control of the filter logic to a prop/variable (as he does), but then add a line that falls back to the original `null` and `undefined` checks if that variable is blank. That way existing implementations still work, nothing breaks, and moving forward the code is much more flexible (and, crucially, we sidestep the multiple-extension spaghetti we set out to avoid).
Sure, occasionally someone might inadvertently duplicate that setup by re-specifying the default behaviour in the function call, but that's fine. If you didn't set a default, every call would have to specify that behaviour anyway, so it still saves time in the long run, and you avoid pre-extending code that never needs the flexibility in the first place.
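To show what that harmless duplication looks like, reusing the hypothetical `filterArray` sketch above:

```typescript
const values = [0, 1, undefined, 2, null, 3];

// Both calls return [0, 1, 2, 3]: re-specifying the default behaviour
// at the call site is redundant, but nothing breaks.
filterArray(values);
filterArray(values, el => el !== null && el !== undefined);
```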