nil, null, let's call the whole thing off.
Apple has recently announced the Swift programming language, along with a language specification and supporting documentation. I'm not particularly interested in learning much about this language, in no small part because it seems tightly coupled to Apple's walled garden. However, I am interested in discussing nil and/or null, which I will call nil from now on, because Swift calls it that, Clojure calls it that, and Ruby calls it that.
Swift promises to disallow storing nil in a variable except in those cases where the programmer has specifically allowed for nil, be it in a parameter, a variable declaration, or what have you.
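Concretely, this is Swift's optional typing: a plain type can never hold nil, and an optional must be unwrapped before use. A minimal sketch (the variable names are just for illustration):

```swift
// By default, a String can never hold nil; the compiler enforces this.
var name: String = "Ada"
// name = nil            // compile-time error: nil is not allowed here

// nil is permitted only where the type explicitly opts in with `?`.
var nickname: String? = nil

// An optional value must be unwrapped before it can be used.
if let n = nickname {
    print(n.uppercased())
} else {
    print("no nickname set")
}
```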
Having written zero Swift programs, I grant that my opinion on this is tangentially valuable at best. But I have written many programs in many other languages, each of which has had some concept of nil, and many of which were born well after nil was thoroughly explored and its costs understood, and included it anyway. I'd like to offer some arguments for why I don't find Swift's approach valuable, and to discuss some other ways languages have approached the problems nil introduces.
- PRIMO: Nil is an elemental idea that arises naturally when reasoning about sequences of operations, and it is a valuable inclusion in a programming language. Consider map retrieval: if one attempts to retrieve a key from a map which does not contain that key, there are essentially three possible outcomes (sketched in Swift just after this list):
- Return nil, or its moral equivalent.
- Return some sentinel value in place of nil (e.g. 0).
- Treat the retrieval of an absent key as an exceptional case and raise an error through the language's error-handling facility.
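All three outcomes are visible in modern Swift's Dictionary; a minimal sketch, where LookupError and fetch are hypothetical names used for illustration:

```swift
let scores = ["alice": 10]

// 1. Return nil: the Dictionary subscript yields an Optional.
let a = scores["bob"]              // nil

// 2. Return a sentinel value in place of nil.
let b = scores["bob", default: 0]  // 0

// 3. Treat the absent key as an error condition.
enum LookupError: Error { case missingKey(String) }

func fetch(_ key: String, from dict: [String: Int]) throws -> Int {
    guard let value = dict[key] else { throw LookupError.missingKey(key) }
    return value
}

do {
    _ = try fetch("bob", from: scores)
} catch {
    print("lookup failed: \(error)")
}
```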
- SECUNDO: I view prohibiting nil as a form of static typing. The supremacy of static vs. dynamic typing is a can of worms I am not particularly interested in opening; people have built many useful things in both statically and dynamically typed programming languages. My personal preference tends toward dynamic languages, as I feel there is more implicit openness in the interfaces of systems built with them. Taking a class of values, such as the class of values that are nil, and ruling it out of participation a priori is a static typing move that I feel incurs costs out of proportion to its benefits. I am particularly skeptical because, as I understand it, Swift defaults one's program into this particular form of typing, and one is forced to opt out of it if one finds it undesirable.
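For concreteness, here is roughly what that default and its opt-outs look like in Swift's optional syntax (a sketch, with illustrative names):

```swift
// The default: nil is ruled out a priori.
var title: String = "draft"
// title = nil           // compile-time error

// Opting out, gently: an ordinary Optional, checked at every use.
var subtitle: String? = nil

// Opting out, loudly: an implicitly unwrapped Optional, which reads
// like a plain value but traps at runtime if it is nil when accessed.
var caption: String! = nil
// print(caption.count)  // runtime crash: unexpectedly found nil
```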
- TERTIO: There are very good language constructs for dealing with nil in ways that keep nil values from ruining everything. For example, Ruby treats nil as a singleton object (the sole instance of NilClass) onto which methods can be defined; Rails uses this to great effect to help catch nil errors in its specialized domain. Clojure treats nil as a distinct, viable dispatch target for protocols, so when one calls a protocol function on nil, the nil implementation is invoked.
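Neither Ruby's open NilClass nor Clojure's protocol dispatch can be shown in Swift itself, but a loosely analogous move is available there: extending Optional so that nil gets a categorical, defined behavior. A sketch (displayText is a hypothetical name):

```swift
extension Optional where Wrapped == String {
    // Give nil a defined behavior instead of a crash:
    // a missing string renders as a placeholder.
    var displayText: String {
        switch self {
        case .some(let s): return s
        case .none:        return "(none)"
        }
    }
}

let name: String? = nil
print(name.displayText)  // prints "(none)", no unwrapping required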
I'd like to spend a little more time on that third point, because it is the most important. Nil itself is not the problem in programs; the real problem is the set of tools programmers are given for interacting with nil. What Ruby and Clojure offer is a way to give the computer a categorical definition of nil's behavior for the lifetime of the program; it is the absence of that feature, as in classical Java, where the real pain arises. There is certainly a domain of functionality for which nil inputs are going to be problematic, and a facility for avoiding nil inputs could provide real value. But heralding this as saving us from the problems of nil feels a bit too much like throwing the baby out with the bathwater.
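In Swift, that facility is simply a non-optional parameter: callers must unwrap before calling, so the function body never sees nil. A minimal sketch (greet is a hypothetical function):

```swift
// greet can never receive nil; the type system makes callers unwrap first.
func greet(_ name: String) -> String {
    return "hello, \(name)"
}

let maybeName: String? = nil
if let name = maybeName {
    print(greet(name))
} else {
    print("no name to greet")
}
```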