Wed, Jul 14, 2010
In my last place, we employed a graduate software engineer, and one day I was asked to review his code. So we sat down and had a look at an app he was working on. I can’t remember what it was now, but I seem to remember it was written in Visual Basic, though I’m not sure about that either. Anyway, glancing over the code, large chunks of it were commented out with a warning along the lines of “this code is dangerous, do not use!”. OK, fair enough, he was a graduate and all that, and we worked together to sort out the problems. That’s a fairly common thing when you’re starting out in coding. You do things like that.
Then, years later, I came across Ruby and its “!” mechanism. Put simply, if you put ! after a method name, you’re saying “this method is dangerous, be careful”. It’s a coding convention which most people follow, and what’s more, you’re not meant to write a “dangerous” method unless you’ve already written a “safe” version of it.
Why on earth would you want to write “dangerous” code? Dangerous usually means the method mutates the object it’s called on, rather than returning a new value computed from it, so I presume dangerous equates to more “efficient”. Who’s kidding who here? Efficient for the interpreter maybe, but what about the maintenance programmer, wondering where all those bugs come from and why every method ends in a !?
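To make the distinction concrete, here’s a minimal sketch using Ruby’s built-in Array#sort and Array#sort! pair, which follow exactly this convention:

```ruby
a = [3, 1, 2]

# "Safe" version: returns a new sorted array, leaves a untouched.
b = a.sort
puts a.inspect  # => [3, 1, 2]
puts b.inspect  # => [1, 2, 3]

# "Dangerous" bang version: sorts a in place, mutating the original.
a.sort!
puts a.inspect  # => [1, 2, 3]
```

The bang version avoids allocating a second array, which is the “efficiency” being traded for, but any other code still holding a reference to a now sees different contents.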
What state has software engineering reached when languages positively encourage you to write “dangerous” code?