Thursday, June 02, 2022

The way we test applicants for programming jobs makes no sense

I'm acquainted with a young man who has finished one programming job and is looking for his next one. 

He's applied for a few software engineering jobs and has generally sailed through the first couple of evaluations, leading to an online test of programming skill (typically four different programming problems, to be solved in an hour or so). 

But the skills these tests demand have no discernible relation to any business problem I'm aware of. 

Examples: one problem requires that the applicant write code to find the longest palindrome in an arbitrary string of characters (a palindrome is a string that reads the same forwards and backwards). Another requires a string to be printed out in a weird "zig-zag" format that has never been needed in the entire history of business computing. And this is the basis of how we choose whom to hire? 
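To give a sense of what these exercises look like, here is a rough sketch of the "longest palindrome" problem in Python. This is my own illustration, not the wording or reference solution of any actual test; it's the kind of plain, readable answer a working programmer might produce in the time allowed:

    def longest_palindrome(s: str) -> str:
        # Simple "expand around the center" approach: for each position,
        # grow outward while the characters on both sides still match.
        # Readable and easy to verify, but O(n^2) in the worst case,
        # so it loses the speed contest to fancier O(n) algorithms.
        best = ""
        for i in range(len(s)):
            # Try both an odd-length center (i, i) and an
            # even-length center (i, i + 1).
            for left, right in ((i, i), (i, i + 1)):
                while left >= 0 and right < len(s) and s[left] == s[right]:
                    left -= 1
                    right += 1
                candidate = s[left + 1:right]
                if len(candidate) > len(best):
                    best = candidate
        return best

    print(longest_palindrome("business level"))  # prints "level"

Even this modest version takes some care to get right with a clock running, and that is exactly the problem with how the tests are scored.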

The scoring of answers gives extra points for code that runs faster. And since these problems have been around for years, they've been attacked by numerous amped-up nerd geniuses whose self-worth is wrapped up in shaving a few milliseconds off execution times. So any normal programmer who solves the exercise is judged against solutions that make their effort look inadequate on raw speed alone. Keep in mind: the applicant is generally allowed 15–20 minutes to dash off each solution, which is not nearly enough time for the thoughtful design process needed to produce correct, reliable code that works even in the edge cases. Again, this bears no relationship to how programmers are expected to function in the real world.

This inordinate emphasis on execution speed has little to do with good "real world" programming practice. Given the choice, most businesses would opt for code that is easy to understand, maintain, and modify over a tricky algorithm that shaves a few machine cycles off execution time but is dense, fragile, and incomprehensible without major study.

Modern computing systems are very fast, and modern compilers and runtimes optimize code automatically. In any real-world business application, overall processing speed is likely to depend far more on things like the speed of the network and the responsiveness of a remote database server than on the fine details of the algorithm used. The programmer has no control over these limiting factors.

The tests are designed to favor shoot-from-the-hip hotshots over programmers who actually engineer their software: weighing all the factors that go into a good solution and balancing them to produce the best result.

Is this the best the industry can do at selecting people to be in charge of one of its most important assets, its base of mission-critical software?