It’s in vogue these days to confidently declare that college degrees aren’t useful. Such unfounded assertions might even be backed by a lonely data point or two (“Gates/Zuckerberg/person du jour didn’t graduate from college,” “the smartest coworker at my company has no formal training,” etc.). This trend has accelerated lately with the proliferation of the rumor that Google and Facebook occasionally hire people straight out of high school, while conveniently ignoring the fact that it wouldn’t be unusual for entire teams at these companies to be composed of PhDs. Maybe you should just play the lottery instead?
A lot of lies start out with a grain of truth, and this particular one is no exception. The fact is that it’s becoming easier to be an innovator and a coder – barriers to entry are decreasing, costs are falling through the floor, and accessibility is on the rise. But software engineering as a profession suffers from a credibility problem, since there’s no standard for becoming a software engineer. Sure, it’s also true that a degree alone doesn’t make one. True skill isn’t denoted by a diploma, but there’s certainly a correlation. And you know what they say about hiring: it’s better to say no to a good candidate than yes to a bad one.
I’m not making the case that college has no substitute. But today’s best tech companies are going to be 97+% degree holders, and the truth is that, statistically, a degree holder is more likely to be competent. What other evidence is there that formal education matters? I can think of a few simple brain teasers, all based on real experiences. Anyone with a degree from any respectable institution should be able to answer these elementary questions about computers:
- You add two large integers and the result is -340183. What happened?
- What makes a map an abstract data structure? Describe a few concrete implementations and their runtime performance characteristics as a function of n.
- You’re using 2 nested for loops to sort an array. Why is this bad?
- Why is it that you can address 4GB of memory on a machine with only 2GB of installed memory?
If you were awake for your second or third semester of college, you’d be able to answer these slightly more advanced questions:
- You allocate an object on the stack and pass its pointer back to your caller. Why is this bad?
- Why can’t 2 processes communicate by passing pointers to data between them?
- How can you prove that there is no lossless compression algorithm that always makes the input smaller? That is to say, for any compression function F there exists an input x where sizeof(F(x)) ≥ sizeof(x).
- Why can’t you implement reliable mutual exclusion at the software level without hardware support on an operating system that uses preemptive scheduling?
These are the kinds of concepts that you don’t regularly encounter in day-to-day work, but that you’d learn in college. So why does it matter that we understand them? Easy: because having foundational knowledge lets us reason about how things work when abstractions leak. And leak they will. You can deal with them the right way or risk tanking your product because you never learned the CS equivalent of germ theory: you’re just a witch doctor prescribing snake oil, and most of the time things work out, but when they don’t — you can’t figure out why.
In addition to creating software that is reliably working instead of mostly working, if you’re doing groundbreaking work of any sort, it helps to have an understanding of where the state of the art is, so you don’t spend a few years poorly reinventing the wheel before realizing your objective has already been proven mathematically impossible. This is the real reason companies like Microsoft and Google suck up CS PhDs and seemingly have nothing to show for it; the kinds of products they enable aren’t front and center, they’re the secret sauce. PhDs at those companies are going to invent things like MapReduce and Real-Time Human Pose Recognition, but you know them as Google Search and Xbox Kinect. High schoolers without college degrees weren’t involved.
If you don’t know what you don’t know, it’s easy to declare that the knowledge gaps in your understanding aren’t important. That’s the Dunning-Kruger effect, and it’s also why business majors with The Next Facebook idea never make it out of pre-alpha. They don’t know how utterly outclassed they are until it’s too late.
The last reason the degree matters is somewhat subjective, but I think it’s worth consideration. Anybody can start a personal project or code up the fun part of that Twitter clone. But shipping is grueling and unforgiving work bound by tight time constraints. It’s not just hard; it’s a lot of sustained work. Getting a degree is supposed to reflect at least a modicum of work ethic, so it is not just a testament to qualifications but a reflection of values: here is a person who did what it takes to get to the finish line. In the real world, finishing stuff is the only part of working that matters.
Answers:

Integer overflow. The integer type is signed and the sum has wrapped around past the maximum representable value.
A map is abstract because it specifies only the interface – dictionary lookup of key–value pairs. Concrete implementations are typically a balanced tree or a hash table, which make time-versus-space tradeoffs: O(log n) lookups for the tree versus expected constant-time lookups for the hash table.
Suboptimal big-O growth: the nested loops do O(n^2) comparisons, while comparison sorts like merge sort or heapsort achieve O(n log n).
Virtual memory abstracts physical memory away; the pagefile (swap) serves as a backing store for pages that don’t fit in RAM, so the addressable space can exceed the installed memory.
Stack-allocated memory goes out of scope when the function returns, so the pointer will reference invalid memory.
Each process gets its own virtual address space from the operating system, so an address from a different process is meaningless.
Pigeonhole principle. Easily proven by enumerating all possible compression results for inputs of size 4 bits, for example. The input-to-output mapping would need to be injective, but by definition fewer bits means there aren’t enough combinations to achieve this, so you couldn’t decompress losslessly.
In a multithreaded environment with preemptive scheduling, you can’t guarantee that an if condition still holds by the time the next statement executes: the check and the action aren’t atomic, and no sequence of ordinary loads and stores can close that window without a hardware atomic instruction underneath.