Don’t get a Masters in Computer Science

I am pretty sure most software engineers should get a BS in computer science. I’ve written extensively about this. But I’m often asked by prospective engineers whether it’s worth the effort to get the MS too. In the past I’ve mostly dodged on this, with a hedged answer I would charitably paraphrase as “umm, probably no, but maybe yes, if you find a subfield you really like.”

Today I realized that this is terrible advice. If you have to ask, you should not get a master’s degree in computer science.

Why? Because all you MS CS candidates suck at the most basic interviews.


Like I sometimes have trouble differentiating between people with an MS and people who have literally never coded in their lives. But maybe that’s because they aren’t mutually exclusive:

  • I don’t do this anymore, but I used to just ask fizzbuzz over the phone, and the candidates who routinely failed this were either masters students or masters grads looking for their first job.
  • For some MS CS grads, reversing a string is literally a half-hour affair, and doing it in-place without an O(n) memory allocation is considered “tricky.”
  • I once had a poor soul with a masters degree spend 10 minutes failing to name a way to communicate between 2 computers.

I don’t know what’s going on here.

But I have a few theories:

1) Software engineering experience compounds, but instruction in CS fundamentals offers diminishing returns after 4 years. I might be suffering from some Dunning-Kruger here as I only have a BS, but the vast majority of fundamental, broadly applicable theory seems to taper out after ~3 years of quality instruction, in my experience.

2) MS programs lack even a remotely standardized curriculum or set of admissions requirements. Master’s programs seem to fall into two camps: the “we’re vetting you for a PhD” camp, and the “professional degree” camp (which is very likely a cash cow for the university). Both camps assume you have prior exposure to the subject matter, and therefore don’t offer a well-structured curriculum in fundamentals. But if an MS CS program doesn’t teach CS fundamentals (that’s what the BS is for, right?), and doesn’t require a BS CS for admission, how can it guarantee graduates have a baseline level of knowledge upon graduation? It can’t.

3) MS students have little or no exposure to actual coding. A lot of MS degree work I’ve seen either involved studying esoteric algorithms or mathematical proofs, or research that mostly involved bragging about how the machine running a neural network has 256GB of RAM. I took a few graduate-level courses back in my day, and I’d venture at least half of them required no coding whatsoever. Now recall the part about no structured curriculum, and you are well on your way to a choose-your-own-adventure degree that could easily see you to graduation day having written about as much code as a real engineer might deploy to production before lunch today.

Of course, it goes without saying this isn’t all candidates from all schools. But it is a pattern, and these days I just reflexively de-prioritize talking to MS CS candidates, because doing otherwise is a setup for disappointment.

The truth is, I suspect this state of affairs is a mix of correlation and causation. I know it’s wrong, but “if this candidate was any good, he would’ve gotten a job on the strength of his skills rather than making his resume fancier while waiting out the recession or whatever” has crept into the back of my mind before.

It’s simple. We, uh, kill the batman.

It doesn’t really have to be this way. If your goal is to be the best engineer that you can, those 2ish years of extra experience you get in the industry make a big difference. Those are your learning years where you absorb hard-won experience from your seniors on engineering trade-offs and how to work on teams with existing codebases under real multidimensional constraints.

And if your goal is to make the most money you can, an MS almost never pays off unless you just happened to specialize in something that is both rare and highly in demand. Otherwise, if you are lucky, you are looking at, compared to a fresh BS CS grad, a pay bump of ~$10k. Maybe. Forget comparing to someone who graduated with the BS CS one or two years ago; they’ve left you in the dust.

This should be obvious, if you think about it for a moment. New grad engineers increase their skills and value tremendously over 2 years; they get commensurate increases in salary to reflect this[1], and the average person who took those 2 years to get an MS CS is starting from an experience deficit and never catches up. It’s no wonder then that it only offers a ~$5-10k salary bump: it isn’t all that valuable on its own.

So don’t get a master’s degree[2]. It probably won’t pay off, and your engineering career will suffer. There are exceptions, but they don’t apply to Joe Shmoe with an MS from Nowheresville.

[1] Mostly by changing jobs, because employers in this industry seem to routinely under-level new grad engineers as they gain experience, but that’s another rant for another time.

[2] But if you do, get a BS CS first. I see again and again that most successful people with master’s degrees started with the BS.

How we sped up our background processing 150x

Performance has always been an obsession of mine. I enjoy the challenge of understanding why things take as long as they do. In the process, I often discover that there’s a way to make things faster by removing bottlenecks. Today I will go over some changes we recently made to Privy that resulted in our production application sending emails 150x faster per node!

Understanding the problem

When we started exploring performance in our email queueing system, all our nodes were near their maximum memory limit. It was clear that we were running as many workers as we could per machine, but CPU utilization was extremely low, even when all workers were busy.

Anyone with experience will immediately recognize that this means these systems were almost certainly I/O bound. There are a couple of obvious ways to fix this. One is to perform the I/O asynchronously. Since these were already supposed to be asynchronous workers, this didn’t intuitively seem like the right answer.

The other option is to run more workers. But how do you run more workers on a machine already running as many workers as can fit in memory?

Adding more workers

We added more workers per node by moving from Resque to Sidekiq. For those who don’t know, Resque is a process-based background queuing system. Sidekiq, on the other hand, is thread-based. This is important, because Resque’s design means a copy of the application code is duplicated across every one of its worker processes. If we wanted two Resque workers, we would use double the memory of a single worker (because of the copy-on-write nature of forked process memory in Linux, this isn’t strictly true, but it was quite close in our production systems due to the memory access patterns of our application and the Ruby runtime).
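The architectural difference can be sketched in plain Ruby (a toy model using only the standard library – not Sidekiq’s actual internals):

```ruby
require "thread"

# Loaded application code: with threads this exists once per process;
# with process-based workers, each worker carries its own copy.
APP_CODE = "pretend this is ~100MB of loaded application state"

queue   = Queue.new   # thread-safe job queue, shared by all workers
results = Queue.new

workers = 6.times.map do
  Thread.new do
    loop do
      job = queue.pop
      break if job == :shutdown
      results << job * 2   # stand-in for real job processing
    end
  end
end

10.times { |i| queue << i }
6.times  { queue << :shutdown }
workers.each(&:join)

processed = []
processed << results.pop until results.empty?
puts processed.sort.inspect
```

Each additional worker here costs a thread stack, not another copy of the application image – which is why the switch bought us roughly 6x more workers in the same memory footprint.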

Making this switch to Sidekiq allowed us to immediately increase the number of workers per node by a factor of roughly 6x. All the Sidekiq workers are able to more tightly share operating system resources like memory, network connections, and database access handles.

How did we do?

This one change resulted in a performance change of nearly 30x (as in, 3000% as fast).

Wait, what?

Plot twist!

How did running more workers also make each individual worker roughly 5x faster? I had to do some digging. As it turns out, there are a number of things that make Resque workers slower:

  • Each worker process forks a child process before starting each job. This takes time, even on a copy-on-write system like Linux.
  • Then, since there are now two processes sharing the same connection to Redis, the child has to reopen the connection.
  • Now, the parent will have to wait on the child process to exit before it can check the queue for the next job to do.

Compounded across every worker, these were adding a multi-second penalty to every job, on average. There is almost certainly something wrong there (and no, it wasn’t paging). I’m sure it could’ve been tuned and improved, but I didn’t explore further since it was moot at this point anyway.
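The fork-per-job pattern described above looks roughly like this (a simplified stdlib-only sketch, not Resque’s actual source):

```ruby
jobs = [1, 2, 3]

statuses = jobs.map do |job|
  pid = fork do
    # Child process: Resque would reopen its Redis connection here,
    # run the job, then exit.
    exit!(job)   # encode the "result" in the exit status for this sketch
  end
  Process.wait(pid)   # the parent sits idle until the child finishes...
  $?.exitstatus       # ...and only then checks the queue for the next job
end

puts statuses.inspect
```

Every job pays for a fork, a connection re-open, and a wait – the overhead we were seeing.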

Let’s do better – with Computer Science™

In the course of rewriting this system, we noticed some operations were just taking longer than felt right. One of these was the scheduling system: we schedule reminder emails in Redis itself, inserting jobs into a set that is sorted by time. Sometimes things happen that require removing scheduled emails (for example, if the user performs the action we were trying to nudge them toward).

While profiling the performance of these email reminders, I noticed an odd design: whenever the state of a claimed offer changes (including an email being sent), all related scheduled emails are removed and re-inserted (based on what makes sense for this new state). Obviously, this is a good way to make sure that anything unnecessary is removed without having to know what those things are. I had a hunch: If the scheduled jobs are sorted by time, how long would it take to find jobs that aren’t keyed on time?

O(n). Whoops!

It turns out that the time it took to send an email depended linearly on how many emails were waiting to be sent. This is not a recipe for high scalability.

We did some work to never remove scheduled jobs out of order – instead, scheduled jobs check their own validity at runtime and no-op if there is nothing to do. Since no operation depends linearly on the size of the queue any more, it’s a much more scalable design.
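The resulting pattern looks something like this sketch (the names and states are illustrative, not Privy’s actual code):

```ruby
# Stand-in for a state lookup (in production this would hit the database).
OFFER_STATE = { 42 => :claimed }

# Each scheduled job re-checks state when it fires. A stale job costs one
# O(1) lookup and a no-op, so nothing ever scans the whole sorted set.
def reminder_job(offer_id, valid_states:)
  return :noop unless valid_states.include?(OFFER_STATE[offer_id])
  :sent   # stand-in for actually sending the email
end

puts reminder_job(42, valid_states: [:pending])   # offer was claimed → :noop
puts reminder_job(42, valid_states: [:claimed])   # still relevant   → :sent
```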

By making this change, we saw an increase in performance of more than 5x in production.

Summing up

  • Moving from process-based to thread-based workers: ~6x more workers per node.
  • Moving from forking workers to non-forking workers: 5x faster.
  • Removing O(n) operations from the actual email send job: 5x faster.
  • Total speedup: Roughly 150x performance improvement.

Compounding Advantages

The biggest myth about successful people is the “overnight success.” There’s basically no such thing. It’s a great platitude that happens to be true – but how do we distill it down to its essential lesson?

The first order of business is to understand where the advantages that lead to success come from. They might come from raw talent – which I won’t focus on, because it isn’t something you can control (and experience is often confused with raw talent, because they look the same to outsiders). Or they might come from external sources – such as growing up with good financial security, in a two-parent household, in a well-off neighborhood with good schools. Those types of advantages are mostly out of your control as well, so they’re out too. Finally, there is experience.

Experience is the advantage most under your control. When people ask me for advice about careers in computer science, they often know they are at a disadvantage (often because they are switching career tracks), but aren’t sure of the most efficient way to erase that deficit. Yet what appears to be an insurmountable disadvantage is usually just someone else’s years of hard work – or your lack thereof.

So how does one gain experience without any experience? Isn’t that some sort of catch-22?

Not really. If it were, then by definition the industry couldn’t possibly exist, now could it?

(Normally, when people claim that it’s a catch-22, they’re just being unrealistic about what types of jobs are actually entry-level, or, more likely, they aren’t willing to do what it takes to become qualified for entry level jobs. In fact, software engineering is one of the easiest jobs to gain experience in, because all you need is a keyboard and monitor that eventually connects to the internet, and some free time. So whining about it is just immature.)

This isn’t really an essay on how to get into software engineering, since I’ve already written a bit on that topic. But there is a recurring theme, which is that it takes consistent application of conscious effort to build and maintain the credentials to become an engineer. And most importantly, all experience advantages start small, and compound over time. So the best way to become the best engineer is to start coding, a lot. Today.

Why coding?

Because while software engineering is about much, much more than just coding, coding is the most important part. It’s the only part you can’t skip. It’s also one of the easiest skills to show off and test for.

OK. So what should you code?

There’s no one-size-fits-all answer, but here’s a few starting points:

1) Go to Codecademy and start one of the courses. It almost doesn't matter which one, since they're all pretty solid.
Pros: Structured learning with helpful hints and explanations, sense of progression.
Cons: Toy problems; unlike the other options, they don't exercise reading existing code, which is an extremely useful skill.
2) Take a Coursera course (core concepts with programming involved -- data structures, algorithms, operating systems).
Pros: Online-classroom environment, instructor-led with a focus on fundamentals.
Cons: Academic in nature, which is actually sort of a plus, but it won't maximize lines of code written per day.
3) Download a release of Ruby on Rails and start a web app.
Pros: Good documentation and explicit best-practices, more "realistic" than some guided courses.
Cons: Undirected learning. Requires product management to design things to code, which is a distraction. Too much Ruby/Rails "magic" abstracts away important concepts.
4) Browse Github (etc) and find an open source project to contribute to.
Pros: Working on released software, chance to interact with other coders. Most "realistic" experience.
Cons: Reading code is significantly harder than writing code.
5) Download the iOS / Android SDK and create a mobile app.
Pros: Everyone loves mobile.
Cons: Learning programming, a programming language, how to read documentation, and a complex API at the same time can be extremely overwhelming.

So…About that degree thing

I’m of the opinion that most software engineers should get a Bachelor’s in Computer Science. I’ve hammered on this point before. There are exceptions though. Like, do you know your computer science fundamentals (data structures, algorithms, operating systems, programming paradigms, software lifecycles)? Do you have practical software engineering experience (e.g., measured in years), doing work that shipped?

If not, I still recommend a CS degree, because it’s an excellent signaling mechanism, and you can complete one full-time in less than the traditional 4 years. However, coding boot camps have been all the rage lately, and I wanted to touch on them briefly.

Basically, coding boot camps are an excellent option for many people (and I know of many who have successfully gone this route), but I don’t recommend them in general because the best engineers aren’t minted in 12 weeks. It’s a different story if you already have some experience under your belt but don’t want to get a full-on BS CS. But in that case, a coding boot camp generally isn’t tailored for you anyway, since most programs by design don’t require existing experience. And that means you lose the benefits of a compounding advantage by not building on existing experience.

This is the main advantage of following a degree-granting program. It starts with the fundamentals, and then builds on that foundation with programming experience and core theory, leveraging your existing knowledge.


You gain a small advantage, compounding itself.

Why I picked Microsoft over Amazon

It’s interviewing season, and that means people are going to get offers really soon. I’ve been wanting to write a blurgh post about my decision to pick Microsoft over Amazon for some time now, and I’ve been asked for my reasoning a couple times. So maybe I can help others make the right choice.

I may be rationalizing my decision in hindsight, but it turns out there were a number of advantages Microsoft has over Amazon; here is the view from 10,000 feet:

  1. Substantially better benefits (health, wellness, employee stock purchase plan, 401k matching, perks), and slightly better overall compensation. You can increase your cash income an additional 5-8% risk-free by taking full advantage of ESPP, 401k, and other game-y things with your health benefits.
  2. Generous relocation package + annual performance bonus. It makes up for not getting a hiring bonus at least.
  3. Stock vesting is substantially faster (for Amazon, stock vesting is all backloaded so the last 80% or so vests in years 3 and 4). At MSFT the vesting is a linear 25% every year.
  4. I get my own office, and work-life balance is generally better. Microsoft’s median employee tenure backs this up.
  5. No on-call rotations[1]. Annual performance bonuses in cash, in addition to stock. Did I mention work-life balance?
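That 5-8% claim is easy to sanity-check from the plan numbers (a back-of-envelope sketch with an illustrative salary – not financial advice):

```ruby
base = 100_000.0

espp_gain  = (0.15 * base) * 0.10   # buy 15% of salary at a 10% discount
k401_match = (0.06 * base) * 0.50   # 50% match on contributions up to 6% of base

total_pct = (espp_gain + k401_match) / base * 100
puts total_pct   # ~4.5; health-incentive rewards and perks make up the rest
```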

And a handy table I put together, mostly from a combination of sources (stars denote uncertainty) and my highly scientific opinion:

| | Microsoft | Amazon | Favors |
|---|---|---|---|
| Relocation (from east coast) | all-expenses-paid or $5,000 cash, tax-assisted (2011) | all-expenses-paid or $7,500 cash, tax-assisted | Amazon |
| Signing bonus | none in 2011; there may be a small one now | ~25% of base in 2 installments, pro-rated for 2 years | Strongly Amazon |
| Hiring stock grant | ~60% of base; vesting 25% per year | ~50% of base; vesting 5% 1st yr, 15% 2nd yr, then 20% every 6 months | Microsoft |
| Base salary | 60-75th percentile (on average, industry norm +15%) | 50-75th percentile (on average, industry norm +10%) | Leaning Microsoft |
| Base salary increase | 0-9%; 3.5-4% is typical | on average less than 3.5% | Microsoft |
| Annual cash bonus | on average 10% of base | usually none | Strongly Microsoft |
| Annual stock grants | <10% of base | 10-15% of base* | Amazon |
| Promotions | see career trajectory discussion | see career trajectory discussion | Amazon |
| 401k matching | 50% of contributions up to 6% of base salary (3% match) | 50% of contributions up to 4% of base salary (2% match) | Microsoft |
| Employee Stock Purchase Plan | 10% discount; purchases capped at 15% of base salary | none | Strongly Microsoft |
| Other fringe benefits | Prime Card, free onsite health screenings, various health incentives & rewards, charity + volunteering match, discounted group legal plan for routine legal work | 10% off up to $1,000 in purchases per year | Strongly Microsoft |
| Health | see health benefits discussion | see health benefits discussion | Leaning Microsoft |
| Kitchen | soft drinks, milk, juice, tea, on-demand Starbucks, espresso | tea, powdered cider, drip coffee | Leaning Microsoft |
| Time off | 3 weeks vacation, 10 paid holidays, 2 personal days | 2 weeks vacation (3 wks after 1st year), 6 paid holidays, 6 personal days | Microsoft |
| Location | Redmond | Seattle | Strongly Amazon |
| Tools/platforms | closed-source Microsoft stack, proprietary; many legacy desktop platforms, lots of new services | open-source Linux stack; almost entirely services-based, many legacy concerns; best-in-class deployment tools | Strongly Amazon |
| On-call | expected of most engineers (unless product has no services component, increasingly unlikely) | expected of most engineers | Leaning Microsoft |
| Median age | 33 | 32 | |
| Median tenure | 4.0 years | 1.0 years | |

Career Trajectory

The great thing about Microsoft is that there’s always a career path for people who want to become valued individual contributors. However, you should be aware that the difficulty level ramps up pretty quickly. Generally, most ICs are unlikely to earn the title of Senior SDE in less than 4-5 years, and Microsoft will rarely consider someone for a lead engineer (the first rung on the management ladder[2]) who has fewer than 6-7 years under his belt. The promotions don’t stop just because you don’t want to be a manager, though – excellent ICs can earn titles like Principal Engineer, Distinguished Engineer, and Technical Fellow, roles that are respected and valued as much as Corporate Vice Presidents.

At Amazon, expect responsibility to ramp up fairly quickly, along with somewhat higher chances for advancement — both because Amazon is growing faster, and because it has higher rates of attrition (I suspect attrition is higher at the bottom than the top, but I have no evidence for this). Being offered SDM I (the first rung on the management track) three years out of college is not atypical. This is partly because of the horrible retention: by the time you hit 3 years, you’re more tenured than about 80% of the company. Anecdotally, talented Microsoft ICs on the management track have told me that specific Amazon counterparts are progressing faster (to development manager) than they are. So if management-track progression is your goal — pick Amazon, not Microsoft.

Health Benefits

Microsoft in 2014

  • 100% preventative care covered, always.
  • HSP ($1000-$2500 annual employer HSA contribution, $1500-$3750 deductible; $1000-$2500 coinsurance) or HMO (no deductible/limited coinsurance, copays of $20-$100 for outpatient service)
  • full or partial dental coverage + payroll credit
  • vision: free annual eye exam and up to $225 of vision hardware per year; lasik benefit
  • free gym membership OR up to $800 in cash reimbursement for fitness purchases OR $200 cash
  • free life insurance – 2x annual base pay
  • long term disability insurance – 60% of monthly income up to $15,000
  • optional accidental death & dismemberment

Amazon in 2014
Documents I got my hands on weren’t heavy on details. I’m just going to go out on a limb here and say Microsoft’s health benefits are better. Here is a copypasta from their careers page that tells you approximately nothing about how they compare to Microsoft:

  • A choice of four medical plans, including prescription drug coverage, designed to meet your individual needs, with domestic partner coverage
  • Dental plan
  • Vision plan
  • Company-paid basic life and accident coverage as well as optional coverage at a low cost
  • Company-paid short- and long-term disability plan
  • Employee assistance program including dependent-care referral services and financial/legal services
  • Health-care and dependent-care flexible spending accounts

Fringe Benefits

Allow me a moment to blow you away with the absurd benefits Microsoft offers. Prime Card gives you random discounts on everything from Apple products to local restaurants. It also gets you discounted admission (I think $5?) to IMAX movies. Microsoft hosts free onsite health screenings for general health, flu shots, glucose/cholesterol testing, etc — and even gives away gift cards for attending. They have a charity matching program – they’ll match dollar for dollar every contribution you give to registered charities, and also pay $18/hr to any charity you volunteer at to increase your impact. There’s a discounted group legal plan that costs, I think, something like 30-40 dollars a month for routine legal work. There’s tuition reimbursement. There’s generous paid maternity (AND paternity) leave.

Amazon discounts 10% (up to $100 off) of annual purchases, which is cool too, I guess.

Commuting, Culture & Tools

Microsoft runs free shuttles to most major residential areas nearby – the largest private bus system in the world, in fact. On top of that, they provide a free ORCA card for unlimited free travel on the local bus system.

Amazon also has a free ORCA card on offer, but only a limited private shuttle system between campuses.

Amazon is in Seattle proper, where anything vaguely resembling nightlife happens; Microsoft is on the so-called Eastside, across a narrow bridge, where basically nothing does. This is not an insignificant issue for the many people who work at Microsoft but want to live in Seattle: doing so is likely to extend your commute by at least 30 minutes each way.

As far as tools go, both companies have first-rate toolchains. Amazon probably leads here, with a very impressive toolset, dependency management system, and deployment process. Microsoft’s approach to the software engineering process, on the other hand, is both much more disciplined and less flexible; they produce some of the finest program managers. And almost all their tools are closed-source, so you’re unlikely to be using, say, git, unless you work at Amazon. The downside of Amazon’s agility is a sometimes chaotic software development process; getting stuck on a team with a mandate to improve a service while simultaneously fixing bad-architecture and rush-job warts is not uncommon, and unrewarding.

Work-life balance is manageable at both companies. I’ve had a number of 60-hour weeks, maybe even a few 70-hour weeks near shipping time, but they were out of the norm. I’m inclined to say Microsoft requires fewer hours on average than Amazon, where people might see 45-50 hours a week as closer to normal. Everyone will tell you that “how much work you get done” matters more than “how many hours you put in.” This is a half-truth – you need to put in the right amount of face time; don’t land on either tail of the bell curve.

If you want to work in a fast-paced environment leading the way in services, cross your fingers every time you deploy, and don’t mind getting paged in the middle of the night, work for Amazon. If you want to make slightly more money shipping desktop software (or deploy services like you would ship desktop software), and pretend with your 100,000 coworkers that the company is becoming “agile,” work for Microsoft.

Thanks to all the fine folks who answered my questions and reviewed early drafts of this.


[1] This is no longer true on many teams at Microsoft. For the most part I hear it’s not as bad as at Amazon, but there can be what I charitably call “rough patches” when a team implements on-call rotations for the first time and invariably screws things up until the alert frequency can be tuned correctly (ASK ME HOW I KNOW).
[2] Microsoft recently moved away from formal lead positions as of 4Q 2014, bringing it into alignment with most other companies like Google and Amazon. Basically all ICs report to a dev manager now, and a “lead” engineer has no direct reports any more, but has de facto authority over a project or team. This doesn’t change the fact that progressing from an IC to a manager at Microsoft is both very hard and takes a long time.

The State of Securing HTTP in 2014

In a post-Snowden world, it’s not unreasonable to ask that every site consider deploying “all HTTPS, all the time,” even if just to troll the guys who really want to track what videos I’m watching on YouTube. Here’s how the process breaks down, based on my research and experience. This is not a guide, but a general overview of the state of the art.

Performance was a major reservation I had going into this. So I’m happy to say: SSL/TLS is not only inexpensive, it’s ridiculously cheap. In my case, the CPU load increase was on the order of 1%, which is a rounding error. Memory usage might have gone up by a couple KB per connection, and there was no noticeable increase in network overhead. In an end-to-end test, a browser with a cold cache loaded and rendered secure pages in (statistically speaking) the same amount of time. On the server side, it’s impossible to tell from the perf graphs when SSL was enabled – and this is on a site serving more than 6 million hits a month.

There are a couple of things needed to reach this transparent level of performance while still maintaining high security, and they are almost entirely configuration-dependent, with very little having to do with the application itself.

Session resumption

SSL and TLS support an abbreviated handshake in lieu of a full one when the client has previous session information cached. The full handshake takes two roundtrips (plus one for the TCP handshake); resumption spares the server an RSA operation for the client key exchange and – the big win – eliminates one of those roundtrips.

Here is where things start to get weird. There are at least two forms of session resumption. The session identifier method is baked into SSL and is therefore supported by default – enable it on the server and it should Just Work. The one issue is that the server must maintain a session cache. Worse, if you have multiple nodes in the backend, you need to either move this session cache into a shared pool or implement some kind of session affinity (urgh). Or perhaps you could do something even stupider and terminate SSL at your load balancer, defeating the purpose of having a load balancer.

Luckily, TLS has an optional extension (described in RFC 5077) that makes the client store the session resumption data, including the negotiated master secret and cipher. This session ticket is encrypted and readable only by the server, to prevent tampering. In short, implementing TLS session tickets as the resumption mechanism gives you the best of both worlds – a faster, abbreviated handshake without requiring the server to cache anything. Being optional, though, the extension is not supported by all clients and servers, and a pool of servers needs to be configured to share tickets by using a common encryption key (unless you have session affinity, in which case it doesn’t matter).
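For concreteness, here is roughly what both resumption mechanisms look like in an nginx config (the directive names are real; the sizes, timeout, and key path are illustrative):

```nginx
# Session-ID resumption: a server-side cache shared across nginx workers.
ssl_session_cache    shared:SSL:10m;
ssl_session_timeout  1d;

# RFC 5077 session tickets: the client holds the state instead.
ssl_session_tickets  on;
# On a multi-node pool, deploy the same key file to every node so any
# server can decrypt tickets issued by any other:
ssl_session_ticket_key /etc/nginx/tls/ticket.key;
```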

Certificate Chains

Most certificates require an intermediate certificate to be presented along with them. There could even be several intermediates. Ensure the following:

1) Send all required certificates to validate the chain. If you don’t, things will probably still work because browsers will tolerate anything short of genocide, but they’ll probably do a DNS/TCP/HTTP dance in the middle of your TLS handshake to some other server to grab the certificate, which is obviously no good.

2) Don’t send any extraneous certificates. Besides the obvious slowdown from sending unnecessary data, you *might* in some cases cause an extra roundtrip if you overflow your TCP window and end up having to wait for an ACK before sending more data. So yeah, fewer packets matter here.
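In nginx, for example, the served chain is simply the order of certificates in the file you point `ssl_certificate` at (the filenames here are illustrative):

```nginx
# Build the chain file as: leaf first, then intermediates, and never the
# root (clients must already trust the root, so sending it is wasted bytes):
#
#   cat example.com.crt intermediate.crt > example.com.chained.crt
#
ssl_certificate     /etc/nginx/tls/example.com.chained.crt;
ssl_certificate_key /etc/nginx/tls/example.com.key;
```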

OCSP Stapling

OCSP is the Online Certificate Status Protocol. When a client receives the server’s certificate, it normally connects to the certificate authority to ask if the certificate has been revoked. You can save the client a roundtrip to the CA server if you enable the OCSP stapling extension, in which the server periodically connects to the CA to perform the OCSP check itself, then staples this response to the client during the handshake.

This works because the CA’s response is both time-stamped and signed. Clients can be assured that the OCSP response has therefore not been tampered with, nor can it be used in a primitive replay attack since it has an expiration tied to its validity.

There are some aggravating issues here. One is that OCSP stapling only allows one response to be stapled at a time, which is problematic for certificate chains and will probably end with the client making its own OCSP calls anyway. Another is that OCSP responses can be relatively heavy (like 1KB-ish), which combined with the certificates themselves can overflow the TCP window and cause a roundtrip for ACKs, as mentioned above.
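Enabling stapling in nginx, for instance, looks like this (real directives; the paths and resolver are illustrative):

```nginx
ssl_stapling        on;     # fetch and staple the OCSP response
ssl_stapling_verify on;     # verify the CA's response before stapling it
# Certificate(s) used to verify the OCSP response (typically the
# intermediate that signed your leaf certificate):
ssl_trusted_certificate /etc/nginx/tls/intermediate.crt;
# nginx needs a resolver to reach the CA's OCSP responder:
resolver 127.0.0.1;
```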

Cipher Suites

This part can get filled with conjecture and hypotheticals pretty quickly, but basically you have two choices to make:

1) Cipher: A stream cipher like RC4 is fast and doesn’t require padding; this saves bytes on the wire. A block cipher like AES will need padding bytes, but may be more secure. AES-256 is overkill for a 1024-bit public key, but that’s not really a concern since most keys are 2048-bit now.

2) Key exchange: RSA or DHE+RSA? With ephemeral Diffie-Hellman support you can enable Perfect Forward Secrecy, which prevents recorded traffic in the past from being decrypted even if the private key is compromised. This is really powerful and really secure, but it also breaks debugging tools like Wireshark since having the private key doesn’t help you decrypt the traffic. You’ll also handshake at about half the speed of pure RSA.

The reality is that it’s more likely you’ll get pwned by some buffer overflow or Heartbleed-type bug than have someone factor your key, so keep this in mind when selecting your key exchange algorithm and cipher.

Other Considerations

You may need HSTS (HTTP Strict Transport Security) if you have concerns about SSLStrip. SSLStrip is a man-in-the-middle attack in which the MITM silently downgrades requests to plain-HTTP pages and simply reads the transmitted plaintext. You can implement “HSTS” at the application level selectively by detecting plain-HTTP requests and redirecting to the HTTPS version of your page. But if you are doing this nonselectively, why write more code to do something slower higher up in the stack? Just send the Strict-Transport-Security header. Leaving both the application and protocol levels unprotected invites exactly this kind of attack.
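For illustration, here’s a hedged sketch of the application-level approach as generic WSGI middleware (the app and hostnames are hypothetical): plain-HTTP requests get redirected to HTTPS, and secure responses get the Strict-Transport-Security header.

```python
def hsts_middleware(app, max_age=31536000):
    """Wrap a WSGI app: redirect plain HTTP to HTTPS and add HSTS."""
    def wrapped(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            # Plain HTTP: redirect to the HTTPS version of the same page.
            host = environ.get("HTTP_HOST", "example.com")
            location = "https://" + host + environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]

        def add_header(status, headers, exc_info=None):
            # Secure response: tell the browser to insist on HTTPS
            # for max_age seconds, defeating future SSLStrip attempts.
            headers.append(("Strict-Transport-Security",
                            "max-age=%d" % max_age))
            return start_response(status, headers, exc_info)

        return app(environ, add_header)
    return wrapped
```

In production you’d emit the header from the web server itself (e.g. Nginx’s `add_header`), which is both faster and harder to get wrong.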

Application-level changes may be required. Some code is prone to breaking when served over HTTPS: poorly written Javascript can break if it isn’t expecting HTTPS, and external services that don’t support HTTPS can be problematic. In particular, all images and other resources need protocol-agnostic links AND need to be served securely themselves, otherwise the browser will show an annoying “mixed content” warning because not every resource was sent securely.

Older browsers like IE 6/7/8 screw everything up: IE on Windows XP has no SNI support, and IE 6 ships with TLS disabled by default. You can either support IE 6 / Windows XP or your connection can be secure.

There’s some moderate, but not insurmountable, IT overhead to enabling SSL/TLS. Apache and Nginx support SSL/TLS either out of the box or via easily enabled modules, so it isn’t hard to set them up and configure them to use OpenSSL. But configuring the options as explained above is paramount. So is keeping OpenSSL and other components up to date.

You also need to remember to renew and swap out expiring certs, and to securely store and back up private keys, cert signing requests, passwords, and certificate authority information. In (one|three|five) years it’ll be time to remove the expired certificate and deploy a new one, so you’d better A) remember where you got your certificate; B) remember how to log in and buy, request, and obtain a new one; and C) test, deploy, and restart your services using the new certificate.
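A renewal reminder is easy to automate. As a sketch, given the `notAfter` string from a certificate (the format `ssl.SSLSocket.getpeercert()` returns), compute the days remaining so a cron job can nag you before the deadline; the date below is an example value, not a real cert’s.

```python
import ssl
import time

def days_until_expiry(not_after, now=None):
    """Days until a cert's notAfter date (e.g. 'Jun  9 12:00:00 2025 GMT')."""
    expires = ssl.cert_time_to_seconds(not_after)  # parses the cert date format
    now = time.time() if now is None else now
    return (expires - now) / 86400.0
```

Point this at your deployed cert and alert when the result drops below, say, 30 days; that plus the calendar entry should cover you.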


You should enable SSL. It’s not that hard, performance is a non-issue, and you can buy a cert for about $15 a year. Put a date on your calendar to renew it.

Raymond Chen’s lessons

A random collection of wisdom from Raymond Chen and The Old New Thing. I plan to keep this updated as I discover/remember more of them.

Windows doesn’t have an expert mode because you are not an expert.
This is just the Dunning-Kruger effect in play: people who are not experts pretty much by definition lack the ability to judge whether they are experts or not. “Expert users” using the advanced features of Windows invariably make feature requests that are equivalent to the beginner feature that already exists.

The hatchway is still secure, even if you opened it with the key.
It’s not a security bug if the user has to first give permission to elevate. Bogus security reports of this nature generally go like this:

  1. Do something that requires elevation, such as replacing an application’s DLL with a malicious copy.
  2. Run the application.

Except, it’s not a security bug because step 1 required elevation, and therefore an administrator’s consent.

Eventually, nothing is special any more.
If you create special functions or flags in your API to give them extra functionality, they will in practice become the defaults over time, as programmers cargo-cult their way through programming. Eventually people find that the regular function “doesn’t work” (for various definitions of “work”), and that the special function does.

Providing compatibility overrides is basically the same as not deprecating a behavior.
“If you provide an administrative override to restore earlier behavior, then you never really removed the earlier behavior. Since installers run with administrator privileges, they can go ahead and flip the setting that is intended to be set only by system administrators.”

Appearing to succeed is a valid form of undefined behavior.
Undefined means anything can happen, including: returning success, nothing, formatting your system drive, playing music, etc. So it is futile to ask “if the documentation says doing x results in undefined behavior, why does it appear to work?” Also, one cannot rely on a specific form of undefined behavior; relying on it implies the behavior is defined and contractual.

The registry is superior to config files.
Config and .ini files are deprecated in favor of the registry because:

  1. ini files do not support Unicode.
  2. Security is not granular (how do you restrict a group from editing a certain part of the file?)
  3. Atomicity issues with multiple threads/processes can lead to data loss on the flat file (the registry is a database).
  4. Denial of service issues – someone could just take an exclusive lock on your config to screw with you.
  5. ini can store strings only, so if you need to store binary you’d have to encode it as a string.
  6. Parsing files is slower, and writing settings would require loading and reparsing the whole file.
  7. Central administration via group policy would be exceedingly difficult compared to a registry.

Computer science: do not confuse the means with the ends.
It is often said that the purpose of garbage collection is to reclaim unused memory, but this is incorrect. The purpose of garbage collection is to simulate infinite memory. Reclamation is just the process by which this is achieved. For example, a null garbage collector is provably correct if you have more physical memory than your program needs. Similarly, allocating a value type on the stack is an implementation detail. It’s not a requirement that it is on the stack, only that it is always passed by value.

Open source isn’t a compatibility panacea.
You don’t get rid of compatibility problems by publishing source code; in fact that makes it easier to introduce compatibility issues because it exposes all the internal undocumented behaviors that aren’t contractual.

You can’t satisfy everyone about where to put advanced settings.
This is a specific case of not being able to delight all the people all the time when the audience is measured in billions. Most people prefer advanced settings handled in one of six ways (quoting Raymond):

  1. It’s okay if the setting is hidden behind a registry key. I know how to set it myself.
  2. I don’t want to mess with the registry. Put the setting in a configuration file that I pass to the installer.
  3. I don’t want to write a configuration file. The program should have an Advanced button that calls up a dialog which lets the user change the advanced setting.
  4. Every setting must be exposed in the user interface.
  5. Every setting must be exposed in the user interface by default. Don’t make me call up the extended context menu.
  6. The first time the user does X, show a dialog asking if they want to change the advanced setting.

Each item is approximately an order of magnitude harder than the last, and the final one is objectively user-hostile. Whatever you decide to implement, the other five groups will call you an idiot.

Cleanup must never fail.
Low level cleanup functions don’t have very many options for recovering from failure, so they must always succeed (they may succeed with errors, but that is not the same as failing).

Don’t use a global solution to a local problem.
Since an operating system is a shared playground, you can’t just run around changing global settings because that’s how you like it. If two applications with opposing preferences tried this, one or both of them would break; the correct approach is to change the setting in a local scope to avoid breaking other applications.

A platform must support broken apps; otherwise you’re just punishing the user.
Compatibility with apps, including incorrectly written apps, is crucial for platforms because users expect programs to work between versions of Windows. It is tempting to be a purist and declare that the apps should break, which will force the developers to fix them. In practice, the developers either don’t care, no longer exist, or don’t have the source code any more. Users will instead blame the platform and/or not upgrade.

Users hate it when they can’t cancel.
If you have a long running operation or some multi-step wizard, the user should be able to cancel. It should be clear what will and will not be saved or committed when they cancel.

Geopolitics is serious business.
It can be illegal to have a map with incorrect labels or borders (the correctness of which depends on who is looking), or to call disputed territories (such as Taiwan) countries in some places.

The USB stack is dumb because it’s dealing with dumb manufacturers.
Some USB devices have identical serial numbers which can cause non-deterministic behavior and arbitrary settings assignment, so Windows has no choice but to pretend every device is unique. This is why if you unplug/re-plug a device into a different USB port, Windows treats it like a new device and forgets all your settings. More generally, Windows could be smarter, but then things would break.

Avoid Polling
Polling keeps the polling code (and everything leading up to it) hot so it can’t be paged out, prevents the CPU from dropping into a lower power state, and wastes CPU time that other processes could use.
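The alternative is to block until something actually happens. A minimal sketch with Python’s `threading.Event`: the consumer sleeps in the kernel until the producer signals it, instead of spinning in a sleep-and-check loop.

```python
import threading

ready = threading.Event()
results = []

def consumer():
    # Blocks in the kernel until signaled -- no CPU burned, no hot loop
    # pinning pages in memory, and the CPU is free to enter a low-power state.
    ready.wait()
    results.append("done")

t = threading.Thread(target=consumer)
t.start()
ready.set()   # wake the consumer exactly when the work is ready
t.join()
```

The same pattern exists at every level of the stack: condition variables, `select`/`epoll` on sockets, and filesystem change notifications all replace a poll loop with a wakeup.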