The lessons of Y2K, 20 years later

[Photo: Afghan coders practice at the Code to Inspire computer training center in Herat, Afghanistan, April 24, 2018. REUTERS/Mohammad Shoib]

As people plan for this year’s New Year’s Eve celebration, chances are that most will not be stocking up on canned goods, taking out large amounts of cash from the bank or purchasing a backup generator. But 20 years ago, excitement for the start of the year 2000 was mixed with fear that computer systems might fail globally, with potentially apocalyptic consequences. Year dates had been entered as two digits – e.g., “99” – and the rollover to “00” might cause catastrophic failures: Would the lights stay on? Would the banks fail? Would planes fall out of the sky? These questions – the Y2K bug – overshadowed the transition to the new millennium.

But as The Washington Post announced in a Jan. 1 headline: “Y2K bug has no bite so far.” The millennium bug had been squashed.

To the extent that Y2K is remembered today, it is largely as something of a joke: a massive techno-panic stoked by the media that ultimately amounted to nothing. Yet avoiding catastrophe was the result of serious hard work. As John Koskinen, the chairman of President Bill Clinton’s Council on Y2K, testified in the early weeks of 2000, “I don’t know of a single person working on Y2K who thinks that they did not confront and avoid a major risk of systemic failure.”

That the danger was averted was thanks to a group of experts recognizing a problem, bringing it to the attention of those in power – and those in power actually listening to the experts. Twenty years later, we are able to look at Y2K with derision not because Y2K was a hoax, but because concerned people took the threat seriously and did something about it – a lesson for addressing myriad problems today.

The origins of the Y2K problem seem almost quaint at a time when people regularly carry hundreds of gigabytes of storage in their pockets. But in the 1960s, computer memory was limited and expensive. Therefore, programmers chose to represent dates using six characters rather than eight, meaning the date Oct. 22, 1965, was rendered as 102265. This format worked for decades, saving valuable memory and speeding up processing. Insofar as it represented a potential risk, it was a problem for the future. And thus, even though computer scientist Robert Bemer tried to sound the alarm in 1971, it seemed like there was plenty of time to fix the two-digit date problem.
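
To make the trade-off concrete, here is a minimal sketch of that six-character encoding (in Python, purely as a modern illustration; the systems in question were typically written in older languages such as COBOL):

    from datetime import date

    def encode_mmddyy(d: date) -> str:
        """Encode a date as six characters (MMDDYY),
        keeping only the last two digits of the year."""
        return f"{d.month:02d}{d.day:02d}{d.year % 100:02d}"

    print(encode_mmddyy(date(1965, 10, 22)))  # "102265" -- the example above
    print(encode_mmddyy(date(2000, 1, 1)))    # "010100" -- the century is now ambiguous

Two characters saved per date looks trivial now, but multiplied across millions of records it was a meaningful economy at 1960s memory prices.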

Computers assumed that the first two digits of any year were “19” – and they began running into problems once they started encountering dates after Dec. 31, 1999. These problems ranged from the seemingly comical (a 104-year-old woman being told to report to kindergarten), to the frustrating (credit cards with expiration dates of “00” being denied as having expired in 1900), to the potentially catastrophic (the risk that essential infrastructure could fail).
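
A hedged sketch of how that assumption breaks (again in Python, with invented helper names; the birth year is an assumption consistent with the article’s 104-year-old): a program that pins every two-digit year to the 1900s reads the rollover year “00” as 1900, so ages and expiration dates come out wrong:

    def year_from_two_digits(yy: int) -> int:
        """The flawed assumption: every two-digit year belongs to the 1900s."""
        return 1900 + yy

    # Age calculation after the rollover: the stored year "00" decodes to 1900.
    birth_year = 1896                       # assumed birth year for a 104-year-old in 2000
    current_year = year_from_two_digits(0)  # 2000 was stored as "00" -> read as 1900
    print(current_year - birth_year)        # 4 -- kindergarten age, not 104

    # Expiry check: "00" (meaning 2000) compares as earlier than "99" (meaning 1999).
    expiry_yy, current_yy = 0, 99
    print(expiry_yy < current_yy)           # True -- the card is wrongly rejected as expired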

While computers had made important inroads into the business and government sectors in the preceding decades, the 1990s saw personal computer use increasing dramatically. The ’90s saw the launches of the first graphical web browser (Mosaic), Microsoft’s Windows 95, Apple’s first iMac and iconic video games like Wolfenstein 3D and Tomb Raider. As a crisis involving computers, Y2K hit at the very moment when more and more people were coming to see computers as integral to their daily lives.

By the time Peter de Jager published his attention-getting article “Doomsday 2000” in Computerworld in 1993, the Y2K crisis was no longer some far-off threat – and people in power took the problem seriously. Writing to Clinton in July 1996, Sen. Daniel Patrick Moynihan warned that a study by the Congressional Research Service had confirmed that “each line of computer code needs to be analyzed and either passed on or be rewritten.” Moynihan offered a provocative warning: “The computer has been a blessing; if we don’t act quickly, however, it could become the curse of the age.”

Luckily, in the less than four years between Moynihan’s letter and Dec. 31, 1999, a great deal was done. Members of both parties in Congress worked closely together (even in the midst of an impeachment) to monitor the compliance efforts being made by government and industry – devoting particular attention to the work being done to ensure that utilities, as well as the financial and health-care sectors, would be ready. Clinton launched his own Council on Y2K, headed by Koskinen, to coordinate efforts in the United States, while the World Bank-backed International Y2K Cooperation Center worked to help other countries prepare. In a 1998 speech, Clinton said that “if we act properly, we won’t look back on this as a headache, sort of the last failed challenge of the 20th century. It will be the first challenge of the 21st century successfully met.”

Nevertheless, there was widespread concern that something would go wrong. With barely a year until the rollover, a poll conducted by Time magazine and CNN found that 59 percent of respondents were either “somewhat” or “very” concerned about the “Y2K bug problem.” And those foretelling doom were not found only on the societal fringe. In its “100 Day Report,” issued on Sept. 22, 1999, the Senate’s Special Committee on the Year 2000 Technology Problem applauded the progress that had been made but tempered that praise with lingering concerns.

A legion of programmers and IT professionals squashed the millennium bug by checking and rewriting millions, if not billions, of lines of code. Best practices and solutions were liberally shared between the government and businesses, and carefully constructed contingency plans were put in place just in case the bug was still lurking in some systems. The Y2K problem was solved, and the world woke up on Jan. 1 to a normal, non-apocalyptic day. But instead of registering this outcome as a challenge successfully met, the public experienced the nonevent of Y2K as a joke or even a hoax.
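
The article does not describe the fixes themselves, but one widely documented remediation technique of the era was “windowing”: rather than widening every stored date to four digits, a program interprets two-digit years relative to a pivot. A minimal sketch in Python, assuming a pivot of 50:

    def windowed_year(yy: int, pivot: int = 50) -> int:
        """Interpret a two-digit year with a sliding window:
        values below the pivot decode to 20xx, the rest to 19xx."""
        return 2000 + yy if yy < pivot else 1900 + yy

    print(windowed_year(99))  # 1999
    print(windowed_year(0))   # 2000 -- the rollover no longer collapses to 1900
    print(windowed_year(65))  # 1965 -- older stored dates still decode correctly

Windowing was cheaper than expanding every record, though it deferred the ambiguity rather than eliminating it.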

Two flawed understandings drove the idea that Y2K was all just an expensive hoax: that the anticipated and much-hyped international failures did not happen, and that nothing went wrong. In reality, Y2K remediation efforts revealed that many other countries were not as computer-reliant as the United States, which reduced the chance of failures. Furthermore, thanks to the sharing of information and expertise, other nations were able to benefit quickly from solutions to Y2K that had been devised in the United States.

And while many believed that “nothing happened,” there were actually hundreds of Y2K-related incidents. These included problems at more than a dozen nuclear power plants, delays in millions of dollars of Medicare payments, ATM issues worldwide and problems with the Department of Defense’s satellite-based intelligence system. That problems were fixed quickly is largely attributable to the small army of programmers who spent the first hours of the year 2000 monitoring sensitive systems.

Y2K was a technological problem, but it revealed that as societies became heavily reliant on complex computerized systems, technological problems became a matter of concern for everyone. Faced with a potentially catastrophic problem, experts, businesses and the government mobilized the resources necessary to mitigate the risks before they could trigger a disaster. Most estimates suggest that $100 billion was spent in the United States, $8.5 billion of that by the federal government, to squash the millennium bug, and though it is impossible to say that every cent was well spent, that amount is still dwarfed by the estimated cost of doing nothing.

Twenty years after the successful navigation of the Y2K crisis, few of us worry that a date rollover will cause the world’s computer systems to crash. We do, however, worry about the pervasiveness of misinformation online, the spread of corporate surveillance, the ways in which racist and misogynistic biases are reproduced in algorithms, massive data breaches and a deluge of other problems that have been exacerbated as computerized devices have become ubiquitous. Y2K revealed the ways in which we expose ourselves to new risks as our lives and societies become entangled with complex computer systems whose inner workings we often fail to understand – a problem that is still very much with us today.

But it also showed that addressing these challenges is possible when there is close cooperation among researchers, government and businesses, both nationally and internationally. People rose to the challenge in 2000. That’s not something to laugh at; it’s something to celebrate.

– – –

Zachary Loeb is a Ph.D. candidate at the University of Pennsylvania; he works on the history of technology, disasters and doom-saying. He is writing a dissertation on Y2K.
