From PGP To ChatGPT: The 1990s are back, right?

What we can learn from the First Crypto War as we examine the impact of AI on the global tech policy landscape

On June 5, 1991, Phil Zimmermann released Pretty Good Privacy (PGP) encryption. Quite rapidly, it found its way onto the Internet and powered some of its most exciting innovations. Zimmermann had developed PGP to give individuals the means to securely encrypt emails and files. The release of PGP democratized access to encryption technology that had previously been under the tight control of governments and large corporations. In essence, PGP allowed individuals to act on their own to protect their privacy and sensitive information from unauthorized access. To Zimmermann and his supporters, it was the epitome of public interest tech, defined and shaped by democratic norms and ideals.
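
For readers who want a concrete sense of what PGP put into ordinary users' hands, here is a minimal sketch of the hybrid scheme PGP popularized: the message is encrypted with a one-time symmetric session key, and that session key is in turn encrypted with the recipient's public key. This is an illustration only, written against the modern Python cryptography library rather than Zimmermann's actual code, and the message text and key sizes are placeholders.

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Recipient generates a keypair; the private key never leaves their machine.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender: encrypt the message with a fresh one-time symmetric session key...
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(b"Meet at noon.")  # placeholder message

    # ...then "wrap" the session key using the recipient's public key.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Recipient: unwrap the session key with the private key, then read the message.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert Fernet(recovered_key).decrypt(ciphertext) == b"Meet at noon."

The point the rest of this story turns on is simple: only the holder of the private key can recover the message, and no government or corporation sits in between.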

Tech Innovation Challenges the Status Quo

The reaction from the national security community to the widespread availability of strong encryption through PGP was swift and severe. Government agencies, particularly those involved in law enforcement and intelligence gathering, argued that individuals' ability to send encrypted messages that authorities could not decrypt and read posed a significant challenge to national security and law enforcement efforts. They feared that it would hinder their ability to conduct surveillance and fight crime. They also claimed there was a clear danger that such technology could be used by bad actors, including terrorist organizations, organized crime, and child predators.

The core of the controversy surrounding PGP was the issue of export controls. At the time of PGP's release, the United States maintained strict export controls on cryptographic software, which was classified as a munition under the International Traffic in Arms Regulations. From a national security perspective, encryption's perceived "dual use" nature meant that it was treated like rocket launchers or grenades. By 1993, Zimmermann would find himself facing a federal investigation for potentially violating these export controls after PGP spread worldwide via the Internet. The case highlighted the challenges of applying traditional arms control measures to software, as well as the global nature of the Internet, which did not and still does not respect national borders.

Government Enters the Debate, Often with Mixed Messages

The U.S. Government, however, was initially motivated to keep these export controls in place to slow the international spread and uptake of advanced encryption methods. In 1993, the Clinton-Gore Administration’s public policy response came in the form of the Clipper Chip, a technology that encrypted communications but provided so-called “backdoor” access to unencrypted versions of that content for law enforcement and intelligence agencies. The Administration’s push for adoption of the Clipper Chip escalated the increasingly vocal policy debate into the so-called “First Crypto War.”  
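
To make the policy dispute concrete, here is a minimal sketch of the key-escrow idea behind the Clipper Chip, expressed in the same illustrative Python terms as the earlier example: the one-time session key protecting a message is also wrapped for an escrow agent, so the content can be recovered without the sender's or recipient's cooperation. The real Clipper Chip worked differently in the details (a classified cipher called Skipjack and a hardware "Law Enforcement Access Field"), so treat this purely as a conceptual illustration.

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # As before, a one-time session key protects the message itself.
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(b"Meet at noon.")  # placeholder message

    # Under key escrow, that session key is ALSO wrapped for an escrow agent's key.
    escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    wrapped_for_escrow = escrow_private.public_key().encrypt(session_key, oaep)

    # Whoever holds escrow_private (an agency with a warrant, or an attacker who
    # steals it) can recover the session key and read the plaintext.
    recovered = escrow_private.decrypt(wrapped_for_escrow, oaep)
    assert Fernet(recovered).decrypt(ciphertext) == b"Meet at noon."

That escrowed copy of the key is exactly the "backdoor" opponents objected to: it is only as trustworthy as the escrow holder, and only as secure as the systems protecting the escrowed keys.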

The debate that ensued on Capitol Hill highlighted the policy drawbacks of allowing greater access to strong encryption, but also the strong public interest use cases. Specifically, supporters cited the benefits of widely available secure encryption: protections for human rights and individual liberties, enhanced privacy, and economic growth. Indeed, corporate entities entered the policy fray, asserting the significant financial losses these export restrictions imposed on American tech companies and the risk to their ability to market and sell tech products and services abroad. With the availability of foreign encryption technologies worldwide growing rapidly, the justification for such export controls was being sharply called into question.

As Policymakers Learn More, Policy Choices Become Clearer

In response, Congress considered a range of legislative proposals, some aimed at regulating encryption standards and ensuring that encrypted communications remained accessible to law enforcement and intelligence agencies. Others explored provisions that would relax export controls and permit commercial encryption to develop without such requirements. A key vote was coming on whether to proceed with relaxing export controls or to keep encryption under tight control and ensure "backdoor" access for law enforcement and intelligence agencies. The House Commerce Committee sat at the center of the debate. In 1997, the legislation before the panel was the Security and Freedom through Encryption (SAFE) Act.

The debate was intense and drew upon controversial moments from our history. The FBI Director lobbied members of the panel directly. In what may now seem quaint, the debate was also bipartisan: there were Republicans and Democrats on each side. The amendment to safeguard the freedom to use strong encryption was advanced by then-Representative (now Senator) Ed Markey (D-MA) and Rep. Rick White (R-WA). On the opposing side was a counter-amendment from former New York City police officer Rep. Tom Manton (D-NY) and former FBI agent Rep. Mike Oxley (R-OH).

MSNBC tech reporter Brock Meeks covered the committee debate. In his report from 1997, one can see themes relevant to current AI policy discussions:

“Oxley and his troops rallied behind the scare-mongering rhetoric that now, after more than four years of debating encryption policy on Capitol Hill, anyone following this story can recite by rote. ‘Drug traffickers, child pornographers, terrorists and organized crime’ will laugh in the face of Congress and the American public if they are allowed access to unbreakable encryption, Oxley ranted. The FBI won’t be able to catch ‘the bad guys,’ he said, if we don’t allow them easy access to all coded messages.

“Markey put a fine edge on his argument about three hours into the debate, wresting away time allotted to another member. The Oxley approach is doomed, he said, for the simple fact that criminals aren't going to use any kind of government-approved breakable encryption. Further, mandating a trap door in all encryption products means that all Americans and businesses would be subject to ‘any college sophomore’ with ‘rudimentary’ hacking skills, able to exploit the built-in holes. Third, U.S. industry and jobs would certainly be lost because no foreign business is going to buy any encryption product with a back door stamped, ‘FBI Enter Here.’” — Brock Meeks, MSNBC News, 9/24/97

While Congress ultimately decided against building in a "backdoor" for the FBI and breaking encryption, it did pass other laws in the 1990s that acknowledged the shift from analog to digital technologies. For example, to assist law enforcement, it passed the Communications Assistance for Law Enforcement Act, providing access to digital communications pursuant to valid legal requests. And to address unbreakable encryption in the copyright context, it added affirmative defenses for circumventing digital rights management mechanisms consistent with "fair use" principles as part of the Digital Millennium Copyright Act in 1998.

It’s certainly hard to imagine the Internet today without strong encryption, especially for e-commerce purposes and the billions of dollars of investments and transactions that rely upon it.

Tech History Often Rhymes

If we listen closely, we can still hear echoes of the 1990s debate over PGP and encryption, particularly in contemporary discussions about technology, privacy, and national security in the context of artificial intelligence (AI). Similar to the early days of PGP, when OpenAI released ChatGPT into the wild on November 30, 2022, it set off concerns about risks to safety and national security and potential use by bad actors.

The intelligence community, and the largest AI companies, assert the "dual use" nature of the technology. Others, somewhat predictably, argue for licensing regimes. Indeed, within little over a year, AI's rapid development and global dissemination have prompted calls for strong regulations, and with them the issue of export controls has re-emerged. Once again, governments and policymakers will have to grapple with the dual-use nature of a technology: in this case, AI's potential to drive innovation, investment, and economic growth, while simultaneously posing non-trivial challenges to its safe and ethical use, along with concomitant national security risks.

In the coming months, as policy frameworks and AI governance are debated, not only will the largest AI companies be subject to scrutiny, but open source frameworks may become targets for regulations and controls as well. Open source holds great promise for AI. Such approaches typically foster collaboration and innovation, allowing developers worldwide to contribute to and improve upon existing technology, leading to more robust and efficient solutions. And by providing access to a wide range of tools and libraries, open-source AI could accelerate research and development, enabling both individuals and organizations to prototype and deploy models more quickly.

A key question is how to ensure that open source AI products can be developed and released safely and ethically. As with encryption widely distributed and accessible over the Internet, open-source AI technology may be accessed, modified, and distributed by anyone around the world, which will make export control mechanisms less effective. Finally, open-source AI could help promote transparency for policymakers and regulators, and thereby build trust, as the underlying code could be examined, verified, and modified by anyone, helping to make AI technologies more secure and less prone to bias.

Open source may pose competitive threats to the largest AI companies as well. Licensing regimes, export controls, and other regulations may be pushed by such behemoths for narrow competitive gain, building a moat around their first-mover advantage and protecting the billions they have invested in closed, proprietary models. Recently, Mozilla and the Center for Democracy & Technology led an impressive group of academics, researchers, think tanks, and advocacy groups in a letter to U.S. Commerce Secretary Gina Raimondo, stressing the public interest benefits that open source AI offers to society.

Past Debates Shed Light On Current AI Policy Challenges

In summary, the comparison between the cryptographic software debates of the 1990s and today’s AI discussions reveals several key themes:

  1. Both illustrate the ongoing struggle to balance innovation against security and the rights of individuals against the needs of the state, all in the context of a global technology that defies easy control.

  2. The advent of PGP encryption and the historic debates over privacy, encryption, and national security from the 90s certainly reverberate through today's discussions about AI and the need for regulation, licensing, or export controls. 

  3. There are, however, significant differences, especially the rate of technological change. The AI field is advancing rapidly, with potential applications and implications that are far broader, more complex, and potentially more transformational than those we could have conceived of in the era of encryption development.

Ultimately, it’s down to us. As AI continues to evolve, the lessons learned from the encryption debates of the 1990s may offer valuable insights. We must navigate both the real challenges and immense opportunities presented by AI. The future has once again arrived.
