On October 30, 2023, President Biden issued an
executive order on safe, secure, and trustworthy artificial intelligence.
The order purports to set a direction and standard by which America “leads the way” in AI by “seizing the promise and managing the risks of AI.” What does that mean? Considering the EU has had a committee working on this for two years, it’s an oddly timed fast follow. I called for standards in XXX. Businesses, developers, and creators will need help untangling the rhetoric. While agencies develop the directed standards, what are we to do?
It’s easy to shred this executive order, but leaders look for ways to bring people together during difficult times. Without turning this into a 20-page white paper, here are the highlights.
There are eight major sections, comprising 26 sub-sections in total:
- New Standards for AI Safety and Security
- Protecting Americans’ Privacy
- Advancing Equity and Civil Rights
- Standing Up for Consumers, Patients, and Students
- Supporting Workers
- Promoting Innovation and Competition
- Advancing American Leadership Abroad
- Ensuring Responsible and Effective Government Use of AI
The requirements and testing remind me of my Life Sciences and Pharmaceutical consulting practice. There are lessons learned and techniques there that can be lifted directly. More on that later.
“Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.”
Standards are required to define what counts as “most powerful,” as well as which tests apply. NIST (the National Institute of Standards and Technology) is the likely candidate to write them in the US. This section requires “critical information” to be shared and reviewed before commercial release. It is the most contentious of the sections, and it happens to be the first. There is a lot to define and unpack.
Ultimately, what’s required are guardrails that are well-documented, unambiguous, and consistent with ethical and business practices.
“Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.”
Another way to say this is to build and test for “do no harm.”
“Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques”
Yes. Enough said.
“Address algorithmic discrimination”
The “Advancing Equity and Civil Rights” section contains several variants of this statement. Ultimately, ensuring that the data and algorithms are free of bias is essential.
The list goes on with directional guidance. Addressing these directives will take time, resources, debate, and more. The document feels rushed, but there are solutions.