Should A.I. be regulated? To what extent?

This morning I read that President Joe Biden has issued the U.S. government’s first-ever executive order on artificial intelligence (AI). This order mandates new safety assessments, provides equity and civil rights guidance, and initiates research into AI’s influence on the labor market.

The executive order is divided into eight primary components:

  1. Establishing new safety and security standards for AI.
  2. Protecting consumer privacy in AI applications.
  3. Advancing equity and civil rights, especially in preventing AI algorithms from perpetuating discrimination.
  4. Protecting consumers, especially in healthcare and education sectors.
  5. Supporting workers by analyzing the potential labor market implications of AI.
  6. Promoting innovation and competition in AI.
  7. Collaborating with international partners on global AI standards.
  8. Providing guidelines for federal agencies’ use and procurement of AI and expediting the hiring of AI-skilled workers.

While there are valid arguments on both sides, many experts believe a balanced approach is needed: regulating specific high-risk applications of AI (like autonomous vehicles or facial recognition) while allowing more freedom in lower-risk areas. Collaboration among AI experts, policymakers, industry leaders, and other stakeholders is crucial to creating effective, informed regulations.

What do you guys think?

I am not a fan of U.S. executive orders, which often strain the “separation of powers” principle that assigns lawmaking to the legislative branch, NOT the executive branch.


@kirkmahoneyphd
When INSTAGRAM came on the scene in 2010, people jumped on board, not knowing what its long-term effects would be. Fast forward 13 years, and knowing what we know now, I think it’s wise to be a bit concerned about any new technology that promises to revolutionize this or that.

Slow & steady is always best IMO.
