
The National Vote is a current United States Issue on which people vote to share their opinions with their representatives.

Artificial Intelligence: To Regulate or Not to Regulate?

SUMMARY: On June 14, 2023, the European Parliament, the legislative branch of the European Union, passed a draft of the AI Act, proposing new restrictions on the riskiest issues posed by artificial intelligence (AI) technology. The proposed law would significantly curtail uses of facial recognition software and require AI platforms such as ChatGPT to disclose more about the data used to create their programs.

While the act’s passage is only a first step, with a final version of the law to be released later this year, the EU’s action prompted the question of whether the United States should consider regulating this new technology.

On June 20, Senate Majority Leader Chuck Schumer (D-NY) told the Center for Strategic & International Studies (CSIS) that lawmakers must acknowledge the changes AI will bring, noting that many “want to ignore” necessary regulations because of AI’s complexity.


The EU’s AI Act sets different rules for different risk levels. The new rules establish obligations for providers and users depending on the level of risk an artificial intelligence system poses. While many AI systems pose minimal risk, they still need to be assessed, according to an EU news release, from which the following information is adapted.

Unacceptable risk AI systems are those considered a threat to people and will be banned. They include:

  • Cognitive behavioral manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behavior in children
  • Social scoring: classifying people based on behavior, socio-economic status or personal characteristics
  • Real-time and remote biometric identification systems, such as facial recognition

High risk AI systems are those that negatively affect safety or fundamental rights. They will be divided into two categories:

1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.

2) AI systems falling into eight specific areas that will have to be registered in an EU database:

  • Biometric identification and categorization of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in legal interpretation and application of the law

Generative AI, like ChatGPT, would have to comply with transparency requirements:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training

Limited risk AI systems should comply with minimal transparency requirements that allow users to make informed decisions. Users should be made aware when they are interacting with AI; after interacting with an application, they can then decide whether they want to continue using it. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.

You can vote your choice on this Issue at: https://votelight.com/issues/167

VoteLight makes it easy to vote on Issues, based on where you live, or to create your own Issues for your city, county, state, or our country.