New York state will monitor its use of AI after signing new bill into law

MT HANNACH

New York state government agencies will be required to conduct reviews and publish reports detailing how they use artificial intelligence software, under a new law signed by Gov. Kathy Hochul.

Hochul, a Democrat, signed the bill last week after it passed state lawmakers earlier this year.

The law requires state agencies to conduct evaluations of any software that uses algorithms, computer models, or AI techniques, then submit those evaluations to the governor and key legislative leaders and publish them online.

It also prohibits the use of AI in certain situations, such as an automated decision about whether a person receives unemployment benefits or child care assistance, unless the system is constantly monitored by a human.

WATCH | Canada invests in the Artificial Intelligence Security Institute:

Canada launches AI watchdog to oversee safe development and use of technology

Amid rapid global progress and deployment of artificial intelligence technologies, the federal government has invested millions to combine the minds of three existing institutes into one capable of keeping an eye on potential dangers ahead.

Law protects workers from having their hours limited due to AI

Civil servants are also protected under the law from having their hours or duties reduced because of AI, addressing a major concern raised by critics of generative AI.

State Sen. Kristen Gonzalez, a Democrat who sponsored the bill, called the law an important step toward putting guardrails around the way emerging technology is used by state government.

Experts have long called for greater regulation of generative AI as the technology becomes more widespread.

Some of the biggest concerns raised by critics, besides job security, include the security of personal information and the risk that AI could amplify misinformation, given its propensity to make up facts, repeat false statements and generate near-photorealistic images from prompts.

Several other states have implemented laws regulating AI, or are in the process of doing so. In May, Colorado introduced the Colorado AI Act, which sets out requirements for developers to avoid bias and discrimination in high-risk AI systems that make important decisions, taking effect in 2026. Many AI bills will also come into force in the new year in California after being signed into law in September, including one requiring large online platforms to identify and block misleading election-related content, and another requiring developers to be open about the datasets used to train their systems.

Canada does not have a federal regulatory framework for AI, although a proposed Artificial Intelligence and Data Act (AIDA) was included in Bill C-27. That bill remains under study, with no firm timeline for whether it will become law. Earlier this fall, the federal government also announced the launch of the Canadian Institute for Artificial Intelligence Safety, which aims to advance AI safety research and responsible development.

Alberta is working to develop its own regulations regarding artificial intelligence, the privacy commissioner said in March, focusing specifically on privacy issues such as deepfakes.
