Kislaya Prasad
Artificial intelligence is already improving our lives in many ways: automating routine tasks, helping diagnose medical conditions, and serving many of us as a voice-controlled virtual assistant. Still, there is a very real danger of misuse and unintended consequences, as we saw recently in Maryland with the filing of what is believed to be the first criminal case against someone for allegedly using AI to create a revenge recording targeting an employer. Consequently, governments here and around the world have been grappling with how best to regulate the technology.
Last year President Joe Biden issued an executive order on AI that established new standards for safety and security. The EO noted that AI heightens the incentive for developers to collect and exploit personal data, and it called on Congress to pass data privacy legislation. There appears to be new momentum on this front: Democratic Sen. Maria Cantwell and Republican U.S. Rep. Cathy McMorris Rodgers, both of Washington state, have just announced a bipartisan privacy bill, the American Privacy Rights Act.
Another feature of AI that has prompted calls for regulation is its potential to amplify bias and discrimination. There have been several well-publicized instances of bias in algorithms entrusted with highly consequential decisions (e.g., predicting how sick a patient is). Independent bias auditing has been proposed as one solution. Other proposals seek to protect consumers, patients, students, and workers in various ways.
While the EO on AI directed relevant government agencies to take action, Congress has not enacted significant new laws to regulate AI. To fill the gap, several bills have been introduced in state legislatures. According to the National Conference of State Legislatures, at least 40 states, along with Puerto Rico, the Virgin Islands, and Washington, D.C., introduced AI bills in the 2024 legislative session. The bills are too varied to summarize in full, but they fall into several important categories:
- Bills addressing criminal uses of AI, such as creating child pornography, using synthetic voice or image likenesses to commit fraud, and distributing “deepfakes” and other deceptive media to influence elections.
- Bills creating disclosure requirements when content is generated or decisions are reached using AI.
- Bills restricting how automated decision tools (such as those used in hiring) may be used.
- Bills providing protection against discrimination by AI. This last category includes bills that reiterate existing rights (removing ambiguities that arise when discriminatory decisions are made by algorithms rather than by people) and bills that require impact assessments or set standards for independent bias auditing. Criminal justice, employment, education, health, and insurance have been singled out for particular attention by state legislatures.
In the absence of federal AI regulation, concern is growing that we are headed toward a system of patchwork legislation coupled with weak enforcement. There is the additional danger of a race to the bottom if states try to attract business by promising a lax regulatory environment. An argument can be made for avoiding heavy-handed regulation. For instance, United Kingdom Prime Minister Rishi Sunak has asserted that “the U.K.’s answer is not to rush to regulate… we believe in innovation… and in any case, how can we write laws that make sense for something we don’t yet fully understand?”
While Sunak’s premise that AI is not sufficiently well understood seems flawed, the possibility that regulation will hamper innovation needs to be taken seriously; tech industry spokespeople make this point often. The tradeoff between protecting individual rights and hampering innovation was debated in the European Union before the EU settled in favor of its comprehensive Artificial Intelligence Act. Whether compliance costs would in fact be high enough to materially detract from innovation, however, remains very much an open question.
A national survey of 885 U.S. executives that I recently conducted sheds light on this question. I asked respondents about their perceptions of compliance costs and their support for specific AI regulation proposals. Respondents were directly involved in decisions about adopting and implementing AI within their companies, so they are likely to be knowledgeable about compliance costs.
Respondents were asked whether they supported regulations mandating disclosure of AI use and data collection policies, bias regulations mandating third-party auditing, and mandates requiring explanations for autonomous decisions. Support for regulation was surprisingly high: more than 70% of respondents either strongly or somewhat supported each type of regulation. This held even though a majority felt that complying with regulation would impose a moderate or significant resource challenge. For this group, the benefits of regulation clearly outweigh the compliance costs.
The bills being debated by the states are a guidepost for what is needed at the national level: disclosure of AI use, protections against bias and discrimination by algorithms, and oversight to ensure the safe and fair use of autonomous decision tools. This should be combined with strengthening existing laws to cover new phenomena, such as price fixing by algorithms. The proposed data privacy bill is a welcome first step. In addition to data privacy protections, it includes a section on civil rights and algorithms that addresses some forms of discrimination, and by setting national standards it would simplify compliance relative to a patchwork of state laws. Given the current political climate and calendar, however, the fate of this draft legislation is uncertain. There is every reason to wish it success, and from there to move forward to comprehensive national regulation of AI. Developments in AI are moving too fast to put off sensible regulation.
Kislaya Prasad is a research professor at the Robert H. Smith School of Business and academic director of its Center for Global Business.