Artificial Intelligence Laws: An Evolving Landscape

Artificial Intelligence, or AI, has evolved rapidly over the past couple of decades. The uses of AI have extended to many segments of our society, and humans have benefitted from it in various ways. Lawyers and legal researchers have also found ways to harness the power of AI, which has enabled them to sort through months', or even years', worth of information in minutes. This is especially helpful when considering that, according to IBM, over 90% of all data was created in the last two years. However, AI in the law goes far beyond its practical uses by lawyers for information gathering and discovery. The rise of AI has presented a number of issues and questions in the legal field, especially involving products liability.

AI and Products Liability

Recently, companies have used AI both in the creation of products and in the products themselves. A major issue is who should be held liable in the event an AI product causes an accident or injury. There is debate over whether the programmer or the manufacturer of an AI product should be held liable, as well as over which legal standard should apply in cases involving AI. A main source of this uncertainty is the rapidity with which AI has developed over recent decades. The government and other regulatory bodies have had difficulty keeping pace with how quickly AI has evolved, leaving those who develop AI and manufacture products with it unsure of how AI will be regulated in the future. Adding to the problem, there are a number of different definitions used to describe AI, and it has a wide variety of uses. While companies have benefitted greatly from AI, they must also recognize the risks its use may create for them.

As for which standard should apply when an AI product causes injury, the current debate revolves around whether products and machines using AI should be held to the strict products liability standard or the negligence standard. With regard to AI used in automobiles, such as the Autopilot function in some Tesla models, some scholars have argued that strict liability should apply. Some car companies, such as Volvo and Mercedes, have even said that they will accept full liability if the AI in their cars causes an accident. However, nothing in the law currently makes this a legal obligation.

When it comes to AI used in un-customizable, conventional software, both courts and the Uniform Commercial Code (UCC) have treated the software as a product subject to strict liability. This type of liability focuses on defects in a product's manufacturing process, design, or warnings. However, when the product is customized for a specific user, courts have treated it as a service subject only to the negligence standard. Opponents of the strict liability standard have argued that its use will hinder the development of AI technology and its incorporation into our society. Supporters of the negligence standard have argued that it should apply when a company or producer can prove that its AI product is safer than a reasonable person would be in the particular situation. Until the courts are presented with this issue and come to a decision on it, it is difficult to predict which standard will be applied. While it is unclear what direction the law will take with AI, it is absolutely apparent that AI's presence in the law is here to stay.

At our law firm, we help clients navigate these legal obstacles. Please do not hesitate to contact our artificial intelligence attorneys with any questions.