Neurolaws and privacy rights are still in the early stages of development. Neurological advances have allowed scientists to connect electrodes to the brain for analytical procedures. These electrodes can be attached in a non-invasive manner to record brain data. That data can then be analyzed to help patients with brain disorders such as epilepsy, depression, Parkinson’s disease, or Alzheimer’s disease. Moreover, a person’s brain data may be analyzed to determine truthfulness and the existence of intent.

Neuroscientists have been able to use advanced non-invasive techniques to observe and analyze neurochemical changes in the human brain. They have access to several technologies, including PET, SPECT, MRI, fMRI, and EEG. Functional MRI (fMRI), for example, can measure the brain’s activity under resting and activated conditions. It can be used to identify, investigate, and monitor brain tumors, congenital anatomical abnormalities, trauma, strokes, and chronic nervous system disorders (e.g., multiple sclerosis).

This new technology carries the potential for abuse, which is why legal scholars are concerned about privacy rights. The right of privacy should be protected under state and federal rules, including, but not limited to, the Health Insurance Portability and Accountability Act (“HIPAA”), which was passed to address medical privacy concerns. Scholars have argued that an individual must grant voluntary informed consent before his or her brain information can be used. In other words, this type of confidential medical information cannot be used without the person’s knowledge and permission. The Bill of Rights grants citizens a right of privacy under certain terms and conditions. In fact, the Fourth Amendment protects privacy rights against unreasonable searches and seizures by the government. Also, every state has promulgated similar privacy laws, which can be stricter than their federal counterparts. However, the question remains whether our thoughts belong to us.

According to an article published by BioMed Central, the four proposed rights related to neurotechnology are as follows: (1) the right to cognitive liberty; (2) the right to mental privacy; (3) the right to mental integrity; and (4) the right to psychological continuity. We’ve discussed some of the legal and ethical issues related to neurotechnology laws in previous articles. Today, our plan is to discuss neurolaws and evaluate the related legal and ethical issues.

What are neuroscientists doing at this time?

Neurotechnology is on the verge of expansion, especially since there is more interest in the topic from the medical and technology sectors. Neuroscientists have long thought about the possibility of connecting electronic devices such as electrodes to the brain and analyzing the information. Now, it has become a tangible possibility due to advances in science and information technology, which allow measurement of the brain’s structure, neural activity and connectivity, molecular composition, and genomic variation. These abilities have been made easier by exponential advances in computational power, artificial intelligence, machine learning, and the development of sophisticated databases. Neurotechnology may be used to predict a person’s dangerousness and likelihood of recidivism, evaluate intent, evaluate competence to stand trial, reveal biological mitigating factors that could explain criminal behavior, distinguish chronic pain from malingering, recover lost memory, and differentiate between true and false memories.

Neurotechnology device manufacturers should take legal and ethical issues into consideration when implanting microchips into a patient’s body. There are two methods being used at this time. First is the “non-invasive” method, where electrodes are placed on the head’s surface, such as electrode caps, and pick up electrical fields from the brain. So, the electrodes do not penetrate the patient’s body. Second is the “invasive” method, where the electrodes are placed inside the brain’s tissue, which can be used to diagnose neurological diseases such as epilepsy.
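To make the non-invasive method more concrete, the sketch below shows the kind of frequency analysis commonly applied to scalp-recorded electrical signals. It is only an illustration: the synthetic signal, the sampling rate, and the 8–12 Hz “alpha band” edges are assumptions for the example, not taken from any particular device.

```python
import cmath
import math
import random

def band_power_fraction(signal, fs, low, high):
    """Fraction of total spectral power falling between low and high Hz,
    computed with a naive discrete Fourier transform (illustrative only)."""
    n = len(signal)
    total, band = 0.0, 0.0
    for k in range(1, n // 2):          # skip the DC component
        f = k * fs / n                  # frequency of bin k in Hz
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        power = abs(coeff) ** 2
        total += power
        if low <= f <= high:
            band += power
    return band / total

# Synthetic "EEG": a 10 Hz (alpha-band) oscillation plus small noise.
fs = 128                                # assumed sampling rate in Hz
random.seed(0)
eeg = [math.sin(2 * math.pi * 10 * t / fs) + 0.05 * random.gauss(0, 1)
       for t in range(fs)]              # one second of data

alpha_fraction = band_power_fraction(eeg, fs, 8, 12)  # alpha band: 8-12 Hz
```

In this toy recording, nearly all of the power falls in the alpha band, which is the sort of signature that clinical analysis would then interpret.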

Neurotechnology raises important legal issues since the human brain is connected to an external device such as a computer. For instance, one question is whether the external device changes the brain’s activity, and if so, what the potential consequences could be. Artificial intelligence will inevitably change the legal framework in the near future, especially since it will be used in the human body. So, the issues of privacy and data security will always come up.

What are the major concerns?

We can all agree that the brain is one of the most important organs in our bodies. The human brain is in charge of biological and neurological processes such as memory, speech, perception, sleep, and emotion.

What is neurotechnology?

Neurotechnology is a scientific field that connects electronic devices with the nervous system. It poses interesting and complex ethical and legal issues since it can be used to create a so-called “interface” between the brain and computers. Neuralink is an example of this technology, which involves implanting a microchip into the brain. Brain-computer interfacing technology is arguably a positive step towards merging humans and artificial intelligence. Proponents argue that this technology could allow humans to overcome diseases and disorders such as Alzheimer’s disease, Parkinson’s disease, blindness, anxiety, depression, and insomnia. Opponents argue that this technology will be overly invasive and could create unanticipated complications.

The European Union has developed an artificial intelligence strategy to streamline research and regulation, focusing on building a trustworthy environment. The European Union’s approach to this new technology is to implement a legal framework that addresses fundamental rights and safety risks. It plans to implement rules to address liability issues, and it also plans to revise the sectoral safety legislation accordingly. The new framework gives developers, deployers, and users clarity by intervening only where existing legislation does not already cover the issues.

Artificial intelligence can be used in critical infrastructures such as manufacturing and transportation. This technology can be used in education and vocational training such as preparing, taking, and scoring exams. Robotic technologies are already being used in medical products that would allow robot-assisted medical procedures. Law enforcement agencies can use this technology but they should not interfere with the general public’s fundamental rights (e.g., free speech, religious beliefs, privacy). The state and federal courts can use it for assistance in evidence comparison and evaluation. Biometric technology can also be used in conjunction with artificial intelligence.

On April 21, 2021, the European Commission published a proposed set of laws to regulate the use of artificial intelligence in its jurisdictions. The Artificial Intelligence Act adopts a risk-based approach, a “pyramid of criticality,” to assess risks. There are four major risk levels: minimal risk, limited risk, high risk, and unacceptable risk. The proposed legislation creates a new enforcement body called the European Artificial Intelligence Board (“EAIB”), which consists of national supervisors with oversight power. The Artificial Intelligence Act will have an extraterritorial effect on all providers, marketers, or distributors whose products or services reach the European Union’s market. The regulation defines artificial intelligence broadly as software developed with techniques that include machine learning, expert and logic-based systems, and statistical methods.
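The four-tier pyramid of criticality can be pictured as a simple lookup from use case to obligations. This is a minimal sketch: the tier names come from the proposal, but the example use cases mapped to each tier and the one-line summaries of the obligations are illustrative assumptions, not the Act’s official lists.

```python
# Hypothetical sketch of the AI Act's risk-based "pyramid of criticality".
# Use-case assignments below are illustrative assumptions only.
USE_CASE_TIER = {
    "spam_filter": "minimal",          # assumed example of a low-stakes system
    "chatbot": "limited",              # assumed example with transparency duties
    "exam_scoring": "high",            # education is a sector cited as high risk
    "social_scoring": "unacceptable",  # an example of a prohibited practice
}

TIER_OBLIGATIONS = {
    "minimal": "no new obligations",
    "limited": "transparency requirements",
    "high": "conformity assessment before market entry",
    "unacceptable": "prohibited",
}

def obligations(use_case):
    """Return a one-line summary of obligations for a use case.

    Unknown use cases default to "high" here purely as a cautious
    illustration; the Act itself enumerates high-risk categories."""
    tier = USE_CASE_TIER.get(use_case, "high")
    return TIER_OBLIGATIONS[tier]
```

For example, `obligations("social_scoring")` returns `"prohibited"`, while `obligations("spam_filter")` returns `"no new obligations"`, reflecting the two ends of the pyramid.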

Artificial intelligence (“AI”) can be defined as a system that imitates human intelligence to perform similar tasks, improving itself based on the information it collects. Artificial intelligence can be used in various industries such as manufacturing, automobiles, education, medicine, and financial services. It can be used to detect and defend against cybersecurity intrusions, solve technical problems, reduce production-management tasks, and assess internal compliance for approved vendors. Artificial intelligence technology is affordable and can produce results faster than human effort alone.

The terms artificial intelligence, machine learning, neural networks, and deep learning are not interchangeable. Machine learning is a subset of artificial intelligence, and deep learning is a subset of machine learning. Neural networks form the backbone of deep learning algorithms and loosely imitate the human brain by using specialized algorithms. There are three main types of artificial intelligence: (1) Artificial Narrow Intelligence; (2) Artificial General Intelligence; and (3) Artificial Super Intelligence. For example, chatbots and virtual assistants (e.g., Alexa, Siri) are considered artificial narrow intelligence since they’re unable to incorporate human behaviors or interpret emotions, reactions, or tones.
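The “neural network” building block mentioned above can be sketched in a few lines: a single artificial neuron takes a weighted sum of its inputs and passes it through an activation function. The weights and inputs below are arbitrary illustrative values; real networks stack many such units in layers and learn the weights from data.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    squashed through a sigmoid activation. Layers of such units are
    the basic building blocks of deep learning models."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps z to (0, 1)

# Arbitrary example values: two inputs, two weights, one bias.
out = neuron([1.0, 0.0], [2.0, -1.0], -1.0)
```

Here `z = 2.0*1.0 + (-1.0)*0.0 - 1.0 = 1.0`, so the neuron outputs the sigmoid of 1.0, roughly 0.73.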

What are the potential cybersecurity issues?

Experts describe the “metaverse” as an alternate digital real-time existence offering a persistent, live, synchronous, and interoperable experience. It is where the real world meets the virtual world. It may sound like science fiction, but it’s going to be reality soon. The question is what the legal issues are and what remedies are available now.

Intellectual Property Issues

First, can copyright licenses protect the use of a work? Copyright laws protect original works as described in the applicable statutes, such as the Copyright Act. In general, the Copyright Act protects original works of authorship, including literary, dramatic, musical, and artistic works, such as poetry, novels, movies, songs, computer software, and architecture.

The term “metaverse” is a combination of “meta” and “universe.” This new concept allows users to interact with each other in virtual worlds and buy and sell names, goods/services, properties, and avatars. They can also organize, host, and attend events in virtual worlds.

Consumers will be using blockchain technologies and digital currencies. A blockchain is a shared database, or ledger, replicated across a network of computers that communicate over the internet. For example, Bitcoin uses blockchain technology to update its ledger. Several of these newer platforms are powered by blockchain technologies that use digital currencies and non-fungible tokens (“NFTs”), which allow a new type of decentralized digital asset to be designed, owned, and monetized. The NFT is a virtual asset that promotes the metaverse. It’s an intangible digital product that links ownership to unique physical or digital items (e.g., artistic works, real estate, music videos). In other words, each NFT cannot be replaced with another one because it’s unique. So, for example, if you own an NFT, your ownership is recorded on the blockchain and you can use it in electronic transactions. In fact, with NFTs, artifacts can be tokenized to create digital certificates of ownership for electronic transactions.
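The core idea of a blockchain ledger, each new record carrying a cryptographic fingerprint of the record before it, can be shown in a short sketch. This is a minimal toy model, not any real platform’s implementation; the “NFT transfer” strings are invented placeholders.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 fingerprint of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    """Append a block that records the previous block's hash, linking them."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev": prev})
    return chain

chain = []
add_block(chain, "mint NFT #1 to alice")      # hypothetical transaction
add_block(chain, "transfer NFT #1 to bob")    # hypothetical transaction

# The link holds: block 2 stores block 1's current hash.
valid = chain[1]["prev"] == block_hash(chain[0])
```

Because each block embeds its predecessor’s hash, altering an earlier record (say, changing who an NFT was minted to) changes that block’s hash and visibly breaks the chain, which is what makes tampering with the shared ledger detectable.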

What are the potential legal issues?

The internet is a combination of computer networks and electronic devices (e.g., smartphones, laptops) that can communicate with each other on various platforms. The internet has allowed people to immerse themselves in a world where they can create profiles on social media websites and freely interact with each other. It is certainly an intriguing phenomenon and an interesting part of today’s technological advancements. However, at this stage, technology companies are working on a different project called the “metaverse” which would combine the internet with augmented and virtual realities where the users can interact with each other as avatars.

What is metaverse?

The word is made up of the prefix “meta,” which means above or beyond, and the stem “verse,” a back-formation from “universe.” It’s generally used to describe the concept of a future iteration of the internet, made up of persistent, shared, three-dimensional virtual spaces linked into a perceived virtual universe. It may refer not only to virtual worlds, but to the internet as a whole, including the entire spectrum of augmented and virtual realities. In short, the metaverse is an immersive digital environment, transcending individual tech platforms, where people exist and interact as avatars in shared virtual spaces.

Our law firm has received thousands of calls from actual or potential clients who were concerned about false, disparaging, or defamatory comments that were made about them on the internet. These comments were made by known or unknown individuals on websites, blogs, or forums such as Twitter, Facebook, Yelp, Reddit, or Instagram. The callers were obviously disconcerted and wanted to know the available legal remedies.

The federal Communications Decency Act (“CDA”), which is codified under 47 U.S.C. Section 230, has a direct effect on online defamatory comments made on social media platforms. This federal statute states that Congress finds the following:

(1) The rapidly developing array of Internet and other interactive computer services available to individual Americans represent an extraordinary advance in the availability of educational and informational resources to our citizens.