Articles Posted in Government

There are no mandatory data retention laws in the United States. See https://www.eff.org/issues/mandatory-data-retention; cf. Anne Cheung & Rolf H. Weber, Internet Governance and the Responsibility of Internet Service Providers, 26 Wis. Int’l L.J. 403 (2008); Christopher Soghoian, An End to Privacy Theater: Exposing and Discouraging Corporate Disclosure of User Data to the Government, 12 Minn. J.L. Sci. & Tech. 191, 209-214 (noting that some ISPs in Sweden have adopted zero data retention policies in response to customer demands, but that no major American ISP or telecommunications carrier has done so). As a result, service providers are likely to delete the relevant data from their database servers in the near future, so a plaintiff or petitioner who fails to take timely action may find that those servers no longer yield the requested basic subscriber information.

In addition, from an international perspective, organizations subject to the General Data Protection Regulation (“GDPR”) should know its requirements, which include that personal data be “kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed.” It’s important to note that some states, such as California and Virginia, have promulgated similar statutes: the California Privacy Rights Act (“CPRA”) and Virginia’s Consumer Data Protection Act (“CDPA”) contain the same or similar provisions in this respect.
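The storage-limitation principle described above can be operationalized as a retention-policy check. The sketch below is purely illustrative: the record categories, retention windows, and function names are hypothetical assumptions for demonstration, not values mandated by the GDPR, CPRA, or CDPA.

```python
from datetime import datetime, timedelta

# Hypothetical policy: each data category declares how long records may be
# kept for their stated purpose. These windows are illustrative assumptions.
RETENTION_PERIODS = {
    "subscriber_info": timedelta(days=365),  # assumed business-purpose window
    "access_logs": timedelta(days=90),       # assumed security-purpose window
}

def retention_expired(category: str, collected_at: datetime, now: datetime) -> bool:
    """Return True if a record has outlived its declared purpose window and
    should be deleted or anonymized under a storage-limitation policy."""
    period = RETENTION_PERIODS.get(category)
    if period is None:
        raise ValueError(f"no retention period declared for: {category}")
    return now - collected_at > period

collected = datetime(2024, 1, 1)
# A 100-day-old access log exceeds its 90-day window; subscriber info does not.
print(retention_expired("access_logs", collected, collected + timedelta(days=100)))      # True
print(retention_expired("subscriber_info", collected, collected + timedelta(days=100)))  # False
```

A check like this also illustrates the litigation point above: once a record's window lapses and it is purged, a subpoena served afterward will find nothing to produce.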

The courts have recognized that, absent a court-ordered subpoena, many ISPs that qualify as “cable operators” for purposes of state or federal law (e.g., 47 U.S.C. § 522) are effectively prohibited from disclosing the identities of putative defendants to a plaintiff. Digital Sin, Inc. v. Does 1-176, 279 F.R.D. 229 (S.D.N.Y. 2012). Thus, internet service providers should comply with a subpoena issued pursuant to the applicable rules. Plaintiffs can issue subpoenas requesting basic subscriber information from the service provider that holds the identifying information, and they should utilize any and all options to resolve a discovery dispute without judicial intervention. However, if the service provider fails or refuses to comply with the subpoena, then the plaintiff must seek a court order to obtain the necessary information (i.e., basic subscriber information) to identify the anonymous defendants. Our law firm regularly conducts investigations to prove that a specific account was used to access our client’s electronic devices, email accounts, or online storage devices.

Neurolaws and privacy rights are still in the development stages. Neurological advances have allowed scientists to connect electrodes to the brain for analytical procedures. These electrodes can be attached in a non-invasive manner to record brain data. That brain data can then be analyzed to help patients with brain disorders such as epilepsy, depression, Parkinson’s disease, or Alzheimer’s disease. Moreover, a person’s brain data may be analyzed to assess truthfulness and the existence of intent.

Neuroscientists have been able to use advanced non-invasive techniques to observe and analyze neurochemical changes in the human brain. They have access to several technologies, including PET, SPECT, MRI, fMRI, and EEG. Functional MRI (fMRI), for example, can measure the brain’s activity under resting and activated conditions. It can be used to identify, investigate, and monitor brain tumors, congenital anatomical abnormalities, trauma, strokes, and chronic nervous system disorders (e.g., multiple sclerosis).

Therefore, there is a potential for abuse of this new technology, which is why legal scholars are concerned about privacy rights. The right of privacy should be protected under the state and federal rules, including, but not limited to, the Health Insurance Portability and Accountability Act (“HIPAA”), which was passed to address medical privacy concerns. Scholars have argued that an individual must grant voluntary informed consent before his or her brain information may be used. In other words, this type of confidential medical information cannot be used without the person’s knowledge and permission. Courts have recognized a right of privacy grounded in the Bill of Rights under certain terms and conditions. In fact, the Fourth Amendment protects privacy rights against unreasonable searches and seizures by the government. Also, every state has promulgated similar privacy laws, which can be stricter than their federal counterparts. However, the question remains whether our thoughts belong to us.

According to an article published by BioMed Central, the four proposed rights of neurotechnology are as follows: (1) the right to cognitive liberty; (2) the right to mental privacy; (3) the right to mental integrity; and (4) the right to psychological continuity. We’ve discussed some of the legal and ethical issues related to neurotechnology laws in previous articles. Today, our plan is to discuss neurolaws and evaluate the related legal and ethical issues.

What are neuroscientists doing at this time?

Neurotechnology is on the verge of expansion, especially since there is more interest in the topic from the medical and technology sectors. Neuroscientists have long contemplated the possibility of connecting electronic devices such as electrodes to the brain and analyzing the resulting information. Now, it has become a tangible possibility due to advances in science and information technology that allow measurement of the brain’s structure, neural activity and connectivity, molecular composition, and genomic variation. These abilities have been made easier by exponential advances in computational power, artificial intelligence, machine learning, and the development of sophisticated databases. Neurotechnology may be used to predict a person’s dangerousness and likelihood of recidivism, evaluate intent and competence to stand trial, reveal biological mitigating factors that could explain criminal behavior, distinguish chronic pain from malingering, help recover lost memories, and differentiate between true and false memories.

Neurotechnology device manufacturers should take legal and ethical issues into consideration when implanting microchips into a patient’s body. There are two methods in use at this time. The first is the “non-invasive” method, where electrodes are placed on the head’s surface, such as in electrode caps, to pick up electrical fields from the brain; the electrodes do not penetrate the patient’s body. The second is the “invasive” method, where electrodes are placed inside the brain’s tissue and can be used to diagnose neurological diseases such as epilepsy.

Neurotechnology raises important legal issues when the human brain is connected to an external device such as a computer. For instance, one question is whether the external device changes the brain’s activity, and if so, what the potential consequences could be. Artificial intelligence will inevitably change the legal framework in the near future, especially as it is used within the human body. So, the issues of privacy and data security will always come up.

What are the major concerns?

The European Union has developed an artificial intelligence strategy focused on building a trustworthy environment for research, rules, and regulations. Its approach to this new technology is to implement a legal framework that addresses fundamental rights and safety risks. It plans to implement rules to address liability issues and to revise the sectoral safety legislation accordingly. The new framework is intended to give developers, deployers, and users clarity by intervening only where existing legislation does not cover the issues.

Artificial intelligence can be used in critical infrastructures such as manufacturing and transportation. It can be used in education and vocational training, for example in preparing, taking, and scoring exams. Robotic technologies are already being used in medical products that allow robot-assisted procedures. Law enforcement agencies can use this technology, but it should not interfere with the general public’s fundamental rights (e.g., free speech, religious beliefs, privacy). The state and federal courts can use it for assistance in evidence comparison and evaluation. Biometric technology can also be used in conjunction with artificial intelligence.

On April 21, 2021, the European Commission published a proposed set of laws to regulate the use of artificial intelligence within its jurisdiction. The Artificial Intelligence Act adopts a risk-based approach, organized as a pyramid of criticality, to assess risks. There are four risk levels: minimal risk, limited risk, high risk, and unacceptable risk. The proposed legislation would establish a new enforcement body, the European Artificial Intelligence Board (“EAIB”), consisting of national supervisors with oversight powers. The Artificial Intelligence Act would have an extraterritorial effect on all providers, marketers, or distributors whose products or services reach the European Union’s market. The regulation defines artificial intelligence as a collection of software development frameworks that include machine learning, expert and logic systems, and statistical methods.
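The four-tier pyramid described above can be pictured as a simple lookup from a use case to its risk tier and resulting obligations. The sketch below is a simplified illustration: the example use cases, their tier assignments, and the obligation labels are our own assumptions loosely based on categories discussed in the proposal, not an authoritative reading of the Act.

```python
from enum import IntEnum

# The four risk levels of the proposed Artificial Intelligence Act,
# ordered from least to most critical.
class RiskTier(IntEnum):
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    UNACCEPTABLE = 3

# Hypothetical tier assignments for illustration only.
USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "chatbot": RiskTier.LIMITED,
    "exam_scoring": RiskTier.HIGH,
    "critical_infrastructure": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

# Simplified summary of what each tier entails under the proposal.
OBLIGATIONS = {
    RiskTier.MINIMAL: "no new obligations",
    RiskTier.LIMITED: "transparency obligations",
    RiskTier.HIGH: "conformity assessment before market entry",
    RiskTier.UNACCEPTABLE: "prohibited",
}

def obligations_for(use_case: str) -> str:
    """Look up a use case's tier and return its obligations; unknown use
    cases default to the minimal-risk tier in this sketch."""
    return OBLIGATIONS[USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)]

print(obligations_for("exam_scoring"))    # conformity assessment before market entry
print(obligations_for("social_scoring"))  # prohibited
```

The point of the pyramid structure is that compliance burdens scale with criticality: the vast majority of systems sit at the bottom with few or no new duties, while a narrow band at the top is banned outright.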

Experts describe the “metaverse” as an alternate digital real-time existence offering a persistent, live, synchronous, and interoperable experience. It is where the real world meets the virtual world. It sounds like science fiction, but it is going to be reality soon. The question is what the legal issues are and what remedies are available now.

Intellectual Property Issues

First, can copyright licenses protect the use of a work? Copyright law should protect original works as described in the applicable statutes, such as the Copyright Act. In general, it protects original works of authorship, including literary, dramatic, musical, and artistic works, such as poetry, novels, movies, songs, computer software, and architecture.

Our law firm has received thousands of calls from actual or potential clients who were concerned about false, disparaging, or defamatory comments that were made about them on the internet. These comments were made by known or unknown individuals on websites, blogs, or forums such as Twitter, Facebook, Yelp, Reddit, or Instagram. The callers were obviously disconcerted and wanted to know the available legal remedies.

The federal Communications Decency Act (“CDA”) that is codified under 47 U.S.C. Section 230 has a direct effect on online defamatory comments that are made on social media platforms. This federal statute states that Congress finds the following:

(1) The rapidly developing array of Internet and other interactive computer services available to individual Americans represent an extraordinary advance in the availability of educational and informational resources to our citizens.

Doxing has become a major problem on the internet since it usually violates the victim’s privacy rights. It is a form of unwarranted harassment and stalking on the web: the culprit shares the victim’s personal information with the general public and encourages others to target the victim. Hence, the victim can feel exposed on the internet and left without legal protection.

In a typical case, the doxing party reveals personal information about a person or legal entity on the web. The doxing party is usually savvy at extracting personal information from third-party websites or at hacking electronic devices. This personal or private information is illegally obtained, in violation of the victim’s privacy, in an effort to annoy or harass him or her for no legitimate purpose. In other words, it is an act that constitutes “harassment” under applicable statutes such as California Code of Civil Procedure section 527.6.

There have been many doxing incidents in recent years. For example, abortion providers were doxed when their personal information was exposed to the general public; the court held that this conduct constituted an incitement to violence that was not protected by free speech rights. Hacktivists known as “Anonymous” have been responsible for exposing information about law enforcement agents in an effort to retaliate against investigations. They have also released information about the Ku Klux Klan in reference to the shooting of Michael Brown. In addition, there were misidentification incidents on Reddit in connection with the Boston Marathon bombing, where Sunil Tripathi was mistakenly identified as a suspect.

We have explored the nature and capabilities of augmented and virtual reality (“AR/VR”) technologies in previous articles. We have discussed how these technologies can collect, store, and share personal or confidential information with third parties. In most situations, the user information that is collected may be stored and shared for financial gain. The third-party service providers (e.g., Google, Microsoft, Facebook, Instagram, Twitter) that have access to these technologies may conduct data analysis to learn more about their users through behavioral marketing. The AR/VR technology manufacturers may implement some type of user surveillance for profit. However, these practices should be conducted only with the user’s knowledge and authorization.

Now, with that being said, users should be protected by state, federal, and international legislators and policymakers, who should consider implementing the proper safeguards within their laws to protect consumers. We have mentioned the main issues in previous articles, which include, but may not be limited to, data privacy and cybersecurity. Data privacy is a key component of any kind of software and hardware technology. There have been multiple cases where the manufacturer failed to implement user protection safeguards, and the Federal Trade Commission (“FTC”) has brought enforcement actions against manufacturers and other commercial organizations. For example, in In the Matter of Zoom Video Communications, Inc., Zoom was required to implement a robust information security program to settle allegations that it had engaged in a series of deceptive and unfair practices that undermined user security. In another case, LifeLock was forced to pay $100 million to settle contempt charges that it violated the terms of a federal court order requiring it to secure consumers’ personal information and prohibiting it from deceptive advertising. The FTC is charged with the task of prosecuting consumer fraud. Please refer to https://www.ftc.gov/enforcement/cases-proceedings/terms/249 for more information.

Regulatory uncertainty plays an important role in the future of AR/VR technology since many of the existing laws do not address every issue. Although the existing laws provide some guidance to device and application manufacturers, there are cognizable loopholes that should be addressed by state, federal, and international legislators. For example, there should be clarity on the scope of the tracking software implemented in the technology. There should also be a way to fully disclose the technology’s capabilities and to obtain user consent – i.e., device and application manufacturers should provide an opt-out option to avoid unfair, deceptive, or misleading advertising. It’s important to note that Section 5 of the FTC Act (codified under 15 U.S.C. §§ 41-58) authorizes the federal agency to bring legal actions. The term “unfair or deceptive acts or practices” includes acts or practices involving foreign commerce that: (i) cause or are likely to cause reasonably foreseeable injury within the United States; or (ii) involve material conduct occurring within the United States. In essence, the federal agency promotes transparency and disclosure in order to properly inform and protect consumers.

The technology that we use on a daily basis provides cognizable advantages and disadvantages. The advantages are great and have allowed the public to access a wide range of options. The disadvantages include, but are not limited to, security and privacy shortcomings. Technology operates to enhance a business model, idea, or operation, which is usually done by collecting and selling information for profit. These types of data collection and marketing activities have been heavily regulated by state and federal agencies in recent years. However, with every new technology, there will be new challenges.

Augmented and virtual reality technologies are no different from other technologies in that they are fully capable of being abused when they fall into the wrong hands. AR/VR software and hardware applications are designed to enhance user experiences by storing and sharing information across the network. This information may include personal or confidential information that would not otherwise be accessible to third parties. Nonetheless, the designers or manufacturers of these applications make it much easier to access and share information with third parties – e.g., marketing or advertising agencies – which pay for such access.

State and federal legislators should pay close attention to these technologies and their operating mechanisms so they can update existing laws and implement new laws that properly address consumer-related issues. If AR/VR technologies collect health or medical information, the Health Insurance Portability and Accountability Act (“HIPAA”) comes into play. Also, if AR/VR technologies collect a minor’s information, then the Children’s Online Privacy Protection Act (“COPPA”) would apply.