Deepfake rules and regulations have been developing in recent times. The term “deepfake” combines “deep learning” and “fake,” and refers to the use of artificial intelligence technology to create fake pictures or videos. The creator can use special software to produce the picture or video by face swapping. This has become a problem because it can violate the victim’s privacy rights and damage his or her public image.
A false image can often be detected by conducting a reverse-image search: if the fake was made from another image on the web, the original version should turn up. A fake may also be detected by close evaluation; for example, the person in a fake video may not blink or show normal facial expressions. Detection can also rely on magnification or physiological analysis. A simplified illustration of the reverse-image-search idea appears in the sketch below.
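For readers curious about how a reverse-image comparison works in practice, the following is a minimal sketch, not a production tool. It assumes the Pillow imaging library is installed, and the file names are hypothetical placeholders. It uses a simple “average hash”: if a suspect image was derived from an original found on the web, their perceptual hashes should be very close.

```python
# Illustrative sketch of the reverse-image-search idea using a perceptual
# "average hash". Assumes the Pillow library (pip install Pillow); the file
# paths below are hypothetical placeholders.

from PIL import Image


def average_hash(path, hash_size=8):
    """Compute a simple perceptual (average) hash of an image."""
    # Shrink to a tiny grayscale thumbnail so only coarse structure remains.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the average.
    return [1 if p > avg else 0 for p in pixels]


def hamming_distance(hash_a, hash_b):
    """Count differing bits; a small distance suggests the images are related."""
    return sum(a != b for a, b in zip(hash_a, hash_b))


if __name__ == "__main__":
    suspect = average_hash("suspect_image.jpg")          # hypothetical path
    candidate = average_hash("original_from_web.jpg")    # hypothetical path
    distance = hamming_distance(suspect, candidate)
    # Roughly 0-10 differing bits (out of 64) suggests a shared source image.
    print(f"Hamming distance: {distance} / 64")
```

Commercial reverse-image search services and forensic tools are far more sophisticated, but the underlying intuition is the same: a manipulated copy usually remains measurably similar to its source.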
The deepfake’s creator or publisher can violate the victim’s legal rights. In most cases, it raises an issue regarding privacy rights. In California, false light is a legal cause of action that the plaintiff can bring against a defendant whose improper portrayal of the plaintiff left him or her embarrassed or offended. The plaintiff may argue that any reasonable person in the same or similar circumstances would be embarrassed or offended. The plaintiff may bring a cause of action for defamation against the creator and argue that the false factual statement – i.e., the picture or video – was not privileged and had a tendency to damage his or her reputation in the community. The plaintiff may also file a legal action for misappropriation or violation of the right of publicity if the picture or video was utilized to promote a product or service. And if the plaintiff suffers emotional distress (e.g., depression, anxiety, insomnia), then he or she may also bring a cause of action for intentional or negligent infliction of emotional distress. See https://www.justia.com/trials-litigation/docs/caci/1600/1600 for more information.