Articles Posted in Internet Law

Following from libel-proof individuals to the realm of Twitter, and the “Wild West” approach toward online statements, comes an interesting idea.  It is a given that most people communicate anonymously on the web.  So, if a person is the victim of libel, how can he or she recover? The online service provider technically didn’t publish the statement; it only acted as the forum. The person who published the statement cannot easily be found because it was posted under a pseudonym.  So, what if the online service provider could be forced to give up identifying information (e.g., name, address, telephone number, email, IP address) about the commenting individual? How much is that anonymity worth? Is there a way to actually engage in defamation and get away with it?

How does anonymity make things harder?

Naturally, an unknown person is difficult to sue in court, and the amount of damages he or she could pay is difficult to ascertain. While there are rules allowing a plaintiff to sue without knowing an individual’s identity (which is common in some cases), this still adds the difficulty of discovering the identity of the “Doe Defendants.”

On the Internet, individuals go out of their way to rib each other, or to mock certain celebrities or infamous individuals. This has pushed libel and slander laws to expand toward online activities. Yet, depending on the person’s history, a defamation claim may be borderline impossible.  If defamation is harm to one’s reputation, then theoretically it should be impossible to harm an irredeemable reputation.  This concept is known as being libel proof, i.e., describing a person who cannot be defamed any further.  So, can a completely libel-proof person exist? How could someone argue that an individual is libel proof? How might this affect online communications?

What is libel proof?

Libel proof means, quite simply, that a person cannot be defamed any further.  Generally, to even satisfy libel, a statement must be an unprivileged false written statement published to third parties (compared to slander, which is an unprivileged false oral statement published to third parties).  Even then, defamatory statements are judged differently in order to protect free speech interests.

This month, we’re looking at various constitutional issues and tangential actions. Among these is a recent hot-button issue regarding the purpose of “freedom of speech” online. From fake news to political speech on websites, the questions of “what is allowed” and “what should be allowed” are still being raised.  So, what can a website do to maintain the balance between free speech rights and acceptable community standards? Is there any responsibility to allow negative views? What is the risk, if any, of censorship?

Freedom of speech online

In the wake of 2016, there is a new question for online service providers that allow people to express themselves: should they act as gatekeepers, or grant carte blanche to all users?  Most notably, there have been the Facebook “fake news” complaints, as well as the actions of a Reddit executive toward supporters of Donald Trump. In the case of Facebook, there were complaints both that it was discriminatory in not showing stories from every end of the political spectrum, and that it was negligent in not taking action to curtail “fake news” and its influence.  For Reddit, an executive had edited statements by Trump supporters, redirecting comments critical of him to the individuals who were managing the Reddit group.

As we close out the year and enjoy the new technology from the holiday season, one piece of technology stands out as a forerunner.  It is something we’ve dreamed and written about to the point that it is a staple of science fiction: an artificial intelligence that anticipates and responds to a person’s desires and questions. This is the new technology, the “digital assistant,” such as Alexa, Siri, Cortana, and Google Home. These digital assistants manage to carry on conversations and answer questions.  How do these digital assistants think? How do they change and learn to respond properly? Does the way these digital assistants work put data at risk?

How do digital assistants work?

Much like wearable technology, the digital assistant relies on “chatter” between itself and another computer connected through the internet.  However, the digital assistant is considerably more reliant on that chatter. While digital assistants may have a few pre-programmed responses, they mostly depend on internet access to perform their duties: Alexa cannot work without WiFi, and Siri cannot work without a decent data connection. When a person asks a digital assistant a question, the question is essentially pushed from the receiving device to the cloud, where it is either answered or translated into instructions for the device to follow. This may also entail, akin to a search history, a sort of assistant database where a person’s recorded voice may be kept. In the case of the Amazon Echo, at least, the database also stores a user’s feedback on how well Alexa did her job, allowing the assistant to grow more efficient, learn slang, and pick up on verbal habits that more closely resemble human conversation.
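The round trip described above, from local device to cloud and back, can be sketched in a toy Python model. Everything here is illustrative: the class names, the canned answers, and the “voice history” are invented stand-ins for the proprietary systems that Alexa or Siri actually use.

```python
# Toy model of the device-to-cloud round trip described above.
# All names and behaviors here are invented stand-ins; real assistants
# use proprietary speech recognition and cloud services.

class CloudService:
    """Stand-in for the provider's cloud: resolves queries and retains them."""
    def __init__(self):
        self.voice_history = []  # analogous to the "assistant database" of recordings

    def resolve(self, query):
        self.voice_history.append(query)  # the recorded query may be kept
        canned = {"what time is it": "It is 3:00 PM."}
        return canned.get(query, "Sorry, I don't know that one.")

class DeviceAssistant:
    """Stand-in for the local device: little on-board logic, cloud required."""
    def __init__(self, cloud):
        self.cloud = cloud  # None models a device with no WiFi or data

    def ask(self, question):
        if self.cloud is None:  # no connection: the assistant cannot answer
            return "I can't reach the network."
        return self.cloud.resolve(question)

cloud = CloudService()
online = DeviceAssistant(cloud)
offline = DeviceAssistant(None)
print(online.ask("what time is it"))    # resolved in the cloud: "It is 3:00 PM."
print(offline.ask("what time is it"))   # fails without a connection
print(len(cloud.voice_history))         # the spoken query was retained server-side
```

The point of the sketch is the data flow: the device itself answers nothing, and the query, like a search history, persists on the provider’s side.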

Wearable devices become more popular as the holiday season approaches. Among various new technologies, there’s a focus on wearable devices, which include items like smartwatches, fitness trackers, and other electronic accessories that can help make life easier.  However, with that come risks to privacy and security.  What do you need to know about your wearable device? What are the limitations of wearable devices? How secure are they, and who has access to, or owns, the stored data?

What type of data do wearable devices collect?

When it comes to wearable devices, it is important to realize that the most prevalent data they store tend to be personal, health, and fitness-related information.  For instance, a wearable device may track steps, take a pulse, and measure heart rate, and in the case of the newer Apple Watch Series 2, it can record your geographic position. However, when it comes to other data, the wearable device’s abilities are limited for the time being.

In recent years, we have all heard the expression before, but how does someone really “break the Internet?” Recently, an incident arose in which a large network of electronic devices joined together, resulting in major interference with online businesses and services. Amazon, Netflix, and Yahoo were temporarily hobbled due to various flaws in the Internet of Things. These flaws allowed individuals to create what’s known as a botnet and launch a massive DDoS (distributed denial-of-service) attack that effectively shut down services.  So, how would we prevent a similar incident from occurring? Should you be concerned about your smart devices? What about your websites and online services?

How did the Internet of Things become weaponized?

As it stands, the Internet of Things, which comprises smart devices that connect online for the convenience of individuals, became weaponized against service providers and was turned into a “botnet.”  Effectively, malware was downloaded onto these smart devices, prompting them to send requests to certain websites. When those websites became overwhelmed by the requests, they crashed or became generally unavailable to users.  One might wonder how this was possible, but the real answer was a lack of knowledge, training, and security. Unlike regular computers, tablets, and cellphones, smart devices do not always have the capability to receive security updates. Moreover, even for those devices that might be on a more secure network, the Internet of Things still entails being connected online, which makes them vulnerable to more pinpointed attacks.  From there, the controller of the botnet can use the Internet of Things to launch a DDoS attack and crash a network.
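The mechanics can be illustrated with a toy simulation (the capacity figure and names are invented for this sketch): a server can handle only so many simultaneous requests, and once a botnet of compromised devices all send requests at once, legitimate users see an outage.

```python
# Toy simulation of a DDoS attack's effect: a flood of requests from
# compromised devices exhausts a server's capacity. The capacity figure
# and class names are invented for illustration only.

class Server:
    def __init__(self, capacity):
        self.capacity = capacity  # simultaneous requests the server can serve

    def handle_burst(self, num_requests):
        # A burst beyond capacity leaves the site unavailable to everyone.
        if num_requests > self.capacity:
            return "unavailable"
        return "ok"

site = Server(capacity=1_000)
normal_traffic = 50
botnet_size = 100_000  # compromised smart devices, one request each
print(site.handle_burst(normal_traffic))  # ordinary load: "ok"
print(site.handle_burst(botnet_size))     # botnet flood: "unavailable"
```

The asymmetry is the whole attack: each device contributes a trivial amount of traffic, but the aggregate dwarfs what any single target can absorb.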

We have discussed protecting someone’s image using the right of publicity, the right to privacy, and the privacy laws that protect biometrics. Yet, images are first and foremost images, and certain rights exist specifically for their protection, beginning with copyright law. An ongoing trend is how individuals, famous and otherwise, use the Digital Millennium Copyright Act (DMCA) to demand takedowns and manage photographs. While this remains moderately controversial, it raises several questions. For example, what is required to use these claims to protect images? Why might someone use the DMCA takedown demand instead of one of the other methods of protecting images? How is this controversial if it allows individuals to protect their privacy?

How would the DMCA work?

The DMCA allows individuals to issue “takedown” notices to internet hosting services and websites to remove copyrighted materials. The first hurdle is to hold actual copyright over the photograph. To be eligible, the work must be a copyrightable type of work (e.g., photograph, sound recording, written word), created by a human author with at least a minimal amount of originality and creativity. In most cases, this might include a “selfie” or a similar picture that you have taken yourself. It’s worth noting that the DMCA only applies within the United States, and that the other elements needed to register a copyright, like creativity, are relatively easy to meet.

We’ve discussed the nature of this before, but the EU-US Privacy Shield has now gone into full effect. This program stems from restrictions on the ability of U.S. commercial entities to do business in the European Union, due to the U.S. government’s ability to use international businesses to improperly conduct surveillance on citizens within the European Union.  In response, the European Union removed the blanket ability of U.S. companies to do business with European Union members under the Safe Harbor provision. The Safe Harbor provision was loosely drafted in its self-certification requirements, prompting the switch to today’s Privacy Shield. As it stands now, the program is still in its fledgling stages, with registrations having begun on August 1, 2016.  These registrations open up a murky area of international commerce. So, how can one join the Privacy Shield? Is your organization even eligible? What might happen if an organization refuses to participate?

How can you join the Privacy Shield?

The Privacy Shield is open to any business that is subject to regulation by the Federal Trade Commission (FTC) or the Department of Transportation (DOT).  In general, conducting business and affecting commerce would qualify an entity under this regulation, although there are some exceptions, such as financial institutions, labor associations, and non-profit organizations, which may not qualify.  After meeting the base qualifications, an entity may then “self-certify” by drawing up a plan that meets the basic requirements of the EU-US Privacy Shield.  This would include measures to protect the data of European customers and employees stationed in Europe, even after ending participation in the Privacy Shield.

The internet, with its “remix culture,” often appropriates images and videos to create new works. Yet, this also extends to personal images. Be it “Bad Luck Brian,” “Overly Attached Girlfriend,” or some other exploitable image, how can one protect his or her personal image from being remixed and exploited for financial gain?  This question also appears outside of the internet, particularly with book covers and music videos. How might one protect his or her own face and body? What is the best method of protecting one’s image?  Is this related to the right of privacy or the right of publicity?

How could a person protect his/her own face and image?

Outside of simply preventing your image from being published online by avoiding social media, preventing photos from being taken, or spending your days behind a mask, the only way to protect your image arises after an incident has occurred online.  The right over one’s own image boils down to privacy claims, with three main types of laws protecting it. First is the right to privacy. Second is biometric privacy law.  Third is the right of publicity. Of the three, biometric privacy is the newest, with statutes in Illinois and Texas and minor provisions drafted in Iowa, Nebraska, North Carolina, Oregon, Wisconsin, Wyoming, and New York.  The idea of a biometric privacy law is that it creates a “privacy right” over an individual’s biometric features (e.g., fingerprints, retina or iris scans). Yet, ultimately, this would only serve to protect one from larger entities.  To that point, the law in Texas lacks a private right of action but permits the State Attorney General to instigate legal action.

In the current news is another emerging technology: augmented reality. In general, augmented reality (“AR”) uses technology to artificially alter the reality a person experiences. For example, this could be a pair of glasses that shows a person’s contact information when his or her face is seen, or a mobile app, like Pokemon Go, that interacts with your location and surroundings to create aspects of the game. Pokemon Go, the mobile app juggernaut that has recently emerged into the market, has brought AR to a massive scale not seen before. Yet, this new application has created unique legal questions. What can we do with this experience that encourages people to travel all over? How might one protect his or her property from players? Is there any way to stop Niantic, the creator of the game, from using your property in the game?

How does Pokemon Go work?

Before addressing the legal problems that arise from the game, it’s important to know how the game works. As stated before, Pokemon Go is a form of AR, using GPS data to help determine which of the game’s variety of creatures appear at a given location.  In addition, certain locations and landmarks are coded either to give players items or to act as “goals” for them to capture for a team. These markers carry small images, with titles and occasionally short descriptions. While many of these locations are in public, or on publicly accessible property, others appear to be on privately owned or closed-off property.  And while Niantic appears to have struck some deals to add goals at the locations of real-world partners, that is not the norm.
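As a rough illustration of the GPS mechanics described above, the sketch below uses the standard haversine formula to decide whether a player’s coordinates fall within an interaction radius of a landmark. The radius and coordinates are invented for this example; Niantic’s actual logic is proprietary.

```python
import math

# Illustrative sketch: how a location-based game might decide whether a
# player is close enough to a landmark to interact with it. The radius
# and coordinates below are invented; Niantic's real logic is proprietary.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def can_interact(player, landmark, radius_m=40.0):
    """True if the player's position is within the interaction radius."""
    return haversine_m(*player, *landmark) <= radius_m

goal = (34.0522, -118.2437)  # a hypothetical in-game "goal"
print(can_interact((34.0523, -118.2437), goal))  # roughly 11 m away: True
print(can_interact((34.0622, -118.2437), goal))  # roughly 1.1 km away: False
```

The legal friction follows directly from this mechanic: a coded “goal” sits at a fixed coordinate whether or not the underlying parcel is publicly accessible, and the game draws players toward it regardless.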