This week’s article explores the European Union’s pending copyright law and its possible effects on the internet. Proponents intend the law to modernize copyright for the digital age. Critics say it would make the internet substantially less free. Today we discuss the Directive on Copyright in the Digital Single Market, and more specifically, three of its most recently approved provisions that could pose problems for internet freedom: its right for press publishers, its filtering obligations, and its text-and-data-mining stipulations.
The law’s right for press publishers would allow news companies to collect compensation when their stories are shared on social media platforms. Known as the “link tax,” it would require platforms to purchase a license to post current-events information coming from news institutions. Current copyright law already protects journalistic articles as literary works; republishers must ask permission to use such content. The proposed right, however, effectively expands this protection to data and facts that have already been published. Whereas only creative expression, such as an inventive description or a pun in a headline, is now protected, mere non-creative facts could be as well; this would effectively hold information for ransom. The purpose of copyright law is to grant a limited monopoly over specific creative expressions of ideas. To extend the law to envelop the ideas or factual content themselves is nonsensical, and stymies the very processes copyright is meant to assist. Rather than foster innovation by protecting its fruit, the law would chill innovation by withholding its raw material. It would obstruct citizens from running businesses and from creating original products using factual information. In a region without the First Amendment, there is cause for concern.
The law’s filtering provision would require all website hosting providers to use filtering software that checks content against a database of copyrighted material. As the law stands, platforms such as YouTube, Facebook, and Twitter are not liable for the copyright infringement of their users, as long as they take infringing content down once notified of it. The users who post such content, however, remain liable to authors or rights holders. The current law attempts a balance between honoring the investment of creative authors and promoting innovation through the spread of information. The “notice and takedown” process allows rights holders to notify the platform, requires the platform to act only once it is told, and reminds users that they may ultimately be held accountable for infringement; this spreads liability out somewhat evenly. The proposed version would subject this process to automation. It would nominally place the majority of liability on platforms by forcing them to monitor content proactively. In practice, however, users and their speech would feel the brunt, because platforms would respond with much stricter guidelines. The arbiter of all this would be a machine, checking content against a copyright database that would include factual material. The necessary software also does not yet exist: permitted uses of copyrighted content, such as parody or criticism, would be at risk because artificial intelligence cannot reliably distinguish them from infringement. This imperils important content such as university lectures, for example.